1. Organisational culture: variation across hospitals and connection to patient safety climate. Qual Saf Health Care 2011; 19:592-6. [PMID: 21127115 DOI: 10.1136/qshc.2009.039511]
Abstract
CONTEXT Bureaucratic organisational culture is less favourable to quality improvement, whereas organisations with group (teamwork) culture are better aligned for quality improvement. OBJECTIVE To determine if an organisational group culture shows better alignment with patient safety climate. DESIGN Cross-sectional administration of questionnaires. SETTING 40 Hospital Corporation of America hospitals. PARTICIPANTS 1406 nurses, ancillary staff, allied staff and physicians. MAIN OUTCOME MEASURES Competing Values Measure of Organisational Culture, Safety Attitudes Questionnaire (SAQ), Safety Climate Survey (SCSc) and Information and Analysis (IA). RESULTS The Cronbach alpha was 0.81 for the group culture scale and 0.72 for the hierarchical culture scale. Group culture was positively correlated with the SAQ and its subscales (correlation coefficient r = 0.44 to 0.55, except situational recognition), the SCSc (r = 0.47) and IA (r = 0.33). Hierarchical culture was negatively correlated with the SAQ scales, SCSc and IA. Among the 40 hospitals, 37.5% had a dominant hierarchical culture, 37.5% a dominant group culture and 25% a balanced culture. Group culture hospitals had significantly higher safety climate scores than hierarchical culture hospitals. The magnitude of these relationships was not affected after adjusting for provider job type and hospital characteristics. CONCLUSIONS Hospitals vary in organisational culture, and the type of culture relates to the safety climate within the hospital. In combination with prior studies, these results suggest that a healthcare organisation's culture is a critical factor in the development of its patient safety climate and in the successful implementation of quality improvement initiatives.
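The internal-consistency figures reported above (Cronbach alpha of 0.81 and 0.72) come from item-level survey responses. A minimal sketch of the computation in Python, using invented response data rather than the study's survey items:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a multi-item scale.

    `items` is a list of equal-length sequences, one per scale item,
    each holding the scores given by the same set of respondents.
    """
    k = len(items)
    n = len(items[0])

    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_vars = sum(sample_var(item) for item in items)
    # Total score per respondent across all items.
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum_item_vars / sample_var(totals))

# Hypothetical 3-item scale answered by 5 respondents (not study data).
alpha = cronbach_alpha([
    [4, 3, 5, 2, 4],
    [4, 2, 5, 3, 4],
    [3, 3, 4, 2, 5],
])
```

Alpha rises toward 1 as the items covary strongly relative to their individual variances, which is why it is read as a scale-reliability index.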

2. The SQUIRE (Standards for QUality Improvement Reporting Excellence) guidelines for quality improvement reporting: explanation and elaboration. Qual Saf Health Care 2008; 17 Suppl 1:i13-32. [PMID: 18836062 PMCID: PMC2602740 DOI: 10.1136/qshc.2008.029058]
Abstract
As the science of quality improvement in health care advances, the importance of sharing its accomplishments through the published literature increases. Current reporting of improvement work in health care varies widely in both content and quality. It is against this backdrop that a group of stakeholders from a variety of disciplines has created the Standards for QUality Improvement Reporting Excellence, which we refer to as the SQUIRE publication guidelines or SQUIRE statement. The SQUIRE statement consists of a checklist of 19 items that authors need to consider when writing articles that describe formal studies of quality improvement. Most of the items in the checklist are common to all scientific reporting, but virtually all of them have been modified to reflect the unique nature of medical improvement work. This "Explanation and Elaboration" document (E & E) is a companion to the SQUIRE statement. For each item in the SQUIRE guidelines the E & E document provides one or two examples from the published improvement literature, followed by an analysis of the ways in which the example expresses the intent of the guideline item. As with the E & E documents created to accompany other biomedical publication guidelines, the purpose of the SQUIRE E & E document is to assist authors along the path from completion of a quality improvement project to its publication. The SQUIRE statement itself, this E & E document, and additional information about reporting improvement work can be found at http://www.squire-statement.org.

3.
Abstract
BACKGROUND Patient complaints are associated with increased malpractice risk, but it is unclear whether complaints might also be associated with medical complications. The purpose of this study was to determine whether an association exists between patient complaints and surgical complications. METHODS A retrospective analysis of 16,713 surgical admissions was conducted over a 54-month period at a single academic medical center. Surgical complications were identified using administrative data. The primary outcome measure was unsolicited patient complaints. RESULTS During the study period, 0.9% of surgical admissions were associated with a patient complaint. Of admissions associated with a patient complaint, 19% included a postoperative complication, compared with 12.5% of admissions without a patient complaint (p = 0.01). After adjusting for surgical specialty, comorbid illnesses and length of stay, admissions with complications had an odds ratio of 1.74 (95% confidence interval 1.01 to 2.98) of being associated with a complaint compared with admissions without complications. CONCLUSIONS Admissions with surgical complications are more likely to be associated with a complaint than surgical admissions without complications. Further research is necessary to determine whether patient complaints might serve as markers for poor clinical outcomes.
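The odds ratio and confidence interval reported above came from an adjusted regression; the unadjusted version of the same statistic can be computed directly from a 2x2 table. A sketch with hypothetical counts (the study's cell counts are not given in the abstract):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a Wald 95% CI from a 2x2 table:
    a = exposed with outcome,   b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome.
    """
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) for the Wald interval.
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts, not the study's data: complaints among
# admissions with vs without a postoperative complication.
or_, lo, hi = odds_ratio_ci(a=29, b=2050, c=124, d=14510)
```

The study's 1.74 estimate additionally adjusted for specialty, comorbidity, and length of stay, which this crude calculation does not attempt.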

4. Using real time process measurements to reduce catheter related bloodstream infections in the intensive care unit. Qual Saf Health Care 2006; 14:295-302. [PMID: 16076796 PMCID: PMC1744064 DOI: 10.1136/qshc.2004.013516]
Abstract
PROBLEM Measuring a process of care in real time is essential for continuous quality improvement (CQI). Our inability to measure the process of central venous catheter (CVC) care in real time prevented CQI efforts aimed at reducing catheter related bloodstream infections (CR-BSIs) from these devices. DESIGN A system was developed for measuring the process of CVC care in real time. We used these new process measurements to continuously monitor the system, guide CQI activities, and deliver performance feedback to providers. SETTING Adult medical intensive care unit (MICU). KEY MEASURES FOR IMPROVEMENT Measured process of CVC care in real time; CR-BSI rate and time between CR-BSI events; and performance feedback to staff. STRATEGIES FOR CHANGE An interdisciplinary team developed a standardized, user friendly nursing checklist for CVC insertion. Infection control practitioners scanned the completed checklists into a computerized database, thereby generating real time measurements for the process of CVC insertion. Armed with these new process measurements, the team optimized the impact of a multifaceted intervention aimed at reducing CR-BSIs. EFFECTS OF CHANGE The new checklist immediately provided real time measurements for the process of CVC insertion. These process measures allowed the team to directly monitor adherence to evidence-based guidelines. Through continuous process measurement, the team successfully overcame barriers to change, reduced the CR-BSI rate, and improved patient safety. Two years after the introduction of the checklist the CR-BSI rate remained at a historic low. LESSONS LEARNT Measuring the process of CVC care in real time is feasible in the ICU. When trying to improve care, real time process measurements are an excellent tool for overcoming barriers to change and enhancing the sustainability of efforts. To continually improve patient safety, healthcare organizations should continually measure their key clinical processes in real time.

5. The hemochromatosis C282Y allele: a risk factor for hepatic veno-occlusive disease after hematopoietic stem cell transplantation. Bone Marrow Transplant 2005; 35:1155-64. [PMID: 15834437 DOI: 10.1038/sj.bmt.1704943]
Abstract
Hepatic veno-occlusive disease (HVOD) is a serious complication of hematopoietic stem cell transplantation (HSCT). Since the liver is a major site of iron deposition in HFE-associated hemochromatosis, and iron has oxidative toxicity, we hypothesized that HFE genotype might influence the risk of HVOD after myeloablative HSCT. We determined HFE genotypes in 166 HSCT recipients who were evaluated prospectively for HVOD. We also tested whether a common variant of the rate-limiting urea cycle enzyme, carbamyl-phosphate synthetase (CPS), previously observed to protect against HVOD in this cohort, modified the effect of HFE genotype. Risk of HVOD was significantly higher in carriers of at least one C282Y allele (RR=3.7, 95% CI 1.2-12.1) and increased progressively with C282Y allelic dose (RR=1.7, 95% CI 0.4-6.8 in heterozygotes; RR=8.6, 95% CI 1.5-48.5 in homozygotes). The CPS A allele, which encodes a more efficient urea cycle enzyme, reduced the risk of HVOD associated with HFE C282Y. We conclude that HFE C282Y is a risk factor for HVOD and that CPS polymorphisms may counteract its adverse effects. Knowledge of these genotypes and monitoring of iron stores may facilitate risk-stratification and testing of strategies to prevent HVOD, such as iron chelation and pharmacologic support of the urea cycle.

6. Cost perspectives of laparoscopic and open appendectomy. Surg Endosc 2004; 19:374-8. [PMID: 15624056 DOI: 10.1007/s00464-004-8724-1]
Abstract
BACKGROUND Despite multiple studies comparing laparoscopic and open appendectomies, the clinically and economically superior procedure still is in question. A cost analysis was performed using both institutional and societal perspectives. METHODS A decision analytic model was developed to evaluate laparoscopic and open appendectomies. The institutional perspective addressed direct health care costs, whereas the societal perspective addressed direct and indirect health care costs. Baseline values and ranges were taken from randomized controlled trials, meta-analyses, and Medicare databases. RESULTS From the institutional perspective, open appendectomy is the least expensive strategy, with an expected cost of $5,171, as compared with $6,118 for laparoscopic appendectomy. The laparoscopic approach is less expensive if open appendectomy wound infection rates exceed 23%. From the societal perspective, laparoscopic appendectomy is the least expensive strategy, with an expected cost of $10,400, as compared with $12,055 for open appendectomy. CONCLUSIONS The decision analysis demonstrated an economic advantage to the hospital of open appendectomy. In contrast, laparoscopic appendectomy represents a better economic choice for the patient.
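The threshold result above (laparoscopy becomes the cheaper strategy once the open wound-infection rate passes 23%) falls out of a simple expected-cost comparison. A sketch of that logic with illustrative numbers, not the paper's actual cost inputs:

```python
def expected_cost(base_cost, complication_cost, complication_rate):
    """Expected cost of a strategy: fixed cost plus the
    probability-weighted cost of managing a complication."""
    return base_cost + complication_rate * complication_cost

def breakeven_rate(open_base, lap_expected, complication_cost):
    """Complication rate at which the open strategy's expected
    cost equals the laparoscopic strategy's expected cost."""
    return (lap_expected - open_base) / complication_cost

# Illustrative inputs only (hypothetical, not the decision model's values).
lap = expected_cost(base_cost=5800, complication_cost=3000,
                    complication_rate=0.05)
threshold = breakeven_rate(open_base=4500, lap_expected=lap,
                           complication_cost=3000)
```

Below the threshold rate the open strategy has the lower expected cost; above it, the laparoscopic strategy does, which is the structure of the paper's sensitivity analysis.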

7. Health-related quality of life before and after solid organ transplantation. Measurement considerations, reported outcomes, and future directions. Minerva Chir 2002; 57:257-71. [PMID: 12029219]
Abstract
The initial focus in organ transplantation clinical research was demonstrating acceptable technical and survival outcomes. Both patient and graft survival have reached well-documented, laudable levels, and solid organ (liver, heart, kidney, lung) transplantation procedures are now relatively common. As with any complex medical procedure that entails relatively high risk, financial costs, and life-long follow-up care, reliable and valid assessments of the "quality" of the extended life years are of interest to patients, their families, policy makers, and payers. This review focuses on health-related quality of life (HRQOL) and functional performance in adults following solid organ transplantation, with an emphasis on: 1) instruments and methods; 2) outcomes in liver, heart, kidney, and lung transplant recipients; and 3) future research directions. Practical considerations for developing longitudinal HRQOL assessment strategies are reviewed. The current emphasis on modeling demographic and clinical factors that promote or limit optimal HRQOL is illustrated. These lines of research will help identify potential interventions designed to promote better HRQOL in organ transplant recipients.

8. Delirium in mechanically ventilated patients: validity and reliability of the confusion assessment method for the intensive care unit (CAM-ICU). JAMA 2001; 286:2703-10. [PMID: 11730446 DOI: 10.1001/jama.286.21.2703]
Abstract
CONTEXT Delirium is a common problem in the intensive care unit (ICU). Accurate diagnosis is limited by the difficulty of communicating with mechanically ventilated patients and by lack of a validated delirium instrument for use in the ICU. OBJECTIVES To validate a delirium assessment instrument that uses standardized nonverbal assessments for mechanically ventilated patients and to determine the occurrence rate of delirium in such patients. DESIGN AND SETTING Prospective cohort study testing the Confusion Assessment Method for ICU Patients (CAM-ICU) in the adult medical and coronary ICUs of a US university-based medical center. PARTICIPANTS A total of 111 consecutive patients who were mechanically ventilated were enrolled from February 1, 2000, to July 15, 2000, of whom 96 (86.5%) were evaluable for the development of delirium and 15 (13.5%) were excluded because they remained comatose throughout the investigation. MAIN OUTCOME MEASURES Occurrence rate of delirium and sensitivity, specificity, and interrater reliability of delirium assessments using the CAM-ICU, made daily by 2 critical care study nurses, compared with assessments by delirium experts using Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, criteria. RESULTS A total of 471 daily paired evaluations were completed. Compared with the reference standard for diagnosing delirium, 2 study nurses using the CAM-ICU had sensitivities of 100% and 93%, specificities of 98% and 100%, and high interrater reliability (kappa = 0.96; 95% confidence interval, 0.92-0.99). Interrater reliability measures across subgroup comparisons showed kappa values of 0.92 for those aged 65 years or older, 0.99 for those with suspected dementia, and 0.94 for those with Acute Physiology and Chronic Health Evaluation II scores at or above the median value of 23 (all P<.001).
Comparing sensitivity and specificity between patient subgroups according to age, suspected dementia, or severity of illness showed no significant differences. The mean (SD) CAM-ICU administration time was 2 (1) minutes. Reference standard diagnoses of delirium, stupor, and coma occurred in 25.2%, 21.3%, and 28.5% of all observations, respectively. Delirium occurred in 80 (83.3%) patients during their ICU stay for a mean (SD) of 2.4 (1.6) days. Delirium was even present in 39.5% of alert or easily aroused patient observations by the reference standard and persisted in 10.4% of patients at hospital discharge. CONCLUSIONS Delirium, a complication not currently monitored in the ICU setting, is extremely common in mechanically ventilated patients. The CAM-ICU appears to be rapid, valid, and reliable for diagnosing delirium in the ICU setting and may be a useful instrument for both clinical and research purposes.
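The interrater-reliability figures above are Cohen's kappa, which discounts the agreement two raters would reach by chance alone. A minimal sketch for two raters making a binary (delirium present/absent) call, using invented counts rather than the study's 471 paired evaluations:

```python
def cohens_kappa(both_pos, r1_only, r2_only, both_neg):
    """Cohen's kappa for two raters making a binary call.
    both_pos: both rate positive; r1_only: only rater 1 positive;
    r2_only: only rater 2 positive; both_neg: both rate negative.
    """
    n = both_pos + r1_only + r2_only + both_neg
    p_observed = (both_pos + both_neg) / n
    # Chance agreement from each rater's marginal positive rate.
    r1_pos = (both_pos + r1_only) / n
    r2_pos = (both_pos + r2_only) / n
    p_chance = r1_pos * r2_pos + (1 - r1_pos) * (1 - r2_pos)
    return (p_observed - p_chance) / (1 - p_chance)

# Invented counts for illustration only.
kappa = cohens_kappa(both_pos=20, r1_only=5, r2_only=10, both_neg=15)
```

A kappa of 0.96, as reported, means agreement far beyond chance; a kappa of 0 means no better than chance.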

9. Nonenhanced limited CT in children suspected of having appendicitis: prospective comparison of attending and resident interpretations. Radiology 2001; 221:755-9. [PMID: 11719672 DOI: 10.1148/radiol.2213010379]
Abstract
PURPOSE To prospectively compare resident and attending radiologic interpretations of nonenhanced limited computed tomographic (CT) scans obtained in children suspected of having appendicitis. MATERIALS AND METHODS Seventy-five consecutive children underwent nonenhanced limited CT for suspected appendicitis. The scans were prospectively interpreted by a resident and an attending radiologist, each unaware of the other's interpretation. The probability that the findings indicated a diagnosis of appendicitis, the level of certainty in the interpretation, and the presence of an alternate diagnosis were statistically analyzed. RESULTS Nineteen children (25%) had appendicitis. The area under the receiver operating characteristic curve was not significantly different between residents (0.97 +/- 0.02) and attendings (0.95 +/- 0.04). The percentage agreement between residents and attendings was 91% (kappa = 0.73 +/- 0.095). The average level of certainty tended to be higher for attendings (93% +/- 15) than residents (89% +/- 12). The sensitivity, specificity, and accuracy of resident interpretations were 63%, 96%, and 88%, respectively, compared with 95%, 98%, and 97% for attending interpretations. Residents and attendings noted alternate diagnoses in 30% of children without appendicitis. CONCLUSION A high level of agreement exists between resident and attending radiologists in the interpretation of nonenhanced limited CT scans in children suspected of having appendicitis. Residents, however, tend to be less confident in their interpretations.
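The reader-performance figures reduce to counts in a 2x2 confusion table. A sketch using cell counts reconstructed approximately from the reported percentages (19 of 75 children had appendicitis; the exact cells are an assumption, not taken from the paper):

```python
def reader_performance(tp, fp, fn, tn):
    """Sensitivity, specificity, and accuracy of a reader's
    interpretations against the final diagnosis.
    tp/fn: appendicitis cases called positive/negative;
    fp/tn: non-cases called positive/negative.
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, accuracy

# Approximate resident counts: 12 of 19 appendicitis cases called
# positive, 54 of 56 non-cases called negative (reconstructed).
sens, spec, acc = reader_performance(tp=12, fp=2, fn=7, tn=54)
```

With these assumed cells the three figures round to the reported 63%, 96%, and 88%, which is how such percentages map back to raw counts.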

10.
Abstract
OBJECTIVE To determine predictors of influenza virus vaccination status in children who are hospitalized during the influenza season. METHODS A cross-sectional study was conducted among children who were hospitalized with fever between 6 months and 3 years of age or with respiratory symptoms between 6 months and 18 years of age. The 1999 to 2000 influenza vaccination status of hospitalized children and potential factors that influence decisions to vaccinate were obtained from a questionnaire administered to parents/guardians. RESULTS Influenza vaccination rates for hospitalized children with and without high-risk medical conditions were 31% and 14%, respectively. For both groups of children, the vaccination status was strongly influenced by recommendations from physicians. More than 70% of children were vaccinated if a physician had recommended the influenza vaccine, whereas only 3% were vaccinated if a physician had not. Lack of awareness that children can receive the influenza vaccine was a commonly cited reason for nonvaccination. CONCLUSIONS A minority of hospitalized children with high-risk conditions had received the influenza vaccine. However, parents' recalling that a clinician had recommended the vaccine had a positive impact on the vaccination status of children.

11. The effects of rejection episodes, obesity, and osteopenia on functional performance and health-related quality of life after heart transplantation. Transplant Proc 2001; 33:3533-5. [PMID: 11750505 DOI: 10.1016/s0041-1345(01)02424-1]

12.

13. Management of chronic asthma. Evidence Report/Technology Assessment (Summary) 2001:1-10. [PMID: 15523743 PMCID: PMC4781501]

14.
Abstract
OBJECTIVES Hepatitis C is the leading cause of chronic hepatitis in the United States. Little information is available regarding how persons with hepatitis C view health with their disease. We studied patients' perceptions about the value of hepatitis C health states and evaluated whether physicians understand their patients' perspectives about this disease. METHODS A total of 50 consecutive persons with hepatitis C were surveyed when they presented as new patients to a hepatology practice. Subjects provided utility assessments (preference values) for five hepatitis C health states and for treatment side effects. They also stated their threshold for accepting antiviral therapy. Five hepatologists used the same scales to estimate their patients' responses. RESULTS On average, patients believed that hepatitis C without symptoms was associated with an 11% reduction in preference value from that of life without infection, and the most serious condition (severe symptoms, cirrhosis) was believed to carry a 73% decrement. Patients judged the side effects of antiviral therapy quite unfavorably, and their median stated threshold for accepting treatment was a cure rate of 80%. Physicians' estimates were not significantly associated with patients' preference values for hepatitis C health states, treatment side effects, or with patients' thresholds for accepting treatment. In multivariate analysis, patients' stated thresholds for taking treatment were significantly associated with their decisions regarding therapy (beta = -2.72+/-1.21, p = 0.025). CONCLUSIONS There was little agreement between patients' preference values about hepatitis C and their physicians' estimates of those values. Utility analysis could facilitate shared decision making about hepatitis C.

15. Evaluation of delirium in critically ill patients: validation of the Confusion Assessment Method for the Intensive Care Unit (CAM-ICU). Crit Care Med 2001; 29:1370-9. [PMID: 11445689 DOI: 10.1097/00003246-200107000-00012]
Abstract
OBJECTIVE To develop and validate an instrument for use in the intensive care unit to accurately diagnose delirium in critically ill patients who are often nonverbal because of mechanical ventilation. DESIGN Prospective cohort study. SETTING The adult medical and coronary intensive care units of a tertiary care, university-based medical center. PATIENTS Thirty-eight patients admitted to the intensive care units. MEASUREMENTS AND MAIN RESULTS We designed and tested a modified version of the Confusion Assessment Method for use in intensive care unit patients and called it the CAM-ICU. Daily ratings from intensive care unit admission to hospital discharge by two study nurses and an intensivist who used the CAM-ICU were compared against the reference standard, a delirium expert who used delirium criteria from the Diagnostic and Statistical Manual of Mental Disorders (fourth edition). A total of 293 daily, paired evaluations were completed, with reference standard diagnoses of delirium in 42% and coma in 27% of all observations. To include only interactive patient evaluations and avoid repeat-observer bias for patients studied on multiple days, we used only the first-alert or lethargic comparison evaluation in each patient. Thirty-three of 38 patients (87%) developed delirium during their intensive care unit stay, mean duration of 4.2 +/- 1.7 days. Excluding evaluations of comatose patients because of lack of characteristic delirium features, the two critical care study nurses and intensivist demonstrated high interrater reliability for their CAM-ICU ratings with kappa statistics of 0.84, 0.79, and 0.95, respectively (p <.001). The two nurses' and intensivist's sensitivities when using the CAM-ICU compared with the reference standard were 95%, 96%, and 100%, respectively, whereas their specificities were 93%, 93%, and 89%, respectively. 
CONCLUSIONS The CAM-ICU demonstrated excellent reliability and validity when used by nurses and physicians to identify delirium in intensive care unit patients. The CAM-ICU may be a useful instrument for both clinical and research purposes to monitor delirium in this challenging patient population.

16. The impact of delirium in the intensive care unit on hospital length of stay. Intensive Care Med 2001; 27:1892-900. [PMID: 11797025 PMCID: PMC7095464 DOI: 10.1007/s00134-001-1132-2]
Abstract
STUDY OBJECTIVE To determine the relationship between delirium in the intensive care unit (ICU) and outcomes including length of stay in the hospital. DESIGN A prospective cohort study. SETTING The adult medical ICU of a tertiary care, university-based medical center. PARTICIPANTS The study population consisted of 48 patients admitted to the ICU, 24 of whom received mechanical ventilation. MEASUREMENTS All patients were evaluated for the development and persistence of delirium on a daily basis by a geriatric or psychiatric specialist with expertise in delirium assessment using the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) criteria of the American Psychiatric Association, the reference standard for delirium ratings. Primary outcomes measured were length of stay in the ICU and hospital. RESULTS The mean onset of delirium was 2.6 days (SD 1.7), and the mean duration was 3.4 (SD 1.9) days. Of the 48 patients, 39 (81.3%) developed delirium, and of these, 29 (60.4%) developed the complication while still in the ICU. The duration of delirium was associated with length of stay in the ICU (r=0.65, P=0.0001) and in the hospital (r=0.68, P<0.0001). Using multivariate analysis, delirium was the strongest predictor of length of stay in the hospital (P=0.006) even after adjusting for severity of illness, age, gender, race, and days of benzodiazepine and narcotic drug administration. CONCLUSIONS In this patient cohort, the majority of patients developed delirium in the ICU, and delirium was the strongest independent determinant of length of stay in the hospital. Further study and monitoring of delirium in the ICU and the risk factors for its development are warranted.
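The duration/length-of-stay relationship above is summarized by a Pearson correlation coefficient. A minimal sketch on invented paired data (days of delirium vs hospital days), not the study's records:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired samples."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance numerator and the two standard-deviation terms
    # share the same (n - 1) factor, so it cancels.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# Invented pairs: delirium duration (days) vs hospital length of stay.
r = pearson_r([1, 2, 3, 4, 6], [8, 9, 13, 15, 20])
```

Values near +1, like the study's r=0.68, indicate that longer delirium tracks with longer hospital stays, though correlation alone does not establish causation.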

17.
Abstract
OBJECTIVE To describe functional health and health-related quality of life (QOL) before and after transplantation; to compare and contrast outcomes among liver, heart, lung, and kidney transplant patients, and compare these outcomes with selected norms; and to explore whether physiologic performance, demographics, and other clinical variables are predictors of posttransplantation overall subjective QOL. SUMMARY BACKGROUND DATA There is increasing demand for outcomes analysis, including health-related QOL, after medical and surgical interventions. Because of the high cost, interest in transplantation outcomes is particularly intense. With technical surgical experience and improved immunosuppression, survival after solid organ transplantation has matured to acceptable levels. More sensitive measures of outcomes are necessary to evaluate further developments in clinical transplantation, including data on objective functional outcome and subjective QOL. METHODS The Karnofsky Performance Status was assessed objectively for patients before transplantation and up to 4 years after transplantation, and scores were compared by repeated measures analysis of variance. Subjective evaluation of QOL over time was obtained using the Short Form-36 (SF-36) and the Psychosocial Adjustment to Illness Scale (PAIS). These data were analyzed using multivariate and univariate analysis of variance. A summary model of health-related QOL was tested by path analysis. RESULTS Tools were administered to 100 liver, 94 heart, 112 kidney, and 65 lung transplant patients. Mean age at transplantation was 48 years; 36% of recipients were female. The Karnofsky Performance Status before transplantation was 37 +/- 1 for lung, 38 +/- 2 for heart, 53 +/- 3 for liver, and 75 +/- 1 for kidney recipients. After transplantation, the scores improved to 67 +/- 1 at 3 months, 77 +/- 1 at 6 months, 82 +/- 1 at 12 months, 86 +/- 1 at 24 months, 84 +/- 2 at 36 months, and 83 +/- 3 at 48 months. 
When patients were stratified by initial performance score as disabled or able, both groups merged in terms of performance by 6 months after liver and heart transplantation; kidney transplant patients maintained their stratification 2 years after transplantation. The SF-36 physical and mental component scales improved after transplantation. The PAIS score improved globally. Path analysis demonstrated a direct effect on the posttransplant Karnofsky score by time after transplantation and diabetes, with trends evident for education and preoperative serum creatinine level. Although neither time after transplantation nor diabetes was directly predictive of a composite QOL score that incorporated all 15 subjective domains, recent Karnofsky score and education level were directly predictive of the QOL composite score. CONCLUSIONS Different types of transplant patients have a different health-related QOL before transplantation. Performance improved after transplantation for all four types of transplants, but the trajectories were not the same. Subjective QOL measured by the SF-36 and the PAIS also improved after transplantation. Path analysis shows the important predictors of health-related QOL. These data provide clearly defined and widely useful QOL outcome benchmarks for different types of solid organ transplants.

18.
Abstract
BACKGROUND Whereas studies have shown higher mortality rates in patients with do-not-resuscitate (DNR) orders, most have not accounted for confounding factors related to the use of DNR orders and/or factors related to the risk of death. OBJECTIVE To determine the relationship between the use of DNR orders and in-hospital mortality, adjusting for severity of illness and other covariates. DESIGN Retrospective cohort study. PATIENTS There were 13,337 consecutive stroke admissions to 30 hospitals in 1991 to 1994. MEASURES To decrease selection bias, propensity scores reflecting the likelihood of a DNR order were developed. Scores were based on nine demographic and clinical variables independently related to use of DNR orders. The odds of death in patients with DNR orders were then determined using logistic regression, adjustment for propensity scores, severity of illness, and other factors. RESULTS DNR orders were used in 22% (n = 2,898) of patients. In analyses examining DNR orders written at any time during hospitalization, unadjusted in-hospital mortality rates were higher in patients with DNR orders than in patients without orders (40% vs. 2%, P<0.001); the adjusted odds of death was 33.9 (95% CI, 27.4-42.0). The adjusted odds of death remained higher in analyses that only considered orders written during the first 2 days (OR 3.7; 95% CI, 3.2-4.4) or the first day (OR 2.4; 95% CI, 2.0-2.9). In stratified analyses, adjusted odds of death tended to be higher in patients with lower propensity scores. CONCLUSION The risk of death was substantially higher in patients with DNR orders after adjusting for propensity scores and other covariates. Whereas the increased risk may reflect patient preferences for less intensive care or unmeasured prognostic factors, the current findings highlight the need for more direct evaluations of the quality and appropriateness of care of patients with DNR orders.
19
Improving health care, Part 5: Applying the Dartmouth clinical improvement model to community health. Jt Comm J Qual Improv 1998; 24:679-703. [PMID: 9868613 DOI: 10.1016/s1070-3241(16)30415-1]
Abstract
BACKGROUND Traditional approaches to community health initiatives provide guidance on community mobilization, health assessment, planning, and intervention. Yet direction on how to frame the action steps to implement and measure results is often missing, and many community health initiatives find implementation overwhelming and ineffectual. FRAMEWORK FOR COMMUNITY HEALTH (THE CLINICAL IMPROVEMENT MODEL): The process-outcome methodology of continuous quality improvement (CQI) can translate large community aims into manageable projects. The sequential application of the clinical improvement model and the Community Health Value Compass for measuring outcomes (health status, quality of life, satisfaction, and costs) provides a link between data and action, thereby producing accountability for the community health initiative. USING THE CLINICAL IMPROVEMENT MODEL IN TWIN FALLS: Healthy Magic Valley (Twin Falls, Idaho) is the vision for long-term improvement in health status and reduction of health risks for the Southcentral Idaho Health Network. In 1993 the Twin Falls Community Health Collaborative convened to apply CQI methods to the health of the community, and the team has met periodically since then to address community health issues. Since 1996 the collaborative and the SAFE KIDS Coalition have used the Dartmouth value compass model and CQI methods to decrease the rate of motor vehicle collisions, serious injuries, and deaths involving teens, while reducing the health, educational, legal, and financial consequences associated with teen-involved motor vehicle collisions. Each sequential application of the process-outcome CQI framework exposes a blueprint for action and the unfolding of a health improvement strategy. The interventions should affect one or more dimensions of the value compass for teenage driving and motor vehicle collisions.
CASE STUDY OF THE CLINICAL IMPROVEMENT MODEL: The motor vehicle death in October 1997 of a high school football player, who was not wearing a seat belt, led to a call to action for injury prevention. Implementation of a local community health initiative on seat belt use started in 1998. A strategy was developed to address implementation of the project among high school teens (for immediate impact) and elementary school children (for long-term impact) and to promote collaboration between the school and the rest of the community. RESULTS Observed use of seat belts increased from January to September 1998. Data on fatality rates; injury rates; percentages of teens in crashes, of teens injured, and of teen collisions involving use of alcohol; and comprehensive costs are also monitored. DISCUSSION Once coalitions are built and priorities set, the Dartmouth clinical improvement model presents a method that emphasizes measuring the benefits to the individual members of the community. A portfolio composed of a value compass for each health improvement initiative provides ongoing feedback for guiding subsequent strategic planning by the governing community health network.
20
A firm trial of interdisciplinary rounds on the inpatient medical wards: an intervention designed using continuous quality improvement. Med Care 1998; 36:AS4-12. [PMID: 9708578 DOI: 10.1097/00005650-199808001-00002]
Abstract
OBJECTIVES In August 1993 a group of house staff and nursing staff at MetroHealth Medical Center formed a quality improvement team to evaluate the process of medical care on the inpatient wards. Using standard continuous quality improvement (CQI) methods, a team of medical interns, nurses, and other health professionals involved in patient care on the medicine inpatient service designed interdisciplinary, daily work rounds to improve the care of patients on the inpatient wards. METHODS The authors conducted a randomized, controlled firm trial of the impact of interdisciplinary rounds on the inpatient medicine services. The trial lasted 6 months (November 1993-April 1994) and included 1,102 admissions randomly assigned to experimental or control teams by the pre-existing firm system. Of the 1,102 admissions included in the study, 535 were randomized to medical services with traditional rounds and 567 to medical services with interdisciplinary rounds. The outcomes studied included length of stay (LOS), total hospital charges, provider satisfaction, and ancillary service efficiency. RESULTS Unadjusted analysis of log-transformed data showed lower length of stay and total charges for the interdisciplinary group. The mean LOS for interdisciplinary rounds was 5.46 days, compared with 6.06 days for traditional care (P = 0.006), whereas mean total charges were $6,681 and $8,090 (P = 0.002) for the two groups, respectively. After multivariate regression analysis using a propensity score that included gender, age, marital status, admission source, diagnosis-related group (DRG) weight, and primary diagnosis by International Classification of Diseases, Ninth Revision (ICD-9) cluster, these differences remained statistically significant. CONCLUSIONS Previous studies of interdisciplinary teams have failed to show statistically significant cost savings. This study, involving more patients, shows both cost and LOS decreases with the use of interdisciplinary teams.
At the end of the 6-month trial, interdisciplinary rounds were instituted on all medicine inpatient services.
21
Abstract
BACKGROUND & AIMS Survival of patients with end-stage liver disease is variable and difficult to predict. A two-phase prospective cohort study was conducted at five teaching hospitals to develop and evaluate a model for prediction of death. METHODS Five hundred thirty-eight hospitalized patients with a history of chronic liver disease and two or more signs of decompensation were studied. RESULTS The cumulative incidence of death was 30% at 30 days and 50% at 6 months. In 295 patients in phase I, time to death was independently associated (P < 0.01) with five factors measured on study day 3: renal insufficiency, cognitive dysfunction, ventilatory insufficiency, age ≥65 years, and prothrombin time ≥16 seconds. These risk factors stratified 243 patients in phase II into three groups with cumulative incidences of death at 30 days of 12%, 40%, and 74%, respectively. Integration of the prognostic model with physicians' predictions led to improved estimates of the probability of death. Although performance of liver transplantation after study entry was independently associated with enhanced survival, the intensity of other acute therapies was not. CONCLUSIONS Five risk factors were associated with the risk of death in patients with end-stage liver disease and provided a quantitative basis to complement physicians' prognostic estimates.
22
Variation in the use of do-not-resuscitate orders in patients with stroke. Arch Intern Med 1997; 157:1841-7. [PMID: 9290543]
Abstract
OBJECTIVES To identify sociodemographic and clinical characteristics associated with the use of do-not-resuscitate (DNR) orders in hospitalized patients with stroke. To examine whether the use of DNR orders varies across hospitals. METHODS This observational cohort study used data collected for 13,337 consecutive eligible patients with a primary diagnosis of stroke. These patients were discharged in 1991 through 1994 from 30 hospitals in a large metropolitan area. Study data were abstracted from patients' hospital records using standard forms. Admission severity of illness was measured using a validated multivariable model. Sociodemographic and clinical factors independently associated with the use of DNR orders were identified using stepwise logistic regression. RESULTS Do-not-resuscitate orders were written for 2,898 patients (22%). Patient characteristics independently (P < .01) associated with increased use of DNR orders included increasing age (odds ratio [OR], 1.06 per year); admission from a skilled nursing facility (OR, 2.44) or through the emergency department (OR, 1.49); cancer (OR, 2.73), intracerebral hemorrhage (OR, 2.12), coma (OR, 7.47), or lethargy or stupor on admission neurological assessment (OR, 3.38); and increasing admission severity (OR, 1.29 per decile). In contrast, African American race was associated with lower use of DNR orders (OR, 0.54). Although substantial variation in the use of DNR orders was observed across hospitals, with rates ranging from 12% to 32%, adjusting for the above patient characteristics eliminated much of this variation, including differences between major teaching and other hospitals and between hospitals with and without religious affiliations. CONCLUSIONS In our community-based analysis of patients with stroke, the use of DNR orders was common and was strongly related to several patient characteristics. These factors explained much of the variation across hospitals.
While our analysis did not account for differences in patient preferences for treatment, the differences we observed in the use of DNR orders across sociodemographic groups are suggestive of variations in care and may have important implications for the cost and quality of hospital care.
23
Abstract
OBJECTIVE To examine the association between the use of right heart catheterization (RHC) during the first 24 hours of care in the intensive care unit (ICU) and subsequent survival, length of stay, intensity of care, and cost of care. DESIGN Prospective cohort study. SETTING Five US teaching hospitals between 1989 and 1994. SUBJECTS A total of 5735 critically ill adult patients receiving care in an ICU for 1 of 9 prespecified disease categories. MAIN OUTCOME MEASURES Survival time, cost of care, intensity of care, and length of stay in the ICU and hospital, determined from the clinical record and from the National Death Index. A propensity score for RHC was constructed using multivariable logistic regression. Case-matching and multivariable regression modeling techniques were used to estimate the association of RHC with specific outcomes after adjusting for treatment selection using the propensity score. Sensitivity analysis was used to estimate the potential effect of an unidentified or missing covariate on the results. RESULTS By case-matching analysis, patients with RHC had an increased 30-day mortality (odds ratio, 1.24; 95% confidence interval, 1.03-1.49). The mean cost (25th, 50th, 75th percentiles) per hospital stay was $49,300 ($17,000, $30,500, $56,600) with RHC and $35,700 ($11,300, $20,600, $39,200) without RHC. Mean length of stay in the ICU was 14.8 (5, 9, 17) days with RHC and 13.0 (4, 7, 14) days without RHC. These findings were all confirmed by multivariable modeling techniques. Subgroup analysis did not reveal any patient group or site for which RHC was associated with improved outcomes. Patients with higher baseline probability of surviving 2 months had the highest relative risk of death following RHC. Sensitivity analysis suggested that a missing covariate would have to increase the risk of death 6-fold and the risk of RHC 6-fold for a true beneficial effect of RHC to be misrepresented as harmful.
CONCLUSION In this observational study of critically ill patients, after adjustment for treatment selection bias, RHC was associated with increased mortality and increased utilization of resources. The cause of this apparent lack of benefit is unclear. The results of this analysis should be confirmed in other observational studies. These findings justify reconsideration of a randomized controlled trial of RHC and may guide patient selection for such a study.
24
Abstract
The purpose of this study was to describe the involvement of nurses in the decision-making process of seriously ill hospitalized adults. Nurses (n = 696) completed interviews for 1,427 patients. Patient, surrogate, and physician interviews were also completed. Patients and surrogates perceived the nurse as more influential in decision making than did the nurse or physician. Many nurses reported having no (31%) or little (36%) knowledge of their patients' preferences, and 53% of the nurses did not advocate for their patients' preferences. Only 50% of the nurses reported educating their patients about the treatment plan chosen or discussing treatment options with their patients, and few (17%) discussed prognosis. This study indicates that nurses, particularly older or more experienced nurses and those working in intensive care units, are not actively involved in the decision-making process of their patients.
25
Abstract
OBJECTIVE To compare the costs of alternative strategies for the treatment of duodenal ulcer. DESIGN A cost comparison using decision analysis. METHODS A decision model was used to compare the costs per cure of an endoscopically documented duodenal ulcer for three initial treatment strategies: 1) H2-receptor antagonist therapy for 8 weeks, 2) antibiotic therapy for Helicobacter pylori infection plus H2-receptor antagonist therapy, and 3) urease test-based treatment. For symptomatic recurrences, secondary treatment strategies included empiric retreatment with the same or another regimen, and treatment based on a repeat endoscopy-guided urease test or biopsy, with an assumption of subsequent cure. The cohort modeled for this analysis consisted of patients at low risk for a malignant ulcer. Probability estimates were derived from published clinical trials, cohort studies, and expert opinion. Side effects from combination therapy with antibiotics and H2-receptor antagonists, and their resulting costs, were included from the perspective of a group practice model health maintenance organization. RESULTS For all secondary treatment strategies, initial therapy with antibiotics for H. pylori infection plus an H2-receptor antagonist resulted in the lowest average costs per symptomatic cure when the prevalence or likelihood of H. pylori infection exceeded 66% to 76%; the costs ranged from $284 for secondary (re)treatment with empiric antibiotic and H2-receptor antagonist therapy to $398 for endoscopy-guided secondary treatment. Initial treatment with an H2-receptor antagonist resulted in the highest costs, ranging from $372 for secondary treatment with empiric antibiotic and H2-receptor antagonist therapy to $679 for endoscopy-guided secondary treatment. The results were not sensitive to the rates of duodenal ulcer recurrence after either treatment, to the cost of either treatment, or to the prevalence of H. pylori.
CONCLUSIONS This cost analysis indicates that, regardless of the secondary treatment used for ulcer recurrence, initial therapy with antibiotics for H. pylori infection plus an H2-receptor antagonist provides the lowest costs per symptomatic cure. These cost savings and the lower recurrence rates associated with this treatment favor eradication of H. pylori as part of the initial treatment of duodenal ulcer.
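The prevalence-threshold reasoning above (antibiotic-first becomes cheapest once H. pylori prevalence is high enough) can be sketched as an expected-cost-per-cure comparison. The costs and cure probabilities below are invented placeholders, not the study's estimates:

```python
def cost_per_cure(prevalence, strategy):
    """Expected cost per symptomatic cure for one initial strategy.

    strategy = (cost if H. pylori present, cure prob. if present,
                cost if absent, cure prob. if absent) -- placeholder values.
    """
    c_pos, cure_pos, c_neg, cure_neg = strategy
    expected_cost = prevalence * c_pos + (1 - prevalence) * c_neg
    expected_cure = prevalence * cure_pos + (1 - prevalence) * cure_neg
    return expected_cost / expected_cure

antibiotics_plus_h2 = (250, 0.90, 350, 0.70)  # hypothetical parameters
h2_alone = (300, 0.60, 300, 0.70)             # hypothetical parameters

# Antibiotic-first wins at high H. pylori prevalence but not at low prevalence
assert cost_per_cure(0.9, antibiotics_plus_h2) < cost_per_cure(0.9, h2_alone)
assert cost_per_cure(0.1, antibiotics_plus_h2) > cost_per_cure(0.1, h2_alone)
```

Sweeping `prevalence` from 0 to 1 and finding where the two curves cross reproduces the kind of threshold (66% to 76% in the abstract) that decision analyses report.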
26
Improving compliance with breast cancer screening in older women: results of a randomized controlled trial. Arch Intern Med 1995; 155:717-22. [PMID: 7695460]
Abstract
OBJECTIVE To compare three approaches for improving compliance with breast cancer screening in older women. METHODS Randomized controlled trial using three parallel group practices at a public hospital. Subjects included women aged 65 years and older (n = 803) who were seen by residents (n = 66) attending the ambulatory clinic from October 1, 1989, through March 31, 1990. All provider groups received intensive education in breast cancer screening. The control group received no further intervention. Staff in the second group offered education to patients at their visit. In addition, flowsheets were used in the "Prevention Team" group and staff had their tasks redefined to facilitate compliance. RESULTS Medical records were reviewed to determine documented offering/receipt of clinical breast examination and mammography. A subgroup of women without previous clinical breast examination (n = 540) and without previous mammography (n = 471) were analyzed to determine the effect of the intervention. During the intervention period, women without a previous clinical breast examination were offered an examination significantly more often in the Prevention Team group than in the control group, adjusting for age, race, and comorbidity and for physicians' gender and training level. The patients in the Prevention Team group were offered clinical breast examination (31.5%) more frequently than those in the patient education or control groups, but this was not significant after adjusting for the above covariates. Likewise, mammography was offered more frequently to patients in the Prevention Team and in the patient education group than to patients in the control group, after adjusting for the factors above using logistic regression. CONCLUSIONS The results provide support for patient education and organizational changes that involve nonphysician personnel to enhance breast cancer screening among older women, particularly those without previous screening.
27
The covariance decomposition of the probability score and its use in evaluating prognostic estimates. SUPPORT Investigators. Med Decis Making 1995; 15:120-31. [PMID: 7783572 DOI: 10.1177/0272989x9501500204]
Abstract
The probability score (PS) or Brier score has been used in a large number of studies in which physician judgment performance was assessed. However, the covariance decomposition of the PS has not previously been used to evaluate medical judgment. The authors introduce the technique and demonstrate it by analyzing prognostic estimates of three groups: physicians, their patients, and the patients' decision-making surrogates. The major components of the covariance decomposition (bias, slope, and scatter) are displayed in covariance graphs for each of the three groups. The decomposition reveals that whereas the physicians have the best overall estimation performance, their bias and their scatter are not always superior to those of the other two groups. This is primarily due to two factors. First, the physicians' prognostic estimates are pessimistic. Second, the patients place the large majority of their estimates in the most optimistic category, thereby achieving low scatter. The authors suggest that the calculational simplicity of this decomposition, its informativeness, and the intuitive nature of its components make it a useful tool with which to analyze medical judgment.
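One common form of the covariance decomposition computes bias, slope, and scatter directly from paired forecasts and outcomes, using the exact identity PS = bias² + Var(d)·(1 − slope)² + scatter with population (divide-by-n) moments. A minimal sketch with invented estimates, not the SUPPORT data:

```python
def covariance_decomposition(forecasts, outcomes):
    """Bias, slope, and scatter components of the probability (Brier) score.

    Uses population (divide-by-n) moments, so the exact identity
    PS = bias**2 + var_d * (1 - slope)**2 + scatter holds.
    """
    n = len(forecasts)
    mf = sum(forecasts) / n
    md = sum(outcomes) / n
    var_f = sum((f - mf) ** 2 for f in forecasts) / n
    var_d = sum((d - md) ** 2 for d in outcomes) / n
    cov = sum((f - mf) * (d - md) for f, d in zip(forecasts, outcomes)) / n
    bias = mf - md                        # systematic optimism or pessimism
    slope = cov / var_d                   # how well forecasts track outcomes
    scatter = var_f - slope ** 2 * var_d  # forecast noise unrelated to outcome
    return bias, slope, scatter

# Invented death-probability estimates and observed outcomes (1 = died)
f = [0.9, 0.7, 0.8, 0.3, 0.2, 0.4]
d = [1, 1, 0, 0, 0, 1]
bias, slope, scatter = covariance_decomposition(f, d)
ps = sum((fi - di) ** 2 for fi, di in zip(f, d)) / len(f)
var_d = 0.25  # population variance of d (mean 0.5)
assert abs(ps - (bias ** 2 + var_d * (1 - slope) ** 2 + scatter)) < 1e-12
```

A perfect forecaster (bias 0, slope 1, scatter 0) scores PS = 0; a confident but miscalibrated one shows up as large bias or low slope, which is the pattern the abstract describes.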
28
Abstract
OBJECTIVE To compare three approaches for improving compliance with influenza and pneumococcal vaccination of elderly patients. DESIGN Randomized controlled trial using three parallel group practices at a public urban teaching hospital. SETTING Public teaching hospital. SUBJECTS All patients 65 years of age and older (n = 1202) seen by resident physicians (n = 66) attending three ambulatory medical practices from October 1, 1989 to March 31, 1990. INTERVENTIONS All three provider groups received intensive education in immunization standards. The control group received no further intervention. Staff in the second group offered education to patients at their visits. In the third group, the prevention team, a flowsheet was used, patient education was offered, and staff had their tasks redefined to facilitate compliance; for vaccinations, for example, nurses could vaccinate independently of physician initiative. MEASUREMENTS AND MAIN RESULTS Medical records were reviewed for the 1202 patients seen, including 756 patients seen during both the 1988-89 and 1989-90 influenza seasons, to determine documented offering and receipt of vaccinations. During the intervention period (1989-90), influenza vaccinations were offered significantly more frequently to prevention team patients (68.3%) than to patients in either the patient education (50.4%) or control (47.6%) groups (P = 0.006), even after adjusting for the patients' prior vaccination status, age, gender, race, and high-risk co-morbidity and for physicians' level of training. Likewise, pneumococcal vaccinations were offered more frequently to previously unvaccinated prevention team patients (28.3%) than to patient education (6.5%) or control (5.4%) group patients (P = 0.001), even after adjusting for these factors using multivariate analysis. Compliance rates did not differ between patient education and control subjects for either vaccine.
Pre-intervention physician surveys documented higher perceived than actual compliance for both vaccines, with 89.0% and 52.8% of physicians believing that they complied with influenza and pneumococcal vaccination guidelines, respectively. CONCLUSIONS The results of this trial provide strong support for organizational changes that involve non-physician personnel to enhance vaccination rates among older adults.
29
Risk factors for postthyroidectomy hypocalcemia. Surgery 1994; 116:641-7; discussion 647-8. [PMID: 7940161]
Abstract
BACKGROUND Hypocalcemia is a common sequela of thyroidectomy; however, its causative factors have not been completely delineated. METHODS A prospective study of 60 patients who underwent unilateral (n = 15) or bilateral (n = 45) thyroidectomy between 1990 and 1993 was completed to determine the incidence of and risk factors for hypocalcemia. Free thyroxine, thyrotropin, and alkaline phosphatase levels were obtained before operation in all patients, together with preoperative and postoperative ionized calcium, parathyroid hormone (PTH), calcitonin, and 1,25-dihydroxyvitamin D3 levels. For each patient, age, gender, extent of thyroidectomy, initial versus reoperative neck surgery, weight and pathologic characteristics of resected thyroid tissue, substernal thyroid extension, and parathyroid resection and autotransplantation were recorded. RESULTS Hypocalcemia, defined by an ionized calcium level less than 4.5 mg/dl, occurred in 28 patients (47%), including nine (15%) symptomatic patients who required vitamin D and/or calcium for 2 to 6 weeks. In no patient did permanent hypoparathyroidism develop. In a multivariate logistic regression analysis, factors predictive of postoperative hypocalcemia included an elevated free thyroxine level (p = 0.003), cancer (p = 0.010), and substernal extension (p = 0.046). CONCLUSIONS Postoperative decline in parathyroid hormone was not an independent risk factor for hypocalcemia, indicating that factors besides parathyroid injury, ischemia, or removal are involved in the pathogenesis of postthyroidectomy hypocalcemia. An elevated free thyroxine level, substernal thyroid disease, and carcinoma are risk factors for postthyroidectomy hypocalcemia, and their presence should warrant routine postoperative calcium measurement. In the absence of these risk factors, routine postoperative measurement of serum calcium is unnecessary.
30
A meta-analysis of methods to prevent venous thromboembolism following total hip replacement. JAMA 1994; 271:1780-5. [PMID: 7515115]
Abstract
OBJECTIVE While several methods of prophylaxis have been shown to reduce the risk of venous thromboembolism following total hip replacement, the safest and most effective agent is unclear. To clarify this issue, we performed a meta-analysis of the randomized trials of methods used to prevent venous thromboembolism following total hip replacement. DATA SOURCE English-language articles on human studies from 1966 through 1993 were obtained from a MEDLINE database search with indexing terms including thromboembolism, hip replacement or hip prosthesis, and randomized controlled trials. Additional references were obtained from study bibliographies. STUDY SELECTION The following criteria were used to select studies for inclusion: study design: randomized clinical trial; study population: patients undergoing elective total hip replacement; interventions: aspirin, warfarin, dextran, heparin, low-molecular-weight heparin, or compression stockings; and outcomes: venous thromboembolism and major hemorrhage. DATA EXTRACTION Methodological and descriptive data from each study were abstracted by one author, who was blinded to quantitative outcomes data. DATA SYNTHESIS Ninety-one treatment groups and 25 control groups were identified from 56 trials. Four treatment groups were excluded because of rarely used combinations. Trial populations were clinically homogeneous. When compared with the control arm, all treatments except aspirin reduced the risk of all deep venous thromboses (risk difference range, 0.18 to 0.31; all P values < .05). All treatments except aspirin reduced the risk of proximal venous thrombosis (risk difference range, 0.09 to 0.18; all P values < .05). Only low-molecular-weight heparin and stockings reduced the risk of pulmonary embolism, both with risk differences equal to 0.02. The crude risks of clinically important bleeding as defined by the individual trials were 0% for stockings, 0.3% for controls, and 1.8% for low-molecular-weight heparin.
CONCLUSIONS The results suggest that low-molecular-weight heparin and compression stockings have the greatest relative efficacy in preventing venous thromboembolism following total hip replacement. Low-molecular-weight heparin may be more effective, though at a small risk of clinically important bleeding.
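Risk differences like those reported above are typically pooled across trials with inverse-variance (fixed-effect) weighting. A sketch with invented trial counts, not the meta-analysis data:

```python
def pooled_risk_difference(trials):
    """Fixed-effect (inverse-variance weighted) pooled risk difference.

    Each trial is (events_treated, n_treated, events_control, n_control);
    the risk difference is control risk minus treated risk, so positive
    values mean the prophylaxis prevented events.
    """
    num = den = 0.0
    for et, nt, ec, nc in trials:
        pt, pc = et / nt, ec / nc
        var = pt * (1 - pt) / nt + pc * (1 - pc) / nc  # variance of the RD
        num += (pc - pt) / var
        den += 1.0 / var
    return num / den

# Invented counts for three small trials (illustration only):
# (treated events, treated n, control events, control n)
trials = [(10, 100, 30, 100), (8, 80, 20, 80), (15, 120, 36, 120)]
print(round(pooled_risk_difference(trials), 3))  # -> 0.176
```

Weighting each trial by the inverse of its variance gives large, precise trials more influence, which is why pooled estimates can be tighter than any single trial's.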
31
Abstract
Prior to right-heart catheterization of 846 patients, 198 study physicians estimated values of pulmonary capillary wedge pressure (WP), cardiac index (CI), and systemic vascular resistance index (VRI). The physicians also expressed their confidence in these estimates. Actual values of WP, CI, and VRI as determined by catheterization enabled the authors to evaluate the quality of the physicians' judgments. The discrimination of the judgments was modest; areas under the ROC curves for WP, CI, and VRI were 0.724, 0.681, and 0.656, respectively. Calculated using clinically relevant cutoff values, sensitivities were 64%, 50%, and 64%, and specificities were 71%, 75%, and 63%, respectively. Calibration of the estimates of WP, CI, and VRI was also modest; physicians tended to overestimate low values and underestimate high values. Physicians were generally confident of their estimates, but there was no relation between confidence and accuracy. Experienced physicians were no more accurate than less experienced ones, although they were significantly more confident. The authors conclude that physicians should not use their levels of confidence in their subjective estimates of cardiac function in deciding whether to base therapy on these estimates.
32
The effect of patient gender on the prevalence and recognition of alcoholism on a general medicine inpatient service. J Gen Intern Med 1992; 7:38-45. [PMID: 1548546 DOI: 10.1007/bf02599100]
Abstract
OBJECTIVES 1) to determine the rate of alcoholism among general internal medicine inpatients, 2) to assess the recognition and referral rates of these patients by their physicians, 3) to determine the effect of patient gender on physician recognition of alcoholism, and 4) to compare the observed alcoholism rates with rates reported in frequently cited studies, controlling for gender distribution. DESIGN Cross-sectional study, face-to-face interviews. SETTING A large, county-owned metropolitan teaching hospital. PATIENTS/PARTICIPANTS Adult patients admitted to an inpatient general medical firm. From among 95 consecutive admissions, 78 patients (81%) entered the study. INTERVENTION The Michigan Alcoholism Screening Test (MAST) was administered to all study subjects. Chart reviews provided evidence of physician recognition and referral of patients with alcoholism. The observed rate of alcoholism was compared with rates reported in frequently cited studies after stratifying by type of service sampled and alcoholism assessment method used. Rates were then standardized for gender using the direct method. MEASUREMENTS AND MAIN RESULTS Twenty-two patients (28%) were found to be alcoholic by MAST criteria (scores of 5 or higher). Scores in the range indicative of alcoholism were observed more frequently among the 36 men than among the 42 women (p = 0.002) and varied by age group. Only the interaction between gender and age group was significant (p = 0.023). Sixteen of the 22 patients (73%) with alcoholism by MAST criteria were identified as alcoholic by physician evaluation. Physicians were significantly more likely to identify as alcoholic those patients with MAST scores higher than 29 and tended to more readily identify men who had alcoholism than women. Among physician-identified patients, only about one in five was referred for rehabilitation. 
The standardized alcoholism rate found (291/1,000) ranked about halfway between the highest and the lowest standardized rates from nine other studies of medicine inpatient services (465/1,000 and 112/1,000). CONCLUSIONS Patient gender affected the prevalence of alcoholism and influenced its recognition by physicians. Alcoholism by MAST criteria was found in one in eight female and nearly one in two male inpatients. Physician recognition was higher for men and for more severely affected patients. An understanding of gender effects is essential to the appropriate interpretation of the results of screening tests for alcoholism and to understanding differences in reported crude rates of alcoholism among studies. Supplementing clinical impressions with the routine use of standardized methods for detecting alcoholism is recommended.
33
Altered urinary beta 2-microglobulin excretion as an index of nephrotoxicity. Kidney Int Suppl 1991; 34:S18-20. [PMID: 1762326]
Abstract
Experimental and clinical evidence indicates that beta 2-microglobulin (beta 2m) is actively reabsorbed from the glomerular filtrate by receptors on the brush border located in the proximal third of the proximal tubule. Increased beta 2m excretion in the absence of an increased filtered load of beta 2m is indicative of nephrotoxicity. The data presented show that urine beta 2m concentrations increase and creatinine concentrations decrease within four hours of administration of diatrizoate meglumine (DMG). In 9 of the 20 patients, the urinary excretion of beta 2m (U beta 2m) increased to clearly abnormal values. In 12 of the 20 patients, the beta 2m excretion, expressed as mg per g creatinine (Cr), increased from normal (less than 0.30) to an abnormal excretion rate. The increase in beta 2m excretion per g Cr occurring immediately after DMG administration led us to conclude that this effect occurs when the nephrotoxic agent is present in the kidney. Based on these data, we believe that the onset of abnormal urinary beta 2m excretion coincides with the presence of the causative agent. This criterion, therefore, should prove to be useful in determining the time to conduct studies designed to search for the causative agent(s) in Balkan endemic nephropathy.
|
34
|
Comparison of physician judgment and decision aids for ordering chest radiographs for pneumonia in outpatients. Ann Emerg Med 1991; 20:1215-9. [PMID: 1952308 DOI: 10.1016/s0196-0644(05)81474-x] [Citation(s) in RCA: 45] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022]
Abstract
STUDY OBJECTIVES To compare physician judgment in the use of chest radiographs for diagnosing pneumonia with decision rules developed by Diehr, Singal, Heckerling, and Gennis. DESIGN Prospective observational investigation with preradiograph survey of physicians' intent to order chest radiographs for patients presenting with respiratory complaints. All patients had uniform clinical data collected, including chest radiographs and sufficient information to retrospectively apply the four clinical prediction rules. SETTING The emergency department and medical outpatient clinic of a major urban teaching hospital. PARTICIPANTS Adult patients presenting with recent history of acute cough or exacerbation of chronic cough plus either fever, sputum production, or hemoptysis. RESULTS Of 290 patients, 21 (7%) had pneumonia. The sensitivity of physician judgment (0.86) exceeded that of all four decision rules. The specificity of the Diehr (0.67), Heckerling (0.67), and Gennis (0.76) rules exceeded that of physician judgment (0.58). The accuracy of the Gennis (0.76) and Heckerling (0.68) rules also exceeded that of the physicians (0.60). DISCUSSION Physicians' diagnostic and therapeutic decisions were characterized by high sensitivity but lower specificity for ordering chest radiographs to diagnose pneumonia. The higher specificity and accuracy of two of the decision rules suggest that they may have a role in patient evaluation.
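The sensitivity, specificity, and accuracy figures above all follow from a standard 2x2 confusion table. As a minimal sketch, the counts below are a hypothetical reconstruction chosen to be consistent with the reported physician-judgment values (21 pneumonia cases among 290 patients), not the study's raw data:

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Sensitivity, specificity, and accuracy from 2x2 confusion counts."""
    sensitivity = tp / (tp + fn)                # true positives / all diseased
    specificity = tn / (tn + fp)                # true negatives / all non-diseased
    accuracy = (tp + tn) / (tp + fn + fp + tn)  # correct calls / all patients
    return sensitivity, specificity, accuracy

# Hypothetical counts matching the reported physician figures
# (sensitivity 0.86, specificity 0.58, accuracy 0.60).
sens, spec, acc = diagnostic_metrics(tp=18, fn=3, fp=113, tn=156)
```

The same function applied to each rule's counts would reproduce the rule-versus-physician comparison reported in the abstract.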
|
35
|
The acute effect of phenylpropanolamine and brompheniramine on blood pressure in controlled hypertension: a randomized double-blind crossover trial. J Gen Intern Med 1991; 6:503-6. [PMID: 1684991 DOI: 10.1007/bf02598217] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/28/2022]
Abstract
STUDY OBJECTIVE To determine the acute effect of phenylpropanolamine, 75 mg, and brompheniramine, 12 mg, in combination (PPA/B) on blood pressure in patients with controlled hypertension, using ambulatory blood pressure monitoring (ABPM). DESIGN Randomized double-blind crossover trial. SETTING Outpatient clinic at one medical center. PARTICIPANTS 13 healthy volunteers aged 36 to 64 years, receiving medication for hypertension. INTERVENTIONS Following 24-hour baseline ABPM, participants were randomized to receive either placebo or PPA/B every 12 hours for three doses, while ABPM continued. After a 24-hour washout period, all participants received the crossover regimen. MEASUREMENTS AND MAIN RESULTS No clinically important or statistically significant difference was noted for mean systolic and diastolic blood pressures during the baseline (125/75), PPA/B (127/72), and placebo (126/73) phases of the study. Within the first four hours of treatment, the mean change in systolic blood pressure from baseline between PPA/B and placebo phases was 1.7 mm Hg (95% CI -5.3 to 8.7), and mean change in diastolic blood pressure was 0.9 mm Hg (95% CI -1.6 to 3.5), excluding a first-dose pressor effect. CONCLUSION When used as recommended, PPA/B, a commonly used over-the-counter cold medication, has no significant acute effect on blood pressure in patients with controlled hypertension.
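The confidence intervals reported above are of the standard paired form, mean difference ± t × SE. A minimal sketch; the difference values and degrees of freedom below are illustrative, not data from the trial:

```python
import statistics

def paired_ci95(diffs, t_crit):
    """Mean of paired differences with a 95% CI: mean +/- t_crit * SE."""
    n = len(diffs)
    mean = statistics.fmean(diffs)
    se = statistics.stdev(diffs) / n ** 0.5   # standard error of the mean
    return mean, (mean - t_crit * se, mean + t_crit * se)

# Hypothetical mm Hg differences (active minus placebo) for 5 subjects;
# 2.776 is the two-sided 95% t value for 4 degrees of freedom.
mean, (lo, hi) = paired_ci95([1, 2, 3, 4, 5], t_crit=2.776)
```

As in the abstract, an interval that spans zero (like the reported -5.3 to 8.7) is consistent with no pressor effect.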
|
36
|
Effects of cytokine combinations on acute phase protein production in two human hepatoma cell lines. JOURNAL OF IMMUNOLOGY (BALTIMORE, MD. : 1950) 1991; 146:3032-7. [PMID: 1707930] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Subscribe] [Scholar Register] [Indexed: 12/28/2022]
Abstract
We evaluated the effects of binary combinations of four cytokines on production of the positive acute phase proteins alpha-1 antichymotrypsin, haptoglobin and fibrinogen, and the negative acute phase proteins albumin and alpha-fetoprotein (AFP) in two human hepatoma cell lines. The effects of the cytokine combinations on the five proteins varied; each protein exhibited a unique and specific pattern of response to the cytokine combinations. In Hep G2 cells, antichymotrypsin was induced by all four cytokines, IL-6, IL-1, TNF-alpha, and transforming growth factor beta 1 alone, and their effects in binary combinations could be attributed to additive or minimally synergistic interactions. Fibrinogen was induced only by IL-6 and this induction was inhibited by IL-1 alpha, TNF-alpha or transforming growth factor beta 1. Haptoglobin was also induced only by IL-6, but TNF-alpha was the only cytokine that inhibited this induction at all concentrations of IL-6. Each of the four cytokines alone down regulated production of AFP and albumin. However, binary combinations of the four cytokines were simply additive, for the most part, in inhibiting AFP production, whereas the inhibitory effects of combinations of cytokines on albumin production differed significantly from simple additive effects. These observations, taken together with studies of effects of cytokine combinations on other acute phase proteins, indicate that the various acute phase proteins respond differently to different combinations of cytokines and that the potential exists for highly specific regulation of synthesis of individual plasma proteins by cytokine interactions. These findings imply that the acute phase response in vivo represents the integrated sum of multiple, separately regulated changes in gene expression.
|
37
|
Effects of cytokine combinations on acute phase protein production in two human hepatoma cell lines. THE JOURNAL OF IMMUNOLOGY 1991. [DOI: 10.4049/jimmunol.146.9.3032] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/01/2023]
|
38
|
A risk-benefit analysis of elective bilateral oophorectomy: effect of changes in compliance with estrogen therapy on outcome. Am J Obstet Gynecol 1991; 164:165-74. [PMID: 1986605 DOI: 10.1016/0002-9378(91)90649-c] [Citation(s) in RCA: 58] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022]
Abstract
A bilateral oophorectomy at the time of elective hysterectomy is often performed to prevent ovarian cancer. The assumption that endogenous estrogen can be easily replaced with supplemental medication fosters the decision for routine oophorectomy. Published reports on the use of postmenopausal estrogen indicate that compliance is less than perfect. This fact could affect the overall outcome. Decision analysis techniques with Markov cohort modeling were used to evaluate the policy of elective bilateral oophorectomy. Results from studies judged methodologically sound were combined to determine values representing the influence of estrogen on coronary heart disease, breast cancer, and osteoporotic fracture. The decision tree also explicitly incorporated patient compliance. When compliance with estrogen therapy is assumed to be perfect, oophorectomy yields longer life expectancy than retaining the ovaries. When actual drug-taking behavior is considered, retaining the ovaries results in longer survival. This analysis highlights the importance of including the effects of patient compliance with treatment recommendations when the impact of a health policy decision such as prophylactic surgery is assessed.
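A Markov cohort model of the kind described tracks a cohort through health states cycle by cycle, with life expectancy accumulated as cohort-years spent alive. A two-state sketch in which imperfect compliance blends treated and untreated mortality; all rates below are hypothetical placeholders, not the study's inputs:

```python
def life_expectancy(annual_mortality, horizon=60):
    """Cohort life expectancy (years) from a two-state (alive/dead)
    Markov model with a constant annual probability of death."""
    alive, years = 1.0, 0.0
    for _ in range(horizon):
        years += alive                  # cohort-years lived this cycle
        alive *= 1.0 - annual_mortality # fraction surviving to next cycle
    return years

# Imperfect compliance yields an effective mortality between the
# fully treated and untreated rates (all three values hypothetical).
p_treated, p_untreated, compliance = 0.02, 0.03, 0.6
p_effective = compliance * p_treated + (1 - compliance) * p_untreated
le = life_expectancy(p_effective)
```

This is the mechanism by which the abstract's conclusion can flip: with compliance at 1.0 the cohort gets the full treated rate, while realistic compliance pushes the effective rate toward the untreated one.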
|
39
|
A study of combined continuous ethinyl estradiol and norethindrone acetate for postmenopausal hormone replacement. Maturitas 1990. [DOI: 10.1016/0378-5122(90)90023-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
40
|
Prevention of hepatocyte injury and lipid peroxidation by iron chelators and alpha-tocopherol in isolated iron-loaded rat hepatocytes. Hepatology 1990; 12:31-9. [PMID: 2373483 DOI: 10.1002/hep.1840120107] [Citation(s) in RCA: 38] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/31/2022]
Abstract
These experiments were performed to characterize the relationship between lipid peroxidation and hepatocyte viability in iron overload. Hepatocytes were isolated from rats with chronic dietary iron overload and the effects of in vitro iron chelation on lipid peroxidation, cell viability and ultrastructure were studied over a 4-hr incubation period. Cell viability was significantly reduced at 3 and 4 hr in iron-loaded hepatocytes compared with controls and was preceded by an increase in iron-dependent lipid peroxidation. Similarly, extensive degenerative ultrastructural changes were observed in iron-loaded hepatocytes compared with controls after 4 hr of incubation. In vitro iron chelation with either deferoxamine or apotransferrin protected against lipid peroxidation, loss of viability and ultrastructural damage in iron-loaded hepatocytes. The addition of an antioxidant, alpha-tocopherol, also protected against lipid peroxidation and preserved cell viability over a 4-hr incubation. The protective effects of iron chelators and alpha-tocopherol support a strong association between iron-dependent lipid peroxidation and hepatocellular injury in iron overload.
|
41
|
A study of combined continuous ethinyl estradiol and norethindrone acetate for postmenopausal hormone replacement. Am J Obstet Gynecol 1990; 162:438-46. [PMID: 2309827 DOI: 10.1016/0002-9378(90)90402-s] [Citation(s) in RCA: 43] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022]
Abstract
In a blinded, prospective, dose-response pilot study of continuous estrogen-progestin replacement therapy, 77 thin, nonsmoking, white women, who were 12 to 60 months postmenopausal and had normal medical histories, were randomly assigned to receive one of five dose combinations of daily ethinyl estradiol and norethindrone acetate (20 micrograms and 1.0 mg, 10 micrograms and 1.0 mg, 10 micrograms and 0.5 mg, 5 micrograms and 1.0 mg, and 5 micrograms and 0.5 mg) or conjugated estrogens 0.625 mg on days 1 to 25 and medroxyprogesterone acetate 10 mg on days 16 to 25. An additional 10 women meeting the same criteria served as a comparison group by taking calcium only. During 12 months of therapy, continuous users had significantly less vaginal bleeding and spotting than did sequential users. As compared with baseline values, bone metabolism and computerized tomographic measurements of vertebral trabecular bone density at month 12 indicated reduced bone turnover and increased density in hormone users. Endometrial biopsy specimens were negative for hyperplasia and neoplasia. The continuous ethinyl estradiol-norethindrone acetate tablet, even at the lowest doses studied, provided the same salutary effects on bone, endometrium, and postmenopausal symptoms as sequential therapy while minimizing annoying vaginal bleeding and spotting.
|
42
|
|
43
|
Abstract
The lens model recently has been extended to consider multiple outcomes and sequential use of clinical information. The authors have used this extended model 1) to describe the relationship between clinical information and physicians' assessments of hemodynamic status, 2) to describe the empirical relationship between clinical information and physiologic measures of hemodynamic status, and 3) to compare physicians' use of information with its empirical utility. Physicians prospectively provided estimates of cardiac index and pulmonary capillary wedge pressure for 440 intensive care unit patients prior to right heart catheterization. The correlation between physicians' estimates and measured hemodynamic status was lower than that between clinical information and hemodynamic status (0.42 versus 0.67). Only 7% of physicians' judgment was related to subsequent ancillary testing. Empirically, subsequent ancillary testing contributed 30% to the explanation of hemodynamic status. The lens model describes limitations of physician judgment in estimating left ventricular function and helps explain how patient features relate to measured hemodynamic status.
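The lens-model comparison above rests on correlating two sets of predictions against a measured criterion. Assuming the correlations are Pearson product-moment coefficients (the usual choice in lens-model work, though the abstract does not say), a minimal sketch with hypothetical data:

```python
import statistics

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length lists."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

Computed once for physician estimates versus measured values and once for model predictions from clinical information versus measured values, this yields the kind of 0.42-versus-0.67 contrast reported in the abstract.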
|
44
|
Abstract
Five hundred ninety students receiving primary care in a university health service were surveyed anonymously in 1985-86 to study their self-reported sexual behavior and knowledge and attitudes about acquired immunodeficiency syndrome (AIDS). Most students (75%) were heterosexual; 3% were homosexual, 3% bisexual, and 15% had never been sexually active. Many students (32%) had greater than or equal to 2 sexual partners in the past year, but only 23% of these had changed their sexual practices because of concern about AIDS. Some students with high-risk sexual behavior were not very knowledgeable: among homosexual or bisexual men, those with greater than or equal to 6 recent sexual partners knew less than others (P less than 0.001). Overall, less knowledgeable students had more personal concerns about AIDS, favored limiting the social activities of people infected with human immunodeficiency virus (HIV), and favored screening for HIV-antibody; these associations between knowledge and attitudes were significant even when controlling for demographic characteristics and sexual behavior with multiple linear regression. The authors conclude that many students receiving primary care reported sexual behavior that could spread HIV, and that less knowledgeable students had particular concerns and attitudes about AIDS.
|