1. Prognostic score-based methods for estimating center effects based on survival probability: Application to post-kidney transplant survival. Stat Med 2024. PMID: 38780593; DOI: 10.1002/sim.10092.
Abstract
In evaluating the performance of different facilities or centers on survival outcomes, the standardized mortality ratio (SMR), which compares observed to expected mortality, has been widely used, particularly in the evaluation of kidney transplant centers. Despite its utility, the SMR may exaggerate center effects in settings where survival probability is relatively high. An example is one-year graft survival among U.S. kidney transplant recipients. We propose a novel approach to estimate center effects in terms of differences in survival probability (ie, each center versus a reference population). An essential component of the method is a prognostic score weighting technique, which permits accurate evaluation of centers without necessarily specifying a correct survival model. Advantages of our approach over existing facility-profiling methods include a metric based on survival probability (greater clinical relevance than ratios of counts/rates); direct standardization (valid for comparisons between centers, unlike indirect standardization based methods such as the SMR); and less reliance on correct model specification (since the assumed model is used to generate risk classes, as opposed to fitted-value based 'expected' counts). We establish the asymptotic properties of the proposed weighted estimator and evaluate its finite-sample performance under a diverse set of simulation settings. The method is then applied to evaluate U.S. kidney transplant centers with respect to graft survival probability.
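A toy sketch of the direct-standardization idea described above. All column names are hypothetical, and censoring is ignored here, whereas the paper's prognostic-score weighting handles it formally:

```python
import pandas as pd

# Hypothetical columns: 'center', 'risk_class' (from a working survival
# model), and 'surv1y' (1 = graft surviving at 1 year; assumes complete
# follow-up, unlike the published estimator).
def center_effects(df: pd.DataFrame) -> pd.Series:
    # Reference distribution of risk classes: the pooled national sample.
    ref_wts = df["risk_class"].value_counts(normalize=True)
    # Population survival probability under the same standardization.
    pop_surv = (df.groupby("risk_class")["surv1y"].mean() * ref_wts).sum()

    def one_center(g: pd.DataFrame) -> float:
        # Center-specific survival within each risk class, weighted by the
        # REFERENCE class distribution (direct standardization). Classes
        # absent at a center are simply skipped in this toy version.
        class_surv = g.groupby("risk_class")["surv1y"].mean()
        std_surv = (class_surv * ref_wts).dropna().sum()
        return std_surv - pop_surv  # effect on the survival-probability scale

    return df.groupby("center").apply(one_center)
```

Because every center is weighted to the same reference risk-class distribution, the resulting effects are directly comparable across centers, which is the stated advantage over SMR-style indirect standardization.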
2. The 10-Year Effects of Intensive Lifestyle Intervention on Kidney Outcomes. Kidney Med 2024; 6:100814. PMID: 38689836; PMCID: PMC11059390; DOI: 10.1016/j.xkme.2024.100814.
Abstract
Rationale & Objective Limited data exist on longitudinal kidney outcomes after nonsurgical obesity treatments. We investigated the effects of intensive lifestyle intervention on kidney function over 10 years. Study Design Post hoc analysis of the Action for Health in Diabetes (Look AHEAD) randomized controlled trial. Setting & Participants We studied 4,901 individuals with type 2 diabetes and body mass index of ≥25 kg/m2 enrolled in Look AHEAD (2001-2015). The original Look AHEAD trial excluded individuals with 4+ protein on urine dipstick, serum creatinine level of >1.4 mg/dL (women) or >1.5 mg/dL (men), or dialysis dependence. Exposures Intensive lifestyle intervention versus diabetes support and education (ie, usual care). Outcome The primary outcome was estimated glomerular filtration rate (eGFR, mL/min/1.73 m2) slope. Secondary outcomes were mean eGFR and the slope and mean of the urine albumin-to-creatinine ratio (UACR, mg/mg). Analytical Approach Linear mixed-effects models with random slopes and intercepts to evaluate the association between randomization arms and within-individual repeated measures of eGFR and UACR. We tested for effect modification by baseline eGFR. Results At baseline, mean eGFR was 89 mL/min/1.73 m2, and 83% of participants had a normal UACR. Over 10 years, there was no difference in eGFR slope between arms (+0.064 per year; 95% CI: -0.036 to 0.16; P = 0.21). Neither the slope nor the mean of UACR differed between arms. Baseline eGFR, categorized as <80, 80-100, or >100, did not modify the intervention's effect on eGFR slope or mean. Limitations Loss of muscle mass may confound creatinine-based eGFR. Conclusions In patients with type 2 diabetes and preserved kidney function, intensive lifestyle intervention did not change eGFR slope over 10 years. Among participants with baseline eGFR <80, the lifestyle intervention arm had a slightly higher longitudinal mean eGFR than usual care. Further studies evaluating the effects of intensive lifestyle intervention in people with kidney disease are needed.
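As an illustration of the analytical approach, a random-intercept, random-slope mixed model can be fit with statsmodels. The data below are synthetic and the variable names are assumptions, not the trial's:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n, visits = 200, 5
ids = np.repeat(np.arange(n), visits)
years = np.tile(np.arange(visits), n).astype(float)
arm = np.repeat(rng.integers(0, 2, n), visits)      # 0 = usual care
slope_i = rng.normal(0.0, 0.5, n)                   # per-person random slopes
egfr = (89 - 1.0 * years + 0.06 * years * arm
        + slope_i[ids] * years + rng.normal(0, 3, n * visits))
long_df = pd.DataFrame({"id": ids, "years": years, "arm": arm, "egfr": egfr})

# 'years:arm' fixed effect estimates the between-arm difference in eGFR slope.
fit = smf.mixedlm("egfr ~ years * arm", long_df,
                  groups="id", re_formula="~years").fit(reml=True)
print(fit.summary())
```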
3. Scoring donor lungs for graft failure risk: The Lung Donor Risk Index (LDRI). Am J Transplant 2024; 24:839-849. PMID: 38266712; DOI: 10.1016/j.ajt.2024.01.022.
Abstract
Lung transplantation lags behind other solid organ transplants in donor lung utilization due, in part, to uncertainty regarding donor quality. We sought to develop an easy-to-use donor risk metric that, unlike existing metrics, accounts for a rich set of donor factors. Our study population consisted of n = 26,549 adult lung transplant recipients abstracted from the United Network for Organ Sharing Standard Transplant Analysis and Research file. We used Cox regression to model graft failure (GF; earliest of death or retransplant) risk based on donor and transplant factors, adjusting for recipient factors. We then derived and validated a Lung Donor Risk Index (LDRI) and developed a pertinent online application (https://shiny.pmacs.upenn.edu/LDRI_Calculator/). We found 12 donor/transplant factors that were independently predictive of GF: age, race, insulin-dependent diabetes, the difference between donor and recipient height, smoking, cocaine use, cytomegalovirus seropositivity, creatinine, human leukocyte antigen (HLA) mismatch, ischemia time, and donation after circulatory death. Validation showed the LDRI to have GF risk discrimination that was reasonable (C = 0.61) and higher than that of any of its predecessors. The LDRI is intended for use by transplant centers, organ procurement organizations, and regulatory agencies, and to support patient decision-making. Unlike its predecessors, the proposed LDRI could gain wide acceptance because of its granularity and similarity to the Kidney Donor Risk Index.
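The general recipe for deriving a risk index of this kind from a fitted Cox model, sketched with lifelines on its bundled example data as a stand-in for the UNOS STAR file:

```python
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi
from lifelines.utils import concordance_index

# Stand-in data; in the paper these would be donor/transplant covariates.
df = load_rossi()                       # duration 'week', event 'arrest'
train, test = df.iloc[:300], df.iloc[300:]

cph = CoxPHFitter()
cph.fit(train, duration_col="week", event_col="arrest")

# A risk index is essentially the exponentiated linear predictor:
# index_i = exp(sum_k beta_k * x_ik); higher values = higher failure risk.
test = test.assign(risk_index=cph.predict_partial_hazard(test))

# Harrell's C for discrimination on held-out data (the LDRI reported ~0.61).
c = concordance_index(test["week"], -test["risk_index"], test["arrest"])
print(f"C-index: {c:.2f}")
```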
4. Impact of delayed listing after initiating kidney transplant evaluation on transplant outcomes. Clin Transplant 2024; 38:e15319. PMID: 38683684; DOI: 10.1111/ctr.15319.
Abstract
OBJECTIVE Longer end-stage renal disease time has been associated with inferior kidney transplant outcomes, but the contribution of the transplant evaluation period is uncertain. We explored the relationship between time from evaluation to listing (ELT) and transplant outcomes. METHODS This retrospective study included 2535 adult kidney transplants from 2000 to 2015. Kaplan-Meier survival curves, log-rank tests, and Cox regression models were used to compare transplant outcomes. RESULTS Patient survival for both deceased donor (DD) recipients (p < .001) and living donor (LD) recipients (p < .0001) was significantly higher when ELT was less than 3 months. In DD recipients, the risk associated with ELT appeared to be mediated by other factors, as adjusted models showed no association with graft loss or death. For LD recipients, ELT remained a risk factor for patient death after covariate adjustment: each month of ELT was associated with an increased risk of death (HR = 1.021, p = .04) but not graft loss in adjusted models. CONCLUSIONS Kidney transplant recipients with longer ELT had higher rates of death after transplant, and ELT was independently associated with an increased risk of death for LD recipients. Investigations of the impact of pretransplant evaluation on post-transplant outcomes can inform transplant policy and practice.
5. Rise in First-Time ERCP For Benign Indications >1 Year After Cholecystectomy Is Associated With Worse Outcomes. Clin Gastroenterol Hepatol 2024:S1542-3565(24)00309-4. PMID: 38599308; DOI: 10.1016/j.cgh.2024.03.027.
Abstract
BACKGROUND & AIMS Greater availability of less invasive biliary imaging to rule out choledocholithiasis should reduce the need for diagnostic endoscopic retrograde cholangiopancreatography (ERCP) in patients who have a remote history of cholecystectomy. The primary aims were to determine the incidence, characteristics, and outcomes of individuals who undergo first-time ERCP >1 year after cholecystectomy (late-ERCP). METHODS Data from a commercial insurance claims database (Optum Clinformatics) identified 583,712 adults who underwent cholecystectomy, 4274 of whom underwent late-ERCP, defined as first-time ERCP for nonmalignant indications >1 year after cholecystectomy. Outcomes were temporal trends in late-ERCP, biliary imaging utilization, and post-ERCP outcomes. Multivariable logistic regression was used to examine patient characteristics associated with undergoing late-ERCP. RESULTS Despite a temporal increase in the use of noninvasive biliary imaging (35.9% in 2004 to 65.6% in 2021; P < .001), the rate of late-ERCP increased 8-fold (0.5 to 4.2 per 1000 person-years from 2005 to 2021; P < .001). Although only 44% of patients who underwent late-ERCP had gallstone removal, there were high rates of post-ERCP pancreatitis (7.1%), hospitalization (13.1%), and new chronic opioid use (9.7%). Factors associated with late-ERCP included a concomitant disorder of gut-brain interaction (odds ratio [OR], 6.48; 95% confidence interval [CI], 5.88-6.91) and metabolic dysfunction-associated steatotic liver disease (OR, 3.27; 95% CI, 2.79-3.55), along with use of anxiolytics (OR, 3.45; 95% CI, 3.19-3.58), antispasmodics (OR, 1.60; 95% CI, 1.53-1.72), and chronic opioids (OR, 6.24; 95% CI, 5.79-6.52). CONCLUSIONS The rate of late-ERCP after cholecystectomy is increasing significantly, particularly in patients with comorbidities associated with disorders of gut-brain interaction and mimickers of choledocholithiasis. Late-ERCP is associated with disproportionately higher rates of adverse events, including initiation of chronic opioid use.
6. Living donor liver transplantation in the United States for alcohol-associated liver disease and nonalcoholic steatohepatitis: An evaluation in the current era. Liver Transpl 2024; 30:446-450. PMID: 37773053; DOI: 10.1097/lvt.0000000000000268.
7. Effectiveness and safety of prophylactic anticoagulation among hospitalized patients with inflammatory bowel disease. Blood Adv 2024; 8:1272-1280. PMID: 38163322; PMCID: PMC10918481; DOI: 10.1182/bloodadvances.2023011756.
Abstract
Hospitalized patients with inflammatory bowel disease (IBD) are at increased risk of venous thromboembolism (VTE). We aimed to evaluate the effectiveness and safety of prophylactic anticoagulation compared with no anticoagulation in hospitalized patients with IBD. We conducted a retrospective cohort study using a hospital-based database. We included patients with IBD who had a length of hospital stay ≥2 days between 1 January 2016 and 31 December 2019. We excluded patients who had other indications for anticoagulation, users of direct oral anticoagulants, warfarin, or therapeutic-intensity heparin, and patients admitted for surgery. We defined exposure to prophylactic anticoagulation using charge codes. The primary effectiveness outcome was VTE. The primary safety outcome was bleeding. We used propensity score matching to reduce potential differences between users and nonusers of anticoagulants and Cox proportional-hazards regression to estimate adjusted hazard ratios (HRs) and 95% confidence intervals (CIs). The analysis included 56,194 matched patients with IBD (users of anticoagulants, n = 28,097; nonusers, n = 28,097). In the matched sample, prophylactic use of anticoagulants (vs no use) was associated with a lower rate of VTE (HR, 0.62; 95% CI, 0.41-0.94) and no difference in the rate of bleeding (HR, 1.05; 95% CI, 0.87-1.26). In this study of hospitalized patients with IBD, prophylactic use of heparin was associated with a lower rate of VTE without an increase in bleeding risk compared with no anticoagulation. Our results suggest potential benefits of prophylactic anticoagulation in reducing the burden of VTE in hospitalized patients with IBD.
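A compact sketch of the propensity-matched Cox workflow this abstract describes. Column names ('treated', 'days', 'vte') are hypothetical, and a simple greedy 1:1 caliper match stands in for the paper's matching procedure:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

def ps_match_hr(df: pd.DataFrame, covars: list, caliper: float = 0.01) -> float:
    """Greedy 1:1 propensity-score match, then a Cox HR on the matched set."""
    lr = LogisticRegression(max_iter=1000).fit(df[covars], df["treated"])
    df = df.assign(ps=lr.predict_proba(df[covars])[:, 1])
    ctrl = df[df["treated"] == 0].sort_values("ps").reset_index(drop=True)
    taken = np.zeros(len(ctrl), dtype=bool)
    rows = []
    for _, t_row in df[df["treated"] == 1].iterrows():
        j = int(np.searchsorted(ctrl["ps"].to_numpy(), t_row["ps"]))
        # Scan a small window of PS-sorted controls for the nearest unused one.
        cand = [k for k in range(max(0, j - 10), min(len(ctrl), j + 10))
                if not taken[k]]
        if not cand:
            continue
        best = min(cand, key=lambda k: abs(ctrl["ps"].iat[k] - t_row["ps"]))
        if abs(ctrl["ps"].iat[best] - t_row["ps"]) <= caliper:
            taken[best] = True
            rows.extend([t_row, ctrl.iloc[best]])
    matched = pd.DataFrame(rows)
    cph = CoxPHFitter().fit(matched[["days", "vte", "treated"]],
                            duration_col="days", event_col="vte")
    return float(cph.hazard_ratios_["treated"])
```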
8. Living Donor Liver Transplantation for Adults With High Model for End-stage Liver Disease Score: The US Experience. Transplantation 2024; 108:713-723. PMID: 37635282; PMCID: PMC10899524; DOI: 10.1097/tp.0000000000004767.
Abstract
BACKGROUND Outcomes after living-donor liver transplantation (LDLT) at high Model for End-stage Liver Disease (MELD) scores are not well characterized in the United States. METHODS This was a retrospective cohort study using Organ Procurement and Transplantation Network data on adults listed for their first liver transplant alone between 2002 and 2021. Cox proportional hazards models evaluated the association of MELD score (<20, 20-24, 25-29, and ≥30) with patient/graft survival after LDLT and the association of donor type (living versus deceased) with outcomes stratified by MELD. RESULTS There were 4495 LDLTs included, with 5.9% at MELD 25-29 and 1.9% at MELD ≥30. LDLT at MELD 25-29 and MELD ≥30 has increased substantially since 2010 and 2015, respectively. Patient survival at MELD ≥30 was not different versus MELD <20: adjusted hazard ratio (aHR) 1.67 (95% confidence interval, 0.96-2.88). However, graft survival was worse: aHR 1.69 (95% confidence interval, 1.07-2.68). Compared with deceased-donor liver transplant, LDLT led to superior patient survival at MELD <20 (aHR 0.92; P = 0.024) and 20-24 (aHR 0.70; P < 0.001), equivalent patient survival at MELD 25-29 (aHR 0.97; P = 0.843), but worse graft survival at MELD ≥30 (aHR 1.68; P = 0.009). CONCLUSIONS Although patient survival remains acceptable, the benefits of LDLT may be lost at MELD ≥30.
9. Performance of risk prediction models for post-liver transplant patient and graft survival over time. Liver Transpl 2024:01445473-990000000-00315. PMID: 38265295; DOI: 10.1097/lvt.0000000000000326.
Abstract
Given the scarcity of organs for liver transplantation, selection of recipients and donors to maximize post-transplant benefit is paramount. Several scores predict post-transplant outcomes by isolating elements of donor and recipient risk, including the donor risk index (DRI), Balance of Risk (BAR), the pre-allocation and full Survival Outcomes Following Liver Transplantation scores (P-SOFT/SOFT), the improved donor-to-recipient allocation scores for deceased donors only and for both deceased and living donors (ID2EAL-D/-DR), and survival benefit (SB) models. No studies have examined the performance of these models over time, which is critical in an ever-evolving transplant landscape. This was a retrospective cohort study of liver transplantation events in the UNOS database from 2002 to 2021. We used Cox regression to evaluate model discrimination (Harrell's C) and calibration (testing of calibration curves) for post-transplant patient and graft survival at specified post-transplant timepoints. Subanalyses were performed in the modern transplant era (post-2014) and for key donor-recipient characteristics. A total of 112,357 transplants were included. The SB and SOFT scores had the highest discrimination for short-term patient and graft survival, including in the modern transplant era, where only the SB model had good discrimination (C ≥ 0.60) for all patient and graft outcome timepoints. However, these models showed evidence of poor calibration at the 3- and 5-year patient survival timepoints. The ID2EAL-DR score had lower discrimination but adequate calibration at all patient survival timepoints. In stratified analyses, the SB and SOFT scores performed better in younger (<40 y) and higher Model for End-Stage Liver Disease (≥25) patients. All prediction scores had declining discrimination over time, and scores relying on donor factors alone had poor performance. Although the SB and SOFT scores had the best overall performance, all models demonstrated declining performance over time. This underscores the importance of periodically updating and/or developing new prediction models to reflect the evolving transplant field. Scores relying on donor factors alone do not meaningfully inform post-transplant risk.
10. Venovenous Extracorporeal Membrane Oxygenation Initiation for Pediatric Acute Respiratory Distress Syndrome With Cardiovascular Instability Is Associated With an Immediate and Sustained Decrease in Vasoactive-Inotropic Scores. Pediatr Crit Care Med 2024; 25:e41-e46. PMID: 37462429; PMCID: PMC10768839; DOI: 10.1097/pcc.0000000000003325.
Abstract
OBJECTIVE To determine the association of venovenous extracorporeal membrane oxygenation (VV-ECMO) initiation with changes in vasoactive-inotropic scores (VISs) in children with pediatric acute respiratory distress syndrome (PARDS) and cardiovascular instability. DESIGN Retrospective cohort study. SETTING Single academic pediatric ECMO center. PATIENTS Children (1 mo to 18 yr) treated with VV-ECMO (2009-2019) for PARDS with need for vasopressor or inotropic support at ECMO initiation. MEASUREMENTS AND MAIN RESULTS Arterial blood gas values, VIS, mean airway pressure (mPaw), and oxygen saturation (SpO2) values were recorded hourly relative to the start of ECMO flow for 24 hours pre- and post-VV-ECMO cannulation. A sharp kink discontinuity regression analysis clustered by patient tested the difference in VISs and regression line slopes immediately surrounding cannulation. Thirty-two patients met inclusion criteria: median age 6.6 years (interquartile range [IQR] 1.5-11.7), 22% immunocompromised, and 75% with pneumonia or sepsis as the cause of PARDS. Pre-ECMO characteristics included: median oxygenation index 45 (IQR 35-58), mPaw 32 cm H2O (IQR 30-34), 97% on inhaled nitric oxide, and 81% on an advanced mode of ventilation. Median VIS immediately before VV-ECMO cannulation was 13 (IQR 8-25), with an overall increasing VIS trajectory over the hours before cannulation. VISs decreased and the slope of the regression line reversed immediately surrounding the time of cannulation (robust p < 0.0001). There were pre- to post-ECMO cannulation decreases in mPaw (32 vs 20 cm H2O, p < 0.001) and arterial PCO2 (64.1 vs 50.1 mm Hg, p = 0.007) and increases in arterial pH (7.26 vs 7.38, p = 0.001), arterial base excess (2.5 vs 5.2, p = 0.013), and SpO2 (91% vs 95%, p = 0.013). CONCLUSIONS Initiation of VV-ECMO was associated with an immediate and sustained reduction in VIS in PARDS patients with cardiovascular instability. This VIS reduction was accompanied by decreased mPaw and reduced respiratory and/or metabolic acidosis as well as improved oxygenation.
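A minimal version of the sharp-kink analysis on synthetic patient-hour data: a segmented linear model with a level shift and slope change at hour 0, with standard errors clustered by patient. Variable names are assumptions:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in: one row per patient-hour, hour = 0 at ECMO flow start.
rng = np.random.default_rng(0)
pid = np.repeat(np.arange(32), 48)
hour = np.tile(np.arange(-24, 24), 32).astype(float)
post = (hour >= 0).astype(int)
vis = (13 + 0.3 * hour * (1 - post)          # rising VIS before cannulation
       + (-8 - 0.2 * hour) * post            # drop and reversed slope after
       + rng.normal(0, 3, pid.size))
df = pd.DataFrame({"pid": pid, "hour": hour, "post": post, "vis": vis})

# 'post' tests the level shift at cannulation, 'hour:post' the slope change.
fit = smf.ols("vis ~ hour + post + hour:post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["pid"]})
print(fit.params, fit.pvalues, sep="\n")
```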
11. Development and Validation of Claims-Based Definitions to Identify Incident and Prevalent Inflammatory Bowel Disease in Administrative Healthcare Databases. Inflamm Bowel Dis 2023; 29:1993-1996. PMID: 37043675; PMCID: PMC10697409; DOI: 10.1093/ibd/izad053.
Abstract
BACKGROUND To facilitate inflammatory bowel disease (IBD) research in the United States, we developed and validated claims-based definitions to identify incident and prevalent IBD diagnoses using administrative healthcare claims data among multiple payers. METHODS We used data from Medicare, Medicaid, and the HealthCore Integrated Research Database (Anthem commercial and Medicare Advantage claims). The gold standard for validation was review of medical records. We evaluated 1 incidence and 4 prevalence algorithms based on a combination of International Classification of Diseases codes, National Drug Codes, and Current Procedural Terminology codes. The claims-based incident diagnosis date needed to be within ±90 days of that recorded in the medical record to be valid. RESULTS We reviewed 111 charts of patients with a potentially incident diagnosis. The positive predictive value (PPV) of the claims algorithm was 91% (95% confidence interval [CI], 81%-97%). We reviewed 332 charts to validate prevalent case definition algorithms. The PPV was 94% (95% CI, 86%-98%) for ≥2 IBD diagnoses and presence of prescriptions for IBD medications, 92% (95% CI, 85%-97%) for ≥2 diagnoses without any medications, 78% (95% CI, 67%-87%) for a single diagnosis and presence of an IBD medication, and 35% (95% CI, 25%-46%) for 1 physician diagnosis and no IBD medications. CONCLUSIONS Through a combination of diagnosis, procedural, and medication codes in insurance claims data, we were able to identify incident and prevalent IBD cases with high accuracy. These algorithms can be useful for the ascertainment of IBD cases in future studies.
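For the PPV calculations, an exact binomial interval is a one-liner. The confirmed count below is an assumption chosen to match the reported 91% point estimate, and the paper's CI method may differ:

```python
from statsmodels.stats.proportion import proportion_confint

# 101/111 charts confirmed is assumed; it reproduces PPV = 91%.
confirmed, reviewed = 101, 111
lo, hi = proportion_confint(confirmed, reviewed, alpha=0.05, method="beta")
print(f"PPV {confirmed / reviewed:.0%} (95% CI {lo:.0%}-{hi:.0%})")  # exact (Clopper-Pearson)
```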
12. Deceased Organ Donor Management and Organ Distribution From Organ Procurement Organization-Based Recovery Facilities Versus Acute-Care Hospitals. Prog Transplant 2023; 33:283-292. PMID: 37941335; PMCID: PMC10691289; DOI: 10.1177/15269248231212918.
Abstract
Introduction: Organ recovery facilities address the logistical challenges of hospital-based deceased organ donor management. While more organs are transplanted from donors in facilities, differences in donor management and donation processes are not fully characterized. Research Question: Do deceased donor management and organ transport distance differ between organ procurement organization (OPO)-based recovery facilities and hospitals? Design: Retrospective analysis of Organ Procurement and Transplant Network data, including adults after brain death in 10 procurement regions (April 2017-June 2021). The primary outcomes were ischemic times of transplanted hearts, kidneys, livers, and lungs. Secondary outcomes included transport distances (between the facility or hospital and the transplant program) for each transplanted organ. Results: Among 5010 deceased donors, 51.7% underwent recovery in an OPO-based recovery facility. After adjustment for recipient and system factors, mean differences in ischemic times of transplanted organs were not significantly different between donors in facilities and donors in hospitals. Transplanted hearts recovered from donors in facilities were transported further than hearts from hospital donors (median 255 mi [IQR 27, 475] versus 174 [IQR 42, 365], P = .002); transport distances for livers and kidneys were significantly shorter (P < .001 for both). Conclusion: Organ recovery procedures performed in OPO-based recovery facilities were not associated with differences in ischemic times relative to organs recovered in hospitals, but differences in organ transport distances exist. Further work is needed to determine whether other observed differences in donor management and organ distribution meaningfully impact donation and transplantation outcomes.
13. Poor functional status at the time of waitlist for pediatric lung transplant is associated with worse pretransplant outcomes. J Heart Lung Transplant 2023; 42:1735-1742. PMID: 37437825; PMCID: PMC10776805; DOI: 10.1016/j.healun.2023.07.003.
Abstract
BACKGROUND Whether functional status is associated with survival to pediatric lung transplant is unknown. We hypothesized that completely dependent functional status at waitlist registration, defined using the Lansky Play Performance Scale (LPPS), would be associated with worse outcomes. METHODS Retrospective cohort study of pediatric lung transplant registrants utilizing the United Network for Organ Sharing Standard Transplant Analysis and Research files (2005-2020). The primary exposure was completely dependent functional status, defined as an LPPS score of 10-40. The primary outcome was waitlist removal for death/deterioration, analyzed with cause-specific hazard ratio (CSHR) regression. Subdistribution hazard regression (SHR; Fine and Gray) was used for the secondary outcome of waitlist removal due to transplant/improvement, with a competing risk of death/deterioration. Confounders included sex, age, race, diagnosis, ventilator dependence, extracorporeal membrane oxygenation, year, and listing center volume. RESULTS A total of 964 patients were included (63.5% ≥12 years, 50.2% cystic fibrosis [CF]). Median waitlist time was 95 days; 20.1% were removed for death/deterioration and 68.2% for transplant/improvement. Completely dependent functional status was associated with removal due to death/deterioration (adjusted CSHR 5.30 [95% CI 2.86-9.80]). This association was modified by age (interaction p = 0.0102), with a larger effect for age ≥12 years, and was particularly strong for CF. In the Fine and Gray model, completely dependent functional status did not affect the risk of removal due to transplant/improvement with a competing risk of death/deterioration (adjusted SHR 1.08 [95% CI 0.77-1.49]). CONCLUSIONS Pediatric lung transplant registrants with the worst functional status had worse pretransplant outcomes, especially adolescents and CF patients. Functional status at waitlist registration may be a modifiable risk factor to improve survival to lung transplant.
14. Multiply robust causal inference of the restricted mean survival time difference. Stat Methods Med Res 2023; 32:2386-2404. PMID: 37965684; DOI: 10.1177/09622802231211009.
Abstract
The hazard ratio (HR) remains the most frequently employed metric in assessing treatment effects on survival times. However, the difference in restricted mean survival time (RMST) has become a popular alternative to the HR when the proportional hazards assumption is considered untenable. Moreover, independent of the proportional hazards assumption, many comparative effectiveness studies aim to base contrasts on survival probability rather than on the hazard function. Causal effects based on RMST are often estimated via inverse probability of treatment weighting (IPTW). However, this approach generally results in biased results when the assumed propensity score model is misspecified. Motivated by the need for more robust techniques, we propose an empirical likelihood-based weighting approach that allows for specifying a set of propensity score models. The resulting estimator is consistent when the postulated model set contains a correct model; this property has been termed multiple robustness. In this report, we derive and evaluate a multiply robust estimator of the causal between-treatment difference in RMST. Simulation results confirm its robustness. Compared with the IPTW estimator from a correct model, the proposed estimator tends to be less biased and more efficient in finite samples. Additional simulations reveal biased results from a direct application of machine learning estimation of propensity scores. Finally, we apply the proposed method to evaluate the impact of intrapartum group B streptococcus antibiotic prophylaxis on the risk of childhood allergic disorders using data derived from electronic medical records from the Children's Hospital of Philadelphia and census data from the American Community Survey.
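A compact sketch of the IPTW comparator that the proposed estimator is designed to improve upon; the multiply robust version replaces the single propensity-score model with a set of candidate models combined via empirical likelihood:

```python
import numpy as np

def iptw_rmst_diff(time, event, treated, ps, tau):
    """IPTW difference in restricted mean survival time, E[min(T, tau)].

    Weighted Kaplan-Meier within each arm, integrated up to tau. Ties are
    processed one observation at a time for simplicity.
    """
    def weighted_rmst(t, d, w):
        order = np.argsort(t)
        t, d, w = t[order], d[order], w[order]
        at_risk = w[::-1].cumsum()[::-1]      # weighted number still at risk
        surv, last_t, area = 1.0, 0.0, 0.0
        for ti, di, wi, ni in zip(t, d, w, at_risk):
            if ti > tau:
                break
            area += surv * (ti - last_t)      # S(t) is flat on [last_t, ti)
            if di:
                surv *= 1.0 - wi / ni         # weighted Kaplan-Meier step
            last_t = ti
        return area + surv * (tau - last_t)

    w1, w0 = treated / ps, (1 - treated) / (1 - ps)   # inverse-probability weights
    a = treated == 1
    return (weighted_rmst(time[a], event[a], w1[a])
            - weighted_rmst(time[~a], event[~a], w0[~a]))
```

As the abstract notes, this single-model version is biased when the propensity model is misspecified, which is exactly the failure mode multiple robustness targets.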
15. Incidence, Prevalence, and Racial and Ethnic Distribution of Inflammatory Bowel Disease in the United States. Gastroenterology 2023; 165:1197-1205.e2. PMID: 37481117; PMCID: PMC10592313; DOI: 10.1053/j.gastro.2023.07.003.
Abstract
BACKGROUND & AIMS We sought to estimate the incidence, prevalence, and racial-ethnic distribution of physician-diagnosed inflammatory bowel disease (IBD) in the United States. METHODS The study used 4 administrative claims data sets: a 20% random sample of national fee-for-service Medicare data (2007 to 2017); Medicaid data from Florida, New York, Pennsylvania, Ohio, and California (1999 to 2012); and commercial health insurance data from Anthem beneficiaries (2006 to 2018) and Optum's deidentified Clinformatics Data Mart (2000 to 2017). We used validated combinations of medical diagnoses, diagnostic procedures, and prescription medications to identify incident and prevalent diagnoses. We computed pooled age-, sex-, and race/ethnicity-specific insurance-weighted estimates and pooled estimates standardized to 2018 United States Census estimates, with 95% confidence intervals (CIs). RESULTS The age- and sex-standardized incidence of IBD was 10.9 per 100,000 person-years (95% CI, 10.6-11.2). The incidence of IBD peaked in the third decade of life, decreased to a relatively stable level across the fourth to eighth decades, and declined further thereafter. The age-, sex- and insurance-standardized prevalence of IBD was 721 per 100,000 population (95% CI, 717-726). Extrapolated to the 2020 United States Census, an estimated 2.39 million Americans have been diagnosed with IBD. The prevalence of IBD per 100,000 population was 812 (95% CI, 802-823) in White, 504 (95% CI, 482-526) in Black, 403 (95% CI, 373-433) in Asian, and 458 (95% CI, 440-476) in Hispanic Americans. CONCLUSIONS IBD has been diagnosed in >0.7% of Americans. The incidence peaks in early adulthood and then plateaus at a lower rate. The disease is less commonly diagnosed in Black, Asian, and Hispanic Americans.
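Direct standardization of stratum-specific estimates to census weights reduces to a weighted average; the numbers below are illustrative, not the study's:

```python
import pandas as pd

# Stratum-specific prevalence (per 100,000) standardized to census weights.
# Strata here are hypothetical age-by-sex cells.
est = pd.DataFrame({
    "stratum": ["F<40", "F40+", "M<40", "M40+"],
    "prev_per_100k": [650.0, 900.0, 500.0, 850.0],   # insurance-weighted
})
census = pd.DataFrame({
    "stratum": ["F<40", "F40+", "M<40", "M40+"],
    "pop_share": [0.26, 0.25, 0.27, 0.22],           # shares sum to 1
})
std = est.merge(census, on="stratum")
standardized = (std["prev_per_100k"] * std["pop_share"]).sum()
print(f"Standardized prevalence: {standardized:.0f} per 100,000")
```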
16. Oral health outcomes in an HIV cohort with comorbidities: implementation roadmap for a longitudinal prospective observational study. BMC Oral Health 2023; 23:763. PMID: 37848867; PMCID: PMC10580527; DOI: 10.1186/s12903-023-03527-5.
Abstract
BACKGROUND Long-term antiretroviral therapy (ART) durably suppresses HIV load and has dramatically altered the prognosis of HIV infection, such that HIV is now regarded as a chronic disease. Side effects of ART in patients with HIV (PWH) have introduced new challenges, including "metabolic" (systemic) and oral complications. Furthermore, inflammation persists despite excellent viral load suppression and normal CD4+ cell counts. The impact of ART on the spectrum of oral diseases among PWH is often overlooked relative to other systemic complications, and there is a paucity of data on oral complications associated with ART use in PWH. This is in part due to the limited number of prospective longitudinal studies designed to better understand the range of oral abnormalities observed in PWH on ART. METHODS We describe here the study design, including processes associated with subject recruitment and retention, study visit planning, oral health assessments, bio-specimen collection and preprocessing procedures, and the data management and statistical plan. DISCUSSION We present a procedural roadmap that could be modelled to assess the extent and progression of oral diseases associated with ART in PWH. We also highlight the rigors and challenges associated with our ongoing participant recruitment and retention. A rigorous prospective longitudinal study requires proper planning and execution; a major benefit is that large data sets and a biospecimen repository can be used to answer further questions in future studies, including genetic, microbiome, and metabolome-based studies. TRIAL REGISTRATION National Institutes of Health Clinical Trials Registration (NCT) #: NCT04645693.
17. Oral Health Outcomes in an HIV Cohort With Comorbidities: Implementation Roadmap for a Longitudinal Prospective Observational Study. Res Sq [Preprint] 2023:rs.3.rs-3390162. PMID: 37886466; PMCID: PMC10602089; DOI: 10.21203/rs.3.rs-3390162/v1.
Abstract
Long-term antiretroviral therapy (ART) durably suppresses HIV load and has dramatically altered the prognosis of HIV infection, such that HIV is now regarded as a chronic disease. Side effects of ART in patients with HIV (PWH) have introduced new challenges, including "metabolic" (systemic) and oral complications. Furthermore, inflammation persists despite excellent viral load suppression and normal CD4+ cell counts. The impact of ART on the spectrum of oral diseases among PWH is often overlooked relative to other systemic complications, and there is a paucity of data on oral complications associated with ART use in PWH. This is in part due to the limited number of prospective longitudinal studies designed to better understand the range of oral abnormalities observed in PWH on ART. Our group designed and implemented a prospective observational longitudinal study to address this gap. We present a procedural roadmap that could be modelled to assess the extent and progression of oral diseases associated with ART in PWH. We describe here the processes associated with subject recruitment and retention, study visit planning, oral health assessments, bio-specimen collection and preprocessing procedures, and data management. We also highlight the rigors and challenges associated with participant recruitment and retention.
18. Cognitive function, self-management, and outcomes among liver transplant recipients: LivCog, a multicenter, prospective study. Hepatol Commun 2023; 7:e0259. PMID: 37916863; PMCID: PMC10545399; DOI: 10.1097/hc9.0000000000000259.
Abstract
Liver transplantation is a life-saving option for decompensated cirrhosis. Liver transplant recipients require advanced self-management skills, intact cognitive skills, and care partner support to improve long-term outcomes. Gaps remain in understanding post-liver transplant cognitive and health trajectories, and patient factors such as self-management skills, care partner support, and sleep. Our aims are to (1) assess pre-liver transplant to post-liver transplant cognitive trajectories and identify risk factors for persistent cognitive impairment; (2) evaluate associations between cognitive function and self-management skills, health behaviors, functional health status, and post-transplant outcomes; and (3) investigate potential mediators and moderators of associations between cognitive function and post-liver transplant outcomes. LivCog is a longitudinal, prospective observational study that will enroll 450 adult liver transplant recipients and their caregivers/care partners. The duration of the study is 5 years with 24 additional months of patient follow-up. Data will be collected from participants at 1, 3, 12, and 24 months post-transplant. Limited pre-liver transplant data will also be collected from waitlisted candidates. Data collection methods include interviews, surveys, cognitive assessments, and actigraphy/sleep diary measures. Patient measurements include sociodemographic characteristics, pretransplant health status, cognitive function, physical function, perioperative measures, medical history, transplant history, self-management skills, patient-reported outcomes, health behaviors, and clinical outcomes. Caregiver measures assess sociodemographic variables, health literacy, health care navigation skills, self-efficacy, care partner preparedness, nature and intensity of care, care partner burden, and community participation. By elucidating various health trajectories from pre-liver transplant to 2 years post-liver transplant, LivCog will be able to better characterize recipients at higher risk of cognitive impairment and compromised self-management. Findings will inform interventions targeting health behaviors, self-management, and caregiver supports to optimize outcomes.
19. Patient randomised controlled trial of technology enabled strategies to promote treatment adherence in liver transplantation: rationale and design of the TEST trial. BMJ Open 2023; 13:e075172. PMID: 37723108; PMCID: PMC10510935; DOI: 10.1136/bmjopen-2023-075172.
Abstract
BACKGROUND AND AIMS Liver transplantation is a life-saving procedure for end-stage liver disease. However, post-transplant medication regimens are complex, and non-adherence is common. Post-transplant medication non-adherence is associated with graft rejection, which can have long-term adverse consequences. Transplant centres are equipped with clinical staff who monitor patients post-transplant; however, digital health tools and proactive immunosuppression adherence monitoring have the potential to improve outcomes. METHODS AND ANALYSIS This is a patient-randomised prospective clinical trial at three transplant centres in the Northeast, Midwest and South investigating the effects of a remotely administered adherence programme compared with usual care. The programme monitors potential non-adherence largely by leveraging text message prompts, and phenotypes the nature of the non-adherence as cognitive, psychological, medical, social or economic. Additional reminders for medications, clinical appointments and routine self-management support are incorporated to promote adherence to the entire medical regimen. The primary study outcome is medication adherence via 24-hour recall; secondary outcomes include additional medication adherence measures (ASK-12 self-reported scale, regimen knowledge scales, tacrolimus values), quality of life, functional health status and clinical outcomes (eg, days hospitalised). Study implementation, acceptability, feasibility, costs and potential cost-effectiveness will also be evaluated. ETHICS AND DISSEMINATION The University of Pennsylvania Institutional Review Board has approved the study as the single IRB of record (protocol # 849575, V.1.4). Results will be published in peer-reviewed journals, and summaries will be provided to study funders. TRIAL REGISTRATION NUMBER NCT05260268.
20. Induction Immunosuppression Does Not Worsen Tumor Recurrence After Liver Transplantation for Hepatocellular Carcinoma. Transplantation 2023; 107:1524-1534. PMID: 36695564; DOI: 10.1097/tp.0000000000004487.
Abstract
BACKGROUND Prior studies are inconsistent regarding the impact of antibody induction therapy on outcomes after liver transplantation (LT) for hepatocellular carcinoma (HCC). METHODS Adults transplanted with HCC exception priority were identified from February 27, 2002, to March 31, 2019, using the United Network for Organ Sharing database. Time-to-event analyses evaluated the association of antibody induction therapy (none, nondepleting induction [NDI], depleting induction [DI]) with overall post-LT patient survival and HCC recurrence. Separate multivariable models adjusted for tumor characteristics on either the last exception or the explant. The interaction of induction and maintenance regimen at LT discharge was investigated. RESULTS Among 22,535 LTs for HCC, 17,688 (78.48%) received no antibody induction, 2984 (13.24%) received NDI, and 1863 (8.27%) received DI. Minimal differences in patient and tumor characteristics were noted between induction groups, and there was significant center variability in practices. NDI was associated with improved survival, particularly when combined with a calcineurin inhibitor (CNI) and an antimetabolite (hazard ratio [HR] 0.73 versus no induction plus 3-drug therapy in the last-exception model [P < 0.001]; HR 0.64 in the explant model [P = 0.011]). The combination of DI with CNI alone was also protective (HR 0.43; P = 0.003). Neither NDI nor DI was associated with tumor recurrence (all P > 0.1). However, increased HCC recurrence was observed with no induction plus CNI monotherapy (HR 1.47; P = 0.019; versus no induction plus 3-drug therapy). CONCLUSIONS Induction immunosuppression was not associated with worse post-LT outcomes in patients transplanted with HCC exception priority, and a possible survival improvement was observed with NDI.
21. Variability in Organ Procurement Organization Performance by Individual Hospital in the United States. JAMA Surg 2023; 158:404-409. PMID: 36753195; PMCID: PMC9909569; DOI: 10.1001/jamasurg.2022.7853.
Abstract
Importance Organ availability inadequately addresses the needs of patients waiting for a transplant. Objective To estimate the true number of potential organ donors in the United States and identify inefficiencies in the donation process as a way to guide system improvement. Design, Setting, and Participants A retrospective cross-sectional analysis was performed of organ donation across 13 hospitals in 2 donor service areas covered by 2 organ procurement organizations (OPOs) in 2017 and 2018 to compare donor potential with actual donors. More than 2000 complete medical records of decedents were reviewed as a sample of nearly 9000 deaths. Data were analyzed from January 1, 2017, to December 31, 2018. Exposure Deaths from causes consistent with donation according to medical record review, ventilated patient referrals, center acceptance practices, and actual deceased donors. Main Outcomes and Measures Potential donors by medical record review vs actual donors, and OPO performance at specific hospitals. Results Compared with 242 actual donors, 931 potential donors were identified at these hospitals, suggesting a deceased donor potential of 3.85 times (95% CI, 4.23-5.32) the actual number of donors recovered. There was surprisingly wide variability in the conversion of potential donors into actual donors among the hospitals studied, from 0% to 51.0%. One OPO recovered 18.8% of the potential donors, whereas the second recovered 48.2%. OPO performance was moderately related to referrals of ventilated patients and not related to center acceptance practices. Conclusions and Relevance In this cross-sectional study of hospitals served by 2 OPOs, wide variation was found in the performance of the OPOs, especially at individual hospitals. Addressing this opportunity could greatly increase the organ supply, affirming the importance of recent efforts from the federal government to increase OPO accountability and transparency.
22. Organ Transplantation Outcomes of Deceased Organ Donors in Organ Procurement Organization-Based Recovery Facilities Versus Acute-Care Hospitals. Prog Transplant 2023; 33:110-120. PMID: 36942433; PMCID: PMC10150267; DOI: 10.1177/15269248231164176.
Abstract
INTRODUCTION Recovery of donated organs at organ procurement organization (OPO)-based recovery facilities has been proposed to improve organ donation outcomes, but few data exist to characterize differences between facilities and acute-care hospitals. RESEARCH QUESTION To compare donation outcomes between deceased organ donors who underwent recovery procedures in OPO-based recovery facilities and those in hospitals. DESIGN Retrospective study of Organ Procurement and Transplantation Network data from a population-based sample of donors after brain death, April 2017 to June 2021, in 10 OPO regions with organ recovery facilities. The primary exposure was an organ recovery procedure in an OPO-based recovery facility. The primary outcome was the number of organs transplanted per donor. Multivariable regression models were used to adjust for donor characteristics and managing OPO. RESULTS Among 5010 cohort donors, 2590 (51.7%) underwent recovery procedures in an OPO-based facility. Donors in facilities differed from those in hospitals, including by recovery year, mechanism of death, and some comorbid diseases. Donors in OPO-based facilities had more organs transplanted per donor (mean 3.5 [SD 1.8] vs 3.3 [SD 1.8]; adjusted mean difference 0.27; 95% confidence interval 0.18-0.36). Organ recovery at an OPO-based facility was also associated with more lungs, livers, and pancreases transplanted. CONCLUSION Organ recovery procedures at OPO-based facilities were associated with more organs transplanted per donor than procedures in hospitals. Increasing access to OPO-based organ recovery facilities may improve rates of organ transplantation from deceased organ donors, although further data are needed on other important donor management quality metrics.
23. Analysis of hospital readmissions with competing risks. Stat Methods Med Res 2022; 31:2189-2200. PMID: 35899312; PMCID: PMC9931495; DOI: 10.1177/09622802221115879.
Abstract
The 30-day hospital readmission rate has been used in provider profiling to evaluate inter-provider care coordination, medical cost-effectiveness, and patient quality of life. Current profiling analyses use logistic regression to model 30-day readmission as a binary outcome, but one disadvantage of this approach is that the outcome is strongly affected by competing risks (e.g., death). Thus one, perhaps unintended, consequence is that if two facilities have the same rate of readmission, the one with the higher rate of competing risks will have the lower 30-day readmission rate. We propose a discrete time competing risk model wherein the cause-specific readmission hazard is used to assess provider-level effects. This approach takes account of the timing of events and focuses on the readmission rates that are of primary interest. The quality measure, then, is a standardized readmission ratio, akin to a standardized mortality ratio, and it is not systematically affected by the rate of competing risks. To facilitate the estimation and inference of a large number of provider effects, we develop an efficient Blockwise Inversion Newton algorithm and a stabilized robust score test that overcomes the conservative nature of the classical robust score test. An application to dialysis patients demonstrates improved profiling, model fitting, and outlier detection over existing methods.
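A minimal person-period sketch of a discrete-time cause-specific readmission hazard and the resulting standardized readmission ratio. Column names and covariates are hypothetical, and the paper's estimator additionally handles thousands of provider effects via the Blockwise Inversion Newton algorithm, which this toy version omits:

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per patient-day over the 30 days after discharge, with follow-up
# ending at the first of readmission, death, or day 30. 'y' is 1 only on the
# day of readmission; a death day contributes y = 0 and then removes the
# patient from later risk sets, which is exactly how the cause-specific
# hazard treats competing risks.
def standardized_readmission_ratio(pp: pd.DataFrame) -> pd.Series:
    fit = smf.logit("y ~ C(day) + age + comorbidity", data=pp).fit(disp=0)
    expected = fit.predict(pp)            # model-based daily readmission risk
    obs = pp.groupby("provider")["y"].sum()
    exp = expected.groupby(pp["provider"]).sum()
    return obs / exp                      # readmission analogue of the SMR
```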
24. Clinical impact of a modified lung allocation score that mitigates selection bias. J Heart Lung Transplant 2022; 41:1590-1600. PMID: 36064649; PMCID: PMC10167739; DOI: 10.1016/j.healun.2022.08.003.
Abstract
BACKGROUND The Lung Allocation Score (LAS) is used in the U.S. to prioritize lung transplant candidates. Selection bias, induced by dependent censoring of waitlisted candidates and prediction of posttransplant survival among surviving, transplanted patients only, is only partially addressed by the LAS. Recently, a modified LAS (mLAS) was designed to mitigate such bias. Here, we estimate the clinical impact of replacing the LAS with the mLAS. METHODS We considered lung transplant candidates waitlisted during 2016 and 2017. LAS and mLAS scores were computed for each registrant at each observed organ offer date; individuals were ranked accordingly. Patient characteristics associated with better priority under the mLAS were investigated via logistic regression and generalized linear mixed models. We also determined whether differences in rank were explained more by changes in predicted pre- or posttransplant survival. Simulations examined how 1-year waitlist, posttransplant, and overall survival might change under the mLAS. RESULTS Diagnosis group, 6-minute walk distance, continuous mechanical ventilation, functional status, and age demonstrated the highest impact on differential allocation. Differences in rank were explained more by changes in predicted pretransplant survival than changes in predicted posttransplant survival, suggesting that selection bias has more impact on estimates of waitlist urgency. Simulations suggest that for every 1000 waitlisted individuals, 12.8 (interquartile range: 5.2-24.3) fewer waitlist deaths per year would occur under the mLAS, without compromising posttransplant and overall survival. CONCLUSIONS Implementing a mLAS that mitigates selection bias into clinical practice can lead to important differences in allocation and possibly modest improvement in waitlist survival.
25. Five-Year Allograft Survival for Recipients of Kidney Transplants From Hepatitis C Virus Infected vs Uninfected Deceased Donors in the Direct-Acting Antiviral Therapy Era. JAMA 2022; 328:1102-1104. PMID: 35994263; PMCID: PMC9396466; DOI: 10.1001/jama.2022.12868.
Abstract
This cohort study examines the validity of the Kidney Donor Profile Index’s hepatitis C virus (HCV) penalty during the direct-acting antiviral era by comparing 5-year allograft survival between recipients of kidneys from HCV-RNA–positive donors vs HCV-RNA–negative donors.
26. Associations of Anxiety during the COVID-19 Pandemic with Patient Characteristics and Behaviors in CKD Patients: Findings from the Chronic Renal Insufficiency Cohort (CRIC) Study. Kidney360 2022; 3:1341-1349. PMCID: PMC9416826; DOI: 10.34067/kid.0000222022.
Abstract
Background Chronic kidney disease (CKD) is associated with anxiety and depression. Although the coronavirus disease 2019 (COVID-19) pandemic has increased stressors on patients with CKD, assessments of anxiety, its predictors, and its consequences for behaviors, specifically virus mitigation behaviors, are lacking. Methods From June to October 2020, we administered a survey to 1873 participants in the Chronic Renal Insufficiency Cohort (CRIC) Study, asking about anxiety related to the COVID-19 pandemic. We examined associations between anxiety and participant demographics, clinical indexes, and health literacy, and whether anxiety was associated with health-related behaviors and COVID-19 mitigation behaviors. Results The mean age of the study population was 70 years (SD=9.6 years), 47% were women, 39% were Black non-Hispanic, 14% were Hispanic, and 38% had a history of cardiovascular disease. In adjusted analyses, younger age, being a woman, Hispanic ethnicity, cardiovascular disease, household income <$20,000, and marginal or inadequate health literacy predicted higher anxiety. Higher global COVID-19-related anxiety scores were associated with higher odds of reporting always wearing a mask in public (OR=1.3 [95% CI, 1.14 to 1.48], P<0.001), eating less healthy foods (OR=1.29 [95% CI, 1.13 to 1.46], P<0.001), reduced physical activity (OR=1.32 [95% CI, 1.2 to 1.45], P<0.001), and weight gain (OR=1.23 [95% CI, 1.11 to 1.38], P=0.001). Conclusions Higher anxiety levels related to the COVID-19 pandemic were associated not only with higher self-reported adherence to mask wearing but also with greater weight gain and less adherence to healthy lifestyle behaviors. Interventions are needed to support continuation of healthy lifestyle behaviors in patients with CKD experiencing increased anxiety related to the pandemic.
27. Association of Statin Usage and the Development of Diabetes Mellitus after Acute Pancreatitis. Clin Gastroenterol Hepatol 2022; 21:1214-1222.e14. PMID: 35750248; DOI: 10.1016/j.cgh.2022.05.017.
Abstract
BACKGROUND Patients with acute pancreatitis (AP) have at least a 2-fold higher risk of developing postpancreatitis diabetes mellitus (PPDM). No therapies have been shown to prevent PPDM. Statins have been shown to possibly lower the incidence and severity of AP but have not been studied for prevention of PPDM. METHODS Data from a commercial insurance claims database (Optum Clinformatics) were used to assess the impact of statins in 118,479 patients without pre-existing DM admitted for a first episode of AP. Regular statin usage was defined as filled statin prescriptions covering at least 80% of the year prior to AP. The primary outcome was PPDM. We constructed a propensity score and applied inverse probability of treatment weighting to balance baseline characteristics between groups. Using Cox proportional hazards regression modeling, we estimated the risk of PPDM, accounting for competing events. RESULTS With a median of 3.5 years of follow-up, the 5-year cumulative incidence of PPDM was 7.5% (95% confidence interval [CI], 6.9% to 8.0%) among regular statin users and 12.7% (95% CI, 12.4% to 12.9%) among nonusers. Regular statin users had a 42% lower risk of developing PPDM compared with nonusers (hazard ratio, 0.58; 95% CI, 0.52 to 0.65; P < .001). Irregular statin users had a 15% lower risk of PPDM (hazard ratio, 0.85; 95% CI, 0.81 to 0.89; P < .001). Similar benefits were seen with low, moderate, and high statin doses. CONCLUSIONS In this large database study, statin usage was associated with a reduced risk of developing DM after acute pancreatitis. Further prospective studies with long-term follow-up are needed to study the impact of statins on acute pancreatitis and the prevention of PPDM.
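A sketch of the competing-risk piece of this analysis using the Aalen-Johansen estimator on synthetic data; the paper's estimate additionally applies inverse-probability-of-treatment weights from the propensity score, which this toy version omits:

```python
import numpy as np
from lifelines import AalenJohansenFitter

# Event coding is hypothetical: 0 = censored, 1 = PPDM, 2 = death (competing).
rng = np.random.default_rng(1)
t = rng.exponential(6.0, 2000)                 # years of follow-up
ev = rng.choice([0, 1, 2], size=2000, p=[0.7, 0.2, 0.1])

ajf = AalenJohansenFitter()
ajf.fit(t, ev, event_of_interest=1)            # PPDM; death treated as competing
print(ajf.cumulative_density_.loc[:5.0].iloc[-1])   # ~5-year cumulative incidence
```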
|
28
|
Trends and Outcomes of Hypothermic Machine Perfusion Preservation of Kidney Allografts in Simultaneous Liver and Kidney Transplantation in the United States. Transpl Int 2022; 35:10345. [PMID: 35356400 PMCID: PMC8958417 DOI: 10.3389/ti.2022.10345] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2022] [Accepted: 01/18/2022] [Indexed: 11/18/2022]
Abstract
Optimal kidney graft outcomes after simultaneous liver-kidney (SLK) transplant may be threatened by the increased cold ischemia time and hemodynamic perturbations of dual organ transplantation. Hypothermic machine perfusion (MP) of kidney allografts may mitigate these effects. We analyzed U.S. trends and renal outcomes of hypothermic non-oxygenated MP vs. static cold storage (CS) of kidney grafts from 6,689 SLK transplants performed between 2005 and 2020 using the United Network for Organ Sharing database. Outcomes included delayed graft function (DGF), primary non-function (PNF), and kidney graft survival (GS). Overall, 17.2% of kidney allografts were placed on MP. Kidney cold ischemia time was longer in the MP group (median 12.8 vs. 10.0 h; p < 0.001). Nationally, MP utilization in SLK increased from <3% in 2005 to >25% by 2019. Center preference was the primary determinant of whether a graft underwent MP vs. CS (intraclass correlation coefficient 65.0%). MP reduced DGF (adjusted OR 0.74; p = 0.008), but not PNF (p = 0.637). Improved GS with MP was observed only for kidneys with a Kidney Donor Profile Index <20% (HR 0.71; p = 0.030). Kidney MP use has increased significantly in SLK in the U.S. in a heterogeneous manner and with variable short-term benefits. Additional studies are needed to determine the ideal utilization of MP in SLK.
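The center-preference finding is summarized by an intraclass correlation coefficient. For a binary choice such as MP vs. CS, one common formulation (an assumption about how it was computed here) is the latent-scale ICC from a random-intercept logistic model:

```latex
% Latent-scale ICC for a center random intercept u ~ N(0, \sigma_u^2) in a
% logistic model; \pi^2/3 is the residual variance of the logistic latent scale.
\mathrm{ICC} = \frac{\sigma_u^{2}}{\sigma_u^{2} + \pi^{2}/3}
```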
|
29
|
Center Variability in Acute Rejection and Biliary Complications After Pediatric Liver Transplantation. Liver Transpl 2022; 28:454-465. [PMID: 34365719 PMCID: PMC8821725 DOI: 10.1002/lt.26259] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/23/2021] [Revised: 07/09/2021] [Accepted: 07/28/2021] [Indexed: 01/13/2023]
Abstract
Transplant center performance and practice variation for pediatric post-liver transplantation (LT) outcomes other than survival are understudied. This was a retrospective cohort study of pediatric LT recipients who received transplants between January 1, 2006, and May 31, 2017, using United Network for Organ Sharing (UNOS) data that were merged with the Pediatric Health Information System database. Center effects for the acute rejection rate at 1 year after LT (AR1) using UNOS coding and the biliary complication rate at 1 year after LT (BC1) using inpatient billing claims data were estimated by center-specific rescaled odds ratios that accounted for potential differences in recipient and donor characteristics. There were 2216 pediatric LT recipients at 24 freestanding children's hospitals in the United States during the study period. The median unadjusted center rate of AR1 was 36.92% (interquartile range [IQR], 22.36%-44.52%), whereas that of BC1 was 32.29% (IQR, 26.14%-40.44%). Accounting for recipient case mix and donor factors, 5/24 centers performed better than expected with regard to AR1, whereas 3/24 centers performed worse than expected. There was less heterogeneity across the center effects for BC1 than for AR1. There was no relationship observed between the center effects for AR1 or BC1 and center volume. Beyond recipient and allograft factors, differences in transplant center management are an important driver of center AR1 performance, and less so of BC1 performance. Further research is needed to identify the sources of variability so as to implement the most effective solutions to broadly enhance outcomes for pediatric LT recipients.
|
30
|
Association of donor hepatitis C virus infection status and risk of BK polyomavirus viremia after kidney transplantation. Am J Transplant 2022; 22:599-609. [PMID: 34613666 PMCID: PMC8968853 DOI: 10.1111/ajt.16834] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2021] [Revised: 08/01/2021] [Accepted: 09/03/2021] [Indexed: 02/03/2023]
Abstract
Kidney transplantation (KT) from deceased donors with hepatitis C virus (HCV) into HCV-negative recipients has become more common. However, the risk of complications such as BK polyomavirus (BKPyV) infection remains unknown. We assembled a retrospective cohort at four centers. We matched recipients of HCV-viremic kidneys to highly similar recipients of HCV-aviremic kidneys on established risk factors for BKPyV. To limit bias, matches were within the same center. The primary outcome was BKPyV viremia ≥1,000 copies/ml or biopsy-proven BKPyV nephropathy; a secondary outcome was BKPyV viremia ≥10,000 copies/ml or nephropathy. Outcomes were analyzed using weighted and stratified Cox regression. The median time to peak BKPyV viremia was 119 days (IQR, 87-182). HCV-viremic KT was not associated with increased risk of the primary BKPyV outcome (HR 1.26, p = .22), but was significantly associated with the secondary outcome of BKPyV ≥10,000 copies/ml (HR 1.69, p = .03). One-year eGFR was similar between the matched groups. Only one HCV-viremic kidney recipient had primary graft loss. In summary, HCV-viremic KT was not significantly associated with the primary outcome of BKPyV viremia, but the data suggested that donor HCV might elevate the risk of more severe BKPyV viremia (≥10,000 copies/ml). Nonetheless, one-year graft function for HCV-viremic recipients was reassuring.
|
31
|
Facility profiling under competing risks using multivariate prognostic scores: Application to kidney transplant centers. Stat Methods Med Res 2021; 31:563-575. [PMID: 34879778 DOI: 10.1177/09622802211052873] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
The performance of health care facilities (e.g., hospitals, transplant centers) is often evaluated through time-to-event outcomes. In this paper, we consider the case where, for each subject, the failure event is due to one of several mutually exclusive causes (competing risks). Since the distribution of patient characteristics may differ greatly by center, some form of covariate adjustment is generally necessary in order for center-specific outcomes to be accurately compared (to each other or to an overall average). We propose a weighting method for comparing facility-specific cumulative incidence functions to an overall average. The method directly standardizes each facility's non-parametric cumulative incidence function through a weight function constructed from a multivariate prognostic score. We formally define the center effects and derive large-sample properties of the proposed estimator. We evaluate the finite-sample performance of the estimator through simulation. The proposed method is applied to the end-stage renal disease setting to evaluate center-specific pre-transplant mortality and transplant cumulative incidence functions from the Scientific Registry of Transplant Recipients.
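To make the weighted nonparametric estimator concrete, here is a minimal NumPy sketch of a weighted cumulative incidence function (Aalen-Johansen style) given per-subject weights; the construction of the prognostic-score weights themselves is study-specific and not reproduced.

```python
# Sketch: weighted nonparametric cumulative incidence for one cause,
# given per-subject weights (e.g., prognostic-score weights).
import numpy as np

def weighted_cif(time, cause, weights, cause_of_interest=1):
    """time: follow-up times; cause: 0 = censored, 1, 2, ... = failure causes."""
    order = np.argsort(time)
    time, cause, weights = time[order], cause[order], weights[order]
    event_times = np.unique(time[cause > 0])

    surv, cif = 1.0, 0.0
    path = []
    for t in event_times:
        at_risk = weights[time >= t].sum()                 # weighted risk set
        d_any = weights[(time == t) & (cause > 0)].sum()   # any-cause events
        d_k = weights[(time == t) & (cause == cause_of_interest)].sum()
        cif += surv * d_k / at_risk                        # S(t-) * dA_k(t)
        surv *= 1.0 - d_any / at_risk                      # all-cause survival
        path.append((t, cif))
    return np.array(path)

# Example with hypothetical data:
rng = np.random.default_rng(0)
t = rng.exponential(5, 200)
c = rng.integers(0, 3, 200)      # 0 = censored, 1 and 2 = competing causes
w = np.ones(200)                 # replace with prognostic-score weights
print(weighted_cif(t, c, w)[:5])
```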
|
32
|
Mitigating selection bias in organ allocation models. BMC Med Res Methodol 2021; 21:191. [PMID: 34548017 PMCID: PMC8454078 DOI: 10.1186/s12874-021-01379-7] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2021] [Accepted: 08/25/2021] [Indexed: 05/31/2023] Open
Abstract
Background The lung allocation system in the U.S. prioritizes lung transplant candidates based on estimated pre- and post-transplant survival via the Lung Allocation Scores (LAS). However, these models do not account for selection bias, which results from individuals being removed from the waitlist due to receipt of transplant, as well as transplanted individuals necessarily having survived long enough to receive a transplant. Such selection biases lead to inaccurate predictions. Methods We used a weighted estimation strategy to account for selection bias in the pre- and post-transplant models used to calculate the LAS. We then created a modified LAS using these weights, and compared its performance to that of the existing LAS via time-dependent receiver operating characteristic (ROC) curves, calibration curves, and Bland-Altman plots. Results The modified LAS exhibited better discrimination and calibration than the existing LAS, and led to changes in patient prioritization. Conclusions Our approach to addressing selection bias is intuitive and can be applied to any organ allocation system that prioritizes patients based on estimated pre- and post-transplant survival. This work is especially relevant to current efforts to ensure more equitable distribution of organs. Supplementary Information The online version contains supplementary material available at 10.1186/s12874-021-01379-7.
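A minimal sketch of the evaluation step described above, computing a time-dependent AUC for a risk score with scikit-survival; the data and score below are synthetic stand-ins, not the LAS models.

```python
# Sketch: time-dependent AUC for a waitlist risk score (scikit-survival).
# Synthetic data; in the study, the existing and modified LAS would each be
# scored on the same cohort and their AUC curves compared.
import numpy as np
from sksurv.util import Surv
from sksurv.metrics import cumulative_dynamic_auc

rng = np.random.default_rng(1)
n = 300
risk = rng.normal(size=n)                  # hypothetical LAS-like score
time = rng.exponential(np.exp(-risk))      # higher score -> earlier event
event = rng.random(n) < 0.7                # some censoring

y = Surv.from_arrays(event=event, time=time)
eval_times = np.quantile(time[event], [0.25, 0.50, 0.75])
auc, mean_auc = cumulative_dynamic_auc(y, y, risk, eval_times)
print(auc, mean_auc)
```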
|
33
|
Life-years lost due to cancer among solid organ transplant recipients in the United States, 1987 to 2014. Cancer 2021; 128:150-159. [PMID: 34541673 DOI: 10.1002/cncr.33877] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2021] [Revised: 07/14/2021] [Accepted: 07/21/2021] [Indexed: 01/20/2023]
Abstract
BACKGROUND Solid organ transplant recipients have an elevated risk of cancer. Quantifying the life-years lost (LYL) due to cancer provides a complementary view of the burden of cancer distinct from other metrics and may identify subgroups of transplant recipients who are most affected. METHODS Linked transplant and cancer registry data were used to identify incident cancers and deaths among solid organ transplant recipients in the United States (1987-2014). Data on LYL due to cancer within 10 years posttransplant were derived using mean survival estimates from Cox models. RESULTS Among 221,962 transplant recipients, 13,074 (5.9%) developed cancer within 10 years of transplantation. During this period, the mean LYL due to cancer were 0.16 years per transplant recipient and 2.7 years per cancer case. Cancer was responsible for a loss of 1.9% of the total life-years expected in the absence of cancer in this population. Lung recipients had the highest proportion of total LYL due to cancer (0.45%) followed by heart recipients (0.29%). LYL due to cancer increased with age, from 0.5% among those aged 0 to 34 years at transplant to 3.2% among those aged 50 years and older. Among recipients overall, lung cancer was the largest contributor, accounting for 24% of all LYL due to cancer, and non-Hodgkin lymphoma had the next highest contribution (15%). CONCLUSIONS Transplant recipients have a shortened lifespan after developing cancer. Lung cancer and non-Hodgkin lymphoma contribute strongly to LYL due to cancer within the first 10 years after transplant, highlighting opportunities to reduce cancer mortality through prevention and screening.
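The LYL metric can be written as the area between two survival curves. The formalization below is a common one and an assumption about the exact estimand used here:

```latex
% Life-years lost to cancer within \tau = 10 years post-transplant:
% the area between expected survival absent cancer and observed survival.
\mathrm{LYL}(\tau) = \int_{0}^{\tau} \left[ S_{\text{no cancer}}(t) - S(t) \right] dt
```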
|
34
|
Black Race Is Associated With Higher Rates of Early-Onset End-Stage Renal Disease and Increased Mortality Following Liver Transplantation. Liver Transpl 2021; 27:1154-1164. [PMID: 33733570 PMCID: PMC8355050 DOI: 10.1002/lt.26054] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/23/2020] [Revised: 03/01/2021] [Accepted: 03/08/2021] [Indexed: 12/13/2022]
Abstract
Black race is a risk factor for end-stage renal disease (ESRD). Racial disparities in the risks of early and long-term renal complications after liver transplantation (LT) have not been systematically studied. This study evaluated racial differences in the natural history of acute and chronic renal insufficiency after LT. This was a retrospective single-center cohort study of 763 non-Hispanic White and 181 Black LT recipients between 2008 and 2017. Black race was investigated as an independent predictor of the following outcomes: (1) receipt and duration of early post-LT hemodialysis and (2) time to post-LT ESRD. The interaction of race and post-LT ESRD on survival was also studied. Black recipients had higher rates of pre-LT hypertension (P < 0.001), but diabetes mellitus and renal function before LT were not different by race (all P > 0.05). Overall, 15.2% of patients required early hemodialysis immediately after LT with no difference by race (covariate-adjusted odds ratio, 0.89; P = 0.71). Early dialysis discontinuation was lower among Black recipients (covariate-adjusted hazard ratio [aHR], 0.47; P = 0.02), whereas their rate of post-LT ESRD was higher (aHR, 1.91; P = 0.005). Post-LT survival after ESRD was markedly worse for Black (aHR, 11.18; P < 0.001) versus White recipients (aHR, 5.83; P < 0.001; interaction P = 0.08). Although Black and White LT recipients had comparable pretransplant renal function, post-LT renal outcomes differed considerably, and the impact of ESRD on post-LT survival was greater for Black recipients. This study highlights the need for an individualized approach to post-LT management to improve outcomes for all patients.
|
35
|
Arteriovenous Vascular Access-Related Procedural Burden Among Incident Hemodialysis Patients in the United States. Am J Kidney Dis 2021; 78:369-379.e1. [PMID: 33857533 DOI: 10.1053/j.ajkd.2021.01.019] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2020] [Accepted: 01/26/2021] [Indexed: 11/11/2022]
Abstract
RATIONALE & OBJECTIVE As the proportion of arteriovenous fistulas (AVFs) compared with arteriovenous grafts (AVGs) in the United States has increased, there has been a concurrent increase in interventions. We explored AVF and AVG maturation and maintenance procedural burden in the first year of hemodialysis. STUDY DESIGN Observational cohort study. SETTING & PARTICIPANTS Patients initiating hemodialysis from July 1, 2012, to December 31, 2014, and having a first-time AVF or AVG placement between dialysis initiation and 1 year (N = 73,027), identified using the US Renal Data System (USRDS). PREDICTORS Patient characteristics. OUTCOME Successful AVF/AVG use and intervention procedure burden. ANALYTICAL APPROACH For each group, we analyzed interventional procedure rates during the maturation and maintenance phases using Poisson regression. We used proportional rate modeling for covariate-adjusted analysis of interventional procedure rates during the maintenance phase. RESULTS During the maturation phase, 13,989 of 57,275 patients (24.4%) in the AVF group required intervention, with a therapeutic interventional requirement of 0.36 per person. In the AVG group, 2,904 of 15,572 patients (18.4%) required intervention during maturation, with a therapeutic interventional requirement of 0.28 per person. During the maintenance phase, 12,732 of 32,115 patients (39.6%) in the AVF group required intervention, with a therapeutic intervention rate of 0.93 per person-year; in the AVG group, 5,928 of 10,271 patients (57.7%) required intervention, with a therapeutic intervention rate of 1.87 per person-year. For both phases, the intervention rates for AVFs tended to be higher on the East Coast, while those for AVGs were more uniform geographically. LIMITATIONS This study relies on administrative data, with monthly recording of access use. CONCLUSIONS During maturation, interventions for both AVFs and AVGs were relatively common. Once successfully matured, AVFs had lower maintenance interventional requirements. During the maturation and maintenance phases, there were geographic variations in AVF intervention rates that warrant additional study.
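A minimal sketch of the rate analysis named under Analytical Approach: Poisson regression with a log person-time offset, using the reported maintenance-phase rates as simulation truth. The data and column names are otherwise hypothetical.

```python
# Sketch: event-rate comparison via Poisson regression with a log
# person-time offset. Simulated data seeded with the reported maintenance
# rates (0.93/person-year for AVF, 1.87/person-year for AVG).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 1000
df = pd.DataFrame({
    "avg": rng.integers(0, 2, n),              # 1 = AVG, 0 = AVF
    "person_years": rng.uniform(0.2, 1.0, n),
})
rate = np.where(df["avg"] == 1, 1.87, 0.93)    # true rates per person-year
df["n_interventions"] = rng.poisson(rate * df["person_years"])

model = smf.glm("n_interventions ~ avg", data=df,
                family=sm.families.Poisson(),
                offset=np.log(df["person_years"])).fit()
print(np.exp(model.params))                    # baseline rate and rate ratio
```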
|
36
|
Weighted estimators of the complier average causal effect on restricted mean survival time with observed instrument-outcome confounders. Biom J 2021; 63:712-724. [PMID: 33346382 PMCID: PMC8035265 DOI: 10.1002/bimj.201900284] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2019] [Revised: 05/19/2020] [Accepted: 06/22/2020] [Indexed: 11/07/2022]
Abstract
A major concern in any observational study is unmeasured confounding of the relationship between a treatment and outcome of interest. Instrumental variable (IV) analysis methods are able to control for unmeasured confounding. However, IV analysis methods developed for censored time-to-event data tend to rely on assumptions that may not be reasonable in many practical applications, making them unsuitable for use in observational studies. In this report, we develop weighted estimators of the complier average causal effect (CACE) on the restricted mean survival time in the overall population as well as in an evenly matchable population (CACE-m). Our method is able to accommodate instrument-outcome confounding and adjust for covariate-dependent censoring, making it particularly suited for causal inference from observational studies. We establish the asymptotic properties and derive easily implementable asymptotic variance estimators for the proposed estimators. Through simulation studies, we show that the proposed estimators tend to be more efficient than instrument propensity score matching-based estimators or IPIW estimators. We apply our method to compare dialytic modality-specific survival for end stage renal disease patients using data from the U.S. Renal Data System.
|
37
|
Transplant center experience influences spontaneous survival and waitlist mortality in acute liver failure: An analysis of the UNOS database. Am J Transplant 2021; 21:1092-1099. [PMID: 32741074 DOI: 10.1111/ajt.16234] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2020] [Revised: 07/15/2020] [Accepted: 07/19/2020] [Indexed: 01/25/2023]
Abstract
Transplant centers coordinate complex care in acute liver failure (ALF), for which liver transplant (LT) can be lifesaving. We studied associations between waitlist outcomes and center (1) ALF waitlist volume (low: <20; medium: 20-39; high: 40+ listings) and (2) total LT volume (<600, 600-1199, 1200+ LTs) in a retrospective cohort of 3248 adults with ALF listed for LT at 92 centers nationally from 2002 to 2019. Predicted outcome probabilities (LT, died/too sick, spontaneous survival [SS]) were obtained with multinomial regression, and observed-to-expected ratios were calculated. Median center outcome rates were 72.6% LT, 18.2% died/too sick, and 6.1% SS. SS was significantly higher with greater center ALF volume (median 0% for low-, 5.9% for medium-, and 8.6% for high-volume centers; P = .039), while waitlist mortality was highest at low-volume centers (median 21.4%, IQR: 16.1%-26.7%; P = .042). Significant heterogeneity in center performance was observed for waitlist mortality (observed-to-expected ratio range: 0-4.1) and particularly for SS (0-6.4), which persisted despite accounting for recipient case mix. This novel study demonstrates that increased center experience is associated with greater SS and reduced waitlist mortality for ALF. More-focused management pathways are needed to improve ALF outcomes at less-experienced centers and to identify opportunities for improvement at large.
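A minimal sketch of how predicted probabilities from a multinomial model yield center observed-to-expected (O/E) ratios, as described above; the data and column names are simulated placeholders, not the authors' code.

```python
# Sketch: center O/E ratios for one waitlist outcome (spontaneous survival)
# from a multinomial model of {LT, died/too sick, spontaneous survival}.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 3000
df = pd.DataFrame({
    "age": rng.normal(45, 12, n),
    "meld": rng.normal(30, 6, n),
    "center": rng.integers(0, 92, n),
    "outcome": rng.choice([0, 1, 2], n, p=[0.73, 0.18, 0.09]),  # LT/died/SS
})

# With >2 classes and the default lbfgs solver, sklearn fits a multinomial model.
X = df[["age", "meld"]]
model = LogisticRegression(max_iter=1000).fit(X, df["outcome"])
df["expected_ss"] = model.predict_proba(X)[:, 2]   # P(spontaneous survival)
df["observed_ss"] = (df["outcome"] == 2).astype(float)

grp = df.groupby("center")[["observed_ss", "expected_ss"]].sum()
oe = grp["observed_ss"] / grp["expected_ss"]       # O/E ratio per center
print(oe.describe())
```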
|
38
|
International Comparisons of Native Arteriovenous Fistula Patency and Time to Becoming Catheter-Free: Findings From the Dialysis Outcomes and Practice Patterns Study (DOPPS). Am J Kidney Dis 2021; 77:245-254. [DOI: 10.1053/j.ajkd.2020.06.020] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2019] [Accepted: 06/30/2020] [Indexed: 11/11/2022]
|
39
|
Optimizing Peritoneal Dialysis-Associated Peritonitis Prevention in the United States: From Standardized Peritoneal Dialysis-Associated Peritonitis Reporting and Beyond. Clin J Am Soc Nephrol 2021; 16:154-161. [PMID: 32764025 PMCID: PMC7792655 DOI: 10.2215/cjn.11280919] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
Peritoneal dialysis (PD)-associated peritonitis is the leading cause of permanent transition to hemodialysis among patients receiving PD. Peritonitis is associated with higher mortality risk and added treatment costs and limits more widespread PD utilization. Optimizing the prevention of peritonitis in the United States will first require standardization of peritonitis definitions, key data elements, and outcomes in an effort to facilitate nationwide reporting. Standardized reporting can also help describe the variability in peritonitis rates and outcomes across facilities in the United States in an effort to identify potential peritonitis prevention strategies and engage with stakeholders to develop strategies for their implementation. Here, we will highlight considerations and challenges in developing standardized definitions and implementation of national reporting of peritonitis rates by PD facilities. We will describe existing peritonitis prevention evidence gaps, highlight successful infection-reporting initiatives among patients receiving in-center hemodialysis or PD, and provide an overview of nationwide quality improvement initiatives, both in the United States and elsewhere, that have translated into a reduction in peritonitis incidence. We will discuss opportunities for collaboration and expansion of the Nephrologists Transforming Dialysis Safety (NTDS) initiative to develop knowledge translation pathways that will lead to dissemination of best practices in an effort to reduce peritonitis incidence.
|
40
|
Restricted mean survival time as a function of restriction time. Biometrics 2020; 78:192-201. [PMID: 33616953 DOI: 10.1111/biom.13414] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2019] [Revised: 08/18/2020] [Accepted: 11/25/2020] [Indexed: 11/26/2022]
Abstract
Restricted mean survival time (RMST) is a clinically interpretable and meaningful survival metric that has gained popularity in recent years. Several methods are available for regression modeling of RMST, most based on pseudo-observations or what is essentially an inverse-weighted complete-case analysis. No existing RMST regression method allows for the covariate effects to be expressed as functions over time. This is a considerable limitation, in light of the many hazard regression methods that do accommodate such effects. To address this void in the literature, we propose RMST methods that permit estimating time-varying effects. In particular, we propose an inference framework for directly modeling RMST as a continuous function of the restriction time L. Large-sample properties are derived. Simulation studies are performed to evaluate the performance of the methods in finite sample sizes. The proposed framework is applied to kidney transplant data obtained from the Scientific Registry of Transplant Recipients.
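For reference, the quantity being modeled is the standard RMST, viewed as a function of the restriction time:

```latex
% RMST at restriction time L: expected survival time truncated at L,
% equivalently the area under the survival curve S(t) up to L.
\mathrm{RMST}(L) = E\left[\min(T, L)\right] = \int_{0}^{L} S(t)\,dt
```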
|
41
|
Replicating Randomized Trial Results with Observational Data Using the Parametric g-Formula: An Application to Intravenous Iron Treatment in Hemodialysis Patients. Clin Epidemiol 2020; 12:1249-1260. [PMID: 33204166 PMCID: PMC7667704 DOI: 10.2147/clep.s283321] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2020] [Accepted: 10/27/2020] [Indexed: 11/23/2022] Open
Abstract
BACKGROUND Reproducibility of clinical and epidemiologic research is important to generalize findings and has increasingly been scrutinized. A recently published randomized trial, PIVOTAL, evaluated high vs low intravenous iron dosing strategies to manage anemia in hemodialysis patients in the UK. Our objective was to assess the reproducibility of the PIVOTAL trial findings using data from a well-established cohort study, the Dialysis Outcomes and Practice Patterns Study (DOPPS). METHODS To overcome the absence of randomization in the DOPPS, we applied the parametric g-formula, an extension of standardization to longitudinal data. We estimated the effect of a proactive high-dose vs reactive low-dose iron supplementation strategy on all-cause mortality (primary outcome), hemoglobin, two measures of iron concentration (ferritin and TSAT), and erythropoiesis-stimulating agent dose over 12 months of follow-up in 6325 DOPPS patients. RESULTS Comparing high- vs low-iron dose strategies, the 1-year mortality risk difference was 0.020 (95% CI: 0.008, 0.031) and risk ratio was 1.20 (95% CI: 1.07, 1.33), compared with null 1-year findings in the PIVOTAL trial. Differences in secondary outcomes were directionally consistent but of lesser magnitude than in the PIVOTAL trial. CONCLUSION Our findings are somewhat consistent with the recent PIVOTAL trial, with discrepancies potentially attributable to model misspecification and differences between the two study populations. In addition to the importance of our results to nephrologists and hence hemodialysis patients, our analysis illustrates the utility of the parametric g-formula for generalizing results and comparing complex and dynamic treatment strategies using observational data.
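A deliberately minimal illustration of the parametric g-formula's fit-then-simulate logic, reduced to one covariate and two time points. This is far simpler than the longitudinal implementation the study describes, and all variable names are hypothetical.

```python
# Minimal g-formula sketch: fit models for the time-varying covariate and
# the outcome, then Monte Carlo simulate the cohort under each fixed
# iron-dosing strategy. Not the study's implementation.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 5000
df = pd.DataFrame({"hgb0": rng.normal(10, 1, n)})
df["a0"] = rng.integers(0, 2, n)                       # observed dosing (1 = high)
df["hgb1"] = df["hgb0"] + 0.4 * df["a0"] + rng.normal(0, 1, n)
logit_p = -2 + 0.3 * (10 - df["hgb1"])                 # death risk falls with hgb
df["death"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# 1. Fit covariate and outcome models to the observed data.
m_cov = smf.ols("hgb1 ~ hgb0 + a0", data=df).fit()
m_out = smf.logit("death ~ hgb1 + a0", data=df).fit(disp=0)

# 2. Simulate everyone under a static strategy a0 = a, then average risk.
def simulate(a):
    sim = df[["hgb0"]].copy()
    sim["a0"] = a
    sim["hgb1"] = m_cov.predict(sim) + rng.normal(0, np.sqrt(m_cov.scale), n)
    return m_out.predict(sim).mean()

rd = simulate(1) - simulate(0)
print(f"g-formula 1-period risk difference (high vs low): {rd:.3f}")
```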
|
42
|
Peritoneal Dialysis and Mortality, Kidney Transplant, and Transition to Hemodialysis: Trends From 1996-2015 in the United States. Kidney Med 2020; 2:610-619.e1. [PMID: 33089139 PMCID: PMC7568078 DOI: 10.1016/j.xkme.2020.06.009] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022] Open
Abstract
Rationale & Objective Transitions between dialysis modalities can be disruptive to care. Our goals were to evaluate rates of transition from peritoneal dialysis (PD) to in-center hemodialysis (HD), mortality, and transplantation among incident PD patients in the US Renal Data System from 1996 to 2015 and identify factors associated with these outcomes. Study Design Observational registry-based retrospective cohort study. Setting & Participants Medicare patients incident to end-stage renal disease (ESRD) from January 1, 1996, through December 31, 2011 (for adjusted analyses; through December 31, 2014, for unadjusted analyses), and treated with PD 1 or more days within 180 days of ESRD incidence (n = 173,533 for adjusted analyses; n = 219,787 for unadjusted analyses). Exposure & Predictors Exposure: 1 or more days of PD. Predictors: patient- and facility-level characteristics obtained from Centers for Medicare & Medicaid Services Form 2728 and other data sources. Outcomes Patients were followed up for 3 years until transition to in-center HD, death, or transplantation. Analytical Approach Multivariable Cox regression was used to estimate hazards over time and associations with predictors. Results Compared with earlier cohorts, recent incident PD patient cohorts had lower rates of death (48% decline) and transition to in-center HD (13% decline). Among many other findings, we found that: (1) rates of transition to in-center HD and death were lowest in the 2008 to 2011 cohort, (2) longer time receiving PD was associated with higher mortality risk but lower risk for transition to in-center HD, and (3) larger PD programs (≥25 vs ≤6 patients) displayed lower risks for death and transition to in-center HD. Limitations Data collected on Form 2728 are only at the time of ESRD incidence and do not provide information at the time of transition to in-center HD, death, or transplantation. Conclusions Rates of transition from PD to in-center HD and death rates for PD patients decreased over time and were lowest in PD programs with 25 or more patients. Implications of the observed improved technique survival warrant further investigation, focusing on modifiable factors of center-level performance to create opportunities for improved patient outcomes.
|
43
|
Survival Among Incident Peritoneal Dialysis Versus Hemodialysis Patients Who Initiate With an Arteriovenous Fistula. Kidney Med 2020; 2:732-741.e1. [PMID: 33319197 PMCID: PMC7729241 DOI: 10.1016/j.xkme.2020.09.002] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/15/2023] Open
Abstract
Rationale & Objective Comparisons of outcomes between in-center hemodialysis (HD) and peritoneal dialysis (PD) are confounded by selection bias because PD patients are typically younger and healthier and may have received longer predialysis care. We compared first-year survival between what we hypothesized were clinically equivalent groups; namely, patients who initiate maintenance HD using an arteriovenous fistula (AVF) and those selecting PD as their initial modality. Study Design Observational, registry-based, retrospective cohort study. Setting & Participants US Renal Data System data for 5 annual cohorts (2010-2014; n = 130,324) of incident HD with an AVF and incident PD patients. Exposures and Predictors Exposure was more than 1 day receiving PD or more than 1 day receiving HD with an AVF. Time at risk for both cohorts was determined for 12 consecutive 30-day segments, censoring for transplantation, loss to follow-up, or end of the observation period. Predictors included patient-level characteristics obtained from the Centers for Medicare & Medicaid Services Form 2728 and other data sources. Outcomes Patient survival. Analytical Approach Unadjusted and multivariable risk-adjusted HRs for death of HD versus PD patients, averaged over 2010 to 2014, were calculated. Results The HD cohort's average unadjusted mortality rate was consistently higher than that of the PD cohort. The HR of HD versus PD was 1.25 (95% CI, 1.20-1.30) in the unadjusted model and 0.84 (95% CI, 0.80-0.87) in the adjusted model. However, multivariable risk-adjusted analyses showed the HR of HD versus PD for the first 90 days was 1.06 (95% CI, 0.98-1.14), decreasing to 0.74 (95% CI, 0.68-0.80) in the 270- to 360-day period. Limitations Residual confounding due to selection bias inherent in dialysis modality choice and the observational study design. Form 2728 provides baseline data at dialysis incidence alone, but not over time. Conclusions US patients receiving HD with an AVF appear to have a survival advantage over PD patients beyond 90 days after dialysis initiation, after accounting for patient characteristics. These findings have implications for the choice of initial dialysis modality and vascular access for patients.
|
44
|
Multicenter Study to Transplant Hepatitis C-Infected Kidneys (MYTHIC): An Open-Label Study of Combined Glecaprevir and Pibrentasvir to Treat Recipients of Transplanted Kidneys from Deceased Donors with Hepatitis C Virus Infection. J Am Soc Nephrol 2020; 31:2678-2687. [PMID: 32843477 DOI: 10.1681/asn.2020050686] [Citation(s) in RCA: 46] [Impact Index Per Article: 11.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2020] [Accepted: 07/06/2020] [Indexed: 12/30/2022] Open
Abstract
BACKGROUND Single-center trials and retrospective case series have reported promising outcomes using kidneys from donors with hepatitis C virus (HCV) infection. However, multicenter trials are needed to determine if those findings are generalizable. METHODS We conducted a prospective trial at seven centers to transplant 30 kidneys from deceased donors with HCV viremia into HCV-uninfected recipients, followed by 8 weeks of once-daily coformulated glecaprevir and pibrentasvir, targeted to start 3 days posttransplant. Key outcomes included sustained virologic response (undetectable HCV RNA 12 weeks after completing treatment with glecaprevir and pibrentasvir), adverse events, and allograft function. RESULTS We screened 76 patients and enrolled 63, of whom 30 underwent kidney transplantation from an HCV-viremic deceased donor (median kidney donor profile index, 53%) from May 2019 through October 2019. The median time between consent and transplantation of a kidney from an HCV-viremic donor was 6.3 weeks. All 30 recipients achieved a sustained virologic response. One recipient died of complications of sepsis 4 months after achieving a sustained virologic response. No severe adverse events in any patient were deemed likely related to HCV infection or treatment with glecaprevir and pibrentasvir. Three recipients developed acute cellular rejection, which was borderline in one case. Three recipients developed BK polyomavirus viremia near or above 10,000 copies/ml, which resolved after reduction of immunosuppression. All recipients had good allograft function, with a median creatinine of 1.2 mg/dl and median eGFR of 57 ml/min per 1.73 m2 at 6 months. CONCLUSIONS Our multicenter trial demonstrated the safety and efficacy of transplanting 30 HCV-viremic kidneys into HCV-negative recipients, followed by early initiation of an 8-week regimen of glecaprevir and pibrentasvir.
|
45
|
Inflammation and Erythropoiesis-Stimulating Agent Response in Hemodialysis Patients: A Self-matched Longitudinal Study of Anemia Management in the Dialysis Outcomes and Practice Patterns Study (DOPPS). Kidney Med 2020; 2:286-296. [PMID: 32734248 PMCID: PMC7380435 DOI: 10.1016/j.xkme.2020.01.007] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023] Open
Abstract
Rationale & Objective Previous studies of inflammation and anemia management in hemodialysis (HD) patients may be biased due to patient differences. We used a self-matched longitudinal design to test whether new inflammation, defined as an acute increase in C-reactive protein (CRP) level, reduces hemoglobin response to erythropoiesis-stimulating agent (ESA) treatment. Study Design Self-matched longitudinal design. Setting & Participants 3,568 new inflammation events, defined as CRP level > 10 mg/L following a 3-month period with CRP level ≤ 5 mg/L, were identified from 12,389 HD patients in the Dialysis Outcomes and Practice Patterns Study (DOPPS) phases 4 to 6 (2009-2018) in 10 countries in which CRP is routinely measured. Predictor “After” (vs “before”) observing a high CRP level. Outcomes Within-patient changes in hemoglobin level, ESA dose, and ESA hyporesponsiveness (hemoglobin < 10 g/dL and ESA dose > 6,000 [Japan] or >8,000 [Europe] U/wk). Analytical Approach Linear mixed models and modified Poisson regression. Results Comparing before with after periods, mean hemoglobin level decreased from 11.2 to 10.9 g/dL (adjusted mean change, −0.26 g/dL), while mean ESA dose increased from 6,320 to 6,960 U/wk (adjusted relative change, 8.4%). The prevalence of ESA hyporesponsiveness increased from 7.6% to 12.3%. Both the unadjusted and adjusted prevalence ratios of ESA hyporesponsiveness were 1.68 (95% CI, 1.48-1.91). These associations were consistent in sensitivity analyses varying CRP thresholds and were stronger when the CRP level increase was sustained over the 3-month after period. Limitations Residual confounding by unmeasured time-varying risk factors for ESA hyporesponsiveness. Conclusions In the 3 months after HD patients experienced an increase in CRP levels, hemoglobin levels declined quickly, ESA doses increased, and the prevalence of ESA hyporesponsiveness increased appreciably. Routine CRP measurement could identify inflammation as a cause of worsened anemia. In turn, these findings speak to a potentially important role for anemia therapies that are less susceptible to the effects of inflammation.
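The "modified Poisson regression" named under Analytical Approach is commonly implemented as a Poisson model with a robust (sandwich) variance. Below is a minimal statsmodels GEE sketch with simulated data seeded with the prevalences reported above; the self-matched before/after structure is represented only by the patient cluster id.

```python
# Sketch: modified Poisson regression (Poisson model + robust variance) for
# the prevalence ratio of a binary outcome (ESA hyporesponsiveness).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 2000
df = pd.DataFrame({"patient": np.repeat(np.arange(n // 2), 2),
                   "after": np.tile([0, 1], n // 2)})
p = np.where(df["after"] == 1, 0.123, 0.076)   # prevalences from the abstract
df["hyporesponsive"] = (rng.random(n) < p).astype(int)

# GEE with an independence working structure gives the robust variance.
X = sm.add_constant(df[["after"]])
gee = sm.GEE(df["hyporesponsive"], X, groups=df["patient"],
             family=sm.families.Poisson(),
             cov_struct=sm.cov_struct.Independence()).fit()
print(np.exp(gee.params["after"]))             # prevalence ratio, roughly 1.6
```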
|
46
|
Matching with time-dependent treatments: A review and look forward. Stat Med 2020; 39:2350-2370. [PMID: 32242973 PMCID: PMC7384144 DOI: 10.1002/sim.8533] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2019] [Revised: 02/28/2020] [Accepted: 03/04/2020] [Indexed: 12/14/2022]
Abstract
Observational studies of treatment effects attempt to mimic a randomized experiment by balancing the covariate distribution in treated and control groups, thus removing biases related to measured confounders. Methods such as weighting, matching, and stratification, with or without a propensity score, are common in cross-sectional data. When treatments are initiated over longitudinal follow-up, a target pragmatic trial can be emulated using appropriate matching methods. The ideal experiment of interest is simple; patients would be enrolled sequentially, randomized to one or more treatments, and followed subsequently. This tutorial defines a class of longitudinal matching methods that emulate this experiment and provides a review of existing variations, with guidance regarding study design, execution, and analysis. These principles are illustrated in application to the study of statins on cardiovascular outcomes in the Framingham Offspring cohort. We identify avenues for future research and highlight the relevance of this methodology to high-quality comparative effectiveness studies in the era of big data.
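A minimal sketch of the core device in this class of methods, sequential risk-set matching: at each treatment time, each newly treated subject is matched to a not-yet-treated subject with the closest score. Greedy 1:1 matching on a single scalar score, with hypothetical data; real implementations use richer distance metrics and calipers.

```python
# Sketch: greedy 1:1 sequential risk-set matching on a scalar score
# (e.g., a time-dependent propensity or prognostic score). Hypothetical data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(6)
n = 500
df = pd.DataFrame({
    "id": np.arange(n),
    "score": rng.normal(size=n),
    "treat_time": np.where(rng.random(n) < 0.3,
                           rng.integers(1, 24, n), np.inf),  # inf = never treated
})

matches = []
for t in sorted(df.loc[np.isfinite(df["treat_time"]), "treat_time"].unique()):
    treated_now = df[df["treat_time"] == t]
    used = [m[1] for m in matches]
    # Risk set: not yet treated at time t and not already used as a control.
    at_risk = df[(df["treat_time"] > t) & ~df["id"].isin(used)]
    for _, row in treated_now.iterrows():
        if at_risk.empty:
            break
        j = (at_risk["score"] - row["score"]).abs().idxmin()
        matches.append((row["id"], at_risk.loc[j, "id"]))
        at_risk = at_risk.drop(index=j)

print(f"{len(matches)} matched treated-control pairs")
```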
|
47
|
Prognostic score matching methods for estimating the average effect of a non-reversible binary time-dependent treatment on the survival function. LIFETIME DATA ANALYSIS 2020; 26:451-470. [PMID: 31576491 DOI: 10.1007/s10985-019-09485-x] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/11/2018] [Accepted: 09/17/2019] [Indexed: 06/10/2023]
Abstract
In evaluating the benefit of a treatment on survival, it is often of interest to compare post-treatment survival with the survival function that would have been observed in the absence of treatment. In many practical settings, treatment is time-dependent in the sense that subjects typically begin follow-up untreated, with some going on to receive treatment at some later time point. In observational studies, treatment is not assigned at random and, therefore, may depend on various patient characteristics. We have developed semi-parametric matching methods to estimate the average treatment effect on the treated (ATT) with respect to survival probability and restricted mean survival time. Matching is based on a prognostic score which reflects each patient's death hazard in the absence of treatment. Specifically, each treated patient is matched with multiple as-yet-untreated patients with similar prognostic scores. The matched sets do not need to be of equal size, since each matched control is weighted in order to preserve risk score balancing across treated and untreated groups. After matching, we estimate the ATT non-parametrically by contrasting pre- and post-treatment weighted Nelson-Aalen survival curves. A closed-form variance is proposed and shown to work well in simulation studies. The proposed methods are applied to national organ transplant registry data.
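A minimal NumPy sketch of the weighted Nelson-Aalen estimator underlying the pre- vs. post-treatment contrast described above; the weights here are arbitrary placeholders rather than the prognostic-score matching weights.

```python
# Sketch: weighted Nelson-Aalen cumulative hazard with subject weights
# (in the method above, weights come from prognostic-score matched sets).
import numpy as np

def weighted_nelson_aalen(time, event, weights):
    """Return array of (event time, weighted cumulative hazard)."""
    order = np.argsort(time)
    time, event, weights = time[order], event[order], weights[order]
    event_times = np.unique(time[event == 1])
    cumhaz, path = 0.0, []
    for t in event_times:
        at_risk = weights[time >= t].sum()          # weighted number at risk
        d = weights[(time == t) & (event == 1)].sum()
        cumhaz += d / at_risk                       # weighted hazard increment
        path.append((t, cumhaz))
    return np.array(path)

rng = np.random.default_rng(7)
t = rng.exponential(3, 100)
e = (rng.random(100) < 0.8).astype(int)
w = rng.uniform(0.5, 1.5, 100)                      # placeholder weights
A = weighted_nelson_aalen(t, e, w)
print(np.exp(-A[-1, 1]))                            # implied survival at last event
```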
|
48
|
Abstract
Objective Although important enhancements to continuous ambulatory peritoneal dialysis (CAPD) have occurred since its inception, few studies have explicitly evaluated trends over time in CAPD technique failure rates. To assist in quantifying the net benefit of improvements to CAPD for patient outcomes, we examined trends in technique failure rates among Canadian CAPD patients. Patients Patients initiating renal replacement therapy on CAPD (n = 7110) between 1981 and 1997. Main Outcome Measures Technique failure (i.e., switch to hemodialysis). Results Total follow-up was 12,831 patient-years (pt-yr). There were 1976 technique failures, for a crude CAPD failure rate of 154.0/1000 pt-yr. Technique failure rate ratios (RR) estimated using Poisson regression and adjusted for age, gender, race, province, primary renal diagnosis, and follow-up time, were significantly reduced for the 1990–93 [RR = 0.75, 95% confidence interval (CI) = (0.68, 0.83)], 1994–95 [RR = 0.83, CI (0.75, 0.93)], and 1996–97 [RR = 0.78, CI (0.70, 0.87)] calendar periods relative to 1981–89 (RR = 1, reference). Among cause-specific technique failure rates, the greatest improvement was observed for peritonitis-attributable technique failure, with RR = 0.46, CI (0.41, 0.50) for 1990–97 relative to 1981–89. However, rates of technique failure due to inadequate dialysis were significantly elevated for the 1990–97 period [RR = 1.68, CI (1.44, 1.96)]. Conclusions The collection of more detailed data on practice patterns would enable future studies to elucidate the cause-and-effect relationship between CAPD descriptors and technique failure, and hence assist in clinical decision-making.
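The crude rate quoted above follows directly from the reported counts:

```latex
% Crude technique failure rate: failures divided by total follow-up time.
\frac{1976 \text{ failures}}{12{,}831 \text{ pt-yr}} \times 1000 \approx 154.0 \text{ per } 1000 \text{ pt-yr}
```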
|
49
|
Comparing Mortality Rates on Capd/Ccpd and Hemodialysis the Canadian Experience: Fact or Fiction? Perit Dial Int 2020. [DOI: 10.1177/089686089801800504] [Citation(s) in RCA: 80] [Impact Index Per Article: 20.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Objective To compare mortality rates on hemodialysis (HD) to rates on continuous ambulatory/cyclic peritoneal dialysis (CAPD/CCPD), to contrast our results with those of other recent investigations, and to discuss reasons for discrepancies. Data Sources Patient-specific data obtained from the Canadian Organ Replacement Register on patients initiating renal replacement therapy (RRT) between 1 January 1990 and 31 December 1995 (n = 14483). Recent mortality comparisons of CAPD and HD. Main Outcome Measures Mortality rate ratio (RR) based on an “as-treated” (AT) analysis incorporating treatment modality switches and adjusting for age, primary renal diagnosis, and comorbid conditions using Poisson regression. Hazard ratios (HR) were estimated using Cox regression and based on an “intent-to-treat” (ITT) analysis wherein patients were classified based on the dialytic modality received on follow-up day 90. Results Adjusted mortality rates were significantly decreased on CAPD/CCPD relative to HD [RR = 0.73, 95% confidence interval (CI) = (0.69, 0.77)] based on the AT analysis. Most of the protective effect of CAPD/CCPD was concentrated in the first 2 years of follow-up post-RRT initiation. Based on the ITT analysis, the estimated CAPD/CCPD effect was greatly reduced, with HR = 0.93 (0.87, 0.99). Conclusions We provide further evidence that CAPD/CCPD is not an inferior dialytic modality to HD, particularly in the short term. Comparing mortality rates on CAPD/CCPD and HD is inherently difficult due to the potential for bias. Discrepancies between our results and those of previous investigations, and variability in findings among previous studies, relate to differences in clinical and demographic setting, patient populations, study design, statistical methods, and interaction between the dialytic modality effect and various other covariables.
|
50
|
Abstract
Objective Primarily, to determine whether peritoneal small solute clearance is related to patient and technique survival among anuric peritoneal dialysis [continuous ambulatory (CAPD) and automated peritoneal dialysis (APD)] patients. A secondary goal was to describe the ability to attain Dialysis Outcomes Quality Initiative (DOQI) targets among anuric patients on peritoneal dialysis. Design Retrospective cohort study via chart reviews. Setting Peritoneal Dialysis Unit of Toronto Hospital (Western Division). Patients The study included 122 CAPD and APD patients between January 1992 and September 1997, with 24-hour urine volume less than 100 mL, or renal creatinine clearance (CCr) less than 1 mL/minute. Adequacy data were available for 115 patients. Outcome Measures Mortality and technique failure (TF). Regression analysis was used to estimate the mortality and TF rate ratios (RR) for peritoneal Kt/V urea (pKt/V) and pCCr, adjusting for age, gender, diabetes, months of follow-up prior to anuria, albumin, transport status, coronary artery disease, cardiovascular disease, and peripheral vascular disease. Results Fifty-seven percent (51/89) of patients on CAPD and 81% (21/26) on APD had a weekly pKt/V ≥ 2 and ≥ 2.2, respectively (DOQI targets); whereas only 35% (31/89) on CAPD and 35% (9/26) on APD had a weekly pCCr ≥ 60 L/1.73 m2 and 66 L/1.73 m2, respectively. Median follow-up times among patients were 16.5 and 19.5 months pre- and postanuria, respectively. Patients with pKt/V ≥ 1.85 experienced a marked reduction in mortality (RR = 0.54, p = 0.10); the effect was less pronounced for pCCr ≥ 50 L/1.73 m2 (RR = 0.63, p = 0.25). No relationship was observed between pKt/V or pCCr and TF. Conclusion Mortality was noticeably less frequent among patients with a pKt/V ≥ 1.85 compared with those with a pKt/V < 1.85 (p = 0.10). Given the magnitude of the association, the failure to observe statistical significance likely reflects the size of the patient cohort. Our results imply that it is, in fact, possible to achieve DOQI targets among anuric patients on peritoneal dialysis.
|