8 results for Risk adjustment

at Université de Lausanne, Switzerland


Relevance:

60.00%

Abstract:

BACKGROUND: Regional rates of hospitalization for ambulatory care sensitive conditions (ACSC) are used to compare the availability and quality of ambulatory care, but the risk adjustment for population health status is often minimal. The objectives of the study were to examine the impact of more extensive risk adjustment on regional comparisons and to investigate the relationship between various area-level factors and the properly adjusted rates. METHODS: Our study is an observational study based on routine data of 2 million anonymized insured persons in 26 Swiss cantons followed over one or two years. A negative binomial regression was modeled with increasingly detailed information on health status (age and gender only, inpatient diagnoses, outpatient conditions inferred from dispensed drugs, and frequency of physician visits). Hospitalizations for ACSC were identified from principal diagnoses detecting 19 conditions, with an updated list of ICD-10 diagnostic codes. Co-morbidities and surgical procedures were used as exclusion criteria to improve the specificity of the detection of potentially avoidable hospitalizations (PAH). The impact of the adjustment approaches was measured by changes in the standardized ratios calculated with and without other data besides age and gender. RESULTS: 25% of cases identified by inpatient main diagnoses were removed by applying exclusion criteria. Cantonal ACSC hospitalization rates varied from 1.4 to 8.9 per 1,000 insured per year. Morbidity inferred from diagnoses and drugs dramatically increased the predictive performance, with the greatest effect found for conditions linked to an ACSC. More visits were associated with fewer PAH, although very high users were at greater risk and subjects who had not consulted were at negligible risk. By maximizing health status adjustment, two thirds of the cantons changed their adjusted ratio by more than 10 percent. Cantonal variations remained substantial but were not explained by supply or demand factors. CONCLUSION: Additional adjustment for health status is required when using ACSC to monitor ambulatory care. Drug-inferred morbidities are a promising approach.
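A minimal sketch of the adjustment strategy described in this abstract, assuming a hypothetical per-insured data frame (file and column names such as canton, age_group, inpatient_dx, drug_morbidity, visits and acsc_hosp are illustrative, not the study's variables): a negative binomial model is fitted with the more detailed health-status information, and cantonal standardized ratios are taken as observed over model-expected ACSC hospitalizations.

```python
# Sketch: risk-adjusted ACSC hospitalization ratios per canton (hypothetical data layout).
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# One row per insured person-year: canton, age_group, sex, inpatient_dx,
# drug_morbidity, visits, acsc_hosp (count of ACSC hospitalizations).
df = pd.read_csv("insured_person_years.csv")  # hypothetical file

# Full adjustment: age/sex plus morbidity inferred from diagnoses, drugs and visits.
model = smf.glm(
    "acsc_hosp ~ C(age_group) + C(sex) + inpatient_dx + drug_morbidity + visits",
    data=df,
    family=sm.families.NegativeBinomial(),
).fit()

# Standardized ratio = observed / expected ACSC hospitalizations per canton.
df["expected"] = model.predict(df)
agg = df.groupby("canton")[["acsc_hosp", "expected"]].sum()
standardized_ratio = (agg["acsc_hosp"] / agg["expected"]).sort_values()
print(standardized_ratio)
```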

Relevance:

30.00%

Abstract:

AIMS/HYPOTHESIS: Several susceptibility genes for type 2 diabetes have been discovered recently. Individually, these genes increase the disease risk only minimally. The goals of the present study were to determine, at the population level, the risk of diabetes in individuals who carry risk alleles within several susceptibility genes for the disease and the added value of this genetic information over the clinical predictors. METHODS: We constructed an additive genetic score using the most replicated single-nucleotide polymorphisms (SNPs) within 15 type 2 diabetes-susceptibility genes, weighting each SNP with its reported effect. We tested this score in the extensively phenotyped population-based cross-sectional CoLaus Study in Lausanne, Switzerland (n = 5,360), involving 356 diabetic individuals. RESULTS: The clinical predictors of prevalent diabetes were age, BMI, family history of diabetes, WHR, and triacylglycerol/HDL-cholesterol ratio. After adjustment for these variables, the risk of diabetes was 2.7 (95% CI 1.8-4.0, p = 0.000006) for individuals with a genetic score within the top quintile, compared with the bottom quintile. Adding the genetic score to the clinical covariates improved the area under the receiver operating characteristic curve slightly (from 0.86 to 0.87), yet significantly (p = 0.002). BMI was similar in these two extreme quintiles. CONCLUSIONS/INTERPRETATION: In this population, a simple weighted 15 SNP-based genetic score provides additional information over clinical predictors of prevalent diabetes. At this stage, however, the clinical benefit of this genetic information is limited.
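As a rough illustration of the weighted genetic score and its incremental discrimination, the sketch below assumes a data frame with 15 risk-allele count columns, the published per-allele effect sizes as weights, and the clinical covariates named in the abstract; none of the file or column names come from the study itself.

```python
# Sketch: effect-weighted 15-SNP score and its added value over clinical predictors.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.metrics import roc_auc_score

df = pd.read_csv("cohort_genotypes.csv")        # hypothetical file
snp_cols = [f"snp{i}" for i in range(1, 16)]    # risk-allele counts (0/1/2) per SNP
weights = np.loadtxt("reported_effects.txt")    # hypothetical per-SNP effect sizes

# Additive score: each risk-allele count weighted by its reported effect.
df["gen_score"] = df[snp_cols].to_numpy() @ weights

clinical = "age + bmi + family_history + whr + tg_hdl_ratio"
m_clin = smf.logit(f"diabetes ~ {clinical}", data=df).fit()
m_full = smf.logit(f"diabetes ~ {clinical} + gen_score", data=df).fit()

# Discrimination with and without the genetic score.
auc_clin = roc_auc_score(df["diabetes"], m_clin.predict(df))
auc_full = roc_auc_score(df["diabetes"], m_full.predict(df))
print(f"AUC clinical only: {auc_clin:.3f}; with genetic score: {auc_full:.3f}")
```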

Relevance:

30.00%

Abstract:

BACKGROUND: Data from prospective cohort studies regarding the association between subclinical hyperthyroidism and cardiovascular outcomes are conflicting. We aimed to assess the risks of total and coronary heart disease (CHD) mortality, CHD events, and atrial fibrillation (AF) associated with endogenous subclinical hyperthyroidism among all available large prospective cohorts. METHODS: Individual data on 52,674 participants were pooled from 10 cohorts. Coronary heart disease events were analyzed in 22,437 participants from 6 cohorts with available data, and incident AF was analyzed in 8,711 participants from 5 cohorts. Euthyroidism was defined as a thyrotropin level between 0.45 and 4.49 mIU/L and endogenous subclinical hyperthyroidism as a thyrotropin level lower than 0.45 mIU/L with normal free thyroxine levels, after excluding those receiving thyroid-altering medications. RESULTS: Of 52,674 participants, 2,188 (4.2%) had subclinical hyperthyroidism. During follow-up, 8,527 participants died (including 1,896 from CHD), 3,653 of 22,437 had CHD events, and 785 of 8,711 developed AF. In age- and sex-adjusted analyses, subclinical hyperthyroidism was associated with increased total mortality (hazard ratio [HR], 1.24; 95% CI, 1.06-1.46), CHD mortality (HR, 1.29; 95% CI, 1.02-1.62), CHD events (HR, 1.21; 95% CI, 0.99-1.46), and AF (HR, 1.68; 95% CI, 1.16-2.43). Risks did not differ significantly by age, sex, or preexisting cardiovascular disease and were similar after further adjustment for cardiovascular risk factors, with attributable risk ranging from 14.5% for total mortality to 41.5% for AF in those with subclinical hyperthyroidism. Risks for CHD mortality and AF (but not other outcomes) were higher for thyrotropin levels lower than 0.10 mIU/L compared with thyrotropin levels between 0.10 and 0.44 mIU/L (for both, P value for trend, .03). CONCLUSION: Endogenous subclinical hyperthyroidism is associated with increased risks of total and CHD mortality and of incident AF, with the highest risks of CHD mortality and AF when the thyrotropin level is lower than 0.10 mIU/L.
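The abstract does not spell out how the attributable risks were computed; one standard quantity consistent with the phrasing is the attributable fraction among the exposed, (HR - 1)/HR. The snippet below merely illustrates that formula with the age- and sex-adjusted hazard ratios quoted above; it is not a reproduction of the pooled analysis, which may have used adjusted estimates.

```python
# Attributable fraction among the exposed, (HR - 1) / HR, illustrated with the
# age- and sex-adjusted hazard ratios from the abstract (illustrative only).
def attributable_fraction_exposed(hr: float) -> float:
    """Share of events among exposed participants attributable to the exposure."""
    return (hr - 1.0) / hr

hazard_ratios = {
    "total mortality": 1.24,
    "CHD mortality": 1.29,
    "CHD events": 1.21,
    "atrial fibrillation": 1.68,
}
for outcome, hr in hazard_ratios.items():
    print(f"{outcome}: attributable fraction ~ {attributable_fraction_exposed(hr):.1%}")
```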

Relevance:

30.00%

Abstract:

BACKGROUND: American College of Cardiology/American Heart Association guidelines for the diagnosis and management of heart failure recommend investigating exacerbating conditions such as thyroid dysfunction, but without specifying the impact of different thyroid-stimulating hormone (TSH) levels. Limited prospective data exist on the association between subclinical thyroid dysfunction and heart failure events. METHODS AND RESULTS: We performed a pooled analysis of individual participant data using all available prospective cohorts with thyroid function tests and subsequent follow-up of heart failure events. Individual data on 25,390 participants with 216,248 person-years of follow-up were supplied from 6 prospective cohorts in the United States and Europe. Euthyroidism was defined as TSH of 0.45 to 4.49 mIU/L, subclinical hypothyroidism as TSH of 4.5 to 19.9 mIU/L, and subclinical hyperthyroidism as TSH <0.45 mIU/L, the last two with normal free thyroxine levels. Among the 25,390 participants, 2,068 (8.1%) had subclinical hypothyroidism and 648 (2.6%) had subclinical hyperthyroidism. In age- and sex-adjusted analyses, risks of heart failure events were increased with both higher and lower TSH levels (P for quadratic pattern <0.01); the hazard ratio was 1.01 (95% confidence interval, 0.81-1.26) for TSH of 4.5 to 6.9 mIU/L, 1.65 (95% confidence interval, 0.84-3.23) for TSH of 7.0 to 9.9 mIU/L, and 1.86 (95% confidence interval, 1.27-2.72) for TSH of 10.0 to 19.9 mIU/L (P for trend <0.01); it was 1.31 (95% confidence interval, 0.88-1.95) for TSH of 0.10 to 0.44 mIU/L and 1.94 (95% confidence interval, 1.01-3.72) for TSH <0.10 mIU/L (P for trend=0.047). Risks remained similar after adjustment for cardiovascular risk factors. CONCLUSION: Risks of heart failure events were increased with both higher and lower TSH levels, particularly for TSH ≥10 and <0.10 mIU/L.
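A sketch of the TSH banding and an age- and sex-adjusted survival model of the kind described above, using lifelines as one possible implementation; the file, column names and numeric sex coding are assumptions, not the pooled cohorts' actual data.

```python
# Sketch: TSH categories from the abstract and an age/sex-adjusted Cox model
# for heart failure events (hypothetical data layout; sex assumed coded 0/1).
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("pooled_thyroid_cohorts.csv")   # hypothetical file
df = df[df["tsh"].between(0.0, 19.99)]           # keep the TSH range used in the abstract

bins = [0.0, 0.10, 0.45, 4.50, 7.0, 10.0, 20.0]
labels = ["<0.10", "0.10-0.44", "euthyroid", "4.5-6.9", "7.0-9.9", "10.0-19.9"]
df["tsh_cat"] = pd.cut(df["tsh"], bins=bins, labels=labels, right=False)

# Dummy-code TSH categories with euthyroidism as the reference.
dummies = pd.get_dummies(df["tsh_cat"], prefix="tsh").astype(float)
dummies = dummies.drop(columns=["tsh_euthyroid"])
X = pd.concat([df[["age", "sex", "followup_years", "hf_event"]], dummies], axis=1)

cph = CoxPHFitter()
cph.fit(X, duration_col="followup_years", event_col="hf_event")
cph.print_summary()   # exp(coef) column gives hazard ratios vs. euthyroidism
```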

Relevance:

30.00%

Abstract:

BACKGROUND: In Switzerland, health policies are decided at the local level, but little is known regarding their impact on the screening and management of cardiovascular risk factors (CVRFs). We thus aimed at assessing regional differences in CVRFs in Switzerland. METHODS: Data were drawn from the Swiss Health Survey 2007 (N = 17,879). Seven administrative regions were defined: West (Leman), West-Central (Mittelland), Zurich, South (Ticino), North-West, East and Central Switzerland. Obesity, smoking, hypertension, dyslipidemia and diabetes prevalence, treatment and screening within the last 12 months were assessed by interview. RESULTS: After multivariate adjustment for age, gender, educational level, marital status and Swiss citizenship, no significant differences were found between regions regarding the prevalence of obesity or current smoking. Similarly, no differences were found regarding hypertension screening and prevalence. Two thirds of subjects who had been told they had high blood pressure were treated, with the lowest treatment rates found in East Switzerland (odds ratio [95% confidence interval] 0.65 [0.50-0.85]). Screening for hypercholesterolemia was more frequently reported in the French- (Leman) and Italian-speaking (Ticino) regions. Four out of ten participants who had been told they had high cholesterol levels were treated, and the lowest treatment rates were found in German-speaking regions. Screening for diabetes was higher in Ticino (1.24 [1.09-1.42]). Six out of ten participants who had been told they had diabetes were treated; the lowest treatment rates were again found in German-speaking regions. CONCLUSIONS: In Switzerland, cardiovascular risk factor screening and management differ between regions, and these differences cannot be accounted for by differences in population characteristics. Management of most cardiovascular risk factors could be improved.
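As an example of the kind of region comparison reported above, the sketch fits a logistic model of hypertension treatment among respondents told they had high blood pressure, adjusted for the covariates listed in the abstract; the file and column names are hypothetical.

```python
# Sketch: region-adjusted odds of hypertension treatment (hypothetical variable names).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

shs = pd.read_csv("swiss_health_survey_2007.csv")   # hypothetical extract
told_htn = shs[shs["told_high_bp"] == 1]            # respondents told they had high BP

model = smf.logit(
    "bp_treated ~ C(region) + age + C(sex) + C(education)"
    " + C(marital_status) + swiss_citizen",
    data=told_htn,
).fit()

# Odds ratios and 95% confidence intervals for each region vs. the reference region.
or_table = np.exp(model.conf_int())
or_table["OR"] = np.exp(model.params)
print(or_table.filter(like="region", axis=0))
```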

Relevance:

30.00%

Abstract:

BACKGROUND: Intensification of pharmacotherapy in persons with poorly controlled chronic conditions has been proposed as a clinically meaningful process measure of quality. OBJECTIVE: To validate measures of treatment intensification by evaluating their associations with subsequent control in hypertension, hyperlipidemia, and diabetes mellitus across 35 medical facility populations in Kaiser Permanente, Northern California. DESIGN: Hierarchical analyses of associations of improvements in facility-level treatment intensification rates from 2001 to 2003 with patient-level risk factor levels at the end of 2003. PATIENTS: Members (515,072 and 626,130; age >20 years) with hypertension, hyperlipidemia, and/or diabetes mellitus in 2001 and 2003, respectively. MEASUREMENTS: Treatment intensification for each risk factor was defined as an increase in the number of drug classes prescribed, an increase in dosage for at least 1 drug, or a switch to a drug from another class within 3 months of observed poor risk factor control. RESULTS: Facility-level improvements in treatment intensification rates between 2001 and 2003 were strongly associated with a greater likelihood of being in control at the end of 2003 (P ≤ 0.05 for each risk factor) after adjustment for patient- and facility-level covariates. Compared with facility rankings based solely on control, adding the percentages of poorly controlled patients who received treatment intensification changed the 2003 rankings substantially: 14%, 51%, and 29% of the facilities changed ranks by 5 or more positions for hypertension, hyperlipidemia, and diabetes, respectively. CONCLUSIONS: Treatment intensification is tightly linked to improved control. Thus, it deserves consideration as a process measure for motivating quality improvement and possibly for measuring clinical performance.
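The intensification criterion lends itself to a simple per-episode check. The sketch below is one plausible reading of the definition in the abstract (a new or switched drug class, or a dose increase for a class already in use, within 3 months of a poor-control reading); the data layout and column names are assumptions, not the study's actual code.

```python
# Sketch: flag treatment intensification within 90 days of a poor-control reading.
from datetime import timedelta
import pandas as pd

def intensified(poor_control_date, rx: pd.DataFrame, window_days: int = 90) -> bool:
    """rx: one row per dispensed prescription with columns date, drug_class, dose
    (dose in comparable units within a drug class)."""
    before = rx[rx["date"] <= poor_control_date]
    after = rx[
        (rx["date"] > poor_control_date)
        & (rx["date"] <= poor_control_date + timedelta(days=window_days))
    ]
    classes_before = set(before["drug_class"])
    classes_after = set(after["drug_class"])

    # New class prescribed (covers both adding a class and switching classes).
    new_class = bool(classes_after - classes_before)
    # Dose increased for at least one drug class already in use.
    dose_increase = any(
        after.loc[after["drug_class"] == c, "dose"].max()
        > before.loc[before["drug_class"] == c, "dose"].max()
        for c in classes_after & classes_before
    )
    return new_class or dose_increase
```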

Relevance:

30.00%

Abstract:

BACKGROUND AND PURPOSE: Statins display anti-inflammatory and anti-epileptogenic properties in animal models, and may reduce the epilepsy risk in elderly humans; however, a possible modulating role on outcome in patients with status epilepticus (SE) has not been assessed. METHODS: This cohort study was based on a prospective registry including all consecutive adults with incident SE treated in our center between April 2006 and September 2012. SE outcome was categorized at hospital discharge into 'return to baseline', 'new disability' and 'mortality'. The role of potential predictors, including statin treatment on admission, was evaluated using a multinomial logistic regression model. RESULTS: Amongst 427 patients identified, information on statins was available in 413 (97%). Mean age was 60.9 (±17.8) years; 201 (49%) were women; 211 (51%) had a potentially fatal SE etiology; and 191 (46%) experienced generalized-convulsive or non-convulsive SE in coma. Statins (simvastatin, atorvastatin or pravastatin) were prescribed prior to admission in 76 (18%) subjects, mostly elderly. Whilst 208 (50.4%) patients returned to baseline, 58 (14%) died. After adjustment for established SE outcome predictors (age, etiology, SE severity score), statins correlated significantly with lower mortality (relative risk ratio 0.38, P = 0.046). CONCLUSION: This study suggests for the first time that exposure to statins before an SE episode is related to its outcome, pointing to a possible anti-epileptogenic role. Further studies are needed to confirm this intriguing finding.
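A minimal sketch of the multinomial outcome model described above, with statin exposure added to the established predictors; the registry file, column names and outcome coding are assumptions.

```python
# Sketch: multinomial logistic regression for SE outcome at discharge
# (0 = return to baseline, 1 = new disability, 2 = mortality); hypothetical columns.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

se = pd.read_csv("se_registry.csv")   # hypothetical registry extract
model = smf.mnlogit(
    "outcome ~ age + C(fatal_etiology) + se_severity_score + C(statin_on_admission)",
    data=se,
).fit()

# Relative risk ratios per outcome category vs. return to baseline.
print(np.exp(model.params))
```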

Relevance:

30.00%

Abstract:

Trabecular bone score (TBS) is a gray-level textural index of bone microarchitecture derived from lumbar spine dual-energy X-ray absorptiometry (DXA) images. TBS is a bone mineral density (BMD)-independent predictor of fracture risk. The objective of this meta-analysis was to determine whether TBS predicted fracture risk independently of FRAX probability and to examine their combined performance by adjusting the FRAX probability for TBS. We utilized individual-level data from 17,809 men and women in 14 prospective population-based cohorts. Baseline evaluation included TBS and the FRAX risk variables, and outcomes during follow-up (mean 6.7 years) comprised major osteoporotic fractures. The association between TBS, FRAX probabilities, and the risk of fracture was examined using an extension of the Poisson regression model in each cohort and for each sex and expressed as the gradient of risk (GR; hazard ratio per 1 SD change in risk variable in direction of increased risk). FRAX probabilities were adjusted for TBS using an adjustment factor derived from an independent cohort (the Manitoba Bone Density Cohort). Overall, the GR of TBS for major osteoporotic fracture was 1.44 (95% confidence interval [CI] 1.35-1.53) when adjusted for age and time since baseline and was similar in men and women (p > 0.10). When additionally adjusted for FRAX 10-year probability of major osteoporotic fracture, TBS remained a significant, independent predictor for fracture (GR = 1.32, 95% CI 1.24-1.41). The adjustment of FRAX probability for TBS resulted in a small increase in the GR (1.76, 95% CI 1.65-1.87 versus 1.70, 95% CI 1.60-1.81). A smaller change in GR for hip fracture was observed (FRAX hip fracture probability GR 2.25 vs. 2.22). TBS is a significant predictor of fracture risk independently of FRAX. The findings support the use of TBS as a potential adjustment for FRAX probability, though the impact of the adjustment remains to be determined in the context of clinical assessment guidelines. © 2015 American Society for Bone and Mineral Research.
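The gradient of risk is simply the rate (or hazard) ratio per 1 SD change in the risk variable, i.e. exp(beta × SD) from the fitted model. The sketch below illustrates it with an ordinary Poisson regression and a log person-time offset on a single hypothetical cohort; the meta-analysis itself used an extension of the Poisson model fitted per cohort and sex, so this is illustrative only and all names are assumptions.

```python
# Sketch: gradient of risk (rate ratio per 1 SD decrease in TBS) from a Poisson model
# with a log person-time offset; hypothetical cohort file and column names.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

cohort = pd.read_csv("cohort.csv")   # columns: tbs, age, followup_years, major_fx (count)
# Standardize TBS and flip the sign so the coefficient expresses risk per SD decrease.
cohort["tbs_risk"] = -(cohort["tbs"] - cohort["tbs"].mean()) / cohort["tbs"].std()

model = smf.glm(
    "major_fx ~ tbs_risk + age",
    data=cohort,
    family=sm.families.Poisson(),
    offset=np.log(cohort["followup_years"]),
).fit()

gr = np.exp(model.params["tbs_risk"])   # gradient of risk per 1 SD decrease in TBS
print(f"GR per SD: {gr:.2f}")
```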