950 results for RISK SCORE
Abstract:
PURPOSE: To derive a prediction rule by using prospectively obtained clinical and bone ultrasonographic (US) data to identify elderly women at risk for osteoporotic fractures. MATERIALS AND METHODS: The study was approved by the Swiss Ethics Committee. A prediction rule was computed by using data from a 3-year prospective multicenter study that assessed the predictive value of heel-bone quantitative US in 6174 Swiss women aged 70-85 years. A quantitative US device was used to calculate the stiffness index at the heel. Baseline characteristics, known risk factors for osteoporosis and falls, and the quantitative US stiffness index were used to develop a predictive rule for osteoporotic fracture. Predictive values were determined by using a univariate Cox model and were adjusted with multivariate analysis. RESULTS: There were five risk factors for the incidence of osteoporotic fracture: older age (>75 years) (P < .001), low heel quantitative US stiffness index (<78%) (P < .001), history of fracture (P = .001), recent fall (P = .001), and a failed chair test (P = .029). The score points assigned to these risk factors were as follows: age, 2 (3 if age > 80 years); low quantitative US stiffness index, 5 (7.5 if stiffness index < 60%); history of fracture, 1; recent fall, 1.5; and failed chair test, 1. The cutoff value to obtain a high sensitivity (90%) was 4.5. With this cutoff, 1464 women were at lower risk (score < 4.5) and 4710 were at higher risk (score ≥ 4.5) for fracture. Among the higher-risk women, 6.1% had an osteoporotic fracture, versus 1.8% of the women at lower risk. Among the women who had a hip fracture, 90% were in the higher-risk group. CONCLUSION: A prediction rule obtained by using the quantitative US stiffness index and four clinical risk factors helped discriminate, with high sensitivity, women at higher risk for osteoporotic fracture from those at lower risk.
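For illustration, the point assignments and the 4.5-point cutoff above can be written as a small scoring function. This is a minimal sketch; the function and variable names are ours, not the study's:

```python
def fracture_risk_score(age, stiffness_index, prior_fracture, recent_fall,
                        failed_chair_test):
    """Point-based rule sketched from the published assignments."""
    score = 0.0
    if age > 80:               # age > 75 scores 2 points, > 80 scores 3
        score += 3
    elif age > 75:
        score += 2
    if stiffness_index < 60:   # SI < 78% scores 5 points, < 60% scores 7.5
        score += 7.5
    elif stiffness_index < 78:
        score += 5
    if prior_fracture:         # history of fracture: 1 point
        score += 1
    if recent_fall:            # recent fall: 1.5 points
        score += 1.5
    if failed_chair_test:      # failed chair test: 1 point
        score += 1
    return score


def is_higher_risk(score, cutoff=4.5):
    """Cutoff of 4.5 was chosen in the study for 90% sensitivity."""
    return score >= cutoff


# An 82-year-old with SI 70% and a recent fall: 3 + 5 + 1.5 = 9.5 points
print(fracture_risk_score(82, 70, False, True, False))  # → 9.5
```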
Abstract:
Purpose: To assess the global cardiovascular (CV) risk of an individual, several scores have been developed. However, their accuracy and comparability need to be evaluated in populations other than those from which they were derived. The aim of this study was to compare the predictive accuracy of 4 CV risk scores using data from a large population-based cohort. Methods: Prospective cohort study including 4980 participants (2698 women; mean age ± SD: 52.7 ± 10.8 years) in Lausanne, Switzerland, followed for an average of 5.5 years (range, 0.2-8.5). Two end points were assessed: 1) coronary heart disease (CHD) and 2) CV diseases (CVD). Four risk scores were compared: the original and recalibrated Framingham coronary heart disease scores (1998 and 2001); the original PROCAM score (2002) and its recalibrated version for Switzerland (IAS-AGLA); and the Reynolds risk score. Discrimination was assessed using Harrell's C statistic, model fitness using Akaike's information criterion (AIC), and calibration using a pseudo Hosmer-Lemeshow test. The sensitivity, specificity, and corresponding 95% confidence intervals were assessed for each risk score using the highest risk category (≥20% at 10 years) as the "positive" test. Results: The recalibrated and original 1998 and the original 2001 Framingham scores showed better discrimination (>0.720) and model fitness (low AIC) for CHD and CVD. All 4 scores were correctly calibrated (chi-square < 20). The recalibrated Framingham 1998 score had the best sensitivities, 37.8% and 40.4%, for CHD and CVD, respectively. All scores presented specificities >90%. The Framingham 1998, PROCAM, and IAS-AGLA scores placed the greatest number of subjects (>200) in the high-risk category, whereas the recalibrated Framingham 2001 and Reynolds scores placed <=44 subjects there. Conclusion: In this cohort, we see variations in accuracy between risk scores, with the original Framingham 2001 score demonstrating the best compromise between accuracy and a limited selection of subjects in the highest risk category.
We advocate that national guidelines, based on independently validated data, take into account calibrated CV risk scores for their respective countries.
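The sensitivity/specificity computation with the highest risk category treated as the "positive" test can be sketched as follows; the 2×2 counts below are illustrative, not the cohort's actual numbers:

```python
import math


def sens_spec_with_ci(tp, fn, tn, fp, z=1.96):
    """Sensitivity and specificity with normal-approximation 95% CIs.

    tp/fn: subjects with events inside/outside the highest risk category;
    tn/fp: subjects without events outside/inside it.
    """
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)

    def ci(p, n):
        half = z * math.sqrt(p * (1 - p) / n)
        return (max(0.0, p - half), min(1.0, p + half))

    return (sens, ci(sens, tp + fn)), (spec, ci(spec, tn + fp))


# Illustrative counts: 38/100 events flagged, 900/1000 non-events not flagged
(sens, sens_ci), (spec, spec_ci) = sens_spec_with_ci(38, 62, 900, 100)
print(round(sens, 2), round(spec, 2))  # → 0.38 0.9
```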
Abstract:
PURPOSE: The nutritional risk score is a recommended screening tool for malnutrition. While a nutritional risk score of 3 or greater predicts adverse outcomes after digestive surgery, to our knowledge its predictive value for morbidity after urological interventions is unknown. We determined whether urological patients at nutritional risk are at higher risk for complications after major surgery than patients not at nutritional risk. MATERIALS AND METHODS: We performed a prospective observational study in consecutive patients undergoing major surgery. A priori sample size calculation resulted in a study cohort of 220 patients. Interim analysis was planned after 110 patients. The nutritional risk score was assessed preoperatively by a specialized study nurse. Nutritional care was standardized in all patients. Postoperative complications were predefined using the standardized Dindo-Clavien classification. The primary end point was 30-day morbidity. Univariate and multivariate analyses were performed to identify predictors of complications. RESULTS: The study was discontinued due to significant results at interim analysis. A total of 125 patients were included in the analysis from June 2011 to June 2012, and 15 were excluded because of incomplete data. Of the 51 patients at nutritional risk, 38 (74%) presented with at least 1 complication, compared to 28 of 59 controls (47%). Patients at nutritional risk were at a threefold higher risk for complications on univariate and multivariate analysis (OR 3.3, 95% CI 1.3-8.0). Cystectomy was the only other predictor of morbidity (OR 10, 95% CI 2-48). CONCLUSIONS: Patients at nutritional risk are more prone to complications after major urological procedures. Whether this increased morbidity can be reversed by perioperative nutritional support should be studied.
Abstract:
We hypothesized that combining clinical risk factors (CRF) with the heel stiffness index (SI) measured via quantitative ultrasound (QUS) would improve the detection of women at both low and high risk for hip fracture. Categorizing women by risk score improved the specificity of detection to 42.4%, versus 33.8% using CRF alone and 38.4% using the SI alone. This combined CRF-SI score could be used wherever and whenever DXA is not readily accessible. INTRODUCTION AND HYPOTHESIS: Several strategies have been proposed to identify women at high risk for osteoporosis-related fractures; we wanted to investigate whether combining clinical risk factors (CRF) and heel QUS parameters could provide a more accurate tool to identify women at both low and high risk for hip fracture than either CRF or QUS alone. METHODS: We pooled two Caucasian cohorts, EPIDOS and SEMOF, into a large database named "EPISEM", in which 12,064 women, 70 to 100 years old, were analyzed. Among all the CRF available in EPISEM, we used only those that were statistically significant in a Cox multivariate model. We then constructed a risk score by combining the QUS-derived heel stiffness index (SI) and the following seven CRF: patient age, body mass index (BMI), fracture history, fall history, diabetes history, chair-test results, and past estrogen treatment. RESULTS: Using the composite SI-CRF score, 42% of the women who did not report a hip fracture were found to be at low risk at baseline, and 57% of those who subsequently sustained a fracture were at high risk. Using the SI alone, the corresponding percentages were 38% and 52%; using CRF alone, 34% and 53%. The number of subjects in the intermediate group was reduced from 5,400 (including 112 hip fractures) with CRF alone and 5,032 (including 111 hip fractures) with QUS alone to 4,549 (including 100 hip fractures) with the combination score.
CONCLUSIONS: Combining clinical risk factors with heel bone ultrasound appears to correctly identify more women at low risk for hip fracture than either the stiffness index or the CRF alone; it improves the detection of women at both low and high risk.
Abstract:
BACKGROUND: Exposure to combination antiretroviral therapy (cART) can lead to important metabolic changes and an increased risk of coronary heart disease (CHD). Computerized clinical decision support systems have been advocated to improve the management of patients at risk for CHD, but it is unclear whether such systems reduce patients' risk for CHD. METHODS: We conducted a cluster trial within the Swiss HIV Cohort Study (SHCS) of HIV-infected patients, aged 18 years or older, not pregnant and receiving cART for >3 months. We randomized 165 physicians to either guidelines for CHD risk factor management alone or guidelines plus CHD risk profiles. Risk profiles included the Framingham risk score, CHD drug prescriptions and CHD events based on biannual assessments, and were continuously updated by the SHCS data centre and integrated into patient charts by study nurses. Outcome measures were total cholesterol, systolic and diastolic blood pressure, and Framingham risk score. RESULTS: A total of 3,266 patients (80% of those eligible) had a final assessment of the primary outcome at least 12 months after the start of the trial. Mean (95% confidence interval) patient differences where physicians received CHD risk profiles and guidelines, rather than guidelines alone, were: total cholesterol, -0.02 mmol/l (-0.09 to 0.06); systolic blood pressure, -0.4 mmHg (-1.6 to 0.8); diastolic blood pressure, -0.4 mmHg (-1.5 to 0.7); and Framingham 10-year risk score, -0.2% (-0.5 to 0.1). CONCLUSIONS: Systematic computerized routine provision of CHD risk profiles in addition to guidelines does not significantly improve risk factors for CHD in patients on cART.
Abstract:
BACKGROUND: The strong observational association between total homocysteine (tHcy) concentrations and risk of coronary artery disease (CAD) and the null associations in the homocysteine-lowering trials have prompted the need to identify genetic variants associated with homocysteine concentrations and risk of CAD. OBJECTIVE: We tested whether common genetic polymorphisms associated with variation in tHcy are also associated with CAD. DESIGN: We conducted a meta-analysis of genome-wide association studies (GWAS) on tHcy concentrations in 44,147 individuals of European descent. Polymorphisms associated with tHcy (P < 10(-8)) were tested for association with CAD in 31,400 cases and 92,927 controls. RESULTS: Common variants at 13 loci, explaining 5.9% of the variation in tHcy, were associated with tHcy concentrations, including 6 novel loci in or near MMACHC (2.1 × 10(-9)), SLC17A3 (1.0 × 10(-8)), GTPB10 (1.7 × 10(-8)), CUBN (7.5 × 10(-10)), HNF1A (1.2 × 10(-12)), and FUT2 (6.6 × 10(-9)), and variants previously reported at or near the MTHFR, MTR, CPS1, MUT, NOX4, DPEP1, and CBS genes. Individuals within the highest 10% of the genotype risk score (GRS) had 3-μmol/L higher mean tHcy concentrations than did those within the lowest 10% of the GRS (P = 1 × 10(-36)). The GRS was not associated with risk of CAD (OR: 1.01; 95% CI: 0.98, 1.04; P = 0.49). CONCLUSIONS: We identified several novel loci that influence plasma tHcy concentrations. Overall, common genetic variants that influence plasma tHcy concentrations are not associated with risk of CAD in white populations, which further refutes the causal relevance of moderately elevated tHcy concentrations and tHcy-related pathways for CAD.
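A genotype risk score of this kind is, in essence, a weighted count of risk alleles across loci. A minimal sketch follows; the weights and genotypes are placeholders, not the published effect estimates:

```python
def genotype_risk_score(allele_counts, weights):
    """Weighted GRS: sum over loci of the risk-allele count (0/1/2)
    times the per-allele effect on tHcy from external meta-analysis.
    Both inputs below are illustrative placeholders."""
    if len(allele_counts) != len(weights):
        raise ValueError("need one weight per locus")
    return sum(g * w for g, w in zip(allele_counts, weights))


# Three hypothetical loci with per-allele effects of 0.5, 0.3, 0.2 umol/L
print(genotype_risk_score([0, 1, 2], [0.5, 0.3, 0.2]))
```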
Abstract:
AIMS/HYPOTHESIS: To assist in the development of preventive strategies, we studied whether the neighbourhood environment or modifiable behavioural parameters, including cardiorespiratory fitness (CRF) and physical activity (PA), are independently associated with obesity and metabolic risk markers in children. METHODS: We carried out a cross-sectional analysis of 502 randomly selected first and fifth grade urban and rural Swiss schoolchildren with regard to CRF, PA and the neighbourhood (rural vs urban) environment. Outcome measures included BMI, sum of four skinfold thicknesses, homeostasis model assessment of insulin resistance (HOMA-IR) and a standardised clustered metabolic risk score. RESULTS: CRF and PA (especially total PA, but also the time spent engaged in light and in moderate and vigorous intensity PA) were inversely associated with measures of obesity, HOMA-IR and the metabolic risk score, independently of each other, and of sociodemographic and nutritional parameters, media use, sleep duration, BMI and the neighbourhood environment (all p < 0.05). Children living in a rural environment were more physically active and had higher CRF values and reduced HOMA-IR and metabolic risk scores compared with children living in an urban environment (all p < 0.05). These differences in cardiovascular risk factors persisted after adjustment for CRF, total PA and BMI. CONCLUSIONS/INTERPRETATION: Reduced CRF, low PA and an urban environment are independently associated with an increase in metabolic risk markers in children.
Abstract:
We present the most comprehensive comparison to date of the predictive benefit of genetics in addition to currently used clinical variables, using genotype data for 33 single-nucleotide polymorphisms (SNPs) in 1,547 Caucasian men from the placebo arm of the REduction by DUtasteride of prostate Cancer Events (REDUCE®) trial. Moreover, we conducted a detailed comparison of three techniques for incorporating genetics into clinical risk prediction. The first method was a standard logistic regression model, which included separate terms for the clinical covariates and for each of the genetic markers. This approach ignores a substantial amount of external information concerning effect sizes for these Genome Wide Association Study (GWAS)-replicated SNPs. The second and third methods investigated two possible approaches to incorporating meta-analysed external SNP effect estimates: one via a weighted PCa 'risk' score based solely on the meta-analysis estimates, and the other incorporating both the current and prior data via informative priors in a Bayesian logistic regression model. All methods demonstrated a slight improvement in predictive performance upon incorporation of genetics. The two methods that incorporated external information showed the greatest increase in receiver-operating-characteristic AUC, from 0.61 to 0.64. The value of our methods comparison is likely to lie in observations of performance similarities, rather than differences, between three approaches with very different resource requirements. The two methods that included external information performed best, but only marginally so despite substantial differences in complexity.
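The second approach, a weighted risk score built solely from external meta-analysed effect estimates and then entered into the clinical model, can be sketched as follows. All coefficients below are placeholders, not the trial's estimates:

```python
import math


def external_risk_score(genotypes, external_log_or):
    """Weight each SNP's risk-allele count (0/1/2) by its externally
    meta-analysed log odds ratio (placeholder values in this sketch)."""
    return sum(g * b for g, b in zip(genotypes, external_log_or))


def predicted_probability(intercept, clinical_term, risk_score, scale=1.0):
    """Logistic model in which the external score enters as a single
    covariate; `scale` would be estimated from the current data."""
    linear_predictor = intercept + clinical_term + scale * risk_score
    return 1.0 / (1.0 + math.exp(-linear_predictor))


score = external_risk_score([1, 2, 0], [0.10, 0.05, 0.20])
print(predicted_probability(0.0, 0.0, 0.0))  # → 0.5 (null linear predictor)
```

Collapsing the SNPs into one score uses the external data for the relative weights while letting the current data calibrate a single coefficient, which is why it needs far less data than refitting every marker.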
Abstract:
Using a large prospective cohort of over 12,000 women, we determined 2 thresholds (high risk and low risk of hip fracture) to use in a 10-yr hip fracture probability model that we had previously described, a model combining the heel stiffness index measured by quantitative ultrasound (QUS) and a set of easily determined clinical risk factors (CRFs). The model identified a higher percentage of women with fractures as high risk than a previously reported risk score that combined QUS and CRFs. In addition, it categorized women in a way that was quite consistent with the categorization obtained using dual X-ray absorptiometry (DXA) and the World Health Organization (WHO) classification system; the 2 methods identified similar percentages of women with and without fractures in each of their 3 categories, but only partly identified the same women. Nevertheless, combining our composite probability model with DXA in a case-finding strategy will likely further improve the detection of women at high risk of fragility hip fracture. We conclude that the currently proposed model may be of some use as an alternative to the WHO classification criteria for osteoporosis, at least when access to DXA is limited.
Abstract:
BACKGROUND: Current guidelines recommend treating patients according to their absolute cardiovascular disease (CVD) risk. We examined the perception of CVD risk among adults and how it compares with actual CVD risk. METHODS: The perception of CVD risk was assessed by two questions asking about participants' 'risk to get a heart attack or a stroke over the next 10 years', using semiquantitative and quantitative answers, in a population-based survey of 816 individuals aged 40-64 years in the Seychelles (African region). Actual CVD risk was calculated using a standard risk prediction score; 24% of adults aged 40-64 years had elevated risk. RESULTS: Only 59% of individuals could give an estimate of perceived CVD risk based on the semiquantitative question and 31% based on the quantitative question. Reporting a perceived CVD risk was strongly associated with high socio-economic status (SES; odds ratio = 9). Among individuals who reported a perceived CVD risk, 48% overestimated their perceived risk versus their actual risk. Reporting a high perceived CVD risk was associated with treatment for CVD risk factors, older age, low SES, and overweight. Reporting a low perceived CVD risk was associated with male sex, younger age, education, normal BMI, and leisure-time exercise. CONCLUSION: Only half of the individuals could provide an estimate of their perceived CVD risk, and this perception was strongly associated with SES. Individuals under treatment perceived higher CVD risk than nontreated individuals. Further studies should determine how risk-related information can be better conveyed to individuals as a means to improve adherence to healthy lifestyles and/or treatment.
Abstract:
PURPOSE: To improve the risk stratification of patients with rhabdomyosarcoma (RMS) through the use of clinical and molecular biologic data. PATIENTS AND METHODS: Two independent data sets of gene-expression profiling for 124 and 101 patients with RMS were used to derive prognostic gene signatures by using a meta-analysis. These and a previously published metagene signature were evaluated by using cross validation analyses. A combined clinical and molecular risk-stratification scheme that incorporated the PAX3/FOXO1 fusion gene status was derived from 287 patients with RMS and evaluated. RESULTS: We showed that our prognostic gene-expression signature and the one previously published performed well with reproducible and significant effects. However, their effect was reduced when cross validated or tested in independent data and did not add new prognostic information over the fusion gene status, which is simpler to assay. Among nonmetastatic patients, patients who were PAX3/FOXO1 positive had a significantly poorer outcome compared with both alveolar-negative and PAX7/FOXO1-positive patients. Furthermore, a new clinicomolecular risk score that incorporated fusion gene status (negative and PAX3/FOXO1 and PAX7/FOXO1 positive), Intergroup Rhabdomyosarcoma Study TNM stage, and age showed a significant increase in performance over the current risk-stratification scheme. CONCLUSION: Gene signatures can improve current stratification of patients with RMS but will require complex assays to be developed and extensive validation before clinical application. A significant majority of their prognostic value was encapsulated by the fusion gene status. A continuous risk score derived from the combination of clinical parameters with the presence or absence of PAX3/FOXO1 represents a robust approach to improving current risk-adapted therapy for RMS.
Abstract:
BACKGROUND: Persons infected with human immunodeficiency virus (HIV) have increased rates of coronary artery disease (CAD). The relative contribution of genetic background, HIV-related factors, antiretroviral medications, and traditional risk factors to CAD has not been fully evaluated in the setting of HIV infection. METHODS: In the general population, 23 common single-nucleotide polymorphisms (SNPs) were shown to be associated with CAD through genome-wide association analysis. Using the Metabochip, we genotyped 1875 HIV-positive, white individuals enrolled in 24 HIV observational studies, including 571 participants with a first CAD event during the 9-year study period and 1304 controls matched on sex and cohort. RESULTS: A genetic risk score built from 23 CAD-associated SNPs contributed significantly to CAD (P = 2.9 × 10(-4)). In the final multivariable model, participants with an unfavorable genetic background (top genetic score quartile) had a CAD odds ratio (OR) of 1.47 (95% confidence interval [CI], 1.05-2.04). This effect was similar to hypertension (OR = 1.36; 95% CI, 1.06-1.73), hypercholesterolemia (OR = 1.51; 95% CI, 1.16-1.96), diabetes (OR = 1.66; 95% CI, 1.10-2.49), ≥ 1 year lopinavir exposure (OR = 1.36; 95% CI, 1.06-1.73), and current abacavir treatment (OR = 1.56; 95% CI, 1.17-2.07). The effect of the genetic risk score was additive to the effect of nongenetic CAD risk factors, and did not change after adjustment for family history of CAD. CONCLUSIONS: In the setting of HIV infection, the effect of an unfavorable genetic background was similar to traditional CAD risk factors and certain adverse antiretroviral exposures. Genetic testing may provide prognostic information complementary to family history of CAD.
Abstract:
Objectives: Pancreatic surgery remains associated with substantial postoperative morbidity. Efforts are mostly concentrated on decreasing this morbidity, but early detection of patients at risk of complications could be another valuable strategy. A simple score predicting complications after pancreaticoduodenectomy was recently published by Braga et al. The present study aimed to validate this score and discuss its possible clinical implications. Methods: From 2000 to 2015, 245 patients underwent pancreaticoduodenectomy in our department. Postoperative complications were graded according to the Dindo-Clavien classification. The Braga score is based on four parameters: the American Society of Anesthesiologists (ASA) score, pancreatic texture, diameter of the Wirsung duct (main pancreatic duct), and intraoperative blood loss. An overall risk score from 0 to 15 can be calculated for each patient. The discriminant power of the score was calculated using a receiver operating characteristic (ROC) curve. Results: Major complications occurred in 31% of patients, versus 17% in Braga's original report. Pancreatic texture and blood loss were statistically significantly correlated with increased morbidity. The areas under the curve were 0.95 and 0.99 for scores grouped into four risk categories (0-3, 4-7, 8-11, and 12-15) and for individual scores (0-15), respectively. Conclusions: The Braga score thus discriminates well between minor and major complications. Our validation study suggests that this score can be used as a prognostic tool for major complications after pancreaticoduodenectomy. The clinical implications, that is, whether postoperative management strategies should be adapted to the individual patient's risk, remain to be elucidated.
Abstract:
BACKGROUND: Obesity has been shown to be associated with depression, and it has been suggested that higher body mass index (BMI) increases the risk of depression and other common mental disorders. However, the causal relationship remains unclear, and Mendelian randomisation, a form of instrumental variable analysis, has recently been employed to attempt to resolve this issue. AIMS: To investigate whether higher BMI increases the risk of major depression. METHOD: Two instrumental variable analyses were conducted to test the causal relationship between obesity and major depression in RADIANT, a large case-control study of major depression. We used a single nucleotide polymorphism (SNP) in FTO and a genetic risk score (GRS) based on 32 SNPs with well-established associations with BMI. RESULTS: Linear regression analysis, as expected, showed that individuals carrying more risk alleles of FTO or having a higher GRS had a higher BMI. Probit regression suggested that higher BMI is associated with increased risk of major depression. However, our two instrumental variable analyses did not support a causal relationship between higher BMI and major depression (FTO genotype: coefficient -0.03, 95% CI -0.18 to 0.13, P = 0.73; GRS: coefficient -0.02, 95% CI -0.11 to 0.07, P = 0.62). CONCLUSIONS: Our instrumental variable analyses did not support a causal relationship between higher BMI and major depression. The positive associations of higher BMI with major depression in the probit regression analyses might be explained by reverse causality and/or residual confounding.
Abstract:
OBJECTIVES: Pancreatic surgery remains associated with important morbidity. Efforts are most commonly concentrated on decreasing postoperative morbidity, but early detection of patients at risk could be another valuable strategy. A simple prognostic score has recently been published. This study aimed to validate this score and discuss possible clinical implications. METHODS: From 2000 to 2012, 245 patients underwent a pancreaticoduodenectomy. Complications were graded according to the Dindo-Clavien Classification. The Braga score is based on American Society of Anesthesiologists score, pancreatic texture, Wirsung duct diameter, and blood loss. An overall risk score (0-15) can be calculated for each patient. Score discriminant power was calculated using a receiver operating characteristic curve. RESULTS: Major complications occurred in 31% of patients compared with 17% in Braga's data. Pancreatic texture and blood loss were independently statistically significant for increased morbidity. Areas under the curve were 0.95 and 0.99 for 4-risk categories and for individual scores, respectively. CONCLUSIONS: The Braga score discriminates well between minor and major complications. Our validation suggests that it can be used as a prognostic tool for major complications after pancreaticoduodenectomy. The clinical implications, that is, whether postoperative treatment strategies should be adapted according to the patient's individual risk, remain to be elucidated.
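The discriminant power reported here is the area under the ROC curve; for an ordinal score like the Braga score, this equals the probability that a randomly chosen patient with a major complication scores higher than a randomly chosen patient without one, with ties counting one half. A minimal sketch using made-up scores:

```python
def roc_auc(scores_with_event, scores_without_event):
    """Mann-Whitney form of the ROC AUC for an ordinal risk score.
    Compares every event/non-event pair; ties contribute 0.5."""
    pairs = 0
    concordant = 0.0
    for s1 in scores_with_event:
        for s0 in scores_without_event:
            pairs += 1
            if s1 > s0:
                concordant += 1.0
            elif s1 == s0:
                concordant += 0.5
    return concordant / pairs


# Made-up Braga scores: perfect separation gives an AUC of 1.0
print(roc_auc([9, 12, 14], [2, 4, 6]))  # → 1.0
```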