204 results for predictive accuracy

at Université de Lausanne, Switzerland


Relevance: 100.00%

Abstract:

BACKGROUND: Prevention of cardiovascular disease (CVD) at the individual level should rely on the assessment of absolute risk using population-specific risk tables. OBJECTIVE: To compare the predictive accuracy of the original and the calibrated SCORE functions regarding 10-year cardiovascular risk in Switzerland. DESIGN: Cross-sectional, population-based study (5773 participants aged 35-74 years). METHODS: The SCORE equation for low-risk countries was calibrated based on the Swiss CVD mortality rates and on the CVD risk factor levels from the study sample. The predicted number of CVD deaths after a 10-year period was computed from the original and the calibrated equations and from the observed cardiovascular mortality for 2003. RESULTS: According to the original and calibrated functions, 16.3% and 15.8% of men and 8.2% and 8.9% of women, respectively, had a 10-year CVD risk ≥5%. The concordance correlation coefficient between the two functions was 0.951 for men and 0.948 for women (both P<0.001). Both risk functions adequately predicted the 10-year cumulative number of CVD deaths: in men, 71 (original) and 74 (calibrated) deaths versus 73 deaths based on the observed CVD mortality rates; in women, 44 (original), 45 (calibrated) and 45 (CVD mortality rates), respectively. Compared with the original function, the calibrated function classified more women and fewer men as high risk. Moreover, the calibrated function gave better risk estimates among participants aged over 65 years. CONCLUSION: The original SCORE function adequately predicts CVD death in Switzerland, particularly for individuals aged less than 65 years. The calibrated function provides more reliable estimates for older individuals.
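The concordance correlation coefficients reported above (0.951 for men, 0.948 for women) measure agreement between the two risk functions, not merely correlation: Lin's coefficient penalizes both location and scale shifts. A minimal sketch, assuming the predicted risks from each function are available as arrays (variable names are illustrative):

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between two sets of
    predicted risks: 2*cov / (var_x + var_y + (mean_x - mean_y)**2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                 # population variances
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Identical predictions give 1.0; a constant offset lowers the CCC
# even though the Pearson correlation would stay at 1.
```

Unlike Pearson's r, the CCC drops below 1 whenever one function systematically over- or underestimates the other, which is exactly the property needed when comparing an original and a calibrated risk equation.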

Relevance: 100.00%

Abstract:

OBJECTIVE: To compare the predictive accuracy of the original and recalibrated Framingham risk functions against current coronary heart disease (CHD) morbidity and mortality data from the Swiss population. METHODS: Data from the CoLaus study, a cross-sectional, population-based study conducted between 2003 and 2006 on 5,773 participants aged 35-74 years without CHD, were used to recalibrate the Framingham risk function. The predicted numbers of events from each risk function were compared with those derived from local MONICA incidence rates and official mortality data from Switzerland. RESULTS: With the original risk function, 57.3%, 21.2%, 16.4% and 5.1% of men and 94.9%, 3.8%, 1.2% and 0.1% of women were at very low (<6%), low (6-10%), intermediate (10-20%) and high (>20%) risk, respectively. With the recalibrated risk function, the corresponding values were 84.7%, 10.3%, 4.3% and 0.6% in men and 99.5%, 0.4%, 0.0% and 0.1% in women. The number of CHD events over 10 years predicted by the original Framingham risk function was 2- to 3-fold higher than that predicted by mortality + case fatality or by MONICA incidence rates (men: 191 vs. 92 and 51 events, respectively). The recalibrated risk function provided more reasonable estimates, albeit slightly overestimated (92 events, 5th-95th percentile: 26-223 events); sensitivity analyses showed that the magnitude of the overestimation was between 0.4 and 2.2 in men, and 0.7 and 3.3 in women. CONCLUSION: The recalibrated Framingham risk function provides a reasonable alternative to assess CHD risk in men, but not in women.
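Recalibration of a Framingham-type function typically keeps the published regression coefficients but substitutes the local baseline survival and the local mean risk-factor levels. A hedged sketch of that mechanism (the numeric arguments below are placeholders, not the actual Framingham parameters):

```python
import math

def ten_year_risk(lp, lp_bar, s0):
    """Cox-model 10-year risk: 1 - S0(10) ** exp(lp - lp_bar).

    lp     : individual's linear predictor (sum of beta * risk factor)
    lp_bar : population mean linear predictor
    s0     : 10-year baseline survival for that population
    Recalibration swaps in local values of lp_bar and s0 while the
    coefficients inside lp stay those of the original function.
    """
    return 1.0 - s0 ** math.exp(lp - lp_bar)
```

An individual at the population mean (lp == lp_bar) gets exactly the average risk 1 - s0, which is why replacing s0 with the (lower) Swiss baseline pulls the whole predicted-risk distribution down, as seen in the recalibrated category proportions above.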

Relevance: 70.00%

Abstract:

Purpose: To assess the global cardiovascular (CV) risk of an individual, several scores have been developed. However, their accuracy and comparability need to be evaluated in populations other than those from which they were derived. The aim of this study was to compare the predictive accuracy of 4 CV risk scores using data from a large population-based cohort. Methods: Prospective cohort study including 4980 participants (2698 women, mean age ± SD: 52.7±10.8 years) in Lausanne, Switzerland, followed for an average of 5.5 years (range 0.2-8.5). Two end points were assessed: 1) coronary heart disease (CHD) and 2) CV disease (CVD). Four risk scores were compared: the original and recalibrated Framingham coronary heart disease scores (1998 and 2001); the original PROCAM score (2002) and its recalibrated version for Switzerland (IAS-AGLA); and the Reynolds risk score. Discrimination was assessed using Harrell's C statistic, model fitness using Akaike's information criterion (AIC) and calibration using a pseudo Hosmer-Lemeshow test. The sensitivity, specificity and corresponding 95% confidence intervals were assessed for each risk score using the highest risk category (≥20% at 10 years) as the "positive" test. Results: The recalibrated and original 1998 and original 2001 Framingham scores show better discrimination (>0.720) and model fitness (low AIC) for CHD and CVD. All 4 scores are correctly calibrated (Chi2<20). The recalibrated Framingham 1998 score has the best sensitivities, 37.8% and 40.4%, for CHD and CVD, respectively. All scores present specificities >90%. The Framingham 1998, PROCAM and IAS-AGLA scores place the greatest proportion of subjects (>200) in the high-risk category, whereas the recalibrated Framingham 2001 and Reynolds scores include ≤44 subjects. Conclusion: In this cohort, we observed variations in accuracy between risk scores, with the original Framingham 2001 score demonstrating the best compromise between accuracy and a limited selection of subjects in the highest risk category.
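Harrell's C statistic used above estimates, over all usable pairs of participants, how often the higher risk score belongs to the person whose event occurs first. A simple O(n²) sketch under the usual pair-usability rule (the earlier time must be an observed event, not a censoring):

```python
def harrells_c(time, event, score):
    """Harrell's concordance index for survival data.

    A pair (i, j) is usable when time[i] < time[j] and subject i had an
    event. It is concordant when the earlier-event subject also has the
    higher risk score; tied scores count as 0.5.
    """
    conc = ties = usable = 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if time[i] < time[j] and event[i]:
                usable += 1
                if score[i] > score[j]:
                    conc += 1
                elif score[i] == score[j]:
                    ties += 1
    return (conc + 0.5 * ties) / usable
```

A value of 0.5 is chance-level ranking and 1.0 is perfect ranking, so the >0.720 discrimination reported above means the Framingham scores order event-free and event participants correctly in roughly three out of four usable pairs.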
We advocate that national guidelines, based on independently validated data, take into account calibrated CV risk scores for their respective countries.

Relevance: 60.00%

Abstract:

BACKGROUND: Clinical scores may help physicians to better assess the individual risk/benefit of oral anticoagulant therapy. We aimed to externally validate and compare the prognostic performance of 7 clinical prediction scores for major bleeding events during oral anticoagulation therapy. METHODS: We followed 515 adult patients taking oral anticoagulants and recorded the first major bleeding event over a 12-month follow-up period. The performance of each score in predicting the risk of major bleeding, and that of physicians' subjective assessments of bleeding risk, was compared using the C statistic. RESULTS: The cumulative incidence of a first major bleeding event during follow-up was 6.8% (35/515). According to the 7 scoring systems, the proportions of major bleeding ranged from 3.0% to 5.7% for low-risk, 6.7% to 9.9% for intermediate-risk, and 7.4% to 15.4% for high-risk patients. The overall predictive accuracy of the scores was poor, with C statistics ranging from 0.54 to 0.61 and not significantly different from each other (P=.84). Only the Anticoagulation and Risk Factors in Atrial Fibrillation score performed slightly better than would be expected by chance (C statistic, 0.61; 95% confidence interval, 0.52-0.70). The performance of the scores was not statistically better than physicians' subjective risk assessments (C statistic, 0.55; P=.94). CONCLUSION: The performance of 7 clinical scoring systems in predicting major bleeding events in patients receiving oral anticoagulation therapy was poor and not better than physicians' subjective assessments.

Relevance: 60.00%

Abstract:

Although both inflammatory and atherosclerosis markers have been associated with coronary heart disease (CHD) risk, data directly comparing their predictive value are limited. The authors compared the value of 2 atherosclerosis markers (ankle-arm index (AAI) and aortic pulse wave velocity (aPWV)) and 3 inflammatory markers (C-reactive protein (CRP), interleukin-6 (IL-6), and tumor necrosis factor-alpha (TNF-alpha)) in predicting CHD events. Among 2,191 adults aged 70-79 years at baseline (1997-1998) from the Health, Aging, and Body Composition Study cohort, the authors examined adjudicated incident myocardial infarction or CHD death ("hard" events) and "hard" events plus hospitalization for angina or coronary revascularization (total CHD events). During 8 years of follow-up between 1997-1998 and June 2007, 351 participants developed total CHD events (197 "hard" events). IL-6 (highest quartile vs. lowest: hazard ratio = 1.82, 95% confidence interval: 1.33, 2.49; P-trend < 0.001) and AAI (AAI ≤0.9 vs. AAI 1.01-1.30: hazard ratio = 1.57, 95% confidence interval: 1.14, 2.18) predicted CHD events above traditional risk factors and modestly improved global measures of predictive accuracy. CRP, TNF-alpha, and aPWV had weaker associations. IL-6 and AAI accurately reclassified 6.6% and 3.3% of participants, respectively (both P ≤ 0.05). Results were similar for "hard" CHD, with higher reclassification rates for AAI. IL-6 and AAI are associated with future CHD events beyond traditional risk factors and modestly improve risk prediction in older adults.
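The reclassification percentages above count participants whom the added marker moves to a risk category that better matches their observed outcome. A deliberately simplified sketch of that bookkeeping (a full net reclassification analysis would also subtract incorrect moves; categories are coded as ordered integers, and the function name is illustrative):

```python
def correctly_reclassified(base_cat, new_cat, event):
    """Share of participants moved in the 'right' direction when a
    marker is added to the model: participants with an event shifted to
    a higher risk category, or event-free participants shifted lower.
    Categories are ordered integers (0 = lowest risk).
    """
    n = len(event)
    good = sum(1 for b, c, e in zip(base_cat, new_cat, event)
               if (e == 1 and c > b) or (e == 0 and c < b))
    return good / n
```

Applied per marker, this is the kind of tally behind the 6.6% (IL-6) and 3.3% (AAI) figures, although the published analysis uses the formal net reclassification framework rather than this one-sided count.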

Relevance: 60.00%

Abstract:

The distribution of plants along environmental gradients is constrained by abiotic and biotic factors. Cumulative evidence attests to the impact of biotic factors on plant distributions, but only a few studies discuss the role of belowground communities. Soil fungi, in particular, are thought to play an important role in how plant species assemble locally into communities. We first review existing evidence, and then test the effect of the number of soil fungal operational taxonomic units (OTUs) on plant species distributions using a recently collected dataset of plant and metagenomic information on soil fungi in the Western Swiss Alps. Using species distribution models (SDMs), we investigated whether the distribution of individual plant species is correlated with the number of OTUs of two important soil fungal classes known to interact with plants: the Glomeromycetes, which are obligate symbionts of plants, and the Agaricomycetes, which may be facultative plant symbionts, pathogens, or wood decayers. We show that including fungal richness information in the models of plant species distributions improves predictive accuracy. The number of fungal OTUs is especially correlated with the distribution of high-elevation plant species. We suggest that high-elevation soils show greater variation in fungal assemblages, which may in turn impact plant turnover among communities. We finally discuss how to move beyond correlative analyses, through the design of field experiments manipulating plant and fungal communities along environmental gradients.

Relevance: 60.00%

Abstract:

The Early Smoking Experience (ESE) questionnaire is the most widely used questionnaire to assess initial subjective experiences of cigarette smoking. However, its factor structure is not clearly defined and can be perceived from two main standpoints: valence, i.e. positive and negative experiences, and sensitivity to nicotine. This article explores the ESE's factor structure and determines which standpoint was more relevant. It compares two groups of young Swiss men (German- and French-speaking). We examined baseline data on 3,368 tobacco users from a representative sample in the ongoing Cohort Study on Substance Use Risk Factors (C-SURF). ESE, continued tobacco use, weekly smoking and nicotine dependence were assessed. Exploratory structural equation modeling (ESEM) and structural equation modeling (SEM) were performed. ESEM clearly distinguished positive from negative experiences, but negative experiences were divided into experiences related to dizziness and experiences related to irritation. SEM underlined the reinforcing effects of positive experiences, but also of experiences related to dizziness, on nicotine dependence and weekly smoking. The ESE structure with the best predictive accuracy for smoking behavior was a compromise between the valence and sensitivity standpoints, which showed clinical relevance.

Relevance: 60.00%

Abstract:

Community-level patterns of functional traits relate to community assembly and ecosystem functioning. By modelling the changes of different indices describing such patterns - trait means, extremes and diversity in communities - as a function of abiotic gradients, we could understand their drivers and build projections of the impact of global change on the functional components of biodiversity. We used five plant functional traits (vegetative height, specific leaf area, leaf dry matter content, leaf nitrogen content and seed mass) and non-woody vegetation plots to model several indices depicting community-level patterns of functional traits from a set of abiotic environmental variables (topographic, climatic and edaphic) over contrasting environmental conditions in a mountainous landscape. We performed a variation partitioning analysis to assess the relative importance of these variables for predicting patterns of functional traits in communities, and projected the best models under several climate change scenarios to examine future potential changes in vegetation functional properties. Not all indices of trait patterns within communities could be modelled with the same level of accuracy: the models for mean and extreme values of functional traits provided substantially better predictive accuracy than the models calibrated for diversity indices. Topographic and climatic factors were more important predictors of functional trait patterns within communities than edaphic predictors. Overall, model projections forecast an increase in mean vegetation height and in mean specific leaf area following climate warming. This trend was most marked at mid elevations, particularly between 1000 and 2000 m a.s.l. With this study we showed that topographic, climatic and edaphic variables can successfully model descriptors of community-level patterns of plant functional traits such as mean and extreme trait values. However, which factors determine the diversity of functional traits in plant communities remains unclear and requires further investigation.

Relevance: 60.00%

Abstract:

Colorectal cancer (CRC) is the second leading cause of cancer-related death in developed countries. Early detection of CRC leads to decreased CRC mortality. A blood-based CRC screening test is highly desirable due to limited invasiveness and a high acceptance rate among patients compared to the currently used fecal occult blood testing and colonoscopy. Here we describe the discovery and validation of a 29-gene panel in peripheral blood mononuclear cells (PBMC) for the detection of CRC and adenomatous polyps (AP). Blood samples were prospectively collected from a multicenter, case-control clinical study. First, we profiled 93 samples with 667 candidate and 3 reference genes by high-throughput real-time PCR (OpenArray system). After analysis, 160 genes were retained and tested again on 51 additional samples. Low-expressed and unstable genes were discarded, resulting in a final dataset of 144 samples profiled with 140 genes. To define which genes, alone or in combination, had the highest potential to discriminate AP and/or CRC from controls, data were analyzed by a combination of univariate and multivariate methods. A list of 29 potentially discriminant genes was compiled and evaluated for its predictive accuracy by penalized logistic regression and bootstrap. This method discriminated AP >1 cm and CRC from controls with a sensitivity of 59% and 75%, respectively, with 91% specificity. The behavior of the 29-gene panel was validated with a LightCycler 480 real-time PCR platform, commonly adopted by clinical laboratories. In this work we identified a 29-gene panel expressed in PBMC that can be used for developing a novel minimally invasive test for accurate detection of AP and CRC using a standard real-time PCR platform.
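The panel's sensitivity and specificity above were evaluated by penalized logistic regression and the bootstrap. The penalized fit itself needs a statistics library, but the bootstrap half can be sketched in plain Python: resample indices with replacement and recompute the metric on each resample (function and variable names are illustrative):

```python
import random

def sensitivity(y_true, y_pred):
    """True-positive rate: TP / (TP + FN)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn)

def bootstrap_ci(y_true, y_pred, metric, n_boot=1000, alpha=0.05, seed=42):
    """Percentile bootstrap confidence interval for a classification metric."""
    rng = random.Random(seed)
    n = len(y_true)
    stats = []
    while len(stats) < n_boot:
        idx = [rng.randrange(n) for _ in range(n)]   # resample with replacement
        yt = [y_true[i] for i in idx]
        yp = [y_pred[i] for i in idx]
        if any(yt):                                  # metric undefined without positives
            stats.append(metric(yt, yp))
    stats.sort()
    lo = stats[int(alpha / 2 * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

The same resampling loop works for specificity or any other scalar metric; in the published analysis the bootstrap additionally guards against optimism from selecting the 29 genes on the same data used to evaluate them.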

Relevance: 60.00%

Abstract:

BACKGROUND: After cardiac surgery with cardiopulmonary bypass (CPB), acquired coagulopathy often leads to post-CPB bleeding. Though multifactorial in origin, this coagulopathy is often aggravated by deficient fibrinogen levels. OBJECTIVE: To assess whether laboratory and thrombelastometric testing on CPB can predict plasma fibrinogen immediately after CPB weaning. PATIENTS/METHODS: This prospective study in 110 patients undergoing major cardiovascular surgery at risk of post-CPB bleeding compared fibrinogen level (Clauss method) and fibrinogen function (fibrin-specific thrombelastometry) to assess how well their course early after termination of CPB can be predicted. Linear regression analysis and receiver operating characteristics were used to determine correlations and predictive accuracy. RESULTS: Quantitative estimation of post-CPB Clauss fibrinogen from on-CPB fibrinogen was feasible with small bias (+0.19 g/l), but with poor precision and a percentage of error >30%. A clinically useful alternative approach was developed by using on-CPB A10 to predict a Clauss fibrinogen range of interest instead of a discrete level. An on-CPB A10 ≤10 mm identified patients with a post-CPB Clauss fibrinogen of ≤1.5 g/l with a sensitivity of 0.99 and a positive predictive value of 0.60; it also identified those without a post-CPB Clauss fibrinogen <2.0 g/l with a specificity of 0.83. CONCLUSIONS: When measured on CPB prior to weaning, a FIBTEM A10 ≤10 mm is an early alert for post-CPB fibrinogen levels below or within the substitution range (1.5-2.0 g/l) recommended in case of post-CPB coagulopathic bleeding. This helps to minimize the delay to data-based hemostatic management after weaning from CPB.

Relevance: 60.00%

Abstract:

PURPOSE: The purpose of our study was to assess whether a model combining clinical factors, MR imaging features, and genomics would better predict overall survival of patients with glioblastoma (GBM) than either individual data type. METHODS: The study was conducted leveraging The Cancer Genome Atlas (TCGA) effort supported by the National Institutes of Health. Six neuroradiologists reviewed MRI images from The Cancer Imaging Archive (http://cancerimagingarchive.net) of 102 GBM patients using the VASARI scoring system. The patients' clinical and genetic data were obtained from the TCGA website (http://www.cancergenome.nih.gov/). Patient outcome was measured in terms of overall survival time. The association between different categories of biomarkers and survival was evaluated using Cox analysis. RESULTS: The features that were significantly associated with survival were: (1) clinical factors: chemotherapy; (2) imaging: proportion of tumor contrast enhancement on MRI; and (3) genomics: HRAS copy number variation. The combination of these three biomarkers resulted in an incremental increase in the strength of prediction of survival, with the model that included clinical, imaging, and genetic variables having the highest predictive accuracy (area under the curve 0.679±0.068, Akaike's information criterion 566.7, P<0.001). CONCLUSION: A combination of clinical factors, imaging features, and HRAS copy number variation best predicts survival of patients with GBM.

Relevance: 30.00%

Abstract:

The purpose of this study was to determine the prognostic accuracy of perfusion computed tomography (CT), performed at the time of emergency room admission, in acute stroke patients. Accuracy was determined by comparison of perfusion CT with delayed magnetic resonance (MR) and by monitoring the evolution of each patient's clinical condition. Twenty-two acute stroke patients underwent perfusion CT covering four contiguous 10-mm slices on admission, as well as delayed MR, performed a median of 3 days after emergency room admission. Eight were treated with thrombolytic agents. Infarct size on the admission perfusion CT was compared with that on the delayed diffusion-weighted (DWI)-MR, chosen as the gold standard. Delayed magnetic resonance angiography and perfusion-weighted MR were used to detect recanalization. A potential recuperation ratio, defined as PRR = penumbra size/(penumbra size + infarct size) on the admission perfusion CT, was compared with the evolution in each patient's clinical condition, defined by the National Institutes of Health Stroke Scale (NIHSS). In the 8 cases with arterial recanalization, the size of the cerebral infarct on the delayed DWI-MR was larger than or equal to that of the infarct on the admission perfusion CT, but smaller than or equal to that of the ischemic lesion on the admission perfusion CT; and the observed improvement in the NIHSS correlated with the PRR (correlation coefficient = 0.833). In the 14 cases with persistent arterial occlusion, infarct size on the delayed DWI-MR correlated with ischemic lesion size on the admission perfusion CT (r = 0.958). In all 22 patients, the admission NIHSS correlated with the size of the ischemic area on the admission perfusion CT (r = 0.627). Based on these findings, we conclude that perfusion CT allows accurate prediction of the final infarct size and evaluation of clinical prognosis for acute stroke patients at the time of emergency evaluation.
It may also provide information about the extent of the penumbra. Perfusion CT could therefore be a valuable tool in the early management of acute stroke patients.

Relevance: 30.00%

Abstract:

Predictive species distribution modelling (SDM) has become an essential tool in biodiversity conservation and management. The choice of grain size (resolution) of environmental layers used in modelling is one important factor that may affect predictions. We applied 10 distinct modelling techniques to presence-only data for 50 species in five different regions, to test whether: (1) a 10-fold coarsening of resolution affects the predictive performance of SDMs, and (2) any observed effects depend on the type of region, modelling technique, or species considered. Results show that a 10-fold change in grain size does not severely affect predictions from species distribution models. The overall trend is towards degradation of model performance, but improvement can also be observed. Changing grain size does not equally affect models across regions, techniques, and species types. The strongest effects are on regions and species types, with tree species in the data sets (regions) with the highest locational accuracy being most affected. Changing grain size had little influence on the ranking of techniques: boosted regression trees remain best at both resolutions. The number of occurrences used for model training had an important effect, with larger sample sizes resulting in better models, which tended to be more sensitive to grain. The effect of grain change was only noticeable for models reaching sufficient performance and/or with initial data whose intrinsic error is smaller than the coarser grain size.
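The 10-fold coarsening tested above amounts to block-aggregating each environmental raster before refitting the models. A minimal numpy sketch using the block mean (other aggregation rules, such as the mode for categorical layers, are equally defensible; the function name is illustrative):

```python
import numpy as np

def coarsen(grid, factor):
    """Aggregate a 2-D environmental raster to a coarser grain by block
    averaging, e.g. factor=10 for a 10-fold coarsening of resolution.
    Edge rows/columns that do not fill a complete block are trimmed."""
    r, c = grid.shape
    r2, c2 = r - r % factor, c - c % factor
    g = grid[:r2, :c2]
    return g.reshape(r2 // factor, factor,
                     c2 // factor, factor).mean(axis=(1, 3))
```

Because presence records keep their coordinates, only the predictor values sampled at those coordinates change; that is why the effect of coarsening interacts with the locational accuracy of the occurrence data, as the results above describe.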

Relevance: 30.00%

Abstract:

BACKGROUND: Co-morbidity information derived from administrative data needs to be validated to allow its regular use. We assessed the evolution in the accuracy of coding for Charlson and Elixhauser co-morbidities at three time points over a 5-year period, following the introduction of International Classification of Diseases, 10th Revision (ICD-10) coding of hospital discharges. METHODS: Cross-sectional time trend evaluation study of coding accuracy using hospital chart data of 3,499 randomly selected patients who were discharged in 1999, 2001 and 2003 from two teaching and one non-teaching hospital in Switzerland. We measured sensitivity, positive predictive values and kappa values for agreement between administrative data coded with ICD-10 and chart data as the 'reference standard' for recording 36 co-morbidities. RESULTS: For the 17 Charlson co-morbidities, the sensitivity, median (min-max), was 36.5% (17.4-64.1) in 1999, 42.5% (22.2-64.6) in 2001 and 42.8% (8.4-75.6) in 2003. For the 29 Elixhauser co-morbidities, the sensitivity was 34.2% (1.9-64.1) in 1999, 38.6% (10.5-66.5) in 2001 and 41.6% (5.1-76.5) in 2003. Between 1999 and 2003, sensitivity estimates increased for 30 co-morbidities and decreased for 6. The increase in sensitivities was statistically significant for six conditions and the decrease for one. Kappa values increased for 29 co-morbidities and decreased for seven. CONCLUSIONS: The accuracy of administrative data in recording clinical conditions improved slightly between 1999 and 2003. These findings are relevant to all jurisdictions introducing new coding systems, because they demonstrate a phenomenon of improved administrative data accuracy that may reflect a coding 'learning curve' with the new coding system.
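The agreement measure used above, kappa, discounts the agreement expected by chance between administrative coding and chart review. For a single binary co-morbidity it reduces to a few lines (chart review taken as the reference standard):

```python
def kappa(coded, chart):
    """Cohen's kappa for binary agreement between administrative coding
    (coded) and chart review (chart): (p_obs - p_chance) / (1 - p_chance).
    Both inputs are 0/1 lists of the same length."""
    n = len(coded)
    p_obs = sum(1 for a, b in zip(coded, chart) if a == b) / n
    pa = sum(coded) / n            # prevalence in administrative data
    pb = sum(chart) / n            # prevalence in chart data
    p_chance = pa * pb + (1 - pa) * (1 - pb)
    return (p_obs - p_chance) / (1 - p_chance)
```

Kappa is 1.0 for perfect agreement and 0.0 when agreement is no better than chance, which is why it complements the raw sensitivities reported above: a rare co-morbidity can have high percent agreement yet near-zero kappa. (The formula is undefined when p_chance is 1, i.e. when one class is entirely absent from both sources.)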

Relevance: 30.00%

Abstract:

PURPOSE: EEG and somatosensory evoked potentials are highly predictive of poor outcome after cardiac arrest; however, their accuracy for predicting good recovery is low. We evaluated whether adding an automated mismatch negativity-based auditory discrimination paradigm (ADP) to EEG and somatosensory evoked potentials improves prediction of awakening. METHODS: EEG and ADP were prospectively recorded in 30 adults during therapeutic hypothermia and in normothermia. We studied the progression of auditory discrimination on single-trial multivariate analyses from therapeutic hypothermia to normothermia, and its correlation with outcome at 3 months, assessed with cerebral performance categories (CPC). RESULTS: At 3 months, 18 of 30 patients (60%) survived; 5 had severe neurologic impairment (CPC = 3) and 13 had good recovery (CPC = 1-2). All 10 subjects showing improvement of auditory discrimination from therapeutic hypothermia to normothermia regained consciousness: ADP was 100% predictive of awakening. The addition of ADP significantly improved mortality prediction (area under the curve, 0.77 for the standard model including clinical examination, EEG and somatosensory evoked potentials, versus 0.86 after adding ADP; P = 0.02). CONCLUSIONS: This automated ADP significantly improves early coma prognostic accuracy after cardiac arrest and therapeutic hypothermia. The progression of auditory discrimination is strongly predictive of favorable recovery and appears complementary to existing prognosticators of poor outcome. Before routine implementation, validation in larger cohorts is warranted.