958 results for Score statistic


Relevance: 70.00%

Abstract:

We present simple matrix formulae for corrected score statistics in symmetric nonlinear regression models. The corrected score statistics follow a chi-squared distribution more closely than the classical score statistic. Our simulation results indicate that the corrected score tests display smaller size distortions than the original score test. We also compare the sizes and powers of the corrected score tests with those of bootstrap-based score tests.
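As context, the classical (uncorrected) score statistic that the corrections target can be illustrated in the simplest setting, a one-parameter binomial model. A minimal sketch; the function name and example numbers are ours, not the paper's matrix formulae:

```python
def score_statistic_binomial(successes, n, p0):
    """Classical score statistic U(p0)^2 / I(p0) for H0: p = p0 in a
    binomial model; asymptotically chi-squared with 1 degree of freedom."""
    p_hat = successes / n
    score = n * (p_hat - p0) / (p0 * (1 - p0))  # score function U(p0)
    info = n / (p0 * (1 - p0))                  # Fisher information I(p0)
    return score ** 2 / info

# 60 successes in 100 trials under H0: p = 0.5 gives a statistic of 4.0,
# to be compared against a chi-squared(1) critical value
stat = score_statistic_binomial(60, 100, 0.5)
```

This reduces to n(p̂ − p0)² / (p0(1 − p0)), the familiar one-sample proportion test, without the finite-sample corrections the paper derives.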

Relevance: 60.00%

Abstract:

We introduce a technique for assessing the diurnal development of convective storm systems based on outgoing longwave radiation fields. Using the size distribution of the storms measured from a series of images, we generate an array in the lengthscale-time domain based on the standard score statistic. It demonstrates succinctly the size evolution of storms as well as the dissipation kinematics. It also provides evidence related to the temperature evolution of the cloud tops. We apply this approach to a test case comparing observations made by the Geostationary Earth Radiation Budget instrument to output from the Met Office Unified Model run at two resolutions. The 12 km resolution model produces peak convective activity on all lengthscales significantly earlier in the day than shown by the observations and shows no evidence of storms growing in size. The 4 km resolution model shows realistic timing and growth evolution, although the dissipation mechanism still differs from the observed data.
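The standard score statistic underlying such a lengthscale-time array is an ordinary standardization along each lengthscale bin. A toy numpy sketch with hypothetical storm counts (the values are illustrative, not GERB data):

```python
import numpy as np

# Hypothetical counts of storms: rows = lengthscale bins, columns = times of day
counts = np.array([[12.0, 30.0, 55.0, 20.0],
                   [ 5.0, 18.0, 40.0, 10.0],
                   [ 1.0,  6.0, 15.0,  3.0]])

# Standard score (z) of each time step relative to its own lengthscale bin;
# the peak in each row shows when storms of that size are most numerous
z = (counts - counts.mean(axis=1, keepdims=True)) / counts.std(axis=1, keepdims=True)
```

Each row of `z` then has mean 0 and standard deviation 1, so size classes with very different absolute counts become directly comparable across the diurnal cycle.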

Relevance: 60.00%

Abstract:

This paper presents a simple Bayesian approach to sample size determination in clinical trials. It is required that the trial should be large enough to ensure that the data collected will provide convincing evidence either that an experimental treatment is better than a control or that it fails to improve upon control by some clinically relevant difference. The method resembles standard frequentist formulations of the problem, and indeed in certain circumstances involving 'non-informative' prior information it leads to identical answers. In particular, unlike many Bayesian approaches to sample size determination, use is made of an alternative hypothesis that an experimental treatment is better than a control treatment by some specified magnitude. The approach is introduced in the context of testing whether a single stream of binary observations is consistent with a given success rate p0. Next the case of comparing two independent streams of normally distributed responses is considered, first under the assumption that their common variance is known and then for unknown variance. Finally, the more general situation in which a large sample is to be collected and analysed according to the asymptotic properties of the score statistic is explored. Copyright (C) 2007 John Wiley & Sons, Ltd.
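For the known-variance two-sample case, the abstract notes that a 'non-informative' prior reproduces the standard frequentist answer. A sketch of that frequentist per-group sample-size formula (the function name and default arguments are ours; this is not the paper's Bayesian derivation):

```python
import math
from statistics import NormalDist

def sample_size_two_means(delta, sigma, alpha=0.05, power=0.9):
    """Per-group n to detect a difference delta between two normal means
    with known common standard deviation sigma:
    n = 2 * ((z_{1-alpha/2} + z_{power}) * sigma / delta)^2."""
    z = NormalDist().inv_cdf
    n = 2 * ((z(1 - alpha / 2) + z(power)) * sigma / delta) ** 2
    return math.ceil(n)

# Detecting a one-standard-deviation difference at two-sided 5% alpha, 90% power
n_per_group = sample_size_two_means(delta=1.0, sigma=1.0)
```

Halving the detectable difference roughly quadruples the required sample size, since n scales with (sigma/delta)².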

Relevance: 60.00%

Abstract:

Considering the Wald, score, and likelihood ratio asymptotic test statistics, we analyze a multivariate null intercept errors-in-variables regression model, where the explanatory and the response variables are subject to measurement errors, and a possible structure of dependency between the measurements taken within the same individual is incorporated, representing a longitudinal structure. This model was proposed by Aoki et al. (2003b) and analyzed under the Bayesian approach. In this article, considering the classical approach, we analyze the asymptotic test statistics and present a simulation study to compare the behavior of the three test statistics for different sample sizes, parameter values and nominal levels of the test. Closed-form expressions for the score function and the Fisher information matrix are also presented. We consider two real numerical illustrations, the odontological data set from Hadgu and Koch (1999), and a quality control data set.

Relevance: 60.00%

Abstract:

The aim of this paper is to develop a flexible model for analysis of quantitative trait loci (QTL) in outbred line crosses, which includes both additive and dominance effects. Our flexible intercross analysis (FIA) model accounts for QTL that are not fixed within founder lines and is based on the variance component framework. Genome scans with FIA are performed using a score statistic, which does not require variance component estimation. RESULTS: Simulations of a pedigree with 800 F2 individuals showed that the power of FIA including both additive and dominance effects was almost 50% for a QTL with equal allele frequencies in both lines with complete dominance and a moderate effect, whereas the power of a traditional regression model was equal to the chosen significance level of 5%. The power of FIA without dominance effects included in the model was close to that obtained for FIA with dominance for all simulated cases except for QTL with overdominant effects. A genome-wide linkage analysis of experimental data from an F2 intercross between Red Jungle Fowl and White Leghorn was performed with both additive and dominance effects included in FIA. The score values for chicken body weight at 200 days of age were similar to those obtained in FIA analysis without dominance. CONCLUSION: We have extended FIA to include QTL dominance effects. The power of FIA was superior, or similar, to standard regression methods for QTL effects with dominance. The difference in power for FIA with or without dominance is expected to be small as long as the QTL effects are not overdominant. We suggest that FIA with only additive effects should be the standard model to be used, especially since it is more computationally efficient.

Relevance: 60.00%

Abstract:

An extension of k-ratio multiple comparison methods to rank-based analyses is described. The new method is analogous to the Duncan-Godbold approximate k-ratio procedure for unequal sample sizes or correlated means. The close parallel of the new methods to the Duncan-Godbold approach is shown by demonstrating that they are based upon different parameterizations as starting points. A semi-parametric basis for the new methods is shown by starting from the Cox proportional hazards model, using Wald statistics. From there the log-rank and Gehan-Breslow-Wilcoxon methods may be seen as score-statistic-based methods. Simulations and analysis of a published data set are used to show the performance of the new methods.

Relevance: 30.00%

Abstract:

BACKGROUND: The only available score to assess the risk for fatal bleeding in patients with venous thromboembolism (VTE) has not been validated yet. METHODS: We used the RIETE database to validate the risk-score for fatal bleeding within the first 3 months of anticoagulation in a new cohort of patients recruited after the end of the former study. Accuracy was measured using the ROC curve analysis. RESULTS: As of December 2011, 39,284 patients were recruited in RIETE. Of these, 15,206 had not been included in the former study, and were considered to validate the score. Within the first 3 months of anticoagulation, 52 patients (0.34%; 95% CI: 0.27-0.45) died of bleeding. Patients with a risk score of <1.5 points (64.1% of the cohort) had a 0.10% rate of fatal bleeding, those with a score of 1.5-4.0 (33.6%) a rate of 0.72%, and those with a score of >4 points had a rate of 1.44%. The c-statistic for fatal bleeding was 0.775 (95% CI 0.720-0.830). The score performed better for predicting gastrointestinal (c-statistic, 0.869; 95% CI: 0.810-0.928) than intracranial (c-statistic, 0.687; 95% CI: 0.568-0.806) fatal bleeding. The score value with highest combined sensitivity and specificity was 1.75. The risk for fatal bleeding was significantly increased (odds ratio: 7.6; 95% CI 3.7-16.2) above this cut-off value. CONCLUSIONS: The accuracy of the score in this validation cohort was similar to the accuracy found in the index study. Interestingly, it performed better for predicting gastrointestinal than intracranial fatal bleeding.
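The c-statistic reported here (and in several of the abstracts below) is the rank-based concordance probability, equivalent to the area under the ROC curve. A minimal sketch with made-up risk scores, not RIETE data:

```python
from itertools import product

def c_statistic(event_scores, nonevent_scores):
    """Probability that a randomly chosen patient with the outcome has a
    higher risk score than a randomly chosen patient without it;
    tied scores count as 1/2 (rank-based AUC)."""
    pairs = list(product(event_scores, nonevent_scores))
    concordant = sum(1.0 if e > x else 0.5 if e == x else 0.0 for e, x in pairs)
    return concordant / len(pairs)

# Three patients with fatal bleeding vs. four without (hypothetical points)
c = c_statistic([4.5, 2.0, 5.5], [1.0, 2.0, 0.5, 3.0])
```

A value of 0.5 means the score discriminates no better than chance; values approaching 1.0 indicate that patients with the outcome almost always score higher.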

Relevance: 30.00%

Abstract:

BACKGROUND: No prior studies have identified which patients with deep vein thrombosis in the lower limbs are at a low risk for adverse events within the first week of therapy. METHODS: We used data from the Registro Informatizado de la Enfermedad TromboEmbólica (RIETE) to identify patients at low risk for the composite outcome of pulmonary embolism, major bleeding, or death within the first week. We built a prognostic score and compared it with the decision to treat patients at home. RESULTS: As of December 2013, 15,280 outpatients with deep vein thrombosis had been enrolled. Overall, 5164 patients (34%) were treated at home. Of these, 12 (0.23%) had pulmonary embolism, 8 (0.15%) bled, and 4 (0.08%) died. On multivariable analysis, chronic heart failure, recent immobility, recent bleeding, cancer, renal insufficiency, and abnormal platelet count independently predicted the risk for the composite outcome. Among 11,430 patients (75%) considered to be at low risk, 15 (0.13%) suffered pulmonary embolism, 22 (0.19%) bled, and 8 (0.07%) died. The C-statistic was 0.61 (95% confidence interval [CI], 0.57-0.65) for the decision to treat patients at home and 0.76 (95% CI, 0.72-0.79) for the score (P = .003). Net reclassification improvement was 41% (P < .001). Integrated discrimination improvement was 0.034 for the score and 0.015 for the clinical decision (P < .001). CONCLUSIONS: Using 6 easily available variables, we identified outpatients with deep vein thrombosis at low risk for adverse events within the first week. These data may help to safely treat more patients at home. This score, however, should be validated.

Relevance: 30.00%

Abstract:

The SEARCH-RIO study prospectively investigated electrocardiogram (ECG)-derived variables in chronic Chagas disease (CCD) as predictors of cardiac death and new onset ventricular tachycardia (VT). Cardiac arrhythmia is a major cause of death in CCD, and electrical markers may play a significant role in risk stratification. One hundred clinically stable outpatients with CCD were enrolled in this study. They initially underwent a 12-lead resting ECG, signal-averaged ECG, and 24-h ambulatory ECG. Abnormal Q-waves, filtered QRS duration, intraventricular electrical transients (IVET), 24-h standard deviation of normal RR intervals (SDNN), and VT were assessed. Echocardiograms assessed left ventricular ejection fraction. Predictors of cardiac death and new onset VT were identified in a Cox proportional hazard model. During a mean follow-up of 95.3 months, 36 patients had adverse events: 22 new onset VT (mean±SD, 18.4±4‰/year) and 20 deaths (26.4±1.8‰/year). In multivariate analysis, only Q-wave (hazard ratio, HR=6.7; P<0.001), VT (HR=5.3; P<0.001), SDNN<100 ms (HR=4.0; P=0.006), and IVET+ (HR=3.0; P=0.04) were independent predictors of the composite endpoint of cardiac death and new onset VT. A prognostic score was developed by weighting points proportional to beta coefficients and summing up: Q-wave=2; VT=2; SDNN<100 ms=1; IVET+=1. Receiver operating characteristic curve analysis optimized the cutoff value at >1. In 10,000 bootstraps, the C-statistic of this novel score was non-inferior to a previously validated (Rassi) score (0.89±0.03 and 0.80±0.05, respectively; test for non-inferiority: P<0.001). In CCD, surface ECG-derived variables are predictors of cardiac death and new onset VT.
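The point score described can be written down directly from the weights stated in the abstract (Q-wave=2, VT=2, SDNN<100 ms=1, IVET+=1, with a cutoff of >1). The function name and the example patient are ours:

```python
def search_rio_points(q_wave, vt, sdnn_below_100ms, ivet_positive):
    """Sum of prognostic points from the four ECG-derived predictors,
    weighted as reported in the abstract (booleans in, integer out)."""
    return 2 * q_wave + 2 * vt + 1 * sdnn_below_100ms + 1 * ivet_positive

# Hypothetical patient with VT and SDNN < 100 ms: 3 points, above the >1 cutoff
points = search_rio_points(q_wave=False, vt=True,
                           sdnn_below_100ms=True, ivet_positive=False)
```

Weighting points in proportion to the Cox beta coefficients preserves the relative strength of each predictor while keeping the score simple enough to compute at the bedside.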

Relevance: 30.00%

Abstract:

A score test is developed for binary clinical trial data, which incorporates patient non-compliance while respecting randomization. It is assumed in this paper that compliance is all-or-nothing, in the sense that a patient either accepts all of the treatment assigned as specified in the protocol, or none of it. Direct analytic comparisons of the adjusted test statistic for both the score test and the likelihood ratio test are made with the corresponding test statistics that adhere to the intention-to-treat principle. It is shown that no gain in power is possible over the intention-to-treat analysis, by adjusting for patient non-compliance. Sample size formulae are derived and simulation studies are used to demonstrate that the sample size approximation holds. Copyright © 2003 John Wiley & Sons, Ltd.

Relevance: 30.00%

Abstract:

Background Surgical risk scores, such as the logistic EuroSCORE (LES) and Society of Thoracic Surgeons Predicted Risk of Mortality (STS) score, are commonly used to identify high-risk or “inoperable” patients for transcatheter aortic valve implantation (TAVI). In Europe, the LES plays an important role in selecting patients for implantation with the Medtronic CoreValve System. What is less clear, however, is the role of the STS score in these patients and the relationship between the LES and STS scores. Objective The purpose of this study is to examine the correlation between LES and STS scores and their performance characteristics in high-risk surgical patients implanted with the Medtronic CoreValve System. Methods All consecutive patients (n = 168) in whom a CoreValve bioprosthesis was implanted between November 2005 and June 2009 at 2 centers (Bern University Hospital, Bern, Switzerland, and Erasmus Medical Center, Rotterdam, The Netherlands) were included for analysis. Patient demographics were recorded in a prospective database. Logistic EuroSCORE and STS scores were calculated on a prospective and retrospective basis, respectively. Results Observed mortality was 11.1%. The mean LES was 3 times higher than the mean STS score (LES 20.2% ± 13.9% vs STS 6.7% ± 5.8%). Based on the various LES and STS cutoff values used in previous and ongoing TAVI trials, 53% of patients had an LES ≥15%, 16% had an STS ≥10%, and 40% had an LES ≥20% or STS ≥10%. The Pearson correlation coefficient revealed a moderate linear relationship between the LES and STS scores, r = 0.58, P < .001. Although the STS score outperformed the LES, both models had suboptimal discriminatory power (c-statistic, 0.49 for LES and 0.69 for STS) and calibration. Conclusions Clinical judgment and the Heart Team concept should play a key role in selecting patients for TAVI, whereas currently available surgical risk score algorithms should be used to guide clinical decision making.

Relevance: 30.00%

Abstract:

OBJECTIVES This study sought to validate the Logistic Clinical SYNTAX (Synergy Between Percutaneous Coronary Intervention With Taxus and Cardiac Surgery) score in patients with non-ST-segment elevation acute coronary syndromes (ACS), in order to further legitimize its clinical application. BACKGROUND The Logistic Clinical SYNTAX score allows for an individualized prediction of 1-year mortality in patients undergoing contemporary percutaneous coronary intervention. It is composed of a "Core" Model (anatomical SYNTAX score, age, creatinine clearance, and left ventricular ejection fraction) and an "Extended" Model (which adds 6 clinical variables), and has previously been cross-validated in 7 contemporary stent trials (>6,000 patients). METHODS One-year all-cause death was analyzed in 2,627 patients undergoing percutaneous coronary intervention from the ACUITY (Acute Catheterization and Urgent Intervention Triage Strategy) trial. Mortality predictions from the Core and Extended Models were studied with respect to discrimination, that is, separation of those with and without 1-year all-cause death (assessed by the concordance [C] statistic), and calibration, that is, agreement between observed and predicted outcomes (assessed with validation plots). Decision curve analyses, which weigh the harms (false positives) against the benefits (true positives) of using a risk score to make mortality predictions, were undertaken to assess clinical usefulness. RESULTS In the ACUITY trial, the median SYNTAX score was 9.0 (interquartile range 5.0 to 16.0); approximately 40% of patients had 3-vessel disease, 29% had diabetes, and 85% underwent drug-eluting stent implantation. Validation plots confirmed agreement between observed and predicted mortality.
The Core and Extended Models demonstrated substantial improvements in the discriminative ability for 1-year all-cause death compared with the anatomical SYNTAX score in isolation (C-statistics: SYNTAX score: 0.64, 95% confidence interval [CI]: 0.56 to 0.71; Core Model: 0.74, 95% CI: 0.66 to 0.79; Extended Model: 0.77, 95% CI: 0.70 to 0.83). Decision curve analyses confirmed the increasing ability to correctly identify patients who would die at 1 year with the Extended Model versus the Core Model versus the anatomical SYNTAX score, over a wide range of thresholds for mortality risk predictions. CONCLUSIONS Compared to the anatomical SYNTAX score alone, the Core and Extended Models of the Logistic Clinical SYNTAX score more accurately predicted individual 1-year mortality in patients presenting with non-ST-segment elevation acute coronary syndromes undergoing percutaneous coronary intervention. These findings support the clinical application of the Logistic Clinical SYNTAX score.

Relevance: 30.00%

Abstract:

BACKGROUND To investigate the performance of the MI SXscore in a multicentre randomised trial of patients undergoing primary percutaneous coronary intervention (PPCI). METHODS AND RESULTS The MI SXscore was prospectively determined among 1132 STEMI patients enrolled into the COMFORTABLE AMI trial, which randomised patients to treatment with bare-metal (BMS) or biolimus-eluting (BES) stents. Patient- (death, myocardial infarction, any revascularisation) and device-oriented (cardiac death, target-vessel MI, target lesion revascularisation) major adverse cardiac events (MACEs) were compared across MI SXscore tertiles and according to stent type. The median MI SXscore was 14 (IQR: 9-21). Patients were divided into tertiles of SXscore-low (≤10), SXscore-intermediate (11-18) and SXscore-high (≥19). At 1 year, patient-oriented MACE occurred in 15% of the SXscore-high, 9% of the SXscore-intermediate and 5% of the SXscore-low tertiles (p<0.001), whereas device-oriented MACE occurred in 8% of the SXscore-high, 6% of the SXscore-intermediate and 4% of the SXscore-low tertiles (p=0.03). Addition of the MI SXscore to the TIMI risk score improved prediction of patient- (c-statistic increase from 0.63 to 0.69) and device-oriented MACEs (c-statistic increase from 0.65 to 0.70). Differences in the risk for device-oriented MACE between BMS and BES were evident in the SXscore-high tertile (13% vs. 4%; HR 0.33 (0.15-0.74), p=0.007) rather than the SXscore-low tertile (4% vs. 3%; HR 0.68 (0.24-1.97), p=0.48). CONCLUSIONS The MI SXscore allows risk stratification of patient- and device-oriented MACEs among patients undergoing PPCI. The addition of the MI SXscore to the TIMI risk score is of incremental prognostic value among patients undergoing PPCI for treatment of STEMI.

Relevance: 30.00%

Abstract:

OBJECTIVE Algorithms to predict the future long-term risk of patients with stable coronary artery disease (CAD) are rare. The VIenna and Ludwigshafen CAD (VILCAD) risk score was one of the first scores specifically tailored for this clinically important patient population. The aim of this study was to refine risk prediction in stable CAD, creating a new prediction model encompassing various pathophysiological pathways. Therefore, we assessed the predictive power of 135 novel biomarkers for long-term mortality in patients with stable CAD. DESIGN, SETTING AND SUBJECTS We included 1275 patients with stable CAD from the LUdwigshafen RIsk and Cardiovascular health study with a median follow-up of 9.8 years to investigate whether the predictive power of the VILCAD score could be improved by the addition of novel biomarkers. Additional biomarkers were selected in a bootstrapping procedure based on Cox regression to determine the most informative predictors of mortality. RESULTS The final multivariable model encompassed nine clinical and biochemical markers: age, sex, left ventricular ejection fraction (LVEF), heart rate, N-terminal pro-brain natriuretic peptide, cystatin C, renin, 25OH-vitamin D3 and haemoglobin A1c. The extended VILCAD biomarker score achieved a significantly improved C-statistic (0.78 vs. 0.73; P = 0.035) and net reclassification index (14.9%; P < 0.001) compared to the original VILCAD score. Omitting LVEF, which might not be readily measurable in clinical practice, slightly reduced the accuracy of the new BIO-VILCAD score but still significantly improved risk classification (net reclassification improvement 12.5%; P < 0.001). CONCLUSION The VILCAD biomarker score based on routine parameters complemented by novel biomarkers outperforms previous risk algorithms and allows more accurate classification of patients with stable CAD, enabling physicians to choose more personalized treatment regimens for their patients.

Relevance: 30.00%

Abstract:

In order to better take advantage of the abundant results from large-scale genomic association studies, investigators are turning to a genetic risk score (GRS) method in order to combine the information from common modest-effect risk alleles into an efficient risk assessment statistic. The statistical properties of these GRSs are poorly understood. As a first step toward a better understanding of GRSs, a systematic analysis of recent investigations using a GRS was undertaken. GRS studies were searched in the areas of coronary heart disease (CHD), cancer, and other common diseases using bibliographic databases and by hand-searching reference lists and journals. Twenty-one independent case-control studies, cohort studies, and simulation studies (12 in CHD, 9 in other diseases) were identified. The underlying statistical assumptions of the GRS using the experience of the Framingham risk score were investigated. Improvements in the construction of a GRS guided by the concept of composite indicators are discussed. The GRS will be a promising risk assessment tool to improve prediction and diagnosis of common diseases.
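A weighted GRS of the kind surveyed is, in its usual construction, a sum of risk-allele counts weighted by per-SNP effect sizes (commonly log odds ratios). A sketch with hypothetical genotypes and weights, not data from any cited study:

```python
import numpy as np

# Hypothetical risk-allele counts (0, 1, or 2) for 4 SNPs in 3 subjects
genotypes = np.array([[0, 1, 2, 1],
                      [2, 0, 1, 0],
                      [1, 1, 1, 2]])

# Hypothetical per-SNP weights, e.g. log odds ratios from association studies
log_or = np.array([0.10, 0.25, 0.05, 0.15])

# Weighted genetic risk score for each subject: matrix-vector product of
# allele counts and effect sizes
grs = genotypes @ log_or
```

An unweighted GRS is the special case with all weights equal to 1, i.e. a simple count of risk alleles; the weighted form is what connects the GRS to the composite-indicator framing discussed in the abstract.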