756 results for Binary outcomes


Relevance: 100.00%

Abstract:

The aim of phase II single-arm clinical trials of a new drug is to determine whether it has sufficiently promising activity to warrant its further development. In recent years, Bayesian statistical methods have been proposed and used for such trials. Bayesian approaches are well suited to early-phase trials because they take into account information that accrues during the trial: predictive probabilities are updated and become more accurate as the trial progresses. Suitable priors can act as pseudo-samples, which make small-sample clinical trials more informative and give patients a better chance of receiving effective treatments. The goal of this paper is to provide a tutorial for statisticians using Bayesian methods for the first time and for investigators with some statistical background. In addition, real data from three clinical trials are presented as examples to illustrate how to conduct a Bayesian analysis of phase II single-arm clinical trials with binary outcomes.
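
As an aside for readers new to these designs, the sketch below illustrates the basic posterior and predictive-probability calculations for a single-arm binary endpoint under a Beta prior. All numbers (prior, interim results, maximum sample size, thresholds) are invented for illustration and are not taken from the trials in the paper.

```python
# Sketch: posterior and predictive probability for a single-arm phase II
# trial with a binary endpoint, assuming a Beta(0.5, 0.5) prior, an interim
# look at n = 20 patients with x = 7 responses, a planned maximum of N = 40
# patients, and a null response rate p0 = 0.2 (illustrative values only).
from scipy.stats import beta, betabinom

a, b = 0.5, 0.5          # Beta prior (acts as a "pseudo sample" of a + b patients)
n, x = 20, 7             # interim sample size and observed responses
N, p0 = 40, 0.2          # maximum sample size and null response rate
theta_star = 0.90        # posterior threshold for declaring efficacy at the end

# Posterior after the interim data: Beta(a + x, b + n - x)
post_prob_above_p0 = beta.sf(p0, a + x, b + n - x)
print(f"P(p > {p0} | data) = {post_prob_above_p0:.3f}")

# Predictive probability of eventual success: sum over the possible number of
# responses k among the remaining m = N - n patients (Beta-Binomial), counting
# outcomes for which the final posterior would exceed theta_star.
m = N - n
pred_prob = sum(
    betabinom.pmf(k, m, a + x, b + n - x)
    for k in range(m + 1)
    if beta.sf(p0, a + x + k, b + n - x + m - k) > theta_star
)
print(f"Predictive probability of success = {pred_prob:.3f}")
```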

Relevance: 100.00%

Abstract:

Background and Purpose—Vascular prevention trials mostly count “yes/no” (binary) outcome events, eg, stroke/no stroke. Analysis of ordered categorical vascular events (eg, fatal stroke/nonfatal stroke/no stroke) is clinically relevant and could be statistically more powerful. Although this is not a novel idea in the statistical community, ordinal outcomes have not previously been applied to stroke prevention trials. Methods—Summary data on stroke, myocardial infarction, combined vascular events, and bleeding were obtained by treatment group from published vascular prevention trials. Data were analyzed using 10 statistical approaches that allow comparison of 2 ordinal or binary treatment groups. The results of each statistical test for each trial were then compared using Friedman 2-way analysis of variance with multiple comparison procedures. Results—Across 85 trials (335 305 subjects) the test results differed substantially, such that approaches which used the ordinal nature of stroke events (fatal/nonfatal/no stroke) were more efficient than those which combined the data to form 2 groups (P<0.0001). The most efficient tests were bootstrapping the difference in mean rank, the Mann–Whitney U test, and ordinal logistic regression; 4- and 5-level data were more efficient still. Similar findings were obtained for myocardial infarction, combined vascular outcomes, and bleeding. The findings were consistent across different types, designs, and sizes of trial, and for the different types of intervention. Conclusions—When analyzing vascular events from prevention trials, statistical tests which use ordered categorical data are more efficient and more likely to yield reliable results than binary tests. This approach gives additional information on treatment effects by severity of event and will allow trials to be smaller. (Stroke. 2008;39:000-000.)
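
To make the binary-versus-ordinal comparison concrete, the following sketch contrasts a chi-square test on dichotomized counts with a Mann–Whitney U test on three-level severity scores. The counts are invented for demonstration; they are not data from the 85 trials.

```python
# Sketch: binary analysis (any stroke yes/no) versus ordinal analysis
# (fatal / nonfatal / no stroke) on illustrative summary counts.
import numpy as np
from scipy.stats import mannwhitneyu, chi2_contingency

# counts per group: [fatal stroke, nonfatal stroke, no stroke]
treatment = [12, 48, 940]
control   = [20, 65, 915]

# Binary analysis: collapse fatal + nonfatal into "any stroke", chi-square test.
binary_table = [[treatment[0] + treatment[1], treatment[2]],
                [control[0] + control[1], control[2]]]
chi2, p_binary, _, _ = chi2_contingency(binary_table)

# Ordinal analysis: expand summary counts to per-subject severity scores
# (0 = no stroke, 1 = nonfatal, 2 = fatal) and use the Mann-Whitney U test.
def expand(counts):
    return np.repeat([2, 1, 0], counts)

u, p_ordinal = mannwhitneyu(expand(treatment), expand(control),
                            alternative="two-sided")
print(f"binary chi-square p = {p_binary:.4f}, ordinal Mann-Whitney p = {p_ordinal:.4f}")
```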

Relevance: 70.00%

Abstract:

Many seemingly disparate approaches to marginal modeling have been developed in recent years. We demonstrate that many current approaches to marginal modeling of correlated binary outcomes produce likelihoods that are equivalent to those of the copula-based models proposed herein. These general copula models of underlying latent threshold random variables yield likelihood-based models for marginal fixed-effects estimation and interpretation in the analysis of correlated binary data. Moreover, we propose a nomenclature and a set of model relationships that substantially elucidate the complex area of marginalized models for binary data. A diverse collection of didactic mathematical and numerical examples is given to illustrate the concepts.
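
The latent-threshold construction the abstract refers to can be illustrated with a small simulation: correlated binary outcomes generated by thresholding a latent bivariate normal (a Gaussian copula). The marginal probabilities and latent correlation below are arbitrary illustrative values.

```python
# Sketch: correlated binary outcomes from a latent-threshold (Gaussian copula)
# representation. Marginals and latent correlation are illustrative choices.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
p = np.array([0.3, 0.5])        # marginal success probabilities of the two outcomes
rho = 0.6                        # correlation of the latent bivariate normal
cov = np.array([[1.0, rho],
                [rho, 1.0]])

# Latent normals thresholded at the marginal quantiles give binary outcomes
# with the requested marginals and a dependence induced by rho.
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=100_000)
y = (z < norm.ppf(p)).astype(int)

print("empirical marginals:", y.mean(axis=0))            # ~ [0.3, 0.5]
print("empirical correlation:", np.corrcoef(y.T)[0, 1])
```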

Relevance: 70.00%

Abstract:

The present paper explores the role of motivation to observe a certain outcome in people's predictions, causal attributions, and beliefs about a streak of binary outcomes (basketball scoring shots). In two studies we found that positive streaks (points scored by the participants' favourite team) lead participants to predict the streak's continuation (belief in the hot hand), but negative streaks lead to predictions of its end (gambler's fallacy). More importantly, these wishful predictions are supported by strategic attributions and beliefs about how and why a streak might unfold. Results suggest that the effect of motivation on predictions is mediated by a serial path via causal attributions to the teams at play and belief in the hot hand.

Relevance: 60.00%

Abstract:

BACKGROUND: Because of their underlying diseases and the need for immunosuppression, patients are particularly at risk for gastrointestinal (GI) complications after lung transplantation, which may negatively influence long-term outcome. The present study assessed the incidence and impact of GI complications after lung transplantation and aimed to identify risk factors. METHODS: A retrospective analysis was performed of all 227 consecutive single- and double-lung transplantations carried out at the University Hospitals of Lausanne and Geneva between January 1993 and December 2010. Logistic regression was used to test the effect of potentially influencing variables on the binary outcomes of overall, severe, and surgery-requiring complications, followed by a multiple logistic regression model. RESULTS: Twenty-two patients were excluded due to re-transplantation, multiorgan transplantation, or incomplete datasets, leaving 205 patients in the final analysis. GI complications were observed in 127 patients (62%). Gastro-esophageal reflux disease was the most commonly observed complication (22.9%), followed by inflammatory or infectious colitis (20.5%) and gastroparesis (10.7%). Major GI complications (Dindo/Clavien III-V) were observed in 83 patients (40.5%) and were fatal in 4 patients (2.0%). Multivariate analysis identified double-lung transplantation (p = 0.012) and the early (1993-1998) transplantation period (p = 0.008) as independent risk factors for developing major GI complications. Forty-three patients (21%) required surgery, most commonly colectomy, cholecystectomy, and fundoplication (in 6.8%, 6.3%, and 3.9% of patients, respectively). Multivariate analysis identified a Charlson comorbidity index of ≥3 as an independent risk factor for developing GI complications requiring surgery (p = 0.015). CONCLUSION: GI complications after lung transplantation are common. Outcome in the setting of our transplant center was nevertheless encouraging.

Relevance: 60.00%

Abstract:

BACKGROUND: Most peripheral T-cell lymphoma (PTCL) patients have a poor outcome, and the identification of prognostic factors at diagnosis is needed. PATIENTS AND METHODS: The prognostic impact of total metabolic tumor volume (TMTV0), measured on baseline [(18)F]2-fluoro-2-deoxy-d-glucose positron emission tomography/computed tomography, was evaluated in a retrospective study including 108 PTCL patients (27 PTCL not otherwise specified, 43 angioimmunoblastic T-cell lymphomas and 38 anaplastic large-cell lymphomas). All received anthracycline-based chemotherapy. TMTV0 was computed with the 41% maximum standardized uptake value threshold method; an optimal cut-off point for binary outcomes was determined and compared with other prognostic factors. RESULTS: With a median follow-up of 23 months, 2-year progression-free survival (PFS) was 49% and 2-year overall survival (OS) was 67%. High TMTV0 was significantly associated with a worse prognosis. At 2 years, PFS was 26% in patients with a high TMTV0 (>230 cm(3), n = 53) versus 71% for those with a low TMTV0 [P < 0.0001, hazard ratio (HR) = 4], whereas OS was 50% versus 80%, respectively (P = 0.0005, HR = 3.1). In multivariate analysis, TMTV0 was the only significant independent parameter for both PFS and OS. TMTV0 combined with PIT discriminated patients with an adverse outcome (TMTV0 >230 cm(3) and PIT >1, n = 33) from those with a good prognosis (TMTV0 ≤230 cm(3) and PIT ≤1, n = 40) even better than TMTV0 alone: 19% versus 73% 2-year PFS (P < 0.0001) and 43% versus 81% 2-year OS, respectively (P = 0.0002). Thirty-one patients (other TMTV0-PIT combinations) had an intermediate outcome, with 50% 2-year PFS and 68% 2-year OS. CONCLUSION: TMTV0 appears to be an independent predictor of PTCL outcome. Combined with PIT, it could identify different risk categories at diagnosis and warrants further validation as a prognostic marker.
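
The abstract does not specify how the 230 cm(3) cut-off was derived; one common approach for a continuous marker and a binary outcome is to maximize Youden's J along the ROC curve, sketched below on simulated data.

```python
# Sketch: deriving a cut-off for a continuous marker (e.g. a baseline metabolic
# tumor volume) against a binary outcome via the ROC curve and Youden's J.
# The data are simulated; this is not the paper's actual cut-off procedure.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
n = 200
outcome = rng.integers(0, 2, size=n)                        # 1 = progression/death
marker = rng.lognormal(mean=5 + 0.5 * outcome, sigma=0.5)   # larger volumes in poor outcomes

fpr, tpr, thresholds = roc_curve(outcome, marker)
youden_j = tpr - fpr
best = thresholds[np.argmax(youden_j)]
print(f"cut-off maximizing Youden's J: {best:.1f} (same units as the marker)")
```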

Relevance: 60.00%

Abstract:

In chronic haemodialysis patients, anaemia is a frequent finding associated with high therapeutic costs and further expenses resulting from serial laboratory measurements. HemoHue HH1, HemoHue Ltd, is a novel tool consisting of a visual scale for the noninvasive assessment of anaemia by matching the coloration of the conjunctiva with a calibrated hue scale. The aim of the study was to investigate the usefulness of HemoHue in estimating individual haemoglobin concentrations and binary treatment outcomes in haemodialysis patients. A prospective blinded study with 80 hemodialysis patients comparing the visual haemoglobin assessment with the standard laboratory measurement was performed. Each patient's haemoglobin concentration was estimated by seven different medical and nonmedical observers with variable degrees of clinical experience on two different occasions. The estimated population mean was close to the measured one (11.06 ± 1.67 versus 11.32 ± 1.23 g/dL, P < 0.0005). A learning effect could be detected. Relative errors in individual estimates reached, however, up to 50%. Insufficient performance in predicting binary outcomes (ROC AUC: 0.72 to 0.78) and poor interrater reliability (Kappa < 0.6) further characterised this method.
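
For reference, the two performance measures quoted in the abstract (ROC AUC for predicting a binary treatment decision, Cohen's kappa for inter-rater reliability) can be computed as in the sketch below; the data are simulated, not the study's measurements.

```python
# Sketch: ROC AUC for a binary outcome predicted from a continuous estimate,
# and Cohen's kappa for agreement between two raters, on invented data.
import numpy as np
from sklearn.metrics import roc_auc_score, cohen_kappa_score

rng = np.random.default_rng(2)
needs_treatment = rng.integers(0, 2, size=80)                          # binary outcome per patient
estimated_hb = 11 + rng.normal(scale=1.5, size=80) - needs_treatment   # visual Hb estimate

# AUC: how well the (inverted) visual estimate separates the binary outcome.
auc = roc_auc_score(needs_treatment, -estimated_hb)

# Kappa: chance-corrected agreement between two observers' binary calls.
rater_a = rng.integers(0, 2, size=80)
rater_b = np.where(rng.random(80) < 0.8, rater_a, 1 - rater_a)         # ~80% raw agreement
kappa = cohen_kappa_score(rater_a, rater_b)
print(f"AUC = {auc:.2f}, kappa = {kappa:.2f}")
```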

Relevance: 60.00%

Abstract:

Generalized linear mixed models (GLMM) are generalized linear models with normally distributed random effects in the linear predictor. Penalized quasi-likelihood (PQL), an approximate method of inference in GLMMs, involves repeated fitting of linear mixed models with “working” dependent variables and iterative weights that depend on parameter estimates from the previous cycle of iteration. The generality of PQL, and its implementation in commercially available software, has encouraged the application of GLMMs in many scientific fields. Caution is needed, however, since PQL may sometimes yield badly biased estimates of variance components, especially with binary outcomes. Recent developments in numerical integration, including adaptive Gaussian quadrature, higher order Laplace expansions, stochastic integration and Markov chain Monte Carlo (MCMC) algorithms, provide attractive alternatives to PQL for approximate likelihood inference in GLMMs. Analyses of some well known datasets, and simulations based on these analyses, suggest that PQL still performs remarkably well in comparison with more elaborate procedures in many practical situations. Adaptive Gaussian quadrature is a viable alternative for nested designs where the numerical integration is limited to a small number of dimensions. Higher order Laplace approximations hold the promise of accurate inference more generally. MCMC is likely the method of choice for the most complex problems that involve high dimensional integrals.
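
To illustrate the kind of numerical integration discussed as an alternative to PQL, the sketch below approximates the marginal log-likelihood of a random-intercept logistic model by (non-adaptive) Gauss-Hermite quadrature; the adaptive version would recenter and rescale the nodes per cluster. Everything here, including the simulated data, is a simplified illustration rather than any particular package's implementation.

```python
# Sketch: marginal log-likelihood of a logistic model with a N(0, sigma^2)
# random intercept per cluster, with the random effect integrated out by
# Gauss-Hermite quadrature.
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.special import expit

def marginal_loglik(beta, sigma, y, x, cluster, n_quad=15):
    nodes, weights = hermgauss(n_quad)          # nodes/weights for weight exp(-u^2)
    b = np.sqrt(2.0) * sigma * nodes            # change of variables to N(0, sigma^2)
    total = 0.0
    for c in np.unique(cluster):
        idx = cluster == c
        eta = x[idx] @ beta                     # fixed-effect linear predictor
        # conditional likelihood of the cluster's data at each quadrature node
        p = expit(eta[:, None] + b[None, :])
        cond = np.prod(np.where(y[idx, None] == 1, p, 1 - p), axis=0)
        total += np.log(np.sum(weights * cond) / np.sqrt(np.pi))
    return total

# Tiny illustration on simulated data (50 clusters of size 8, two covariates).
rng = np.random.default_rng(3)
cluster = np.repeat(np.arange(50), 8)
x = np.column_stack([np.ones(400), rng.normal(size=400)])
b_true = rng.normal(scale=1.0, size=50)[cluster]
y = rng.binomial(1, expit(x @ np.array([-0.5, 1.0]) + b_true))
print(marginal_loglik(np.array([-0.5, 1.0]), 1.0, y, x, cluster))
```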

Relevance: 60.00%

Abstract:

Investigators interested in whether a disease aggregates in families often collect case-control family data, which consist of disease status and covariate information for families selected via case or control probands. Here, we focus on the use of case-control family data to investigate the relative contributions to the disease of additive genetic effects (A), shared family environment (C), and unique environment (E). To this end, we describe an ACE model for binary family data and then introduce an approach to fitting the model to case-control family data. The structural equation model, which has been described previously, combines a general-family extension of the classic ACE twin model with a (possibly covariate-specific) liability-threshold model for binary outcomes. Our likelihood-based approach to fitting involves conditioning on the proband’s disease status, as well as setting prevalence equal to a pre-specified value that can be estimated from the data themselves if necessary. Simulation experiments suggest that our approach to fitting yields approximately unbiased estimates of the A, C, and E variance components, provided that certain commonly made assumptions hold. These assumptions include: the usual assumptions for the classic ACE and liability-threshold models; assumptions about shared family environment for relative pairs; and assumptions about the case-control family sampling, including single ascertainment. When our approach is used to fit the ACE model to Austrian case-control family data on depression, the resulting estimate of heritability is very similar to those from previous analyses of twin data.
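
For readers unfamiliar with the liability-threshold ACE formulation that the abstract extends, the classic twin-pair special case can be written as follows (standard textbook notation, not equations from the paper):

```latex
% Classic ACE liability-threshold model (twin-pair special case).
% Latent liability decomposed into additive genetic (A), shared
% environmental (C), and unique environmental (E) components:
L = A + C + E, \qquad \operatorname{Var}(L) = a^2 + c^2 + e^2 = 1 .
% The binary phenotype is a thresholded liability, with the threshold
% determined by the population prevalence K:
Y = \mathbf{1}\{L > \tau\}, \qquad \tau = \Phi^{-1}(1 - K).
% Implied liability correlations for monozygotic and dizygotic twin pairs:
r_{\mathrm{MZ}} = a^2 + c^2, \qquad r_{\mathrm{DZ}} = \tfrac{1}{2}a^2 + c^2 .
```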

Relevance: 60.00%

Abstract:

In epidemiological work, outcomes are frequently non-normal, sample sizes may be large, and effects are often small. To relate health outcomes to geographic risk factors, fast and powerful methods for fitting spatial models, particularly for non-normal data, are required. We focus on binary outcomes, with the risk surface a smooth function of space. We compare penalized likelihood models, including the penalized quasi-likelihood (PQL) approach, and Bayesian models based on fit, speed, and ease of implementation. A Bayesian model using a spectral basis representation of the spatial surface provides the best tradeoff of sensitivity and specificity in simulations, detecting real spatial features while limiting overfitting and being more efficient computationally than other Bayesian approaches. One of the contributions of this work is further development of this underused representation. The spectral basis model outperforms the penalized likelihood methods, which are prone to overfitting, but is slower to fit and not as easily implemented. Conclusions based on a real dataset of cancer cases in Taiwan are similar albeit less conclusive with respect to comparing the approaches. The success of the spectral basis with binary data and similar results with count data suggest that it may be generally useful in spatial models and more complicated hierarchical models.
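
A minimal sketch of the spectral-basis idea: represent the smooth spatial risk surface with a small set of Fourier basis functions and fit a penalized logistic model on those coefficients. The basis size, simulated data, and ridge-penalized fit below are illustrative choices, not the paper's Bayesian implementation.

```python
# Sketch: a logistic spatial model with the risk surface expressed in a
# low-dimensional spectral (Fourier) basis, fitted with a ridge penalty
# to keep the surface smooth. All choices here are illustrative.
import numpy as np
from scipy.special import expit
from sklearn.linear_model import LogisticRegression

def fourier_basis(coords, k=4):
    """2-D sine/cosine basis evaluated at coords rescaled to [0, 1]^2."""
    s = (coords - coords.min(axis=0)) / np.ptp(coords, axis=0)
    cols = []
    for fx in range(k):
        for fy in range(k):
            arg = 2 * np.pi * (fx * s[:, 0] + fy * s[:, 1])
            cols += [np.cos(arg), np.sin(arg)]
    return np.column_stack(cols)

rng = np.random.default_rng(4)
coords = rng.uniform(size=(2000, 2))                       # subject locations
true_surface = np.sin(3 * coords[:, 0]) + np.cos(5 * coords[:, 1])
y = rng.binomial(1, expit(-1.0 + true_surface))            # binary health outcome

basis = fourier_basis(coords)
fit = LogisticRegression(C=1.0, max_iter=1000).fit(basis, y)
print("in-sample accuracy:", fit.score(basis, y))
```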

Relevance: 60.00%

Abstract:

BACKGROUND: Osteoarthritis is a chronic joint disease that involves degeneration of articular cartilage. Pre-clinical data suggest that doxycycline might act as a disease-modifying agent for the treatment of osteoarthritis, with the potential to slow cartilage degeneration. OBJECTIVES: To examine the effects of doxycycline compared with placebo or no intervention on pain and function in patients with osteoarthritis of the hip or knee. SEARCH STRATEGY: We searched CENTRAL (The Cochrane Library 2008, issue 3), MEDLINE, EMBASE and CINAHL up to 28 July 2008, checked conference proceedings and reference lists, and contacted authors. SELECTION CRITERIA: We included studies if they were randomised or quasi-randomised controlled trials that compared doxycycline at any dosage and any formulation with placebo or no intervention in patients with osteoarthritis of the knee or hip. DATA COLLECTION AND ANALYSIS: We extracted data in duplicate. We contacted investigators to obtain missing outcome information. We calculated differences in means at follow-up between experimental and control groups for continuous outcomes and risk ratios for binary outcomes. MAIN RESULTS: We found one randomised controlled trial that compared doxycycline with placebo in 431 obese women. After 30 months of treatment, clinical outcomes were similar between the two treatment groups, with a mean difference of -0.20 cm (95% confidence interval (CI) -0.77 to 0.37 cm) on a visual analogue scale from 0 to 10 cm for pain and -1.10 units (95% CI -3.86 to 1.66) for function on the WOMAC disability subscale, which ranges from 17 to 85. These differences correspond to clinically irrelevant effect sizes of -0.08 and -0.09 standard deviation units for pain and function, respectively. The difference in changes in minimum joint space narrowing was in favour of doxycycline (-0.15 mm, 95% CI -0.28 to -0.02 mm), which corresponds to a small effect size of -0.23 standard deviation units. More patients withdrew from the doxycycline group compared with placebo due to adverse events (risk ratio 1.69, 95% CI 1.03 to 2.75). AUTHORS' CONCLUSIONS: The symptomatic benefit of doxycycline is minimal to non-existent. The small benefit in terms of joint space narrowing is of questionable clinical relevance and is outweighed by safety problems. Doxycycline should not be recommended for the treatment of osteoarthritis of the knee or hip.

Relevance: 60.00%

Abstract:

Brain tumor is one of the most aggressive types of cancer in humans, with an estimated median survival time of 12 months and only 4% of patients surviving more than 5 years after diagnosis. Until recently, brain tumor prognosis has been based only on clinical information such as tumor grade and patient age, but there are reports indicating that molecular profiling of gliomas can reveal subgroups of patients with distinct survival rates. We hypothesize that coupling molecular profiling of brain tumors with clinical information might improve predictions of patient survival time and, consequently, better guide future treatment decisions. To evaluate this hypothesis, the general goal of this research is to build models for survival prediction of glioma patients using DNA molecular profiles (U133 Affymetrix gene expression microarrays) along with clinical information. First, a predictive Random Forest model is built for binary outcomes (i.e., short- vs. long-term survival), and a small subset of genes whose expression values can be used to predict survival time is selected. Next, a new statistical methodology is developed for predicting time-to-death outcomes using Bayesian ensemble trees. Because of the large heterogeneity observed within the prognostic classes obtained by the Random Forest model, prediction can be improved by relating time-to-death directly to the gene expression profile. We propose a Bayesian ensemble model for survival prediction that is appropriate for high-dimensional data such as gene expression data. Our approach is based on the ensemble "sum-of-trees" model, which is flexible enough to incorporate additive and interaction effects between genes. We specify a fully Bayesian hierarchical approach and illustrate our methodology for the CPH, Weibull, and AFT survival models. We overcome the lack of conjugacy by using a latent variable formulation to model the covariate effects, which decreases computation time for model fitting. Our proposed models also provide a model-free way to select important predictive prognostic markers based on controlling false discovery rates. We compare the performance of our methods with baseline reference survival methods and apply our methodology to an unpublished data set of brain tumor survival times and gene expression data, selecting genes potentially related to the development of the disease under study. A closing discussion compares results obtained by the Random Forest and Bayesian ensemble methods from biological and clinical perspectives and highlights the statistical advantages and disadvantages of the new methodology in the context of DNA microarray data analysis.
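
The first modelling step described, a Random Forest for the binary short- versus long-term survival label with importance-based gene selection, might look roughly like the sketch below; the expression matrix, labels, and gene names are simulated placeholders, not the study's data.

```python
# Sketch: Random Forest classification of a binary survival label from gene
# expression, with variable importances used to shortlist candidate genes.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_patients, n_genes = 120, 500
X = pd.DataFrame(rng.normal(size=(n_patients, n_genes)),
                 columns=[f"gene_{i}" for i in range(n_genes)])
# make a handful of genes truly predictive of long-term survival
signal = X[["gene_0", "gene_1", "gene_2"]].sum(axis=1)
y = (signal + rng.normal(scale=1.0, size=n_patients) > 0).astype(int)

rf = RandomForestClassifier(n_estimators=500, random_state=0)
print("CV accuracy:", cross_val_score(rf, X, y, cv=5).mean())

rf.fit(X, y)
top = pd.Series(rf.feature_importances_, index=X.columns).nlargest(10)
print(top)  # candidate genes for the subsequent time-to-event modelling
```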

Relevance: 60.00%

Abstract:

In 2011, there will be an estimated 1,596,670 new cancer cases and 571,950 cancer-related deaths in the US. With the ever-increasing applications of cancer genetics in epidemiology, there is great potential to identify genetic risk factors for individuals with increased genetic susceptibility to cancer, which could be used to develop interventions or targeted therapies that could reduce cancer risk and mortality. In this dissertation, I propose to develop a new statistical method to evaluate the role of haplotypes in cancer susceptibility and development. This model is flexible enough to handle not only haplotypes of any size but also a variety of covariates. I then apply this method to three cancer-related data sets (Hodgkin Disease, Glioma, and Lung Cancer). I hypothesize that the estimation of the association between haplotypes and disease is substantially improved by using a Bayesian method to infer haplotypes that incorporates prior information from known genetic sources. Analysis based on haplotypes using information from publicly available genetic sources generally shows increased odds ratios and smaller p-values in the Hodgkin, Glioma, and Lung data sets. For instance, the Bayesian Joint Logistic Model (BJLM) inferred haplotype TC had a substantially higher estimated effect size (OR=12.16, 95% CI = 2.47-90.1 vs. 9.24, 95% CI = 1.81-47.2) and a more significant p-value (0.00044 vs. 0.008) for Hodgkin Disease compared to a traditional logistic regression approach. Also, the effect sizes of haplotypes modeled with recessive genetic effects were higher (and had more significant p-values) when analyzed with the BJLM. Full genetic models with haplotype information developed with the BJLM resulted in significantly higher discriminatory power and a significantly higher Net Reclassification Index compared to those developed with haplo.stats for lung cancer. Future work could incorporate the 1000 Genomes Project, which offers a larger selection of SNPs that could be added to the information from known genetic sources. Other future analyses include testing non-binary outcomes, such as the levels of biomarkers present in lung cancer (NNK), and extending this analysis to full GWAS studies.

Relevance: 60.00%

Abstract:

AIM Several surveys evaluate different retention approaches among orthodontists, but none exist for general dentists. The primary aim of this survey was to record the preferred fixed retainer designs and retention protocols amongst general dentists and orthodontists in Switzerland. A secondary aim was to investigate whether retention patterns were associated with parameters such as gender, university of graduation, time in practice, and specialist status. METHODS An anonymized questionnaire was distributed to general dentists (n = 401) and orthodontists (n = 398) practicing in the German-speaking part of Switzerland. A total of 768 questionnaires could be delivered, of which 562 (73.2%) were returned and evaluated. Descriptive statistics were performed and responses to questions of interest were converted to binary outcomes and analyzed using multiple logistic regression. Any associations between the answers and gender, university of graduation (Swiss or foreign), years in practice, and specialist status (orthodontist/general dentist) were assessed. RESULTS Almost all responding orthodontists (98.0%) and nearly a third of general dentists (29.6%) reported bonding fixed retainers regularly. The answers were not associated with the practitioner's gender. The university of graduation and number of years in practice had a moderate impact on the responses. The answers were mostly influenced by specialist status. CONCLUSION Graduation school, years in practice, and specialist status influence retention protocol, and evidence-based guidelines for fixed retention should be issued to minimize these effects. Based on the observation that bonding and maintenance of retainers are also performed by general dentists, these guidelines should be taught in dental school and not only during post-graduate training.

Relevance: 60.00%

Abstract:

Although the area under the receiver operating characteristic curve (AUC) is the most popular measure of the performance of prediction models, it has limitations, especially when used to evaluate the added discrimination of a new biomarker in the model. Pencina et al. (2008) proposed two indices, the net reclassification improvement (NRI) and the integrated discrimination improvement (IDI), to supplement the improvement in the AUC (IAUC). Their NRI and IDI are based on binary outcomes in case-control settings and do not involve time-to-event outcomes. However, many disease outcomes are time-dependent and the onset time can be censored. Measuring the discrimination potential of a prognostic marker without considering time to event can lead to biased estimates. In this dissertation, we have extended the NRI and IDI to survival analysis settings and derived the corresponding sample estimators and asymptotic tests. Simulation studies were conducted to compare the performance of the time-dependent NRI and IDI with Pencina's NRI and IDI. For illustration, we have applied the proposed method to a breast cancer study. Key words: Prognostic model, Discrimination, Time-dependent NRI and IDI
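
For orientation, the binary-outcome (uncensored) forms of the NRI and IDI that the dissertation extends are sketched below; the risk cut-offs and data are illustrative only.

```python
# Sketch: categorical NRI and IDI for a binary outcome y, comparing predicted
# risks from an old and a new model. Risk categories and data are invented.
import numpy as np

def category_nri(p_old, p_new, y, cutoffs=(0.1, 0.3)):
    """Categorical net reclassification improvement for a binary outcome y."""
    old_cat = np.digitize(p_old, cutoffs)
    new_cat = np.digitize(p_new, cutoffs)
    up, down = new_cat > old_cat, new_cat < old_cat
    events, nonevents = y == 1, y == 0
    nri_events = up[events].mean() - down[events].mean()
    nri_nonevents = down[nonevents].mean() - up[nonevents].mean()
    return nri_events + nri_nonevents

def idi(p_old, p_new, y):
    """Integrated discrimination improvement: change in mean risk separation."""
    events, nonevents = y == 1, y == 0
    return ((p_new[events].mean() - p_new[nonevents].mean())
            - (p_old[events].mean() - p_old[nonevents].mean()))

# toy example: the new model separates events from non-events more sharply
rng = np.random.default_rng(6)
y = rng.integers(0, 2, size=1000)
p_old = np.clip(0.2 + 0.2 * y + rng.normal(scale=0.15, size=1000), 0, 1)
p_new = np.clip(0.15 + 0.35 * y + rng.normal(scale=0.15, size=1000), 0, 1)
print("NRI:", category_nri(p_old, p_new, y), "IDI:", idi(p_old, p_new, y))
```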