965 results for Bayesian Analysis
Abstract:
This paper describes the use of model-based geostatistics for choosing the optimal set of sampling locations, collectively called the design, for a geostatistical analysis. Two types of design situations are considered. These are retrospective design, which concerns the addition of sampling locations to, or deletion of locations from, an existing design, and prospective design, which consists of choosing optimal positions for a new set of sampling locations. We propose a Bayesian design criterion which focuses on the goal of efficient spatial prediction whilst allowing for the fact that model parameter values are unknown. The results show that in this situation a wide range of inter-point distances should be included in the design, and the widely used regular design is therefore not the optimal choice.
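As a point of reference for the design problem described above, the sketch below ranks two candidate designs by their average simple-kriging prediction variance under a fixed exponential covariance. This is only the fixed-parameter simplification; the paper's Bayesian criterion additionally averages over uncertainty in the covariance parameters, which is exactly what makes regular designs suboptimal. The parameter values (sigma2, phi) and the candidate designs are hypothetical.

```python
# Minimal sketch, assuming a known exponential covariance with hypothetical
# parameters; the paper's criterion also integrates over parameter uncertainty.
import numpy as np

def exp_cov(d, sigma2=1.0, phi=0.25):
    """Exponential covariance as a function of separation distance d."""
    return sigma2 * np.exp(-d / phi)

def mean_kriging_variance(design, grid, sigma2=1.0, phi=0.25):
    """Average simple-kriging prediction variance over a prediction grid."""
    D = np.linalg.norm(design[:, None, :] - design[None, :, :], axis=-1)
    C = exp_cov(D, sigma2, phi) + 1e-9 * np.eye(len(design))   # jitter for stability
    d0 = np.linalg.norm(grid[:, None, :] - design[None, :, :], axis=-1)
    c0 = exp_cov(d0, sigma2, phi)                               # n_grid x n_design
    var = sigma2 - np.einsum('ij,ij->i', c0 @ np.linalg.inv(C), c0)
    return var.mean()

rng = np.random.default_rng(1)
grid = np.array([[x, y] for x in np.linspace(0, 1, 15) for y in np.linspace(0, 1, 15)])
regular = np.array([[x, y] for x in np.linspace(0.1, 0.9, 4) for y in np.linspace(0.1, 0.9, 4)])
clustered = np.vstack([rng.uniform(0, 1, (12, 2)), rng.uniform(0.4, 0.6, (4, 2))])
for name, des in [("regular", regular), ("random + cluster", clustered)]:
    print(name, round(mean_kriging_variance(des, grid), 4))
```

With the covariance parameters held fixed, a space-filling regular layout tends to do well; the paper's point is that, once parameter uncertainty is acknowledged, designs that also include some short inter-point distances become preferable.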
Abstract:
In this paper, we develop Bayesian hierarchical distributed lag models for estimating associations between daily variations in summer ozone levels and daily variations in cardiovascular and respiratory (CVDRESP) mortality counts for 19 large U.S. cities included in the National Morbidity Mortality Air Pollution Study (NMMAPS) for the period 1987-1994. At the first stage, we define a semi-parametric distributed lag Poisson regression model to estimate city-specific relative rates of CVDRESP associated with short-term exposure to summer ozone. At the second stage, we specify a class of distributions for the true city-specific relative rates to estimate an overall effect by taking into account the variability within and across cities. We perform the calculations with respect to several random effects distributions (normal, Student's t, and mixture of normals), thus relaxing the common assumption of a two-stage normal-normal hierarchical model. We assess the sensitivity of the results to: 1) lag structure for ozone exposure; 2) degree of adjustment for long-term trends; 3) inclusion of other pollutants in the model; 4) heat waves; 5) random effects distributions; and 6) prior hyperparameters. On average across cities, we found that a 10 ppb increase in summer ozone level for every day in the previous week is associated with a 1.25% increase in CVDRESP mortality (95% posterior region: 0.47, 2.03). The relative rate estimates are also positive and statistically significant at lags 0, 1, and 2. We found that associations between summer ozone and CVDRESP mortality are sensitive to the confounding adjustment for PM10, but are robust to: 1) the adjustment for long-term trends and other gaseous pollutants (NO2, SO2, and CO); 2) the distributional assumptions at the second stage of the hierarchical model; and 3) the prior distributions on all unknown parameters. Bayesian hierarchical distributed lag models and their application to the NMMAPS data allow us to estimate an acute health effect associated with exposure to ambient air pollution over the previous few days, averaged across several locations. The application of these methods and the systematic assessment of the sensitivity of findings to model assumptions provide important epidemiological evidence for future air quality regulations.
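To make the second-stage pooling concrete, here is a minimal Gibbs-sampler sketch of the normal-normal special case: hypothetical city-specific estimated effects with known variances are shrunk towards an overall effect mu with between-city variance tau^2. The city estimates, prior settings, and variable names are illustrative only; the paper's first stage is a semi-parametric distributed lag Poisson model and its second stage also allows Student's t and mixture random-effects distributions.

```python
# Minimal Gibbs sampler for the normal-normal second stage, with hypothetical
# city-specific estimates; not the paper's full distributed lag model.
import numpy as np

rng = np.random.default_rng(42)
beta_hat = np.array([0.8, 1.4, 1.0, 1.6, 0.9, 1.3])       # hypothetical city estimates
v = np.array([0.15, 0.20, 0.10, 0.25, 0.12, 0.18]) ** 2   # their (known) variances

n = len(beta_hat)
mu_prior_sd = 100.0
a0, b0 = 0.001, 0.001                                      # inverse-gamma prior on tau^2

mu, tau2 = beta_hat.mean(), beta_hat.var() + 1e-3          # crude starting values
keep_mu, keep_tau2 = [], []
for it in range(5000):
    # city-level true effects, conditional on (mu, tau2)
    post_var = 1.0 / (1.0 / v + 1.0 / tau2)
    post_mean = post_var * (beta_hat / v + mu / tau2)
    betas = rng.normal(post_mean, np.sqrt(post_var))
    # overall effect mu, conditional on the city effects
    mvar = 1.0 / (n / tau2 + 1.0 / mu_prior_sd ** 2)
    mu = rng.normal(mvar * betas.sum() / tau2, np.sqrt(mvar))
    # between-city variance, conjugate inverse-gamma update
    tau2 = 1.0 / rng.gamma(a0 + n / 2.0, 1.0 / (b0 + 0.5 * ((betas - mu) ** 2).sum()))
    if it >= 1000:                                          # discard burn-in
        keep_mu.append(mu)
        keep_tau2.append(tau2)

print("posterior mean overall effect:", round(np.mean(keep_mu), 3))
print("95% posterior interval:", np.round(np.percentile(keep_mu, [2.5, 97.5]), 3))
print("posterior mean between-city sd:", round(np.mean(np.sqrt(keep_tau2)), 3))
```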
Abstract:
In evaluating the accuracy of diagnostic tests, it is common to apply two imperfect tests jointly or sequentially to a study population. In a recent meta-analysis of the accuracy of microsatellite instability testing (MSI) and traditional mutation analysis (MUT) in predicting germline mutations of the mismatch repair (MMR) genes, a Bayesian approach (Chen, Watson, and Parmigiani 2005) was proposed to handle missing data resulting from partial testing and the lack of a gold standard. In this paper, we demonstrate improved estimation of the sensitivities and specificities of MSI and MUT by using a nonlinear mixed model and a Bayesian hierarchical model, both of which account for the heterogeneity across studies through study-specific random effects. The methods can be used to estimate the accuracy of two imperfect diagnostic tests in other meta-analyses when the prevalence of disease and the sensitivities and/or specificities of the diagnostic tests are heterogeneous among studies. Furthermore, simulation studies demonstrate the importance of carefully selecting appropriate random effects for the estimation of diagnostic accuracy measures in this setting.
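The building block of such models is the likelihood of the joint results of two imperfect tests. The short sketch below writes out the four cell probabilities under the usual simplifying assumption of conditional independence given true disease status; the prevalence, sensitivities, and specificities are hypothetical, not estimates from the paper, and the hierarchical random-effects layer is omitted.

```python
# Cell probabilities for two imperfect tests, assuming conditional independence
# given true status; all numeric values are hypothetical.
prev, se1, sp1, se2, sp2 = 0.30, 0.85, 0.95, 0.75, 0.90

def cell_prob(t1_pos, t2_pos):
    """P(test 1 result, test 2 result) marginalised over true disease status."""
    p_d = (se1 if t1_pos else 1 - se1) * (se2 if t2_pos else 1 - se2)
    p_nd = ((1 - sp1) if t1_pos else sp1) * ((1 - sp2) if t2_pos else sp2)
    return prev * p_d + (1 - prev) * p_nd

for r1 in (True, False):
    for r2 in (True, False):
        print(f"P(test1={'+' if r1 else '-'}, test2={'+' if r2 else '-'}) = "
              f"{cell_prob(r1, r2):.4f}")
print("total:", sum(cell_prob(a, b) for a in (True, False) for b in (True, False)))
```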
Abstract:
Monte Carlo simulation was used to evaluate properties of a simple Bayesian MCMC analysis of the random effects model for single-group Cormack-Jolly-Seber capture-recapture data. The MCMC method is applied to the model via a logit link, so the parameters p and S are on a logit scale, where logit(S) is assumed to have, and is generated from, a normal distribution with mean μ and variance σ². Marginal prior distributions on logit(p) and μ were independent normal with mean zero and standard deviation 1.75 for logit(p) and 100 for μ, hence minimally informative. The marginal prior distribution on σ² was placed on the precision τ² = 1/σ² as a gamma distribution with α = β = 0.001. The study design has 432 points spread over 5 factors: occasions (t), new releases per occasion (u), p, μ, and σ. At each design point 100 independent trials were completed (hence 43,200 trials in total), each with sample size n = 10,000 from the parameter posterior distribution. At 128 of these design points, comparisons are made to previously reported results from a method-of-moments procedure. We examined properties of point and interval inference on μ and σ based on the posterior mean, median, and mode and the equal-tailed 95% credibility interval. Bayesian inference did very well for the parameter μ, but under the conditions used here, MCMC inference performance for σ was mixed: poor for sparse data (i.e., only 7 occasions) or σ = 0, but good when there were sufficient data and σ was not small.
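The following prior-predictive sketch illustrates the random-effects structure and priors stated above, drawing logit(S) from a Normal(μ, σ²) distribution with the Normal(0, 100²) prior on μ, the Normal(0, 1.75²) prior on logit(p), and the Gamma(0.001, 0.001) prior on the precision τ² = 1/σ². It only shows what these priors imply on the probability scale; it is not the Cormack-Jolly-Seber likelihood or the MCMC fit evaluated in the study.

```python
# Prior-predictive draws under the stated priors; not the CJS model fit itself.
import numpy as np

rng = np.random.default_rng(0)

def inv_logit(x):
    return 1.0 / (1.0 + np.exp(-np.clip(x, -700, 700)))   # clip avoids overflow

n_draws, n_occasions = 5, 7
mu = rng.normal(0.0, 100.0, size=n_draws)                  # prior on mu
tau2 = np.maximum(rng.gamma(shape=0.001, scale=1.0 / 0.001, size=n_draws), 1e-12)
sigma = np.sqrt(1.0 / tau2)                                # implied prior draws of sigma
logit_p = rng.normal(0.0, 1.75, size=n_draws)              # prior on logit(p)

for i in range(n_draws):
    S = inv_logit(rng.normal(mu[i], sigma[i], size=n_occasions))
    print(f"draw {i}: p = {inv_logit(logit_p[i]):.3f}, S_t = {np.round(S, 3)}")
```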
Abstract:
BACKGROUND Several treatment strategies are available for adults with advanced-stage Hodgkin's lymphoma, but studies assessing two alternative standards of care, increased-dose bleomycin, etoposide, doxorubicin, cyclophosphamide, vincristine, procarbazine, and prednisone (BEACOPPescalated) and doxorubicin, bleomycin, vinblastine, and dacarbazine (ABVD), were not powered to test differences in overall survival. To guide treatment decisions in this population of patients, we did a systematic review and network meta-analysis to identify the best initial treatment strategy. METHODS We searched the Cochrane Library, Medline, and conference proceedings for randomised controlled trials published between January, 1980, and June, 2013, that assessed overall survival in patients with advanced-stage Hodgkin's lymphoma given BEACOPPbaseline, BEACOPPescalated, BEACOPP variants, ABVD, cyclophosphamide (mechlorethamine), vincristine, procarbazine, and prednisone (C[M]OPP), hybrid or alternating chemotherapy regimens with ABVD as the backbone (eg, COPP/ABVD, MOPP/ABVD), or doxorubicin, vinblastine, mechlorethamine, vincristine, bleomycin, etoposide, and prednisone combined with radiation therapy (the Stanford V regimen). We assessed studies for eligibility, extracted data, and assessed their quality. We then pooled the data and used a Bayesian random-effects model to combine direct comparisons with indirect evidence. We also reconstructed individual patient survival data from published Kaplan-Meier curves and did standard random-effects Poisson regression. Results are reported relative to ABVD. The primary outcome was overall survival. FINDINGS We screened 2055 records and identified 75 papers covering 14 eligible trials that assessed 11 different regimens in 9993 patients, providing 59 651 patient-years of follow-up. 1189 patients died, and the median follow-up was 5·9 years (IQR 4·9-6·7). Included studies were of high methodological quality, and between-trial heterogeneity was negligible (τ²=0·01). Overall survival was highest in patients who received six cycles of BEACOPPescalated (HR 0·38, 95% credibility interval [CrI] 0·20-0·75). Compared with a 5 year survival of 88% for ABVD, the survival benefit for six cycles of BEACOPPescalated is 7% (95% CrI 3-10), ie, a 5 year survival of 95%. Reconstructed individual survival data showed that, at 5 years, BEACOPPescalated has a 10% (95% CI 3-15) advantage over ABVD in overall survival. INTERPRETATION Six cycles of BEACOPPescalated significantly improves overall survival compared with ABVD and other regimens, and thus we recommend this treatment strategy as standard of care for patients with access to the appropriate supportive care.
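The mechanism by which direct and indirect evidence are combined can be illustrated with a deliberately simplified, fixed-effect calculation on the log hazard-ratio scale: an indirect contrast is formed through a common comparator and then pooled with a direct estimate by inverse-variance weighting. The numbers are hypothetical and the sketch ignores the random-effects and multi-arm structure of the full Bayesian network meta-analysis used in the paper.

```python
# Core idea behind combining direct and indirect evidence; hypothetical numbers,
# fixed-effect only, on the log hazard-ratio scale.
import math

def indirect(loghr_ab, var_ab, loghr_cb, var_cb):
    """Indirect A-vs-C contrast formed via a common comparator B."""
    return loghr_ab - loghr_cb, var_ab + var_cb

def inverse_variance_pool(estimates):
    """Pool several (estimate, variance) pairs for the same contrast."""
    w = [1.0 / v for _, v in estimates]
    est = sum(wi * e for wi, (e, _) in zip(w, estimates)) / sum(w)
    return est, 1.0 / sum(w)

# hypothetical: A vs B and C vs B from separate trials, plus one direct A vs C trial
ind = indirect(math.log(0.70), 0.02, math.log(0.95), 0.03)
direct = (math.log(0.78), 0.04)
pooled, var = inverse_variance_pool([direct, ind])
lo, hi = (math.exp(pooled - 1.96 * math.sqrt(var)),
          math.exp(pooled + 1.96 * math.sqrt(var)))
print(f"pooled HR A vs C: {math.exp(pooled):.2f} (95% CI {lo:.2f} to {hi:.2f})")
```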
Abstract:
This dissertation explores phase I dose-finding designs in cancer trials from three perspectives: alternative Bayesian dose-escalation rules, a design based on a time-to-dose-limiting toxicity (DLT) model, and a design based on a discrete-time multi-state (DTMS) model. We list alternative Bayesian dose-escalation rules and perform a simulation study for the intra-rule and inter-rule comparisons based on two statistical models to identify the most appropriate rule under certain scenarios. We provide evidence that all the Bayesian rules outperform the traditional "3+3" design in the allocation of patients and selection of the maximum tolerated dose. The design based on a time-to-DLT model uses patients' DLT information over multiple treatment cycles in estimating the probability of DLT at the end of treatment cycle 1. Dose-escalation decisions are made whenever a cycle-1 DLT occurs, or two months after the previous checkpoint. Compared to the design based on a logistic regression model, the new design shows more safety benefits for trials in which more late-onset toxicities are expected. As a trade-off, the new design requires more patients on average. The design based on the DTMS model has three important attributes: (1) toxicities are categorized over a distribution of severity levels, (2) early toxicity may inform dose escalation, and (3) no suspension is required between accrual cohorts. The proposed model accounts for the difference in the importance of the toxicity severity levels and for transitions between toxicity levels. We compare the operating characteristics of the proposed design with those from a similar design based on a fully-evaluated model that directly models the maximum observed toxicity level within the patients' entire assessment window. We describe settings in which, under comparable power, the proposed design shortens the trial. The proposed design offers more benefit compared to the alternative design as patient accrual becomes slower.
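As an illustration of the kind of Bayesian dose-escalation rule compared in the first part of the dissertation, the sketch below applies a generic per-dose Beta-Binomial overdose-control criterion to hypothetical DLT counts. The prior, target DLT rate, threshold, and data are all made up, and none of the three designs studied in the dissertation is reproduced here.

```python
# A generic Bayesian dose-escalation criterion (Beta-Binomial, overdose control);
# hypothetical prior, target, threshold, and data.
from scipy.stats import beta

target_dlt, overdose_threshold = 0.30, 0.25
# observed (n_treated, n_DLT) at each dose level so far (hypothetical)
data = {1: (3, 0), 2: (3, 1), 3: (3, 2)}

def prob_overdose(n, x, a=0.5, b=0.5):
    """Posterior P(p_DLT > target) under a Beta(a, b) prior."""
    return beta.sf(target_dlt, a + x, b + n - x)

for dose, (n, x) in data.items():
    p_over = prob_overdose(n, x)
    decision = "acceptable" if p_over < overdose_threshold else "too toxic"
    print(f"dose {dose}: P(p_DLT > {target_dlt}) = {p_over:.2f} -> {decision}")
```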
Abstract:
In 2011, there will be an estimated 1,596,670 new cancer cases and 571,950 cancer-related deaths in the US. With the ever-increasing applications of cancer genetics in epidemiology, there is great potential to identify genetic risk factors that would help identify individuals with increased genetic susceptibility to cancer, which could be used to develop interventions or targeted therapies that could hopefully reduce cancer risk and mortality. In this dissertation, I propose to develop a new statistical method to evaluate the role of haplotypes in cancer susceptibility and development. This model will be flexible enough to handle not only haplotypes of any size, but also a variety of covariates. I will then apply this method to three cancer-related data sets (Hodgkin Disease, Glioma, and Lung Cancer). I hypothesize that there is substantial improvement in the estimation of association between haplotypes and disease with the use of a Bayesian mathematical method to infer haplotypes that uses prior information from known genetic sources. Analyses based on haplotypes using information from publicly available genetic sources generally show increased odds ratios and smaller p-values in the Hodgkin, Glioma, and Lung data sets. For instance, the Bayesian Joint Logistic Model (BJLM) inferred haplotype TC had a substantially higher estimated effect size (OR=12.16, 95% CI = 2.47-90.1 vs. 9.24, 95% CI = 1.81-47.2) and a more significant p-value (0.00044 vs. 0.008) for Hodgkin Disease compared to a traditional logistic regression approach. Also, the effect sizes of haplotypes modeled with recessive genetic effects were higher (and had more significant p-values) when analyzed with the BJLM. Full genetic models with haplotype information developed with the BJLM resulted in significantly higher discriminatory power and a significantly higher Net Reclassification Index compared to those developed with haplo.stats for lung cancer. Future work could incorporate the 1000 Genomes Project, which offers a larger selection of SNPs that can be incorporated into the information from known genetic sources. Other future analyses include testing non-binary outcomes, such as levels of biomarkers present in lung cancer (e.g., NNK), and extending this analysis to full genome-wide association studies.
Abstract:
Objective To determine the comparative effectiveness and safety of current maintenance strategies in preventing exacerbations of asthma. Design Systematic review and network meta-analysis using Bayesian statistics. Data sources Cochrane systematic reviews on chronic asthma, complemented by an updated search when appropriate. Eligibility criteria Trials of adults with asthma randomised to maintenance treatments of at least 24 weeks' duration and that reported on asthma exacerbations in full text. Low dose inhaled corticosteroid treatment was the comparator strategy. The primary effectiveness outcome was the rate of severe exacerbations. The secondary outcome was the composite of moderate or severe exacerbations. The rate of withdrawal was analysed as a safety outcome. Results 64 trials with 59 622 patient years of follow-up comparing 15 strategies and placebo were included. For prevention of severe exacerbations, combined inhaled corticosteroids and long acting β agonists as maintenance and reliever treatment and combined inhaled corticosteroids and long acting β agonists in a fixed daily dose performed equally well and were ranked first for effectiveness. The rate ratios compared with low dose inhaled corticosteroids were 0.44 (95% credible interval 0.29 to 0.66) and 0.51 (0.35 to 0.77), respectively. Other combined strategies were not superior to inhaled corticosteroids, and all single drug treatments were inferior to single low dose inhaled corticosteroids. Safety was best for conventional best (guideline based) practice and combined maintenance and reliever therapy. Conclusions Strategies with combined inhaled corticosteroids and long acting β agonists are most effective and safe in preventing severe exacerbations of asthma, although some heterogeneity was observed in this network meta-analysis of full text reports.
Abstract:
OBJECTIVE To investigate whether revascularisation improves prognosis compared with medical treatment among patients with stable coronary artery disease. DESIGN Bayesian network meta-analyses to combine direct within trial comparisons between treatments with indirect evidence from other trials while maintaining randomisation. ELIGIBILITY CRITERIA FOR SELECTING STUDIES A strategy of initial medical treatment compared with revascularisation by coronary artery bypass grafting or Food and Drug Administration approved techniques for percutaneous revascularisation: balloon angioplasty, bare metal stents, early generation paclitaxel eluting, sirolimus eluting, and zotarolimus eluting (Endeavor) stents, and new generation everolimus eluting and zotarolimus eluting (Resolute) stents, among patients with stable coronary artery disease. DATA SOURCES Medline and Embase from 1980 to 2013 for randomised trials comparing medical treatment with revascularisation. MAIN OUTCOME MEASURE All cause mortality. RESULTS 100 trials in 93 553 patients with 262 090 patient years of follow-up were included. Coronary artery bypass grafting was associated with a survival benefit (rate ratio 0.80, 95% credibility interval 0.70 to 0.91) compared with medical treatment. New generation drug eluting stents (everolimus: 0.75, 0.59 to 0.96; zotarolimus (Resolute): 0.65, 0.42 to 1.00) but not balloon angioplasty (0.85, 0.68 to 1.04), bare metal stents (0.92, 0.79 to 1.05), or early generation drug eluting stents (paclitaxel: 0.92, 0.75 to 1.12; sirolimus: 0.91, 0.75 to 1.10; zotarolimus (Endeavor): 0.88, 0.69 to 1.10) were associated with improved survival compared with medical treatment. Coronary artery bypass grafting reduced the risk of myocardial infarction compared with medical treatment (0.79, 0.63 to 0.99), and everolimus eluting stents showed a trend towards a reduced risk of myocardial infarction (0.75, 0.55 to 1.01). The risk of subsequent revascularisation was noticeably reduced by coronary artery bypass grafting (0.16, 0.13 to 0.20) followed by new generation drug eluting stents (zotarolimus (Resolute): 0.26, 0.17 to 0.40; everolimus: 0.27, 0.21 to 0.35), early generation drug eluting stents (zotarolimus (Endeavor): 0.37, 0.28 to 0.50; sirolimus: 0.29, 0.24 to 0.36; paclitaxel: 0.44, 0.35 to 0.54), and bare metal stents (0.69, 0.59 to 0.81) compared with medical treatment. CONCLUSION Among patients with stable coronary artery disease, coronary artery bypass grafting reduces the risk of death, myocardial infarction, and subsequent revascularisation compared with medical treatment. All stent based coronary revascularisation technologies reduce the need for revascularisation to a variable degree. Our results provide evidence for improved survival with new generation drug eluting stents but no other percutaneous revascularisation technology compared with medical treatment.
Abstract:
The extraction of the finite-temperature heavy quark potential from lattice QCD relies on a spectral analysis of the real-time Wilson loop. Through its position and shape, the lowest-lying spectral peak encodes the real and imaginary parts of this complex potential. We benchmark this extraction strategy using leading-order hard thermal loop (HTL) calculations; that is, we analytically calculate the Wilson loop and determine the corresponding spectrum. By fitting its lowest-lying peak we obtain the real and imaginary parts and confirm that knowledge of the lowest peak alone is sufficient for obtaining the potential. We then apply a novel Bayesian approach for the reconstruction of spectral functions to HTL correlators in Euclidean time and observe how well the known spectral function and the values of the real and imaginary parts are reproduced. Finally, we apply the method to quenched lattice QCD data and perform an improved estimate of both the real and imaginary parts of the non-perturbative heavy quark-antiquark potential.
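The peak-fitting step described above can be mimicked with a toy example: for a Lorentzian (Breit-Wigner) shaped lowest-lying peak, the fitted peak position plays the role of the real part of the potential and the fitted width the role of the imaginary part. The "true" values and noise level below are invented, and neither the HTL calculation nor the Bayesian spectral reconstruction itself is performed here.

```python
# Toy fit of a Lorentzian lowest-lying peak: position ~ Re V, width ~ Im V.
# Mock data only; not the HTL result or the Bayesian reconstruction.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(omega, re_v, im_v, amp):
    return amp * im_v / ((omega - re_v) ** 2 + im_v ** 2)

rng = np.random.default_rng(3)
omega = np.linspace(-2.0, 2.0, 400)
true_re, true_im = 0.45, 0.12                          # hypothetical Re/Im values
rho = lorentzian(omega, true_re, true_im, 1.0)
rho_noisy = rho + rng.normal(0.0, 0.02, omega.size)    # mock reconstruction noise

popt, pcov = curve_fit(lorentzian, omega, rho_noisy, p0=[0.3, 0.3, 1.0])
err = np.sqrt(np.diag(pcov))
print(f"Re V = {popt[0]:.3f} +/- {err[0]:.3f} (true {true_re})")
print(f"Im V = {popt[1]:.3f} +/- {err[1]:.3f} (true {true_im})")
```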
Abstract:
Importance In treatment-resistant schizophrenia, clozapine is considered the standard treatment. However, clozapine use has restrictions owing to its many adverse effects. Moreover, an increasing number of randomized clinical trials (RCTs) of other antipsychotics have been published. Objective To integrate all the randomized evidence from the available antipsychotics used for treatment-resistant schizophrenia by performing a network meta-analysis. Data Sources MEDLINE, EMBASE, Biosis, PsycINFO, PubMed, Cochrane Central Register of Controlled Trials, World Health Organization International Trial Registry, and clinicaltrials.gov were searched up to June 30, 2014. Study Selection At least 2 independent reviewers selected published and unpublished single- and double-blind RCTs in treatment-resistant schizophrenia (any study-defined criterion) that compared any antipsychotic (at any dose and in any form of administration) with another antipsychotic or placebo. Data Extraction and Synthesis At least 2 independent reviewers extracted all data into standard forms and assessed the quality of all included trials with the Cochrane Collaboration's risk-of-bias tool. Data were pooled using a random-effects model in a Bayesian setting. Main Outcomes and Measures The primary outcome was efficacy as measured by overall change in symptoms of schizophrenia. Secondary outcomes included change in positive and negative symptoms of schizophrenia, categorical response to treatment, dropouts for any reason and for inefficacy of treatment, and important adverse events. Results Forty blinded RCTs with 5172 unique participants (71.5% men; mean [SD] age, 38.8 [3.7] years) were included in the analysis. Few significant differences were found in all outcomes. In the primary outcome (reported as standardized mean difference; 95% credible interval), olanzapine was more effective than quetiapine (-0.29; -0.56 to -0.02), haloperidol (-0.29; -0.44 to -0.13), and sertindole (-0.46; -0.80 to -0.06); clozapine was more effective than haloperidol (-0.22; -0.38 to -0.07) and sertindole (-0.40; -0.74 to -0.04); and risperidone was more effective than sertindole (-0.32; -0.63 to -0.01). A pattern of superiority for olanzapine, clozapine, and risperidone was seen in other efficacy outcomes, but results were not consistent and effect sizes were usually small. In addition, relatively few RCTs were available for antipsychotics other than clozapine, haloperidol, olanzapine, and risperidone. The most surprising finding was that clozapine was not significantly better than most other drugs. Conclusions and Relevance Insufficient evidence exists on which antipsychotic is more efficacious for patients with treatment-resistant schizophrenia, and blinded RCTs, in contrast to unblinded randomized effectiveness studies, provide little evidence of the superiority of clozapine compared with other second-generation antipsychotics. Future clozapine studies with high doses and patients with extremely treatment-refractory schizophrenia might be most promising to change the current evidence.
Abstract:
BACKGROUND Non-steroidal anti-inflammatory drugs (NSAIDs) are the backbone of osteoarthritis pain management. We aimed to assess the effectiveness of different preparations and doses of NSAIDs on osteoarthritis pain in a network meta-analysis. METHODS For this network meta-analysis, we considered randomised trials comparing any of the following interventions: NSAIDs, paracetamol, or placebo, for the treatment of osteoarthritis pain. We searched the Cochrane Central Register of Controlled Trials (CENTRAL) and the reference lists of relevant articles for trials published between Jan 1, 1980, and Feb 24, 2015, with at least 100 patients per group. The prespecified primary and secondary outcomes were pain and physical function, and were extracted in duplicate for up to seven timepoints after the start of treatment. We used an extension of multivariable Bayesian random effects models for mixed multiple treatment comparisons with a random effect at the level of trials. For the primary analysis, a random walk of first order was used to account for multiple follow-up outcome data within a trial. Preparations that used different total daily doses were considered separately in the analysis. To assess a potential dose-response relation, we used preparation-specific covariates assuming linearity on log relative dose. FINDINGS We identified 8973 manuscripts from our search, of which 74 randomised trials with a total of 58 556 patients were included in this analysis. 23 nodes, comprising seven different NSAIDs or paracetamol at specific daily doses, plus placebo, were considered. All preparations, irrespective of dose, improved point estimates of pain symptoms when compared with placebo. For six interventions (diclofenac 150 mg/day, etoricoxib 30 mg/day, 60 mg/day, and 90 mg/day, and rofecoxib 25 mg/day and 50 mg/day), the probability that the difference to placebo is at or below a prespecified minimum clinically important effect for pain reduction (effect size [ES] -0·37) was at least 95%. Among maximally approved daily doses, diclofenac 150 mg/day (ES -0·57, 95% credibility interval [CrI] -0·69 to -0·46) and etoricoxib 60 mg/day (ES -0·58, -0·73 to -0·43) had the highest probability to be the best intervention, both with 100% probability to reach the minimum clinically important difference. Treatment effects increased as drug dose increased, but corresponding tests for a linear dose effect were significant only for celecoxib (p=0·030), diclofenac (p=0·031), and naproxen (p=0·026). We found no evidence that treatment effects varied over the duration of treatment. Model fit was good, and between-trial heterogeneity and inconsistency were low in all analyses. All trials were deemed to have a low risk of bias for blinding of patients. Effect estimates did not change in sensitivity analyses with two additional statistical models and accounting for methodological quality criteria in meta-regression analysis. INTERPRETATION On the basis of the available data, we see no role for single-agent paracetamol for the treatment of patients with osteoarthritis irrespective of dose. We provide sound evidence that diclofenac 150 mg/day is the most effective NSAID available at present, in terms of improving both pain and function. Nevertheless, in view of the safety profile of these drugs, physicians need to consider our results together with all known safety information when selecting the preparation and dose for individual patients.
FUNDING Swiss National Science Foundation (grant number 405340-104762) and Arco Foundation, Switzerland.
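The probability statements quoted above can be sanity-checked with a simple normal approximation: treating the posterior for an effect size as normal, with a mean and standard deviation implied by the reported point estimate and 95% credibility interval, gives the probability of being at or below the minimum clinically important effect of -0.37. This shortcut is only an approximation to the full Bayesian dose-response network meta-analysis reported in the paper.

```python
# Worked check using the reported estimate for diclofenac 150 mg/day; the normal
# approximation to the posterior is an assumption, not the paper's model.
from scipy.stats import norm

mcid = -0.37
es, lo, hi = -0.57, -0.69, -0.46        # point estimate and 95% CrI as reported

sd = (hi - lo) / (2 * 1.96)             # approximate posterior standard deviation
p_mcid = norm.cdf(mcid, loc=es, scale=sd)
print(f"approx. P(ES <= {mcid}) = {p_mcid:.4f}")   # very close to 1, consistent
                                                   # with the reported probabilities
```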
Abstract:
The Fourth Amendment prohibits unreasonable searches and seizures in criminal investigations. The Supreme Court has interpreted this to require that police obtain a warrant prior to a search and that illegally seized evidence be excluded from trial. A consensus has developed in the law and economics literature that tort liability for police officers is a superior means of deterring unreasonable searches. We argue that this conclusion depends on the assumption of truth-seeking police, and develop a game-theoretic model to compare the two remedies when some police officers (the bad type) are willing to plant evidence in order to obtain convictions, even though other police (the good type) are not (where an officer's type is private information). We characterize the perfect Bayesian equilibria of the asymmetric-information game between the police and a court that seeks to minimize error costs in deciding whether to convict or acquit suspects. In this framework, we show that the exclusionary rule with a warrant requirement leads to superior outcomes (relative to tort liability) in terms of the truth-finding function of courts, because the warrant requirement can reduce the scope for bad types of police to plant evidence.
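The Bayesian updating at the heart of such a model can be illustrated numerically: given a prior share of "bad type" officers and an assumption about when they plant evidence, Bayes' rule gives the court's posterior probability that presented evidence is genuine. The parameter values below are hypothetical, and the sketch does not model the warrant requirement, payoffs, or the equilibrium analysis of the paper.

```python
# Court's posterior that presented evidence is genuine, by Bayes' rule, under
# hypothetical parameter values; not the paper's full game-theoretic model.
def posterior_genuine(p_bad, p_guilty, p_find_if_guilty, p_plant_if_bad):
    """P(evidence genuine | evidence presented)."""
    p_genuine = p_guilty * p_find_if_guilty                        # true evidence found
    p_planted = p_bad * (1 - p_guilty * p_find_if_guilty) * p_plant_if_bad
    return p_genuine / (p_genuine + p_planted)

print(posterior_genuine(p_bad=0.1, p_guilty=0.5,
                        p_find_if_guilty=0.8, p_plant_if_bad=1.0))
```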
Abstract:
When conducting a randomized comparative clinical trial, ethical, scientific, or economic considerations often motivate the use of interim decision rules after successive groups of patients have been treated. These decisions may pertain to the comparative efficacy or safety of the treatments under study, cost considerations, the desire to accelerate the drug evaluation process, or the likelihood of therapeutic benefit for future patients. At the time of each interim decision, an important question is whether patient enrollment should continue or be terminated, either because of a high probability that one treatment is superior to the other or because of a low probability that the experimental treatment will ultimately prove to be superior. The use of frequentist group sequential decision rules has become routine in the conduct of phase III clinical trials. In this dissertation, we present a new Bayesian decision-theoretic approach to the problem of designing a randomized group sequential clinical trial, focusing on two-arm trials with time-to-failure outcomes. Forward simulation is used to obtain optimal decision boundaries for each of a set of possible models. At each interim analysis, we use Bayesian model selection to adaptively choose the model having the largest posterior probability of being correct, and we then make the interim decision based on the boundaries that are optimal under the chosen model. We provide a simulation study to compare this method, which we call Bayesian Doubly Optimal Group Sequential (BDOGS), to corresponding frequentist designs using either O'Brien-Fleming (OF) or Pocock boundaries, as obtained from EaSt 2000. Our simulation results show that, over a wide variety of different cases, BDOGS either performs at least as well as both OF and Pocock, or on average provides a much smaller trial.
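One ingredient of such designs can be sketched very simply: a Bayesian interim look for a two-arm trial with exponential time-to-failure outcomes and conjugate Gamma priors on the hazard rates, where enrollment continues unless the posterior probability that the experimental arm is superior crosses a boundary. The data and the stopping cut-offs below are arbitrary placeholders, not the boundaries optimised by forward simulation and Bayesian model selection in the BDOGS method described above.

```python
# Simplified Bayesian interim look for exponential time-to-failure outcomes;
# hypothetical data and placeholder cut-offs, not BDOGS-optimised boundaries.
import numpy as np

rng = np.random.default_rng(7)

def posterior_prob_A_better(events_a, time_a, events_b, time_b,
                            a0=1.0, b0=1.0, n_mc=100_000):
    """P(hazard_A < hazard_B | data) under Gamma(a0, b0) priors on the hazards."""
    lam_a = rng.gamma(a0 + events_a, 1.0 / (b0 + time_a), n_mc)
    lam_b = rng.gamma(a0 + events_b, 1.0 / (b0 + time_b), n_mc)
    return np.mean(lam_a < lam_b)

p = posterior_prob_A_better(events_a=12, time_a=400.0,   # hypothetical interim data
                            events_b=20, time_b=380.0)
upper, lower = 0.99, 0.05                                  # placeholder boundaries
if p > upper:
    print(f"P = {p:.3f}: stop, experimental arm superior")
elif p < lower:
    print(f"P = {p:.3f}: stop, experimental arm unlikely to prove superior")
else:
    print(f"P = {p:.3f}: continue enrollment")
```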
Abstract:
Many phase II clinical studies in oncology use two-stage frequentist designs such as Simon's optimal design. However, they share a common logistical problem regarding patient accrual at the interim analysis. Strictly speaking, patient accrual may have to be suspended at the end of the first stage until all enrolled patients have had their outcomes, success or failure, observed. For example, when the study endpoint is six-month progression-free survival, patient accrual has to be stopped until all outcomes from stage I are observed. Study investigators may be concerned about suspending accrual after the first stage because of the loss of accrual momentum during this hiatus. We propose a two-stage phase II design that resolves the patient accrual problem caused by the interim analysis and can be used as an alternative to frequentist two-stage phase II designs in oncology.
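One way to see how a Bayesian rule can sidestep the accrual-suspension problem is the toy monitoring scheme below: a Beta-Binomial posterior for the response rate is updated as each six-month outcome matures, and accrual stops for futility only if the posterior probability of exceeding an uninteresting rate falls below a cut-off, so there is no need to pause enrollment while waiting for all stage I outcomes. The prior, the null rate p0, the cut-off, and the outcome sequence are all hypothetical; this is not the specific design proposed in the work above.

```python
# Toy Beta-Binomial futility monitoring with continuous accrual; all values
# are hypothetical and this is not the proposed two-stage design.
from scipy.stats import beta

p0, futility_cutoff = 0.20, 0.05     # uninteresting response rate and stopping cut-off
a_prior, b_prior = 0.5, 0.5          # Jeffreys-type prior on the response rate

outcomes = [0, 1, 0, 0, 1, 0, 0, 0, 1, 0]   # hypothetical successes/failures as they mature
successes = failures = 0
for i, y in enumerate(outcomes, start=1):
    successes += y
    failures += 1 - y
    p_promising = beta.sf(p0, a_prior + successes, b_prior + failures)
    print(f"after {i} outcomes: P(rate > {p0}) = {p_promising:.2f}")
    if p_promising < futility_cutoff:
        print("stop accrual for futility")
        break
```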