940 results for MCMC, Metropolis Hastings, Gibbs, Bayesian, OBMC, slice sampler, Python


Relevance:

20.00%

Publisher:

Abstract:

This paper presents a fully Bayesian approach that simultaneously combines basic event and statistically independent higher event-level failure data in fault tree quantification. Such higher-level data could correspond to train, sub-system or system failure events. The full Bayesian approach also allows the highest-level data that are usually available for existing facilities to be automatically propagated to lower levels. A simple example illustrates the proposed approach. The optimal allocation of resources for collecting additional data from a choice of different level events is also presented. The optimization is achieved using a genetic algorithm.
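A hedged sketch of the kind of data combination described above, not the paper's model: a toy two-event OR gate in which Beta-prior basic-event probabilities are updated jointly from hypothetical basic-event and system-level Binomial failure counts, using a random-walk Metropolis sampler. The counts, priors, and proposal scale are all illustrative assumptions.

```python
# Hypothetical counts and priors; random-walk Metropolis on the two
# basic-event probabilities of a toy two-event OR gate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

basic = [(2, 100), (1, 150)]   # (failures, demands) for basic events 1 and 2
system = (5, 80)               # higher-level (system) failure data

def log_post(p):
    p1, p2 = p
    if not (0 < p1 < 1 and 0 < p2 < 1):
        return -np.inf
    lp = stats.beta.logpdf(p1, 1, 1) + stats.beta.logpdf(p2, 1, 1)  # flat Beta priors
    for (k, n), pi in zip(basic, (p1, p2)):
        lp += stats.binom.logpmf(k, n, pi)      # basic-event likelihoods
    p_sys = 1 - (1 - p1) * (1 - p2)             # OR gate: system fails if any event fails
    lp += stats.binom.logpmf(*system, p_sys)    # system-level likelihood
    return lp

p = np.array([0.05, 0.05])
lp = log_post(p)
draws = []
for _ in range(20000):
    prop = p + rng.normal(scale=0.02, size=2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        p, lp = prop, lp_prop
    draws.append(p)
draws = np.array(draws[5000:])
print("posterior means of basic-event probabilities:", draws.mean(axis=0))
```

The system-level likelihood term is what lets the higher-level failure data propagate down to the basic-event probabilities.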

Relevance:

20.00%

Publisher:

Abstract:

OBJECTIVES: This study sought to evaluate the diagnostic accuracy of 64-slice multislice computed tomography coronary angiography (CTCA) for detecting coronary binary in-stent restenosis (ISR), compared with invasive coronary angiography (ICA). BACKGROUND: Noninvasive detection of ISR would provide an easier and safer way to conduct patient follow-up. METHODS: We performed CTCA in 81 patients after stent implantation, and 125 stented lesions were scanned. Two sets of images were reconstructed with different types of convolution kernels. On CTCA, neointimal proliferation was visually evaluated according to luminal contrast attenuation inside the stent. Lesions were graded as follows: grade 1, none or slight neointimal proliferation; grade 2, neointimal proliferation with no significant stenosis (<50%); grade 3, neointimal proliferation with moderate stenosis (> or =50%); and grade 4, neointimal proliferation with severe stenosis (> or =75%). Grades 3 and 4 were considered binary ISR. The diagnostic accuracy of CTCA compared with ICA was evaluated. RESULTS: By ICA, 24 ISRs were diagnosed. Sensitivity, specificity, positive predictive value, and negative predictive value were 92%, 81%, 54%, and 98% for the overall population, whereas the values were 91%, 93%, 77%, and 98% when unassessable segments (15 segments, 12%) were excluded. For assessable segments, CTCA correctly diagnosed 20 of the 22 ISRs detected by ICA. Six lesions without ISR were overestimated as ISR by CTCA. As the grade of neointimal proliferation by CTCA increased, the median percent diameter stenosis increased linearly. CONCLUSIONS: Binary ISR can be excluded with high probability by CTCA, with a moderate rate of false-positive results.
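For reference, the reported accuracy figures follow from the standard 2x2-table formulas; the sketch below recomputes the assessable-segment values from counts inferred from the abstract (20 true positives, 2 false negatives, 6 false positives, and 82 true negatives by subtraction) and is illustrative only.

```python
# Standard 2x2-table accuracy metrics; counts are inferred from the
# abstract's assessable-segment analysis and are illustrative only.
def diagnostic_accuracy(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),   # ISR on ICA correctly identified by CTCA
        "specificity": tn / (tn + fp),   # non-ISR correctly excluded by CTCA
        "ppv": tp / (tp + fp),           # probability of ISR given a positive CTCA
        "npv": tn / (tn + fn),           # probability of no ISR given a negative CTCA
    }

# 110 assessable segments: 20 true positives, 2 false negatives,
# 6 false positives, and 82 true negatives
print(diagnostic_accuracy(tp=20, fp=6, fn=2, tn=82))
```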

Relevance:

20.00%

Publisher:

Abstract:

BACKGROUND: Multislice computed tomography (MSCT) is a promising noninvasive method of detecting coronary artery disease (CAD). However, most data have been obtained in selected series of patients. The purpose of the present study was to investigate the accuracy of 64-slice MSCT (64 MSCT) in daily practice, without any patient selection. METHODS AND RESULTS: Using 64-slice MSCT coronary angiography (CTA), 69 consecutive patients, 39 (57%) of whom had previously undergone stent implantation, were evaluated. The mean heart rate during the scan was 72 beats/min, the scan time was 13.6 s, and the amount of contrast medium was 72 mL. The mean time span between invasive coronary angiography (ICAG) and CTA was 6 days. Significant stenosis was defined as a diameter reduction of >50%. Of 966 segments, 884 (92%) were assessable. Compared with ICAG, the sensitivity of CTA to diagnose significant stenosis was 90%, specificity 94%, positive predictive value (PPV) 89% and negative predictive value (NPV) 95%. With regard to 58 stented lesions, the sensitivity, specificity, PPV and NPV were 93%, 96%, 87% and 98%, respectively. In the patient-based analysis, the sensitivity, specificity, PPV and NPV of CTA to detect CAD were 98%, 86%, 98% and 86%, respectively. Eighty-two (8%) segments were not assessable because of irregular rhythm, calcification or tachycardia. CONCLUSION: 64 MSCT has high accuracy for the detection of significant CAD in an unselected patient population and can therefore be considered a valuable noninvasive technique.

Relevance:

20.00%

Publisher:

Abstract:

Traffic particle concentrations show considerable spatial variability within a metropolitan area. We consider latent variable semiparametric regression models for modeling the spatial and temporal variability of black carbon and elemental carbon concentrations in the greater Boston area. Measurements of these pollutants, which are markers of traffic particles, were obtained from several individual exposure studies conducted at specific household locations as well as 15 ambient monitoring sites in the city. The models allow for both flexible, nonlinear effects of covariates and for unexplained spatial and temporal variability in exposure. In addition, the different individual exposure studies recorded different surrogates of traffic particles, with some recording only outdoor concentrations of black or elemental carbon, some recording indoor concentrations of black carbon, and others recording both indoor and outdoor concentrations of black carbon. A joint model for outdoor and indoor exposure that specifies a spatially varying latent variable provides greater spatial coverage in the area of interest. We propose a penalised spline formulation of the model that relates to generalised kriging of the latent traffic pollution variable and leads to a natural Bayesian Markov Chain Monte Carlo algorithm for model fitting. We propose methods that allow us to control the degrees of freedom of the smoother in a Bayesian framework. Finally, we present results from an analysis that applies the model to data from summer and winter separately.
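A minimal sketch of the penalised-spline idea, not the paper's joint indoor/outdoor spatio-temporal model: a truncated-line spline with a Gaussian prior on the knot coefficients, fitted by a conjugate Gibbs sampler, where the ratio of error variance to spline-coefficient variance plays the role of the smoothing parameter that governs the degrees of freedom. The data, knots, and priors are assumptions.

```python
# All data, knots, and priors below are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(1)

# Simulated stand-in for a pollutant-versus-covariate relationship
n = 200
x = np.sort(rng.uniform(0, 1, n))
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=n)

knots = np.quantile(x, np.linspace(0.05, 0.95, 20))
X = np.column_stack([np.ones(n), x])                # fixed effects (intercept, slope)
Z = np.maximum(x[:, None] - knots[None, :], 0.0)    # truncated-line spline basis
C = np.column_stack([X, Z])
p, q = X.shape[1], Z.shape[1]

sig2_eps, sig2_u = 1.0, 1.0
fits = []
for it in range(3000):
    # coefficients | variances: multivariate normal (ridge penalty on the spline part)
    prec = C.T @ C / sig2_eps + np.diag([1e-8] * p + [1 / sig2_u] * q)
    cov = np.linalg.inv(prec)
    cov = (cov + cov.T) / 2
    coef = rng.multivariate_normal(cov @ (C.T @ y) / sig2_eps, cov)
    u = coef[p:]
    # variances | coefficients: inverse gamma (vague Gamma(0.01, 0.01) priors)
    resid = y - C @ coef
    sig2_eps = 1 / rng.gamma(0.01 + n / 2, 1 / (0.01 + resid @ resid / 2))
    sig2_u = 1 / rng.gamma(0.01 + q / 2, 1 / (0.01 + u @ u / 2))
    if it >= 1000:
        fits.append(C @ coef)

fit = np.mean(fits, axis=0)      # posterior mean of the smooth
print("smoothing ratio sig2_eps / sig2_u (last draw):", sig2_eps / sig2_u)
```

The posterior of the variance ratio is what determines how wiggly the fitted smooth is, which is the sense in which the degrees of freedom are controlled within the Bayesian framework.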

Relevance:

20.00%

Publisher:

Abstract:

Medical errors originating in health care facilities are a significant source of preventable morbidity, mortality, and healthcare costs. Voluntary error report systems that collect information on the causes and contributing factors of medical errors regardless of the resulting harm may be useful for developing effective harm prevention strategies. Some patient safety experts question the utility of data from errors that did not lead to harm to the patient, also called near misses. A near miss (a.k.a. close call) is an unplanned event that did not result in injury to the patient. Only a fortunate break in the chain of events prevented injury. We use data from a large voluntary reporting system of 836,174 medication errors from 1999 to 2005 to provide evidence that the causes and contributing factors of errors that result in harm are similar to the causes and contributing factors of near misses. We develop Bayesian hierarchical models for estimating the log odds of selecting a given cause (or contributing factor) of error given harm has occurred and the log odds of selecting the same cause given that harm did not occur. The posterior distribution of the correlation between these two vectors of log-odds is used as a measure of the evidence supporting the use of data from near misses and their causes and contributing factors to prevent medical errors. In addition, we identify the causes and contributing factors that have the highest or lowest log-odds ratio of harm versus no harm. These causes and contributing factors should also be a focus in the design of prevention strategies. This paper provides important evidence on the utility of data from near misses, which constitute the vast majority of errors in our data.
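A hedged sketch of the comparison the abstract describes, under simulated counts rather than the reporting-system data: independent Beta-Binomial models give posterior draws of the log odds that each cause is cited in harm reports and in near-miss reports, and the posterior of the correlation between the two log-odds vectors summarises how similar the cause profiles are. The counts, priors, and number of causes are assumptions.

```python
# Simulated counts stand in for the real harm / near-miss citation data.
import numpy as np

rng = np.random.default_rng(2)
n_causes, n_harm, n_near = 30, 5000, 100000

# Hypothetical citation counts per cause
true_p = rng.uniform(0.01, 0.3, n_causes)
harm_counts = rng.binomial(n_harm, true_p)
near_counts = rng.binomial(n_near, true_p * rng.uniform(0.8, 1.2, n_causes))

def logit_draws(counts, n, size):
    # Beta(0.5, 0.5) prior; posterior draws of the log odds for each cause
    p = rng.beta(0.5 + counts, 0.5 + n - counts, size=(size, len(counts)))
    return np.log(p / (1 - p))

harm_lo = logit_draws(harm_counts, n_harm, 4000)
near_lo = logit_draws(near_counts, n_near, 4000)
corr = [np.corrcoef(h, m)[0, 1] for h, m in zip(harm_lo, near_lo)]
print("posterior mean correlation:", np.mean(corr),
      "95% interval:", np.percentile(corr, [2.5, 97.5]))
```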

Relevance:

20.00%

Publisher:

Abstract:

We consider inference in randomized studies, in which repeatedly measured outcomes may be informatively missing due to dropout. In this setting, it is well known that full data estimands are not identified unless unverified assumptions are imposed. We assume a non-future dependence model for the dropout mechanism and posit an exponential tilt model that links non-identifiable and identifiable distributions. This model is indexed by non-identified parameters, which are assumed to have an informative prior distribution, elicited from subject-matter experts. Under this model, full data estimands are shown to be expressed as functionals of the distribution of the observed data. To avoid the curse of dimensionality, we model the distribution of the observed data using a Bayesian shrinkage model. In a simulation study, we compare our approach to a fully parametric and a fully saturated model for the distribution of the observed data. Our methodology is motivated by and applied to data from the Breast Cancer Prevention Trial.
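A minimal illustration of the exponential tilt construction, with simulated data and a made-up prior rather than the trial data and elicited prior: the dropouts' outcome distribution is taken proportional to exp(alpha*y) times the completers' distribution, so the full-data mean becomes a functional of the observed data, averaged over an informative prior on the non-identified tilt parameter alpha.

```python
# Simulated completers' outcomes and an assumed prior on the tilt parameter.
import numpy as np

rng = np.random.default_rng(3)
y_obs = rng.normal(1.0, 1.0, 400)      # completers' outcomes
p_drop = 0.3                           # observed drop-out proportion

def tilted_mean(y, alpha):
    w = np.exp(alpha * y)
    return np.sum(w * y) / np.sum(w)   # mean under the tilted (dropout) distribution

# Informative prior on the non-identified tilt parameter (an assumption)
alphas = rng.normal(-0.5, 0.25, 5000)
full_means = [(1 - p_drop) * y_obs.mean() + p_drop * tilted_mean(y_obs, a)
              for a in alphas]
lo, hi = np.percentile(full_means, [2.5, 97.5])
print("full-data mean: %.2f (95%% interval: %.2f to %.2f)"
      % (np.mean(full_means), lo, hi))
```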

Relevance:

20.00%

Publisher:

Abstract:

Latent class regression models are useful tools for assessing associations between covariates and latent variables. However, evaluation of key model assumptions cannot be performed using methods from standard regression models due to the unobserved nature of latent outcome variables. This paper presents graphical diagnostic tools to evaluate whether or not latent class regression models adhere to standard assumptions of the model: conditional independence and non-differential measurement. An integral part of these methods is the use of a Markov Chain Monte Carlo estimation procedure. Unlike standard maximum likelihood implementations for latent class regression model estimation, the MCMC approach allows us to calculate posterior distributions and point estimates of any functions of parameters. It is this convenience that allows us to provide the diagnostic methods that we introduce. As a motivating example we present an analysis focusing on the association between depression and socioeconomic status, using data from the Epidemiologic Catchment Area study. We consider a latent class regression analysis investigating the association between depression and socioeconomic status measures, where the latent variable depression is regressed on education and income indicators, in addition to age, gender, and marital status variables. While the fitted latent class regression model yields interesting results, the model parameters are found to be invalid due to the violation of model assumptions. The violation of these assumptions is clearly identified by the presented diagnostic plots. These methods can be applied to standard latent class and latent class regression models, and the general principle can be extended to evaluate model assumptions in other types of models.
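A hedged sketch of the MCMC machinery involved, with simulated binary items, two classes, and no covariates, so not the paper's latent class regression: a conjugate Gibbs sampler for a latent class model, whose posterior draws can then be turned into arbitrary functions of the parameters of the kind the diagnostic plots require.

```python
# Simulated data and a two-class latent class model; all settings are illustrative.
import numpy as np

rng = np.random.default_rng(4)

# n subjects, J binary symptom items, K latent classes
n, J, K = 500, 6, 2
true_theta = np.array([[0.8] * J, [0.2] * J])
z = rng.integers(0, K, n)
Y = rng.binomial(1, true_theta[z])

pi = np.full(K, 1.0 / K)
theta = rng.uniform(0.3, 0.7, (K, J))
keep = []
for it in range(2000):
    # class memberships | parameters
    logp = np.log(pi) + Y @ np.log(theta.T) + (1 - Y) @ np.log(1 - theta.T)
    prob = np.exp(logp - logp.max(axis=1, keepdims=True))
    prob /= prob.sum(axis=1, keepdims=True)
    z = np.array([rng.choice(K, p=pr) for pr in prob])
    # parameters | class memberships (conjugate Dirichlet / Beta updates)
    counts = np.bincount(z, minlength=K)
    pi = rng.dirichlet(1 + counts)
    for k in range(K):
        yk = Y[z == k]
        theta[k] = rng.beta(1 + yk.sum(axis=0), 1 + len(yk) - yk.sum(axis=0))
    if it >= 500:
        keep.append(theta.copy())

print("posterior mean item probabilities:\n", np.mean(keep, axis=0))
```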

Relevance:

20.00%

Publisher:

Abstract:

This paper describes the use of model-based geostatistics for choosing the optimal set of sampling locations, collectively called the design, for a geostatistical analysis. Two types of design situations are considered. These are retrospective design, which concerns the addition of sampling locations to, or deletion of locations from, an existing design, and prospective design, which consists of choosing optimal positions for a new set of sampling locations. We propose a Bayesian design criterion which focuses on the goal of efficient spatial prediction whilst allowing for the fact that model parameter values are unknown. The results show that in this situation a wide range of inter-point distances should be included in the design, and the widely used regular design is therefore not the optimal choice.
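A minimal sketch of a Bayesian design comparison in this spirit, with every setting an assumption rather than the paper's criterion: candidate designs are scored by the spatially averaged prediction variance of a Gaussian process with exponential correlation, averaged over prior draws of the unknown range parameter.

```python
# Illustrative design scoring on the unit square; all parameters are assumptions.
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(5)
grid = np.array([(i, j) for i in np.linspace(0, 1, 15)
                        for j in np.linspace(0, 1, 15)])

def avg_pred_var(design, phi, sigma2=1.0, tau2=0.1):
    # simple kriging variance at each grid point, averaged over the grid
    K = sigma2 * np.exp(-cdist(design, design) / phi) + tau2 * np.eye(len(design))
    k = sigma2 * np.exp(-cdist(grid, design) / phi)
    return np.mean(sigma2 - np.sum(k @ np.linalg.inv(K) * k, axis=1))

def score(design, phi_prior_draws):
    # Bayesian flavour: average the prediction variance over prior
    # uncertainty in the range parameter phi
    return np.mean([avg_pred_var(design, phi) for phi in phi_prior_draws])

phi_draws = rng.uniform(0.05, 0.4, 30)
regular = np.array([(i, j) for i in np.linspace(0.1, 0.9, 4)
                           for j in np.linspace(0.1, 0.9, 4)])
random_design = rng.uniform(0, 1, (16, 2))
print("regular lattice :", score(regular, phi_draws))
print("random design   :", score(random_design, phi_draws))
```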

Relevance:

20.00%

Publisher:

Abstract:

Objective. To examine effects of primary care physicians (PCPs) and patients on the association between charges for primary care and specialty care in a point-of-service (POS) health plan. Data Source. Claims from 1996 for 3,308 adult male POS plan members, each of whom was assigned to one of the 50 family practitioner-PCPs with the largest POS plan member-loads. Study Design. A hierarchical multivariate two-part model was fitted using a Gibbs sampler to estimate PCPs' effects on patients' annual charges for two types of services, primary care and specialty care, the associations among PCPs' effects, and within-patient associations between charges for the two services. Adjusted Clinical Groups (ACGs) were used to adjust for case-mix. Principal Findings. PCPs with higher case-mix adjusted rates of specialist use were less likely to see their patients at least once during the year (estimated correlation: -.40; 95% CI: -.71, -.008) and provided fewer services to patients that they saw (estimated correlation: -.53; 95% CI: -.77, -.21). Ten of 11 PCPs whose case-mix adjusted effects on primary care charges were significantly less than or greater than zero (p < .05) had estimated, case-mix adjusted effects on specialty care charges of the opposite sign (but not significantly different from zero). After adjustment for ACG and PCP effects, the within-patient, estimated odds ratio for any use of primary care given any use of specialty care was .57 (95% CI: .45, .73). Conclusions. PCPs and patients contributed independently to a trade-off between utilization of primary care and specialty care. The trade-off appeared to partially offset significant differences in the amount of care provided by PCPs. These findings were possible because we employed a hierarchical multivariate model rather than separate univariate models.
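A hedged, simplified sketch of a hierarchical two-part model, with simulated data and PyMC's default MCMC standing in for the paper's Gibbs sampler: PCP random effects enter both the probability of any primary-care use and the level of log charges given use, and the correlation between the two sets of effects summarises the kind of trade-off the abstract reports. All counts, priors, and effect sizes are assumptions.

```python
# Simulated claims-like data; PyMC is an assumed stand-in for the original sampler.
import numpy as np
import pymc as pm

rng = np.random.default_rng(6)
n_pcp, n_pat = 50, 1000
pcp = rng.integers(0, n_pcp, n_pat)
b_use, b_amt = rng.normal(0, 0.5, n_pcp), rng.normal(0, 0.3, n_pcp)
any_use = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + b_use[pcp]))))
log_charge = np.where(any_use == 1,
                      5.0 + b_amt[pcp] + rng.normal(0, 1, n_pat), np.nan)

with pm.Model():
    sd_u = pm.HalfNormal("sd_u", 1.0)
    sd_a = pm.HalfNormal("sd_a", 1.0)
    u = pm.Normal("u", 0.0, sd_u, shape=n_pcp)      # PCP effect: any primary-care use
    a = pm.Normal("a", 0.0, sd_a, shape=n_pcp)      # PCP effect: charge level given use
    alpha = pm.Normal("alpha", 0.0, 2.0)
    beta = pm.Normal("beta", 0.0, 5.0)
    sigma = pm.HalfNormal("sigma", 2.0)
    pm.Bernoulli("use", logit_p=alpha + u[pcp], observed=any_use)
    users = any_use == 1
    pm.Normal("logch", beta + a[pcp[users]], sigma, observed=log_charge[users])
    idata = pm.sample(1000, tune=1000, chains=2)

# Correlation between the two sets of PCP effects (posterior means)
u_hat = idata.posterior["u"].mean(("chain", "draw")).values
a_hat = idata.posterior["a"].mean(("chain", "draw")).values
print("corr(PCP use effect, PCP charge effect):", np.corrcoef(u_hat, a_hat)[0, 1])
```

A negative correlation between the two sets of PCP effects is the simplified analogue of the primary-care versus specialty-care trade-off described above.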