867 results for Bayesian hierarchical model
Abstract:
AIMS: Duchenne muscular dystrophy (DMD) is a muscle disease with serious cardiac complications. Changes in Ca(2+) homeostasis and oxidative stress were recently associated with cardiac deterioration, but the cellular pathophysiological mechanisms remain elusive. We investigated whether the activity of ryanodine receptor (RyR) Ca(2+) release channels is affected, whether changes in function are cause or consequence, and which post-translational modifications drive disease progression. METHODS AND RESULTS: Electrophysiological, imaging, and biochemical techniques were used to study RyRs in cardiomyocytes from mdx mice, an animal model of DMD. Young mdx mice show no changes in cardiac performance, but develop them after ∼8 months. Nevertheless, myocytes from mdx pups exhibited exaggerated Ca(2+) responses to mechanical stress and 'hypersensitive' excitation-contraction coupling (ECC), hallmarks of increased RyR Ca(2+) sensitivity. Both were normalized by antioxidants and by inhibitors of NAD(P)H oxidase and CaMKII, but not by NO synthase or PKA antagonists. Sarcoplasmic reticulum Ca(2+) load and leak were unchanged in young mdx mice. However, by the age of 4-5 months and in senescence, leak was increased and load was reduced, indicating disease progression. By this age, all pharmacological interventions listed above normalized Ca(2+) signals and corrected changes in ECC, Ca(2+) load, and leak. CONCLUSION: Our findings suggest that increased RyR Ca(2+) sensitivity precedes and presumably drives the progression of dystrophic cardiomyopathy, with oxidative stress initiating its development. RyR oxidation followed by phosphorylation, first by CaMKII and later by PKA, synergistically contributes to cardiac deterioration.
Abstract:
Information theory-based metrics such as mutual information (MI) are widely used as similarity measures for multimodal registration. Nevertheless, this metric may lead to matching ambiguity in non-rigid registration. Moreover, maximization of MI alone does not necessarily produce an optimal solution. In this paper, we propose a segmentation-assisted similarity metric based on point-wise mutual information (PMI). This similarity metric, termed SPMI, enhances registration accuracy by considering tissue classification probabilities as prior information, generated by an expectation maximization (EM) algorithm. The diffeomorphic demons algorithm is then adopted as the registration model and is optimized in a hierarchical framework (H-SPMI) that uses different levels of anatomical structure as prior knowledge. The proposed method is evaluated using BrainWeb synthetic data and clinical fMRI images. Both qualitative and quantitative assessments were performed, as well as a sensitivity analysis with respect to segmentation error. Compared to pure intensity-based approaches that only maximize mutual information, we show that the proposed algorithm provides significantly better accuracy on both synthetic and clinical data.
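As a point of reference for the intensity-based baseline that SPMI builds on, the following is a minimal sketch (not the authors' implementation) of mutual information estimated from a joint intensity histogram; the image arrays and bin count are illustrative assumptions.

```python
# Minimal sketch: mutual information between two images from a joint
# intensity histogram, the baseline similarity metric discussed above.
import numpy as np

def mutual_information(fixed, moving, bins=32):
    """Estimate MI(fixed, moving) from a joint histogram of intensities."""
    joint, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
    pxy = joint / joint.sum()                  # joint probability p(x, y)
    px = pxy.sum(axis=1, keepdims=True)        # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)        # marginal p(y)
    nz = pxy > 0                               # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Example: MI of an image with a noisy copy of itself exceeds MI with noise.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
print(mutual_information(img, img + 0.05 * rng.standard_normal(img.shape)))
print(mutual_information(img, rng.random((64, 64))))
```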
Abstract:
BACKGROUND Pathology studies have shown delayed arterial healing in culprit lesions of patients with acute coronary syndrome (ACS) compared with stable coronary artery disease (CAD) after placement of drug-eluting stents (DES). It is unknown whether similar differences exist in vivo during long-term follow-up. Using optical coherence tomography (OCT), we assessed differences in arterial healing between patients with ACS and stable CAD five years after DES implantation. METHODS AND RESULTS A total of 88 patients, contributing 53 ACS lesions with 7864 struts and 35 stable lesions with 5298 struts, were suitable for final OCT analysis five years after DES implantation. The analytical approach was based on a hierarchical Bayesian random-effects model. OCT endpoints were strut coverage, malapposition, protrusion, evaginations, and cluster formation. Uncovered (1.7% vs. 0.7%, adjusted p=0.041) or protruding struts (0.50% vs. 0.13%, adjusted p=0.038) were more frequent in ACS than in stable CAD lesions. A similar trend was observed for malapposed struts (1.33% vs. 0.45%, adj. p=0.072). Clusters of uncovered or malapposed/protruding struts were present in 34.0% of ACS and 14.1% of stable patients (adj. p=0.041). Coronary evaginations were more frequent in patients with ST-elevation myocardial infarction than in stable CAD patients (0.16 vs. 0.13 per cross section, p=0.027). CONCLUSION Uncovered, malapposed, and protruding stent struts, as well as clusters of delayed healing, may be more frequent in culprit lesions of ACS than in stable CAD patients late after DES implantation. Our observational findings suggest a differential healing response attributable to lesion characteristics of patients with ACS compared with stable CAD in vivo.
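The strut-level endpoints above are binary outcomes clustered within lesions, so the hierarchical Bayesian random-effects analysis can be pictured as a lesion-level random-intercept logistic model. Below is a hedged sketch of that structure, assuming PyMC and ArviZ are available and using simulated data rather than the study data; the variable names, priors, and effect sizes are illustrative assumptions, not the paper's specification.

```python
# Hedged sketch of a lesion-level random-intercept logistic model for strut
# coverage, in the spirit of the hierarchical Bayesian analysis described above.
import numpy as np
import pymc as pm
import arviz as az

rng = np.random.default_rng(1)
n_lesions, struts_per_lesion = 30, 100
lesion = np.repeat(np.arange(n_lesions), struts_per_lesion)
group = (np.arange(n_lesions) < 15).astype(int)            # 1 = ACS, 0 = stable
true_logit = -4.0 + 0.9 * group + rng.normal(0, 0.5, n_lesions)
uncovered = rng.binomial(1, 1 / (1 + np.exp(-true_logit[lesion])))

with pm.Model():
    alpha = pm.Normal("alpha", 0.0, 5.0)                   # baseline log-odds
    beta = pm.Normal("beta", 0.0, 5.0)                     # ACS vs. stable effect
    sigma = pm.HalfNormal("sigma", 1.0)                    # between-lesion SD
    u = pm.Normal("u", 0.0, sigma, shape=n_lesions)        # lesion random effects
    logit_p = alpha + beta * group[lesion] + u[lesion]
    pm.Bernoulli("uncovered", logit_p=logit_p, observed=uncovered)
    trace = pm.sample(1000, tune=1000, chains=2, random_seed=1)

print(az.summary(trace, var_names=["alpha", "beta", "sigma"]))
```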
Abstract:
This dissertation explores phase I dose-finding designs in cancer trials from three perspectives: alternative Bayesian dose-escalation rules, a design based on a time-to-dose-limiting-toxicity (DLT) model, and a design based on a discrete-time multi-state (DTMS) model. We list alternative Bayesian dose-escalation rules and perform a simulation study with intra-rule and inter-rule comparisons based on two statistical models to identify the most appropriate rule under certain scenarios. We provide evidence that all the Bayesian rules outperform the traditional "3+3" design in the allocation of patients and the selection of the maximum tolerated dose. The design based on a time-to-DLT model uses patients' DLT information over multiple treatment cycles to estimate the probability of DLT at the end of treatment cycle 1. Dose-escalation decisions are made whenever a cycle-1 DLT occurs, or two months after the previous checkpoint. Compared to a design based on a logistic regression model, the new design shows greater safety benefits for trials in which more late-onset toxicities are expected. As a trade-off, the new design requires more patients on average. The design based on a discrete-time multi-state (DTMS) model has three important attributes: (1) toxicities are categorized over a distribution of severity levels, (2) early toxicity may inform dose escalation, and (3) no suspension is required between accrual cohorts. The proposed model accounts for differences in the importance of the toxicity severity levels and for transitions between toxicity levels. We compare the operating characteristics of the proposed design with those of a similar design based on a fully-evaluated model that directly models the maximum observed toxicity level within each patient's entire assessment window. We describe settings in which, under comparable power, the proposed design shortens the trial. The proposed design offers more benefit compared to the alternative design as patient accrual becomes slower.
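As one concrete, generic example of a Bayesian dose-escalation rule of the kind compared in the dissertation (not its exact rules), the sketch below uses independent Beta-Binomial models per dose and recommends the highest dose whose posterior probability of exceeding a target DLT rate stays below a safety cap; the priors, target, cap, and data are illustrative assumptions.

```python
# Generic Bayesian escalation rule: Beta-Binomial posteriors per dose, with an
# overdose-probability criterion used to pick the highest admissible dose.
from scipy.stats import beta

target, overdose_cap = 0.30, 0.25              # target DLT rate, P(overdose) cap
prior_a, prior_b = 0.5, 1.5                    # weakly informative Beta prior
tox = [0, 1, 2, 4]                             # observed DLTs per dose (toy data)
n = [3, 6, 6, 6]                               # patients treated per dose

def recommend_dose(tox, n):
    """Highest dose with posterior P(DLT rate > target) below the overdose cap."""
    admissible = []
    for d, (x, m) in enumerate(zip(tox, n)):
        p_over = 1 - beta.cdf(target, prior_a + x, prior_b + m - x)
        if p_over < overdose_cap:
            admissible.append(d)
    return max(admissible) if admissible else None

print("Recommended dose level:", recommend_dose(tox, n))
```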
Abstract:
In numerous intervention studies and education field trials, random assignment to treatment occurs in clusters rather than at the level of individual observation. This departure from unit-level random assignment may be due to logistics, political feasibility, or ecological validity. Data within the same cluster or grouping are often correlated. Applying traditional regression techniques, which assume independence between observations, to clustered data produces consistent parameter estimates; however, such estimators are often inefficient compared to methods that incorporate the clustered nature of the data into the estimation procedure (Neuhaus, 1993). Multilevel models, also known as random-effects or random-components models, can be used to account for the clustering of data by estimating higher-level (group) as well as lower-level (individual) variation. Designing a study in which the unit of observation is nested within higher-level groupings requires determining sample sizes at each level. This study investigates the design and analysis of various sampling strategies for a 3-level repeated measures design, and their effect on parameter estimates, when the outcome variable of interest follows a Poisson distribution. Results of the study suggest that second-order PQL estimation produces the least biased estimates in the 3-level multilevel Poisson model, followed by first-order PQL and then second- and first-order MQL. The MQL estimates of both fixed and random parameters are generally satisfactory when the level-2 and level-3 variation is less than 0.10. However, as the higher-level error variance increases, the MQL estimates become increasingly biased. If convergence of the estimation algorithm is not obtained by the PQL procedure and the higher-level error variance is large, the estimates may be significantly biased; in this case, bias correction techniques such as bootstrapping should be considered as an alternative. For larger sample sizes, structures with 20 or more units sampled at the levels with normally distributed random errors produced more stable estimates with less sampling variance than structures with an increased number of level-1 units. For small sample sizes, sampling fewer units at the level with Poisson variation produces less sampling variation; however, this criterion is no longer important when sample sizes are large. Reference: Neuhaus J (1993). "Estimation Efficiency and Tests of Covariate Effects with Clustered Binary Data". Biometrics, 49, 989-996.
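The data-generating structure studied here can be pictured as counts at level 1 nested in level-2 units nested in level-3 groups, with normal random intercepts at the two upper levels. The sketch below simulates such data under assumed parameter values; the PQL/MQL fitting itself is done in specialized multilevel software and is not reproduced here.

```python
# Illustrative simulation of a 3-level Poisson model: Poisson counts at level 1,
# nested in level-2 units, nested in level-3 groups, with normal random
# intercepts at levels 2 and 3 (all parameter values are assumptions).
import numpy as np

rng = np.random.default_rng(42)
n3, n2, n1 = 20, 20, 10            # groups, units per group, observations per unit
beta0, beta1 = 0.5, 0.3            # fixed intercept and covariate effect
sd3, sd2 = 0.3, 0.3                # level-3 and level-2 random-intercept SDs

u3 = rng.normal(0, sd3, n3)                          # level-3 random effects
u2 = rng.normal(0, sd2, (n3, n2))                    # level-2 random effects
x = rng.normal(0, 1, (n3, n2, n1))                   # level-1 covariate
log_mu = beta0 + beta1 * x + u3[:, None, None] + u2[:, :, None]
y = rng.poisson(np.exp(log_mu))                      # level-1 Poisson counts

print("simulated counts shape:", y.shape, "mean count:", y.mean().round(2))
```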
Abstract:
Computer vision-based food recognition could be used to estimate a meal's carbohydrate content for diabetic patients. This study proposes a methodology for automatic food recognition based on the Bag of Features (BoF) model. An extensive technical investigation was conducted for the identification and optimization of the best performing components of the BoF architecture, as well as the estimation of the corresponding parameters. For the design and evaluation of the prototype system, a visual dataset with nearly 5,000 food images was created and organized into 11 classes. The optimized system computes dense local features using the scale-invariant feature transform on the HSV color space, builds a visual dictionary of 10,000 visual words using hierarchical k-means clustering, and finally classifies the food images with a linear support vector machine classifier. The system achieved classification accuracy on the order of 78%, demonstrating the feasibility of the proposed approach on a very challenging image dataset.
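The BoF pipeline described above (local descriptors, visual vocabulary, histogram encoding, linear SVM) can be sketched as follows. Assumptions: random vectors stand in for dense SIFT descriptors on the HSV color space, a small flat MiniBatchKMeans vocabulary replaces the 10,000-word hierarchical k-means, and scikit-learn is available; this is a toy illustration, not the study's system.

```python
# Toy Bag-of-Features pipeline: cluster local descriptors into a vocabulary,
# encode each image as a histogram of visual words, classify with a linear SVM.
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_images, desc_per_image, desc_dim, n_words, n_classes = 200, 50, 128, 64, 11

labels = rng.integers(0, n_classes, n_images)
# Class-dependent descriptor means so the toy problem is learnable.
class_means = rng.normal(0, 1, (n_classes, desc_dim))
descriptors = [class_means[labels[i]] + rng.normal(0, 1, (desc_per_image, desc_dim))
               for i in range(n_images)]

# 1) Build the visual vocabulary by clustering all local descriptors.
vocab = MiniBatchKMeans(n_clusters=n_words, random_state=0)
vocab.fit(np.vstack(descriptors))

# 2) Encode each image as a normalized histogram of visual-word assignments.
def encode(desc):
    words = vocab.predict(desc)
    hist = np.bincount(words, minlength=n_words).astype(float)
    return hist / hist.sum()

X = np.array([encode(d) for d in descriptors])

# 3) Train and evaluate a linear SVM on the BoF histograms.
split = n_images // 2
clf = LinearSVC(C=1.0).fit(X[:split], labels[:split])
print("toy accuracy:", clf.score(X[split:], labels[split:]))
```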
Abstract:
Information on the relationship between cumulative fossil CO2 emissions and multiple climate targets is essential to design emission mitigation and climate adaptation strategies. In this study, the transient response of a climate or environmental variable per trillion tonnes of CO2 emissions, termed TRE, is quantified for a set of impact-relevant climate variables and for a large set of multi-forcing scenarios extended to the year 2300 towards stabilization. A ∼1000-member ensemble of the Bern3D-LPJ carbon-climate model is applied, and model outcomes are constrained by 26 physical and biogeochemical observational data sets in a Bayesian, Monte Carlo-type framework. Uncertainties in TRE estimates include both scenario uncertainty and model response uncertainty. Cumulative fossil emissions of 1000 Gt C result in a global mean surface air temperature change of 1.9 °C (68 % confidence interval (c.i.): 1.3 to 2.7 °C), a decrease in surface ocean pH of 0.19 (0.18 to 0.22), and a steric sea level rise of 20 cm (13 to 27 cm until 2300). Linearity between cumulative emissions and transient response is high for pH and reasonably high for surface air and sea surface temperatures, but less pronounced for changes in the Atlantic meridional overturning, Southern Ocean and tropical surface water saturation with respect to biogenic structures of calcium carbonate, and carbon stocks in soils. The constrained model ensemble is also applied to determine the response to a pulse-like emission and in idealized CO2-only simulations. The transient climate response is constrained, primarily by long-term ocean heat observations, to 1.7 °C (68 % c.i.: 1.3 to 2.2 °C) and the equilibrium climate sensitivity to 2.9 °C (2.0 to 4.2 °C). This is consistent with results from CMIP5 models but inconsistent with recent studies that relied on short-term air temperature data affected by natural climate variability.
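The Bayesian, Monte Carlo-type constraint can be pictured as re-weighting ensemble members by how well they reproduce observations and then summarizing the weighted spread of a response metric. The sketch below is a hedged, single-observation toy version; the numbers are illustrative and do not represent the 26 data sets or the Bern3D-LPJ output.

```python
# Toy observational constraint: importance-weight ensemble members by a
# Gaussian likelihood for one observable, then report weighted quantiles.
import numpy as np

rng = np.random.default_rng(3)
n_members = 1000
tcr = rng.normal(2.0, 0.6, n_members)                    # prior ensemble of responses
ocean_heat = 0.4 * tcr + rng.normal(0, 0.1, n_members)   # modeled observable

obs, obs_sigma = 0.7, 0.1                                # observation and its error
log_w = -0.5 * ((ocean_heat - obs) / obs_sigma) ** 2
w = np.exp(log_w - log_w.max())
w /= w.sum()                                             # normalized importance weights

order = np.argsort(tcr)
cdf = np.cumsum(w[order])
lo, med, hi = np.interp([0.16, 0.50, 0.84], cdf, tcr[order])
print(f"constrained response: {med:.2f} (68% c.i. {lo:.2f} to {hi:.2f})")
```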
Abstract:
BACKGROUND Cam-type femoroacetabular impingement (FAI) resulting from an abnormal, nonspherical femoral head shape leads to chondrolabral damage and is considered a cause of early osteoarthritis. A previously developed experimental ovine FAI model induces a cam-type impingement that results in localized chondrolabral damage, replicating the patterns found in the human hip. Biochemical MRI modalities such as T2 and T2* may allow evaluation of cartilage biochemistry long before cartilage loss occurs and, for that reason, may be a worthwhile avenue of inquiry. QUESTIONS/PURPOSES We asked: (1) Does the histological grading of degenerated cartilage correlate with T2 or T2* values in this ovine FAI model? (2) How accurately can zones of degenerated cartilage be predicted with T2 or T2* MRI in this model? METHODS A cam-type FAI was induced in eight Swiss alpine sheep by performing a closing wedge intertrochanteric varus osteotomy. After an ambulation period of 10 to 14 weeks, the sheep were euthanized and a 3-T MRI of the hip was performed. T2 and T2* values were measured at six locations on the acetabulum and compared with the histological damage pattern using the Mankin score, an established histological scoring system for quantifying cartilage degeneration. Both T2 and T2* values are determined by cartilage water content and its collagen fiber network. Of the two, T2* mapping is a more modern sequence with technical advantages (eg, shorter acquisition time). Correlation between the Mankin score and the T2 and T2* values, respectively, was evaluated using Spearman's rank correlation coefficient. We used a hierarchical cluster analysis to calculate the positive and negative predictive values of T2 and T2* for predicting advanced cartilage degeneration (Mankin ≥ 3). RESULTS We found a negative correlation between the Mankin score and both the T2 (p < 0.001, r = -0.79) and T2* values (p < 0.001, r = -0.90). For the T2 MRI technique, we found a positive predictive value of 100% (95% confidence interval [CI], 79%-100%) and a negative predictive value of 84% (95% CI, 67%-95%). For the T2* technique, we found a positive predictive value of 100% (95% CI, 79%-100%) and a negative predictive value of 94% (95% CI, 79%-99%). CONCLUSIONS T2 and T2* MRI modalities can reliably detect early cartilage degeneration in the experimental ovine FAI model. CLINICAL RELEVANCE T2 and T2* MRI modalities have the potential to allow noninvasive monitoring of the natural course of osteoarthrosis and evaluation of the results of surgical treatments targeted at joint preservation.
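For readers unfamiliar with the statistics reported above, the snippet below illustrates, on synthetic numbers rather than the study data, how a Spearman correlation between Mankin score and a T2* value and the positive/negative predictive values for flagging Mankin ≥ 3 would be computed; the flagging threshold is an arbitrary assumption.

```python
# Illustrative computation of Spearman's rho and PPV/NPV on synthetic data.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(7)
mankin = rng.integers(0, 7, 48)                      # synthetic histology scores
t2_star = 25 - 2.0 * mankin + rng.normal(0, 2, 48)   # synthetic T2* values (ms)

rho, p = spearmanr(mankin, t2_star)
print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")

flagged = t2_star < 20                               # illustrative threshold
degenerated = mankin >= 3
ppv = np.mean(degenerated[flagged])                  # positive predictive value
npv = np.mean(~degenerated[~flagged])                # negative predictive value
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")
```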
Abstract:
Mental imagery and perception are thought to rely on similar neural circuits, and many recent behavioral studies have attempted to demonstrate interactions between actual physical stimulation and sensory imagery in the corresponding sensory modality. However, there has been a lack of theoretical understanding of the nature of these interactions, and both interferential and facilitatory effects have been found. Facilitatory effects appear strikingly similar to those that arise due to experimental manipulations of expectation. Using a self-motion discrimination task, we try to disentangle the effects of mental imagery from those of expectation by using a hierarchical drift diffusion model to investigate both choice data and response times. Manipulations of expectation are reasonably well understood in terms of their selective influence on parameters of the drift diffusion model, and in this study, we make the first attempt to similarly characterize the effects of mental imagery. We investigate mental imagery within the computational framework of control theory and state estimation.
• Mental imagery and perception are thought to rely on similar neural circuits; however, on more theoretical grounds, imagery seems to be closely related to the output of forward models (sensory predictions).
• We reanalyzed data from a study of imagined self-motion.
• Bayesian modeling of response times may allow us to disentangle the effects of mental imagery on behavior from other cognitive (top-down) effects, such as expectation.
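The behavioral model at the core of this analysis is the drift diffusion model, in which evidence accumulates noisily toward one of two decision boundaries. The sketch below simulates choices and response times from that process under illustrative parameter values; it does not reproduce the hierarchical (HDDM-style) fitting used in the study.

```python
# Euler simulation of the drift diffusion model: evidence x drifts at rate v
# with Gaussian noise until it hits 0 or the boundary separation a.
import numpy as np

def simulate_ddm(n_trials, v, a, t0, dt=0.001, sigma=1.0, seed=0):
    """Simulate choices (1 = upper boundary, 0 = lower) and response times."""
    rng = np.random.default_rng(seed)
    choices, rts = np.empty(n_trials, int), np.empty(n_trials)
    for i in range(n_trials):
        x, t = a / 2.0, 0.0                 # start midway between boundaries
        while 0.0 < x < a:
            x += v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
        choices[i], rts[i] = int(x >= a), t0 + t
    return choices, rts

# Higher drift rate -> more "upper" choices and faster decisions, the kind of
# signature that manipulations of expectation are thought to produce.
for v in (0.5, 1.5):
    c, r = simulate_ddm(500, v=v, a=2.0, t0=0.3)
    print(f"v={v}: P(upper)={c.mean():.2f}, mean RT={r.mean():.2f}s")
```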
Abstract:
Hierarchically clustered populations are often encountered in public health research, but the traditional methods used in analyzing this type of data are not always adequate. In the case of survival time data, more appropriate methods have only begun to surface in the last couple of decades. Such methods include multilevel statistical techniques which, although more complicated to implement than traditional methods, are more appropriate. One population that is known to exhibit a hierarchical structure is that of patients who utilize the health care system of the Department of Veterans Affairs, where patients are grouped not only by hospital but also by geographic network (VISN). This project analyzes survival time data sets housed at the Houston Veterans Affairs Medical Center Research Department using two different Cox proportional hazards regression models: a traditional model and a multilevel model. VISNs that exhibit significantly higher or lower survival rates than the rest are identified separately for each model. In this particular case, although there are differences in the results of the two models, they are not large enough to warrant using the more complex multilevel technique. This is shown by the small estimates of variance associated with levels two and three in the multilevel Cox analysis. Much of the difference exhibited in the identification of VISNs with high or low survival rates is attributable to computer hardware difficulties rather than to any significant improvement in the model.
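The hierarchical structure at issue (patients nested in hospitals nested in VISNs) can be illustrated by simulating clustered survival times with a shared frailty per network, the kind of between-cluster variation a multilevel Cox model is meant to capture. The sketch below uses assumed parameter values, not the VA data, and collapses the hierarchy to a single VISN level for brevity.

```python
# Simulation of clustered survival data: a log-normal frailty shared by all
# patients in a VISN multiplies each patient's hazard (all values assumed).
import numpy as np

rng = np.random.default_rng(11)
n_visn, pts_per_visn = 22, 200
baseline_hazard, beta_age = 0.05, 0.02

frailty = rng.lognormal(mean=0.0, sigma=0.3, size=n_visn)   # VISN-level frailty
visn = np.repeat(np.arange(n_visn), pts_per_visn)
age = rng.normal(65, 10, n_visn * pts_per_visn)

rate = baseline_hazard * frailty[visn] * np.exp(beta_age * (age - 65))
time = rng.exponential(1.0 / rate)                           # latent survival times
censor = rng.exponential(20.0, time.shape)                   # independent censoring
observed = np.minimum(time, censor)
event = (time <= censor).astype(int)

print("event rate:", event.mean().round(2),
      "median follow-up (yrs):", np.median(observed).round(1))
```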
Abstract:
Motivation: Population allele frequencies are correlated when populations have a shared history or when they exchange genes. Unfortunately, most models for allele frequency and inference about population structure ignore this correlation. Recent analytical results show that correlations among populations can be very high, which could affect estimates of population genetic structure. In this study, we propose a mixture beta model to characterize the allele frequency distribution among populations. This formulation incorporates the correlation among populations and extends the model to data with different clusters of populations. Results: Using simulated data, we show that, in general, the mixture model provides a good approximation of the among-population allele frequency distribution and a good estimate of the correlation among populations. Results from fitting the mixture model to a dataset of genotypes at 377 autosomal microsatellite loci from human populations indicate high correlation among populations, which should not be neglected. Traditional measures of population structure tend to overestimate the amount of genetic differentiation when this correlation is neglected. Inference is performed in a Bayesian framework.
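One simple way to see how shared history induces correlated allele frequencies is a Balding-Nichols-style construction, in which each locus has an ancestral frequency and population-specific frequencies are Beta-distributed around it. The sketch below is illustrative and is not the paper's mixture beta model; the number of loci echoes the dataset mentioned above, but F and the number of populations are assumptions.

```python
# Correlated population allele frequencies from a shared ancestral frequency:
# each population's frequency at a locus is Beta(p(1-F)/F, (1-p)(1-F)/F).
import numpy as np

rng = np.random.default_rng(5)
n_loci, n_pops, F = 377, 4, 0.1                  # F controls among-population spread

ancestral = rng.uniform(0.05, 0.95, n_loci)
a = ancestral * (1 - F) / F                      # Beta shape parameters per locus
b = (1 - ancestral) * (1 - F) / F
freqs = rng.beta(a[:, None], b[:, None], (n_loci, n_pops))

# Frequencies of different populations are correlated across loci because they
# share the ancestral frequency.
print(np.corrcoef(freqs.T).round(2))
```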
Abstract:
This paper uses Bayesian vector autoregressive models to examine the usefulness of leading indicators in predicting US home sales. The benchmark Bayesian model includes home sales, the price of homes, the mortgage rate, real personal disposable income, and the unemployment rate. We evaluate the forecasting performance of six alternative leading indicators by adding each, in turn, to the benchmark model. Out-of-sample forecast performance over three periods shows that the model that includes building permits authorized consistently produces the most accurate forecasts. Thus, the intention to build in the future provides good information with which to predict home sales. Another finding suggests that leading indicators with longer leads outperform the short-leading indicators.
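A minimal sketch of the model class used here, under stated assumptions rather than the paper's exact specification: a VAR(1) with a Gaussian shrinkage prior on the coefficients, whose posterior mean reduces to a ridge-style estimate that is then iterated forward to forecast. The variable roles, lag order, and shrinkage strength are illustrative.

```python
# Toy Bayesian VAR(1): Gaussian prior on coefficients -> ridge posterior mean,
# iterated forward for multi-step forecasts (simulated data, assumed values).
import numpy as np

rng = np.random.default_rng(2024)
T, k = 120, 3                                   # periods; e.g. sales, price, rate
A_true = np.array([[0.6, 0.1, -0.2],
                   [0.0, 0.8,  0.0],
                   [0.0, 0.0,  0.9]])
y = np.zeros((T, k))
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + rng.normal(0, 0.5, k)

X, Y = y[:-1], y[1:]                            # lagged regressors and targets
lam = 1.0                                        # prior precision (shrinkage)
# Posterior mean of the coefficient matrix under a N(0, 1/lam) prior.
A_post = np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ Y).T

h, last = 8, y[-1]
forecast = []
for _ in range(h):                               # iterate the posterior-mean VAR
    last = A_post @ last
    forecast.append(last.copy())
print(np.array(forecast).round(2))
```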
Abstract:
When conducting a randomized comparative clinical trial, ethical, scientific, or economic considerations often motivate the use of interim decision rules after successive groups of patients have been treated. These decisions may pertain to the comparative efficacy or safety of the treatments under study, cost considerations, the desire to accelerate the drug evaluation process, or the likelihood of therapeutic benefit for future patients. At the time of each interim decision, an important question is whether patient enrollment should continue or be terminated, either because of a high probability that one treatment is superior to the other or because of a low probability that the experimental treatment will ultimately prove to be superior. The use of frequentist group sequential decision rules has become routine in the conduct of phase III clinical trials. In this dissertation, we present a new Bayesian decision-theoretic approach to the problem of designing a randomized group sequential clinical trial, focusing on two-arm trials with time-to-failure outcomes. Forward simulation is used to obtain optimal decision boundaries for each of a set of possible models. At each interim analysis, we use Bayesian model selection to adaptively choose the model having the largest posterior probability of being correct, and we then make the interim decision based on the boundaries that are optimal under the chosen model. We provide a simulation study to compare this method, which we call Bayesian Doubly Optimal Group Sequential (BDOGS), to corresponding frequentist designs using either O'Brien-Fleming (OF) or Pocock boundaries, as obtained from EaSt 2000. Our simulation results show that, over a wide variety of cases, BDOGS either performs at least as well as both OF and Pocock, or on average provides a much smaller trial.
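Forward simulation for group sequential boundaries can be illustrated in a heavily simplified form: binary outcomes stand in for time-to-failure, and a single posterior-probability cutoff stands in for the per-model optimal boundaries that BDOGS selects adaptively. The sketch below estimates the false early-stopping rate of one cutoff under the null; all numbers are assumptions.

```python
# Forward simulation of a two-arm trial with interim looks: stop early for
# superiority when the Monte Carlo posterior P(p_exp > p_ctrl) exceeds a cutoff.
import numpy as np

rng = np.random.default_rng(9)
n_per_look, n_looks, cutoff = 25, 4, 0.975

def trial_stops_early(p_ctrl, p_exp):
    """Simulate one trial; return True if it stops early for superiority."""
    xc = xe = nc = ne = 0
    for _ in range(n_looks):
        xc += rng.binomial(n_per_look, p_ctrl); nc += n_per_look
        xe += rng.binomial(n_per_look, p_exp);  ne += n_per_look
        # Posterior draws under independent Beta(1, 1) priors for each arm.
        draws_c = rng.beta(1 + xc, 1 + nc - xc, 2000)
        draws_e = rng.beta(1 + xe, 1 + ne - xe, 2000)
        if np.mean(draws_e > draws_c) > cutoff:
            return True
    return False

# Simulating under the null (no true difference) estimates the probability of
# a false early-superiority claim for this cutoff.
false_stops = np.mean([trial_stops_early(0.4, 0.4) for _ in range(500)])
print("estimated false early-stop rate:", round(false_stops, 3))
```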
Abstract:
The joint modeling of longitudinal and survival data is a new approach in many applications such as HIV, cancer vaccine trials, and quality-of-life studies. There have been recent methodological developments with respect to each component of the joint model, as well as the statistical processes that link them together. Among these, second-order polynomial random effect models and linear mixed effects models are the most commonly used for the longitudinal trajectory function. In this study, we first relax the parametric constraints of polynomial random effect models by using Dirichlet process priors, and we consider three longitudinal markers rather than only one marker in a single joint model. Second, we use a linear mixed effect model for the longitudinal process in a joint model analyzing the three markers. These methods were applied to the Primary Biliary Cirrhosis sequential data, collected from a clinical trial of primary biliary cirrhosis (PBC) of the liver conducted between 1974 and 1984 at the Mayo Clinic. The effects of three longitudinal markers, (1) total serum bilirubin, (2) serum albumin, and (3) serum glutamic-oxaloacetic transaminase (SGOT), on patients' survival were investigated. The proportion of treatment effect was also studied using the proposed joint modeling approaches. Based on the results, we conclude that the proposed modeling approaches yield a better fit to the data and give less biased parameter estimates for these trajectory functions than previous methods. Model fit is also improved when three longitudinal markers are considered instead of only one. The results for the proportion of treatment effect from these joint models support the same conclusion as the final model of Fleming and Harrington (1991): bilirubin and albumin together have a stronger impact in predicting patients' survival and as surrogate endpoints for treatment.
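The joint-modeling idea of linking a longitudinal trajectory to survival can be pictured with a small simulation: each subject has a random intercept and slope for a marker, and the hazard at time t depends on the subject's current marker value. The sketch below uses assumed parameter values and a single marker, not the PBC data or the Dirichlet process formulation.

```python
# Simulate a joint longitudinal-survival structure: random-effects trajectory
# m_i(t) feeding a time-varying hazard h_i(t) = h0 * exp(alpha * m_i(t)),
# with event times drawn by thinning (h_max bounds the hazard for these values).
import numpy as np

rng = np.random.default_rng(17)
n_subjects, follow_up = 300, 10.0
b0 = rng.normal(1.0, 0.5, n_subjects)      # random intercepts (e.g., log bilirubin)
b1 = rng.normal(0.1, 0.05, n_subjects)     # random slopes
alpha, base_hazard = 0.8, 0.02             # association strength, baseline hazard

def marker(i, t):
    return b0[i] + b1[i] * t               # subject i's true trajectory at time t

def simulate_event(i, h_max=2.0):
    t = 0.0
    while t < follow_up:
        t += rng.exponential(1.0 / h_max)
        accept = base_hazard * np.exp(alpha * marker(i, t)) / h_max
        if t < follow_up and rng.random() < accept:
            return t, 1                    # event observed at time t
    return follow_up, 0                    # administratively censored

times, events = zip(*(simulate_event(i) for i in range(n_subjects)))
print("event rate:", np.mean(events).round(2),
      "median time:", np.median(times).round(1))
```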
Abstract:
This study investigates a theoretical model in which a longitudinal process that is a stationary Markov chain and a Weibull survival process share a bivariate random effect. Furthermore, a quality-of-life-adjusted survival is calculated as the weighted sum of survival time. Theoretical values of the population mean adjusted survival of the described model are computed numerically. The parameters of the bivariate random effect significantly affect the theoretical values of the population mean. Maximum-likelihood and Bayesian methods are applied to simulated data to estimate the model parameters. Based on the parameter estimates, the predicted population mean adjusted survival can then be calculated numerically and compared with the theoretical values. The Bayesian and maximum-likelihood methods provide parameter estimates and population mean predictions with comparable accuracy; however, the Bayesian method suffers from poor convergence due to autocorrelation and inter-variable correlation.
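A hedged sketch of the data-generating model described above, with assumed parameter values: a bivariate normal random effect shared by a two-state Markov chain for quality-of-life states and a Weibull survival time, with quality-of-life-adjusted survival computed as the utility-weighted time spent alive.

```python
# Simulate QoL-adjusted survival under a shared bivariate random effect:
# one component shifts the QoL Markov chain, the other the Weibull scale.
import numpy as np

rng = np.random.default_rng(21)
n, dt, utilities = 500, 0.25, np.array([1.0, 0.6])   # two QoL states, utilities
cov = np.array([[0.3, 0.15], [0.15, 0.3]])           # random-effect covariance
b = rng.multivariate_normal([0.0, 0.0], cov, n)      # (longitudinal, survival)
shape = 1.5                                          # Weibull shape

adj_survival = np.empty(n)
for i in range(n):
    scale = 5.0 * np.exp(b[i, 1])                    # subject-specific Weibull scale
    t_death = scale * rng.weibull(shape)
    # Two-state Markov chain over QoL; b[i, 0] shifts the staying probability.
    stay = 1.0 / (1.0 + np.exp(-(1.5 + b[i, 0])))
    state, qaly, t = 0, 0.0, 0.0
    while t < t_death:
        qaly += utilities[state] * min(dt, t_death - t)
        state = state if rng.random() < stay else 1 - state
        t += dt
    adj_survival[i] = qaly

print("mean QoL-adjusted survival:", adj_survival.mean().round(2))
```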