899 results for Simulation analysis


Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND Numerous studies have demonstrated an association between endothelial shear stress (ESS) and neointimal formation after stent implantation. However, the role of ESS in the composition of the neointima and the underlying plaque remains unclear. METHODS Patients recruited in the Comfortable AMI-IBIS 4 study who were implanted with bare metal stents (BMS) or biolimus-eluting stents (BES) and had biplane coronary angiography at 13-month follow-up were included in the analysis. The intravascular ultrasound virtual-histology (IVUS-VH) and angiographic data were used to reconstruct the luminal surface and the stent in the stented segments. Blood flow simulation was performed on the stent surface, which was assumed to represent the luminal surface at baseline, to assess the association between ESS and neointima thickness. The predominant ESS was estimated in 3-mm segments and was correlated with the amount of neointima, the neointimal tissue composition, and the changes in the underlying plaque burden and composition. RESULTS Forty-three patients (18 implanted with BMS and 25 with BES) were studied. Negative correlations were noted between ESS and neointima thickness in both the BMS (P<0.001) and BES (P=0.002) groups. In BMS there was a negative correlation between predominant ESS and the percentage of the neointimal necrotic core component (P=0.015). In the BES group, the limited neointima formation did not allow evaluation of the effect of ESS on its tissue characteristics. ESS did not affect vessel wall remodeling or the plaque burden and composition behind BMS (P>0.10) and BES (P>0.45). CONCLUSIONS ESS determines neointimal formation in both BMS and BES and affects the composition of the neointima in BMS. Conversely, ESS does not impact the plaque behind the struts, irrespective of stent type, throughout 13 months of follow-up.


Recurrent wheezing or asthma is a common problem in children that has increased considerably in prevalence in the past few decades. The causes and underlying mechanisms are poorly understood, and it is thought that a number of distinct diseases causing similar symptoms are involved. Due to the lack of a biologically founded classification system, children are classified according to their observed disease-related features (symptoms, signs, measurements) into phenotypes. The objectives of this PhD project were a) to develop tools for analysing phenotypic variation of a disease, and b) to examine phenotypic variability of wheezing among children by applying these tools to existing epidemiological data. A combination of graphical methods (multivariate correspondence analysis) and statistical models (latent variable models) was used. In a first phase, a model for discrete variability (latent class model) was applied to data on symptoms and measurements from an epidemiological study to identify distinct phenotypes of wheezing. In a second phase, the modelling framework was expanded to include continuous variability (e.g. along a severity gradient) and combinations of discrete and continuous variability (factor models and factor mixture models). The third phase focused on validating the methods using simulation studies. The main body of this thesis consists of 5 articles (3 published, 1 submitted and 1 to be submitted) including applications, methodological contributions and a review. The main findings and contributions were: 1) The application of a latent class model to epidemiological data (symptoms and physiological measurements) yielded plausible phenotypes of wheezing with distinguishing characteristics that have previously been used as phenotype-defining characteristics. 2) A method was proposed for including responses to conditional questions (e.g. questions on severity or triggers of wheezing asked only of children with wheeze) in multivariate modelling. 3) A panel of clinicians was set up to agree on a plausible model for wheezing diseases. The model can be used to generate datasets for testing the modelling approach. 4) A critical review of methods for defining and validating phenotypes of wheeze in children was conducted. 5) The simulation studies showed that a parsimonious parameterisation of the models is required to identify the true underlying structure of the data. The developed approach can deal with some challenges of real-life cohort data such as variables of mixed mode (continuous and categorical), missing data and conditional questions. If carefully applied, the approach can be used to identify whether the underlying phenotypic variation is discrete (classes), continuous (factors) or a combination of these. These methods could help improve the precision of research into causes and mechanisms and contribute to the development of a new classification of wheezing disorders in children and other diseases which are difficult to classify.
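The latent class machinery described above can be viewed as a generative model: an unobserved class is drawn first, then each symptom is drawn independently given the class. A minimal sketch in Python; the two classes and all probabilities below are hypothetical illustrations, not the clinician-agreed model from the thesis:

```python
import random

def simulate_latent_class(n, class_probs, symptom_probs, seed=0):
    """Draw n subjects from a latent class model: sample the
    (unobserved) class first, then each binary symptom
    independently given the class."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        # sample the latent class from its marginal distribution
        u, c = rng.random(), 0
        while u > sum(class_probs[:c + 1]):
            c += 1
        # sample observed symptoms conditional on the class
        row = [1 if rng.random() < p else 0 for p in symptom_probs[c]]
        data.append((c, row))
    return data

# hypothetical 2-class example, e.g. "persistent" vs "transient" wheeze
data = simulate_latent_class(
    1000,
    class_probs=[0.3, 0.7],
    symptom_probs=[[0.9, 0.8, 0.7],    # class 0: high symptom prevalence
                   [0.2, 0.1, 0.05]],  # class 1: low symptom prevalence
)
```

Datasets generated this way (point 3 above) can be fed to a fitting routine to check whether the true class structure is recovered.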


Domestic dog rabies is an endemic disease in large parts of the developing world and also epidemic in previously free regions. For example, it continues to spread in eastern Indonesia and currently threatens adjacent rabies-free regions with high densities of free-roaming dogs, including remote northern Australia. Mathematical and simulation disease models are useful tools to provide insights on the most effective control strategies and to inform policy decisions. Existing rabies models typically focus on long-term control programs in endemic countries. However, simulation models describing the dog rabies incursion scenario in regions where rabies is still exotic are lacking. We here describe such a stochastic, spatially explicit rabies simulation model that is based on individual dog information collected in two remote regions in northern Australia. Illustrative simulations produced plausible results with epidemic characteristics expected for rabies outbreaks in disease-free regions (mean R0 of 1.7, epidemic peak 97 days post-incursion, vaccination as the most effective response strategy). Systematic sensitivity analysis identified that model outcomes were most sensitive to seven of the 30 model parameters tested. This model is suitable for exploring rabies spread and control before an incursion in populations of largely free-roaming dogs that live close together with their owners. It can be used for ad hoc contingency or response planning prior to and shortly after an incursion of dog rabies in previously free regions. One challenge that remains is model parameterisation, particularly how dogs' roaming, contact, and biting behaviours change following a rabies incursion in a previously rabies-free population.
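The headline figure above (a mean R0 of 1.7) can be illustrated with a deliberately stripped-down branching-process sketch: each infected dog causes a Poisson-distributed number of secondary cases per generation. This ignores the spatial structure, individual-dog data, and response strategies of the actual model; the Poisson offspring assumption and generation counts are illustrative only:

```python
import math
import random

def outbreak_generations(r0, n_gen, seed=1):
    """Galton-Watson sketch of early rabies spread: each infected dog
    causes Poisson(r0) secondary cases in the next generation."""
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's multiplication method; adequate for small lam
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1

    sizes = [1]  # a single incursion case
    for _ in range(n_gen):
        sizes.append(sum(poisson(r0) for _ in range(sizes[-1])))
        if sizes[-1] == 0:  # outbreak died out
            break
    return sizes

generation_sizes = outbreak_generations(1.7, 8)
```

With R0 above 1 the expected generation size grows geometrically, but individual runs can still go extinct early, which is why stochastic rather than purely deterministic models are used for incursion planning.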


The cerebellum is the major brain structure that contributes to our ability to improve movements through learning and experience. We have combined computer simulations with behavioral and lesion studies to investigate how modification of synaptic strength at two different sites within the cerebellum contributes to a simple form of motor learning—Pavlovian conditioning of the eyelid response. These studies are based on the wealth of knowledge about the intrinsic circuitry and physiology of the cerebellum and the straightforward manner in which this circuitry is engaged during eyelid conditioning. Thus, our simulations are constrained by the well-characterized synaptic organization of the cerebellum and further, the activity of cerebellar inputs during simulated eyelid conditioning is based on existing recording data. These simulations have allowed us to make two important predictions regarding the mechanisms underlying cerebellar function, which we have tested and confirmed with behavioral studies. The first prediction describes the mechanisms by which one of the sites of synaptic modification, the granule to Purkinje cell synapses (gr → Pkj) of the cerebellar cortex, could generate two time-dependent properties of eyelid conditioning—response timing and the ISI function. An empirical test of this prediction using small, electrolytic lesions of the cerebellar cortex revealed the pattern of results predicted by the simulations. The second prediction made by the simulations is that modification of synaptic strength at the other site of plasticity, the mossy fiber to deep nuclei synapses (mf → nuc), is under the control of Purkinje cell activity. The analysis predicts that this property should confer mf → nuc synapses with resistance to extinction. Thus, while extinction processes erase plasticity at the first site, residual plasticity at mf → nuc synapses remains. 
The residual plasticity at the mf → nuc site confers the cerebellum with the capability for rapid relearning long after the learned behavior has been extinguished. We confirmed this prediction using a lesion technique that reversibly disconnected the cerebellar cortex at various stages during extinction and reacquisition of eyelid responses. The results of these studies represent significant progress toward a complete understanding of how the cerebellum contributes to motor learning.


Analysis of recurrent events has been widely discussed in the medical, health services, insurance, and engineering areas in recent years. This research proposes to use a nonhomogeneous Yule process with the proportional intensity assumption to model the hazard function of recurrent events data and the associated risk factors. This method assumes that repeated events occur for each individual, with given covariates, according to a nonhomogeneous Yule process with intensity function λx(t) = λ0(t)·exp(x′β). One of the advantages of using a nonhomogeneous Yule process for recurrent events is that it assumes the recurrence rate is proportional to the number of events that have occurred up to time t. Maximum likelihood estimation is used to provide estimates of the parameters in the model, and a generalized scoring iterative procedure is applied in the numerical computation. Model comparisons between the proposed method and other existing recurrent-event models are addressed by simulation. An example concerning recurrent myocardial infarction events, compared between two distinct populations (Mexican-Americans and non-Hispanic Whites) in the Corpus Christi Heart Project, is examined.
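The intensity λx(t) = λ0(t)·exp(x′β), with the rate proportional to the event count so far, can be sketched for the special case of a constant baseline hazard and a single scalar covariate (both simplifying assumptions relative to the abstract): starting from state n = 1, the waiting time to the next event is exponential with rate n·λ0·exp(xβ).

```python
import math
import random

def simulate_yule(lam0, x, beta, t_max, seed=2):
    """Yule-type recurrent-event sketch with proportional intensity:
    from state n (starting at n = 1) the waiting time to the next
    event is Exponential(n * lam0 * exp(x * beta))."""
    rng = random.Random(seed)
    rate1 = lam0 * math.exp(x * beta)  # covariate-adjusted unit rate
    t, n, times = 0.0, 1, []
    while True:
        gap = rng.expovariate(n * rate1)
        if t + gap > t_max:
            return times
        t += gap
        times.append(t)
        n += 1
```

With x = 0 this reduces to a plain Yule (linear birth) process, whose expected count by time t is exp(lam0·t) − 1, which gives a quick sanity check on the simulator.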


Currently there is no general method to study the impact of population admixture within families on the assumptions of random mating and, consequently, Hardy-Weinberg equilibrium (HWE) and linkage equilibrium (LE), or on the inference obtained from traditional linkage analysis. First, through simulation, the effect of admixture of two populations on the log of the odds (LOD) score was assessed, using prostate cancer as the disease model. Comparisons between simulated mixed and homogeneous families were performed. LOD scores under both models of admixture (within families and within a data set of homogeneous families) were closest to the scores of homogeneous families from the population with the highest mixing proportion. Random sampling of families or ascertainment of families by disease affection status did not affect this observation, nor did the mode of inheritance (dominant/recessive) or the sample size. Second, after establishing the effect of admixture on the LOD score and the inference for linkage, the presence of disequilibria induced by population admixture within families was studied and an adjustment procedure was developed. The adjustment did not force all disequilibria to disappear, but because the families were adjusted for the population admixture, those replicates where the disequilibria exist are no longer affected by them in terms of maximization for linkage. Furthermore, the adjustment was able to exclude uninformative families, or families with such a high departure from HWE and/or LE that their LOD scores were not reliable. Together these observations imply that the presence of families of mixed population ancestry impacts linkage analysis in terms of both the LOD score and the estimate of the recombination fraction.


Do siblings of centenarians tend to have longer life spans? To answer this question, the life spans of 184 siblings of 42 centenarians were evaluated. Two important questions were addressed in analyzing the sibling data. First, a standard needed to be established to which the life spans of the 184 siblings could be compared. In this report, an external reference population is constructed from the U.S. life tables. Its estimated mortality rates are treated as baseline hazards from which the relative mortality of the siblings is estimated. Second, standard survival models, which assume independent observations, are invalid when correlation within families exists, as they underestimate the true variance. Three different approaches that allow for correlation are illustrated. First, the cumulative relative excess mortality between the siblings and their comparison group is calculated and used as an effective graphical tool, along with the product-limit estimator of the survival function. The variance estimator of the cumulative relative excess mortality is adjusted for potential within-family correlation using a Taylor linearization approach. Second, approaches that adjust for the inflated variance are examined: an adjusted one-sample log-rank test using the design effect originally proposed by Rao and Scott in the correlated binomial or Poisson distribution setting, and a robust variance estimator derived from the log-likelihood function of a multiplicative model. Neither of these two approaches provides an estimate of the within-family correlation, but the comparison with the standard remains valid under dependence. Last, using the frailty model concept, the multiplicative model, where the baseline hazards are known, is extended by adding a random frailty term based on the positive stable or the gamma distribution. Comparisons between the two frailty distributions are performed by simulation.
Based on the results from the various approaches, it is concluded that the siblings of centenarians had significantly lower mortality rates compared with their cohorts. The frailty models also indicate significant correlations between the life spans of the siblings.
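The core comparison against an external reference population reduces to observed versus expected deaths under the life-table hazard, i.e. a standardised mortality ratio. A minimal sketch; the records and reference rates below are hypothetical, and this ignores the within-family correlation adjustments that are the point of the report:

```python
def smr(records, baseline_rate):
    """Standardised mortality ratio sketch: observed deaths divided
    by the deaths expected under a reference (life-table) hazard.
    records: (years_at_risk, age_band, died) tuples;
    baseline_rate: age_band -> annual mortality rate."""
    observed = sum(died for _, _, died in records)
    expected = sum(years * baseline_rate[band] for years, band, _ in records)
    return observed / expected

# hypothetical sibling records and reference rates
baseline = {"60-79": 0.02, "80+": 0.08}
records = [(20, "60-79", 0), (15, "60-79", 1), (10, "80+", 1), (12, "80+", 0)]
ratio = smr(records, baseline)  # a value below 1 means lower mortality than the reference
```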


The discoveries of the BRCA1 and BRCA2 genes have made it possible for women from families with hereditary breast/ovarian cancer to determine whether they carry cancer-predisposing genetic mutations. Women with germline mutations have significantly higher probabilities of developing both cancers than the general population. Since the presence of a BRCA1 or BRCA2 mutation does not guarantee future cancer development, the appropriate course of action remains uncertain for these women. Prophylactic mastectomy and oophorectomy remain controversial, since the underlying premise for surgical intervention is based more upon reduction in the estimated risk of cancer than on actual evidence of clinical benefit. Issues incorporated in a woman's decision-making process include quality of life without breasts or ovaries, attitudes toward possible surgical morbidity, and the risk of future development of breast/ovarian cancer that remains despite prophylactic surgery. The incorporation of patient preferences into decision analysis models can determine the quality-adjusted survival of different prophylactic approaches to breast/ovarian cancer prevention. Monte Carlo simulation was conducted on 4 separate decision models representing prophylactic oophorectomy, prophylactic mastectomy, prophylactic oophorectomy/mastectomy and screening. The use of 3 separate preference assessment methods across different populations of women allows researchers to determine how quality-adjusted survival varies according to clinical strategy, method of preference assessment and the population from which preferences are assessed.
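The decision-model comparison rests on Monte Carlo estimates of quality-adjusted survival per strategy. A toy sketch with exponential survival and a single utility weight per strategy; every input below is hypothetical, standing in for the elicited preferences and the much richer state structure of the actual models:

```python
import random

def expected_qaly(strategy, n_sims=20000, seed=6):
    """Monte Carlo sketch: average simulated survival time weighted
    by a quality-of-life utility, per hypothetical woman."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sims):
        years = rng.expovariate(1.0 / strategy["mean_years"])
        total += years * strategy["utility"]
    return total / n_sims

# hypothetical strategy inputs, not values from the dissertation
mastectomy = {"mean_years": 38.0, "utility": 0.85}
screening = {"mean_years": 35.0, "utility": 0.95}
```

Because the utility weight multiplies the whole survival time, the ranking of strategies can flip depending on how preferences are assessed, which is exactly what varying the 3 assessment methods probes.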


Improvements in the analysis of microarray images are critical for accurately quantifying gene expression levels. The acquisition of accurate spot intensities directly influences the results and interpretation of statistical analyses. This dissertation discusses the implementation of a novel approach to the analysis of cDNA microarray images. We use a stellar photometric model, the Moffat function, to quantify microarray spots from nylon microarray images. The inherent flexibility of the Moffat shape model makes it ideal for quantifying microarray spots. We apply our novel approach to a Wilms' tumor microarray study and compare our results with a fixed-circle segmentation approach for spot quantification. Our results suggest that different spot feature extraction methods can have an impact on the ability of statistical methods to identify differentially expressed genes. We also used the Moffat function to simulate a series of microarray images under various experimental conditions. These simulations were used to validate the performance of various statistical methods for identifying differentially expressed genes. Our simulation results indicate that tests taking into account the dependency between mean spot intensity and variance estimation, such as the smoothened t-test, can better identify differentially expressed genes, especially when the number of replicates and mean fold change are low. The analysis of the simulations also showed that overall, a rank sum test (Mann-Whitney) performed well at identifying differentially expressed genes. Previous work has suggested the strengths of nonparametric approaches for identifying differentially expressed genes. We also show that multivariate approaches, such as hierarchical and k-means cluster analysis along with principal components analysis, are only effective at classifying samples when replicate numbers and mean fold change are high. 
Finally, we show how our stellar shape model approach can be extended to the analysis of 2D-gel images by adapting the Moffat function to take into account the elliptical nature of spots in such images. Our results indicate that stellar shape models offer a previously unexplored approach for the quantification of 2D-gel spots.
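The Moffat profile borrowed here from stellar photometry has a simple closed form, amp·(1 + r²/α²)^(−β). A sketch of the circular model, and of summing it over a pixel grid as a stand-in for model-based spot quantification (grid size and parameter values are arbitrary; the dissertation's fitting procedure is not reproduced):

```python
def moffat(x, y, x0, y0, amp, alpha, beta):
    """Circular Moffat profile: a peaked, heavy-tailed surface
    amp * (1 + r^2/alpha^2)^(-beta) centred at (x0, y0)."""
    r2 = (x - x0) ** 2 + (y - y0) ** 2
    return amp * (1.0 + r2 / alpha ** 2) ** (-beta)

def spot_intensity(x0, y0, amp, alpha, beta, size=15):
    """Sum the fitted model over a size-by-size pixel grid, a
    model-based alternative to fixed-circle segmentation."""
    return sum(moffat(i, j, x0, y0, amp, alpha, beta)
               for i in range(size) for j in range(size))
```

The 2D-gel extension mentioned above amounts to replacing r² with an elliptical quadratic form so the two axes can have different widths.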


With the recognition of the importance of evidence-based medicine, there is an emerging need for methods to systematically synthesize available data. Specifically, methods to provide accurate estimates of test characteristics for diagnostic tests are needed to help physicians make better clinical decisions. To provide more flexible approaches for meta-analysis of diagnostic tests, we developed three Bayesian generalized linear models. Two of these models, a bivariate normal and a binomial model, analyzed pairs of sensitivity and specificity values while incorporating the correlation between these two outcome variables. Noninformative independent uniform priors were used for the variance of sensitivity, specificity and correlation. We also applied an inverse Wishart prior to check the sensitivity of the results. The third model was a multinomial model where the test results were modeled as multinomial random variables. All three models can include specific imaging techniques as covariates in order to compare performance. Vague normal priors were assigned to the coefficients of the covariates. The computations were carried out using the 'Bayesian inference using Gibbs sampling' implementation of Markov chain Monte Carlo techniques. We investigated the properties of the three proposed models through extensive simulation studies. We also applied these models to a previously published meta-analysis dataset on cervical cancer as well as to an unpublished melanoma dataset. In general, our findings show that the point estimates of sensitivity and specificity were consistent among Bayesian and frequentist bivariate normal and binomial models. However, in the simulation studies, the estimates of the correlation coefficient from Bayesian bivariate models are not as good as those obtained from frequentist estimation regardless of which prior distribution was used for the covariance matrix. 
The Bayesian multinomial model consistently underestimated the sensitivity and specificity regardless of the sample size and correlation coefficient. In conclusion, the Bayesian bivariate binomial model provides the most flexible framework for future applications because of the following strengths: (1) it facilitates direct comparison between different tests; (2) it captures the variability in both sensitivity and specificity simultaneously, as well as the intercorrelation between the two; and (3) it can be directly applied to sparse data without ad hoc correction.
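The bivariate data model underlying these comparisons can be sketched on its generative side: study-level logit(sensitivity) and logit(specificity) are drawn as a correlated normal pair, then the observed counts are binomial. All parameter values below are illustrative, and the Gibbs-sampling fit itself is not shown:

```python
import math
import random

def simulate_studies(n_studies, mu_se, mu_sp, sd_se, sd_sp, rho, n_per, seed=3):
    """Generate meta-analysis data from a bivariate-normal model:
    correlated logit(sensitivity)/logit(specificity) per study,
    then binomial true-positive and true-negative counts."""
    rng = random.Random(seed)
    inv_logit = lambda z: 1.0 / (1.0 + math.exp(-z))
    out = []
    for _ in range(n_studies):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho ** 2) * rng.gauss(0.0, 1.0)
        se = inv_logit(mu_se + sd_se * z1)
        sp = inv_logit(mu_sp + sd_sp * z2)
        tp = sum(rng.random() < se for _ in range(n_per))  # diseased subjects
        tn = sum(rng.random() < sp for _ in range(n_per))  # healthy subjects
        out.append((tp, n_per, tn, n_per))
    return out

studies = simulate_studies(20, mu_se=1.5, mu_sp=2.0, sd_se=0.5, sd_sp=0.5,
                           rho=-0.4, n_per=50)
```

A negative rho mimics the usual threshold trade-off: studies with higher sensitivity tend to have lower specificity.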


Colorectal cancer is the fourth most commonly diagnosed cancer in the United States. Each year, about 147,000 people are diagnosed with colorectal cancer and 56,000 people lose their lives to the disease. Most hereditary nonpolyposis colorectal cancers (HNPCC) and 12% of sporadic colorectal cancers show microsatellite instability. Colorectal cancer is a multistep progressive disease. It starts from a mutation in a normal colorectal cell and grows into a clone of cells that further accumulates mutations and finally develops into a malignant tumor. In terms of molecular evolution, the process of colorectal tumor progression represents the acquisition of sequential mutations. Clinical studies use biomarkers such as microsatellites or single nucleotide polymorphisms (SNPs) to study mutation frequencies in colorectal cancer. Microsatellite data obtained from single-genome-equivalent PCR or small-pool PCR can be used to infer tumor progression. Since tumor progression is similar to population evolution, we used the coalescent approach, which is well established in population genetics, to analyze this type of data. Coalescent theory can infer a sample's evolutionary path through the analysis of microsatellite data. The simulation results indicate that the constant-population-size pattern and the rapid-tumor-growth pattern have different genetic polymorphic patterns. The simulation results were compared with experimental data collected from HNPCC patients. The preliminary result shows that the mutation rate in 6 HNPCC patients ranges from 0.001 to 0.01. The patients' polymorphic patterns are similar to the constant-population-size pattern, which implies that tumor progression proceeds through multilineage persistence rather than clonal sequential evolution. These results should be further verified using a larger dataset.
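Microsatellite change is commonly idealised as a stepwise mutation model, which is the forward process that coalescent inference works backward through. A forward sketch along a single cell lineage; the per-division mutation probability below is arbitrary (the abstract's inferred rates were 0.001 to 0.01), and the symmetric one-step assumption is a standard simplification:

```python
import random

def microsatellite_drift(n_divisions, mu, seed=5):
    """Stepwise-mutation sketch for one microsatellite locus: at each
    cell division the repeat count gains or loses one unit with
    probability mu, tracked relative to the ancestral allele."""
    rng = random.Random(seed)
    length = 0  # change relative to the ancestral repeat count
    for _ in range(n_divisions):
        if rng.random() < mu:
            length += rng.choice([-1, 1])
    return length
```

Comparing the spread of such changes across sampled lineages against the spread expected under constant versus rapidly growing populations is, in miniature, the comparison the abstract describes.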


In geographical epidemiology, maps of disease rates and disease risk provide a spatial perspective for researching disease etiology. For rare diseases, or when the population base is small, the rate and risk estimates may be unstable. Empirical Bayesian (EB) methods have been used to spatially smooth the estimates by permitting an area estimate to "borrow strength" from its neighbors. Such EB methods include the use of a Gamma model, of a James-Stein estimator, and of a conditional autoregressive (CAR) process. A fully Bayesian analysis of the CAR process is proposed. One advantage of this fully Bayesian analysis is that it can be implemented simply by repeated sampling from the posterior densities; use of a Markov chain Monte Carlo technique such as the Gibbs sampler is not necessary. Direct resampling from the posterior densities provides exact small-sample inferences instead of the approximate asymptotic analyses of maximum likelihood methods (Clayton & Kaldor, 1987). Further, the proposed CAR model allows covariates to be included. A simulation demonstrates the effect of sample size on the fully Bayesian analysis of the CAR process. The methods are applied to lip cancer data from Scotland, and the results are compared.
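Of the smoothing approaches listed, the Gamma model has a closed-form sketch: with cases_i ~ Poisson(pop_i·θ_i) and θ_i ~ Gamma(a, b) (shape a, rate b), the posterior mean rate for area i is (cases_i + a)/(pop_i + b), which shrinks each raw rate toward the prior mean a/b. This is the conjugate EB baseline, not the fully Bayesian CAR analysis the work proposes, and the numbers below are hypothetical:

```python
def eb_smooth(cases, pops, a, b):
    """Poisson-Gamma empirical Bayes smoothing: posterior mean rate
    (cases_i + a) / (pop_i + b) for each area, i.e. the raw rate
    cases_i / pop_i shrunk toward the prior mean a / b."""
    return [(y + a) / (n + b) for y, n in zip(cases, pops)]

# hypothetical areas: small populations give unstable raw rates
cases = [0, 3, 1]
pops = [120, 150, 3000]
smoothed = eb_smooth(cases, pops, a=2.0, b=400.0)
```

The posterior mean is a weighted average of the raw rate and the prior mean, with weight pop_i/(pop_i + b) on the data, so small areas are pulled hardest toward the map-wide rate, exactly the "borrowing strength" the abstract describes.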


In regression analysis, covariate measurement error occurs in many applications. The error-prone covariates are often referred to as latent variables. In this study, we extended the work of Chan et al. (2008) on recovering the latent slope in a simple regression model to the multiple regression setting. We present an approach that applies the Monte Carlo method within the Bayesian framework to a parametric regression model with measurement error in an explanatory variable. The proposed estimator applies the conditional expectation of the latent slope given the observed outcome and surrogate variables in the multiple regression model. A simulation study showed that the method produces an efficient estimator in the multiple regression model, especially when the measurement error variance of the surrogate variable is large.
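The problem being solved here is easy to demonstrate: regressing the outcome on a surrogate w = x + u attenuates the slope by the reliability σx²/(σx² + σu²). The sketch below uses a method-of-moments correction with the reliability assumed known, which is a simplification for illustration, not the proposed Bayesian conditional-expectation estimator:

```python
import random

def attenuation_demo(n, beta, sx, su, se, seed=4):
    """Simulate y = beta*x + e with x observed only through the
    surrogate w = x + u; return the naive OLS slope (attenuated)
    and a reliability-corrected slope."""
    rng = random.Random(seed)
    x = [rng.gauss(0.0, sx) for _ in range(n)]
    w = [xi + rng.gauss(0.0, su) for xi in x]
    y = [beta * xi + rng.gauss(0.0, se) for xi in x]

    def cov(a, b):
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        return sum((p - ma) * (q - mb) for p, q in zip(a, b)) / (len(a) - 1)

    naive = cov(w, y) / cov(w, w)
    reliability = sx ** 2 / (sx ** 2 + su ** 2)  # assumed known here
    return naive, naive / reliability
```

With equal signal and error variances the naive slope is cut roughly in half, which is the distortion any measurement-error method must undo.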


Mixture modeling is commonly used to model categorical latent variables that represent subpopulations in which population membership is unknown but can be inferred from the data. In recent years, finite mixture models have been applied to time-to-event data. However, the commonly used survival mixture model assumes that while the effects of the covariates on failure times differ across latent classes, the covariate distribution is homogeneous. The aim of this dissertation is to develop a method to examine time-to-event data in the presence of unobserved heterogeneity within a mixture modeling framework. A joint model is developed to incorporate the latent survival trajectory along with the observed information for the joint analysis of a time-to-event variable, its discrete and continuous covariates, and a latent class variable. It is assumed that both the effects of covariates on survival times and the distribution of the covariates vary across latent classes. The unobservable survival trajectories are identified by estimating the probability that a subject belongs to a particular class based on observed information. We applied this method to a Hodgkin lymphoma study with long-term follow-up and observed four distinct latent classes in terms of long-term survival and distributions of prognostic factors. Our results from simulation studies and from the Hodgkin lymphoma study demonstrate the superiority of our joint model over the conventional survival model. This flexible inference method provides more accurate estimation and accommodates unobservable heterogeneity among individuals while taking the interactions between covariates into consideration.


Multi-center clinical trials are very common in the development of new drugs and devices. One concern in such trials is the effect of individual investigational sites enrolling small numbers of patients on the overall result. Can the presence of small centers cause an ineffective treatment to appear effective when the treatment-by-center interaction is not statistically significant? In this research, simulations are used to study the effect that centers enrolling few patients may have on the analysis of clinical trial data. A multi-center clinical trial with 20 sites is simulated to investigate the effect of a new treatment in comparison to a placebo. Twelve of the 20 investigational sites are considered small, each enrolling fewer than four patients per treatment group. Three clinical trials are simulated, with sample sizes of 100, 170 and 300. The simulated data are generated with various characteristics: one in which the treatment should be considered effective and another in which it is not. Qualitative interactions are also produced within the small sites to further investigate the effect of small centers under various conditions. Standard analysis of variance methods and the "sometimes-pool" testing procedure are applied to the simulated data. One model investigates treatment and center effects and the treatment-by-center interaction; another investigates the treatment effect alone. These analyses are used to determine the power to detect treatment-by-center interactions and the probability of type I error. We find it is difficult to detect treatment-by-center interactions when only a few investigational sites enrolling a limited number of patients participate in the interaction. However, we find no increased risk of type I error in these situations.
In a pooled analysis, when the treatment is not effective, the probability of finding a significant treatment effect in the absence of a significant treatment-by-center interaction is well within the standard limits of type I error.
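The type I error question can be sketched in its simplest pooled form: simulate a null trial, test the pooled treatment difference, and count rejections. This deliberately strips out the center terms, the qualitative interactions, and the sometimes-pool procedure, and assumes unit-variance normal outcomes tested with a z-test (all simplifications relative to the study):

```python
import math
import random

def pooled_p_value(n_per_arm, effect, seed):
    """One simulated two-arm trial with centers collapsed: two-sided
    z-test on the difference in means, outcome sd known to be 1."""
    rng = random.Random(seed)
    a = [rng.gauss(0.0, 1.0) for _ in range(n_per_arm)]
    b = [rng.gauss(effect, 1.0) for _ in range(n_per_arm)]
    z = (sum(b) / n_per_arm - sum(a) / n_per_arm) / math.sqrt(2.0 / n_per_arm)
    # two-sided normal p-value via the error function
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

# empirical type I error under no treatment effect (170 patients total)
rejections = sum(pooled_p_value(85, 0.0, seed=s) < 0.05 for s in range(1000))
type_i_rate = rejections / 1000
```

Under the null the rejection rate hovers near the nominal 0.05, mirroring the report's conclusion that pooling over nonsignificant interactions does not inflate type I error in this simple setting.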