153 results for Continuous-time Markov Chain


Relevance:

100.00%

Publisher:

Abstract:

A statistical modeling method to accurately determine combustion chamber resonance is proposed and demonstrated. This method utilises Markov chain Monte Carlo (MCMC) via the Metropolis-Hastings (MH) algorithm to yield a probability density function for the combustion chamber frequency and to find the best estimate of the resonant frequency, along with its uncertainty. The accurate determination of combustion chamber resonance is then used to investigate various engine phenomena, with appropriate uncertainty, over a range of engine cycles. It is shown that, when operating on various ethanol/diesel fuel combinations, a 20% substitution yields the least inter-cycle variability in combustion chamber resonance.
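
As a concrete illustration of the approach, here is a minimal sketch of a random-walk Metropolis-Hastings sampler for a resonant-frequency posterior. The data, the Gaussian likelihood, the flat prior and all numerical values are hypothetical stand-ins, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: noisy per-cycle estimates of the
# combustion chamber resonant frequency (kHz).
obs = rng.normal(6.5, 0.3, size=200)

def log_posterior(f, sigma=0.3):
    """Gaussian likelihood with a flat prior on f (an assumption)."""
    return -0.5 * np.sum((obs - f) ** 2) / sigma**2

def metropolis_hastings(n_samples=20000, step=0.05, f0=6.0):
    samples = np.empty(n_samples)
    f, logp = f0, log_posterior(f0)
    for i in range(n_samples):
        prop = f + step * rng.standard_normal()      # random-walk proposal
        logp_prop = log_posterior(prop)
        if np.log(rng.random()) < logp_prop - logp:  # MH accept/reject
            f, logp = prop, logp_prop
        samples[i] = f
    return samples

draws = metropolis_hastings()[5000:]                 # discard burn-in
print(f"resonant frequency: {draws.mean():.3f} +/- {draws.std():.3f} kHz")
```

The retained draws approximate the posterior density of the frequency, giving both the best estimate and its uncertainty in one pass.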

Relevance:

100.00%

Publisher:

Abstract:

Statistical modeling of traffic crashes has been of interest to researchers for decades. Over the most recent decade, many crash models have accounted for extra-variation in crash counts, that is, variation over and above that accounted for by the Poisson density. This extra-variation, or dispersion, is theorized to capture unaccounted-for variation in crashes across sites. The majority of studies have assumed fixed dispersion parameters in over-dispersed crash models, which is tantamount to assuming that unaccounted-for variation is proportional to the expected crash count. Miaou and Lord [Miaou, S.P., Lord, D., 2003. Modeling traffic crash-flow relationships for intersections: dispersion parameter, functional form, and Bayes versus empirical Bayes methods. Transport. Res. Rec. 1840, 31–40] challenged the fixed dispersion parameter assumption and examined various dispersion parameter relationships when modeling urban signalized intersection accidents in Toronto. They suggested that further work is needed to determine whether their findings hold for rural and other intersection types, to corroborate those findings, and to explore alternative dispersion functions. This study builds upon the work of Miaou and Lord by exploring additional dispersion functions and using an independent data set, presenting an opportunity to corroborate their findings. Data from Georgia are used in this study. A Bayesian modeling approach with non-informative priors is adopted, using sampling-based estimation via Markov chain Monte Carlo (MCMC) and the Gibbs sampler. A total of eight model specifications were developed; four employed traffic flows as explanatory factors in the mean structure, while the remainder also included geometric factors in addition to major and minor road traffic flows. The models were compared and contrasted using the significance of coefficients, standard deviance, chi-square goodness-of-fit, and deviance information criterion (DIC) statistics. The findings indicate that the modeling of the dispersion parameter, which essentially explains the extra-variance structure, depends greatly on how the mean structure is modeled. In the presence of a well-defined mean function, the extra-variance structure generally becomes insignificant, i.e. the variance structure is a simple function of the mean. It appears that extra-variation is a function of covariates when the mean structure (expected crash count) is poorly specified and suffers from omitted variables. In contrast, when sufficient explanatory variables are used to model the mean (expected crash count), extra-Poisson variation is not significantly related to these variables. If these results are generalizable, they suggest that model specification may be improved by testing extra-variation functions for significance. They also suggest that the known influences on expected crash counts are likely to differ from the factors that help to explain unaccounted-for variation in crashes across sites.
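
To make the dispersion-parameter question concrete, the following sketch evaluates the NB2 log-likelihood under a fixed dispersion parameter and under a dispersion parameter modeled as a function of a covariate. The data, the mean function and the coefficient values are simulated stand-ins, not the Georgia data or the paper's fitted models:

```python
import numpy as np
from scipy.special import gammaln

def nb_loglik(y, mu, alpha):
    """Negative binomial (NB2) log-likelihood with dispersion alpha:
    Var(y) = mu + alpha * mu**2; alpha may be a scalar or per-site array."""
    r = 1.0 / alpha
    return np.sum(gammaln(y + r) - gammaln(r) - gammaln(y + 1)
                  + r * np.log(r / (r + mu)) + y * np.log(mu / (r + mu)))

# Hypothetical site data: major/minor road flows and crash counts.
rng = np.random.default_rng(1)
f_maj = rng.uniform(2, 20, 100)
f_min = rng.uniform(0.5, 5, 100)
y = rng.poisson(0.1 * f_maj * f_min ** 0.5)

mu = 0.1 * f_maj * f_min ** 0.5          # mean structure from flows only

# Fixed dispersion versus dispersion as a function of covariates,
# e.g. log(alpha_i) = g0 + g1 * log(f_maj_i), in the spirit of the
# relationships Miaou and Lord examined (coefficients are made up).
alpha_fixed = 0.5
alpha_i = np.exp(-1.0 + 0.2 * np.log(f_maj))

print(nb_loglik(y, mu, alpha_fixed), nb_loglik(y, mu, alpha_i))
```

In a full Bayesian treatment the mean and dispersion coefficients would carry non-informative priors and be sampled jointly by MCMC, as in the study.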

Relevance:

100.00%

Publisher:

Abstract:

Statisticians, along with other scientists, have made significant computational advances that enable the estimation of complex statistical models that were formerly intractable. The Bayesian inference framework, combined with Markov chain Monte Carlo estimation methods such as the Gibbs sampler, enables the estimation of discrete choice models such as the multinomial logit (MNL) model. MNL models are frequently applied in transportation research to model choice outcomes such as mode, destination, or route choices, or to model categorical outcomes such as crash outcomes. Recent developments allow for the modification of potentially limiting assumptions of MNL such as the independence from irrelevant alternatives (IIA) property. However, relatively little transportation-related research has focused on Bayesian MNL models, the tractability of which is of great value to researchers and practitioners alike. This paper addresses MNL model specification issues in the Bayesian framework, such as the value of including prior information on parameters, allowing for nonlinear covariate effects, and extensions to random parameter models, thereby relaxing the usual limiting IIA assumption. This paper also provides an example that demonstrates, using route-choice data, the considerable potential of the Bayesian MNL approach for transportation applications. The paper concludes with a discussion of the pros and cons of this Bayesian approach and identifies when its application is worthwhile.
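
A minimal sketch of the MNL form itself and the IIA property the paper discusses. The coefficients and route attributes below are hypothetical; a Bayesian treatment would place priors on beta and sample it by MCMC rather than fixing it:

```python
import numpy as np

def mnl_probs(V):
    """Multinomial logit choice probabilities from utilities V (softmax)."""
    e = np.exp(V - V.max())                   # numerically stabilised
    return e / e.sum()

# Hypothetical route-choice utilities: V_j = beta' x_j for three routes,
# with attributes (travel time in minutes, cost in dollars).
beta = np.array([-0.05, -0.10])
X = np.array([[30.0, 2.0],
              [40.0, 1.0],
              [35.0, 1.5]])
p = mnl_probs(X @ beta)

# IIA: the odds ratio between any two routes is unaffected by the
# presence or attributes of the third route.
print(p, p[0] / p[1])
```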

Relevance:

100.00%

Publisher:

Abstract:

Modern statistical models and computational methods can now incorporate uncertainty in the parameters used in Quantitative Microbial Risk Assessments (QMRA). Many QMRAs use Monte Carlo methods but work from fixed estimates of means, variances and other parameters. We illustrate the ease of estimating all parameters contemporaneously with the risk assessment, incorporating all the parameter uncertainty arising from the experiments from which these parameters are estimated. A Bayesian approach is adopted, using Markov chain Monte Carlo (MCMC) Gibbs sampling via the freely available software WinBUGS. The method and its ease of implementation are illustrated by a case study that involves incorporating three disparate datasets into an MCMC framework. The probabilities of infection obtained when the uncertainty associated with parameter estimation is incorporated into a QMRA are shown to be considerably more variable over various dose ranges than the analogous probabilities obtained when constants from the literature are simply 'plugged' in, as is done in most QMRAs. Neglecting these sources of uncertainty may lead to erroneous decisions for public health and risk management.
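
A small sketch of the core comparison, under the common exponential dose-response model (an assumption here, not necessarily the paper's model): plugging in a single literature constant versus propagating posterior draws of the parameter through the risk calculation. The numbers and the stand-in "posterior" are hypothetical; in the paper the draws would come from the WinBUGS MCMC fit:

```python
import numpy as np

rng = np.random.default_rng(2)

def p_infection(dose, r):
    """Exponential dose-response model: P(infection) = 1 - exp(-r * dose)."""
    return 1.0 - np.exp(-r * dose)

dose = np.logspace(0, 4, 50)                 # organisms ingested

# Plug-in approach: a single fixed constant for r, as in most QMRAs.
r_plugin = 0.005
p_plugin = p_infection(dose, r_plugin)

# Bayesian approach: propagate posterior draws of r through the model
# (a made-up lognormal posterior standing in for the MCMC output).
r_draws = rng.lognormal(np.log(0.005), 0.5, size=5000)
p_draws = p_infection(dose[:, None], r_draws[None, :])
lo, hi = np.percentile(p_draws, [2.5, 97.5], axis=1)

# Point estimate versus a 95% interval at one dose level: the interval
# shows the extra variability that plugging in a constant hides.
print(p_plugin[25], lo[25], hi[25])
```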

Relevance:

100.00%

Publisher:

Abstract:

Dental pulp cells (DPCs) have shown promising potential in dental tissue repair and regeneration. However, during in vitro culture these cells undergo replicative senescence, resulting in significant alterations in cell proliferation and differentiation. Recently, the transcription factors Oct-4, Sox2, c-Myc, and Klf4 have been reported to play a regulatory role in the stem cell self-renewal process, namely cell reprogramming. It is therefore of interest whether the replicative senescence that occurs during the culture of dental pulp cells is related to diminishing expression of these transcription factors. In this study, we investigated the expression of the reprogramming markers Oct-4, Sox2, and c-Myc in in vitro explant-cultured dental pulp tissues and explant-cultured dental pulp cells (DPCs) at various passages by immunofluorescence staining and real-time polymerase chain reaction analysis. Our results demonstrated that Oct-4, Sox2, and c-Myc translocated from the nucleus in the first 2 passages to the cytoplasm after the third passage in explant-cultured DPCs. The mRNA expression of Oct-4, Sox2, and c-Myc rose significantly over the first 2 passages, peaked at the second passage (P < .05), and then decreased with passage number thereafter (P < .05). For the first time, we demonstrated that the expression of the reprogramming markers Oct-4, Sox2, and c-Myc is detectable in early-passage DPCs, and that the sequential loss of these markers from the nucleus during DPC culture might be related to the cell fate of dental pulp-derived cells during long-term in vitro cultivation under current culture conditions.

Relevance:

100.00%

Publisher:

Abstract:

Most salad vegetables are eaten fresh by consumers. However, raw vegetables may pose a risk of transmitting opportunistic bacteria to immunocompromised people, including cystic fibrosis (CF) patients. In particular, CF patients are vulnerable to chronic Pseudomonas aeruginosa lung infections, and this organism is the primary cause of morbidity and mortality in this group. Clonal variants of P. aeruginosa have been identified as emerging threats to people afflicted with CF; however, it has not yet been established where these clones originate or how they are transmitted. Given the organism's aquatic environmental niche, it was hypothesised that vegetables may be a source of these clones. To test this hypothesis, lettuce, tomato, mushroom and bean sprout packages (n = 150) were analysed from a green grocer, a supermarket and a farmers' market within the Brisbane region, availability permitting. The internal and external surfaces of the vegetables were analysed separately for the presence of clonal strains of P. aeruginosa, using washing and homogenisation techniques respectively. This separation was an attempt to establish which surface was contaminated, so that recommendations could be made to decrease or eliminate P. aeruginosa from these foods prior to consumption. Soil and water samples (n = 17) from local farms were also analysed for the presence of P. aeruginosa. Presumptive identification of isolates recovered from these environmental samples was made based on growth on cetrimide agar at 42°C, presence of the cytochrome oxidase enzyme, and inability to ferment lactose. A P. aeruginosa duplex real-time polymerase chain reaction assay (PAduplex) was performed on all bacterial isolates presumptively identified as P. aeruginosa. Enterobacterial repetitive intergenic consensus strain-typing PCR (ERIC-PCR) was subsequently performed on the confirmed isolates. Although 72 P. aeruginosa isolates were recovered, none proved to be clonal strains. The significance of these findings is that vegetables may pose a risk of transmitting sporadic strains of P. aeruginosa to people afflicted with CF and, possibly, other immunocompromised people.

Relevance:

100.00%

Publisher:

Abstract:

With the identification of common single-locus point mutations as risk factors for thrombophilia, many DNA testing methodologies have been described for detecting these variations. Traditionally, functional or immunological testing methods have been used to investigate quantitative anticoagulant deficiencies. However, with the emergence of the genetic variations factor V Leiden, prothrombin 20210 and, to a lesser extent, methylene tetrahydrofolate reductase (MTHFR677) and the factor V HR2 haplotype, traditional testing methodologies have proved less useful, and DNA technology is instead more commonly employed in diagnostics. This review considers many of the DNA techniques that have proved useful in the detection of common genetic variants that predispose to thrombophilia. Techniques involving gel analysis are used to detect the presence or absence of restriction sites, electrophoretic mobility shifts, as in single-strand conformation polymorphism or denaturing gradient gel electrophoresis, and product formation in allele-specific amplification. Such techniques may be sensitive, but they are unwieldy and often need to be validated objectively. To overcome some of the limitations of gel analysis, especially when dealing with larger sample numbers, many alternative detection formats, such as closed-tube systems, microplates and microarrays (minisequencing, real-time polymerase chain reaction, and oligonucleotide ligation assays), have been developed. In addition, many of the emerging technologies take advantage of colourimetric or fluorescence detection (including energy transfer), which allows qualitative and quantitative interpretation of results. With the large variety of DNA technologies available, the choice of methodology will depend on several factors, including cost and the need for speed, simplicity and robustness. © 2000 Lippincott Williams & Wilkins.

Relevance:

100.00%

Publisher:

Abstract:

One of the surprising recurring phenomena observed in experiments with boosting is that the test error of the generated classifier usually does not increase as its size becomes very large, and often is observed to decrease even after the training error reaches zero. In this paper, we show that this phenomenon is related to the distribution of margins of the training examples with respect to the generated voting classification rule, where the margin of an example is simply the difference between the number of correct votes and the maximum number of votes received by any incorrect label. We show that techniques used in the analysis of Vapnik's support vector classifiers and of neural networks with small weights can be applied to voting methods to relate the margin distribution to the test error. We also show theoretically and experimentally that boosting is especially effective at increasing the margins of the training examples. Finally, we compare our explanation to those based on the bias-variance decomposition.
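
The margin of an example, as defined above, is straightforward to compute from ensemble vote counts; here is a minimal sketch with made-up votes, normalised to vote fractions:

```python
import numpy as np

def margins(votes, y):
    """Margin of each example under a voting classifier: the fraction of
    votes for the correct label minus the largest fraction received by
    any incorrect label. votes: (n_examples, n_labels) vote counts."""
    frac = votes / votes.sum(axis=1, keepdims=True)
    idx = np.arange(len(y))
    correct = frac[idx, y]
    frac[idx, y] = -np.inf                 # mask the correct label
    return correct - frac.max(axis=1)

# Hypothetical vote counts from an ensemble of 100 weak classifiers
# over 4 labels. A positive margin means a correct prediction, and
# larger margins are what boosting is shown to drive up.
votes = np.array([[70, 20, 5, 5],
                  [40, 35, 15, 10],
                  [30, 45, 15, 10]], dtype=float)
y = np.array([0, 0, 0])
print(margins(votes, y))                   # [0.5, 0.05, -0.15]
```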

Relevance:

100.00%

Publisher:

Abstract:

We consider the problem of prediction with expert advice in the setting where a forecaster is presented with several online prediction tasks. Instead of competing against the best expert separately on each task, we assume the tasks are related, and thus we expect that a few experts will perform well on the entire set of tasks. That is, our forecaster would like, on each task, to compete against the best expert chosen from a small set of experts. While we describe the “ideal” algorithm and its performance bound, we show that the computation required for this algorithm is as hard as computation of a matrix permanent. We present an efficient algorithm based on mixing priors, and prove a bound that is nearly as good for the sequential task presentation case. We also consider a harder case where the task may change arbitrarily from round to round, and we develop an efficient approximate randomized algorithm based on Markov chain Monte Carlo techniques.
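
For context, here is a minimal sketch of the exponentially weighted average forecaster that algorithms of this kind build on; the learning rate, losses and expert set are hypothetical, and the paper's mixing-priors and MCMC machinery extend this single-task update to sets of related tasks:

```python
import numpy as np

def hedge(losses, eta=0.5):
    """Exponentially weighted forecaster: maintain a distribution over
    experts and update it multiplicatively from each round's losses.
    losses: (n_rounds, n_experts) array with entries in [0, 1]."""
    n_experts = losses.shape[1]
    w = np.ones(n_experts) / n_experts
    total = 0.0
    for loss in losses:
        total += w @ loss                  # forecaster's expected loss
        w *= np.exp(-eta * loss)           # multiplicative update
        w /= w.sum()
    return total, w

# Hypothetical losses for 5 experts over 200 rounds; expert 2 is best,
# so the final weight vector should concentrate on it.
rng = np.random.default_rng(3)
L = rng.random((200, 5))
L[:, 2] *= 0.3
print(hedge(L))
```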
