939 results for Markov process modeling


Relevance: 40.00%

Abstract:

In the modern, complex world, stress at work is an increasingly common feature of day-to-day life. For this reason, job stress has been an active area of occupational health and safety research for over four decades and continues to attract researchers in academia and industry. Job stress in process industries is of particular concern because of its influence on process safety and on worker health and safety. Safety in the process (chemical and nuclear material) industries is of paramount importance, especially in a densely populated country like India. Job stress is a main factor in inducing work-related musculoskeletal disorders, which in turn can affect worker health and safety in process industries. In view of the above, process industries should try to minimize job stress in workers to ensure a safe and healthy working climate for both the industry and the worker. This research is mainly aimed at assessing the influence of job stress in inducing work-related musculoskeletal disorders in chemical process industries in India.

Relevance: 40.00%

Abstract:

Analyses of high-density single-nucleotide polymorphism (SNP) data, such as genetic mapping and linkage disequilibrium (LD) studies, require phase-known haplotypes to allow for the correlation between tightly linked loci. However, current SNP genotyping technology cannot determine phase, which must be inferred statistically. In this paper, we present a new Bayesian Markov chain Monte Carlo (MCMC) algorithm for population haplotype frequency estimation, particularly in the context of LD assessment. The novel feature of the method is the incorporation of a log-linear prior model for population haplotype frequencies. We present simulations suggesting that (1) the log-linear prior model is more appropriate than the standard coalescent process in the presence of recombination (>0.02 cM between adjacent loci), and (2) there is substantial inflation in measures of LD obtained by a "two-stage" approach that treats the "best" haplotype configuration as correct, without regard to uncertainty in the recombination process. Genet Epidemiol 25:106-114, 2003. (C) 2003 Wiley-Liss, Inc.
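The phase-ambiguity problem this method addresses can be illustrated with the classical EM algorithm for two biallelic loci, where only the double heterozygote is phase-ambiguous. This is a simple maximum-likelihood stand-in for the paper's Bayesian MCMC, and the genotype coding and starting frequencies below are illustrative choices:

```python
def em_two_locus(counts, iters=200):
    """EM estimate of two-locus haplotype frequencies from genotype counts.
    counts: dict keyed by (locus-1 genotype, locus-2 genotype), e.g. ('Aa', 'Bb').
    Only the double heterozygote ('Aa', 'Bb') is phase-ambiguous."""
    p = dict.fromkeys(('AB', 'Ab', 'aB', 'ab'), 0.25)  # uniform start
    fixed = {  # unambiguous haplotype pairs per genotype
        ('AA', 'BB'): ('AB', 'AB'), ('AA', 'Bb'): ('AB', 'Ab'),
        ('AA', 'bb'): ('Ab', 'Ab'), ('Aa', 'BB'): ('AB', 'aB'),
        ('Aa', 'bb'): ('Ab', 'ab'), ('aa', 'BB'): ('aB', 'aB'),
        ('aa', 'Bb'): ('aB', 'ab'), ('aa', 'bb'): ('ab', 'ab'),
    }
    for _ in range(iters):
        e = dict.fromkeys(p, 0.0)
        for g, n in counts.items():
            if g == ('Aa', 'Bb'):
                # E-step: split double heterozygotes between the two phases
                w = p['AB']*p['ab'] / (p['AB']*p['ab'] + p['Ab']*p['aB'])
                for h, frac in (('AB', w), ('ab', w), ('Ab', 1 - w), ('aB', 1 - w)):
                    e[h] += n*frac
            else:
                for h in fixed[g]:
                    e[h] += n
        total = sum(e.values())          # M-step: renormalize expected counts
        p = {h: e[h]/total for h in p}
    return p
```

The paper's MCMC replaces this point estimate with full posterior sampling, which is what lets it propagate phase uncertainty into downstream LD measures instead of conditioning on one "best" configuration.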

Relevance: 40.00%

Abstract:

We analyze the large-time behavior of a stochastic model for the lay-down of fibers on a moving conveyor belt in the production process of nonwovens. It is shown that under weak conditions this degenerate diffusion process has a unique invariant distribution and is even geometrically ergodic. This generalizes results from previous works [M. Grothaus and A. Klar, SIAM J. Math. Anal., 40 (2008), pp. 968–983; J. Dolbeault et al., arXiv:1201.2156], which concern a stationary conveyor belt and left the case of a moving conveyor belt open.
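An Euler–Maruyama simulation of a toy lay-down process conveys the structure of such models: the lay-down point moves with unit speed in a direction angle, noise enters only through that angle (never directly through position), which is exactly what makes the diffusion degenerate, and the belt speed appears as a constant drift in x. The specific drift form and all parameter values below are an illustrative caricature, not the paper's system:

```python
import math
import random

def fiber_laydown(steps=20000, dt=1e-3, kappa=1.0, sigma=0.8,
                  belt_speed=0.2, seed=1):
    """Euler-Maruyama for a toy fiber lay-down process in the belt frame.
    The deposition point (x, y) moves with unit speed in direction alpha;
    alpha is steered back toward the origin and perturbed by white noise.
    Since noise acts only on alpha, the (x, y, alpha) diffusion is degenerate."""
    rng = random.Random(seed)
    x = y = alpha = 0.0
    for _ in range(steps):
        target = math.atan2(-y, -x)            # direction back toward the origin
        alpha += (kappa*math.sin(target - alpha)*dt
                  + sigma*math.sqrt(dt)*rng.gauss(0.0, 1.0))
        x += (math.cos(alpha) - belt_speed)*dt  # belt motion enters the x-drift
        y += math.sin(alpha)*dt
    return x, y
```

Geometric ergodicity of the real model means that, despite the degeneracy, long simulations like this one sample a unique invariant distribution at an exponential rate.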

Relevance: 40.00%

Abstract:

This work presents a Bayesian semiparametric approach for dealing with regression models where the covariate is measured with error. Given that (1) the error normality assumption is very restrictive, and (2) assuming a specific elliptical distribution for the errors (Student-t, for example) may be somewhat presumptuous, there is a need for more flexible methods that assume only symmetry of the errors (admitting unknown kurtosis). In this sense, the main advantage of this extended Bayesian approach is the possibility of considering generalizations of the elliptical family of models by using Dirichlet process priors in dependent and independent situations. Conditional posterior distributions are implemented, allowing the use of Markov chain Monte Carlo (MCMC) to generate the posterior distributions. An interesting result is that the Dirichlet process prior is not updated in the case of the dependent elliptical model. Furthermore, an analysis of a real data set is reported to illustrate the usefulness of our approach in dealing with outliers. Finally, the proposed semiparametric models and the parametric normal model are compared graphically via the posterior densities of the coefficients. (C) 2009 Elsevier Inc. All rights reserved.
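The Dirichlet process prior at the heart of this approach can be sampled with the standard stick-breaking construction: a draw G ~ DP(alpha, G0) is an infinite discrete mixture, truncated here for illustration. The function name and the Gaussian base measure used in the test are arbitrary choices, not the paper's setup:

```python
import random

def stick_breaking(alpha, base_draw, n_atoms=200, seed=0):
    """Truncated stick-breaking representation of a draw G ~ DP(alpha, G0):
    break a unit stick with Beta(1, alpha) fractions; each broken-off piece
    is the weight of an atom drawn from the base measure G0."""
    rng = random.Random(seed)
    atoms, weights, remaining = [], [], 1.0
    for _ in range(n_atoms):
        v = rng.betavariate(1.0, alpha)   # Beta(1, alpha) stick fraction
        weights.append(remaining * v)
        remaining *= 1.0 - v              # length of stick still unbroken
        atoms.append(base_draw(rng))
    return atoms, weights
```

In the paper's mixture setting each atom would carry error-distribution parameters rather than a scalar; smaller alpha concentrates mass on fewer atoms, which is what gives the prior its robustness to outliers.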

Relevance: 40.00%

Abstract:

The design and implementation of an ERP system involve capturing the information necessary for implementing the system's structure and behavior in support of enterprise management. This process should start at the enterprise modeling level and finish at the coding level, going down through different abstraction layers. In the case of Free/Open Source ERP, the lack of proper modeling methods and tools jeopardizes the advantages of source code availability. Moreover, the distributed, decentralized decision-making and source-code-driven development culture of open source communities generally does not rely on methods for modeling the higher abstraction levels necessary for an ERP solution. The aim of this paper is to present a model-driven development process for the open source ERP ERP5. The proposed process covers the different abstraction levels involved, taking into account well-established standards and common practices as well as new approaches, by supplying Enterprise, Requirements, Analysis, Design, and Implementation workflows. Copyright 2008 ACM.

Relevance: 40.00%

Abstract:

In this study, the flocculation process in continuous systems with chambers in series was analyzed using the classical kinetic model of aggregation and break-up proposed by Argaman and Kaufman, which incorporates two main parameters: Ka and Kb. Typical values for these parameters were used, i.e., Ka = 3.68x10^-5 to 1.83x10^-4 and Kb = 1.83x10^-7 to 2.30x10^-7 s^-1. The analysis consisted of performing simulations of system behavior under different operating conditions, including variations in the number of chambers used and the utilization of fixed or scaled velocity gradients in the units. The response variable analyzed in all simulations was the total retention time necessary to achieve a given flocculation efficiency, which was determined by means of conventional solution methods for nonlinear algebraic equations corresponding to the material balances on the system. Values for the number of chambers ranging from 1 to 5, velocity gradients of 20-60 s^-1 and flocculation efficiencies of 50-90% were adopted.
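The kind of calculation described can be sketched directly: the Argaman–Kaufman steady-state balance for each completely mixed chamber is a simple recurrence in the outlet particle number, and the total retention time for a target efficiency can be found by a coarse search. The Ka and Kb defaults are illustrative picks from inside the ranges quoted above, and the linear search stands in for the "conventional solution methods" of the study:

```python
def retention_time_for_efficiency(m, G, efficiency,
                                  Ka=1.0e-4, Kb=2.0e-7, t_step=1.0):
    """Total retention time (s) for m equal, completely mixed flocculation
    chambers in series to reach efficiency 1 - N_m/N_0 under the
    Argaman-Kaufman aggregation/break-up balance. Chamber i at steady state
    with per-chamber retention time t satisfies
        N_{i-1} - N_i = t * (Ka*G*N_i - Kb*G**2*N_0).
    """
    t = 0.0
    while t < 1.0e6:
        t += t_step
        n = 1.0                                   # N_0 normalized to 1
        for _ in range(m):
            # balance solved for the chamber outlet number N_i
            n = (n + Kb*G*G*t) / (1.0 + Ka*G*t)
        if 1.0 - n >= efficiency:
            return m * t
    raise ValueError("target efficiency unreachable with these parameters")
```

Note the break-up term puts a floor of roughly Kb*G/Ka on the outlet number of each chamber, so very high efficiencies become unreachable at large velocity gradients, which is the trade-off the simulations in the study explore.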

Relevance: 40.00%

Abstract:

Experiments on continuous alcoholic fermentation of sugarcane juice with flocculating yeast recycle were conducted in a system of two 0.22-L tower bioreactors in series, operated at a range of dilution rates (D1 = D2 = 0.27-0.95 h^-1), a constant recycle ratio (alpha = F_R/F = 4.0) and a sugar concentration in the feed stream (S0) of around 150 g/L. The data obtained under these experimental conditions were used to adjust the parameters of a mathematical model previously developed for the single-stage process. This model considers each of the tower bioreactors as a perfectly mixed continuous reactor, and the kinetics of cell growth and product formation take into account limitation by substrate and inhibition by ethanol and biomass, as well as substrate consumption for cellular maintenance. The model predictions agreed satisfactorily with the measurements taken in both stages of the cascade. The major differences with respect to the kinetic parameters previously estimated for the single-stage system were observed for the maximum specific growth rate, the inhibition constants of cell growth and the specific rate of substrate consumption for cell maintenance. The mathematical models were validated and used to simulate alternative operating conditions as well as to analyze the performance of the two-stage process against that of the single-stage process.
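The structure of such a two-stage model can be sketched with mass balances for two perfectly mixed reactors in series, where the outlet of stage 1 feeds stage 2. The kinetics below are a deliberately simplified textbook stand-in (Monod growth with linear ethanol inhibition): the paper's model additionally includes biomass inhibition, maintenance consumption and the cell recycle stream, and every parameter value here is illustrative rather than fitted:

```python
def two_stage_fermentation(D=0.25, S0=150.0, X_in=0.0,
                           mumax=0.4, Ks=5.0, Pmax=90.0,
                           Yxs=0.05, Yps=0.45, dt=0.01, hours=400.0):
    """Euler integration to steady state of two equal tower fermentors in
    series, each treated as a perfectly mixed continuous reactor.
    State per stage: biomass X (g/L), substrate S (g/L), ethanol P (g/L)."""
    def mu(S_i, P_i):  # Monod growth with linear ethanol inhibition
        return max(0.0, mumax * S_i/(Ks + S_i) * (1.0 - P_i/Pmax))
    X, S, P = [0.5, 0.5], [S0, S0], [0.0, 0.0]
    for _ in range(int(hours/dt)):
        feeds = [(X_in, S0, 0.0), (X[0], S[0], P[0])]  # stage 2 fed by stage 1
        for i in (0, 1):
            g = mu(S[i], P[i])
            rs = g * X[i] / Yxs                  # substrate uptake rate
            Xf, Sf, Pf = feeds[i]
            X[i] += dt * (g*X[i] + D*(Xf - X[i]))
            S[i] += dt * (D*(Sf - S[i]) - rs)
            P[i] += dt * (D*(Pf - P[i]) + Yps*rs)
            S[i] = max(S[i], 0.0)
    return X, S, P
```

Even this stripped-down version reproduces the qualitative behavior the experiments exploit: the second stage polishes residual sugar left by the first, so substrate falls and ethanol rises from stage 1 to stage 2.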

Relevance: 40.00%

Abstract:

Genomic alterations have been linked to the development and progression of cancer. The technique of Comparative Genomic Hybridization (CGH) yields data consisting of fluorescence intensity ratios of test and reference DNA samples. The intensity ratios provide information about the number of copies in DNA. Practical issues such as the contamination of tumor cells in tissue specimens and normalization errors necessitate the use of statistics for learning about the genomic alterations from array-CGH data. As increasing amounts of array CGH data become available, there is a growing need for automated algorithms for characterizing genomic profiles. Specifically, there is a need for algorithms that can identify gains and losses in the number of copies based on statistical considerations, rather than merely detect trends in the data. We adopt a Bayesian approach, relying on the hidden Markov model to account for the inherent dependence in the intensity ratios. Posterior inferences are made about gains and losses in copy number. Localized amplifications (associated with oncogene mutations) and deletions (associated with mutations of tumor suppressors) are identified using posterior probabilities. Global trends such as extended regions of altered copy number are detected. Since the posterior distribution is analytically intractable, we implement a Metropolis-within-Gibbs algorithm for efficient simulation-based inference. Publicly available data on pancreatic adenocarcinoma, glioblastoma multiforme and breast cancer are analyzed, and comparisons are made with some widely-used algorithms to illustrate the reliability and success of the technique.
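The dependence structure the hidden Markov model captures can be illustrated with a small point-estimate sketch: a three-state Gaussian HMM over log2 intensity ratios, decoded with the Viterbi algorithm. This recovers only the single most likely gain/loss path, whereas the paper performs full posterior inference with Metropolis-within-Gibbs; the state means, noise level and transition persistence below are illustrative assumptions:

```python
import math

def viterbi_copy_number(ratios, stay=0.99, means=(-0.5, 0.0, 0.4), sd=0.15):
    """MAP loss/normal/gain state sequence for array-CGH log2 ratios under
    a 3-state Gaussian hidden Markov model. Returns a list of state indices:
    0 = loss, 1 = normal, 2 = gain."""
    k = len(means)
    switch = (1.0 - stay) / (k - 1)
    def log_emit(x, s):  # Gaussian log-density, up to an additive constant
        return -0.5 * ((x - means[s]) / sd) ** 2
    score = [math.log(1.0 / k) + log_emit(ratios[0], s) for s in range(k)]
    back = []
    for x in ratios[1:]:
        prev, score, ptr = score, [], []
        for s in range(k):
            # best predecessor state for state s at this position
            t_best = max(range(k),
                         key=lambda t: prev[t] + math.log(stay if t == s else switch))
            ptr.append(t_best)
            score.append(prev[t_best]
                         + math.log(stay if t_best == s else switch)
                         + log_emit(x, s))
        back.append(ptr)
    state = max(range(k), key=lambda s: score[s])
    path = [state]
    for ptr in reversed(back):     # backtrack through the pointers
        state = ptr[state]
        path.append(state)
    return path[::-1]
```

The high self-transition probability is what makes the decoder prefer extended segments of constant copy number over isolated noisy probes, the same smoothing role the Markov prior plays in the Bayesian analysis.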

Relevance: 40.00%

Abstract:

This work presents a 1-D process-scale model used to investigate the chemical dynamics and temporal variability of nitrogen oxides (NOx) and ozone (O3) within and above the snowpack at Summit, Greenland for March-May 2009, and estimates the surface exchange of NOx between the snowpack and the surface layer in April-May 2009. The model assumes the surfaces of snowflakes have a liquid-like layer (LLL) where aqueous chemistry occurs and interacts with the interstitial air of the snowpack. Model parameters and initialization are physically and chemically representative of the snowpack at Summit, Greenland, and model results are compared to measurements of NOx and O3 collected by our group at Summit, Greenland from 2008-2010. The model, paired with measurements, confirmed the main hypothesis in the literature that photolysis of nitrate on the surface of snowflakes is responsible for nitrogen dioxide (NO2) production in the top ~50 cm of the snowpack at solar noon for the March-May periods in 2009. Nighttime peaks of NO2 in the snowpack for April and May were reproduced with aqueous formation of peroxynitric acid (HNO4) in the top ~50 cm of the snowpack, with subsequent mass transfer to the gas phase, decomposition to form NO2 at nighttime, and transport of the NO2 to depths of 2 meters. Modeled production of HNO4 was hindered in March 2009 due to low production of its precursor, the hydroperoxy radical, resulting in underestimation of nighttime NO2 in the snowpack for March 2009. The aqueous reaction of O3 with formic acid was the major sink of O3 in the snowpack for March-May 2009. Nitrogen monoxide (NO) production in the top ~50 cm of the snowpack is related to the photolysis of NO2; the model underrepresents NO in May 2009. Modeled surface exchange of NOx in April and May is on the order of 10^11 molecules m^-2 s^-1.
Removing the measured downward fluxes of NO and NO2 from the measured fluxes resulted in agreement between measured NOx fluxes and modeled surface exchange in April, and an order-of-magnitude deviation in May. Modeled transport of NOx above the snowpack in May shows an order-of-magnitude increase of NOx fluxes in the first 50 cm of the snowpack, attributed to the production of NO2 during the day from the thermal decomposition and photolysis of peroxynitric acid, with minor contributions of NO from HONO photolysis in the early morning.
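Why the photochemical production is confined to the top of the snowpack follows from light attenuation: actinic flux in snow is commonly taken to decay exponentially with depth, so a nitrate-photolysis source dies off over a few e-folding depths. The sketch below uses that standard assumption with illustrative magnitudes (j0, e-folding depth, nitrate concentration), not the study's fitted values:

```python
import math

def no2_production_profile(j0=3.0e-8, e_fold=10.0, nitrate=5.0e-6,
                           depths=range(0, 101, 10)):
    """Depth profile of the NO2 production rate from nitrate photolysis in
    snow, assuming exponential attenuation of actinic flux with depth:
        P(z) = J(z) * [NO3-],   J(z) = j0 * exp(-z / e_fold)
    with z and e_fold in cm, j0 in 1/s. Returns (depth, rate) pairs."""
    return [(z, j0 * math.exp(-z / e_fold) * nitrate) for z in depths]
```

With an e-folding depth of order 10 cm, production at 50 cm is already less than 1% of the surface value, consistent with the "top ~50 cm" localization described above.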

Relevance: 40.00%

Abstract:

Objective: Processes occurring in the course of psychotherapy are characterized by the simple fact that they unfold in time and that the multiple factors engaged in change processes vary highly between individuals (idiographic phenomena). Previous research, however, has neglected the temporal perspective through its traditional focus on static phenomena, which were mainly assessed at the group level (nomothetic phenomena). To support a temporal approach, the authors introduce time-series panel analysis (TSPA), a statistical methodology explicitly focusing on the quantification of temporal, session-to-session aspects of change in psychotherapy. TSPA models are initially built at the level of individuals and are subsequently aggregated at the group level, thus allowing the exploration of prototypical models. Method: TSPA is based on vector auto-regression (VAR), an extension of univariate auto-regression models to multivariate time-series data. The application of TSPA is demonstrated in a sample of 87 outpatient psychotherapy patients who were monitored by postsession questionnaires. Prototypical mechanisms of change were derived from the aggregation of individual multivariate models of psychotherapy process. In a second step, the associations between mechanisms of change (TSPA) and pre-post symptom change were explored. Results: TSPA allowed a prototypical process pattern to be identified, in which patients' alliance and self-efficacy were linked by a temporal feedback loop. Furthermore, therapists' stability over time in both mastery and clarification interventions was positively associated with better outcomes. Conclusions: TSPA is a statistical tool that sheds new light on temporal mechanisms of change. Through this approach, clinicians may gain insight into prototypical patterns of change in psychotherapy.
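The per-patient building block of TSPA, a VAR(1) model fitted by ordinary least squares, can be sketched in a few lines for the bivariate case (e.g., alliance and self-efficacy rated after each session). The intercept is omitted for brevity, and the solver is written out by hand for 2-D data rather than using a statistics library:

```python
def fit_var1(series):
    """OLS fit of a bivariate VAR(1) model  y_t = A y_{t-1} + e_t.
    series: list of (y1, y2) observations for one case, in session order.
    Returns the 2x2 coefficient matrix A via the normal equations
    A = (Y'X)(X'X)^-1, where X stacks y_{t-1} and Y stacks y_t."""
    X, Y = series[:-1], series[1:]
    sxx = [[0.0]*2 for _ in range(2)]
    syx = [[0.0]*2 for _ in range(2)]
    for x, y in zip(X, Y):
        for i in range(2):
            for j in range(2):
                sxx[i][j] += x[i]*x[j]   # accumulate X'X
                syx[i][j] += y[i]*x[j]   # accumulate Y'X
    det = sxx[0][0]*sxx[1][1] - sxx[0][1]*sxx[1][0]
    inv = [[sxx[1][1]/det, -sxx[0][1]/det],    # 2x2 inverse of X'X
           [-sxx[1][0]/det, sxx[0][0]/det]]
    return [[sum(syx[i][k]*inv[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]
```

The off-diagonal entries of A are the cross-lagged effects: a positive A[0][1] and A[1][0] together are exactly the kind of temporal feedback loop between two process variables that the prototypical pattern above describes.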

Relevance: 40.00%

Abstract:

In this work we present a model that describes how the number of healthy and unhealthy subjects belonging to a cohort changes through time when health promotion campaigns occur that aim to change the undesirable behavior. The model also includes immigration and emigration components for each group, and a component accounting for subjects who switch from the healthy behavior to the unhealthy one. We express the model in terms of a bivariate probability generating function and, in addition, simulate the model. An illustrative example of how to apply the model to the promotion of condom use among adolescents is developed, and we use it to compare the results obtained from the simulations with those obtained from the probability generating function.
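The simulation side of such a model can be sketched as a discrete-time stochastic process with binomial transitions between the two groups plus immigration and emigration. All rates, group sizes and campaign timing below are illustrative assumptions, not the work's values, and the simulation is the counterpart that would be compared against the probability generating function:

```python
import random

def simulate_cohort(h0=800, u0=200, steps=52, seed=7,
                    p_change=0.01, p_change_campaign=0.08, p_relapse=0.02,
                    campaign_weeks=range(10, 20), imm_h=3, imm_u=1,
                    p_emig=0.005):
    """One sample path of a healthy/unhealthy cohort model: each step,
    unhealthy subjects adopt the healthy behavior with a probability that
    rises during campaign weeks, healthy subjects may relapse, and both
    groups gain immigrants and lose emigrants. Returns [(h, u), ...]."""
    rng = random.Random(seed)
    binom = lambda n, p: sum(rng.random() < p for _ in range(n))
    h, u = h0, u0
    history = []
    for t in range(steps):
        p_adopt = p_change_campaign if t in campaign_weeks else p_change
        to_h = binom(u, p_adopt)      # unhealthy -> healthy switches
        to_u = binom(h, p_relapse)    # healthy -> unhealthy relapses
        h, u = h - to_u + to_h, u - to_h + to_u
        h += imm_h - binom(h, p_emig)   # immigration minus emigration
        u += imm_u - binom(u, p_emig)
        history.append((h, u))
    return history
```

Averaging many such sample paths approximates the moments that the bivariate probability generating function yields exactly, which is precisely the comparison the illustrative condom-use example is designed to make.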