883 results for Hidden Markov Model


Relevance:

30.00%

Publisher:

Abstract:

We study a general stochastic rumour model in which an ignorant individual has a certain probability of becoming a stifler immediately upon hearing the rumour. We refer to this special kind of stifler as an uninterested individual. Our model also includes distinct rates for meetings between two spreaders in which either both become stiflers or only one does, so that the classical Daley-Kendall and Maki-Thompson models arise as particular cases. We prove a Law of Large Numbers and a Central Limit Theorem for the proportions of those who ultimately remain ignorant and of those who have heard the rumour but become uninterested in it.
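
The abstract gives no algorithmic detail; purely as an illustration, a minimal Monte Carlo sketch of the dynamics it describes might look as follows, where theta (the probability that an ignorant loses interest immediately) and p_both (the probability that a spreader-spreader meeting stifles both) are hypothetical parameter names introduced here:

```python
import random

def simulate_rumour(n=10_000, theta=0.2, p_both=0.5, seed=0):
    """Monte Carlo sketch of the generalized stochastic rumour model.

    n      -- population size (one initial spreader, n - 1 ignorants)
    theta  -- probability that an ignorant becomes an 'uninterested'
              stifler immediately upon hearing the rumour
    p_both -- when two spreaders meet, probability that BOTH stifle
              (otherwise only one of them does)
    """
    rng = random.Random(seed)
    ignorant, spreader, stifler, uninterested = n - 1, 1, 0, 0
    while spreader > 0:
        # a uniformly chosen spreader meets a uniformly chosen other individual
        other = rng.randrange(n - 1)
        if other < ignorant:                      # meets an ignorant
            ignorant -= 1
            if rng.random() < theta:
                uninterested += 1                 # hears it, loses interest
            else:
                spreader += 1
        elif other < ignorant + spreader - 1:     # meets another spreader
            if rng.random() < p_both:
                spreader -= 2; stifler += 2       # both become stiflers
            else:
                spreader -= 1; stifler += 1       # only one stifles
        else:                                     # meets a stifler
            spreader -= 1; stifler += 1
    return ignorant / n, uninterested / n

if __name__ == "__main__":
    print(simulate_rumour())  # final (ignorant, uninterested) proportions
```

Averaging these two proportions over many independent runs approximates the deterministic limits established by the Law of Large Numbers; the fluctuations around those limits are what the Central Limit Theorem describes.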

Relevance:

30.00%

Publisher:

Abstract:

In this work we study an agent-based model to investigate the role of asymmetric information degrees in market evolution. This model is quite simple and may be treated analytically, since the consumers evaluate the quality of a certain good taking into account only the quality of the last good purchased plus her perceptive capacity beta. As a consequence, the system evolves according to a stationary Markov chain. The value of a good offered by the firms increases along with quality according to an exponent alpha, which is a measure of the technology. It incorporates all the technological capacity of the production systems, such as education, scientific development and techniques that change the productivity rates. The technological level plays an important role in explaining how the asymmetry of information may affect the market evolution in this model. We observe that, for high technological levels, the market can detect adverse selection. The model allows us to compute the maximum asymmetric information degree before the market collapses. Beyond this critical point the market evolves during a limited period of time and then dies out completely. When beta is closer to 1 (symmetric information), the market becomes more profitable for high-quality goods, although high- and low-quality markets coexist. The maximum asymmetric information level is a consequence of an ergodicity breakdown in the process of quality evaluation.
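
None of the functional forms are given in the abstract; the toy sketch below only illustrates the kind of Markov dynamics described, with assumed quality levels, an assumed purchase rule, and the parameter names beta and alpha taken from the abstract:

```python
import random

def simulate_market(beta=0.99, alpha=1.2, steps=50_000, seed=1):
    """Toy sketch (assumed functional forms) of the quality-evaluation chain.

    The consumer's evaluation depends only on the quality of the last good
    purchased plus her perceptive capacity beta, so the purchase sequence
    forms a Markov chain.  Firms offer a good of quality q at value q**alpha.
    """
    rng = random.Random(seed)
    qualities = [0.3, 0.9]              # assumed low/high quality levels
    last_q = rng.choice(qualities)
    counts = {q: 0 for q in qualities}
    for _ in range(steps):
        candidate = rng.choice(qualities)
        # perceived quality: anchored on the last purchase, with the
        # anchor's weight shrinking as information becomes more symmetric
        perceived = beta * candidate + (1 - beta) * last_q
        if perceived >= candidate ** alpha:   # assumed purchase rule
            last_q = candidate
            counts[candidate] += 1
    return counts                             # purchases per quality level

if __name__ == "__main__":
    for b in (0.99, 0.6):               # near-symmetric vs asymmetric
        print(b, simulate_market(beta=b))
```

In this toy, beta near 1 keeps both quality levels trading (coexistence), while for strongly asymmetric information the high-quality market stops trading once a low-quality purchase anchors the consumer's perception, a crude analogue of the ergodicity breakdown mentioned above.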

Relevance:

30.00%

Publisher:

Abstract:

This paper deals with the expected discounted continuous control of piecewise deterministic Markov processes (PDMPs) using a singular perturbation approach for dealing with rapidly oscillating parameters. The state space of the PDMP is written as the product of a finite set and a subset of the Euclidean space ℝ^n. The discrete part of the state, called the regime, characterizes the mode of operation of the physical system under consideration, and is supposed to have a fast (associated with a small parameter epsilon > 0) and a slow behavior. Using an approach similar to that developed in Yin and Zhang (Continuous-Time Markov Chains and Applications: A Singular Perturbation Approach, Applications of Mathematics, vol. 37, Springer, New York, 1998, Chaps. 1 and 3), the idea in this paper is to reduce the number of regimes by considering an averaged model in which the regimes within the same class are aggregated through the quasi-stationary distribution, so that the different states in this class are replaced by a single one. The main goal is to show that the value function of the control problem for the system driven by the perturbed Markov chain converges to the value function of this limit control problem as epsilon goes to zero. This convergence is obtained by, roughly speaking, showing that the infimum and supremum limits of the value functions satisfy two optimality inequalities as epsilon goes to zero. This enables us to show the result by invoking a uniqueness argument, without needing any kind of Lipschitz continuity condition.
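
As a numerical illustration of just the aggregation step (not the control problem itself), the sketch below collapses the states of a fast, irreducible class onto a single averaged regime by weighting with the chain's stationary distribution; the generator and drift values are made up for the example:

```python
import numpy as np

def stationary_distribution(Q):
    """Stationary distribution pi of a CTMC generator Q, i.e. pi @ Q = 0."""
    n = Q.shape[0]
    # stack the normalisation constraint sum(pi) = 1 under the balance equations
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Example generator of the fast chain within one class of regimes
Q_fast = np.array([[-3.0,  2.0,  1.0],
                   [ 4.0, -5.0,  1.0],
                   [ 1.0,  1.0, -2.0]])

# Example regime-dependent drift rates of the slow (continuous) component
drift = np.array([0.5, -1.0, 2.0])

pi = stationary_distribution(Q_fast)
print("weights:", pi.round(4))
print("aggregated drift:", (pi @ drift).round(4))  # the single averaged regime
```

Here the weighting uses the ordinary stationary distribution, which is the appropriate weight for an irreducible recurrent class; the quasi-stationary distribution referred to in the abstract plays the analogous role in the general case.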

Relevance:

30.00%

Publisher:

Abstract:

Joint generalized linear models and double generalized linear models (DGLMs) were designed to model outcomes for which the variability can be explained using factors and/or covariates. When such factors operate, the usual normal regression models, which inherently exhibit constant variance, will under-represent variation in the data and hence may lead to erroneous inferences. For count and proportion data, such noise factors can generate a so-called overdispersion effect, and the use of binomial and Poisson models underestimates the variability and, consequently, incorrectly indicates significant effects. In this manuscript, we propose a DGLM from a Bayesian perspective, focusing on the case of proportion data, where the overdispersion can be modeled using a random effect that depends on some noise factors. The posterior joint density function was sampled using Markov chain Monte Carlo algorithms, allowing inferences over the model parameters. An application to a data set on apple tissue culture is presented, for which it is shown that the Bayesian approach is quite feasible, even when limited prior information is available, thereby generating valuable insight for the researcher about the experimental results.
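
The paper's model is not reproduced here; as a loose stand-in for a random-effect formulation of overdispersed proportion data, the sketch below fits a beta-binomial model by random-walk Metropolis, with made-up data, priors and tuning constants:

```python
import math, random

# Made-up overdispersed proportion data: y successes out of n per unit
y = [8, 3, 9, 2, 7, 1, 9, 4]
n = [10] * 8

def log_beta(a, b):
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def log_post(mu, phi):
    """Beta-binomial log posterior; phi controls the overdispersion."""
    if not 0.0 < mu < 1.0 or phi <= 0.0:
        return -math.inf
    a, b = mu * phi, (1.0 - mu) * phi
    ll = sum(log_beta(yi + a, ni - yi + b) - log_beta(a, b)
             for yi, ni in zip(y, n))   # binomial coefficients cancel in MH
    return ll - 0.01 * phi              # weak exponential prior on phi

def metropolis(iters=20_000, seed=0):
    rng = random.Random(seed)
    mu, phi = 0.5, 5.0
    lp = log_post(mu, phi)
    draws = []
    for _ in range(iters):
        mu_p = mu + rng.gauss(0.0, 0.05)             # random-walk proposal
        phi_p = phi * math.exp(rng.gauss(0.0, 0.2))  # log-scale proposal
        lp_p = log_post(mu_p, phi_p)
        # Hastings correction for the multiplicative proposal on phi
        log_acc = lp_p - lp + math.log(phi_p / phi)
        if rng.random() < math.exp(min(0.0, log_acc)):
            mu, phi, lp = mu_p, phi_p, lp_p
        draws.append((mu, phi))
    return draws[iters // 2:]                        # discard burn-in

draws = metropolis()
print("posterior mean of mu:", sum(m for m, _ in draws) / len(draws))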

Relevance:

30.00%

Publisher:

Abstract:

A significant problem in the collection of responses to potentially sensitive questions, such as those relating to illegal, immoral or embarrassing activities, is non-sampling error due to refusal to respond or false responses. Eichhorn & Hayre (1983) suggested the use of scrambled responses to reduce this form of bias. This paper considers a linear regression model in which the dependent variable is unobserved, but for which its sum or product with a scrambling random variable of known distribution is known. The performance of two likelihood-based estimators is investigated, namely a Bayesian estimator obtained through a Markov chain Monte Carlo (MCMC) sampling scheme and a classical maximum-likelihood estimator. These two estimators and an estimator suggested by Singh, Joarder & King (1996) are compared. Monte Carlo results show that the Bayesian estimator outperforms the classical estimators in almost all cases, and the relative performance of the Bayesian estimator improves as the responses become more scrambled.
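
Only the simplest additive variant is sketched here (the paper also treats multiplicative scrambling and a Bayesian MCMC estimator); all data and parameter values below are fabricated for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- simulate a sensitive response hidden by additive scrambling ---------
n, beta_true, sigma = 500, np.array([1.0, 2.0]), 0.5
tau = 1.5                                   # known s.d. of the scrambler
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ beta_true + rng.normal(0, sigma, n)  # unobserved true response
z = y + rng.normal(0, tau, n)                # what the researcher observes

# --- maximum-likelihood estimation ----------------------------------------
# With additive normal scrambling, z | x ~ N(x'beta, sigma^2 + tau^2),
# so the MLE of beta is ordinary least squares on the scrambled responses,
# and sigma^2 is recovered by subtracting the known scrambling variance.
beta_hat, *_ = np.linalg.lstsq(X, z, rcond=None)
resid_var = np.mean((z - X @ beta_hat) ** 2)
sigma2_hat = max(resid_var - tau ** 2, 0.0)

print("beta:", beta_hat, "sigma^2:", sigma2_hat)
```

The additive-normal case makes the example compact because the likelihood-based estimator of beta coincides with least squares on the scrambled responses; the multiplicative case requires integrating over the scrambler and is where the MCMC approach earns its keep.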

Relevance:

30.00%

Publisher:

Abstract:

The elevated plus-maze is an animal model of anxiety used to study the effect of different drugs on the behavior of the animal. It consists of a plus-shaped maze with two open and two closed arms elevated 50 cm from the floor. The standard measures used to characterize exploratory behavior in the elevated plus-maze are the time spent and the number of entries in the open arms. In this work we use Markov chains to characterize the exploratory behavior of the rat in the elevated plus-maze under three different conditions: normal and under the effects of anxiogenic and anxiolytic drugs. The spatial structure of the elevated plus-maze is divided into squares, which are associated with states of a Markov chain. By counting the frequencies of transitions between states during 5-min sessions in the elevated plus-maze, we constructed stochastic matrices for the three conditions studied. The stochastic matrices show specific patterns which correspond to the observed behaviors of the rat under the three different conditions. For the control group, the stochastic matrix shows a clear preference for places in the closed arms. This preference is enhanced for the anxiogenic group. For the anxiolytic group, the stochastic matrix shows a pattern similar to a random walk. Our results suggest that Markov chains can be used together with the standard measures to characterize the rat behavior in the elevated plus-maze.
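
Estimating such a stochastic matrix from a recorded sequence of visited squares is a short computation; a sketch with an invented session and an assumed labelling of the squares:

```python
import numpy as np

def transition_matrix(visits, n_states):
    """Row-stochastic matrix estimated from a sequence of visited squares.

    visits   -- list of state indices recorded during a 5-min session
    n_states -- number of squares the maze is divided into
    """
    counts = np.zeros((n_states, n_states))
    for a, b in zip(visits, visits[1:]):     # consecutive transitions
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    rows[rows == 0] = 1.0                    # avoid division by zero
    return counts / rows

# Invented session; squares 0-4 closed arms, 5-9 open arms (assumed labels)
session = [0, 1, 1, 2, 1, 0, 5, 0, 1, 2, 2, 1, 0, 0, 6, 5, 0, 1]
P = transition_matrix(session, n_states=10)
print(P.round(2))
```

For a control animal one would expect the probability mass concentrated on closed-arm rows, as described above, while an anxiolytic group's matrix should look closer to a uniform random walk.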

Relevance:

30.00%

Publisher:

Abstract:

For the purpose of developing a longitudinal model to predict hand-and-foot syndrome (HFS) dynamics in patients receiving capecitabine, data from two large phase III studies were used. Of 595 patients in the capecitabine arms, 400 patients were randomly selected to build the model, and the other 195 were assigned for model validation. A score for risk of developing HFS was modeled using a proportional odds model, a sigmoidal maximum-effect model driven by capecitabine accumulation (as estimated through a kinetic-pharmacodynamic model), and a Markov process. The lower the calculated creatinine clearance value at inclusion, the higher was the risk of HFS. Model validation was performed by visual and statistical predictive checks. The predictive dynamic model of HFS in patients receiving capecitabine allows the prediction of toxicity risk based on cumulative capecitabine dose and previous HFS grade. This dose-toxicity model will be useful in developing Bayesian individual treatment adaptations and may be of use in the clinic.
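
A compressed sketch of how the named ingredients can combine, a proportional odds model for the next HFS grade with a sigmoidal maximum-effect drug term and a Markov dependence on the previous grade; every numeric value below is an assumption, not an estimate from the paper:

```python
import math

def grade_probs(exposure, prev_grade,
                cutpoints=(-1.0, 1.0, 3.0),   # assumed logit cut-points
                emax=2.5, e50=4.0,            # assumed Emax parameters
                markov=1.2):                  # assumed carry-over effect
    """P(next HFS grade = 0..3) under a proportional odds Markov model.

    The linear predictor combines a sigmoidal Emax drug effect (driven by
    cumulative capecitabine exposure) with the previously observed grade.
    """
    eta = emax * exposure / (e50 + exposure) + markov * prev_grade
    # cumulative probabilities P(grade <= k) from the proportional odds model
    cum = [1.0 / (1.0 + math.exp(-(c - eta))) for c in cutpoints] + [1.0]
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, 4)]

for prev in (0, 1, 2):
    probs = grade_probs(exposure=6.0, prev_grade=prev)
    print(prev, [round(p, 3) for p in probs])
```

The printout shows the Markov effect directly: the higher the previous grade, the more probability mass shifts toward high grades at the same cumulative exposure.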

Relevance:

30.00%

Publisher:

Abstract:

The Higgs boson recently discovered at the Large Hadron Collider has been shown to have couplings to the remaining particles well within what is predicted by the Standard Model. The search for other new heavy scalar states has so far proved fruitless, imposing constraints on the existence of new scalar particles. However, it is still possible that any existing heavy scalars would preferentially decay to final states involving the light Higgs boson, thus evading the current LHC bounds on heavy scalar states. Moreover, decays of the heavy scalars could increase the number of light Higgs bosons being produced. Since the number of light Higgs bosons decaying to Standard Model particles is within the predicted range, this could mean that part of the light Higgs bosons have their origin in heavy scalar decays. This situation would occur if the light Higgs couplings to Standard Model particles were reduced by a concomitant amount. Using a very simple extension of the Standard Model, the two-Higgs-doublet model, we show that we could in fact already be observing the effect of the heavy scalar states, even if all results related to the Higgs are in excellent agreement with the Standard Model predictions.

Relevance:

30.00%

Publisher:

Abstract:

Dissertation submitted for the degree of Master in Electrical Engineering, Energy branch.

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this paper is to conduct a methodical analysis of the drawbacks of a financial supplier risk management approach currently implemented in the automotive industry. Based on the identified methodical flaws, the risk assessment model is further developed by introducing a malus system which incorporates hidden risks into the model, and by revising the derivation of the most central risk measure in the current model. Both methodical changes lead to significant enhancements in terms of risk assessment accuracy, supplier identification and workload efficiency.

Relevance:

30.00%

Publisher:

Abstract:

This thesis is a first step in the search for the characteristics of funders and the underlying motivation that drives them to participate in crowdfunding. The purpose of the study is to identify demographics and psychographics that influence a funder’s willingness to financially support a crowdfunding project (WFS). Crowdfunding, crowdsourcing and donation literature are combined to create a conceptual model in which age, gender, altruism and income, together with several control variables, are expected to have an influence on a funder’s WFS. Primary data collection was conducted using a survey, and a dataset of 175 potential crowdfunders was created. The data are analysed using multiple regression, yielding several interesting results. First, age and gender have a significant effect on WFS: males and young adults up to the age of 30 have a higher intention to give money to crowdfunding projects. Second, altruism is significantly positively related to WFS, meaning that funders do not just care about the potential rewards they could receive, but also about the benefits that they create for the entrepreneur and the people affected by the crowdfunding project. Third, the moderating effect of income was found to be insignificant in this model: income does not affect the strength of the relationship between age, gender and altruism on the one hand and WFS on the other. This study provides important theoretical contributions by, to the best of my knowledge, being the first study to quantitatively investigate the characteristics of funders and to use the funder as the unit of analysis. Moreover, the study provides important insights for entrepreneurs who wish to target the crowd better in order to attract and retain more funders, thereby increasing the chance of success of their projects.
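
The moderation test described above corresponds to interaction terms in a multiple regression; a generic sketch on fabricated data (the variable names follow the abstract, but none of the numbers come from the thesis):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 175                                  # sample size matching the thesis

# Fabricated predictors: age, gender (1 = male), altruism score, income
age      = rng.integers(18, 65, n)
gender   = rng.integers(0, 2, n)
altruism = rng.normal(0, 1, n)
income   = rng.normal(0, 1, n)

# Fabricated outcome: willingness to financially support (WFS)
wfs = 3 - 0.03 * age + 0.5 * gender + 0.8 * altruism + rng.normal(0, 1, n)

# Design matrix with income as a moderator (interaction terms)
X = np.column_stack([np.ones(n), age, gender, altruism,
                     income, income * age, income * gender, income * altruism])
coef, *_ = np.linalg.lstsq(X, wfs, rcond=None)

names = ["const", "age", "gender", "altruism",
         "income", "income:age", "income:gender", "income:altruism"]
for name, c in zip(names, coef):
    print(f"{name:16s} {c: .3f}")
```

Near-zero interaction coefficients are the regression counterpart of the thesis's finding that income does not moderate the age, gender and altruism effects.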

Relevance:

30.00%

Publisher:

Abstract:

This paper discusses models, associations and causation in psychiatry. The different types of association (linear, positive, negative, exponential, partial, U-shaped, hidden and spurious) between variables involved in mental disorders are presented, as well as the use of multiple regression analysis to disentangle the interrelatedness amongst multiple variables. A useful model should have internal consistency, external validity and predictive power; be dynamic in order to accommodate new sound knowledge; and should fit facts rather than the other way around. It is argued that whilst models are theoretical constructs, they also convey a style of reasoning and can change clinical practice. Cause and effect are complex phenomena in that the same cause can yield different effects. Conversely, the same effect can have a different range of causes. In mental disorders and human behaviour there is always a chain of events initiated by the indirect and remote cause, followed by intermediate causes, and finally the direct and more immediate cause. Causes of mental disorders are grouped as those: (i) which are necessary and sufficient; (ii) which are necessary but not sufficient; and (iii) which are neither necessary nor sufficient, but which, when present, increase the risk for mental disorders.

Relevance:

30.00%

Publisher:

Abstract:

"Es tracta d'un projecte dividit en dues parts independents però complementàries, realitzades per autors diferents. Aquest document conté originàriament altre material i/o programari només consultable a la Biblioteca de Ciència i Tecnologia"

Relevance:

30.00%

Publisher:

Abstract:

There are both theoretical and empirical reasons for believing that the parameters of macroeconomic models may vary over time. However, work with time-varying parameter models has largely involved vector autoregressions (VARs), ignoring cointegration, despite the fact that cointegration plays an important role in informing macroeconomists on a range of issues. In this paper we develop time-varying parameter models that permit cointegration. Time-varying parameter VARs (TVP-VARs) typically use state-space representations to model the evolution of parameters. We show that it is not sensible to use straightforward extensions of TVP-VARs when allowing for cointegration. Instead, we develop a specification which allows the cointegrating space to evolve over time in a manner comparable to the random-walk variation used with TVP-VARs. The properties of our approach are investigated before developing a method of posterior simulation. We use our methods in an empirical investigation involving a permanent/transitory variance decomposition for inflation.
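
The random-walk parameter variation underlying TVP-VARs, which the paper extends to the cointegrating space, can be illustrated in the simplest scalar case: a regression coefficient that drifts as a random walk, recovered by a Kalman filter (all variances below are assumed):

```python
import numpy as np

rng = np.random.default_rng(7)
T, sig_eps, sig_eta = 300, 0.5, 0.05

# Simulate: coefficient beta_t follows a random walk (as in TVP-VARs)
x = rng.normal(size=T)
beta = np.cumsum(rng.normal(0, sig_eta, T)) + 1.0
y = beta * x + rng.normal(0, sig_eps, T)

# Kalman filter for the latent state beta_t
b, P = 0.0, 1.0                  # prior mean and variance
filtered = np.empty(T)
for t in range(T):
    P += sig_eta ** 2                        # predict: random-walk state
    S = x[t] ** 2 * P + sig_eps ** 2         # innovation variance
    K = P * x[t] / S                         # Kalman gain
    b += K * (y[t] - x[t] * b)               # update with observation t
    P *= (1.0 - K * x[t])
    filtered[t] = b

print("final true beta:", beta[-1].round(3), "filtered:", filtered[-1].round(3))
```

In a full TVP-VAR the state is a vector of coefficients and the filter runs equation by equation inside a posterior simulator, but the predict/update logic is the same.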

Relevance:

30.00%

Publisher:

Abstract:

This paper contributes to the ongoing empirical debate regarding the role of the RBC model, and in particular of technology shocks, in explaining aggregate fluctuations. To this end we estimate the model’s posterior density using Markov chain Monte Carlo (MCMC) methods. Within this framework we extend Ireland’s (2001, 2004) hybrid estimation approach to allow for a vector autoregressive moving average (VARMA) process to describe the movements and co-movements of the model’s errors not explained by the basic RBC model. The results of marginal likelihood ratio tests reveal that the more general model of the errors significantly improves the model’s fit relative to the VAR and AR alternatives. Moreover, despite setting the RBC model a more difficult task under the VARMA specification, our analysis, based on forecast error and spectral decompositions, suggests that the RBC model is still capable of explaining a significant fraction of the observed variation in macroeconomic aggregates in the post-war U.S. economy.
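
The MCMC machinery referred to above is generic; the sketch below runs a random-walk Metropolis sampler over two parameters one might associate with a technology shock process, but the target density is a simple stand-in, not the paper's RBC posterior:

```python
import math, random

def log_posterior(theta):
    """Stand-in log posterior over two parameters (e.g. the persistence and
    s.d. of the technology shock); the real RBC posterior would combine the
    likelihood from the solved model with the priors."""
    rho, sigma = theta
    if not (0.0 < rho < 1.0) or sigma <= 0.0:
        return -math.inf
    return (-0.5 * ((rho - 0.9) / 0.05) ** 2
            - 0.5 * ((sigma - 0.01) / 0.005) ** 2)

def rw_metropolis(iters=50_000, scale=(0.02, 0.002), seed=0):
    rng = random.Random(seed)
    theta = [0.5, 0.02]
    lp = log_posterior(theta)
    chain = []
    for _ in range(iters):
        prop = [t + rng.gauss(0.0, s) for t, s in zip(theta, scale)]
        lp_prop = log_posterior(prop)
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            theta, lp = prop, lp_prop      # accept the proposal
        chain.append(tuple(theta))
    return chain[iters // 2:]              # keep the second half as draws

chain = rw_metropolis()
print(len(chain), sum(c[0] for c in chain) / len(chain))  # post. mean of rho
```

In the paper's setting the same sampler would be pointed at the likelihood implied by the RBC model with VARMA errors, and the resulting draws would feed the marginal likelihood comparisons against the VAR and AR alternatives.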