934 results for Random parameter Logit Model
Abstract:
Magdeburg, Univ., Faculty of Electrical Engineering and Information Technology, Diss., 2014
Abstract:
There are both theoretical and empirical reasons for believing that the parameters of macroeconomic models may vary over time. However, work with time-varying parameter models has largely involved vector autoregressions (VARs), ignoring cointegration. This is despite the fact that cointegration plays an important role in informing macroeconomists on a range of issues. In this paper we develop time-varying parameter models which permit cointegration. Time-varying parameter VARs (TVP-VARs) typically use state-space representations to model the evolution of parameters. In this paper, we show that it is not sensible to use straightforward extensions of TVP-VARs when allowing for cointegration. Instead we develop a specification which allows the cointegrating space to evolve over time in a manner comparable to the random-walk variation used with TVP-VARs. We investigate the properties of our approach before developing a method of posterior simulation. We apply our methods in an empirical investigation involving a permanent/transitory variance decomposition for inflation.
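For reference, the state-space form the abstract alludes to (a standard TVP-VAR formulation with assumed notation, not the paper's cointegration extension): the measurement and state equations are

\[ y_t = X_t \beta_t + \varepsilon_t, \qquad \varepsilon_t \sim N(0, \Sigma_t), \]
\[ \beta_t = \beta_{t-1} + \eta_t, \qquad \eta_t \sim N(0, Q), \]

where $X_t$ stacks the lagged endogenous variables and the coefficient vector $\beta_t$ drifts as a random walk. The paper's contribution is to let the cointegrating space itself evolve in an analogous way rather than naively applying this recursion.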
Abstract:
The subdivision of space into cells resulting from a process of random nucleation and growth is a subject of interest in many scientific fields. In this paper, we derive the expected value and variance of the cell-size distribution under the assumption that the space subdivision process follows the premises of the Kolmogorov-Johnson-Mehl-Avrami (KJMA) model. We impose no restrictions on the time dependence of the nucleation and growth rates. We also develop an approximate analytical cell-size probability density function. Finally, we apply our approach to the distributions resulting from solid-phase crystallization under isochronal heating conditions.
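For reference, the KJMA kinetics the abstract presupposes (a standard result, stated here with assumed notation): the transformed fraction at time $t$ is

\[ X(t) = 1 - \exp\!\left( - \int_0^t I(\tau)\, V(\tau, t)\, d\tau \right), \]

where $I(\tau)$ is the nucleation rate and $V(\tau, t)$ is the volume at time $t$ of a grain nucleated at time $\tau$. For a constant nucleation rate $I$ and isotropic growth velocity $G$ in three dimensions this reduces to the classical Avrami form $X(t) = 1 - \exp\!\left(-\tfrac{\pi}{3} I G^3 t^4\right)$.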
Abstract:
In the context of Systems Biology, computer simulations of gene regulatory networks provide a powerful tool to validate hypotheses and to explore possible system behaviors. Nevertheless, modeling a system poses challenges of its own: in particular, model calibration is often difficult due to insufficient data. For example, when considering developmental systems, mostly qualitative data describing the developmental trajectory are available, while common calibration techniques rely on high-resolution quantitative data. Focusing on the calibration of differential-equation models for developmental systems, this study investigates different approaches to utilizing the available data to overcome these difficulties. More specifically, the fact that developmental processes are hierarchically organized is exploited to increase the convergence rate of the calibration process and to save computation time. Using a gene regulatory network model for stem cell homeostasis in Arabidopsis thaliana, the performance of the different approaches is evaluated, documenting considerable gains from the proposed hierarchical approach.
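A minimal sketch of the hierarchical-calibration idea, assuming a toy two-gene cascade and synthetic data (the model, stage split, and parameter names are illustrative, not the paper's Arabidopsis network): the upstream module is calibrated first, then its parameters are frozen while the downstream module is fitted.

import numpy as np
from scipy.integrate import odeint
from scipy.optimize import least_squares

# Toy two-gene cascade (illustrative): gene A is produced at rate kA and
# decays at rate dA; gene B is activated by A and decays at rate dB.
def model(y, t, kA, dA, kB, dB):
    A, B = y
    return [kA - dA * A, kB * A - dB * B]

t_obs = np.linspace(0.0, 10.0, 20)
rng = np.random.default_rng(0)
true_params = (1.0, 0.5, 0.8, 0.3)  # hypothetical "ground truth" for the demo
y_obs = odeint(model, [0.0, 0.0], t_obs, args=true_params) \
        + rng.normal(0.0, 0.02, size=(20, 2))  # synthetic noisy observations

# Stage 1: calibrate upstream parameters (kA, dA) against A data only;
# A's dynamics do not depend on B, so the downstream values are dummies here.
def upstream_residuals(p):
    sol = odeint(model, [0.0, 0.0], t_obs, args=(p[0], p[1], 1.0, 1.0))
    return sol[:, 0] - y_obs[:, 0]

kA, dA = least_squares(upstream_residuals, x0=[0.5, 0.5],
                       bounds=(0.0, np.inf)).x

# Stage 2: freeze (kA, dA) and calibrate downstream parameters (kB, dB).
def downstream_residuals(p):
    sol = odeint(model, [0.0, 0.0], t_obs, args=(kA, dA, p[0], p[1]))
    return sol[:, 1] - y_obs[:, 1]

kB, dB = least_squares(downstream_residuals, x0=[0.5, 0.5],
                       bounds=(0.0, np.inf)).x
print("estimated:", kA, dA, kB, dB)

Splitting the fit into stages replaces one four-dimensional search with two two-dimensional ones, which is the kind of convergence-rate and computation-time gain a hierarchical approach exploits.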
Abstract:
In Quantitative Microbial Risk Assessment, it is vital to understand how the lag times of individual cells are distributed over a bacterial population. Once identified, such distributions can be used to predict the time by which, in a growth-supporting environment, a few pathogenic cells can multiply to a poisoning concentration level. We model the lag time of a single cell, inoculated into a new environment, by the delay of the growth function characterizing the generated subpopulation. We introduce an easy-to-implement procedure, based on the method of moments, to estimate the parameters of the distribution of single-cell lag times. The advantage of the method is especially apparent when the initial number of cells is small and random, and the culture is detectable only in the exponential growth phase.
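As an illustration of the method-of-moments step (a gamma distribution is assumed here for concreteness; the paper's distributional family may differ): if single-cell lag times are modelled as Gamma$(k, \theta)$, matching the sample mean $\bar{x}$ and sample variance $s^2$ to the model moments $k\theta$ and $k\theta^2$ yields

\[ \hat{k} = \frac{\bar{x}^2}{s^2}, \qquad \hat{\theta} = \frac{s^2}{\bar{x}}. \]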
Abstract:
This paper generalizes the original random matching model of money by Kiyotaki and Wright (1989) (KW) in two respects: first, the economy is characterized by an arbitrary distribution of agents who specialize in producing a particular consumption good; and second, these agents have preferences such that they want to consume any good with some probability. The results depend crucially on the fraction of producers of each good and the probability with which different agents want to consume each good. KW and other related models are shown to be parameterizations of this more general one.
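As an illustrative formalization under assumed notation (not drawn from the paper): if $p_j$ denotes the fraction of agents producing good $j$ and $\pi_{ij}$ the probability that agent $i$ wants to consume good $j$, the probability that a randomly matched partner produces a good agent $i$ wants is

\[ \sum_j p_j \, \pi_{ij}, \]

which is the kind of quantity on which the abstract says the results crucially depend.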
Abstract:
We apply the formalism of the continuous-time random walk (CTRW) to the study of financial data. The entire distribution of prices can be obtained once two auxiliary densities are known: the probability density for the pausing time between successive jumps and the corresponding probability density for the magnitude of a jump. We have applied the formalism to data on the U.S. dollar-deutsche mark futures exchange, finding good agreement between theory and the observed data.
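For the decoupled CTRW (a standard formulation, stated here with assumed notation rather than quoted from the paper), with waiting-time density $\psi(t)$ and jump-size density $f(x)$, the propagator follows the Montroll-Weiss equation in Fourier-Laplace space:

\[ \tilde{p}(k, s) = \frac{1 - \hat{\psi}(s)}{s} \cdot \frac{1}{1 - \hat{\psi}(s)\, \hat{f}(k)}, \]

where $\hat{\psi}(s)$ is the Laplace transform of $\psi$ and $\hat{f}(k)$ the Fourier transform of $f$; inverting the transforms recovers the full price-change distribution from the two auxiliary densities.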
Abstract:
The objective of this work was to compare the relative efficiency of initial selection and genetic parameter estimation using the augmented block design (ABD), the augmented block twice-replicated design (DABD), and groups of randomised block design experiments with common treatments (ERBCT), via simulation, considering both a fixed-effects model and a mixed model with regular treatment effects taken as random. Eight different conditions (scenarios) were considered for the simulations. From the 600 simulations in each scenario, the mean percentage of selection coincidence, the Pearson's correlation estimates between adjusted means for the fixed-effects model, and the heritability estimates for the mixed model were evaluated. DABD and ERBCT were very similar in their comparisons and slightly superior to ABD. Considering the initial stages of selection in a plant breeding program, ABD is a good alternative for selecting superior genotypes, although none of the designs was effective for estimating heritability in all the scenarios evaluated.
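For reference, a standard entry-mean heritability estimate of the kind evaluated under the mixed model (the formula is assumed here, not taken from the paper):

\[ \hat{h}^2 = \frac{\hat{\sigma}_g^2}{\hat{\sigma}_g^2 + \hat{\sigma}_e^2 / r}, \]

where $\hat{\sigma}_g^2$ and $\hat{\sigma}_e^2$ are the genotypic and residual variance components and $r$ is the number of replicates.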
Abstract:
When individuals learn by trial and error, they perform randomly chosen actions and then reinforce those actions that led to a high payoff. However, individuals do not always have to physically perform an action in order to evaluate its consequences. Rather, they may be able to mentally simulate actions and their consequences without actually performing them. Such fictitious learners can select actions with high payoffs without making long chains of trial-and-error learning. Here, we analyze the evolution of an $n$-dimensional cultural trait (or artifact) by learning, in a payoff landscape with a single optimum. We derive the stochastic learning dynamics of the distance to the optimum in trait space when choice between alternative artifacts follows the standard logit choice rule. We show that for both trial-and-error and fictitious learners, the learning dynamics stabilize at an approximate distance of $\sqrt{n}/(2\lambda_e)$ from the optimum, where $\lambda_e$ is an effective learning performance parameter that depends on the learning rule under scrutiny. Individual learners are thus unlikely to reach the optimum when traits are complex ($n$ large), and so face a barrier to further improvement of the artifact. We show, however, that this barrier can be significantly reduced in a large population of learners performing payoff-biased social learning, in which case $\lambda_e$ becomes proportional to population size. Overall, our results illustrate the effects of errors in learning, levels of cognition, and population size on the evolution of complex cultural traits.
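The standard logit choice rule named in the abstract: an action $a$ is chosen from the set $A$ with probability

\[ \Pr(a) = \frac{e^{\lambda \pi(a)}}{\sum_{b \in A} e^{\lambda \pi(b)}}, \]

where $\pi(a)$ is the payoff of $a$ and $\lambda \geq 0$ tunes how strongly choice is biased toward high payoffs ($\lambda = 0$ gives uniform random choice; $\lambda \to \infty$ approaches strict payoff maximization).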
Abstract:
We report Monte Carlo results for a nonequilibrium Ising-like model in two and three dimensions. Nearest-neighbor interactions $J$ change sign randomly with time due to competing kinetics. There follows a fast and random, i.e., spin-configuration-independent, diffusion of the $J$s, of the kind that takes place in dilute metallic alloys when magnetic ions diffuse. The system exhibits steady states of the ferromagnetic (antiferromagnetic) type when the probability $p$ that $J > 0$ is large (small) enough. No counterpart to the freezing phenomena found in quenched spin glasses occurs. We compare our results with existing mean-field and exact results, and obtain information about critical behavior.
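The underlying energy function is of the standard Ising form (written here with assumed notation; the time dependence of the couplings is as described in the abstract):

\[ H(t) = -\sum_{\langle ij \rangle} J_{ij}(t) \, s_i s_j, \qquad s_i = \pm 1, \]

where each nearest-neighbor coupling $J_{ij}(t)$ takes the value $+J$ with probability $p$ and $-J$ with probability $1 - p$, and the couplings additionally diffuse through the lattice independently of the spin configuration.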
Abstract:
The pricing problem of interest here consists of maximizing the revenue generated by the users of a transportation network. To reach their destinations, users choose routes and travel over arcs on which we impose tolls. Each route is characterized (in the eyes of the user) by its "disutility", a generalized length measure accounting for both the tolls and the other costs associated with its use. This problem has mostly been addressed under a deterministic demand model in which only routes of minimal disutility are assigned a positive flow. The deterministic model lends itself well to global solution but suffers from a lack of realism. We consider here a probabilistic extension of this model, under which network users are assigned to routes according to a logit discrete choice model. Although the resulting pricing problem is nonlinear and nonconvex, it nevertheless retains a strong combinatorial component that we exploit for algorithmic purposes. Our contribution is divided into three articles. In the first, we approach the problem from a theoretical standpoint for the single origin-destination pair case. We develop a first-order analysis that exploits the analytical properties of the logit assignment and prove the validity of network-topology simplification rules that reduce the dimension of the problem without changing its solution. We then establish the unimodality of the problem for a wide range of topologies and generalize some of our results to the product-line pricing problem. In the second article, we approach the problem numerically for the case with several origin-destination pairs. We develop algorithms that exploit local information and the kinship between the probabilistic and deterministic formulations. One result of our analysis is a set of bounds on the error made by combinatorial models in approximating the logit revenue. Our numerical experiments show that a rudimentary combinatorial approximation often identifies near-optimal solutions. In the third article, we consider the extension of the problem to heterogeneous demand. Demand is then assigned according to a mixed logit discrete choice model in which a user's price sensitivity is random. Under this model, the revenue has no closed-form expression and cannot be evaluated exactly. However, we show that nonlinear and combinatorial approximations can identify near-optimal solutions. Finally, we take the opportunity to illustrate the richness of the model through an economic interpretation, examining in particular the revenue contribution of the different user groups.
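Under assumed notation for illustration (not drawn from the thesis): for a single origin-destination pair with demand $d$ and routes $r$ with fixed cost $c_r$ and toll $t_r$, the logit pricing problem reads

\[ \max_{t} \; d \sum_r t_r \, \frac{e^{-\theta (c_r + t_r)}}{\sum_s e^{-\theta (c_s + t_s)}}, \]

where $\theta > 0$ is the logit scale parameter; the product of tolls and logit probabilities is the source of the nonlinearity and nonconvexity mentioned above.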
Abstract:
We complete the development of a testing ground for axioms of discrete stochastic choice. Our contribution here is to develop new posterior simulation methods for Bayesian inference, suitable for a class of prior distributions introduced by McCausland and Marley (2013). These prior distributions are joint distributions over various choice distributions over choice sets of different sizes. Since choice distributions over different choice sets can be mutually dependent, previous methods relying on conjugate prior distributions do not apply. We demonstrate by analyzing data from a previously reported experiment and report evidence for and against various axioms.
Abstract:
The crisis that erupted in the U.S. mortgage market in 2008 and spread throughout the entire financial system exposed the degree of interconnection that currently exists among the institutions of the sector and their ties to the productive sector, making clear the need to identify and characterize the systemic risk inherent in the system, so that regulators can pursue stability both for individual institutions and for the system as a whole. Through a model that combines the informational power of networks with a spatial autoregressive (panel-type) model, this paper shows the importance of adding to the micro-prudential approach (proposed in Basel II) a variable that captures the effect of being connected to other institutions, thereby carrying out a macro-prudential analysis (proposed in Basel III).
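A spatial autoregressive panel specification of the kind described, with notation assumed here for illustration:

\[ y_{it} = \rho \sum_j w_{ij} \, y_{jt} + x_{it}' \beta + \mu_i + \varepsilon_{it}, \]

where $w_{ij}$ are the entries of a network (spatial weight) matrix encoding connections between institutions $i$ and $j$, $x_{it}$ collects the micro-prudential covariates, $\mu_i$ is an institution effect, and $\rho$ captures the effect of being connected to other institutions that the macro-prudential analysis adds.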