7 results for: Markov chains, reversible Markov chains, simulation, Monte Carlo method
in Archivo Digital para la Docencia y la Investigación - Repositorio Institucional de la Universidad del País Vasco
Abstract:
This paper analyzes the stationarity of the price-dividend ratio in the context of a Markov-switching model à la Hamilton (1989) in which an asymmetric speed of adjustment is introduced. This specification robustly supports a nonlinear reversion process and identifies two relevant episodes: the post-war period from the mid-1950s to the mid-1970s and the so-called "1990s boom" period. A three-regime Markov-switching model displays the best regime identification and reveals that only the first part of the 1990s boom (1985-1995) and the post-war period are near-nonstationary states. Interestingly, the last part of the 1990s boom (1996-2000), characterized by a growing price-dividend ratio, is entirely attributed to a regime featuring a highly reverting process.
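A minimal sketch of a Markov-switching specification of this kind (generic notation assumed for illustration; the paper's asymmetric-adjustment specification is richer): with $x_t$ the price-dividend ratio (or its log) and $s_t$ an unobserved regime following a first-order Markov chain,

\[ x_t = \mu(s_t) + \phi(s_t)\, x_{t-1} + \sigma(s_t)\, \varepsilon_t, \qquad \varepsilon_t \sim \mathcal{N}(0,1), \]
\[ \Pr(s_t = j \mid s_{t-1} = i) = p_{ij}, \qquad s_t \in \{1, \dots, K\}, \]

where near-nonstationary regimes correspond to $\phi(s_t)$ close to one and highly reverting regimes to $\phi(s_t)$ well below one.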
Abstract:
This paper considers the basic present value model of interest rates under rational expectations with two additional features. First, following McCallum (1994), the model assumes a policy reaction function where changes in the short-term interest rate are determined by the long-short spread. Second, the short-term interest rate and the risk premium processes are characterized by a Markov regime-switching model. Using US post-war interest rate data, this paper finds evidence that a two-regime switching model fits the data better than the basic model. The estimation results also show the presence of two alternative states displaying quite different features.
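A stylized sketch of the model's two building blocks (the notation and the two-period present value form are assumptions for illustration, not taken from the paper): with $r_t$ the short rate, $R_t$ the long rate, $\xi_t$ the risk premium, and $s_t$ an unobserved regime following a first-order Markov chain,

\[ R_t = \tfrac{1}{2}\left( r_t + E_t\, r_{t+1} \right) + \xi_t, \]
\[ \Delta r_t = \lambda \left( R_t - r_t \right) + \zeta_t, \qquad \lambda > 0, \]
\[ \Pr(s_t = j \mid s_{t-1} = i) = p_{ij}, \]

where the parameters governing the short-rate and risk-premium processes switch with the regime $s_t$.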
Abstract:
Methods for generating a new population are a fundamental component of estimation of distribution algorithms (EDAs). They serve to transfer the information contained in the probabilistic model to the newly generated population. In EDAs based on Markov networks, methods for generating new populations usually discard information contained in the model to gain efficiency. Other methods, such as Gibbs sampling, use information about all interactions in the model but are computationally very costly. In this paper we propose new methods for generating new solutions in EDAs based on Markov networks. We introduce approaches based on inference methods for computing the most probable configurations and on model-based template recombination. We show that the application of different variants of inference methods can increase the EDAs’ convergence rate and reduce the number of function evaluations needed to find the optimum of binary and non-binary discrete functions.
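An illustrative sketch, not the paper's implementation: a minimal EDA loop for a binary problem in which new solutions are produced by recombining selected parents with a "template" solution, here simply the most probable value of each variable under a naive, fully factorized model. The paper's methods instead use Markov-network models with proper most-probable-configuration inference; the objective (onemax), truncation ratio, and recombination mask are assumptions. This sketch only mirrors the overall select-model-generate cycle.

# Minimal EDA with template recombination (illustrative sketch only).
import numpy as np

def onemax(x):
    # Toy objective: number of ones in the bit string.
    return int(x.sum())

def eda(n_vars=20, pop_size=100, n_gen=50, trunc=0.5, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.integers(0, 2, size=(pop_size, n_vars))
    for _ in range(n_gen):
        fitness = np.array([onemax(ind) for ind in pop])
        order = np.argsort(fitness)[::-1]
        selected = pop[order[: int(trunc * pop_size)]]   # truncation selection
        probs = selected.mean(axis=0)                    # fully factorized model
        template = (probs >= 0.5).astype(int)            # most probable configuration
        # Template recombination: copy a random subset of variables from the
        # template into a randomly chosen selected parent.
        children = []
        for idx in rng.choice(len(selected), size=pop_size):
            child = selected[idx].copy()
            mask = rng.random(n_vars) < 0.5
            child[mask] = template[mask]
            children.append(child)
        pop = np.array(children)
    best = max(pop, key=onemax)
    return best, onemax(best)

if __name__ == "__main__":
    best, value = eda()
    print(value, best)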
Abstract:
This work describes the state of the art in automatic dialogue strategy management using Markov decision processes (MDPs) with reinforcement learning (RL); partially observable Markov decision processes (POMDPs) are also described. To test the validity of these methods, two spoken dialogue systems have been developed: the first provides weather forecasts, and the second is a more complex system for train information. With the first system, a rule-based strategy was compared with an automatically trained one, using a real corpus to train the automatic strategy. With the second system, the scalability of these methods to larger systems was tested.
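An illustrative sketch, not the systems described above: tabular Q-learning on a toy slot-filling dialogue MDP with two slots (for example, city and date). States record which slots are filled; actions are "ask slot 0", "ask slot 1", and "close". The rewards, the recognition probability, and the slot structure are all assumptions chosen for illustration.

# Toy dialogue MDP trained with tabular Q-learning (illustrative sketch only).
import random

ACTIONS = ["ask0", "ask1", "close"]
P_UNDERSTOOD = 0.8   # assumed probability that the user's answer is recognised

def step(state, action):
    filled = list(state)
    if action == "close":
        # +20 for closing with both slots filled, -10 for closing too early.
        reward = 20 if all(filled) else -10
        return tuple(filled), reward, True
    slot = int(action[-1])
    if random.random() < P_UNDERSTOOD:
        filled[slot] = True
    return tuple(filled), -1, False     # -1 per turn favours short dialogues

def q_learning(episodes=5000, alpha=0.2, gamma=0.95, epsilon=0.1):
    Q = {(f0, f1): {a: 0.0 for a in ACTIONS}
         for f0 in (False, True) for f1 in (False, True)}
    for _ in range(episodes):
        state, done = (False, False), False
        while not done:
            if random.random() < epsilon:
                action = random.choice(ACTIONS)          # explore
            else:
                action = max(Q[state], key=Q[state].get)  # exploit
            nxt, reward, done = step(state, action)
            target = reward + (0.0 if done else gamma * max(Q[nxt].values()))
            Q[state][action] += alpha * (target - Q[state][action])
            state = nxt
    return Q

if __name__ == "__main__":
    Q = q_learning()
    for state, values in Q.items():
        print(state, max(values, key=values.get))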
Abstract:
We consider cooperation situations where players have network relations. Networks evolve according to a stationary transition probability matrix and at each moment in time players receive payoffs from a stationary allocation rule. Players discount the future by a common factor. The pair formed by an allocation rule and a transition probability matrix is called a forward-looking network formation scheme if, first, the probability that a link is created is positive if the discounted, expected gains to its two participants are positive, and if, second, the probability that a link is eliminated is positive if the discounted, expected gains to at least one of its two participants are positive. The main result is the existence, for all discount factors and all value functions, of a forward-looking network formation scheme. Furthermore, we can always find a forward-looking network formation scheme such that (i) the allocation rule is component balanced and (ii) the transition probabilities increase in the difference in payoffs for the corresponding players responsible for the transition. We use this dynamic solution concept to explore the tension between efficiency and stability.
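A compact formalization of the two defining conditions above (notation assumed for illustration; the paper's exact definition may differ in detail): let $Y_i(g)$ be player $i$'s payoff in network $g$ under the allocation rule, $p(g, g')$ the transition probability, $\delta$ the common discount factor, and

\[ V_i(g) = \sum_{t=0}^{\infty} \delta^{t} \sum_{g'} \Pr\left(g_t = g' \mid g_0 = g\right) Y_i(g') \]

player $i$'s expected discounted payoff starting from $g$. Then $(Y, p)$ is a forward-looking network formation scheme if, for every network $g$,

\[ ij \notin g,\; V_i(g+ij) > V_i(g) \text{ and } V_j(g+ij) > V_j(g) \;\Longrightarrow\; p(g,\, g+ij) > 0, \]
\[ ij \in g,\; V_i(g-ij) > V_i(g) \text{ or } V_j(g-ij) > V_j(g) \;\Longrightarrow\; p(g,\, g-ij) > 0. \]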
Abstract:
The combination of polycarboxylate anions and dipyridyl ligands is an effective strategy to produce solid coordination frameworks (SCFs), crystalline materials built from metal ions connected through organic ligands into extended structures. In this context, this work focuses on two novel Cu(II)-based SCFs containing PDC (2,5-pyridinedicarboxylate) and bpa (1,2-di(4-pyridyl)ethane), the first structures reported in the literature containing both ligands. Their chemical formulae are [Cu2(PDC)2(bpa)(H2O)2]•3H2O•DMF (1) and [Cu2(PDC)2(bpa)(H2O)2]•7H2O (2), where DMF is dimethylformamide. Compounds 1 and 2 have been characterized by means of XRD, IR, TG/DTG, and DTA analyses.
Abstract:
Conference communication (poster): XXIV Simposio del Grupo Especializado de Cristalografía y Crecimiento Cristalino, GE3C. 23-26 June 2014, Bilbao.