4 results for “Modèle Markov-modulé”
Abstract:
This paper analyzes the stationarity of the price-dividend ratio in the context of a Markov-switching model à la Hamilton (1989) where an asymmetric speed of adjustment is introduced. This particular specification robustly supports a nonlinear reversion process and identifies two relevant episodes: the post-war period from the mid-1950s to the mid-1970s and the so-called “90s boom” period. A three-regime Markov-switching model displays the best regime identification and reveals that only the first part of the 90s boom (1985-1995) and the post-war period are near-nonstationary states. Interestingly, the last part of the 90s boom (1996-2000), characterized by a growing price-dividend ratio, is entirely attributed to a regime featuring a highly reverting process.
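The mechanism behind this abstract can be illustrated with a small simulation. The sketch below (with purely illustrative parameter values, not the paper's estimates) draws a two-state Markov regime path and lets the speed of mean reversion of the ratio differ by regime, which is the asymmetric-adjustment idea described above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters for a two-regime Markov-switching AR(1):
# each regime has its own speed of reversion (phi) toward the mean mu,
# mimicking the asymmetric adjustment described in the abstract.
P = np.array([[0.95, 0.05],    # P[i, j] = Pr(s_t = j | s_{t-1} = i)
              [0.10, 0.90]])
phi = np.array([0.98, 0.70])   # regime 0: near-nonstationary; regime 1: highly reverting
mu, sigma = 3.0, 0.05          # long-run mean and shock volatility (assumed)

T = 500
s = np.zeros(T, dtype=int)     # simulated regime path
y = np.full(T, mu)             # simulated (log) price-dividend ratio
for t in range(1, T):
    s[t] = rng.choice(2, p=P[s[t - 1]])
    y[t] = mu + phi[s[t]] * (y[t - 1] - mu) + sigma * rng.normal()
```

Estimating such a model from data (rather than simulating it) is what the Hamilton (1989) filter does; the simulation only shows how persistence of the ratio can change with the hidden state.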
Abstract:
This paper considers the basic present value model of interest rates under rational expectations with two additional features. First, following McCallum (1994), the model assumes a policy reaction function in which changes in the short-term interest rate are determined by the long-short spread. Second, the short-term interest rate and risk premium processes are characterized by a Markov regime-switching model. Using US post-war interest rate data, this paper finds evidence that a two-regime switching model fits the data better than the basic model. The estimation results also reveal two alternative states with markedly different features.
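The two ingredients described here can be combined in a toy simulation. The sketch below (all numbers are assumptions for illustration, not the paper's estimates) moves the short rate by a McCallum-style reaction to the long-short spread, while the risk premium embedded in the spread and the shock volatility switch with a hidden two-state Markov chain:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-regime Markov chain driving the risk premium and volatility.
P = np.array([[0.90, 0.10],
              [0.15, 0.85]])
premium = np.array([0.00, 0.50])   # regime-dependent risk premium (assumed)
vol = np.array([0.10, 0.30])       # regime-dependent shock volatility (assumed)
lam = 0.2                          # response of the short rate to the spread

T = 300
s = np.zeros(T, dtype=int)         # regime path
r = np.full(T, 4.0)                # short-term interest rate (percent)
spread = np.zeros(T)               # long-short spread
for t in range(1, T):
    s[t] = rng.choice(2, p=P[s[t - 1]])
    # Spread = regime-dependent premium plus noise (crude proxy for expectations).
    spread[t] = premium[s[t]] + 0.1 * rng.normal()
    # Policy reaction function: Delta r_t = lam * spread_{t} + regime-dependent shock.
    r[t] = r[t - 1] + lam * spread[t] + vol[s[t]] * rng.normal()
```

In the paper the regimes are inferred from the data by maximum likelihood; the simulation only illustrates how the spread-driven reaction function and regime switching interact.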
Abstract:
Methods for generating a new population are a fundamental component of estimation of distribution algorithms (EDAs). They serve to transfer the information contained in the probabilistic model to the newly generated population. In EDAs based on Markov networks, methods for generating new populations usually discard information contained in the model to gain efficiency. Other methods, such as Gibbs sampling, use information about all interactions in the model but are computationally very costly. In this paper we propose new methods for generating new solutions in EDAs based on Markov networks. We introduce approaches based on inference methods for computing the most probable configurations and on model-based template recombination. We show that the application of different variants of inference methods can increase the EDAs’ convergence rate and reduce the number of function evaluations needed to find the optimum of binary and non-binary discrete functions.
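The "most probable configuration" step mentioned above can be made concrete for the simplest case. The sketch below assumes a chain-structured Markov network over binary variables (a simplification; the paper's models need not be chains) and computes the most probable configuration exactly by max-product dynamic programming, the kind of inference that can seed new solutions in an EDA:

```python
import numpy as np

def most_probable_configuration(unary, pairwise):
    """Exact MPE on a binary chain Markov network.

    unary:    (n, 2) log-potentials, one row per variable;
    pairwise: (n-1, 2, 2) log-potentials, one matrix per chain edge.
    Returns the assignment maximizing the sum of log-potentials.
    """
    n = unary.shape[0]
    msg = unary[0].copy()                  # running max-marginal over x_0..x_i
    back = np.zeros((n - 1, 2), dtype=int)
    for i in range(1, n):
        # scores[a, b] = best score ending with x_{i-1}=a, x_i=b
        scores = msg[:, None] + pairwise[i - 1] + unary[i][None, :]
        back[i - 1] = np.argmax(scores, axis=0)
        msg = np.max(scores, axis=0)
    x = np.zeros(n, dtype=int)
    x[-1] = int(np.argmax(msg))
    for i in range(n - 2, -1, -1):         # backtrack the argmax choices
        x[i] = back[i, x[i + 1]]
    return x
```

For tree- or cycle-structured networks the same max-product idea applies via belief propagation or its loopy approximation, at higher cost.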
Abstract:
This work describes the state of the art in automatic dialogue strategy management using Markov decision processes (MDPs) with reinforcement learning (RL). Partially observable Markov decision processes (POMDPs) are also described. To test the validity of these methods, two spoken dialogue systems have been developed. The first is a spoken dialogue system that provides weather forecasts, and the second is a more complex system for train information. With the first system, comparisons between a rule-based system and an automatically trained system have been made, using a real corpus to train the automatic strategy. The second system has been used to test the scalability of these methods in larger systems.
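The core idea of learning a dialogue strategy as an MDP policy can be shown on a toy problem. The states, actions, and rewards below are all hypothetical (not the systems described in the abstract): the agent must fill a single slot, e.g. the city of a weather query, before closing the dialogue, and tabular Q-learning discovers that strategy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dialogue MDP (hypothetical):
# states:  0 = slot empty, 1 = slot filled, 2 = terminal
# actions: 0 = ask for the slot, 1 = confirm and close
n_states, n_actions = 3, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1     # learning rate, discount, exploration

def step(state, action):
    """Return (next_state, reward) for the toy dialogue."""
    if state == 0:
        return (1, 0.0) if action == 0 else (2, -1.0)  # closing too early is penalized
    return (1, -0.1) if action == 0 else (2, 1.0)      # re-asking wastes a turn

for _ in range(2000):                   # episodes of epsilon-greedy Q-learning
    s = 0
    while s != 2:
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r = step(s, a)
        target = r if s2 == 2 else r + gamma * np.max(Q[s2])
        Q[s, a] += alpha * (target - Q[s, a])
        s = s2
```

After training, the greedy policy asks in state 0 and closes in state 1. Real systems scale this up with many slots and, under uncertainty about what the user said, with POMDP belief states instead of observed states.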