955 results for Markov Processes
Abstract:
Multi-temporal land use information was derived from two decades of remote sensing data and simulated for 2012 and 2020 with Cellular Automata (CA), considering scenarios, change probabilities (through a Markov chain) and Multi-Criteria Evaluation (MCE). Agents and constraints were considered in modeling the urbanization process. Agents were normalized through fuzzification, and priority weights were assigned through Analytical Hierarchy Process (AHP) pairwise comparison for each factor (in the MCE) to derive behavior-oriented transition rules for each land use class. The simulation shows good agreement with the classified data. Fuzzification and AHP helped in clearly analyzing the effects of the growth agents, and CA-Markov proved a powerful modelling tool for capturing and visualizing the spatiotemporal patterns of urbanization. This provides a rapid land evaluation framework with the essential insights into the urban trajectory needed for effective, sustainable city planning.
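As a rough illustration of two building blocks named above, the Python sketch below estimates a Markov transition-probability matrix from two classified land-use maps and derives AHP priority weights as the principal eigenvector of a pairwise-comparison matrix. The toy maps, class codes and comparison values are our own placeholders, not the study's data:

    import numpy as np

    def markov_transition_matrix(map_t0, map_t1, n_classes):
        """Row-stochastic P[i, j] = P(class i at t0 -> class j at t1)."""
        counts = np.zeros((n_classes, n_classes))
        for i, j in zip(map_t0.ravel(), map_t1.ravel()):
            counts[i, j] += 1
        rows = counts.sum(axis=1, keepdims=True)
        return counts / np.where(rows == 0, 1, rows)   # guard classes absent at t0

    def ahp_weights(pairwise):
        """AHP priority weights = normalized principal eigenvector."""
        vals, vecs = np.linalg.eig(pairwise)
        w = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
        return w / w.sum()

    # two toy 4x4 classified maps, classes 0=water, 1=vegetation, 2=urban
    map_t0 = np.random.default_rng(0).integers(0, 3, (4, 4))
    map_t1 = np.random.default_rng(1).integers(0, 3, (4, 4))
    print(markov_transition_matrix(map_t0, map_t1, 3))

    # toy pairwise comparison of three growth agents, e.g. road proximity,
    # slope and distance to the city centre (Saaty 1-9 scale)
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])
    print(ahp_weights(A))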
Abstract:
We develop a general theory of Markov chains realizable as random walks on R-trivial monoids. It provides explicit and simple formulas for the eigenvalues of the transition matrix, for the multiplicities of the eigenvalues via Möbius inversion along a lattice, a condition for diagonalizability of the transition matrix, and some techniques for bounding the mixing time. In addition, we discuss several examples, such as Toom-Tsetlin models and an exchange walk for finite Coxeter groups, as well as examples previously studied by the authors, such as nonabelian sandpile models and the promotion Markov chain on posets. Many of these examples can be viewed as random walks on quotients of free tree monoids, a new class of monoids whose combinatorics we develop.
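The Tsetlin library (move-to-front) chain is a standard example of a random walk on an R-trivial monoid. The small numeric check below, our own illustration with arbitrary request probabilities rather than the paper's construction, builds its transition matrix on three books and confirms that the eigenvalues are the subset sums predicted by the theory, with multiplicities given by derangement numbers:

    from itertools import permutations, combinations
    import numpy as np

    p = np.array([0.5, 0.3, 0.2])           # request probability of each book
    states = list(permutations(range(3)))    # all shelf orders
    index = {s: k for k, s in enumerate(states)}

    T = np.zeros((len(states), len(states)))
    for s in states:
        for book in range(3):
            t = (book,) + tuple(x for x in s if x != book)  # move book to front
            T[index[s], index[t]] += p[book]

    eigs = np.sort(np.real(np.linalg.eigvals(T)))

    # theory: eigenvalue sum_{i in S} p_i for each subset S of books, with
    # multiplicity d_{n-|S|}, the number of derangements of the complement
    d = [1, 0, 1, 2]                         # derangement numbers d_0..d_3
    predicted = []
    for r in range(4):
        for S in combinations(range(3), r):
            predicted += [p[list(S)].sum()] * d[3 - r]
    print(eigs, np.sort(predicted))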
Abstract:
Monte Carlo simulation methods involving splitting of Markov chains have been used to evaluate multi-fold integrals in various application areas. In this paper we examine the performance of these methods in the context of evaluating reliability integrals, with a view to characterizing the sampling fluctuations. The methods discussed include the Au-Beck subset simulation, the Holmes-Diaconis-Ross method, and the generalized splitting algorithm. A few refinements based on the first-order reliability method are suggested for selecting the algorithmic parameters of the latter two methods. The bias and sampling variance of the alternative estimators are discussed, and an approximation to the sampling distribution of some of these estimators is obtained. Illustrative examples involving component and series-system reliability analyses are presented to bring out the relative merits of the alternative methods.
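A minimal sketch of the subset-simulation idea in the spirit of the Au-Beck method, estimating a rare tail probability for a toy limit-state function with a known answer; the function g, the level probability p0 and the chain settings are illustrative choices, not the paper's benchmarks:

    import numpy as np
    rng = np.random.default_rng(42)

    def g(x):                       # toy limit-state function with a known tail
        return x.sum(axis=-1) / np.sqrt(x.shape[-1])

    def subset_simulation(d=10, b=4.0, n=1000, p0=0.1, sigma=0.5):
        x = rng.standard_normal((n, d))
        y = g(x)
        p_f, n_seed = 1.0, int(p0 * n)
        for _ in range(20):                      # cap on the number of levels
            level = np.sort(y)[-n_seed]          # threshold keeping the top p0*n samples
            if level >= b:                       # final level reached
                break
            p_f *= p0
            x = x[y >= level]                    # seeds for the next level
            chains = [x.copy()]
            for _ in range(n // n_seed - 1):     # Metropolis conditioned on {g >= level}
                cur = chains[-1]
                prop = cur + sigma * rng.standard_normal(cur.shape)
                ratio = np.exp(0.5 * (cur**2 - prop**2).sum(axis=1))  # N(0, I) density ratio
                cand = np.where((rng.random(len(cur)) < ratio)[:, None], prop, cur)
                ok = g(cand) >= level            # reject moves that leave the level set
                chains.append(np.where(ok[:, None], cand, cur))
            x = np.vstack(chains)
            y = g(x)
        return p_f * np.mean(y >= b)

    print(subset_simulation())   # exact answer: 1 - Phi(4), roughly 3.2e-5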
Abstract:
In this article, we study a risk-sensitive control problem with controlled continuous-time Markov chain state dynamics. Using the multiplicative dynamic programming principle along with the atomic structure of the state dynamics, we prove the existence and give a characterization of an optimal risk-sensitive control, under geometric ergodicity of the state dynamics together with a smallness condition on the running cost.
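On a finite-state toy model, the multiplicative dynamic programming principle reduces to a multiplicative value iteration whose normalized growth rate gives the optimal risk-sensitive average cost. The sketch below uses random kernels and costs of our own choosing, in discrete time rather than the paper's continuous-time setting:

    import numpy as np
    rng = np.random.default_rng(0)

    nS, nA, theta = 4, 2, 0.5
    P = rng.random((nA, nS, nS)); P /= P.sum(axis=2, keepdims=True)  # P[a, x, y]
    c = rng.random((nA, nS))                                          # running cost c(a, x)

    V = np.ones(nS)
    for _ in range(500):
        # multiplicative Bellman operator: (TV)(x) = min_a e^{theta c} sum_y P(y|x,a) V(y)
        TV = (np.exp(theta * c)[:, :, None] * P * V[None, None, :]).sum(axis=2).min(axis=0)
        rho, V = TV.max(), TV / TV.max()     # normalize to track the growth rate

    lam = np.log(rho) / theta                # optimal risk-sensitive average cost
    policy = (np.exp(theta * c) * (P * V[None, None, :]).sum(axis=2)).argmin(axis=0)
    print(lam, policy)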
Abstract:
We present a stochastic simulation technique for subset selection in time series models, based on the use of indicator variables with the Gibbs sampler within a hierarchical Bayesian framework. As an example, the method is applied to the selection of subset linear AR models, in which only significant lags are included. Joint sampling of the indicators and parameters is found to speed convergence. We discuss the possibility of model mixing where the model is not well determined by the data, and the extension of the approach to include non-linear model terms.
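A minimal spike-and-slab Gibbs sketch in the spirit of the indicator-variable approach: each AR lag carries an inclusion indicator that is sampled with its coefficient integrated out. Known noise variance, a fixed slab variance and a point-mass spike are simplifying assumptions on our part:

    import numpy as np
    rng = np.random.default_rng(1)

    # simulate an AR(3) series in which only lags 1 and 3 are active
    n, phi = 400, {1: 0.5, 3: 0.3}
    y = np.zeros(n + 50)
    for i in range(3, len(y)):
        y[i] = phi[1] * y[i-1] + phi[3] * y[i-3] + rng.standard_normal()
    y = y[50:]

    p_max, sigma2, tau2, pi1 = 5, 1.0, 1.0, 0.5
    X = np.column_stack([y[p_max - j:-j] for j in range(1, p_max + 1)])  # lag matrix
    z = y[p_max:]

    gamma, beta, counts = np.ones(p_max, dtype=bool), np.zeros(p_max), np.zeros(p_max)
    for it in range(2000):
        for j in range(p_max):
            r = z - X @ beta + X[:, j] * beta[j]        # residual excluding lag j
            s, b = X[:, j] @ X[:, j], X[:, j] @ r
            # log marginal-likelihood ratio of gamma_j = 1 vs 0 (beta_j integrated out)
            log_bf = (-0.5 * np.log(1 + tau2 * s / sigma2)
                      + 0.5 * tau2 * b**2 / (sigma2 * (sigma2 + tau2 * s)))
            p_incl = pi1 / (pi1 + (1 - pi1) * np.exp(-log_bf))
            gamma[j] = rng.random() < p_incl
            if gamma[j]:
                v = 1.0 / (s / sigma2 + 1.0 / tau2)     # conditional posterior of beta_j
                beta[j] = rng.normal(v * b / sigma2, np.sqrt(v))
            else:
                beta[j] = 0.0
        if it >= 500:
            counts += gamma
    print(counts / 1500)   # posterior inclusion probabilities; lags 1 and 3 should dominate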
Abstract:
Approximate Bayesian computation (ABC) is a popular technique for analysing data for complex models where the likelihood function is intractable. It involves using simulation from the model to approximate the likelihood, with this approximate likelihood then being used to construct an approximate posterior. In this paper, we consider methods that estimate the parameters by maximizing the approximate likelihood used in ABC. We give a theoretical analysis of the asymptotic properties of the resulting estimator. In particular, we derive results analogous to those of consistency and asymptotic normality for standard maximum likelihood estimation. We also discuss how sequential Monte Carlo methods provide a natural method for implementing our likelihood-based ABC procedures.
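A toy illustration of the central object: an approximate likelihood estimated by simulating from the model and kernel-smoothing the distance between simulated and observed summaries, then maximized over a grid. The normal model, summary statistic and bandwidth are our choices:

    import numpy as np
    rng = np.random.default_rng(7)

    y_obs = rng.normal(2.0, 1.0, size=50)
    s_obs = y_obs.mean()                     # observed summary statistic

    def abc_loglik(theta, M=2000, h=0.05):
        # simulate M datasets at theta and kernel-weight their summaries
        s_sim = rng.normal(theta, 1.0, size=(M, 50)).mean(axis=1)
        kern = np.exp(-0.5 * ((s_sim - s_obs) / h) ** 2)   # Gaussian kernel K_h
        return np.log(kern.mean() + 1e-300)

    grid = np.linspace(1.0, 3.0, 81)
    loglik = [abc_loglik(th) for th in grid]
    print("ABC-MLE:", grid[int(np.argmax(loglik))], " sample mean:", s_obs)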
Abstract:
This work addresses the problem of estimating the optimal value function in a Markov Decision Process from observed state-action pairs. We adopt a Bayesian approach to inference, which allows both the model to be estimated and predictions about actions to be made in a unified framework, providing a principled approach to mimicry of a controller on the basis of observed data. A new Markov chain Monte Carlo (MCMC) sampler is devised for simulation from the posterior distribution over the optimal value function. It includes a parameter-expansion step, which is shown to be essential for the good convergence properties of the MCMC sampler. As an illustration, the method is applied to learning a human controller.
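A much-simplified analogue of the setup, not the paper's sampler (which also involves the parameter-expansion step): assume the observed state-action pairs come from a softmax-in-Q controller, place a standard normal prior on the action-value table Q, and sample Q by random-walk Metropolis:

    import numpy as np
    rng = np.random.default_rng(3)

    nS, nA = 3, 2
    Q_true = rng.normal(size=(nS, nA))
    states = rng.integers(0, nS, size=300)
    probs = np.exp(Q_true) / np.exp(Q_true).sum(axis=1, keepdims=True)
    actions = np.array([rng.choice(nA, p=probs[s]) for s in states])

    def log_post(Q):
        logp = Q - np.logaddexp.reduce(Q, axis=1, keepdims=True)   # log softmax
        return logp[states, actions].sum() - 0.5 * (Q ** 2).sum()  # N(0,1) prior

    Q, samples = np.zeros((nS, nA)), []
    lp = log_post(Q)
    for it in range(20000):
        prop = Q + 0.1 * rng.normal(size=Q.shape)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            Q, lp = prop, lp_prop
        if it >= 10000 and it % 10 == 0:
            samples.append(Q.copy())

    post_mean = np.mean(samples, axis=0)
    # Q is identified only up to a per-state shift under softmax; compare advantages
    print(post_mean - post_mean.mean(axis=1, keepdims=True))
    print(Q_true - Q_true.mean(axis=1, keepdims=True))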
Abstract:
Using Markov chain theory, and based on fatigue crack growth tests on small specimens of 16Mn steel, a probabilistic evolution model for physically short crack growth is constructed. The model's simulation of the distribution of crack-growth cycle counts, and of the evolution of that distribution, shows good agreement with the experimental results, thus providing a means for the probabilistic analysis and reliability evaluation of physically short crack growth.
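A hedged sketch of a unit-jump crack-growth chain in the style of such models (the state count and per-state stay probabilities are illustrative, not fitted to the 16Mn data): in each loading cycle the crack either stays in its current length state or advances one state, so the time spent at each state is geometric and the cycle-count distribution for traversing all states is a convolution of geometrics:

    import numpy as np

    p_stay = np.array([0.98, 0.97, 0.96, 0.95, 0.93])  # stay probability per crack-length state
    max_cycles = 2000
    k = np.arange(max_cycles)

    # cycles in state i: 1 mandatory cycle + Geometric(1 - p_i) extra cycles;
    # the pmf of the total extra-cycle count is the convolution over states
    pmf = np.zeros(max_cycles); pmf[0] = 1.0
    for p in p_stay:
        geom = (1 - p) * p ** k
        pmf = np.convolve(pmf, geom)[:max_cycles]

    cdf = np.cumsum(pmf)
    median = len(p_stay) + int(np.searchsorted(cdf, 0.5))
    print("median cycles to traverse all states ~", median)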