72 results for "Matriz de Markov" (Markov matrix)
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Abstract:
Glyoxal can be obtained from biomass (for instance, from lipid oxidation) and is neither toxic nor volatile; it was therefore used in the present work as a substitute for formaldehyde in the preparation of a novolac-type phenolic resin, with oxalic acid, which can also be obtained from renewable sources, as the catalyst. The glyoxal-phenol resin was used to prepare composites reinforced with microcrystalline cellulose (MC; 30, 50, and 70 wt%), a cellulose with high surface area. Scanning electron microscopy (SEM) images of the fractured surfaces showed that the composites had a good reinforcement/matrix interface, a consequence of the high surface area of the MC and of the presence of polar (hydroxyl) groups in both the matrix and the cellulose, which allowed the formation of hydrogen bonds and favored the compatibility between them. Dynamic mechanical thermal analysis (DMTA) showed that all the composites had a high storage modulus at room temperature. In addition, the composite reinforced with 30% MC showed low water absorption, comparable to that of the phenolic thermoset used on an industrial scale. The results demonstrated that composites with good properties can be prepared using a high proportion of biomass-derived materials.
Abstract:
The biological mechanisms developed to improve the quality of bone regeneration and tissue repair at specific periodontal sites remain a challenge and have been complemented by the cell-adhesion capacity of type I collagen, promoted by a synthetic cell-binding peptide (P-15) combined with an inorganic bone matrix (MIO) to form MIO/P-15. The aim of this study was to evaluate clinical attachment level loss and the response of the periodontal pocket in teeth 3 and 6 months after grafting with MIO/P-15. Twenty-one dogs from the Hospital Veterinário da Universidade de São Paulo were anesthetized for periodontal treatment, and 132 tooth surfaces with clinical attachment loss were treated; 36.4% (48 surfaces) received the cell-adhesion peptide and 63.6% (84 surfaces) formed the control group, which received conventional treatment (mucogingival flap and root planing). The procedure was documented with intraoral radiographs, and all periodontal pocket probings were photographed. After 3 and 6 months, the animals were re-anesthetized for new evaluations, radiographs, photographs, and periodontal probings. The 48 surfaces with clinical attachment loss that received the graft material showed a 40% rate of recovery of clinical attachment level after 6 months. The control group of tooth surfaces showed no change in clinical attachment level. The palatal surface showed the best regeneration rate (40%), and the canine and molar teeth showed the best responses (57.14% and 65%, respectively). There were no signs of post-surgical infection related to the lack of oral hygiene of the animals. It can be concluded that MIO/P-15 aids in the regeneration and reattachment of periodontal structures, including alveolar bone. Its application proved easy and practical, and the incidence of post-surgical complications was low. Even so, further studies are needed to evaluate the quantity and quality of the newly formed bone and periodontal ligament.
Abstract:
The main goal of this paper is to establish some equivalence results on stability, recurrence, and ergodicity between a piecewise deterministic Markov process (PDMP) {X(t)} and an embedded discrete-time Markov chain {Theta(n)} generated by a Markov kernel G that can be explicitly characterized in terms of the three local characteristics of the PDMP, leading to tractable criterion results. First we establish some important results characterizing {Theta(n)} as a sampling of the PDMP {X(t)} and deriving a connection between the probability of the first return time to a set for the discrete-time Markov chains generated by G and the resolvent kernel R of the PDMP. From these results we obtain equivalence results regarding irreducibility, existence of sigma-finite invariant measures, and (positive) recurrence and (positive) Harris recurrence between {X(t)} and {Theta(n)}, generalizing the results of [F. Dufour and O. L. V. Costa, SIAM J. Control Optim., 37 (1999), pp. 1483-1502] in several directions. Sufficient conditions in terms of a modified Foster-Lyapunov criterion are also presented to ensure positive Harris recurrence and ergodicity of the PDMP. We illustrate the use of these conditions by showing the ergodicity of a capacity expansion model.
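For intuition about what a Foster-Lyapunov criterion checks, the finite-state, discrete-time analogue is the drift condition PV <= V - c + b*1_C on a "small" set C. The sketch below verifies this condition numerically; the chain P, test function V, set C, and constants are illustrative placeholders, not data from the paper.

```python
import numpy as np

# Illustrative finite-state chain (not from the paper): transition matrix P,
# candidate Lyapunov function V, and a "small" set C of states.
P = np.array([[0.9, 0.1, 0.0],
              [0.7, 0.2, 0.1],
              [0.2, 0.7, 0.1]])
V = np.array([1.0, 3.0, 6.0])
C = {0}          # small set
c, b = 0.5, 5.0  # drift margin and excursion bound

def satisfies_drift(P, V, C, c, b):
    """Check the Foster-Lyapunov drift condition P V <= V - c + b * 1_C."""
    PV = P @ V
    bound = V - c + b * np.isin(np.arange(len(V)), list(C))
    return bool(np.all(PV <= bound + 1e-12))

print(satisfies_drift(P, V, C, c, b))
```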
Abstract:
This paper deals with the long run average continuous control problem of piecewise deterministic Markov processes (PDMPs) taking values in a general Borel space and with compact action space depending on the state variable. The control variable acts on the jump rate and transition measure of the PDMP, and the running and boundary costs are assumed to be positive but not necessarily bounded. Our first main result is to obtain an optimality equation for the long run average cost in terms of a discrete-time optimality equation related to the embedded Markov chain given by the postjump location of the PDMP. Our second main result guarantees the existence of a feedback measurable selector for the discrete-time optimality equation by establishing a connection between this equation and an integro-differential equation. Our final main result is to obtain some sufficient conditions for the existence of a solution for a discrete-time optimality inequality and an ordinary optimal feedback control for the long run average cost using the so-called vanishing discount approach. Two examples are presented illustrating the possible applications of the results developed in the paper.
Abstract:
We propose and analyze two different Bayesian online algorithms for learning in discrete Hidden Markov Models and compare their performance with the already known Baldi-Chauvin Algorithm. Using the Kullback-Leibler divergence as a measure of generalization we draw learning curves in simplified situations for these algorithms and compare their performances.
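As a reference for the generalization measure mentioned above, the Kullback-Leibler divergence between two discrete distributions is easy to compute; a minimal sketch follows, where the "true" and "learned" distributions are illustrative placeholders (for example, rows of HMM transition or emission matrices), not the Bayesian online algorithms themselves.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) between two discrete distributions."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Illustrative example: compare a "true" distribution with a learned estimate,
# as one would do pointwise when drawing a learning curve.
p_true    = np.array([0.7, 0.2, 0.1])
p_learned = np.array([0.6, 0.3, 0.1])
print(kl_divergence(p_true, p_learned))
```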
Abstract:
The main goal of this paper is to apply the so-called policy iteration algorithm (PIA) to the long run average continuous control problem of piecewise deterministic Markov processes (PDMPs) taking values in a general Borel space and with compact action space depending on the state variable. In order to do that we first derive some important properties for a pseudo-Poisson equation associated to the problem. It is then shown that, under some classical hypotheses, the PIA converges to a solution satisfying the optimality equation and that this solution yields an optimal control strategy for the average control problem of the continuous-time PDMP in feedback form.
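For readers unfamiliar with the PIA, its finite-state, discrete-time analogue (Howard's policy iteration for the long-run average cost of a unichain MDP) conveys the evaluation/improvement loop. The sketch below is only that discrete analogue under illustrative data, not the PDMP algorithm of the paper.

```python
import numpy as np

def policy_iteration_average_cost(P, c, n_iter=100):
    """Howard's policy iteration for the long-run average cost of a finite
    unichain MDP.  P[a] is the transition matrix and c[a] the cost vector
    under action a."""
    n_actions, n_states = len(P), P[0].shape[0]
    policy = np.zeros(n_states, dtype=int)
    for _ in range(n_iter):
        # Policy evaluation: solve g + h(s) = c(s, pi(s)) + sum_j P(s, j) h(j),
        # with the normalization h(0) = 0.
        P_pi = np.array([P[policy[s]][s] for s in range(n_states)])
        c_pi = np.array([c[policy[s]][s] for s in range(n_states)])
        A = np.zeros((n_states + 1, n_states + 1))
        A[:n_states, 0] = 1.0                       # gain g
        A[:n_states, 1:] = np.eye(n_states) - P_pi  # bias h
        A[n_states, 1] = 1.0                        # normalization h(0) = 0
        b = np.append(c_pi, 0.0)
        x = np.linalg.lstsq(A, b, rcond=None)[0]
        g, h = x[0], x[1:]
        # Policy improvement: minimize c(s, a) + sum_j P_a(s, j) h(j).
        q = np.array([c[a] + P[a] @ h for a in range(n_actions)])
        new_policy = np.argmin(q, axis=0)
        if np.array_equal(new_policy, policy):
            break
        policy = new_policy
    return policy, g

# Illustrative two-action, two-state example.
P = [np.array([[0.9, 0.1], [0.5, 0.5]]), np.array([[0.2, 0.8], [0.1, 0.9]])]
c = [np.array([1.0, 3.0]), np.array([2.0, 0.5])]
print(policy_iteration_average_cost(P, c))
```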
Abstract:
This work is concerned with the existence of an optimal control strategy for the long-run average continuous control problem of piecewise-deterministic Markov processes (PDMPs). In Costa and Dufour (2008), sufficient conditions were derived to ensure the existence of an optimal control by using the vanishing discount approach. These conditions were mainly expressed in terms of the relative difference of the alpha-discount value functions. The main goal of this paper is to derive tractable conditions directly related to the primitive data of the PDMP to ensure the existence of an optimal control. The present work can be seen as a continuation of the results derived in Costa and Dufour (2008). Our main assumptions are written in terms of some integro-differential inequalities related to the so-called expected growth condition, and geometric convergence of the post-jump location kernel associated to the PDMP. An example based on the capacity expansion problem is presented, illustrating the possible applications of the results developed in the paper.
Abstract:
We consider in this paper the optimal stationary dynamic linear filtering problem for continuous-time linear systems subject to Markovian jumps in the parameters (LSMJP) and additive noise (Wiener process). It is assumed that only an output of the system is available and therefore the values of the jump parameter are not accessible. It is a well known fact that in this setting the optimal nonlinear filter is infinite dimensional, which makes linear filtering a natural, numerically treatable choice. The goal is to design a dynamic linear filter such that the closed loop system is mean square stable and minimizes the stationary expected value of the mean square estimation error. It is shown that an explicit analytical solution to this optimal filtering problem is obtained from the stationary solution associated to a certain Riccati equation. It is also shown that the problem can be formulated using a linear matrix inequalities (LMI) approach, which can be extended to consider convex polytopic uncertainties on the parameters of the possible modes of operation of the system and on the transition rate matrix of the Markov process. As far as the authors are aware, this is the first time that this stationary filtering problem (exact and robust versions) for LSMJP with no knowledge of the Markov jump parameters has been considered in the literature. Finally, we illustrate the results with an example.
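As a reminder of what an LMI formulation looks like in practice, the sketch below checks a generic continuous-time Lyapunov stability LMI (find P > 0 with A'P + PA < 0) with cvxpy. It is only a generic illustration of the LMI machinery, not the filtering LMIs of the paper; the matrix A is an arbitrary stable example, and cvxpy with an SDP-capable solver is assumed to be installed.

```python
import numpy as np
import cvxpy as cp

# Generic Lyapunov-stability LMI: find P > 0 such that A' P + P A < 0.
# The matrix A below is an arbitrary stable example, not from the paper.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), constraints)  # pure feasibility problem
prob.solve()
print(prob.status, P.value)
```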
Abstract:
This paper deals with the expected discounted continuous control of piecewise deterministic Markov processes (PDMP`s) using a singular perturbation approach for dealing with rapidly oscillating parameters. The state space of the PDMP is written as the product of a finite set and a subset of the Euclidean space R^n. The discrete part of the state, called the regime, characterizes the mode of operation of the physical system under consideration, and is supposed to have a fast (associated to a small parameter epsilon > 0) and a slow behavior. By using a similar approach as developed in Yin and Zhang (Continuous-Time Markov Chains and Applications: A Singular Perturbation Approach, Applications of Mathematics, vol. 37, Springer, New York, 1998, Chaps. 1 and 3) the idea in this paper is to reduce the number of regimes by considering an averaged model in which the regimes within the same class are aggregated through the quasi-stationary distribution so that the different states in this class are replaced by a single one. The main goal is to show that the value function of the control problem for the system driven by the perturbed Markov chain converges to the value function of this limit control problem as epsilon goes to zero. This convergence is obtained by, roughly speaking, showing that the infimum and supremum limits of the value functions satisfy two optimality inequalities as epsilon goes to zero. This enables us to show the result by invoking a uniqueness argument, without needing any kind of Lipschitz continuity condition.
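The aggregation step described above amounts to computing the stationary distribution of the fast chain within a class and using it to average that class's data into a single regime. A minimal sketch for one class follows; the generator matrix and per-regime cost vector are illustrative, not taken from the paper.

```python
import numpy as np

def stationary_distribution(Q):
    """Stationary distribution of a CTMC generator Q: solve pi Q = 0, sum(pi) = 1."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])   # stack pi Q = 0 with the normalization
    b = np.append(np.zeros(n), 1.0)
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Illustrative fast generator within one class of regimes and a per-regime cost.
Q_fast = np.array([[-2.0,  2.0],
                   [ 3.0, -3.0]])
cost = np.array([1.0, 4.0])

pi = stationary_distribution(Q_fast)
averaged_cost = pi @ cost   # data of the single aggregated regime replacing the class
print(pi, averaged_cost)
```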
Abstract:
In this paper we consider the existence of the maximal and mean square stabilizing solutions for a set of generalized coupled algebraic Riccati equations (GCARE for short) associated to the infinite-horizon stochastic optimal control problem of discrete-time Markov jump with multiplicative noise linear systems. The weighting matrices of the state and control for the quadratic part are allowed to be indefinite. We present a sufficient condition, based only on some positive semi-definite and kernel restrictions on some matrices, under which the maximal solution exists, and a necessary and sufficient condition under which the mean square stabilizing solution exists for the GCARE. We also present a solution for the discounted and long run average cost problems when the performance criterion is assumed to be composed of a linear combination of an indefinite quadratic part and a linear part in the state and control variables. The paper concludes with a numerical example of a pension fund with regime switching.
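For intuition, the standard coupled algebraic Riccati equations of discrete-time Markov jump linear systems (without the multiplicative noise or indefinite weights treated in the paper) can be approximated by a simple value-iteration-style fixed point; the sketch below does this under illustrative system data.

```python
import numpy as np

def coupled_riccati(A, B, Q, R, Pr, n_iter=500, tol=1e-9):
    """Fixed-point iteration for the standard coupled Riccati equations of a
    discrete-time Markov jump linear system:
        X_i = Q_i + A_i' E_i(X) A_i
              - A_i' E_i(X) B_i (R_i + B_i' E_i(X) B_i)^{-1} B_i' E_i(X) A_i,
    where E_i(X) = sum_j Pr[i, j] X_j."""
    N = len(A)
    n = A[0].shape[0]
    X = [np.zeros((n, n)) for _ in range(N)]
    for _ in range(n_iter):
        E = [sum(Pr[i, j] * X[j] for j in range(N)) for i in range(N)]
        X_new = []
        for i in range(N):
            K = np.linalg.solve(R[i] + B[i].T @ E[i] @ B[i], B[i].T @ E[i] @ A[i])
            X_new.append(Q[i] + A[i].T @ E[i] @ (A[i] - B[i] @ K))
        if max(np.max(np.abs(Xn - Xo)) for Xn, Xo in zip(X_new, X)) < tol:
            return X_new
        X = X_new
    return X

# Illustrative two-mode scalar example (not the pension fund of the paper).
A = [np.array([[1.1]]), np.array([[0.8]])]
B = [np.array([[1.0]]), np.array([[1.0]])]
Q = [np.eye(1), np.eye(1)]
R = [np.eye(1), np.eye(1)]
Pr = np.array([[0.7, 0.3], [0.4, 0.6]])
print(coupled_riccati(A, B, Q, R, Pr))
```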
Abstract:
In this paper we obtain the linear minimum mean square estimator (LMMSE) for discrete-time linear systems subject to state and measurement multiplicative noises and Markov jumps on the parameters. It is assumed that the Markov chain is not available. By using geometric arguments we obtain a Kalman-type filter that can be conveniently implemented in recursive form. The stationary case is also studied, and the convergence of the error covariance matrix of the LMMSE to a stationary value is proved under the assumption of mean square stability of the system and ergodicity of the associated Markov chain. It is shown that there exists a unique positive semi-definite solution for the stationary Riccati-like filter equation and, moreover, that this solution is the limit of the error covariance matrix of the LMMSE. The advantage of this scheme is that it is very easy to implement and all calculations can be performed offline.
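The filter above has the familiar predict/update structure of a Kalman recursion. As a point of reference only, a minimal standard discrete-time Kalman filter step is sketched below; it is not the Markov-jump LMMSE of the paper, and the system matrices are illustrative.

```python
import numpy as np

def kalman_step(x, P, y, A, C, Q, R):
    """One predict/update step of a standard discrete-time Kalman filter for
    x_{k+1} = A x_k + w_k (cov Q),  y_k = C x_k + v_k (cov R)."""
    # Predict.
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update with measurement y.
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new

# Illustrative constant-velocity model with a scalar position measurement.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.1]])
x, P = np.zeros(2), np.eye(2)
x, P = kalman_step(x, P, np.array([1.2]), A, C, Q, R)
print(x, P)
```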
Abstract:
In this paper, we deal with a generalized multi-period mean-variance portfolio selection problem with market parameters subject to Markov random regime switching. Problems of this kind have recently been considered in the literature for control over bankruptcy, for cases in which there are no jumps in the market parameters (see [Zhu, S. S., Li, D., & Wang, S. Y. (2004). Risk control over bankruptcy in dynamic portfolio selection: A generalized mean-variance formulation. IEEE Transactions on Automatic Control, 49, 447-457]). We present necessary and sufficient conditions for obtaining an optimal control policy for this Markovian generalized multi-period mean-variance problem, based on a set of interconnected Riccati difference equations and on a set of other recursive equations. Some closed formulas are also derived for two special cases, extending previous results in the literature. We apply the results to a numerical example with real data for risk control over bankruptcy in a dynamic portfolio selection problem with Markov jumps.
Abstract:
This paper analyzes the geography of regional competitiveness in manufacturing in Brazil. The authors estimate stochastic frontiers to calculate regional efficiency of representative firms in 137 regions in the period 2000-2006, in four sectors defined by technological intensity. The efficiency results are analyzed using Markov Spatial Transition Matrices to provide insights into the transition of regions between efficiency levels, considering their local spatial context. The results indicate that geography plays an important role in manufacturing competitiveness. In particular, regions with more competitive neighbors are more likely to improve their relative efficiency (pull effect) over time, and regions with less competitive neighbors are more likely to lose relative efficiency (drag effect). The authors find that the pull effect is stronger than the drag effect.
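A spatial Markov transition matrix of the kind used above is commonly estimated by counting regions' transitions between efficiency classes conditional on the class of their spatial neighbors. The sketch below follows that general recipe under simplifying assumptions; the discretized class data and precomputed neighbor classes are illustrative, not the paper's data set.

```python
import numpy as np

def spatial_markov_matrix(classes, neighbor_classes, n_levels):
    """Estimate conditional transition matrices T[k] between efficiency levels,
    one for each spatial-lag (neighbor) level k, by counting transitions
    t -> t+1 and normalizing rows.
    classes[r, t]          : efficiency level of region r at period t
    neighbor_classes[r, t] : level of region r's neighbors at period t"""
    counts = np.zeros((n_levels, n_levels, n_levels))
    n_regions, n_periods = classes.shape
    for r in range(n_regions):
        for t in range(n_periods - 1):
            k = neighbor_classes[r, t]
            counts[k, classes[r, t], classes[r, t + 1]] += 1
    row_sums = counts.sum(axis=2, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

# Tiny illustrative data set: 3 regions, 4 periods, 2 efficiency levels.
classes = np.array([[0, 0, 1, 1], [1, 1, 1, 0], [0, 1, 1, 1]])
neighbors = np.array([[1, 1, 1, 0], [0, 0, 1, 1], [1, 1, 0, 1]])
print(spatial_markov_matrix(classes, neighbors, n_levels=2))
```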
Abstract:
The elevated plus-maze is an animal model of anxiety used to study the effect of different drugs on the behavior of the animal. It consists of a plus-shaped maze with two open and two closed arms elevated 50 cm from the floor. The standard measures used to characterize exploratory behavior in the elevated plus-maze are the time spent and the number of entries in the open arms. In this work we use Markov chains to characterize the exploratory behavior of the rat in the elevated plus-maze under three different conditions: normal and under the effects of anxiogenic and anxiolytic drugs. The spatial structure of the elevated plus-maze is divided into squares, which are associated with states of a Markov chain. By counting the frequencies of transitions between states during 5-min sessions in the elevated plus-maze, we constructed stochastic matrices for the three conditions studied. The stochastic matrices show specific patterns which correspond to the observed behaviors of the rat under the three different conditions. For the control group, the stochastic matrix shows a clear preference for places in the closed arms. This preference is enhanced for the anxiogenic group. For the anxiolytic group, the stochastic matrix shows a pattern similar to a random walk. Our results suggest that Markov chains can be used together with the standard measures to characterize the rat behavior in the elevated plus-maze.
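The construction described above (counting transitions between maze squares and normalizing each row) takes only a few lines; a minimal sketch follows, with a made-up sequence of visited states standing in for a recorded session.

```python
import numpy as np

def transition_matrix(states, n_states):
    """Row-stochastic transition matrix estimated from an observed state sequence
    by counting transition frequencies and normalizing each row."""
    counts = np.zeros((n_states, n_states))
    for s, s_next in zip(states[:-1], states[1:]):
        counts[s, s_next] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

# Made-up sequence of visited maze squares (states 0..4) for one session.
session = [0, 1, 1, 2, 1, 0, 3, 3, 4, 3, 0, 1, 2, 2, 1]
P = transition_matrix(session, n_states=5)
print(np.round(P, 2))
```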