957 results for Markov random fields
Abstract:
This work presents an experiment within the context of the long experiments adopted in the laboratory courses on electricity, magnetism, and optics; it consists of the characterization of a velocity selector that operates with crossed electric and magnetic fields. A cathode-ray tube is used to generate an electron beam. The tube's vertical deflection plates generate the electric field, and a pair of coils, with axes perpendicular to the tube's axis, generates the magnetic field. Electron trajectory studies are carried out with the aid of an electron simulation program.
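In a crossed-field velocity selector, electrons pass undeflected when the electric force qE balances the magnetic force qvB, so the selected speed is v = E/B. A minimal sketch of this selection condition, with illustrative field values that are not taken from the experiment described above:

```python
# Minimal sketch: selection speed of a crossed-field (Wien) velocity selector.
# Electrons pass undeflected when the electric force qE balances the magnetic
# force qvB, i.e. v = E / B. The field values below are illustrative only.

E = 2.0e4   # electric field between the deflection plates [V/m] (assumed value)
B = 1.0e-3  # magnetic field produced by the coil pair [T]   (assumed value)

v = E / B   # speed selected by the crossed fields [m/s]
print(f"Selected electron speed: {v:.3e} m/s")
```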
Abstract:
The objective of this study was to measure the direct costs to the Brazilian Unified National Health System (SUS) of hospital admissions due to external causes in São José dos Campos, São Paulo, Brazil. Admissions due to injuries from external causes (chapters XIX and XX of ICD-10, respectively) in the first half of 2003 at the Hospital Municipal Dr. José de Carvalho Florence were studied. The amounts paid through the SUS were analyzed after verifying the quality of the data in the medical records of 976 admissions. The highest total costs were for admissions resulting from transport accidents and falls. The highest mean cost per admission was for transport accidents (R$ 614.63), followed by assaults (R$ 594.90). The injuries with the highest mean cost were neck fractures (R$ 1,191.42) and intracranial trauma (R$ 1,000.44). The admissions with the highest cost per day were fractures of the skull and facial bones (R$ 166.72) and intra-abdominal trauma (R$ 148.26). The results showed that transport accidents, falls, and assaults are important sources of expenditure on hospital admissions due to external causes in the municipality.
Abstract:
OBJECTIVE: To assess the quality of hospital admission data for external causes in São José dos Campos, São Paulo. METHOD: Admissions through the Unified National Health System (SUS) due to injuries from external causes in the first half of 2003 at the Municipal Hospital, the referral center for trauma care in the municipality, were studied by comparing the data recorded in the Hospital Information System with the medical records of 990 admissions. Agreement for the variables concerning the victim, the admission, and the injury was assessed by the crude agreement rate and the Kappa coefficient. Injuries and external causes were coded according to the 10th revision of the International Classification of Diseases, chapters XIX and XX, respectively. RESULTS: The crude agreement rate showed good quality for the variables concerning the victim and the admission, ranging from 89.0% to 99.2%. Injuries showed excellent agreement, except for neck injuries (k=0.73), multiple injuries (k=0.67), and chest fractures (k=0.49). External causes showed excellent agreement for transport accidents (k=0.90) and falls (k=0.83). Reliability was lower for assaults (k=0.50), undetermined causes (k=0.37), and complications of medical care (k=0.03). There was excellent agreement for transport accidents involving pedestrians, cyclists, and motorcyclists. CONCLUSION: Most of the study variables showed good quality at the level of aggregation analyzed. Some variables concerning the victim and some types of external causes require improvement in data quality. The hospital morbidity profile found confirmed transport accidents as an important external cause of hospital admission in the municipality.
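The Kappa coefficient cited above measures chance-corrected agreement between the two data sources. A minimal sketch of how such agreement could be computed, using scikit-learn's cohen_kappa_score on two hypothetical lists of ICD-10 codes (the codes and values are illustrative, not from the study):

```python
# Minimal sketch: chance-corrected agreement between two coding sources,
# as measured by Cohen's kappa. The code lists below are hypothetical.
from sklearn.metrics import cohen_kappa_score

codes_sih = ["V01", "W19", "X99", "V01", "Y34", "W19"]    # hospital information system
codes_chart = ["V01", "W19", "X99", "V01", "X99", "W19"]  # medical records (reference)

kappa = cohen_kappa_score(codes_sih, codes_chart)
print(f"Cohen's kappa: {kappa:.2f}")
```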
Abstract:
The objective of this study was to describe the morbidity profile of hospital admissions due to external causes in the municipality of São José dos Campos, São Paulo State, Brazil. Admissions through the Unified National Health System (SUS) due to injuries from external causes in the first half of 2003 at the Municipal Hospital were studied. This hospital is the main referral center for trauma care and accounted for 92.3% of SUS admissions for external causes during the study period. Among the 873 patients admitted, injuries resulting from transport accidents accounted for 31.8% of the cases, falls for 26.7%, and undetermined causes for 19.5%. The male-to-female ratio was 3.1:1, and the predominant age group was 20-29 years, with 23.3% of the admissions. The most frequent injuries were fractures (49.8%) and intracranial trauma (13.5%). Among the fractures, those of the femur and of the lower leg predominated, accounting for 10.8% and 10.1%, respectively. The highest admission rate by area of residence occurred in the northern region of the municipality, with 470.0 admissions per 100,000 inhabitants. The hospital morbidity profile found confirmed transport accidents as an important cause of hospital admission in the municipality and ran counter to the general trend of falls as the leading external cause of hospital admission. The distribution by sex, age, and nature of the injury was similar to data found in the literature. The admission rate for external causes by region of residence contributed to the mapping of violence in São José dos Campos, São Paulo.
Abstract:
The main goal of this paper is to establish some equivalence results on stability, recurrence, and ergodicity between a piecewise deterministic Markov process (PDMP) {X(t)} and an embedded discrete-time Markov chain {Theta(n)} generated by a Markov kernel G that can be explicitly characterized in terms of the three local characteristics of the PDMP, leading to tractable criterion results. First we establish some important results characterizing {Theta(n)} as a sampling of the PDMP {X(t)} and deriving a connection between the probability of the first return time to a set for the discrete-time Markov chains generated by G and the resolvent kernel R of the PDMP. From these results we obtain equivalence results regarding irreducibility, existence of sigma-finite invariant measures, and (positive) recurrence and (positive) Harris recurrence between {X(t)} and {Theta(n)}, generalizing the results of [F. Dufour and O. L. V. Costa, SIAM J. Control Optim., 37 (1999), pp. 1483-1502] in several directions. Sufficient conditions in terms of a modified Foster-Lyapunov criterion are also presented to ensure positive Harris recurrence and ergodicity of the PDMP. We illustrate the use of these conditions by showing the ergodicity of a capacity expansion model.
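For reference, the classical Foster-Lyapunov drift condition for positive Harris recurrence of a continuous-time Markov process with extended generator takes the generic form below; the modified criterion used in the paper is stated in terms of the local characteristics of the PDMP and is not reproduced here.

```latex
% Classical Foster--Lyapunov drift condition (generic form; not the paper's
% modified criterion): for a measurable function V >= 0, constants c > 0 and
% b < infinity, and a petite set C,
\[
  \mathcal{A}V(x) \;\le\; -c + b\,\mathbf{1}_{C}(x), \qquad x \in \mathsf{X},
\]
% which, combined with irreducibility, yields positive Harris recurrence.
```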
Abstract:
This paper deals with the long run average continuous control problem of piecewise deterministic Markov processes (PDMPs) taking values in a general Borel space and with compact action space depending on the state variable. The control variable acts on the jump rate and transition measure of the PDMP, and the running and boundary costs are assumed to be positive but not necessarily bounded. Our first main result is to obtain an optimality equation for the long run average cost in terms of a discrete-time optimality equation related to the embedded Markov chain given by the postjump location of the PDMP. Our second main result guarantees the existence of a feedback measurable selector for the discrete-time optimality equation by establishing a connection between this equation and an integro-differential equation. Our final main result is to obtain some sufficient conditions for the existence of a solution for a discrete-time optimality inequality and an ordinary optimal feedback control for the long run average cost using the so-called vanishing discount approach. Two examples are presented illustrating the possible applications of the results developed in the paper.
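For orientation, a discrete-time long-run average-cost optimality equation of the kind referred to above has the generic form below, where rho is the optimal average cost, h the relative value function, C the one-stage cost, and G the transition kernel of the embedded chain; the precise cost terms for the post-jump chain of the PDMP are those given in the paper.

```latex
% Generic discrete-time average-cost optimality equation (ACOE); the paper's
% equation is the analogue for the embedded post-jump chain of the PDMP.
\[
  \rho + h(x) \;=\; \min_{a \in A(x)}
  \Big\{\, C(x,a) + \int_{\mathsf{X}} h(y)\, G(dy \mid x,a) \Big\},
  \qquad x \in \mathsf{X}.
\]
```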
Abstract:
We propose and analyze two different Bayesian online algorithms for learning in discrete Hidden Markov Models and compare their performance with the already known Baldi-Chauvin Algorithm. Using the Kullback-Leibler divergence as a measure of generalization we draw learning curves in simplified situations for these algorithms and compare their performances.
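As a reminder of the generalization measure mentioned above, a minimal sketch of the Kullback-Leibler divergence between two discrete distributions (the probability vectors are illustrative only):

```python
# Minimal sketch: Kullback-Leibler divergence D(p || q) between two discrete
# distributions, the generalization measure mentioned in the abstract.
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """D(p || q) = sum_i p_i * log(p_i / q_i), in nats."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Example: divergence between a "true" distribution and a learned one
# (values are illustrative only).
print(kl_divergence([0.7, 0.2, 0.1], [0.6, 0.3, 0.1]))
```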
Abstract:
The main goal of this paper is to apply the so-called policy iteration algorithm (PIA) to the long run average continuous control problem of piecewise deterministic Markov processes (PDMPs) taking values in a general Borel space and with compact action space depending on the state variable. In order to do that, we first derive some important properties for a pseudo-Poisson equation associated to the problem. It is then shown that the convergence of the PIA to a solution satisfying the optimality equation holds under some classical hypotheses, and that this optimal solution yields an optimal control strategy for the average control problem for the continuous-time PDMP in feedback form.
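For readers unfamiliar with the PIA, a minimal sketch of the classical policy iteration algorithm for a finite, discounted MDP is given below; it only illustrates the general evaluate/improve scheme and is not the average-cost PIA for PDMPs developed in the paper (states, actions, and costs are made up).

```python
# Minimal sketch: classical policy iteration for a finite, discounted MDP.
# This is NOT the average-cost PIA for PDMPs from the paper; it only shows
# the generic evaluate/improve loop.
import numpy as np

def policy_iteration(P, c, beta=0.95):
    """P[a][x, y]: transition probabilities; c[x, a]: one-stage cost; beta: discount."""
    n_states, n_actions = c.shape
    policy = np.zeros(n_states, dtype=int)
    while True:
        # Policy evaluation: solve (I - beta * P_pi) v = c_pi.
        P_pi = np.array([P[policy[x]][x] for x in range(n_states)])
        c_pi = c[np.arange(n_states), policy]
        v = np.linalg.solve(np.eye(n_states) - beta * P_pi, c_pi)
        # Policy improvement: greedy step with respect to v.
        Q = np.stack([c[:, a] + beta * P[a] @ v for a in range(n_actions)], axis=1)
        new_policy = Q.argmin(axis=1)
        if np.array_equal(new_policy, policy):
            return policy, v
        policy = new_policy

# Tiny illustrative example: 2 states, 2 actions (values are made up).
P = [np.array([[0.9, 0.1], [0.2, 0.8]]),   # action 0
     np.array([[0.5, 0.5], [0.7, 0.3]])]   # action 1
c = np.array([[1.0, 2.0], [4.0, 0.5]])
print(policy_iteration(P, c))
```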
Abstract:
This work is concerned with the existence of an optimal control strategy for the long-run average continuous control problem of piecewise-deterministic Markov processes (PDMPs). In Costa and Dufour (2008), sufficient conditions were derived to ensure the existence of an optimal control by using the vanishing discount approach. These conditions were mainly expressed in terms of the relative difference of the alpha-discount value functions. The main goal of this paper is to derive tractable conditions directly related to the primitive data of the PDMP to ensure the existence of an optimal control. The present work can be seen as a continuation of the results derived in Costa and Dufour (2008). Our main assumptions are written in terms of some integro-differential inequalities related to the so-called expected growth condition, and geometric convergence of the post-jump location kernel associated to the PDMP. An example based on the capacity expansion problem is presented, illustrating the possible applications of the results developed in the paper.
Abstract:
We consider in this paper the optimal stationary dynamic linear filtering problem for continuous-time linear systems subject to Markovian jumps in the parameters (LSMJP) and additive noise (Wiener process). It is assumed that only an output of the system is available, and therefore the values of the jump parameter are not accessible. It is a well known fact that in this setting the optimal nonlinear filter is infinite dimensional, which makes linear filtering a natural, numerically treatable choice. The goal is to design a dynamic linear filter such that the closed loop system is mean square stable and minimizes the stationary expected value of the mean square estimation error. It is shown that an explicit analytical solution to this optimal filtering problem is obtained from the stationary solution associated to a certain Riccati equation. It is also shown that the problem can be formulated using a linear matrix inequalities (LMI) approach, which can be extended to consider convex polytopic uncertainties on the parameters of the possible modes of operation of the system and on the transition rate matrix of the Markov process. As far as the authors are aware, this is the first time that this stationary filtering problem (exact and robust versions) for LSMJP with no knowledge of the Markov jump parameters is considered in the literature. Finally, we illustrate the results with an example.
Abstract:
This paper deals with the expected discounted continuous control of piecewise deterministic Markov processes (PDMPs) using a singular perturbation approach for dealing with rapidly oscillating parameters. The state space of the PDMP is written as the product of a finite set and a subset of the Euclidean space R^n. The discrete part of the state, called the regime, characterizes the mode of operation of the physical system under consideration, and is supposed to have a fast (associated to a small parameter epsilon > 0) and a slow behavior. By using a similar approach as developed in Yin and Zhang (Continuous-Time Markov Chains and Applications: A Singular Perturbation Approach, Applications of Mathematics, vol. 37, Springer, New York, 1998, Chaps. 1 and 3), the idea in this paper is to reduce the number of regimes by considering an averaged model in which the regimes within the same class are aggregated through the quasi-stationary distribution, so that the different states in this class are replaced by a single one. The main goal is to show that the value function of the control problem for the system driven by the perturbed Markov chain converges to the value function of this limit control problem as epsilon goes to zero. This convergence is obtained by, roughly speaking, showing that the infimum and supremum limits of the value functions satisfy two optimality inequalities as epsilon goes to zero. This enables us to show the result by invoking a uniqueness argument, without needing any kind of Lipschitz continuity condition.
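The aggregation step described above replaces each fast class of regimes by a single state weighted by a (quasi-)stationary distribution. As a rough illustration of that ingredient only, a minimal sketch computing the stationary distribution of a small generator matrix by solving pi Q = 0 with sum(pi) = 1 (the generator is illustrative, and this is not the paper's construction):

```python
# Minimal sketch: stationary distribution pi of a small CTMC generator Q,
# solving pi Q = 0 with sum(pi) = 1. In the averaged model each fast class of
# regimes is aggregated using such a distribution (illustrative sketch only).
import numpy as np

Q = np.array([[-3.0,  2.0,  1.0],
              [ 4.0, -5.0,  1.0],
              [ 1.0,  1.0, -2.0]])   # illustrative generator (rows sum to 0)

# Stack the normalization constraint under Q^T and solve by least squares:
# [Q^T; 1 ... 1] pi = [0; 1].
A = np.vstack([Q.T, np.ones(len(Q))])
b = np.append(np.zeros(len(Q)), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)
```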
Abstract:
In this paper we consider the existence of the maximal and mean square stabilizing solutions for a set of generalized coupled algebraic Riccati equations (GCARE for short) associated to the infinite-horizon stochastic optimal control problem of discrete-time Markov jump linear systems with multiplicative noise. The weighting matrices of the state and control for the quadratic part are allowed to be indefinite. We present a sufficient condition, based only on some positive semi-definite and kernel restrictions on some matrices, under which the maximal solution exists, and a necessary and sufficient condition under which the mean square stabilizing solution for the GCARE exists. We also present a solution for the discounted and long run average cost problems when the performance criterion is assumed to be composed of a linear combination of an indefinite quadratic part and a linear part in the state and control variables. The paper concludes with a numerical example for a pension fund with regime switching.
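For context, a standard form of the coupled algebraic Riccati equations for discrete-time Markov jump linear systems (positive semidefinite weights, no multiplicative noise) is shown below; the GCARE studied in the paper generalizes this form to multiplicative noise and indefinite weighting matrices.

```latex
% Standard coupled algebraic Riccati equations for discrete-time Markov jump
% linear systems (no multiplicative noise, positive semidefinite weights);
% the GCARE in the paper generalizes this form.
\[
  X_i = A_i^{\top}\mathcal{E}_i(X)A_i + Q_i
        - A_i^{\top}\mathcal{E}_i(X)B_i
          \big(R_i + B_i^{\top}\mathcal{E}_i(X)B_i\big)^{-1}
          B_i^{\top}\mathcal{E}_i(X)A_i,
  \qquad
  \mathcal{E}_i(X) = \sum_{j} p_{ij} X_j .
\]
```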
Abstract:
In this paper we obtain the linear minimum mean square estimator (LMMSE) for discrete-time linear systems subject to state and measurement multiplicative noises and Markov jumps on the parameters. It is assumed that the Markov chain is not available. By using geometric arguments we obtain a Kalman type filter conveniently implementable in a recurrence form. The stationary case is also studied and a proof for the convergence of the error covariance matrix of the LMMSE to a stationary value under the assumption of mean square stability of the system and ergodicity of the associated Markov chain is obtained. It is shown that there exists a unique positive semi-definite solution for the stationary Riccati-like filter equation and, moreover, this solution is the limit of the error covariance matrix of the LMMSE. The advantage of this scheme is that it is very easy to implement and all calculations can be performed offline. (c) 2011 Elsevier Ltd. All rights reserved.
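The filter above has a Kalman-type recurrence. For orientation only, a minimal sketch of one predict/update step of the textbook Kalman filter for a single-mode linear system is given below; it is not the paper's LMMSE filter, which additionally handles the multiplicative noises and the unobserved Markov mode.

```python
# Minimal sketch: one predict/update step of the classical Kalman filter for a
# single-mode linear system x_{k+1} = A x_k + w_k, y_k = C x_k + v_k.
# This is only the textbook recursion, not the paper's LMMSE filter.
import numpy as np

def kalman_step(x, P, y, A, C, Qw, Rv):
    """x, P: prior state estimate and covariance; y: new measurement."""
    # Predict.
    x_pred = A @ x
    P_pred = A @ P @ A.T + Qw
    # Update.
    S = C @ P_pred @ C.T + Rv                  # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new
```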
Abstract:
In this paper, we deal with a generalized multi-period mean-variance portfolio selection problem with market parameters subject to Markov random regime switchings. Problems of this kind have recently been considered in the literature for control over bankruptcy, for cases in which there are no jumps in market parameters (see [Zhu, S. S., Li, D., & Wang, S. Y. (2004). Risk control over bankruptcy in dynamic portfolio selection: A generalized mean variance formulation. IEEE Transactions on Automatic Control, 49, 447-457]). We present necessary and sufficient conditions for obtaining an optimal control policy for this Markovian generalized multi-period mean-variance problem, based on a set of interconnected Riccati difference equations and on a set of other recursive equations. Some closed formulas are also derived for two special cases, extending some previous results in the literature. We apply the results to a numerical example with real data for risk control over bankruptcy in a dynamic portfolio selection problem with Markov jumps. (C) 2008 Elsevier Ltd. All rights reserved.
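As a point of reference for the mean-variance trade-off above, a minimal sketch of the classical single-period, unconstrained mean-variance portfolio w* = (1/gamma) Sigma^{-1} mu for a risk-aversion parameter gamma; this is only the one-period building block, not the paper's multi-period regime-switching recursion (all numbers are illustrative).

```python
# Minimal sketch: single-period unconstrained mean-variance portfolio,
# w* = (1/gamma) * Sigma^{-1} mu, for risk aversion gamma. Not the paper's
# multi-period regime-switching solution; all numbers are illustrative.
import numpy as np

mu = np.array([0.08, 0.05, 0.03])            # expected returns
Sigma = np.array([[0.10, 0.02, 0.01],
                  [0.02, 0.06, 0.01],
                  [0.01, 0.01, 0.04]])       # return covariance matrix
gamma = 4.0                                  # risk-aversion coefficient

w = np.linalg.solve(Sigma, mu) / gamma       # optimal (unconstrained) weights
print(w)
```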
Abstract:
Krylov subspace techniques have been shown to yield robust methods for the numerical computation of large sparse matrix exponentials and especially the transient solutions of Markov chains. The attractiveness of these methods results from the fact that they allow us to compute the action of a matrix exponential operator on an operand vector without having to compute, explicitly, the matrix exponential in isolation. In this paper we compare a Krylov-based method with some of the current approaches used for computing transient solutions of Markov chains. After a brief synthesis of the features of the methods used, wide-ranging numerical comparisons are performed on a Power Challenge Array supercomputer on three different models. (C) 1999 Elsevier Science B.V. All rights reserved. AMS Classification: 65F99; 65L05; 65U05.
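To illustrate the task these methods address, a minimal sketch computing a transient distribution p(t) = p(0) exp(Qt) of a small continuous-time Markov chain with SciPy's expm_multiply, which applies the matrix exponential to a vector without forming exp(Qt) explicitly (the generator is illustrative; this is not the Krylov code compared in the paper):

```python
# Minimal sketch: transient distribution of a CTMC, p(t) = p(0) exp(Q t),
# computed by applying the matrix exponential to a vector without forming
# exp(Q t) explicitly (the same task the Krylov methods address).
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import expm_multiply

Q = csr_matrix(np.array([[-2.0,  2.0,  0.0],
                         [ 1.0, -3.0,  2.0],
                         [ 0.0,  4.0, -4.0]]))  # illustrative generator (rows sum to 0)
p0 = np.array([1.0, 0.0, 0.0])                  # initial distribution
t = 0.5

p_t = expm_multiply(Q.T * t, p0)                # p(t)^T = exp(Q^T t) p(0)^T
print(p_t, p_t.sum())                           # distribution at time t; sums to 1
```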