991 results for Markov Branching-processes
Abstract:
Let $(\Phi(t))_{t \in \mathbb{R}_+}$ be a Harris ergodic continuous-time Markov process on a general state space, with invariant probability measure $\pi$. We investigate the rates of convergence of the transition function $P^t(x, \cdot)$ to $\pi$; specifically, we find conditions under which $r(t)\,\|P^t(x, \cdot) - \pi\| \to 0$ as $t \to \infty$, for suitable subgeometric rate functions $r(t)$, where $\|\cdot\|$ denotes the usual total variation norm for a signed measure. We derive sufficient conditions for the convergence to hold, in terms of the existence of suitable points on which the first hitting time moments are bounded. In particular, for stochastically ordered Markov processes, explicit bounds on subgeometric rates of convergence are obtained. These results are illustrated in several examples.
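As a rough numerical companion to this notion of total-variation convergence (a discrete-time finite toy, not the paper's continuous-time setting or its hitting-time technique), the sketch below powers the kernel of a stochastically ordered birth-death chain and watches $\|P^t(x, \cdot) - \pi\|$ decay; all parameters are illustrative.

```python
import numpy as np

# A birth-death chain on {0, ..., K} is stochastically ordered; we watch
# ||P^t(x, .) - pi||_TV decay by powering the kernel.  Illustrative only.
K = 50
P = np.zeros((K + 1, K + 1))
for i in range(K + 1):
    up = 0.3 if i < K else 0.0        # birth probability (illustrative)
    down = 0.5 if i > 0 else 0.0      # death probability (illustrative)
    P[i, min(i + 1, K)] += up
    P[i, max(i - 1, 0)] += down
    P[i, i] += 1.0 - up - down

# Stationary distribution: left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi /= pi.sum()

# Total-variation distance from a fixed start x as t grows.
x = K
row = np.zeros(K + 1)
row[x] = 1.0
for t in range(1, 201):
    row = row @ P
    if t % 50 == 0:
        tv = 0.5 * np.abs(row - pi).sum()
        print(f"t = {t:3d}  ||P^t(x,.) - pi||_TV = {tv:.3e}")
```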
Abstract:
Let $S$ be a countable set and let $Q = (q_{ij},\ i, j \in S)$ be a conservative $q$-matrix over $S$ with a single instantaneous state $b$. Suppose that we are given a real number $\mu \ge 0$ and a strictly positive probability measure $m = (m_j,\ j \in S)$ such that $\sum_{i \in S} m_i q_{ij} = -\mu m_j$, $j \ne b$. We prove that there exists a $Q$-process $P(t) = (p_{ij}(t),\ i, j \in S)$ for which $m$ is a $\mu$-invariant measure, that is, $\sum_{i \in S} m_i p_{ij}(t) = e^{-\mu t} m_j$, $j \in S$. We illustrate our results with reference to the Kolmogorov 'K1' chain and a birth-death process with catastrophes and instantaneous resurrection.
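The $\mu$-invariance identity above is easy to check numerically in a simpler setting. The sketch below is a finite quasi-stationary toy with a stable sub-generator, so it does not capture the instantaneous state $b$ that makes the paper's result nontrivial; it only verifies that $mQ = -\mu m$ forces $m\,P(t) = e^{-\mu t} m$. The matrix entries are illustrative.

```python
import numpy as np
from scipy.linalg import expm

# A minimal finite sketch of mu-invariance, in the quasi-stationary
# setting (a stable sub-generator on the transient states) rather than
# the instantaneous-state setting of the paper: if m Q = -mu m with
# m > 0, then m exp(tQ) = e^{-mu t} m for all t >= 0.
Q = np.array([[-3.0, 2.0],
              [1.0, -2.0]])   # row sums < 0: probability leaks to absorption

# Left Perron eigenvector of Q: the eigenvalue with largest real part.
vals, vecs = np.linalg.eig(Q.T)
k = np.argmax(vals.real)
mu = -vals[k].real
m = np.abs(vecs[:, k].real)
m /= m.sum()                   # normalise to a probability measure

t = 0.7
lhs = m @ expm(t * Q)          # m P(t)
rhs = np.exp(-mu * t) * m      # e^{-mu t} m
print("mu =", mu)              # 1.0 for this matrix
print("m P(t)      =", lhs)
print("e^{-mu t} m =", rhs)    # the two rows agree
```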
Abstract:
In this paper we develop a set of novel Markov chain Monte Carlo algorithms for Bayesian smoothing of partially observed non-linear diffusion processes. The sampling algorithms developed herein use a deterministic approximation to the posterior distribution over paths as the proposal distribution for a mixture of an independence and a random walk sampler. The approximating distribution is sampled by simulating an optimized time-dependent linear diffusion process derived from the recently developed variational Gaussian process approximation method. Flexible blocking strategies are introduced to further improve mixing, and thus the efficiency, of the sampling algorithms. The algorithms are tested on two diffusion processes: one with double-well potential drift and another with SINE drift. The new algorithm's accuracy and efficiency are compared with state-of-the-art hybrid Monte Carlo based path sampling. It is shown that in practical, finite-sample applications the algorithm is accurate except in the presence of large observation errors and low observation densities, which lead to a multi-modal structure in the posterior distribution over paths. More importantly, the variational approximation assisted sampling algorithm outperforms hybrid Monte Carlo in terms of computational efficiency, except when the diffusion process is densely observed with small errors, in which case both algorithms are equally efficient.
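The mixture-kernel idea, alternating an independence proposal drawn from a fixed approximating distribution with a local random-walk move, can be sketched on a toy target. In the sketch below, a scalar Gaussian stands in for the paper's variational Gaussian process approximation over paths, and the double-well target echoes its first test case; all tuning constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # double-well potential, echoing the paper's test drift
    return -(x**2 - 1.0)**2 / 0.5

q_mean, q_std = 0.0, 1.0          # stand-in "variational" approximation
def log_q(x):
    return -0.5 * ((x - q_mean) / q_std)**2   # up to a constant

x, chain = 0.0, []
for it in range(20000):
    if rng.random() < 0.5:        # independence move from the approximation
        y = rng.normal(q_mean, q_std)
        log_a = (log_target(y) - log_target(x)) + (log_q(x) - log_q(y))
    else:                         # symmetric random-walk move
        y = x + 0.3 * rng.normal()
        log_a = log_target(y) - log_target(x)
    if np.log(rng.random()) < log_a:
        x = y
    chain.append(x)

print("posterior mean estimate:", np.mean(chain))   # ~0 by symmetry
```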
Abstract:
The assessment of the reliability of systems which learn from data is a key issue to investigate thoroughly before the actual application of information processing techniques to real-world problems. In recent years Gaussian processes and Bayesian neural networks have come to the fore, and in this thesis their generalisation capabilities are analysed from theoretical and empirical perspectives. Upper and lower bounds on the learning curve of Gaussian processes are investigated in order to estimate the amount of data required to guarantee a certain level of generalisation performance. In this thesis we analyse the effects on the bounds and the learning curve induced by the smoothness of stochastic processes described by four different covariance functions. We also explain the early, linearly-decreasing behaviour of the curves and we investigate the asymptotic behaviour of the upper bounds. The effects of the noise and of the characteristic lengthscale of the stochastic process on the tightness of the bounds are also discussed. The analysis is supported by several numerical simulations. The generalisation error of a Gaussian process is affected by the dimension of the input vector and may be decreased by input-variable reduction techniques. In conventional approaches to Gaussian process regression, the positive definite matrix estimating the distance between input points is often taken to be diagonal. In this thesis we show that a general distance matrix is able to estimate the effective dimensionality of the regression problem as well as to discover the linear transformation from the manifest variables to the hidden-feature space, with a significant reduction of the input dimension. Numerical simulations confirm the significant superiority of the general distance matrix with respect to the diagonal one. In the thesis we also present an empirical investigation of the generalisation errors of neural networks trained by two Bayesian algorithms, the Markov chain Monte Carlo method and the evidence framework; the neural networks have been trained on the task of labelling segmented outdoor images.
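The claim about general versus diagonal distance matrices admits a compact illustration. The sketch below, with invented data and hyperparameters, compares the Gaussian process log marginal likelihood under an RBF covariance $k(x, x') = \exp(-(x-x')^\top M (x-x'))$ when $M$ is aligned with a hidden linear feature and when $M$ is forced to be diagonal.

```python
import numpy as np

rng = np.random.default_rng(1)

# When the target depends only on a linear feature w^T x, a full distance
# matrix M can align with that direction; a diagonal M cannot.
def rbf_kernel(X1, X2, M):
    D = X1[:, None, :] - X2[None, :, :]
    return np.exp(-np.einsum('ijk,kl,ijl->ij', D, M, D))

n, d = 60, 3
X = rng.normal(size=(n, d))
w = np.array([1.0, -1.0, 0.5])                     # hidden feature direction
y = np.sin(X @ w) + 0.05 * rng.normal(size=n)

M_full = 0.5 * np.outer(w, w) + 1e-3 * np.eye(d)   # aligned with the feature
M_diag = np.diag(np.full(d, 0.5))                  # conventional diagonal choice

for name, M in [("full", M_full), ("diag", M_diag)]:
    K = rbf_kernel(X, X, M) + 0.05**2 * np.eye(n)
    # GP log marginal likelihood (up to a constant)
    sign, logdet = np.linalg.slogdet(K)
    ll = -0.5 * y @ np.linalg.solve(K, y) - 0.5 * logdet
    print(f"M = {name}: log marginal likelihood = {ll:.2f}")
```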
Abstract:
The maximum M of a critical Bienaymé-Galton-Watson process conditioned on the total progeny N is studied. Imbedding of the process in a random walk is used. A limit theorem for the distribution of M as N → ∞ is proved. The result is transferred to the non-critical processes. A corollary for the maximal strata of a random rooted labeled tree is obtained.
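For readers who want to see the statistic M empirically, the sketch below simulates a critical Bienaymé-Galton-Watson process with geometric offspring (an illustrative choice, not the paper's generality), conditions on the total progeny N by rejection, and records the maximum generation size.

```python
import numpy as np

rng = np.random.default_rng(2)

# Critical offspring law: P(k) = 2^{-(k+1)}, k >= 0, mean 1
# (rng.geometric(0.5) takes values in {1, 2, ...}, so subtract 1 per parent).
def conditioned_max_samples(n_target, max_tries=200000):
    samples = []
    for _ in range(max_tries):
        gen, total, m = 1, 1, 1
        while gen > 0 and total <= n_target:
            gen = rng.geometric(0.5, size=gen).sum() - gen  # next generation
            total += gen
            m = max(m, gen)
        if gen == 0 and total == n_target:   # accept: total progeny is exactly N
            samples.append(m)
    return samples

N = 50
M_samples = conditioned_max_samples(N)
print(f"conditioned on total progeny N = {N}: "
      f"{len(M_samples)} samples, E[M | N] ~ {np.mean(M_samples):.2f}")
```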
Abstract:
2000 Mathematics Subject Classification: 49L60, 60J60, 93E20.
Abstract:
2000 Mathematics Subject Classification: 60G15, 60G60; secondary 31B15, 31B25, 60H15
Abstract:
In this paper we develop a set of novel Markov chain Monte Carlo algorithms for Bayesian smoothing of partially observed non-linear diffusion processes. The sampling algorithms developed herein use a deterministic approximation to the posterior distribution over paths as the proposal distribution for a mixture of an independence and a random walk sampler. The approximating distribution is sampled by simulating an optimized time-dependent linear diffusion process derived from the recently developed variational Gaussian process approximation method. The novel diffusion bridge proposal derived from the variational approximation allows the use of a flexible blocking strategy that further improves mixing, and thus the efficiency, of the sampling algorithms. The algorithms are tested on two diffusion processes: one with double-well potential drift and another with SINE drift. The new algorithm's accuracy and efficiency are compared with state-of-the-art hybrid Monte Carlo based path sampling. It is shown that in practical, finite-sample applications the algorithm is accurate except in the presence of large observation errors and low observation densities, which lead to a multi-modal structure in the posterior distribution over paths. More importantly, the variational approximation assisted sampling algorithm outperforms hybrid Monte Carlo in terms of computational efficiency, except when the diffusion process is densely observed with small errors, in which case both algorithms are equally efficient. © 2011 Springer-Verlag.
Abstract:
An emergency is a deviation from a planned course of events that endangers people, property, or the environment. It can be described as an unexpected event that causes economic damage, destruction, and human suffering. When a disaster happens, Emergency Managers are expected to have a response plan for the most likely disaster scenarios. Unlike earthquakes and terrorist attacks, a hurricane response plan can be activated ahead of time, since a hurricane is predicted at least five days before it makes landfall. This research looked into the logistics aspects of the problem, in an attempt to develop a hurricane relief distribution network model. We addressed the problem of how to efficiently and effectively deliver basic relief goods to victims of a hurricane disaster: specifically, where to preposition State Staging Areas (SSAs), which Points of Distribution (PODs) to activate, and how to allocate commodities to each POD. Previous research has addressed several of these issues, but not with the incorporation of the random behavior of the hurricane's intensity and path. This research presents a stochastic meta-model that deals with the location of SSAs and the allocation of commodities. The novelty of the model is that it treats the strength and path of the hurricane as stochastic processes, and models them as discrete Markov chains. The demand is also treated as a stochastic parameter because it depends on the stochastic behavior of the hurricane. However, for the meta-model, the demand is an input that is determined using Hazards United States (HAZUS), software developed by the Federal Emergency Management Agency (FEMA) that estimates losses due to hurricanes and floods. A solution heuristic has been developed based on simulated annealing. Since the meta-model is a multi-objective problem, the heuristic is a multi-objective simulated annealing (MOSA), in which the initial solution and the cooling rate were determined via a design of experiments. The experiment showed that the initial temperature (T0) is irrelevant, but that the temperature reduction (δ) must be very gradual. Assessment of the meta-model indicates that the Markov chains performed as well as or better than forecasts made by the National Hurricane Center (NHC). Tests of the MOSA showed that it provides solutions in an efficient manner. Finally, an illustrative example shows that the meta-model is practical.
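A skeleton of the MOSA loop described here, with a Pareto archive, a single-flip neighbourhood, and the very gradual cooling the design of experiments recommends, might look as follows; the objectives and parameters are placeholders, not the thesis's actual relief-network model.

```python
import math
import random

random.seed(3)

def objectives(x):
    # placeholder pair of conflicting objectives over a 0/1 vector of PODs:
    # cost of opening facilities vs. penalty for leaving demand uncovered
    return (sum(x) * 1.0, sum(1 - xi for xi in x) * 2.0)

def dominates(a, b):
    return all(ai <= bi for ai, bi in zip(a, b)) and a != b

def mosa(n=10, T0=100.0, delta=0.999, iters=5000):
    x = [random.randint(0, 1) for _ in range(n)]
    archive = [(tuple(x), objectives(x))]
    T = T0
    for _ in range(iters):
        y = x[:]
        i = random.randrange(n)
        y[i] = 1 - y[i]                     # flip one open/close decision
        fx, fy = objectives(x), objectives(y)
        d = sum(fy) - sum(fx)               # scalarised acceptance criterion
        if d <= 0 or random.random() < math.exp(-d / T):
            x = y
            if not any(dominates(f, fy) for _, f in archive):
                archive = [(s, f) for s, f in archive if not dominates(fy, f)]
                archive.append((tuple(y), fy))
        T *= delta                          # very gradual cooling (delta near 1)
    return archive

print(len(mosa()), "non-dominated solutions in the archive")
```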
Abstract:
In this work, we present our understanding of the article by Aksoy [1], which uses Markov chains to model the flow of intermittent rivers. We then applied his model to generate data for intermittent streamflows, based on a data set of Brazilian streams. After that, we built a hidden Markov model as a proposed new approach to the problem of simulating such flows. We used the Gamma distribution to simulate the increases and decreases in river flows, along with a two-state Markov chain. Our motivation for using a hidden Markov model comes from the possibility of obtaining the same information that Aksoy's model provides, but with a single tool capable of treating the problem as a whole, rather than through multiple independent processes.
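A minimal sketch of the simulation scheme just described, a two-state (dry/wet) chain driving Gamma-distributed rises and recessions, is given below; the transition probabilities and Gamma parameters are illustrative, not values fitted to the Brazilian data set.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two-state (dry/wet) Markov chain governs intermittency; Gamma-distributed
# jumps drive the rises and recessions of the flow.
P = np.array([[0.8, 0.2],     # dry -> dry, dry -> wet
              [0.3, 0.7]])    # wet -> dry, wet -> wet

state, flow, series = 0, 0.0, []
for day in range(365):
    state = rng.choice(2, p=P[state])
    if state == 1:                             # wet: Gamma-distributed rise
        flow += rng.gamma(shape=2.0, scale=1.5)
    else:                                      # dry: Gamma-distributed recession
        flow = max(0.0, flow - rng.gamma(shape=1.5, scale=1.0))
    series.append(flow)

zero_days = sum(f == 0.0 for f in series)
print(f"simulated year: {zero_days} zero-flow days, peak = {max(series):.1f}")
```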
Abstract:
Ocean acidification represents a key threat to coral reefs by reducing the calcification rate of framework builders. In addition, acidification is likely to affect the relationship between corals and their symbiotic dinoflagellates and the productivity of this association. However, little is known about how acidification impacts the physiology of reef builders and how acidification interacts with warming. Here, we report on an 8-week study that compared bleaching, productivity, and calcification responses of crustose coralline algae (CCA) and branching (Acropora) and massive (Porites) coral species in response to acidification and warming. Using a 30-tank experimental system, we manipulated CO2 levels to simulate doubling and three- to fourfold increases [Intergovernmental Panel on Climate Change (IPCC) projection categories IV and VI] relative to present-day levels under cool and warm scenarios. Results indicated that high CO2 is a bleaching agent for corals and CCA under high irradiance, acting synergistically with warming to lower thermal bleaching thresholds. We propose that CO2 induces bleaching via its impact on photoprotective mechanisms of the photosystems. Overall, acidification had a stronger impact on bleaching and productivity than on calcification. Interestingly, the intermediate, warm CO2 scenario led to a 30% increase in productivity in Acropora, whereas high CO2 led to zero productivity in both corals. CCA were most sensitive to acidification, with high CO2 leading to negative productivity and high rates of net dissolution. Our findings suggest that sensitive reef-building species such as CCA may be pushed beyond their thresholds for growth and survival within the next few decades, whereas corals will show delayed and mixed responses.
Abstract:
A non-Markovian process is one that retains `memory' of its past. A systematic understanding of these processes is necessary to fully describe and harness a vast range of complex phenomena; however, no such general characterisation currently exists. This long-standing problem has hindered advances in understanding physical, chemical and biological processes, where often dubious theoretical assumptions are made to render a dynamical description tractable. Moreover, the methods currently available to treat non-Markovian quantum dynamics are plagued with unphysical results, like non-positive dynamics. Here we develop an operational framework to characterise arbitrary non-Markovian quantum processes. We demonstrate the universality of our framework and how the characterisation can be rendered efficient, before formulating a necessary and sufficient condition for quantum Markov processes. Finally, we stress how our framework enables the actual systematic analysis of non-Markovian processes, the understanding of their typicality, and the development of new master equations for the effective description of memory-bearing open-system evolution.
Abstract:
The Dirichlet process mixture model (DPMM) is a ubiquitous, flexible Bayesian nonparametric statistical model. However, full probabilistic inference in this model is analytically intractable, so that computationally intensive techniques such as Gibbs sampling are required. As a result, DPMM-based methods, which have considerable potential, are restricted to applications in which computational resources and time for inference are plentiful. For example, they would not be practical for digital signal processing on embedded hardware, where computational resources are at a serious premium. Here, we develop a simplified yet statistically rigorous approximate maximum a posteriori (MAP) inference algorithm for DPMMs. This algorithm is as simple as DP-means clustering and solves the MAP problem as well as Gibbs sampling does, while requiring only a fraction of the computational effort. (For freely available code that implements the MAP-DP algorithm for Gaussian mixtures see http://www.maxlittle.net/.) Unlike related small variance asymptotics (SVA), our method is non-degenerate and so inherits the “rich get richer” property of the Dirichlet process. It also retains a non-degenerate closed-form likelihood which enables out-of-sample calculations and the use of standard tools such as cross-validation. We illustrate the benefits of our algorithm on a range of examples and contrast it to variational, SVA and sampling approaches from both a computational complexity perspective as well as in terms of clustering performance. We demonstrate the wide applicability of our approach by presenting an approximate MAP inference method for the infinite hidden Markov model whose performance contrasts favorably with a recently proposed hybrid SVA approach. Similarly, we show how our algorithm can be applied to a semiparametric mixed-effects regression model where the random effects distribution is modelled using an infinite mixture model, as used in longitudinal progression modelling in population health science. Finally, we propose directions for future research on approximate MAP inference in Bayesian nonparametrics.
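The flavour of the algorithm, a k-means-like sweep whose assignment cost combines a negative log likelihood with the Chinese restaurant process's rich-get-richer term, can be conveyed by the simplified spherical-Gaussian sketch below; the full MAP-DP algorithm uses conjugate exponential-family updates, and the data and hyperparameters here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two well-separated spherical Gaussian clusters (illustrative data).
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
alpha, sigma2, prior_var = 1.0, 0.09, 10.0   # CRP and Gaussian hyperparameters
d = X.shape[1]

z = np.zeros(len(X), dtype=int)              # all points start in one cluster
for sweep in range(10):
    for i in range(len(X)):
        others = np.delete(np.arange(len(X)), i)
        counts = np.bincount(z[others])
        ks = np.flatnonzero(counts)
        costs = []
        for k in ks:                         # existing clusters: rich get richer
            mu = X[others[z[others] == k]].mean(axis=0)
            nll = ((X[i] - mu)**2).sum() / (2 * sigma2) \
                  + 0.5 * d * np.log(2 * np.pi * sigma2)
            costs.append(nll - np.log(counts[k]))
        v = sigma2 + prior_var               # prior predictive variance
        new_nll = (X[i]**2).sum() / (2 * v) + 0.5 * d * np.log(2 * np.pi * v)
        costs.append(new_nll - np.log(alpha))   # cost of opening a new cluster
        best = int(np.argmin(costs))
        z[i] = ks[best] if best < len(ks) else z.max() + 1

print("clusters found:", len(np.unique(z)))
```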
Abstract:
The purpose of this thesis is to clarify the role of non-equilibrium stationary currents of Markov processes in the context of the predictability of future states of the system. Once the connection between predictability and conditional entropy is established, we provide a comprehensive approach to the definition of a multi-particle Markov system. In particular, starting from the well-known theory of random walks on networks, we derive the non-linear master equation for an interacting multi-particle system under the one-step process hypothesis, highlighting the limits of its tractability and the properties of its stationary solution. Lastly, in order to study the impact of the non-equilibrium stationary state (NESS) on predictability at short times, we analyze the conditional entropy by modulating the intensity of the stationary currents, both for a single-particle and a multi-particle Markov system. The results obtained analytically are numerically tested on a 5-node cycle network and put in correspondence with the stationary entropy production. Furthermore, because of the low dimensionality of the single-particle system, an analysis of its spectral properties as a function of the modulated stationary currents is performed.
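A small sketch of the single-particle setting on the 5-node cycle, with asymmetric hopping rates creating a stationary current and the entropy production computed in the standard Schnakenberg form, is given below; the rates are illustrative.

```python
import numpy as np

# Continuous-time random walk on a 5-node cycle with asymmetric rates:
# the stationary state carries a current, so entropy production is positive.
n, p, q = 5, 1.0, 0.4                 # clockwise / counter-clockwise rates
Q = np.zeros((n, n))
for i in range(n):
    Q[i, (i + 1) % n] = p
    Q[i, (i - 1) % n] = q
    Q[i, i] = -(p + q)

# Stationary distribution (uniform on a homogeneous cycle).
vals, vecs = np.linalg.eig(Q.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals))])
pi /= pi.sum()

# Schnakenberg entropy production: sum over edges of net flux times
# the log ratio of forward to backward probability fluxes.
ep = 0.0
for i in range(n):
    j = (i + 1) % n
    fwd, bwd = pi[i] * Q[i, j], pi[j] * Q[j, i]
    ep += (fwd - bwd) * np.log(fwd / bwd)

print("stationary pi:", np.round(pi, 3))
print("entropy production rate:", round(ep, 4))   # 0 only at equilibrium (p = q)
```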