937 results for Markov chains, uniformization, inexact methods, relaxed matrix-vector
Abstract:
An important variant of a key problem in multi-attribute decision making is considered. We study the extension of the pairwise comparison matrix to the case when only partial information is available: for some pairs, no comparison is given. It is natural to define the inconsistency of a partially filled matrix as the inconsistency of its best, completely filled completion. We study here the uniqueness problem of the best completion for two weighting methods, the Eigenvector Method and the Logarithmic Least Squares Method. In both settings we obtain the same simple graph-theoretic characterization of uniqueness: the optimal completion is unique if and only if the graph associated with the partially defined matrix is connected. Some numerical experiments are discussed at the end of the paper.
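As a rough illustration of the connectivity criterion above, the following sketch (the function name and data layout are illustrative, not from the paper) builds the graph of known comparisons and checks whether it is connected, which by the stated result is exactly when the best completion is unique:

    # Sketch: uniqueness condition for a partially filled pairwise comparison matrix.
    # Known comparisons are (i, j) index pairs; the optimal completion (EM or LLSM)
    # is unique iff the graph of known comparisons is connected.
    from collections import deque

    def comparison_graph_connected(n, known_pairs):
        adj = {i: set() for i in range(n)}
        for i, j in known_pairs:
            adj[i].add(j)
            adj[j].add(i)
        seen, queue = {0}, deque([0])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        return len(seen) == n

    # A spanning set of comparisons (0-1, 1-2, 2-3) gives a unique completion;
    # two disconnected pairs do not.
    print(comparison_graph_connected(4, [(0, 1), (1, 2), (2, 3)]))  # True
    print(comparison_graph_connected(4, [(0, 1), (2, 3)]))          # False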
Abstract:
An emergency is a deviation from a planned course of events that endangers people, property, or the environment. It can be described as an unexpected event that causes economic damage, destruction, and human suffering. When a disaster happens, Emergency Managers are expected to have a response plan for the most likely disaster scenarios. Unlike earthquakes and terrorist attacks, a hurricane response plan can be activated ahead of time, since a hurricane is predicted at least five days before it makes landfall. This research looked into the logistics aspects of the problem, in an attempt to develop a hurricane relief distribution network model. We addressed the problem of how to efficiently and effectively deliver basic relief goods to victims of a hurricane disaster: specifically, where to preposition State Staging Areas (SSAs), which Points of Distribution (PODs) to activate, and how to allocate commodities to each POD. Previous research has addressed several of these issues, but not with the incorporation of the random behavior of the hurricane's intensity and path. This research presents a stochastic meta-model that deals with the location of SSAs and the allocation of commodities. The novelty of the model is that it treats the strength and path of the hurricane as stochastic processes and models them as discrete Markov chains. The demand is also treated as a stochastic parameter because it depends on the stochastic behavior of the hurricane. However, for the meta-model, the demand is an input determined using Hazards United States (HAZUS), software developed by the Federal Emergency Management Agency (FEMA) that estimates losses due to hurricanes and floods. A solution heuristic was developed based on simulated annealing. Since the meta-model is a multi-objective problem, the heuristic is a multi-objective simulated annealing (MOSA), in which the initial solution and the cooling rate were determined via a Design of Experiments. The experiment showed that the initial temperature (T0) is irrelevant, but the temperature reduction rate (δ) must be very gradual. Assessment of the meta-model indicates that the Markov chains performed as well as or better than forecasts made by the National Hurricane Center (NHC). Tests of the MOSA showed that it provides solutions efficiently. Finally, an illustrative example shows that the meta-model is practical.
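To make the modelling idea concrete, here is a minimal sketch of a discrete Markov chain for hurricane intensity (the state labels and transition matrix are made-up placeholders, not the matrices estimated in the thesis from NHC data):

    # Sketch: hurricane intensity as a discrete Markov chain over forecast periods.
    # States and transition probabilities are illustrative placeholders only.
    import numpy as np

    states = ["TS", "Cat1", "Cat2", "Cat3", "Cat4"]
    P = np.array([
        [0.6, 0.3, 0.1, 0.0, 0.0],
        [0.2, 0.5, 0.2, 0.1, 0.0],
        [0.1, 0.2, 0.4, 0.2, 0.1],
        [0.0, 0.1, 0.3, 0.4, 0.2],
        [0.0, 0.0, 0.2, 0.3, 0.5],
    ])

    def simulate_intensity(start, steps, rng=np.random.default_rng(0)):
        path, s = [start], states.index(start)
        for _ in range(steps):
            s = rng.choice(len(states), p=P[s])
            path.append(states[s])
        return path

    print(simulate_intensity("Cat2", 5))  # one sampled intensity trajectory

The path of the storm can be handled the same way, with forecast-track positions or grid cells as the states.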
Abstract:
In this work, we present our understanding of the article by Aksoy [1], which uses Markov chains to model the flow of intermittent rivers. We then applied his model to generate data for intermittent streamflows, based on a data set of Brazilian streams. After that, we built a hidden Markov model as a proposed new approach to the problem of simulating such flows. We used the Gamma distribution to simulate the increases and decreases in river flows, along with a two-state Markov chain. Our motivation for using a hidden Markov model is the possibility of obtaining the same information that Aksoy's model provides, but with a single tool capable of treating the problem as a whole rather than through multiple independent processes.
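A minimal sketch of a generator in this spirit (parameter values are illustrative assumptions, not values fitted to the Brazilian data set): a two-state wet/dry Markov chain drives the intermittency, and Gamma-distributed increments and decrements drive the flow while the stream is wet.

    # Sketch: intermittent streamflow generator with a two-state (wet/dry) Markov
    # chain and Gamma-distributed flow increases/decreases. Illustrative parameters.
    import numpy as np

    rng = np.random.default_rng(42)
    p_dry_to_wet = 0.2        # P(wet tomorrow | dry today)
    p_wet_to_dry = 0.3        # P(dry tomorrow | wet today)
    shape, scale = 2.0, 1.5   # Gamma parameters for flow changes

    def simulate_flow(days):
        wet, flow, series = False, 0.0, []
        for _ in range(days):
            wet = (rng.random() >= p_wet_to_dry) if wet else (rng.random() < p_dry_to_wet)
            if wet:
                # wet day: flow rises or recedes by Gamma-distributed amounts
                flow = max(flow + rng.gamma(shape, scale) - rng.gamma(shape, scale), 0.0)
            else:
                flow = 0.0  # intermittent stream: no flow on dry days
            series.append(flow)
        return series

    print(simulate_flow(10))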
Abstract:
Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even with the huge increases in n typically seen in many fields. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n = all" has little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and it is the primary motivation for the work presented here.
Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is the design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms and for characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.
One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and we provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and in other common population structure inference problems is assessed in simulations and a real data application.
In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis-Ylvisaker priors for the parameters of log-linear models do not give rise to closed-form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis-Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel, but comparatively little attention has been paid to convergence and estimation error in the resulting approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
Data augmentation Gibbs samplers are arguably the most popular class of algorithms for approximately sampling from the posterior distribution of the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size, up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence-chain Metropolis algorithm show good mixing on the same dataset.
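For reference, a minimal sketch of the truncated-Normal data augmentation sampler for probit regression referred to above (Albert-Chib style, assuming a flat prior on the coefficients; illustrative code, not code from the thesis). On data with large n and very few successes, exactly this kind of chain exhibits the slow mixing analyzed in Chapter 7.

    # Sketch: truncated-Normal data augmentation Gibbs sampler for probit regression
    # with a flat prior on beta. Illustrative only.
    import numpy as np
    from scipy.stats import truncnorm

    def probit_da_gibbs(X, y, n_iter=2000, seed=0):
        rng = np.random.default_rng(seed)
        n, p = X.shape
        XtX_inv = np.linalg.inv(X.T @ X)
        chol = np.linalg.cholesky(XtX_inv)
        beta = np.zeros(p)
        draws = np.empty((n_iter, p))
        for t in range(n_iter):
            mu = X @ beta
            # z_i | beta, y_i ~ Normal(mu_i, 1) truncated to (0, inf) if y_i = 1, (-inf, 0) if y_i = 0
            a = np.where(y == 1, -mu, -np.inf)
            b = np.where(y == 1, np.inf, -mu)
            z = truncnorm.rvs(a, b, loc=mu, scale=1.0, random_state=rng)
            # beta | z ~ Normal((X'X)^{-1} X'z, (X'X)^{-1}) under the flat prior
            beta = XtX_inv @ (X.T @ z) + chol @ rng.standard_normal(p)
            draws[t] = beta
        return draws

    # With n large and sum(y) small, the draws show high autocorrelation; this is
    # the slow-mixing regime whose spectral gap is bounded in Chapter 7.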
Abstract:
The study of economic growth is of great importance for assessing the trajectory of an economy over time. The purpose of this work is to analyze economic growth in the state of Rio Grande do Sul using the toolkit of Markov chains; the central idea of the study is the income convergence hypothesis. First, the income convergence hypothesis is tested for the state at the level of its microregions, using per capita product data for the years 1990, 2000, and 2010. The convergence hypothesis is also tested for the municipalities of the Conselho Regional de Desenvolvimento Sul (Corede Sul), located in Rio Grande do Sul, using per capita income data for the years 1991, 2000, and 2010. The results for the microregions of Rio Grande do Sul show that the economies are not all converging to a specific income class; however, in the long run there will be a greater concentration of microregions in the income strata close to the mean, and the expected time for the economies to reach their steady state is six periods. The results for the Corede Sul region indicate that most economies will converge to the lower-middle income class, followed by the upper-middle class. Both classes lie close to the regional mean, while the poor and rich classes at the extremes will die out in the long run. The expected time for the economies to reach the steady state is eleven periods.
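A minimal sketch of the Markov chain machinery behind this kind of convergence analysis (the income classes and transition matrix below are made-up illustrations, not the ones estimated for Rio Grande do Sul): income classes are the states, the long-run class distribution is the stationary vector of the transition matrix, and the "time to steady state" can be gauged by iterating the chain until the distribution stops changing.

    # Sketch: long-run income-class distribution of a Markov chain.
    # The transition matrix is illustrative, not the one estimated in the study.
    import numpy as np

    classes = ["poor", "lower-middle", "upper-middle", "rich"]
    P = np.array([
        [0.6, 0.3, 0.1, 0.0],
        [0.2, 0.5, 0.3, 0.0],
        [0.0, 0.3, 0.5, 0.2],
        [0.0, 0.1, 0.4, 0.5],
    ])

    # Stationary distribution: left eigenvector of P for eigenvalue 1.
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    pi = pi / pi.sum()
    print(dict(zip(classes, np.round(pi, 3))))

    # Rough "time to steady state": iterate an initial distribution until it stabilizes.
    dist = np.full(4, 0.25)
    for t in range(1, 100):
        new = dist @ P
        if np.max(np.abs(new - dist)) < 1e-3:
            print("approximately stationary after", t, "periods")
            break
        dist = new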
Abstract:
In Chapter 2, we investigate linear matrix-valued SDEs and the Itô-stochastic Magnus expansion, which provides an efficient numerical scheme to solve matrix-valued SDEs. We show convergence of the expansion up to a stopping time τ and provide an asymptotic estimate of the cumulative distribution function of τ. Moreover, we show how to apply it to solve SPDEs with one and two spatial dimensions with high accuracy, by combining it with the method of lines. We will see that the Magnus expansion allows us to use GPU techniques, leading to major performance improvements compared to a standard Euler-Maruyama scheme. In Chapter 3, we study a short-rate model in a Cox-Ingersoll-Ross (CIR) framework for negative interest rates. We define the short rate as the difference of two independent CIR processes and add a deterministic shift to guarantee a perfect fit to the market term structure. We show how to use the Gram-Charlier expansion to efficiently calibrate the model to the market swaption surface and price Bermudan swaptions with good accuracy. We take two different perspectives on rating transition modelling. In Section 4.4, we study inhomogeneous continuous-time Markov chains (ICTMC) as a candidate for a rating model with deterministic rating transitions. We extend this model by taking a Lie group perspective in Section 4.5 to allow for stochastic rating transitions. In both cases, we compare the most popular choices of change-of-measure technique and show how to efficiently calibrate both models to the available historical rating data and market default probabilities. Finally, we apply the techniques developed in this thesis to minimize the collateral-inclusive Credit/Debit Valuation Adjustments under the constraint of small collateral postings, by using a collateral account dependent on rating triggers.
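A minimal sketch of the two-factor CIR construction for negative rates described above (parameter values and the shift are illustrative placeholders, not calibrated quantities): the short rate is r_t = x_t - y_t + phi(t), with x and y independent CIR processes, simulated here with a simple full-truncation Euler scheme.

    # Sketch: short rate as the difference of two independent CIR processes plus a
    # deterministic shift; full-truncation Euler discretization. Illustrative parameters.
    import numpy as np

    def simulate_cir(x0, kappa, theta, sigma, dt, n_steps, rng):
        x = np.empty(n_steps + 1)
        x[0] = x0
        for i in range(n_steps):
            xp = max(x[i], 0.0)  # full truncation keeps the square root well defined
            x[i + 1] = x[i] + kappa * (theta - xp) * dt + sigma * np.sqrt(xp * dt) * rng.standard_normal()
        return x

    rng = np.random.default_rng(1)
    dt, n_steps = 1 / 252, 252
    x = simulate_cir(0.02, 0.5, 0.03, 0.10, dt, n_steps, rng)  # factor added
    y = simulate_cir(0.01, 0.8, 0.02, 0.10, dt, n_steps, rng)  # factor subtracted, allows r < 0
    phi = -0.005                                               # deterministic shift (placeholder constant)
    r = x - y + phi
    print(r[:5])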
Abstract:
In this article, we present a Bayesian analysis for the stochastic volatility (SV) model and a generalized form of it, with the goal of estimating the volatility of financial time series. Considering some special cases of SV models, we use Markov chain Monte Carlo algorithms and the WinBugs software to obtain posterior summaries for the different forms of SV models. We introduce some Bayesian discrimination techniques for choosing the best model to be used to estimate volatilities and produce forecasts of financial series. An empirical example applying the methodology is presented using the IBOVESPA financial series.
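For context, a minimal sketch of the basic SV specification commonly estimated in this setting, here just simulated forward (parameter values are illustrative assumptions; in the article the latent volatilities are estimated by MCMC in WinBugs):

    # Sketch: simulate from a basic stochastic volatility model
    #   h_t = mu + phi * (h_{t-1} - mu) + sigma_eta * eta_t,  y_t = exp(h_t / 2) * eps_t,
    # with eta_t, eps_t ~ N(0, 1). Illustrative parameter values only.
    import numpy as np

    def simulate_sv(n, mu=-1.0, phi=0.95, sigma_eta=0.2, seed=0):
        rng = np.random.default_rng(seed)
        h = np.empty(n)
        h[0] = mu + sigma_eta / np.sqrt(1 - phi**2) * rng.standard_normal()  # stationary start
        for t in range(1, n):
            h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.standard_normal()
        y = np.exp(h / 2) * rng.standard_normal(n)
        return y, h

    returns, log_vol = simulate_sv(1000)
    print(returns[:5])
    print(np.exp(log_vol[:5] / 2))  # the latent volatilities the Bayesian analysis targets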
Abstract:
Diagnostic methods have been an important tool in regression analysis for detecting anomalies in fitted models, such as departures from error assumptions and the presence of outliers and influential observations. Assuming censored data, we considered both a classical analysis and a Bayesian analysis assuming non-informative priors for the parameters of the model with a cure fraction. The Bayesian approach used Markov chain Monte Carlo methods with Metropolis-Hastings steps to obtain the posterior summaries of interest. Some influence methods, such as local influence, total local influence of an individual, local influence on predictions, and generalized leverage, were derived, analyzed, and discussed for survival data with a cure fraction and covariates. The relevance of the approach is illustrated with a real data set, where it is shown that, by removing the most influential observations, the decision about which model best fits the data is changed.
Abstract:
Large-scale cortical networks exhibit characteristic topological properties that shape communication between brain regions and global cortical dynamics. Analysis of complex networks allows the description of connectedness, distance, clustering, and centrality, which reveal different aspects of how the network's nodes communicate. Here, we focus on a novel analysis of complex walks in a series of mammalian cortical networks that model potential dynamics of information flow between individual brain regions. We introduce two new measures called absorption and driftness. Absorption is the average length of random walks between any two nodes, and takes into account all paths that may diffuse activity throughout the network. Driftness is the ratio between absorption and the corresponding shortest path length. For a given node of the network, we also define four related measurements, namely in- and out-absorption as well as in- and out-driftness, as the averages of the corresponding measures from all nodes to that node, and from that node to all nodes, respectively. We find that the cat thalamo-cortical system incorporates features of two classic network topologies: Erdos-Renyi graphs with respect to in-absorption and in-driftness, and configuration models with respect to out-absorption and out-driftness. Moreover, taken together, these four measures separate the network nodes based on broad functional roles (visual, auditory, somatomotor, and frontolimbic).
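As a rough illustration of the kind of random-walk computation underlying the absorption measure (this implementation is an assumption made for illustration, not the authors' code): expected hitting times of a random walk on a directed network can be obtained by making the target node absorbing and solving a linear system; a driftness-style quantity would then divide these times by the corresponding shortest path lengths.

    # Sketch: expected random-walk hitting times on a directed network, the kind of
    # quantity underlying the absorption measure. Illustrative implementation.
    import numpy as np

    def mean_hitting_times(A):
        """A: directed adjacency matrix. H[i, j] = expected steps of a random walk from i to j."""
        n = A.shape[0]
        P = A / A.sum(axis=1, keepdims=True)   # row-normalized transition matrix
        H = np.zeros((n, n))
        for j in range(n):
            keep = [k for k in range(n) if k != j]
            Q = P[np.ix_(keep, keep)]          # walk restricted to non-target nodes
            H[keep, j] = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
        return H

    A = np.array([[0, 1, 1, 0],
                  [0, 0, 1, 1],
                  [1, 0, 0, 1],
                  [1, 1, 0, 0]], dtype=float)
    H = mean_hitting_times(A)
    print(H)               # pairwise hitting times
    print(H.mean(axis=0))  # an in-absorption-style average for each target node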
Abstract:
Context tree models were introduced by Rissanen in [25] as a parsimonious generalization of Markov models. Since then, they have been widely used in applied probability and statistics. The present paper investigates non-asymptotic properties of two popular procedures of context tree estimation: Rissanen's algorithm Context and penalized maximum likelihood. After first showing how they are related, we prove finite-horizon bounds for the probability of over- and under-estimation. Concerning overestimation, no boundedness or loss-of-memory conditions are required: the proof relies on new deviation inequalities for empirical probabilities that are of independent interest. The under-estimation properties rely on classical hypotheses for processes of infinite memory. These results improve on and generalize the bounds obtained in Duarte et al. (2006) [12], Galves et al. (2008) [18], Galves and Leonardi (2008) [17], and Leonardi (2010) [22], refining asymptotic results of Buhlmann and Wyner (1999) [4] and Csiszar and Talata (2006) [9].
Abstract:
Efficient automatic protein classification is of central importance in genomic annotation. As an independent way to check the reliability of the classification, we propose a statistical approach to test whether two sets of protein domain sequences coming from two families of the Pfam database are significantly different. We model protein sequences as realizations of Variable Length Markov Chains (VLMC) and we use the context trees as a signature of each protein family. Our approach is based on a Kolmogorov-Smirnov-type goodness-of-fit test proposed by Balding et al. [Limit theorems for sequences of random trees (2008), DOI: 10.1007/s11749-008-0092-z]. The test statistic is a supremum, over the space of trees, of a function of the two samples; its computation grows, in principle, exponentially fast with the maximal number of nodes of the potential trees. We show how to transform this problem into a max-flow problem over a related graph, which can be solved using the Ford-Fulkerson algorithm in time polynomial in that number. We apply the test to 10 randomly chosen protein domain families from the seed of the Pfam-A database (high-quality, manually curated families). The test shows that the distributions of context trees coming from different families are significantly different. We emphasize that this is a novel mathematical approach to validate the automatic clustering of sequences in any context. We also study the performance of the test via simulations on Galton-Watson related processes.
Abstract:
We consider a polling model with multiple stations, each with Poisson arrivals and a queue of infinite capacity. The service regime is exhaustive and there is Jacksonian feedback of served customers. What is new here is that when the server comes to a station, it chooses the service rate and the feedback parameters at random; these remain valid during the whole stay of the server at that station. We give criteria for recurrence, transience, and existence of the s-th moment of the return time to the empty state for this model. This paper generalizes the model considered in [Ann. Appl. Probab. 17 (2007) 1447-1473], in which only two stations accept arriving jobs. Our results are stated in terms of Lyapunov exponents for random matrices. From the recurrence criteria, it can be seen that the polling model with parameter regeneration can exhibit the unusual phenomenon of null recurrence over a thick region of parameter space.
Abstract:
We consider binary infinite-order stochastic chains perturbed by random noise. This means that at each time step, the value assumed by the chain can be randomly and independently flipped with a small fixed probability. We show that the transition probabilities of the perturbed chain are uniformly close to the corresponding transition probabilities of the original chain. As a consequence, in the case of stochastic chains with unbounded but otherwise finite variable-length memory, we show that it is possible to recover the context tree of the original chain, using a suitable version of the algorithm Context, provided that the noise is small enough.
Abstract:
When building genetic maps, it is necessary to choose from several marker ordering algorithms and criteria, and the choice is not always simple. In this study, we evaluate the efficiency of the algorithms try (TRY), seriation (SER), rapid chain delineation (RCD), recombination counting and ordering (RECORD), and unidirectional growth (UG), as well as the criteria PARF (product of adjacent recombination fractions), SARF (sum of adjacent recombination fractions), SALOD (sum of adjacent LOD scores), and LHMC (likelihood through hidden Markov chains), used with the RIPPLE algorithm for error verification, in the construction of genetic linkage maps. A linkage map of a hypothetical diploid and monoecious plant species was simulated, containing one linkage group and 21 markers with a fixed distance of 3 cM between them. In all, 700 F(2) populations were randomly simulated with 100 and 400 individuals, with different combinations of dominant and co-dominant markers, as well as 10 and 20% missing data. The simulations showed that, in the presence of co-dominant markers only, any combination of algorithm and criteria may be used, even for a reduced population size. In the case of a smaller proportion of dominant markers, any of the algorithms and criteria investigated (except SALOD) may be used. In the presence of high proportions of dominant markers and smaller samples (around 100), the probability of linkage in repulsion between them increases and, in this case, use of the algorithms TRY and SER associated with RIPPLE and the criterion LHMC would provide better results. Heredity (2009) 103, 494-502; doi:10.1038/hdy.2009.96; published online 29 July 2009.
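A minimal sketch of the SARF criterion mentioned above (illustrative; the toy recombination fractions are not from the study): for a candidate marker order, SARF is the sum of recombination fractions between adjacent markers, and the order minimizing it is preferred; PARF works the same way with a product instead of a sum.

    # Sketch: SARF (sum of adjacent recombination fractions) for a candidate marker
    # order, given a matrix of pairwise recombination fraction estimates. Toy values.
    import numpy as np

    def sarf(order, rf):
        """order: sequence of marker indices; rf: symmetric matrix of pairwise
        recombination fractions. Lower SARF indicates a better order."""
        return sum(rf[order[k], order[k + 1]] for k in range(len(order) - 1))

    rf = np.array([[0.00, 0.03, 0.10, 0.18],
                   [0.03, 0.00, 0.04, 0.12],
                   [0.10, 0.04, 0.00, 0.05],
                   [0.18, 0.12, 0.05, 0.00]])

    print(sarf([0, 1, 2, 3], rf))  # true order: 0.03 + 0.04 + 0.05 = 0.12
    print(sarf([0, 2, 1, 3], rf))  # permuted order: 0.10 + 0.04 + 0.12 = 0.26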