998 results for Normal approximation
Abstract:
Having the ability to work with complex models can be highly beneficial, but the computational cost of doing so is often large. Complex models often have intractable likelihoods, so methods that directly use the likelihood function are infeasible. In these situations, the benefits of working with likelihood-free methods become apparent. Likelihood-free methods, such as parametric Bayesian indirect likelihood, which uses the likelihood of an alternative parametric auxiliary model, have been explored throughout the literature as a good alternative when the model of interest is complex. One of these methods is the synthetic likelihood (SL), which assumes a multivariate normal approximation to the likelihood of a summary statistic of interest. This paper explores the accuracy and computational efficiency of the Bayesian version of the synthetic likelihood (BSL) approach in comparison with a competitor, approximate Bayesian computation (ABC), and examines BSL's sensitivity to its tuning parameters and assumptions. We relate BSL to pseudo-marginal methods and propose an alternative SL that uses an unbiased estimator of the exact working normal likelihood when the summary statistic has a multivariate normal distribution. Several applications of varying complexity are considered to illustrate the findings of this paper.
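The core of the SL idea is easy to sketch: simulate several datasets at a proposed parameter, reduce each to summary statistics, fit a multivariate normal to those summaries, and score the observed summary under that fit. Below is a minimal, hypothetical Python illustration; the simulate/summarize interface and the toy model are assumptions of this sketch, not the paper's code.

```python
import numpy as np
from scipy.stats import multivariate_normal

def synthetic_loglik(theta, s_obs, simulate, summarize, n_sim=200, rng=None):
    """Estimate the synthetic log-likelihood at theta.

    simulate(theta, rng) -> one synthetic dataset (assumed interface);
    summarize(data)      -> 1-D vector of summary statistics.
    """
    rng = rng or np.random.default_rng()
    S = np.array([summarize(simulate(theta, rng)) for _ in range(n_sim)])
    mu = S.mean(axis=0)
    Sigma = np.cov(S, rowvar=False)          # plug-in MVN approximation
    return multivariate_normal.logpdf(s_obs, mean=mu, cov=Sigma)

# Toy example: data are i.i.d. N(theta, 1); summaries are (mean, variance).
simulate = lambda theta, rng: rng.normal(theta, 1.0, size=100)
summarize = lambda x: np.array([x.mean(), x.var()])
rng = np.random.default_rng(0)
s_obs = summarize(simulate(0.5, rng))
print(synthetic_loglik(0.5, s_obs, simulate, summarize, rng=rng))
```

The plug-in mean/covariance above is the standard SL estimator; the alternative SL discussed in the paper replaces it with an unbiased estimator of the working normal likelihood.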
Abstract:
We describe an age-structured statistical catch-at-length analysis (A-SCALA) based on the MULTIFAN-CL model of Fournier et al. (1998). The analysis is applied independently to both the yellowfin and the bigeye tuna populations of the eastern Pacific Ocean (EPO). We model the populations from 1975 to 1999, based on quarterly time steps. Only a single stock for each species is assumed for each analysis, but multiple fisheries that are spatially separate are modeled to allow for spatial differences in catchability and selectivity. The analysis allows for error in the effort-fishing mortality relationship, temporal trends in catchability, temporal variation in recruitment, relationships between the environment and recruitment and between the environment and catchability, and differences in selectivity and catchability among fisheries. The model is fit to total catch data and proportional catch-at-length data conditioned on effort. The A-SCALA method is a statistical approach, and therefore recognizes that the data collected from the fishery do not perfectly represent the population. Also, there is uncertainty in our knowledge about the dynamics of the system and uncertainty about how the observed data relate to the real population. The use of likelihood functions allows us to model the uncertainty in the data collected from the population, and the inclusion of estimable process error allows us to model the uncertainties in the dynamics of the system. The statistical approach allows for the calculation of confidence intervals and the testing of hypotheses. We use a Bayesian version of the maximum likelihood framework that includes distributional constraints on temporal variation in recruitment, the effort-fishing mortality relationship, and catchability. Curvature penalties for selectivity parameters and penalties on extreme fishing mortality rates are also included in the objective function. The mode of the joint posterior distribution is used as an estimate of the model parameters. Confidence intervals are calculated using the normal approximation method. It should be noted that the estimation method includes constraints and priors and therefore the confidence intervals are different from traditionally calculated confidence intervals. Management reference points are calculated, and forward projections are carried out to provide advice for making management decisions for the yellowfin and bigeye populations.
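The normal-approximation intervals mentioned above are obtained in the usual way: with the posterior mode in hand, the parameter covariance is approximated by the inverse Hessian of the negative log posterior at the mode. A minimal sketch, assuming a generic negloglik objective and a numerical Hessian (both hypothetical, not the A-SCALA code):

```python
import numpy as np
from scipy.stats import norm

def normal_approx_ci(negloglik, theta_hat, level=0.95, h=1e-4):
    """Wald-type intervals from the inverse Hessian of -log posterior at the mode.

    negloglik: callable on a parameter vector (assumed interface);
    theta_hat: posterior mode, found by a separate optimizer.
    """
    p = len(theta_hat)
    H = np.zeros((p, p))
    for i in range(p):                       # central finite differences
        for j in range(p):
            ei = np.eye(p)[i] * h
            ej = np.eye(p)[j] * h
            H[i, j] = (negloglik(theta_hat + ei + ej)
                       - negloglik(theta_hat + ei - ej)
                       - negloglik(theta_hat - ei + ej)
                       + negloglik(theta_hat - ei - ej)) / (4.0 * h * h)
    se = np.sqrt(np.diag(np.linalg.inv(H)))  # asymptotic standard errors
    z = norm.ppf(0.5 + level / 2.0)
    return theta_hat - z * se, theta_hat + z * se
```

As the abstract notes, because the objective includes penalties and priors, these intervals are centred on the posterior mode and differ from traditionally calculated confidence intervals.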
Abstract:
Cognitive radio has been proposed as a means of improving spectrum utilisation and increasing the spectrum efficiency of wireless systems. This can be achieved by allowing cognitive radio terminals to monitor their spectral environment and opportunistically access unoccupied frequency channels. Due to the opportunistic nature of cognitive radio, the overall performance of such networks depends on the spectrum occupancy or availability patterns. Appropriate knowledge of channel availability can optimise the sensing performance in terms of spectrum and energy efficiency. This work proposes a statistical framework for channel availability in the polarization domain. A Gaussian (normal) approximation is used to model real-world occupancy data obtained through a measurement campaign in the cellular frequency bands within a realistic scenario.
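As a hedged illustration of the modelling step (the measurement data and the paper's exact framework are not reproduced here), one can fit a normal distribution to per-channel duty-cycle samples and read off an availability probability; the data below are synthetic stand-ins:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical duty-cycle samples (fraction of time a channel is occupied),
# standing in for the campaign's measured occupancy data.
rng = np.random.default_rng(1)
duty_cycle = np.clip(rng.normal(0.35, 0.12, size=500), 0.0, 1.0)

mu, sigma = duty_cycle.mean(), duty_cycle.std(ddof=1)   # Gaussian fit
p_available = norm.cdf(0.2, loc=mu, scale=sigma)        # P(occupancy < 20%)
print(f"fitted N({mu:.3f}, {sigma:.3f}^2); P(occupancy < 0.2) = {p_available:.3f}")
```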
Abstract:
My thesis consists of three essays on bootstrap inference, both in panel data models and in models with many instrumental variables (IV), a large number of which may be weak. Since asymptotic theory is not always a good approximation to the sampling distribution of estimators and test statistics, I consider the bootstrap as an alternative. These essays study the asymptotic validity of existing bootstrap procedures and, where they are invalid, propose new valid bootstrap methods. The first chapter (co-written with Sílvia Gonçalves) studies the validity of the bootstrap for inference in a linear, dynamic and stationary fixed-effects panel data model. We consider three bootstrap methods: the recursive-design bootstrap, the fixed-design bootstrap and the pairs bootstrap. These methods are natural generalizations to the panel context of the bootstrap methods considered by Gonçalves and Kilian (2004) for autoregressive time-series models. We show that the OLS estimator obtained from the recursive-design bootstrap contains a built-in term that mimics the bias of the original estimator. This contrasts with the fixed-design bootstrap and the pairs bootstrap, whose distributions are incorrectly centred at zero. However, the recursive-design bootstrap and the pairs bootstrap are asymptotically valid when applied to the bias-corrected estimator, unlike the fixed-design bootstrap. In simulations, the recursive-design bootstrap is the method that produces the best results. The second chapter extends the pairs-bootstrap results to dynamic nonlinear panel models with fixed effects. These models are often estimated by the maximum likelihood estimator (MLE), which also suffers from bias. Recently, Dhaene and Jochmans (2014) proposed the split-jackknife estimation method. Although these estimators have normal asymptotic approximations centred on the true parameter, serious finite-sample distortions remain. Dhaene and Jochmans (2014) proposed the pairs bootstrap as an alternative in this context without any theoretical justification. To fill this gap, I show that this method is asymptotically valid when used to estimate the distribution of the split-jackknife estimator, although it cannot estimate the distribution of the MLE. Monte Carlo simulations show that bootstrap confidence intervals based on the split-jackknife estimator greatly reduce the distortions of the finite-sample normal approximation. I also apply this bootstrap method to a model of female labour-force participation to construct valid confidence intervals. In the last chapter (co-written with Wenjie Wang), we study the asymptotic validity of bootstrap procedures for models with many instrumental variables (IV), a large number of which may be weak. We show analytically that a standard residual-based bootstrap and the restricted efficient (RE) bootstrap of Davidson and MacKinnon (2008, 2010, 2014) cannot estimate the limiting distribution of the limited-information maximum likelihood estimator (LIML). The main reason is that they fail to mimic the parameter that characterizes the strength of identification in the sample. We therefore propose a modified bootstrap method that consistently estimates this limiting distribution. Our simulations show that the modified bootstrap considerably reduces the distortions of asymptotic Wald-type ($t$) tests in finite samples, especially when the degree of endogeneity is high.
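To make the first chapter's recursive-design scheme concrete, here is a minimal sketch for a dynamic fixed-effects panel AR(1), with simplifications of my own (within-group OLS, pooled i.i.d. residual resampling, conditioning on initial values); it illustrates the scheme, not the chapter's implementation:

```python
import numpy as np

def within_ols_ar1(y):
    """Within-group (fixed effects) OLS of y_it on y_{i,t-1}; y has shape (N, T)."""
    x, z = y[:, :-1], y[:, 1:]
    x = x - x.mean(axis=1, keepdims=True)    # demean within each unit
    z = z - z.mean(axis=1, keepdims=True)
    return (x * z).sum() / (x * x).sum()

def recursive_design_bootstrap(y, n_boot=999, rng=None):
    """Bootstrap distribution of the within-OLS AR(1) coefficient."""
    rng = rng or np.random.default_rng()
    N, T = y.shape
    rho_hat = within_ols_ar1(y)
    alpha_hat = y[:, 1:].mean(axis=1) - rho_hat * y[:, :-1].mean(axis=1)
    resid = y[:, 1:] - rho_hat * y[:, :-1] - alpha_hat[:, None]
    reps = np.empty(n_boot)
    for b in range(n_boot):
        e = rng.choice(resid.ravel(), size=(N, T - 1))   # resample residuals
        y_b = np.empty_like(y)
        y_b[:, 0] = y[:, 0]                  # condition on initial values
        for t in range(1, T):                # rebuild the panel recursively
            y_b[:, t] = alpha_hat + rho_hat * y_b[:, t - 1] + e[:, t - 1]
        reps[b] = within_ols_ar1(y_b)
    return rho_hat, reps
```

Because the bootstrap panels are regenerated recursively from the fitted model, the bootstrap estimator inherits a bias term of the same form as the original within-group estimator, which is the property the chapter exploits.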
Abstract:
In this article we propose a bootstrap test for the probability of ruin in the compound Poisson risk process. We adopt the P-value approach, which leads to a more complete assessment of the underlying risk than the probability of ruin alone. We provide second-order accurate P-values for this testing problem and consider both parametric and nonparametric estimators of the individual claim amount distribution. Simulation studies show that the suggested bootstrap P-values are very accurate and outperform their analogues based on the asymptotic normal approximation.
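To fix ideas, here is a hypothetical parametric-bootstrap sketch for the classical Cramér-Lundberg model with exponential claims, where the ruin probability has the closed form ψ(u) = (1+θ)⁻¹ exp(−θu/((1+θ)μ)) (μ the mean claim size, θ the premium loading). The paper's second-order accurate P-values refine this first-order recipe:

```python
import numpy as np

def ruin_prob(u, mu, theta):
    """Ruin probability for exponential claims with mean mu and loading theta."""
    return np.exp(-theta * u / ((1 + theta) * mu)) / (1 + theta)

def bootstrap_pvalue(claims, u, theta, psi_0, n_boot=9999, rng=None):
    """Parametric bootstrap P-value for H0: psi(u) = psi_0 vs psi(u) > psi_0."""
    rng = rng or np.random.default_rng()
    n = len(claims)
    psi_hat = ruin_prob(u, claims.mean(), theta)         # plug-in estimate
    # Claim mean consistent with the null hypothesis (inverts ruin_prob):
    mu_0 = -theta * u / ((1 + theta) * np.log((1 + theta) * psi_0))
    psi_star = np.array([
        ruin_prob(u, rng.exponential(mu_0, size=n).mean(), theta)
        for _ in range(n_boot)
    ])
    return (1 + np.sum(psi_star >= psi_hat)) / (n_boot + 1)

rng = np.random.default_rng(2)
claims = rng.exponential(1.0, size=200)                  # hypothetical data
print(bootstrap_pvalue(claims, u=5.0, theta=0.2, psi_0=0.3, rng=rng))
```

A nonparametric variant would resample the observed claims with replacement instead of drawing from the fitted exponential.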
Abstract:
A stochastic metapopulation model accounting for habitat dynamics is presented. This is the stochastic SIS logistic model with the novel aspect that it incorporates a varying carrying capacity. We present results of Kurtz and Barbour, which provide deterministic and diffusion approximations for a wide class of stochastic models, in a form that most easily allows their direct application to population models. These results are used to show that a suitably scaled version of the metapopulation model converges, uniformly in probability over finite time intervals, to a deterministic model previously studied in the ecological literature. Additionally, they allow us to establish a bivariate normal approximation to the quasi-stationary distribution of the process. This allows us to consider the effects of habitat dynamics on metapopulation modelling through a comparison with the stochastic SIS logistic model, and provides an effective means of modelling metapopulations inhabiting dynamic landscapes.
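A Gillespie-style simulation makes the bivariate (occupied patches, suitable patches) structure tangible. The event rates below are plausible illustrative choices of my own, not necessarily the paper's exact transition rates:

```python
import numpy as np

def gillespie_sis_dynamic(n0, k0, k_max, c, e, s, d, t_end, rng=None):
    """Simulate occupied patches n and suitable patches k (assumed rates).

    Colonization c*n*(k - n)/k_max, local extinction e*n, habitat creation
    s*(k_max - k), habitat destruction d*k; a destroyed patch is occupied
    with probability n/k, in which case n also decreases.
    """
    rng = rng or np.random.default_rng()
    t, n, k = 0.0, n0, k0
    path = [(t, n, k)]
    while t < t_end:
        rates = np.array([
            c * n * max(k - n, 0) / k_max,   # colonization
            e * n,                           # local extinction
            s * (k_max - k),                 # patch creation
            d * k,                           # patch destruction
        ])
        total = rates.sum()
        if total == 0:                       # absorbing state reached
            break
        t += rng.exponential(1.0 / total)
        event = rng.choice(4, p=rates / total)
        if event == 0:
            n += 1
        elif event == 1:
            n -= 1
        elif event == 2:
            k += 1
        else:
            if rng.random() < n / k:         # destroyed patch was occupied
                n -= 1
            k -= 1
        path.append((t, n, k))
    return path
```

The long-run empirical distribution of (n, k) from such runs, conditioned on non-extinction, is what the bivariate normal approximation to the quasi-stationary distribution describes.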
Abstract:
2000 Mathematics Subject Classification: 62G32, 62G20.
Abstract:
A new approximate solution for the first passage probability of a stationary Gaussian random process is presented, based on an estimate of the mean clump size. A simple expression for the mean clump size is derived in terms of the cumulative normal distribution function, which avoids the lengthy numerical integrations required by similar existing techniques. The method is applied to a linear oscillator and an ideal bandpass process, and good agreement with published results is obtained. By making a slight modification to an existing analysis, it is shown that a widely used empirical result for the asymptotic form of the first passage probability can be deduced theoretically.
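For orientation, the standard ingredients (not the paper's specific derivation) are Rice's mean upcrossing rate and the Poisson-clumping estimate of the first passage probability. For a stationary Gaussian process x(t) with standard deviations σ_x and σ_ẋ and level b:

```latex
% Rice's mean rate of upcrossings of level b:
\nu_b^+ = \frac{1}{2\pi}\,\frac{\sigma_{\dot x}}{\sigma_x}\,
          \exp\!\left(-\frac{b^2}{2\sigma_x^2}\right),
\qquad
% Poisson-clumping estimate of the first passage probability:
P(T > t) \approx \exp\!\left(-\frac{\nu_b^+\,t}{\mathrm{E}[C]}\right),
```

where E[C] is the mean clump size, i.e. the mean number of upcrossings per independent excursion event; the paper's contribution is a simple expression for E[C] in terms of the cumulative normal distribution function.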
Abstract:
The theory of Varley and Cumberbatch [1], giving the intensity of discontinuities in the normal derivatives of the dependent variables at a wave front, can be deduced from the more general results of Prasad [2], which give the complete history of a disturbance not only at the wave front but also within a short distance behind it. In what follows we omit the index M in Eq. (2.25) of Prasad [2].
Abstract:
We consider numerical solutions of nonlinear multiterm fractional integrodifferential equations, where the order of the highest derivative is fractional and positive but is otherwise arbitrary. Here, we extend and unify our previous work, where a Galerkin method was developed for efficiently approximating fractional order operators and where elements of the present differential algebraic equation (DAE) formulation were introduced. The DAE system developed here for arbitrary orders of the fractional derivative includes an added block of equations for each fractional order operator, as well as forcing terms arising from nonzero initial conditions. We motivate and explain the structure of the DAE in detail. We explain how nonzero initial conditions should be incorporated within the approximation. We point out that our approach approximates the system and not a specific solution. Consequently, some questions not easily accessible to solvers of initial value problems, such as stability analyses, can be tackled using our approach. Numerical examples show excellent accuracy. DOI: 10.1115/1.4002516
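The paper's Galerkin/DAE machinery is more elaborate, but the flavour of approximating a fractional operator can be shown with a standard Grünwald-Letnikov discretization (a different, simpler scheme, used here purely for illustration). For f(t) = t with f(0) = 0, the Caputo half-derivative is 2√(t/π), which the sketch reproduces:

```python
import numpy as np

def gl_fractional_derivative(f_vals, alpha, h):
    """Grunwald-Letnikov approximation of the order-alpha derivative.

    f_vals: samples f(t_0), ..., f(t_n) on a uniform grid with step h.
    Coincides with the Caputo derivative when f(0) = 0.
    """
    n = len(f_vals)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):                    # recursive binomial weights
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    out = np.empty(n)
    for j in range(n):                       # convolution with the weights
        out[j] = np.dot(w[:j + 1], f_vals[j::-1]) / h**alpha
    return out

h = 1e-3
t = np.arange(0.0, 1.0 + h, h)
approx = gl_fractional_derivative(t, 0.5, h)         # D^{1/2} of f(t) = t
exact = 2.0 * np.sqrt(t / np.pi)
print(np.max(np.abs(approx - exact)))                # small discretization error
```

Note the whole-history convolution: every fractional operator couples all past states, which is why the paper's formulation adds a block of auxiliary equations per operator rather than storing the full history.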
Abstract:
Boxicity of a graph G(V, E) is the minimum integer k such that G can be represented as the intersection graph of k-dimensional axis-parallel boxes in R^k. Equivalently, it is the minimum number of interval graphs on the vertex set V such that the intersection of their edge sets is E. It is known that boxicity cannot be approximated in polynomial time within an O(n^{0.5-ε}) factor, for any ε > 0, even for graph classes like bipartite, co-bipartite and split graphs, unless NP = ZPP. To date, there is no well-known graph class of unbounded boxicity for which even an n^ε-factor approximation algorithm for computing boxicity is known, for any ε < 1. In this paper, we study the boxicity problem on circular arc graphs - intersection graphs of arcs of a circle. We give a (2 + 1/k)-factor polynomial time approximation algorithm for computing the boxicity of any circular arc graph, along with a corresponding box representation, where k ≥ 1 is its boxicity. For normal circular arc (NCA) graphs, with an NCA model given, this can be improved to an additive 2-factor approximation algorithm. The time complexity of the algorithms to approximately compute the boxicity is O(mn + n^2) in both these cases, and in O(mn + kn^2) time, which is at most O(n^3), we also get the corresponding box representations, where n is the number of vertices of the graph and m is its number of edges. The additive 2-factor algorithm directly works for any proper circular arc graph, since an NCA model for it can be computed in polynomial time.
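A toy check of the interval-graph definition above, with assumptions of my own: the 4-cycle C4 (which has boxicity 2) and two hand-picked interval graphs whose edge sets intersect to exactly C4's edges:

```python
from itertools import combinations

def interval_graph_edges(intervals):
    """Edge set of the interval graph of {vertex: (lo, hi)} (closed intervals)."""
    return {frozenset((u, v))
            for u, v in combinations(intervals, 2)
            if intervals[u][0] <= intervals[v][1]
            and intervals[v][0] <= intervals[u][1]}

# C4: 0-1-2-3-0; its two non-edges are (0, 2) and (1, 3).
c4 = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (0, 3)]}

# Dimension 1 separates the non-edge (0, 2); dimension 2 separates (1, 3).
dim1 = {0: (0, 1), 2: (2, 3), 1: (0, 3), 3: (0, 3)}
dim2 = {1: (0, 1), 3: (2, 3), 0: (0, 3), 2: (0, 3)}

edges = interval_graph_edges(dim1) & interval_graph_edges(dim2)
assert edges == c4      # two interval graphs suffice: boxicity(C4) <= 2
print("C4 represented as the intersection of 2 interval graphs")
```

Pairing the two interval assignments coordinatewise gives each vertex an axis-parallel box in R^2, which is exactly the box representation the paper's algorithms construct.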
Abstract:
The boxicity (resp. cubicity) of a graph G(V, E) is the minimum integer k such that G can be represented as the intersection graph of axis-parallel boxes (resp. cubes) in R^k. Equivalently, it is the minimum number of interval graphs (resp. unit interval graphs) on the vertex set V such that the intersection of their edge sets is E. The problem of computing boxicity (resp. cubicity) is known to be inapproximable, even for restricted graph classes like bipartite, co-bipartite and split graphs, within an O(n^{1-ε}) factor for any ε > 0 in polynomial time, unless NP = ZPP. For any well-known graph class of unbounded boxicity, there is no known algorithm that gives an n^{1-ε}-factor approximation for computing boxicity in polynomial time, for any ε > 0. In this paper, we consider the problem of approximating the boxicity (cubicity) of circular arc graphs - intersection graphs of arcs of a circle. Circular arc graphs are known to have unbounded boxicity, which can be as large as Ω(n). We give a (2 + 1/k)-factor (resp. (2 + ⌈log n⌉/k)-factor) polynomial time approximation algorithm for computing the boxicity (resp. cubicity) of any circular arc graph, where k ≥ 1 is the value of the optimum solution. For normal circular arc (NCA) graphs, with an NCA model given, this can be improved to an additive two-factor approximation algorithm. The time complexity of the algorithms to approximately compute the boxicity (resp. cubicity) is O(mn + n^2) in both these cases, and in O(mn + kn^2) = O(n^3) time we also get their corresponding box (resp. cube) representations, where n is the number of vertices of the graph and m is its number of edges. Our additive two-factor approximation algorithm directly works for any proper circular arc graph, since their NCA models can be computed in polynomial time.
Abstract:
This paper presents numerical simulations of the evolution of one-dimensional normal shocks, their propagation, reflection and interaction in air using a single-diaphragm Riemann shock tube, and validates them using experimental results. A mathematical model is derived for one-dimensional compressible flow of a viscous and conducting medium. The dimensionless form of the mathematical model is used to construct space-time finite element processes based on minimization of the space-time residual functional. The space-time local approximation functions for space-time p-version hierarchical finite elements are considered in higher-order scalar product spaces that permit the desired order of global differentiability of the local approximations in space and time. The resulting algebraic systems from this approach yield unconditionally positive-definite coefficient matrices, and hence ensure a unique numerical solution. The evolution is computed for a space-time strip corresponding to a time increment Δt and then time-marched to obtain the evolution up to any desired value of time. Numerical studies are designed using recently invented hand-driven shock tube (Reddy tube) parameters, high/low-side density and pressure values, and high- and low-pressure-side shock tube lengths, so that numerically computed results can be compared with actual experimental measurements.
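For reference, the standard normal-shock jump relations for a perfect gas (textbook results, not the paper's space-time finite element formulation), with upstream Mach number M_1 and ratio of specific heats γ:

```latex
\frac{\rho_2}{\rho_1} = \frac{(\gamma + 1)\,M_1^2}{(\gamma - 1)\,M_1^2 + 2},
\qquad
\frac{p_2}{p_1} = 1 + \frac{2\gamma}{\gamma + 1}\left(M_1^2 - 1\right),
\qquad
M_2^2 = \frac{(\gamma - 1)\,M_1^2 + 2}{2\gamma M_1^2 - (\gamma - 1)}.
```

For air (γ = 1.4) with M_1 = 2, for example, the pressure ratio is p_2/p_1 = 1 + (2.8/2.4)(4 - 1) = 4.5; such relations provide a sanity check on the computed shock strengths.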