926 results for Approximate Bayesian Computation
Abstract:
Changepoint analysis is a well-established area of statistical research, but it remains relatively unexplored in the context of spatio-temporal point processes. Some substantial differences from standard changepoint analysis have to be taken into account: firstly, at every time point the datum is an irregular pattern of points; secondly, in real situations issues of spatial dependence between points and temporal dependence within time segments arise. Our motivating example concerns the monitoring and recovery of radioactive particles from Sandside beach, North of Scotland; there have been two major changes in the equipment used to detect the particles, representing known potential changepoints in the number of retrieved particles. In addition, offshore particle retrieval campaigns are believed to reduce the particle intensity onshore with an unknown temporal lag; in this latter case, the problem concerns multiple unknown changepoints. We therefore propose a Bayesian approach for detecting multiple changepoints in the intensity function of a spatio-temporal point process, allowing for spatial and temporal dependence within segments. We use log-Gaussian Cox processes, a very flexible class of models suitable for environmental applications that can be implemented using the integrated nested Laplace approximation (INLA), a computationally efficient alternative to Markov chain Monte Carlo methods for approximating the posterior distribution of the parameters. Once the posterior curve is obtained, we propose a few methods for detecting significant changepoints. We present a simulation study, which consists of generating spatio-temporal point pattern series under several scenarios; the performance of the methods is assessed in terms of type I and II errors, detected changepoint locations and accuracy of the segment intensity estimates. We finally apply the above methods to the motivating dataset and obtain sound, interpretable results on the presence and nature of changes in the process.
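To make the core idea concrete in a much simpler setting than the paper's log-Gaussian Cox process with INLA, the following is a minimal sketch of Bayesian changepoint detection for a single changepoint in a purely temporal series of Poisson counts, with conjugate Gamma priors on the piecewise-constant rates so that the posterior over the changepoint location is available in closed form up to normalization. The function name, priors and toy data are illustrative assumptions, not the authors' setup.

```python
import numpy as np
from scipy.special import gammaln

def changepoint_posterior(counts, a=1.0, b=1.0):
    """Posterior over a single changepoint location tau in a sequence of
    Poisson counts with piecewise-constant rates and Gamma(a, b) priors;
    the rates are integrated out analytically (Gamma-Poisson conjugacy)."""
    counts = np.asarray(counts, dtype=float)
    T = len(counts)

    def log_marginal(seg):
        # log marginal likelihood of one segment, dropping the sum(log y!)
        # term, which is identical for every candidate tau
        S, L = seg.sum(), len(seg)
        return (a * np.log(b) - gammaln(a)
                + gammaln(a + S) - (a + S) * np.log(b + L))

    # tau = index of the last observation belonging to the first segment
    log_post = np.array([log_marginal(counts[:tau + 1]) + log_marginal(counts[tau + 1:])
                         for tau in range(T - 1)])
    log_post -= log_post.max()
    post = np.exp(log_post)
    return post / post.sum()

# toy example: the count intensity drops after a (hypothetical) offshore campaign
rng = np.random.default_rng(0)
counts = np.concatenate([rng.poisson(9.0, 30), rng.poisson(4.0, 20)])
print(np.argmax(changepoint_posterior(counts)))  # close to 29, the true changepoint
```

Marginalizing the segment rates analytically reduces the posterior over the changepoint location to a scan over candidate positions; the spatio-temporal setting of the paper replaces these Poisson counts with point patterns and the conjugate update with INLA.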
Abstract:
In technical design processes in the automotive industry, digital prototypes rapidly gain importance because they allow design errors to be detected in early development stages. The technical design process includes the computation of swept volumes for maintainability analysis and clearance checks. The swept volume is very useful, for example, to identify problem areas where a safety distance might not be kept. With the explicit construction of the swept volume, an engineer gets evidence of how the shape of components that come too close has to be modified. In this thesis a concept for the approximation of the outer boundary of a swept volume is developed. For safety reasons, it is essential that the approximation is conservative, i.e., that the swept volume is completely enclosed by the approximation. On the other hand, one wishes to approximate the swept volume as precisely as possible. In this work, we show that the one-sided Hausdorff distance is the adequate measure for the error of the approximation when the intended usage is clearance checks, continuous collision detection and maintainability analysis in CAD. We present two implementations that apply the concept and generate a manifold triangle mesh approximating the outer boundary of a swept volume. Both algorithms are two-phased: a sweeping phase, which generates a conservative voxelization of the swept volume, and the actual mesh generation, which is based on restricted Delaunay refinement. This approach ensures a high precision of the approximation while respecting conservativeness. The benchmarks for our tests are, amongst others, real-world scenarios from the automotive industry. Further, we introduce a method to relate parts of an already computed swept volume boundary to those triangles of the generator that come closest during the sweep. We use this to verify as well as to colorize meshes resulting from our implementations.
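A minimal sketch of the error measure named above, the one-sided Hausdorff distance, computed here between point-sampled surfaces rather than meshes (an illustration of the definition only, not the thesis's implementation); the function name and the sphere/offset toy data are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def one_sided_hausdorff(A, B):
    """One-sided Hausdorff distance from point set A to point set B:
    the maximum over a in A of the distance from a to its nearest point in B."""
    dists, _ = cKDTree(B).query(A)
    return dists.max()

# toy check: a conservative offset sphere of radius 1.05 against the unit sphere;
# the offset encloses the original, and the one-sided distance from the
# approximation back to the true surface bounds the approximation error
rng = np.random.default_rng(1)
dirs = rng.normal(size=(2000, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
print(one_sided_hausdorff(1.05 * dirs, 1.0 * dirs))  # approx. 0.05
```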
Abstract:
Forest models are tools for explaining and predicting the dynamics of forest ecosystems. They simulate forest behavior by integrating information on the underlying processes in trees, soil and atmosphere. Bayesian calibration is the application of probability theory to parameter estimation. It is a method, applicable to all models, that quantifies output uncertainty and identifies key parameters and variables. This study aims at testing the Bayesian calibration procedure on different types of forest models, to evaluate their performances and the uncertainties associated with them. In particular, we aimed at 1) applying a Bayesian framework to calibrate forest models and test their performances in different biomes and different environmental conditions, 2) identifying and solving structure-related issues in simple models, and 3) identifying the advantages of the additional information made available when calibrating forest models with a Bayesian approach. We applied the Bayesian framework to calibrate the Prelued model on eight Italian eddy-covariance sites in Chapter 2. The ability of Prelued to reproduce the estimated Gross Primary Productivity (GPP) was tested over contrasting natural vegetation types that represented a wide range of climatic and environmental conditions. The issues related to Prelued's multiplicative structure were the main topic of Chapter 3: several different MCMC-based procedures were applied within a Bayesian framework to calibrate the model, and their performances were compared. A more complex model was applied in Chapter 4, focusing on the application of the physiology-based model HYDRALL to the forest ecosystem of Lavarone (IT) to evaluate the importance of additional information in the calibration procedure and its impact on model performances, model uncertainties, and parameter estimation. Overall, the Bayesian technique proved to be an excellent and versatile tool for calibrating forest models of different structure and complexity, on different kinds and numbers of variables and with different numbers of parameters involved.
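As a minimal sketch of the kind of MCMC-based Bayesian calibration described here (a generic random-walk Metropolis sampler applied to a toy saturating light-response curve standing in for a forest model; all names, priors, step sizes and data below are illustrative assumptions, not the study's actual models or sites):

```python
import numpy as np

def metropolis_calibrate(model, data, sigma_obs, theta0, prior_logpdf,
                         n_iter=20000, step=0.05, seed=0):
    """Random-walk Metropolis sampler for calibrating model parameters against
    observations assumed to carry independent Gaussian error with sd sigma_obs."""
    rng = np.random.default_rng(seed)

    def log_post(theta):
        resid = data - model(theta)
        return prior_logpdf(theta) - 0.5 * np.sum((resid / sigma_obs) ** 2)

    theta = np.array(theta0, dtype=float)
    lp = log_post(theta)
    chain = []
    for _ in range(n_iter):
        prop = theta + step * rng.normal(size=theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)

# toy usage: calibrate a two-parameter light-response curve against synthetic "GPP"
par = np.linspace(0.0, 2000.0, 50)
obs = 20.0 * par / (400.0 + par) + np.random.default_rng(1).normal(0.0, 1.0, par.size)
model = lambda th: th[0] * par / (th[1] + par)
flat_prior = lambda th: 0.0 if np.all((th > 0) & (th < 1e4)) else -np.inf
chain = metropolis_calibrate(model, obs, 1.0, [10.0, 200.0], flat_prior,
                             step=np.array([0.5, 10.0]))
print(chain[5000:].mean(axis=0))  # posterior means, roughly near the generating values (20, 400)
```

Swapping in a different forest model, prior or likelihood only changes `model` and `log_post`; the sampling scheme itself stays the same.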
Abstract:
This work aims to develop a formal analogy between dynamical systems and the theory of computation, in relation to the emergence of biological properties from such systems. The first chapter is devoted to extending the theory of Turing machines to a broader context of computable and weakly computable functions. We then show how a continuous dynamical system can be processed by a computing machine, and how informational properties such as universality can be naturally extended to physics through this formal bridge. In the second chapter we apply the theoretical results derived in the first to the development of a chemical system exhibiting such universality properties, paying particular attention to the physical plausibility of that system.
Abstract:
A general approach is presented for implementing discrete transforms as a set of first-order or second-order recursive digital filters. Clenshaw's recurrence formulae are used to formulate the second-order filters. The resulting structure is suitable for efficient implementation of discrete transforms in VLSI or FPGA circuits. The general approach is applied to the discrete Legendre transform as an illustration.
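As an illustration of the recurrence that such second-order structures are built on (a sketch of Clenshaw's algorithm only, not the paper's filter derivation), the snippet below evaluates a Legendre expansion f(x) = Σ_k c_k P_k(x) at a sample point, which is the synthesis step of a discrete Legendre transform; the function name and interface are assumptions.

```python
import numpy as np

def clenshaw_legendre(coeffs, x):
    """Evaluate sum_k coeffs[k] * P_k(x) with Clenshaw's recurrence, using the
    Legendre three-term recurrence (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}."""
    N = len(coeffs) - 1
    b1 = 0.0  # b_{k+1}
    b2 = 0.0  # b_{k+2}
    for k in range(N, 0, -1):
        alpha_k = (2 * k + 1) / (k + 1) * x   # coefficient of P_k in P_{k+1}
        beta_k1 = -(k + 1) / (k + 2)          # beta_{k+1} in the recurrence
        b0 = coeffs[k] + alpha_k * b1 + beta_k1 * b2
        b2, b1 = b1, b0
    return coeffs[0] + x * b1 - 0.5 * b2      # P_0 = 1, P_1 = x, beta_1 = -1/2

# check against NumPy's Legendre evaluation
c = np.array([0.3, -1.2, 0.8, 0.5])
print(clenshaw_legendre(c, 0.37), np.polynomial.legendre.legval(0.37, c))  # should agree
```

In the recursive-filter picture, this three-term recurrence is what each second-order section implements.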
Abstract:
The Rankin convolution type Dirichlet series D_{F,G}(s) of Siegel modular forms F and G of degree two, which was introduced by Kohnen and the second author, is computed numerically for various F and G. In particular, we prove that the series D_{F,G}(s), which share the same functional equation and analytic behavior with the spinor L-functions of eigenforms of the same weight, are not linear combinations of those. In order to conduct these experiments, a numerical method to compute the Petersson scalar products of Jacobi forms is developed and discussed in detail.
Abstract:
This letter presents a new recursive method for computing discrete polynomial transforms. The method is shown for forward and inverse transforms of the Hermite, binomial, and Laguerre transforms. The recursive flow diagrams require only 2N additions, 2(N+1) memory units, and N+1 multipliers for the (N+1)-point Hermite and binomial transforms. The recursive flow diagram for the (N+1)-point Laguerre transform requires 2N additions, 2(N+1) memory units, and 2(N+1) multipliers. The transform computation time for all of these transforms is O(N).
Abstract:
Clenshaw's recurrence formula is used to derive recursive algorithms for the discrete cosine transform (DCT) and the inverse discrete cosine transform (IDCT). The recursive DCT algorithm presented here requires one fewer delay element per coefficient and one fewer multiply operation per coefficient compared with two recently proposed methods. Clenshaw's recurrence formula provides a unified development for the recursive DCT and IDCT algorithms. The recursive algorithms apply to arbitrary transform lengths and are appropriate for VLSI implementation.
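Since this abstract describes a Clenshaw-based recursive DCT explicitly, here is a small sketch of how the recurrence yields such an algorithm, computing the (unnormalized) DCT-II X[k] = Σ_n x[n] cos(πk(n+1/2)/N) with one second-order backward recursion per coefficient; it illustrates the technique rather than reproducing the paper's exact structure, and the function name is an assumption.

```python
import numpy as np

def dct2_clenshaw(x):
    """Unnormalized DCT-II via a Clenshaw/Goertzel-style second-order recursion:
    X[k] = sum_n x[n] * cos(pi * k * (n + 0.5) / N)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    X = np.empty(N)
    for k in range(N):
        theta = np.pi * k / N
        c = 2.0 * np.cos(theta)
        b1 = b2 = 0.0                       # b_{n+1}, b_{n+2}
        for n in range(N - 1, -1, -1):      # backward recursion over the samples
            b0 = x[n] + c * b1 - b2
            b2, b1 = b1, b0
        X[k] = np.cos(theta / 2.0) * (b1 - b2)
    return X

# check against the direct definition
x = np.random.default_rng(2).normal(size=8)
direct = np.array([sum(x[n] * np.cos(np.pi * k * (n + 0.5) / len(x))
                       for n in range(len(x))) for k in range(len(x))])
print(np.allclose(dct2_clenshaw(x), direct))  # True
```

Each coefficient needs only the two delayed values b1, b2 and a single multiplier 2·cos(θ), which is what makes second-order recursive structures of this kind attractive for VLSI implementation.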
Abstract:
Recombinant human growth hormone (rhGH) therapy is used in the long-term treatment of children with growth disorders, but there is considerable treatment response variability. The exon 3-deleted growth hormone receptor polymorphism (GHR(d3)) may account for some of this variability. The authors performed a systematic review (to April 2011), including investigator-only data, to quantify the effects of the GHR(fl-d3) and GHR(d3-d3) genotypes on rhGH therapy response and used a recently established Bayesian inheritance model-free approach to meta-analyze the data. The primary outcome was the 1-year change-in-height standard-deviation score for the 2 genotypes. Eighteen data sets from 12 studies (1,527 children) were included. After several prior assumptions were tested, the most appropriate inheritance model was codominant (posterior probability = 0.93). Compared with noncarriers, carriers had median differences in 1-year change-in-height standard-deviation score of 0.09 (95% credible interval (CrI): 0.01, 0.17) for GHR(fl-d3) and of 0.14 (95% CrI: 0.02, 0.26) for GHR(d3-d3). However, the between-study standard deviation of 0.18 (95% CrI: 0.10, 0.33) was considerable. The authors tested by meta-regression for potential modifiers and found no substantial influence. They conclude that 1) the GHR(d3) polymorphism inheritance is codominant, contrasting with previous reports; 2) GHR(d3) genotypes account for modest increases in rhGH effects in children; and 3) considerable unexplained variability in responsiveness remains.