Abstract:
The classic vertical advection-diffusion (VAD) balance is a central concept in studying the ocean heat budget, in particular in simple climate models (SCMs). Here we present a new framework for calibrating the parameters of the VAD equation to the vertical ocean heat balance of two fully coupled climate models, in a way that is traceable to the models' circulation as well as to their vertical mixing and diffusion processes. Based on temperature diagnostics, we derive an effective vertical velocity w∗ and turbulent diffusivity k∗ for each individual physical process. In steady state, we find that the residual vertical velocity and diffusivity change sign at mid-depth, highlighting the different regional contributions of isopycnal and diapycnal diffusion in balancing the models' residual advection and vertical mixing. We quantify the impact of the time evolution of the effective quantities under a transient 1% CO2 simulation and link the results to the parameters of currently employed SCMs.
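For reference, a minimal statement of the classic VAD balance in the effective form suggested by the abstract's w∗ and k∗ (writing T for temperature and z for the vertical coordinate, which are assumptions of this sketch rather than notation from the paper):

% Classic vertical advection--diffusion balance for temperature T(z);
% w^* and k^* denote the effective vertical velocity and turbulent diffusivity.
\[
  w^{*}\,\frac{\partial T}{\partial z}
  \;=\;
  \frac{\partial}{\partial z}\!\left( k^{*}\,\frac{\partial T}{\partial z} \right)
\]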
Abstract:
The disadvantage of the majority of data assimilation schemes is the assumption that the conditional probability density function of the state of the system given the observations [the posterior probability density function (PDF)] is distributed either locally or globally as a Gaussian. The advantage, however, is that through various mechanisms they ensure initial conditions that are predominantly in linear balance, so that spurious gravity wave generation is suppressed. The equivalent-weights particle filter is a data assimilation scheme that allows for a representation of a potentially multimodal posterior PDF. It does this via proposal densities that add extra terms to the model equations, which means that the advantage of the traditional data assimilation schemes, namely generating predominantly balanced initial conditions, is no longer guaranteed. This paper looks in detail at the impact the equivalent-weights particle filter has on dynamical balance and gravity wave generation in a primitive equation model. The primary conclusions are that (i) provided the model error covariance matrix imposes geostrophic balance, each additional term required by the equivalent-weights particle filter is also geostrophically balanced; (ii) the relaxation term required to ensure the particles are in the locality of the observations has little effect on gravity waves and actually induces a reduction in gravity wave energy if sufficiently large; and (iii) the equivalent-weights term, which leads to the particles having equivalent significance in the posterior PDF, produces a change in gravity wave energy comparable to the stochastic model error. Thus, the scheme does not produce significant spurious gravity wave energy and so has potential for use in real high-dimensional geophysical applications.
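As a point of reference for the balance discussed above, geostrophic balance is the standard balance between the Coriolis force and the horizontal pressure gradient; the symbols below (f for the Coriolis parameter, u_h for horizontal velocity, p for pressure, rho_0 for a reference density) are generic textbook notation, not taken from the paper:

% Geostrophic balance: the Coriolis force balances the horizontal pressure gradient.
\[
  f\,\hat{\mathbf{k}} \times \mathbf{u}_{h}
  \;=\;
  -\frac{1}{\rho_{0}}\,\nabla_{h}\, p
\]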
Abstract:
This paper investigates the feasibility of using approximate Bayesian computation (ABC) to calibrate and evaluate complex individual-based models (IBMs). As ABC evolves, various versions are emerging, but here we explore only the most accessible version, rejection-ABC. Rejection-ABC involves running the model a large number of times, with parameters drawn randomly from their prior distributions, and then retaining the simulations closest to the observations. Although well established in some fields, whether ABC will work with ecological IBMs is still uncertain. Rejection-ABC was applied to an existing 14-parameter earthworm energy budget IBM for which the available data consist of body mass growth and cocoon production in four experiments. ABC was able to narrow the posterior distributions of seven parameters, estimating credible intervals for each. ABC's accepted values produced slightly better fits than the literature values did. The accuracy of the analysis was assessed using cross-validation and coverage, currently the best available tests. Of the seven unnarrowed parameters, ABC revealed that three were correlated with other parameters, while the remaining four were not estimable given the data available. It is often desirable to compare models to see whether all component modules are necessary. Here we used ABC model selection to compare the full model with a simplified version which removed the earthworm's movement and much of the energy budget. We were able to show that inclusion of the energy budget is necessary for a good fit to the data. We show how our methodology can inform future modelling cycles, and briefly discuss how more advanced versions of ABC may be applicable to IBMs. We conclude that ABC has the potential to represent uncertainty in model structure, parameters and predictions, and to embed the often complex process of optimizing an IBM's structure and parameters within an established statistical framework, thereby making the process more transparent and objective.
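A minimal sketch of the rejection-ABC step described above, under illustrative assumptions: a generic simulate(theta) function returning summary statistics, a prior_sampler() drawing one parameter vector, and a Euclidean distance between summaries. These names and the acceptance-fraction rule are hypothetical placeholders, not the paper's implementation.

import numpy as np

def rejection_abc(observed_summary, simulate, prior_sampler,
                  n_sims=100_000, accept_fraction=0.001):
    """Draw parameters from the prior, simulate, and keep the draws whose
    simulated summaries lie closest to the observed summaries."""
    draws, distances = [], []
    for _ in range(n_sims):
        theta = prior_sampler()                        # sample from the prior
        sim_summary = np.asarray(simulate(theta))      # run the model once
        distances.append(np.linalg.norm(sim_summary - observed_summary))
        draws.append(theta)
    n_keep = max(1, int(n_sims * accept_fraction))     # retain the closest simulations
    keep = np.argsort(distances)[:n_keep]
    return np.array(draws)[keep]                       # approximate posterior sample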
Abstract:
In this paper an equation is derived for the mean backscatter cross section of an ensemble of snowflakes at centimeter and millimeter wavelengths. It uses the Rayleigh–Gans approximation, which has previously been found to be applicable at these wavelengths due to the low density of snow aggregates. Although the internal structure of an individual snowflake is random and unpredictable, the authors find from simulations of the aggregation process that their structure is “self-similar” and can be described by a power law. This enables an analytic expression to be derived for the backscatter cross section of an ensemble of particles as a function of their maximum dimension in the direction of propagation of the radiation, the volume of ice they contain, a variable describing their mean shape, and two variables describing the shape of the power spectrum. The exponent of the power law is found to be −. In the case of 1-cm snowflakes observed by a 3.2-mm-wavelength radar, the backscatter is 40–100 times larger than that of a homogeneous ice–air spheroid with the same mass, size, and aspect ratio.
Discontinuous Galerkin methods for the p-biharmonic equation from a discrete variational perspective
Abstract:
We study discontinuous Galerkin approximations of the p-biharmonic equation for p∈(1,∞) from a variational perspective. We propose a discrete variational formulation of the problem based on an appropriate definition of a finite element Hessian and study convergence of the method (without rates) using a semicontinuity argument. We also present numerical experiments aimed at testing the robustness of the method.
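For context, the p-biharmonic problem referred to above can be written, assuming homogeneous Dirichlet data for concreteness (a choice made for this sketch, not stated in the abstract), as a strong form together with the energy whose minimiser it characterises:

% The p-biharmonic equation (homogeneous Dirichlet data assumed for concreteness)
\[
  \Delta\!\left( |\Delta u|^{p-2}\,\Delta u \right) = f \quad \text{in } \Omega,
  \qquad
  u = \partial_{n} u = 0 \quad \text{on } \partial\Omega,
\]
% and the associated variational (minimisation) formulation
\[
  u \;=\; \operatorname*{arg\,min}_{v}\;
  \frac{1}{p}\int_{\Omega} |\Delta v|^{p}\,\mathrm{d}x
  \;-\; \int_{\Omega} f\,v\,\mathrm{d}x .
\]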
Abstract:
An equation of Monge-Ampère type has, for the first time, been solved numerically on the surface of the sphere in order to generate optimally transported (OT) meshes, equidistributed with respect to a monitor function. Optimal transport generates meshes that keep the same connectivity as the original mesh, making them suitable for r-adaptive simulations, in which the equations of motion can be solved in a moving frame of reference in order to avoid mapping the solution between old and new meshes and to avoid load-balancing problems on parallel computers. The semi-implicit solution of the Monge-Ampère type equation involves a new linearisation of the Hessian term, and exponential maps are used to map from old to new meshes on the sphere. The determinant of the Hessian is evaluated as the change in volume between old and new mesh cells, rather than using numerical approximations to the gradients. OT meshes are generated to compare with centroidal Voronoi tessellations on the sphere and are found to have advantages and disadvantages; OT equidistribution is more accurate, the number of iterations to convergence is independent of the mesh size, face skewness is reduced and the connectivity does not change. However, anisotropy is higher and the OT meshes are non-orthogonal. It is shown that optimal transport on the sphere leads to meshes that do not tangle. However, tangling can be introduced by numerical errors in calculating the gradient of the mesh potential. Methods for alleviating this problem are explored. Finally, OT meshes are generated using observed precipitation as a monitor function, in order to demonstrate the potential power of the technique.
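For orientation, the equidistribution condition that gives rise to a Monge-Ampère type equation can be written in its familiar flat-space form as follows; on the sphere, as the abstract notes, the map x → x + ∇φ is replaced by an exponential map. Here m is the monitor function and φ the mesh potential; this is a generic statement of the technique, not the paper's spherical formulation.

% Equidistribution with respect to a monitor function m (flat-space statement);
% phi is the mesh potential, and the right-hand side is the mean of m over Omega.
\[
  m\!\left( \mathbf{x} + \nabla\phi \right)\,
  \det\!\left( \mathbf{I} + \nabla\nabla\phi \right)
  \;=\;
  \frac{1}{|\Omega|}\int_{\Omega} m \,\mathrm{d}\mathbf{x}
\]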
Abstract:
The goal of this work is the efficient solution of the heat equation with Dirichlet or Neumann boundary conditions using the Boundary Element Method (BEM). Efficiently solving the heat equation is useful, as it is a simple model problem for other types of parabolic problems. In complicated spatial domains, as often found in engineering, BEM can be beneficial since only the boundary of the domain has to be discretised. This makes BEM simpler to apply than domain methods such as finite elements and finite differences, which are conventionally combined with time-stepping schemes to solve this problem. The contribution of this work is to further decrease the complexity of solving the heat equation, leading both to speed gains (in CPU time) and to reduced memory requirements for solving the same problem. To do this we combine the complexity gains of boundary reduction by integral equation formulations with a discretisation using wavelet bases. This reduces the total work to O(h
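The model problem referred to above is, stated here for the Dirichlet case (the Neumann case instead prescribes the normal derivative on the boundary; the symbols g and u_0 are generic notation assumed for this sketch):

% Heat equation with Dirichlet boundary data on a domain Omega over the interval (0,T).
\[
  \partial_{t} u - \Delta u = 0 \quad \text{in } \Omega \times (0,T),
  \qquad
  u = g \quad \text{on } \partial\Omega \times (0,T),
  \qquad
  u(\cdot,0) = u_{0} \quad \text{in } \Omega .
\]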
Abstract:
Bloom filters are a data structure for storing data in a compressed form. They offer excellent space and time efficiency at the cost of some loss of accuracy (so-called lossy compression). This work presents a yes-no Bloom filter, a data structure consisting of two parts: the yes-filter, which is a standard Bloom filter, and the no-filter, which is another Bloom filter whose purpose is to represent those objects that were recognised incorrectly by the yes-filter (that is, to recognise the false positives of the yes-filter). By querying the no-filter after an object has been recognised by the yes-filter, we get a chance of rejecting it, which improves the accuracy of data recognition in comparison with a standard Bloom filter of the same total length. A further increase in accuracy is possible if one chooses the objects to include in the no-filter so that the no-filter recognises as many false positives as possible but no true positives, thus producing the most accurate yes-no Bloom filter among all yes-no Bloom filters. This paper studies how optimization techniques can be used to maximize the number of false positives recognised by the no-filter, under the constraint that it recognises no true positives. To achieve this aim, an Integer Linear Program (ILP) is proposed for the optimal selection of false positives. In practice the problem size is normally large, making the optimal solution intractable. Given the similarity of the ILP to the Multidimensional Knapsack Problem, an Approximate Dynamic Programming (ADP) model is developed, making use of a reduced ILP for the value function approximation. Numerical results show that the ADP model performs best in comparison with a number of heuristics as well as the CPLEX built-in branch-and-bound (B&B) solver, and it is therefore recommended for use in yes-no Bloom filters. In the wider context of the study of lossy compression algorithms, our research is an example of how the arsenal of optimization methods can be applied to improving the accuracy of compressed data.
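A minimal sketch of the yes-no query path described above, under illustrative assumptions (the class names, sizes and SHA-256-based hashing are hypothetical choices, not the paper's implementation): the yes-filter stores the set itself, the no-filter stores false positives selected, for example, by the ILP/ADP procedure, and a query is accepted only if the yes-filter matches and the no-filter does not.

import hashlib

class BloomFilter:
    def __init__(self, m, k):
        self.m, self.k = m, k              # m bits, k hash functions
        self.bits = [False] * m

    def _positions(self, item):
        # Derive k bit positions from salted SHA-256 digests of the item.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = True

    def query(self, item):
        return all(self.bits[p] for p in self._positions(item))

class YesNoBloomFilter:
    def __init__(self, m_yes, m_no, k):
        self.yes = BloomFilter(m_yes, k)   # standard Bloom filter over the stored set
        self.no = BloomFilter(m_no, k)     # holds selected false positives of the yes-filter

    def add(self, item):
        self.yes.add(item)

    def mark_false_positive(self, item):
        # Items chosen (e.g. by the ILP/ADP selection) to be rejected explicitly.
        self.no.add(item)

    def query(self, item):
        # Accept only if the yes-filter matches and the no-filter does not.
        return self.yes.query(item) and not self.no.query(item)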
Abstract:
Approximate Bayesian computation (ABC) is a popular family of algorithms which perform approximate parameter inference when numerical evaluation of the likelihood function is not possible but data can be simulated from the model. They return a sample of parameter values which produce simulations close to the observed dataset. A standard approach is to reduce the simulated and observed datasets to vectors of summary statistics and accept when the difference between these is below a specified threshold. ABC can also be adapted to perform model choice. In this article, we present a new software package for R, abctools which provides methods for tuning ABC algorithms. This includes recent dimension reduction algorithms to tune the choice of summary statistics, and coverage methods to tune the choice of threshold. We provide several illustrations of these routines on applications taken from the ABC literature.