80 results for 010301 Numerical Analysis
Abstract:
The paper considers second kind integral equations of the form x(s) = y(s) + ∫_{-∞}^{∞} k(s − t) z(t) x(t) dt (abbreviated x = y + K_z x), in which the kernel is of convolution form k(s − t) and the factor z is bounded but otherwise arbitrary, so that equations of Wiener-Hopf type are included as a special case. Conditions on a set W are obtained such that a generalized Fredholm alternative is valid: if W satisfies these conditions and I − K_z is injective for each z ∈ W, then I − K_z is invertible for each z ∈ W and the operators (I − K_z)^{-1} are uniformly bounded. As a special case some classical results relating to Wiener-Hopf operators are reproduced. A finite section version of the above equation (with the range of integration reduced to [−a, a]) is considered, as are projection and iterated projection methods for its solution. The inverses of the finite section versions of I − K_z are shown to be uniformly bounded (in z and a) for all a sufficiently large. Uniform stability and convergence results, for the projection and iterated projection methods, are obtained. The argument generalizes an idea in collectively compact operator theory. Some new results in this theory are obtained and applied to the analysis of projection methods for the above equation when z is compactly supported and k(s − t) is replaced by the general kernel k(s, t). A boundary integral equation of the above type, which models outdoor sound propagation over inhomogeneous level terrain, illustrates the application of the theoretical results developed.
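To make the finite section idea above concrete, here is a minimal Python sketch (assuming an arbitrary L1 convolution kernel k, bounded factor z and right-hand side y, none of which come from the paper) that discretises the truncated equation x = y + K_z x on [−a, a] with a simple Nyström rule and solves the resulting linear system; the paper itself analyses projection and iterated projection methods rather than this quadrature scheme.

    import numpy as np

    # Finite section version of x = y + K_z x on [-a, a], discretised with the trapezoid rule.
    # The kernel k, factor z and data y below are illustrative choices only.
    a, n = 20.0, 400
    s, h = np.linspace(-a, a, n, retstep=True)       # quadrature nodes
    w = np.full(n, h); w[0] = w[-1] = h / 2          # trapezoid weights

    k = lambda r: 0.5 * np.exp(-np.abs(r))           # convolution kernel k(s - t), in L1
    z = lambda t: 0.4 * np.cos(t)                    # bounded factor z(t)
    y = lambda t: np.exp(-t ** 2)                    # right-hand side

    K = k(s[:, None] - s[None, :]) * z(s)[None, :] * w[None, :]   # discretised finite section operator
    x = np.linalg.solve(np.eye(n) - K, y(s))         # solve (I - K_{z,a}) x = y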
Abstract:
In this paper we propose and analyze a hybrid $hp$ boundary element method for the solution of problems of high frequency acoustic scattering by sound-soft convex polygons, in which the approximation space is enriched with oscillatory basis functions which efficiently capture the high frequency asymptotics of the solution. We demonstrate, both theoretically and via numerical examples, exponential convergence with respect to the order of the polynomials, moreover providing rigorous error estimates for our approximations to the solution and to the far field pattern, in which the dependence on the frequency of all constants is explicit. Importantly, these estimates prove that, to achieve any desired accuracy in the computation of these quantities, it is sufficient to increase the number of degrees of freedom in proportion to the logarithm of the frequency as the frequency increases, in contrast to the at least linear growth required by conventional methods.
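The following short sketch (not the paper's Galerkin boundary element method; the wavenumber, interval and slowly varying envelope are invented for illustration) shows the effect of the enrichment: a field of the form V(s) e^{iks}, with V smooth, is captured to high accuracy by a handful of oscillatory basis functions s^j e^{iks}, independently of how large k is.

    import numpy as np

    # Least-squares fit in the oscillatory basis {s^j * exp(i k s)}, j = 0..p.
    k, p = 200.0, 6                                   # hypothetical wavenumber and polynomial order
    s = np.linspace(0.0, 1.0, 2000)
    u = (1.0 + 0.3 * s ** 2) / (1.0 + s) * np.exp(1j * k * s)   # oscillatory model field

    B = (s[:, None] ** np.arange(p + 1)) * np.exp(1j * k * s)[:, None]
    c, *_ = np.linalg.lstsq(B, u, rcond=None)
    print("relative error with", p + 1, "degrees of freedom:",
          np.linalg.norm(B @ c - u) / np.linalg.norm(u))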
Abstract:
We derive energy-norm a posteriori error bounds, using gradient recovery (ZZ) estimators to control the spatial error, for fully discrete schemes for the linear heat equation. This appears to be the first completely rigorous derivation of ZZ estimators for fully discrete schemes for evolution problems, without any restrictive assumption on the timestep size. An essential tool for the analysis is the elliptic reconstruction technique. Our theoretical results are backed with extensive numerical experimentation aimed at (a) testing the practical sharpness and asymptotic behaviour of the error estimator against the error, and (b) deriving an adaptive method based on our estimators. An extra novelty provided is an implementation of a coarsening error "preindicator", with a complete implementation guide in ALBERTA in the appendix.
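As a rough illustration of the gradient recovery (ZZ) estimator in its simplest setting, the sketch below computes the recovered gradient and the element indicators for a piecewise linear function on a 1D mesh; it is a stationary, one-dimensional analogue of the spatial estimator only, not the fully discrete space-time estimator of the paper, and the mesh and nodal values are arbitrary.

    import numpy as np

    # 1D ZZ recovery: average element gradients to the nodes, then measure the mismatch.
    x = np.linspace(0.0, 1.0, 21)                 # mesh nodes
    u = np.sin(np.pi * x)                         # nodal values of a P1 function
    h = np.diff(x)
    g = np.diff(u) / h                            # piecewise-constant element gradients

    G = np.empty_like(x)                          # recovered (nodal, continuous) gradient
    G[1:-1] = (h[:-1] * g[:-1] + h[1:] * g[1:]) / (h[:-1] + h[1:])
    G[0], G[-1] = g[0], g[-1]

    # element indicators: exact L2 norm of the linear function (G - g) on each element
    A, B = G[:-1] - g, G[1:] - g
    eta2 = h / 3.0 * (A ** 2 + A * B + B ** 2)
    print("ZZ estimator:", np.sqrt(eta2.sum()))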
Abstract:
We study the approximation of harmonic functions by means of harmonic polynomials in two-dimensional, bounded, star-shaped domains. Assuming that the functions possess analytic extensions to a delta-neighbourhood of the domain, we prove exponential convergence of the approximation error with respect to the degree of the approximating harmonic polynomial. All the constants appearing in the bounds are explicit and depend only on the shape-regularity of the domain and on delta. We apply the obtained estimates to show exponential convergence with rate O(exp(−b N^{1/2})), N being the number of degrees of freedom and b > 0, of an hp-dGFEM discretisation of the Laplace equation based on piecewise harmonic polynomials. This result is an improvement over the classical rate O(exp(−b N^{1/3})), and is due to the use of harmonic polynomial spaces, as opposed to complete polynomial spaces.
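A quick numerical illustration of the approximation result (not the hp-dGFEM itself): the sketch below fits the harmonic function Re(e^z) on the unit disk by least squares in the harmonic polynomial basis {Re z^n, Im z^n} sampled on the boundary, where by the maximum principle the boundary misfit controls the interior error; the function and domain are illustrative choices.

    import numpy as np

    theta = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
    zb = np.exp(1j * theta)                               # boundary of the unit disk
    u = np.real(np.exp(zb))                               # Re(e^z) = e^x cos(y), harmonic

    for p in (2, 4, 8, 16):                               # harmonic polynomial degree
        powers = zb[:, None] ** np.arange(p + 1)
        B = np.hstack([powers.real, powers.imag[:, 1:]])  # basis {Re z^n, Im z^n}
        c, *_ = np.linalg.lstsq(B, u, rcond=None)
        print(p, np.abs(B @ c - u).max())                 # error decays exponentially in p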
Abstract:
In this paper we propose and analyse a hybrid numerical-asymptotic boundary element method for the solution of problems of high frequency acoustic scattering by a class of sound-soft nonconvex polygons. The approximation space is enriched with carefully chosen oscillatory basis functions; these are selected via a study of the high frequency asymptotic behaviour of the solution. We demonstrate via a rigorous error analysis, supported by numerical examples, that to achieve any desired accuracy it is sufficient for the number of degrees of freedom to grow only in proportion to the logarithm of the frequency as the frequency increases, in contrast to the at least linear growth required by conventional methods. This appears to be the first such numerical analysis result for any problem of scattering by a nonconvex obstacle. Our analysis is based on new frequency-explicit bounds on the normal derivative of the solution on the boundary and on its analytic continuation into the complex plane.
Abstract:
We propose and analyse a hybrid numerical–asymptotic hp boundary element method (BEM) for time-harmonic scattering of an incident plane wave by an arbitrary collinear array of sound-soft two-dimensional screens. Our method uses an approximation space enriched with oscillatory basis functions, chosen to capture the high-frequency asymptotics of the solution. We provide a rigorous frequency-explicit error analysis which proves that the method converges exponentially as the number of degrees of freedom N increases, and that to achieve any desired accuracy it is sufficient to increase N in proportion to the square of the logarithm of the frequency as the frequency increases (standard BEMs require N to increase at least linearly with frequency to retain accuracy). Our numerical results suggest that fixed accuracy can in fact be achieved at arbitrarily high frequencies with a frequency-independent computational cost, when the oscillatory integrals required for implementation are computed using Filon quadrature. We also show how our method can be applied to the complementary ‘breakwater’ problem of propagation through an aperture in an infinite sound-hard screen.
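Since the implementation relies on Filon quadrature for the oscillatory integrals, the following sketch shows the basic Filon idea on a model integral of f(t) exp(ikt): interpolate f piecewise linearly and integrate the interpolant against exp(ikt) in closed form, so that the number of panels needed is set by the smoothness of f rather than by k. The integrand, interval and wavenumber are arbitrary, and this is not the quadrature code used in the paper.

    import numpy as np

    def filon_linear(f, a, b, k, n=32):
        # Filon-type rule: exact integration of the piecewise linear interpolant of f against exp(i k t)
        t = np.linspace(a, b, n + 1)
        ft = f(t)
        t0, t1, f0, f1 = t[:-1], t[1:], ft[:-1], ft[1:]
        ik = 1j * k
        m0 = (np.exp(ik * t1) - np.exp(ik * t0)) / ik                      # panel moment of 1
        m1 = (t1 * np.exp(ik * t1) - t0 * np.exp(ik * t0)) / ik - m0 / ik  # panel moment of t
        beta = (f1 - f0) / (t1 - t0)
        alpha = f0 - beta * t0
        return np.sum(alpha * m0 + beta * m1)

    f = lambda t: 1.0 / (1.0 + t ** 2)
    print(filon_linear(f, 0.0, 1.0, k=1.0e4))    # accuracy is governed by n, not by the large k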
Abstract:
We design consistent discontinuous Galerkin finite element schemes for the approximation of a quasi-incompressible two phase flow model of Allen–Cahn/Cahn–Hilliard/Navier–Stokes–Korteweg type which allows for phase transitions. We show that the scheme is mass conservative and monotonically energy dissipative. In this case the dissipation is isolated to discrete equivalents of those effects already causing dissipation on the continuous level, that is, there is no artificial numerical dissipation added into the scheme. In this sense the methods are consistent with the energy dissipation of the continuous PDE system.
Discontinuous Galerkin methods for the p-biharmonic equation from a discrete variational perspective
Abstract:
We study discontinuous Galerkin approximations of the p-biharmonic equation for p∈(1,∞) from a variational perspective. We propose a discrete variational formulation of the problem based on an appropriate definition of a finite element Hessian and study convergence of the method (without rates) using a semicontinuity argument. We also present numerical experiments aimed at testing the robustness of the method.
Abstract:
The Bloom filter is a space-efficient randomized data structure for representing a set and supporting membership queries. Bloom filters intrinsically allow false positives. However, the space savings they offer outweigh the disadvantage if the false positive rates are kept sufficiently low. Inspired by the recent application of the Bloom filter in a novel multicast forwarding fabric, this paper proposes a variant of the Bloom filter, the optihash. The optihash introduces an optimization for the false positive rate at the stage of Bloom filter formation, using the same amount of space at the cost of slightly more processing than the classic Bloom filter. Often Bloom filters are used in situations where a fixed amount of space is a primary constraint. We present the optihash as a good alternative to Bloom filters since the amount of space is the same and the improvements in false positives can justify the additional processing. Specifically, we show via simulations and numerical analysis that using the optihash the occurrence of false positives can be reduced and controlled at the cost of a small amount of additional processing. The simulations are carried out for in-packet forwarding. In this framework, the Bloom filter is used as a compact link/route identifier and is placed in the packet header to encode the route. At each node, the Bloom filter is queried for membership in order to make forwarding decisions. A false positive in the forwarding decision results in packets being forwarded along an unintended outgoing link. By using the optihash, false positives can be reduced. The optimization processing is carried out in an entity termed the Topology Manager, which is part of the control plane of the multicast forwarding fabric. This processing is only carried out on a per-session basis, not for every packet. The aim of this paper is to present the optihash and evaluate its false positive performance via simulations, in order to measure the influence of different parameters on the false positive rate. The false positive rate of the optihash is then compared with the false positive probability of the classic Bloom filter.
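For context, here is a minimal classic Bloom filter in Python (the optihash variant itself is not reproduced here, and the bit-array size, hash count and link identifiers are arbitrary): elements are hashed to h bit positions, insertion sets those bits, and a query reports membership if all of them are set, which is where false positives come from.

    import hashlib

    class BloomFilter:
        def __init__(self, m=256, h=4):
            self.m, self.h, self.bits = m, h, 0          # m-bit array stored as a Python int
        def _positions(self, item):
            for i in range(self.h):
                d = hashlib.sha256(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(d[:8], "big") % self.m
        def add(self, item):
            for p in self._positions(item):
                self.bits |= 1 << p
        def __contains__(self, item):
            return all(self.bits >> p & 1 for p in self._positions(item))

    bf = BloomFilter()
    for link in ("link-1", "link-7", "link-42"):         # e.g. the links making up a multicast route
        bf.add(link)
    print("link-7" in bf, "link-99" in bf)               # True and (usually) False; a True here would be a false positive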
Abstract:
Recent interest in the validation of general circulation models (GCMs) has been devoted to objective methods. A small number of authors have used the direct synoptic identification of phenomena together with a statistical analysis to perform the objective comparison between various datasets. This paper describes a general method for performing the synoptic identification of phenomena that can be used for an objective analysis of atmospheric, or oceanographic, datasets obtained from numerical models and remote sensing. Methods usually associated with image processing have been used to segment the scene and to identify suitable feature points to represent the phenomena of interest. This is performed for each time level. A technique from dynamic scene analysis is then used to link the feature points to form trajectories. The method is fully automatic and should be applicable to a wide range of geophysical fields. An example is shown of results obtained with this method applied to data from a run of the Universities Global Atmospheric Modelling Project GCM.
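The sketch below gives a toy version of the linking step only: feature points identified at successive time levels are joined into trajectories by nearest-neighbour matching within a search radius. The actual method uses a technique from dynamic scene analysis, so this greedy matcher and its parameters are illustrative assumptions.

    import numpy as np

    def link(frames, max_dist=2.0):
        # frames: list of time levels, each a list of (x, y) feature points
        tracks = [[p] for p in frames[0]]                        # one trajectory per initial point
        for pts in frames[1:]:
            pts = list(pts)
            for tr in tracks:
                if not pts:
                    break
                d = [np.hypot(p[0] - tr[-1][0], p[1] - tr[-1][1]) for p in pts]
                j = int(np.argmin(d))
                if d[j] <= max_dist:                             # extend the track with the closest point
                    tr.append(pts.pop(j))
            tracks += [[p] for p in pts]                         # unmatched points start new trajectories
        return tracks

    frames = [[(0.0, 0.0), (5.0, 5.0)], [(0.8, 0.2), (5.5, 5.1)], [(1.6, 0.5), (6.1, 5.3)]]
    print(link(frames))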
Abstract:
A dry three-dimensional baroclinic life cycle model is used to investigate the role of turbulent fluxes of heat and momentum within the boundary layer on mid-latitude cyclones. Simulations are performed of life cycles for two basic states, both with and without turbulent fluxes. The different basic states produce cyclones with contrasting frontal and mesoscale-flow structures. The analysis focuses on the generation of potential vorticity (PV) in the boundary layer and its subsequent transport into the free troposphere. The dynamic mechanism through which friction mitigates a barotropic vortex is that of Ekman pumping. This has often been assumed to be also the dominant mechanism for baroclinic developments. The PV framework highlights an additional, baroclinic mechanism. Positive PV is generated baroclinically due to friction to the north-east of a surface low and is transported out of the boundary layer by a cyclonic conveyor belt flow. The result is an anomaly of increased static stability in the lower troposphere which restricts the growth of the baroclinic wave. The reduced coupling between lower and upper levels can be sufficient to change the character of the upper-level evolution of the mature wave. The basic features of the baroclinic damping mechanism are robust for different frontal structures, with and without turbulent heat fluxes, and for the range of surface roughness found over the oceans.
Abstract:
The transport of stratospheric air deep into the troposphere via convection is investigated numerically using the UK Met Office Unified Model. A convective system that formed on 27 June 2004 near southeast England, in the vicinity of an upper-level potential vorticity anomaly and a lowered tropopause, provides the basis for analysis. Transport is diagnosed using a stratospheric tracer that can either be passed through or withheld from the model’s convective parameterization scheme. Three simulations are performed at increasingly finer resolutions, with horizontal grid lengths of 12, 4, and 1 km. In the 12 and 4 km simulations, tracer is transported deeply into the troposphere by the parameterized convection. In the 1 km simulation, for which the convective parameterization is disengaged, deep transport is still accomplished but with a much smaller magnitude. However, the 1 km simulation resolves stirring along the tropopause that does not exist in the coarser simulations. In all three simulations, the concentration of the deeply transported tracer is small, three orders of magnitude less than that of the shallow transport near the tropopause, most likely because of the efficient dilution of parcels in the lower troposphere.
Abstract:
The SCoTLASS problem, principal component analysis modified so that the components satisfy the Least Absolute Shrinkage and Selection Operator (LASSO) constraint, is reformulated as a dynamical system on the unit sphere. The LASSO inequality constraint is tackled by an exterior penalty function. A globally convergent algorithm is developed based on the projected gradient approach. The algorithm is illustrated numerically and discussed on a well-known data set.
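A bare-bones version of the projected gradient iteration with an exterior penalty is sketched below for a SCoTLASS-type problem (maximise v'Cv over the unit sphere subject to the L1 bound ||v||_1 <= t); the data, step size and penalty weight are arbitrary, and this is only an illustration of the approach rather than the authors' globally convergent algorithm.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 8))
    C = np.cov(X, rowvar=False)                        # sample covariance matrix
    t, mu, step = 2.0, 10.0, 0.01                      # L1 bound, penalty weight, step size

    v = np.ones(C.shape[0]) / np.sqrt(C.shape[0])
    for _ in range(2000):
        viol = max(np.abs(v).sum() - t, 0.0)           # exterior penalty active only when ||v||_1 > t
        grad = 2.0 * C @ v - 2.0 * mu * viol * np.sign(v)
        v = v + step * grad                            # ascent step on the penalised objective
        v /= np.linalg.norm(v)                         # project back onto the unit sphere
    print(v @ C @ v, np.abs(v).sum())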
Abstract:
The skill of numerical Lagrangian drifter trajectories in three numerical models is assessed by comparing these numerically obtained paths to the trajectories of drifting buoys in the real ocean. The skill assessment is performed using the two-sample Kolmogorov–Smirnov statistical test. To demonstrate the assessment procedure, it is applied to three different models of the Agulhas region. The test can either be performed using crossing positions of one-dimensional sections in order to test model performance in specific locations, or using the total two-dimensional data set of trajectories. The test yields four quantities: a binary decision of model skill, a confidence level which can be used as a measure of goodness-of-fit of the model, a test statistic which can be used to determine the sensitivity of the confidence level, and cumulative distribution functions that aid in the qualitative analysis. The ordering of models by their confidence levels is the same as the ordering based on the qualitative analysis, which suggests that the method is suited for model validation. Only one of the three models, a 1/10° two-way nested regional ocean model, might have skill in the Agulhas region. The other two models, a 1/2° global model and a 1/8° assimilative model, might have skill only on some sections in the region.
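The core of the skill assessment can be reproduced with SciPy's two-sample Kolmogorov-Smirnov test, as in the sketch below; the crossing-position samples are synthetic and the 5% threshold for the binary skill decision is an assumption, not necessarily the one used in the paper.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    obs_crossings = rng.normal(loc=0.0, scale=1.0, size=200)     # observed drifter crossings of a section
    model_crossings = rng.normal(loc=0.3, scale=1.2, size=150)   # numerical drifter crossings of the same section

    stat, p = stats.ks_2samp(obs_crossings, model_crossings)     # test statistic and p-value
    print("KS statistic:", stat, "p-value:", p)
    print("model might have skill on this section:", p > 0.05)   # binary decision at the 5% level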