28 results for Integrals, Hyperelliptic.
Abstract:
Exact error estimates for evaluating multidimensional integrals are considered. An estimate is called exact if the rates of convergence of the lower- and upper-bound estimates coincide; an algorithm with such an exact rate is called optimal, since its rate of convergence cannot be improved. The existence of exact estimates and optimal algorithms is discussed for some functional spaces that define the regularity of the integrand. Data classes important for practical computations are considered: classes of functions with bounded derivatives and Hölder-type conditions. The aim of the paper is to analyze the performance of two optimal classes of algorithms, deterministic and randomized, for computing multidimensional integrals. It is also shown how the smoothness of the integrand can be exploited to construct better randomized algorithms.
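The deterministic-versus-randomized contrast above can be illustrated with a toy experiment (not the paper's optimal algorithms): a product midpoint rule and plain Monte Carlo applied to a smooth integrand on the unit square, whose exact integral is known.

```python
import math
import random

# Illustrative comparison only: a deterministic product midpoint rule versus
# plain Monte Carlo for the smooth integrand exp(x + y) on [0,1]^2, whose
# exact integral is (e - 1)^2.
def f(x, y):
    return math.exp(x + y)

EXACT = (math.e - 1.0) ** 2

def midpoint_rule(n):
    """Product midpoint rule, n points per dimension (N = n*n evaluations)."""
    h = 1.0 / n
    s = 0.0
    for i in range(n):
        for j in range(n):
            s += f((i + 0.5) * h, (j + 0.5) * h)
    return s * h * h

def monte_carlo(n, seed=0):
    """Plain Monte Carlo with n random evaluations; error rate O(N^-1/2)."""
    rng = random.Random(seed)
    return sum(f(rng.random(), rng.random()) for _ in range(n)) / n

# Deterministic error here is O(h^2) = O(N^-1) in two dimensions, while the
# Monte Carlo rate O(N^-1/2) is dimension-independent.
print(abs(midpoint_rule(32) - EXACT))
print(abs(monte_carlo(1024) - EXACT))
```

In two dimensions the deterministic rule still wins; the randomized estimator becomes attractive as the dimension grows, which is the regime the abstract addresses.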
Abstract:
This paper addresses the numerical solution of the rendering equation in realistic image synthesis. The rendering equation is an integral equation describing the propagation of light in a scene according to a given illumination model; the illumination model determines the kernel of the equation under consideration. Monte Carlo methods are now widely used to solve the rendering equation when creating photorealistic images. In this work we consider the Monte Carlo solution of the rendering equation in the context of a parallel sampling scheme for the hemisphere. Our aim is to apply this sampling scheme to a stratified Monte Carlo integration method for solving the rendering equation in parallel. The integration domain of the rendering equation is a hemisphere, which we divide into a number of equal sub-domains of orthogonal spherical triangles. This domain partitioning allows the rendering equation to be solved in parallel. It is known that the Neumann series represents the solution of the integral equation as an infinite sum of integrals. We approximate this sum to within a desired truncation error (systematic error), obtaining a fixed number of iterations. The rendering equation is then solved iteratively using a Monte Carlo approach. At each iteration we evaluate multidimensional integrals using the uniform hemisphere partitioning scheme. An estimate of the rate of convergence is obtained for the stratified Monte Carlo method. The domain partitioning admits an easy parallel realization and improves the convergence of the Monte Carlo method. The high performance and Grid computing of the corresponding Monte Carlo scheme are discussed.
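A minimal sketch of stratified hemispherical integration (not the paper's spherical-triangle partitioning): the unit square of parameters (u, v) is mapped to uniform directions on the hemisphere, one jittered sample is drawn per cell of an n x n grid, and the test integrand is the cosine lobe, for which the exact hemispherical integral is pi.

```python
import math
import random

# Stratified Monte Carlo over the hemisphere.  Mapping: z = u gives cos(theta)
# uniform on [0, 1], phi = 2*pi*v, so directions are uniform with pdf 1/(2*pi).
def stratified_hemisphere(n, integrand, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for i in range(n):
        for j in range(n):
            u = (i + rng.random()) / n        # one jittered sample per stratum
            v = (j + rng.random()) / n
            z = u                             # cos(theta)
            phi = 2.0 * math.pi * v
            s = math.sqrt(max(0.0, 1.0 - z * z))
            x, y = s * math.cos(phi), s * math.sin(phi)
            total += integrand(x, y, z)
    # divide by the sample count and by the uniform pdf 1/(2*pi)
    return 2.0 * math.pi * total / (n * n)

# Cosine-lobe test: integral of cos(theta) over the hemisphere equals pi.
estimate = stratified_hemisphere(32, lambda x, y, z: z)
print(abs(estimate - math.pi))  # stratification shrinks the error vs. plain MC
```

For smooth integrands, drawing one sample per sub-domain improves the plain O(N^-1/2) Monte Carlo rate, which is the convergence gain the abstract claims for its hemisphere partitioning.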
Abstract:
Flat-phase PID controllers have the property that the phase of the transfer function around the associated feedback loop is constant, or flat, around the design frequency, with the aim that the phase margin and the overshoot of the step response are unaffected when the gain of the device under control changes. Such designs have been achieved using Bode integrals and by ensuring that the phase is the same at two frequencies. This paper extends the ‘two frequency’ controller and describes a novel three-frequency controller. The different design strategies are compared.
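A hedged illustration of the 'two frequency' idea only (the Bode-integral and three-frequency designs are not reproduced): with Kp and Ki fixed, the derivative gain Kd is found by bisection so that the open-loop phase of C(s)G(s) is identical at two chosen frequencies. The plant G(s) = 1/(s(s+1)) and all gains and frequencies are invented for the demo.

```python
import cmath

# Open-loop phase of PID controller times plant at frequency w (rad/s).
def loop_phase(kp, ki, kd, w):
    s = 1j * w
    controller = kp + ki / s + kd * s
    plant = 1.0 / (s * (s + 1.0))          # invented demo plant
    return cmath.phase(controller * plant)

def tune_kd(kp, ki, w1, w2, lo=0.0, hi=1.0, iters=60):
    """Bisect on Kd until loop_phase(w1) == loop_phase(w2)."""
    def mismatch(kd):
        return loop_phase(kp, ki, kd, w1) - loop_phase(kp, ki, kd, w2)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mismatch(lo) * mismatch(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Equalize the loop phase at 0.8 and 1.2 rad/s around a design frequency of 1.
kd = tune_kd(kp=1.0, ki=0.1, w1=0.8, w2=1.2)
```

Equal phase at the two bracketing frequencies approximates a zero phase slope at the design frequency, which is what keeps the phase margin insensitive to plant-gain changes.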
Abstract:
Puff-by-puff resolved gas phase free radicals were measured in mainstream smoke from Kentucky 2R4F reference cigarettes using ESR spectroscopy. Three spin-trapping reagents were evaluated: PBN, DMPO and DEPMPO. Two procedures were used to collect gas phase smoke on a puff-resolved basis: i) the accumulative mode, in which all the gas phase smoke up to a particular puff was bubbled into the trap (i.e., the 5th puff corresponded to the total smoke from the 1st to 5th puffs); in this case, after a specified puff, an aliquot of the spin trap was taken and analysed; or ii) the individual mode, in which the spin trap was analysed and then replaced after each puff. Spin concentrations were determined by double integration of the first derivative of the ESR signal, and the values were compared with the integrals of known standards based on the TEMPO free radical. The radicals trapped with PBN were mainly carbon-centred, whilst oxygen-centred radicals were identified with DMPO and DEPMPO. With each spin trap, the puff-resolved radical concentrations showed a characteristic pattern as a function of puff number. Based on the spin concentrations, the DMPO and DEPMPO spin traps showed better trapping efficiencies than PBN. The implication for gas phase free radical analysis is that a range of different spin traps should be used to probe the complex free radical reactions in cigarette smoke.
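A sketch of the quantification step only: ESR spectrometers record the first derivative of the absorption line, so the spin count is obtained by integrating twice and comparing with a standard of known concentration (TEMPO in the paper). The line shapes and the 2.5:1 concentration ratio below are invented for the demo.

```python
import math

def derivative_gaussian(x, center, width, area):
    """First derivative of a Gaussian absorption line with the given area."""
    g = area / (width * math.sqrt(2.0 * math.pi)) \
        * math.exp(-0.5 * ((x - center) / width) ** 2)
    return -g * (x - center) / width ** 2

def double_integral(signal, dx):
    """Cumulative trapezoidal integration applied twice; final value ∝ spins."""
    def cumtrapz(y):
        out = [0.0]
        for a, b in zip(y, y[1:]):
            out.append(out[-1] + 0.5 * (a + b) * dx)
        return out
    absorption = cumtrapz(signal)        # derivative signal -> absorption line
    return cumtrapz(absorption)[-1]      # absorption line -> total area

xs = [i * 0.01 - 5.0 for i in range(1001)]
sample = [derivative_gaussian(x, 0.0, 0.3, 2.5) for x in xs]    # "unknown"
standard = [derivative_gaussian(x, 0.0, 0.3, 1.0) for x in xs]  # known reference
ratio = double_integral(sample, 0.01) / double_integral(standard, 0.01)
print(ratio)  # ≈ 2.5: the sample holds 2.5x the spins of the standard
```

Real spectra additionally need baseline correction and identical instrument settings for sample and standard; the ratio step itself is as simple as above.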
Abstract:
An important test of the quality of a computational model is its ability to reproduce standard test cases or benchmarks. For steady open–channel flow based on the Saint Venant equations some benchmarks exist for simple geometries from the work of Bresse, Bakhmeteff and Chow but these are tabulated in the form of standard integrals. This paper provides benchmark solutions for a wider range of cases, which may have a nonprismatic cross section, nonuniform bed slope, and transitions between subcritical and supercritical flow. This makes it possible to assess the underlying quality of computational algorithms in more difficult cases, including those with hydraulic jumps. Several new test cases are given in detail and the performance of a commercial steady flow package is evaluated against two of them. The test cases may also be used as benchmarks for both steady flow models and unsteady flow models in the steady limit.
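A hedged sketch of the kind of computation being benchmarked (not one of the paper's test cases): the gradually varied flow equation dh/dx = (S0 - Sf)/(1 - Fr^2) for a prismatic rectangular channel with Manning friction, marched upstream from a downstream control depth. All channel parameters are invented.

```python
G = 9.81                                  # gravity, m/s^2
B, Q, N_MANNING, S0 = 5.0, 10.0, 0.03, 0.001   # width, discharge, roughness, bed slope

def dh_dx(h):
    """Right-hand side of the gradually varied flow equation."""
    area = B * h
    radius = area / (B + 2.0 * h)                                    # hydraulic radius
    sf = (N_MANNING * Q) ** 2 / (area ** 2 * radius ** (4.0 / 3.0))  # friction slope
    fr2 = Q ** 2 / (G * B ** 2 * h ** 3)                             # Froude number squared
    return (S0 - sf) / (1.0 - fr2)

def backwater_profile(h0, dx=-10.0, steps=500):
    """Classic RK4 march; dx < 0 integrates upstream from the control section."""
    h, profile = h0, [h0]
    for _ in range(steps):
        k1 = dh_dx(h)
        k2 = dh_dx(h + 0.5 * dx * k1)
        k3 = dh_dx(h + 0.5 * dx * k2)
        k4 = dh_dx(h + dx * k3)
        h += dx * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        profile.append(h)
    return profile

# M1 backwater curve: depth relaxes toward the normal depth moving upstream.
profile = backwater_profile(3.0)
```

Benchmarks of the kind the paper provides give the analytic profile against which a march like this (or a commercial package) can be checked, including the harder nonprismatic and transcritical cases.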
Abstract:
In this article we describe recent progress on the design, analysis and implementation of hybrid numerical-asymptotic boundary integral methods for boundary value problems for the Helmholtz equation that model time harmonic acoustic wave scattering in domains exterior to impenetrable obstacles. These hybrid methods combine conventional piecewise polynomial approximations with high-frequency asymptotics to build basis functions suitable for representing the oscillatory solutions. They have the potential to solve scattering problems accurately in a computation time that is (almost) independent of frequency and this has been realized for many model problems. The design and analysis of this class of methods requires new results on the analysis and numerical analysis of highly oscillatory boundary integral operators and on the high-frequency asymptotics of scattering problems. The implementation requires the development of appropriate quadrature rules for highly oscillatory integrals. This article contains a historical account of the development of this currently very active field, a detailed account of recent progress and, in addition, a number of original research results on the design, analysis and implementation of these methods.
Abstract:
The direct impact of mountain waves on the atmospheric circulation is due to the deposition of wave momentum at critical levels, or levels where the waves break. The first process is treated analytically in this study within the framework of linear theory. The variation of the momentum flux with height is investigated for relatively large shears, extending the authors’ previous calculations of the surface gravity wave drag to the whole atmosphere. A Wentzel–Kramers–Brillouin (WKB) approximation is used to treat inviscid, steady, nonrotating, hydrostatic flow with directional shear over a circular mesoscale mountain, for generic wind profiles. This approximation must be extended to third order to obtain momentum flux expressions that are accurate to second order. Since the momentum flux only varies because of wave filtering by critical levels, the application of contour integration techniques enables it to be expressed in terms of simple 1D integrals. On the other hand, the momentum flux divergence (which corresponds to the force on the atmosphere that must be represented in gravity wave drag parameterizations) is given in closed analytical form. The momentum flux expressions are tested for idealized wind profiles, where they become a function of the Richardson number (Ri). These expressions tend, for high Ri, to results by previous authors, where wind profile effects on the surface drag were neglected and critical levels acted as perfect absorbers. The linear results are compared with linear and nonlinear numerical simulations, showing a considerable improvement upon corresponding results derived for higher Ri.
Abstract:
This paper is concerned with the problem of propagation from a monofrequency coherent line source above a plane of homogeneous surface impedance. The solution of this problem occurs in the kernel of certain boundary integral equation formulations of acoustic propagation above an impedance boundary, and the discussion of the paper is motivated by this application. The paper starts by deriving representations, as Laplace-type integrals, of the solution and its first partial derivatives. The evaluation of these integral representations by Gauss-Laguerre quadrature is discussed, and theoretical bounds on the truncation error are obtained. Specific approximations are proposed which are shown to be accurate except in the very near field, for all angles of incidence and a wide range of values of surface impedance. The paper finishes with derivations of partial results and analogous Laplace-type integral representations for the case of a point source.
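A sketch of the quadrature ingredient only (not the paper's error bounds): a Laplace-type integral of the form ∫_0^∞ e^(-t) f(t) dt is evaluated by Gauss-Laguerre quadrature, which builds the e^(-t) weight into the nodes and weights. The test case ∫_0^∞ e^(-t) cos(t) dt = 1/2 is a standard identity, not taken from the paper.

```python
import numpy as np

def gauss_laguerre(f, n):
    """n-point Gauss-Laguerre approximation of the integral of exp(-t)*f(t) on [0, inf)."""
    nodes, weights = np.polynomial.laguerre.laggauss(n)
    return float(np.sum(weights * f(nodes)))

approx = gauss_laguerre(np.cos, 20)
print(abs(approx - 0.5))  # error decays rapidly with n for smooth f
```

Truncation-error bounds for exactly this kind of rule are what the abstract says the paper derives, together with the near-field regimes where the representation needs care.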
Abstract:
We propose and analyse a hybrid numerical–asymptotic hp boundary element method (BEM) for time-harmonic scattering of an incident plane wave by an arbitrary collinear array of sound-soft two-dimensional screens. Our method uses an approximation space enriched with oscillatory basis functions, chosen to capture the high-frequency asymptotics of the solution. We provide a rigorous frequency-explicit error analysis which proves that the method converges exponentially as the number of degrees of freedom N increases, and that to achieve any desired accuracy it is sufficient to increase N in proportion to the square of the logarithm of the frequency as the frequency increases (standard BEMs require N to increase at least linearly with frequency to retain accuracy). Our numerical results suggest that fixed accuracy can in fact be achieved at arbitrarily high frequencies with a frequency-independent computational cost, when the oscillatory integrals required for implementation are computed using Filon quadrature. We also show how our method can be applied to the complementary ‘breakwater’ problem of propagation through an aperture in an infinite sound-hard screen.
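The Filon quadrature mentioned above can be sketched in its simplest form (a piecewise-linear variant, not the paper's implementation): only the smooth factor f is interpolated, while the oscillatory exponential is integrated exactly on each subinterval, so the cost does not grow with the frequency omega.

```python
import cmath

def filon_linear(f, a, b, omega, n):
    """Approximate the integral of f(x)*exp(i*omega*x) on [a, b] (omega != 0),
    interpolating f piecewise-linearly on n subintervals."""
    xs = [a + (b - a) * k / n for k in range(n + 1)]
    total = 0.0 + 0.0j
    iw = 1j * omega
    for x0, x1 in zip(xs, xs[1:]):
        f0, f1 = f(x0), f(x1)
        e0, e1 = cmath.exp(iw * x0), cmath.exp(iw * x1)
        # exact moments of exp(i*omega*x) against 1 and x on [x0, x1]
        m0 = (e1 - e0) / iw
        m1 = (x1 * e1 - x0 * e0) / iw - m0 / iw
        slope = (f1 - f0) / (x1 - x0)
        # integrate the linear interpolant f0 + slope*(x - x0) exactly
        total += (f0 - slope * x0) * m0 + slope * m1
    return total

# Exact for linear f regardless of omega or n -- here with omega = 200 and
# only 4 subintervals, far fewer points than the oscillations would demand.
val = filon_linear(lambda x: x, 0.0, 1.0, 200.0, 4)
```

This frequency-independent cost per integral is what lets the hp-BEM above keep its computational cost essentially fixed as the frequency grows.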
Abstract:
We propose a topological approach to the problem of determining a curve from its iterated integrals. In particular, we prove that a family of terms in the signature series of a two-dimensional closed curve with finite p-variation, 1≤p<2, consists of the moments of its winding number. This relation allows us to prove that the signature series of a class of simple non-smooth curves uniquely determine the curves. This implies that outside a chordal SLEκ null set, where 0<κ≤4, the signature series of curves uniquely determine the curves. Our calculations also enable us to express the Fourier transform of the n-point functions of SLE curves in terms of the expected signature of SLE curves. Although the techniques used in this article are deterministic, the results provide a platform for studying SLE curves through the signatures of their sample paths.
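A minimal sketch (not the paper's SLE machinery) of the objects involved: the level-1 and level-2 signature of a piecewise-linear planar path, built segment by segment with Chen's identity. For a closed curve, the antisymmetric part of the second level, (S^12 - S^21)/2, is the Levy area, i.e. the signed area enclosed, the first of the winding-number moments discussed above.

```python
def signature_level2(points):
    """Level-1 and level-2 signature of the piecewise-linear path through points."""
    s1 = [0.0, 0.0]                      # level 1: total increment
    s2 = [[0.0, 0.0], [0.0, 0.0]]        # level 2: iterated integrals S^{ij}
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        d = (x1 - x0, y1 - y0)
        # Chen's identity for appending a linear segment with increment d:
        # S2_new = S2 + S1 (x) d + d (x) d / 2
        for i in range(2):
            for j in range(2):
                s2[i][j] += s1[i] * d[j] + 0.5 * d[i] * d[j]
        s1[0] += d[0]
        s1[1] += d[1]
    return s1, s2

# Unit square traversed counterclockwise: closed, enclosing area 1.
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
s1, s2 = signature_level2(square)
levy_area = 0.5 * (s2[0][1] - s2[1][0])
print(s1, levy_area)   # increment (0, 0) since the curve is closed; Levy area 1.0
```

Higher moments of the winding number sit in higher signature levels; this low-order computation only illustrates the first instance of the correspondence.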
Abstract:
We establish a general framework for a class of multidimensional stochastic processes over [0,1] under which with probability one, the signature (the collection of iterated path integrals in the sense of rough paths) is well-defined and determines the sample paths of the process up to reparametrization. In particular, by using the Malliavin calculus we show that our method applies to a class of Gaussian processes including fractional Brownian motion with Hurst parameter H>1/4, the Ornstein–Uhlenbeck process and the Brownian bridge.
Abstract:
A generalization of Arakawa and Schubert's convective quasi-equilibrium principle is presented for a closure formulation of mass-flux convection parameterization. The original principle is based on the budget of the cloud work function. This principle is generalized by considering the budget for a vertical integral of an arbitrary convection-related quantity. The closure formulation includes Arakawa and Schubert's quasi-equilibrium, as well as both CAPE and moisture closures, as special cases. The formulation also opens new possibilities for considering vertical integrals that depend on convective-scale variables, such as the moisture within convection. The generalized convective quasi-equilibrium is defined by a balance between the large-scale forcing and the convective response for a given vertically-integrated quantity. The latter takes the form of a convolution of a kernel matrix and a mass-flux spectrum, as in the original convective quasi-equilibrium. The kernel reduces to a scalar when either a bulk formulation is adopted or only large-scale variables are considered within the vertical integral. Various physical implications of the generalized closure are discussed, including the possibility that precipitation might be considered a potentially significant contribution to the large-scale forcing. Two dicta are proposed as guiding physical principles for specifying a suitable vertically-integrated quantity.
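A sketch of the closure's algebra only (the physics of the kernel is not modelled): quasi-equilibrium sets dA_i/dt = F_i - (K M)_i = 0 for each vertically integrated quantity A_i, so the mass-flux spectrum M is obtained by inverting the kernel matrix against the large-scale forcing. The kernel K and forcing F below are invented numbers for three hypothetical cloud types.

```python
import numpy as np

# Kernel: response of vertically integrated quantity i to convection of type j.
K = np.array([[2.0, 0.5, 0.1],
              [0.4, 1.5, 0.3],
              [0.1, 0.2, 1.0]])
# Large-scale forcing of each vertically integrated quantity.
F = np.array([1.0, 0.8, 0.3])

# Generalized quasi-equilibrium: solve K M = F for the mass-flux spectrum M.
M = np.linalg.solve(K, F)
print(M)
```

When a bulk formulation is adopted, K collapses to a scalar and the solve above degenerates to a single division, matching the special case noted in the abstract.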
Abstract:
Initializing the ocean for decadal predictability studies is a challenge, as it requires reconstructing the little-observed subsurface trajectory of ocean variability. In this study we explore to what extent surface nudging using the well-observed sea surface temperature (SST) can reconstruct the deeper ocean variations for the 1949–2005 period. An ensemble is made with a nudged version of the IPSLCM5A model and compared to ocean reanalyses and reconstructed datasets. The SST is restored to observations using a physically based relaxation coefficient, in contrast to earlier studies, which used a much larger value. The assessment is restricted to the regions where the ocean reanalyses agree, i.e. in the upper 500 m of the ocean, although this can be latitude and basin dependent. Significant reconstruction of the subsurface is achieved in specific regions, namely regions of subduction in the subtropical Atlantic, below the thermocline in the equatorial Pacific and, in some cases, in the North Atlantic deep convection regions. Beyond the mean correlations, ocean integrals are used to explore the time evolution of the correlation over 20-year windows. Classical fixed-depth heat content diagnostics do not exhibit any significant reconstruction between the different existing observation-based references and therefore cannot be used to assess global average time-varying correlations in the nudged simulations. Using the physically based average temperature above an isotherm (14 °C) alleviates this issue in the tropics and subtropics and shows significant reconstruction of these quantities in the nudged simulations for several decades. This skill is attributed to the wind stress reconstruction in the tropics, as already demonstrated in a perfect model study using the same model. Thus, we also show here the robustness of this result in a historical and observational context.
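A schematic of the nudging term only (not the IPSLCM5A experiments): the model SST is relaxed toward an observed series with a restoring time scale tau, dT/dt = internal tendency + (T_obs - T)/tau. A physically based (weaker) relaxation follows the observations while leaving room for the model's own dynamics; the time scale and "observations" below are invented.

```python
import math

def nudged_sst(t_obs, t0, tau, dt=1.0, drift=0.0):
    """Euler-step a temperature nudged toward the observed series t_obs.
    dt and tau are in days; drift stands in for the model's own tendency."""
    t = t0
    series = []
    for obs in t_obs:
        t += dt * (drift + (obs - t) / tau)
        series.append(t)
    return series

# Invented seasonal "observations" and a 30-day restoring time scale.
obs = [15.0 + 2.0 * math.sin(2.0 * math.pi * k / 365.0) for k in range(1000)]
model = nudged_sst(obs, t0=10.0, tau=30.0)
```

With a weak coefficient the nudged state lags and slightly damps the observed cycle instead of pinning to it; stronger coefficients, as in the earlier studies the abstract contrasts with, suppress that freedom.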