90 results for numerical integration methods
in CentAUR: Central Archive University of Reading - UK
Abstract:
A three-point difference scheme recently proposed in Ref. 1 for the numerical solution of a class of linear, singularly perturbed, two-point boundary-value problems is investigated. The scheme is derived from a first-order approximation to the original problem with a small deviating argument. It is shown here that, in the limit as the deviating argument tends to zero, the difference scheme converges to a one-sided approximation to the original singularly perturbed equation in conservation form. The limiting scheme is shown to be stable on any uniform grid. Therefore, no advantage arises from using the deviating argument, and the most accurate and efficient results are obtained with the deviation at its zero limit.
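To make the limiting one-sided discretisation concrete, here is a minimal sketch (not the paper's scheme itself) of an upwind three-point scheme applied to the model problem εu'' + u' = 1 with u(0) = u(1) = 0; the equation, grid size and perturbation parameter are illustrative choices.

```python
# Upwind (one-sided) three-point scheme for eps*u'' + u' = 1, u(0)=u(1)=0.
# Stable on any uniform grid, as the abstract notes for the limiting scheme;
# first-order accuracy degrades inside the boundary layer when h >> eps.
import numpy as np

eps, n = 0.01, 100                  # perturbation parameter, interior points
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)

A = np.zeros((n, n))
b = np.ones(n)                      # right-hand side f(x) = 1
for i in range(n):
    A[i, i] = -2.0 * eps / h**2 - 1.0 / h
    if i > 0:
        A[i, i - 1] = eps / h**2            # diffusion only
    if i < n - 1:
        A[i, i + 1] = eps / h**2 + 1.0 / h  # one-sided difference for u'
u = np.linalg.solve(A, b)

u_exact = x - (1.0 - np.exp(-x / eps)) / (1.0 - np.exp(-1.0 / eps))
print("max error:", np.abs(u - u_exact).max())
```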
Abstract:
Estimating the magnitude of Agulhas leakage, the volume flux of water from the Indian to the Atlantic Ocean, is difficult because of the presence of other circulation systems in the Agulhas region. Indian Ocean water in the Atlantic Ocean is vigorously mixed and diluted in the Cape Basin. Eulerian integration methods, where the velocity field perpendicular to a section is integrated to yield a flux, have to be calibrated so that only the flux by Agulhas leakage is sampled. Two Eulerian methods for estimating the magnitude of Agulhas leakage are tested within a high-resolution two-way nested model, with the goal of devising a mooring-based measurement strategy. At the GoodHope line, a section halfway through the Cape Basin, the integrated velocity perpendicular to that line is compared to the magnitude of Agulhas leakage as determined from the transport carried by numerical Lagrangian floats. In the first method, integration is limited to the flux of water warmer and more saline than specific threshold values. These threshold values are determined by maximizing the correlation with the float-determined time series. By using the threshold values, approximately half of the leakage can be measured directly. The total amount of Agulhas leakage can be estimated using a linear regression, within a 90% confidence band of 12 Sv. In the second method, a subregion of the GoodHope line is sought so that integration over that subregion yields an Eulerian flux as close to the float-determined leakage as possible. It appears that when integration is limited within the model to the upper 300 m of the water column and within 900 km of the African coast, the time series have the smallest root-mean-square difference. This method yields a root-mean-square error of only 5.2 Sv, but the 90% confidence band of the estimate is 20 Sv. It is concluded that the optimum thermohaline threshold method leads to more accurate estimates, even though the directly measured transport is a factor of two lower than the actual magnitude of Agulhas leakage in this model.
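As a rough illustration of the first (thermohaline threshold) method, the sketch below integrates the cross-section velocity only where water is warmer and more saline than chosen thresholds; all arrays and threshold values are synthetic stand-ins, not output from the study.

```python
# Thermohaline-threshold Eulerian flux: integrate the velocity normal to a
# section only where T and S exceed thresholds. All fields are synthetic.
import numpy as np

rng = np.random.default_rng(0)
nz, nx = 30, 60                        # depth levels, horizontal cells
dz, dx = 100.0, 15e3                   # cell sizes in metres (illustrative)
v = rng.normal(0.0, 0.1, (nz, nx))     # velocity normal to section, m/s
T = rng.uniform(2.0, 20.0, (nz, nx))   # temperature, deg C
S = rng.uniform(34.0, 36.0, (nz, nx))  # salinity

T_min, S_min = 15.0, 35.3              # hypothetical threshold values

mask = (T > T_min) & (S > S_min)       # sample only "Agulhas-like" water
transport_sv = (v * mask * dz * dx).sum() / 1e6   # 1 Sv = 1e6 m^3/s
print(f"threshold-limited transport: {transport_sv:.2f} Sv")
```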
Abstract:
The sampling of a given solid angle is a fundamental operation in realistic image synthesis, where the rendering equation describing the light propagation in closed domains is solved. Monte Carlo methods for solving the rendering equation use sampling of the solid angle subtended by the unit hemisphere or unit sphere in order to perform the numerical integration of the rendering equation. In this work we consider the problem of generating uniformly distributed random samples over the hemisphere and sphere. Our aim is to construct and study a parallel sampling scheme for the hemisphere and sphere. First we apply the symmetry property to partition the hemisphere and sphere. The domain of solid angle subtended by a hemisphere is divided into a number of equal sub-domains. Each sub-domain represents the solid angle subtended by an orthogonal spherical triangle with fixed vertices and computable parameters. Then we introduce two new algorithms for sampling orthogonal spherical triangles. Both algorithms are based on a transformation of the unit square. Like Arvo's algorithm for sampling arbitrary spherical triangles, the suggested algorithms accommodate stratified sampling. We derive the necessary transformations for the algorithms. The first sampling algorithm generates a sample by mapping the unit square onto the orthogonal spherical triangle. The second algorithm directly computes the unit radius vector of a sampling point inside the orthogonal spherical triangle. The sampling of the total hemisphere and sphere is performed in parallel for all sub-domains simultaneously, using the symmetry property of the partitioning. The applicability of the corresponding parallel sampling scheme to Monte Carlo and quasi-Monte Carlo solution of the rendering equation is discussed.
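The paper's unit-square maps onto orthogonal spherical triangles are not reproduced here, but the following sketch shows the same two ingredients in their simplest form: an area-preserving map from the unit square to directions on the hemisphere, and a symmetry-based partition that turns each unit-square point into one sample per sub-domain (here four quadrants rather than the paper's finer partition).

```python
# Unit-square-driven hemisphere sampling plus a symmetry-based partition.
import numpy as np

def hemisphere_samples(u, v):
    """Map points (u, v) in [0,1)^2 to directions in one quadrant of the
    upper hemisphere, uniform with respect to solid angle."""
    z = u                                 # cos(theta) uniform in [0, 1)
    r = np.sqrt(1.0 - z * z)
    phi = 0.5 * np.pi * v                 # restrict to the first quadrant
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=-1)

rng = np.random.default_rng(1)
u, v = rng.random(1000), rng.random(1000)
quad = hemisphere_samples(u, v)

# Replicate by the reflections (+-x, +-y): four equal sub-domains sampled
# in parallel from the same unit-square points.
signs = np.array([[1, 1, 1], [-1, 1, 1], [1, -1, 1], [-1, -1, 1]])
samples = (quad[None, :, :] * signs[:, None, :]).reshape(-1, 3)
print(samples.shape, np.allclose(np.linalg.norm(samples, axis=1), 1.0))
```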
Abstract:
This paper is directed to advanced parallel quasi-Monte Carlo (QMC) methods for realistic image synthesis. We propose and consider a new QMC approach for solving the rendering equation with uniform separation. First, we apply the symmetry property for uniform separation of the hemispherical integration domain into 24 equal sub-domains of solid angles, subtended by orthogonal spherical triangles with fixed vertices and computable parameters. Uniform separation makes it possible to apply a parallel sampling scheme for numerical integration. Finally, we apply the stratified QMC integration method for solving the rendering equation. The superiority of our QMC approach is demonstrated.
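A minimal sketch of the stratified sampling that underlies such a QMC scheme, applied to a plain 2-D integral over the unit square (the 24-sub-domain hemispherical construction itself is not reproduced; the integrand is an arbitrary smooth choice):

```python
# Stratified (jittered) sampling vs. plain Monte Carlo on the unit square.
import numpy as np

def f(x, y):                       # illustrative integrand on [0,1]^2
    return np.sin(np.pi * x) * y

rng = np.random.default_rng(2)
k = 32                             # k*k strata, one jittered sample each
i, j = np.meshgrid(np.arange(k), np.arange(k), indexing="ij")
xs = (i + rng.random((k, k))) / k
ys = (j + rng.random((k, k))) / k
strat = f(xs, ys).mean()

xp, yp = rng.random(k * k), rng.random(k * k)   # plain Monte Carlo
plain = f(xp, yp).mean()

exact = (2.0 / np.pi) * 0.5        # int sin(pi x) dx = 2/pi, int y dy = 1/2
print(f"stratified: {strat:.5f}  plain: {plain:.5f}  exact: {exact:.5f}")
```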
Abstract:
This dissertation deals with aspects of sequential data assimilation (in particular ensemble Kalman filtering) and numerical weather forecasting. In the first part, the recently formulated Ensemble Kalman-Bucy filter (EnKBF) is revisited. It is shown that the previously used numerical integration scheme fails when the magnitude of the background error covariance grows beyond that of the observational error covariance in the forecast window. Therefore, we present a suitable integration scheme that handles the stiffening of the differential equations involved and does not entail further computational expense. Moreover, a transform-based alternative to the EnKBF is developed: under this scheme, the operations are performed in the ensemble space instead of in the state space. Advantages of this formulation are explained. For the first time, the EnKBF is implemented in an atmospheric model. The second part of this work deals with ensemble clustering, a phenomenon that arises when performing data assimilation using deterministic ensemble square-root filters (EnSRFs) in highly nonlinear forecast models. Namely, an M-member ensemble separates into an outlier and a cluster of M-1 members. Previous works may suggest that this issue represents a failure of EnSRFs; this work dispels that notion. It is shown that ensemble clustering can also be reversed by nonlinear processes, in particular the alternation between nonlinear expansion and compression of the ensemble in different regions of the attractor. Some EnSRFs that use random rotations have been developed to overcome this issue; these formulations are analyzed and their advantages and disadvantages with respect to common EnSRFs are discussed. The third and last part contains the implementation of the Robert-Asselin-Williams (RAW) filter in an atmospheric model. The RAW filter is an improvement to the widely popular Robert-Asselin filter that successfully suppresses spurious computational waves while avoiding any distortion in the mean value of the function. Using statistical significance tests both at the local and at the field level, it is shown that the climatology of the SPEEDY model is not modified by the changed time-stepping scheme; hence, no retuning of the parameterizations is required. It is also found that the accuracy of medium-term forecasts is increased by using the RAW filter.
Abstract:
New representations and efficient calculation methods are derived for the problem of propagation from an infinite regularly spaced array of coherent line sources above a homogeneous impedance plane, and for the Green's function for sound propagation in the canyon formed by two infinitely high, parallel rigid or sound-soft walls and an impedance ground surface. The infinite sum of source contributions is replaced by a finite sum and the remainder is expressed as a Laplace-type integral. A pole subtraction technique is used to remove poles in the integrand which lie near the path of integration, obtaining a smooth integrand, more suitable for numerical integration, and a specific numerical integration method is proposed. Numerical experiments show highly accurate results across the frequency spectrum for a range of ground surface types. It is expected that the methods proposed will prove useful in boundary element modeling of noise propagation in canyon streets and in ducts, and for problems of scattering by periodic surfaces.
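The following toy example shows the pole-subtraction idea in isolation: the residue term is integrated in closed form and only the smooth remainder is integrated numerically. The integrand and pole location are illustrative, not those of the paper.

```python
# Pole subtraction for I = int_0^1 exp(-t)/(t - t0) dt with t0 near [0, 1].
import numpy as np

t0 = 0.5 + 0.01j                      # pole just off the integration path
res = np.exp(-t0)                     # residue of exp(-t)/(t - t0) at t0

def smooth_part(t):                   # pole removed: analytic near [0, 1]
    return (np.exp(-t) - res) / (t - t0)

xg, wg = np.polynomial.legendre.leggauss(20)   # Gauss rule on [-1, 1]
tg = 0.5 * (xg + 1.0)                          # map nodes to [0, 1]
I_smooth = 0.5 * np.sum(wg * smooth_part(tg))

# The subtracted pole term integrates in closed form; the path stays below
# t0, so the principal branch of the logarithm is continuous along it.
I_pole = res * (np.log(1.0 - t0) - np.log(-t0))
print("pole subtraction:", I_smooth + I_pole)

# Brute-force check: fine trapezoidal rule on the original integrand.
t = np.linspace(0.0, 1.0, 400001)
f = np.exp(-t) / (t - t0)
print("reference:      ", (0.5 * (f[:-1] + f[1:]) * (t[1] - t[0])).sum())
```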
Abstract:
Acid mine drainage (AMD) is a widespread environmental problem associated with both working and abandoned mining operations. As part of an overall strategy to determine a long-term treatment option for AMD, a pilot passive treatment plant was constructed in 1994 at Wheal Jane Mine in Cornwall, UK. The plant consists of three separate systems, each containing aerobic reed beds, an anaerobic cell and rock filters, and represents the largest European experimental facility of its kind. The systems differ only in the type of pretreatment utilised to increase the pH of the influent minewater (pH < 4): lime dosed (LD), anoxic limestone drain (ALD) and lime free (LF), which receives no form of pretreatment. Historical data (1994-1997) indicate median Fe reduction between 55% and 92%, sulphate removal in the range of 3-38% and removal of target metals (cadmium, copper and zinc) to below detection limits, depending on pretreatment and flow rates through the system. A new model to simulate the processes and dynamics of the wetland systems is described, as well as the application of the model to experimental data collected at the pilot plant. The model is process based, and utilises reaction kinetic approaches based on experimental microbial techniques rather than an equilibrium approach to metal precipitation. The model is dynamic and utilises numerical integration routines to solve a set of differential equations that describe the behaviour of 20 variables over the 17 pilot plant cells on a daily basis. The model outputs at each cell boundary are evaluated and compared with the measured data, and the model is demonstrated to provide a good representation of the complex behaviour of the wetland system for a wide range of variables.
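A minimal sketch of the kinetic, dynamically integrated approach described above: concentrations in a cell evolve under first-order reaction kinetics and are integrated numerically to give daily outputs. The two-variable model and rate constants are hypothetical stand-ins for the 20-variable pilot-plant model.

```python
# Process-based kinetics for one wetland cell, integrated day by day.
import numpy as np
from scipy.integrate import solve_ivp

k_fe, k_so4 = 0.35, 0.05           # hypothetical first-order rates, 1/day

def rhs(t, y):
    fe, so4 = y                    # dissolved Fe and sulphate, mg/l
    return [-k_fe * fe, -k_so4 * so4]

sol = solve_ivp(rhs, (0.0, 30.0), [120.0, 1500.0],
                t_eval=np.arange(0.0, 31.0))     # daily outputs
print("day 30 concentrations:", sol.y[:, -1])
```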
Abstract:
In a recent study, Williams introduced a simple modification to the widely used Robert–Asselin (RA) filter for numerical integration. The main purpose of the Robert–Asselin–Williams (RAW) filter is to avoid the undesired numerical damping of the RA filter and to increase the accuracy. In the present paper, the effects of the modification are comprehensively evaluated in the Simplified Parameterizations, Primitive Equation Dynamics (SPEEDY) atmospheric general circulation model. First, the authors search for significant changes in the monthly climatology due to the introduction of the new filter. After testing both at the local level and at the field level, no significant changes are found, which is advantageous in the sense that the new scheme does not require a retuning of the parameterized model physics. Second, the authors examine whether the new filter improves the skill of short- and medium-term forecasts. January 1982 data from the NCEP–NCAR reanalysis are used to evaluate the forecast skill. Improvements are found in all the model variables (except the relative humidity, which is hardly changed). The improvements increase with lead time and are especially evident in medium-range forecasts (96–144 h). For example, in tropical surface pressure predictions, 5-day forecasts made using the RAW filter have approximately the same skill as 4-day forecasts made using the RA filter. The results of this work are encouraging for the implementation of the RAW filter in other models currently using the RA filter.
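A minimal sketch of leapfrog time stepping with the RAW filter, using the oscillation equation dx/dt = iωx as a standard test problem (the filter coefficients and the factor-of-two convention for ν are illustrative; with α = 1 the loop reduces to the classical RA filter):

```python
# Leapfrog + RAW filter for dx/dt = i*omega*x. The RA displacement
# d = (nu/2)*(x_prev_filtered - 2*x_curr + x_next) is applied with weight
# alpha to the current level and weight (alpha - 1) to the new level;
# alpha = 1 recovers the classical RA filter.
import numpy as np

omega, dt, nsteps = 1.0, 0.2, 500
nu, alpha = 0.2, 0.53                  # filter strength, RAW weight

x_prev = 1.0 + 0.0j                    # x(0)
x_curr = np.exp(1j * omega * dt)       # exact value at t = dt
for _ in range(nsteps):
    x_next = x_prev + 2.0 * dt * (1j * omega * x_curr)   # leapfrog step
    d = 0.5 * nu * (x_prev - 2.0 * x_curr + x_next)      # RA displacement
    x_prev = x_curr + alpha * d                # filter the current level
    x_curr = x_next + (alpha - 1.0) * d        # and, partially, the new one

exact = np.exp(1j * omega * dt * (nsteps + 1))
print("amplitude error:", abs(abs(x_curr) - 1.0))
print("solution error: ", abs(x_curr - exact))
```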
Integrated cytokine and metabolic analysis of pathological responses to parasite exposure in rodents
Abstract:
Parasitic infections cause a myriad of responses in their mammalian hosts, at the immune as well as the metabolic level. Multiplex panels of cytokines and metabolites derived from four parasite-rodent models, namely Plasmodium berghei-mouse, Trypanosoma brucei brucei-mouse, Schistosoma mansoni-mouse, and Fasciola hepatica-rat, were statistically coanalyzed. 1H NMR spectroscopy and multivariate statistical analysis were used to characterize the urine and plasma metabolite profiles in infected and noninfected animals. Each parasite generated a unique metabolic signature in the host. Plasma cytokine concentrations were obtained using the Meso Scale Discovery multi-cytokine assay platform. Multivariate data integration methods were subsequently used to elucidate the component of the metabolic signature associated with inflammation and to determine specific metabolic correlates with parasite-induced changes in plasma cytokine levels. For example, the relative levels of acetyl glycoproteins extracted from the plasma metabolite profile in the P. berghei-infected mice were statistically correlated with IFN-γ, whereas the same cytokine was anticorrelated with glucose levels. Both the metabolic and the cytokine data showed a similar spatial distribution in principal component analysis scores plots constructed for the combined murine data: samples from all infected animals clustered according to parasite species, and the protozoan infections (P. berghei and T. b. brucei) grouped separately from the helminth infection (S. mansoni). For S. mansoni, the main infection-responsive cytokines were IL-4 and IL-5, which covaried with lactate, choline, and D-3-hydroxybutyrate. This study demonstrates that the inherently differential immune response to single- and multicellular parasites not only manifests in the cytokine expression, but also imprints on the metabolic signature, and it calls for in-depth analysis to further explore direct links between immune features and biochemical pathways.
Abstract:
A recent paper published in this journal considers the numerical integration of the shallow-water equations using the leapfrog time-stepping scheme [Sun Wen-Yih, Sun Oliver MT. A modified leapfrog scheme for shallow water equations. Comput Fluids 2011;52:69–72]. The authors of that paper propose using the time-averaged height in the numerical calculation of the pressure-gradient force, instead of the instantaneous height at the middle time step. The authors show that this modification doubles the maximum Courant number (and hence the maximum time step) at which the integrations are stable, doubling the computational efficiency. Unfortunately, the pressure-averaging technique proposed by the authors is not original. It was devised and published by Shuman [5] and has been widely used in the atmosphere and ocean modelling community for over 40 years.
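A minimal sketch of the pressure-averaging idea for the 1-D linearized shallow-water equations: the height field is stepped first, and the pressure-gradient force then uses the Shuman average (h^{n+1} + 2h^n + h^{n-1})/4. Grid, depth, and time step are illustrative; the chosen Δt exceeds the plain-leapfrog stability limit for this grid (about 319 s) but stays within the roughly doubled limit reported for the averaged scheme.

```python
# 1-D linearized shallow water: dh/dt = -H du/dx, du/dt = -g dh/dx,
# leapfrog in time with Shuman pressure averaging in the momentum equation.
import numpy as np

g, H = 9.81, 100.0                  # gravity, mean depth (c ~ 31 m/s)
nx, dx, dt = 128, 10e3, 500.0       # dt > plain leapfrog limit (~319 s)
x = np.arange(nx) * dx

def ddx(f):                         # centred difference, periodic domain
    return (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * dx)

hm = np.exp(-(((x - x.mean()) / (8 * dx)) ** 2))   # initial height bump
um = np.zeros(nx)
h = hm - dt * H * ddx(um)           # forward-Euler bootstrap for level 1
u = um - dt * g * ddx(hm)

for _ in range(500):
    hp = hm - 2.0 * dt * H * ddx(u)                          # height first
    up = um - 2.0 * dt * g * ddx((hp + 2.0 * h + hm) / 4.0)  # averaged PGF
    hm, h, um, u = h, hp, u, up

print("max |h| after 500 steps:", np.abs(h).max())   # bounded, i.e. stable
```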
Abstract:
Expressions for the viscosity correction function, and hence bulk complex impedance, density, compressibility, and propagation constant, are obtained for a rigid frame porous medium whose pores are prismatic with fixed cross-sectional shape, but of variable pore size distribution. The low- and high-frequency behavior of the viscosity correction function is derived for the particular case of a log-normal pore size distribution, in terms of coefficients which can, in general, be computed numerically, and are given here explicitly for the particular cases of pores of equilateral triangular, circular, and slit-like cross-section. Simple approximate formulae, based on two-point Padé approximants for the viscosity correction function, are obtained, which avoid a requirement for numerical integration or evaluation of special functions, and their accuracy is illustrated and investigated for the three pore shapes already mentioned.
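In the same spirit as those approximate formulae, the sketch below builds a two-point Padé-style rational approximant (for tanh rather than the paper's viscosity correction function, whose coefficients are not reproduced): it matches the small-argument series x - x³/3 and the large-argument limit 1, so the whole range is covered without evaluating special functions.

```python
# Two-point Pade-style approximant: R(x) = x(1 + x/3) / (1 + x/3 + x^2/3)
# matches tanh(x) to O(x^5) at x -> 0 and tends to 1 as x -> infinity.
import numpy as np

def R(x):
    return x * (1.0 + x / 3.0) / (1.0 + x / 3.0 + x * x / 3.0)

x = np.linspace(0.0, 20.0, 2001)
err = np.abs(R(x) - np.tanh(x))
print("max abs error on [0, 20]:", err.max())  # low order: visible mid-range
```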
Abstract:
Two recent works have adapted the Kalman–Bucy filter into an ensemble setting. In the first formulation, the ensemble of perturbations is updated by the solution of an ordinary differential equation (ODE) in pseudo-time, while the mean is updated as in the standard Kalman filter. In the second formulation, the full ensemble is updated in the analysis step as the solution of a single set of ODEs in pseudo-time. Neither requires matrix inversions, except for the frequently diagonal observation error covariance. We analyse the behaviour of the ODEs involved in these formulations. We demonstrate that they stiffen for large magnitudes of the ratio of background error to observational error variance, and that using the integration scheme proposed in both formulations can lead to failure. A numerical integration scheme that is both stable and computationally inexpensive is proposed. We develop transform-based alternatives for these Bucy-type approaches so that the integrations are computed in ensemble space, where the variables are weights (of dimension equal to the ensemble size) rather than model variables. Finally, the performance of our ensemble transform Kalman–Bucy implementations is evaluated using three models: the 3-variable Lorenz 1963 model, the 40-variable Lorenz 1996 model, and a medium-complexity atmospheric general circulation model known as SPEEDY. The results from all three models are encouraging and warrant further exploration of these assimilation techniques.
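The stiffness can be seen already in a scalar caricature: for one variable with a unit observation operator, the Bucy-type analysis ODE in pseudo-time is dp/ds = -p²/r on s ∈ [0, 1], whose exact solution at s = 1 is the usual Kalman analysis variance p₀r/(r + p₀). The paper's scheme is not reproduced here; the sketch just shows explicit Euler failing for large p₀/r while an equivalent reformulation in q = 1/p is exact for any step size.

```python
# dp/ds = -p**2 / r over unit pseudo-time; exact answer is p0*r/(r + p0).
p0, r, nsteps = 100.0, 1.0, 4
ds = 1.0 / nsteps

p = p0
for _ in range(nsteps):            # explicit Euler: overshoots and diverges
    p = p - ds * p * p / r

q = 1.0 / p0
for _ in range(nsteps):            # in q = 1/p the ODE is dq/ds = 1/r: exact
    q = q + ds / r

print("exact:         ", p0 * r / (r + p0))
print("explicit Euler:", p)
print("reformulated:  ", 1.0 / q)
```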
Abstract:
Exact error estimates for evaluating multi-dimensional integrals are considered. An estimate is called exact if the rates of convergence of the lower- and upper-bound estimates coincide. An algorithm with such an exact rate is called optimal; it has an unimprovable rate of convergence. The existence of exact estimates and optimal algorithms is discussed for some functional spaces that define the regularity of the integrand. Data classes important for practical computations are considered: classes of functions with bounded derivatives and classes satisfying Hölder-type conditions. The aim of the paper is to analyze the performance of two classes of optimal algorithms, deterministic and randomized, for computing multidimensional integrals. It is also shown how the smoothness of the integrand can be exploited to construct better randomized algorithms.
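A small sketch of the closing point: for a smooth integrand, subtracting a cheap approximation whose integral is known (here the linear interpolant between the endpoints) and applying plain Monte Carlo only to the residual reduces the variance. The integrand and sample sizes are illustrative.

```python
# Plain Monte Carlo vs. Monte Carlo on the residual after subtracting the
# exactly integrable linear interpolant of f between the endpoints.
import numpy as np

def f(x):
    return np.exp(x)                       # smooth test integrand on [0, 1]

def g(x):                                  # linear interpolant of f
    return f(0.0) + (f(1.0) - f(0.0)) * x

g_exact = 0.5 * (f(0.0) + f(1.0))          # exact integral of g over [0, 1]

rng = np.random.default_rng(3)
n, reps = 1000, 200
plain, smoothed = [], []
for _ in range(reps):
    u = rng.random(n)
    plain.append(f(u).mean())
    smoothed.append(g_exact + (f(u) - g(u)).mean())

print("exact integral: ", np.e - 1.0)
print("plain MC std:   ", np.std(plain))
print("smoothed MC std:", np.std(smoothed))
```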