923 results for Error bounds
Abstract:
Lava flows can produce changes in topography on the order of 10s-100s of metres. Knowledge of the resulting volume change provides evidence about the dynamics of an eruption. We present a method to measure topographic changes from the differential InSAR phase delays caused by the height differences between the current topography and a Digital Elevation Model (DEM). This does not require a pre-event SAR image, so it does not rely on interferometric phase remaining coherent during eruption and emplacement. Synthetic tests predict that we can estimate lava thicknesses as small as ~9 m, given a minimum of 5 interferograms with suitably large orbital baseline separations. In the case of continuous motion, such as lava flow subsidence, we invert interferometric phase simultaneously for topographic change and displacement. We demonstrate the method using data from Santiaguito volcano, Guatemala, and measure increases in lava thickness of up to 140 m between 2000 and 2009, largely associated with activity between 2000 and 2005. We find a mean extrusion rate of 0.43 +/- 0.06 m3/s, which lies within the error bounds of the longer-term extrusion rate between 1922 and 2000. The thickest and youngest parts of the flow deposit were shown to be subsiding at an average rate of ~6 cm/yr. This is the first time that flow thickness and subsidence have been measured simultaneously. We expect this method to be suitable for measurement of landslides and other mass flow deposits as well as lava flows.
Abstract:
In this paper we propose methods for computing Fresnel integrals based on truncated trapezium rule approximations to integrals on the real line, with the trapezium rules modified to take into account poles of the integrand near the real axis. Our starting point is a method for computation of the error function of complex argument due to Matta and Reichel (J Math Phys 34:298-307, 1956) and Hunter and Regan (Math Comp 26:539-541, 1972). We construct approximations which we prove are exponentially convergent as a function of N, the number of quadrature points, obtaining explicit error bounds which show that accuracies of 10^-15 uniformly on the real line are achieved with N = 12, as confirmed by computations. The approximations we obtain are additionally attractive in that they maintain small relative errors for small and large arguments, are analytic on the real axis (echoing the analyticity of the Fresnel integrals), and are straightforward to implement.
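The exponential convergence at the heart of the paper's error bounds can be seen already in the plain truncated trapezium rule applied to an integrand that is analytic in a strip around the real axis. The sketch below is not the authors' pole-corrected rule; it only illustrates, on a Gaussian, how the discretization error decays like exp(-c/h^2) rather than algebraically in h.

```python
import math

def trapezium_gauss(h, x_max=8.0):
    """Truncated trapezium rule for the integral of exp(-x^2) over the
    real line. For integrands analytic in a strip around the real axis,
    the discretization error decays exponentially in 1/h^2 -- the
    phenomenon the paper's explicit error bounds quantify."""
    n = int(x_max / h)
    # Trapezium rule on [-x_max, x_max]: endpoints get weight 1/2.
    total = 0.5 * (math.exp(-(n * h) ** 2) + math.exp(-(n * h) ** 2))
    for k in range(-n + 1, n):
        total += math.exp(-(k * h) ** 2)
    return h * total

# Even the coarse spacing h = 0.5 is already near machine precision,
# while h = 1.0 has an error around 1e-4: exponential, not O(h^2), decay.
err_coarse = abs(trapezium_gauss(1.0) - math.sqrt(math.pi))
err_fine = abs(trapezium_gauss(0.5) - math.sqrt(math.pi))
```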
Abstract:
Aggregation-disaggregation is used to reduce the analysis of a large generalized transportation problem to that of a smaller one. Bounds on the difference between the aggregated objective and the original optimal value are used to quantify the error due to aggregation and to estimate the quality of the aggregation. The bounds can be calculated either before optimization of the aggregated problem (a priori) or after it (a posteriori). Both types of bounds are derived and numerically compared. A computational experiment was designed to (a) study the correlation between the bounds and the actual error and (b) quantify the difference between the error bounds and the actual error. The experiment shows a significant correlation between some a priori bounds, the a posteriori bounds, and the actual error. These preliminary results indicate that calculating the a priori error bound is a useful strategy for selecting the appropriate aggregation level, since the a priori bound varies in the same way that the actual error does. After the aggregated problem has been selected and optimized, the a posteriori bound provides a good quantitative measure of the error due to aggregation.
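The "actual error" being bounded can be made concrete on a toy instance. The sketch below (hypothetical numbers, not from the paper) aggregates two suppliers of a small balanced transportation problem into one with supply-weighted average costs, solves both LPs, and reports the resulting aggregation error.

```python
import numpy as np
from scipy.optimize import linprog

def solve_transport(cost, supply, demand):
    """Solve a balanced transportation problem as an LP; return optimal cost."""
    m, n = cost.shape
    A_eq = []
    for i in range(m):          # supply (row-sum) constraints
        row = np.zeros(m * n)
        row[i * n:(i + 1) * n] = 1.0
        A_eq.append(row)
    for j in range(n):          # demand (column-sum) constraints
        col = np.zeros(m * n)
        col[j::n] = 1.0
        A_eq.append(col)
    b_eq = np.concatenate([supply, demand])
    res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, None))
    return res.fun

cost = np.array([[4.0, 6.0], [5.0, 6.0], [8.0, 3.0]])
supply = np.array([20.0, 30.0, 25.0])
demand = np.array([40.0, 35.0])
obj_orig = solve_transport(cost, supply, demand)

# Aggregate suppliers 0 and 1 into one, with supply-weighted average costs.
w = supply[:2] / supply[:2].sum()
cost_agg = np.vstack([w @ cost[:2], cost[2]])
supply_agg = np.array([supply[:2].sum(), supply[2]])
obj_agg = solve_transport(cost_agg, supply_agg, demand)

# The quantity the paper's a priori and a posteriori bounds estimate.
actual_error = abs(obj_agg - obj_orig)
```

Here the original optimum is 315, the aggregated one 319, so the aggregation error is 4; a useful a priori bound would certify a number at least this large before the aggregated problem is solved.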
Abstract:
We analyze three sets of doubly-censored cohort data on incubation times, estimating incubation distributions using semi-parametric methods and assessing the comparability of the estimates. Weibull models appear to be inappropriate for at least one of the cohorts, and the estimates for the different cohorts are substantially different. We use these estimates as inputs for backcalculation, using a nonparametric method based on maximum penalized likelihood. The different incubation distributions all produce fits to the reported AIDS counts that are as good as the fit from a nonstationary incubation distribution that models treatment effects, but the estimated infection curves are very different. We also develop a method for estimating nonstationarity as part of the backcalculation procedure and find that such estimates also depend very heavily on the assumed incubation distribution. We conclude that incubation distributions are so uncertain that meaningful error bounds are difficult to place on backcalculated estimates, and that backcalculation may be too unreliable to be used without being supplemented by other sources of information on HIV prevalence and incidence.
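The sensitivity described above comes from the structure of the forward model: expected case counts are a convolution of the infection curve with the incubation distribution, and backcalculation inverts that relation. A minimal sketch of the forward model, with entirely hypothetical numbers:

```python
import numpy as np

# Hypothetical yearly infection counts and a discretized incubation
# distribution (probability of progressing to AIDS k years after infection).
infections = np.array([10.0, 20.0, 30.0])
incubation = np.array([0.1, 0.3, 0.6])

# Forward model: expected case counts are the convolution of the infection
# curve with the incubation distribution. Backcalculation inverts this,
# which is why results depend so heavily on the assumed incubation
# distribution: different kernels can fit the same counts.
expected_cases = np.convolve(infections, incubation)
# convolution gives [1, 5, 15, 21, 18]
```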
Abstract:
We introduce and analyze hp-version discontinuous Galerkin (dG) finite element methods for the numerical approximation of linear second-order elliptic boundary-value problems in three-dimensional polyhedral domains. To resolve possible corner-, edge- and corner-edge singularities, we consider hexahedral meshes that are geometrically and anisotropically refined toward the corresponding neighborhoods. Similarly, the local polynomial degrees are increased linearly and possibly anisotropically away from singularities. We design interior penalty hp-dG methods and prove that they are well-defined for problems with singular solutions and stable under the proposed hp-refinements. We establish (abstract) error bounds that will allow us to prove exponential rates of convergence in the second part of this work.
Abstract:
One of the aims of the SvalGlac project is to obtain an improved estimate, with reliable error estimates, of the volume of Svalbard glaciers and their potential contribution to sea level rise. As part of this work, we present volume calculations, with detailed error estimates, for eight glaciers on Wedel Jarlsberg Land, southern Spitsbergen, Svalbard. The volume estimates are based upon a dense net of GPR-retrieved ice thickness data collected over several field campaigns spanning the period 2004-2011. The total area and volume of the ensemble are 502.9±18.6 km2 and 80.72±2.85 km3, respectively. Excluding Ariebreen (a tiny glacier, less than 0.4 km2 in area), the individual areas, volumes and average ice thicknesses lie within 4.7-141.0 km2, 0.30-25.85 km3 and 64-183 m, respectively. The maximum recorded ice thickness, ca. 619±13 m, is found in Austre Torellbreen. To estimate the ice volume of small non-echo-sounded tributary glaciers, we used a function providing the best fit to the ice thickness along the centre line of a collection of such tributaries where echo-soundings were available, assuming parabolic cross-sections. We tested the effect on the measured ice volumes of using distinct radio-wave velocities (RWV) for firn as compared to ice, and for cold versus temperate ice, concluding that the changes in volume implied by such corrections were within the error bounds of our volume estimate using a constant RWV for the entire glacier inferred from common mid-point measurements on the upper ablation area.
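Propagating thickness and area uncertainties into a volume error bar can be sketched with standard first-order, uncorrelated error propagation for V = A·h̄. This is a simplification (the paper's error budget is more detailed, and the numbers below are hypothetical), but it shows the shape of such a calculation:

```python
import math

def volume_with_error(area, sigma_area, mean_thickness, sigma_thickness):
    """Glacier volume V = A * h_bar with first-order, uncorrelated error
    propagation: (sigma_V / V)^2 = (sigma_A / A)^2 + (sigma_h / h)^2.
    A simplified model, not the paper's full error budget."""
    volume = area * mean_thickness
    rel_err = math.sqrt((sigma_area / area) ** 2 +
                        (sigma_thickness / mean_thickness) ** 2)
    return volume, volume * rel_err

# Hypothetical glacier: 100 km^2 +/- 4 km^2, mean thickness 150 m +/- 10 m
# (thickness converted to km so the volume comes out in km^3).
v, sigma_v = volume_with_error(100.0, 4.0, 0.150, 0.010)
```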
Abstract:
We present ground-penetrating radar (GPR)—based volume calculations, with associated error estimates, for eight glaciers on Wedel Jarlsberg Land, southwestern Spitsbergen, Svalbard, and compare them with those obtained from volume-area scaling relationships. The volume estimates are based upon GPR ice-thickness data collected during the period 2004–2013. The total area and volume of the ensemble are 502.91 ± 18.60 km2 and 91.91 ± 3.12 km3, respectively. The individual areas, volumes, and average ice thickness lie within 0.37–140.99 km2, 0.01–31.98 km3, and 28–227 m, respectively, with a maximum recorded ice thickness of 619 ± 13 m on Austre Torellbreen. To estimate the ice volume of unsurveyed tributary glaciers, we combine polynomial cross-sections with a function providing the best fit to the measured ice thickness along the center line of a collection of 22 surveyed tributaries. For the time-to-depth conversion of GPR data, we test the use of a glacierwide constant radio-wave velocity chosen on the basis of local or regional common midpoint measurements, versus the use of distinct velocities for the firn, cold ice, and temperate ice layers, concluding that the corresponding volume calculations agree with each other within their error bounds.
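The volume-area scaling relationships that the GPR-based volumes are compared against have the power-law form V = c·A^γ. The sketch below uses commonly quoted glacier values c ≈ 0.034 and γ ≈ 1.375; these are illustrative defaults, not the calibration used in the paper.

```python
def scaling_volume(area_km2, c=0.034, gamma=1.375):
    """Volume-area scaling V = c * A**gamma, giving km^3 from km^2.
    c and gamma here are commonly quoted glacier values, used only for
    illustration; they are not calibrated to the Svalbard data."""
    return c * area_km2 ** gamma

# A 100 km^2 glacier scales to roughly 19 km^3.
v_scaled = scaling_volume(100.0)
```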
Abstract:
In this paper we deal with parameterized linear inequality systems in the n-dimensional Euclidean space whose coefficients depend continuously on an index ranging in a compact Hausdorff space. The paper is developed in two different parametric settings: one with only right-hand-side perturbations of the linear system, and one in which both sides of the system can be perturbed. Appealing to background results on the calmness property, and exploiting the specifics of the present linear structure, we derive different characterizations of the calmness of the feasible set mapping, and provide an operative expression for the calmness modulus when confined to finite systems. In the paper, the role played by the Abadie constraint qualification in relation to calmness is clarified and illustrated by different examples. We point out that this approach has the virtue of tackling the calmness property exclusively in terms of the system's data.
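For reference, one common form of the calmness definition used in this literature (stated here in general terms; the paper specializes it to the feasible set mapping of a parameterized linear inequality system):

```latex
A set-valued mapping $M : P \rightrightarrows \mathbb{R}^n$ is \emph{calm} at
$(\bar p, \bar x)$, with $\bar x \in M(\bar p)$, if there exist $\kappa \ge 0$
and neighborhoods $U$ of $\bar x$ and $W$ of $\bar p$ such that
\[
  d\bigl(x, M(\bar p)\bigr) \le \kappa \, d(p, \bar p)
  \qquad \text{for all } p \in W, \ x \in M(p) \cap U .
\]
The calmness modulus is the infimum of all constants $\kappa$ for which
such neighborhoods exist.
```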
Abstract:
The paper has been presented at the 12th International Conference on Applications of Computer Algebra, Varna, Bulgaria, June, 2006.
Abstract:
AMS subject classification: 90C30, 90C33.
Abstract:
In this dissertation, we study the behavior of exciton-polariton quasiparticles in semiconductor microcavities under sourceless and lossless conditions. First, we simplify the original model by removing the photon dispersion term, effectively turning the system of PDEs into a system of ODEs, and investigate the behavior of the resulting system, including the equilibrium points and the wave functions of the excitons and the photons. Second, we add the dispersion term for the excitons to the original model and prove that the band of discontinuous solitons becomes a band of dark solitons. Third, we apply the Strang splitting method to our system of PDEs and prove first-order and second-order error bounds in the $H^1$ norm and the $L_2$ norm, respectively. Using this numerical result, we analyze the stability of the steady-state bright soliton solution. This solution revolves around the $x$-axis as time progresses, and the perturbed soliton also rotates around the $x$-axis, tracking the exact solution closely in amplitude but lagging behind it. Our numerical results show orbital stability but no $L_2$ stability.
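The second-order convergence of Strang splitting can be checked on a toy problem. The sketch below is not the dissertation's polariton system; it applies Strang splitting to a linear system y' = (A + B)y with non-commuting matrices and verifies that halving the step size reduces the error by roughly a factor of four.

```python
import numpy as np
from scipy.linalg import expm

# Toy linear system y' = (A + B) y with non-commuting A and B.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0, 0.0], [0.0, -0.5]])
y0 = np.array([1.0, 0.0])
T = 1.0

def strang(h):
    """Strang splitting: half step of A, full step of B, half step of A.
    The local error is O(h^3), so the global error at fixed T is O(h^2)."""
    n = round(T / h)
    step = expm(A * h / 2) @ expm(B * h) @ expm(A * h / 2)
    y = y0.copy()
    for _ in range(n):
        y = step @ y
    return y

y_exact = expm((A + B) * T) @ y0
e1 = np.linalg.norm(strang(0.1) - y_exact)
e2 = np.linalg.norm(strang(0.05) - y_exact)
ratio = e1 / e2  # close to 4 = 2^2, confirming second-order convergence
```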
Abstract:
In this article we consider the a posteriori error estimation and adaptive mesh refinement of discontinuous Galerkin finite element approximations of the hydrodynamic stability problem associated with the incompressible Navier-Stokes equations. Particular attention is given to the reliable error estimation of the eigenvalue problem in channel and pipe geometries. Here, computable a posteriori error bounds are derived based on employing the generalization of the standard Dual-Weighted-Residual approach, originally developed for the estimation of target functionals of the solution, to eigenvalue/stability problems. The underlying analysis consists of constructing both a dual eigenvalue problem and a dual problem for the original base solution. In this way, errors stemming both from the numerical approximation of the original nonlinear flow problem and from the underlying linear eigenvalue problem are correctly controlled. Numerical experiments highlighting the practical performance of the proposed a posteriori error indicator on adaptively refined computational meshes are presented.
Abstract:
Bounds on the expectation and variance of errors at the output of a multilayer feedforward neural network with perturbed weights and inputs are derived. It is assumed that errors in weights and inputs to the network are statistically independent and small. The bounds obtained are applicable to both digital and analogue network implementations and are shown to be of practical value.
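The setting can be illustrated with a Monte Carlo experiment (a sketch, not the paper's analytical bounds): perturb every weight of a small feedforward network with independent zero-mean noise and check that, for small perturbations, the output-error variance scales like the noise variance, which is the regime the derived bounds address.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small feedforward net with fixed ("trained") weights and a tanh hidden layer.
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(1, 4))
x = rng.normal(size=3)

def forward(W1p, W2p, xp):
    return (W2p @ np.tanh(W1p @ xp)).item()

def output_error_variance(sigma, n_samples=20000):
    """Monte Carlo estimate of the output-error variance when every weight
    is perturbed by independent zero-mean noise with standard deviation
    sigma. For small sigma, first-order analysis predicts variance ~ sigma^2."""
    y0 = forward(W1, W2, x)
    errs = np.empty(n_samples)
    for i in range(n_samples):
        errs[i] = forward(W1 + sigma * rng.normal(size=W1.shape),
                          W2 + sigma * rng.normal(size=W2.shape), x) - y0
    return errs.var()

v1 = output_error_variance(0.01)
v2 = output_error_variance(0.02)
ratio = v2 / v1  # close to 4, since the variance scales like sigma^2
```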
Abstract:
Evaluation of the probability of error in decision feedback equalizers is difficult due to the presence of a hard limiter in the feedback path. This paper derives upper and lower bounds on the probability of a single error and of multiple error patterns. The bounds are fairly tight, and they can also be used to select proper tap gains for the equalizer.
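Why the hard limiter matters can be seen in a small simulation (a hedged sketch with hypothetical channel parameters, not the paper's analysis): a decision feedback equalizer that cancels ISI using its own hard decisions suffers error propagation, so its error rate exceeds that of a genie-aided receiver fed the correct symbols, which serves as a simple lower bound.

```python
import numpy as np

rng = np.random.default_rng(1)

# BPSK over a hypothetical two-tap channel: y_k = a_k + 0.5 * a_{k-1} + n_k.
n_sym = 200000
a = rng.choice([-1.0, 1.0], size=n_sym)
noise = 0.6 * rng.normal(size=n_sym)
isi = np.concatenate(([0.0], 0.5 * a[:-1]))
y = a + isi + noise

# Decision feedback equalizer: subtract the ISI of the *decided* symbol.
# The hard limiter in this feedback loop couples successive decisions,
# which is what makes exact error analysis hard and motivates bounds.
dec = np.empty(n_sym)
prev = 0.0
for k in range(n_sym):
    dec[k] = 1.0 if y[k] - 0.5 * prev >= 0 else -1.0
    prev = dec[k]
ber_dfe = np.mean(dec != a)

# Genie-aided receiver (correct symbols fed back): no error propagation,
# hence a lower bound on the DFE error probability.
ber_genie = np.mean(np.sign(y - isi) != a)
```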