921 results for Error analysis (Mathematics)


Relevance:

80.00%

Publisher:

Abstract:

The North Atlantic spring bloom is one of the main events that lead to carbon export to the deep ocean and drive oceanic uptake of CO2 from the atmosphere. Here we use a suite of physical, bio-optical and chemical measurements made during the 2008 spring bloom to optimize and compare three different models of biological carbon export. The observations are from a Lagrangian float that operated south of Iceland from early April to late June, and were calibrated with ship-based measurements. The simplest model is representative of typical NPZD models used for the North Atlantic, while the most complex model explicitly includes diatoms and the formation of fast-sinking diatom aggregates and cysts under silicate limitation. We carried out a variational optimization and error analysis for the biological parameters of all three models, and compared their ability to replicate the observations. The observations were sufficient to constrain most phytoplankton-related model parameters to accuracies of better than 15%. However, the lack of zooplankton observations leads to large uncertainties in the model parameters for grazing. The simulated vertical carbon flux at 100 m depth is similar between models and agrees well with available observations, but at 600 m the simulated flux is larger by a factor of 2.5 to 4.5 for the model with diatom aggregation. While none of the models can be formally rejected based on their misfit with the available observations, the model that includes export by diatom aggregation has a statistically significantly better fit to the observations and more accurately represents the mechanisms and timing of carbon export, based on observations not included in the optimization. Thus, models that accurately simulate the upper 100 m do not necessarily accurately simulate export to greater depths.
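
To make the optimization-and-error-analysis workflow concrete, the sketch below fits the parameters of a deliberately simple growth model to synthetic float-like observations and derives parameter uncertainties from the Jacobian; the model, data, and parameter names are illustrative assumptions, not the study's NPZD or diatom-aggregation code.

```python
# Minimal sketch of variational parameter estimation with a Gauss-Newton style
# error analysis (toy model, synthetic data; not the authors' bloom models).
import numpy as np
from scipy.optimize import least_squares

def bloom_model(t, mu, m, p0):
    """Toy phytoplankton biomass: growth rate mu, loss rate m, initial stock p0."""
    return p0 * np.exp((mu - m) * t)

t_obs = np.linspace(0.0, 30.0, 40)                       # days
true_params = np.array([0.35, 0.20, 1.0])
rng = np.random.default_rng(0)
obs = bloom_model(t_obs, *true_params) * (1 + 0.05 * rng.standard_normal(t_obs.size))

def residuals(theta):
    return bloom_model(t_obs, *theta) - obs

fit = least_squares(residuals, x0=[0.2, 0.1, 0.5])

# Parameter error analysis: covariance ~ sigma^2 (J^T J)^-1 from the Jacobian.
dof = t_obs.size - fit.x.size
sigma2 = np.sum(fit.fun ** 2) / dof
cov = sigma2 * np.linalg.inv(fit.jac.T @ fit.jac)
rel_err = np.sqrt(np.diag(cov)) / np.abs(fit.x)
print("estimates:", fit.x, "relative 1-sigma errors:", rel_err)
```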

Relevance:

80.00%

Publisher:

Abstract:

A new Relativistic Screened Hydrogenic Model has been developed to calculate the atomic data needed to compute the optical and thermodynamic properties of high energy density plasmas. The model is based on a new set of universal screening constants, including nlj-splitting, that has been obtained by fitting to a large database of ionization potentials and excitation energies. This database was built with energies compiled from the National Institute of Standards and Technology (NIST) database of experimental atomic energy levels, and energies calculated with the Flexible Atomic Code (FAC). The screening constants have been computed up to the 5p3/2 subshell using a Genetic Algorithm technique with an objective function designed to minimize both the relative error and the maximum error. To select the best set of screening constants, additional physical criteria have been applied, based on reproducing the filling order of the shells and on obtaining the best ground state configuration. A statistical error analysis has been performed to test the model, indicating that approximately 88% of the data lie within a ±10% error interval. We validate the model by comparing its results with ionization energies, transition energies, and wave functions computed using sophisticated self-consistent codes, as well as with experimental data.
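
The objective used for the fit, a combination of relative and maximum error, can be illustrated with a small sketch; here SciPy's differential evolution stands in for the paper's Genetic Algorithm, and the screened-hydrogenic energy model, reference energies, and bounds are toy assumptions rather than the published screening constants.

```python
# Minimal sketch of fitting screening constants by minimizing a combined
# relative-error / maximum-error objective; differential evolution stands in
# for the paper's Genetic Algorithm (toy data and model, illustrative only).
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical reference energies (e.g. NIST/FAC-like) and a toy screened-
# hydrogenic energy model E = -(Z - sigma)^2 / (2 n^2) in atomic units.
Z = 26.0
n = np.array([1, 2, 2, 3, 3])
E_ref = np.array([-312.0, -70.0, -68.0, -29.0, -28.5])

def model_energies(sigma):
    return -(Z - sigma) ** 2 / (2.0 * n ** 2)

def objective(sigma):
    rel = np.abs((model_energies(sigma) - E_ref) / E_ref)
    return rel.mean() + rel.max()              # penalise both average and worst case

bounds = [(0.0, Z)] * n.size                   # one screening constant per subshell
result = differential_evolution(objective, bounds, seed=1)
print("fitted screening constants:", result.x)
```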

Relevance:

80.00%

Publisher:

Abstract:

The analysis of complex nonlinear systems is often carried out using simpler piecewise linear representations of them. A principled and practical technique is proposed to linearize and evaluate arbitrary continuous nonlinear functions using polygonal (continuous piecewise linear) models under the L1 norm. A thorough error analysis is developed to guide an optimal design of two kinds of polygonal approximations in the asymptotic case of a large budget of evaluation subintervals N. The method allows the user to obtain the level of linearization (N) for a target approximation error, and vice versa. It is suitable for, but not limited to, an efficient implementation in modern Graphics Processing Units (GPUs), allowing real-time performance of computationally demanding applications. The quality and efficiency of the technique have been measured in detail on two nonlinear functions that are widely used in many areas of scientific computing and are expensive to evaluate.
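
A minimal sketch of the kind of polygonal approximation the abstract analyses, assuming uniform knots and simple interpolation at the knots (not the optimal L1 designs of the paper), together with empirical maximum and L1 error measurements:

```python
# Continuous piecewise linear (polygonal) approximation of a nonlinear function
# on N uniform subintervals, with empirical maximum and L1 error estimates.
import numpy as np

def f(x):
    return np.exp(-x ** 2)                    # example nonlinear function

a, b, N = -4.0, 4.0, 64                       # interval and number of subintervals
knots = np.linspace(a, b, N + 1)

x = np.linspace(a, b, 20001)                  # dense grid for error measurement
approx = np.interp(x, knots, f(knots))        # continuous piecewise linear evaluation
err = np.abs(f(x) - approx)

print("max error:", err.max())
print("L1 error :", err.mean() * (b - a))     # Riemann estimate of the integral of |error|
```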

Relevance:

80.00%

Publisher:

Abstract:

We present a quasi-monotone semi-Lagrangian particle level set (QMSL-PLS) method for moving interfaces. The QMSL method is a blend of first order monotone and second order semi-Lagrangian methods. The QMSL-PLS method is easy to implement, efficient, and well adapted for unstructured meshes, either simplicial or hexahedral. We prove that it is unconditionally stable in the maximum discrete norm ‖·‖_{h,∞}, and the error analysis shows that when the level set solution u(t) is in the Sobolev space W^{r+1,∞}(D), r ≥ 0, the convergence in the maximum norm is of the form (KT/Δt) min(1, Δt‖v‖_{h,∞}/h) ((1 − α)h^p + h^q), with p = min(2, r + 1) and q = min(3, r + 1), where v is the velocity. This means that at high CFL numbers, that is, when Δt > h, the error is O(((1 − α)h^p + h^q)/Δt), whereas at CFL numbers less than 1 the error is O((1 − α)h^{p−1} + h^{q−1}). We have tested our method with satisfactory results on benchmark problems such as Zalesak's slotted disk, the single vortex flow, and the rising bubble.
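
Written out as a display equation, the maximum-norm error estimate quoted in the abstract reads as follows; the notation u_h^n for the discrete solution at time t_n on the left-hand side is ours, while K, T, Δt, h, α, v, p, q and r are as in the abstract:

```latex
\left\lVert u(t_n) - u_h^{\,n} \right\rVert_{h,\infty}
  \;\le\; \frac{KT}{\Delta t}\,
  \min\!\left(1,\; \frac{\Delta t\,\lVert v\rVert_{h,\infty}}{h}\right)
  \left((1-\alpha)\,h^{p} + h^{q}\right),
  \qquad p=\min(2,\,r+1),\quad q=\min(3,\,r+1).
```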

Relevance:

80.00%

Publisher:

Abstract:

Many computer vision and human-computer interaction applications developed in recent years need to evaluate complex and continuous mathematical functions as an essential step toward proper operation. However, rigorous evaluation of this kind of function often implies a very high computational cost, unacceptable in real-time applications. To alleviate this problem, functions are commonly approximated by simpler piecewise-polynomial representations. Following this idea, we propose a novel, efficient, and practical technique to evaluate complex and continuous functions using a nearly optimal design of two types of piecewise linear approximations in the case of a large budget of evaluation subintervals. To this end, we develop a thorough error analysis that yields asymptotically tight bounds to accurately quantify the approximation performance of both representations. It provides an improvement upon previous error estimates and allows the user to control the trade-off between the approximation error and the number of evaluation subintervals. To guarantee real-time operation, the method is suitable for, but not limited to, an efficient implementation in modern Graphics Processing Units (GPUs), where it outperforms previous alternative approaches by exploiting the fixed-function interpolation routines present in their texture units. The proposed technique is a perfect match for any application requiring the evaluation of continuous functions. We have measured its quality and efficiency in detail on several functions, in particular the Gaussian function, because it is extensively used in many areas of computer vision and cybernetics and is expensive to evaluate.
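
The trade-off between the approximation error and the number of evaluation subintervals can be illustrated by a small search that doubles N until an error target on the Gaussian is met; this assumes uniform knots and CPU interpolation via NumPy rather than the paper's asymptotically tight bounds or GPU texture units.

```python
# Error-vs-subinterval trade-off for a piecewise linear (texture-interpolation
# style) approximation of the Gaussian; N is doubled until an empirical
# maximum-error target is met (illustrative only).
import numpy as np

def gaussian(x, sigma=1.0):
    return np.exp(-0.5 * (x / sigma) ** 2)

a, b, target = 0.0, 5.0, 1e-4
x = np.linspace(a, b, 100001)            # dense grid for error measurement
N = 4
while True:
    knots = np.linspace(a, b, N + 1)
    approx = np.interp(x, knots, gaussian(knots))
    max_err = np.max(np.abs(gaussian(x) - approx))
    if max_err <= target:
        break
    N *= 2

print(f"N = {N} subintervals give max error {max_err:.2e}")
```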

Relevance:

80.00%

Publisher:

Abstract:

The present is marked by the availability of large volumes of heterogeneous data, whose management is extremely complex. While the treatment of factual data has been widely studied, the processing of subjective information still poses important challenges. This is especially true in tasks that combine Opinion Analysis with other challenges, such as those related to Question Answering. In this paper, we describe the different approaches we employed in the NTCIR 8 MOAT monolingual English (opinionatedness, relevance, answerness and polarity) and cross-lingual English-Chinese tasks, implemented in our OpAL system. The results obtained when using different settings of the system, as well as the error analysis performed after the competition, offered us clear insights into the best combination of techniques, one that balances precision and recall. Contrary to our initial intuitions, we have also seen that the inclusion of specialized Natural Language Processing tools dealing with Temporality or Anaphora Resolution lowers the system performance, while the use of topic detection techniques using faceted search with Wikipedia and Latent Semantic Analysis leads to satisfactory system performance, both in the monolingual setting and in the cross-lingual one.
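
As a small illustration of the topic detection component, the sketch below embeds a few toy documents into a Latent Semantic Analysis space (TF-IDF followed by truncated SVD); the documents and dimensionality are assumptions, and the OpAL system's Wikipedia faceted search is not reproduced.

```python
# Topic detection with Latent Semantic Analysis: TF-IDF features reduced by
# truncated SVD, giving a low-rank "topic" embedding of each document.
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

docs = [
    "The new phone has a great camera and battery life.",
    "I am disappointed with the battery of this phone.",
    "The restaurant served excellent food with friendly staff.",
    "Terrible service, the food arrived cold.",
]

lsa = make_pipeline(TfidfVectorizer(stop_words="english"),
                    TruncatedSVD(n_components=2, random_state=0))
topic_space = lsa.fit_transform(docs)     # documents embedded in a 2-D topic space
print(topic_space.round(3))
```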

Relevance:

80.00%

Publisher:

Abstract:

Contract no DA-44-009 Eng. 2435, Department of the Army Project no. 8-35-11-101.

Relevance:

80.00%

Publisher:

Abstract:

Includes index.

Relevance:

80.00%

Publisher:

Abstract:

Many studies of quantitative and disease traits in human genetics rely upon self-reported measures. Such measures are based on questionnaires or interviews and are often cheaper and more readily available than the alternatives; however, their precision and potential bias cannot usually be assessed. Here we report a detailed quantitative genetic analysis of stature. We characterise the degree of measurement error by utilising a large sample of Australian twin pairs (857 MZ, 815 DZ) with both clinical and self-reported measures of height. Self-reported height measurements are shown to be more variable than clinical measures, which has led to lowered estimates of heritability in many previous studies of stature. In our twin sample the heritability estimate for clinical height exceeded 90%. Repeated measures analysis shows that 2-3 times as many self-report measures are required to recover heritability estimates similar to those obtained from clinical measures. Bivariate genetic repeated measures analysis of self-report and clinical height measures showed an additive genetic correlation > 0.98. We show that self-reported height is upwardly biased in older individuals and in individuals of short stature. By comparing clinical and self-report measures, we also show that there was a genetic component to females systematically reporting their height incorrectly; this phenomenon appeared not to be present in males. The results from the measurement error analysis were subsequently used to assess the effects of error on the power to detect linkage in a genome scan. A moderate reduction in error (through the use of accurate clinical or multiple self-report measures) increased the effective sample size by 22%; elimination of measurement error led to an increase in effective sample size of 41%.
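
The attenuating effect of self-report measurement error on twin-based heritability estimates, and its recovery through repeated measures, can be sketched with Falconer's formula and the Spearman-Brown correction; the correlations and reliability below are assumed illustrative values, not the study's estimates.

```python
# How measurement error attenuates twin-based heritability estimates
# (Falconer's formula h^2 = 2 * (rMZ - rDZ)); synthetic values for illustration.
def falconer_h2(r_mz, r_dz):
    return 2.0 * (r_mz - r_dz)

# Assumed twin correlations for clinically measured height.
r_mz, r_dz = 0.92, 0.50
print("clinical h^2        ~", falconer_h2(r_mz, r_dz))

# Classical attenuation: with reliability R (share of variance that is not
# measurement error), observed twin correlations shrink to R * r, and so does h^2.
reliability = 0.85
print("self-report h^2     ~", falconer_h2(reliability * r_mz, reliability * r_dz))

# Averaging k independent self-reports raises reliability via Spearman-Brown,
# recovering most of the heritability.
k = 3
rel_k = k * reliability / (1 + (k - 1) * reliability)
print(f"h^2 with {k} reports ~", falconer_h2(rel_k * r_mz, rel_k * r_dz))
```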

Relevance:

80.00%

Publisher:

Abstract:

An inherent incomputability in the specification of a functional language extension that combines assertions with dynamic type checking is isolated in an explicit derivation from mathematical specifications. The combination of types and assertions (into "dynamic assertion-types" - DATs) is a significant issue since, because the two are congruent means for program correctness, benefit arises from their better integration in contrast to the harm resulting from their unnecessary separation. However, projecting the "set membership" view of assertion-checking into dynamic types results in some incomputable combinations. Refinement of the specification of DAT checking into an implementation by rigorous application of mathematical identities becomes feasible through the addition of a "best-approximate" pseudo-equality that isolates the incomputable component of the specification. This formal treatment leads to an improved, more maintainable outcome with further development potential.
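
A loose, non-formal illustration of the "set membership" view of dynamic assertion-types: a value inhabits a DAT when it passes both the dynamic type check and the assertion predicate. The Python sketch below is an analogue of the idea, not the paper's functional-language formalism, and by only checking predicates on concrete values it sidesteps the incomputable combinations the abstract isolates.

```python
# A dynamic assertion-type (DAT) as a base type paired with an assertion
# predicate; membership = dynamic type check + assertion check on the value.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass(frozen=True)
class DAT:
    base: type
    predicate: Callable[[Any], bool]

    def __contains__(self, value: Any) -> bool:
        return isinstance(value, self.base) and self.predicate(value)

NonEmptyStr = DAT(str, lambda s: len(s) > 0)
Probability = DAT(float, lambda p: 0.0 <= p <= 1.0)

assert "abc" in NonEmptyStr
assert 0.3 in Probability
assert 1.5 not in Probability
```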

Relevance:

80.00%

Publisher:

Abstract:

We assess the accuracy of the Visante anterior segment optical coherence tomographer (AS-OCT) and present improved formulas for measurement of surface curvature and axial separation. Measurements are made in physical model eyes. Accuracy is compared for measurements of corneal thickness (d1) and anterior chamber depth (d2) using the built-in AS-OCT software versus the improved scheme. The improved scheme enables measurements of lens thickness (d3) and surface curvature, in the form of conic sections specified by vertex radii and conic constants. These parameters are converted to surface coordinates for error analysis. The built-in AS-OCT software typically overestimates (mean ± standard deviation, SD) d1 by +62 ± 4 μm and d2 by +4 ± 88 μm. The improved scheme reduces the d1 (−0.4 ± 4 μm) and d2 (0 ± 49 μm) errors while also reducing d3 errors from +218 ± 90 μm (uncorrected) to +14 ± 123 μm (corrected). Surface x coordinate errors gradually increase toward the periphery. Considering the central 6-mm zone of each surface, the x coordinate errors for the anterior and posterior corneal surfaces reached +3 ± 10 and 0 ± 23 μm, respectively, with the improved scheme. Those of the anterior and posterior lens surfaces reached +2 ± 22 and +11 ± 71 μm, respectively. Our improved scheme reduced AS-OCT errors and could, therefore, enhance pre- and postoperative assessments of keratorefractive or cataract surgery, including measurement of accommodating intraocular lenses. © 2007 Society of Photo-Optical Instrumentation Engineers.
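
The conversion from conic parameters (vertex radius and conic constant) to surface coordinates, which precedes the surface error analysis, follows the standard conic sag equation; the radii and conic constants in the sketch below are generic illustrative values, not the measured fits.

```python
# Converting a conic surface description (vertex radius R, conic constant Q)
# to surface coordinates via the standard sag equation, then comparing
# measured vs reference coordinates as a toy error analysis.
import numpy as np

def conic_sag(x, R, Q):
    """Axial sag z(x) of a conic section with vertex radius R and conic constant Q."""
    return x ** 2 / (R * (1.0 + np.sqrt(1.0 - (1.0 + Q) * x ** 2 / R ** 2)))

x = np.linspace(-3.0, 3.0, 7)                     # mm, across the central 6-mm zone
anterior_cornea = conic_sag(x, R=7.8, Q=-0.2)     # typical anterior corneal values
posterior_cornea = conic_sag(x, R=6.5, Q=-0.3)

reference = conic_sag(x, R=7.8, Q=-0.2)
measured = anterior_cornea + 0.003                # pretend 3 micrometre systematic offset
print("surface coordinate errors (mm):", measured - reference)
```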

Relevance:

80.00%

Publisher:

Abstract:

This study is concerned with gravity field recovery from low-low satellite-to-satellite range rate data. Separating the orbital planes of the two satellites is predicted to improve the errors associated with certain parts of the geopotential relative to a coplanar mission. Using Hill's equations, an analytical scheme to model the range rate residuals is developed. It is flexible enough to model equally well the residuals between pairs of satellites in the same orbital plane or whose planes are separated in right ascension. The possible benefits of such an orientation for gravity field recovery from range rate data can therefore be analysed, and this is done by means of an extensive error analysis. The results of this analysis show that improvements over an optimal planar mission can be made by separating the satellites in right ascension. Gravity field recoveries are performed in order to verify and gauge the limitations of the analytical model, and to support the results of the error analysis. Finally, the possible problem of the differential decay rates of the two satellites due to the diurnal bulge is evaluated.
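
A generic illustration of the quantities involved: the closed-form solution of Hill's (Clohessy-Wiltshire) equations gives the relative state of the satellite pair, and the range rate is the projection of the relative velocity onto the line of sight. The orbit and initial conditions below are assumed values, and this is the textbook solution rather than the paper's residual scheme.

```python
# Relative satellite motion from Hill's (Clohessy-Wiltshire) equations and the
# resulting inter-satellite range rate (textbook closed-form solution,
# illustrative initial conditions).
import numpy as np

def hill_state(t, n, x0, y0, z0, vx0, vy0, vz0):
    """Closed-form CW solution: x radial, y along-track, z cross-track, n mean motion."""
    s, c = np.sin(n * t), np.cos(n * t)
    x = (4 - 3 * c) * x0 + (vx0 / n) * s + (2 * vy0 / n) * (1 - c)
    y = 6 * (s - n * t) * x0 + y0 + (2 * vx0 / n) * (c - 1) + (vy0 / n) * (4 * s - 3 * n * t)
    z = z0 * c + (vz0 / n) * s
    vx = 3 * n * x0 * s + vx0 * c + 2 * vy0 * s
    vy = 6 * n * (c - 1) * x0 - 2 * vx0 * s + vy0 * (4 * c - 3)
    vz = -z0 * n * s + vz0 * c
    return np.array([x, y, z]), np.array([vx, vy, vz])

n = 2 * np.pi / 5400.0                           # mean motion for a ~90-minute orbit (rad/s)
for ti in np.linspace(0.0, 5400.0, 6):
    r, v = hill_state(ti, n, x0=100.0, y0=-200e3, z0=50e3, vx0=0.0, vy0=0.0, vz0=0.0)
    range_rate = r @ v / np.linalg.norm(r)       # relative velocity projected on the line of sight
    print(f"t = {ti:6.0f} s  range rate = {range_rate:8.3f} m/s")
```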

Relevance:

80.00%

Publisher:

Abstract:

This study extends previous research concerning intervertebral motion registration by means of 2D dynamic fluoroscopy to obtain a more comprehensive 3D description of vertebral kinematics. The problem of estimating the 3D rigid pose of a CT volume of a vertebra from its 2D X-ray fluoroscopic projection is addressed. 2D-3D registration is obtained by maximising a measure of similarity between Digitally Reconstructed Radiographs (obtained from the CT volume) and the real fluoroscopic projection. X-ray energy correction was performed. To assess the method, a calibration model was realised: a dry sheep vertebra was rigidly fixed to a frame of reference including metallic markers. Accurate measurement of 3D orientation was obtained via single-camera calibration of the markers and held as the true 3D vertebra position; the vertebra's 3D pose was then estimated and the results compared. Error analysis revealed an accuracy of the order of 0.1 degree for the rotation angles, of about 1 mm for displacements parallel to the fluoroscopic plane, and of the order of 10 mm for the orthogonal displacement. © 2010 P. Bifulco et al.
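
A toy sketch of the registration principle, maximising a similarity measure between digitally reconstructed radiographs and a target projection; it is deliberately reduced to one rotation parameter and a parallel projection of a synthetic volume, so it only illustrates the idea rather than the study's implementation.

```python
# Intensity-based 2D-3D registration in miniature: render a parallel-projection
# DRR from a rotated volume and maximise normalised cross-correlation over the
# rotation angle (one pose parameter, synthetic volume).
import numpy as np
from scipy.ndimage import gaussian_filter, rotate
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
volume = gaussian_filter(rng.random((32, 32, 32)), sigma=2)   # stand-in for the CT volume

def drr(angle_deg):
    """Parallel-projection DRR after rotating the volume about one axis."""
    rot = rotate(volume, angle_deg, axes=(1, 2), reshape=False, order=1)
    return rot.sum(axis=0)

def ncc(a, b):
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

target = drr(7.5)      # plays the role of the real fluoroscopic projection

# Maximising similarity = minimising the negative NCC over the pose parameter.
res = minimize_scalar(lambda ang: -ncc(drr(ang), target),
                      bounds=(-20.0, 20.0), method="bounded")
print("recovered rotation (degrees):", round(res.x, 2))
```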

Relevance:

80.00%

Publisher:

Abstract:

The major objectives of this dissertation were to develop optimal spatial techniques to model the spatial-temporal changes of the lake sediments and their nutrients from 1988 to 2006, and to evaluate the impacts of the hurricanes that occurred during 1998–2006. The mud zone shrank by about 10.5% from 1988 to 1998, and grew by about 6.2% from 1998 to 2006. Mud areas, volumes and weights were calculated using validated kriging models. From 1988 to 1998, mud thicknesses increased by up to 26 cm in the central lake area, while the mud area and volume decreased by about 13.78% and 10.26%, respectively. From 1998 to 2006, mud depths declined by up to 41 cm in the central lake area and mud volume fell by about 27%. Mud weight increased by up to 29.32% from 1988 to 1998, but fell by over 20% from 1998 to 2006. The reduction of mud sediments is likely due to re-suspension and redistribution by waves and currents produced by large storm events, particularly Hurricanes Frances and Jeanne in 2004 and Wilma in 2005. Regression, kriging, geographically weighted regression (GWR) and regression-kriging models were calibrated and validated for the spatial analysis of the lake's sediment TP and TN. GWR models provide the most accurate predictions for TP and TN based on model performance and error analysis. TP values declined from an average of 651 to 593 mg/kg from 1998 to 2006, especially in the lake's western and southern regions. From 1988 to 1998, TP declined in the northern and southern areas, and increased in the central-western part of the lake. TP weights increased by about 37.99%–43.68% from 1988 to 1998 and decreased by about 29.72%–34.42% from 1998 to 2006. From 1988 to 1998, TN decreased in most areas, especially in the northern and southern lake regions; the western littoral zone had the biggest increase, up to 40,000 mg/kg. From 1998 to 2006, TN declined from an average of 9,363 to 8,926 mg/kg, especially in the central and southern regions, with the biggest increases occurring in the northern lake and southern edge areas. TN weights increased by about 15%–16.2% from 1988 to 1998, and decreased by about 7%–11% from 1998 to 2006.
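
Because ordinary kriging is closely related to Gaussian process regression, the spatial-prediction and error-analysis step can be sketched with scikit-learn's GP regressor and cross-validation on synthetic sediment-TP samples; the coordinates, values, and kernel are assumptions, and the dissertation's kriging and GWR models are not reproduced here.

```python
# Kriging-style spatial interpolation of a sediment nutrient (TP) using Gaussian
# process regression as a stand-in, with cross-validated RMSE as the error analysis.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
xy = rng.uniform(0, 50, size=(200, 2))               # sample coordinates (km), synthetic
tp = 600 + 30 * np.sin(xy[:, 0] / 8) * np.cos(xy[:, 1] / 10) + rng.normal(0, 5, 200)

kernel = 1.0 * RBF(length_scale=10.0) + WhiteKernel(noise_level=25.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)

rmse = np.sqrt(-cross_val_score(gp, xy, tp, cv=5,
                                scoring="neg_mean_squared_error")).mean()
print(f"cross-validated RMSE ~ {rmse:.1f} mg/kg")

# Fit on all samples and predict on a grid to map the TP surface.
gp.fit(xy, tp)
gx, gy = np.meshgrid(np.linspace(0, 50, 25), np.linspace(0, 50, 25))
tp_map, tp_sd = gp.predict(np.column_stack([gx.ravel(), gy.ravel()]), return_std=True)
print("predicted TP range:", tp_map.min().round(1), "-", tp_map.max().round(1), "mg/kg")
```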