986 results for Bounded relative error
Abstract:
The GPS observables are subject to several errors. Among them, the systematic errors have great impact, because they degrade the accuracy of the resulting positioning. These errors are mainly related to GPS satellite orbits, multipath, and atmospheric effects. Recently, a method has been suggested to mitigate these errors: the semiparametric model and the penalised least squares technique (PLS). In this method, the errors are modeled as functions varying smoothly in time. This amounts to changing the stochastic model, into which the error functions are incorporated; the results are similar to those obtained by changing the functional model. As a result, the ambiguities and the station coordinates are estimated with better reliability and accuracy than with the conventional least squares method (CLS). In general, the solution requires a shorter data interval, minimizing costs. The method's performance was analyzed in two experiments, using data from single-frequency receivers. The first was carried out with a short baseline, where the main error was multipath. In the second experiment, a baseline of 102 km was used; in this case, the predominant errors were due to ionospheric and tropospheric refraction. In the first experiment, using 5 minutes of data collection, the largest coordinate discrepancies relative to the ground truth reached 1.6 cm and 3.3 cm in the h coordinate for the PLS and the CLS, respectively. In the second, also using 5 minutes of data, the discrepancies were 27 cm in h for the PLS and 175 cm in h for the CLS. These tests also showed a considerable improvement in ambiguity resolution using the PLS relative to the CLS, with a reduced data collection time interval. © Springer-Verlag Berlin Heidelberg 2007.
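In the spirit of the method described above, the following is a minimal sketch of penalized least squares smoothing, assuming a second-difference roughness penalty for the smoothly varying error functions; the paper's actual GPS observation model and penalty choice are not reproduced here, and all names and values are illustrative.

```python
import numpy as np

def penalized_least_squares(y, lam):
    """Fit a smooth function f to observations y by minimizing
    ||y - f||^2 + lam * ||D2 f||^2, where D2 is the second-difference
    operator (a common roughness penalty for smooth-in-time errors)."""
    n = len(y)
    D2 = np.diff(np.eye(n), n=2, axis=0)      # rows of [1, -2, 1]
    # Normal equations of the penalized problem: (I + lam * D2'D2) f = y
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, y)

# Toy usage: recover a smooth multipath-like error buried in noise
t = np.linspace(0, 300, 300)                        # 5 minutes of ~1 Hz epochs
truth = 0.02 * np.sin(2 * np.pi * t / 300)          # smooth systematic error (m)
obs = truth + np.random.normal(0, 0.005, t.size)    # plus measurement noise
smooth = penalized_least_squares(obs, lam=1e3)
print(f"RMS distance to true error: {np.sqrt(np.mean((smooth - truth)**2)):.4f} m")
```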
Abstract:
This study aimed to assess measurements of temperature and relative humidity obtained with a HOBO data logger under various conditions of exposure to solar radiation, comparing them with those obtained with a temperature/relative humidity probe and a copper-constantan thermocouple psychrometer, which are considered the standards for such measurements. Data were collected over a 6-day period (25 March to 1 April 2010), during which the equipment was monitored continuously and simultaneously. We employed the following combinations of equipment and conditions: a HOBO data logger in full sunlight; a HOBO data logger shielded within a white plastic cup with windows for air circulation; a HOBO data logger shielded within a gill-type shelter (a multi-plate plastic prototype); a copper-constantan thermocouple psychrometer exposed to natural ventilation and protected from sunlight; and a temperature/relative humidity probe under a commercial multi-plate radiation shield. Comparisons between the measurements obtained with the various devices were made on the basis of statistical indicators: linear regression, with coefficient of determination; index of agreement; maximum absolute error; and mean absolute error. The prototype multi-plate (gill-type) shelter used to protect the HOBO data logger was found to provide the best protection against the effects of solar radiation on measurements of temperature and relative humidity. The precision and accuracy of a device that measures temperature and relative humidity depend on an efficient shelter that minimizes the interference caused by solar radiation, thereby avoiding erroneous analysis of the data obtained.
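As a concrete illustration of these indicators, the sketch below computes them for two temperature series, assuming the common Willmott formulation of the index of agreement; the sample values are invented, not data from the study.

```python
import numpy as np

def comparison_stats(reference, test):
    """Indicators used to compare two sensor series: linear regression
    (slope, intercept, R^2), Willmott's index of agreement, maximum
    absolute error, and mean absolute error."""
    o, p = np.asarray(reference, float), np.asarray(test, float)
    slope, intercept = np.polyfit(o, p, 1)
    r2 = np.corrcoef(o, p)[0, 1] ** 2
    d = 1 - np.sum((p - o) ** 2) / np.sum(
        (np.abs(p - o.mean()) + np.abs(o - o.mean())) ** 2)
    return {"slope": slope, "intercept": intercept, "r2": r2,
            "agreement_d": d,
            "max_abs_error": np.max(np.abs(p - o)),
            "mean_abs_error": np.mean(np.abs(p - o))}

# Invented example: psychrometer reference vs. sheltered HOBO readings (deg C)
ref  = np.array([24.1, 25.3, 27.8, 30.2, 29.5, 26.0])
hobo = np.array([24.4, 25.1, 28.3, 30.9, 29.9, 26.4])
print(comparison_stats(ref, hobo))
```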
Abstract:
The scope of this study was to estimate calibrated values for dietary data obtained by the Food Frequency Questionnaire for Adolescents (FFQA) and to illustrate the effect of this approach on food consumption data. The adolescents were assessed on two occasions, with an average interval of twelve months. In 2004, 393 adolescents participated, and 289 of them were reassessed in 2005. Dietary data obtained by the FFQA were calibrated using regression coefficients estimated from the average of two 24-hour recalls (24HR) in a subsample. The calibrated values were similar to the 24HR reference measurements in the subsample. In both 2004 and 2005, a significant difference was observed between the average consumption levels of the FFQA before and after calibration for all nutrients. With the use of calibrated data, the proportion of schoolchildren with fiber intake below the recommended level increased. Calibrated data can therefore be used to obtain adjusted associations, owing to the reclassification of subjects within the predetermined categories.
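The calibration step itself is a single regression; below is a minimal sketch under the assumption of simple univariate linear calibration, with invented intake values.

```python
import numpy as np

def calibrate_ffq(ffq_sub, recall24h_sub, ffq_all):
    """Regression calibration: in the subsample, regress the reference
    measure (mean of two 24-hour recalls) on the FFQ value, then apply
    the fitted coefficients to all FFQ observations."""
    slope, intercept = np.polyfit(ffq_sub, recall24h_sub, 1)
    return intercept + slope * np.asarray(ffq_all, float)

# Invented fiber intakes (g/day): subsample pairs, then the full survey
ffq_sub = np.array([18.0, 25.0, 31.0, 40.0, 22.0])
r24_sub = np.array([14.0, 19.0, 23.0, 28.0, 17.0])
ffq_all = np.array([20.0, 35.0, 27.0])
print(calibrate_ffq(ffq_sub, r24_sub, ffq_all))   # calibrated values
```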
Abstract:
While beneficially decreasing the necessary incision size, arthroscopic hip surgery increases surgical complexity due to the loss of joint visibility. To ease this difficulty, a computer-aided mechanical navigation system was developed to present the location of the surgical tool relative to the patient's hip joint. A preliminary study reduced the position error of the tracking linkage in a limited set of static testing trials. In this study, a correction method, including a rotational correction factor and a length correction function, was developed through more in-depth static testing. The correction method was then applied to additional static and dynamic testing trials to evaluate its effectiveness. For static testing, the position error decreased from an average of 0.384 inches to 0.153 inches, an error reduction of 60.5%. Three parameters used to quantify error reduction in dynamic testing did not show consistent results. The vertex coordinates achieved a 29.4% error reduction, though with large variation at the upper vertex. The triangular area error was reduced by 5.37%, though inconsistently across the five dynamic trials. The vertex angle error increased, indicating shape torsion under the developed correction method. While the established correction method effectively and consistently reduced position error in static testing, it did not produce consistent results in dynamic trials. More dynamic parameters should be explored to quantify error reduction in dynamic testing, and a more in-depth dynamic testing methodology should be employed to further improve the accuracy of the computer-aided navigation system.
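The study's correction factor and function are not given in the abstract, so the sketch below pairs the straightforward percent-reduction computation with a purely hypothetical rotational-plus-length correction of tracked planar points.

```python
import numpy as np

def percent_error_reduction(before, after):
    """Percent reduction in mean position error after a correction."""
    return 100.0 * (np.mean(before) - np.mean(after)) / np.mean(before)

def apply_correction(xy, theta, length_fn):
    """Hypothetical stand-in for the correction method: rotate tracked
    points by a constant offset theta, then remap their radial
    distance through length_fn."""
    c, s = np.cos(theta), np.sin(theta)
    rotated = xy @ np.array([[c, -s], [s, c]]).T
    r = np.linalg.norm(rotated, axis=1, keepdims=True)
    return rotated * length_fn(r) / np.where(r == 0.0, 1.0, r)

pts = np.array([[1.0, 0.0], [0.0, 2.0]])
print(apply_correction(pts, theta=0.02, length_fn=lambda r: 0.98 * r))

# Reported static means: 0.384 in before, 0.153 in after; the paper's
# 60.5% figure presumably averages per-trial reductions.
print(f"{percent_error_reduction([0.384], [0.153]):.1f}% error reduction")
```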
Abstract:
To estimate a parameter in an elliptic boundary value problem, the method of equation error chooses the value that minimizes the error in the PDE and boundary condition (the solution of the BVP having been replaced by a measurement). The estimated parameter converges to the exact value as the measured data converge to the exact value, provided Tikhonov regularization is used to control the instability inherent in the problem. The error in the estimated solution can be bounded in an appropriate quotient norm; estimates can be derived for both the underlying (infinite-dimensional) problem and a finite-element discretization that can be implemented in a practical algorithm. Numerical experiments demonstrate the efficacy and limitations of the method.
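A minimal 1D finite-difference sketch of the idea, assuming the model problem $-(a(x)u')' = f$ with measured $u$: since the PDE residual is linear in the unknown coefficient, the Tikhonov-regularized equation-error estimate reduces to one linear least-squares solve. The discretization, regularization operator, and test problem are illustrative choices, not those of the paper.

```python
import numpy as np

def equation_error_estimate(x, u_meas, f_vals, alpha):
    """Equation error method for -(a(x) u')' = f with u measured:
    the residual is linear in a, so the estimate is a single
    Tikhonov-regularized least-squares solve (first-difference
    regularization operator used here)."""
    n = len(x); h = x[1] - x[0]
    m = n - 1                                  # a lives on cell midpoints
    A = np.zeros((n - 2, m))
    for i in range(1, n - 1):                  # interior residual rows
        A[i - 1, i - 1] = (u_meas[i] - u_meas[i - 1]) / h**2
        A[i - 1, i] = -(u_meas[i + 1] - u_meas[i]) / h**2
    D1 = np.diff(np.eye(m), axis=0)            # Tikhonov smoothing operator
    return np.linalg.solve(A.T @ A + alpha * D1.T @ D1, A.T @ f_vals[1:-1])

# Toy problem: a(x) = 1 + x, u = sin(pi x)  =>  f = -((1+x) u')'
x = np.linspace(0.0, 1.0, 101)
u = np.sin(np.pi * x)
f_vals = np.pi**2 * (1 + x) * np.sin(np.pi * x) - np.pi * np.cos(np.pi * x)
a_hat = equation_error_estimate(x, u + 1e-5 * np.random.randn(x.size),
                                f_vals, alpha=10.0)
xm = 0.5 * (x[:-1] + x[1:])
# Identifiability is weakest where u' ~ 0 (x = 0.5); noisier data needs larger alpha.
print(f"max error in recovered a(x): {np.max(np.abs(a_hat - (1 + xm))):.3g}")
```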
Abstract:
Environmental data sets of pollutant concentrations in air, water, and soil frequently include unquantified sample values reported only as being below the analytical method detection limit. These values, referred to as censored values, should be considered in the estimation of distribution parameters, as each represents some value of pollutant concentration between zero and the detection limit. Most of the currently accepted methods for estimating the population parameters of environmental data sets containing censored values rely upon the assumption of an underlying normal (or transformed normal) distribution. This assumption can result in unacceptable levels of error in parameter estimation due to the unbounded left tail of the normal distribution. With the beta distribution, which is bounded over the same range as a scaled distribution of concentrations, $[0 \le x \le 1]$, parameter estimation errors resulting from improper distribution bounds are avoided. This work developed a method that uses the beta distribution to estimate population parameters from censored environmental data sets and evaluated its performance in comparison to currently accepted methods that rely upon an underlying normal (or transformed normal) distribution. Data sets were generated assuming typical values encountered in environmental pollutant evaluation for the mean, standard deviation, and number of variates. For each set of model values, data sets were generated assuming that the data were distributed normally, lognormally, or according to a beta distribution. For varying levels of censoring, two established methods of parameter estimation, regression on normal ordered statistics and regression on lognormal ordered statistics, were used to estimate the known mean and standard deviation of each data set. The method developed for this study, employing a beta distribution assumption, was also used to estimate parameters, and the relative accuracy of all three methods was compared. For data sets of all three distribution types, and for censoring levels up to 50%, the performance of the new method equaled, if not exceeded, that of the two established methods. Because of its robustness in parameter estimation regardless of distribution type or censoring level, the method employing the beta distribution should be considered for full development in estimating parameters for censored environmental data sets.
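The dissertation's estimator is not spelled out in the abstract; the sketch below shows one standard way to fit a bounded (beta) distribution to left-censored data, namely censored maximum likelihood, in which each nondetect contributes the CDF at the detection limit to the likelihood. All data are synthetic.

```python
import numpy as np
from scipy import stats, optimize

def beta_mle_censored(detected, n_censored, dl):
    """Fit a beta distribution (support [0, 1]) to left-censored data:
    detected values enter through the log-pdf; each value below the
    detection limit dl contributes log CDF(dl) to the likelihood."""
    def nll(params):
        a, b = np.exp(params)            # keep shape parameters positive
        return -(np.sum(stats.beta.logpdf(detected, a, b))
                 + n_censored * stats.beta.logcdf(dl, a, b))
    res = optimize.minimize(nll, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
    a, b = np.exp(res.x)
    mean = a / (a + b)
    sd = np.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
    return mean, sd

# Synthetic concentrations scaled to [0, 1], censored below dl = 0.05
rng = np.random.default_rng(0)
sample = rng.beta(0.8, 8.0, size=200)
dl = 0.05
detected = sample[sample >= dl]
print(beta_mle_censored(detected, np.sum(sample < dl), dl))
```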
Abstract:
Cramér Rao Lower Bounds (CRLB) have become the standard for expressing uncertainties in quantitative MR spectroscopy. If properly interpreted as a lower threshold of the error associated with model fitting, and if the limits of its estimation are respected, the CRLB is certainly a very valuable tool for gauging minimal uncertainties in magnetic resonance spectroscopy (MRS), although other sources of error may be larger. Unfortunately, it has also become standard practice to use the relative CRLB, expressed as a percentage of the presently estimated area or concentration value, as an unsupervised exclusion criterion for poor-quality spectra. It is shown that such quality filtering with widely used threshold levels of 20% to 50% CRLB readily causes bias in the estimated mean concentrations of cohort data, leading to wrong or missed statistical findings and, if applied rigorously, to the failure of MRS as a clinical instrument for diagnosing diseases characterized by low metabolite levels. Instead, absolute CRLB in comparison to those of the normal group, or CRLB in relation to normal metabolite levels, may be more useful as quality criteria. Magn Reson Med, 2015. © 2015 Wiley Periodicals, Inc.
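The bias mechanism is easy to reproduce in a toy simulation: if the absolute fitting uncertainty is roughly constant across subjects, the relative CRLB is largest precisely for the low concentration estimates, so a 20% threshold preferentially discards them. All numbers below are invented.

```python
import numpy as np

# Sketch of the selection bias: thresholding on relative CRLB removes
# exactly the low-concentration estimates and inflates the cohort mean.
rng = np.random.default_rng(1)
true_conc = rng.normal(2.0, 0.8, size=10_000).clip(min=0.05)  # low-level metabolite
crlb_abs = 0.4                                   # ~constant absolute uncertainty
estimates = true_conc + rng.normal(0, crlb_abs, true_conc.size)
rel_crlb = 100 * crlb_abs / np.abs(estimates)    # reported as % of estimate

kept = estimates[rel_crlb <= 20]                 # common exclusion rule
print(f"mean of all estimates : {estimates.mean():.2f}")
print(f"mean after 20% filter : {kept.mean():.2f}  (biased upward)")
```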
Abstract:
Imprecise manipulation of source code (semi-parsing) is useful for tasks such as robust parsing, error recovery, lexical analysis, and rapid development of parsers for data extraction. An island grammar precisely defines only a subset of a language syntax (islands), while the rest of the syntax (water) is defined imprecisely. Usually, water is defined as the negation of islands. Albeit simple, such a definition of water is naive and impedes composition of islands. When developing an island grammar, sooner or later a programmer has to create water tailored to each individual island. Such an approach is fragile, however, because water can change with any change of a grammar. It is time-consuming, because water is defined manually by a programmer and not automatically. Finally, an island surrounded by water cannot be reused because water has to be defined for every grammar individually. In this paper we propose a new technique of island parsing - bounded seas. Bounded seas are composable, robust, reusable and easy to use because island-specific water is created automatically. We integrated bounded seas into a parser combinator framework as a demonstration of their composability and reusability.
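A toy sketch of the bounded-sea idea (not the authors' parser combinator framework): water on either side of an island is generated automatically by scanning until the island, and then the sea's right boundary, can parse, so the island itself needs no hand-written water.

```python
import re

def token(pattern):
    """A primitive parser: match a regex at the given position."""
    rx = re.compile(pattern)
    def parse(text, pos):
        m = rx.match(text, pos)
        return (m.group(), m.end()) if m else None
    return parse

def sea(island, boundary):
    """water-island-water: skip characters until `island` matches, then
    skip again until `boundary` (whatever follows the sea) matches, so
    the water is derived automatically rather than written by hand."""
    def parse(text, pos):
        i = pos
        while i <= len(text):                 # water before the island
            hit = island(text, i)
            if hit:
                value, j = hit
                while j <= len(text):         # water after the island
                    if boundary(text, j):
                        return value, j       # stop just before the boundary
                    j += 1
                return None
            i += 1
        return None
    return parse

# Extract a class header from noisy source text, ignoring everything else
class_island = token(r"class\s+\w+")
brace_boundary = token(r"\{")
src = "/* noise */ public final class Point { int x; }"
print(sea(class_island, brace_boundary)(src, 0))   # ('class Point', <pos of '{'>)
```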
Abstract:
The Astronomical Institute of the University of Bern (AIUB) is conducting several search campaigns for orbital debris. The debris objects are discovered during systematic survey observations. In general, only a short observation arc, or tracklet, is available for most of these objects. From this discovery tracklet a first orbit determination is computed in order to be able to find the object again in subsequent follow-up observations. The additional observations are used in the orbit improvement process to obtain accurate orbits to be included in a catalogue. In this paper, the accuracy of the initial orbit determination is analyzed. This depends on a number of factors: tracklet length, number of observations, type of orbit, astrometric error, and observation geometry. The latter is characterized by both the position of the object along its orbit and the location of the observing station. Different positions involve different distances from the target object and a different observing angle with respect to its orbital plane and trajectory. The present analysis aims at optimizing the geometry of the discovery observations depending on the considered orbit.
Abstract:
We used a controlled CO2 perturbation experiment to test hypotheses about changes in diversity, composition and structure of soft-bottom intertidal macrobenthic assemblages, under realistic and locally relevant scenarios of seawater acidification. Patches of undisturbed sediment were collected from 2 types of intertidal sedimentary habitat in the Ria Formosa coastal lagoon (South Portugal) and exposed to 2 levels of seawater acidification (pH reduced by 0.3 and 0.6 units) and 1 unmanipulated (control) level. After 75 d the assemblages differed significantly between the 2 types of sediment and between field controls and the ex situ treatments, but not among the 3 pH levels tested. The naturally high values of total alkalinity buffered seawater from the changes imposed on carbonate chemistry and may have contributed to offsetting acidification at the local scale. Observed differences on biota were strongly related to the organic matter content and grain-size of the sediments, particularly to the fractions of medium and coarse sand. Soft-bottom intertidal macrofauna was significantly affected by the stress of being held in an artificial environment, but not by CO2-induced seawater acidification. Given the previously observed variations in the sensitivities of marine organisms to seawater acidification, direct extrapolations of the present findings to different regions or other types of assemblages do not seem advisable. However, the contribution of ex situ studies to the assessment of ecosystem-level responses to environmental disturbances could generally be improved by incorporating adequate field controls in the experimental design.
Abstract:
The purpose of this thesis is the implementation of efficient grid adaptation methods based on the adjoint equations within the framework of finite volume methods (FVM) for unstructured grid solvers. The adjoint-based methodology aims at adapting grids to improve the accuracy of a functional output of interest, such as the aerodynamic drag or lift. The adjoint methodology is based on a posteriori functional error estimation using the adjoint/dual-weighted residual (DWR) method. In this method the error in a functional output can be directly related to local residual errors of the primal solution through the adjoint variables, which are obtained by solving the corresponding adjoint problem for the chosen functional. The common approach to introducing the DWR method within the FVM framework involves the use of an auxiliary embedded grid obtained by uniform refinement of the initial grid. The storage of this mesh demands high computational resources, i.e. over one order of magnitude increase in memory relative to the initial problem for 3D cases. In this thesis, an alternative methodology for adapting the grid is proposed. Specifically, the DWR approach for error estimation is re-formulated on a coarser mesh level, using the τ-estimation method to approximate the truncation error. An output-based adaptive algorithm is then designed in such a way that the basic ingredients of the standard adjoint method are retained but the associated computational cost is significantly reduced. The standard and the newly proposed adjoint-based adaptive methodologies have been incorporated into a flow solver commonly used in the EU aeronautical industry. The influence of different numerical settings has been investigated. Finally, the proposed method has been compared against other grid adaptation approaches, and its computational efficiency has been demonstrated on a set of representative aeronautical test cases.
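For reference, here is a minimal 1D sketch of the standard DWR estimate on an embedded refined grid, i.e. the memory-hungry variant that the thesis replaces with a coarser-level τ-estimation; the model problem, functional, and discretization are illustrative only.

```python
import numpy as np

def poisson_matrix(n, h):
    """Central-difference matrix for -u'' on n interior nodes."""
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def dwr_error_estimate(f, g, n_coarse=20):
    """DWR estimate for J(u) = int g u dx with -u'' = f, u(0)=u(1)=0:
    weight the fine-grid residual of the injected coarse solution
    with the fine-grid adjoint solution."""
    n_fine = 2 * n_coarse + 1                       # embedded refined grid
    xc = np.linspace(0, 1, n_coarse + 2)[1:-1]; hc = xc[1] - xc[0]
    xf = np.linspace(0, 1, n_fine + 2)[1:-1];   hf = xf[1] - xf[0]
    u_c = np.linalg.solve(poisson_matrix(n_coarse, hc), f(xc))   # primal
    Af = poisson_matrix(n_fine, hf)
    z_f = np.linalg.solve(Af, g(xf))                             # adjoint
    u_inj = np.interp(xf, np.concatenate(([0.0], xc, [1.0])),
                      np.concatenate(([0.0], u_c, [0.0])))
    eta = hf * z_f * (f(xf) - Af @ u_inj)    # local adaptation indicators
    return eta.sum(), np.abs(eta)

f = lambda x: np.pi**2 * np.sin(np.pi * x)   # exact u = sin(pi x)
g = lambda x: np.ones_like(x)                # J(u) = int u dx = 2/pi
estimate, indicators = dwr_error_estimate(f, g)
print(f"estimated functional error of the coarse solution: {estimate:.2e}")
```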
Abstract:
Imprecise manipulation of source code (semi-parsing) is useful for tasks such as robust parsing, error recovery, lexical analysis, and rapid development of parsers for data extraction. An island grammar precisely defines only a subset of a language syntax (islands), while the rest of the syntax (water) is defined imprecisely. Usually, water is defined as the negation of islands. Albeit simple, such a definition of water is naive and impedes composition of islands. When developing an island grammar, sooner or later a language engineer has to create water tailored to each individual island. Such an approach is fragile, because water can change with any change of a grammar. It is time-consuming, because water is defined manually by an engineer and not automatically. Finally, an island surrounded by water cannot be reused because water has to be defined for every grammar individually. In this paper we propose a new technique of island parsing - bounded seas. Bounded seas are composable, robust, reusable and easy to use because island-specific water is created automatically. Our work focuses on applications of island parsing to data extraction from source code. We have integrated bounded seas into a parser combinator framework as a demonstration of their composability and reusability.