92 results for Approximations
Abstract:
GPS observables are subject to several errors. Among them, the systematic errors have the greatest impact, because they degrade the accuracy of the resulting positioning. These errors are mainly related to the GPS satellite orbits, multipath, and atmospheric effects. Recently, a method has been suggested to mitigate these errors: the semiparametric model combined with the penalised least squares technique (PLS). In this method, the errors are modeled as functions varying smoothly in time. This amounts to changing the stochastic model, into which the error functions are incorporated; the results obtained are similar to those achieved by changing the functional model. As a result, the ambiguities and the station coordinates are estimated with better reliability and accuracy than with the conventional least squares method (CLS). In general, the solution requires a shorter data interval, minimizing costs. The performance of the method was analyzed in two experiments using data from single-frequency receivers. The first was carried out on a short baseline, where the main error was multipath; the second used a baseline of 102 km, where the predominant errors were due to ionospheric and tropospheric refraction. In the first experiment, using 5 minutes of data, the largest coordinate discrepancies with respect to the ground truth reached 1.6 cm and 3.3 cm in the h coordinate for the PLS and the CLS, respectively; in the second, also using 5 minutes of data, the discrepancies were 27 cm in h for the PLS and 175 cm for the CLS. These tests also showed a considerable improvement in ambiguity resolution with the PLS relative to the CLS, with a reduced data collection time interval. © Springer-Verlag Berlin Heidelberg 2007.
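The penalised least squares idea above can be sketched in miniature. This is an illustrative smoother, not the paper's semiparametric GPS model: a slowly varying error is represented by a vector f minimising ||y - f||^2 + lam*||D2 f||^2, where D2 is a discrete second-difference operator, so the minimiser solves the linear system (I + lam*D2'D2) f = y. All names and data here are made up.

```python
# Minimal penalised least squares (PLS) smoothing sketch with a
# second-difference roughness penalty; solves (I + lam * D'D) f = y.

def second_diff_matrix(n):
    # Rows [1, -2, 1]: discrete second derivative, shape (n-2, n).
    D = [[0.0] * n for _ in range(n - 2)]
    for i in range(n - 2):
        D[i][i], D[i][i + 1], D[i][i + 2] = 1.0, -2.0, 1.0
    return D

def solve(A, b):
    # Plain Gaussian elimination with partial pivoting.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def pls_smooth(y, lam):
    n = len(y)
    D = second_diff_matrix(n)
    # A = I + lam * D'D
    A = [[(1.0 if i == j else 0.0) +
          lam * sum(D[r][i] * D[r][j] for r in range(n - 2))
          for j in range(n)] for i in range(n)]
    return solve(A, y)

y = [0.0, 1.2, 1.9, 3.1, 4.0, 5.2]   # noisy ramp (synthetic)
f = pls_smooth(y, lam=10.0)
```

With lam = 0 the smoother returns the data unchanged; increasing lam trades fidelity for smoothness, which is the role the penalty plays in the PLS adjustment described above.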
Abstract:
The effect of the ionosphere on the signals of Global Navigation Satellite Systems (GNSS), such as the Global Positioning System (GPS) and the proposed European Galileo, depends on the ionospheric electron density, given by its Total Electron Content (TEC). Time-varying irregularities in the ionospheric density may cause scintillations, which are fluctuations in the phase and amplitude of the signals. Scintillations occur more often at equatorial and high latitudes. They can degrade navigation and positioning accuracy and may cause loss of signal tracking, disrupting safety-critical applications such as marine navigation and civil aviation. This paper addresses the results of initial research carried out on two fronts that are relevant to GNSS users if they are to counter ionospheric scintillations: forecasting and mitigating their effects. On the forecasting front, the dynamics of scintillation occurrence were analysed during the severe ionospheric storm that took place on the evening of 30 October 2003, using data from a network of GPS Ionospheric Scintillation and TEC Monitor (GISTM) receivers set up in Northern Europe. Previous results [1] indicated that GPS scintillations in that region can originate from ionospheric plasma structures from the American sector. In this paper we describe experiments that enabled confirmation of those findings. On the mitigation front, we used the variance of the output error of the GPS receiver DLL (Delay Locked Loop) to modify the least squares stochastic model applied by an ordinary receiver to compute position. This error was modelled, following [2], as a function of the S4 amplitude scintillation index measured by the GISTM receivers. An improvement of up to 21% in relative positioning accuracy was achieved with this technique.
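The mitigation idea, inflating observation variances with the measured S4 and feeding them into the least squares stochastic model, can be sketched as follows. The variance mapping `s4_variance` is a made-up placeholder, not the DLL tracking-error model of [2], and the one-parameter estimator stands in for the full position solution.

```python
# Illustrative weighted least squares: each satellite's weight comes from an
# assumed variance that grows with its S4 amplitude-scintillation index.

def s4_variance(s4, floor=0.01):
    # Hypothetical monotone mapping: stronger scintillation -> larger variance.
    return floor + s4 ** 2 / max(1.0 - s4 ** 2, 1e-6)

def wls_estimate(obs, s4_list):
    # One-parameter toy: estimate a common offset from noisy observations.
    w = [1.0 / s4_variance(s) for s in s4_list]
    return sum(wi * yi for wi, yi in zip(w, obs)) / sum(w)

obs = [10.02, 10.01, 11.50]   # third satellite is badly scintillating
s4 = [0.1, 0.1, 0.9]
est_w = wls_estimate(obs, s4)   # scintillating satellite is down-weighted
est_u = sum(obs) / len(obs)     # unweighted mean for comparison
```

The weighted estimate stays close to the quiet satellites, which is the effect that produced the positioning improvement reported above.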
Abstract:
Aerodynamic balances are employed in wind tunnels to estimate the forces and moments acting on the model under test. This paper proposes a methodology for the assessment of uncertainty in the calibration of an internal multi-component aerodynamic balance. In order to obtain a suitable model to provide aerodynamic loads from the balance sensor responses, a calibration is performed prior to the tests by applying known weights to the balance. A multivariate polynomial fitting by the least squares method is used to interpolate the calibration data points. The uncertainties of both the applied loads and the readings of the sensors are considered in the regression. The data reduction includes the estimation of the calibration coefficients, the predicted values of the load components and their corresponding uncertainties, as well as the goodness of fit.
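A minimal sketch of the fitting step, assuming a first-order multivariate polynomial in two sensor readings; the paper's balance has more components, higher-order terms, and uncertainty propagation, none of which appear here. Data and names are synthetic.

```python
# Least squares fit of a multivariate calibration polynomial via the
# normal equations (X'X) c = X'y, solved by Gaussian elimination.

def fit_ls(X, y):
    n = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(n)]
    M = [A[i] + [b[i]] for i in range(n)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    out = [0.0] * n
    for r in range(n - 1, -1, -1):
        out[r] = (M[r][n] - sum(M[r][k] * out[k] for k in range(r + 1, n))) / M[r][r]
    return out

# Synthetic calibration: applied load = 2 + 0.5*s1 + 1.5*s2 (noise-free).
readings = [(1.0, 0.0), (0.0, 1.0), (2.0, 1.0), (1.0, 3.0), (4.0, 2.0)]
loads = [2 + 0.5 * s1 + 1.5 * s2 for s1, s2 in readings]
X = [[1.0, s1, s2] for s1, s2 in readings]
coeffs = fit_ls(X, loads)   # recovers the calibration coefficients
```

With noise-free data the regression recovers the coefficients exactly; in a real calibration the residuals and the sensor/load uncertainties feed the goodness-of-fit assessment mentioned above.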
Abstract:
This work evaluated kinetic and adsorption physicochemical models for the biosorption of lanthanum, neodymium, europium, and gadolinium by Sargassum sp. in batch systems. The results showed: (a) the pseudo-second-order kinetic model was the best approximation to the experimental data, with the initial metal adsorption velocity parameter in the range 0.042-0.055 mmol g-1 min-1 (La < Nd < Gd < Eu); (b) the Langmuir adsorption model presented adequate correlation, with maximum metal uptake at 0.60-0.70 mmol g-1 (Eu < La < Gd < Nd), while the metal-biomass affinity parameter showed distinct values (Gd < Nd < Eu < La: 183.1, 192.5, 678.3, and 837.3 L g-1, respectively); and (c) preliminarily, the kinetic and adsorption evaluation did not reveal a well-defined metal selectivity behavior for RE biosorption on Sargassum sp., but it did indicate a possible partition among the REs studied. © (2009) Trans Tech Publications.
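The Langmuir fit mentioned in (b) can be sketched with the standard linearised form Ce/q = 1/(qmax*b) + Ce/qmax, fitted by simple linear regression. The numbers below are synthetic, chosen near the reported range; they are not the paper's data.

```python
# Linearised Langmuir isotherm fit: regress Ce/q against Ce, then
# qmax = 1/slope and b = slope/intercept.

def linregress(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) /
             sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

qmax_true, b_true = 0.65, 200.0   # assumed values (mmol/g, L/mmol)
Ce = [0.001, 0.005, 0.01, 0.05, 0.1]                      # equilibrium conc.
q = [qmax_true * b_true * c / (1 + b_true * c) for c in Ce]  # uptake
slope, intercept = linregress(Ce, [c / qi for c, qi in zip(Ce, q)])
qmax_fit = 1.0 / slope
b_fit = slope / intercept
```

Since the synthetic data follow the isotherm exactly, the regression recovers qmax and b; with experimental data the same fit yields the correlation quality discussed in the abstract.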
Abstract:
Since Sharir and Pnueli, algorithms for context-sensitivity have been defined in terms of 'valid' paths in an interprocedural flow graph. The definition of valid paths requires atomic call and ret statements, and encapsulated procedures. Thus, the resulting algorithms are not directly applicable when behavior similar to call and ret instructions may be realized using non-atomic statements, or when procedures do not have rigid boundaries, as with programs in low-level languages like assembly or RTL. We present a framework for context-sensitive analysis that requires neither atomic call and ret instructions nor encapsulated procedures. The framework decouples the transfer-of-control semantics and the context-manipulation semantics of statements. A new definition of context-sensitivity, called stack contexts, is developed. A stack context, defined using trace semantics, is more general than Sharir and Pnueli's interprocedural-path-based calling context. An abstract interpretation based framework is developed to reason about stack contexts and to derive analogues of calling-context based algorithms using stack contexts. The framework is suitable for deriving algorithms for analyzing binary programs, such as malware, that employ obfuscations with the deliberate intent of defeating automated analysis. The framework is used to create a context-sensitive version of Venable et al.'s algorithm for analyzing x86 binaries without requiring that a binary conform to a standard compilation model for maintaining procedures, calls, and returns. Experimental results show that a context-sensitive analysis using stack contexts performs just as well for programs where the use of Sharir and Pnueli's calling context produces correct approximations. However, if those programs are transformed to use call obfuscations, a context-sensitive analysis using stack contexts still provides the same correct results without any additional overhead. © Springer Science+Business Media, LLC 2011.
Abstract:
In this paper, the calculation of the steady-state operation of a radial/meshed electrical distribution system (EDS) by solving a system of linear equations (a non-iterative load flow) is presented. The constant-power demand of the EDS is modeled through linear approximations in terms of the real and imaginary parts of the voltage, taking into account the typical operating conditions of EDSs. To illustrate the use of the proposed set of linear equations, a linear model for the optimal power flow with distributed generators is presented. Results using test and real systems show the excellent performance of the proposed methodology when compared with conventional methods. © 2011 IEEE.
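The spirit of such a linearization can be shown on a hypothetical one-line, two-bus feeder, which is not the paper's formulation: the constant-power demand S = V * conj(I) is replaced by a current frozen at nominal voltage, so the network equation becomes linear in V and needs no iteration.

```python
# Toy non-iterative load flow: linearize the constant-power demand around
# the nominal voltage Vnom, then compare against a fixed-point reference.

Vs = complex(1.0, 0.0)    # source voltage, p.u.
Z = complex(0.01, 0.02)   # line impedance, p.u.
S = complex(0.5, 0.2)     # constant-power demand, p.u.
Vnom = complex(1.0, 0.0)

# Linear approximation: I ~= conj(S) / conj(Vnom)  (current at nominal V)
I_lin = S.conjugate() / Vnom.conjugate()
V_lin = Vs - Z * I_lin    # one linear step, no iteration

# Reference: exact constant-power solution by fixed-point iteration
V = Vs
for _ in range(50):
    V = Vs - Z * (S / V).conjugate()
```

Near nominal operating conditions the one-shot linear voltage stays close to the iterated solution, which is what makes linear load-flow models useful inside optimization problems such as the optimal power flow mentioned above.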
Abstract:
The aim of this work is to evaluate the influence of point measurements in images with subpixel accuracy and their contribution to the calibration of digital cameras. The effect of subpixel measurements on the 3D coordinates of check points in object space is also evaluated. For this purpose, an algorithm that allows subpixel accuracy, based on the Förstner operator, was implemented for the semi-automatic determination of points of interest. Experiments were carried out with a block of images acquired with the DuncanTech MS3100-CIR multispectral camera. The influence of subpixel measurements on the adjustment by the Least Squares Method (LSM) was evaluated by comparing the estimated standard deviations of the parameters in both situations: manual measurement (pixel accuracy) and subpixel estimation. Additionally, the influence of subpixel measurements on the 3D reconstruction was also analyzed. Based on the obtained results, i.e., on the quantified reduction of the standard deviation of the Inner Orientation Parameters (IOP) and of the relative error of the 3D reconstruction, it was shown that measurements with subpixel accuracy are relevant for some tasks in Photogrammetry, mainly those in which metric quality is of great relevance, such as camera calibration.
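As a generic illustration of subpixel refinement (not the Förstner operator itself), the peak of an interest-measure response can be localised between pixels by fitting a parabola through three neighbouring samples:

```python
def subpixel_peak(r_left, r_mid, r_right):
    # Fit a parabola through three samples and return the vertex offset
    # relative to the centre pixel (in (-0.5, 0.5) for a true peak).
    denom = r_left - 2.0 * r_mid + r_right
    return 0.5 * (r_left - r_right) / denom

# Response sampled from f(x) = -(x - 0.3)**2 at integer pixels -1, 0, 1;
# for an exactly quadratic response the vertex is recovered exactly.
vals = [-(x - 0.3) ** 2 for x in (-1, 0, 1)]
offset = subpixel_peak(*vals)
```

Refinements of this kind shrink the measurement standard deviation below one pixel, which is what drives the IOP and 3D-reconstruction improvements reported above.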
Abstract:
The objective of this paper is to present a methodology to estimate transmission line parameters. The method is applied to a single-phase transmission line using least squares. The longitudinal and transversal parameters of the line are obtained as a function of a set of measurements of currents and voltages (as well as their derivatives with respect to time) at the terminals of the line during the occurrence of a phase-to-ground short circuit near the load. The method is based on the assumption that a transmission line can be represented by a single π circuit. The results show that the precision of the method depends on the length of the line, performing better for short and medium-length lines. © 2012 IEEE.
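The least squares step can be sketched for the series branch of the π model, dv = R*i + L*di/dt, ignoring the shunt elements the full method also estimates; the measurements below are synthetic and noise-free.

```python
# Two-parameter least squares (closed-form normal equations) for the
# series R, L of a line from voltage-drop and current samples.

def fit_rl(i, didt, dv):
    a11 = sum(x * x for x in i)
    a12 = sum(x * y for x, y in zip(i, didt))
    a22 = sum(y * y for y in didt)
    b1 = sum(x * z for x, z in zip(i, dv))
    b2 = sum(y * z for y, z in zip(didt, dv))
    det = a11 * a22 - a12 * a12
    R = (b1 * a22 - b2 * a12) / det
    L = (a11 * b2 - a12 * b1) / det
    return R, L

R_true, L_true = 0.1, 0.002         # assumed ohms and henries
i = [10.0, 20.0, 15.0, 5.0]         # current samples
didt = [1000.0, -500.0, 200.0, 800.0]   # current derivatives
dv = [R_true * x + L_true * y for x, y in zip(i, didt)]  # voltage drops
R_est, L_est = fit_rl(i, didt, dv)
```

With noise-free samples the fit recovers R and L exactly; with real fault records the residuals reflect the length-dependent modelling error of the single π circuit noted above.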
Abstract:
This paper proposes that graphemes - the constituents of words in situations of discursive construction - be granted the status of meaning-carrying units, in much the same way Vigotski granted the phoneme such status in relation to the spoken word. This paper also seeks to analyze these units in singular manifestations in acts of appropriation of the written language, based on data produced by a six-year-old child in a discursive situation. To perform this task, we also drew on Bakhtinian studies on the role of the other in our relation with language. Since the analysis indicated spelling approximations to records found in old Portuguese, research on the historical grammar of Portuguese was also used. The findings indicate the diversity of reference sources for the child's letter selection, according to the letter's function in the composition of the word.
Abstract:
The problem of reconfiguration of distribution systems considering the presence of distributed generation is modeled as a mixed-integer linear programming (MILP) problem in this paper. The demands of the electric distribution system are modeled through linear approximations in terms of the real and imaginary parts of the voltage, taking into account typical operating conditions of the electric distribution system. The use of an MILP formulation has the following benefits: (a) a robust mathematical model that is equivalent to the mixed-integer non-linear programming model; (b) efficient computational behavior with existing MILP solvers; and (c) guaranteed convergence to optimality using classical optimization techniques. Results from one test system and two real systems show the excellent performance of the proposed methodology compared with conventional methods. © 2012 Published by Elsevier B.V. All rights reserved.
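To see what the reconfiguration problem searches over, a toy brute-force sketch is shown below: enumerate switch states that keep the network radial (a spanning tree rooted at the substation) and pick the one with the smallest loss proxy. This illustrates only the feasible set; the paper solves the problem with an MILP formulation, not enumeration, and the data here are made up.

```python
from itertools import combinations

nodes = [0, 1, 2, 3]   # node 0 is the substation
# (from, to, resistance-as-loss-proxy) for each switchable branch
branches = [(0, 1, 0.5), (1, 2, 0.4), (2, 3, 0.6), (0, 3, 0.3), (1, 3, 0.9)]

def is_spanning_tree(edges, n):
    # Union-find cycle check; n-1 acyclic edges over n nodes form a tree.
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v, _ in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False   # closing a loop breaks radiality
        parent[ru] = rv
    return len(edges) == n - 1

best = min((sum(r for *_, r in cfg), cfg)
           for cfg in combinations(branches, len(nodes) - 1)
           if is_spanning_tree(cfg, len(nodes)))
```

Enumeration explodes combinatorially on real feeders, which is precisely why the MILP model with its optimality guarantee is attractive.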
Abstract:
In this work we study two different spin-boson models. These models are generalizations of the Dicke model, i.e., they describe systems of N identical two-level atoms coupled to a single-mode quantized bosonic field, assuming the rotating wave approximation. In the first model, we consider the wavelength of the bosonic field to be of the order of the linear dimension of the material composed of the atoms, and therefore take into account the spatial sinusoidal form of the bosonic field. The second model is the Thompson model, in which we consider the presence of phonons in the material. We study the finite-temperature properties of the models using the path integral approach and functional methods. In the thermodynamic limit, N→∞, the systems exhibit phase transitions from the normal to the superradiant phase at critical values of the temperature and the coupling constant. We find the asymptotic behavior of the partition functions and the collective spectra of the systems in the normal and superradiant phases. We observe that the collective spectra have zero-energy values in the superradiant phases, corresponding to the Goldstone mode associated with the continuous symmetry breaking of the models. Our analysis and results are valid in the limit of zero temperature, β→∞, where the models exhibit quantum phase transitions. © 2013 Elsevier B.V. All rights reserved.
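For reference, the single-mode Dicke Hamiltonian in the rotating wave approximation (the Tavis-Cummings form, with ℏ = 1) that these models generalize reads, in standard textbook notation rather than the paper's:

```latex
H = \omega\, a^{\dagger} a
  + \frac{\omega_0}{2} \sum_{j=1}^{N} \sigma_j^{z}
  + \frac{\lambda}{\sqrt{N}} \sum_{j=1}^{N}
    \left( a\, \sigma_j^{+} + a^{\dagger} \sigma_j^{-} \right)
```

Here ω is the field frequency, ω₀ the atomic level splitting, and λ the collective coupling; the counter-rotating terms a σ⁻ and a† σ⁺ are dropped under the rotating wave approximation.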
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Pós-graduação em Estudos Linguísticos - IBILCE