952 results for Post processing


Relevance:

60.00%

Publisher:

Abstract:

The main topic of this paper is the Ambiguity Dilution of Precision, known as ADOP. Basically, ADOP is a diagnostic measure for assessing the precision of the float ambiguities. Among its several uses, ADOP can help predict the behavior of a baseline or a network of GNSS receivers with regard to ambiguity resolution, either in real time (instantaneously) or in post-processing mode. The main advantage of ADOP is that a closed-form analytical expression can be derived for it, accounting for the various factors that affect ambiguity resolution. Furthermore, ADOP is related to the success rate of ambiguity resolution. The expressions used here take several factors into account, for example, a priori information on the measurement precision of the GNSS carrier phase and pseudorange, the number of stations and satellites, the number of available frequencies, and the behavior of the atmosphere (ionosphere and troposphere). Several scenarios were established to analyze the impact of each factor on ambiguity resolution, within the context of some stations of the São Paulo GNSS network (GNSS-SP).
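
For reference, the closed-form diagnostic the abstract refers to is usually written as follows (this is the standard textbook formulation, which may differ in detail from the exact expressions used in the paper):

\mathrm{ADOP} = \lvert Q_{\hat{a}} \rvert^{\frac{1}{2n}} \ \text{(cycles)}, \qquad P_s \approx \left[ 2\,\Phi\!\left( \frac{1}{2\,\mathrm{ADOP}} \right) - 1 \right]^{n}

where Q_{\hat{a}} is the n-by-n variance-covariance matrix of the float ambiguities, P_s is the ambiguity resolution success rate, and \Phi is the standard normal cumulative distribution function. Small ADOP values (roughly below 0.12 cycles) correspond to a success rate close to one.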

Relevance:

60.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

60.00%

Publisher:

Abstract:

The use of Global Positioning System technology has brought about a real revolution in surveying and georeferencing techniques, which are the basis for many other studies relevant to the various fields of Geography. In this sense, there was massive growth in the use of GNSS receivers in Brazil from the early 2000s, due to the obligation to georeference rural properties in order to comply with Law 10.267/2001. Georeferencing requires high-accuracy receivers, and most of the time two receivers are used: one static base and one rover. To adjust the base (in order to correct errors), two approaches are used: post-processing via the Brazilian GNSS network, the Rede Brasileira de Monitoramento Contínuo dos Sistemas GNSS (RBMC), or via Precise Point Positioning (Posicionamento por Ponto Preciso), both managed by IBGE. Given the wide range of applications, as well as the discussions on the accuracy of both methods, this monograph aims to conduct a comparative analysis and verify the effectiveness of both methods against INCRA's Standard for Rural Property Georeferencing. From the processing of GNSS data collected in Piracicaba, Ituverava, Iperó and São Pedro, it could be seen that the research reached its objectives and shows that both methods are accurate and feasible.
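
As a sketch of the kind of check such a comparison involves, the snippet below (Python, with hypothetical coordinates; the 0.50 m tolerance is illustrative, since INCRA's standard sets tolerances per boundary type) measures the horizontal discrepancy between the two solutions for the same vertex:

import math

# Hypothetical UTM coordinates (E, N in meters) of the same boundary vertex,
# one solution from RBMC post-processing and one from the IBGE PPP service.
rbmc_solution = (204431.117, 7484982.452)
ppp_solution = (204431.095, 7484982.431)

def horizontal_discrepancy(e1, n1, e2, n2):
    """Euclidean distance between two planimetric solutions, in meters."""
    return math.hypot(e1 - e2, n1 - n2)

d = horizontal_discrepancy(*rbmc_solution, *ppp_solution)
TOLERANCE_M = 0.50  # illustrative value; the standard sets it per boundary type
print(f"discrepancy = {d:.3f} m -> {'within' if d <= TOLERANCE_M else 'exceeds'} tolerance")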

Relevance:

60.00%

Publisher:

Abstract:

Fuel cells are a very promising solution to the problems of power generation and pollutant emissions, and are well suited to both stationary and mobile applications. The high production cost of these devices, mainly due to the use of noble metals as the anode, is a major obstacle to mass production and deployment of this technology. However, intermetallic phases of platinum combined with less noble metals have been evaluated as electrodes in order to minimize production costs while still significantly improving the catalytic performance of the anode. The study of intermetallic phases exclusively by experimental techniques is incomplete, and other methods need to be applied for a deeper understanding of the geometric properties and electronic structure of the material. To this end, computer simulation methods have proved appropriate for a broader understanding of the geometric and electronic properties of the materials involved, which so far are not well understood. The use of computational methods provides answers that explain the behavior of the materials and allows assessing whether an intermetallic may be a good electrode. This research project used the Quantum-ESPRESSO package, based on density functional theory (DFT), which provides self-consistent field calculations with great precision, calculations of interatomic forces in periodic systems, and other post-processing calculations that lead to knowledge of the geometric and electronic properties of the materials, which may be related to their other properties, including the electrocatalytic ones. The electronic structure is determined from the optimized geometric structure of the materials by analyzing the density of states (DOS) projected onto atomic orbitals, which determines the influence on the electrocatalytic properties of the material.
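
As one illustration of how a projected DOS can be condensed into an electrocatalysis-relevant number, the sketch below computes the d-band center, a common descriptor of adsorption strength. The PDOS array is synthetic, standing in for what one might extract from the output of Quantum-ESPRESSO's projwfc.x post-processing tool:

import numpy as np

# Energy grid relative to the Fermi level (eV) and a synthetic d-projected DOS.
energies = np.linspace(-10.0, 5.0, 1501)
pdos_d = np.exp(-0.5 * ((energies + 2.5) / 1.2) ** 2)

# d-band center: first moment of the occupied part of the d-projected DOS.
occupied = energies <= 0.0
d_band_center = (np.trapz(energies[occupied] * pdos_d[occupied], energies[occupied])
                 / np.trapz(pdos_d[occupied], energies[occupied]))
print(f"d-band center: {d_band_center:.2f} eV relative to E_F")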

Relevance:

60.00%

Publisher:

Abstract:

Acceleration is a key parameter in engineering and is becoming increasingly important because of the need for companies to become more competitive in the market, both by applying new technologies to their products and by optimizing their process lines with predictive maintenance and robotic automation. This study aims to analyze the quality of the signals obtained from a capacitive accelerometer. To do so, a test rig was assembled, consisting of a shaker fed by a signal generator, a linear potentiometer and a capacitive accelerometer; a data acquisition board and the LabVIEW software were used for signal acquisition, in order to integrate the accelerometer signal twice and compare it with the potentiometer signal. This work also demonstrates the impact of acquired-signal processing, as well as pre- and post-processing techniques applied to the signal using the GNU/Octave software.
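
A minimal sketch of the double-integration step, written in Python rather than the GNU/Octave used in the work, with a synthetic 10 Hz signal; detrending between integrations suppresses the low-frequency drift that otherwise dominates naive integration:

import numpy as np
from scipy.integrate import cumulative_trapezoid
from scipy.signal import detrend

fs = 1000.0                                # assumed sampling rate (Hz)
t = np.arange(0.0, 2.0, 1.0 / fs)
# Acceleration of a 1 mm, 10 Hz sinusoidal displacement: a = -(2*pi*f)^2 * x.
accel = -(2 * np.pi * 10.0) ** 2 * 0.001 * np.sin(2 * np.pi * 10.0 * t)

velocity = cumulative_trapezoid(detrend(accel), t, initial=0.0)
displacement = cumulative_trapezoid(detrend(velocity), t, initial=0.0)
print(f"peak displacement ~ {1e3 * np.max(np.abs(displacement)):.2f} mm")  # ~1 mm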

Relevance:

60.00%

Publisher:

Abstract:

Graduate Program in Restorative Dentistry - ICT

Relevance:

60.00%

Publisher:

Abstract:

Graduate Program in Mechanical Engineering - FEB

Relevance:

60.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

60.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

60.00%

Publisher:

Abstract:

A transparent (wide-area) wavelength-routed optical network may be constructed by using wavelength cross-connect switches connected together by fiber to form an arbitrary mesh structure. The network is accessed through electronic stations that are attached to some of these cross-connects. These wavelength cross-connect switches have the property that they may configure themselves into unspecified states: each input port of a switch is always connected to some output port of the switch, whether or not such a connection is required for the purpose of information transfer. Due to the presence of these unspecified states, there exists the possibility of setting up unintended all-optical cycles in the network (viz., a loop with no terminating electronics in it). If such a cycle contains amplifiers [e.g., Erbium-Doped Fiber Amplifiers (EDFAs)], there exists the possibility that the net loop gain is greater than the net loop loss. The amplified spontaneous emission (ASE) noise from the amplifiers can build up in such a feedback loop, saturate the amplifiers, and cause oscillations of the ASE noise in the loop. Such all-optical cycles (hereafter referred to as "white" cycles) must be eliminated from an optical network in order for the network to perform any useful operation. Furthermore, for the realistic case in which the wavelength cross-connects introduce signal crosstalk, there is a possibility of having closed cycles with oscillating crosstalk signals. We examine algorithms that set up new transparent optical connections upon request while avoiding the creation of such cycles in the network. These algorithms attempt to find a route for a connection and then (in a post-processing fashion) configure the switches such that any white cycles that might be created are automatically eliminated. In addition, our call-set-up algorithms can avoid the possibility of crosstalk cycles.
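
A minimal sketch, not the paper's algorithm, of the check it describes: given tentative switch states, follow the port-to-port graph (each input port has exactly one successor) and report any closed all-optical loop that meets no terminating station. The data structures are hypothetical:

def has_white_cycle(internal, fibers, stations):
    """internal: {(switch, in_port): (switch, out_port)} tentative switch states;
    fibers: {(switch, out_port): (node, in_port)} links between nodes;
    stations: set of nodes where light is terminated by electronics."""
    # Compose switch states with fiber links: in-port -> next in-port.
    graph = {}
    for src, out in internal.items():
        nxt = fibers.get(out)
        if nxt is not None:
            graph[src] = nxt
    # Each node has at most one successor, so chase chains and look for loops.
    for start in graph:
        seen, node = set(), start
        while node in graph and node not in stations:
            if node in seen:
                return True  # closed loop with no terminating electronics
            seen.add(node)
            node = graph[node]
    return False

# Two 1x1 "switches" looped back to back with no station in between:
internal = {("A", 1): ("A", 2), ("B", 1): ("B", 2)}
fibers = {("A", 2): ("B", 1), ("B", 2): ("A", 1)}
print(has_white_cycle(internal, fibers, stations=set()))  # True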

Relevance:

60.00%

Publisher:

Abstract:

The aim of the present study was to evaluate the use of MRI to quantify the workload of the gluteus medius (GM), vastus medialis (VM) and vastus lateralis (VL) muscles in different types of squat exercises. Fourteen female volunteers were evaluated, with an average age of 22 +/- 2 years, sedentary, without clinical symptoms and without a history of previous lower-limb injuries. Quantitative MRI was used to analyze the VM, VL and GM muscles before and after the squat exercise, the squat associated with isometric hip adduction, and the squat associated with isometric hip abduction. Multi-echo images were acquired to calculate the transverse relaxation times (T2) before and after exercise. A mixed-effects model was used to compare images before and after the exercise (Delta T2) and to normalize the variability between subjects. Image post-processing was performed in MATLAB. The GM muscle was the least active during the squat associated with isometric hip adduction, and the VM was the least active during the squat associated with isometric hip abduction, while the VL was the most active during the squat associated with isometric hip adduction. Our data suggest that isometric hip adduction during the squat does not increase the workload of the VM, but decreases the GM muscle workload. The squat associated with isometric hip abduction does not increase the VL workload.
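
The T2 computation the abstract refers to is typically a voxel-wise mono-exponential fit, S(TE) = S0 * exp(-TE / T2). A minimal sketch with synthetic echo data (Python, not the study's MATLAB code):

import numpy as np

echo_times = np.array([20.0, 40.0, 60.0, 80.0])   # hypothetical TEs in ms
signals = 1200.0 * np.exp(-echo_times / 35.0)     # synthetic voxel, T2 = 35 ms

# Log-linearize: ln S = ln S0 - TE / T2, then T2 = -1 / slope.
slope, intercept = np.polyfit(echo_times, np.log(signals), 1)
t2 = -1.0 / slope
print(f"estimated T2 = {t2:.1f} ms")  # Delta T2 would be T2_post - T2_pre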

Relevance:

60.00%

Publisher:

Abstract:

A polarimetric X-band radar was deployed for one month (April 2011) during a field campaign in Fortaleza, Brazil, together with three additional laser disdrometers. The disdrometers are capable of measuring raindrop size distributions (DSDs), hence making it possible to forward-model theoretical polarimetric X-band radar observables at the point where the instruments are located. This setup makes it possible to thoroughly test the accuracy of the X-band radar measurements, as well as the algorithms that are used to correct the radar data for radome and rain attenuation. For the campaign in Fortaleza it was found that radome attenuation dominantly affects the measurements. With an algorithm based on the self-consistency of the polarimetric observables, the radome-induced reflectivity offset was estimated. Offset-corrected measurements were then further corrected for rain attenuation with two different schemes. The performance of the post-processing steps was analyzed by comparing the data with disdrometer-inferred polarimetric variables measured at a distance of 20 km from the radar. Radome attenuation reached values of up to 14 dB, which was found to be consistent with an empirical radome-attenuation vs. rain-intensity relation previously developed for the same radar type. In contrast to previous work, our results suggest that radome attenuation should be estimated individually for every viewing direction of the radar in order to obtain homogeneous reflectivity fields.
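
A minimal sketch of a standard rain-attenuation correction at X band, in which the two-way path attenuation is taken to be proportional to the differential phase shift Phi_dp; the coefficient and the sample data are assumptions, not values from the campaign:

import numpy as np

ALPHA_DB_PER_DEG = 0.28  # typical X-band A_h/Phi_dp coefficient, assumed here

def correct_reflectivity(z_measured_dbz, phi_dp_deg, radome_offset_db=0.0):
    """Add the radome-induced offset and the Phi_dp-based rain attenuation."""
    rain_attenuation = ALPHA_DB_PER_DEG * np.asarray(phi_dp_deg)
    return np.asarray(z_measured_dbz) + radome_offset_db + rain_attenuation

# Reflectivity and differential phase along one ray (hypothetical values):
z_corr = correct_reflectivity([30.0, 28.0, 25.0], [0.0, 5.0, 12.0],
                              radome_offset_db=6.0)
print(np.round(z_corr, 1))  # corrected dBZ along the ray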

Relevance:

60.00%

Publisher:

Abstract:

We propose four algorithms for computing the inverse optical flow between two images. We assume that the forward optical flow has already been obtained and we need to estimate the flow in the backward direction. The forward and backward flows can be related through a warping formula, which allows us to propose very efficient algorithms. These are presented in increasing order of complexity. The proposed methods provide high accuracy with low memory requirements and low running times. In general, the processing reduces to one or two image passes. Typically, when objects move in a sequence, some regions may appear or disappear. Finding the inverse flows in these situations is difficult and, in some cases, it is not possible to obtain a correct solution. Our algorithms deal with occlusions easily and reliably. Disocclusions, on the other hand, have to be handled in a post-processing step. We propose three approaches for filling disocclusions. In the experimental results, we use standard synthetic sequences to study the performance of the proposed methods, and show that they yield very accurate solutions. We also analyze the performance of the filling strategies.
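
A minimal sketch of the warping relation the paper builds on: a pixel x carried by the forward flow lands at x + f(x), so the backward flow at that target location is -f(x), and pixels reached by no forward vector are disocclusions. The rounding-based splatting below is a simplification of the paper's algorithms:

import numpy as np

def invert_flow(forward):
    """forward: (H, W, 2) array of (dx, dy). Returns the backward flow and a
    mask of disoccluded pixels that a filling step would still have to treat."""
    h, w = forward.shape[:2]
    backward = np.zeros_like(forward)
    filled = np.zeros((h, w), dtype=bool)
    ys, xs = np.mgrid[0:h, 0:w]
    tx = np.clip(np.round(xs + forward[..., 0]).astype(int), 0, w - 1)
    ty = np.clip(np.round(ys + forward[..., 1]).astype(int), 0, h - 1)
    backward[ty, tx] = -forward   # colliding writes crudely resolve occlusions
    filled[ty, tx] = True
    return backward, ~filled

flow = np.zeros((4, 4, 2))
flow[..., 0] = 1.0                # every pixel moves one pixel to the right
bwd, disoccluded = invert_flow(flow)
print(disoccluded[:, 0])          # leftmost column is disoccluded: all True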

Relevance:

60.00%

Publisher:

Abstract:

In computational structural analysis, the finite element method is probably one of the most effective and widely used numerical methods. The simplicity of the basic idea behind the method, and the relative ease with which it can be implemented in computer codes, have made it possible to apply this computational technique in several fields, not only in structural engineering but in applied mathematics in general. However, even though finite element technology has already reached a fairly high level, for some applications typical of structural engineering (two-dimensional problems, analysis of plates in bending) the performance of the commonly used elements, namely compatible (displacement-based) elements, is in fact rather unsatisfactory. Finite elements based on mixed formulations come to the rescue: on the one hand they have a more complex formulation, but on the other hand they prevent some recurring problems, such as the shear-locking phenomenon. Regardless of the type of finite element used, the quantities of interest in engineering are not the displacements but the stresses or, more generally, the quantities derived from the displacements. While the former are very accurate, the latter turn out to be discontinuous and of poor quality. In recent years, post-processing procedures have gained ground that, starting from the finite element solution, reconstruct the stress within patches of elements, making it more accurate. These are called recovery procedures (recovery-based approaches). The recovery procedures used here are REP (Recovery by Equilibrium in Patches) and RCP (Recovery by Compatibility in Patches). The objective of this work is to apply the recovery procedures to a plate example, discretized with various types of finite elements, highlighting their advantages in terms of better accuracy and faster convergence.
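
REP and RCP differ in their details, but the common patch-recovery idea can be sketched as a least-squares fit of a stress polynomial over the sampling points of a patch of elements (a simplified, SPR-style illustration, not REP or RCP themselves):

import numpy as np

def recover_stress(sample_xy, sample_sigma, node_xy):
    """Fit sigma(x, y) = a0 + a1*x + a2*y to raw FEM stresses sampled in a
    patch, then evaluate the smoothed field at the patch's central node."""
    x, y = np.asarray(sample_xy, dtype=float).T
    basis = np.column_stack([np.ones_like(x), x, y])
    coeffs, *_ = np.linalg.lstsq(basis, np.asarray(sample_sigma), rcond=None)
    return coeffs @ np.array([1.0, node_xy[0], node_xy[1]])

# Hypothetical Gauss-point stresses from four elements surrounding a node:
points = [(0.2, 0.2), (0.8, 0.2), (0.2, 0.8), (0.8, 0.8)]
stresses = [10.1, 11.9, 10.2, 12.1]
print(f"recovered nodal stress: {recover_stress(points, stresses, (0.5, 0.5)):.2f}")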