963 results for calibration estimation
Abstract:
A new algorithm is proposed for estimating the velocity vector of moving ships using Single Look Complex (SLC) SAR data in stripmap acquisition mode. The algorithm exploits both the amplitude and phase information of the Doppler decompressed data spectrum, with the aim of estimating both the azimuth antenna pattern and the backscattering coefficient as a function of the look angle. The antenna pattern estimate provides information about the target velocity; the backscattering coefficient can be used for vessel classification. The range velocity is retrieved in the slow-time frequency domain by estimating the antenna pattern effects induced by the target motion, while the azimuth velocity is calculated from the estimated range velocity and the ship orientation. Finally, the algorithm is tested on simulated SAR SLC data.
Abstract:
This paper extends the by-now classic sensor fusion complementary filter (CF) design, involving two sensors, to the case where three sensors providing measurements in different frequency bands are available. The paper shows that applying classical CF techniques to a generic three-sensor fusion problem, based solely on the sensors' frequency-domain characteristics, leads to a minimal-realization, stable, sub-optimal solution, denoted Complementary Filters3 (CF3). A new approach to the estimation problem at hand is then used, based on optimal linear Kalman filtering techniques. Moreover, the solution is shown to preserve the complementary property, i.e. the three transfer functions of the respective sensors add up to one, in both the continuous and discrete time domains. This new class of filters is denoted Complementary Kalman Filters3 (CKF3). The attitude estimation of a mobile robot is addressed, based on data from a rate gyroscope, a digital compass, and odometry, and the experimental results obtained are reported.
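The complementary property referred to above can be illustrated with a minimal first-order, two-sensor sketch (not the paper's CF3/CKF3 design); the signals, time constant, and noise levels below are illustrative assumptions:

```python
import numpy as np

def complementary_filter(absolute_meas, rate_meas, dt, tau):
    """First-order complementary filter: low-pass the absolute (low-frequency)
    sensor and high-pass the integrated rate (high-frequency) sensor.
    The blend weight alpha follows from the filter time constant tau."""
    alpha = tau / (tau + dt)
    est = np.empty_like(absolute_meas, dtype=float)
    est[0] = absolute_meas[0]
    for k in range(1, len(absolute_meas)):
        # integrate the rate sensor, then blend with the absolute measurement
        est[k] = alpha * (est[k - 1] + rate_meas[k] * dt) \
            + (1 - alpha) * absolute_meas[k]
    return est

# Fuse a noisy compass-like angle with an exact rate-gyro signal
dt, tau = 0.01, 0.5
t = np.arange(0.0, 10.0, dt)
true_angle = 0.5 * t
rng = np.random.default_rng(0)
compass = true_angle + rng.normal(0.0, 0.2, t.size)
gyro = np.full(t.size, 0.5)
estimate = complementary_filter(compass, gyro, dt, tau)
```

The low-pass transfer function 1/(τs+1) applied to the absolute sensor and the high-pass τs/(τs+1) applied to the integrated rate sensor add up to one at every frequency; the CKF3 result extends this property to three sensors.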
Abstract:
This paper presents an ankle-mounted Inertial Navigation System (INS) used to estimate the distance traveled by a pedestrian. The distance is estimated from the number of steps taken by the user. The proposed method relies on force sensors to enhance the results obtained from the INS. Experimental results have shown that, depending on the step frequency, the traveled-distance error varies between 2.7% and 5.6%.
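A threshold-crossing step counter of the kind a force sensor enables can be sketched as follows; the threshold, sample values, and fixed step length are illustrative assumptions, not the paper's calibration:

```python
def count_steps(force, threshold):
    """Count heel strikes as upward crossings of a force threshold."""
    steps = 0
    above = False
    for f in force:
        if f > threshold and not above:
            steps += 1          # new crossing detected
            above = True
        elif f <= threshold:
            above = False       # re-arm for the next step
    return steps

def traveled_distance(force, threshold, step_length):
    """Distance as step count times an assumed constant step length."""
    return count_steps(force, threshold) * step_length

# e.g. force samples in newtons, 500 N threshold, 0.7 m steps
samples = [0, 0, 800, 900, 0, 0, 850, 0, 0, 900, 100, 0, 820, 0, 880, 0]
distance = traveled_distance(samples, 500, 0.7)
```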
Abstract:
This paper addresses the estimation of surfaces from a set of 3D points using the unified framework described in [1]. This framework proposes the use of competitive learning for curve estimation: a set of points is defined on a deformable curve and they all compete to represent the available data. This paper extends the unified framework to surface estimation. It is shown that competitive learning performs better than snakes, improving the model's performance in the presence of concavities and allowing close surfaces to be discriminated. The proposed model is evaluated using synthetic data and medical images (MRI and ultrasound).
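The competitive-learning idea, where model points compete to represent the data, can be sketched in 2D (the curve-estimation case); the learning rate, iteration count, and winner-take-all rule are illustrative assumptions, not the framework of [1]:

```python
import numpy as np

def competitive_fit(data, n_units, iters=200, lr=0.1, seed=0):
    """For every data point, the nearest model unit (the winner of the
    competition) moves a fraction lr toward it, so the units spread out
    to represent the data set."""
    rng = np.random.default_rng(seed)
    units = data[rng.choice(len(data), n_units, replace=False)].astype(float)
    for _ in range(iters):
        for x in data:
            w = np.argmin(np.linalg.norm(units - x, axis=1))  # winner
            units[w] += lr * (x - units[w])
    return units

# fit 8 model points to data sampled on the unit circle
angles = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
circle = np.c_[np.cos(angles), np.sin(angles)]
units = competitive_fit(circle, 8)
```

After fitting, each unit settles near the portion of the data it won, so the model points end up close to the underlying curve.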
Abstract:
Dimensionality reduction plays a crucial role in many hyperspectral data processing and analysis algorithms. This paper proposes a new mean-squared-error-based approach to determine the signal subspace in hyperspectral imagery. The method first estimates the signal and noise correlation matrices and then selects the subset of eigenvalues that best represents the signal subspace in the least-squares sense. The effectiveness of the proposed method is illustrated using simulated and real hyperspectral images.
Abstract:
As is widely known, in structural dynamics applications ranging from structural coupling to model updating, the incompatibility between measured and simulated data is inevitable, due to the problem of coordinate incompleteness. Usually, the experimental data from conventional vibration testing is collected at a few translational degrees of freedom (DOF), due to forces applied with hammer or shaker exciters, over a limited frequency range. Hence, one can only measure a portion of the receptance matrix: a few columns, related to the forced DOFs, and a few rows, related to the measured DOFs. In contrast, finite element modeling yields a full data set, both in terms of DOFs and identified modes. Over the years, several model reduction techniques have been proposed, as well as data expansion ones. However, the latter are significantly fewer, and the demand for efficient techniques remains. In this work, a technique is proposed for expanding measured frequency response functions (FRFs) over the entire set of DOFs. The technique is based upon a modified Kidder's method and the principle of reciprocity, and it avoids the need for modal identification, as it uses the measured FRFs directly. To illustrate the performance of the proposed technique, a set of simulated experimental translational FRFs is taken as reference to estimate rotational FRFs, including those due to applied moments.
Abstract:
Master's degree in Electrical and Computer Engineering - Autonomous Systems
Abstract:
Master's degree in Electrical and Computer Engineering - Autonomous Systems Branch
Abstract:
Given a hyperspectral image, determining the number of endmembers and the subspace where they live, without any prior knowledge, is crucial to the success of hyperspectral image analysis. This paper introduces a new minimum-mean-squared-error-based approach to infer the signal subspace in hyperspectral imagery. The method, termed hyperspectral signal identification by minimum error (HySime), is eigendecomposition based and does not depend on any tuning parameters. It first estimates the signal and noise correlation matrices and then selects the subset of eigenvalues that best represents the signal subspace in the least-squared-error sense. The effectiveness of the proposed method is illustrated using simulated data based on U.S.G.S. laboratory spectra and real hyperspectral data collected by the AVIRIS sensor over Cuprite, Nevada.
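A simplified sketch of the minimum-error selection rule: assuming white noise of known variance σ², an eigen-direction is kept when its estimated signal power exceeds the noise power, so including it lowers the mean squared error. This is a toy version of the idea (HySime also estimates the noise statistics from the data, which is skipped here):

```python
import numpy as np

def signal_subspace_dim(Y, sigma2):
    """Eigendecompose the sample correlation matrix of the observed
    (bands x pixels) data and keep every eigen-direction whose estimated
    signal power (eigenvalue minus noise power) exceeds the noise power."""
    R = Y @ Y.T / Y.shape[1]            # sample correlation matrix
    eigvals = np.linalg.eigvalsh(R)     # ascending eigenvalues
    signal_power = eigvals - sigma2     # noise inflates each eigenvalue by sigma2
    return int(np.sum(signal_power > sigma2))

# simulated mixture: 3 endmembers, 20 bands, 5000 pixels, white noise
rng = np.random.default_rng(1)
M = rng.uniform(0.0, 1.0, (20, 3))             # endmember signatures
A = rng.dirichlet(np.ones(3), size=5000).T     # abundance fractions
Y = M @ A + rng.normal(0.0, 0.01, (20, 5000))  # noise variance 1e-4
k = signal_subspace_dim(Y, 1e-4)
```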
Abstract:
In hyperspectral imagery, a pixel typically consists of a mixture of the spectral signatures of reference substances, also called endmembers. Linear spectral mixture analysis, or linear unmixing, aims at estimating the number of endmembers, their spectral signatures, and their abundance fractions. This paper proposes a framework for hyperspectral unmixing. A blind method (SISAL) is used to estimate the unknown endmember signatures and their abundance fractions. The method solves a non-convex problem via a sequence of augmented Lagrangian optimizations, where the positivity constraints, which force the spectral vectors to belong to the convex hull of the endmember signatures, are replaced by soft constraints. The proposed framework simultaneously estimates the number of endmembers present in the hyperspectral image with an algorithm based on the minimum description length (MDL) principle. Experimental results on both synthetic and real hyperspectral data demonstrate the effectiveness of the proposed algorithm.
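The linear mixing model underlying the framework can be sketched with a simple abundance estimator. This is not SISAL (which estimates the endmembers blindly via augmented Lagrangian steps) but a projected-gradient least-squares sketch assuming the endmember matrix M is already known:

```python
import numpy as np

def unmix(pixel, M, iters=500):
    """Estimate abundance fractions under the linear mixing model
    pixel ~ M @ a: gradient descent on the squared residual, with
    nonnegativity enforced by projection and the fractions renormalized
    to sum to one at the end."""
    L, p = M.shape
    a = np.full(p, 1.0 / p)                   # start from uniform abundances
    lr = 1.0 / np.linalg.norm(M.T @ M, 2)     # step from the Lipschitz constant
    for _ in range(iters):
        grad = M.T @ (M @ a - pixel)
        a = np.clip(a - lr * grad, 0.0, None) # project onto a >= 0
    s = a.sum()
    return a / s if s > 0 else a

# noise-free pixel mixed from 3 known signatures over 10 bands
rng = np.random.default_rng(0)
M = rng.uniform(0.0, 1.0, (10, 3))
a_true = np.array([0.5, 0.3, 0.2])
a_hat = unmix(M @ a_true, M)
```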
Abstract:
For research purposes, a large quantity of anti-measles IgG working reference serum was needed. A pool of sera from five teenagers was prepared and named Alexandre Herculano (AH). To calibrate the AH serum, 18 EIA assays were performed, testing in parallel AH and the 2nd International Standard 1990, Anti-Measles Antibody, 66/202 (IS), in a range of dilutions (from 1/50 to 1/25600). A method comparing the parallel lines resulting from the graphical representation of the laboratory test results was used to estimate the potency of AH relative to IS. A computer programme written by one of the authors was used to analyze the data and make potency estimates. A second method of analysis was also used, comparing logistic curves relating serum concentration to optical density by EIA; for that purpose an existing computer programme (WRANL) was used. The potency of AH relative to IS, by either method, was estimated to be 2.4. As IS contains 5000 milli-international units (mIU) of anti-measles IgG per millilitre (ml), we concluded that AH contains 12000 mIU/ml.
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the degree of Master in Mathematics and Applications, Specialization in Actuarial Science, Statistics and Operational Research
Abstract:
This dissertation describes the development and evaluation of a "Numerical Site Calibration" (NSC) procedure for a wind farm located in southern Portugal, using Computational Fluid Dynamics (CFD). NSC is based on "Site Calibration" (SC), a measurement method standardized by the International Electrotechnical Commission through the IEC 61400 standard. The method aims to quantify and reduce the effects of the terrain and of possible obstacles on the measurement of the energy performance of wind turbines. In SC, measurements are therefore taken at two points: the reference mast and the turbine location (a temporary mast). However, in wind farms that have already been built this method is not applicable, since it requires installing a measurement mast at the turbine location; the appropriate procedure in these circumstances is NSC. The method is developed with a CFD code named WINDIE™, developed by a research team at the Instituto Superior de Engenharia do Porto and used extensively by the company Megajoule Inovação, Lda in wind energy applications worldwide. This code is a tool for simulating three-dimensional flows over complex terrain. The flow simulations are performed in the transient regime using the Reynolds-averaged Navier-Stokes equations with the Boussinesq approximation and the TKE 1.5 turbulence model. The boundary conditions come from the results of a simulation carried out with the Weather Research and Forecasting (WRF) model. The simulations are divided into two groups: one set uses the Upwind convective scheme and the other a fourth-order convective scheme. The method is assessed by comparing the data obtained from the WINDIE™ simulations with the data measured during the SC process.
In summary, it is concluded that WINDIE™ and its configurations reproduce good calibration results, as they yield global errors on the order of two percentage points relative to the SC performed for the same site.
Abstract:
Dissertation presented in partial fulfilment of the requirements for the degree of Doctor in Information Management
Abstract:
Radio link quality estimation is essential for protocols and mechanisms such as routing, mobility management and localization, particularly in low-power wireless networks such as wireless sensor networks. Commodity Link Quality Estimators (LQEs), e.g. PRR, RNP, ETX, four-bit and RSSI, can only provide a partial characterization of links, as they ignore several link properties such as channel quality and stability. In this paper, we propose F-LQE (Fuzzy Link Quality Estimator), a holistic metric that estimates link quality on the basis of four link quality properties (packet delivery, asymmetry, stability, and channel quality) that are expressed and combined using fuzzy logic. We demonstrate through an extensive experimental analysis that F-LQE is more reliable than existing estimators (e.g., PRR, WMEWMA, ETX, RNP, and four-bit), as it provides a finer-grained link classification. It is also more stable, as it has a lower coefficient of variation of link estimates. Importantly, we evaluate the impact of F-LQE on the performance of tree routing, specifically the Collection Tree Protocol (CTP). For this purpose, we adapted F-LQE into a new routing metric for CTP, which we dubbed F-LQE/RM. Extensive experimental results obtained with widely used state-of-the-art testbeds show that F-LQE/RM significantly improves CTP routing performance over four-bit (the default LQE of CTP) and ETX (another popular LQE). F-LQE/RM improves end-to-end packet delivery by up to 16%, reduces the number of packet retransmissions by up to 32%, reduces the hop count by up to 4%, and improves topology stability by up to 47%.
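A fuzzy-logic combination of the four link properties can be sketched as a softened fuzzy AND, i.e. a convex blend of the minimum and the mean of the membership values; the β weight and the inputs below are illustrative assumptions, not the paper's exact membership functions:

```python
def fuzzy_link_quality(delivery, asymmetry, stability, channel, beta=0.6):
    """Blend four membership values in [0, 1] into one link-quality score.
    The fuzzy AND is softened as beta*min + (1-beta)*mean, so one bad
    property drags the score down without fully dominating it."""
    mu = [delivery, asymmetry, stability, channel]
    return beta * min(mu) + (1 - beta) * sum(mu) / len(mu)

# a link that is perfect except for channel quality
score = fuzzy_link_quality(1.0, 1.0, 1.0, 0.0)
```

With a plain minimum the score above would be 0; the blend keeps a partial score, which is what yields the finer-grained link classification compared with single-property estimators.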