Abstract:
GPS active networks are increasingly used in geodetic surveying and scientific experiments, such as monitoring water vapor in the atmosphere and lithospheric plate movement. Among GPS positioning methods, Precise Point Positioning (PPP) has provided very good results. A characteristic of PPP is the modeling and/or estimation of the errors involved in the method. The accuracy obtained for the coordinates can reach a few millimeters. Seasonal effects can degrade this accuracy if they are not treated consistently during data processing. Coordinate time series have been analyzed using Fourier or harmonic spectral analysis, wavelets, and least squares estimation, among other techniques. This paper presents an approach for investigating the seasonal effects present in station coordinate time series. Experiments were carried out using data from the stations Manaus (NAUS) and Fortaleza (BRFT), which belong to the Brazilian Continuous GPS Network (RBMC). The coordinates of these stations were estimated daily using PPP and analyzed with wavelets to identify the periods of the seasonal effects (annual and semi-annual) in each time series. These effects were then removed by a filtering process applied to the series via least squares adjustment (LSQ) of a periodic function. The results showed that the combination of these two mathematical tools, wavelets and LSQ, is an effective technique for removing seasonal effects from time series.
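The filtering step described above — least squares adjustment of a periodic function with annual and semi-annual terms — can be sketched as follows. This is a minimal illustration on synthetic data (the actual NAUS/BRFT PPP series are not reproduced here), with hypothetical amplitudes and noise level:

```python
import numpy as np

# Synthetic daily coordinate series with annual and semi-annual signals
# (stand-in for a PPP-derived station coordinate time series).
rng = np.random.default_rng(0)
t = np.arange(730)  # days
y = (5.0 * np.sin(2 * np.pi * t / 365.25)
     + 2.0 * np.cos(2 * np.pi * t / 182.625)
     + rng.normal(0, 0.5, t.size))  # mm-scale noise (assumed)

# Least squares adjustment of the periodic model
#   y(t) = a0 + a1 cos(w1 t) + b1 sin(w1 t) + a2 cos(w2 t) + b2 sin(w2 t)
w1 = 2 * np.pi / 365.25    # annual frequency
w2 = 2 * np.pi / 182.625   # semi-annual frequency
A = np.column_stack([np.ones(t.size),
                     np.cos(w1 * t), np.sin(w1 * t),
                     np.cos(w2 * t), np.sin(w2 * t)])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

# Subtracting the fitted periodic function filters out the seasonal effects.
filtered = y - A @ coeffs
print(y.std(), filtered.std())
```

In the paper, the wavelet analysis supplies the periods; here they are simply assumed to be the annual and semi-annual ones named in the abstract.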
Abstract:
Graduate Program in Cartographic Sciences (Pós-graduação em Ciências Cartográficas) - FCT
Abstract:
In this paper we propose a new identification method, based on the residual white noise autoregressive criterion (Pukkila et al., 1990), to select the order of VARMA structures. Results from extensive simulation experiments based on different model structures, with varying numbers of observations and component series, demonstrate the performance of this new procedure. We also use economic and business data to compare the model structures selected by this order selection method with those identified in other published studies.
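The idea of a residual-whiteness criterion for order selection can be illustrated in the simpler univariate AR case: fit candidate orders by least squares and keep the smallest order whose residuals pass a portmanteau (Ljung-Box-type) check. This is a sketch of the general principle, not the VARMA procedure of the paper; the data and thresholds are assumptions:

```python
import numpy as np

def fit_ar(y, p):
    """Least-squares fit of an AR(p) model; returns the residuals."""
    Y = y[p:]
    X = np.column_stack([y[p - k:-k] for k in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return Y - X @ beta

def ljung_box(res, m=10):
    """Portmanteau statistic on residual autocorrelations up to lag m
    (sample correlations used as a simple approximation)."""
    n = res.size
    r = np.array([np.corrcoef(res[:-k], res[k:])[0, 1] for k in range(1, m + 1)])
    return n * (n + 2) * np.sum(r**2 / (n - np.arange(1, m + 1)))

# Simulate an AR(2) process: y_t = 0.5 y_{t-1} - 0.3 y_{t-2} + e_t
rng = np.random.default_rng(1)
e = rng.normal(size=600)
y = np.zeros(600)
for i in range(2, 600):
    y[i] = 0.5 * y[i - 1] - 0.3 * y[i - 2] + e[i]

# An underfit model leaves correlated residuals (large statistic);
# the correct order drives the statistic toward its white-noise range.
for p in (1, 2, 3):
    print(p, round(ljung_box(fit_ar(y, p)), 1))
```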
Abstract:
Analysis of risk measures associated with price series movements and their prediction is of strategic importance in financial markets, as well as to policy makers, in particular for short- and long-term planning in setting economic growth targets. For example, oil-price risk management focuses primarily on when and how an organization can best prevent costly exposure to price risk. Value-at-Risk (VaR) is the commonly practised instrument for measuring risk and is evaluated by analysing the negative/positive tail of the probability distribution of the returns (profit or loss). In modelling applications, least-squares estimation (LSE)-based linear regression models are often employed for modelling and analysing correlated data. These linear models are optimal and perform relatively well under conditions such as errors following normal or approximately normal distributions, freedom from large outliers, and satisfaction of the Gauss-Markov assumptions. However, in practical situations the LSE-based linear regression models often fail to provide optimal results, for instance in non-Gaussian situations, especially when the errors follow fat-tailed distributions and the error terms possess a finite variance. This is the situation in risk analysis, which involves analysing tail distributions. Thus, applications of the LSE-based regression models may be questioned for appropriateness and may have limited applicability. We have carried out a risk analysis of Iranian crude oil price data based on Lp-norm regression models and have noted that the LSE-based models do not always perform best. We discuss results from the L1-, L2- and L∞-norm based linear regression models. ACM Computing Classification System (1998): B.1.2, F.1.3, F.2.3, G.3, J.2.
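The contrast between L1- and L2-norm regression under fat-tailed errors can be sketched on synthetic data (the Iranian oil-price data are not reproduced here; the model and noise distribution below are assumptions). The L1 fit is computed via iteratively reweighted least squares, one common approach; the paper itself does not specify this algorithm:

```python
import numpy as np

# Synthetic linear model with heavy-tailed (Student-t, df=2) errors.
rng = np.random.default_rng(2)
x = np.linspace(0, 10, 200)
y = 1.5 * x + 3.0 + rng.standard_t(df=2, size=x.size)
A = np.column_stack([x, np.ones_like(x)])

# L2-norm fit: ordinary least squares.
beta_l2, *_ = np.linalg.lstsq(A, y, rcond=None)

# L1-norm fit via iteratively reweighted least squares (IRLS):
# weights ~ 1/|residual| turn the weighted L2 problem into an L1 one.
beta = beta_l2.copy()
for _ in range(50):
    r = y - A @ beta
    w = 1.0 / np.maximum(np.abs(r), 1e-6)
    beta = np.linalg.solve(A.T @ (A * w[:, None]), A.T @ (w * y))
beta_l1 = beta

print(beta_l2, beta_l1)
```

Under fat tails, the L1 estimate is typically less disturbed by extreme observations than the L2 estimate, which is the behaviour the abstract points to.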
Abstract:
This dissertation examines monetary models of exchange rate determination for Brazil, Canada, and two Caribbean countries, namely the Dominican Republic and Jamaica. With the exception of Canada, these countries adopted the floating regime during the past ten years. The empirical validity of four seminal models in exchange rate economics was assessed. Three of these models were entirely classical (Bilson and Frenkel) or Keynesian (Dornbusch) in nature. The fourth, the Real Interest Differential Model, mixed the two schools of economic theory. There is no clear empirical evidence of the validity of the monetary models. However, the signs of the coefficients of the nominal interest differential variable were as predicted by the Keynesian hypothesis in the case of Canada and as predicted by the Chicago theorists in the remaining countries. Moreover, in the case of Brazil, owing to hyperinflation, the exchange rate is heavily influenced by the domestic money supply. I also tested purchasing power parity (PPP) for the same set of countries. For both the monetary and the PPP hypotheses, I tested for cointegration and applied the ordinary least squares estimation procedure. An error correction model was also used for the PPP model to determine convergence to equilibrium. The validity of PPP is also questionable for this set of countries. Endogeneity among the regressors as well as the lack of proper price indices are contributing factors. More importantly, Central Bank intervention negates rapid adjustment of prices and exchange rates to their equilibrium values. However, the PPP model's forecasting capability for the period 1993-1994 is superior to that of the monetary models in two of the four cases. I conclude that, in spite of the questionable validity of these models, the monetary models give better results in the case of "smaller" economies like the Dominican Republic and Jamaica, where monetary influences swamp the other determinants of the exchange rate.
Abstract:
The success rate of carrier phase ambiguity resolution (AR) is the probability that the ambiguities are successfully fixed to their correct integer values. In existing work, an exact success rate formula for the integer bootstrapping estimator has been used as a sharp lower bound for the integer least squares (ILS) success rate. Rigorous computation of the success rate for the more general ILS solutions has been considered difficult because of the complexity of the ILS ambiguity pull-in region and the computational load of integrating the multivariate probability density function. The contributions of this work are twofold. First, the pull-in region, mathematically expressed as the vertices of a polyhedron, is represented by a multi-dimensional grid, at which the cumulative probability can be integrated with the multivariate normal cumulative density function (mvncdf) available in Matlab. The bivariate case is studied, where the pull-in region is usually defined as a hexagon and the probability is easily obtained using mvncdf at all the grid points within the convex polygon. Second, the paper compares the computed integer rounding and integer bootstrapping success rates, and the lower and upper bounds of the ILS success rates, to the actual ILS AR success rates obtained from a 24 h GPS data set for a 21 km baseline. The results demonstrate that the upper bound of the ILS AR probability given in the existing literature agrees well with the actual ILS success rate, although the success rate computed with the integer bootstrapping method is already quite a sharp approximation to the actual ILS success rate. The results also show that variations or uncertainty in the unit-weight variance estimates from epoch to epoch significantly affect the success rates computed by the different methods, and thus deserve more attention in order to obtain useful success probability predictions.
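The exact bootstrapping success-rate formula mentioned above has a simple closed form: the product over the (decorrelated) ambiguities of 2Φ(1/(2σ_i)) − 1, where σ_i are the conditional standard deviations and Φ is the standard normal CDF. A minimal sketch, with illustrative (not data-derived) σ values:

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def bootstrap_success_rate(cond_stds):
    """Exact integer bootstrapping success rate:
    P = prod_i (2*Phi(1/(2*sigma_i)) - 1)."""
    p = 1.0
    for s in cond_stds:
        p *= 2.0 * phi(1.0 / (2.0 * s)) - 1.0
    return p

# Hypothetical conditional std devs in cycles; smaller sigma -> higher rate.
print(bootstrap_success_rate([0.10, 0.12, 0.15]))
```

This is the lower-bound quantity the paper compares against the grid-integrated ILS probability; the grid integration itself (done with Matlab's mvncdf in the paper) is not reproduced here.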
Abstract:
This paper presents a new algorithm based on honey-bee mating optimization (HBMO) to estimate harmonic state variables in distribution networks including distributed generators (DGs). The proposed algorithm estimates both the amplitude and the phase of each harmonic by minimizing the error between the values measured by phasor measurement units (PMUs) and the values computed from the estimated parameters during the estimation process. Simulation results on two distribution test systems demonstrate that the proposed distribution harmonic state estimation (DHSE) algorithm is extremely effective and efficient in speed and accuracy compared with conventional algorithms such as weighted least squares (WLS), the genetic algorithm (GA) and tabu search (TS).
Abstract:
This paper presents a method for estimating the thrust model parameters of uninhabited airborne systems using specific flight tests. Particular tests are proposed to simplify the estimation. The proposed estimation method consists of three steps. The first step uses a regression model in which the thrust is assumed constant; this yields biased initial estimates of the aerodynamic coefficients of the surge model. In the second step, a robust nonlinear state estimator is implemented using the initial parameter estimates, with the model augmented by treating the thrust as a random walk. In the third step, the thrust estimate obtained by the observer is used to fit a polynomial model in terms of the propeller advance ratio. We consider a numerical example based on Monte Carlo simulations to quantify the sampling properties of the proposed estimator under realistic flight conditions.
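The third step — fitting a polynomial thrust model in the propeller advance ratio to the observer's thrust estimates — can be sketched as a simple least-squares polynomial fit. The thrust curve, noise level, and polynomial degree below are assumptions standing in for the observer output:

```python
import numpy as np

# Synthetic stand-in for observer-estimated thrust versus advance ratio J.
rng = np.random.default_rng(3)
J = np.linspace(0.1, 0.9, 60)                   # propeller advance ratio
T_true = 120.0 * (1.0 - 0.8 * J - 0.4 * J**2)   # assumed thrust curve (N)
T_est = T_true + rng.normal(0, 1.0, J.size)     # noisy observer estimates

# Least-squares fit of a degree-2 polynomial T(J).
coeffs = np.polyfit(J, T_est, deg=2)
T_fit = np.polyval(coeffs, J)
rmse = np.sqrt(np.mean((T_fit - T_est) ** 2))
print(rmse)
```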
Abstract:
This paper presents new schemes for recursive estimation of the state transition probabilities of hidden Markov models (HMMs) via extended least squares (ELS) and recursive state prediction error (RSPE) methods. Local convergence analysis for the proposed RSPE algorithm is shown using the ordinary differential equation (ODE) approach developed for the more familiar recursive output prediction error (RPE) methods. The presented scheme converges and is relatively well conditioned compared with the ...
Abstract:
In this paper, new online adaptive hidden Markov model (HMM) state estimation schemes are developed, based on extended least squares (ELS) concepts and recursive prediction error (RPE) methods. The best of the new schemes exploit the idempotent nature of Markov chains and work with a least squares prediction error index, using a posteriori estimates, more suited to Markov models than the indices traditionally used in the identification of linear systems.
Abstract:
The method of generalised estimating equations for regression modelling of clustered outcomes allows specification of a working matrix intended to approximate the true correlation matrix of the observations. We investigate the asymptotic relative efficiency of the generalised estimating equation for the mean parameters when the correlation parameters are estimated by various methods. The asymptotic relative efficiency depends on three features of the analysis, namely (i) the discrepancy between the working correlation structure and the unobservable true correlation structure, (ii) the method by which the correlation parameters are estimated, and (iii) the 'design', by which we refer to both the structure of the predictor matrices within clusters and the distribution of cluster sizes. Analytical and numerical studies of realistic data-analysis scenarios show that the choice of working covariance model has a substantial impact on regression estimator efficiency. Protection against avoidable loss of efficiency associated with covariance misspecification is obtained when a 'Gaussian estimation' pseudolikelihood procedure is used with an AR(1) structure.
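The AR(1) working correlation structure singled out in the abstract has a simple explicit form, R[j, k] = α^|j−k| for observations j, k within a cluster. A minimal sketch of constructing it (the cluster size and α value are illustrative):

```python
import numpy as np

def ar1_corr(n, alpha):
    """AR(1) working correlation matrix for a cluster of size n:
    R[j, k] = alpha ** |j - k|."""
    idx = np.arange(n)
    return alpha ** np.abs(idx[:, None] - idx[None, :])

# Example: a cluster of 4 repeated measurements with alpha = 0.6.
R = ar1_corr(4, 0.6)
print(R)
```

In a GEE fit, this R(α), scaled by the variance function, plays the role of the working covariance of each cluster's residual vector.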
Abstract:
In the analysis of tagging data, it has been found that the least-squares method based on the increment function, known as the Fabens method, produces biased estimates because individual variability in growth is not allowed for. This paper modifies the Fabens method to account for individual variability in the length asymptote. Significance tests using t-statistics or log-likelihood ratio statistics may be applied to show the level of individual variability. Simulation results indicate that the modified method reduces the biases in the estimates to negligible proportions. Tagging data from tiger prawns (Penaeus esculentus and Penaeus semisulcatus) and rock lobster (Panulirus ornatus) are analysed as an illustration.
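The baseline Fabens method fits the von Bertalanffy increment function ΔL = (L∞ − L₁)(1 − e^(−KΔt)) to tag-recapture data by least squares. A sketch on synthetic data (the paper's modification for individual variability in L∞ is not implemented here; parameter values and the grid-search strategy are assumptions):

```python
import numpy as np

# Synthetic tag-recapture data from a von Bertalanffy increment model.
rng = np.random.default_rng(4)
Linf_true, K_true = 50.0, 0.8
L1 = rng.uniform(10, 45, 300)      # length at tagging
dt = rng.uniform(0.2, 2.0, 300)    # time at liberty (years)
dL = (Linf_true - L1) * (1 - np.exp(-K_true * dt)) + rng.normal(0, 0.5, 300)

# Fabens-style LS fit: grid search over K; for fixed K the model
# dL = Linf*g - L1*g (with g = 1 - exp(-K*dt)) is linear in Linf.
best = None
for K in np.linspace(0.1, 2.0, 191):
    g = 1 - np.exp(-K * dt)
    Linf = np.sum(g * (dL + L1 * g)) / np.sum(g * g)
    sse = np.sum((dL - (Linf - L1) * g) ** 2)
    if best is None or sse < best[0]:
        best = (sse, K, Linf)

_, K_hat, Linf_hat = best
print(K_hat, Linf_hat)
```

With a common L∞ for all individuals, as here, the fit is unbiased; the bias the paper addresses arises when L∞ varies between individuals.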
Abstract:
Near infrared (NIR) spectroscopy was investigated as a potential rapid method for estimating fish age from whole otoliths of Saddletail snapper (Lutjanus malabaricus). Whole otoliths from 209 Saddletail snapper were extracted and their NIR spectral characteristics acquired over a spectral range of 800-2780 nm. Partial least-squares (PLS) models were developed from the diffuse reflectance spectra and reference-validated age estimates (based on traditional sectioned otolith increments) to predict age for independent otolith samples. Predictive models developed for a specific season and geographical location performed poorly against a different season and geographical location. However, the overall PLS regression statistics for predicting a combined population incorporating both geographical location and season variables were a coefficient of determination (R2) of 0.94 and a root mean square error of prediction (RMSEP) of 1.54 for age estimation, indicating that Saddletail age could be predicted to within 1.5 increment counts. This level of accuracy suggests the method warrants further development for Saddletail snapper and may have potential for other fish species. A rapid method of fish age estimation could greatly reduce both the time and material costs of assessing and managing commercial fisheries.
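The PLS regression used above projects the high-dimensional spectra onto a few latent components before regressing age on them. A minimal single-response PLS (NIPALS-style) sketch on synthetic "spectra" — the data, component count, and latent structure are assumptions, not the paper's:

```python
import numpy as np

def pls_fit(X, y, n_comp):
    """Minimal PLS1 via NIPALS deflation; returns regression
    coefficients B plus the centering means."""
    Xc, yc = X - X.mean(0), y - y.mean()
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)          # weight vector
        t = Xc @ w                       # score
        p = Xc.T @ t / (t @ t)           # loading
        q = (yc @ t) / (t @ t)           # y-loading
        Xc = Xc - np.outer(t, p)         # deflate X
        yc = yc - q * t                  # deflate y
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    B = W @ np.linalg.solve(P.T @ W, Q)  # B = W (P^T W)^-1 q
    return B, X.mean(0), y.mean()

# Synthetic spectra: age drives two latent absorbance patterns plus noise.
rng = np.random.default_rng(5)
age = rng.uniform(1, 15, 150)
basis = rng.normal(size=(2, 100))
X = (np.outer(age, basis[0]) + np.outer(np.sqrt(age), basis[1])
     + rng.normal(0, 0.5, (150, 100)))

B, xm, ym = pls_fit(X[:100], age[:100], n_comp=3)
pred = (X[100:] - xm) @ B + ym
rmsep = np.sqrt(np.mean((pred - age[100:]) ** 2))
print(rmsep)
```

RMSEP on a held-out set, computed as above, is the accuracy measure quoted in the abstract (1.54 increment counts for the combined-population model).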
Abstract:
A modified linear prediction (MLP) method is proposed in which the reference sensor is optimally located on the extended line of the array. The criterion of optimality is minimization of the prediction error power, where the prediction error is defined as the difference between the reference sensor output and the weighted array outputs. It is shown that the L2-norm of the least-squares array weights attains a minimum value at the optimum spacing of the reference sensor, subject to a soft constraint on signal-to-noise ratio (SNR). How this minimum-norm property can be used to find the optimum spacing of the reference sensor is described. The performance of the MLP method is studied and compared with that of the linear prediction (LP) method, using resolution, detection bias, and variance as the performance measures. The study reveals that the MLP method performs much better than the LP technique.