917 results for Generalized Least Squares Estimation


Relevance:

100.00%

Publisher:

Abstract:

Radar refractivity retrievals have the potential to accurately capture near-surface humidity fields from the phase change of ground clutter returns. In practice, phase changes are very noisy and the required smoothing will diminish large radial phase change gradients, leading to severe underestimates of large refractivity changes (ΔN). To mitigate this, the mean refractivity change over the field (ΔN_field) must be subtracted prior to smoothing. However, both observations and simulations indicate that highly correlated returns (e.g., when single targets straddle neighboring gates) result in underestimates of ΔN_field when pulse-pair processing is used. This may contribute to reported differences of up to 30 N units between surface observations and retrievals. This effect can be avoided if ΔN_field is estimated using a linear least squares fit to azimuthally averaged phase changes. Nevertheless, subsequent smoothing of the phase changes will still tend to diminish the all-important spatial perturbations in retrieved refractivity relative to ΔN_field; an iterative estimation approach may be required. The uncertainty in the target location within the range gate leads to additional phase noise proportional to ΔN, pulse length, and radar frequency. The use of short pulse lengths is recommended, not only to reduce this noise but to increase both the maximum detectable refractivity change and the number of suitable targets. Retrievals of refractivity fields must allow for large ΔN relative to an earlier reference field. This should be achievable for short pulses at S band, but phase noise due to target motion may prevent this at C band, while at X band even the retrieval of ΔN over shorter periods may at times be impossible.
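
As a rough illustration of the linear least squares fit mentioned above, the sketch below (an assumption-laden example, not the paper's code) fits a straight line to azimuthally averaged, unwrapped phase changes as a function of range and converts the slope to a field-mean refractivity change, using the standard two-way relation Δφ(r) ≈ (4πf/c)·ΔN·10⁻⁶·r; the S-band frequency and the noise level are purely illustrative.

import numpy as np

def delta_n_field(ranges_m, mean_phase_change_rad, freq_hz=2.8e9):
    # Slope of a straight-line fit of phase change versus range, converted to N units.
    c = 3.0e8  # speed of light (m/s)
    slope, _intercept = np.polyfit(ranges_m, mean_phase_change_rad, 1)  # rad per metre
    return slope * c / (4.0 * np.pi * freq_hz) * 1e6

# Synthetic check: a true field-mean change of 20 N units plus phase noise.
r = np.arange(1e3, 30e3, 250.0)
phi = 4.0 * np.pi * 2.8e9 / 3.0e8 * 1e-6 * 20.0 * r + np.random.normal(0.0, 0.5, r.size)
print(delta_n_field(r, phi))  # close to 20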

Relevance:

100.00%

Publisher:

Abstract:

We propose a new class of neurofuzzy construction algorithms, based on leave-one-out (LOO) cross-validation, with the aim of maximizing generalization capability specifically for imbalanced data classification problems. The algorithms operate in two stages: first, an initial rule base is constructed by estimating a Gaussian mixture model with an analysis-of-variance decomposition of the input data; second, joint weighted least squares parameter estimation and rule selection are carried out using an orthogonal forward subspace selection (OFSS) procedure. We show how different LOO-based rule selection criteria can be incorporated into OFSS, and advocate maximizing either the leave-one-out area under the receiver operating characteristic curve or the leave-one-out F-measure when the data sets exhibit an imbalanced class distribution. Extensive comparative simulations illustrate the effectiveness of the proposed algorithms.
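
For concreteness, here is a minimal sketch of the leave-one-out idea for a linear-in-the-parameters rule base (illustrative only; the OFSS forward selection loop itself is not reproduced): LOO outputs are obtained cheaply from the PRESS identity e_i/(1 − h_ii) and scored with the F-measure, the criterion advocated for imbalanced data.

import numpy as np

def loo_outputs(Phi, y):
    # Phi: n x m rule/regressor matrix, y: +/-1 labels. Returns leave-one-out model outputs.
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    H = Phi @ np.linalg.pinv(Phi)          # hat matrix (fine for a small rule base)
    h = np.diag(H)
    e = y - Phi @ theta
    return y - e / (1.0 - h)               # y_i - e_i/(1 - h_ii) is the LOO prediction

def f_measure(y, scores, thr=0.0):
    pred = np.where(scores > thr, 1, -1)
    tp = np.sum((pred == 1) & (y == 1))
    fp = np.sum((pred == 1) & (y == -1))
    fn = np.sum((pred == -1) & (y == 1))
    prec = tp / max(tp + fp, 1)
    rec = tp / max(tp + fn, 1)
    return 2 * prec * rec / max(prec + rec, 1e-12)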

Relevance:

100.00%

Publisher:

Abstract:

High bandwidth-efficiency quadrature amplitude modulation (QAM) signaling, widely adopted in high-rate communication systems, suffers from the drawback of a high peak-to-average power ratio, which may cause nonlinear saturation of the high power amplifier (HPA) at the transmitter. Thus, practical high-throughput QAM communication systems exhibit nonlinear and dispersive channel characteristics that must be modeled as a Hammerstein channel. Standard linear equalization becomes inadequate for such Hammerstein communication systems. In this paper, we advocate an adaptive B-spline neural network based nonlinear equalizer. Specifically, during the training phase, an efficient alternating least squares (LS) scheme is employed to estimate the parameters of the Hammerstein channel, including both the channel impulse response (CIR) coefficients and the parameters of the B-spline neural network that models the HPA's nonlinearity. In addition, another B-spline neural network is used to model the inversion of the nonlinear HPA, and the parameters of this inverting B-spline model can easily be estimated using the standard LS algorithm based on the pseudo training data obtained as a natural byproduct of the Hammerstein channel identification. Nonlinear equalization of the Hammerstein channel is then accomplished by linear equalization based on the estimated CIR together with the inverse B-spline neural network model. Furthermore, during the data communication phase, decision-directed LS channel estimation is adopted to track the time-varying CIR. Extensive simulation results demonstrate the effectiveness of our proposed B-spline neural network based nonlinear equalization scheme.
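
A compact sketch of the alternating least squares idea follows (illustrative assumptions: real-valued signals and a low-order polynomial standing in for the B-spline model of the HPA nonlinearity); it is not the paper's algorithm, only the general alternating structure.

import numpy as np

def hammerstein_als(x, y, n_h=4, order=3, n_iter=10):
    # x: transmitted samples, y: received samples (same length). Returns (CIR h, nonlinearity weights a).
    Psi = np.vstack([x**p for p in range(1, order + 1)]).T   # basis for the static nonlinearity
    a = np.zeros(order); a[0] = 1.0                          # start from a linear HPA
    h = np.zeros(n_h)
    for _ in range(n_iter):
        # 1) with 'a' fixed, the model is linear in the CIR h
        w = Psi @ a
        W = np.column_stack([np.concatenate([np.zeros(k), w[:len(w) - k]]) for k in range(n_h)])
        h, *_ = np.linalg.lstsq(W, y, rcond=None)
        # 2) with 'h' fixed, the model is linear in the nonlinearity weights a
        Z = np.column_stack([np.convolve(Psi[:, p], h)[:len(x)] for p in range(order)])
        a, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return h, a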

Relevance:

100.00%

Publisher:

Abstract:

4-Dimensional Variational Data Assimilation (4DVAR) assimilates observations through the minimisation of a least squares objective function constrained by the model flow. We refer to 4DVAR as strong-constraint 4DVAR (sc4DVAR) in this thesis, as it assumes the model is perfect. Relaxing this assumption gives rise to weak-constraint 4DVAR (wc4DVAR), leading to a different minimisation problem with more degrees of freedom. We consider two wc4DVAR formulations in this thesis: the model error formulation and the state estimation formulation. The 4DVAR objective function is traditionally solved using gradient-based iterative methods. The principal method used in Numerical Weather Prediction today is the Gauss-Newton approach. This method introduces a linearised 'inner-loop' objective function which, upon convergence, updates the solution of the non-linear 'outer-loop' objective function. This requires many evaluations of the objective function and its gradient, which emphasises the importance of the Hessian. The eigenvalues and eigenvectors of the Hessian provide insight into the degree of convexity of the objective function, while also indicating the difficulty one may encounter while iteratively solving 4DVAR. The condition number of the Hessian is an appropriate measure of the sensitivity of the problem to the input data; it can also indicate the rate of convergence and solution accuracy of the minimisation algorithm. This thesis investigates the sensitivity of the solution process minimising both wc4DVAR objective functions to the internal assimilation parameters composing the problem. We gain insight into these sensitivities by bounding the condition number of the Hessians of both objective functions. We also precondition the model error objective function and show improved convergence. Using these bounds, we show that both formulations' sensitivities are related to the error variance balance, the assimilation window length and the correlation length-scales. We further demonstrate this through numerical experiments on the condition number and through data assimilation experiments using linear and non-linear chaotic toy models.
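
To make the role of the Hessian concrete, here is a toy sketch using the standard textbook setup (not taken from the thesis): for a linear model M and linear observation operator H, the strong-constraint Hessian is S = B⁻¹ + Σᵢ Mᵢᵀ Hᵀ R⁻¹ H Mᵢ, and its condition number λ_max/λ_min is the sensitivity measure discussed above. All matrices below are illustrative.

import numpy as np

n, N = 10, 5                                   # state size, window length (assumed)
B = 0.5 * np.eye(n)                            # background-error covariance
R = 0.1 * np.eye(n)                            # observation-error covariance
H = np.eye(n)                                  # observe the full state
M = np.diag(np.linspace(0.9, 1.1, n))          # toy linear model (one step)

S = np.linalg.inv(B)
Mi = np.eye(n)
for _ in range(N + 1):                         # observations at times 0..N
    S += Mi.T @ H.T @ np.linalg.inv(R) @ H @ Mi
    Mi = M @ Mi                                # propagate the tangent-linear model

eigs = np.linalg.eigvalsh(S)
print("condition number of the sc4DVAR Hessian:", eigs[-1] / eigs[0])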

Relevance:

100.00%

Publisher:

Abstract:

The study aims to assess the empirical adherence of the permanent income theory and the consumption smoothing view in Latin America. Two present value models are considered, one describing household behavior and the other open economy macroeconomics. Following the methodology developed in Campbell and Shiller (1987), Bivariate Vector Autoregressions are estimated for the saving ratio and the real growth rate of income in the household behavior model, and for the current account and the change in national cash flow in the open economy model. The countries in the sample are considered separately in the estimation process (individual system estimation) as well as jointly (joint system estimation). Ordinary Least Squares (OLS) and Seemingly Unrelated Regressions (SURE) estimates of the coefficients are generated. Wald tests are then conducted to verify whether the VAR coefficient estimates are in conformity with those predicted by the theory. While the empirical results are sensitive to the estimation method and discount factors used, there is only weak evidence in favor of the permanent income theory and the consumption smoothing view in the group of countries analyzed.
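
A hedged sketch of the mechanics (not the paper's code): a bivariate VAR(1) estimated by OLS on demeaned data, and a Wald test of a linear restriction R·vec(A) = r on its coefficients, which is the form the Campbell-Shiller cross-equation restrictions take. The restriction matrix R and vector r would come from the present value model and are left to the caller.

import numpy as np
from scipy import stats

def var1_ols(Y):
    # Y: T x 2 array of demeaned data, e.g. the saving ratio and real income growth.
    X, Z = Y[:-1], Y[1:]
    A = np.linalg.lstsq(X, Z, rcond=None)[0].T        # Z_t ~ A @ X_{t-1}
    resid = Z - X @ A.T
    Sigma = resid.T @ resid / (len(Z) - X.shape[1])   # residual covariance
    return A, Sigma, X

def wald_test(A, Sigma, X, R, r):
    # Wald statistic for the linear restriction R vec(A) = r (vec taken row-wise).
    cov_vecA = np.kron(Sigma, np.linalg.inv(X.T @ X))
    d = R @ A.ravel() - r
    W = d @ np.linalg.solve(R @ cov_vecA @ R.T, d)
    return W, 1.0 - stats.chi2.cdf(W, df=len(r))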

Relevance:

100.00%

Publisher:

Abstract:

This work investigates illiteracy in Brazil using new literacy measures originally developed by Basu and Foster (1998). First, the literacy profile of the Brazilian population is assessed according to the definition of isolated illiteracy, where isolated illiterates are illiterate individuals who do not live with literate people in the same household. Second, we look for evidence of within-household externalities of children's literacy on the wages and labor-market participation of illiterate parents. Because of suspected reverse causality in the Ordinary Least Squares (OLS) estimates of the equations relating the parents' variables of interest to their children's literacy, the municipal supply of primary schools and teachers is used as instrumental variables for literacy. The investigation of the literacy profile shows that the Northeast presents the worst results among the regions of the country, since it has the largest share of isolated illiterates in the total; this finding should condition public literacy policies for that region. Applying the measures shows that literacy rankings across states are sensitive to the choice of measure. The OLS estimates indicate a positive correlation between the parents' dependent variables and their children's literacy. However, the instrumental variables methodology does not confirm the OLS results. Nevertheless, given the weakness of the first-stage estimates, it cannot be concluded that there are no externalities from children's literacy to their parents.
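
To illustrate the identification strategy (hypothetical variable names, not the author's code): a bare-bones two-stage least squares routine in which the municipal supply of schools and teachers instruments the children's-literacy regressor.

import numpy as np

def tsls(y, X_endog, Z, X_exog):
    # y: outcome (e.g. parent's wage); X_endog: endogenous regressor(s), e.g. children's literacy;
    # Z: instruments (municipal supply of schools and teachers); X_exog: exogenous controls incl. a constant.
    W = np.column_stack([Z, X_exog])                 # first-stage regressors
    first = np.linalg.lstsq(W, X_endog, rcond=None)[0]
    X_hat = W @ first                                # fitted endogenous regressors
    X2 = np.column_stack([X_hat, X_exog])            # second-stage design matrix
    beta = np.linalg.lstsq(X2, y, rcond=None)[0]
    return beta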

Relevance:

100.00%

Publisher:

Abstract:

The objective of this work is to characterize the monthly yield curve for Brazil through three factors, comparing two estimation methods: using the state-space representation, the model can be estimated either by the Kalman filter or by two-step least squares. The dynamics of the factors are represented by a vector autoregressive model, VAR(1), and for the second estimation method a structure is assigned to the conditional variance. To compare the methods employed, an alternative approach is proposed: Markov processes that jointly model the slope factor of the yield curve, obtained by the methods employed in this work, and a proxy variable for economic performance, providing some measure of prediction for economic cycles.
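
A minimal sketch of the state-space/Kalman filter route mentioned above (illustrative matrices, not the thesis implementation): yields load on three factors whose dynamics follow a VAR(1), and the filter alternates prediction and update steps.

import numpy as np

def kalman_filter(Y, Lam, A, Q, Rv, f0, P0):
    # Y: T x n yields; Lam: n x k factor loadings; A: k x k VAR(1) matrix;
    # Q, Rv: state and measurement noise covariances. Returns filtered factor means (T x k).
    f, P = f0, P0
    out = []
    for y in Y:
        f, P = A @ f, A @ P @ A.T + Q                    # predict
        S = Lam @ P @ Lam.T + Rv                         # innovation covariance
        K = P @ Lam.T @ np.linalg.inv(S)                 # Kalman gain
        f = f + K @ (y - Lam @ f)                        # update
        P = P - K @ Lam @ P
        out.append(f)
    return np.array(out)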

Relevance:

100.00%

Publisher:

Abstract:

This paper studies the effects of generic drug entry on the bidding behavior of drug suppliers in procurement auctions for pharmaceuticals, and the consequences for the prices procurers pay for drugs. Using a unique data set on procurement auctions for off-patent drugs organized by Brazilian public bodies, we surprisingly find no statistically significant difference between the bids and prices paid for generic and branded drugs. On the other hand, some branded drug suppliers leave auctions in which there is a supplier of generics, whereas the remaining ones lower their bidding prices. These findings explain why we find that the presence of any supplier of generic drugs in a procurement auction reduces the price paid for pharmaceuticals by 7 percent. To overcome potential estimation bias due to the endogeneity of generic entry, we exploit variation in the number of days between the drug's patent expiration date and the tendering session. The two-stage estimates document the same pattern as the generalized least squares estimates. This evidence indicates that generic competition affects branded suppliers' behavior in public procurement auctions differently from other markets.
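
For reference, a generic textbook sketch of the generalized least squares estimator that the estimations above rely on (illustrative only; the error covariance Omega would be specified or estimated from the auction data):

import numpy as np

def gls(X, y, Omega):
    # beta_gls = (X' W X)^{-1} X' W y with W = Omega^{-1}, the inverse error covariance.
    W = np.linalg.inv(Omega)
    A = X.T @ W @ X
    b = X.T @ W @ y
    beta = np.linalg.solve(A, b)
    cov_beta = np.linalg.inv(A)      # estimator covariance under the assumed Omega
    return beta, cov_beta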

Relevance:

100.00%

Publisher:

Abstract:

This work examines how macroeconomic variables (inflation expectations, the real interest rate, the output gap and exchange-rate variation) influence the dynamics of the term structure of interest rates. This dynamics was examined using principal component analysis (PCA) to capture the effect of the most relevant components of the term structure (level, slope and curvature). Using ordinary least squares and generalized method of moments estimates, a statistically significant relation was found between the macroeconomic variables and the principal components of the term structure.
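
A short sketch of the two steps described above (illustrative, hypothetical variable names): principal components of the yield curve extracted by SVD, then each component score regressed on the macro variables by OLS.

import numpy as np

def pca_scores(Y, k=3):
    # Y: T x m matrix of yields by maturity. Returns T x k component scores (level, slope, curvature).
    Yc = Y - Y.mean(axis=0)
    U, s, Vt = np.linalg.svd(Yc, full_matrices=False)
    return Yc @ Vt[:k].T                       # projections on the first k loadings

def ols(y, X):
    X1 = np.column_stack([np.ones(len(X)), X]) # add an intercept
    return np.linalg.lstsq(X1, y, rcond=None)[0]

# scores = pca_scores(yields); betas = [ols(scores[:, j], macro_vars) for j in range(3)]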

Relevance:

100.00%

Publisher:

Abstract:

We measure the validity of uncovered interest parity (UIP) for the Brazilian market from January 2010 to July 2014. We test the classic UIP equation using ordinary least squares. After estimating the parameters, we apply a Wald test and find that uncovered interest parity is not validated. We then extend the traditional UIP equation to an alternative specification that captures measures of Brazil risk and of changes in international liquidity. Specifically, we add three control variables: two dummy variables that capture external liquidity conditions and the CRB commodity index, which captures Brazil risk. With the alternative specification, the hypothesis that the returns on dollarized interest rates in reais are equal to the returns on interest rates contracted in dollars, both subject to Brazil risk, is not rejected. In addition to the analysis of rates representative of the Brazilian market, we assess whether UIP prevails in the currency swap operations carried out by Vale S.A. To this end, the series of dollar interest rates from the Brazilian market is replaced by the dollar rate of the swaps contracted by Vale. The results show that, compared with the behavior of the market, Vale's dollar rates are more sensitive to variations in the rates in reais.
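
A minimal sketch of one standard formulation of the UIP test (illustrative only, not the paper's code): OLS of realised depreciation on the interest-rate differential, followed by a Wald test of the joint restriction alpha = 0, beta = 1.

import numpy as np
from scipy import stats

def uip_wald(ds, idiff):
    # ds: realised exchange-rate depreciation; idiff: domestic-minus-foreign interest rate.
    X = np.column_stack([np.ones(len(idiff)), idiff])
    beta = np.linalg.lstsq(X, ds, rcond=None)[0]
    e = ds - X @ beta
    sigma2 = e @ e / (len(ds) - 2)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    d = beta - np.array([0.0, 1.0])            # H0: alpha = 0, beta = 1
    W = d @ np.linalg.solve(cov, d)
    return W, 1.0 - stats.chi2.cdf(W, df=2)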

Relevance:

100.00%

Publisher:

Abstract:

This work shows how options on the One-Day Interbank Deposit Rate Index (IDI) can be used to extract the probability density function (PDF) for the next steps of the Monetary Policy Committee (COPOM). Since the COPOM decision is discrete in nature, the PDF can be estimated using ordinary least squares (OLS). This technique makes it possible to include constraints on the estimated probabilities. The probabilities computed using IDI options are then compared with those obtained from DI futures and with probabilities computed from surveys.
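
Schematically (an assumed setup; constructing the payoff matrix from IDI option prices is omitted), the constrained least squares step can look like the following, with the probabilities of the discrete COPOM outcomes restricted to be non-negative and to sum to one.

import numpy as np
from scipy.optimize import minimize

def constrained_probabilities(A, b):
    # Solve min ||A p - b||^2  subject to  p >= 0 and sum(p) = 1.
    # A: maps outcome probabilities to model option prices (assumed given); b: observed prices.
    k = A.shape[1]
    p0 = np.full(k, 1.0 / k)
    res = minimize(lambda p: np.sum((A @ p - b) ** 2), p0,
                   method="SLSQP",
                   bounds=[(0.0, 1.0)] * k,
                   constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0}])
    return res.x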

Relevance:

100.00%

Publisher:

Abstract:

This work proposes a new technique for phasor estimation applied in microprocessor-based numerical relays for distance protection of transmission lines, based on the recursive least squares method and called least squares modified random walking. Phasor estimation methods have their performance compromised mainly by the exponentially decaying DC component present in fault currents. To reduce the influence of the DC component, a morphological filter (MF) was added to the least squares method and applied prior to the phasor estimation process. The presented method is implemented in MATLAB and its performance is compared with the one-cycle Fourier technique and with conventional phasor estimation methods also based on the least squares algorithm. The least squares based methods used for comparison with the proposed method were: recursive with forgetting factor, covariance resetting and random walking. The performance of the techniques was analysed using synthetic signals and signals obtained from simulations in the Alternative Transients Program (ATP). When compared to the other phasor estimation methods, the proposed method showed satisfactory results in terms of estimation speed, steady-state oscillation and overshoot. The method's performance was then analysed under variations in the fault parameters (resistance, distance, angle of incidence and type of fault); the results showed no significant variations in performance. In addition, the apparent impedance trajectory and the estimated distance to the fault were analysed, and the presented method showed better results than the one-cycle Fourier algorithm.
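
For orientation, here is a minimal recursive least squares phasor estimator (a generic textbook sketch with a forgetting factor, not the "least squares modified random walking" method itself, and without the morphological pre-filter): each sample is modelled as A·cos(wk) + B·sin(wk) and the phasor is read off as A − jB. The sampling and nominal frequencies are illustrative.

import numpy as np

def rls_phasor(x, f0=60.0, fs=960.0, lam=0.98):
    w = 2 * np.pi * f0 / fs
    theta = np.zeros(2)                       # [A, B]
    P = 1e4 * np.eye(2)
    phasors = []
    for k, xk in enumerate(x):
        phi = np.array([np.cos(w * k), np.sin(w * k)])
        K = P @ phi / (lam + phi @ P @ phi)   # RLS gain
        theta = theta + K * (xk - phi @ theta)
        P = (P - np.outer(K, phi) @ P) / lam  # covariance update with forgetting
        phasors.append(theta[0] - 1j * theta[1])
    return np.array(phasors)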

Relevance:

100.00%

Publisher:

Abstract:

The main aim here is to determine the orbit of an artificial satellite using signals from the GPS constellation and least squares algorithms implemented through sequential Givens rotations as the estimation method, with the goal of improving the performance of the orbit estimation process while minimizing the computational cost. Geopotential perturbations up to high order and direct solar radiation pressure were taken into account. The position of the GPS antenna on the satellite body was also considered, which ultimately amounts to including the influence of the satellite's attitude motion in the orbit determination process. An application was carried out using real data from the Topex/Poseidon satellite, whose ephemerides are available on the Internet. The best position accuracy obtained was better than 5 meters for short-period (2 hours) and better than 28 meters for long-period (24 hours) orbit determination. In both cases the perturbations mentioned above were taken into account, and the analysis was carried out without selective availability on the signal measurements.
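
A compact sketch of the sequential Givens-rotation update at the heart of such an estimator (a generic square-root least squares update, not the actual flight software): each new observation row is rotated into the upper-triangular factor R and right-hand side d, after which R x = d yields the least squares solution.

import numpy as np

def givens_update(R, d, a_row, b_val):
    # Rotate one new observation (a_row, b_val) into the triangular factor R and vector d, in place.
    a, b = a_row.astype(float).copy(), float(b_val)
    n = len(a)
    for i in range(n):
        if a[i] == 0.0:
            continue
        r = np.hypot(R[i, i], a[i])
        c, s = R[i, i] / r, a[i] / r
        Ri, ai = R[i, i:].copy(), a[i:].copy()
        R[i, i:] = c * Ri + s * ai
        a[i:] = -s * Ri + c * ai
        d[i], b = c * d[i] + s * b, -s * d[i] + c * b
    return R, d

# After all observations have been processed: x = np.linalg.solve(np.triu(R), d)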

Relevance:

100.00%

Publisher:

Abstract:

The main objective of this work is to find mathematical models, based on linear parametric estimation techniques, for the problem of calculating the gas flow rate in oil wells. In particular, we focus on obtaining flow models for wells on oil rigs that produce by the plunger-lift technique, in which case there are high peaks in the flow values that hinder their direct measurement by instruments. To this end, we develop estimators based on recursive least squares and analyse statistical measures such as the autocorrelation, cross-correlation, variogram and cumulative periodogram, which are computed recursively as data are obtained in real time from the plant in operation; the values of these measures indicate how accurate the model in use is and how it can be changed to better fit the measured values. The models were tested in a pilot plant that emulates the gas production process in oil wells.
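
As an illustration of the recursively computed model-validity measures mentioned above (a generic sketch with an assumed forgetting factor, not the thesis code), the residual autocorrelation can be updated online as each new residual arrives:

import numpy as np

class RecursiveAutocorr:
    # Exponentially weighted autocorrelation of model residuals at chosen lags.
    def __init__(self, lags=(1, 2, 3), lam=0.99):
        self.lags, self.lam = lags, lam
        self.buf = [0.0] * (max(lags) + 1)              # recent residuals, newest first
        self.c = {k: 0.0 for k in (0,) + tuple(lags)}   # running lagged products

    def update(self, e):
        self.buf = [e] + self.buf[:-1]
        for k in self.c:
            self.c[k] = self.lam * self.c[k] + (1 - self.lam) * e * self.buf[k]
        # Normalised autocorrelation estimates at the requested lags.
        return {k: self.c[k] / max(self.c[0], 1e-12) for k in self.lags}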