908 results for quasi-least squares
Abstract:
Singular value decomposition least squares (SVDLS), a new method for processing multiple spectra with multiple wavelengths and multiple components in thin-layer spectroelectrochemistry, has been developed. The CD spectra of three components (norepinephrine, the reduced form of norepinephrinechrome, and norepinephrinequinone) and their fraction distributions as a function of applied potential were obtained for the three redox processes of norepinephrine from 30 experimental CD spectra. These results explain well both the electrochemical mechanism of norepinephrine and the changes in the CD spectrum during the electrochemical processes.
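The general SVD-plus-least-squares idea can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' algorithm: the component shapes, fraction distributions, noise level, and rank threshold are all invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the experiment: 30 spectra sampled at 100
# wavelengths, mixtures of 3 components (shapes and fractions are made up).
wl = np.linspace(250, 600, 100)
pure = np.vstack([np.exp(-((wl - c) / 40.0) ** 2) for c in (300, 400, 500)])  # (3, 100)
fractions = rng.dirichlet(np.ones(3), size=30)                                # (30, 3)
spectra = fractions @ pure + 1e-3 * rng.standard_normal((30, 100))

# SVD step: the number of singular values above the noise floor estimates
# the number of independent components in the data matrix.
s = np.linalg.svd(spectra, compute_uv=False)
n_components = int(np.sum(s > 10 * s[3]))

# Least squares step: given the component spectra, the fraction of each
# component in every measured spectrum follows from ordinary least squares.
frac_hat = np.linalg.lstsq(pure.T, spectra.T, rcond=None)[0].T
```

With well-separated components, the three signal singular values sit far above the noise floor and the recovered fractions match the true mixing proportions closely.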
Abstract:
This paper describes the application of multivariate regression techniques to the Tennessee Eastman benchmark process for modelling and fault detection. Two methods are applied: linear partial least squares, and a nonlinear variant of this procedure using a radial basis function inner relation. The performance of the RBF networks is enhanced through the use of a recently developed training algorithm which uses quasi-Newton optimization to ensure an efficient and parsimonious network; details of this algorithm can be found in this paper. The PLS and PLS/RBF methods are then used to create on-line inferential models of delayed process measurements. As these measurements relate to the final product composition, these models suggest that on-line statistical quality control analysis should be possible for this plant. The generation of 'soft sensors' for these measurements has the further effect of introducing a redundant element into the system, redundancy which can then be used to generate a fault detection and isolation scheme for these sensors. This is achieved by arranging the sensors and models in a manner comparable to the dedicated estimator scheme of Clarke et al. 1975, IEEE Trans. Aero. Elect. Sys., AES-14R, 465-473. The effectiveness of this scheme is demonstrated on a series of simulated sensor and process faults, with full detection and isolation shown to be possible for sensor malfunctions, and detection feasible in the case of process faults. Suggestions for enhancing the diagnostic capacity in the latter case are covered towards the end of the paper.
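The building block of the RBF inner relation can be sketched with fixed Gaussian centres and output weights obtained by least squares. This is only an illustrative soft-sensor fit on made-up data; the paper instead trains its networks with a quasi-Newton algorithm, and the inputs, centres, and widths here are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy process data: one "delayed quality" variable driven nonlinearly by
# two measured inputs (shapes and noise level are invented).
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.01 * rng.standard_normal(200)

# RBF network with fixed Gaussian centres: the model is linear in the basis
# outputs, so the output weights follow from least squares.
centres = rng.uniform(-1, 1, size=(25, 2))
width = 0.5

def design(X):
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    Phi = np.exp(-d2 / (2 * width ** 2))
    return np.hstack([Phi, np.ones((len(X), 1))])  # bias column

w = np.linalg.lstsq(design(X), y, rcond=None)[0]
rmse = np.sqrt(np.mean((design(X) @ w - y) ** 2))
```

The linearity of the output layer is what makes RBF networks attractive as a PLS inner relation: only the centres and widths require nonlinear optimisation.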
Abstract:
This paper studies seemingly unrelated linear models with integrated regressors and stationary errors. By adding leads and lags of the first differences of the regressors and estimating this augmented dynamic regression model by feasible generalized least squares using the long-run covariance matrix, we obtain an efficient estimator of the cointegrating vector that has a limiting mixed normal distribution. Simulation results suggest that this new estimator compares favorably with others already proposed in the literature. We apply these new estimators to the testing of purchasing power parity (PPP) among the G-7 countries. The test based on the efficient estimates rejects the PPP hypothesis for most countries.
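The leads-and-lags augmentation can be sketched in a single-equation setting. This is only an illustration of the dynamic-regression idea on simulated data; the paper's estimator further applies feasible GLS with the long-run covariance matrix across equations, while plain OLS is used here.

```python
import numpy as np

rng = np.random.default_rng(2)
T, k = 400, 2  # sample size, number of leads and lags

# Cointegrated pair: x is a random walk and y = 2*x + u, where u is
# correlated with the increments of x (the endogeneity that the
# leads-and-lags augmentation is designed to absorb).
dx = rng.standard_normal(T)
x = np.cumsum(dx)
u = 0.8 * dx + 0.3 * rng.standard_normal(T)
y = 2.0 * x + u

# Augmented dynamic regression: y_t on x_t plus dx_{t-k}, ..., dx_{t+k}.
# The coefficient on x_t estimates the cointegrating parameter.
t = np.arange(k, T - k)
Z = np.column_stack([x[t]] + [dx[t + j] for j in range(-k, k + 1)])
beta = np.linalg.lstsq(Z, y[t], rcond=None)[0]
```

After the augmentation, the remaining error is orthogonal to the regressor increments, which is what delivers the mixed normal limiting distribution.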
Abstract:
This paper proposes finite-sample procedures for testing the SURE specification in multi-equation regression models, i.e. whether the disturbances in different equations are contemporaneously uncorrelated or not. We apply the technique of Monte Carlo (MC) tests [Dwass (1957), Barnard (1963)] to obtain exact tests based on standard LR and LM zero correlation tests. We also suggest a MC quasi-LR (QLR) test based on feasible generalized least squares (FGLS). We show that the latter statistics are pivotal under the null, which provides the justification for applying MC tests. Furthermore, we extend the exact independence test proposed by Harvey and Phillips (1982) to the multi-equation framework. Specifically, we introduce several induced tests based on a set of simultaneous Harvey/Phillips-type tests and suggest a simulation-based solution to the associated combination problem. The properties of the proposed tests are studied in a Monte Carlo experiment which shows that standard asymptotic tests exhibit important size distortions, while MC tests achieve complete size control and display good power. Moreover, MC-QLR tests performed best in terms of power, a result of interest from the point of view of simulation-based tests. The power of the MC induced tests improves appreciably in comparison to standard Bonferroni tests and, in certain cases, outperforms the likelihood-based MC tests. The tests are applied to data used by Fischer (1993) to analyze the macroeconomic determinants of growth.
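The Monte Carlo test technique can be sketched for a two-equation zero-correlation test. The statistic and sample size below are invented for illustration; the point is only the mechanics of an MC p-value for a statistic that is pivotal under the null.

```python
import numpy as np

rng = np.random.default_rng(3)

def lm_zero_corr(e1, e2):
    """LM-type statistic for zero contemporaneous correlation: T * r^2."""
    r = np.corrcoef(e1, e2)[0, 1]
    return len(e1) * r ** 2

def mc_pvalue(stat_obs, T, n_rep=99):
    # The statistic is pivotal under the null, so its null distribution can
    # be simulated exactly from independent normal disturbances.
    sims = np.array([lm_zero_corr(rng.standard_normal(T), rng.standard_normal(T))
                     for _ in range(n_rep)])
    return (1 + np.sum(sims >= stat_obs)) / (n_rep + 1)

T = 50
# Strongly correlated disturbances: the MC p-value should be small.
e1 = rng.standard_normal(T)
e2 = 0.9 * e1 + np.sqrt(1 - 0.81) * rng.standard_normal(T)
p = mc_pvalue(lm_zero_corr(e1, e2), T)
```

Because the p-value is computed from the exact finite-sample null distribution, the test achieves complete size control regardless of T.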
Abstract:
It is well known that standard asymptotic theory is not valid or is extremely unreliable in models with identification problems or weak instruments [Dufour (1997, Econometrica), Staiger and Stock (1997, Econometrica), Wang and Zivot (1998, Econometrica), Stock and Wright (2000, Econometrica), Dufour and Jasiak (2001, International Economic Review)]. One possible way out consists in using a variant of the Anderson-Rubin (1949, Ann. Math. Stat.) procedure. The latter, however, allows one to build exact tests and confidence sets only for the full vector of the coefficients of the endogenous explanatory variables in a structural equation, and so does not in general yield inference on individual coefficients. This problem may in principle be overcome by using projection techniques [Dufour (1997, Econometrica), Dufour and Jasiak (2001, International Economic Review)]. AR-type procedures are emphasized because they are robust to both weak instruments and instrument exclusion. However, these techniques could previously be implemented only through costly numerical methods. In this paper, we provide a complete analytic solution to the problem of building projection-based confidence sets from Anderson-Rubin-type confidence sets. The solution exploits the geometric properties of "quadrics" and can be viewed as an extension of the usual confidence intervals and ellipsoids. Only least squares techniques are required for building the confidence intervals. We also study by simulation how "conservative" projection-based confidence sets are. Finally, we illustrate the methods proposed by applying them to three different examples: the relationship between trade and growth in a cross-section of countries, returns to education, and a study of production functions in the U.S. economy.
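For the bounded case, the projection of a quadric confidence set has a closed form that needs no numerical search. The matrix, centre, and level below are hypothetical; the paper's analytic solution also covers the unbounded quadrics that arise under weak instruments.

```python
import numpy as np

# Hypothetical joint confidence set {b : (b - c)' A (b - c) <= q},
# a bounded quadric (an ellipsoid) with A positive definite.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
c = np.array([1.0, -0.5])
q = 5.99  # e.g. a chi-square(2) critical value

# Projection onto coordinate i is the interval
# c_i +/- sqrt(q * (A^{-1})_{ii}): only least squares style linear algebra.
half = np.sqrt(q * np.diag(np.linalg.inv(A)))
intervals = np.column_stack([c - half, c + half])
```

Each row of `intervals` is a projection-based confidence interval for one coefficient; by construction it contains the image of every point of the joint set.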
Abstract:
The classical methods of analysing time series by the Box-Jenkins approach assume that the observed series fluctuates around changing levels with constant variance; that is, the time series is assumed to be homoscedastic. Financial time series, however, exhibit heteroscedasticity in the sense that the conditional variance given the past observations is non-constant. The analysis of financial time series therefore requires modelling such variances, which may depend on time-dependent factors or on their own past values. This has led to the introduction of several classes of models to study the behaviour of financial time series; see Taylor (1986), Tsay (2005), Rachev et al. (2007). The class of models used to describe the evolution of conditional variances is referred to as stochastic volatility models. The stochastic models available to analyse the conditional variances are based on either normal or log-normal distributions. One of the objectives of the present study is to explore the possibility of employing some non-Gaussian distributions to model the volatility sequences and then study the behaviour of the resulting return series. This led us to work on the related problem of statistical inference, which is the main contribution of the thesis.
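The canonical log-normal stochastic volatility model can be sketched as follows. The parameter values are invented, and Gaussian innovations are used purely for illustration; the thesis explores non-Gaussian alternatives for the volatility sequence.

```python
import numpy as np

rng = np.random.default_rng(4)
T = 1000
phi, sigma_eta, mu = 0.95, 0.2, -1.0  # hypothetical parameter values

# Log-volatility follows an AR(1): h_t = mu + phi*(h_{t-1} - mu) + eta_t,
# and returns are r_t = exp(h_t / 2) * eps_t.
h = np.empty(T)
h[0] = mu
for t in range(1, T):
    h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.standard_normal()
r = np.exp(h / 2) * rng.standard_normal(T)
```

The simulated returns are serially uncorrelated but conditionally heteroscedastic: their variance is systematically larger in high-volatility periods (large h) than in low-volatility ones.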
Abstract:
Customer satisfaction and retention are key issues for organizations in today's competitive market place. As such, much research and revenue has been invested in developing accurate ways of assessing consumer satisfaction at both the macro (national) and micro (organizational) level, facilitating comparisons in performance both within and between industries. Since the inception of the national customer satisfaction indices (CSI), partial least squares (PLS) has been used to estimate the CSI models in preference to structural equation models (SEM) because PLS does not rely on strict assumptions about the data. However, this choice was based upon some misconceptions about the use of SEMs and does not take into consideration more recent advances in SEM, including estimation methods that are robust to non-normality and missing data. In this paper, both SEM and PLS approaches were compared by evaluating perceptions of the Isle of Man Post Office products and customer service using a CSI format. The new robust SEM procedures were found to be advantageous over PLS. Product quality was found to be the only driver of customer satisfaction, while image and satisfaction were the only predictors of loyalty, thus arguing for the specificity of postal services.
Abstract:
The crisis that broke out in the US mortgage market in 2008 and spread throughout the entire financial system exposed the degree of interconnection that currently exists among the sector's institutions and in their relationships with the productive sector, making clear the need to identify and characterize the systemic risk inherent in the system, so that regulators can pursue stability both for individual institutions and for the system as a whole. This document shows, through a model that combines the informative power of networks with a spatial autoregressive (panel-type) model, the importance of adding to the micro-prudential approach (proposed in Basel II) a variable that captures the effect of being connected to other institutions, thereby performing a macro-prudential analysis (proposed in Basel III).
Abstract:
We propose and estimate a financial distress model that explicitly accounts for the interactions or spill-over effects between financial institutions, through the use of a spatial contiguity matrix that is built from financial network data on interbank transactions. This setup of the financial distress model allows for the empirical validation of the importance of network externalities in determining financial distress, in addition to institution-specific and macroeconomic covariates. The relevance of this specification is that it incorporates simultaneously micro-prudential factors (Basel 2) as well as macro-prudential and systemic factors (Basel 3) as determinants of financial distress. Results indicate that network externalities are an important determinant of the financial health of financial institutions. The parameter that measures the effect of network externalities is both economically and statistically significant, and its inclusion as a risk factor reduces the importance of firm-specific variables such as the size or degree of leverage of the financial institution. In addition, we analyze the policy implications of the network factor model for capital requirements and deposit insurance pricing.
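The spatial-autoregressive structure can be sketched on simulated data. Everything below is hypothetical (random network, invented parameters), and the crude grid-search estimator stands in for the likelihood-based estimators normally used for such models; least squares on spatially filtered data is not consistent in general.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 80

# Row-normalised weight matrix W from a hypothetical interbank network.
A = (rng.random((n, n)) < 0.1).astype(float)
np.fill_diagonal(A, 0.0)
W = A / np.maximum(A.sum(axis=1, keepdims=True), 1.0)

# Spatial-autoregressive model: y = rho*W y + X b + e,
# so the reduced form is y = (I - rho W)^{-1} (X b + e).
rho_true, b_true = 0.5, np.array([1.0, -2.0])
X = rng.standard_normal((n, 2))
y = np.linalg.solve(np.eye(n) - rho_true * W,
                    X @ b_true + 0.1 * rng.standard_normal(n))

# Toy estimation: grid search over rho, OLS on the filtered data y - rho*W y.
best = None
for rho in np.linspace(-0.9, 0.9, 181):
    yf = y - rho * (W @ y)
    b = np.linalg.lstsq(X, yf, rcond=None)[0]
    rss = np.sum((yf - X @ b) ** 2)
    if best is None or rss < best[0]:
        best = (rss, rho, b)
rss, rho_hat, b_hat = best
```

The coefficient rho captures the network externality: each institution's distress depends on a weighted average of its counterparties' distress.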
Abstract:
A construction algorithm for multioutput radial basis function (RBF) network modelling is introduced by combining a locally regularised orthogonal least squares (LROLS) model selection with a D-optimality experimental design. The proposed algorithm aims to achieve maximised model robustness and sparsity via two effective and complementary approaches. The LROLS method alone is capable of producing a very parsimonious RBF network model with excellent generalisation performance. The D-optimality design criterion enhances the model efficiency and robustness. A further advantage of the combined approach is that the user only needs to specify a weighting for the D-optimality cost in the combined RBF model selection criterion, and the entire model construction procedure becomes automatic. The value of this weighting does not influence the model selection procedure critically, and it can be chosen with ease from a wide range of values.
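The orthogonal-least-squares core of such selection procedures can be sketched as greedy forward selection by residual-error reduction. The data are synthetic, and the local regularisation and D-optimality cost of LROLS are deliberately omitted; this shows only the underlying OLS subset-selection idea.

```python
import numpy as np

rng = np.random.default_rng(5)

# Candidate regressors (e.g. RBF outputs); only a few truly drive the output.
n, m = 300, 20
P = rng.standard_normal((n, m))
true_idx = [2, 7, 11]
y = P[:, true_idx] @ np.array([1.5, -2.0, 1.0]) + 0.05 * rng.standard_normal(n)

# Greedy forward selection: at each step pick the candidate column that
# explains the most residual energy, then refit and update the residual.
selected, residual = [], y.copy()
for _ in range(3):
    scores = [(P[:, j] @ residual) ** 2 / (P[:, j] @ P[:, j])
              if j not in selected else -1.0
              for j in range(m)]
    selected.append(int(np.argmax(scores)))
    coef = np.linalg.lstsq(P[:, selected], y, rcond=None)[0]
    residual = y - P[:, selected] @ coef
```

Stopping after a few terms is what yields the parsimonious models the abstract refers to; regularisation and the D-optimality weighting further bias the choice towards robust, well-conditioned subsets.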
Abstract:
Moving-least-squares (MLS) surfaces undergoing large deformations need periodic regeneration of the point set (point-set resampling) so as to keep the point-set density quasi-uniform. Previous work by the authors dealt with algebraic MLS surfaces, and proposed a resampling strategy based on defining the new points at the intersections of the MLS surface with a suitable set of rays. That strategy has very low memory requirements and is easy to parallelize. In this article, new resampling strategies with reduced CPU-time cost are explored. The basic idea is to choose as the set of rays the lines of a regular, Cartesian grid, and to fully exploit this grid: as data structure for search queries, as spatial structure for traversing the surface in a continuation-like algorithm, and also as approximation grid for an interpolated version of the MLS surface. It is shown that in this way a very simple and compact resampling technique is obtained, which cuts the resampling cost by half with affordable memory requirements.
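The ray-intersection idea can be sketched in 2-D with an implicit curve standing in for the MLS surface. The circle, grid spacing, and root-finding scheme below are all invented for illustration; the article's method works on actual MLS surfaces in 3-D.

```python
import numpy as np

# Hypothetical stand-in for the MLS surface: the implicit circle f(x, y) = 0.
def f(x, y):
    return x ** 2 + y ** 2 - 1.0

def bisect(g, lo, hi, iters=60):
    """Locate a sign change of g on [lo, hi] by bisection."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Resample the curve at its intersections with the vertical lines of a
# regular Cartesian grid: scan each line for sign changes of f, then refine.
h = 0.25
points = []
for x in np.arange(-1.5, 1.5 + h / 2, h):
    ys = np.arange(-1.5, 1.5, h)
    vals = f(x, ys)
    for i in np.flatnonzero(np.sign(vals[:-1]) != np.sign(vals[1:])):
        points.append((x, bisect(lambda t: f(x, t), ys[i], ys[i + 1])))
points = np.array(points)
```

Because the rays are the grid lines themselves, the sample spacing along the surface is bounded by the grid spacing, which keeps the regenerated point set quasi-uniform.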
Abstract:
Empirical evidence suggests that the real exchange rate is characterized by the presence of near-unit roots and additive outliers. Recent studies have found evidence in favor of PPP reversion by using the quasi-differencing (Elliott et al., 1996) unit root tests (ERS), which are more efficient against local alternatives but are still based on least squares estimation. Unit root tests based on the least squares method usually tend to bias inference towards stationarity when additive outliers are present. In this paper, we incorporate quasi-differencing into M-estimation to construct a unit root test that is robust not only against near-unit roots but also against the non-Gaussian behavior provoked by additive outliers. We revisit the PPP hypothesis and find less evidence in favor of PPP reversion when non-Gaussian behavior in real exchange rates is taken into account.
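The quasi-differencing (GLS demeaning) step of the ERS approach can be sketched as follows. The series is simulated and the deterministic component is just a constant; the paper's contribution is to replace the least squares step below with a robust M-estimator.

```python
import numpy as np

rng = np.random.default_rng(6)
T = 200
cbar = -7.0            # ERS local-to-unity parameter for the demeaned case
abar = 1 + cbar / T

# Quasi-difference the series and the constant, then estimate the mean by
# least squares in the quasi-differenced space.
y = np.cumsum(rng.standard_normal(T))   # a unit-root series
yq = np.concatenate(([y[0]], y[1:] - abar * y[:-1]))
zq = np.concatenate(([1.0], np.full(T - 1, 1 - abar)))
mu_hat = (zq @ yq) / (zq @ zq)
yd = y - mu_hat                         # GLS-demeaned series for the test
```

The unit root statistic is then computed from the demeaned series `yd`; quasi-differencing at `abar` close to one is what gives the test its power against local alternatives.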
Abstract:
Two of the main goals of petrophysical log interpretation are determining the boundaries between geological layers and the contacts between fluids. For this purpose, the induction log has some important properties: it is sensitive to the type of fluid and to its distribution in the pore space; and its record can be modelled with satisfactory accuracy as a convolution between the formation conductivity and the tool response function. The first property ensures good reservoir characterization and, at the same time, highlights the contacts between fluids, which allows a basic zoning of the well log. The second property stems from the quasi-linear relation between the induction log and the formation conductivity, which makes it possible to use linear systems theory and, in particular, to design digital filters adapted to the deconvolution of the original signal. The idea in this work is to produce an algorithm capable of identifying the contacts between the layers crossed by the well from the apparent conductivity read by the induction log. To simplify the problem, the formation model assumes a plane-parallel distribution of homogeneous layers. This model corresponds to a rectangular profile for the formation conductivity. Using the digitized input log, the inflection points are obtained numerically from the extrema of the first derivative. This yields a first approximation of the true formation profile. The estimated profile is then convolved with the tool response function, generating an apparent conductivity log. A constrained least squares cost function is defined in terms of the difference between the measured and estimated apparent conductivities. Minimizing the cost function yields the layer conductivities.
The optimization problem of finding the best rectangular profile for the induction data is linear in the amplitudes (the layer conductivities) but nonlinear in the layer boundaries. The amplitudes are therefore estimated linearly by least squares while keeping the boundaries fixed. In a second step, the amplitudes are kept fixed and small adjustments to the layer boundaries are computed using a linearized approximation. This process is iterative, producing successive refinements until a convergence criterion is satisfied. The algorithm is applied to synthetic and real data, demonstrating the robustness of the method.
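The linear step of the alternating scheme can be sketched as follows: with the boundaries held fixed, the apparent conductivity is linear in the layer amplitudes, so the amplitudes follow from least squares. The depth grid, layer geometry, tool kernel, and noise level are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

# Depth grid, a 3-layer rectangular conductivity profile, and a Gaussian
# kernel standing in for the induction tool response (values invented).
n = 200
boundaries = [0, 70, 130, n]          # layer limits (held fixed in this step)
true_sigma = np.array([0.1, 1.0, 0.3])

profile = np.zeros(n)
for k in range(3):
    profile[boundaries[k]:boundaries[k + 1]] = true_sigma[k]

kernel = np.exp(-np.linspace(-3, 3, 31) ** 2)
kernel /= kernel.sum()
measured = np.convolve(profile, kernel, mode="same") + 0.005 * rng.standard_normal(n)

# With fixed boundaries, each layer contributes a convolved box function,
# so the forward model is G @ sigma and the amplitudes come from lstsq.
G = np.zeros((n, 3))
for k in range(3):
    box = np.zeros(n)
    box[boundaries[k]:boundaries[k + 1]] = 1.0
    G[:, k] = np.convolve(box, kernel, mode="same")
sigma_hat = np.linalg.lstsq(G, measured, rcond=None)[0]
```

In the full algorithm this linear solve alternates with a linearized update of the boundary positions until convergence.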
Abstract:
This work consists of formulating a methodology for the automatic interpretation of magnetic field data. Its use will make it possible to determine the boundaries and the magnetization of each body. The methodology is based on the abrupt lateral variations in the magnetization of the bodies. These abrupt lateral variations are represented by discontinuous polynomials known as Walsh polynomials. In this work, many new concepts were developed in applying Walsh polynomials to the inversion of aeromagnetic data. Among the new aspects considered are: (i) the development of an optimal algorithm to generate a set of "quasi-orthogonal" polynomials based on the Walsh magnetization distribution; (ii) the use of damped least squares to stabilize the inverse solution; (iii) an investigation of the non-invariance problems inherent in the use of Walsh polynomials; (iv) an investigation of the choice of polynomial order, taking into account resolution limitations and the behavior of the eigenvalues. Using these characteristics of the magnetized bodies, it is possible to formulate the forward problem, in which the magnetization of the bodies follows the Walsh distribution. It is also possible to formulate the inverse problem, in which the magnetization generating the observed field follows the Walsh series. Before the method can be used, a first estimate of the location of the magnetic sources is required. A methodology developed by LOURES (1991) was chosen, which is based on Euler's homogeneity equation and requires only knowledge of the magnetic field and its derivatives. To test the methodology on real data, a region located in the Alto Amazonas sedimentary basin was chosen. The data come from an aeromagnetic survey carried out by PETROBRÁS.
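The damped least squares stabilization mentioned in point (ii) can be sketched on a generic ill-conditioned linear inverse problem. The operator and data below are invented stand-ins for the Walsh-coefficient inversion.

```python
import numpy as np

rng = np.random.default_rng(8)

# Ill-conditioned linear inverse problem G m = d: a near-collinear
# Vandermonde matrix plays the role of the forward operator.
n = 50
G = np.vander(np.linspace(0, 1, n), 12, increasing=True)
m_true = rng.standard_normal(12)
d = G @ m_true + 1e-3 * rng.standard_normal(n)

# Damped least squares: minimise ||G m - d||^2 + lam * ||m||^2, i.e.
# m = (G'G + lam I)^{-1} G' d.  The damping term bounds the solution norm
# and suppresses noise amplification along small singular values.
lam = 1e-6
m_damped = np.linalg.solve(G.T @ G + lam * np.eye(12), G.T @ d)
```

The damping parameter trades data misfit against solution stability; its choice interacts with the eigenvalue behaviour investigated in point (iv).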