951 results for vector auto-regressive model


Relevância:

100.00%

Publicador:

Resumo:

This dissertation analyzes the connection between the public debt market and monetary policy in Brazil. Using a Vector Auto-Regressive (VAR) model, two alternative proxies for inflation risk are employed to show that positive shocks to inflation risk raise both the market's inflation expectations and the future interest rates of the Swap Pré x DI. Next, based on the dynamic-inconsistency model of Blanchard and Missale (1994) and using the Johansen methodology, it is found that an increase in future interest rates shortens the maturity of the public debt in the long run. The results lead to two conclusions: inflation risk 1) makes it harder for the government to place nominal (non-indexed) bonds in the market, producing a shorter-than-ideal debt profile, and 2) makes monetary policy more costly.
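The core VAR step described in this abstract can be sketched in a few lines of numpy. The bivariate system and coefficient matrix below are hypothetical stand-ins (e.g. for an inflation-risk proxy and a swap rate), not the dissertation's data; estimation is plain OLS of each variable on the lagged vector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical VAR(1) coefficient matrix for a bivariate system
# (illustrative values only; eigenvalues 0.6 and 0.3, so stationary).
A = np.array([[0.5, 0.2],
              [0.1, 0.4]])

# Simulate T observations: y_t = A @ y_{t-1} + noise
T = 5000
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A @ y[t - 1] + rng.normal(scale=0.1, size=2)

# Estimate A by OLS: regress y_t on y_{t-1} (Y = X A', so lstsq returns A')
X, Y = y[:-1], y[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
```

With a long enough sample, `A_hat` recovers `A` closely; impulse responses to "inflation-risk shocks" would then be computed from powers of the estimated matrix.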

Resumo:

This paper presents an approach for automatic classification of pulsed Terahertz (THz), or T-ray, signals highlighting their potential in biomedical, pharmaceutical and security applications. T-ray classification systems supply a wealth of information about test samples and make possible the discrimination of heterogeneous layers within an object. In this paper, a novel technique involving the use of Auto Regressive (AR) and Auto Regressive Moving Average (ARMA) models on the wavelet transforms of measured T-ray pulse data is presented. Two example applications are examined - the classification of normal human bone (NHB) osteoblasts against human osteosarcoma (HOS) cells and the identification of six different powder samples. A variety of model types and orders are used to generate descriptive features for subsequent classification. Wavelet-based de-noising with soft threshold shrinkage is applied to the measured T-ray signals prior to modeling. For classification, a simple Mahalanobis distance classifier is used. After feature extraction, classification accuracy for cancerous and normal cell types is 93%, whereas for powders, it is 98%.
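The AR-features-plus-Mahalanobis-classifier pipeline can be illustrated with a minimal numpy sketch. The two AR(2) processes below are hypothetical stand-ins for the two cell types; features are least-squares AR coefficient estimates, and classification uses Mahalanobis distance to each class mean under a pooled covariance.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_ar2(a1, a2, n=400):
    """Generate an AR(2) signal x_t = a1*x_{t-1} + a2*x_{t-2} + noise."""
    x = np.zeros(n)
    for t in range(2, n):
        x[t] = a1 * x[t - 1] + a2 * x[t - 2] + rng.normal()
    return x

def ar2_features(x):
    """Estimate AR(2) coefficients by least squares; use them as features."""
    X = np.column_stack([x[1:-1], x[:-2]])
    return np.linalg.lstsq(X, x[2:], rcond=None)[0]

# Two hypothetical classes (stand-ins for the two cell types in the paper)
class_a = np.array([ar2_features(simulate_ar2(0.6, -0.3)) for _ in range(30)])
class_b = np.array([ar2_features(simulate_ar2(0.2, 0.5)) for _ in range(30)])

# Mahalanobis-distance classifier with pooled covariance
mu_a, mu_b = class_a.mean(0), class_b.mean(0)
cov = np.cov(np.vstack([class_a - mu_a, class_b - mu_b]).T)
cov_inv = np.linalg.inv(cov)

def classify(f):
    da = (f - mu_a) @ cov_inv @ (f - mu_a)
    db = (f - mu_b) @ cov_inv @ (f - mu_b)
    return "a" if da < db else "b"
```

In the paper the features come from AR/ARMA fits on wavelet-denoised T-ray pulses; here the denoising step is omitted for brevity.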

Resumo:

Uncertainty quantification of petroleum reservoir models is one of the present challenges, usually approached with a wide range of geostatistical tools linked with statistical optimisation and/or inference algorithms. Recent advances in machine learning offer a novel approach, alternative to geostatistics, to modelling the spatial distribution of petrophysical properties in complex reservoirs. The approach is based on semi-supervised learning, which handles both 'labelled' observed data and 'unlabelled' data, which have no measured value but describe prior knowledge and other relevant information in the form of manifolds in the input space where the modelled property is continuous. The proposed semi-supervised Support Vector Regression (SVR) model has demonstrated its capability to represent realistic geological features and to describe the stochastic variability and non-uniqueness of spatial properties. At the same time, it is able to capture and preserve key spatial dependencies, such as the connectivity of high-permeability geo-bodies, which is often difficult in contemporary petroleum reservoir studies. Semi-supervised SVR, as a data-driven algorithm, is designed to integrate various kinds of conditioning information and learn dependencies from it. The semi-supervised SVR model is able to balance signal/noise levels and control the prior belief in the available data. In this work, the stochastic semi-supervised SVR geomodel is integrated into a Bayesian framework to quantify the uncertainty of reservoir production with multiple models fitted to past dynamic observations (production history). Multiple history-matched models are obtained using stochastic sampling and/or MCMC-based inference algorithms, which evaluate the posterior probability distribution. Uncertainty of the model is described by the posterior probability of the model parameters that represent key geological properties: spatial correlation size, continuity strength, and smoothness/variability of the spatial property distribution.
The developed approach is illustrated with a fluvial reservoir case. The resulting probabilistic production forecasts are described by uncertainty envelopes. The paper compares the performance of models with different combinations of unknown parameters and discusses sensitivity issues.
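The MCMC-based posterior evaluation mentioned in the abstract can be sketched with a random-walk Metropolis sampler. Everything below is a toy stand-in: one unknown parameter `theta` (instead of the geomodel's correlation size, continuity strength, etc.) and synthetic "production history" data with a Gaussian likelihood.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical "observed production history": noisy observations governed
# by one unknown parameter theta (stand-in for a geological property).
theta_true = 2.0
data = theta_true + rng.normal(scale=0.5, size=50)

def log_posterior(theta):
    # Flat prior; Gaussian likelihood with known noise sigma = 0.5
    return -0.5 * np.sum((data - theta) ** 2) / 0.5 ** 2

# Random-walk Metropolis sampler
samples = []
theta = 0.0
lp = log_posterior(theta)
for _ in range(5000):
    prop = theta + rng.normal(scale=0.2)
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)

posterior = np.array(samples[1000:])  # discard burn-in
```

The spread of `posterior` plays the role of the paper's uncertainty envelope; production forecasts would be run for each retained sample.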

Resumo:

This paper analyzes how doctoral theses in mathematics education are produced in Spain. The ARIMA (Auto-Regressive Integrated Moving Average) methodology is used to carry out a diachronic analysis of longitudinal data. Emphasis is placed on the importance of the methodology used and its advantages over the methodologies traditionally employed in diachronic analyses. The four phases of the ARIMA methodology are presented: identification of the process, estimation of change in the process, validation of the model, and prediction of its consequences.
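The four ARIMA phases map naturally onto a short numpy sketch: identify the need for differencing, estimate an AR term, validate via residual autocorrelation, and predict one step ahead. The yearly series below is synthetic, a hypothetical stand-in for counts of theses per year.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical yearly series with a trend (stand-in for theses per year)
t = np.arange(40)
series = 2.0 * t + rng.normal(scale=1.0, size=40).cumsum()

# 1. Identification: the series trends, so difference once (the "I" in ARIMA)
diff = np.diff(series)

# 2. Estimation: fit an AR(1) with intercept on the differenced series
X = np.column_stack([np.ones(len(diff) - 1), diff[:-1]])
c, phi = np.linalg.lstsq(X, diff[1:], rcond=None)[0]

# 3. Validation: residuals should look like white noise (lag-1 autocorr)
resid = diff[1:] - (c + phi * diff[:-1])
r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]

# 4. Prediction: one-step forecast, undoing the differencing
forecast = series[-1] + c + phi * diff[-1]
```

A full ARIMA(p,d,q) fit would also include MA terms and formal diagnostics (e.g. Ljung-Box), omitted here for brevity.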

Resumo:

We explore the mutual dependencies and interactions among different groups of species of the plankton population, based on an analysis of the long-term field observations carried out by our group in the North–West coast of the Bay of Bengal. The plankton community is structured into three groups of species, namely, non-toxic phytoplankton (NTP), toxic phytoplankton (TPP) and zooplankton. To find the pair-wise dependencies among the three groups of plankton, Pearson and partial correlation coefficients are calculated. To explore the simultaneous interaction among all the three groups, a time series analysis is performed. Following an Expectation Maximization (E-M) algorithm, those data points which are missing due to irregularities in sampling are estimated, and with the completed data set a Vector Auto-Regressive (VAR) model is analyzed. The overall analysis demonstrates that toxin-producing phytoplankton play two distinct roles: the inhibition on consumption of toxic substances reduces the abundance of zooplankton, and the toxic materials released by TPP significantly compensate for the competitive disadvantages among phytoplankton species. Our study suggests that the presence of TPP might be a possible cause for the generation of a complex interaction among the large number of phytoplankton and zooplankton species that might be responsible for the prolonged coexistence of the plankton species in a fluctuating biomass.
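The E-M completion of irregularly sampled data described above can be sketched as an iteration between imputing missing values from the current model and refitting the model on the completed data. The bivariate series below is a hypothetical stand-in (e.g. one phytoplankton group vs zooplankton), and the "model" is a simple linear regression rather than the full VAR.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical correlated abundances with ~20% of one series missing,
# mimicking irregular sampling
n = 200
x = rng.normal(size=n)
y = 1.5 * x + rng.normal(scale=0.5, size=n)
missing = rng.uniform(size=n) < 0.2
y_obs = np.where(missing, np.nan, y)

# EM-style iteration: E-step fills missing y from the current regression,
# M-step refits the regression on the completed data
y_fill = np.where(missing, np.nanmean(y_obs), y_obs)
X = np.column_stack([np.ones(n), x])
for _ in range(20):
    beta = np.linalg.lstsq(X, y_fill, rcond=None)[0]  # M-step
    y_fill = np.where(missing, X @ beta, y_obs)       # E-step
```

After convergence, the completed series `y_fill` could feed a VAR fit as in the study.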

Resumo:

The real exchange rate is an important macroeconomic price in the economy and affects economic activity, interest rates, domestic prices, and trade and investment flows, among other variables. Methodologies have been developed in empirical exchange rate misalignment studies to evaluate whether a real effective exchange rate is overvalued or undervalued. There is a vast body of literature on the determinants of long-term real exchange rates and on empirical strategies to implement the equilibrium norms obtained from theoretical models. This study seeks to contribute to this literature by showing that it is possible to calculate the misalignment from a mixed-frequency cointegrated vector error-correction framework. An empirical exercise using United States' real exchange rate data is performed. The results suggest that the model with mixed-frequency data is preferred to the models with same-frequency variables.
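The misalignment idea behind a cointegrated error-correction framework can be sketched in numpy: estimate the long-run relation between the exchange rate and its fundamentals, take the residual as the misalignment, and verify that it mean-reverts. The series below are simulated stand-ins, not US data, and the mixed-frequency aspect is omitted.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated cointegrated pair: x is a random walk ("fundamentals"),
# y = gamma * x + u with u a stationary AR(1) "misalignment"
n = 3000
x = rng.normal(size=n).cumsum()
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.7 * u[t - 1] + rng.normal(scale=0.5)
y = 2.0 * x + u

# Long-run relation by OLS (superconsistent under cointegration)
gamma = (x @ y) / (x @ x)
misalign = y - gamma * x          # estimated misalignment

# Error-correction check: the misalignment should revert toward zero
ar1 = (misalign[:-1] @ misalign[1:]) / (misalign[:-1] @ misalign[:-1])
```

`ar1` well below one confirms mean reversion of the misalignment; a full VECM would estimate adjustment speeds jointly for both variables.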

Resumo:

Abstract A new LIBS quantitative analysis method based on adaptive analytical line selection and a Relevance Vector Machine (RVM) regression model is proposed. First, a scheme for adaptively selecting analytical lines is put forward in order to overcome the drawback of high dependency on a priori knowledge. Candidate analytical lines are automatically selected based on the built-in characteristics of spectral lines, such as spectral intensity, wavelength and width at half height. The analytical lines used as input variables of the regression model are determined adaptively according to the samples for both training and testing. Second, an LIBS quantitative analysis method based on RVM is presented. The intensities of the analytical lines and the elemental concentrations of certified standard samples are used to train the RVM regression model. The predicted elemental concentrations are given in the form of a confidence interval of a probability distribution, which is helpful for evaluating the uncertainty contained in the measured spectra. Chromium concentration analysis experiments on 23 certified standard high-alloy steel samples were carried out. The multiple correlation coefficient of the prediction was up to 98.85%, and the average relative error of the prediction was 4.01%. The experimental results showed that the proposed LIBS quantitative analysis method achieved better prediction accuracy and better modeling robustness compared with methods based on partial least squares regression, artificial neural networks and the standard support vector machine.
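The key property the abstract highlights, predictions delivered with a confidence interval, comes from the Bayesian regression at the core of the RVM. The sketch below is a non-sparse Bayesian linear regression (the RVM adds per-weight priors for sparsity) on a hypothetical one-line calibration, not the paper's 23-sample data set.

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical calibration: one analytical-line intensity vs concentration
conc = 3.0 * np.linspace(1, 10, 30) + rng.normal(scale=0.5, size=30)
line = np.linspace(1, 10, 30)

X = np.column_stack([np.ones(30), line])
alpha = 1e-3                 # prior precision on the weights
beta_noise = 1.0 / 0.5 ** 2  # noise precision (assumed known here)

# Posterior over weights: S = (alpha*I + beta*X'X)^-1, m = beta * S X'y
S = np.linalg.inv(alpha * np.eye(2) + beta_noise * X.T @ X)
m = beta_noise * S @ X.T @ conc

# Predictive mean and 95% interval for a new line intensity of 5.0
x_new = np.array([1.0, 5.0])
mean = x_new @ m
std = np.sqrt(1.0 / beta_noise + x_new @ S @ x_new)
lo, hi = mean - 1.96 * std, mean + 1.96 * std
```

The predictive variance combines weight uncertainty (`x_new @ S @ x_new`) with measurement noise, which is exactly what makes the interval informative about spectral uncertainty.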

Resumo:

Data fluctuation in multiple measurements of Laser Induced Breakdown Spectroscopy (LIBS) greatly affects the accuracy of quantitative analysis. A new LIBS quantitative analysis method based on the Robust Least Squares Support Vector Machine (RLS-SVM) regression model is proposed. The usual way to enhance the analysis accuracy is to improve the quality and consistency of the emission signal, such as by averaging the spectral signals or spectrum standardization over a number of laser shots. The proposed method focuses more on how to enhance the robustness of the quantitative analysis regression model. The proposed RLS-SVM regression model originates from the Weighted Least Squares Support Vector Machine (WLS-SVM) but has an improved segmented weighting function and residual error calculation according to the statistical distribution of measured spectral data. Through the improved segmented weighting function, the information on the spectral data in the normal distribution will be retained in the regression model while the information on the outliers will be restrained or removed. Copper elemental concentration analysis experiments of 16 certified standard brass samples were carried out. The average value of relative standard deviation obtained from the RLS-SVM model was 3.06% and the root mean square error was 1.537%. The experimental results showed that the proposed method achieved better prediction accuracy and better modeling robustness compared with the quantitative analysis methods based on Partial Least Squares (PLS) regression, standard Support Vector Machine (SVM) and WLS-SVM. It was also demonstrated that the improved weighting function had better comprehensive performance in model robustness and convergence speed, compared with the four known weighting functions.
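The segmented-weighting idea can be illustrated with iteratively reweighted least squares: residuals in the normal range keep full weight, moderate outliers are tapered, and gross outliers are removed. The calibration data and thresholds below are illustrative stand-ins, not the paper's exact weighting function or brass-sample data.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical calibration data with two outlier laser shots
conc = np.linspace(1, 10, 40)
intensity = 3.0 * conc + rng.normal(scale=0.5, size=40)
intensity[[5, 20]] += 15.0

X = np.column_stack([np.ones(40), conc])

def segmented_weights(r):
    """Segmented weighting: full weight for small residuals, tapered
    weight in a middle band, zero for gross outliers (illustrative
    thresholds, not the paper's exact function)."""
    s = 1.4826 * np.median(np.abs(r))   # robust scale via MAD
    z = np.abs(r) / s
    w = np.ones_like(z)
    w[z > 2.0] = 2.0 / z[z > 2.0]       # taper moderate outliers
    w[z > 4.0] = 0.0                    # drop gross outliers
    return w

# Iteratively reweighted least squares
beta = np.linalg.lstsq(X, intensity, rcond=None)[0]
for _ in range(10):
    r = intensity - X @ beta
    w = segmented_weights(r)
    sw = np.sqrt(w)
    beta = np.linalg.lstsq(X * sw[:, None], intensity * sw, rcond=None)[0]
```

The outlier shots end up with zero weight, so the fitted slope tracks the clean calibration line instead of being pulled toward the contaminated shots.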

Resumo:

The state-space approach is used to evaluate the relation between soil physical and chemical properties in an area cultivated with sugarcane. The experiment was carried out on a Rhodic Kandiudalf in Piracicaba, State of São Paulo, Brazil. Sugarcane was planted on an area of 0.21 ha, i.e., in 15 rows 100 m long, spaced 1.4 m apart. Soil water content, soil organic matter, clay content and aggregate stability were sampled along a transect of 84 points, meter by meter. The state-space approach is used to evaluate how the soil water content is affected by itself and by soil organic matter, clay content, and aggregate stability of neighboring locations, in different combinations, aiming to contribute to a better understanding of the relation among these variables in the soil. Results show that soil water contents were successfully estimated by this approach. Best performances were found when the estimate of soil water content at location i was related to soil water content, clay content and aggregate stability at locations i-1. Results also indicate that this state-space model using all series describes the soil water content better than any equivalent multiple regression equation.
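The neighbour-based estimation described above, predicting the value at location i from values at i-1, can be sketched as a first-order lagged regression. The transect series below are simulated stand-ins (only water and clay content, fitted by plain OLS rather than a full Kalman-filter state-space estimation).

```python
import numpy as np

rng = np.random.default_rng(10)

# Hypothetical 84-point transect: water content depends on its own
# neighbour and on clay content at the previous location
n = 84
clay = 30 + rng.normal(scale=2.0, size=n)
water = np.zeros(n)
water[0] = 0.25
for i in range(1, n):
    water[i] = 0.6 * water[i - 1] + 0.004 * clay[i - 1] \
        + rng.normal(scale=0.005)

# Estimate water at location i from values at location i-1
Xn = np.column_stack([water[:-1], clay[:-1]])
coef = np.linalg.lstsq(Xn, water[1:], rcond=None)[0]
```

In the paper the state-space formulation additionally separates observation noise from the underlying state, which is what gives it the edge over an equivalent multiple regression.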

Resumo:

Many of the most interesting questions ecologists ask lead to analyses of spatial data. Yet, perhaps confused by the large number of statistical models and fitting methods available, many ecologists seem to believe this is best left to specialists. Here, we describe the issues that need consideration when analysing spatial data and illustrate these using simulation studies. Our comparative analysis involves using methods including generalized least squares, spatial filters, wavelet revised models, conditional autoregressive models and generalized additive mixed models to estimate regression coefficients from synthetic but realistic data sets, including some which violate standard regression assumptions. We assess the performance of each method using two measures and using statistical error rates for model selection. Methods that performed well included generalized least squares family of models and a Bayesian implementation of the conditional auto-regressive model. Ordinary least squares also performed adequately in the absence of model selection, but had poorly controlled Type I error rates and so did not show the improvements in performance under model selection when using the above methods. Removing large-scale spatial trends in the response led to poor performance. These are empirical results; hence extrapolation of these findings to other situations should be performed cautiously. Nevertheless, our simulation-based approach provides much stronger evidence for comparative analysis than assessments based on single or small numbers of data sets, and should be considered a necessary foundation for statements of this type in future.
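Of the methods compared above, generalized least squares is the most compact to sketch: with error covariance Sigma, the estimator is beta = (X' Sigma^-1 X)^-1 X' Sigma^-1 y. The spatial covariance below is a hypothetical exponential-decay structure on a 1-D transect, not any of the paper's simulated data sets.

```python
import numpy as np

rng = np.random.default_rng(7)

# Spatially autocorrelated errors on a 1-D transect:
# cov(e_i, e_j) = rho**|i - j| (exponential decay, a common GLS choice)
n = 200
rho = 0.6
idx = np.arange(n)
Sigma = rho ** np.abs(idx[:, None] - idx[None, :])
e = np.linalg.cholesky(Sigma) @ rng.normal(size=n)

x = rng.normal(size=n)
y = 1.0 + 2.0 * x + e
X = np.column_stack([np.ones(n), x])

# GLS estimator: beta = (X' S^-1 X)^-1 X' S^-1 y
Si = np.linalg.inv(Sigma)
beta_gls = np.linalg.solve(X.T @ Si @ X, X.T @ Si @ y)
```

In practice `rho` is unknown and estimated jointly (e.g. by restricted maximum likelihood), which is part of what the paper's comparison evaluates.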

Resumo:

Inflation targeting, the Taylor rule and money neutrality: a post-Keynesian critique. This paper critically discusses the inflation-targeting regime proposed by orthodox economists, in particular the Taylor rule. The article describes how the Taylor rule assumes the money-neutrality argument inherited from the Quantity Theory of Money. It critically discusses how the rule operates and the negative impacts of the interest rate on potential output. In this sense, the article shows the possible vicious circles of monetary policy when money is not neutral, as post-Keynesian economists argue. The relation between interest rates, potential output and the output gap is illustrated with estimates using Vector Auto-Regressive methodology for the Brazilian case.
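The Taylor rule under discussion has a simple closed form: the policy rate responds to the inflation gap and the output gap. A minimal sketch with Taylor's (1993) original coefficients of 0.5 (illustrative defaults, not the paper's estimates):

```python
def taylor_rate(inflation, target, output_gap, r_star=2.0,
                a_pi=0.5, a_y=0.5):
    """Taylor rule: nominal policy rate as a function of the inflation
    gap and the output gap (all in percentage points); r_star is the
    assumed equilibrium real rate."""
    return r_star + inflation + a_pi * (inflation - target) + a_y * output_gap

# Inflation 2 pp above a 2 pp target with a closed output gap
print(taylor_rate(inflation=4.0, target=2.0, output_gap=0.0))  # -> 7.0
```

The post-Keynesian critique in the paper targets precisely the `output_gap` term: if money is not neutral, raising the rate can itself depress potential output, feeding back into the gap.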

Resumo:

Purpose – Expectations of future market conditions are acknowledged to be crucial for the development decision and hence for shaping the built environment. The purpose of this paper is to study the central London office market from 1987 to 2009 and test for evidence of rational, adaptive and naive expectations. Design/methodology/approach – Two parallel approaches are applied to test for either rational or adaptive/naive expectations: vector auto-regressive (VAR) approach with Granger causality tests and recursive OLS regression with one-step forecasts. Findings – Applying VAR models and a recursive OLS regression with one-step forecasts, the authors do not find evidence of adaptive and naive expectations of developers. Although the magnitude of the errors and the length of time lags between market signal and construction starts vary over time and development cycles, the results confirm that developer decisions are explained, to a large extent, by contemporaneous and historic conditions in both the City and the West End, but this is more likely to stem from the lengthy design, financing and planning permission processes rather than adaptive or naive expectations. Research limitations/implications – More generally, the results of this study suggest that real estate cycles are largely generated endogenously rather than being the result of large demand shocks and/or irrational behaviour. Practical implications – Developers may be able to generate excess profits by exploiting market inefficiencies but this may be hindered in practice by the long periods necessary for planning and construction of the asset. Originality/value – This paper focuses the scholarly debate of real estate cycles on the role of expectations. It is also one of very few spatially disaggregate studies of the subject matter.
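A Granger causality test of the kind used in this paper compares a restricted regression (own lags only) with an unrestricted one (own lags plus the other variable's lags) via an F-statistic. The two simulated series below are hypothetical stand-ins (one variable genuinely leads the other), with a single lag for brevity.

```python
import numpy as np

rng = np.random.default_rng(8)

# Simulate: x leads y (x Granger-causes y), not the other way round
n = 1000
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.3 * y[t - 1] + 0.4 * x[t - 1] + rng.normal()

def rss(X, z):
    """Residual sum of squares of an OLS fit of z on X."""
    beta = np.linalg.lstsq(X, z, rcond=None)[0]
    r = z - X @ beta
    return r @ r

ones = np.ones(n - 1)

# Does x Granger-cause y? y on own lag vs y on own lag + x lag
r0 = rss(np.column_stack([ones, y[:-1]]), y[1:])
r1 = rss(np.column_stack([ones, y[:-1], x[:-1]]), y[1:])
F_xy = (r0 - r1) / (r1 / (n - 1 - 3))

# Reverse direction: does y Granger-cause x?
r0r = rss(np.column_stack([ones, x[:-1]]), x[1:])
r1r = rss(np.column_stack([ones, x[:-1], y[:-1]]), x[1:])
F_yx = (r0r - r1r) / (r1r / (n - 1 - 3))
```

A large `F_xy` together with a small `F_yx` indicates one-directional predictive content, the pattern the paper looks for between market signals and construction starts.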

Resumo:

We analyze the predictability of monthly asset returns in the Brazilian market over a 10-year period starting at the launch of the Real Plan. To analyze the cross-sectional variation of returns and explain them in terms of time-varying risk premia conditioned on macroeconomic state variables, we use a new asset-pricing model that combines two different types of economic models: a conditional multifactor finance model and a strictly macroeconomic Vector Auto-Regressive model. We find that the model with conditional betas does not adequately explain asset returns, but the model with conditional risk premia (rather than conditional betas) produces results that are economically interpretable and statistically satisfactory.
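A conditional beta of the kind this thesis tests can be estimated by interacting the factor return with a lagged state variable: r_t = (b0 + b1*z_{t-1}) f_t + e_t. The one-factor setup and all series below are hypothetical stand-ins, not the thesis's data or factor set.

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical conditional one-factor model: beta moves with a lagged
# macro state variable z
T = 2000
z = rng.normal(size=T)                  # macro state variable
f = rng.normal(scale=0.05, size=T)      # factor return
zlag = np.concatenate([[0.0], z[:-1]])
beta_t = 1.0 + 0.5 * zlag               # true conditional beta
r = beta_t * f + rng.normal(scale=0.01, size=T)

# Estimate b0 and b1 by OLS with an interaction regressor
X = np.column_stack([f, zlag * f])
b = np.linalg.lstsq(X, r, rcond=None)[0]
```

The conditional-risk-premia specification the thesis favours instead lets the price of factor risk, rather than the loading, vary with the VAR state variables.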