838 results for Vector error correction model


Relevance:

100.00%

Publisher:

Abstract:

The thesis at hand adds to the existing literature by investigating the relationship between economic growth and outward foreign direct investment (OFDI) for a set of 16 emerging countries. Two different econometric techniques are employed: a panel data regression analysis and a time-series causality analysis. Results from the regression analysis indicate a positive and significant correlation between OFDI and economic growth. Additionally, the coefficient for the OFDI variable is robust in the sense specified by Extreme Bound Analysis (EBA). The findings of the causality analysis, on the other hand, are markedly heterogeneous. The vector autoregression (VAR) and vector error correction model (VECM) approaches identify unidirectional Granger causality running either from OFDI to GDP or from GDP to OFDI in six countries. In four economies causality between the two variables is bidirectional, whereas in five countries no causal relationship between OFDI and GDP appears to be present.
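The Granger-causality test underlying the abstract's country-by-country analysis can be sketched with a minimal bivariate example. The single-lag specification, simulated series, and variable names below are illustrative, not the study's own data or lag order:

```python
import numpy as np

def granger_f_stat(y, x, lags=1):
    """F-statistic: do lagged values of x help predict y beyond y's own lags?"""
    n = len(y)
    Y = y[lags:]
    # Restricted model: y's own lags only; unrestricted adds lags of x
    X_r = np.column_stack([np.ones(n - lags)] +
                          [y[lags - k - 1:n - k - 1] for k in range(lags)])
    X_u = np.column_stack([X_r] +
                          [x[lags - k - 1:n - k - 1] for k in range(lags)])
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(X_r), rss(X_u)
    df_u = (n - lags) - X_u.shape[1]
    return ((rss_r - rss_u) / lags) / (rss_u / df_u)

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = np.empty(500)
y[0] = 0.0
for t in range(1, 500):            # y depends on lagged x: x "Granger-causes" y
    y[t] = 0.4 * y[t - 1] + 0.8 * x[t - 1] + rng.normal()

print(granger_f_stat(y, x), granger_f_stat(x, y))
```

The F-statistic in the causal direction dwarfs the one in the reverse direction; in practice the statistic is compared against an F distribution to classify each country as unidirectional, bidirectional, or non-causal.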

Relevance:

100.00%

Publisher:

Abstract:

The unemployment rate in Brazil fell significantly from the beginning of the 21st century until the end of 2014. However, this significant reduction was not accompanied by the economic growth that theory would predict. Thus, although the unemployment rate fell, it does not necessarily follow that people were working and producing. This study seeks to understand the factors that shaped this downward trajectory of the unemployment rate through their influence on the economically active population (PEA) and on the number of hires, which we take as proxies for labor supply and labor demand. In other words, it aims to identify the variables behind a possible reduction in labor supply, as well as an increase in labor demand, both resulting in a lower unemployment rate. Variables related to income, income transfers, education, and economic growth are considered in analyzing the drivers of the low unemployment rate. Based on a vector error correction (VEC) model, the study seeks to identify which variables effectively affected the unemployment picture.

Relevance:

100.00%

Publisher:

Abstract:

This research investigates the spatial market integration of the Chilean wheat market with its most representative international markets using a vector error correction model (VECM), and how a price support policy such as a price band affects it. The international market was characterized by two relevant wheat prices: PAN from Argentina and Hard Red Winter from the United States. The degree of spatial market integration, expressed in the error correction term (ECT), supports the conclusion that there is a high degree of integration among these markets, with a variable influence of the price band mechanism mainly related to its estimation methodology. Moreover, this paper showed that Chile can be seen as a price taker, judging by the speed of its adjustment to international shocks: these reactions are faster than in the United States and Argentina. Finally, the results validated the "Law of One Price", which assumes price equalization across all local markets in the long run.
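The error correction term and its "speed of adjustment" interpretation can be illustrated with a two-step Engle-Granger sketch on simulated cointegrated prices; the series names and data are invented for illustration, not the Chilean or US data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
world = np.cumsum(rng.normal(size=n))            # common stochastic trend
local = world + rng.normal(scale=0.5, size=n)    # local price tied to world price

# Step 1: estimate the long-run relation local ~ world
beta = np.polyfit(world, local, 1)[0]
ect = local - beta * world                       # error correction term

# Step 2: regress the local price change on the lagged ECT;
# a negative coefficient is the speed of adjustment back to equilibrium
d_local = np.diff(local)
X = np.column_stack([np.ones(n - 1), ect[:-1]])
alpha = np.linalg.lstsq(X, d_local, rcond=None)[0][1]
print(alpha)    # negative: deviations from equilibrium are corrected
```

A larger (more negative) adjustment coefficient for one market than for its partners is exactly the kind of evidence behind the price-taker reading above.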

Relevance:

100.00%

Publisher:

Abstract:

Category-management models serve to assist in the development of plans for pricing and promotions of individual brands. Techniques to solve the models can suffer from problems of accuracy and interpretability because they are susceptible to spurious regression arising from nonstationary time-series data. Improperly specified nonstationary systems can reduce the accuracy of forecasts and undermine the interpretation of results. This is problematic because recent studies indicate that sales often form nonstationary time series. Newly developed correction techniques can account for nonstationarity by incorporating error-correction terms into the model, as in a Bayesian vector error-correction model. The benefit of such a technique is that shocks to control variates can be separated into permanent and temporary effects, and cointegrated series can be analyzed jointly. Analysis of a brand data set indicates that this is important even at the brand level. Thus, additional information is generated that allows a decision maker to examine controllable variables in terms of whether they influence sales over a short or long duration. Only products that are nonstationary in sales volume can be manipulated for long-term profit gain, and promotions must be cointegrated with brand sales volume. The brand data set is used to explore the capabilities and interpretation of cointegration.
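The stationary/nonstationary distinction that drives the abstract's argument (only nonstationary sales can be shifted permanently) can be illustrated with a crude unit-root check: the AR(1) coefficient of a random walk is near one, while that of a mean-reverting series is well below one. The simulated "sales" series are purely illustrative:

```python
import numpy as np

def ar1_coef(s):
    """OLS slope of s[t] on s[t-1]; values near 1.0 suggest a unit root."""
    x, y = s[:-1], s[1:]
    x = x - x.mean()
    return np.sum(x * (y - y.mean())) / np.sum(x * x)

rng = np.random.default_rng(2)
shocks = rng.normal(size=3000)
walk = 100 + np.cumsum(shocks)        # shocks persist: nonstationary sales
stationary = np.empty(3000)           # shocks die out: stationary sales
stationary[0] = 0.0
for t in range(1, 3000):
    stationary[t] = 0.3 * stationary[t - 1] + shocks[t]

print(ar1_coef(walk), ar1_coef(stationary))
```

In the random-walk series every promotion shock stays in the level forever (a permanent effect); in the stationary series it decays, which is the temporary effect the error-correction decomposition separates out.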

Relevance:

100.00%

Publisher:

Abstract:

This paper analyzes the dynamics of the American Depositary Receipt (ADR) of a Colombian bank (Bancolombia) in relation to its pricing factors: the price of the underlying (preferred) shares, the exchange rate, and the US market index. The aim is to test whether there is a long-term relation among these variables that would imply predictability. One cointegrating relation is found, allowing the use of a vector error correction model to examine the transmission of shocks to the underlying price, the exchange rate, and the US market index. The main finding is that in the short run the underlying share price seems to adjust after changes in the ADR price, indicating that the NYSE (the trading market for the ADR) leads the Colombian market. In the long run, however, the underlying share price and the ADR price adjust to changes in one another.

Relevance:

100.00%

Publisher:

Abstract:

As order dependencies between process tasks can become complex, it is easy to make mistakes in process model design, especially behavioral ones such as deadlocks. Notions such as soundness formalize behavioral errors, and tools exist that can identify such errors. However, these tools do not provide assistance with the correction of the process models. Error correction can be very challenging, as the intentions of the process modeler are not known and there may be many ways in which an error can be corrected. We present a novel technique for automatic error correction in process models based on simulated annealing. Via this technique, a number of process model alternatives are identified that resolve one or more errors in the original model. The technique is implemented and validated on a sample of industrial process models. The tests show that at least one sound solution can be found for each input model and that the response times are short.
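Simulated annealing, the search strategy named above, can be sketched generically: candidate states are mutated, and worse candidates are occasionally accepted (with probability decaying as the "temperature" cools) so the search can escape local minima. The toy integer "error count" objective below stands in for a real process-model error metric; all names are illustrative:

```python
import math
import random

def simulated_annealing(state, energy, neighbor, t0=10.0, cooling=0.995, steps=4000):
    """Minimize energy(state); accept uphill moves with prob. exp(-delta/T)."""
    random.seed(42)                 # fixed seed for a reproducible run
    best = cur = state
    t = t0
    for _ in range(steps):
        cand = neighbor(cur)
        delta = energy(cand) - energy(cur)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            cur = cand
            if energy(cur) < energy(best):
                best = cur
        t *= cooling                # geometric cooling schedule
    return best

# Toy bumpy landscape over the integers, global minimum at x = 6
energy = lambda x: (x - 7) ** 2 + 3 * (x % 3)
neighbor = lambda x: x + random.choice([-1, 1])
print(simulated_annealing(0, energy, neighbor))
```

In the paper's setting, a state would be a candidate repaired process model, `energy` a count of remaining behavioral errors, and `neighbor` a small edit to the model; tracking `best` lets several low-error alternatives be collected along the way.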

Relevance:

100.00%

Publisher:

Abstract:

A single-source network is said to be memory-free if the internal nodes (all nodes except the source and the sinks) employ no memory, merely sending linear combinations of the symbols received on their incoming edges out over their outgoing edges. In this work, we introduce network-error correction for single-source, acyclic, unit-delay, memory-free networks with coherent network coding for multicast. A convolutional code is designed at the source, based on the network code, in order to correct network errors corresponding to any of a given set of error patterns, as long as consecutive errors are separated by a certain interval that depends on the chosen convolutional code. Bounds on this interval and on the field size required for constructing a convolutional code with the required free distance are also obtained. We illustrate the performance of convolutional network error-correcting codes (CNECCs) designed for unit-delay networks using simulations on an example network under a probabilistic error model.
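For readers unfamiliar with convolutional codes, the encoding step at the source can be illustrated with a textbook rate-1/2 binary convolutional encoder (generators 7 and 5 in octal, constraint length 3). This is a standard example, not necessarily the code constructed in the paper:

```python
def conv_encode(bits, g=(0b111, 0b101)):
    """Rate-1/2 convolutional encoder over GF(2), constraint length 3.
    Each input bit is shifted into a 3-bit register; each generator taps
    the register and emits the parity (XOR) of the tapped bits."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111
        for gen in g:
            out.append(bin(state & gen).count("1") % 2)
    return out

print(conv_encode([1, 0, 1, 1]))   # -> [1, 1, 1, 0, 0, 0, 0, 1]
```

The redundancy (two output bits per input bit) and the code's free distance are what allow errors to be corrected, provided, as the abstract notes, that consecutive error bursts are sufficiently separated.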

Relevance:

100.00%

Publisher:

Abstract:

This paper estimates linear and nonlinear error correction models for the spot prices of four types of coffee. Consistent with economic theory, evidence is found that when prices are above their equilibrium level they return to it more slowly than when they are below it. This may reflect the fact that, in the short run, it is easier for coffee-producing countries to restrict supply in order to raise prices than to increase supply in order to lower them. Evidence is also found that the adjustment is faster when deviations from equilibrium are larger. The forecasts obtained from the nonlinear and asymmetric error correction models considered in this paper offer a slight improvement over forecasts produced by a random walk model.
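The asymmetric adjustment described above can be sketched by simulating a deviation-from-equilibrium series whose correction speed depends on the sign of the deviation, and then estimating the two speeds separately. The speeds (0.1 above, 0.4 below) and the data are illustrative, not the coffee-price estimates:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
z = np.empty(n)                              # deviation from long-run equilibrium
z[0] = 0.0
for t in range(1, n):
    speed = 0.1 if z[t - 1] > 0 else 0.4     # above equilibrium: slower correction
    z[t] = (1 - speed) * z[t - 1] + rng.normal()

# Estimate a separate adjustment speed on each side of the threshold
dz, lag = np.diff(z), z[:-1]
pos, neg = lag > 0, lag <= 0
fit = lambda a, b: -np.linalg.lstsq(a.reshape(-1, 1), b, rcond=None)[0][0]
speed_pos, speed_neg = fit(lag[pos], dz[pos]), fit(lag[neg], dz[neg])
print(speed_pos, speed_neg)   # roughly recovers 0.1 and 0.4
```

A significant gap between the two estimated speeds is the kind of evidence the paper reports for prices above versus below equilibrium.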

Relevance:

100.00%

Publisher:

Abstract:

This paper makes two original contributions. First, we show that the present value model (PVM hereafter), which has wide application in macroeconomics and finance, entails common cyclical feature restrictions in the dynamics of the vector error-correction representation (Vahid and Engle, 1993); something that has already been investigated in the VECM context by Johansen and Swensen (1999, 2011) but has not been discussed before with this new emphasis. We also provide the present value reduced rank constraints to be tested within the log-linear model. Our second contribution relates to forecasting time series that are subject to those long- and short-run reduced rank restrictions. The reason appropriate common cyclical feature restrictions might improve forecasting is that they yield natural exclusion restrictions, preventing the estimation of useless parameters that would otherwise increase forecast variance with no expected reduction in bias. We applied the techniques discussed in this paper to data known to be subject to present value restrictions, i.e., the online series maintained and updated by Shiller. We focus on three data sets. The first includes the levels of interest rates with long and short maturities, the second the level of real price and dividend for the S&P composite index, and the third the logarithmic transformation of prices and dividends. Our exhaustive investigation of several different multivariate models reveals that better forecasts can be achieved when restrictions are applied to them. Moreover, imposing short-run restrictions produces forecast winners 70% of the time for target variables of PVMs and 63.33% of the time when all variables in the system are considered.
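The core forecasting intuition (exclusion restrictions avoid estimating useless parameters, which inflate forecast variance) can be demonstrated with a small Monte Carlo: out-of-sample MSE rises when irrelevant regressors are estimated alongside the relevant one. The simulation design is illustrative, not the paper's PVM setup:

```python
import numpy as np

rng = np.random.default_rng(8)

def avg_test_mse(n_extra, reps=200, n=40):
    """Average out-of-sample MSE when n_extra irrelevant regressors are fit."""
    total = 0.0
    for _ in range(reps):
        X = rng.normal(size=(2 * n, 1 + n_extra))
        y = 2 * X[:, 0] + rng.normal(size=2 * n)   # only column 0 matters
        beta = np.linalg.lstsq(X[:n], y[:n], rcond=None)[0]
        total += np.mean((y[n:] - X[n:] @ beta) ** 2)
    return total / reps

print(avg_test_mse(0), avg_test_mse(8))   # restrictions lower out-of-sample MSE
```

Imposing the true exclusion restrictions (`n_extra = 0`) leaves the fit unbiased while shrinking estimation variance, which is the mechanism the paper exploits with its reduced rank restrictions.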

Relevance:

100.00%

Publisher:

Abstract:

A new physics-based technique for correcting inhomogeneities present in sub-daily temperature records is proposed. The approach accounts for changes in the sensor-shield characteristics that affect the energy balance, which depends on ambient weather conditions (radiation, wind). An empirical model is formulated that reflects the main atmospheric processes and can be used in the correction step of a homogenization procedure. The model accounts for the short- and long-wave radiation fluxes (including a snow-cover component for albedo calculation) of a measurement system such as a radiation shield. One part of the flux is further modulated by ventilation. The model requires only cloud cover and wind speed for each day, but detailed site-specific information is necessary. The final model has three free parameters, one of which is a constant offset. The three parameters can be determined, e.g., using the mean offsets for three observation times. The model is developed using the example of the change from the Wild screen to the Stevenson screen in the temperature record of Basel, Switzerland, in 1966. It is evaluated based on parallel measurements of both systems during a sub-period at this location, which were discovered during the writing of this paper. The model can be used in the correction step of homogenization to distribute a known mean step size across every single measurement, thus providing a reasonable alternative correction procedure for high-resolution historical climate series. It also constitutes an error model, which may be applied, e.g., in data assimilation approaches.
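A three-parameter correction of this general shape (constant offset plus a radiative term partly damped by ventilation) can be fit by least squares once daily cloud-cover and wind proxies are available. The functional form, parameter values, and synthetic data below are assumptions for illustration only, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 365
radiation = rng.uniform(0, 1, n)      # daily short-wave proxy from cloud cover
wind = rng.uniform(0, 10, n)          # daily wind speed

# Assumed form: bias = c0 + c1 * radiation + c2 * radiation / (1 + wind),
# i.e. a constant offset plus a radiative error partly removed by ventilation
true = np.array([0.3, 1.2, 0.8])
X = np.column_stack([np.ones(n), radiation, radiation / (1 + wind)])
bias = X @ true + rng.normal(scale=0.05, size=n)

coef, *_ = np.linalg.lstsq(X, bias, rcond=None)
print(coef)                           # recovers the three free parameters
```

Because the model is linear in its three free parameters, three independent constraints (such as the mean offsets at three observation times) suffice to pin them down, as the abstract notes.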

Relevance:

100.00%

Publisher:

Abstract:

A new LIBS quantitative analysis method based on adaptive selection of analytical lines and a Relevance Vector Machine (RVM) regression model is proposed. First, a scheme for adaptively selecting analytical lines is put forward in order to overcome the drawback of high dependency on a priori knowledge. Candidate analytical lines are automatically selected based on the built-in characteristics of spectral lines, such as spectral intensity, wavelength, and width at half height. The analytical lines used as input variables of the regression model are determined adaptively according to the samples for both training and testing. Second, an LIBS quantitative analysis method based on RVM is presented. The intensities of the analytical lines and the elemental concentrations of certified standard samples are used to train the RVM regression model. The predicted elemental concentrations are given in the form of a confidence interval of the predictive distribution, which is helpful for evaluating the uncertainty contained in the measured spectra. Chromium concentration analysis experiments on 23 certified standard high-alloy steel samples were carried out. The multiple correlation coefficient of the prediction was up to 98.85%, and the average relative error of the prediction was 4.01%. The experimental results showed that the proposed method achieved better prediction accuracy and better modeling robustness than methods based on partial least squares regression, artificial neural networks, and the standard support vector machine.
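A full RVM is involved to implement; as a deliberately simpler stand-in, conjugate Bayesian linear regression shows how a probabilistic regression model yields the confidence-interval output described above: a predictive mean plus a predictive variance for each new spectrum. The "intensity" features, prior/noise precisions, and test point are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.uniform(0, 1, (50, 3))          # "line intensities" of 3 analytical lines
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true + rng.normal(scale=0.1, size=50)   # "certified concentrations"

alpha, beta = 1.0, 100.0                # prior precision, noise precision
S = np.linalg.inv(alpha * np.eye(3) + beta * X.T @ X)   # posterior covariance
m = beta * S @ X.T @ y                  # posterior mean of the weights

x_new = np.array([0.5, 0.5, 0.5])
pred_mean = x_new @ m
pred_var = 1 / beta + x_new @ S @ x_new # noise + parameter uncertainty
lo, hi = pred_mean - 1.96 * np.sqrt(pred_var), pred_mean + 1.96 * np.sqrt(pred_var)
print(pred_mean, (lo, hi))
```

An RVM extends this idea with per-weight precision hyperparameters learned from the data, which prunes irrelevant basis functions; the predictive interval `(lo, hi)` is the quantity the abstract highlights for assessing measurement uncertainty.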

Relevance:

100.00%

Publisher:

Abstract:

Data fluctuation across multiple measurements in Laser-Induced Breakdown Spectroscopy (LIBS) greatly affects the accuracy of quantitative analysis. A new LIBS quantitative analysis method based on a Robust Least Squares Support Vector Machine (RLS-SVM) regression model is proposed. The usual way to enhance analysis accuracy is to improve the quality and consistency of the emission signal, for example by averaging the spectral signals or standardizing the spectra over a number of laser shots. The proposed method focuses instead on enhancing the robustness of the quantitative analysis regression model. The RLS-SVM regression model originates from the Weighted Least Squares Support Vector Machine (WLS-SVM) but has an improved segmented weighting function and residual-error calculation based on the statistical distribution of the measured spectral data. Through the improved segmented weighting function, information on spectral data within the normal distribution is retained in the regression model, while information on outliers is restrained or removed. Copper concentration analysis experiments on 16 certified standard brass samples were carried out. The average relative standard deviation obtained from the RLS-SVM model was 3.06%, and the root mean square error was 1.537%. The experimental results showed that the proposed method achieved better prediction accuracy and better modeling robustness than quantitative analysis methods based on Partial Least Squares (PLS) regression, the standard Support Vector Machine (SVM), and WLS-SVM. The improved weighting function was also shown to offer a better overall balance of model robustness and convergence speed than four known weighting functions.
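The segmented-weighting idea (keep normally distributed residuals at full weight, downweight moderate ones, discard gross outliers) can be sketched with iteratively reweighted least squares on an ordinary linear model; this is a simplified stand-in for the paper's RLS-SVM, with invented thresholds and data:

```python
import numpy as np

def robust_fit(X, y, iters=10):
    """IRLS with a segmented weight function: weight 1 for small residuals,
    decaying weight for moderate ones, zero weight for gross outliers."""
    w = np.ones(len(y))
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
        r = np.abs(y - X @ beta)
        s = 1.4826 * np.median(r) + 1e-12       # robust scale estimate (MAD-style)
        w = np.where(r <= 2 * s, 1.0,
                     np.where(r <= 4 * s, 2 * s / r, 0.0))
    return beta

rng = np.random.default_rng(6)
X = np.column_stack([np.ones(100), rng.uniform(0, 1, 100)])
y = X @ np.array([1.0, 3.0]) + rng.normal(scale=0.05, size=100)
y[:5] += 5.0                                    # five outlier "laser shots"
print(robust_fit(X, y))                         # close to the true [1.0, 3.0]
```

Plain least squares would be pulled toward the contaminated shots; the segmented weights progressively zero them out, which is the robustness mechanism the RLS-SVM applies within its support-vector formulation.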

Relevance:

100.00%

Publisher:

Abstract:

Excess nutrient loads carried by streams and rivers are a great concern for environmental resource managers. In agricultural regions, excess loads are transported downstream to receiving water bodies, potentially causing algal blooms, which can lead to numerous ecological problems. To better understand nutrient load transport, and to develop appropriate water management plans, it is important to have accurate estimates of annual nutrient loads. This study used a Monte Carlo sub-sampling method and error-corrected statistical models to estimate annual nitrate-N loads from two watersheds in central Illinois. The performance of three load estimation methods (the seven-parameter log-linear model, the ratio estimator, and the flow-weighted averaging estimator) applied at one-, two-, four-, six-, and eight-week sampling frequencies was compared. Five error correction techniques (the existing composite method and four new techniques developed in this study) were applied to each combination of sampling frequency and load estimation method. On average, the most accurate error correction technique (proportional rectangular) resulted in 15% and 30% more accurate load estimates than the most accurate uncorrected load estimation method (the ratio estimator) for the two watersheds. Using error correction methods, it is possible to design more cost-effective monitoring plans by achieving the same load estimation accuracy with fewer observations. Finally, the optimum combinations of monitoring threshold and sampling frequency that minimize the number of samples required to achieve specified levels of accuracy were determined. For one- to three-week sampling frequencies, combined threshold/fixed-interval monitoring approaches produced the best outcomes, while fixed-interval-only approaches produced the most accurate results for four- to eight-week sampling frequencies.
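The ratio estimator named above can be sketched directly: concentration is sampled only on some days, flow is gauged continuously, and the sampled load-per-unit-flow ratio is scaled by the full flow record. The synthetic flow/concentration series and biweekly schedule are illustrative, not the Illinois data:

```python
import numpy as np

rng = np.random.default_rng(7)
days = 365
flow = rng.lognormal(mean=2.0, sigma=0.6, size=days)       # daily discharge
conc = 5 + 0.1 * flow + rng.normal(scale=0.5, size=days)   # nitrate-N conc.
true_load = np.sum(conc * flow)                            # "annual" load

sampled = np.arange(0, days, 14)       # biweekly water-quality sampling
# Ratio estimator: load per unit flow on sampled days,
# scaled by the complete (continuously gauged) flow record
ratio = np.sum(conc[sampled] * flow[sampled]) / np.sum(flow[sampled])
est_load = ratio * np.sum(flow)
rel_err = abs(est_load - true_load) / true_load
print(rel_err)
```

Because the estimator borrows strength from the full flow record, it typically stays accurate even at sparse sampling frequencies, which is why it serves as the uncorrected baseline that the study's error correction techniques then improve upon.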

Relevance:

100.00%

Publisher:

Abstract:

Error correction is perhaps the most widely used method for responding to student writing. While various studies have investigated the effectiveness of providing error correction, there has been relatively little research incorporating teachers' beliefs and practices together with students' preferences in written error correction. The current study adopted features of an ethnographic research design to explore the beliefs and practices of ESL teachers, and to investigate the preferences of L2 students regarding written error correction, in the context of a language institute in the Brisbane metropolitan district. Two ESL teachers and two groups of adult intermediate L2 students were interviewed and observed. The teachers' beliefs and practices were elicited through interviews and classroom observations; the students' preferences were elicited through focus group interviews. Responses were coded and analysed. Results of the teacher interviews showed that teachers believe providing written error correction has both advantages and disadvantages: it helps students improve their proof-reading skills so that they can revise their writing more efficiently, but it is also very time-consuming. Furthermore, teachers prefer to provide explicit written feedback strategies during the early stages of a language course and to move to more implicit strategies of written error correction later, in order to facilitate language learning. The focus group interviews, in turn, suggest that students regard their teachers' practice of written error correction as important in helping them locate their errors and revise their writing, although they too feel the process is time-consuming. Nevertheless, students want and expect their teachers to provide written feedback, because they believe that the benefits they gain from feedback on their writing outweigh the apparent disadvantages of their teachers' written error correction strategies.