881 results for Unit root tests


Relevância:

80.00%

Publicador:

Resumo:

This study evaluates the dynamics of durable goods consumption and saving among Brazilian consumers between September 2005 and April 2011, and contributes to the literature by using as its analytical tool an autoregressive model with an endogenously determined threshold, together with qualitative data from FGV's Brazilian Consumer Expectations Survey (Sondagem de Expectativas do Consumidor Brasileiro). Qualitative indicators for these two variables were computed, and the proposed methodology allowed the linearity and stationarity of their trajectories to be investigated simultaneously. The results suggest, in both cases, nonlinear dynamics with a partial unit root. Additionally, the stationarity found beyond an estimated threshold of 3.3 percentage points for the Durable Goods Purchases Indicator and of 3.6 percentage points for the Savings Indicator makes it possible to classify their histories as showing signs of saturation in individuals' capacity to save and consume.
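The endogenous-threshold idea described above can be sketched as a grid search over candidate thresholds for a two-regime autoregression. The following is a minimal illustrative sketch, not the paper's estimator: the simulated data, trimming rule, and function name are our own assumptions.

```python
import numpy as np

def fit_setar(y, delay=1, trim=0.15):
    """Fit a two-regime threshold AR(1) by least squares, choosing the
    threshold endogenously over a trimmed grid of observed lagged values.
    Illustrative sketch only; the paper's model and tests differ."""
    y = np.asarray(y, dtype=float)
    y_lag, y_cur = y[:-delay], y[delay:]
    n = len(y_lag)
    # Candidate thresholds: observed lags, trimmed to keep both regimes populated.
    candidates = np.sort(y_lag)[int(trim * n):int((1 - trim) * n)]
    best_ssr, best_c, best_params = np.inf, None, None
    for c in candidates:
        lo = y_lag <= c
        ssr, params = 0.0, []
        for mask in (lo, ~lo):
            X = np.column_stack([np.ones(mask.sum()), y_lag[mask]])
            beta, *_ = np.linalg.lstsq(X, y_cur[mask], rcond=None)
            resid = y_cur[mask] - X @ beta
            ssr += resid @ resid
            params.append(beta)
        if ssr < best_ssr:
            best_ssr, best_c, best_params = ssr, c, params
    return best_c, best_params  # threshold, per-regime (intercept, slope)
```

A partial unit root in this framework corresponds to one regime having a slope near one while the other is stationary.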

Relevância:

80.00%

Publicador:

Resumo:

This paper introduces a residual-based test in which the null hypothesis of co-movement between two processes with local persistence can be tested, even in the presence of an endogenous regressor. It therefore fills an existing lacuna in econometrics, in which long-run relationships can also be tested when the dependent and independent variables do not have a unit root but do exhibit local persistence.

Relevância:

80.00%

Publicador:

Resumo:

Image compression consists in representing an image with a small amount of data without loss of visual quality. Data compression is important when large images are used, for example satellite images. Full-color digital images typically use 24 bits to specify the color of each pixel, with 8 bits for each of the primary components: red, green, and blue (RGB). Compressing an image with three or more bands (multispectral) is fundamental to reduce transmission, processing, and storage time. Many applications depend on images, such as medical imaging, satellite imagery, and sensor data, which makes image compression important. In this work a new method for compressing color images is proposed. The method is based on a measure of the information content of each band. The technique, called Self-Adaptive Compression (SAC), compresses each band of the image with a different threshold in order to preserve information and obtain better results: SAC applies strong compression to highly redundant (low-information) bands and mild compression to bands carrying a larger amount of information. Two image transforms are used in this technique: the Discrete Cosine Transform (DCT) and Principal Component Analysis (PCA). The first step converts the data into decorrelated bands using PCA; the DCT is then applied to each band. Loss occurs when a threshold discards coefficients. This threshold is computed from two elements: the PCA result and a user parameter that defines the compression rate. The system produces three different thresholds, one for each band of the image, proportional to its information content. Image reconstruction applies the inverse DCT and the inverse PCA. SAC was compared with the JPEG (Joint Photographic Experts Group) standard and with YIQ compression, and better results were obtained in terms of MSE (mean squared error). Tests showed that SAC gives better quality at high compression rates, with two advantages: (a) being adaptive, it is sensitive to the image type and presents good results for diverse kinds of images (synthetic, landscapes, people, etc.); and (b) it needs only one user parameter, so little human intervention is required.
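The PCA-then-DCT pipeline with per-band thresholds can be sketched as follows. This is our own simplified reading of the abstract, not the authors' code: the thresholding rule (scaled by each band's variance share) and all names are assumptions for illustration.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis; rows are basis vectors, so inverse = transpose.
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] *= 1 / np.sqrt(2)
    return M * np.sqrt(2 / n)

def sac_sketch(img, user_tax=0.1):
    """Illustrative SAC-style pipeline: PCA across bands, 2-D DCT per
    band, per-band threshold shrinking with the band's information share
    (so low-information bands are compressed harder)."""
    h, w, b = img.shape
    X = img.reshape(-1, b).astype(float)
    mu = X.mean(axis=0)
    cov = np.cov((X - mu).T)
    eigval, eigvec = np.linalg.eigh(cov)
    order = np.argsort(eigval)[::-1]           # most informative band first
    eigval, eigvec = eigval[order], eigvec[:, order]
    pcs = ((X - mu) @ eigvec).reshape(h, w, b)
    D_h, D_w = dct_matrix(h), dct_matrix(w)
    info_share = eigval / eigval.sum()         # PCA "information" per band
    out = np.empty_like(pcs)
    kept = []
    for i in range(b):
        C = D_h @ pcs[:, :, i] @ D_w.T         # 2-D DCT of band i
        thr = user_tax * np.abs(C).max() * (1 - info_share[i])
        C[np.abs(C) < thr] = 0.0               # harder cut on low-info bands
        kept.append(int(np.count_nonzero(C)))
        out[:, :, i] = D_h.T @ C @ D_w         # inverse DCT
    recon = (out.reshape(-1, b) @ eigvec.T + mu).reshape(h, w, b)
    return recon, kept
```

With `user_tax=0.0` no coefficient is discarded, so the PCA and DCT round trips reconstruct the image exactly, which is a useful sanity check for the pipeline.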

Relevância:

80.00%

Publicador:

Resumo:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevância:

80.00%

Publicador:

Resumo:

The minority game (MG) model introduced recently provides promising insights into the understanding of the evolution of prices, indices, and rates in financial markets. In this paper we perform a time series analysis of the model, employing tools from statistics, dynamical systems theory, and stochastic processes. Using benchmark systems and a financial index for comparison, several conclusions are obtained about the generating mechanism for this kind of evolution. The motion is deterministic, driven by occasional random external perturbations. When the interval between two successive perturbations is sufficiently large, low-dimensional chaos can be found in this regime. However, the full motion of the MG model is found to be similar to that of the first differences of the S&P 500 index: stochastic, nonlinear, and (unit root) stationary. (C) 2002 Elsevier B.V. All rights reserved.

Relevância:

80.00%

Publicador:

Resumo:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevância:

80.00%

Publicador:

Resumo:

Objective: This ex vivo study evaluated the effect of pre-flaring and file size on the accuracy of the Root ZX and Novapex electronic apex locators (EALs). Material and methods: The actual working length (WL) was set 1 mm short of the apical foramen in the palatal root canals of 24 extracted maxillary molars. The teeth were embedded in an alginate mold, and two examiners performed the electronic measurements using #10, #15, and #20 K-files. The files were inserted into the root canals until the "0.0" or "APEX" signals were observed on the LED or display screens for the Novapex and Root ZX, respectively, and then retracted to the 1.0 mark. The measurements were repeated after pre-flaring with the S1 and SX ProTaper instruments. Two measurements were performed for each condition and the means were used. Intra-class correlation coefficients (ICCs) were calculated to verify the intra- and inter-examiner agreement. The mean differences between the WL and electronic length values were analyzed by the three-way ANOVA test (p<0.05). Results: ICCs were high (>0.8) and the results demonstrated a similar accuracy for both EALs (p>0.05). Statistically significantly accurate measurements were verified in the pre-flared canals, except for the Novapex using a #20 K-file. Conclusions: The tested EALs showed acceptable accuracy, and the pre-flaring procedure had a more significant effect than the file size used.

Relevância:

80.00%

Publicador:

Resumo:

A control-oriented model of a Dual Clutch Transmission (DCT) was developed for real-time Hardware In the Loop (HIL) applications, to support model-based development of the DCT controller. The model is an innovative attempt to reproduce the fast dynamics of the actuation system while maintaining a step size large enough for real-time applications. The model comprises a detailed physical description of the hydraulic circuit, clutches, synchronizers, and gears, and simplified vehicle and internal combustion engine sub-models. As the oil circulating in the system has a large bulk modulus, the pressure dynamics are very fast, possibly causing instability in a real-time simulation; the same challenge involves the servo valve dynamics, due to the very small masses of the moving elements. Therefore, the hydraulic circuit model has been modified and simplified without losing physical validity, in order to adapt it to the real-time simulation requirements. The results of offline simulations were compared to on-board measurements to verify the validity of the developed model, which was then implemented in a HIL system and connected to the TCU (Transmission Control Unit). Several tests have been performed: electrical failure tests on sensors and actuators; hydraulic and mechanical failure tests on hydraulic valves, clutches, and synchronizers; and application tests covering all the main features of the control performed by the TCU. Being based on physical laws, the model simulates a plausible reaction of the system in every condition. The first intensive use of the HIL application led to the validation of the new safety strategies implemented in the TCU software. A test automation procedure was developed to permit the execution of a pattern of tests without user interaction; fully repeatable tests can be performed for non-regression verification, allowing new software releases to be tested in fully automatic mode.
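The step-size constraint described above (fast pressure dynamics destabilizing a fixed-step real-time solver) can be illustrated with a toy first-order decay integrated by forward Euler; the numbers are purely illustrative, not from the DCT model.

```python
def euler_decay(lam, h, steps, x0=1.0):
    """Forward-Euler integration of x' = -lam * x.  Illustrates why fast
    dynamics (large effective lam, e.g. from a large oil bulk modulus)
    force either a tiny step size or a simplified model: the explicit
    scheme is stable only when h < 2 / lam."""
    x = x0
    for _ in range(steps):
        x += h * (-lam * x)  # x is multiplied by (1 - h*lam) each step
    return x
```

For `lam = 1000`, a step of `h = 0.001` keeps the simulation bounded, while `h = 0.003` makes the per-step factor `1 - h*lam = -2`, so the solution explodes; this is the trade-off that motivated simplifying the hydraulic model rather than shrinking the real-time step.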

Relevância:

80.00%

Publicador:

Resumo:

Let $\pi:X\rightarrow S$ be a family of Calabi-Yau varieties of dimension three defined over $\Z$. Suppose there exists a submodule $M\subset H^3_{DR}(X/S)$ of rank four, invariant under the Gauss-Manin connection, such that the Picard-Fuchs operator $P$ on $M$ is a so-called {\em Calabi-Yau} operator of order four. Let $k$ be a finite field of characteristic $p$, and let $\pi_0:X_0\rightarrow S_0$ be the reduction of $\pi$ over $k$. For the ordinary fibers $X_{t_0}$ of the family we derive an explicit formula for computing the characteristic polynomial of the Frobenius endomorphism, the {\em Frobenius polynomial}, on the corresponding submodule $M_{cris}\subset H^3_{cris}(X_{t_0})$. Now let $f_0(z)$ be the power series solution of the differential equation $Pf=0$ in a neighborhood of zero. Since a reciprocal root of the Frobenius polynomial at a Teichmüller point $t$ is given by $f_0(z)/f_0(z^p)|_{z=t}$, a crucial step in the computation of the Frobenius polynomial is the construction of a $p$-adic analytic continuation of the quotient $f_0(z)/f_0(z^p)$ to the boundary of the $p$-adic unit disk. If the coefficients of $f_0$ can be expressed in terms of the constant terms of the powers of a Laurent polynomial whose Newton polytope contains the origin as its only interior lattice point, we prove certain congruence properties among the coefficients of $f_0$. These are crucial for the construction of the analytic continuation. If the fiber $X_{t_0}$ contains an ordinary double point, we expect that in the limit the Frobenius polynomial factors into two factors of degree one and one factor of degree two. The degree-two factor is uniquely determined by a coefficient $a_p$. As $p$ runs through the set of all primes, the modularity theorem leads us to expect a modular form of weight four whose coefficients are given by the coefficients $a_p$. This expectation was confirmed by our extensive computations. In addition, we derive further formulas for determining the Frobenius polynomial, in which the non-holomorphic solutions of the equation $Pf=0$ in a neighborhood of zero also play a role.

Relevância:

80.00%

Publicador:

Resumo:

The advances that have characterized spatial econometrics in recent years are mostly theoretical and have not yet found extensive empirical application. In this work we aim to supply a review of the main tools of spatial econometrics and to show an empirical application of one of the most recently introduced estimators. Despite the numerous alternatives that econometric theory provides for the treatment of spatial (and spatiotemporal) data, empirical analyses are still limited by the lack of availability of the corresponding routines in statistical and econometric software. Spatiotemporal modeling represents one of the most recent developments in spatial econometric theory, and the finite sample properties of the proposed estimators are currently being tested in the literature. We provide a comparison between some estimators (a quasi-maximum likelihood, QML, estimator and some GMM-type estimators) for a fixed effects dynamic panel data model under certain conditions, by means of a Monte Carlo simulation analysis. We focus on different settings, characterized either by fully stable or by quasi-unit root series. We also investigate the extent of the bias caused by a non-spatial estimation of a model when the data are characterized by different degrees of spatial dependence. Finally, we provide an empirical application of a QML estimator for a time-space dynamic model which includes a temporal, a spatial, and a spatiotemporal lag of the dependent variable. This is done by choosing a relevant and prolific field of analysis, in which spatial econometrics has found only limited space so far, in order to explore the value added of considering the spatial dimension of the data. In particular, we study the determinants of cropland value in the Midwestern U.S.A. in the years 1971-2009, taking the present value model (PVM) as the theoretical framework of analysis.
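The three lagged regressors of a time-space dynamic model can be built mechanically from the panel and a spatial weight matrix. A minimal sketch, with a hypothetical row-normalized contiguity matrix (the estimator itself is not shown):

```python
import numpy as np

def dynamic_lags(Y, W):
    """For panel data Y (T x N) and a spatial weight matrix W (N x N),
    build the three lagged regressors of a time-space dynamic model:
    the temporal lag y_{t-1}, the spatial lag W y_t, and the
    spatiotemporal lag W y_{t-1}.  Illustrative construction only."""
    Wn = W / W.sum(axis=1, keepdims=True)  # row-normalize the weights
    temporal = Y[:-1]                      # y_{t-1}, aligned to t = 1..T-1
    spatial = Y[1:] @ Wn.T                 # W y_t
    spatiotemporal = Y[:-1] @ Wn.T         # W y_{t-1}
    return temporal, spatial, spatiotemporal
```

Each row of the spatial lag is the weighted average of the neighbors' values at the same date, which is how spatial dependence enters the regression.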

Relevância:

80.00%

Publicador:

Resumo:

This thesis focuses on the analysis of statistical arbitrage, a trading strategy that seeks to profit from statistical fluctuations in the price of one or more assets relative to their expected value. In general, statistical arbitrage opportunities arise when systematic components can be identified in the price dynamics of assets that move with persistent and prevailing regularities. Random perturbations of supply and demand in the markets can cause prices to diverge, giving rise to intermarket spread opportunities, i.e., the simultaneous purchase and sale of correlated commodities. Several econometric tests are examined in depth, in particular the unit root tests used to check whether a time series can be modeled as a random walk. Finally, a trading strategy based on statistical arbitrage is constructed and applied numerically to the 2010-2014 time series of two oil-related securities: Brent and WTI.
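The core of a unit root test can be sketched in a few lines. The following is a minimal Dickey-Fuller-style regression, illustrating only the idea; a real application would use the augmented (ADF) version with lagged differences and proper critical values.

```python
import numpy as np

def df_gamma(y):
    """Coefficient gamma from the Dickey-Fuller regression
    dy_t = c + gamma * y_{t-1} + e_t.  gamma close to 0 is consistent
    with a random walk (unit root); gamma < 0 suggests mean reversion.
    Sketch only: no lag augmentation, no critical values."""
    y = np.asarray(y, dtype=float)
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    return beta[1]
```

On a series with unit increments the estimated gamma is zero, while on a strongly mean-reverting series it is clearly negative, which is the contrast the tests in the thesis exploit.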

Relevância:

80.00%

Publicador:

Resumo:

Category-management models serve to assist in the development of plans for pricing and promotions of individual brands. Techniques to solve the models can have problems of accuracy and interpretability because they are susceptible to spurious regression problems due to nonstationary time-series data. Improperly stated nonstationary systems can reduce the accuracy of the forecasts and undermine the interpretation of the results. This is problematic because recent studies indicate that sales are often a nonstationary time-series. Newly developed correction techniques can account for nonstationarity by incorporating error-correction terms into the model when using a Bayesian Vector Error-Correction Model. The benefit of using such a technique is that shocks to control variates can be separated into permanent and temporary effects and allow cointegration of series for analysis purposes. Analysis of a brand data set indicates that this is important even at the brand level. Thus, additional information is generated that allows a decision maker to examine controllable variables in terms of whether they influence sales over a short or long duration. Only products that are nonstationary in sales volume can be manipulated for long-term profit gain, and promotions must be cointegrated with brand sales volume. The brand data set is used to explore the capabilities and interpretation of cointegration.
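The error-correction term at the heart of the model above can be illustrated with a classical Engle-Granger two-step sketch. This shows only the core mechanism (a lagged equilibrium error pulling the series back to its long-run relation), not the Bayesian multivariate estimator the abstract describes; data and names are hypothetical.

```python
import numpy as np

def ecm_sketch(x, y):
    """Two-step error-correction sketch: (1) estimate the long-run
    relation y_t = a + b * x_t by OLS; (2) regress dy_t on the lagged
    equilibrium error.  A negative adjustment coefficient means
    short-run deviations from the long-run relation die out."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    X = np.column_stack([np.ones(len(x)), x])
    a, b = np.linalg.lstsq(X, y, rcond=None)[0]
    ect = y - (a + b * x)                        # error-correction term
    dy = np.diff(y)
    Z = np.column_stack([np.ones(len(dy)), ect[:-1]])
    alpha = np.linalg.lstsq(Z, dy, rcond=None)[0][1]
    return b, alpha                              # long-run slope, adjustment speed
```

In the category-management setting, the analogue of `alpha` is what separates temporary promotion shocks (which the error-correction term absorbs) from permanent shifts in sales volume.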

Relevância:

80.00%

Publicador:

Resumo:

Subsequent to the influential paper of [Chan, K.C., Karolyi, G.A., Longstaff, F.A., Sanders, A.B., 1992. An empirical comparison of alternative models of the short-term interest rate. Journal of Finance 47, 1209-1227], the generalised method of moments (GMM) has been a popular technique for estimation and inference relating to continuous-time models of the short-term interest rate. GMM has been widely employed to estimate model parameters and to assess the goodness-of-fit of competing short-rate specifications. The current paper conducts a series of simulation experiments to document the bias and precision of GMM estimates of short-rate parameters, as well as the size and power of [Hansen, L.P., 1982. Large sample properties of generalised method of moments estimators. Econometrica 50, 1029-1054], J-test of over-identifying restrictions. While the J-test appears to have appropriate size and good power in sample sizes commonly encountered in the short-rate literature, GMM estimates of the speed of mean reversion are shown to be severely biased. Consequently, it is dangerous to draw strong conclusions about the strength of mean reversion using GMM. In contrast, the parameter capturing the levels effect, which is important in differentiating between competing short-rate specifications, is estimated with little bias. (c) 2006 Elsevier B.V. All rights reserved.
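The bias in estimated mean reversion has a simple small-sample analogue that can be demonstrated in a few lines. The sketch below uses OLS on a discrete AR(1) rather than GMM on a continuous-time short-rate model, so it is only a simplified stand-in for the paper's experiments; all parameter values are illustrative.

```python
import numpy as np

def ar1_bias_demo(phi=0.9, n=50, reps=500, seed=0):
    """Monte Carlo illustration of small-sample bias in the estimated
    speed of mean reversion: OLS estimates of an AR(1) coefficient are
    biased downward, so mean reversion looks stronger than it is."""
    rng = np.random.default_rng(seed)
    est = []
    for _ in range(reps):
        e = rng.standard_normal(n)
        y = np.empty(n)
        y[0] = e[0]
        for t in range(1, n):
            y[t] = phi * y[t - 1] + e[t]
        X = np.column_stack([np.ones(n - 1), y[:-1]])
        beta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
        est.append(beta[1])
    return float(np.mean(est))
```

With a true coefficient of 0.9 and only 50 observations per series, the average estimate falls noticeably below 0.9, mirroring the paper's warning against drawing strong conclusions about mean reversion from such estimates.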

Relevância:

80.00%

Publicador:

Resumo:

A framework for developing marketing category management decision support systems (DSS) based upon the Bayesian Vector Autoregressive (BVAR) model is extended. Since the BVAR model is vulnerable to permanent and temporary shifts in purchasing patterns over time, a form that can correct for the shifts and still provide the other advantages of the BVAR is a Bayesian Vector Error-Correction Model (BVECM). We present the mechanics of extending the DSS to move from a BVAR model to the BVECM model for the category management problem. Several additional iterative steps are required in the DSS to allow the decision maker to arrive at the best forecast possible. The revised marketing DSS framework and model fitting procedures are described. Validation is conducted on a sample problem.

Relevância:

80.00%

Publicador:

Resumo:

Following the development of the first exchange rate target zone model at the end of the eighties, dozens of papers analyzed theoretical and empirical aspects of currency bands. This paper reviews different empirical methods for analyzing the credibility of the band, laying special emphasis on the most widely used and cited method, the so-called drift-adjustment method. Papers applying that method claim that while forecasting a freely floating currency seems a hopeless task, predicting an exchange rate within the future band can be done successfully. This paper shows that the results obtained by applying the method to the European Monetary System and to the Nordic countries' band regimes are not specific to target-zone currencies: applying it to the freely floating US dollar, and indeed to most unit root processes, leads qualitatively to the same results. The paper explores the reasons for this apparent puzzle and presents a model, built on the main observed features of target-zone regimes, in which the exchange rate within the band is not necessarily predictable, because in the period before a devaluation its dynamics within the band may be chaotic.