924 results for Bias-corrected bootstrap
Abstract:
Using data from the United States, Japan, Germany, the United Kingdom and France, Sims (1992) found that positive innovations to short-term interest rates led to sharp, persistent increases in the price level. The result was confirmed by other authors and, because of its unexpected nature, was given the name "price puzzle" by Eichenbaum (1992). In this paper I investigate the existence of a price puzzle in Brazil using the same type of estimation and benchmark identification scheme employed by Christiano et al. (2000). In a methodological improvement over these studies, I qualify the results with the construction of bias-corrected bootstrap confidence intervals. Even though the data do show a statistically significant price puzzle in Brazil, it lasts for only one quarter and is quantitatively immaterial.
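For readers unfamiliar with the interval construction these abstracts share, here is a minimal sketch of the bias-corrected (BC) percentile bootstrap, assuming simple i.i.d. resampling of a univariate sample; the function name and toy data are illustrative only, not taken from the paper, which applies the idea to impulse-response estimates.

```python
import numpy as np
from statistics import NormalDist

def bc_bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=0):
    """Bias-corrected percentile bootstrap CI for stat(data)."""
    rng = np.random.default_rng(seed)
    nd = NormalDist()
    theta_hat = stat(data)
    boot = np.array([stat(rng.choice(data, size=len(data), replace=True))
                     for _ in range(n_boot)])
    # z0 measures how far the bootstrap distribution is shifted
    # relative to the point estimate (the "median bias")
    z0 = nd.inv_cdf((boot < theta_hat).mean())
    lo_p = nd.cdf(2 * z0 + nd.inv_cdf(alpha / 2))
    hi_p = nd.cdf(2 * z0 + nd.inv_cdf(1 - alpha / 2))
    return float(np.quantile(boot, lo_p)), float(np.quantile(boot, hi_p))

rng = np.random.default_rng(1)
sample = rng.exponential(scale=2.0, size=100)   # skewed toy data
lo, hi = bc_bootstrap_ci(sample, np.mean)
print(lo, hi)
```

Relative to the plain percentile interval, the adjustment through z0 shifts the quantile levels to compensate for median bias in the bootstrap distribution, which matters most for skewed statistics.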
Abstract:
In this paper we discuss bias-corrected estimators for the regression and dispersion parameters in an extended class of dispersion models (Jorgensen, 1997b). This class extends the regular dispersion models by letting the dispersion parameter vary across observations, and contains the dispersion models as a particular case. General formulae for the O(n^-1) bias are obtained explicitly for dispersion models with dispersion covariates, generalizing previous results by Botter and Cordeiro (1998), Cordeiro and McCullagh (1991), Cordeiro and Vasconcellos (1999), and Paula (1992). The practical use of the formulae is that closed-form expressions can be derived for the O(n^-1) biases of the maximum likelihood estimators of the regression and dispersion parameters whenever the information matrix has a closed form. Various expressions for the O(n^-1) biases are given for special models. The formulae are advantageous for numerical purposes because they require only a supplementary weighted linear regression. We also compare these bias-corrected estimators, by simulation, with two alternative estimators that are likewise bias-free to order O(n^-1) and are based on bootstrap methods. (C) 2011 Elsevier B.V. All rights reserved.
Abstract:
Projections of Arctic sea ice thickness (SIT) have the potential to inform stakeholders about accessibility to the region, but are currently rather uncertain. The latest suite of CMIP5 Global Climate Models (GCMs) produce a wide range of simulated SIT in the historical period (1979–2014) and exhibit various biases when compared with the Pan-Arctic Ice Ocean Modelling and Assimilation System (PIOMAS) sea ice reanalysis. We present a new method to constrain such GCM simulations of SIT via a statistical bias correction technique. The bias correction successfully constrains the spatial SIT distribution and temporal variability in the CMIP5 projections whilst retaining the climatic fluctuations from individual ensemble members. The bias correction acts to reduce the spread in projections of SIT and reveals the significant contributions of climate internal variability in the first half of the century and of scenario uncertainty from mid-century onwards. The projected date of ice-free conditions in the Arctic under the RCP8.5 high emission scenario occurs in the 2050s, which is a decade earlier than without the bias correction, with potentially significant implications for stakeholders in the Arctic such as the shipping industry. The bias correction methodology developed could be similarly applied to other variables to reduce spread in climate projections more generally.
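As one concrete (and deliberately simplified) illustration of a statistical bias correction of the kind described, the sketch below rescales a model's anomalies so that its historical mean and variance match a reference record; this is an assumed textbook variant, not the paper's exact procedure, and all data are synthetic.

```python
import numpy as np

def bias_correct(model_hist, ref_hist, model_proj):
    """Shift and scale model output so that its historical period
    matches the reference mean and standard deviation."""
    m_mu, m_sd = model_hist.mean(), model_hist.std(ddof=1)
    r_mu, r_sd = ref_hist.mean(), ref_hist.std(ddof=1)
    # each member keeps its own fluctuations, re-anchored to the reference
    return r_mu + (model_proj - m_mu) * (r_sd / m_sd)

rng = np.random.default_rng(0)
ref = rng.normal(2.0, 0.5, 36)               # reference thickness record (m)
model = ref + 1.0 + rng.normal(0, 0.2, 36)   # model biased ~1 m too thick
proj = model - 0.8                           # hypothetical thinning projection
corrected = bias_correct(model, ref, proj)
print(corrected.mean())
```

A useful sanity check: applying the correction to the historical period itself reproduces the reference mean and standard deviation exactly.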
Abstract:
In this paper, we propose a novel approach to econometric forecasting of stationary and ergodic time series within a panel-data framework. Our key element is to employ the (feasible) bias-corrected average forecast. Using panel-data sequential asymptotics we show that it is potentially superior to other techniques in several contexts. In particular, it is asymptotically equivalent to the conditional expectation, i.e., it has an optimal limiting mean-squared error. We also develop a zero-mean test for the average bias and discuss the forecast-combination puzzle in small and large samples. Monte Carlo simulations are conducted to evaluate the performance of the feasible bias-corrected average forecast in finite samples. An empirical exercise based upon data from a well-known survey is also presented. Overall, theoretical and empirical results show promise for the feasible bias-corrected average forecast.
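A minimal sketch of the idea behind a bias-corrected average forecast, under the simplifying assumption that each forecaster's bias is an additive constant estimable from past errors; the function and data below are illustrative, not the paper's estimator.

```python
import numpy as np

def bias_corrected_average(forecasts, outcomes, new_forecasts):
    """Remove each forecaster's estimated additive bias, then average."""
    # forecasts: (T, N) past forecasts; outcomes: (T,) realizations
    bias = (forecasts - outcomes[:, None]).mean(axis=0)  # one estimate per forecaster
    return (new_forecasts - bias).mean()

rng = np.random.default_rng(0)
T, N = 200, 30
truth = rng.normal(0.0, 1.0, T)
indiv_bias = rng.normal(0.5, 0.3, N)                # heterogeneous biases
fcsts = truth[:, None] + indiv_bias + rng.normal(0.0, 1.0, (T, N))
new = 0.7 + indiv_bias + rng.normal(0.0, 1.0, N)    # forecasts of an outcome of 0.7
bca = bias_corrected_average(fcsts, truth, new)
print(bca)
```

The plain cross-sectional average inherits the common component of the individual biases (about 0.5 in this toy setup), while the corrected average does not; this is the sense in which averaging alone does not cure bias.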
Abstract:
In this paper, we propose a novel approach to econometric forecasting of stationary and ergodic time series within a panel-data framework. Our key element is to employ the bias-corrected average forecast. Using panel-data sequential asymptotics we show that it is potentially superior to other techniques in several contexts. In particular, it delivers a zero-limiting mean-squared error if the number of forecasts and the number of post-sample time periods are sufficiently large. We also develop a zero-mean test for the average bias. Monte Carlo simulations are conducted to evaluate the performance of this new technique in finite samples. An empirical exercise, based upon data from well-known surveys, is also presented. Overall, these results show promise for the bias-corrected average forecast.
Abstract:
This report discusses analytic second-order bias-correction techniques for the maximum likelihood estimates (MLEs, for short) of the unknown parameters of distributions used in quality and reliability analysis. The MLEs are widely used to estimate the unknown parameters of probability distributions because of their many desirable properties; for example, they are asymptotically unbiased, consistent, and asymptotically normal. However, many of these properties hold only for extremely large sample sizes. Properties such as unbiasedness may fail for the small or even moderate sample sizes that are more common in real data applications. Bias-correction techniques for the MLEs are therefore desirable in practice, especially when the sample size is small. Two popular techniques for reducing the bias of the MLEs are the 'preventive' and the 'corrective' approaches. Both reduce the bias of the MLEs to order O(n^-2), but the 'preventive' approach lacks an explicit closed-form expression; consequently, we mainly focus on the 'corrective' approach in this report. To illustrate the importance of bias correction in practice, we apply the bias-corrected method to two popular lifetime distributions: the inverse Lindley distribution and the weighted Lindley distribution. Numerical studies based on these two distributions show that the considered bias-corrected technique is highly recommended over commonly used estimators without bias correction. Special attention should therefore be paid when estimating the unknown parameters of probability distributions from small or moderate samples.
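The 'corrective' idea is easiest to see in a case where everything is available in closed form. For an exponential sample, the MLE of the rate is 1/x̄ with E[1/x̄] = nλ/(n-1), so multiplying the MLE by (n-1)/n removes the O(1/n) bias term. This worked example is ours, not the report's, which treats the inverse Lindley and weighted Lindley distributions.

```python
import numpy as np

# Monte Carlo check: the raw MLE of an exponential rate overestimates
# lambda by roughly lambda/(n-1); the corrected estimator does not.
rng = np.random.default_rng(0)
lam, n, reps = 2.0, 10, 20000
samples = rng.exponential(1 / lam, size=(reps, n))
raw = 1.0 / samples.mean(axis=1)        # MLE replicates
corr = (n - 1) / n * raw                # bias-corrected replicates
print(raw.mean() - lam, corr.mean() - lam)
```

With n = 10 the raw bias is about λ/(n-1) ≈ 0.22, while the corrected estimator is, in this special case, exactly unbiased.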
Abstract:
This paper evaluates the performance of prediction intervals generated from alternative time series models in the context of tourism forecasting. The forecasting methods considered include the autoregressive (AR) model, the AR model using the bias-corrected bootstrap, seasonal ARIMA models, innovations state space models for exponential smoothing, and Harvey's structural time series models. We use thirteen monthly time series of tourist arrivals to Hong Kong and Australia. The mean coverage rates and widths of the alternative prediction intervals are evaluated in an empirical setting. All models are found to produce satisfactory prediction intervals, except for the autoregressive model. In particular, intervals based on the bias-corrected bootstrap perform best in general, providing tight intervals with accurate coverage rates, especially when the forecast horizon is long.
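A compressed sketch of how a bias-corrected bootstrap prediction interval for an autoregressive model can be built, assuming an AR(1) fitted by OLS and a simple "double the estimate minus the bootstrap mean" adjustment for the slope; these details are our assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def fit_ar1(y):
    """OLS fit of y_t = c + phi * y_{t-1} + e_t; returns c, phi, residuals."""
    x, z = y[:-1], y[1:]
    xc = x - x.mean()
    phi = np.dot(xc, z - z.mean()) / np.dot(xc, xc)
    c = z.mean() - phi * x.mean()
    return c, phi, z - c - phi * x

def bc_bootstrap_pi(y, h=12, n_boot=500, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    c, phi, resid = fit_ar1(y)
    # estimate the small-sample bias of phi by re-fitting bootstrap series
    phis = []
    for _ in range(n_boot):
        yb = np.empty(len(y))
        yb[0] = y[0]
        e = rng.choice(resid, size=len(y) - 1, replace=True)
        for t in range(1, len(y)):
            yb[t] = c + phi * yb[t - 1] + e[t - 1]
        phis.append(fit_ar1(yb)[1])
    phi_bc = min(2 * phi - np.mean(phis), 0.999)  # correct, keep stationary
    # simulate h-step-ahead paths from the bias-corrected model
    finals = np.empty(n_boot)
    for b in range(n_boot):
        yt = y[-1]
        for _ in range(h):
            yt = c + phi_bc * yt + rng.choice(resid)
        finals[b] = yt
    return np.quantile(finals, [alpha / 2, 1 - alpha / 2])

rng = np.random.default_rng(1)
y = np.empty(120)
y[0] = 0.0
for t in range(1, 120):
    y[t] = 0.5 + 0.7 * y[t - 1] + rng.normal()
lo, hi = bc_bootstrap_pi(y)
print(lo, hi)
```

Because the OLS slope of an autoregression is biased toward zero in small samples, correcting it before simulating future paths tends to widen the interval appropriately for persistent series.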
Abstract:
Time series classification has been extensively explored in many fields of study. Most methods are based on the historical or current information extracted from data. However, if interest is in a specific future time period, methods that directly relate to forecasts of time series are much more appropriate. An approach to time series classification is proposed based on a polarization measure of forecast densities of time series. By fitting autoregressive models, forecast replicates of each time series are obtained via the bias-corrected bootstrap, and a stationarity correction is considered when necessary. Kernel estimators are then employed to approximate forecast densities, and discrepancies of forecast densities of pairs of time series are estimated by a polarization measure, which evaluates the extent to which two densities overlap. Following the distributional properties of the polarization measure, a discriminant rule and a clustering method are proposed to conduct the supervised and unsupervised classification, respectively. The proposed methodology is applied to both simulated and real data sets, and the results show desirable properties.
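A rough stand-in for the density-comparison ingredient just described: two forecast densities are kernel-smoothed on a common grid and their overlap coefficient is computed (near 1 for identical densities, near 0 for disjoint ones). The bandwidth, grid, and data below are our illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def kde(samples, grid, bw):
    """Gaussian kernel density estimate evaluated on `grid`."""
    z = (grid[:, None] - samples[None, :]) / bw
    return np.exp(-0.5 * z**2).sum(axis=1) / (len(samples) * bw * np.sqrt(2 * np.pi))

def overlap(a, b, bw=0.3):
    """Overlap coefficient of the two smoothed densities."""
    grid = np.linspace(min(a.min(), b.min()) - 3 * bw,
                       max(a.max(), b.max()) + 3 * bw, 1000)
    fa, fb = kde(a, grid, bw), kde(b, grid, bw)
    return np.minimum(fa, fb).sum() * (grid[1] - grid[0])  # Riemann sum

rng = np.random.default_rng(0)
close = overlap(rng.normal(0.0, 1.0, 500), rng.normal(0.2, 1.0, 500))
far = overlap(rng.normal(0.0, 1.0, 500), rng.normal(4.0, 1.0, 500))
print(close, far)
```

Pairs of series whose forecast densities overlap heavily would be grouped together, while pairs with little overlap would be separated, which is the intuition behind classifying on a polarization-type measure.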
Abstract:
I start by presenting an explicit solution to Taylor's (2001) model, in order to illustrate the link between the target interest rate and the overnight interest rate prevailing in the economy. Next, I use vector autoregressions to shed some light on the evolution of key macroeconomic variables after the Central Bank of Brazil increases the target interest rate by 1%. Point estimates show a four-year accumulated output loss ranging from 0.04% (whole sample, 1980:1-2004:2, quarterly data) to 0.25% (post-Real data only), with a first-year peak output response between 0.04% and 1.0%, respectively. Prices decline between 2% and 4% over a four-year horizon. The accumulated output response is found to be between 3.5 and 6 times higher after the Real Plan than when the whole sample is considered. The 95% confidence bands obtained using the bias-corrected bootstrap always include a null output response when the whole sample is used, but not when the data are restricted to the post-Real period. Innovations to interest rates explain between 4.9% (whole sample) and 9.2% (post-Real sample) of the forecast error of GDP.
Abstract:
This study compared four alternative approaches (the Taylor, Fieller, percentile bootstrap, and bias-corrected bootstrap methods) to estimating confidence intervals (CIs) around the cost-effectiveness (CE) ratio. The study consisted of two components: (1) a Monte Carlo simulation, conducted to identify characteristics of hypothetical cost-effectiveness data sets that might lead one CI estimation technique to outperform another, whose results were matched to the characteristics of (2) an extant data set derived from the National AIDS Demonstration Research (NADR) project. The methods were used to calculate CIs for this data set, and the results were then compared. The main performance criterion in the simulation study was the percentage of times the estimated CIs contained the "true" CE ratio; a secondary criterion was the average width of the confidence intervals. For the bootstrap methods, bias was estimated.

Simulation results for the Taylor and Fieller methods indicated that the CIs estimated using the Taylor series method contained the true CE ratio more often than did those obtained using the Fieller method, but the opposite was true when the correlation was positive and the CV of effectiveness was high for each value of the CV of costs. Similarly, the CIs obtained by applying the Taylor series method to the NADR data set were wider than those obtained using the Fieller method for positive correlation values and for values at which the CV of effectiveness was not equal to 30% for each value of the CV of costs.

The general trend for the bootstrap methods was that the percentage of times the true CE ratio was contained in the CIs was higher for the percentile method at higher values of the CV of effectiveness, given the correlation between average costs and effects and the CV of effectiveness. The results for the data set indicated that the bias-corrected CIs were wider than the percentile-method CIs. This result was in accordance with the prediction derived from the simulation experiment. Generally, the bootstrap methods are more favorable for the parameter specifications investigated in this study; however, the Taylor method is preferred for a low CV of effect, and the percentile method is more favorable for a higher CV of effect.
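To make the percentile/bias-corrected comparison concrete, the sketch below computes both intervals for a cost-effectiveness ratio on synthetic data (the NADR data are not reproduced here); the bias-correction constant z0 follows the standard BC construction.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
costs = rng.gamma(2.0, 500.0, 150)        # skewed incremental costs
effects = rng.normal(0.4, 0.15, 150)      # incremental effectiveness
ce_hat = costs.mean() / effects.mean()    # point estimate of the CE ratio

B = 4000
idx = rng.integers(0, len(costs), size=(B, len(costs)))  # paired resampling
boot = costs[idx].mean(axis=1) / effects[idx].mean(axis=1)

pct = np.quantile(boot, [0.025, 0.975])        # percentile interval
nd = NormalDist()
z0 = nd.inv_cdf((boot < ce_hat).mean())        # median-bias constant
lo_p = nd.cdf(2 * z0 + nd.inv_cdf(0.025))
hi_p = nd.cdf(2 * z0 + nd.inv_cdf(0.975))
bc = np.quantile(boot, [lo_p, hi_p])           # bias-corrected interval
print(pct, bc)
```

Resampling cost/effect pairs jointly (via a shared index array) preserves whatever correlation exists between costs and effects, which the abstract identifies as a key driver of the methods' relative performance.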
Abstract:
Theoretical background and research question: School tests serve to assess knowledge and ability. Like any measurement, this one can be distorted by confounding variables. Anxiety experienced during tests is one such potential confound: anxiety can impair test performance because it can interfere with information processing (disruption of knowledge retrieval and of thinking; Zeidner, 1998). This cognitive manifestation of anxiety (Rost & Schermer, 1997) is rooted in the anxiety-driven automatic orientation of attention toward task-irrelevant thoughts during test taking (Eysenck, Derakshan, Santos & Calvo, 2007). It has been shown, however, that anxiety does not invariably go hand in hand with performance decrements (Eysenck et al., 2007). We assume that the capacity for self-control, i.e. attention regulation (Baumeister, Muraven & Tice, 2000; Schmeichel & Baumeister, 2010), is a factor that determines how strongly the cognitive manifestation of anxiety during tests, and the associated performance decrements, occur. Anxious learners with higher attention-regulation capacity should be better able to counteract the automatic orientation of their attention toward task-irrelevant thoughts and to keep their attention directed at the task. Accordingly, despite their anxiety, they should experience less cognitive manifestation of anxiety during tests than anxious learners with lower attention-regulation capacity. Self-efficacy expectations and self-esteem are further variables that have previously been linked to coping with anxiety and stress (Bandura, 1977; Baumeister, Campbell, Krueger & Vohs, 2003). These variables were therefore included as additional predictors.
We tested the hypothesis that dispositional attention-regulation capacity predicts changes in the cognitive manifestation of anxiety during mathematics tests in a sample of vocational business school students, over and above dispositional self-efficacy and dispositional self-esteem. We further assumed an indirect link between attention-regulation capacity and changes in mathematics grades, mediated by the change in the cognitive manifestation of anxiety. Method: One hundred and fifty-eight business school students completed a questionnaire in September 2011 (T1) containing the following measures: the Cognitive Manifestation of Anxiety subscale of the Differential Test Anxiety Inventory (Rost & Schermer, 1997), referring to mathematics tests (Sparfeldt, Schilling, Rost, Stelzl & Peipert, 2005), alpha = .90; a scale for dispositional attention-regulation capacity (Bertrams & Englert, 2013), alpha = .88; a self-efficacy scale (Schwarzer & Jerusalem, 1995), alpha = .83; a self-esteem scale (von Collani & Herzberg, 2003), alpha = .83; and the most recent mathematics report-card grade. In February 2012 (T2), i.e. after five months and shortly after receiving their mid-year report cards, the students again reported their cognitive manifestation of anxiety during mathematics tests (alpha = .93) and their most recent mathematics grade. Results: The data were analyzed by means of correlation analysis, multiple regression analysis, and bootstrapping. Attention-regulation capacity, self-efficacy, and self-esteem (all at T1) were positively intercorrelated, r = .50/.59/.59. These variables were entered jointly as predictors in a regression model predicting cognitive manifestation of anxiety at T2, with cognitive manifestation of anxiety at T1 held constant.
As expected, attention-regulation capacity predicted changes in the cognitive manifestation of anxiety, beta = -.21, p = .02; that is, higher attention-regulation capacity at T1 was associated with reduced cognitive manifestation of anxiety at T2. Self-efficacy, beta = .12, p = .14, and self-esteem, beta = .05, p = .54, had no incremental predictive value for these changes. Furthermore, a mediation analysis using bootstrapping (bias-corrected bootstrap 95% confidence interval, 5000 resamples; see Hayes & Scharkow, in press) showed that attention-regulation capacity (T1) was indirectly linked to the change in mathematics achievement via the change in the cognitive manifestation of anxiety (i.e., the bootstrap confidence interval did not include zero; CI [0.01, 0.24]). No analogous indirect link to mathematics achievement was found for self-efficacy or self-esteem. Conclusion: The findings point to the importance of attention-regulation capacity for coping with cognitive anxiety reactions during school tests. Independently of attention-regulation capacity, positive expectations and a positive self-image do not appear to protect against the performance-impairing cognitive manifestation of anxiety during mathematics tests.
Abstract:
My thesis consists of three essays on bootstrap inference, both in panel-data models and in models with many instrumental variables (IVs), a large number of which may be weak. Since asymptotic theory is not always a good approximation to the sampling distribution of estimators and test statistics, I consider the bootstrap as an alternative. These essays seek to establish the asymptotic validity of existing bootstrap procedures and, where they are invalid, to propose new, valid bootstrap methods. The first chapter (co-written with Sílvia Gonçalves) studies the validity of the bootstrap for inference in a linear, dynamic, and stationary panel-data model with fixed effects. We consider three bootstrap methods: the recursive-design bootstrap, the fixed-design bootstrap, and the pairs bootstrap. These methods are natural generalizations to the panel setting of the bootstrap methods considered by Gonçalves and Kilian (2004) for autoregressive time-series models. We show that the OLS estimator obtained under the recursive-design bootstrap contains a built-in term that mimics the bias of the original estimator. This contrasts with the fixed-design and pairs bootstraps, whose distributions are incorrectly centered at zero. However, the recursive-design and pairs bootstraps are asymptotically valid when applied to the bias-corrected estimator, unlike the fixed-design bootstrap. In simulations, the recursive-design bootstrap is the method that produces the best results. The second chapter extends the pairs-bootstrap results to dynamic nonlinear panel models with fixed effects. These models are often estimated by the maximum likelihood estimator (MLE), which also suffers from a bias. Recently, Dhaene and Johmans (2014) proposed the split-jackknife estimation method.
Although these estimators have normal asymptotic approximations centered on the true parameter, serious finite-sample distortions remain. Dhaene and Johmans (2014) proposed the pairs bootstrap as an alternative in this context without any theoretical justification. To fill this gap, I show that this method is asymptotically valid when used to estimate the distribution of the split-jackknife estimator, even though it cannot estimate the distribution of the MLE. Monte Carlo simulations show that bootstrap confidence intervals based on the split-jackknife estimator greatly reduce the finite-sample distortions of the normal approximation. In addition, I apply this bootstrap method to a model of female labor-force participation to construct valid confidence intervals. In the final chapter (co-written with Wenjie Wang), we study the asymptotic validity of bootstrap procedures for models with many instrumental variables (IVs), a large number of which may be weak. We show analytically that a standard residual-based bootstrap and the restricted efficient (RE) bootstrap of Davidson and MacKinnon (2008, 2010, 2014) cannot estimate the limiting distribution of the limited-information maximum likelihood (LIML) estimator. The main reason is that they fail to properly mimic the parameter that characterizes the strength of identification in the sample. We therefore propose a modified bootstrap method that consistently estimates this limiting distribution. Our simulations show that the modified bootstrap considerably reduces the finite-sample distortions of asymptotic Wald-type ($t$) tests, particularly when the degree of endogeneity is high.