889 results for Forecast error variance
Abstract:
Atmosphere-only and ocean-only variational data assimilation (DA) schemes are able to use window lengths that are optimal for the error growth rate, non-linearity and observation density of the respective systems. Typical window lengths are 6-12 hours for the atmosphere and 2-10 days for the ocean. However, in the implementation of coupled DA schemes it has been necessary to match the window length of the ocean to that of the atmosphere, which may potentially sacrifice the accuracy of the ocean analysis in order to provide a more balanced coupled state. This paper investigates how extending the window length in the presence of model error affects both the analysis of the coupled state and the initialized forecast when using coupled DA with differing degrees of coupling. Results are illustrated using an idealized single-column model of the coupled atmosphere-ocean system. It is found that the analysis error from an uncoupled DA scheme can be smaller than that from a coupled analysis at the initial time, due to faster error growth in the coupled system. However, this does not necessarily lead to a more accurate forecast, due to imbalances in the coupled state. Instead, coupled DA is better able to update the initial state to reduce the impact of the model error on the accuracy of the forecast. The effect of model error is potentially most detrimental in the weakly coupled formulation, due to the inconsistency between the coupled model used in the outer loop and the uncoupled models used in the inner loop.
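For context, the assimilation window enters through the variational cost function. A minimal strong-constraint 4D-Var cost function in standard textbook form (a sketch for orientation, not the paper's specific coupled single-column formulation) is

J(\mathbf{x}_0) = \tfrac{1}{2}(\mathbf{x}_0 - \mathbf{x}^b)^{\mathsf T}\mathbf{B}^{-1}(\mathbf{x}_0 - \mathbf{x}^b) + \tfrac{1}{2}\sum_{i=0}^{N} \bigl(\mathcal{H}_i(\mathbf{x}_i) - \mathbf{y}_i\bigr)^{\mathsf T}\mathbf{R}_i^{-1}\bigl(\mathcal{H}_i(\mathbf{x}_i) - \mathbf{y}_i\bigr), \qquad \mathbf{x}_i = \mathcal{M}_{0\to i}(\mathbf{x}_0),

where \mathbf{B} and \mathbf{R}_i are the background and observation error covariances and the sum runs over the observation times inside the window; lengthening the window increases N and the amount of model error accumulated through the model propagator \mathcal{M}.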
Abstract:
In this paper, we propose a novel approach to econometric forecasting of stationary and ergodic time series within a panel-data framework. Our key element is to employ the (feasible) bias-corrected average forecast. Using panel-data sequential asymptotics we show that it is potentially superior to other techniques in several contexts. In particular, it is asymptotically equivalent to the conditional expectation, i.e., has an optimal limiting mean-squared error. We also develop a zero-mean test for the average bias and discuss the forecast-combination puzzle in small and large samples. Monte-Carlo simulations are conducted to evaluate the performance of the feasible bias-corrected average forecast in finite samples. An empirical exercise, based upon data from a well-known survey, is also presented. Overall, theoretical and empirical results show promise for the feasible bias-corrected average forecast.
Abstract:
In this paper, we propose a novel approach to econometric forecasting of stationary and ergodic time series within a panel-data framework. Our key element is to employ the bias-corrected average forecast. Using panel-data sequential asymptotics we show that it is potentially superior to other techniques in several contexts. In particular, it delivers a zero-limiting mean-squared error if the number of forecasts and the number of post-sample time periods are sufficiently large. We also develop a zero-mean test for the average bias. Monte-Carlo simulations are conducted to evaluate the performance of this new technique in finite samples. An empirical exercise, based upon data from well-known surveys, is also presented. Overall, these results show promise for the bias-corrected average forecast.
Abstract:
In this paper, we propose a novel approach to econometric forecasting of stationary and ergodic time series within a panel-data framework. Our key element is to employ the (feasible) bias-corrected average forecast. Using panel-data sequential asymptotics we show that it is potentially superior to other techniques in several contexts. In particular, it is asymptotically equivalent to the conditional expectation, i.e., has an optimal limiting mean-squared error. We also develop a zero-mean test for the average bias and discuss the forecast-combination puzzle in small and large samples. Monte-Carlo simulations are conducted to evaluate the performance of the feasible bias-corrected average forecast in finite samples. An empirical exercise, based upon data from a well-known survey, is also presented. Overall, these results show promise for the feasible bias-corrected average forecast.
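As a rough illustration of the idea behind the estimator described in the abstracts above (a naive sketch with a simulated panel of forecasters, not the papers' feasible estimator or their panel-data asymptotics), each forecaster's average past error is removed before the forecasts are averaged:

import numpy as np

# Hypothetical panel: forecasts[i, t] is forecaster i's prediction of y[t].
rng = np.random.default_rng(0)
T, N = 200, 12
y = np.cumsum(rng.normal(size=T))                         # target series
bias = rng.normal(0.0, 0.5, size=N)                       # each forecaster's additive bias
forecasts = y[None, :] + bias[:, None] + rng.normal(0.0, 1.0, size=(N, T))

train = slice(0, 150)                                     # estimation window
bias_hat = (forecasts[:, train] - y[train]).mean(axis=1)  # estimated individual biases

# Bias-corrected average forecast versus the plain average on the hold-out period.
bcaf = (forecasts[:, 150:] - bias_hat[:, None]).mean(axis=0)
plain_avg = forecasts[:, 150:].mean(axis=0)

mse_avg = np.mean((plain_avg - y[150:]) ** 2)
mse_bcaf = np.mean((bcaf - y[150:]) ** 2)
print(f"MSE plain average: {mse_avg:.3f}  bias-corrected: {mse_bcaf:.3f}")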
Abstract:
This paper investigates the implications of the credit channel of the monetary policy transmission mechanism in the case of Brazil, using a structural FAVAR (SFAVAR) approach. The term “structural” comes from the estimation strategy, which generates factors that have a clear economic interpretation. The results show that unexpected shocks in the proxies for the external finance premium and the bank balance sheet channel produce large and persistent fluctuations in inflation and economic activity, accounting for more than 30% of the forecast error variance of the latter over a three-year horizon. The central bank seems to incorporate developments in credit markets, especially variations in credit spreads, into its reaction function, as impulse-response exercises show the Selic rate declining in response to wider credit spreads and a contraction in the volume of new loans. Counterfactual simulations also demonstrate that the credit channel amplified the economic contraction in Brazil during the acute phase of the global financial crisis in the last quarter of 2008 and thus gave an important impulse to the recovery period that followed.
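For readers unfamiliar with the forecast error variance shares quoted above, a minimal sketch of how such shares arise from a vector autoregression (an illustrative three-variable VAR(1) with orthogonal unit-variance shocks, not the paper's estimated SFAVAR) is:

import numpy as np

# Illustrative VAR(1) coefficient matrix; the three variables are a stand-in
# for activity, inflation and a credit-spread factor (hypothetical values).
A = np.array([[0.5, 0.1, 0.2],
              [0.1, 0.4, 0.1],
              [0.0, 0.0, 0.6]])
H = 12                                    # forecast horizon, e.g. 12 quarters

# Moving-average coefficients Psi_h = A^h; with orthogonal unit-variance shocks
# the contribution of shock j to variable i's H-step forecast error variance
# is the sum over h of Psi_h[i, j] squared.
Psi = [np.linalg.matrix_power(A, h) for h in range(H)]
contrib = sum(P ** 2 for P in Psi)        # element (i, j): shock j -> variable i
shares = contrib / contrib.sum(axis=1, keepdims=True)

print(shares[0])  # shares of variable 0's forecast error variance due to each shock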
Abstract:
This paper has two original contributions. First, we show that the present value model (PVM hereafter), which has a wide application in macroeconomics and finance, entails common cyclical feature restrictions in the dynamics of the vector error-correction representation (Vahid and Engle, 1993); something that has already been investigated in that VECM context by Johansen and Swensen (1999, 2011) but has not been discussed before with this new emphasis. We also provide the present value reduced rank constraints to be tested within the log-linear model. Our second contribution relates to forecasting time series that are subject to those long- and short-run reduced rank restrictions. Appropriate common cyclical feature restrictions might improve forecasting because they impose natural exclusion restrictions, preventing the estimation of superfluous parameters that would otherwise increase forecast variance with no expected reduction in bias. We applied the techniques discussed in this paper to data known to be subject to present value restrictions, i.e. the online series maintained and updated by Shiller. We focus on three different data sets. The first includes the levels of interest rates with long and short maturities, the second includes the levels of the real price and dividend for the S&P composite index, and the third includes the logarithmic transformation of prices and dividends. Our exhaustive investigation of several different multivariate models reveals that better forecasts can be achieved when restrictions are applied to them. Moreover, imposing short-run restrictions produces forecast winners 70% of the time for target variables of PVMs and 63.33% of the time when all variables in the system are considered.
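For reference, one standard log-linear present value formulation (in the spirit of Campbell and Shiller; a sketch of the kind of restriction meant, not necessarily the paper's exact parametrization) is

Y_t = \theta(1-\lambda)\sum_{i=0}^{\infty}\lambda^{i}\,\mathbb{E}_t\,y_{t+i}, \qquad 0<\lambda<1,

which implies that the spread S_t = Y_t - \theta y_t satisfies S_t = \theta\sum_{k=1}^{\infty}\lambda^{k}\,\mathbb{E}_t\,\Delta y_{t+k}. The first relation makes S_t stationary even when Y_t and y_t are individually I(1) (the long-run, cointegrating reduced-rank restriction), while the second ties the short-run dynamics of \Delta Y_t and \Delta y_t to a common discounted expectation (the short-run, common cyclical feature restriction discussed in the paper).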
Abstract:
The onset of the financial crisis in 2008 and the European sovereign crisis in 2010 renewed macroeconomists' interest in the role played by credit in business cycle fluctuations. The purpose of the present work is to provide empirical evidence on the monetary policy transmission mechanism in Brazil, with a special eye on the role played by the credit channel, using different econometric techniques. It comprises three articles. The first presents a review of the literature on financial frictions, with a focus on the overlaps between credit activity and monetary policy. It highlights how the sharp disruptions in financial markets spurred central banks in developed and emerging nations to deploy a broad set of non-conventional tools to overcome the damage to financial intermediation. A chapter is dedicated to the challenge faced by policymakers in emerging markets, and in Brazil in particular, in a highly integrated global capital market. The second article investigates the implications of the credit channel of the monetary policy transmission mechanism in the case of Brazil, using a structural FAVAR (SFAVAR) approach. The term “structural” comes from the estimation strategy, which generates factors that have a clear economic interpretation. The results show that unexpected shocks in the proxies for the external finance premium and the credit volume produce large and persistent fluctuations in inflation and economic activity – accounting for more than 30% of the forecast error variance of the latter in a three-year horizon. Counterfactual simulations demonstrate that the credit channel amplified the economic contraction in Brazil during the acute phase of the global financial crisis in the last quarter of 2008 and thus gave an important impulse to the recovery period that followed. In the third article, I make use of Bayesian estimation of a standard New Keynesian DSGE model, incorporating the financial accelerator channel developed by Bernanke, Gertler and Gilchrist (1999). The results present evidence in line with that of the previous article: disturbances to the external finance premium – represented here by credit spreads – trigger significant responses in aggregate demand and inflation, and monetary policy shocks are amplified by the financial accelerator mechanism. Keywords: Macroeconomics, Monetary Policy, Credit Channel, Financial Accelerator, FAVAR, DSGE, Bayesian Econometrics
Abstract:
A multiple-regression MOS (Model Output Statistics) equation for forecasting the daily minimum air temperature in the city of Bauru, São Paulo state, is developed. The multiple regression equation, obtained using stepwise regression analysis, has four predictors: three from the global numerical model of the Centro de Previsão de Tempo e Estudos Climáticos (CPTEC) and one observational predictor from the meteorological station of the Instituto de Pesquisas Meteorológicas (IPMet), Bauru. The predictors are the 24-hour forecasts from the global model, valid at 00:00 GMT, of temperature at 1000 hPa, meridional wind at 850 hPa and relative humidity at 1000 hPa, plus the temperature observed at 18:00 GMT. These four predictors explain approximately 80% of the total variance of the predictand, with a mean square error of 1.4°C, which is roughly half the standard deviation of the daily minimum air temperature observed at the IPMet station. A verification of the MOS equation with an independent sample of 47 cases shows that the forecast does not deteriorate significantly when the observational predictor is left out. The MOS equation, with or without this predictor, produces forecasts with an absolute error smaller than 1.5°C in 70% of the cases examined. This result encourages the use of the MOS technique for operational forecasting of the minimum temperature and its development for other weather elements and other locations.
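A minimal sketch of the kind of MOS equation described, assuming a hypothetical training file and column names (the stepwise predictor selection and the operational CPTEC/IPMet data are not reproduced here):

import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical training table: 24 h global-model forecasts valid at 00:00 GMT
# plus the 18:00 GMT observed temperature, with the next day's observed
# minimum temperature as the predictand. File name and columns are assumptions.
df = pd.read_csv("bauru_mos_training.csv")
X = df[["t1000_fcst", "v850_fcst", "rh1000_fcst", "t_obs_18gmt"]]
y = df["tmin_obs"]

mos = LinearRegression().fit(X, y)
r2 = mos.score(X, y)                                   # fraction of variance explained
rmse = float(np.sqrt(np.mean((mos.predict(X) - y) ** 2)))
print(f"R^2 = {r2:.2f}, RMSE = {rmse:.1f} degC")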
Abstract:
We derive the additive-multiplicative error model for microarray intensities, and describe two applications. For the detection of differentially expressed genes, we obtain a statistic whose variance is approximately independent of the mean intensity. For the post hoc calibration (normalization) of data with respect to experimental factors, we describe a method for parameter estimation.
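A common way to write such an additive-multiplicative model (a sketch in the spirit of the abstract; the paper's exact parametrization and estimation procedure may differ) is

Y_{ki} = \alpha_k + \mu_i\,e^{\eta_{ki}} + \varepsilon_{ki}, \qquad \eta_{ki}\sim N(0,\sigma_\eta^2),\quad \varepsilon_{ki}\sim N(0,\sigma_\varepsilon^2),

where \mu_i is the true abundance of gene i, k indexes arrays and \alpha_k is an array-specific offset; the additive term \varepsilon dominates at low intensities and the multiplicative term \eta at high intensities. A generalized-log (arsinh) transformation of the calibrated intensities then has approximately constant variance across the intensity range, which is what makes the variance of the differential-expression statistic roughly independent of the mean.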
Abstract:
The construction of a reliable, practically useful prediction rule for future response is heavily dependent on the "adequacy" of the fitted regression model. In this article, we consider the absolute prediction error, the expected value of the absolute difference between the future and predicted responses, as the model evaluation criterion. This prediction error is easier to interpret than the average squared error and is equivalent to the mis-classification error for the binary outcome. We show that the distributions of the apparent error and its cross-validation counterparts are approximately normal even under a misspecified fitted model. When the prediction rule is "unsmooth", the variance of the above normal distribution can be estimated well via a perturbation-resampling method. We also show how to approximate the distribution of the difference of the estimated prediction errors from two competing models. With two real examples, we demonstrate that the resulting interval estimates for prediction errors provide much more information about model adequacy than the point estimates alone.
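As a minimal illustration of the apparent versus cross-validated absolute prediction error on simulated data (the paper's perturbation-resampling variance estimator is not reproduced here):

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)
n = 300
X = rng.normal(size=(n, 3))
y = X @ np.array([1.0, 0.5, 0.0]) + rng.normal(size=n)    # third covariate is pure noise

model = LinearRegression().fit(X, y)
apparent_ape = np.mean(np.abs(y - model.predict(X)))       # optimistic in-sample error

cv_pred = cross_val_predict(LinearRegression(), X, y, cv=10)
cv_ape = np.mean(np.abs(y - cv_pred))                       # cross-validated counterpart

print(f"apparent APE = {apparent_ape:.3f}, 10-fold CV APE = {cv_ape:.3f}")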
Abstract:
Functional Magnetic Resonance Imaging (fMRI) is a non-invasive technique which is commonly used to quantify changes in blood oxygenation and flow coupled to neuronal activation. One of the primary goals of fMRI studies is to identify localized brain regions where neuronal activation levels vary between groups. Single-voxel t-tests have been commonly used to determine whether activation related to the protocol differs across groups. Due to the generally limited number of subjects within each study, accurate estimation of variance at each voxel is difficult. Thus, combining information across voxels in the statistical analysis of fMRI data is desirable in order to improve efficiency. Here we construct a hierarchical model and apply an Empirical Bayes framework to the analysis of group fMRI data, employing techniques used in high-throughput genomic studies. The key idea is to shrink residual variances by combining information across voxels, and subsequently to construct an improved test statistic in lieu of the classical t-statistic. This hierarchical model results in a shrinkage of voxel-wise residual sample variances towards a common value. The shrunken estimator for voxel-specific variance components in the group analyses outperforms the classical residual error estimator in terms of mean squared error. Moreover, the shrunken test statistic decreases the false positive rate when testing differences in brain contrast maps across a wide range of simulation studies. This methodology was also applied to experimental data regarding a cognitive activation task.
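A minimal numeric sketch of the kind of variance shrinkage described (a limma-style moderated statistic on simulated contrast maps is used here as a stand-in; the paper's hierarchical model and hyperparameter estimation may differ):

import numpy as np

rng = np.random.default_rng(3)
n_vox, n_sub = 5000, 12                        # voxels and subjects (hypothetical sizes)
d = n_sub - 1                                  # residual degrees of freedom per voxel
diff = rng.normal(0.0, 1.0, size=(n_vox, n_sub))   # simulated per-subject contrast maps

mean_diff = diff.mean(axis=1)
s2 = diff.var(axis=1, ddof=1)                  # voxel-wise residual sample variances

# Shrink voxel-wise variances towards a common value (prior variance s0^2 with
# d0 prior degrees of freedom; both are fixed here purely for illustration).
s0_2 = s2.mean()
d0 = 4.0                                       # assumed prior degrees of freedom
s2_tilde = (d0 * s0_2 + d * s2) / (d0 + d)     # shrunken variance estimates

t_classic = mean_diff / np.sqrt(s2 / n_sub)
t_moderated = mean_diff / np.sqrt(s2_tilde / n_sub)
print(t_classic[:3], t_moderated[:3])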
Abstract:
The purpose of this study is to investigate the effects of predictor variable correlations and patterns of missingness with dichotomous and/or continuous data in small samples when missing data is multiply imputed. Missing data of predictor variables is multiply imputed under three different multivariate models: the multivariate normal model for continuous data, the multinomial model for dichotomous data and the general location model for mixed dichotomous and continuous data. Subsequent to the multiple imputation process, Type I error rates of the regression coefficients obtained with logistic regression analysis are estimated under various conditions of correlation structure, sample size, type of data and patterns of missing data. The distributional properties of average mean, variance and correlations among the predictor variables are assessed after the multiple imputation process. For continuous predictor data under the multivariate normal model, Type I error rates are generally within the nominal values with samples of size n = 100. Smaller samples of size n = 50 resulted in more conservative estimates (i.e., lower than the nominal value). Correlation and variance estimates of the original data are retained after multiple imputation with less than 50% missing continuous predictor data. For dichotomous predictor data under the multinomial model, Type I error rates are generally conservative, which in part is due to the sparseness of the data. The correlation structure for the predictor variables is not well retained on multiply-imputed data from small samples with more than 50% missing data with this model. For mixed continuous and dichotomous predictor data, the results are similar to those found under the multivariate normal model for continuous data and under the multinomial model for dichotomous data. With all data types, a fully-observed variable included with variables subject to missingness in the multiple imputation process and subsequent statistical analysis provided liberal (larger than nominal values) Type I error rates under a specific pattern of missing data. It is suggested that future studies focus on the effects of multiple imputation in multivariate settings with more realistic data characteristics and a variety of multivariate analyses, assessing both Type I error and power.
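For reference, once the m imputed data sets have been analysed with logistic regression, the coefficient estimates are typically pooled with Rubin's rules before Type I error rates are tallied; a minimal sketch of that pooling step (generic textbook formulas and hypothetical numbers, not the study's specific simulation design):

import numpy as np
from scipy import stats

def pool_rubin(estimates, variances):
    """Pool one coefficient across m imputations using Rubin's rules."""
    estimates, variances = np.asarray(estimates), np.asarray(variances)
    m = len(estimates)
    qbar = estimates.mean()                      # pooled point estimate
    ubar = variances.mean()                      # within-imputation variance
    b = estimates.var(ddof=1)                    # between-imputation variance
    t = ubar + (1 + 1 / m) * b                   # total variance
    df = (m - 1) * (1 + ubar / ((1 + 1 / m) * b)) ** 2
    p = 2 * stats.t.sf(abs(qbar) / np.sqrt(t), df)
    return qbar, np.sqrt(t), p

# Hypothetical coefficient estimates and squared standard errors from m = 5
# logistic regressions, one per imputed data set.
est = [0.12, 0.08, 0.15, 0.05, 0.10]
var = [0.04, 0.05, 0.04, 0.06, 0.05]
print(pool_rubin(est, var))   # a rejection at alpha = 0.05 would count towards Type I error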
Abstract:
Geostrophic surface velocities can be derived from the gradients of the mean dynamic topography, that is, the difference between the mean sea surface and the geoid. Therefore, independently observed mean dynamic topography data are valuable input parameters and constraints for ocean circulation models. For a successful fit to observational dynamic topography data, not only the mean dynamic topography on the particular ocean model grid is required, but also information about its inverse covariance matrix. The calculation of the mean dynamic topography from satellite-based gravity field models and altimetric sea surface height measurements, however, is not straightforward. For this purpose, we previously developed an integrated approach to combining these two different observation groups in a consistent way without using the common filter approaches (Becker et al. in J Geodyn 59(60):99-110, 2012, doi:10.1016/j.jog.2011.07.0069; Becker in Konsistente Kombination von Schwerefeld, Altimetrie und hydrographischen Daten zur Modellierung der dynamischen Ozeantopographie, 2012, http://nbn-resolving.de/nbn:de:hbz:5n-29199). Within this combination method, the full spectral range of the observations is considered. Further, it allows the direct determination of the normal equations (i.e., the inverse of the error covariance matrix) of the mean dynamic topography on arbitrary grids, which is one of the requirements for ocean data assimilation. In this paper, we report progress through selection and improved processing of altimetric data sets. We focus on the preprocessing steps of along-track altimetry data from Jason-1 and Envisat to obtain a mean sea surface profile. During this procedure, a rigorous variance propagation is accomplished, so that, for the first time, the full covariance matrix of the mean sea surface is available. The combination of the mean profile and a combined GRACE/GOCE gravity field model yields a mean dynamic topography model for the North Atlantic Ocean that is characterized by a defined set of assumptions. We show that including the geodetically derived mean dynamic topography with the full error structure in a 3D stationary inverse ocean model improves modeled oceanographic features over previous estimates.
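For context, the geostrophic surface velocities mentioned in the first sentence follow from the mean dynamic topography \eta (mean sea surface minus geoid) through the standard relations

u = -\frac{g}{f}\,\frac{\partial \eta}{\partial y}, \qquad v = \frac{g}{f}\,\frac{\partial \eta}{\partial x}, \qquad f = 2\Omega\sin\varphi,

where g is the gravitational acceleration, f the Coriolis parameter and \varphi the latitude; this is why both the mean dynamic topography itself and its error covariance (via the normal equations) matter when the field is used to constrain an ocean circulation model.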
Abstract:
Corals are acclimatized to populate dynamic habitats that neighbour coral reefs. Habitats such as seagrass beds exhibit broad diel changes in temperature and pH that routinely expose corals to conditions predicted for reefs over the next 50-100 years. However, whether such acclimatization effectively enhances physiological tolerance to, and hence provides refuge against, future climate scenarios remains unknown. Also, whether corals living in low-variance habitats can tolerate present-day high-variance conditions remains untested. We experimentally examined how pH and temperature predicted for the year 2100 affect the growth and physiology of two dominant Caribbean corals (Acropora palmata and Porites astreoides) native to habitats with intrinsically low (outer-reef terrace, LV) and/or high (neighbouring seagrass, HV) environmental variance. Under present-day temperature and pH, growth and metabolic rates (calcification, respiration and photosynthesis) were unchanged for HV versus LV populations. Superimposing future climate scenarios onto the HV and LV conditions did not result in any enhanced tolerance in colonies native to HV. Calcification rates were always lower for elevated temperature and/or reduced pH. Together, these results suggest that seagrass habitats may not serve as refugia against climate change if the magnitude of future temperature and pH changes is equivalent to that of neighbouring reef habitats.
Abstract:
Risk-ranking protocols are used widely to classify the conservation status of the world's species. Here we report on the first empirical assessment of their reliability by using a retrospective study of 18 pairs of bird and mammal species (one species extinct and the other extant) with eight different assessors. The performance of individual assessors varied substantially, but performance was improved by incorporating uncertainty in parameter estimates and consensus among the assessors. When this was done, the ranks from the protocols were consistent with the extinction outcome in 70-80% of pairs and there were mismatches in only 10-20% of cases. This performance was similar to the subjective judgements of the assessors after they had estimated the range and population parameters required by the protocols, and better than any single parameter. When used to inform subjective judgement, the protocols therefore offer a means of reducing unpredictable biases that may be associated with expert input and have the advantage of making the logic behind assessments explicit. We conclude that the protocols are useful for forecasting extinctions, although they are prone to some errors that have implications for conservation. Some level of error is to be expected, however, given the influence of chance on extinction. The performance of risk assessment protocols may be improved by providing training in the application of the protocols, incorporating uncertainty in parameter estimates and using consensus among multiple assessors, including some who are experts in the application of the protocols. Continued testing and refinement of the protocols may help to provide better absolute estimates of risk, particularly by re-evaluating how the protocols accommodate missing data.