45 results for Macroeconomic variables

in CentAUR: Central Archive University of Reading - UK


Relevance:

60.00%

Abstract:

This paper investigates the properties of implied volatility series calculated from options on Treasury bond futures traded on LIFFE. We demonstrate that using near-maturity at-the-money options to calculate implied volatilities causes less mispricing and is therefore superior to a weighted-average measure encompassing all relevant options. We also demonstrate that, whilst a set of macroeconomic variables has some predictive power for implied volatilities, we are not able to earn excess returns by trading on the basis of these predictions once we allow for typical investor transaction costs.
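To fix ideas, the inversion behind such an implied volatility series can be sketched as backing out a Black (1976) volatility from a single near-maturity at-the-money option on a bond future by bisection. The prices, strike, maturity and rate below are invented for illustration and are not taken from the paper.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black76_call(F, K, T, r, sigma):
    """Black (1976) price of a European call on a futures contract."""
    d1 = (math.log(F / K) + 0.5 * sigma**2 * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return math.exp(-r * T) * (F * norm_cdf(d1) - K * norm_cdf(d2))

def implied_vol(price, F, K, T, r, lo=1e-4, hi=2.0, tol=1e-8):
    """Back out the Black-76 volatility by bisection; the call price is
    increasing in sigma, so the root is bracketed by [lo, hi]."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if black76_call(F, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Hypothetical near-maturity at-the-money option on a bond future.
sigma = implied_vol(price=1.05, F=110.0, K=110.0, T=1.0 / 12.0, r=0.05)
```

Repeating this for each trading day yields an implied volatility series of the kind the paper studies.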

Relevance:

60.00%

Abstract:

The aim of this paper is to explore the effects of macroeconomic variables on house prices and the lead-lag relationships of real estate markets, in order to examine house price diffusion across Asian financial centres. The analysis is based on the Global Vector Auto-Regression (GVAR) model estimated using quarterly data for six Asian financial centres (Hong Kong, Tokyo, Seoul, Singapore, Taipei and Bangkok) from 1991Q1 to 2011Q2. The empirical results indicate that global economic conditions play a significant role in shaping house price movements across Asian financial centres. In particular, small open economies that rely heavily on international trade, such as Singapore and Tokyo, show positive correlations between economic openness and house prices, consistent with the Balassa-Samuelson hypothesis in international trade. However, region-specific conditions also play important roles as determinants of house prices, partly due to restrictive housing policies and demand-supply imbalances, as found in Singapore and Bangkok.

Relevance:

60.00%

Abstract:

We investigate alternative robust approaches to forecasting, using a new class of robust devices, contrasted with equilibrium-correction models. Their forecasting properties are derived facing a range of likely empirical problems at the forecast origin, including measurement errors, impulses, omitted variables, unanticipated location shifts and incorrectly included variables that experience a shift. We derive the resulting forecast biases and error variances, and indicate when the methods are likely to perform well. The robust methods are applied to forecasting US GDP using autoregressive models, and also to autoregressive models with factors extracted from a large dataset of macroeconomic variables. We consider forecasting performance over the Great Recession, and over an earlier more quiescent period.
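The flavour of a robust device can be sketched with a hypothetical simulation: after an unanticipated location shift, a misspecified AR(1) fitted to pre-break data keeps pulling forecasts towards the old equilibrium, while the simple "no change" (random-walk) device adapts within one period. All parameter values below are invented for illustration and are not the paper's design.

```python
import random

random.seed(0)

# Hypothetical AR(1) series whose mean jumps from 0 to 5 at t = 80.
mu, rho, n, break_t, shift = 0.0, 0.5, 100, 80, 5.0
y = [0.0]
for t in range(1, n):
    mean_t = mu + (shift if t >= break_t else 0.0)
    y.append(mean_t * (1 - rho) + rho * y[-1] + random.gauss(0.0, 0.5))

# Fit the AR(1) on pre-break data, mimicking a model that misses the shift.
pre = y[:break_t]
ybar = sum(pre) / len(pre)
num = sum((pre[t] - ybar) * (pre[t - 1] - ybar) for t in range(1, len(pre)))
den = sum((v - ybar) ** 2 for v in pre[:-1])
rho_hat = num / den
c_hat = ybar * (1 - rho_hat)

# One-step-ahead forecasts after the break: the fitted AR reverts to the
# old mean; the robust device forecasts y_hat_t = y_{t-1} and so adapts.
ar_err = [abs(c_hat + rho_hat * y[t - 1] - y[t]) for t in range(break_t + 2, n)]
rw_err = [abs(y[t - 1] - y[t]) for t in range(break_t + 2, n)]
mae_ar = sum(ar_err) / len(ar_err)
mae_rw = sum(rw_err) / len(rw_err)
```

In quiescent periods with no break the ranking typically reverses, which is why the paper derives the conditions under which each method performs well.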

Relevance:

60.00%

Abstract:

We consider the forecasting of macroeconomic variables that are subject to revisions, using Bayesian vintage-based vector autoregressions. The prior incorporates the belief that, after the first few data releases, subsequent ones are likely to consist of revisions that are largely unpredictable. The Bayesian approach allows the joint modelling of the data revisions of more than one variable, while keeping the concomitant increase in parameter estimation uncertainty manageable. Our model provides markedly more accurate forecasts of post-revision values of inflation than do other models in the literature.

Relevance:

60.00%

Abstract:

The study examines the impact of liquidity risk on freight derivatives returns. The Amihud liquidity ratio and bid–ask spreads are utilized to assess the existence of liquidity risk in the freight derivatives market. Other macroeconomic variables are used to control for market risk. Results indicate that liquidity risk is priced and both liquidity measures have a significant role in determining freight derivatives returns. Consistent with expectations, both liquidity measures are found to have positive and significant effects on the returns of freight derivatives. The results have important implications for modeling freight derivatives, and consequently, for trading and risk management purposes.
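For reference, the Amihud (2002) illiquidity ratio used above is the average of absolute daily return per unit of traded volume; higher values mean a unit of volume moves the price more, i.e. the contract is less liquid. A minimal sketch with invented freight-derivative data:

```python
def amihud_illiquidity(returns, volumes):
    """Amihud illiquidity ratio: mean of |return| / volume over the sample."""
    if len(returns) != len(volumes):
        raise ValueError("returns and volumes must align")
    return sum(abs(r) / v for r, v in zip(returns, volumes)) / len(returns)

# Hypothetical daily freight-derivative returns and traded volumes (lots).
rets = [0.012, -0.008, 0.020, -0.015, 0.005]
vols = [1500, 1200, 900, 1100, 1600]
illiq = amihud_illiquidity(rets, vols)
```

In an asset-pricing test of the kind the abstract describes, this ratio would enter the return regression alongside bid-ask spreads and macroeconomic controls.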

Relevance:

60.00%

Abstract:

Model-based estimates of future uncertainty are generally based on the in-sample fit of the model, as when Box-Jenkins prediction intervals are calculated. However, this approach will generate biased uncertainty estimates in real time when there are data revisions. A simple remedy is suggested, and used to generate more accurate prediction intervals for 25 macroeconomic variables, in line with the theory. A simulation study based on an empirically-estimated model of data revisions for US output growth is used to investigate small-sample properties.
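A Box-Jenkins interval for a known AR(1), to fix ideas: the interval widens with the horizon, and if the innovation standard deviation is estimated from unrevised first-release data it will be biased in real time, which is the problem the abstract describes. The parameter values below are illustrative, not taken from the paper.

```python
import math

def ar1_interval(y_T, rho, sigma, h, z=1.96):
    """Box-Jenkins 95% prediction interval for a known AR(1) at horizon h.

    Point forecast: rho**h * y_T.
    Forecast-error variance: sigma**2 * (1 - rho**(2*h)) / (1 - rho**2).
    """
    point = rho**h * y_T
    se = sigma * math.sqrt((1 - rho**(2 * h)) / (1 - rho**2))
    return point - z * se, point + z * se

# Illustrative parameters: persistence 0.6, innovation s.d. 0.5.
lo1, hi1 = ar1_interval(y_T=1.0, rho=0.6, sigma=0.5, h=1)
lo4, hi4 = ar1_interval(y_T=1.0, rho=0.6, sigma=0.5, h=4)
```

The remedy the paper proposes amounts, roughly, to widening such intervals using the variance of the revision process rather than the in-sample fit alone.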

Relevance:

20.00%

Abstract:

The variogram is essential for local estimation and mapping of any variable by kriging. The variogram itself must usually be estimated from sample data. The sampling density is a compromise between precision and cost, but it must be sufficiently dense to encompass the principal spatial sources of variance. A nested, multi-stage sampling scheme with separating distances increasing in geometric progression from stage to stage will do that. The data may then be analyzed by a hierarchical analysis of variance to estimate the components of variance for every stage, and hence lag. By accumulating the components starting from the shortest lag one obtains a rough variogram for modest effort. For balanced designs the analysis of variance is optimal; for unbalanced ones, however, these estimators are not necessarily the best, and analysis by residual maximum likelihood (REML) will usually be preferable. The paper summarizes the underlying theory and illustrates its application with data from three surveys: one in which the design had four stages and was balanced, and two implemented with unbalanced designs to economize when there were more stages. A Fortran program is available for the analysis of variance, and code for the REML analysis is listed in the paper. (c) 2005 Elsevier Ltd. All rights reserved.
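The accumulation step can be sketched directly: given the per-stage variance components from the hierarchical analysis of variance (the values below are hypothetical), summing the components from the shortest lag upward yields the rough variogram.

```python
# Hypothetical variance components from a four-stage nested survey:
# (separating distance in metres, variance component at that stage).
stages = [
    (600.0, 0.35),
    (190.0, 0.20),
    (60.0, 0.15),
    (19.0, 0.10),
]

def rough_variogram(stages):
    """Semivariance at each lag = sum of the components at that stage and
    all finer stages, accumulated from the shortest lag upward."""
    out = []
    acc = 0.0
    for lag, comp in sorted(stages):  # shortest separating distance first
        acc += comp
        out.append((lag, acc))
    return out

gamma = rough_variogram(stages)
```

The resulting points rise monotonically towards the total variance, as a variogram estimated this way must.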

Relevance:

20.00%

Abstract:

One of the aims of a broad ethnographic study into how the apportionment of risk influences the pricing levels of contractors was to ascertain the significant risks affecting contractors in Ghana, and their impact on prices. To do this, in the context of contractors, the difference between expected and realized return on a project is the key dependent variable, examined using documentary analyses and semi-structured interviews. Most work in this area has focused on identifying and prioritising risks using relative importance indices generated from the analysis of questionnaire survey responses. However, this approach may be argued to capture perceptions rather than direct measures of project risk. Here, instead, project risk is investigated by examining two measures of the same quantity: one ‘before’ and one ‘after’ construction of a project has taken place. Risk events are identified by ascertaining the independent variables causing deviations between expected and actual rates of return. Risk impact is then measured by ascertaining additions or reductions to expected costs due to the occurrence of risk events. So far, data from eight substantially complete building projects indicate that consultants’ inefficiency, payment delays, subcontractor-related problems and changes in macroeconomic factors are significant risks affecting contractors in Ghana.

Relevance:

20.00%

Abstract:

A new digital atlas of the geomorphology of the Namib Sand Sea in southern Africa has been developed. This atlas incorporates a number of databases including a digital elevation model (ASTER and SRTM) and other remote sensing databases that cover climate (ERA-40) and vegetation (PAL and GIMMS). A map of dune types in the Namib Sand Sea has been derived from Landsat and CNES/SPOT imagery. The atlas also includes a collation of geochronometric dates, largely derived from luminescence techniques, and a bibliographic survey of the research literature on the geomorphology of the Namib dune system. Together these databases provide valuable information that can be used as a starting point for tackling important questions about the development of the Namib and other sand seas in the past, present and future.

Relevance:

20.00%

Abstract:

Variational data assimilation systems for numerical weather prediction rely on a transformation of model variables to a set of control variables that are assumed to be uncorrelated. Most implementations of this transformation are based on the assumption that the balanced part of the flow can be represented by the vorticity. However, this assumption is likely to break down in dynamical regimes characterized by low Burger number. It has recently been proposed that a variable transformation based on potential vorticity should lead to control variables that are uncorrelated over a wider range of regimes. In this paper we test the assumption that a transform based on vorticity and one based on potential vorticity produce an uncorrelated set of control variables. Using a shallow-water model we calculate the correlations between the transformed variables in the different methods. We show that the control variables resulting from a vorticity-based transformation may retain large correlations in some dynamical regimes, whereas a potential vorticity based transformation successfully produces a set of uncorrelated control variables. Calculations of spatial correlations show that the benefit of the potential vorticity transformation is linked to its ability to capture more accurately the balanced component of the flow.
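A minimal sketch of the two ingredients, assuming the standard shallow-water potential vorticity q = (ζ + f)/h and a plain Pearson correlation as the diagnostic for whether two candidate control variables are uncorrelated; the point values below are invented for illustration.

```python
import math

def shallow_water_pv(zeta, f, h):
    """Shallow-water potential vorticity q = (zeta + f) / h at each point."""
    return [(z + f) / depth for z, depth in zip(zeta, h)]

def pearson(a, b):
    """Sample correlation between two candidate control variables; values
    near zero indicate the transform has decorrelated them."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# Invented point values: in a balanced regime, height anomalies mirror the
# vorticity field, so a vorticity-based split leaves correlated residuals.
zeta = [1e-5, -2e-5, 3e-5, -1e-5, 2e-5]   # relative vorticity (1/s)
h = [98.0, 102.0, 96.0, 101.0, 97.0]      # layer depth (m)
q = shallow_water_pv(zeta, 1e-4, h)
r = pearson(zeta, h)
```

In the paper's setting the same kind of correlation diagnostic is applied to the full transformed fields of the shallow-water model, across dynamical regimes of varying Burger number.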