16 results for Forecasting and replenishment (CPFR)
in Helda - Digital Repository of University of Helsinki
Abstract:
Summary: A watercourse monitoring and forecasting system based on watershed models in the water and environment administration
Abstract:
Modeling and forecasting implied volatility (IV) is important to both practitioners and academics, especially in trading, pricing, hedging, and risk management activities, all of which require accurate volatility estimates. The task has, however, become challenging since the 1987 stock market crash, as implied volatilities (IVs) recovered from stock index options exhibit two patterns, the volatility smirk (skew) and the volatility term structure, which together form a rich implied volatility surface (IVS). This implies that the assumptions behind the Black-Scholes (1973) model do not hold empirically, as asset prices are influenced by many underlying risk factors. This thesis, consisting of four essays, models and forecasts implied volatility in the presence of these empirical regularities of options markets. The first essay models the dynamics of the IVS by extending the Dumas, Fleming and Whaley (DFW) (1998) framework: using moneyness in the implied forward price and OTM put-call options on the FTSE100 index, nonlinear optimization is used to estimate different models and thereby produce rich, smooth IVSs. The constant-volatility model fails to explain the variation in the rich IVS. Further, three factors are found to explain about 69-88% of the variance in the IVS; on average, 56% is explained by the level factor, 15% by the term-structure factor, and an additional 7% by the jump-fear factor. The second essay proposes a quantile regression model for the contemporaneous asymmetric return-volatility relationship, generalizing the model of Hibbert et al. (2008). The results show a strong negative asymmetric return-volatility relationship at various quantiles of the IV distributions, and the relationship increases monotonically when moving from the median quantile to the uppermost quantile (i.e., 95%); OLS therefore underestimates this relationship at upper quantiles. The asymmetric relationship is also more pronounced with the smirk (skew) adjusted volatility index measure than with the old volatility index measure. The volatility indices are ranked in terms of asymmetric volatility as follows: VIX, VSTOXX, VDAX, and VXN. The third essay examines the information content of the new VDAX volatility index for forecasting daily Value-at-Risk (VaR) and compares its VaR forecasts with those of Filtered Historical Simulation and RiskMetrics. All daily VaR models are backtested over 1992-2009 using unconditional coverage, independence, conditional coverage, and quadratic-score tests. The VDAX is found to subsume almost all information required for daily VaR forecasts for a portfolio on the DAX30 index, and implied-VaR models outperform all other VaR models. The fourth essay models the risk factors driving swaption IVs. Three factors are found to explain 94-97% of the variation in each of the EUR, USD, and GBP swaption IVs. There are significant linkages across factors, and bi-directional causality is at work between the factors implied by EUR and USD swaption IVs. Furthermore, the factors implied by EUR and USD IVs respond to each other's shocks, whereas, surprisingly, GBP does not affect them. Finally, the string market model calibration results show that it can efficiently reproduce (or forecast) the volatility surface for each of the swaption markets.
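As an illustration of the kind of deterministic volatility function estimated in frameworks such as DFW (1998), the following is a minimal sketch of a surface fit with implied volatility quadratic in moneyness and linear in maturity. The quadratic specification, the simple least-squares estimator and the variable names are illustrative assumptions, not the exact models or the nonlinear optimization used in the thesis.

```python
# Sketch: a DFW (1998)-style deterministic volatility function,
#   sigma(M, T) = b0 + b1*M + b2*M^2 + b3*T + b4*M*T,
# fitted to observed implied volatilities by ordinary least squares.
# Column names and the quadratic form are illustrative assumptions.
import numpy as np

def fit_dvf(moneyness, maturity, implied_vol):
    """Fit the quadratic surface and return its coefficients."""
    m = np.asarray(moneyness, dtype=float)
    t = np.asarray(maturity, dtype=float)
    X = np.column_stack([
        np.ones_like(m),   # b0: overall level
        m,                 # b1: skew
        m ** 2,            # b2: smile curvature
        t,                 # b3: term structure
        m * t,             # b4: skew-maturity interaction
    ])
    coef, *_ = np.linalg.lstsq(X, np.asarray(implied_vol, dtype=float),
                               rcond=None)
    return coef

def dvf_surface(coef, moneyness, maturity):
    """Evaluate the fitted surface at given moneyness and maturity points."""
    m = np.asarray(moneyness, dtype=float)
    t = np.asarray(maturity, dtype=float)
    return (coef[0] + coef[1] * m + coef[2] * m ** 2
            + coef[3] * t + coef[4] * m * t)
```

Evaluating the fitted coefficients on a grid of moneyness and maturity values produces a smooth surface of the kind whose variance the three-factor decomposition above refers to.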
Abstract:
Recently, the focus of real estate investment has expanded from the building-specific level to the aggregate portfolio level. The portfolio perspective requires investment analysis for real estate that is comparable with that of other asset classes, such as stocks and bonds. Thus, despite its distinctive features, such as heterogeneity, high unit value, illiquidity and the use of valuations to measure performance, real estate should not be considered in isolation. This means that techniques which are widely used for other asset classes can also be applied to real estate. An important part of investment strategies which support decisions on multi-asset portfolios is identifying the fundamentals of movements in property rents and returns, and predicting them on the basis of these fundamentals. The main objective of this thesis is to find the key drivers and the best methods for modelling and forecasting property rents and returns in markets which have experienced structural changes. The Finnish property market, a small European market with structural changes and limited property data, is used as a case study. The findings in the thesis show that it is possible to use modern econometric tools for modelling and forecasting property markets. The thesis consists of an introductory part and four essays. Essays 1 and 3 model Helsinki office rents and returns, and assess the suitability of alternative techniques for forecasting these series. Simple time series techniques are able to account for structural changes in the way markets operate, and thus provide the best forecasting tool. Theory-based econometric models, in particular error correction models, which are constrained by long-run information, are better for explaining past movements in rents and returns than for predicting their future movements. Essay 2 proceeds by examining the key drivers of rent movements for several property types in a number of Finnish property markets. The essay shows that commercial rents in local markets can be modelled using national macroeconomic variables and a panel approach. Finally, Essay 4 investigates whether forecasting models can be improved by accounting for asymmetric responses of office returns to the business cycle. The essay finds that the forecast performance of time series models can be improved by introducing asymmetries, and the improvement is sufficient to justify the extra computational time and effort associated with the application of these techniques.
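For readers unfamiliar with the error correction models mentioned above, the following is a minimal two-step Engle-Granger sketch for a rent series. The driver series (a generic gdp variable), the single-lag structure and the log transformation are illustrative assumptions, not the specifications estimated in Essays 1 and 3.

```python
# Sketch: a two-step Engle-Granger error correction model (ECM) for rents.
# The driver series (gdp) and lag structure are illustrative assumptions.
import numpy as np
import statsmodels.api as sm

def fit_ecm(rent, gdp):
    """rent, gdp: equal-length 1-D arrays in logs."""
    rent = np.asarray(rent, dtype=float)
    gdp = np.asarray(gdp, dtype=float)

    # Step 1: long-run (cointegrating) regression in levels.
    long_run = sm.OLS(rent, sm.add_constant(gdp)).fit()
    ect = long_run.resid                       # error correction term

    # Step 2: short-run dynamics in first differences with the lagged ECT.
    d_rent, d_gdp = np.diff(rent), np.diff(gdp)
    X = sm.add_constant(np.column_stack([d_gdp, ect[:-1]]))
    short_run = sm.OLS(d_rent, X).fit()
    return long_run, short_run

def forecast_drent(long_run, short_run, rent_last, gdp_last, d_gdp_next):
    """One-step-ahead forecast of the change in (log) rent."""
    a, b = long_run.params                     # long-run intercept and slope
    ect_last = rent_last - (a + b * gdp_last)  # deviation from equilibrium
    const, beta_dgdp, alpha = short_run.params
    return const + beta_dgdp * d_gdp_next + alpha * ect_last
```

The lagged error correction term pulls the forecast back towards the long-run relationship, which is why such models explain past movements well but can struggle to predict turning points after structural change.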
Abstract:
A diffusion/replacement model for new consumer durables, designed to be used as a long-term forecasting tool, is developed. The model simulates new demand as well as replacement demand over time. The model is called DEMSIM and is built upon a counteractive adoption model specifying the basic forces affecting the adoption behaviour of individual consumers. These forces are the promoting forces and the resisting forces. The promoting forces are further divided into internal and external influences. These influences are operationalized within a multi-segmental diffusion model generating the adoption behaviour of the consumers in each segment as an expected value. This diffusion model is combined with a replacement model built upon the same segmental structure as the diffusion model. This model generates, in turn, the expected replacement behaviour in each segment. To be able to use DEMSIM as a forecasting tool in the early stages of a diffusion process, estimates of the model parameters are needed as soon as possible after product launch. However, traditional statistical techniques are not very helpful in estimating such parameters in the early stages of a diffusion process. To enable early parameter calibration, an optimization algorithm is developed by which the main parameters of the diffusion model can be estimated on the basis of very few sales observations. The optimization is carried out in iterative simulation runs. Empirical validations using the optimization algorithm reveal that the diffusion model performs well in early long-term sales forecasts, especially when it comes to the timing of future sales peaks.
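A rough idea of how a diffusion-plus-replacement simulation of this kind can work is sketched below, using a single segment, a Bass-type adoption rule and a fixed lifetime distribution. DEMSIM itself is multi-segmental and built on counteractive promoting and resisting forces, so the structure and parameters here are purely illustrative.

```python
# Sketch: simulate new (first-purchase) demand with a Bass-type adoption rule
# and replacement demand from a fixed lifetime distribution. Single segment,
# illustrative parameters; not the DEMSIM specification itself.
import numpy as np

def simulate_demand(periods, market_size, p=0.03, q=0.4,
                    lifetime_probs=(0.0, 0.1, 0.3, 0.4, 0.2)):
    """Return arrays of new and replacement sales per period."""
    cumulative_adopters = 0.0
    new_sales = np.zeros(periods)
    replacement_sales = np.zeros(periods)

    for t in range(periods):
        # Bass-type hazard: external (p) plus internal (q) influence.
        remaining = market_size - cumulative_adopters
        adoption_rate = p + q * cumulative_adopters / market_size
        new_sales[t] = adoption_rate * remaining
        cumulative_adopters += new_sales[t]

        # Units sold now fail s periods later with probability
        # lifetime_probs[s]; replacement units are themselves replaced later.
        total_sales_t = new_sales[t] + replacement_sales[t]
        for s, prob in enumerate(lifetime_probs[1:], start=1):
            if t + s < periods:
                replacement_sales[t + s] += prob * total_sales_t

    return new_sales, replacement_sales
```

Early parameter calibration would then amount to searching over parameters such as p and q so that the simulated sales path matches the few observations available shortly after launch.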
Abstract:
In recent years, thanks to developments in information technology, large-dimensional datasets have become increasingly available. Researchers now have access to thousands of economic series, and the information contained in them can be used to create accurate forecasts and to test economic theories. To exploit this large amount of information, researchers and policymakers need an appropriate econometric model. Usual time series models, vector autoregressions for example, cannot incorporate more than a few variables. There are two ways to solve this problem: use variable selection procedures, or gather the information contained in the series into an index model. This thesis focuses on one of the most widespread index models, the dynamic factor model (the theory behind this model, based on previous literature, is the core of the first part of this study), and its use in forecasting Finnish macroeconomic indicators (the focus of the second part of the thesis). In particular, I forecast economic activity indicators (e.g. GDP) and price indicators (e.g. the consumer price index) from three large Finnish datasets. The first dataset contains a large set of aggregated series obtained from the Statistics Finland database. The second dataset is composed of economic indicators from the Bank of Finland. The last dataset is formed by disaggregated data from Statistics Finland, which I call the micro dataset. The forecasts are computed following a two-step procedure: in the first step I estimate a set of common factors from the original dataset; the second step consists in formulating forecasting equations that include the factors extracted previously. The predictions are evaluated using the relative mean squared forecast error, where the benchmark model is a univariate autoregressive model. The results are dataset-dependent. The forecasts based on factor models are very accurate for the first dataset (the Statistics Finland one), while they are considerably worse for the Bank of Finland dataset. The forecasts derived from the micro dataset are still good, but less accurate than the ones obtained in the first case. This work leads to multiple research developments. The results obtained here can be replicated for longer datasets. The non-aggregated data can be represented in an even more disaggregated form (firm level). Finally, the use of the micro data, one of the major contributions of this thesis, can be useful in the imputation of missing values and the creation of flash estimates of macroeconomic indicators (nowcasting).
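The two-step diffusion-index procedure described above can be sketched as follows: factors are extracted by principal components from the standardized panel, and the target is then regressed on the factors and its own lags. The number of factors, the lag order and the forecast horizon are illustrative choices; in the thesis, accuracy is judged against a univariate autoregressive benchmark via the relative mean squared forecast error.

```python
# Sketch: two-step factor-model forecast.
# Step 1: principal-component factors from a standardized (T x N) panel.
# Step 2: regress the h-step-ahead target on the factors and lags of itself.
import numpy as np

def estimate_factors(panel, n_factors):
    """panel: (T, N) array of stationary series. Returns (T, n_factors)."""
    z = (panel - panel.mean(axis=0)) / panel.std(axis=0)
    u, s, _ = np.linalg.svd(z, full_matrices=False)   # principal components
    return u[:, :n_factors] * s[:n_factors]

def factor_forecast(y, factors, horizon=1, p_lags=1):
    """Forecast y at T + horizon from current factors and lags of y."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    rows, targets = [], []
    for t in range(p_lags - 1, T - horizon):
        rows.append(np.r_[1.0, factors[t], y[t - p_lags + 1:t + 1]])
        targets.append(y[t + horizon])
    X, Y = np.asarray(rows), np.asarray(targets)
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    x_last = np.r_[1.0, factors[T - 1], y[T - p_lags:T]]
    return x_last @ beta
```

Dividing the mean squared forecast error of this model by that of a simple AR model on the same target gives the relative measure used to compare the three datasets.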
Abstract:
The ongoing rapid fragmentation of tropical forests is a major threat to global biodiversity, because many tropical forests are so-called biodiversity 'hotspots', areas that host exceptional species richness and concentrations of endemic species. Forest fragmentation has negative ecological and genetic consequences for plant survival. Proposed reasons for plant species' loss in forest fragments include abiotic edge effects, altered species interactions, increased genetic drift, and inbreeding depression. To be able to conserve plants in forest fragments, the ecological and genetic processes that threaten the species have to be understood. That is possible only after obtaining adequate information on their biology, including taxonomy, life history, reproduction, and the spatial and genetic structure of the populations. In this research, I focused on the African violet (genus Saintpaulia), a little-studied conservation flagship from the Eastern Arc Mountains and Coastal Forests hotspot of Tanzania and Kenya. The main objective of the research was to increase understanding of the life history, ecology and population genetics of Saintpaulia that is needed for the design of appropriate conservation measures. A further aim was to provide population-level insights into the difficult taxonomy of Saintpaulia. Ecological field work was conducted in a relatively little-fragmented protected forest in the Amani Nature Reserve in the East Usambara Mountains, in northeastern Tanzania, complemented by population genetic laboratory work and ecological experiments in Helsinki, Finland. All components of the research were conducted with Saintpaulia ionantha ssp. grotei, which forms a taxonomically controversial population complex in the study area. My results suggest that Saintpaulia has good reproductive performance in forests with low disturbance levels in the East Usambara Mountains. Another important finding was that seed production depends on sufficient pollinator service. The availability of pollinators should thus be considered in the in situ management of threatened populations. Dynamic population stage structures were observed, suggesting that the studied populations are demographically viable. High mortality of seedlings and juveniles was observed during the dry season, but this was compensated for by ample recruitment of new seedlings after the rainy season. Reduced tree canopy closure and substrate quality are likely to exacerbate seedling and juvenile mortality, and, therefore, forest fragmentation and disturbance are serious threats to the regeneration of Saintpaulia. Restoration of sufficient shade to enhance seedling establishment is an important conservation measure in populations located in disturbed habitats. Long-term demographic monitoring, which enables the forecasting of a population's future, is also recommended in disturbed habitats. High genetic diversities were observed in the populations, which suggests that they possess the variation needed for evolutionary responses in a changing environment. Thus, genetic management of the studied populations does not seem necessary as long as the habitats remain favourable for Saintpaulia. The observed high levels of inbreeding in some of the populations, and the reduced fitness of the inbred progeny compared to the outbred progeny, as revealed by the hand-pollination experiment, indicate that inbreeding and inbreeding depression are potential mechanisms contributing to the extinction of Saintpaulia populations.
The relatively weak genetic divergence of the three different morphotypes of Saintpaulia ionantha ssp. grotei lends support to the hypothesis that the populations in the Usambara/lowlands region represent a segregating metapopulation (or metapopulations), where subpopulations are adapting to their particular environments. The partial genetic and phenological integrity, and the distinct trailing habit, of the morphotype 'grotei' would, however, justify its placement in a taxonomic rank of its own, perhaps at subspecific rank.
Abstract:
This thesis covers three subject areas concerning particulate matter in urban air quality: 1) analysis of measured particulate matter mass concentrations in the Helsinki Metropolitan Area (HMA) at different locations relative to traffic sources, and at different times of year and day; 2) the evolution of the number concentrations and sizes of traffic exhaust originated particulate matter at the local street scale, studied by combining a dispersion model with an aerosol process model; 3) analysis of situations with high particulate matter concentrations with regard to their meteorological origins, especially temperature inversions, in the HMA and three other European cities. The prediction of the occurrence of meteorological conditions conducive to elevated particulate matter concentrations in the studied cities is examined, and the performance of current numerical weather forecasting models in air pollution episode situations is considered. The study of the ambient measurements revealed a clear diurnal variation of the PM10 concentrations at the HMA measurement sites, irrespective of the year and the season. The diurnal variation of local vehicular traffic flows showed no substantial correlation with the PM2.5 concentrations, indicating that the PM10 concentrations originated mainly from local vehicular traffic (direct emissions and suspension), while the PM2.5 concentrations were mostly of regional and long-range transported origin. The modelling study of traffic exhaust dispersion and transformation showed that the number concentrations of particles originating from street traffic exhaust undergo a substantial change during the first tens of seconds after being emitted from the vehicle tailpipe. The dilution process was shown to dominate total number concentrations, while condensation and coagulation had only a minimal effect on the Aitken mode number concentrations. The air pollution episodes included were chosen on the basis of occurring in either winter or spring and having an at least partly local origin. In the HMA, air pollution episodes were shown to be linked to predominantly stable atmospheric conditions with high atmospheric pressure and low wind speeds in conjunction with relatively low ambient temperatures. For the other European cities studied, the best meteorological predictors of elevated PM10 concentrations were shown to be the temporal (hourly) evolution of temperature inversions, atmospheric stability and, in some cases, wind speed. Concerning weather prediction during particulate-matter-related air pollution episodes, the studied models were found to overpredict pollutant dispersion, leading to underprediction of pollutant concentration levels.
Abstract:
This thesis studies binary time series models and their applications in empirical macroeconomics and finance. In addition to previously suggested models, new dynamic extensions are proposed to the static probit model commonly used in the previous literature. In particular, we are interested in probit models with an autoregressive model structure. In Chapter 2, the main objective is to compare the predictive performance of the static and dynamic probit models in forecasting U.S. and German business cycle recession periods. Financial variables, such as interest rates and stock market returns, are used as predictive variables. The empirical results suggest that the recession periods are predictable and that dynamic probit models, especially models with the autoregressive structure, outperform the static model. Chapter 3 proposes a Lagrange Multiplier (LM) test for the usefulness of the autoregressive structure of the probit model. The finite sample properties of the LM test are considered with simulation experiments. The results indicate that the two alternative LM test statistics have reasonable size and power in large samples. In small samples, a parametric bootstrap method is suggested to obtain approximately correct size. In Chapter 4, the predictive power of dynamic probit models in predicting the direction of stock market returns is examined. The novel idea is to use the recession forecast (see Chapter 2) as a predictor of the stock return sign. The evidence suggests that the signs of U.S. excess stock returns over the risk-free return are predictable both in and out of sample. The new "error correction" probit model yields the best forecasts and also outperforms other predictive models, such as ARMAX models, in terms of statistical and economic goodness-of-fit measures. Chapter 5 generalizes the analysis of the univariate models considered in Chapters 2-4 to the case of a bivariate model. A new bivariate autoregressive probit model is applied to predict the current state of U.S. business cycle and growth rate cycle periods. Evidence of predictability of both cycle indicators is obtained, and the bivariate model is found to outperform the univariate models in terms of predictive power.
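A minimal sketch of the autoregressive probit structure referred to above is given below, assuming a single lagged predictor and a zero initial value for the latent index, with the likelihood maximized numerically. It illustrates the general model class rather than the exact specifications estimated in the chapters.

```python
# Sketch: maximum likelihood estimation of an autoregressive probit model,
#   pi_t = omega + alpha * pi_{t-1} + beta * x_{t-1},  P(y_t = 1) = Phi(pi_t).
# Single predictor and zero initial latent index are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_loglik(params, y, x):
    omega, alpha, beta = params
    pi = 0.0                                 # assumed initial value of the index
    ll = 0.0
    for t in range(1, len(y)):
        pi = omega + alpha * pi + beta * x[t - 1]
        p = np.clip(norm.cdf(pi), 1e-10, 1 - 1e-10)
        ll += y[t] * np.log(p) + (1 - y[t]) * np.log(1 - p)
    return -ll

def fit_ar_probit(y, x):
    """y: 0/1 recession indicator; x: lagged predictor (e.g. a term spread)."""
    res = minimize(neg_loglik, x0=np.zeros(3), args=(np.asarray(y), np.asarray(x)),
                   method="BFGS")
    return res.x                             # (omega, alpha, beta)
```

The lagged latent index is what distinguishes the autoregressive structure from the static probit, in which each period's recession probability depends only on the current predictors.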
Abstract:
In this thesis we deal with the concept of risk. The objective is to bring together and draw conclusions from normative information regarding quantitative portfolio management and risk assessment. The first essay concentrates on return dependency. We propose an algorithm for classifying markets into rising and falling. Given the algorithm, we derive a statistic, the Trend Switch Probability, for the detection of long-term return dependency in the first moment. The empirical results suggest that the Trend Switch Probability is robust over various volatility specifications. The serial dependency in bear and bull markets, however, behaves differently: it is strongly positive in rising markets, whereas in bear markets it is closer to a random walk. Realized volatility, a technique for estimating volatility from high-frequency data, is investigated in essays two and three. In the second essay we find, when measuring realized variance on a set of German stocks, that the second-moment dependency structure is highly unstable and changes randomly. The results also suggest that volatility is non-stationary from time to time. In the third essay we examine the impact of market microstructure on the error between the estimated realized volatility and the volatility of the underlying process. With simulation-based techniques we show that autocorrelation in returns leads to biased variance estimates, and that lower sampling frequency and non-constant volatility increase the error variation between the estimated variance and the variance of the underlying process. From these essays we conclude that volatility is not easily estimated, even from high-frequency data, and that it is not very well behaved in terms of either stability or dependency over time. Based on these observations, we recommend the use of simple, transparent methods that are likely to be more robust over differing volatility regimes than models with a complex parameter universe. In analyzing long-term return dependency in the first moment we find that the Trend Switch Probability is a robust estimator. This is an interesting area for further research, with important implications for active asset allocation.
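As a reference point for the realized volatility analysis in essays two and three, here is a minimal sketch of a realized variance estimator computed from intraday log prices at a chosen sampling frequency. The sparse-sampling parameter is an illustrative way to see the trade-off between microstructure-induced bias and estimation noise discussed above.

```python
# Sketch: realized variance from intraday log prices, sampled every k points.
# Sparser sampling mitigates microstructure effects (e.g. bid-ask bounce and
# return autocorrelation) at the cost of a noisier estimate.
import numpy as np

def realized_variance(log_prices, sample_every=1):
    """Sum of squared intraday log returns at the chosen sampling frequency."""
    sampled = np.asarray(log_prices, dtype=float)[::sample_every]
    returns = np.diff(sampled)
    return np.sum(returns ** 2)
```

Comparing the estimate at, say, sample_every=1 and sample_every=5 gives a quick feel for how sensitive realized variance is to the sampling choice, which is the kind of error variation studied with simulation in the third essay.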
Abstract:
The low predictive power of implied volatility in forecasting subsequently realized volatility is a well-documented empirical puzzle. As suggested by e.g. Feinstein (1989), Jackwerth and Rubinstein (1996), and Bates (1997), we test whether unrealized expectations of jumps in volatility could explain this phenomenon. Our findings show that expectations of infrequently occurring jumps in volatility are indeed priced in implied volatility. This has two important consequences. First, implied volatility is actually expected to exceed realized volatility over long periods of time, only to fall far below realized volatility during infrequently occurring periods of very high volatility. Second, the slope coefficient in the classic forecasting regression of realized volatility on implied volatility is very sensitive to the discrepancy between ex ante expected and ex post realized jump frequencies. If the in-sample frequency of positive volatility jumps is lower than the market's ex ante assessment, the classic regression test tends to reject the hypothesis of informational efficiency even if markets are informationally efficient.
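The classic forecasting regression mentioned above, subsequently realized volatility regressed on implied volatility, can be sketched as follows. The OLS implementation and the efficiency benchmark of a zero intercept and unit slope are standard; aligning beginning-of-period implied volatility with the volatility realized over the following period is assumed to be done by the caller.

```python
# Sketch: the classic forecasting regression RV_t = a + b * IV_t + e_t,
# whose slope b is argued above to be sensitive to the gap between
# ex ante expected and ex post realized jump frequencies.
import numpy as np
import statsmodels.api as sm

def forecast_regression(realized_vol, implied_vol):
    """OLS of subsequently realized volatility on beginning-of-period IV."""
    X = sm.add_constant(np.asarray(implied_vol, dtype=float))
    model = sm.OLS(np.asarray(realized_vol, dtype=float), X).fit()
    a, b = model.params
    # Informational efficiency would imply a close to 0 and b close to 1.
    return a, b, model
```

When the sample happens to contain fewer volatility jumps than the market priced in ex ante, the estimated slope is dragged away from one, which is the mechanism behind the spurious rejections described above.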
Abstract:
Summary: Snow models in watercourse forecasting models