13 results for Optimal hedge ratio. Garch. Effectiveness
in CentAUR: Central Archive, University of Reading - UK
Abstract:
There is widespread evidence that the volatility of stock returns displays an asymmetric response to good and bad news. This article considers the impact of asymmetry on time-varying hedges for financial futures. An asymmetric model that allows forecasts of cash and futures return volatility to respond differently to positive and negative return innovations gives superior in-sample hedging performance. However, the simpler symmetric model is not inferior in a hold-out sample. A method for evaluating the models in a modern risk-management framework is presented, highlighting the importance of allowing optimal hedge ratios to be both time-varying and asymmetric.
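The time-varying hedges discussed here generalize the static minimum-variance hedge ratio, h* = Cov(cash, futures)/Var(futures). A minimal sketch of the static case on simulated returns (the series and numbers are illustrative, not the article's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated daily cash and futures returns (illustrative only)
futures = rng.normal(0.0, 0.01, 1000)
cash = 0.9 * futures + rng.normal(0.0, 0.004, 1000)

# Static minimum-variance hedge ratio: h* = Cov(cash, futures) / Var(futures)
h_static = np.cov(cash, futures)[0, 1] / np.var(futures, ddof=1)

# Hedging shrinks the variance of the position relative to the unhedged cash leg
var_unhedged = np.var(cash, ddof=1)
var_hedged = np.var(cash - h_static * futures, ddof=1)
print(h_static, 1.0 - var_hedged / var_unhedged)
```

A time-varying hedge ratio replaces these unconditional moments with conditional ones from a (possibly asymmetric) GARCH model, so h_t changes with each day's forecast covariance and variance.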
Abstract:
This study proposes a utility-based framework for the determination of optimal hedge ratios (OHRs) that can allow for the impact of higher moments on hedging decisions. We examine the entire hyperbolic absolute risk aversion family of utilities, which includes quadratic, logarithmic, power, and exponential utility functions. We find that for both moderate and large spot (commodity) exposures, the performance of out-of-sample hedges constructed allowing for nonzero higher moments is better than the performance of the simpler OLS hedge ratio. The picture is, however, not uniform across our seven spot commodities, as there is one instance (cotton) for which the modeling of higher moments decreases welfare out-of-sample relative to the simpler OLS. We support our empirical findings with a theoretical analysis of optimal hedging decisions, and we uncover a novel link between OHRs and the minimax hedge ratio, that is, the ratio which minimizes the largest loss of the hedged position. © 2011 Wiley Periodicals, Inc. Journal of Futures Markets
Abstract:
This article examines the ability of several models to generate optimal hedge ratios. Statistical models employed include univariate and multivariate generalized autoregressive conditionally heteroscedastic (GARCH) models, and exponentially weighted and simple moving averages. The variances of the hedged portfolios derived using these hedge ratios are compared with those based on market expectations implied by the prices of traded options. One-month and three-month hedging horizons are considered for four currency pairs. Overall, it has been found that an exponentially weighted moving-average model leads to lower portfolio variances than any of the GARCH-based, implied or time-invariant approaches.
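The exponentially weighted moving-average approach found to perform best replaces sample moments with recursively discounted ones. A minimal sketch (the decay factor 0.94 is the common RiskMetrics choice, assumed here purely for illustration; the data are simulated, not the study's currency returns):

```python
import numpy as np

def ewma_hedge_ratios(cash, futures, lam=0.94):
    """Hedge ratio path h_t = cov_t / var_t under EWMA moment updating."""
    cov = np.cov(cash, futures)[0, 1]   # initialise with full-sample moments
    var = np.var(futures, ddof=1)
    ratios = []
    for c, f in zip(cash, futures):
        cov = lam * cov + (1.0 - lam) * c * f
        var = lam * var + (1.0 - lam) * f * f
        ratios.append(cov / var)
    return np.array(ratios)

rng = np.random.default_rng(1)
f = rng.normal(0.0, 0.01, 500)
c = 0.8 * f + rng.normal(0.0, 0.003, 500)
h = ewma_hedge_ratios(c, f)   # drifts around the true slope of 0.8
```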
Abstract:
1. The production of food for human consumption has led to an historical and global conflict with terrestrial carnivores, which in turn has resulted in the extinction or extirpation of many species, although some have benefited. At present, carnivores affect food production by: (i) killing human producers; killing and/or eating (ii) fish/shellfish, (iii) game/wildfowl, or (iv) livestock; (v) damaging crops; (vi) transmitting diseases; and (vii) through trophic interactions with other species in agricultural landscapes. Conversely, carnivores can themselves be a source of dietary protein (bushmeat). 2. Globally, the major areas of conflict are predation on livestock and the transmission of rabies. At a broad scale, livestock predation is a customary problem where predators are present and has been quantified for a broad range of carnivore species, although the veracity of these estimates is equivocal. Typically, but not always, losses are small relative to the numbers held, but can be a significant proportion of total livestock mortality. Losses experienced by producers are often highly variable, indicating that factors such as husbandry practices and predator behaviour may significantly affect the relative vulnerability of properties in the wider landscape. Within livestock herds, juvenile animals are particularly vulnerable. 3. Proactive and reactive culling are widely practised as a means to limit predation on livestock and game. Historic changes in species' distributions and abundance illustrate that culling programmes can be very effective at reducing predator density, although such substantive impacts are generally considered undesirable for native predators. However, despite their prevalence, the effectiveness, efficiency and the benefit:cost ratio of culling programmes have been poorly studied. 4. A wide range of non-lethal methods to limit predation has been studied.
However, many of these have their practical limitations and are unlikely to be widely applicable. 5. Lethal approaches are likely to dominate the management of terrestrial carnivores for the foreseeable future, but animal welfare considerations are increasingly likely to influence management strategies. The adoption of non-lethal approaches will depend upon proof of their effectiveness and the willingness of stakeholders to implement them, and, in some cases, appropriate licensing and legislation. 6. Overall, it is apparent that we still understand relatively little about the importance of factors affecting predation on livestock and how to manage this conflict effectively. We consider the following avenues of research to be essential: (i) quantified assessments of the loss of viable livestock; (ii) landscape-level studies of contiguous properties to quantify losses associated with variables such as different husbandry practices; (iii) replicated experimental manipulations to identify the relative benefit of particular management practices, incorporating (iv) techniques to identify individual predators killing stock; and (v) economic analyses of different management approaches to quantify optimal production strategies.
Abstract:
A predictability index was defined as the ratio of the variance of the optimal prediction to the variance of the original time series by Granger and Anderson (1976) and Bhansali (1989). A new simplified algorithm for estimating the predictability index is introduced and the new estimator is shown to be a simple and effective tool in applications of predictability ranking and as an aid in the preliminary analysis of time series. The relationship between the predictability index and the position of the poles and lag p of a time series which can be modelled as an AR(p) model are also investigated. The effectiveness of the algorithm is demonstrated using numerical examples including an application to stock prices.
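For a series well described by an AR(p) model, the index equals one minus the ratio of the innovation variance to the series variance, which suggests a simple plug-in estimator. A rough sketch using an OLS-fitted AR on simulated data (illustrative; this is not the authors' algorithm):

```python
import numpy as np

def predictability_index(x, p=1):
    """Estimate Var(optimal prediction) / Var(series) from an OLS-fitted AR(p)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    # Design matrix whose k-th column is the series lagged by k+1 steps
    X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return 1.0 - resid.var() / y.var()

# AR(1) with phi = 0.9: the index should be near phi**2 = 0.81
rng = np.random.default_rng(2)
x = np.zeros(2000)
for t in range(1, 2000):
    x[t] = 0.9 * x[t - 1] + rng.normal()
pi_hat = predictability_index(x, p=1)
```

A highly predictable series scores near 1, while white noise scores near 0, which is what makes the index usable for predictability ranking.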
Abstract:
A technique is derived for solving a non-linear optimal control problem by iterating on a sequence of simplified problems in linear quadratic form. The technique is designed to achieve the correct solution of the original non-linear optimal control problem in spite of these simplifications. A mixed approach with a discrete performance index and continuous state variable system description is used as the basis of the design, and it is shown how the global problem can be decomposed into local sub-system problems and a co-ordinator within a hierarchical framework. An analysis of the optimality and convergence properties of the algorithm is presented and the effectiveness of the technique is demonstrated using a simulation example with a non-separable performance index.
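The linear quadratic subproblems at the heart of such an iteration are solved by a backward Riccati recursion. A minimal sketch of one finite-horizon discrete-time LQR solve (the double-integrator system and weights are illustrative, not the paper's decomposed sub-systems):

```python
import numpy as np

def finite_horizon_lqr(A, B, Q, R, N):
    """Backward Riccati recursion: returns the feedback gains K_0 .. K_{N-1}."""
    P = Q.copy()
    gains = []
    for _ in range(N):
        # K_t = (R + B' P B)^{-1} B' P A
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        # P_t = Q + A' P (A - B K)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # discretised double integrator
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[0.1]])
K = finite_horizon_lqr(A, B, Q, R, 50)

# Closed-loop simulation: the state is driven toward the origin
x = np.array([1.0, 0.0])
for Kt in K:
    x = (A - B @ Kt) @ x
```

In the iterative scheme described above, such an LQ solve would be repeated against successively updated linearizations until the trajectory of the original non-linear problem converges.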
Abstract:
School effectiveness is a microtechnology of change. It is a relay device, which transfers macro policy into everyday processes and priorities in schools. It is part of the growing apparatus of performance evaluation. Change is brought about by a focus on the school as a site-based system to be managed. There has been corporate restructuring in response to the changing political economy of education. There are now new work regimes and radical changes in organizational cultures. Education, like other public services, is now characterized by a range of structural realignments, new relationships between purchasers and providers and new coalitions between management and politics. In this article, we will argue that the school effectiveness movement is an example of new managerialism in education. It is part of an ideological and technological process to industrialize educational productivity. That is to say, the emphasis on standards and standardization is evocative of production regimes drawn from industry. There is a belief that education, like other public services, can be managed to ensure optimal outputs and zero defects in the educational product.
Abstract:
The development of the real estate swap market offers many opportunities for investors to adjust the exposure of their portfolios to real estate. A number of OTC transactions have been observed in markets around the world. In this paper we examine the Japanese commercial real estate market from the point of view of an investor holding a portfolio of properties seeking to reduce the portfolio exposure to the real estate market by swapping an index of real estate for LIBOR. This paper explores the practicalities of hedging portfolios comprising small numbers of individual properties against an appropriate index. We use the returns from 74 properties owned by Japanese Real Estate Investment Trusts over the period up to September 2007. The paper also discusses and applies the appropriate stochastic processes required to model real estate returns in this application and presents alternative ways of reporting hedging effectiveness. We find that the development of the derivative does provide the capacity for hedging market risk but that the effectiveness of the hedge varies considerably over time. We explore the factors that cause this variability.
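One common way of reporting hedging effectiveness, of the kind this paper compares, is the proportional variance reduction of the hedged versus unhedged portfolio returns. A minimal sketch (the index and property returns are simulated for illustration, not the paper's J-REIT data, and the floating leg is ignored for simplicity):

```python
import numpy as np

def hedging_effectiveness(portfolio, hedged):
    """Variance-reduction effectiveness: 1 - Var(hedged) / Var(unhedged)."""
    return 1.0 - np.var(hedged, ddof=1) / np.var(portfolio, ddof=1)

rng = np.random.default_rng(3)
index = rng.normal(0.002, 0.02, 120)          # monthly index returns
idiosyncratic = rng.normal(0.0, 0.015, 120)   # property-specific noise
portfolio = 0.7 * index + idiosyncratic       # small portfolio tracks index imperfectly
hedged = portfolio - 0.7 * index              # pay index, receive fixed via the swap
print(hedging_effectiveness(portfolio, hedged))
```

The larger the idiosyncratic component of a small portfolio, the lower this number, which is exactly why effectiveness varies with portfolio size and over time.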
Abstract:
Single-carrier frequency division multiple access (SC-FDMA) has emerged as a promising technique for high-data-rate uplink communications. Aimed at SC-FDMA applications, a cyclic prefixed version of the offset quadrature amplitude modulation based OFDM (OQAM-OFDM) is first proposed in this paper. We show that cyclic prefixed OQAM-OFDM (CP-OQAM-OFDM) can be realized within the framework of the standard OFDM system, and a perfect recovery condition in the ideal channel is derived. We then apply CP-OQAM-OFDM to SC-FDMA transmission in frequency-selective fading channels. A signal model and joint widely linear minimum mean square error (WLMMSE) equalization using a priori information with low complexity are developed. Compared with the existing DFT-S-OFDM based SC-FDMA, the proposed SC-FDMA can significantly reduce the envelope fluctuation (EF) of the transmitted signal while maintaining bandwidth efficiency. The inherent structure of CP-OQAM-OFDM enables low-complexity joint equalization in the frequency domain to combat both multiple access interference and intersymbol interference. The joint WLMMSE equalization using a priori information guarantees optimal MMSE performance and supports a Turbo receiver for improved bit error rate (BER) performance. Simulation results confirm the effectiveness of the proposed SC-FDMA in terms of EF (including peak-to-average power ratio, instantaneous-to-average power ratio and cubic metric) and BER performance.
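The envelope-fluctuation metrics listed above (peak-to-average power ratio in particular) can be computed directly from complex baseband samples. A toy sketch contrasting plain OFDM with full-band DFT-spreading (QPSK, purely illustrative; this is not the paper's CP-OQAM-OFDM transmitter):

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband block, in dB."""
    power = np.abs(x) ** 2
    return 10.0 * np.log10(power.max() / power.mean())

rng = np.random.default_rng(4)
n = 256
bits = rng.integers(0, 2, (n, 2))
# Unit-magnitude QPSK symbols
symbols = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

ofdm = np.fft.ifft(symbols)                # plain OFDM: strong envelope fluctuation
scfdma = np.fft.ifft(np.fft.fft(symbols))  # DFT spread over all subcarriers: the
                                           # precoder cancels, leaving a constant-
                                           # envelope single-carrier block
print(papr_db(ofdm), papr_db(scfdma))
```

The multicarrier block shows a PAPR several dB above the single-carrier-like block, which is the fluctuation the proposed scheme is designed to reduce.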
Abstract:
In this paper, a modified algorithm is suggested for developing polynomial neural network (PNN) models. Optimal partial description (PD) modeling is introduced at each layer of the PNN expansion, a task accomplished using the orthogonal least squares (OLS) method. Based on the initial PD models determined by the polynomial order and the number of PD inputs, OLS selects the most significant regressor terms, reducing the output error variance. The method produces PNN models exhibiting a high level of accuracy and superior generalization capabilities. Additionally, parsimonious models are obtained comprising a considerably smaller number of parameters compared to the ones generated by means of the conventional PNN algorithm. Three benchmark examples are elaborated, including modeling of the gas furnace process as well as the iris and wine classification problems. Extensive simulation results and comparisons with other methods in the literature demonstrate the effectiveness of the suggested modeling approach.
Abstract:
An efficient data based-modeling algorithm for nonlinear system identification is introduced for radial basis function (RBF) neural networks with the aim of maximizing generalization capability based on the concept of leave-one-out (LOO) cross validation. Each of the RBF kernels has its own kernel width parameter and the basic idea is to optimize the multiple pairs of regularization parameters and kernel widths, each of which is associated with a kernel, one at a time within the orthogonal forward regression (OFR) procedure. Thus, each OFR step consists of one model term selection based on the LOO mean square error (LOOMSE), followed by the optimization of the associated kernel width and regularization parameter, also based on the LOOMSE. Since like our previous state-of-the-art local regularization assisted orthogonal least squares (LROLS) algorithm, the same LOOMSE is adopted for model selection, our proposed new OFR algorithm is also capable of producing a very sparse RBF model with excellent generalization performance. Unlike our previous LROLS algorithm which requires an additional iterative loop to optimize the regularization parameters as well as an additional procedure to optimize the kernel width, the proposed new OFR algorithm optimizes both the kernel widths and regularization parameters within the single OFR procedure, and consequently the required computational complexity is dramatically reduced. Nonlinear system identification examples are included to demonstrate the effectiveness of this new approach in comparison to the well-known approaches of support vector machine and least absolute shrinkage and selection operator as well as the LROLS algorithm.
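The leave-one-out criterion used above relies on a standard identity for linear-in-the-parameters models: the LOO residual equals the ordinary residual divided by (1 - H_ii), where H is the hat matrix, so no model needs to be refitted n times. A minimal sketch for a ridge-regularized Gaussian RBF model (the centres, width, and regularization value are assumed for illustration, not selected by the OFR procedure described above):

```python
import numpy as np

def loo_mse(X, y, reg=1e-3):
    """Closed-form leave-one-out MSE for ridge regression via the hat matrix."""
    n, p = X.shape
    H = X @ np.linalg.inv(X.T @ X + reg * np.eye(p)) @ X.T
    resid = y - H @ y
    # Exact LOO residuals for ridge: r_i / (1 - H_ii)
    return np.mean((resid / (1.0 - np.diag(H))) ** 2)

# Toy 1-D data fitted with Gaussian RBF kernels
rng = np.random.default_rng(5)
x = np.linspace(-3.0, 3.0, 80)
y = np.sin(x) + rng.normal(0.0, 0.1, 80)
centres = x[::8]                      # 10 kernel centres (illustrative)
width = 1.0                           # common kernel width (illustrative)
Phi = np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2.0 * width**2))
score = loo_mse(Phi, y)
```

Minimizing this score over candidate terms, widths, and regularization parameters is the kind of selection step the OFR procedure performs at each iteration.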
Abstract:
The detection of anthropogenic climate change can be improved by recognising the seasonality in the climate change response. This is demonstrated for the North Atlantic jet (zonal wind at 850 hPa, U850) and European precipitation responses projected by the CMIP5 climate models. The U850 future response is characterised by a marked seasonality: an eastward extension of the North Atlantic jet into Europe in November-April, and a poleward shift in May-October. Under the RCP8.5 scenario, the multi-model mean response in U850 in these two extended seasonal means emerges by 2035-2040 for the lower-latitude features and by 2050-2070 for the higher-latitude features, relative to the 1960-1990 climate. This is 5-15 years earlier than when evaluated in the traditional meteorological seasons (December-February, June-August), and it results from an increase in the signal-to-noise ratio associated with the spatial coherence of the response within the extended seasons. The annual mean response lacks important information on the seasonality of the response without improving the signal-to-noise ratio. The same two extended seasons are demonstrated to capture the seasonality of the European precipitation response to climate change and to anticipate its emergence by 10-20 years. Furthermore, some of the regional responses, such as the Mediterranean precipitation decline and the U850 response in North Africa in the extended winter, are projected to emerge by 2020-2025, according to the models with a strong response. Therefore, observations might soon be useful to test aspects of the atmospheric circulation response predicted by some of the CMIP5 models.