936 results for mean and variance ratio
Abstract:
A trade-off between return and risk plays a central role in financial economics. The intertemporal capital asset pricing model (ICAPM) proposed by Merton (1973) provides a neoclassical theory for expected returns on risky assets. The model assumes that risk-averse investors (seeking to maximize their expected utility of lifetime consumption) demand compensation for bearing systematic market risk and the risk of unfavorable shifts in the investment opportunity set. Although the ICAPM postulates a positive relation between the conditional expected market return and its conditional variance, the empirical evidence on the sign of the risk-return trade-off is conflicting. In contrast, autocorrelation in stock returns is one of the most consistent and robust findings in empirical finance. While autocorrelation is often interpreted as a violation of market efficiency, it can also reflect factors such as market microstructure or time-varying risk premia. This doctoral thesis investigates the relation between the mixed risk-return trade-off results and autocorrelation in stock returns. The results suggest that, in the case of the US stock market, the relative contribution of the risk-return trade-off and autocorrelation in explaining the aggregate return fluctuates with volatility. This effect is shown to be even more pronounced in emerging stock markets. During high-volatility periods, expected returns are well described by rational (intertemporal) investors acting to maximize their expected utility. During low-volatility periods, market-wide persistence in returns increases, leading to a failure of traditional equilibrium-model descriptions of expected returns. Consistent with this finding, traditional models yield conflicting evidence on the sign of the risk-return trade-off. The changing relevance of the risk-return trade-off and autocorrelation can be explained by heterogeneous agents or, more generally, by the inadequacy of the neoclassical view of asset pricing with unboundedly rational investors and perfect market efficiency. In the latter case, the empirical results imply that the neoclassical view is valid only under certain market conditions. This offers an economic explanation of why it has been so difficult to detect a positive trade-off between the conditional mean and variance of the aggregate stock return. The results highlight the importance, especially for emerging stock markets, of accounting for both the risk-return trade-off and autocorrelation in applications that require estimates of expected returns.
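For reference, the two competing effects can be written in a single generic specification of the kind common in this literature (a sketch, not necessarily the thesis's exact model): a GARCH(1,1)-in-mean equation augmented with an AR(1) term, where the coefficient gamma carries the risk-return trade-off and phi the autocorrelation.

```latex
% Generic GARCH(1,1)-in-mean with an AR(1) term (illustrative only):
% \gamma carries the risk-return trade-off, \phi the autocorrelation.
\begin{align*}
  r_{t+1} &= \mu + \gamma\,\sigma_t^2 + \phi\,r_t + \varepsilon_{t+1},
  \qquad \varepsilon_{t+1}\mid\mathcal{F}_t \sim N(0,\sigma_t^2),\\
  \sigma_t^2 &= \omega + \alpha\,\varepsilon_t^2 + \beta\,\sigma_{t-1}^2 .
\end{align*}
```

Read through this lens, the thesis's finding is that $\gamma$ dominates in high-volatility periods while $\phi$ dominates in low-volatility periods.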
Abstract:
In this paper, we provide both qualitative and quantitative measures of the cost of using realized volatility to measure integrated volatility when the observation frequency is fixed. We start by characterizing, for a general diffusion, the difference between the realized and integrated volatilities at a given observation frequency. We then compute the mean and variance of this noise, and the correlation between the noise and the integrated volatility, in the Eigenfunction Stochastic Volatility model of Meddahi (2001a). This model includes the log-normal, affine, and GARCH diffusion models as special cases. Drawing on previous empirical work, we show that the standard deviation of the noise is not negligible relative to the mean and standard deviation of the integrated volatility, even when five-minute returns are used. We also propose a simple approach to capture the information about the integrated volatility contained in the returns through the leverage effect.
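In generic notation (a sketch consistent with the abstract, not necessarily the paper's own notation), for a log-price diffusion $dp_t = \mu_t\,dt + \sigma_t\,dW_t$ sampled at $m$ intra-period returns $r_{t,i}$, the objects compared are:

```latex
% Integrated volatility, its fixed-frequency realized counterpart,
% and the measurement noise whose mean and variance the paper derives.
\begin{align*}
  IV_t = \int_{t-1}^{t}\sigma_s^2\,ds, \qquad
  RV_t^{(m)} = \sum_{i=1}^{m} r_{t,i}^2, \qquad
  u_t = RV_t^{(m)} - IV_t .
\end{align*}
```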
Abstract:
This paper develops a model of money demand where the opportunity cost of holding money is subject to regime changes. The regimes are fully characterized by the mean and variance of inflation and are assumed to be the result of alternative government policies. Agents are unable to directly observe whether government actions are indeed consistent with the inflation rate targeted as part of a stabilization program, but they can construct probability inferences from available observations of inflation and money growth. Government announcements are assumed to provide agents with additional, possibly truthful, information about the regime. This specification is estimated and tested using data from the Israeli and Argentine high-inflation periods. The results indicate that the successful stabilization program implemented in Israel in July 1985 was more credible than either the earlier Israeli attempt in November 1984 or the Argentine programs. The government's signaling can substantially simplify the agents' inference problem and increase their speed of learning; under certain conditions, however, it can increase the volatility of inflation. After the introduction of an inflation stabilization plan, the welfare gains from a temporary increase in real balances can be high enough to induce agents to raise their real balances in the short term, even if they are uncertain about the nature of government policy and the eventual outcome of the stabilization attempt. Statistically, the model restrictions cannot be rejected at the 1% significance level.
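As a hedged illustration of the agents' inference problem (the paper's estimated specification is richer and also exploits money growth and announcements), beliefs about the regime can be updated recursively by Bayes' rule; the function name and all parameter values below are hypothetical:

```python
from scipy.stats import norm

def update_regime_belief(prior_low, pi_obs, mu_low, sd_low, mu_high, sd_high):
    """One Bayes update of the probability that the low-inflation regime
    is in force, given an observed inflation rate pi_obs; each regime is
    fully characterized by its mean and standard deviation of inflation."""
    lik_low = norm.pdf(pi_obs, mu_low, sd_low)
    lik_high = norm.pdf(pi_obs, mu_high, sd_high)
    return prior_low * lik_low / (prior_low * lik_low + (1.0 - prior_low) * lik_high)

# Hypothetical monthly inflation observations after a stabilization
# announcement; all regime parameters are illustrative only.
belief = 0.5  # agnostic prior about the announced low-inflation regime
for pi in [0.04, 0.02, 0.03, 0.01]:
    belief = update_regime_belief(belief, pi, mu_low=0.02, sd_low=0.01,
                                  mu_high=0.10, sd_high=0.04)
print(f"posterior probability of the low-inflation regime: {belief:.3f}")
```

Truthful announcements raise the prior and so speed up learning, which is the signaling channel the paper examines.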
Abstract:
This doctoral thesis consists of three chapters dealing with large-scale portfolio choice and risk measurement. The first chapter addresses the problem of estimation error in large portfolios within the mean-variance framework. The second chapter explores the importance of currency risk for portfolios of domestic assets and studies the links between the stability of large portfolio weights and currency risk. Finally, under the assumption that the decision maker is pessimistic, the third chapter derives the risk premium and a measure of pessimism, and proposes a methodology for estimating the derived measures. The first chapter improves optimal portfolio choice within the mean-variance framework of Markowitz (1952). This is motivated by the very disappointing results obtained when the mean and variance are replaced by their sample estimates. The problem is amplified when the number of assets is large and the sample covariance matrix is singular or nearly singular. In this chapter, we examine four regularization techniques for stabilizing the inverse of the covariance matrix: ridge, spectral cut-off, Landweber-Fridman, and LARS Lasso. Each of these methods involves a tuning parameter that must be selected. The main contribution of this part is to derive a purely data-driven method for selecting the regularization parameter optimally, i.e., so as to minimize the expected utility loss. Specifically, a cross-validation criterion taking the same form for all four regularization methods is derived. The resulting regularized rules are then compared with the plug-in (sample-based) rule and the naive 1/N strategy in terms of expected utility loss and Sharpe ratio. Performance is measured both in-sample and out-of-sample for various sample sizes and numbers of assets. The simulations and the empirical illustration show that regularizing the covariance matrix significantly improves the sample-based Markowitz rule and outperforms the naive portfolio, especially when the estimation error problem is severe. In the second chapter, we investigate the extent to which optimal and stable portfolios of domestic assets can reduce or eliminate currency risk, using monthly returns on 48 US industries over the period 1976-2008. To address the instability problems inherent in large portfolios, we adopt the spectral cut-off regularization method. This yields a family of optimal and stable portfolios, allowing investors to choose different percentages of principal components (or degrees of stability). Our empirical tests are based on an international asset pricing model (IAPM) in which currency risk is decomposed into two factors representing, respectively, the currencies of industrialized countries and those of emerging countries. Our results indicate that currency risk is priced and time-varying for stable minimum-risk portfolios.
Moreover, these strategies lead to a significant reduction in currency risk exposure, while the contribution of the currency risk premium remains unchanged on average. Optimal portfolio weights are an alternative to market capitalization weights; this chapter therefore complements the literature showing that the currency risk premium matters at the industry and country level in most countries. In the final chapter, we derive a measure of the risk premium for rank-dependent preferences and propose a measure of the degree of pessimism, given a distortion function. The measures introduced generalize the risk premium derived under expected utility theory, which is frequently violated in both experimental and real-world settings. Within the broad family of preferences considered, particular attention is paid to the CVaR (conditional value at risk). This risk measure is increasingly used for portfolio construction and is advocated as a complement to the VaR (value at risk) used since 1996 by the Basel Committee. In addition, we provide the statistical framework needed to conduct inference on the proposed measures. Finally, the properties of the proposed estimators are assessed through a Monte Carlo study and an empirical illustration using daily returns on the US stock market over the period 2000-2011.
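A minimal sketch of two of the regularization schemes studied in the first chapter, ridge and spectral cut-off, applied to the sample covariance matrix before forming mean-variance weights; the function names, synthetic data, and tuning parameters (tau, k) are placeholders, whereas the thesis selects the tuning parameter by its derived cross-validation criterion:

```python
import numpy as np

def ridge_inverse(S, tau):
    """Ridge-regularized inverse of a sample covariance matrix S."""
    return np.linalg.inv(S + tau * np.eye(S.shape[0]))

def spectral_cutoff_inverse(S, k):
    """Inverse of S restricted to its k largest principal components;
    the remaining (noisy) directions are discarded."""
    vals, vecs = np.linalg.eigh(S)            # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:k]          # keep the k largest
    return (vecs[:, idx] / vals[idx]) @ vecs[:, idx].T

def mv_weights(mu, S_inv, gamma=3.0):
    """Unconstrained mean-variance weights for risk aversion gamma."""
    return S_inv @ mu / gamma

# Hypothetical data: T monthly returns on p industries with p close to T,
# so the sample covariance matrix is nearly singular.
rng = np.random.default_rng(0)
T, p = 120, 48
R = rng.normal(0.01, 0.05, size=(T, p))
mu_hat, S_hat = R.mean(axis=0), np.cov(R, rowvar=False)

w_ridge = mv_weights(mu_hat, ridge_inverse(S_hat, tau=0.01))
w_cutoff = mv_weights(mu_hat, spectral_cutoff_inverse(S_hat, k=10))
```

Both schemes stabilize the inverse when p is close to T; the thesis compares such regularized rules with the plug-in rule and the 1/N portfolio.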
Abstract:
The preceding two editions of CoDaWork included talks on the possible consideration of densities as infinite compositions: Egozcue and Díaz-Barrero (2003) extended the Euclidean structure of the simplex to a Hilbert space structure on the set of densities within a bounded interval, and van den Boogaart (2005) generalized this to the set of densities bounded by an arbitrary reference density. Of the many variations of the Hilbert structures available, we work with three cases. For bounded variables, a basis derived from Legendre polynomials is used. For variables with a lower bound, we standardize them with respect to an exponential distribution and express their densities as coordinates in a basis derived from Laguerre polynomials. Finally, for unbounded variables, a normal distribution is used as reference, and coordinates are obtained with respect to a basis derived from Hermite polynomials. Several approaches can be considered for obtaining the coordinates. A numerical accuracy problem occurs if one estimates the coordinates directly using discretized scalar products. We therefore propose a weighted linear regression approach, in which all polynomials up to order k are used as predictor variables and the weights are proportional to the reference density. Finally, for second-order Hermite polynomials (normal reference) and first-order Laguerre polynomials (exponential reference), the coordinates can also be derived from their relationships to the classical mean and variance. Apart from these theoretical issues, this contribution focuses on the application of this theory to two main problems in sedimentary geology: the comparison of several grain size distributions, and the comparison, among different rocks, of the empirical distribution of a property measured on a batch of individual grains from the same rock or sediment, such as their composition.
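A minimal sketch of the weighted-regression route for the unbounded case (normal reference, probabilists' Hermite basis), assuming a kernel estimate of the log-density ratio as the regressand; the function name, synthetic data, and truncation degree are hypothetical:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander
from scipy.stats import gaussian_kde, norm

def hermite_coordinates(x, deg):
    """Coordinates of a sample's density (relative to a standard normal
    reference) in a probabilists' Hermite basis, via weighted least
    squares with weights proportional to the reference density."""
    log_ratio = np.log(gaussian_kde(x)(x)) - norm.logpdf(x)  # regressand
    basis = hermevander(x, deg)        # columns He_0(x), ..., He_deg(x)
    sw = np.sqrt(norm.pdf(x))          # sqrt of the regression weights
    coef, *_ = np.linalg.lstsq(basis * sw[:, None], log_ratio * sw,
                               rcond=None)
    return coef

x = np.random.default_rng(1).normal(0.3, 1.2, size=500)
print(hermite_coordinates(x, deg=2))
```

Weighting by the reference density downweights the tails, which is where the discretized scalar products lose accuracy.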
Abstract:
Previous assessments of the impacts of climate change on heat-related mortality use the "delta method" to create temperature projection time series that are applied to temperature-mortality models to estimate future mortality impacts. Under the delta method, climate model bias in the modelled present does not influence the temperature projection time series or the impacts. However, the delta method assumes that climate change will change only the mean temperature, whereas there is evidence that the variability of temperature will also change. The aim of this paper is to demonstrate the importance of considering changes in temperature variability under climate change in impacts assessments of future heat-related mortality. We investigate future heat-related mortality impacts in six cities (Boston, Budapest, Dallas, Lisbon, London and Sydney) by applying temperature projections from the UK Meteorological Office HadCM3 climate model to the temperature-mortality models constructed and validated in Part 1. We investigate the impacts for four cases based on various combinations of mean and variability changes in temperature with climate change. The results demonstrate that higher mortality is attributable to increases in both the mean and the variability of temperature, rather than to the change in mean temperature alone. This has implications for interpreting existing impacts estimates that have used the delta method. We present a novel method for creating temperature projection time series that includes changes in both the mean and the variability of temperature and is not influenced by climate model bias in the modelled present. The method should be useful for future impacts assessments. Few studies consider the implications that the limitations of the climate model may have on heat-related mortality impacts. Here, we demonstrate the importance of doing so by evaluating the daily and extreme temperatures from HadCM3, which shows that estimates of future heat-related mortality for Dallas and Lisbon may be overestimated due to positive climate model bias, while estimates for Boston and London may be underestimated due to negative climate model bias. Finally, we briefly consider uncertainties in the impacts associated with greenhouse gas emissions and acclimatisation. The uncertainties in the mortality impacts due to different future greenhouse gas emissions scenarios varied considerably by location. Allowing for acclimatisation to an extra 2°C in mean temperatures reduced future heat-related mortality by approximately half, relative to no acclimatisation, in each city.
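A minimal sketch of the idea behind such a projection method (the paper's exact construction may differ): shift the observed series by the model's change in mean and rescale its anomalies by the model's change in standard deviation, so that, as in the delta method, bias in the modelled present cancels; all series below are synthetic:

```python
import numpy as np

def project_temperatures(t_obs, t_mod_present, t_mod_future):
    """Build a projected temperature series from observations plus a
    climate model's change in mean AND variability (a sketch of the
    idea; the paper's construction may differ). The classical delta
    method would apply only the mean shift."""
    d_mean = t_mod_future.mean() - t_mod_present.mean()
    scale = t_mod_future.std() / t_mod_present.std()
    # rescale anomalies about the observed mean, then shift the mean
    return t_obs.mean() + d_mean + scale * (t_obs - t_obs.mean())

# Hypothetical daily summer temperatures (degrees C)
rng = np.random.default_rng(2)
obs = rng.normal(22, 3, 900)      # observed present
mod_p = rng.normal(24, 3, 900)    # model present (biased warm)
mod_f = rng.normal(27, 4, 900)    # model future (warmer, more variable)
proj = project_temperatures(obs, mod_p, mod_f)
```

Because only the model's change in mean and ratio of standard deviations enter, a constant bias in the modelled present drops out of the projection.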
Abstract:
The aim of this paper is to demonstrate the importance of changes in temperature variability under climate change in assessments of future heat-related mortality. Previous studies have considered only changes in the mean temperature. Here we present estimates of heat-related mortality resulting from climate change for six cities: Boston, Budapest, Dallas, Lisbon, London and Sydney. They are based on climate change scenarios for the 2080s (2070-2099) and the temperature-mortality (t-m) models constructed and validated in Gosling et al. (2007). We propose a novel methodology for assessing the impacts of climate change on heat-related mortality that considers changes in both the mean and the variability of the temperature distribution.
Abstract:
The importance of temperature in determining the yield of an annual crop (groundnut; Arachis hypogaea L. in India) was assessed. Simulations from a regional climate model (PRECIS) were used with a crop model (GLAM) to examine crop growth under simulated current (1961-1990) and future (2071-2100) climates. Two processes were examined: the response of crop duration to mean temperature and the response of seed-set to temperature extremes. The relative importance of, and interaction between, these two processes was examined for a number of genotypic characteristics, which were represented by using different values of crop model parameters derived from experiments. The impact of mean and extreme temperatures varied geographically and depended upon the simulated genotypic properties. High-temperature stress was not a major determinant of simulated yields in the current climate, but it affected the mean and variability of yield under climate change in two regions with contrasting statistics of daily maximum temperature. Changes in mean temperature had an impact on mean yield similar to that of high-temperature stress in some locations, and their effects were more widespread. Where the optimal temperature for development was exceeded, the resulting increase in duration in some simulations fully mitigated the negative impacts of extreme temperatures when sufficient water was available for the extended growing period. For some simulations the reduction in mean yield between the current and future climates was as large as 70%, indicating the importance of genotypic adaptation to changes in both means and extremes of temperature under climate change.
Abstract:
It is accepted that an important source of variation in the response of anoestrous ewes to the introduction of rams is the intensity of male stimulation. The aim of this study was to investigate strategies capable of increasing the impact and transmission of the ram stimuli. In Experiment 1, two groups of seven ewes (Bluefaced Leicester male x Swaledale female) were individually penned with one ram, and for the next 6 h the rams either remained in the pen or were replaced hourly. Blood samples revealed no difference in the pattern of plasma LH secretion. In Experiment 2, three groups of 16 ewes were either introduced to one ram, individually (H) or in groups of 8 (L), or remained isolated. Ram introduction increased plasma LH pulsatility (P < 0.001). H ewes displayed more (nine versus six) male-induced LH pulses (pulses occurring within the first 45 min) and more pulses per 8-h interval than the L ewes (1.9 +/- 0.3 versus 1.3 +/- 0.3), but these differences were not significant. It was concluded that (i) frequent replacement of rams within a few hours of ram introduction does not further improve the response of ewes, especially if the ram:ewe ratio is high; (ii) characterizing the plasma LH secretion parameters over a period of 6-8 h does not seem to be an effective method for detecting small differences in the intensity of stimulation received by ewes exposed to rams; and (iii) North Country Mule ewes (Bluefaced Leicester male x Swaledale female) in the UK respond to the presence of rams in spring (late oestrous/early anoestrous season) with an elevation in plasma LH secretion.
Abstract:
Optical density measurements were used to estimate the effect of heat treatments on the single-cell lag times of Listeria innocua, fitted to a shifted gamma distribution. The single-cell lag time was subdivided into a repair time (the shift of the distribution, assumed to be the same for all cells) and an adjustment time (varying randomly from cell to cell). After heat treatments in which all of the cells recovered (sublethal), the repair time and the mean and variance of the single-cell adjustment time increased with the severity of the treatment. When the heat treatments resulted in a loss of viability (lethal), the repair time of the survivors increased with the decimal reduction of the cell numbers, independently of the temperature, while the mean and variance of the single-cell adjustment times remained the same irrespective of the heat treatment. Based on these observations and on modeling the effect of the time and temperature of the heat treatment, we propose that the severity of a heat treatment can be characterized by the repair time of the cells whether or not the treatment is lethal, extending the F-value concept to sublethal heat treatments. In addition, the repair time can be interpreted as the extent or degree of injury under a multiple-hit lethality model. Another implication of these results is that the distribution of the time for cells to reach unacceptable numbers in food is not affected by the particular time-temperature combination that produces a given decimal reduction.
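Concretely, under the shifted-gamma description (with a shape $k$ and scale $b$ assumed here for illustration), the single-cell lag time decomposes as:

```latex
% Shifted gamma: fixed repair time plus gamma-distributed adjustment time.
\begin{align*}
  T = t_{\mathrm{rep}} + A, \quad A \sim \mathrm{Gamma}(k,b), \quad
  \mathbb{E}[T] = t_{\mathrm{rep}} + kb, \quad \operatorname{Var}(T) = kb^2,
\end{align*}
```

In these terms, sublethal treatments increase $t_{\mathrm{rep}}$ together with the adjustment-time parameters $k$ and $b$, while lethal treatments move only $t_{\mathrm{rep}}$.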
Abstract:
Although financial theory rests heavily upon the assumption that asset returns are normally distributed, value indices of commercial real estate display significant departures from normality. In this paper, we apply and compare the properties of two recently proposed regime switching models for value indices of commercial real estate in the US and the UK, both of which relax the assumption that observations are drawn from a single distribution with constant mean and variance. Statistical tests of the models' specification indicate that the Markov switching model captures the non-stationary features of the data better than the threshold autoregressive model, although both describe the data better than models that allow for only one state. Our results have several implications for theoretical models and empirical research in finance.
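In generic two-state form (the paper's exact specifications may include further lags), the competing models are a Markov switching model, in which the state is latent with fixed transition probabilities, and a threshold autoregression, in which the regime is set observably by a lagged value crossing a threshold $c$:

```latex
% Markov switching (latent state S_t) vs. threshold AR (observable regime).
\begin{align*}
  \text{MS:}\quad & r_t = \mu_{S_t} + \sigma_{S_t}\varepsilon_t,
  \qquad \Pr(S_t=j \mid S_{t-1}=i) = p_{ij}, \quad S_t\in\{1,2\},\\
  \text{TAR:}\quad & r_t =
  \begin{cases}
    \phi_0^{(1)} + \phi_1^{(1)} r_{t-1} + \varepsilon_t, & r_{t-d}\le c,\\
    \phi_0^{(2)} + \phi_1^{(2)} r_{t-1} + \varepsilon_t, & r_{t-d}> c.
  \end{cases}
\end{align*}
```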
Abstract:
Purpose – Progress in retrofitting the UK's commercial properties continues to be slow and fragmented. New research from the UK and USA suggests that radical changes are needed to drive large-scale retrofitting, and that new and innovative models of financing can create new opportunities. The purpose of this paper is to offer insights into the terminology of retrofit and the changes in UK policy and practice that are needed to scale up activity in the sector. Design/methodology/approach – The paper reviews and synthesises key published research into commercial property retrofitting in the UK and USA, and also draws on policy and practice from the EU and Australia. Findings – The paper provides a definition of "retrofit" and compares and contrasts it with "refurbishment" and "renovation" in an international context. The paper summarises key findings from recent research and suggests a number of policy and practice measures that need to be implemented in the UK for commercial retrofitting to succeed at scale. These include improved funding vehicles for retrofit; better transparency in actual energy performance; and consistency in measurement, verification and assessment standards. Practical implications – Policy and practice in the UK need to change if large-scale commercial property retrofit is to be rolled out successfully. This requires mandatory legislation underpinned by incentives and penalties for non-compliance. Originality/value – This paper synthesises recent research to provide a set of policy and practice recommendations which draw on international experience and can assist with implementation in the UK.
Abstract:
We analyze the risk premia embedded in the S&P 500 spot index and option markets. We use a long time series of spot prices and a large panel of option prices to jointly estimate the diffusive stock risk premium, the price jump risk premium, the diffusive variance risk premium and the variance jump risk premium. The risk premia are statistically and economically significant and vary over time. Investigating the economic drivers of the risk premia, we are able to explain up to 63% of this variation.
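In generic form (an illustrative sketch; the paper's model and notation may differ), such decompositions start from stochastic-volatility jump-diffusion dynamics, with each premium defined as a wedge between the physical measure $P$ and the risk-neutral measure $Q$:

```latex
% Generic SVJ dynamics under P; the premia are P-vs-Q wedges (illustrative).
\begin{align*}
  \frac{dS_t}{S_{t^-}} &= (r + \eta^S V_t)\,dt + \sqrt{V_t}\,dW_t^S + dJ_t^S,\\
  dV_t &= \kappa(\theta - V_t)\,dt + \sigma_V\sqrt{V_t}\,dW_t^V + dJ_t^V,
\end{align*}
```

Here $\eta^S$ prices diffusive stock risk, the shift in the drift of $V_t$ between $P$ and $Q$ prices diffusive variance risk, and shifts in the jump distributions of $J^S$ and $J^V$ price the two jump risks.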
Abstract:
In this paper, we propose a new two-parameter lifetime distribution with increasing failure rate, the complementary exponential geometric distribution, which is complementary to the exponential geometric model proposed by Adamidis and Loukas (1998). The new distribution arises in a latent complementary risks scenario, in which the lifetime associated with a particular risk is not observable; rather, we observe only the maximum lifetime value among all risks. The properties of the proposed distribution are discussed, including a formal derivation of its probability density function and explicit algebraic formulas for its reliability and failure rate functions, its moments (including the mean and variance), its coefficient of variation, and its modal value. Parameter estimation is based on the usual maximum likelihood approach. We report the results of a misspecification simulation study performed to assess the extent of misspecification errors when testing the exponential geometric distribution against our complementary one, for different sample sizes and censoring percentages. The methodology is illustrated on four real datasets, and the two modeling approaches are compared.
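For reference, the latent complementary-risks construction can be sketched as follows: if the lifetime is the maximum $X = \max(Y_1,\dots,Y_M)$ of $M$ exponential risks with rate $\lambda$, and $M$ is geometric with $\Pr(M=m)=\theta(1-\theta)^{m-1}$ (a parametrization assumed here for illustration; the paper's may differ by a relabeling of $\theta$), then:

```latex
% Sketch of the geometric-maximum construction (parametrization assumed).
\begin{align*}
  F(x) &= \mathbb{E}\!\left[(1 - e^{-\lambda x})^M\right]
        = \frac{\theta\,(1 - e^{-\lambda x})}{1 - (1-\theta)(1 - e^{-\lambda x})},\\
  f(x) &= \frac{\theta\,\lambda\, e^{-\lambda x}}
               {\left[\,\theta + (1-\theta)\, e^{-\lambda x}\,\right]^{2}},
  \qquad x > 0 ,
\end{align*}
```

from which the reliability and failure rate functions, the moments, and the increasing-failure-rate property follow.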