48 results for Parametric VaR (Value-at-Risk)

in Deakin Research Online - Australia


Relevance: 100.00%

Abstract:

This paper discusses the various aspects of Value-at-Risk (VaR) and the VaR-based risk management process as it pertains to the banking industry. Since its inception in the 1990s, VaR has become the industry standard by which financial institutions measure and manage market risk. However, there has been much debate regarding VaR's validity and the extent of its role within the banking industry. Yet, now that it is an integral part of the regulatory framework, establishing VaR's legitimacy is more important than ever. This paper therefore examines the recent literature on VaR's use as a market risk management tool within the banking environment, in an attempt to clarify some of the more contentious issues raised by researchers. The discussion begins by highlighting the underlying theory on which VaR is based, the specific aspects that have proven controversial, and its use from a regulatory perspective. The focus then turns to the small literature on VaR and asset returns, in an attempt to provide some direction for future research.
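
As an illustration of the parametric (variance-covariance) VaR underlying much of this literature, here is a minimal sketch assuming normally distributed returns; all figures are hypothetical:

```python
import numpy as np
from scipy.stats import norm

def parametric_var(position_value, mu, sigma, confidence=0.99, horizon_days=1):
    """Parametric (variance-covariance) VaR under a normality assumption.

    mu, sigma    : mean and standard deviation of daily returns
    confidence   : e.g. 0.99 for a 99% VaR
    horizon_days : holding period; volatility scales with sqrt(time)
    """
    z = norm.ppf(1.0 - confidence)          # left-tail quantile, e.g. -2.326
    scaled = mu * horizon_days + z * sigma * np.sqrt(horizon_days)
    # Loss threshold exceeded with probability (1 - confidence)
    return -position_value * scaled

# Hypothetical example: $1m portfolio, zero mean, 1.5% daily volatility
# gives roughly a $34,900 one-day 99% VaR.
print(parametric_var(1_000_000, 0.0, 0.015))
```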

Relevance: 100.00%

Abstract:

We examine the numerical performance of various methods for calculating Conditional Value-at-Risk (CVaR), and for portfolio optimization with respect to this risk measure. We concentrate on the method proposed by Rockafellar and Uryasev (Rockafellar, R.T. and Uryasev, S., 2000, Optimization of conditional value-at-risk. Journal of Risk, 2, 21-41), which converts the problem into one of convex optimization. We compare linear programming techniques against the discrete gradient method, a non-smooth optimization technique, and establish the superiority of the latter. We show that non-smooth optimization can be used efficiently for large portfolio optimization, and also examine parallel execution of this method on computer clusters.
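
A minimal sketch of the Rockafellar-Uryasev linear-programming formulation the abstract refers to; the scenario returns are simulated for illustration, and the paper's discrete gradient method is not reproduced:

```python
import numpy as np
from scipy.optimize import linprog

def min_cvar_portfolio(returns, beta=0.95):
    """Minimise portfolio CVaR via the Rockafellar-Uryasev LP.

    returns : (T, n) matrix of scenario returns
    beta    : confidence level of the CVaR

    Decision vector: [w (n weights), alpha (1), u (T auxiliaries)],
    with u_t >= -r_t @ w - alpha and u_t >= 0.
    """
    T, n = returns.shape
    # Objective: alpha + 1/((1-beta)T) * sum(u)
    c = np.concatenate([np.zeros(n), [1.0], np.full(T, 1.0 / ((1 - beta) * T))])
    # Scenario constraints: -r_t @ w - alpha - u_t <= 0
    A_ub = np.hstack([-returns, -np.ones((T, 1)), -np.eye(T)])
    b_ub = np.zeros(T)
    # Fully invested: sum(w) = 1
    A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(T)]).reshape(1, -1)
    b_eq = np.array([1.0])
    # Long-only weights; alpha free; u >= 0
    bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * T
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n], res.x[n], res.fun   # weights, VaR estimate, CVaR

rng = np.random.default_rng(0)            # simulated scenarios, 4 assets
w, var_est, cvar = min_cvar_portfolio(rng.normal(0.0005, 0.01, size=(500, 4)))
print(w.round(3), var_est, cvar)
```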

Relevance: 100.00%

Abstract:

Using ‘low-frequency’ volatility extracted from aggregate volatility shocks in the interest rate swap (hereafter, IRS) market, this paper investigates whether Japanese yen IRS volatility can be explained by macroeconomic risks. The analysis suggests that this low-frequency yen IRS volatility has a strong, positive association with most of the macroeconomic risk proxies (e.g., consumer price index volatility, industrial production volatility, foreign exchange volatility, the slope of the term structure and the money supply), with the exception of the unemployment rate, which is negatively related to IRS volatility. This finding is consistent with the argument that the greater the macroeconomic risk, the greater the use of derivative instruments to hedge or speculate. The relationship between macroeconomic risks and IRS volatility varies slightly across swap maturities but is robust to alternative volatility specifications. This linkage between the swap market and the macroeconomy has practical implications, since market makers and hedgers use the swap rate as a benchmark for pricing long-term interest rates, corporate bonds and various other securities.
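
The kind of volatility-on-macro-proxies regression described above can be sketched as follows; the simulated data and coefficient values are purely illustrative stand-ins for the paper's series:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: six macro risk proxies named after those in the abstract
# (CPI volatility, industrial production volatility, FX volatility,
# term-structure slope, money supply, unemployment); y stands in for the
# low-frequency IRS volatility series.
rng = np.random.default_rng(1)
T = 240                                   # e.g. 20 years of monthly data
X = rng.normal(size=(T, 6))
y = X @ np.array([0.3, 0.2, 0.25, 0.15, 0.1, -0.2]) + rng.normal(0, 0.5, T)

# HAC standard errors guard against the persistence typical of volatility.
model = sm.OLS(y, sm.add_constant(X)).fit(cov_type="HAC",
                                          cov_kwds={"maxlags": 12})
print(model.params)
```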

Relevance: 100.00%

Abstract:

An optimisation framework is proposed to enable investors to select appropriate risk measures in portfolio selection. The framework is verified through experiments in developed markets (e.g., the US stock market), emerging markets (e.g., the South Korean stock market) and global investments. A preselection procedure for large datasets is also introduced to eliminate stocks with low diversification potential before running the portfolio optimisation model. Portfolios are evaluated using four performance indices: the Sortino ratio, the Sharpe ratio, the Stutzer performance index and the Omega measure. Experimental results demonstrate that high-performance, well-diversified portfolios are obtained when modified value-at-risk, variance or semi-variance is used as the risk measure, whereas emphasising only skewness, kurtosis or higher moments in general produces low-performance, poorly diversified portfolios. In addition, applying the preselection procedure to large datasets yields portfolios with both high performance and a high degree of diversification.
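
Three of the four performance indices mentioned reduce to short formulas (the Stutzer index is omitted for brevity). A minimal sketch on hypothetical daily returns:

```python
import numpy as np

def sharpe(r, rf=0.0):
    """Excess mean return per unit of return volatility."""
    ex = r - rf
    return ex.mean() / ex.std(ddof=1)

def sortino(r, target=0.0):
    """Like Sharpe, but penalises only downside deviation from the target."""
    downside = np.minimum(r - target, 0.0)
    return (r.mean() - target) / np.sqrt((downside ** 2).mean())

def omega(r, threshold=0.0):
    """Probability-weighted gains above the threshold over losses below it."""
    gains = np.maximum(r - threshold, 0.0).sum()
    losses = np.maximum(threshold - r, 0.0).sum()
    return gains / losses

rng = np.random.default_rng(2)
r = rng.normal(0.0008, 0.01, 1000)        # hypothetical daily return series
print(sharpe(r), sortino(r), omega(r))
```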

Relevance: 100.00%

Abstract:

A new portfolio risk measure, the uncertainty of the portfolio's fuzzy return, is introduced in this paper. Extending the well-known Sharpe ratio (i.e., the reward-to-variability ratio) of modern portfolio theory, we introduce a fuzzy Sharpe ratio for the fuzzy modeling context. In addition to the new risk measure, we also put forward a reward-to-uncertainty ratio to assess portfolio performance in fuzzy modeling. Corresponding to two approaches based on TM and TW fuzzy arithmetic, two portfolio optimization models are formulated in which the uncertainty of the portfolio's fuzzy return is minimized while the fuzzy Sharpe ratio is maximized. These models are solved by a fuzzy approach or by a genetic algorithm (GA). Solutions of the two proposed models are shown to dominate those of the conventional mean-variance optimization (MVO) model, used prevalently in the financial literature, in terms of portfolio return uncertainty. In terms of portfolio performance evaluated by the fuzzy Sharpe ratio and the reward-to-uncertainty ratio, the model using TW fuzzy arithmetic yields higher-performance portfolios than both the MVO model and the fuzzy model employing TM fuzzy arithmetic. We also find that the fuzzy approach to solving the multiobjective problem achieves better solutions than the GA, although the GA can offer a series of well-diversified portfolio solutions along a Pareto frontier.
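
The paper's TM/TW fuzzy arithmetic is not reproduced here, but the flavour of a reward-to-uncertainty ratio can be sketched with triangular fuzzy returns; everything below is an illustrative assumption rather than the authors' formulation:

```python
import numpy as np

# Illustrative only: each asset return is a triangular fuzzy number
# (low, mode, high). The paper's exact uncertainty measure and its TM/TW
# arithmetic are not reproduced; this just shows the shape of the ratio.
assets = np.array([
    [-0.02, 0.010, 0.05],
    [-0.01, 0.008, 0.03],
    [-0.04, 0.015, 0.08],
])
w = np.array([0.4, 0.4, 0.2])     # portfolio weights, summing to 1

port = w @ assets                 # weighted triangular return (low, mode, high)
low, mode, high = port
expected = (low + 2 * mode + high) / 4.0   # common triangular defuzzification
uncertainty = (high - low) / 2.0           # spread as a crude uncertainty proxy
print(expected / uncertainty)              # Sharpe-like reward-to-uncertainty
```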

Relevance: 100.00%

Abstract:

This paper provides an examination of the determinants of derivative use by Australian corporations. We analysed the characteristics of a sample of 469 firm/year observations drawn from the largest Australian publicly listed companies in 1999 and 2000 to address two issues: the decision to use financial derivatives and the extent to which they are used. Logit analysis suggests that a firm's leverage (distress proxy), size (financial distress and setup costs) and liquidity (financial constraints proxy) are important factors associated with the decision to use derivatives. These findings support the financial distress hypothesis while the evidence on the underinvestment hypothesis is mixed. Additionally, setup costs appear to be important, as larger firms are more likely to use derivatives. Tobit results, on the other hand, show that once the decision to use derivatives has been made, a firm uses more derivatives as its leverage increases and as it pays out more dividends (hedging substitute proxy). The overall results indicate that Australian companies use derivatives with a view to enhancing the firms' value rather than to maximizing managerial wealth. In particular, corporations' derivative policies are mostly concerned with reducing the expected cost of financial distress and managing cash flows. Our inability to identify managerial influences behind the derivative decision suggests a competitive Australian managerial labor market.
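
A sketch of the first-stage logit described above, on simulated firm-year data with the paper's three proxies as hypothetical regressors:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical firm-year data mirroring the paper's explanatory variables:
# leverage (distress proxy), log size (setup costs), liquidity (constraints).
rng = np.random.default_rng(3)
n = 469
X = np.column_stack([
    rng.uniform(0.0, 0.8, n),       # leverage
    rng.normal(20.0, 1.5, n),       # log(total assets)
    rng.uniform(0.5, 3.0, n),       # current ratio (liquidity)
])
# Simulated decision to use derivatives; coefficients are illustrative.
logit_p = -12 + 2.0 * X[:, 0] + 0.55 * X[:, 1] - 0.4 * X[:, 2]
uses_derivatives = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

fit = sm.Logit(uses_derivatives, sm.add_constant(X)).fit(disp=0)
print(fit.params)   # signs should mirror the hypothesised effects
```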

Relevance: 100.00%

Abstract:

In this paper, we consider an extension of the recently proposed bivariate Markov-switching multifractal model of Calvet, Fisher, and Thompson [2006. "Volatility Comovement: A Multifrequency Approach." Journal of Econometrics 131: 179-215]. In particular, we allow correlations between volatility components to be non-homogeneous, with two different parameters governing the volatility correlations at high and low frequencies. Specification tests confirm the added explanatory value of this specification. To explore its practical performance, we apply the model to computing value-at-risk statistics for different classes of financial assets and compare the results with the baseline, homogeneous bivariate multifractal model and the bivariate DCC-GARCH of Engle [2002. "Dynamic Conditional Correlation: A Simple Class of Multivariate Generalized Autoregressive Conditional Heteroskedasticity Models." Journal of Business & Economic Statistics 20 (3): 339-350]. As it turns out, the multifractal model with heterogeneous volatility correlations provides more reliable results than both the homogeneous benchmark and the DCC-GARCH model.
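
Reliability of competing VaR models is conventionally judged by exceedance counts. A minimal sketch of a Kupiec proportion-of-failures check, applicable to the forecasts of any of the models compared above (data simulated; assumes at least one exceedance occurs):

```python
import numpy as np
from scipy.stats import chi2

def kupiec_pof(returns, var_forecasts, alpha=0.01):
    """Kupiec proportion-of-failures test for VaR reliability.

    A model has correct unconditional coverage if the share of days on
    which the loss exceeds the VaR forecast is close to alpha.
    """
    exceed = returns < -var_forecasts      # VaR quoted as a positive loss
    x, T = exceed.sum(), len(returns)
    p_hat = x / T                          # sketch assumes 0 < x < T
    # Likelihood-ratio statistic, asymptotically chi-squared with 1 dof
    lr = -2 * (x * np.log(alpha) + (T - x) * np.log(1 - alpha)
               - x * np.log(p_hat) - (T - x) * np.log(1 - p_hat))
    return x, 1 - chi2.cdf(lr, df=1)       # exceedances, p-value

rng = np.random.default_rng(4)
r = rng.normal(0, 0.01, 1000)              # simulated daily returns
print(kupiec_pof(r, np.full(1000, 0.0233)))  # 2.33 sigma ~ the 1% VaR
```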

Relevance: 100.00%

Abstract:

Contradictory results are documented in the literature regarding which type of mutual fund, Islamic or conventional, has superior performance. Due to the relatively short history of the Islamic mutual funds industry, prior literature has inevitably relied on small sample sizes over short sample periods. Using the longest applicable sample period, this study represents one of the most recent attempts to address this conflicting evidence. We find no clear-cut outperformance by Islamic mutual funds against their conventional peers across the three financial crises in our sample period, with the exception of the most recent global financial crisis, during which Islamic mutual funds generally outperformed their conventional counterparts. We further find that Islamic funds significantly outperformed conventional funds in the riskiest asset class, equity, one year before and during the global financial crisis. We also show that the modified value-at-risk of Islamic mutual funds was significantly lower than that of their conventional peers during the global financial crisis. This seems to indicate that Islamic mutual funds have better risk management than their conventional peers.
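
The modified value-at-risk referred to above is commonly computed via the Cornish-Fisher expansion, which adjusts the normal quantile for skewness and excess kurtosis. A minimal sketch, assuming this is the variant the authors use:

```python
import numpy as np
from scipy.stats import norm, skew, kurtosis

def modified_var(returns, confidence=0.95):
    """Cornish-Fisher 'modified' VaR: the normal quantile is corrected for
    the skewness and excess kurtosis of the return distribution."""
    mu, sigma = returns.mean(), returns.std(ddof=1)
    s = skew(returns)
    k = kurtosis(returns)            # scipy returns *excess* kurtosis
    z = norm.ppf(1 - confidence)
    z_cf = (z
            + (z**2 - 1) * s / 6
            + (z**3 - 3 * z) * k / 24
            - (2 * z**3 - 5 * z) * s**2 / 36)
    return -(mu + z_cf * sigma)      # reported as a positive loss

rng = np.random.default_rng(5)
r = rng.standard_t(df=4, size=2000) * 0.01   # simulated fat-tailed returns
print(modified_var(r, 0.95))
```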

Relevance: 40.00%

Abstract:

Fracture risk is determined by bone mineral density (BMD). The T-score, a measure of fracture risk, is the position of an individual's BMD in relation to a reference range. The aim of this study was to determine the magnitude of change in the T-score when different sampling techniques were used to produce the reference range. Reference ranges were derived from three samples, drawn from the same region: (1) an age-stratified population-based random sample, (2) unselected volunteers, and (3) a selected healthy subset of the population-based sample with no diseases or drugs known to affect bone. T-scores were calculated using the three reference ranges for a cohort of women who had sustained a fracture and as a group had a low mean BMD (ages 35-72 yr; n = 484). For most comparisons, the T-scores for the fracture cohort were more negative using the population reference range. The difference in T-scores reached 1.0 SD. The proportion of the fracture cohort classified as having osteoporosis at the spine was 26, 14, and 23% when the population, volunteer, and healthy reference ranges were applied, respectively. The use of inappropriate reference ranges results in substantial changes to T-scores and may lead to inappropriate management.
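
The T-score itself is a one-line calculation, which makes its sensitivity to the choice of reference range easy to see. A sketch with hypothetical reference values:

```python
def t_score(bmd, ref_mean, ref_sd):
    """T-score: an individual's BMD expressed in standard deviations
    from the mean of the chosen reference range."""
    return (bmd - ref_mean) / ref_sd

# The same measured BMD classified against two hypothetical reference
# ranges; the shift approaches the ~1.0 SD difference reported above.
bmd = 0.82                                        # g/cm^2, illustrative
print(t_score(bmd, ref_mean=1.05, ref_sd=0.11))   # population-based: ~ -2.1
print(t_score(bmd, ref_mean=0.98, ref_sd=0.12))   # volunteer-based:  ~ -1.3
```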

Relevance: 30.00%

Abstract:

This paper examines whether the financial performance of a firm is associated with the risk-taking propensity of its executives, inferred from the structure of their share option portfolios. The objective is to determine whether executives have greater risk-bearing preferences when they hold more share options than shares in their firm. In turn, executives' risk-taking preferences suggest that these decision-makers adopt value-increasing strategies. The results support this notion: in a study of 182 Australian firms, the negative relationship between firm risk and firm performance is weaker when executives hold a higher proportion of share options than shares in their investment in the firm. These results hold implications for executives' compensation contracts. That is, executives who share in their firm's risk via share options are more likely to undertake risky activities with high expected performance outcomes.
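
The weakening of the risk-performance relation described above is a moderation effect, conventionally tested with an interaction term. A sketch on simulated data; the coefficients are illustrative, not the study's estimates:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: a negative risk-performance relation that weakens as
# executives' option-to-share ratio rises (positive interaction term).
rng = np.random.default_rng(6)
n = 182
risk = rng.uniform(0.1, 0.6, n)          # firm risk, e.g. return volatility
opt_ratio = rng.uniform(0.0, 1.0, n)     # options / (options + shares)
perf = 0.12 - 0.30 * risk + 0.25 * risk * opt_ratio + rng.normal(0, 0.03, n)

X = sm.add_constant(np.column_stack([risk, opt_ratio, risk * opt_ratio]))
print(sm.OLS(perf, X).fit().params)      # a positive interaction coefficient
                                         # signals the weakened relation
```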

Relevance: 30.00%

Abstract:

Forest policy decisions inherently involve multiple attributes as well as risk and uncertainty, as they largely deal with complex biological, ecological and socio-political systems. Identifying risk preferences and quantifying their inter-relationships and tradeoffs are useful in formulating better forest policy. Often, technocrats and experts deal with risky decisions, but ideally, stakeholder risk characteristics should be explicitly considered in making policy decisions. This paper analysed societal risk preferences over public forest land-use attributes using multi-attribute utility theory (MAUT). The results indicate significantly risk-averse behaviour towards old-growth forest conservation and forest-based recreation but less risk-averse behaviour towards native timber extraction. Overall, the respondents preferred a more conservative forest land-use option, which is consistent with their risk attitudes. The method provides insights into the risk preferences of forest stakeholders, which could lead to a better understanding of forest management conflicts. Moreover, the method explicitly distinguishes the technical and value components of the decision and is useful in unravelling public risk preferences in multiple-use forest planning situations.
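
In MAUT, risk aversion over an attribute is typically captured by a concave single-attribute utility function. A sketch using an exponential form, a common choice though not necessarily the paper's:

```python
import numpy as np

def exp_utility(x, risk_tolerance):
    """Exponential single-attribute utility on [0, 1]-scaled outcomes.
    Smaller risk_tolerance -> more concave -> more risk-averse."""
    return (1 - np.exp(-x / risk_tolerance)) / (1 - np.exp(-1 / risk_tolerance))

# A risk-averse stakeholder values a sure middling outcome above a 50/50
# lottery over the extremes, the comparison elicited in MAUT interviews.
rt = 0.3                                          # illustrative tolerance
print(exp_utility(0.5, rt))                       # certain outcome: ~0.84
print(0.5 * exp_utility(0.0, rt) + 0.5 * exp_utility(1.0, rt))  # lottery: 0.5
```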

Relevance: 30.00%

Abstract:

Reuse of wastewater to irrigate food crops is being practiced in many parts of the world and is becoming more commonplace as the competition for, and stresses on, freshwater resources intensify. But there are risks associated with wastewater irrigation, including the possibility of transmission of pathogens causing infectious disease, to both workers in the field and to consumers buying and eating produce irrigated with wastewater. To manage these risks appropriately we need objective and quantitative estimates of them. This is typically achieved through one of two modelling approaches: deterministic or stochastic. Each parameter in a deterministic model is represented by a single value, whereas in stochastic models probability functions are used. Stochastic models are theoretically superior because they account for variability and uncertainty, but they are computationally demanding and not readily accessible to water resource and public health managers. We constructed models to estimate risk of enteric virus infection arising from the consumption of wastewater-irrigated horticultural crops (broccoli, cucumber and lettuce), and compared the resultant levels of risk between the deterministic and stochastic approaches. Several scenarios were tested for each crop, accounting for different concentrations of enteric viruses and different lengths of environmental exposure (i.e. the time between the last irrigation event and harvest, when the viruses are liable to decay or inactivation). In most situations modelled the two approaches yielded similar estimates of risk (within 1 order-of-magnitude). The two methods diverged most markedly, up to around 2 orders-of-magnitude, when there was large uncertainty associated with the estimate of virus concentration and the exposure period was short (1 day). Therefore, in some circumstances deterministic modelling may offer water resource managers a pragmatic alternative to stochastic modelling, but its usefulness as a surrogate will depend upon the level of uncertainty in the model parameters.
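
A minimal sketch of the deterministic versus stochastic contrast, using an exponential dose-response model; every parameter value below is an illustrative assumption, not the paper's calibration:

```python
import numpy as np

R = 0.5            # illustrative dose-response parameter (per virion)
DECAY = 0.7        # assumed first-order viral decay rate on produce, per day
RESIDUAL = 0.1     # mL of irrigation water retained per gram of produce
INTAKE = 10.0      # grams of produce consumed per serving

def p_infection(conc, days):
    """Exponential dose-response P = 1 - exp(-r * dose), where the dose is
    the retained viral load after `days` of environmental decay."""
    dose = conc * RESIDUAL * INTAKE * np.exp(-DECAY * days)
    return 1.0 - np.exp(-R * dose)

# Deterministic: a single point estimate of virus concentration (viruses/mL)
print(p_infection(1.0, days=1))

# Stochastic: lognormal uncertainty around the same median concentration;
# the approaches diverge most when sigma is large and the exposure is short.
rng = np.random.default_rng(7)
conc = rng.lognormal(mean=0.0, sigma=1.5, size=100_000)
print(p_infection(conc, days=1).mean())    # Monte Carlo mean risk
```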

Relevance: 30.00%

Abstract:

Background: Analysis of recurrent event data is frequently needed in clinical and epidemiological studies. An important issue in such analysis is how to account for the dependence of the events within an individual and any unobserved heterogeneity of the event propensity across individuals.

Methods: We applied a number of conditional frailty and nonfrailty models in an analysis involving recurrent myocardial infarction (MI) events in the Long-Term Intervention with Pravastatin in Ischaemic Disease study. A multiple-variable risk prediction model was developed for both males and females.

Results: A Weibull model with a gamma frailty term fitted the data better than other frailty models for each gender. Among nonfrailty models, the stratified survival model fitted the data best for each gender. The relative risk estimated by the elapsed time model was close to that estimated by the gap time model. We found that a cholesterol-lowering drug, pravastatin (the intervention being tested in the trial), had a significant protective effect against the occurrence of myocardial infarction in men (HR = 0.71, 95% CI 0.60–0.83). However, the treatment effect was not significant in women, likely due to the smaller sample size (HR = 0.75, 95% CI 0.51–1.10). There were no significant interactions between the treatment effect and each recurrent MI event (p = 0.24 for men and p = 0.55 for women). The risk of developing an MI event for a male who had an MI event during follow-up was about 3.4 times (95% CI 2.6–4.4) that of males who did not have an MI event. The corresponding relative risk for a female was about 7.8 (95% CI 4.4–13.6).

Limitations: The number of female patients was relatively small compared with their male counterparts, which may result in low statistical power to detect real differences in the effect of treatment and other potential risk factors.

Conclusions: The conditional frailty model suggested that, after accounting for all the risk factors in the model, there was still unmeasured heterogeneity of the risk for myocardial infarction, indicating the effect of subject-specific risk factors. These risk prediction models can be used to classify cardiovascular disease patients into different risk categories and may be useful for the most effective targeting of preventive therapies for cardiovascular disease.
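
The Weibull-gamma frailty model favoured above has a closed-form marginal survival function, because the gamma frailty integrates out analytically. A sketch with illustrative parameters:

```python
import numpy as np

def weibull_cum_hazard(t, shape, scale):
    """Baseline Weibull cumulative hazard H0(t) = (t / scale) ** shape."""
    return (t / scale) ** shape

def marginal_survival(t, shape, scale, theta):
    """Population survival when each subject's hazard is multiplied by a
    mean-1 gamma frailty with variance theta:
    S(t) = (1 + theta * H0(t)) ** (-1/theta)."""
    return (1 + theta * weibull_cum_hazard(t, shape, scale)) ** (-1 / theta)

t = np.linspace(0, 10, 6)                         # illustrative time grid
# theta -> 0 recovers the no-frailty Weibull survival exp(-H0(t)).
print(marginal_survival(t, shape=1.2, scale=8.0, theta=0.8))
print(np.exp(-weibull_cum_hazard(t, 1.2, 8.0)))   # homogeneous benchmark
```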

Relevance: 30.00%

Abstract:

The aim of this article is to review the development and assessment of cardiovascular risk prediction models and to discuss the predictive value of a risk factor as well as to introduce new assessment methods to evaluate a risk prediction model. Many cardiovascular risk prediction models have been developed during the past three decades. However, there has not been consistent agreement regarding how to appropriately assess a risk prediction model, especially when new markers are added to an existing model. The area under the receiver operating characteristic (ROC) curve has traditionally been used to assess the discriminatory ability of a risk prediction model. However, recent studies suggest that this method has its limitations and cannot be the sole approach to evaluate the usefulness of a new marker. New assessment methods are being developed to appropriately assess a risk prediction model and they will be gradually used in clinical and epidemiological studies.
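
A sketch of the traditional AUC comparison discussed above: fit a risk model with and without a candidate marker and compare discriminatory ability. The data are simulated; a genuinely informative marker often moves the AUC only slightly, which is the limitation the article highlights:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Hypothetical cohort: three established risk factors plus one new marker.
rng = np.random.default_rng(8)
n = 5000
base = rng.normal(size=(n, 3))                 # established risk factors
marker = rng.normal(size=(n, 1))               # candidate new marker
logit = base @ np.array([0.9, 0.6, 0.4]) + 0.25 * marker[:, 0] - 2.0
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # simulated event indicator

for X in (base, np.hstack([base, marker])):
    p = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
    print(roc_auc_score(y, p))                 # AUC gain is typically small
```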