353 results for Value at Risk (VaR)

in Queensland University of Technology - ePrints Archive


Relevance: 100.00%

Abstract:

This study compares Value-at-Risk (VaR) measures for Australian banks over a period that includes the Global Financial Crisis (GFC) to determine whether methodology and parameter selection matter for the capital adequacy holdings that ultimately support a bank through a crisis period. The VaR methodology promoted under Basel II was widely criticised during the GFC for its failure to capture downside risk. However, results from this study indicate that 1-year parametric and historical models produce better measures of VaR than models with longer time frames. VaR estimates produced using Monte Carlo simulation show a high percentage of violations, but with a lower average magnitude of violation when violations occur. VaR estimates produced by the ARMA-GARCH model also show a relatively high percentage of violations; however, the average magnitude of a violation is quite low. Our findings support the design of the revised Basel II VaR methodology, which has also been adopted under Basel III.
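
A minimal sketch of the kind of backtest described above: a 1-year (roughly 250-day) historical-simulation VaR and a count of violations, i.e. days when the realised loss exceeds the forecast. Window length, confidence level and the simulated heavy-tailed returns are illustrative assumptions, not the study's data or exact procedure.

```python
import numpy as np

def historical_var(returns, window=250, alpha=0.99):
    """Rolling historical-simulation VaR: the empirical (1 - alpha) quantile
    of the previous `window` returns, reported as a positive loss."""
    var = np.full(len(returns), np.nan)
    for t in range(window, len(returns)):
        var[t] = -np.quantile(returns[t - window:t], 1 - alpha)
    return var

def violations(returns, var):
    """A violation occurs when the realised loss exceeds the VaR forecast."""
    mask = ~np.isnan(var)
    hits = (-returns[mask]) > var[mask]
    return hits.sum(), hits.mean()   # count and rate of violations

# Example with simulated heavy-tailed daily returns
rng = np.random.default_rng(0)
r = rng.standard_t(df=4, size=1500) * 0.01
v = historical_var(r)
count, rate = violations(r, v)
print(f"violations: {count}, rate: {rate:.3f} (expected ~0.01 at 99%)")
```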

Relevance: 100.00%

Abstract:

In this thesis we are interested in financial risk, and the instrument we use is Value-at-Risk (VaR). VaR is the maximum loss over a given period of time at a given confidence level. Many definitions of VaR exist, and some will be introduced throughout this thesis. There are two main ways to measure risk and VaR: through volatility and through percentiles. Large volatility in financial returns implies a greater probability of large losses, but also a greater probability of large profits. Percentiles describe tail behaviour. The estimation of VaR is a complex task, and it is important to know the main characteristics of financial data in order to choose the best model. The existing literature is very wide, at times controversial, but helpful in drawing a picture of the problem. It is commonly recognised that financial data are characterised by heavy tails, time-varying volatility, asymmetric response to bad and good news, and skewness. Ignoring any of these features can lead to underestimating VaR, with a possible ultimate consequence being the default of the protagonist (firm, bank or investor). In recent years, skewness has attracted special attention. An open problem is the detection and modelling of time-varying skewness: is skewness constant, or is there significant variability which can in turn affect the estimation of VaR? This thesis aims to answer this question and to open the way to a new approach for modelling time-varying volatility (conditional variance) and skewness simultaneously. The new tools are modifications of the Generalised Lambda Distributions (GLDs). These are four-parameter distributions which allow the first four moments to be modelled nearly independently; in particular we are interested in what we will call para-moments, i.e., mean, variance, skewness and kurtosis. The GLDs are used in two different ways. Firstly, semi-parametrically, we consider a moving window to estimate the parameters and calculate the percentiles of the GLDs. Secondly, parametrically, we attempt to extend the GLDs to include time-varying dependence in the parameters. We use local linear regression to estimate the conditional mean and conditional variance semi-parametrically. The method is not efficient enough to capture all the dependence structure in the three indices (ASX 200, S&P 500 and FT 30); however, it provides an idea of the underlying data-generating process (DGP) and helps in choosing a good technique to model the data. We find that the GLDs suggest that moments up to the fourth order do not always exist; their existence appears to vary over time. This is a very important finding, considering that past papers (see for example Bali et al., 2008; Hashmi and Tay, 2007; Lanne and Pentti, 2007) modelled time-varying skewness while implicitly assuming the existence of the third moment. The GLDs also suggest that the mean, variance, skewness and, in general, the conditional distribution vary over time, as already indicated by the existing literature. The GLDs give good results in estimating VaR on three real indices, ASX 200, S&P 500 and FT 30, with results very similar to those provided by historical simulation.
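
The central question above is whether skewness (and higher moments) vary over time. A minimal sketch of how one might look at this descriptively is to track sample skewness and kurtosis over a moving window; the window length and the use of simple sample moments (rather than fitted GLD para-moments, which are the thesis's actual tool) are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def rolling_shape(returns, window=500):
    """Sample skewness and excess kurtosis over a moving window."""
    skew, kurt = [], []
    for t in range(window, len(returns)):
        w = returns[t - window:t]
        skew.append(stats.skew(w))
        kurt.append(stats.kurtosis(w))   # Fisher definition: excess kurtosis
    return np.array(skew), np.array(kurt)

# Simulated heavy-tailed returns standing in for an index series
rng = np.random.default_rng(1)
r = rng.standard_t(df=5, size=3000) * 0.01
s, k = rolling_shape(r)
print(f"skewness range: [{s.min():.2f}, {s.max():.2f}], "
      f"excess kurtosis range: [{k.min():.2f}, {k.max():.2f}]")
```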

Relevance: 100.00%

Abstract:

A pervasive and puzzling feature of banks’ Value-at-Risk (VaR) is its abnormally high level, which leads to excessive regulatory capital. A possible explanation for the tendency of commercial banks to overstate their VaR is that they incompletely account for the diversification effect among broad risk categories (e.g., equity, interest rate, commodity, credit spread, and foreign exchange). By underestimating the diversification effect, banks’ proprietary VaR models produce overly prudent market risk assessments. In this paper, we examine empirically the validity of this hypothesis using actual VaR data from major US commercial banks. In contrast to the VaR diversification hypothesis, we find that US banks show no sign of systematically underestimating the diversification effect. In particular, the diversification effects used by banks are very close to (and quite often larger than) our empirical diversification estimates. A direct implication of this finding is that individual VaRs for each broad risk category, just like aggregate VaRs, are biased risk assessments.
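
The diversification effect discussed above can be summarised as the gap between the sum of stand-alone VaRs across broad risk categories and the aggregate VaR actually reported. A minimal sketch with hypothetical figures; the percentage convention used here is an illustrative choice and not necessarily the paper's exact definition.

```python
def diversification_effect(category_vars, aggregate_var):
    """Fraction of the summed stand-alone VaRs removed by diversification."""
    standalone_sum = sum(category_vars)
    return (standalone_sum - aggregate_var) / standalone_sum

# Hypothetical figures (millions): equity, interest rate, FX, commodity, credit spread
category_vars = [40.0, 55.0, 20.0, 10.0, 25.0]
aggregate_var = 95.0
print(f"diversification effect: {diversification_effect(category_vars, aggregate_var):.1%}")
```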

Relevance: 100.00%

Abstract:

In this paper we study both the level of Value-at-Risk (VaR) disclosure and the accuracy of the disclosed VaR figures for a sample of US and international commercial banks. To measure the level of VaR disclosures, we develop a VaR Disclosure Index that captures many different facets of market risk disclosure. Using panel data over the period 1996–2005, we find an overall upward trend in the quantity of information released to the public. We also find that Historical Simulation is by far the most popular VaR method. We assess the accuracy of VaR figures by studying the number of VaR exceedances and whether actual daily VaRs contain information about the volatility of subsequent trading revenues. Unlike the level of VaR disclosure, the quality of VaR disclosure shows no sign of improvement over time. We find that VaR computed using Historical Simulation contains very little information about future volatility.
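
A standard way to judge VaR accuracy from the number of exceedances is Kupiec's unconditional-coverage (proportion-of-failures) test, sketched below. This is an illustrative assumption about the testing machinery, not necessarily the procedure used in the paper.

```python
import numpy as np
from scipy import stats

def kupiec_pof(n_obs, n_exceed, coverage=0.99):
    """Kupiec proportion-of-failures likelihood-ratio test.

    Null hypothesis: the observed exceedance rate equals 1 - coverage.
    Returns the LR statistic and its chi-squared(1) p-value."""
    p = 1 - coverage
    pi_hat = n_exceed / n_obs
    log_null = (n_obs - n_exceed) * np.log(1 - p) + n_exceed * np.log(p)
    log_alt = (n_obs - n_exceed) * np.log(1 - pi_hat) + n_exceed * np.log(pi_hat)
    lr = -2 * (log_null - log_alt)
    return lr, 1 - stats.chi2.cdf(lr, df=1)

# e.g. 9 exceedances over 250 trading days of a 99% VaR (about 2.5 expected)
lr, pval = kupiec_pof(250, 9, coverage=0.99)
print(f"LR = {lr:.2f}, p-value = {pval:.4f}")
```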

Relevance: 100.00%

Abstract:

Recent literature has focused on realized volatility models to predict financial risk. This paper studies the benefit of explicitly modeling jumps in this class of models for Value-at-Risk (VaR) prediction. Several popular realized volatility models are compared in terms of their VaR forecasting performance through a Monte Carlo study and an analysis based on empirical data for eight Chinese stocks. The results suggest that careful modeling of jumps in realized volatility models can substantially improve VaR prediction, especially for emerging markets, where jumps play a stronger role than in developed markets.
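
One common way to make jumps explicit in realized volatility is to compare realized variance with bipower variation and attribute the excess to jumps. The sketch below illustrates that decomposition under simulated intraday returns; it does not reproduce the specific realized volatility models compared in the paper.

```python
import numpy as np

def realized_measures(intraday_returns):
    """Realized variance, bipower variation and the implied jump component
    for one trading day of intraday returns."""
    r = np.asarray(intraday_returns)
    rv = np.sum(r ** 2)
    bv = (np.pi / 2) * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))
    jump = max(rv - bv, 0.0)   # continuous variation is approximated by bv
    return rv, bv, jump

rng = np.random.default_rng(2)
r = rng.normal(0, 0.001, size=78)    # hypothetical 5-minute returns over a day
r[40] += 0.01                        # inject one jump
rv, bv, jump = realized_measures(r)
print(f"RV={rv:.6f}  BV={bv:.6f}  jump component={jump:.6f}")
```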

Relevance: 100.00%

Abstract:

The interdependence of Greece and other European stock markets, and the subsequent portfolio implications, are examined in the wavelet and variational mode decomposition domains. In applying the decomposition techniques, we analyze the structural properties of the data and distinguish between the short- and long-term dynamics of stock market returns. First, GARCH-type models are fitted to obtain the standardized residuals. Next, different copula functions are evaluated and, based on conventional information criteria, the time-varying Joe-Clayton copula is chosen to model the tail dependence between the stock markets. The short-run lower tail dependence time paths show a sudden increase in comovement during the global financial crisis. The results for long-run dependence suggest that European stock markets have a higher interdependence with the Greek stock market. Individual countries’ Value-at-Risk (VaR) separates the countries into two distinct groups. Finally, the two-asset portfolio VaR measures identify potential markets for diversifying investments in the Greek stock market.
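
Once joint return scenarios for a pair of markets are available (from the GARCH-copula model described above, or, as in this illustrative sketch, simply from correlated simulated returns), the two-asset portfolio VaR is the loss quantile of the weighted portfolio return. The weights and confidence level below are assumptions, not the paper's settings.

```python
import numpy as np

def portfolio_var(returns_a, returns_b, weight_a=0.5, alpha=0.95):
    """Two-asset portfolio VaR as the (1 - alpha) quantile of weighted joint returns."""
    port = weight_a * np.asarray(returns_a) + (1 - weight_a) * np.asarray(returns_b)
    return -np.quantile(port, 1 - alpha)

rng = np.random.default_rng(3)
greece = rng.standard_t(df=4, size=2000) * 0.015                  # hypothetical Greek returns
other = 0.6 * greece + rng.standard_t(df=4, size=2000) * 0.010    # correlated second market
print(f"95% portfolio VaR: {portfolio_var(greece, other):.4f}")
```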

Relevance: 90.00%

Abstract:

The high morbidity and mortality associated with atherosclerotic coronary vascular disease (CVD) and its complications are being lessened by increased knowledge of risk factors, effective preventative measures and proven therapeutic interventions. However, significant CVD morbidity remains, and sudden cardiac death continues to be a presenting feature for some patients subsequently diagnosed with CVD. Coronary vascular disease is also the leading cause of anaesthesia-related complications. Stress electrocardiography/exercise testing is predictive of 10-year risk of CVD events, and the cardiovascular variables used to score this test are monitored peri-operatively. Similar physiological time-series datasets are being subjected to data mining methods for the prediction of medical diagnoses and outcomes. This study aims to find predictors of CVD using anaesthesia time-series data and patient risk factor data. Several pre-processing and predictive data mining methods are applied to these data. Physiological time-series data related to anaesthetic procedures are subjected to pre-processing methods for removal of outliers and calculation of moving averages, as well as data summarisation and data abstraction methods. Feature selection methods of both wrapper and filter types are applied to the derived physiological time-series variable sets alone and to the same variables combined with risk factor variables. The ability of these methods to identify subsets of highly correlated but non-redundant variables is assessed. The major dataset is derived from the entire anaesthesia population; subsets of this population are considered to be at increased anaesthesia risk based on their need for more intensive monitoring (invasive haemodynamic monitoring and additional ECG leads). Because of the unbalanced class distribution in the data, majority-class under-sampling is applied, and the Kappa statistic, together with the misclassification rate and the area under the ROC curve (AUC), is used for evaluation of models generated using different prediction algorithms. The performance of models derived from feature-reduced datasets reveals the filter method, Cfs subset evaluation, to be the most consistently effective, although Consistency-derived subsets tended towards slightly increased accuracy but markedly increased complexity. The use of misclassification rate (MR) for model performance evaluation is influenced by class distribution; this can be addressed by consideration of the AUC or Kappa statistic, as well as by evaluation of subsets with an under-sampled majority class. The noise and outlier removal pre-processing methods produced models with MR ranging from 10.69 to 12.62, with the lowest value being for data from which both outliers and noise were removed (MR 10.69). For the raw time-series dataset, MR is 12.34. Feature selection reduces MR to between 9.8 and 10.16, with time-segmented summary data (dataset F) having an MR of 9.8 and raw time-series summary data (dataset A) an MR of 9.92. However, for all datasets based on time-series data alone, the complexity is high. For most pre-processing methods, Cfs could identify a subset of correlated and non-redundant variables from the time-series-only datasets, but models derived from these subsets have one leaf only. MR values are consistent with the class distribution in the subset folds evaluated in the n-fold cross-validation method.

For models based on Cfs-selected time-series-derived and risk factor (RF) variables, the MR ranges from 8.83 to 10.36, with dataset RF_A (raw time-series data and RF) at 8.85 and dataset RF_F (time-segmented time-series variables and RF) at 9.09. Models based on counts of outliers and counts of data points outside the normal range (dataset RF_E), and on variables derived from time series transformed using Symbolic Aggregate Approximation (SAX) with associated time-series pattern cluster membership (dataset RF_G), perform least well, with MR of 10.25 and 10.36 respectively. For coronary vascular disease prediction, nearest neighbour (NNge) and the support vector machine based method, SMO, have the highest MR at 10.1 and 10.28, while logistic regression (LR) and the decision tree (DT) method, J48, have MR of 8.85 and 9.0 respectively. DT rules are the most comprehensible and clinically relevant. The increase in predictive accuracy achieved by adding risk factor variables to time-series-based models is significant, while the addition of time-series-derived variables to models based on risk factor variables alone is associated with a trend towards improved performance. Data mining of feature-reduced anaesthesia time-series variables together with risk factor variables can produce compact and moderately accurate models able to predict coronary vascular disease. Decision tree analysis of time-series data combined with risk factor variables yields rules which are more accurate than models based on time-series data alone. The limited additional value provided by electrocardiographic variables, compared to the use of risk factors alone, is consistent with recent suggestions that exercise electrocardiography (exECG) under standardised conditions has limited additional diagnostic value over risk factor analysis and symptom pattern. The pre-processing used in this study had limited effect when time-series variables and risk factor variables were used together as model input. In the absence of risk factor input, the use of time-series variables after outlier removal, and of time-series variables based on physiological values falling outside the accepted normal range, is associated with some improvement in model performance.
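
A minimal sketch of the evaluation strategy described above: under-sample the majority class in the training data, fit a classifier, and score with the Kappa statistic and AUC rather than misclassification rate alone. The original study used Weka (Cfs subset evaluation, J48, SMO, NNge); scikit-learn, a synthetic dataset and a generic decision tree stand in here purely as illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import cohen_kappa_score, roc_auc_score

# Imbalanced synthetic data standing in for the anaesthesia feature set
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.9, 0.1],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Under-sample the majority class in the training set to balance the classes
rng = np.random.default_rng(0)
maj, mino = np.where(y_tr == 0)[0], np.where(y_tr == 1)[0]
keep = np.concatenate([rng.choice(maj, size=len(mino), replace=False), mino])
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr[keep], y_tr[keep])

# Evaluate with Kappa and AUC, which are less sensitive to class imbalance than MR
pred = clf.predict(X_te)
print(f"Kappa: {cohen_kappa_score(y_te, pred):.3f}")
print(f"AUC:   {roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]):.3f}")
```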

Relevance: 90.00%

Abstract:

Background: Genome-wide association studies have identified multiple genetic variants associated with prostate cancer risk which explain a substantial proportion of familial relative risk. These variants can be used to stratify individuals by their risk of prostate cancer. Methods: We genotyped 25 prostate cancer susceptibility loci in 40,414 individuals and derived a polygenic risk score (PRS). We estimated empirical odds ratios (OR) for prostate cancer associated with different risk strata defined by PRS and derived age-specific absolute risks of developing prostate cancer by PRS stratum and family history. Results: The prostate cancer risk for men in the top 1% of the PRS distribution was 30.6-fold (95% CI, 16.4-57.3) that of men in the bottom 1%, and 4.2-fold (95% CI, 3.2-5.5) the median risk. The absolute risk of prostate cancer by age 85 was 65.8% for a man with a family history in the top 1% of the PRS distribution, compared with 3.7% for a man in the bottom 1%. The PRS was only weakly correlated with serum PSA level (correlation = 0.09). Conclusions: Risk profiling can identify men at substantially increased or reduced risk of prostate cancer. The effect size, measured by OR per unit PRS, was higher in men at younger ages and in men with a family history of prostate cancer. Incorporating additional newly identified loci into a PRS should improve the predictive value of risk profiles. Impact: We demonstrate that risk profiling based on SNPs can identify men at substantially increased or reduced risk, which could have useful implications for targeted prevention and screening programs.
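
A polygenic risk score of the kind described above is a weighted sum of risk-allele counts across the susceptibility loci, with per-allele log odds ratios as weights. The sketch below uses hypothetical effect sizes and simulated genotypes, not the 25 loci or effect estimates from the study.

```python
import numpy as np

def polygenic_risk_score(allele_counts, log_odds_ratios):
    """PRS = sum over SNPs of (risk-allele count, 0/1/2) x per-allele log(OR)."""
    return np.dot(allele_counts, log_odds_ratios)

rng = np.random.default_rng(4)
log_or = rng.normal(0.1, 0.05, size=25)            # hypothetical per-allele effects
genotypes = rng.integers(0, 3, size=(1000, 25))    # 1000 men, 25 loci, counts 0/1/2
prs = genotypes @ log_or
top_1pct = prs >= np.quantile(prs, 0.99)           # the high-risk stratum used for profiling
print(f"mean PRS: {prs.mean():.2f}, men in top 1%: {top_1pct.sum()}")
```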

Relevance: 40.00%

Abstract:

Hedging against tail events in equity markets has been forcefully advocated in the aftermath of the recent global financial crisis. Whether this is beneficial to long-horizon investors such as employees enrolled in defined contribution (DC) plans, however, has been subject to criticism. We conduct historical simulations since 1928 to examine the effectiveness of active and passive tail risk hedging using out-of-the-money put options for hypothetical equity portfolios of DC plan participants with 20 years to retirement. Our findings show that the cost of tail hedging exceeds the benefits for a majority of the plan participants during the sample period. However, for a significant number of simulations, hedging results in superior outcomes relative to an unhedged position. Active tail hedging is more effective when employees confront several panic-driven periods characterized by short, sharp swings in the equity markets over the investment horizon. Passive hedging, on the other hand, proves beneficial when they encounter an extremely rare event like the Great Depression, when equity markets go into deep and prolonged decline.
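
A minimal sketch of the mechanics of a put-based tail hedge: the portfolio holds equity plus an out-of-the-money put, so deep declines are truncated at the strike while a premium drag is paid in normal periods. The strike moneyness, premium and single-period framing are illustrative assumptions, not the paper's simulation design.

```python
def hedged_period_return(equity_return, moneyness=0.90, premium=0.01):
    """Return of 1 unit of equity plus one put struck at `moneyness` x spot,
    bought for `premium` (both expressed as fractions of the spot price)."""
    terminal = 1.0 + equity_return
    put_payoff = max(moneyness - terminal, 0.0)   # put pays off only in deep declines
    return terminal + put_payoff - premium - 1.0

# The hedge caps the loss near -(1 - moneyness) - premium but drags in calm periods
for r in (-0.40, -0.10, 0.00, 0.15):
    print(f"equity {r:+.0%} -> hedged {hedged_period_return(r):+.2%}")
```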

Relevance: 40.00%

Abstract:

This study examines the role of corporate philanthropy in the management of reputation risk and shareholder value of the top 100 ASX-listed Australian firms over the three years 2011-2013. The results demonstrate the business case for corporate philanthropy, and hence encourage it, by showing that increasing a firm’s investment in corporate giving as a percentage of profit before tax increases the likelihood of an increase in shareholder value. However, the proviso is that firms must also manage their reputation risk at the same time. There is a negative association between corporate giving and shareholder value (Tobin’s Q) which is mitigated by firms’ management of reputation. The economic significance of this result is that for every cent in the dollar the firm spends on corporate giving, Tobin’s Q will decrease by 0.413%; in contrast, if the firm increases its reputation by 1 point, Tobin’s Q will increase by 0.267%. Consequently, the interaction of corporate giving and reputation risk management is positively associated with shareholder value. These results are robust to controls for potential endogeneity and reverse causality. This paper assists both academics and practitioners by demonstrating that the benefits of corporate philanthropy extend beyond a gesture to improve reputation or an attempt to increase financial performance, to a direct interplay between all of these factors in which the benefits far outweigh the costs.

Relevance: 30.00%

Abstract:

The purpose of this paper is to extend marketing knowledge of perceived risk in online transactions beyond the current positivistic, hypotheses-driven research by providing qualitative insights into how individuals construct their accounts of perceived risk online. Additionally, the study reported in this paper aims to explore how communication sources influence both these subjective constructions and individuals' behavioural experiences with transaction activity on the web. Design/methodology/approach - The study was developed within a grounded theory method. Ten in-depth interviews were conducted and analysed using constant comparison of incidents procedures to provide rich descriptions of the interviewees' subjective perceptions and lived experiences with online transaction activity. Findings - The findings provide insights into how the human element is present in individuals' perceptions and constructions of their accounts of the risk involved online. The findings also identify the influence of mass communication sources on the construction of these accounts. The study provides insights into whether change agent communication sources, such as marketers or web designers, influence consumers' behaviours towards online transaction activity through mediating their perceptions of the risks involved. The study also reveals how social communication networks influence the interviewees' decisions to use the web for transaction activities, in particular online purchasing, and how the group in this study might act as a communication source to influence others. Research limitations/implications - While the findings cannot be generalised to the internet population overall, the sample used was able to provide relevant information regarding the phenomenon of interest. Future research should continue to examine perceived risk and the influence of communication sources such as e-mail, discussion groups and virtual communities. Originality/value - The value of the paper lies in permitting the participants to account for perceived risk for themselves. The findings explore what this means at increasing levels of personal relevance, and the influence of communication sources in creating, sustaining or mediating perceptions of this phenomenon.

Relevance: 30.00%

Abstract:

China has a reputation as an economy based on utility: the large-scale manufacture of low-priced goods. But useful values like functionality, fitness for purpose and efficiency are only part of the story. More important are what Veblen called ‘honorific’ values, arguably the driving force of development, change and value in any economy. To understand the Chinese economy therefore, it is not sufficient to point to its utilitarian aspect. Honorific status-competition is a more fundamental driver than utilitarian cost-competition. We argue that ‘social network markets’ are the expression of these honorific values, relationships and connections that structure and coordinate individual choices. This paper explores how such markets are developing in China in the area of fashion and fashion media. These, we argue, are an expression of ‘risk culture’ for high-end entrepreneurial consumers and producers alike, providing a stimulus to dynamic innovation in the arena of personal taste and comportment, as part of an international cultural system based on constant change. We examine the launch of Vogue China in 2005, and China’s reception as a fashion player among the international editions of Vogue, as an expression of a ‘decisive moment’ in the integration of China into an international social network market based on honorific values.