852 results for Parametric VaR (Value-at-Risk)


Relevance: 100.00%

Abstract:

This work implements a methodology for including higher-order moments in portfolio selection, using the Generalized Hyperbolic distribution, and then carries out a comparative analysis against the Markowitz model.
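
As a hedged illustration of the idea (not the paper's implementation), the sketch below fits `scipy.stats.genhyperbolic` to a simulated return series and compares the resulting 5% parametric VaR with a plain normal (mean-variance) fit; the data and parameter values are hypothetical.

```python
# Minimal sketch: higher moments via a Generalized Hyperbolic fit vs. a normal fit.
# Requires scipy >= 1.8 for scipy.stats.genhyperbolic. Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical daily returns with fat tails and negative skew.
returns = 0.0005 + 0.01 * rng.standard_t(df=4, size=1000) - 0.002 * rng.exponential(0.5, 1000)

# Normal (mean-variance) fit: only the first two moments matter.
mu, sigma = returns.mean(), returns.std(ddof=1)
var_normal = -(mu + sigma * stats.norm.ppf(0.05))

# Generalized Hyperbolic fit: captures skewness and excess kurtosis as well.
gh_params = stats.genhyperbolic.fit(returns)
var_gh = -stats.genhyperbolic.ppf(0.05, *gh_params)

print(f"sample skew {stats.skew(returns):+.2f}, excess kurt {stats.kurtosis(returns):+.2f}")
print(f"95% one-day VaR  normal: {var_normal:.4f}   GH: {var_gh:.4f}")
```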

Relevance: 100.00%

Abstract:

Dependence between financial series is a fundamental parameter in the estimation of risk models. Value at Risk (VaR) is one of the most important measures used in financial risk management. Several estimation methods are currently available, such as historical simulation, which assumes no distribution for the returns of the risk factors or assets, and parametric methods, which assume normally distributed returns. This paper introduces copula theory as a measure of dependence between series and estimates an ARMA-GARCH-copula model to compute the Value at Risk of a portfolio composed of two financial series, the US Dollar-Peso and Euro-Peso exchange rates. The results show that the copula-based VaR estimate is more accurate than the traditional methods.
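
As a rough illustration of the modelling chain described above (AR-GARCH marginals linked by a copula, not the authors' code), the sketch below fits AR(1)-GARCH(1,1) models with the `arch` package and couples the standardized residuals through a Gaussian copula; the function names, equal weights and placeholder return series are assumptions, and the abstract does not specify which copula family was used.

```python
# Minimal ARMA-GARCH-copula VaR sketch. Input series are assumed to be
# pandas Series of daily returns in percent.
import numpy as np
from arch import arch_model
from scipy import stats

def fit_ar_garch(returns):
    """AR(1)-GARCH(1,1) with Student-t innovations."""
    model = arch_model(returns, mean="AR", lags=1, vol="GARCH", p=1, q=1, dist="t")
    return model.fit(disp="off")

def copula_var(usd_cop, eur_cop, weights=(0.5, 0.5), alpha=0.01, n_sim=100_000, seed=0):
    rng = np.random.default_rng(seed)
    fits = [fit_ar_garch(s) for s in (usd_cop, eur_cop)]

    # Map standardized residuals to normal scores and estimate the copula correlation.
    scores = []
    for f in fits:
        sr = np.asarray(f.std_resid, dtype=float)
        sr = sr[~np.isnan(sr)]
        scores.append(stats.norm.ppf(stats.rankdata(sr) / (len(sr) + 1)))
    rho = np.corrcoef(np.column_stack(scores), rowvar=False)

    # Simulate joint Gaussian-copula shocks, then Student-t marginal innovations.
    z = rng.multivariate_normal(np.zeros(2), rho, size=n_sim)
    u = stats.norm.cdf(z)
    sims = []
    for j, f in enumerate(fits):
        nu = f.params["nu"]                                          # estimated t dof
        eps = stats.t.ppf(u[:, j], df=nu) * np.sqrt((nu - 2) / nu)   # unit-variance shocks
        fc = f.forecast(horizon=1)
        mu = fc.mean.values[-1, 0]
        sig = np.sqrt(fc.variance.values[-1, 0])
        sims.append(mu + sig * eps)
    port = weights[0] * sims[0] + weights[1] * sims[1]
    return -np.quantile(port, alpha)                                 # one-day VaR at level alpha

# Usage (hypothetical return series, in percent):
# var_99 = copula_var(usd_cop_returns, eur_cop_returns, alpha=0.01)
```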

Relevance: 100.00%

Abstract:

It is widely accepted that some of the most accurate Value-at-Risk (VaR) estimates are based on an appropriately specified GARCH process. But when the forecast horizon is greater than the frequency of the GARCH model, such predictions have typically required time-consuming simulations of the aggregated returns distributions. This paper shows that fast, quasi-analytic GARCH VaR calculations can be based on new formulae for the first four moments of aggregated GARCH returns. Our extensive empirical study compares the Cornish–Fisher expansion with the Johnson SU distribution for fitting distributions to analytic moments of normal and Student t, symmetric and asymmetric (GJR) GARCH processes to returns data on different financial assets, for the purpose of deriving accurate GARCH VaR forecasts over multiple horizons and significance levels.
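
The Cornish-Fisher side of this comparison reduces to a short formula; the sketch below shows it under the usual convention (mean, variance, skewness and excess kurtosis of the h-day aggregated return), with purely illustrative moment values.

```python
# Quasi-analytic VaR from the first four moments of aggregated returns
# via the Cornish-Fisher quantile adjustment.
from scipy.stats import norm

def cornish_fisher_var(mean, variance, skew, excess_kurt, alpha=0.01):
    """Adjust the normal alpha-quantile for skewness and excess kurtosis, then read off VaR."""
    z = norm.ppf(alpha)
    z_cf = (z
            + (z**2 - 1) * skew / 6
            + (z**3 - 3 * z) * excess_kurt / 24
            - (2 * z**3 - 5 * z) * skew**2 / 36)
    return -(mean + variance**0.5 * z_cf)

# Example with hypothetical 10-day aggregated moments:
print(cornish_fisher_var(mean=0.001, variance=0.0009, skew=-0.6, excess_kurt=2.5, alpha=0.01))
```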

Relevance: 100.00%

Abstract:

One of the main challenges in computing portfolio risk measures is how to aggregate risks. The aggregation must somehow capture the diversification effect present in a trade or in a portfolio, and much work has been devoted to finding the best way to do this. Some models, such as parametric Value at Risk (VaR), assume that the marginal distribution of each variable in the portfolio follows the same distribution, namely the normal, and focus only on modelling the volatility and the correlation matrix correctly. Models such as historical VaR use the empirical distribution of each variable and do not concern themselves with the shape of the resulting multivariate distribution. Copula theory is therefore an attractive alternative, since it allows multivariate distributions to be built without imposing any restriction on the marginal distributions, let alone on the multivariate ones. In this work we compare this methodology with the other risk-calculation approaches, namely multivariate parametric VaR (VEC, Diagonal, BEKK, EWMA, CCC and DCC) and historical VaR, for a portfolio with identical positions in four risk factors: Pre252, Cupo252, the Bovespa index and the Dow Jones index.
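
Two of the benchmark methods named above, historical VaR and an EWMA-based parametric VaR, can be sketched compactly; the code below is an illustrative comparison only (equal weights, simulated data, lambda = 0.94), not the thesis implementation, and it omits the multivariate GARCH and copula models.

```python
# Historical VaR vs. EWMA (RiskMetrics-style) parametric VaR for an
# equally weighted four-factor portfolio. Data are placeholders.
import numpy as np
from scipy.stats import norm

def historical_var(returns, weights, alpha=0.01):
    """Empirical quantile of realized portfolio returns; no distributional assumption."""
    port = returns @ weights
    return -np.quantile(port, alpha)

def ewma_parametric_var(returns, weights, alpha=0.01, lam=0.94):
    """Normal VaR with an EWMA covariance matrix (lambda = 0.94 as in RiskMetrics)."""
    cov = np.cov(returns, rowvar=False)          # initialize with the sample covariance
    for r in returns:
        cov = lam * cov + (1 - lam) * np.outer(r, r)
    sigma_p = np.sqrt(weights @ cov @ weights)
    return -norm.ppf(alpha) * sigma_p

# Usage with hypothetical daily returns for four risk factors:
rng = np.random.default_rng(1)
rets = rng.multivariate_normal(np.zeros(4), 1e-4 * (np.eye(4) + 0.3), size=1000)
w = np.full(4, 0.25)
print(historical_var(rets, w), ewma_parametric_var(rets, w))
```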

Relevance: 100.00%

Abstract:

In this work, Value at Risk (VaR) is approached through principal component analysis (PCA) of the yield curve. With this technique, the movements of the yield curve are decomposed into a small number of basic, mutually independent factors: a shift factor, which moves all rates of the curve in the same direction, up or down; a twist factor, which rotates the curve so that short rates move in one direction and long rates in the other; and finally a curvature (butterfly) movement, which affects short and long maturities in the same direction and intermediate maturities in the opposite direction. Combining these factors produces hypothetical yield-curve scenarios that can be used to estimate portfolio profits and losses. The largest loss across the generated scenarios is an intuitive and fast way to estimate VaR and, as we show, tends to be a conservative estimate for the chosen loss percentile. There are papers applying PCA to the Brazilian yield curve, but we are not aware of any that uses PCA to build scenarios and compute VaR, as is done here. We find that the first principal component produces a shift combined with a slight tilt of the curve, in contrast with results for yield curves in other countries, which display practically parallel shifts.
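
A minimal sketch of the scenario construction described above follows (not the thesis code): PCA on yield-curve changes, scenarios built from the leading components scaled to empirical quantiles of their scores, and the worst scenario loss taken as the VaR proxy. The DV01 vector, the number of components and the quantile scaling are assumptions for illustration.

```python
# PCA scenario VaR sketch for a yield curve. Curve changes are in basis points;
# dv01 is the loss per +1bp move at each curve vertex.
import numpy as np

def pca_scenario_var(curve_changes, dv01, n_components=3, alpha=0.05):
    """Worst loss over +/- scenarios built from the leading principal components."""
    X = curve_changes - curve_changes.mean(axis=0)
    eigval, eigvec = np.linalg.eigh(np.cov(X, rowvar=False))
    loadings = eigvec[:, np.argsort(eigval)[::-1][:n_components]]
    scores = X @ loadings
    losses = []
    for k in range(n_components):
        for q in (alpha, 1 - alpha):
            shock = np.quantile(scores[:, k], q) * loadings[:, k]  # scenario move per vertex (bp)
            losses.append(dv01 @ shock)                            # linear loss from DV01s
    return max(losses)                                             # most adverse scenario = VaR proxy

# Usage with hypothetical data: 500 days of changes for 8 curve vertices.
rng = np.random.default_rng(2)
changes_bp = rng.multivariate_normal(np.zeros(8), np.full((8, 8), 0.5) + np.eye(8), 500)
dv01_per_bp = np.array([120, 250, 400, 380, 300, 220, 150, 90.0])
print(pca_scenario_var(changes_bp, dv01_per_bp))
```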

Relevance: 100.00%

Abstract:

Building risk-neutral densities (RND) from options data can provide market-implied expectations about the future behavior of a financial variable, and market expectations on financial variables may influence macroeconomic policy decisions. It can also be useful for decision making in corporate and financial institutions. This paper uses the Liu et al. (2007) approach to estimate option-implied risk-neutral densities for the Brazilian Real/US Dollar exchange rate. We then compare the RND with actual exchange rates, on a monthly basis, in order to estimate the relative risk aversion of investors and also obtain a real-world density for the exchange rate. We are the first to calculate relative risk aversion and the option-implied real-world density for an emerging-market currency. Our empirical application uses a sample of Brazilian Real/US Dollar options traded at BM&F-Bovespa from 1999 to 2011. The RND is estimated using a mixture of two log-normal distributions, and the real-world density is then obtained by means of the Liu et al. (2007) parametric risk transformations. The relative risk aversion is calculated for the full sample; our estimate of the relative risk aversion parameter is around 2.7, which is in line with other articles that have estimated this parameter for the Brazilian economy, such as Araújo (2005) and Issler and Piqueira (2000). Our out-of-sample evaluation shows that the RND has some ability to forecast the Brazilian Real exchange rate. Abe et al. (2007) also found mixed results in their out-of-sample analysis of the RND's forecasting ability for exchange-rate options. However, when we incorporate risk aversion into the RND in order to obtain a real-world density, the out-of-sample performance improves substantially, with satisfactory results in both the Kolmogorov and Berkowitz tests. We would therefore suggest not using the "pure" RND, but rather taking risk aversion into account in order to forecast the Brazilian Real exchange rate.
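
The mixture-of-two-lognormals step can be sketched as a least-squares fit of the mixture call-pricing formula to observed call prices; the code below is an illustration under simplifying assumptions (European calls, fixed r and T, hypothetical inputs), not the authors' estimation code.

```python
# Fit a mixture-of-two-lognormals risk-neutral density to call prices.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def mixture_call(K, w, m1, s1, m2, s2, r, T):
    """Call price when log(S_T) is a two-component normal mixture (lognormal mixture for S_T)."""
    def component(m, s):
        d1 = (m - np.log(K) + s**2) / s
        d2 = d1 - s
        return np.exp(m + 0.5 * s**2) * norm.cdf(d1) - K * norm.cdf(d2)
    return np.exp(-r * T) * (w * component(m1, s1) + (1 - w) * component(m2, s2))

def fit_rnd(strikes, call_prices, r, T, x0):
    """Least-squares fit of (w, m1, s1, m2, s2) to observed call prices."""
    def loss(p):
        w, m1, s1, m2, s2 = p
        return np.sum((mixture_call(strikes, w, m1, s1, m2, s2, r, T) - call_prices) ** 2)
    bounds = [(0.01, 0.99), (None, None), (1e-4, None), (None, None), (1e-4, None)]
    return minimize(loss, x0, bounds=bounds).x

# Usage: strikes/prices would come from BRL/USD options; x0 would be centred on the log-forward.
# params = fit_rnd(strikes, prices, r=0.10, T=1/12, x0=[0.5, log_fwd, 0.05, log_fwd, 0.15])
```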

Relevance: 100.00%

Abstract:

This paper proposes a novel way to calculate tail risk incorporating risk-neutral information without depending on options data. Proceeding via a nonparametric approach, we derive a stochastic discount factor that correctly prices a chosen panel of stock returns. Under the assumption that state probabilities are homogeneous, we back out the risk-neutral distribution and calculate five primitive tail risk measures, all extracted from this risk-neutral probability. The final measure is then set as the first principal component of the preliminary measures. Using six Fama-French size and book-to-market portfolios to calculate our tail risk, we find that it has significant predictive power when forecasting market returns one month ahead, aggregate U.S. consumption and GDP one quarter ahead, and macroeconomic activity indexes. Conditional Fama-MacBeth two-pass cross-sectional regressions reveal that our factor carries a positive risk premium when controlling for traditional factors.
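
One way to read the nonparametric step is sketched below: with equal state probabilities, a nonnegative SDF is backed out by nonnegative least squares from the pricing conditions, normalized into risk-neutral state probabilities, and used for a simple tail quantile. The paper's five primitive measures and their principal-component aggregation are not reproduced here; this is an interpretation under stated assumptions, not the authors' procedure.

```python
# Back out a nonnegative SDF from a panel of gross returns under equal state
# probabilities and compute one simple risk-neutral tail measure.
import numpy as np
from scipy.optimize import nnls

def risk_neutral_tail(gross_returns, market_gross, alpha=0.05):
    """gross_returns: T x N panel (e.g. six size/book-to-market portfolios)."""
    T, N = gross_returns.shape
    A = gross_returns.T / T                      # pricing equations: (1/T) sum_t m_t R_{t,i} = 1
    m, _ = nnls(A, np.ones(N))                   # nonnegative least-squares SDF
    q = m / m.sum()                              # risk-neutral state probabilities
    # One primitive tail measure: the risk-neutral alpha-quantile of the market gross return.
    order = np.argsort(market_gross)
    cum_q = np.cumsum(q[order])
    return market_gross[order][np.searchsorted(cum_q, alpha)]

# Usage with hypothetical monthly gross returns:
# tail = risk_neutral_tail(panel_gross, market_gross, alpha=0.05)
```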

Relevance: 100.00%

Abstract:

This is one of the first works to address the problem of assessing the effect of default for capital-allocation purposes in the trading book for listed equities, and more specifically for the Brazilian market. The problem emerged in recent crises, which led regulators to impose an additional capital charge for these positions. For this reason the Basel Committee introduced a new risk metric known as the Incremental Risk Charge (IRC). This risk measure is essentially a one-year VaR at a 99.9% confidence level, intended to capture the effect of default and of rating migrations for trading-book instruments. In this dissertation the IRC is focused on equities and consequently does not take rating migrations into account. The credit risk of equity issuers was assessed with Moody's KMV, which is based on the Merton model, and was used to compute the probability of default (PD) for the cases used as examples in the dissertation. After computing the PD, returns were simulated by Monte Carlo after applying PCA, which yields correlated returns for simulating portfolio losses. Since we are dealing with equities, the LGD was kept constant at the value specified by Basel. The results obtained for the adapted IRC were compared with a 252-day VaR at a 99.9% confidence level, which allowed us to conclude that the IRC is a relevant risk metric, on the same scale as a 252-day VaR. In addition, the adapted IRC was able to anticipate default events. All results are based on portfolios composed of stocks in the Bovespa index.
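
The Merton-style default probability that feeds the adapted IRC can be sketched as follows; asset value, drift, volatility and the default barrier are treated as known inputs here, whereas the KMV approach backs them out from equity data.

```python
# Merton-model default probability: P(asset value at the horizon < debt barrier).
import numpy as np
from scipy.stats import norm

def merton_pd(asset_value, debt, mu_asset, sigma_asset, horizon=1.0):
    """One-year PD from the distance to default under lognormal asset dynamics."""
    dd = (np.log(asset_value / debt) + (mu_asset - 0.5 * sigma_asset**2) * horizon) \
         / (sigma_asset * np.sqrt(horizon))      # distance to default
    return norm.cdf(-dd)

# Example: firm with assets 100, default barrier 70, 8% drift, 25% asset volatility.
print(merton_pd(100.0, 70.0, mu_asset=0.08, sigma_asset=0.25))
```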

Relevance: 100.00%

Abstract:

The purpose of this article is to give the reader a first introduction to the Value at Risk (VaR) methodology. The need to understand this methodology is justified by the Basel Accord and by the Capital Requirements Directive imposed by the European Union, both of which propose VaR methods for determining the minimum capital that commercial banks must hold in their operations. But what exactly is risk? Risk can be defined as the volatility of expected results.
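
A worked example of the variance-based (parametric) VaR the article introduces, with hypothetical position size, mean and volatility:

```python
# 99% one-day parametric VaR under normally distributed returns:
# VaR = -(mu + z_{0.01} * sigma) * position value.
from scipy.stats import norm

position_value = 1_000_000       # hypothetical portfolio, in EUR
mu, sigma = 0.0002, 0.012        # hypothetical daily mean and volatility of returns
alpha = 0.01

var_99 = -(mu + norm.ppf(alpha) * sigma) * position_value
print(f"99% one-day parametric VaR: EUR {var_99:,.0f}")
```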

Relevance: 100.00%

Abstract:

"Risk measures in financial mathematics." Value-at-Risk (VaR) is a risk measure whose use is required by banking supervisors. The advantage of VaR, as a quantile of the profit-and-loss distribution, lies above all in its simple interpretability. A drawback is that the left tail of the probability distribution is not taken into account. Moreover, computing VaR is difficult because quantiles are not additive, and the biggest disadvantage of VaR is its lack of subadditivity. Alternatives such as Expected Shortfall are therefore investigated. In this thesis, financial risk measures are first introduced and some of their basic properties are established. We consider various parametric and nonparametric methods for determining VaR, including their advantages and disadvantages, and we study parametric and nonparametric estimators of VaR in discrete time. We present portfolio optimization problems in the Black-Scholes model with a VaR constraint and with a variance constraint, and explain the advantage of the first approach over the second. We solve utility-maximization problems for terminal wealth with bounded VaR and with bounded variance. VaR says nothing about the loss beyond the quantile, whereas Expected Shortfall takes it into account; we therefore use Expected Shortfall instead of the VaR risk measure considered by Emmer, Korn and Klüppelberg (2001) for portfolio optimization in the Black-Scholes model.
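
A small sketch of the two measures being compared, estimated nonparametrically from a loss sample (illustrative data only):

```python
# Historical VaR and Expected Shortfall (average loss beyond the VaR quantile).
import numpy as np

def var_es(losses, alpha=0.99):
    """Historical VaR and Expected Shortfall at confidence level alpha."""
    var = np.quantile(losses, alpha)
    es = losses[losses >= var].mean()            # ES looks past the quantile; VaR does not
    return var, es

rng = np.random.default_rng(3)
losses = rng.standard_t(df=3, size=10_000)       # hypothetical heavy-tailed loss sample
print(var_es(losses, alpha=0.99))
```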

Relevance: 100.00%

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-06

Relevance: 100.00%

Abstract:

Investors today are particularly concerned with investing more safely, obtaining a good return without putting their capital at risk. In this sense, the possibility of developing new tools that support better investment decisions is increasingly relevant in finance. One of the most important contributions available for this purpose is Markowitz's proposal to build optimally diversified portfolios. The problem, however, is how to choose among such portfolios. This project therefore compared the standard-deviation model (Sharpe ratio) with Value at Risk (VaR) as the risk concept for choosing an optimal portfolio in a developed market, in this case the United States; through backtesting it also analysed whether bearish, stable or bullish market cycles affect this choice. After building and applying the model, it was concluded that under normal conditions in a developed market, choosing one portfolio over another yields greater benefits when the risk concept used is VaR under a Monte Carlo simulation model rather than the standard deviation. When the model was applied to a less developed and more volatile environment such as Colombia, no significant advantage was found between the two approaches (standard deviation and VaR).
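
A hedged sketch of the comparison (not the thesis model): rank two candidate portfolios once by Sharpe ratio and once by a Monte Carlo VaR under hypothetical inputs, and see whether the two criteria pick the same portfolio.

```python
# Sharpe ratio vs. Monte Carlo VaR as the selection criterion for candidate portfolios.
import numpy as np

def sharpe(weights, mu, cov, rf=0.0):
    return (weights @ mu - rf) / np.sqrt(weights @ cov @ weights)

def mc_var(weights, mu, cov, alpha=0.05, n_sim=100_000, seed=0):
    sims = np.random.default_rng(seed).multivariate_normal(mu, cov, n_sim)
    return -np.quantile(sims @ weights, alpha)

# Hypothetical annualized inputs for three assets and two candidate portfolios.
mu = np.array([0.08, 0.11, 0.15])
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
candidates = [np.array([0.5, 0.3, 0.2]), np.array([0.2, 0.3, 0.5])]

best_by_sharpe = max(candidates, key=lambda w: sharpe(w, mu, cov))
best_by_var = min(candidates, key=lambda w: mc_var(w, mu, cov))
print(best_by_sharpe, best_by_var)
```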

Relevance: 100.00%

Abstract:

We present a general multistage stochastic mixed 0-1 problem where uncertainty appears everywhere: in the objective function, the constraint matrix and the right-hand side. The uncertainty is represented by a scenario tree, which may be symmetric or nonsymmetric. The stochastic model is converted into a mixed 0-1 Deterministic Equivalent Model in compact representation. Because of the difficulty of the problem, the solution offered by the stochastic model has traditionally been obtained by optimizing the expected value (i.e., the mean) of the objective function over the scenarios, usually along a time horizon. This approach (so-called risk-neutral) has the drawback of providing a solution that ignores the variance of the objective value across scenarios and, hence, the occurrence of scenarios whose objective value falls below the expected one. Alternatively, we present several approaches for risk-averse management: a scenario-immunization strategy; optimization of the well-known Value-at-Risk (VaR) and several variants of Conditional Value-at-Risk; optimization of the expected mean minus the weighted probability that a "bad" scenario occurs for the solution provided by the model; optimization of the expected objective value subject to stochastic dominance constraints (SDC) for a set of profiles given by pairs of threshold objective values and either bounds on the probability of not reaching the thresholds or the expected shortfall over them; and optimization of a mixture of the VaR and SDC strategies.
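
Of the risk-averse alternatives listed above, the CVaR building block has a standard linear-programming form (the Rockafellar-Uryasev formulation); the sketch below implements it for a single-stage, continuous-weight case and deliberately omits the multistage and mixed 0-1 structure of the paper.

```python
# Minimize CVaR_alpha of portfolio loss over a finite scenario set as a linear program.
import numpy as np
from scipy.optimize import linprog

def min_cvar_weights(scenario_returns, alpha=0.95):
    """Long-only weights summing to 1 that minimize CVaR_alpha of the scenario losses."""
    S, n = scenario_returns.shape
    # Decision vector: [x (n weights), eta (VaR level), z (S shortfalls)].
    c = np.concatenate([np.zeros(n), [1.0], np.full(S, 1.0 / ((1 - alpha) * S))])
    # Shortfall constraints: z_s >= loss_s - eta, i.e. -R_s.x - eta - z_s <= 0.
    A_ub = np.hstack([-scenario_returns, -np.ones((S, 1)), -np.eye(S)])
    b_ub = np.zeros(S)
    A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(S)]).reshape(1, -1)
    b_eq = [1.0]
    bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * S
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    return res.x[:n], res.fun                    # optimal weights and minimal CVaR

# Usage with hypothetical scenario returns (500 scenarios, 4 assets):
# w, cvar = min_cvar_weights(np.random.default_rng(4).normal(0.001, 0.02, (500, 4)))
```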

Relevance: 100.00%

Abstract:

The high morbidity and mortality associated with atherosclerotic coronary vascular disease (CVD) and its complications are being lessened by increased knowledge of risk factors, effective preventative measures and proven therapeutic interventions. However, significant CVD morbidity remains, and sudden cardiac death continues to be a presenting feature for some patients subsequently diagnosed with CVD. Coronary vascular disease is also the leading cause of anaesthesia-related complications. Stress electrocardiography/exercise testing is predictive of 10-year risk of CVD events, and the cardiovascular variables used to score this test are monitored peri-operatively. Similar physiological time-series datasets are being subjected to data mining methods for the prediction of medical diagnoses and outcomes. This study aims to find predictors of CVD using anaesthesia time-series data and patient risk-factor data. Several pre-processing and predictive data mining methods are applied to these data. Physiological time-series data related to anaesthetic procedures are subjected to pre-processing methods for removal of outliers, calculation of moving averages, and data summarisation and abstraction. Feature selection methods of both wrapper and filter types are applied to derived physiological time-series variable sets alone and to the same variables combined with risk-factor variables. The ability of these methods to identify subsets of highly correlated but non-redundant variables is assessed. The major dataset is derived from the entire anaesthesia population, and subsets of this population are considered to be at increased anaesthesia risk based on their need for more intensive monitoring (invasive haemodynamic monitoring and additional ECG leads). Because of the unbalanced class distribution in the data, majority-class under-sampling and the Kappa statistic, together with misclassification rate and area under the ROC curve (AUC), are used to evaluate models generated using different prediction algorithms. The performance of models derived from feature-reduced datasets reveals the filter method, Cfs subset evaluation, to be most consistently effective, although Consistency-derived subsets tended to slightly increase accuracy at the cost of markedly increased complexity. The use of misclassification rate (MR) for model performance evaluation is influenced by class distribution; this could be eliminated by consideration of the AUC or Kappa statistic, as well as by evaluation of subsets with an under-sampled majority class. The noise and outlier removal pre-processing methods produced models with MR ranging from 10.69 to 12.62, the lowest value being for data from which both outliers and noise were removed (MR 10.69). For the raw time-series dataset, MR is 12.34. Feature selection reduces MR to between 9.8 and 10.16, with time-segmented summary data (dataset F) at 9.8 and raw time-series summary data (dataset A) at 9.92. However, for all datasets based on time-series data alone, complexity is high. For most pre-processing methods, Cfs could identify a subset of correlated and non-redundant variables from the time-series-only datasets, but models derived from these subsets have one leaf only. MR values are consistent with the class distribution in the subset folds evaluated in the n-fold cross-validation method.
For models based on Cfs-selected time-series-derived and risk-factor (RF) variables, MR ranges from 8.83 to 10.36, with dataset RF_A (raw time-series data and RF) at 8.85 and dataset RF_F (time-segmented time-series variables and RF) at 9.09. The models based on counts of outliers and counts of data points outside the normal range (dataset RF_E), and on variables derived from time series transformed using Symbolic Aggregate Approximation (SAX) with associated time-series pattern cluster membership (dataset RF_G), perform least well, with MR of 10.25 and 10.36 respectively. For coronary vascular disease prediction, nearest neighbour (NNge) and the support vector machine based method, SMO, have the highest MR, 10.1 and 10.28, while logistic regression (LR) and the decision tree (DT) method, J48, have MR of 8.85 and 9.0 respectively. DT rules are the most comprehensible and clinically relevant. The predictive accuracy gained by adding risk-factor variables to time-series-variable-based models is significant, and the addition of time-series-derived variables to models based on risk-factor variables alone is associated with a trend towards improved performance. Data mining of feature-reduced anaesthesia time-series variables together with risk-factor variables can produce compact and moderately accurate models able to predict coronary vascular disease. Decision tree analysis of time-series data combined with risk-factor variables yields rules that are more accurate than models based on time-series data alone. The limited additional value provided by electrocardiographic variables compared with risk factors alone is similar to recent suggestions that exercise electrocardiography (exECG) under standardised conditions has limited additional diagnostic value over risk-factor analysis and symptom pattern. The pre-processing used in this study had limited effect when time-series variables and risk-factor variables are used as model input. In the absence of risk-factor input, the use of time-series variables after outlier removal, and of time-series variables based on physiological values being outside the accepted normal range, is associated with some improvement in model performance.
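
A hedged sketch of the evaluation loop described above: the algorithms named (Cfs, J48, SMO, NNge) are Weka implementations, so the code below is a scikit-learn stand-in with placeholder feature and label arrays, showing only the majority-class under-sampling, a decision tree, and the MR/kappa/AUC reporting.

```python
# Under-sample the majority class, fit a decision tree, and report
# misclassification rate, Cohen's kappa and AUC.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import cohen_kappa_score, roc_auc_score

def undersample_majority(X, y, seed=0):
    """Randomly drop majority-class rows until both classes are the same size."""
    rng = np.random.default_rng(seed)
    idx_pos, idx_neg = np.where(y == 1)[0], np.where(y == 0)[0]
    major, minor = (idx_neg, idx_pos) if len(idx_neg) > len(idx_pos) else (idx_pos, idx_neg)
    keep = rng.choice(major, size=len(minor), replace=False)
    sel = np.concatenate([keep, minor])
    return X[sel], y[sel]

def evaluate(X, y):
    Xb, yb = undersample_majority(X, y)
    X_tr, X_te, y_tr, y_te = train_test_split(Xb, yb, test_size=0.3, stratify=yb, random_state=0)
    clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    return {"MR": float(np.mean(pred != y_te)),
            "kappa": cohen_kappa_score(y_te, pred),
            "AUC": roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])}

# Usage: evaluate(features_timeseries_plus_riskfactors, cvd_labels)
```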

Relevance: 100.00%

Abstract:

Background: Genome-wide association studies have identified multiple genetic variants associated with prostate cancer risk which explain a substantial proportion of familial relative risk. These variants can be used to stratify individuals by their risk of prostate cancer. Methods: We genotyped 25 prostate cancer susceptibility loci in 40,414 individuals and derived a polygenic risk score (PRS). We estimated empirical odds ratios (OR) for prostate cancer associated with different risk strata defined by the PRS and derived age-specific absolute risks of developing prostate cancer by PRS stratum and family history. Results: The prostate cancer risk for men in the top 1% of the PRS distribution was 30.6-fold (95% CI, 16.4-57.3) that of men in the bottom 1%, and 4.2-fold (95% CI, 3.2-5.5) that of men at median risk. The absolute risk of prostate cancer by age 85 was 65.8% for a man with family history in the top 1% of the PRS distribution, compared with 3.7% for a man in the bottom 1%. The PRS was only weakly correlated with serum PSA level (correlation = 0.09). Conclusions: Risk profiling can identify men at substantially increased or reduced risk of prostate cancer. The effect size, measured by OR per unit PRS, was higher in men at younger ages and in men with a family history of prostate cancer. Incorporating additional newly identified loci into the PRS should improve the predictive value of risk profiles. Impact: We demonstrate that risk profiling based on SNPs can identify men at substantially increased or reduced risk, which could have useful implications for targeted prevention and screening programs.
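
A minimal sketch of how such a PRS is formed and used to stratify risk (illustrative data and effect sizes only; not the study's pipeline):

```python
# Polygenic risk score: sum of risk-allele counts weighted by per-SNP log odds ratios,
# followed by stratification into extreme percentiles.
import numpy as np

def polygenic_risk_score(genotypes, log_odds_ratios):
    """genotypes: individuals x SNPs matrix of risk-allele counts (0, 1, 2)."""
    return genotypes @ log_odds_ratios

def top_vs_bottom_percentile(prs, pct=1.0):
    lo, hi = np.percentile(prs, [pct, 100 - pct])
    return prs <= lo, prs >= hi                  # boolean masks for the two extreme strata

# Usage with hypothetical data: 40,414 men genotyped at 25 susceptibility loci.
rng = np.random.default_rng(5)
geno = rng.integers(0, 3, size=(40_414, 25))
betas = rng.normal(0.08, 0.03, 25)               # hypothetical per-allele log ORs
prs = polygenic_risk_score(geno, betas)
bottom, top = top_vs_bottom_percentile(prs, pct=1.0)
```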