933 results for Empirical risk


Relevance:

30.00%

Publisher:

Abstract:

This thesis studies binary time series models and their applications in empirical macroeconomics and finance. In addition to previously suggested models, new dynamic extensions are proposed to the static probit model commonly used in the earlier literature. In particular, we are interested in probit models with an autoregressive model structure. In Chapter 2, the main objective is to compare the predictive performance of the static and dynamic probit models in forecasting U.S. and German business cycle recession periods. Financial variables, such as interest rates and stock market returns, are used as predictors. The empirical results suggest that the recession periods are predictable and that dynamic probit models, especially those with the autoregressive structure, outperform the static model. Chapter 3 proposes a Lagrange Multiplier (LM) test for the usefulness of the autoregressive structure of the probit model. The finite sample properties of the LM test are examined with simulation experiments. The results indicate that the two alternative LM test statistics have reasonable size and power in large samples. In small samples, a parametric bootstrap method is suggested to obtain approximately correct size. In Chapter 4, the predictive power of dynamic probit models in predicting the direction of stock market returns is examined. The novel idea is to use the recession forecast (see Chapter 2) as a predictor of the sign of the stock return. The evidence suggests that the signs of U.S. excess stock returns over the risk-free return are predictable both in and out of sample. The new "error correction" probit model yields the best forecasts and also outperforms other predictive models, such as ARMAX models, in terms of statistical and economic goodness-of-fit measures. Chapter 5 generalizes the analysis of the univariate models considered in Chapters 2-4 to the case of a bivariate model. A new bivariate autoregressive probit model is applied to predict the current state of the U.S. business cycle and growth rate cycle periods. Evidence of predictability of both cycle indicators is obtained, and the bivariate model is found to outperform the univariate models in terms of predictive power.
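
For orientation, the model family can be written in the notation standard in this literature (a sketch; the thesis's exact specifications may differ). With y_t the binary recession indicator and \Omega_{t-1} the information set,

\[
P(y_t = 1 \mid \Omega_{t-1}) = \Phi(\pi_t), \qquad
\pi_t^{\text{static}} = \omega + x_{t-1}'\beta, \qquad
\pi_t^{\text{autoregressive}} = \omega + \alpha\,\pi_{t-1} + \delta\,y_{t-1} + x_{t-1}'\beta,
\]

where \Phi is the standard normal cumulative distribution function and x_{t-1} collects the financial predictors.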

Relevance:

30.00%

Publisher:

Abstract:

Modeling and forecasting implied volatility (IV) is important to both practitioners and academics, especially in trading, pricing, hedging, and risk management activities, all of which require accurate volatility estimates. The task has become more challenging since the 1987 stock market crash, as implied volatilities (IVs) recovered from stock index options display two patterns: the volatility smirk (skew) and the volatility term structure, which, examined together, form a rich implied volatility surface (IVS). This implies that the assumptions behind the Black-Scholes (1973) model do not hold empirically, as asset prices are influenced by many underlying risk factors. This thesis, consisting of four essays, models and forecasts implied volatility in the presence of these empirical regularities of options markets. The first essay models the dynamics of the IVS by extending the Dumas, Fleming and Whaley (DFW) (1998) framework: using moneyness in the implied forward price and OTM put-call options on the FTSE 100 index, nonlinear optimization is used to estimate different models and thereby produce rich, smooth IVSs. The constant-volatility model fails to explain the variation in the rich IVS. It is found that three factors can explain about 69-88% of the variance in the IVS; of this, on average, 56% is explained by the level factor, 15% by the term-structure factor, and an additional 7% by the jump-fear factor. The second essay proposes a quantile regression model for the contemporaneous asymmetric return-volatility relationship, generalizing the model of Hibbert et al. (2008). The results show a strong negative asymmetric return-volatility relationship at various quantiles of the IV distribution; the relationship increases monotonically when moving from the median quantile to the uppermost quantile (95%), so OLS underestimates it at the upper quantiles. Additionally, the asymmetric relationship is more pronounced with the smirk (skew)-adjusted volatility index measure than with the old volatility index measure. The volatility indices are ranked in terms of asymmetric volatility as follows: VIX, VSTOXX, VDAX, and VXN. The third essay examines the information content of the new VDAX volatility index for forecasting daily Value-at-Risk (VaR) and compares its VaR forecasts with those of Filtered Historical Simulation and RiskMetrics. All daily VaR models are backtested over 1992-2009 using unconditional coverage, independence, conditional coverage, and quadratic-score tests. The VDAX is found to subsume almost all the information required for daily VaR forecasts for a portfolio on the DAX 30 index, and implied-VaR models outperform all other VaR models. The fourth essay models the risk factors driving swaption IVs. It is found that three factors can explain 94-97% of the variation in each of the EUR, USD, and GBP swaption IVs. There are significant linkages across factors, and bi-directional causality is at work between the factors implied by EUR and USD swaption IVs. Furthermore, the factors implied by EUR and USD IVs respond to each other's shocks; surprisingly, however, GBP does not affect them. Finally, calibration results for the string market model show that it can efficiently reproduce (or forecast) the volatility surface for each of the swaption markets.
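
As an illustration of how such a quantile-regression specification could be set up, here is a minimal Python sketch; the column names, the asymmetry term, and the simulated data are placeholders, and the essay's actual specification (a generalization of Hibbert et al. (2008)) may include further regressors.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: daily changes in an implied volatility index and the
# contemporaneous index return (simulated here purely as a placeholder).
rng = np.random.default_rng(0)
df = pd.DataFrame({"ret": rng.normal(0, 0.01, 1000)})
df["d_iv"] = -2.0 * df["ret"] - 4.0 * df["ret"].clip(upper=0) + rng.normal(0, 0.01, 1000)
df["ret_neg"] = df["ret"].clip(upper=0.0)  # negative part of the return, to capture asymmetry

# Estimate the contemporaneous return-IV relation at several quantiles of d_iv.
for q in (0.05, 0.50, 0.95):
    fit = smf.quantreg("d_iv ~ ret + ret_neg", df).fit(q=q)
    print(q, fit.params["ret"], fit.params["ret_neg"])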

Relevance:

30.00%

Publisher:

Abstract:

In this thesis we deal with the concept of risk. The objective is to bring together, and draw conclusions from, normative information regarding quantitative portfolio management and risk assessment. The first essay concentrates on return dependency. We propose an algorithm for classifying markets into rising and falling. Given the algorithm, we derive a statistic, the Trend Switch Probability, for detecting long-term return dependency in the first moment. The empirical results suggest that the Trend Switch Probability is robust over various volatility specifications. Serial dependency, however, behaves differently in bull and bear markets: it is strongly positive in rising markets, whereas in bear markets returns are closer to a random walk. Realized volatility, a technique for estimating volatility from high-frequency data, is investigated in essays two and three. In the second essay we find, when measuring realized variance on a set of German stocks, that the second-moment dependency structure is highly unstable and changes randomly. The results also suggest that volatility is at times non-stationary. In the third essay we examine the impact of market microstructure on the error between estimated realized volatility and the volatility of the underlying process. With simulation-based techniques we show that autocorrelation in returns leads to biased variance estimates and that lower sampling frequency and non-constant volatility increase the error variation between the estimated variance and the variance of the underlying process. From these essays we conclude that volatility is not easily estimated, even from high-frequency data. It is neither well behaved in terms of stability nor in terms of dependency over time. Based on these observations, we recommend the use of simple, transparent methods that are likely to be more robust over differing volatility regimes than models with a complex parameter universe. In analyzing long-term return dependency in the first moment we find that the Trend Switch Probability is a robust estimator. This is an interesting area for further research, with important implications for active asset allocation.
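
For concreteness, realized variance is typically computed as the sum of squared intraday log returns; a minimal sketch (the simulated prices and sampling choices are illustrative, not the essays' data or exact estimator):

import numpy as np

# Realized variance as the sum of squared intraday log returns, computed at
# two sampling frequencies to illustrate the role of sampling choices.
def realized_variance(prices, step=1):
    logp = np.log(prices[::step])   # sample every `step` observations
    r = np.diff(logp)
    return np.sum(r ** 2)

# Example with simulated prices (placeholder for actual tick data).
prices = 100 * np.exp(np.cumsum(np.random.normal(0, 0.0005, size=23400)))
rv_dense = realized_variance(prices, step=1)      # highest frequency
rv_sparse = realized_variance(prices, step=300)   # sparser sampling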

Relevance:

30.00%

Publisher:

Abstract:

Managerial pay-for-performance sensitivity has increased rapidly around the world. Early empirical research showed that the pay-for-performance sensitivity resulting from stock ownership and stock options was quite low during the 1970s and early 1980s in the U.S. However, more recent empirical research from the U.S. shows an enormous increase in pay-for-performance sensitivity. The global trend has also reached Finland, where stock options have become a major ingredient of executive compensation. The fact that stock options are an appealing form of remuneration from a theoretical point of view, combined with the observation that the use of this form of compensation has increased significantly in recent years, implies that research on the dynamics of stock option compensation is highly relevant for the academic community as well as for practitioners and regulators. The research questions of the thesis are analyzed in four separate essays. The first essay examines whether the stock option compensation practices of Finnish firms are consistent with predictions from principal-agent theory. The second essay explores one of the major puzzles in the compensation literature by studying the determinants of stock option contract design. In theory, optimal contract design should vary with firm characteristics. However, in the U.S., variation in contract design seems to be surprisingly low, a phenomenon generally attributed to tax and accounting considerations. In Finland, by contrast, firms are not subject to stringent contracting restrictions, and the variation in contract design is, in fact, quite substantial. The third essay studies the impact of the price and risk incentives arising from stock option compensation on firm investment. In addition, the essay explores one of the most debated questions in the literature, namely the relation between incentives and firm performance. Finally, several strands of literature in both economics and corporate finance hypothesize that economic uncertainty is related to corporate decision-making. Previous research has shown that risk tends to slow down firm investment. The fourth essay hypothesizes that firm risk slows down growth more generally, and consistent with this view it is shown that risk tends to slow down not only firm investment but also employment growth. Moreover, the essay explores whether the nature of firms' compensation policies, in particular whether firms make use of stock option compensation, affects the relation between risk and firm growth. In summary, the four essays contribute to the current understanding of stock options as a form of equity incentive and of how incentives and risk affect corporate decision-making. In doing so, the thesis advances knowledge related to the modern theory of the firm.

Relevance:

30.00%

Publisher:

Abstract:

In many applications, the training data from which one needs to learn a classifier is corrupted with label noise. Many standard algorithms, such as the SVM, perform poorly in the presence of label noise. In this paper we investigate the robustness of risk minimization to label noise. We prove a sufficient condition on a loss function for risk minimization under that loss to be tolerant to uniform label noise. We show that the 0-1 loss, sigmoid loss, ramp loss and probit loss satisfy this condition, though none of the standard convex loss functions do. We also prove that, by choosing a sufficiently large value of a parameter in the loss function, the sigmoid loss, ramp loss and probit loss can also be made tolerant to non-uniform label noise, provided the classes are separable under the noise-free data distribution. Through extensive empirical studies, we show that risk minimization under the 0-1 loss, the sigmoid loss and the ramp loss has much better robustness to label noise than the SVM algorithm.
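
The sufficient condition in question is, in essence, a symmetry property of the loss: the losses assigned to the two labels sum to the same constant for every classifier output. A minimal numerical illustration, using common parameterizations of the losses mentioned (illustrative forms, not necessarily the paper's exact definitions):

import numpy as np
from scipy.stats import norm

# Losses written as functions of the margin z = y * f(x); parameter choices are illustrative.
def zero_one(z):            return (z < 0).astype(float)             # 0-1 loss
def sigmoid_loss(z, b=1.0): return 1.0 / (1.0 + np.exp(b * z))       # sigmoid loss
def ramp(z):                return 0.5 * np.clip(1.0 - z, 0.0, 2.0)  # (symmetrized) ramp loss
def probit_loss(z, b=1.0):  return 1.0 - norm.cdf(b * z)             # probit loss
def hinge(z):               return np.maximum(0.0, 1.0 - z)          # convex hinge, for contrast

# Check whether L(f(x), +1) + L(f(x), -1) is the same constant for every score.
z = np.linspace(-3.0, 3.0, 6)  # avoids the tie at z = 0
for name, L in [("0-1", zero_one), ("sigmoid", sigmoid_loss),
                ("ramp", ramp), ("probit", probit_loss), ("hinge", hinge)]:
    total = L(z) + L(-z)
    print(f"{name:8s} constant sum: {np.allclose(total, total[0])}")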

Relevance:

30.00%

Publisher:

Abstract:

Analysis inspired by complex systems suggests the hypothesis that financial meltdowns are abrupt critical transitions that occur when the system reaches a tipping point. Theoretical and empirical studies of climatic and ecological dynamical systems have shown that the approach to a tipping point is preceded by a generic phenomenon called critical slowing down, i.e. an increasingly slow response of the system to perturbations. It has therefore been suggested that critical slowing down may be used as an early warning signal of imminent critical transitions. Whether financial markets exhibit critical slowing down prior to meltdowns remains unclear. Here, our analysis reveals that three major US markets (Dow Jones Index, S&P 500 and NASDAQ) and two European markets (DAX and FTSE) did not exhibit critical slowing down prior to major financial crashes over the last century. However, all markets showed strong trends of rising variability, quantified by time series variance and the spectral function at low frequencies, prior to crashes. These results suggest that financial crashes are not critical transitions that occur in the vicinity of a tipping point. Using a simple model, we argue that financial crashes are likely to be stochastic transitions, which can occur even when the system is far away from the tipping point. Specifically, we show that a gradually increasing strength of stochastic perturbations may have caused abrupt transitions in the financial markets. Broadly, our results highlight the importance of stochastically driven abrupt transitions in real-world scenarios. Our study offers rising variability as a precursor of financial meltdowns, albeit with the limitation that it may signal false alarms.
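
A minimal sketch of how the two indicators discussed here, lag-1 autocorrelation (a standard proxy for critical slowing down) and variance, can be computed in rolling windows over a detrended log-price series; the window length and detrending choice are placeholders, not the paper's exact settings.

import numpy as np
import pandas as pd

def early_warning_indicators(prices, window=500):
    # Detrend the log prices with a centered rolling mean (simple placeholder detrending).
    logp = np.log(pd.Series(prices))
    resid = logp - logp.rolling(window, center=True).mean()
    # Lag-1 autocorrelation and variance in trailing windows.
    ac1 = resid.rolling(window).apply(lambda x: x.autocorr(lag=1), raw=False)
    var = resid.rolling(window).var()
    return pd.DataFrame({"lag1_autocorr": ac1, "variance": var})

# Example with placeholder prices: a rising trend in `variance` ahead of a crash is the
# precursor reported here, while `lag1_autocorr` would rise if critical slowing down were present.
prices = 100 * np.exp(np.cumsum(np.random.normal(0, 0.01, 5000)))
indicators = early_warning_indicators(prices)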

Relevance:

30.00%

Publisher:

Abstract:

Coal-fired power plants may enjoy a significant advantage over gas plants in terms of cheaper fuel cost. Still, this advantage may erode, or even turn into a disadvantage, depending on the CO2 emission allowance price. This price will presumably rise both in the Kyoto Protocol commitment period (2008-2012) and in the first post-Kyoto years. Thus, in a carbon-constrained environment, coal plants face financial risks to their profit margins, which in turn hinge on their so-called "clean dark spread". These risks are further reinforced when the price of the output electricity is determined by the marginal costs of natural gas-fired plants, which differ from coal plants' costs. We aim to assess the risks in coal plants' margins, adopting parameter values estimated from empirical data on the natural gas and electricity markets alongside the EU ETS market where emission allowances are traded. Monte Carlo simulation allows us to compute the expected value and risk profile of coal-based electricity generation. We focus on the clean dark spread in both time periods under different future scenarios in the allowance market; specifically, the bottom 5% and 10% percentiles are derived. According to our results, certain future paths of the allowance price may impose significant risks on the clean dark spread obtained by coal plants.
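
A minimal sketch of the kind of Monte Carlo computation described, with the clean dark spread written as the electricity price net of fuel and allowance costs; all distributions and parameter values below are illustrative placeholders, not the estimates used in the paper.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # Monte Carlo draws

# Placeholder price distributions (EUR/MWh for power, EUR/MWh_thermal for coal, EUR/tCO2 for allowances).
power = rng.lognormal(mean=np.log(55), sigma=0.25, size=n)
coal = rng.lognormal(mean=np.log(12), sigma=0.20, size=n)
co2 = rng.lognormal(mean=np.log(20), sigma=0.35, size=n)

efficiency = 0.38        # illustrative thermal efficiency of the coal plant
emission_factor = 0.90   # illustrative tCO2 per MWh of electricity

# Clean dark spread: electricity price net of fuel and emission allowance costs.
cds = power - coal / efficiency - emission_factor * co2

print("expected clean dark spread:", cds.mean())
print("bottom 5% / 10% percentiles:", np.percentile(cds, [5, 10]))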

Relevance:

30.00%

Publisher:

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and greater model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.

We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop an adaptive testing methodology, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn determines the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
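
A generic sketch of the greedy adaptive loop that BROAD implements is shown below; for brevity it scores candidate tests by expected information gain, which is only a stand-in here, since the thesis's EC2 criterion is a different objective and is shown to outperform information gain under noisy responses.

import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def choose_next_test(posterior, likelihoods):
    # posterior: shape (H,) over hypotheses (theories with parameter settings).
    # likelihoods[t, h, r]: probability that hypothesis h produces response r on test t.
    best_t, best_gain = None, -np.inf
    for t in range(likelihoods.shape[0]):
        p_resp = posterior @ likelihoods[t]          # predictive distribution over responses
        gain = entropy(posterior)
        for r, pr in enumerate(p_resp):
            if pr > 0:
                post_r = posterior * likelihoods[t, :, r] / pr
                gain -= pr * entropy(post_r)         # expected posterior entropy
        if gain > best_gain:
            best_t, best_gain = t, gain
    return best_t

def update_posterior(posterior, likelihoods, t, response):
    post = posterior * likelihoods[t, :, response]
    return post / post.sum()

In BROAD, the scoring step would use the EC2 objective together with lazy (accelerated) greedy evaluation rather than the exhaustive information-gain loop above.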

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries they chose. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types shows that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out because it is infeasible in practice and because we do not find any signatures of it in our data.
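
For reference, standard parameterizations of two of the candidate value functions are (textbook forms; the exact specifications and parameter restrictions estimated in the thesis may differ):

\[
u_{\mathrm{CRRA}}(x) = \frac{x^{1-\rho}}{1-\rho},
\qquad
v_{\mathrm{PT}}(x) =
\begin{cases}
x^{\alpha}, & x \ge 0,\\
-\lambda\,(-x)^{\beta}, & x < 0,
\end{cases}
\]

where \rho is the coefficient of relative risk aversion, \lambda > 1 captures loss aversion, and prospect-theory outcomes x are gains and losses relative to a reference point (here, plausibly the initial endowment), with probabilities passed through a weighting function.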

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed cost discounting), and generalized-hyperbolic discounting. Forty subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting; most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
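
For reference, common forms of the discount functions being compared are (standard parameterizations from the discounting literature; the thesis's notation, e.g. its (α, β) quasi-hyperbolic parameters, may differ):

\[
D_{\mathrm{exp}}(t) = \delta^{t},
\qquad
D_{\mathrm{hyp}}(t) = \frac{1}{1 + k t},
\qquad
D_{\mathrm{gen\text{-}hyp}}(t) = \frac{1}{(1 + \alpha t)^{\beta/\alpha}},
\]

while quasi-hyperbolic ("present bias") discounting applies an additional constant penalty to every delayed payoff relative to an immediate one, and fixed cost discounting roughly subtracts a fixed cost from the value of any delayed option.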

In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.

We also test the predictions of behavioural theories in the "wild". We focus on prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, demand for it will be greater than its price elasticity alone would explain. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
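
One common way to embed loss aversion and reference dependence in a logit discrete choice model (a sketch; the specification estimated in the thesis may differ) is

\[
V_{ij} = \beta' x_{ij} - \gamma\, p_{ij}
+ \eta\,(r_{ij} - p_{ij})^{+}
- \lambda\,\eta\,(p_{ij} - r_{ij})^{+},
\qquad
\Pr(i \text{ chooses } j) = \frac{e^{V_{ij}}}{\sum_{k} e^{V_{ik}}},
\]

where p_{ij} is the price consumer i faces for item j, r_{ij} is a reference price (for example, the recently observed, possibly discounted, price), (\cdot)^{+} denotes the positive part, and \lambda > 1 indicates loss aversion: a price rise above the reference hurts more than an equal-sized fall below it helps.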

In future work, BROAD can be applied widely to test different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.

Relevance:

30.00%

Publisher:

Abstract:

This thesis examines the collapse risk of tall steel braced frame buildings using rupture-to-rafters simulations of a suite of San Andreas earthquakes. Two key advancements in this work are the development of (i) a rational methodology for assigning scenario earthquake probabilities and (ii) an approach to broadband ground motion simulation that is free of artificial corrections. The work can be divided into the following sections: earthquake source modeling, earthquake probability calculations, ground motion simulations, building response, and performance analysis.

As a first step, kinematic source inversions of past earthquakes in the magnitude range 6-8 are used to simulate 60 scenario earthquakes on the San Andreas fault. For each scenario earthquake a 30-year occurrence probability is calculated, and we present a rational method to redistribute the forecast earthquake probabilities from UCERF to the simulated scenario earthquakes. We illustrate the inner workings of the method through an example involving earthquakes on the San Andreas fault in southern California.

Next, three-component broadband ground motion histories are computed at 636 sites in the greater Los Angeles metropolitan area by superposing short-period (0.2 s to 2.0 s) empirical Green's function synthetics on long-period (> 2.0 s) synthetics computed from the kinematic source models using the spectral element method, producing broadband seismograms.
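
A minimal sketch of a hybrid broadband combination of this kind (matched low-pass/high-pass filters at the crossover period of 2.0 s, i.e. 0.5 Hz, then summation); the filter type, order, and matching scheme are illustrative and not necessarily those used in the thesis.

import numpy as np
from scipy.signal import butter, sosfiltfilt

def hybrid_broadband(long_period, short_period, dt, crossover_hz=0.5, order=4):
    # Low-pass the long-period (spectral element) synthetic and high-pass the
    # short-period (empirical Green's function) synthetic, then sum them.
    nyq = 0.5 / dt
    lp = butter(order, crossover_hz / nyq, btype="lowpass", output="sos")
    hp = butter(order, crossover_hz / nyq, btype="highpass", output="sos")
    return sosfiltfilt(lp, long_period) + sosfiltfilt(hp, short_period)

# Example with placeholder synthetics sampled at 100 Hz.
dt = 0.01
t = np.arange(0, 60, dt)
bb = hybrid_broadband(np.sin(2 * np.pi * 0.2 * t), np.random.normal(0, 1e-2, t.size), dt)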

Using the ground motions at the 636 sites for the 60 scenario earthquakes, 3-D nonlinear analyses are conducted of several variants of an 18-story steel braced frame building, designed for three soil types using the 1994 and 1997 Uniform Building Code provisions and subjected to these ground motions. Model performance is classified into one of five performance levels: Immediate Occupancy, Life Safety, Collapse Prevention, Red-Tagged, and Model Collapse. The results are combined with the 30-year occurrence probabilities of the San Andreas scenario earthquakes, using the PEER performance-based earthquake engineering framework, to determine the probability of exceeding these limit states over the next 30 years.
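
As an illustration of the final combination step, the sketch below computes a 30-year limit-state exceedance probability from per-scenario outcomes and occurrence probabilities under an independence assumption; this is a generic simplification, not the exact PEER framework computation used in the thesis.

import numpy as np

def prob_exceedance(scenario_probs, exceeds_limit_state):
    # scenario_probs: 30-year occurrence probability of each scenario earthquake.
    # exceeds_limit_state: 1 if the building response for that scenario exceeds the limit state.
    p = np.asarray(scenario_probs)
    ind = np.asarray(exceeds_limit_state)
    # Probability that at least one scenario that exceeds the limit state occurs.
    return 1.0 - np.prod(1.0 - p * ind)

# Hypothetical example for one building variant and one limit state.
probs = np.full(60, 0.01)                      # placeholder scenario probabilities
exceeds = np.random.binomial(1, 0.3, size=60)  # placeholder analysis outcomes
print(prob_exceedance(probs, exceeds))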

Relevance:

30.00%

Publisher:

Abstract:

The design and construction of deep excavations in urban environments is often governed by serviceability limit states related to the risk of damage to adjacent buildings. In current practice, the assessment of excavation-induced building damage has focused on a deterministic approach. This paper presents a component/system reliability analysis framework to assess the probability that specified threshold design criteria for multiple serviceability limit states are exceeded. A recently developed Bayesian probabilistic framework is used to update the predictions of ground movements in the later stages of excavation based on recorded deformation measurements. An example is presented to show how the serviceability performance of excavation problems can be assessed using the component/system reliability analysis.
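
A minimal Monte Carlo sketch of the component/system distinction for serviceability criteria; the response distributions and thresholds are hypothetical placeholders, and the Bayesian updating step described in the paper is not shown.

import numpy as np

rng = np.random.default_rng(1)
n = 200_000  # Monte Carlo samples

# Hypothetical predictive distributions for two serviceability responses of an
# adjacent building (e.g., maximum settlement in mm and angular distortion).
settlement = rng.lognormal(mean=np.log(20.0), sigma=0.4, size=n)
distortion = rng.lognormal(mean=np.log(1 / 800), sigma=0.5, size=n)

limit_settlement = 30.0   # illustrative threshold design criteria
limit_distortion = 1 / 500

p_comp1 = np.mean(settlement > limit_settlement)      # component probabilities
p_comp2 = np.mean(distortion > limit_distortion)
p_system = np.mean((settlement > limit_settlement) |  # series-system probability:
                   (distortion > limit_distortion))   # any criterion exceeded
print(p_comp1, p_comp2, p_system)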

Relevance:

30.00%

Publisher:

Abstract:

Prostate cancer (PC) is the second leading cause of cancer death in men. Recent reports suggest that an excess of nutrients involved in the one-carbon metabolism pathway increases PC risk; however, empirical data are lacking. American veteran men (272 controls and 144 PC cases) who attended the Durham Veterans Affairs Medical Center between 2004 and 2009 were enrolled into a case-control study. Intakes of folate, vitamin B12, vitamin B6, and methionine were measured using a food frequency questionnaire. Regression models were used to evaluate the associations among one-carbon-cycle nutrients, MTHFR genetic variants, and prostate cancer. Higher dietary methionine intake was associated with PC risk (OR = 2.1; 95% CI 1.1-3.9). The risk was most pronounced in men with Gleason sum < 7 (OR = 2.75; 95% CI 1.32-5.73). The association between higher methionine intake and PC risk was apparent only in men who carried at least one MTHFR A1298C allele (OR = 6.7; 95% CI 1.6-27.8), compared to non-carrier (MTHFR A1298A) men (OR = 0.9; 95% CI 0.24-3.92) (p-interaction = 0.045). There was no evidence of associations between the B vitamins (folate, B12, and B6) and PC risk. Our results suggest that carrying the MTHFR A1298C variant modifies the association between high methionine intake and PC risk. Larger studies are required to validate these findings.
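
As an illustration of the kind of interaction test reported (the p-interaction), here is a minimal sketch of a logistic regression with a methionine-by-genotype product term; all variables are simulated placeholders, not the study's data, and the study's covariate adjustments are omitted.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 416  # total subjects in the study (cases plus controls)
df = pd.DataFrame({
    "high_methionine": rng.integers(0, 2, n),  # 1 = high dietary methionine intake (hypothetical coding)
    "a1298c_carrier": rng.integers(0, 2, n),   # 1 = carries at least one MTHFR A1298C allele
})
# Simulated case-control status with a built-in interaction effect (placeholder only).
logit_p = -1.0 + 0.2 * df["high_methionine"] + 0.1 * df["a1298c_carrier"] \
          + 0.8 * df["high_methionine"] * df["a1298c_carrier"]
df["case"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

fit = smf.logit("case ~ high_methionine * a1298c_carrier", data=df).fit(disp=False)
print(fit.params)
print("p-interaction:", fit.pvalues["high_methionine:a1298c_carrier"])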

Relevance:

30.00%

Publisher:

Abstract:

This paper presents an extensive literature survey covering both the theoretical rationales for hedging and the empirical evidence supporting the theory's implications regarding the relevance of corporate risk management and its influence on company value. The survey reveals that there are two chief classes of rationales for the corporate decision to hedge: maximisation of shareholder value and maximisation of managers' private utility. The paper concludes that the total benefit of hedging is the combination of all these motives and that, if the costs of using corporate risk management instruments are less than the benefits provided via the avenues mentioned in this paper, or any other benefit perceived by the market, then risk management is a shareholder-value-enhancing activity.

Relevance:

30.00%

Publisher:

Abstract:

This paper first explores the conflictual discourses employed by government agencies, citizens' initiatives, and environmental organizations over the construction of a High Voltage Power Station (KYT) to meet the demands of the 2004 Olympic Games, as presented in media reports and movement literature over a period of one year. Given recent criticisms targeting the lack of empirical evidence in Ulrich Beck's risk theorization, this exploration is of distinct importance. Secondly, the paper takes into account that both the defensive character of societal action and mistrust of expert authorities have been confirmed as prevalent characteristics of both the Greek and the general risk-society context. The paper attempts to re-evaluate and/or complement existing perspectives on societal activism in general, and environmental mobilizations in particular, within the confines of the Greek social context. As a tentative conclusion, it is suggested that the risk perspective offers a novel prism for the examination of societal activism without confining it to the characteristics of individual national contexts.