888 results for CRASH TESTS


Relevance:

20.00%

Publisher:

Abstract:

Modeling and forecasting implied volatility (IV) is important to both practitioners and academics, especially in trading, pricing, hedging, and risk-management activities, all of which require an accurate volatility input. The task has become challenging since the 1987 stock market crash, because implied volatilities recovered from stock index options exhibit two patterns: the volatility smirk (skew) and the volatility term structure; examined jointly, the two form a rich implied volatility surface (IVS). This implies that the assumptions behind the Black-Scholes (1973) model do not hold empirically, as asset prices are driven by many underlying risk factors. This thesis, consisting of four essays, models and forecasts implied volatility in the presence of these empirical regularities of options markets. The first essay models the dynamics of the IVS by extending the Dumas, Fleming and Whaley (DFW) (1998) framework: using moneyness in the implied forward price and out-of-the-money (OTM) put-call options on the FTSE100 index, nonlinear optimization is used to estimate different models and thereby produce rich, smooth IVSs. The constant-volatility model fails to explain the variation in the rich IVS. It is then found that three factors can explain about 69-88% of the variance in the IVS: on average, 56% is explained by the level factor, 15% by the term-structure factor, and a further 7% by the jump-fear factor. The second essay proposes a quantile regression model of the contemporaneous asymmetric return-volatility relationship, generalizing the model of Hibbert et al. (2008). The results show a strongly negative asymmetric return-volatility relationship at various quantiles of the IV distribution; the relationship increases monotonically from the median quantile to the uppermost quantile (95%), so OLS underestimates it at upper quantiles.
Additionally, the asymmetric relationship is more pronounced for the smirk (skew) adjusted volatility index measure than for the old volatility index measure. The volatility indices rank, in terms of asymmetric volatility, as follows: VIX, VSTOXX, VDAX, and VXN. The third essay examines the information content of the new VDAX volatility index for forecasting daily Value-at-Risk (VaR) and compares its VaR forecasts with those of Filtered Historical Simulation and RiskMetrics. All daily VaR models are backtested over 1992-2009 using unconditional coverage, independence, conditional coverage, and quadratic-score tests. The VDAX is found to subsume almost all the information required for daily VaR forecasts of a portfolio tracking the DAX30 index; implied-volatility VaR models outperform all other VaR models. The fourth essay models the risk factors driving swaption IVs. Three factors are found to explain 94-97% of the variation in each of the EUR, USD, and GBP swaption IVs. There are significant linkages across factors, and bi-directional causality operates between the factors implied by EUR and USD swaption IVs. Furthermore, the factors implied by EUR and USD IVs respond to each other's shocks; surprisingly, however, GBP does not affect them. Finally, calibration results for the string market model show that it can efficiently reproduce (and forecast) the volatility surface in each of the swaptions markets.
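
The factor decomposition described above can be illustrated with a principal-component analysis. The sketch below is purely illustrative: it uses synthetic level and slope factors in place of the FTSE100 option data, and only the mechanics of extracting variance shares via the SVD correspond to the technique in the text.

```python
import numpy as np

# Hypothetical illustration: principal components of daily changes in an
# implied-volatility surface, flattened to one row per day.  The data are
# synthetic; the thesis uses IVs from FTSE100 index options.
rng = np.random.default_rng(0)
n_days, n_points = 500, 40            # 40 (moneyness, maturity) grid points
level = rng.standard_normal((n_days, 1)) * np.ones((1, n_points))
slope = rng.standard_normal((n_days, 1)) * np.linspace(-1, 1, n_points)
noise = 0.1 * rng.standard_normal((n_days, n_points))
dIV = level + slope + noise

X = dIV - dIV.mean(axis=0)            # centre each grid point
_, s, _ = np.linalg.svd(X, full_matrices=False)
explained = s**2 / np.sum(s**2)       # variance share of each component
```

By construction the first two components (the planted level and slope factors) dominate `explained`, mirroring the way a small number of factors accounts for most of the IVS variance in the essay.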

The likelihood ratio test of cointegration rank is the most widely used test for cointegration. Many studies have shown that its finite-sample distribution is not well approximated by the limiting distribution. This article introduces bootstrap and fast double bootstrap (FDB) algorithms for the likelihood ratio test and evaluates them in Monte Carlo simulation experiments. The performance of the bootstrap test is found to be very good. The more sophisticated FDB produces a further improvement in cases where the asymptotic test performs very unsatisfactorily and the ordinary bootstrap does not work as well as it might. The Monte Carlo simulations also provide a number of guidelines on when the bootstrap and FDB tests can be expected to work well. Finally, the tests are applied to US interest rate and international stock price series: the asymptotic test tends to overestimate the cointegration rank, while the bootstrap and FDB tests choose the correct cointegration rank.
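
The residual-bootstrap logic behind such a test can be sketched generically. The example below is a hedged illustration: it bootstraps a toy mean-zero statistic on an AR(1) series rather than the Johansen trace statistic on a VECM, but the resample-and-recompute structure is the same.

```python
import numpy as np

# Schematic residual bootstrap of a test statistic.  In the article the
# statistic is the likelihood-ratio trace statistic and pseudo-data are
# generated from the model estimated under the null; here, purely for
# illustration, we bootstrap a mean-zero test on an AR(1) series.
rng = np.random.default_rng(1)

def statistic(y):
    return abs(y.mean()) * np.sqrt(len(y))

def simulate_ar1(resid, phi, rng):
    y = np.zeros(len(resid))
    e = rng.choice(resid, size=len(resid), replace=True)  # resample residuals
    for t in range(1, len(y)):
        y[t] = phi * y[t - 1] + e[t]
    return y

y = simulate_ar1(rng.standard_normal(200), 0.5, rng)      # observed sample
phi_hat = np.dot(y[1:], y[:-1]) / np.dot(y[:-1], y[:-1])  # OLS AR(1) fit
resid = y[1:] - phi_hat * y[:-1]
tau = statistic(y)

B = 499
tau_star = np.array([statistic(simulate_ar1(resid, phi_hat, rng))
                     for _ in range(B)])
p_boot = np.mean(tau_star >= tau)    # bootstrap p-value
```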

Bootstrap likelihood ratio tests of cointegration rank are commonly used because they tend to have rejection probabilities closer to the nominal level than those of the corresponding asymptotic tests. The effect of bootstrapping the test on its power, however, is largely unknown. We show that a new, computationally inexpensive procedure can be applied to estimate the power function of the bootstrap test of cointegration rank. The bootstrap test is found to have a power function close to that of the level-adjusted asymptotic test, and it estimates the level-adjusted power of the asymptotic test highly accurately. The bootstrap test may nevertheless have low power to reject the null hypothesis of cointegration rank zero, and hence underestimate the cointegration rank. An empirical application to Euribor interest rates illustrates the findings.
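
The fast double bootstrap adjustment (due to Davidson and MacKinnon) can be sketched with a toy statistic. The `draw_stat` function below is a stand-in for simulating data under the null and recomputing the cointegration-rank LR statistic, which the full procedure does at every draw.

```python
import numpy as np

# Sketch of the fast double bootstrap (FDB) p-value.  Only one
# second-level statistic is drawn per first-level sample, which is what
# makes the procedure "fast" relative to a full double bootstrap.
rng = np.random.default_rng(2)

def draw_stat(rng, n=100):
    # Stand-in for: simulate data under H0 and recompute the LR statistic.
    return abs(rng.standard_normal(n).mean()) * np.sqrt(n)

tau = 1.8                             # statistic from the observed sample
B = 999
tau1 = np.array([draw_stat(rng) for _ in range(B)])   # first level
tau2 = np.array([draw_stat(rng) for _ in range(B)])   # second level

p1 = np.mean(tau1 >= tau)             # ordinary bootstrap p-value
q = np.quantile(tau2, 1.0 - p1)       # (1 - p1) quantile of second level
p_fdb = np.mean(tau1 >= q)            # FDB-adjusted p-value
```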

Instrument landing systems (ILS) and the upcoming microwave landing systems (MLS) are, or are planned to be, very important navigational aids at most major airports of the world. Their performance, however, is directly affected by the features of the site at which they are located. Currently, validation of ILS performance relies on costly and time-consuming experimental methods. This paper outlines a powerful and versatile analytical approach to site evaluation as an alternative to those experimental methods. The approach combines a multi-plate model of the terrain with an exhaustive ray-tracing technique and a versatile, accurate formulation for estimating the electromagnetic fields due to the array antenna in the presence of the terrain. It can model the effects of the undulation, the roughness, and the impedance (which depends on the soil type) of the terrain at the site. Results computed with the analytical method are compared with actual measurements and good agreement is shown. Considerations for site effects on MLS are also outlined.
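
A drastically reduced cousin of the paper's method is the classical two-ray ground-reflection model, which captures how a single terrain-reflected ray interferes with the direct ray at the receiver. The geometry and frequency below are hypothetical, and a flat, perfectly reflecting ground is assumed; the paper's multi-plate ray tracing handles undulating, rough, finite-impedance terrain, which this sketch does not.

```python
import numpy as np

# Minimal two-ray ground-reflection sketch over flat, perfectly
# reflecting terrain: the reflected ray is modelled by an image source,
# with a 180-degree phase shift at the ground.
c = 3.0e8
f = 110e6                          # ILS localizer band (~108-112 MHz)
lam = c / f
h_t, h_r, d = 5.0, 30.0, 2000.0    # hypothetical antenna/receiver geometry

r_direct = np.hypot(d, h_r - h_t)
r_reflect = np.hypot(d, h_r + h_t)     # image-source path length
k = 2 * np.pi / lam
field = np.exp(-1j * k * r_direct) / r_direct \
        - np.exp(-1j * k * r_reflect) / r_reflect
rel_amplitude = abs(field) * r_direct  # field strength relative to free space
```

Depending on the path-length difference, `rel_amplitude` ranges between deep fades near 0 and constructive peaks near 2, which is the interference pattern a site evaluation must predict along the approach path.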

The work looks at the response to three-point loading of carbon-epoxy (CF-EP) composites with inserted buffer strip (BS) material. Short-beam shear tests were performed to study the load-deflection response, and fracture features were examined by macroscopy, for the CF-EP system containing the interleaved PTFE-coated fabric material. Significant differences were noticed in the response of the CF-EP system to bending as a consequence of the architectural modification. It was inferred that introducing small amounts of less adherent layers of material at specific locations reduces the load-carrying capability. Further, the number of interface separations and the ease with which they occur are found to depend on the extent to which the inserted layer is present, in either single or multiple layer positions.

Many economic events involve initial observations that deviate substantially from the long-run steady state. Initial conditions of this type have been found to affect the power of univariate unit root tests in diverse ways, whereas their impact on multivariate tests is largely unknown. This paper investigates the impact of the initial condition on tests for cointegration rank. We compare the local power of the widely used likelihood ratio (LR) test with the local power of a test based on the eigenvalues of the companion matrix. We find that the power of the LR test increases with the magnitude of the initial condition, whereas the power of the other test decreases. The behaviour of the tests is investigated in an application to price convergence.
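
The companion-matrix idea can be sketched as follows: fit a VAR(1) by least squares and inspect the moduli of the coefficient (companion) matrix eigenvalues, where roots near one indicate common stochastic trends. The data below are synthetic, and this shows only the descriptive core of the test, not its formal inference.

```python
import numpy as np

# Two series sharing one random-walk trend: one companion eigenvalue
# should sit near the unit circle, the other well inside it.
rng = np.random.default_rng(3)
T = 400
trend = np.cumsum(rng.standard_normal(T))        # shared stochastic trend
Y = np.column_stack([trend + rng.standard_normal(T),
                     trend + rng.standard_normal(T)])

X, Z = Y[1:], Y[:-1]
A = np.linalg.lstsq(Z, X, rcond=None)[0].T       # VAR(1) coefficient matrix
# For a VAR(1), A itself is the companion matrix.
moduli = np.sort(np.abs(np.linalg.eigvals(A)))[::-1]
```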

General relativity makes very specific predictions for the gravitational waveforms from inspiralling compact binaries, obtained using the post-Newtonian (PN) approximation. We investigate the extent to which measurement of the PN coefficients, possible with second-generation gravitational-wave detectors such as the Advanced Laser Interferometer Gravitational-Wave Observatory (LIGO) and third-generation detectors such as the Einstein Telescope (ET), could be used to test post-Newtonian theory and to put bounds on a subclass of parametrized post-Einstein theories that differ from general relativity in a parametrized sense. We demonstrate this possibility by employing the best inspiralling waveform model for nonspinning compact binaries, which is 3.5PN accurate in phase and 3PN in amplitude. Within the class of theories considered, Advanced LIGO can test the theory at 1.5PN order and thus the leading tail term. Future observations of stellar-mass black hole binaries by ET can test the consistency between the various PN coefficients in the gravitational-wave phasing over the mass range 11-44 M⊙. The choice of the lower frequency cutoff is important for testing post-Newtonian theory using the ET. The bias in the test arising from the assumption of nonspinning binaries is also indicated.
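
As a hedged illustration of what "PN coefficients" means here, the sketch below computes only the leading (Newtonian) term of the standard frequency-domain inspiral phase and perturbs it by a fractional deviation parameter, `delta_0`, which is zero in general relativity; the paper's analysis uses the full 3.5PN phasing, not this single term.

```python
import numpy as np

# Leading-order term of the frequency-domain inspiral phase (the first
# coefficient of the PN series).  Geometric units G = c = 1, with masses
# converted to seconds.  delta_0 is an illustrative fractional deviation
# of this coefficient from its general-relativity value.
G = 6.674e-11
c = 2.998e8
M_sun = 1.989e30

def psi_0pn(f, m1, m2, delta_0=0.0):
    M = (m1 + m2) * M_sun * G / c**3     # total mass in seconds
    eta = m1 * m2 / (m1 + m2) ** 2       # symmetric mass ratio
    return (1.0 + delta_0) * 3.0 / (128.0 * eta) \
        * (np.pi * M * f) ** (-5.0 / 3.0)

phase = psi_0pn(20.0, 11.0, 44.0)        # 11 + 44 solar masses at 20 Hz
```

The steep growth of this term toward low frequencies is why the lower frequency cutoff matters so much for ET, which reaches far below the Advanced LIGO band.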

Equilibrium sediment volume tests are conducted on field soils to classify them by degree of expansivity and/or to predict the liquid limit. This technical paper examines and critically evaluates different equilibrium sediment volume tests. It discusses the settling behavior of fine-grained soils during sediment formation in order to develop a rationale for conducting the latest version of the equilibrium sediment volume test. Probable limitations of the test, and possible ways to overcome them, are also indicated.

Assuming an entropic origin for phason elasticity in quasicrystals, we derive predictions for the temperature dependence of grain-boundary structure and free energy, the nature of the elastic instability in these systems, and the behavior of sound damping near the instability. We believe that these will provide decisive tests of the entropic model for quasicrystals.

Stress relaxation testing is often used to determine whether athermal straining contributes to plastic flow: if the plastic strain rate is continuous across the transition from tension to relaxation, then plastic strain is fully thermally activated. This method was applied to an aged type 316 stainless steel tested in the temperature range 973-1123 K and to high-purity Al in the recrystallised, annealed condition tested in the temperature range 274-417 K. The results indicate that plastic strain is thermally activated in both materials at the corresponding test temperatures. For Al, because of its high strain rate sensitivity, it was necessary to adopt a back-extrapolation procedure to correct for the finite period the crosshead requires to decelerate from constant speed during tension to a dead stop for stress relaxation.
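
The back-extrapolation idea can be sketched numerically: fit the usable part of the relaxation curve and extrapolate back to the nominal start of relaxation, before the crosshead has fully stopped. The 0.5 s deceleration window, the logarithmic decay law, and all numbers below are hypothetical choices for illustration, not data from the study.

```python
import numpy as np

# Synthetic relaxation curve: stress decays logarithmically in time with
# small measurement noise.  Data inside the (hypothetical) deceleration
# window are treated as unreliable, so the fit uses t >= 5 s only and is
# extrapolated back to t = 0.5 s, the nominal start of relaxation.
rng = np.random.default_rng(5)
t = np.linspace(0.5, 60.0, 200)                      # seconds
sigma = 100.0 - 4.0 * np.log(t) + rng.normal(0, 0.05, t.size)  # MPa

mask = t >= 5.0
b, a = np.polyfit(np.log(t[mask]), sigma[mask], 1)   # sigma ~ a + b*log(t)
sigma_start = a + b * np.log(0.5)                    # back-extrapolated value
```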

Genetic Algorithms (GAs) are robust search and optimization techniques. This paper proposes a GA-based approach for determining the optimal input distributions for generating random test vectors. A cost function based on the COP testability measure is used to assess the efficacy of the input distributions. A brief overview of GAs and the specific details of our implementation are described, and experimental results based on the ISCAS-85 benchmark circuits are presented. The performance of our GA-based approach is compared with previous results: while the GA generates more efficient input distributions than the previous methods, which are based on gradient-descent search, its overhead in computing the input distributions is larger. To account for the relatively quick convergence of the gradient-descent methods, we analyze the landscape of the COP-based cost function and prove that it is unimodal in the search space. This feature makes the cost function amenable to optimization by gradient-descent techniques as compared to random-search methods such as Genetic Algorithms.
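
A minimal GA of the kind described can be sketched as follows. The fitness function here is a hypothetical stand-in for the COP-based cost function (it rewards high signal probabilities feeding a notional 4-input AND cone); truncation selection, one-point crossover, and Gaussian mutation are generic GA choices, not the paper's exact settings.

```python
import numpy as np

# Toy GA evolving primary-input signal probabilities for random test
# generation.  Each individual is a vector of 4 input probabilities,
# clipped away from 0 and 1 so every input still toggles.
rng = np.random.default_rng(4)

def fitness(p):
    # Stand-in for a COP-style detectability score: probability of
    # exciting a hypothetical 4-input AND cone.
    return np.prod(p)

pop = rng.uniform(0.05, 0.95, size=(30, 4))    # 30 candidate distributions
for gen in range(40):
    scores = np.array([fitness(ind) for ind in pop])
    order = np.argsort(scores)[::-1]
    parents = pop[order[:10]]                  # truncation selection
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, 4)               # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child = child + rng.normal(0, 0.02, size=4)   # Gaussian mutation
        children.append(np.clip(child, 0.05, 0.95))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
```

Because the stand-in fitness is unimodal, a gradient-based search would climb to the same optimum faster, which is exactly the trade-off against GA overhead that the paper analyzes for the real COP cost function.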