999 results for Gumbel Extreme Value Autoregressive
Abstract:
This research project seeks to measure the impact that extreme events, also called boom or crash events depending on their nature and consequences, have on the construction of efficient investment portfolios. We work with the prices of stocks listed on the New York Stock Exchange and use them to build investment portfolios following the methodology designed by Harry Markowitz in 1952. Portfolio returns are examined before and after the extreme event, and its consequences for the portfolio are studied. The extreme event introduced in the study is the economic and financial crisis of 2008, which originated in the US mortgage crisis. Using the variations in asset prices over that period, we aim to stress the model and check whether Markowitz's proposal remains valid in the presence of such events. Building on this, we run simulations with Excel models and Monte Carlo techniques, propose technical recommendations to keep in mind when constructing portfolios, and produce a document with recommendations for investors in general. As an additional contribution, we provide Visual Basic code to automate the portfolio optimization.
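As an illustration of the Markowitz-style construction described in this abstract, here is a minimal Monte Carlo sketch in Python (rather than the Excel/Visual Basic tooling the abstract mentions): random long-only weight vectors are sampled and the one with the best return-to-risk ratio is kept. The expected returns and covariance matrix are invented placeholders, not data from the study.

```python
# Minimal sketch of Markowitz-style portfolio selection via a Monte Carlo
# random-weight search. mu and cov are illustrative placeholders.
import random

mu = [0.08, 0.12, 0.10]                      # annual expected returns
cov = [[0.04, 0.01, 0.00],                   # annual covariance matrix
       [0.01, 0.09, 0.02],
       [0.00, 0.02, 0.06]]

def portfolio_stats(w):
    ret = sum(wi * mi for wi, mi in zip(w, mu))
    var = sum(w[i] * cov[i][j] * w[j]
              for i in range(len(w)) for j in range(len(w)))
    return ret, var ** 0.5

random.seed(0)
best = None
for _ in range(20000):
    raw = [random.random() for _ in mu]
    total = sum(raw)
    w = [x / total for x in raw]             # long-only weights summing to 1
    ret, vol = portfolio_stats(w)
    sharpe = ret / vol                       # risk-free rate assumed to be 0
    if best is None or sharpe > best[0]:
        best = (sharpe, w, ret, vol)

sharpe, w, ret, vol = best
print(f"weights={w}, return={ret:.3f}, vol={vol:.3f}, sharpe={sharpe:.2f}")
```

A crash event can then be "introduced" by re-running `portfolio_stats` with post-crisis estimates of `mu` and `cov` and comparing the two frontiers.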
Abstract:
People go through their life making all kinds of decisions, and some of these decisions affect their demand for transportation, for example, their choices of where to live and where to work, how and when to travel and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply to make predictions because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices, but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following, we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated based on the MEV and mixed logit models. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost.
For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation of simple logit models, which also has implications for traffic simulation. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme is related to the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into standard optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
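The dynamic-programming idea behind these route choice models can be sketched on a toy network: a value function is computed by a backward "logsum" recursion, and link choice probabilities follow from the value differences. The topology and link costs below are invented for illustration, not taken from the thesis.

```python
# A minimal logsum (value function) recursion of the kind used in
# dynamic discrete choice route models, on a small acyclic network.
import math

succ = {"a": [("b", 1.0), ("c", 2.0)],   # node -> [(next node, link cost)]
        "b": [("d", 2.5)],
        "c": [("d", 1.0)],
        "d": []}                          # destination

V = {"d": 0.0}                            # expected max utility to destination
for node in ["b", "c", "a"]:              # reverse topological order
    V[node] = math.log(sum(math.exp(-c + V[j]) for j, c in succ[node]))

# Link choice probabilities at node "a" follow from the value functions:
probs_a = {j: math.exp(-c + V[j] - V["a"]) for j, c in succ["a"]}
print(V["a"], probs_a)
```

Because each node's value is a log-sum-exp over its successors, the recursion implicitly sums over all paths to the destination without enumerating them, which is what makes the approach tractable at network scale.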
Abstract:
In the context of insurance companies, capital represents a company's soundness and its capacity to meet the obligations acquired with clients under unexpected-loss scenarios. With the experience of past crises, capital requirements have been rising, and to estimate this capital the European regulatory framework proposes a risk-based methodology known as Solvency II. In Colombia, however, the currently required methodology does not cover all the risks to which a company in this sector is exposed. The purpose of this work is to establish the basis for a risk-based capital calculation for an insurance company in Colombia, adapting the requirements proposed by Solvency II to the conditions of the Colombian market. This is done by quantifying the main risk variables related to the financial and business environment of insurance companies in Colombia.
Abstract:
This dissertation is a collection of three economics essays on different aspects of carbon emission trading markets. The first essay analyzes the dynamic optimal emission control strategies of two nations. With a potential to become the largest buyer under the Kyoto Protocol, the US is assumed to be a monopsony, whereas with a large number of tradable permits on hand Russia is assumed to be a monopoly. Optimal costs of emission control programs are estimated for both countries under four different market scenarios: non-cooperative no trade, US monopsony, Russia monopoly, and cooperative trading. The US monopsony scenario is found to be the most Pareto cost efficient. The Pareto efficient outcome, however, would require the US to make side payments to Russia, which will even out the differences in the cost savings from cooperative behavior. The second essay analyzes the price dynamics of the Chicago Climate Exchange (CCX), a voluntary emissions trading market. By examining the volatility in market returns using AR-GARCH and Markov switching models, the study associates the market price fluctuations with two different political regimes of the US government. Further, the study also identifies high volatility in the returns a few months before the market collapse. Three possible regulatory and market-based forces are identified as probable causes of market volatility and its ultimate collapse. Organizers of other voluntary markets in the US and worldwide may closely watch for these regime switching forces in order to overcome emission market crashes. The third essay compares excess skewness and kurtosis in carbon prices between the CCX and EU ETS (European Union Emission Trading Scheme) Phase I and II markets, by examining the tail behavior when market expectations exceed the threshold level. Dynamic extreme value theory is used to estimate the mean price exceedance of the threshold levels and the associated risk of loss.
The calculated risk measures suggest that CCX and EU ETS Phase I are extremely immature markets for a risk investor, whereas EU ETS Phase II is a more stable market that could develop as a mature carbon market in future years.
Abstract:
Gumbel analyses were carried out on rainfall time-series at 151 locations in Switzerland for 4 different 30-year periods in order to estimate daily extreme precipitation for a return period of 100 years. These estimates were compared with the maximal daily values measured during the last 100 years (1911-2010) to test the efficiency of the analyses. The comparison shows that the analyses provide good results for 50 to 60% of locations in the country when based on the 1961-1990 and 1980-2010 rainfall time-series. On the other hand, daily precipitation with a return period of 100 years is underestimated at most locations when based on the 1931-1960 and especially the 1911-1940 time-series. This underestimation results from the increase in maximal daily precipitation recorded from 1911 to 2010 at 90% of locations in Switzerland.
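A Gumbel analysis of this kind can be sketched briefly: fit location and scale to a series of annual daily-rainfall maxima, then invert the Gumbel cdf at 1 − 1/T for the T-year return level. The data below are synthetic stand-ins (not Swiss station records), and the fit uses the simple method of moments rather than whatever estimator the study applied.

```python
# Sketch of a Gumbel analysis: moment fit to annual daily-rainfall maxima,
# then the 100-year return level. annual_max is synthetic, for illustration.
import math
import random
import statistics

random.seed(1)
annual_max = [random.gauss(60, 12) for _ in range(30)]  # mm/day, "30 years"

mean = statistics.mean(annual_max)
sd = statistics.stdev(annual_max)
scale = sd * math.sqrt(6) / math.pi          # Gumbel moment estimator
loc = mean - 0.5772 * scale                  # Euler-Mascheroni constant

T = 100                                      # return period in years
# Invert the Gumbel cdf F(x) = exp(-exp(-(x-loc)/scale)) at 1 - 1/T:
x_T = loc - scale * math.log(-math.log(1 - 1 / T))
print(f"estimated 100-year daily rainfall: {x_T:.1f} mm")
```

Comparing `x_T` fitted on different 30-year windows against the full-record maximum is exactly the efficiency check the abstract describes.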
Abstract:
This paper introduces a State Space approach to explain the dynamics of rent growth, expected returns and the Price-Rent ratio in housing markets. According to the present value model, movements in the price to rent ratio should be matched by movements in expected returns and expected rent growth. The state space framework assumes that both variables follow an autoregressive process of order one. The model is applied to the US and UK housing markets, which yields series of the latent variables given the behaviour of the Price-Rent ratio. Resampling techniques and bootstrapped likelihood ratios show that expected returns tend to be highly persistent compared to rent growth. The filtered expected returns are then used in a simple excess-return predictability model, with high statistical predictability evidenced for the UK. Overall, it is found that the present value model tends to have strong statistical predictability in the UK housing market.
Abstract:
A method to estimate an extreme quantile that requires no distributional assumptions is presented. The approach is based on transformed kernel estimation of the cumulative distribution function (cdf). The proposed method consists of a double-transformation kernel estimation. We derive optimal bandwidth selection methods that have a direct expression for the smoothing parameter. The bandwidth adapts to the given quantile level. The procedure is useful for large data sets and improves quantile estimation compared to other methods for heavy-tailed distributions. Implementation is straightforward and R programs are available.
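The basic building block, kernel estimation of the cdf followed by inversion to get a quantile, can be illustrated simply. This is a deliberately simplified sketch: a plain Gaussian-kernel cdf on the raw data with a rule-of-thumb bandwidth, not the paper's double transformation or its optimal bandwidth formulas.

```python
# Simplified illustration: Gaussian-kernel estimate of the cdf, inverted
# by bisection to estimate a high quantile. Data are synthetic Exp(1).
import math
import random
import statistics

def kernel_cdf(x, data, h):
    # average of Gaussian cdfs centered at the observations
    return sum(0.5 * (1 + math.erf((x - d) / (h * math.sqrt(2))))
               for d in data) / len(data)

def kernel_quantile(p, data, h, lo, hi):
    for _ in range(60):                      # bisection on the smoothed cdf
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if kernel_cdf(mid, data, h) < p else (lo, mid)
    return (lo + hi) / 2

random.seed(2)
data = [random.expovariate(1.0) for _ in range(5000)]   # right-skewed sample
h = 1.06 * statistics.stdev(data) * len(data) ** -0.2   # rule-of-thumb h
q99 = kernel_quantile(0.99, data, h, 0.0, 50.0)
print(f"estimated 0.99 quantile: {q99:.2f}")  # exact Exp(1) value is -ln(0.01) ~ 4.61
```

The transformation step in the paper exists precisely because this naive version degrades at more extreme levels of heavy-tailed data, where few observations support the tail.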
Abstract:
Extreme times techniques, generally applied to nonequilibrium statistical mechanical processes, are also useful for a better understanding of financial markets. We present a detailed study of the mean first-passage time for the volatility of return time series. The empirical results extracted from daily data of major indices seem to follow the same law regardless of the kind of index, suggesting a universal pattern. The empirical mean first-passage time to a certain level L is fairly different from that of the Wiener process, showing dissimilar behavior depending on whether L is higher or lower than the average volatility. All of this indicates a more complex dynamics in which a reverting force drives volatility toward its mean value. We then present the mean first-passage time expressions of the most common stochastic volatility models whose approach is comparable to the random diffusion description. We discuss asymptotic approximations of these models and compare them with the empirical results, finding good agreement with the exponential Ornstein-Uhlenbeck model.
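The mean first-passage time (MFPT) for a mean-reverting process can be measured by direct simulation, which makes the effect of the "reverting force" concrete: levels far from the mean take disproportionately long to reach. This is a toy Ornstein-Uhlenbeck simulation with invented parameters, not the exponential OU volatility model fitted in the paper.

```python
# Estimate the mean first-passage time of an Ornstein-Uhlenbeck process
# dX = -theta*X dt + sigma dW, started at X = 0, to an upper level.
import random

def mfpt(level, theta=1.0, sigma=1.0, dt=2e-3, trials=100):
    random.seed(3)
    total = 0.0
    for _ in range(trials):
        x, t = 0.0, 0.0
        while x < level:                    # Euler-Maruyama steps until hit
            x += -theta * x * dt + sigma * dt ** 0.5 * random.gauss(0.0, 1.0)
            t += dt
        total += t
    return total / trials

t_near = mfpt(0.5)   # level close to the mean: reached quickly
t_far = mfpt(1.5)    # level far above the mean: reversion makes hits rare
print(f"MFPT to 0.5: {t_near:.2f}, MFPT to 1.5: {t_far:.2f}")
```

For a driftless Wiener process the MFPT would not show this steep growth with level; the sharp increase of `t_far` over `t_near` is the signature of mean reversion the abstract describes.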
Abstract:
In the 1920s, Ronald Fisher developed the theory behind the p value, and Jerzy Neyman and Egon Pearson developed the theory of hypothesis testing. These distinct theories have provided researchers with important quantitative tools to confirm or refute their hypotheses. The p value is the probability of obtaining an effect equal to or more extreme than the one observed, presuming the null hypothesis of no effect is true; it gives researchers a measure of the strength of evidence against the null hypothesis. As commonly used, investigators will select a threshold p value below which they will reject the null hypothesis. The theory of hypothesis testing allows researchers to reject a null hypothesis in favor of an alternative hypothesis of some effect. As commonly used, investigators choose Type I error (rejecting the null hypothesis when it is true) and Type II error (accepting the null hypothesis when it is false) levels and determine a critical region. If the test statistic falls into that critical region, the null hypothesis is rejected in favor of the alternative hypothesis. Despite similarities between the two, the p value and the theory of hypothesis testing are different theories that are often misunderstood and confused, leading researchers to improper conclusions. Perhaps the most common misconception is to consider the p value as the probability that the null hypothesis is true, rather than the probability of obtaining the difference observed, or one that is more extreme, given that the null is true. Another concern is the risk that a substantial proportion of statistically significant results are falsely significant. Researchers should have a minimum understanding of these two theories so that they are better able to plan, conduct, interpret, and report scientific experiments.
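The definition above, "the probability of a result equal to or more extreme than the one observed, given the null", can be made concrete with a toy example: testing whether a coin is fair after observing 60 heads in 100 flips (the scenario and numbers are invented for illustration).

```python
# Exact two-sided p value for 60 heads in 100 flips under the null
# hypothesis of a fair coin: P(|X - 50| >= 10) for X ~ Binomial(100, 0.5).
from math import comb

n, k = 100, 60
p_value = sum(comb(n, i) * 0.5 ** n          # binomial probability of i heads
              for i in range(n + 1)
              if abs(i - n / 2) >= abs(k - n / 2))
print(f"p = {p_value:.4f}")
```

The result (about 0.057) is greater than the conventional 0.05 threshold, so under the common usage the abstract describes, the null would not be rejected; note the p value is not the probability that the coin is fair.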
Abstract:
Background: Pre-existing psychological factors can strongly influence coping with type 1 diabetes mellitus and interfere with self-monitoring. Psychiatric disorders seem to be positively associated with poor metabolic control. We present a case of extreme compulsive blood testing due to obsessive fear of hypoglycemia in an adolescent with type 1 diabetes mellitus. Case report: Type 1 diabetes mellitus (anti-GAD antibodies 2624 U/l, norm < 9.5) was diagnosed in a boy aged 14.3 years [height 170 cm (+0.93 SDS), weight 50.5 kg (+0.05 SDS)]. Laboratory work-up showed no evidence of other autoimmune disease. Family and past medical history were unremarkable. Growth and developmental milestones were normal. An insulin-analog-based basal-bolus regime was initiated, together with standard diabetes education. Routine psychological evaluation performed at the onset of diabetes revealed intermittent anxiety and obsessive-compulsive traits. Accordingly, close psychiatric follow-up was initiated for the patient and his family. Adequate metabolic control (HbA1c drop from >14% to 8%) was achieved within 3 months, attributed to residual β-cell function. In the following 6 months, HbA1c rose unexpectedly despite seemingly adequate adaptations of insulin doses. Obsessive fear of hypoglycemia leading to severe compulsive behavior developed progressively, with as many as 68 glycemia measurements per day (mean over 1 week). The patient reported that he could not bear leaving home with glycemia < 15 mmol/l, ending up with exclusion from school and severe intra-familial conflict. Despite intensive psychiatric outpatient support, HbA1c rose rapidly to >14%, with glycemia testing reaching peaks of 120 tests/day. The situation could only be resolved through psychiatric hospitalization with intensive behavioral training. As a result, adequate metabolic balance was restored (HbA1c 7.1%) with an acceptable 10-15 daily glycemia measurements.
Discussion: The association of overt psychiatric disorders with type 1 diabetes mellitus is very rare in the pediatric age group. It can lead to pathological behavior with uncontrolled diabetes. Such exceptional situations require long-term admissions with specialized psychiatric care. Slow acceptance of a "less is better" principle in glycemia testing and improvement of metabolic control are difficult to achieve.
Abstract:
Optimization of quantum measurement processes has a pivotal role in carrying out better (more accurate or less disruptive) measurements and experiments on a quantum system. In particular, convex optimization, i.e., identifying the extreme points of the convex sets and subsets of quantum measuring devices, plays an important part in quantum optimization, since the typical figures of merit for measuring processes are affine functionals. In this thesis, we discuss results determining the extreme quantum devices and their relevance, e.g., in quantum-compatibility-related questions. In particular, we see that a compatible device pair where one device is extreme can be joined into a single apparatus in an essentially unique way. Moreover, we show that the question of whether a pair of quantum observables can be measured jointly can often be formulated in a weaker form when some of the observables involved are extreme. Another major line of research treated in this thesis deals with convex analysis of special restricted quantum device sets, covariance structures, or, in particular, generalized imprimitivity systems. Some results on the structure of covariant observables and instruments are listed, as well as results identifying the extreme points of covariance structures in quantum theory. As a special case study, not published anywhere before, we study the structure of Euclidean-covariant localization observables for spin-0 particles. We also discuss the general form of Weyl-covariant phase-space instruments. Finally, certain optimality measures originating from convex geometry are introduced for quantum devices: boundariness, measuring how 'close' to the algebraic boundary of the device set a quantum apparatus is, and the robustness of incompatibility, quantifying the level of incompatibility for a quantum device pair by measuring the highest amount of noise the pair tolerates without becoming compatible.
Boundariness is further associated with minimum-error discrimination of quantum devices, and the robustness of incompatibility is shown to behave monotonically under certain compatibility-non-decreasing operations. Moreover, the value of the robustness of incompatibility is given for a few special device pairs.
Abstract:
The summer monsoon season is an important hydrometeorological feature of the Indian subcontinent and has significant socioeconomic impacts. This study is aimed at understanding the processes associated with the occurrence of catastrophic flood events. The study has two novel features that add to the existing body of knowledge about the South Asian Monsoon: 1) it combines traditional hydrometeorological observations (rain gauge measurements) with unconventional data (media and state historical records of reported flooding) to produce a value-added, century-long time-series of potential flood events, and 2) it identifies the larger regional synoptic conditions leading to days with flood potential in the time-series. The promise of mining unconventional data to extend hydrometeorological records is demonstrated in this study. The synoptic evolution of flooding events on the western-central coast of India and in the densely populated Mumbai area is shown to correspond to active monsoon periods with embedded low-pressure centers and to have far-upstream influence from the western edge of the Indian Ocean basin. The coastal processes along the Arabian Peninsula, where the currents interact with the continental shelf, are found to be key features of extremes during the South Asian Monsoon.
Abstract:
Although financial theory rests heavily upon the assumption that asset returns are normally distributed, value indices of commercial real estate display significant departures from normality. In this paper, we apply and compare the properties of two recently proposed regime switching models for value indices of commercial real estate in the US and the UK, both of which relax the assumption that observations are drawn from a single distribution with constant mean and variance. Statistical tests of the models' specification indicate that the Markov switching model is better able to capture the non-stationary features of the data than the threshold autoregressive model, although both describe the data better than models that allow for only one state. Our results have several implications for theoretical models and empirical research in finance.
Abstract:
This work is an assessment of the frequency of extreme values (EVs) of daily rainfall in the city of São Paulo, Brazil, over the period 1933-2005, based on the peaks-over-threshold (POT) and Generalized Pareto Distribution (GPD) approach. Usually, a GPD model is fitted to a sample of POT values selected with a constant threshold. In this work, however, we use time-dependent thresholds, composed of relatively large p quantiles (for example p of 0.97) of daily rainfall amounts computed from all available data. Samples of POT values were extracted with several values of p. Four different GPD models (GPD-1, GPD-2, GPD-3, and GPD-4) were fitted to each of these samples by the maximum likelihood (ML) method. The shape parameter was assumed constant for the four models, but time-varying covariates were incorporated into the scale parameter of GPD-2, GPD-3, and GPD-4, describing an annual cycle in GPD-2, a linear trend in GPD-3, and both annual cycle and linear trend in GPD-4. GPD-1, with constant scale and shape parameters, is the simplest model. To identify the best of the four models we used the rescaled Akaike Information Criterion (AIC) with second-order bias correction. This criterion isolates GPD-3 as the best model, i.e. the one with a positive linear trend in the scale parameter. The slope of this trend is significant against the null hypothesis of no trend at about the 98% confidence level. The non-parametric Mann-Kendall test also showed a positive trend in the annual frequency of excesses over high thresholds, with a p-value of virtually zero. There is therefore strong evidence that high quantiles of daily rainfall in the city of São Paulo have been increasing in magnitude and frequency over time. For example, the 0.99 quantile of daily rainfall increased by about 40 mm between 1933 and 2005. Copyright (C) 2008 Royal Meteorological Society
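The POT step can be sketched briefly: choose a high empirical quantile as the threshold, collect the exceedances, and fit a GPD. This sketch uses synthetic data (not the São Paulo record), a constant threshold, and method-of-moments estimators rather than the study's maximum likelihood fit with time-dependent thresholds and covariates.

```python
# Peaks-over-threshold sketch: threshold at the empirical 0.97 quantile,
# GPD fitted to the exceedances by the method of moments.
import random
import statistics

random.seed(4)
daily_rain = [random.expovariate(1 / 10) for _ in range(20000)]  # mm/day

p = 0.97
u = sorted(daily_rain)[int(p * len(daily_rain))]  # threshold: 0.97 quantile
exc = [x - u for x in daily_rain if x > u]        # exceedances over u

m = statistics.mean(exc)
v = statistics.variance(exc)
shape = 0.5 * (1 - m * m / v)                     # GPD moment estimators
scale = 0.5 * m * (m * m / v + 1)
print(f"u = {u:.1f} mm, shape = {shape:.3f}, scale = {scale:.2f}")
```

Letting `u` (or the GPD scale) vary with time, as the study's GPD-2 to GPD-4 do, is what turns this into a test for trends and seasonality in the extremes.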