969 results for mean-variance portfolio optimization
Abstract:
A number of recent works have introduced statistical methods for detecting genetic loci that affect phenotypic variability, which we refer to as variability-controlling quantitative trait loci (vQTL). These are genetic variants whose allelic state predicts how much phenotype values will vary about their expected means. Such loci are of great potential interest in both human and non-human genetic studies, one reason being that a detected vQTL could represent a previously undetected interaction with other genes or environmental factors. The simultaneous publication of these new methods in different journals has in many cases precluded the opportunity for comparison. We survey some of these methods, the trade-offs they imply, and the connections between them. The methods fall into three main groups: classical non-parametric, fully parametric, and semi-parametric two-stage approximations. Choosing between alternatives involves balancing the need for robustness, flexibility, and speed. For each method, we identify important assumptions and limitations, including those of practical importance, such as their scope for including covariates and random effects. We show in simulations that both parametric methods and their semi-parametric approximations can give elevated false positive rates when they ignore mean-variance relationships intrinsic to the data generation process. We conclude that choice of method depends on the trait distribution, the need to include non-genetic covariates, and the population size and structure, coupled with a critical evaluation of how these fit with the assumptions of the statistical model.
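As a concrete illustration of the kind of model these methods target (an illustrative formulation, not one taken from the survey), a fully parametric vQTL analysis can be phrased as a double generalized linear model in which the genotype \(g_i\) affects both the mean and the log variance of the phenotype \(y_i\):

\[
y_i = \mu + \beta g_i + \varepsilon_i, \qquad \varepsilon_i \sim N(0, \sigma_i^2), \qquad \log \sigma_i^2 = \gamma_0 + \gamma_1 g_i,
\]

so that a vQTL at the locus corresponds to rejecting \(H_0 : \gamma_1 = 0\), and ignoring a mean-variance relationship amounts to misspecifying the variance model.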
Abstract:
Despite the large size of the Brazilian debt market and the wide diversity of its bonds, the picture that emerges is of a market that has not yet completed its transition from the role it performed during the megainflation years, namely that of providing a liquid asset with positive real returns. This unfinished transition is currently placing the market under severe stress, as fears of a possible default by the next administration grow larger. This paper analyzes several aspects of the management of the domestic public debt. Section 2 discusses the causes of the extremely large and fast growth of the domestic public debt during President Cardoso's seven-year period in office. Section 3 computes Value at Risk and Cash Flow at Risk measures for the domestic public debt. Section 4 introduces rollover risk in a mean-variance framework. Section 5 discusses a few issues concerning the overlap between debt management and monetary policy. Finally, Section 6 wraps up with policy discussion and policy recommendations.
Abstract:
In this paper, the optimal reactive power planning problem under risk is presented. The classical mixed-integer nonlinear model for reactive power planning is expanded into a two-stage stochastic model that accounts for risk. The new model considers uncertainty in the demand load. Risk is quantified by a factor introduced into the objective function, identified as the variance of the random variables. Finally, numerical results illustrate the performance of the proposed model, which is applied to the IEEE 30-bus test system to determine the optimal amount and location of reactive power expansion.
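A generic sketch of the resulting objective (the paper's exact formulation may differ) is a two-stage stochastic program in which the expected second-stage cost is penalized by its variance:

\[
\min_{x} \; c^\top x + \mathbb{E}_\omega\big[Q(x,\omega)\big] + \beta\,\mathrm{Var}_\omega\big[Q(x,\omega)\big],
\]

where \(x\) collects the integer siting and sizing decisions for reactive power sources, \(Q(x,\omega)\) is the optimal second-stage operating cost under demand scenario \(\omega\), and \(\beta\) is the risk factor weighting the variance term.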
Abstract:
Markowitz's objective function, Value-at-Risk, and Conditional Value-at-Risk are widely used tools for portfolio optimization in financial markets. This paper analyzes these functions with the aim of adapting them for application to portfolios of non-financial assets. As an example, it uses the electricity market to analyze and optimize a fictitious investment portfolio of a hypothetical electric power utility, showing that such an application is possible and identifying which considerations must be taken into account and which analyses must be made to apply Modern Portfolio Theory in the non-financial universe.
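As a minimal sketch of the quantities the paper adapts (all positions and numbers below are hypothetical, not taken from the paper), the portfolio mean, variance, and historical VaR/CVaR can be computed directly from scenario returns:

```python
import numpy as np

def portfolio_stats(weights, returns):
    """Mean and sample variance of portfolio returns across scenarios."""
    port = returns @ weights
    return port.mean(), port.var(ddof=1)

def historical_var_cvar(weights, returns, alpha=0.95):
    """Historical VaR and CVaR at confidence level alpha.

    Losses are negated portfolio returns; VaR is the alpha-quantile of
    the losses and CVaR is the mean loss at or beyond VaR.
    """
    losses = -(returns @ weights)
    var = np.quantile(losses, alpha)
    cvar = losses[losses >= var].mean()
    return var, cvar

# Hypothetical scenario returns for three electricity-sector positions
# (e.g. spot sales, bilateral contracts, generation assets).
rng = np.random.default_rng(0)
scenarios = rng.normal([0.05, 0.03, 0.04], [0.20, 0.05, 0.10], size=(1000, 3))
w = np.array([0.3, 0.5, 0.2])
print(portfolio_stats(w, scenarios))
print(historical_var_cvar(w, scenarios))
```

The computations themselves carry over unchanged to non-financial assets; the considerations the paper discusses concern how the scenario returns are constructed.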
Abstract:
We propose a novel class of models for functional data exhibiting skewness or other shape characteristics that vary with spatial or temporal location. We use copulas so that the marginal distributions and the dependence structure can be modeled independently. Dependence is modeled with a Gaussian or t-copula, so that there is an underlying latent Gaussian process. We model the marginal distributions using the skew t family. The mean, variance, and shape parameters are modeled nonparametrically as functions of location. A computationally tractable inferential framework for estimating heterogeneous asymmetric or heavy-tailed marginal distributions is introduced. This framework provides a new set of tools for increasingly complex data collected in medical and public health studies. Our methods were motivated by and are illustrated with a state-of-the-art study of neuronal tracts in multiple sclerosis patients and healthy controls. Using the tools we have developed, we were able to find those locations along the tract most affected by the disease. However, our methods are general and highly relevant to many functional data sets. In addition to the application to one-dimensional tract profiles illustrated here, higher-dimensional extensions of the methodology could have direct applications to other biological data including functional and structural MRI.
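Schematically (an illustrative rendering of the construction, not the authors' exact notation), each marginal is a location-dependent skew-t transform of a latent Gaussian process that supplies the copula dependence:

\[
Y(t) = F_{\text{skew-}t}^{-1}\big(\Phi(Z(t));\, \xi(t),\, \omega(t),\, \alpha(t),\, \nu\big),
\]

where \(Z(t)\) is a standard Gaussian (or t) process, \(\Phi\) is the standard normal CDF, and the location \(\xi(t)\), scale \(\omega(t)\), and shape \(\alpha(t)\) are modeled nonparametrically as functions of location \(t\).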
Abstract:
In this paper, we present statistical analyses of several types of traffic sources in a 3G network, namely voice, video and data sources. For each traffic source type, measurements were collected in order, on the one hand, to gain a better understanding of the statistical characteristics of the sources and, on the other, to enable forecasting of traffic behaviour in the network. The latter can be used to estimate service times and quality-of-service parameters. The probability density function, mean, variance, mean square deviation, skewness and kurtosis of the interarrival times are estimated with the Wolfram Mathematica and Crystal Ball statistical tools. Based on the evaluation of packet interarrival times, we show how the gamma distribution can be used in network simulations and in evaluating available capacity in opportunistic systems. As a result of our analyses, shape and scale parameters of the gamma distribution are generated. The data can also be applied in dynamic network configuration in order to avoid potential network congestion or overflows.
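A minimal sketch of the moment matching implied by the abstract (the sample below is simulated, not measurement data): for a gamma distribution with shape \(k\) and scale \(\theta\), the mean is \(k\theta\) and the variance \(k\theta^2\), so both parameters follow from the estimated mean and variance of the interarrival times:

```python
import numpy as np

def gamma_moments_fit(samples):
    """Method-of-moments fit of a gamma distribution.

    With mean = k*theta and variance = k*theta**2, we get
    k = mean**2 / var and theta = var / mean.
    """
    m = np.mean(samples)
    v = np.var(samples, ddof=1)
    return m**2 / v, v / m  # (shape, scale)

# Hypothetical packet interarrival times in seconds.
rng = np.random.default_rng(1)
times = rng.gamma(shape=0.8, scale=0.01, size=5000)
print(gamma_moments_fit(times))  # recovers roughly (0.8, 0.01)
```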
Abstract:
The purpose of this study is to investigate the effects of predictor variable correlations and patterns of missingness with dichotomous and/or continuous data in small samples when missing data are multiply imputed. Missing predictor data are multiply imputed under three different multivariate models: the multivariate normal model for continuous data, the multinomial model for dichotomous data, and the general location model for mixed dichotomous and continuous data. Following the multiple imputation process, Type I error rates of the regression coefficients obtained with logistic regression analysis are estimated under various conditions of correlation structure, sample size, type of data, and pattern of missing data. The distributional properties of the average mean, variance and correlations among the predictor variables are assessed after the multiple imputation process.

For continuous predictor data under the multivariate normal model, Type I error rates are generally within the nominal values with samples of size n = 100. Smaller samples of size n = 50 resulted in more conservative estimates (i.e., lower than the nominal value). Correlation and variance estimates of the original data are retained after multiple imputation with less than 50% missing continuous predictor data. For dichotomous predictor data under the multinomial model, Type I error rates are generally conservative, in part due to the sparseness of the data. The correlation structure of the predictor variables is not well retained in multiply-imputed data from small samples with more than 50% missing data under this model. For mixed continuous and dichotomous predictor data, the results are similar to those found under the multivariate normal model for continuous data and under the multinomial model for dichotomous data. With all data types, a fully observed variable included alongside variables subject to missingness in the multiple imputation process and subsequent statistical analysis produced liberal (larger than nominal) Type I error rates under a specific pattern of missing data. It is suggested that future studies focus on the effects of multiple imputation in multivariate settings with more realistic data characteristics and a variety of multivariate analyses, assessing both Type I error and power.
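For context, the pooled inferences whose Type I error rates are assessed here follow the standard pooling rules (Rubin's rules): with \(m\) imputations yielding coefficient estimates \(\hat{Q}_j\) and within-imputation variances \(U_j\),

\[
\bar{Q} = \frac{1}{m}\sum_{j=1}^{m} \hat{Q}_j, \qquad B = \frac{1}{m-1}\sum_{j=1}^{m}\big(\hat{Q}_j - \bar{Q}\big)^2, \qquad T = \bar{U} + \Big(1 + \frac{1}{m}\Big)B,
\]

where \(\bar{U}\) is the average within-imputation variance and \(T\) is the total variance used in the significance tests.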
Abstract:
The study of the reliability of components and systems is of great importance in several fields of engineering, and particularly in computer science. When analyzing the lifetimes of the items in a sample, one must account for items that do not fail during the experiment, or that fail from causes other than the one under study. New sampling schemes have arisen to handle these cases. The most general of them, censored sampling, is the one considered in our work: in this scheme both the time until the component fails and the censoring time are random variables. Under the hypothesis that both times are exponentially distributed, Hurt studied the asymptotic behaviour of the maximum likelihood estimator of the reliability function. Bayesian methods are attractive in reliability studies because they incorporate the prior information that is usually available in real problems. We therefore consider two Bayes estimators of the reliability of an exponential distribution: the mean and the mode of the posterior distribution. We calculate the asymptotic expansion of the mean, variance and mean square error of both estimators when the censoring distribution is exponential, and we also obtain the asymptotic distribution of the estimators in the more general case of a Weibull censoring distribution. Two types of large-sample confidence intervals are proposed for each estimator. The results are compared with those of the maximum likelihood estimator, and with those of two non-parametric estimators, the product-limit and a Bayesian estimator; one of our estimators shows superior behaviour. Finally, we show by simulation that our estimators are robust against the assumed censoring distribution, and that one of the proposed confidence intervals remains valid in small samples. This simulation study also confirms the better behaviour of one of our estimators.
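As an illustration of the kind of Bayes estimator involved (a standard conjugate computation under an assumed Gamma prior, not necessarily the exact prior of the thesis), let the exponential failure rate \(\lambda\) have a \(\mathrm{Gamma}(a, b)\) prior and let the censored sample contribute \(d\) observed failures with total time on test \(T\). The posterior is \(\mathrm{Gamma}(a + d,\, b + T)\), and the posterior mean of the reliability \(R(t) = e^{-\lambda t}\) is

\[
\hat{R}(t) = \mathbb{E}\big[e^{-\lambda t}\mid \text{data}\big] = \left(\frac{b + T}{b + T + t}\right)^{a + d}.
\]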
Abstract:
In the current uncertain context affecting both the world economy and the energy sector, with rapidly increasing oil and gas prices and a very unstable political situation in some of the largest raw-material producers, there is a need for efficient and powerful quantitative tools to model and forecast fossil fuel prices, CO2 emission allowance prices and electricity prices. Such tools will improve decision making for all agents involved in energy issues. Although there are papers focused on modelling fossil fuel prices, CO2 prices or electricity prices separately, the literature contains few attempts to consider all of them together. This paper focuses both on building a multivariate model for the aforementioned prices and comparing its results with those of univariate models in terms of prediction accuracy (univariate and multivariate models are compared over a large span of days, covering the first four months of 2011), and on extracting common features in the volatilities of all these relevant prices. The common volatility features are extracted by means of a conditionally heteroskedastic dynamic factor model, which alleviates the curse-of-dimensionality problem that commonly arises when estimating multivariate GARCH models. Additionally, the common volatility factors obtained are useful for improving forecasting intervals and have an appealing economic interpretation. The results obtained and the methodology proposed can also serve as a starting point for risk management or portfolio optimization under uncertainty in the current context of energy markets.
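One common specification of such a model (the paper's may differ in detail) loads all observed price series on a few latent factors whose conditional variances follow univariate GARCH recursions:

\[
y_t = \Lambda f_t + \varepsilon_t, \qquad f_{k,t} = \sigma_{k,t}\,\eta_{k,t}, \qquad \sigma_{k,t}^2 = \omega_k + \alpha_k f_{k,t-1}^2 + \beta_k \sigma_{k,t-1}^2,
\]

where \(y_t\) stacks the fuel, CO2 allowance and electricity price series, \(\Lambda\) is the factor loading matrix, and \(\varepsilon_t\) is an idiosyncratic term. Only a handful of univariate volatility equations must then be estimated, which is what sidesteps the dimensionality problem of full multivariate GARCH.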
Abstract:
The duration of a holiday trip is a tourist decision with fundamental implications for tourism organizations, yet it has received little attention in the literature. Moreover, the few existing studies have focused on coastal destinations, even though inland tourism is emerging as an important alternative in some countries. This paper analyzes the determinants of the temporal choice of the tourist trip, distinguishing the type of destination chosen (coast versus inland) and proposing several hypotheses about the influence of destination-related individual characteristics, personal constraints, and sociodemographic characteristics. As a novelty for this type of decision, the methodology estimates a Truncated Negative Binomial Model, which avoids the estimation biases of regression models and the restrictive mean-variance equality assumption of the Poisson Model. The empirical application, carried out in Spain on a sample of 1,600 individuals, leads to the conclusion, on the one hand, that the Negative Binomial Model is more suitable than the Poisson Model for this type of analysis. On the other hand, the dimensions determining the duration of the holiday trip are, for both destination types, hotel and own-apartment accommodation, time constraints, the tourist's age, and the way the trip is organized; the size of the city of residence and the attribute of "cheap prices" are differential aspects of the coast, while accommodation in rented apartments is differential for inland destinations.
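The mean-variance restriction at issue is standard and worth stating: the Poisson model forces \(\mathrm{Var}(Y) = \mathbb{E}(Y) = \mu\), whereas the negative binomial with dispersion parameter \(\theta\) allows overdispersion, and truncation at zero accounts for only trips of positive duration being observed:

\[
\mathrm{Var}_{\mathrm{NB}}(Y) = \mu + \frac{\mu^2}{\theta}, \qquad P(Y = y \mid Y > 0) = \frac{P(Y = y)}{1 - P(Y = 0)}, \quad y = 1, 2, \dots
\]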
Abstract:
During 1999 and 2000 a large number of articles appeared in the financial press arguing that the concentration of the FTSE 100 had increased. Many of these reports suggested that stock market volatility in the UK had risen because the concentration of its stock markets had increased. This study undertakes a comprehensive measurement of stock market concentration using the FTSE 100 index. We find that during 1999, 2000 and 2001 stock market concentration was noticeably higher than at any other time since the index was introduced. When we measure the volatility of the FTSE 100 index we do not find an association between concentration and its volatility. When we examine the variances and covariances of the FTSE 100 constituents we find that security volatility appears to be positively related to concentration changes, but concentration and the size of security covariances appear to be negatively related. We simulate the variance of four versions of the FTSE 100 index; in each version the weighting structure reflects either an equally weighted index or one with low, intermediate or high concentration. We find that moving from low to high concentration has very little impact on the volatility of the index. To complete the study we estimate the minimum variance portfolio for the FTSE 100, and we then compare the concentration levels of this index to those formed on the basis of market weighting. We find that concentration under the realised FTSE index weightings is higher than under the minimum variance index.
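A minimal sketch of the two computations involved (the covariance matrix below is hypothetical, not FTSE data): the variance of an index with weight vector \(w\) and constituent covariance matrix \(\Sigma\) is the quadratic form \(w^\top \Sigma w\), and the fully invested, unconstrained minimum variance weights are proportional to \(\Sigma^{-1}\mathbf{1}\):

```python
import numpy as np

def index_variance(weights, cov):
    """Variance of an index with given constituent weights: w' Sigma w."""
    return weights @ cov @ weights

def min_variance_weights(cov):
    """Fully invested minimum variance weights, w proportional to
    Sigma^{-1} 1 (analytic solution, no short-sale constraints)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

# Hypothetical 4-constituent covariance matrix.
cov = np.array([[0.04, 0.01, 0.00, 0.01],
                [0.01, 0.09, 0.02, 0.01],
                [0.00, 0.02, 0.16, 0.02],
                [0.01, 0.01, 0.02, 0.25]])
w_mv = min_variance_weights(cov)
w_eq = np.full(4, 0.25)  # equally weighted benchmark
print(w_mv, index_variance(w_mv, cov), index_variance(w_eq, cov))
```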
Abstract:
Using the risk measure CVaR in financial analysis has become more and more popular recently. In this paper we apply CVaR to portfolio optimization. The problem is formulated as a two-stage stochastic programming model, and the SRA algorithm, a recently developed heuristic algorithm, is applied to minimize CVaR.
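The abstract does not spell out the formulation; a standard way to cast CVaR minimization as a scenario-based program of this kind is the Rockafellar-Uryasev construction, sketched below with simulated data (the SRA heuristic itself is not reproduced here):

```python
import numpy as np
from scipy.optimize import linprog

def min_cvar_weights(returns, alpha=0.95):
    """Minimize portfolio CVaR via the Rockafellar-Uryasev LP.

    Decision vector x = [w (n weights), zeta (VaR level), z (S slacks)];
    minimize zeta + sum(z) / ((1 - alpha) * S)
    s.t.     z_s >= -r_s.w - zeta,  z_s >= 0,  sum(w) = 1,  w >= 0.
    """
    S, n = returns.shape
    c = np.concatenate([np.zeros(n), [1.0], np.full(S, 1.0 / ((1 - alpha) * S))])
    # Scenario constraints: -r_s.w - zeta - z_s <= 0 for each s.
    A_ub = np.hstack([-returns, -np.ones((S, 1)), -np.eye(S)])
    b_ub = np.zeros(S)
    # Budget constraint: weights sum to one.
    A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(S)]).reshape(1, -1)
    bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * S
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=bounds)
    return res.x[:n], res.fun  # optimal weights and minimal CVaR

# Hypothetical scenario returns for three assets.
rng = np.random.default_rng(2)
scenarios = rng.normal([0.04, 0.02, 0.03], [0.15, 0.05, 0.10], size=(500, 3))
w, cvar = min_cvar_weights(scenarios)
print(w.round(3), round(cvar, 4))
```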
Abstract:
Price impact functions show the relative price change caused by an order of a given value. Knowledge of the price impact function helps market participants to predict the price impact of their future orders, to estimate the extra trading cost arising from price changes, and to design optimal trading algorithms. The method we develop allows market participants to determine a virtual price impact function simply and quickly, without knowledge of the whole order book: we present the relation between the price impact function and liquidity measures, and show how a price impact function can be estimated from the time series of the Budapest Liquidity Measure (BLM). The methodology is illustrated on the time series of the OTP share, for which a virtual price impact function is estimated from its BLM data for the period from 1 January 2007 to 3 June 2011. In the empirical analysis we examine the evolution of the price impact function over time and its basic statistical properties, which yields a picture of the past behaviour of the transaction costs that arise from a lack of liquidity. The information obtained may, for instance, help traders in dynamic portfolio optimization.
Abstract:
The risk measure CVaR has been gaining importance in the assessment of portfolio risk in recent years. Minimizing the CVaR of a whole portfolio can be formulated as a two-stage stochastic programming problem. The SRA algorithm is a recently developed heuristic algorithm for solving stochastic programming problems. In this paper the SRA algorithm is applied to minimizing the CVaR risk measure.
Abstract:
The effects of steroid hormones on the behavior of vertebrates have been described as organizational and activational. These actions occur in different periods of ontogenetic development, such as the fetal and early postnatal periods and puberty (organizational effects), or modify the expression of behavioral patterns throughout life (activational effects). Studies on brain lateralization of hand use in human and non-human primates have shown that sex hormones seem to participate in the strengthening of handedness, a process that begins in the pubertal period and stabilizes in adulthood. The aim of this study was to investigate, in adult male Callithrix jacchus, whether the strength of hand preference in the adult male common marmoset is stable (organizational effect) or whether androgen variations affect its stability (activational effect). The preferential use of one hand was studied in 14 common marmosets (Callithrix jacchus) in two contexts: (1) spontaneously holding food and directing it to the mouth (feeding episodes), and (2) forced food-reaching tests in which the animal had to reach the food through a central hole in a cover plate, allowing the use of only one hand. Records were made during 5 sessions of 20 bouts each at baseline, totaling 100 episodes before the two treatments. First, a GnRH antagonist was used: a single subcutaneous injection of 100 µg of Cetrotide (cetrorelix acetate; Baxter Oncology GmbH, Germany) (n = 10). Second, a single injection of 0.2 mg of GnRH (Sigma-Aldrich) (n = 8) was used. After the injections, 20 successful hand-use episodes were recorded on the 1st, 2nd, 7th, 15th and 30th days, totaling 100 episodes for each context over the whole period, after both treatments. Fecal samples for measuring extracted fecal androgens were collected on all days of data collection throughout the baseline and experimental periods. Statistical analyses used mixed models, Tukey tests to compare mean values after the two treatments, and Levene tests to compare variances, all at p < 0.05. In the basal phase, 6 animals preferentially used the right hand, 5 the left, and 3 were ambidextrous. Mean handedness indices in the basal phase differed from those after both treatments starting at the 7th day. The variance of the handedness index for spontaneous and forced activities did not differ before and after either treatment, but the mean index values after GnRH were higher than those observed after its antagonist. These findings suggest that androgens have an activational effect on handedness in adult male C. jacchus.