974 results for mean-variance efficiency


Relevance: 80.00%

Abstract:

This work analyzes the performance of regularized portfolio optimization models using financial assets from the Brazilian market. In particular, we regularize the portfolios by imposing constraints on the norm of the asset weights, following DeMiguel et al. (2009). We also analyze the performance of portfolios that incorporate information about the structure of groups of assets with similar characteristics, as proposed by Fernandes, Rocha and Souza (2011). While the covariance matrix used in the analyses is estimated from sample data, the expected returns are obtained through reverse optimization of the market equilibrium portfolio proposed by Black and Litterman (1992). The out-of-sample empirical analysis for the period from January 2010 to October 2014 indicates that, in line with previous studies, penalizing the norms of the weights can lead (depending on the norm chosen and the intensity of the constraint) to better performance in terms of Sharpe ratio and mean return relative to portfolios obtained with the traditional Markowitz model. Moreover, including information about asset groups can also improve the computation of optimal portfolios, both relative to traditional methods and relative to the cases that do not use the group structure.
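As an illustration of the technique this abstract describes (not code from the study), the sketch below sets up a norm-constrained minimum-variance portfolio in the spirit of DeMiguel et al. (2009) using cvxpy; the simulated returns and the 1-norm bound are assumptions.

```python
# Sketch of a norm-constrained minimum-variance portfolio (DeMiguel et al., 2009 style).
# Illustrative only: returns are simulated and the 1-norm bound c is arbitrary.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
returns = rng.normal(0.001, 0.02, size=(500, 10))   # hypothetical daily returns, 10 assets
Sigma = np.cov(returns, rowvar=False)               # sample covariance matrix

w = cp.Variable(10)
c = 1.5                                             # bound on the 1-norm of the weights
problem = cp.Problem(
    cp.Minimize(cp.quad_form(w, Sigma)),            # portfolio variance
    [cp.sum(w) == 1, cp.norm(w, 1) <= c],           # full investment + norm constraint
)
problem.solve()
print(np.round(w.value, 3))
```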

Relevance: 80.00%

Abstract:

Portfolio theory is a field of study devoted to investigating how investors allocate resources. The purpose of this process is to reduce risk through diversification and thus secure a return. Nevertheless, the classical mean-variance (MV) model has been criticized with respect to its parameters, since the use of variances and covariances is sensitive to the market and to parameter estimation. To reduce estimation errors, Bayesian models offer more flexibility in modeling and are able to incorporate quantitative and qualitative information about market behaviour. In view of this, the present study formulates a new matrix model that uses Bayesian inference to replace the covariance matrix in the MV model, called MCB (Bayesian covariance model). To evaluate the model, several hypotheses were analyzed using an ex post facto method and sensitivity analysis. The benchmarks used as references were: (1) the classical mean-variance model, (2) the Bovespa market index, and (3) 94 investment funds. The returns earned during the period from May 2002 to December 2009 demonstrated the superiority of the MCB over the classical MV model and the Bovespa index, although it took on slightly more diversifiable risk than the MV model. A robustness analysis over the time horizon found returns close to the Bovespa index while taking less risk than the market. Finally, with respect to Mao's index, the model showed satisfactory return and risk, especially at longer horizons. Some closing considerations are made, along with suggestions for further work.
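The MCB estimator itself is not specified in the abstract; as a hedged stand-in for the general idea of replacing the sample covariance input of the MV model, the sketch below plugs a Ledoit-Wolf shrinkage covariance (not the paper's Bayesian estimator) into closed-form minimum-variance weights.

```python
# Illustrative only: swapping the sample covariance for a shrinkage estimate in the
# closed-form minimum-variance weights w = S^-1 1 / (1' S^-1 1).
# This is NOT the paper's MCB model, just the same "replace the covariance input" idea.
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(1)
returns = rng.normal(0.0005, 0.015, size=(250, 8))   # hypothetical monthly returns

def min_variance_weights(cov):
    ones = np.ones(cov.shape[0])
    x = np.linalg.solve(cov, ones)
    return x / x.sum()

w_sample = min_variance_weights(np.cov(returns, rowvar=False))
w_shrunk = min_variance_weights(LedoitWolf().fit(returns).covariance_)
print(np.round(w_sample, 3))
print(np.round(w_shrunk, 3))
```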

Relevance: 80.00%

Abstract:

The objective of this work was to monitor the removal of ammonia nitrogen in the treatment of wastewater from intensive Nile tilapia production in a water-recirculation system. The system consisted of a conventional settler and a three-phase aerobic fluidized-bed reactor with circulation, operated with hydraulic retention times of 176.4 and 11.9 minutes, respectively. The support medium used in the reactor was granular activated carbon with an apparent density of 1.64 g/cm³ and an effective size of 0.34 mm; the concentration of the support medium in the reactor was kept constant at 80 g/L. The mean removal efficiency of total ammonia nitrogen was 41.2%. The evaluated system is an effective alternative for water reuse in recirculating aquaculture systems. Despite the variability of the influent ammonia nitrogen concentrations, whose mean value was 0.136 mg/L, the reactor effluent maintained stable water-quality characteristics, with mean concentrations of 0.079 mg/L of ammonia nitrogen and 6.70 mg/L of dissolved oxygen, suitable for fish rearing and within the ranges allowed by Brazilian legislation (CONAMA Resolution No. 357 of March 5, 2005) for the discharge of final effluents into receiving water bodies.
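For reference, removal efficiency is the relative drop from influent to effluent concentration; applying that formula to the reported mean concentrations gives a rough check (the paper's 41.2% is a mean of per-sample efficiencies, so the point estimate below differs slightly).

```python
# Removal efficiency from mean influent/effluent concentrations (illustrative check only).
influent_mg_l = 0.136   # mean total ammonia nitrogen in the influent
effluent_mg_l = 0.079   # mean total ammonia nitrogen in the reactor effluent

efficiency = (influent_mg_l - effluent_mg_l) / influent_mg_l
print(f"removal efficiency ≈ {efficiency:.1%}")      # ≈ 41.9%, close to the reported mean of 41.2%
```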

Relevance: 80.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 80.00%

Abstract:

In this paper, the optimal reactive power planning problem under risk is presented. The classical mixed-integer nonlinear model for reactive power planning is expanded into a two-stage stochastic model that considers risk. This new model accounts for uncertainty in the demand load. The risk is quantified by a factor introduced into the objective function and is identified with the variance of the random variables. Finally, numerical results illustrate the performance of the proposed model, which is applied to the IEEE 30-bus test system to determine the optimal amount and location of reactive power expansion.
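A minimal sketch of the risk-weighted objective the abstract describes, with the variance of the scenario costs scaled by a risk factor; the scenario data are invented and the actual mixed-integer network model over the IEEE 30-bus system is not reproduced.

```python
# Sketch of a risk-weighted two-stage objective over demand scenarios:
#   minimize  first_stage_cost + E[recourse_cost] + beta * Var[recourse_cost]
# Scenario costs are placeholders; the paper's model is a full MINLP over a network.
import numpy as np

scenario_probs = np.array([0.3, 0.5, 0.2])          # hypothetical demand scenarios
recourse_costs = np.array([120.0, 150.0, 210.0])    # second-stage operating cost per scenario
first_stage_cost = 80.0                             # investment cost of reactive power sources
beta = 0.05                                         # risk-aversion factor

expected = scenario_probs @ recourse_costs
variance = scenario_probs @ (recourse_costs - expected) ** 2
objective = first_stage_cost + expected + beta * variance
print(expected, variance, objective)
```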

Relevance: 80.00%

Abstract:

We propose a novel class of models for functional data exhibiting skewness or other shape characteristics that vary with spatial or temporal location. We use copulas so that the marginal distributions and the dependence structure can be modeled independently. Dependence is modeled with a Gaussian or t-copula, so that there is an underlying latent Gaussian process. We model the marginal distributions using the skew t family. The mean, variance, and shape parameters are modeled nonparametrically as functions of location. A computationally tractable inferential framework for estimating heterogeneous asymmetric or heavy-tailed marginal distributions is introduced. This framework provides a new set of tools for increasingly complex data collected in medical and public health studies. Our methods were motivated by and are illustrated with a state-of-the-art study of neuronal tracts in multiple sclerosis patients and healthy controls. Using the tools we have developed, we were able to find those locations along the tract most affected by the disease. However, our methods are general and highly relevant to many functional data sets. In addition to the application to one-dimensional tract profiles illustrated here, higher-dimensional extensions of the methodology could have direct applications to other biological data including functional and structural MRI.
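A hedged sketch of the data-generating idea: a latent Gaussian copula supplies the dependence across locations while skewed marginals vary with location. scipy's skew-normal is used below as a simple stand-in for the paper's skew-t family, and all functions of location are invented.

```python
# Sketch of copula-based functional data: a latent Gaussian process gives the dependence
# across locations, and skewed margins vary smoothly with location (illustrative only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_subjects, n_locations = 50, 100
t_grid = np.linspace(0, 1, n_locations)

# Dependence: Gaussian copula from an exponential correlation kernel.
R = np.exp(-np.abs(t_grid[:, None] - t_grid[None, :]) / 0.1)
Z = rng.multivariate_normal(np.zeros(n_locations), R, size=n_subjects)
U = stats.norm.cdf(Z)                                 # copula uniforms

# Margins: mean, scale and skewness vary with location (assumed functions).
mu = np.sin(2 * np.pi * t_grid)
sigma = 0.5 + 0.3 * t_grid
alpha = 4.0 * np.cos(np.pi * t_grid)                  # location-varying skewness
X = stats.skewnorm.ppf(U, a=alpha, loc=mu, scale=sigma)
print(X.shape)                                        # (50, 100) simulated tract profiles
```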

Relevance: 80.00%

Abstract:

In this paper, we present statistical analyses of several types of traffic sources in a 3G network, namely voice, video and data sources. For each traffic source type, measurements were collected in order to, on the one hand, gain a better understanding of the statistical characteristics of the sources and, on the other hand, enable forecasting of traffic behaviour in the network. The latter can be used to estimate service times and quality-of-service parameters. The probability density function, mean, variance, mean square deviation, skewness and kurtosis of the interarrival times are estimated with the Wolfram Mathematica and Crystal Ball statistical tools. Based on the evaluation of packet interarrival times, we show how the gamma distribution can be used in network simulations and in the evaluation of available capacity in opportunistic systems. As a result of our analyses, shape and scale parameters of the gamma distribution are generated. The data can also be applied in dynamic network configuration in order to avoid potential network congestion or overflows. Copyright © 2013 John Wiley & Sons, Ltd.
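A small sketch of the gamma-fitting step using scipy instead of the Mathematica/Crystal Ball tools mentioned in the abstract; the interarrival times below are simulated, not measured traffic.

```python
# Sketch: fit a gamma distribution to packet interarrival times and report the moments
# discussed in the abstract. Interarrival times are simulated for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
interarrival = rng.gamma(shape=0.8, scale=12.0, size=5000)   # hypothetical times in ms

shape, loc, scale = stats.gamma.fit(interarrival, floc=0)    # fix the location at 0
print(f"shape={shape:.3f}, scale={scale:.3f}")

moments = {
    "mean": interarrival.mean(),
    "variance": interarrival.var(ddof=1),
    "std": interarrival.std(ddof=1),
    "skewness": stats.skew(interarrival),
    "kurtosis": stats.kurtosis(interarrival),
}
print(moments)
```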

Relevance: 80.00%

Abstract:

The range of novel psychoactive substances (NPS), including phenethylamines, cathinones, piperazines, tryptamines, etc., is continuously growing. Therefore, fast and reliable screening methods for these compounds are essential. The use of dried blood spots (DBS) for a fast, straightforward approach helps to simplify and shorten sample preparation significantly. DBS were produced from 10 µl of whole blood and extracted offline with 500 µl methanol, followed by evaporation and reconstitution in mobile phase. Reversed-phase chromatographic separation and mass spectrometric detection (RP-LC-MS/MS) were achieved within a run time of 10 min. The screening method was validated by evaluating the following parameters: limit of detection (LOD), matrix effect, selectivity and specificity, extraction efficiency, and short-term and long-term stability. Furthermore, the method was applied to authentic samples and the results were compared with those obtained with a validated whole blood method used for routine analysis of NPS. The LOD was between 1 and 10 ng/ml. No interference from matrix compounds was observed. The method was proven to be specific and selective for the analytes, although with limitations for 3-FMC/flephedrone and MDDMA/MDEA. Mean extraction efficiency was 84.6%. All substances were stable in DBS for at least a week when cooled; cooling was essential for the stability of cathinones. Prepared samples were stable for at least 3 days. Comparison with the validated whole blood method yielded similar results. DBS were shown to be useful in developing a rapid screening method for NPS with simplified sample preparation. Copyright © 2013 John Wiley & Sons, Ltd.

Relevance: 80.00%

Abstract:

The purpose of this study is to investigate the effects of predictor-variable correlations and patterns of missingness with dichotomous and/or continuous data in small samples when missing data are multiply imputed. Missing data in the predictor variables are multiply imputed under three different multivariate models: the multivariate normal model for continuous data, the multinomial model for dichotomous data, and the general location model for mixed dichotomous and continuous data. Following the multiple imputation process, Type I error rates of the regression coefficients obtained with logistic regression analysis are estimated under various conditions of correlation structure, sample size, type of data, and pattern of missing data. The distributional properties of the average mean, variance, and correlations among the predictor variables are assessed after the multiple imputation process. For continuous predictor data under the multivariate normal model, Type I error rates are generally within the nominal values with samples of size n = 100. Smaller samples of size n = 50 resulted in more conservative estimates (i.e., lower than the nominal value). Correlation and variance estimates of the original data are retained after multiple imputation with less than 50% missing continuous predictor data. For dichotomous predictor data under the multinomial model, Type I error rates are generally conservative, which is due in part to the sparseness of the data. The correlation structure of the predictor variables is not well retained in multiply imputed data from small samples with more than 50% missing data under this model. For mixed continuous and dichotomous predictor data, the results are similar to those found under the multivariate normal model for continuous data and under the multinomial model for dichotomous data. With all data types, a fully observed variable included alongside the variables subject to missingness in the multiple imputation process and the subsequent statistical analysis produced liberal (larger than nominal) Type I error rates under a specific pattern of missing data. It is suggested that future studies focus on the effects of multiple imputation in multivariate settings with more realistic data characteristics and a variety of multivariate analyses, assessing both Type I error and power.
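A compressed sketch of the simulation logic (null predictor, imposed missingness, multiple imputation, logistic regression); scikit-learn's chained-equations imputer is used here as a stand-in for the multivariate normal, multinomial and general location imputation models of the study.

```python
# Sketch: impose missingness on a predictor, multiply impute, fit a logistic regression on
# each completed data set, and check how often the null coefficient is rejected. A full
# study would pool via Rubin's rules over many replications; this shows one replication.
import numpy as np
import statsmodels.api as sm
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(4)
n, m_imputations = 100, 5
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = rng.binomial(1, 0.5, size=n)          # y independent of x1, x2 -> true coefficients are 0

X = np.column_stack([x1, x2])
X[rng.random(n) < 0.3, 0] = np.nan        # 30% missing values in x1 (MCAR)

pvalues = []
for seed in range(m_imputations):
    X_imp = IterativeImputer(random_state=seed).fit_transform(X)
    fit = sm.Logit(y, sm.add_constant(X_imp)).fit(disp=0)
    pvalues.append(fit.pvalues[1])        # p-value for the coefficient of x1

print("rejections at alpha=0.05:", sum(p < 0.05 for p in pvalues), "of", m_imputations)
```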

Relevance: 80.00%

Abstract:

The study of the reliability of components and systems is of great importance in several fields of engineering, and very particularly in computing. When analysing the lifetimes of the elements in a sample, one must take into account the elements that do not fail during the experiment, as well as those that fail for causes other than the one under study. This gives rise to new sampling schemes that cover such cases; the most general of them, censored sampling, is the one considered in this work. In this scheme both the time until the component fails and the censoring time are random variables. Under the hypothesis that both times are exponentially distributed, Hurt studied the asymptotic behaviour of the maximum likelihood estimator of the reliability function. In principle it seems appealing to use Bayesian methods in reliability studies because they incorporate into the analysis the prior information normally available in real problems. We therefore consider two Bayesian estimators of the reliability of an exponential distribution, namely the mean and the mode of the posterior distribution. We compute the asymptotic expansion of the mean, variance and mean squared error of both estimators when the censoring distribution is exponential, and we also obtain the asymptotic distribution of the estimators in the more general case of a Weibull censoring distribution. Two types of large-sample confidence intervals are proposed for each estimator. The results are compared with those of the maximum likelihood estimator and with those of two nonparametric estimators, the product-limit and a Bayesian one, and one of our estimators shows superior behaviour. Finally, we verify by simulation that our estimators are robust against the assumed censoring distribution and that one of the proposed confidence intervals is valid for small samples; this study also confirms the better behaviour of one of our estimators.

SETTING OUT AND SUMMARY OF THE THESIS: When we study the lifetime of components it is necessary to take into account the elements that do not fail during the experiment, or those that fail for reasons which it is desirable to exclude from consideration. The model of random censorship is very useful for analysing these data. In this model the time to failure and the censoring time are random variables. We obtain two Bayes estimators of the reliability function of an exponential distribution based on randomly censored data. We have calculated the asymptotic expansion of the mean, variance and mean squared error of both estimators when the censoring distribution is exponential. We have also obtained the asymptotic distribution of the estimators for the more general case of a Weibull censoring distribution. Two large-sample confidence bands have been proposed for each estimator. The results have been compared with those of the maximum likelihood estimator and with those of two nonparametric estimators: product-limit and Bayesian. One of our estimators has the best behaviour. Finally, we have shown by simulation that our estimators are robust against the assumed censoring distribution, and that one of our intervals does well in the small-sample situation.
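A minimal numerical sketch of Bayes estimation of exponential reliability under random censoring, using a generic conjugate gamma prior on the failure rate (an assumption; the thesis's prior, the posterior mode estimator and the asymptotic expansions are not reproduced).

```python
# Sketch: Bayes estimation of exponential reliability R(t) = exp(-lambda * t) from randomly
# censored data, under a conjugate Gamma(a, b) prior on lambda. Generic illustration only.
import numpy as np

rng = np.random.default_rng(5)
n, true_rate, censor_rate = 200, 0.5, 0.3
lifetimes = rng.exponential(1 / true_rate, n)
censor_times = rng.exponential(1 / censor_rate, n)
observed = np.minimum(lifetimes, censor_times)       # observed time (failure or censoring)
failed = lifetimes <= censor_times                   # True where a failure was observed

a, b = 1.0, 1.0                                      # prior hyperparameters (assumed)
d, T = failed.sum(), observed.sum()                  # number of failures, total time at risk
# Posterior: lambda | data ~ Gamma(a + d, rate = b + T)

t = 2.0
post_mean_R = (1 + t / (b + T)) ** (-(a + d))        # E[exp(-lambda * t) | data]
mle_R = np.exp(-(d / T) * t)                         # plug-in MLE for comparison
print(post_mean_R, mle_R, np.exp(-true_rate * t))    # Bayes, MLE, and true reliability at t
```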

Relevance: 80.00%

Abstract:

In this work, we derive an optimal selection policy based on mean-variance analysis of the Multi-period Tracking Error (ERM), where the tracking error is defined as the difference between the capital accumulated by the chosen portfolio and that accumulated by a benchmark portfolio. We apply the methodology of Li and Ng [24] to obtain the analytical solution, thereby generalizing the single-period case introduced by Roll [38]. We then select a portfolio from the Brazilian stock market based on the correlation factor, adopting the São Paulo stock exchange index (IBOVESPA) as the benchmark and the SELIC base interest rate as the fixed-income asset. Two cases are considered: a portfolio composed only of risky assets (case I), and a portfolio with a risk-free asset indexed to the SELIC together with the assets of case I (case II).
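As a hedged illustration, the single-period (Roll-type) tracking-error problem that the thesis generalizes can be written as an equality-constrained quadratic program with a closed-form solution; the data and target excess return below are assumptions, and the Li-Ng multi-period recursion is not reproduced.

```python
# Sketch of the single-period tracking-error problem: choose active weights x that minimize
# tracking-error variance x' Sigma x subject to sum(x) = 0 and mu' x = G (target excess return).
import numpy as np

rng = np.random.default_rng(6)
returns = rng.normal(0.001, 0.02, size=(500, 6))     # hypothetical asset returns
Sigma = np.cov(returns, rowvar=False)
mu = returns.mean(axis=0)
G = 0.0005                                           # target expected return over the benchmark

A = np.vstack([np.ones(len(mu)), mu])                # constraints: 1'x = 0, mu'x = G
b = np.array([0.0, G])
Sinv_At = np.linalg.solve(Sigma, A.T)
x = Sinv_At @ np.linalg.solve(A @ Sinv_At, b)        # closed-form equality-constrained QP
print(np.round(x, 4), float(x @ Sigma @ x))          # active weights and tracking-error variance
```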

Relevance: 80.00%

Abstract:

The length of the vacation trip is a tourist decision with fundamental implications for tourism organizations, yet it has received little attention in the literature. Moreover, the few existing studies have focused on coastal destinations, even though inland tourism is emerging as an important alternative in some countries. This paper analyzes the determinants of the choice of trip length, distinguishing between the types of destination chosen (coastal and inland) and proposing several hypotheses about the influence of destination-related individual characteristics, personal constraints, and sociodemographic characteristics. As a novelty for this type of decision, the methodology estimates a truncated negative binomial model, which avoids the estimation biases of regression models and the restrictive mean-variance equality assumption of the Poisson model. The empirical application, carried out in Spain on a sample of 1,600 individuals, leads to two conclusions. First, the negative binomial model is more appropriate than the Poisson model for this type of analysis. Second, the determinants of trip length common to both destination types are hotel accommodation and staying in one's own apartment, time constraints, the tourist's age, and the way the trip is organized; the size of the city of residence and the attribute of "low prices" are distinctive to coastal destinations, while accommodation in rented apartments is distinctive to inland destinations.
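A sketch of a zero-truncated negative binomial fitted by maximum likelihood, which relaxes the Poisson mean-variance equality; the covariates and parameter values are simulated and do not reflect the paper's specification or estimates.

```python
# Sketch of a zero-truncated negative binomial model for trip length (nights >= 1),
# fitted by maximum likelihood with scipy. All data below are simulated.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(7)
n_obs = 1000
age = rng.normal(40, 12, n_obs)
coast = rng.binomial(1, 0.5, n_obs)
X = np.column_stack([np.ones(n_obs), (age - 40) / 10, coast])

beta_true, alpha_true = np.array([1.5, 0.1, 0.3]), 0.5
mu = np.exp(X @ beta_true)
r = 1 / alpha_true
y = stats.nbinom.rvs(r, r / (r + mu), random_state=rng)
keep = y >= 1                                        # the observed sample is truncated at zero
X, y = X[keep], y[keep]

def negloglik(params):
    beta, log_alpha = params[:-1], params[-1]
    mu = np.exp(X @ beta)
    r = np.exp(-log_alpha)                           # 1 / alpha
    p = r / (r + mu)
    logpmf = stats.nbinom.logpmf(y, r, p)
    log_p0 = r * np.log(p)                           # log P(Y = 0)
    return -np.sum(logpmf - np.log1p(-np.exp(log_p0)))

res = optimize.minimize(negloglik, x0=np.zeros(X.shape[1] + 1), method="BFGS")
print(np.round(res.x[:-1], 3), "alpha =", round(float(np.exp(res.x[-1])), 3))
```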

Relevance: 80.00%

Abstract:

The objective of this work is to approach the asset allocation (portfolio analysis) problem from a Bayesian perspective. To this end, it was necessary to review the theoretical analysis of the classical mean-variance model and then identify the shortcomings that compromise its effectiveness in real-world cases. Curiously, its greatest weakness is related not to the model itself but to its inputs, in particular the expected returns estimated from historical data. To overcome this weakness, the Bayesian approach (the Black-Litterman model) treats the expected return as a random variable, constructs a prior distribution (based on the CAPM) and a likelihood (based on the investor's market views), and finally applies Bayes' theorem to obtain the posterior distribution. The new expected return that emerges from the posterior distribution replaces the previous estimate computed from historical data. The results show that the Bayesian model produces conservative and intuitive outcomes relative to the classical mean-variance model.
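A minimal sketch of the standard Black-Litterman steps the abstract describes (reverse optimization of equilibrium returns, then a Bayesian update with one investor view); all numerical inputs are illustrative assumptions.

```python
# Sketch of the standard Black-Litterman computation: reverse-optimize equilibrium returns
# from market weights, then blend them with an investor view via Bayes' rule.
import numpy as np

Sigma = np.array([[0.040, 0.012, 0.010],
                  [0.012, 0.030, 0.008],
                  [0.010, 0.008, 0.020]])            # assumed covariance of 3 assets
w_mkt = np.array([0.5, 0.3, 0.2])                    # market-capitalization weights
delta, tau = 2.5, 0.05                               # risk aversion and prior scaling

pi = delta * Sigma @ w_mkt                           # implied equilibrium returns (prior mean)

P = np.array([[1.0, -1.0, 0.0]])                     # one view: asset 1 outperforms asset 2
q = np.array([0.02])                                 # ... by 2%
Omega = P @ (tau * Sigma) @ P.T                      # view uncertainty (a common default choice)

A = np.linalg.inv(tau * Sigma)
B = P.T @ np.linalg.inv(Omega) @ P
mu_post = np.linalg.solve(A + B, A @ pi + P.T @ np.linalg.inv(Omega) @ q)
print(np.round(pi, 4), np.round(mu_post, 4))         # equilibrium vs posterior expected returns
```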

Relevance: 80.00%

Abstract:

In this article we investigate the asymptotic and finite-sample properties of predictors of regression models with autocorrelated errors. We prove new theorems associated with the predictive efficiency of generalized least squares (GLS) and incorrectly structured GLS predictors. We also establish the form associated with their predictive mean squared errors as well as the magnitude of these errors relative to each other and to those generated from the ordinary least squares (OLS) predictor. A large simulation study is used to evaluate the finite-sample performance of forecasts generated from models using different corrections for the serial correlation.
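A small simulation sketch of the comparison the abstract studies: one-step-ahead forecasts from OLS versus a feasible GLS estimator for AR(1) errors (statsmodels GLSAR); the data-generating process and sample sizes are assumptions, not those of the article.

```python
# Sketch: compare one-step-ahead forecast MSE of OLS and feasible GLS under AR(1) errors.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n_rep, n, rho, beta = 500, 100, 0.7, np.array([1.0, 2.0])
mse_ols, mse_gls = [], []

for _ in range(n_rep):
    x = rng.normal(size=n + 1)
    X = sm.add_constant(x)
    e = rng.normal(size=n + 1)
    u = np.zeros(n + 1)
    for t in range(1, n + 1):
        u[t] = rho * u[t - 1] + e[t]                 # AR(1) errors
    y = X @ beta + u

    X_in, y_in, x_new, y_new = X[:n], y[:n], X[n], y[n]

    ols = sm.OLS(y_in, X_in).fit()
    f_ols = x_new @ ols.params                       # OLS ignores the error autocorrelation

    glsar = sm.GLSAR(y_in, X_in, rho=1)              # feasible GLS with AR(1) errors
    res = glsar.iterative_fit(maxiter=10)
    last_resid = y_in[-1] - X_in[-1] @ res.params
    f_gls = x_new @ res.params + glsar.rho[0] * last_resid   # exploit the AR(1) structure

    mse_ols.append((y_new - f_ols) ** 2)
    mse_gls.append((y_new - f_gls) ** 2)

print(np.mean(mse_ols), np.mean(mse_gls))            # the GLS-based forecast should do better
```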

Relevance: 80.00%

Abstract:

In this paper, we study the performance of smallholders in a nucleus estate and smallholder (NES) oil palm production scheme in West Sumatra by measuring their technical efficiency using a stochastic frontier production function. Our results indicate a mean technical efficiency of 66%, which is below what we would have expected given the uniformity of the climate, soils and plantation construction among the sample farmers. The use of progressive farmers as a means of disseminating extension advice does not appear to have been successful, and more rigorous farmer selection procedures need to be put in place for similar schemes and for general agricultural extension in future. No clear relationship was established between technical efficiency and the use of female labour, suggesting there is no need to target extension services specifically at female labourers in the household. Finally, education was found to have an unexpectedly negative impact on technical efficiency, indicating that farmers with primary education may be more important than those with secondary and tertiary education as targets of development schemes and extension programs entailing non-formal education. (C) 2003 Elsevier Ltd. All rights reserved.
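A hedged sketch of a half-normal stochastic frontier estimated by maximum likelihood, with JLMS-style technical-efficiency scores, on simulated data; it illustrates the method class, not the paper's specification or results.

```python
# Sketch: Cobb-Douglas stochastic frontier with half-normal inefficiency, estimated by ML,
# plus JLMS-style technical-efficiency scores TE_i = exp(-E[u_i | eps_i]). Simulated data.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(9)
n = 200
land, labour = rng.lognormal(0, 0.4, n), rng.lognormal(0, 0.4, n)
X = np.column_stack([np.ones(n), np.log(land), np.log(labour)])
beta_true, sigma_v, sigma_u = np.array([1.0, 0.4, 0.5]), 0.2, 0.4
v = rng.normal(0, sigma_v, n)
u = np.abs(rng.normal(0, sigma_u, n))                # half-normal inefficiency
log_y = X @ beta_true + v - u

def negloglik(params):
    beta, log_sv, log_su = params[:3], params[3], params[4]
    sv, su = np.exp(log_sv), np.exp(log_su)
    sigma, lam = np.hypot(sv, su), su / sv
    eps = log_y - X @ beta
    ll = (np.log(2) - np.log(sigma)
          + stats.norm.logpdf(eps / sigma)
          + stats.norm.logcdf(-eps * lam / sigma))
    return -ll.sum()

res = optimize.minimize(negloglik, x0=np.array([0.0, 0.0, 0.0, -1.0, -1.0]), method="BFGS")
beta_hat, sv, su = res.x[:3], np.exp(res.x[3]), np.exp(res.x[4])

eps = log_y - X @ beta_hat
sigma2 = sv**2 + su**2
mu_star = -eps * su**2 / sigma2
s_star = np.sqrt(sv**2 * su**2 / sigma2)
E_u = mu_star + s_star * stats.norm.pdf(mu_star / s_star) / stats.norm.cdf(mu_star / s_star)
te = np.exp(-E_u)
print(np.round(beta_hat, 3), round(float(te.mean()), 3))   # frontier coefficients, mean TE
```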