936 results for mean and variance ratio
Abstract:
In this paper, we compare the performance of two statistical approaches for analyzing data from social research. The first approach uses normal models with joint regression modelling of the mean and of the variance heterogeneity; the second uses hierarchical models. In the first case, individual and social variables enter the regression models for the mean and for the variance as explanatory variables, while in the second case the level-1 variance of the hierarchical model depends on the individuals (their age) and the level-2 variance is assumed to change with socioeconomic stratum. Applying these methodologies, we analyze a Colombian height data set to find differences that can be explained by socioeconomic conditions. We also present some theoretical and empirical results concerning the two models. From this comparative study, we conclude that it is better to jointly model the mean and the variance heterogeneity in all cases. We also observe that the Gibbs sampling chain used in the Markov chain Monte Carlo method for the joint model of the mean and variance heterogeneity converges quickly.
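A minimal sketch may make the first approach concrete: a normal model in which covariates enter both the mean and the log-variance, fitted here by maximum likelihood rather than the paper's Gibbs sampler. The simulated age/stratum covariates and all parameter values are illustrative placeholders, not the Colombian data.

```python
# Sketch: joint mean-variance normal regression, y_i ~ N(x_i'beta, exp(z_i'gamma)).
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(theta, X, Z, y):
    # X models the mean, Z models the log-variance.
    p = X.shape[1]
    beta, gamma = theta[:p], theta[p:]
    mu = X @ beta
    log_var = Z @ gamma
    return 0.5 * np.sum(log_var + (y - mu) ** 2 / np.exp(log_var))

rng = np.random.default_rng(1)
n = 500
age = rng.uniform(5, 18, n)                      # hypothetical individual covariate
stratum = rng.integers(0, 2, n)                  # hypothetical 0/1 socioeconomic stratum
X = np.column_stack([np.ones(n), age])           # design matrix for the mean
Z = np.column_stack([np.ones(n), stratum])       # design matrix for the log-variance
y = X @ np.array([100.0, 4.0]) + np.exp(0.5 * Z @ np.array([1.0, 0.8])) * rng.standard_normal(n)

theta0 = np.zeros(X.shape[1] + Z.shape[1])
fit = minimize(neg_log_lik, theta0, args=(X, Z, y))
print(fit.x)                                     # [beta_hat | gamma_hat]
```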
Abstract:
For many learning tasks, the duration of the data collection can be greater than the time scale on which the underlying data distribution changes. The question we ask is how to include the information that data are aging. Ad hoc methods to achieve this include the use of validity windows that prevent the learning machine from making inferences based on old data, which introduces the problem of how to define the size of the validity windows. In this brief, a new adaptive Bayesian-inspired algorithm is presented for learning drifting concepts. It uses the analogy of validity windows in an adaptive Bayesian way to incorporate changes in the data distribution over time. We apply a theoretical approach based on information geometry to the classification problem and measure its performance in simulations. The uncertainty about the appropriate size of the memory windows is dealt with in a Bayesian manner by integrating over the distribution of the adaptive window size; thus, the posterior distribution of the weights may develop algebraic tails. The learning algorithm results from tracking the mean and variance of this posterior distribution. It was found that the algebraic tails of the posterior give the learning algorithm the ability to cope with an evolving environment by permitting escape from local traps.
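As a rough illustration of tracking the posterior mean and variance of a weight under drift, the sketch below uses a Gaussian assumed-density filter in which variance inflation plays the role of an adaptive validity window. It is a stand-in for, not a reproduction of, the paper's information-geometric algorithm with algebraic tails; all numeric values are assumed.

```python
# Sketch: online tracking of the posterior mean m and variance v of a drifting weight.
import numpy as np

rng = np.random.default_rng(2)
m, v = 0.0, 1.0            # posterior mean and variance of the weight
q, noise = 1e-3, 0.25      # drift inflation and observation noise (assumed values)

w_true = 0.0
for t in range(2000):
    w_true += 0.01 * rng.standard_normal()       # slowly drifting concept
    y = w_true + noise * rng.standard_normal()   # noisy observation
    v += q                                       # widen the window: forget old data
    k = v / (v + noise ** 2)                     # gain: weight on the new observation
    m += k * (y - m)                             # track posterior mean
    v *= (1 - k)                                 # track posterior variance
print(m, w_true)
```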
Abstract:
The aim of this work is to verify whether taking higher-order moments (skewness and kurtosis) into account in the allocation of a carry trade portfolio yields gains over the traditional allocation that considers only the first two moments (mean and variance). The research hypothesis is that carry trade currencies have non-normally distributed returns whose higher-order moments follow a dynamic that can be modelled by a GARCH-family model, in this case IC-GARCHSK. This model consists of one equation for each conditional moment of the independent components, namely the return, the variance, the skewness, and the kurtosis. Another hypothesis is that the utility function of an investor with CARA (constant absolute risk aversion) preferences can be approximated by a 4th-order Taylor expansion. The strategy is to model the dynamics of the moments of the series of natural logarithms of the daily returns of several carry trade currencies with the IC-GARCHSK model, and to estimate the optimal portfolio allocation dynamically so as to maximize the investor's utility. The results show that there are indeed gains from taking higher-order moments into account, since the opportunity cost of this portfolio was lower than that of a portfolio built using only mean and variance as criteria.
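The fourth-order Taylor approximation of expected CARA utility alluded to above has a standard closed form, sketched here; λ (the absolute risk aversion coefficient) and the central moments m₃, m₄ are our notation, not symbols taken from the abstract.

```latex
% Fourth-order Taylor expansion of E[U(W)] for CARA utility U(W) = -e^{-\lambda W},
% expanded around the mean \mu of end-of-period wealth; \sigma^2, m_3, m_4 are the
% second, third and fourth central moments of W.
\mathbb{E}[U(W)] \;\approx\; -e^{-\lambda\mu}\left[
  1 + \frac{\lambda^{2}}{2}\,\sigma^{2}
    - \frac{\lambda^{3}}{6}\,m_{3}
    + \frac{\lambda^{4}}{24}\,m_{4}
\right]
```

Maximizing this expression rewards mean and skewness while penalizing variance and kurtosis, which is how the IC-GARCHSK forecasts of the four conditional moments drive the dynamic allocation.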
Abstract:
To illustrate an application of GARCH-family models to exchange rates, we used statistical techniques encompassing multivariate principal component analysis and time series analysis with modelling of the mean and the variance (volatility), the first and second moments respectively. Principal component analysis helps reduce the dimension of the data, leading to the estimation of fewer models without losing information from the original data set. The use of GARCH models is justified by the presence of heteroscedasticity in the variance of the returns of the exchange rate series. Based on the estimated models, new daily series were simulated via the Monte Carlo (MC) method, and these served as the basis for estimating confidence intervals for future exchange rate scenarios. For the proposed application, we selected the exchange rates with the largest market share according to the BIS study published every three years.
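A compact sketch of this pipeline, under assumed rather than estimated quantities: PCA by SVD to reduce the panel of returns, a GARCH(1,1) recursion, and Monte Carlo paths from which scenario intervals are read off. The data and the (ω, α, β) values are placeholders.

```python
# Sketch: PCA dimension reduction + GARCH(1,1) Monte Carlo scenario intervals.
import numpy as np

rng = np.random.default_rng(3)
returns = rng.standard_normal((1000, 6)) * 0.01      # stand-in for daily FX log-returns

# Dimension reduction: PCA via SVD of the centered return matrix.
Xc = returns - returns.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
components = Xc @ Vt[:2].T                           # scores of the first two components
# In practice a GARCH(1,1) would be fitted to each component series; here we
# simulate a single series forward with assumed parameters.

def simulate_garch_paths(omega, alpha, beta, horizon, n_paths, rng):
    """r_t = sigma_t z_t with sigma_t^2 = omega + alpha r_{t-1}^2 + beta sigma_{t-1}^2."""
    paths = np.zeros((n_paths, horizon))
    var = np.full(n_paths, omega / (1 - alpha - beta))   # start at unconditional variance
    prev = np.zeros(n_paths)
    for t in range(horizon):
        var = omega + alpha * prev ** 2 + beta * var
        prev = np.sqrt(var) * rng.standard_normal(n_paths)
        paths[:, t] = prev
    return paths

paths = simulate_garch_paths(1e-6, 0.05, 0.90, horizon=30, n_paths=5000, rng=rng)
lo, hi = np.percentile(paths.cumsum(axis=1)[:, -1], [2.5, 97.5])
print(components.shape, lo, hi)                      # 95% scenario interval at day 30
```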
Abstract:
Throughout this article, it is assumed that the non-central chi-square chart with two-stage sampling (TSS chi-square chart) is employed to monitor a process where the observations of the quality characteristic of interest X are independent and identically normally distributed with mean μ and variance σ². The process starts with the mean and the variance on target (μ = μ₀; σ² = σ₀²), but at some random time in the future an assignable cause shifts the mean from μ₀ to μ₁ = μ₀ ± δσ₀, δ > 0, and/or increases the variance from σ₀² to σ₁² = γ²σ₀², γ > 1. Before the assignable cause occurs, the process is considered to be in a state of statistical control (the in-control state). As with the Shewhart charts, samples of size n₀ + 1 are taken from the process at regular time intervals. The sampling is performed in two stages. At the first stage, the first item of the i-th sample is inspected. If its X value, say X_{i1}, is close to the target value (|X_{i1} − μ₀| < w₀σ₀, w₀ > 0), the sampling is interrupted. Otherwise, at the second stage, the remaining n₀ items are inspected and the statistic W_i = Σ_{j=2}^{n₀+1} (X_{ij} − μ₀ + ξ_i σ₀)², i = 1, 2, …, is computed. Let d be a positive constant; then ξ_i = d if X_{i1} > μ₀, and ξ_i = −d otherwise. A signal is given at sample i if |X_{i1} − μ₀| > w₀σ₀ and W_i > k_Chi σ₀², where k_Chi is the factor used to determine the upper control limit of the non-central chi-square chart. If devices such as go/no-go gauges can be used, measurements are required only when the sampling goes to the second stage. Let P be the probability of deciding that the process is in control and P_i, i = 1, 2, the probability of deciding that the process is in control at stage i of the sampling procedure. Thus P = P₁ + P₂ − P₁P₂, with P₁ = Pr[μ₀ − w₀σ₀ ≤ X ≤ μ₀ + w₀σ₀] and P₂ = Pr[W ≤ k_Chi σ₀²]. During the in-control period, W/σ₀² follows a non-central chi-square distribution with n₀ degrees of freedom and non-centrality parameter λ₀ = n₀d², i.e. W/σ₀² ~ χ²_{n₀}(λ₀). During the out-of-control period, W/σ₁² follows a non-central chi-square distribution with n₀ degrees of freedom and non-centrality parameter λ₁ = n₀(δ + ξ)²/γ². The effectiveness of a control chart in detecting a process change can be measured by the average run length (ARL), i.e. the speed with which the chart detects process shifts. The ARL of the proposed chart is easily determined because the number of samples before a signal is a geometrically distributed random variable with parameter 1 − P, so that ARL = 1/(1 − P). It is shown that the proposed chart performs better than the joint X̄ and R charts. Furthermore, if the TSS chi-square chart is used for monitoring diameters, volumes, weights, etc., appropriate devices such as go/no-go gauges can be used to decide whether the sampling should go to the second stage. When the process is stable and the joint X̄ and R charts are in use, the monitoring becomes monotonous because an X̄ or R value rarely falls outside the control limits. The natural consequence is that the user pays less and less attention to the steps required to obtain the X̄ and R values; in some cases, this lack of attention can result in serious mistakes. The TSS chi-square chart has the advantage that most samplings are interrupted, so most of the time the user works with attributes. Our experience shows that inspecting one item by attributes is much less monotonous than measuring four or five items at each sampling.
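The run-length calculation above translates directly into code. The following sketch evaluates P₁, P₂ and the ARL with scipy's normal and non-central chi-square distributions, in standardized units (μ₀ = 0, σ₀ = 1); the parameter values are illustrative, and the 50/50 weighting over the sign of ξ is a simplifying assumption, not taken from the paper.

```python
# Sketch: ARL of the TSS chi-square chart, ARL = 1/(1 - P), P = P1 + P2 - P1*P2.
from scipy.stats import norm, ncx2

def arl_tss_chisquare(n0, w0, d, k_chi, delta=0.0, gamma=1.0):
    # Stage 1: first item inside |X - mu0| < w0*sigma0, with X ~ N(delta, gamma^2).
    p1 = norm.cdf((w0 - delta) / gamma) - norm.cdf((-w0 - delta) / gamma)
    # Stage 2: W / sigma1^2 is non-central chi-square with n0 df and
    # nc = n0*(delta + xi)^2 / gamma^2; the chart compares W with k_chi * sigma0^2.
    p2 = 0.0
    for xi in (d, -d):
        nc = n0 * (delta + xi) ** 2 / gamma ** 2
        p2 += 0.5 * ncx2.cdf(k_chi / gamma ** 2, df=n0, nc=nc)
    p = p1 + p2 - p1 * p2
    return 1.0 / (1.0 - p)

print(arl_tss_chisquare(n0=4, w0=1.0, d=1.0, k_chi=20.0))              # in-control ARL
print(arl_tss_chisquare(n0=4, w0=1.0, d=1.0, k_chi=20.0, delta=1.0))   # after a 1-sigma shift
```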
Abstract:
In this paper we propose a new two-parameter lifetime distribution with increasing failure rate. The new distribution arises from a latent complementary risk problem. The properties of the proposed distribution are discussed, including a formal proof of its probability density function and explicit algebraic formulae for its reliability and failure rate functions, quantiles and moments, including the mean and variance. A simple EM-type algorithm for iteratively computing maximum likelihood estimates is presented. The Fisher information matrix is derived analytically in order to obtain the asymptotic covariance matrix. The methodology is illustrated on a real data set.
Abstract:
This work analyzes the risk management of investments by applying statistical concepts and economic and mathematical models to the assets available in the market at a renowned financial institution. Assessing these risks becomes increasingly interesting with a view to minimizing losses and thereby maximizing the chances of gains, both in booming markets and in times of extreme uncertainty, even under sudden changes of scenario. We introduce the concept of investment funds, the classification of fund types (such as managed and equity funds) and their guidelines, and the market for investment funds; the types of assets that compose investment funds and their taxation rules are also presented, this being a market widely used by knowledgeable investors, both individuals and companies, who keep their resources in this modality. Using the historical yields collected for Banco do Brasil investment funds, an inflation adjustment is performed and the mean and variance are calculated to verify the Markowitz efficient-frontier model, a method used for investment analysis. Matlab is used in this analysis to obtain the set (or frontier) of efficient portfolios. Once these data are verified, the Markowitz model is critiqued as a quadratic program, and more coherent risk measures currently under study, such as VaR and CVaR, are applied to minimize the expected error, bringing our work closer to current research. It is found that such studies still have much to explore, since there is much discussion about how to effectively measure the risk of investments, their characteristics and behavior, using time series and volatility.
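The Markowitz step described above can be sketched in a few lines. The thesis uses Matlab; this Python stand-in traces the efficient frontier from simulated returns in place of the Banco do Brasil fund yields, and all numeric values are placeholders.

```python
# Sketch: minimum-variance frontier via quadratic programming (long-only, fully invested).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
R = rng.normal(0.001, 0.01, size=(750, 5))       # stand-in for daily fund returns
mu, Sigma = R.mean(axis=0), np.cov(R.T)          # sample mean vector and covariance matrix

def frontier_point(target):
    """Minimum-variance weights achieving a given target mean return."""
    n = len(mu)
    cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1},
            {'type': 'eq', 'fun': lambda w: w @ mu - target})
    res = minimize(lambda w: w @ Sigma @ w, np.full(n, 1 / n),
                   bounds=[(0, 1)] * n, constraints=cons)
    return res.x

for target in np.linspace(mu.min(), mu.max(), 5):
    w = frontier_point(target)
    print(round(target, 5), round(float(np.sqrt(w @ Sigma @ w)), 5))   # mean vs. risk
```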
Abstract:
Antarctic fur seals (Arctocephalus gazella) in the South Shetland Islands are recovering from 19th-century exploitation more slowly than the main population at South Georgia. To document demographic changes associated with the recovery in the South Shetlands, we monitored fur seal abundance and reproduction in the vicinity of Elephant Island during austral summers from 1986/1987 through 1994/1995. Total births, mean and variance of birth dates, and average daily mortality rates were estimated from daily live pup counts at North Cove (NC) and North Annex (NA) colonies on Seal Island. Sightings of leopard seals (Hydrurga leptonyx) and incidents of leopard seal predation on fur seal pups were recorded opportunistically during daily fur seal research at both sites. High mortality of fur seal pups, attributed to predation by leopard seals frequently observed at NC, caused pup numbers to decline rapidly between January and March (i.e., prior to weaning) each year and probably caused a long-term decline in the size of that colony. The NA colony, where leopard seals were never observed, increased in size during the study. Pup mortality from causes other than leopard seal predation appeared to be similar at the two sites. The number of pups counted at four locations in the Elephant Island vicinity increased slowly, at an annual rate of 3.8%, compared to rates as high as 11% at other locations in the South Shetland Islands. Several lines of circumstantial evidence are consistent with the hypothesis that leopard seal predators limit the growth of the fur seal population in the Elephant Island area and perhaps in the broader population in the South Shetland Islands. The sustained growth of this fur seal population over many decades rules out certain predator–prey models, allowing inference about the interaction between leopard seals and fur seals even though it is less thoroughly studied than predator–prey systems of terrestrial vertebrates of the northern hemisphere. Top-down forces should be included in hypotheses for future research on the factors shaping the recovery of the fur seal population in the South Shetland Islands.
Abstract:
Aim: Estimates of geographic range size derived from natural history museum specimens are probably biased for many species. We aim to determine how bias in these estimates relates to range size. Location: We conducted computer simulations based on herbarium specimen records from localities ranging from the southern United States to northern Argentina. Methods: We used theory on the sampling distribution of the mean and variance to develop working hypotheses about how range size, defined as area of occupancy (AOO), was related to the inter-specific distribution of: (1) mean collection effort per area across the range of a species (MC); (2) variance in collection effort per area across the range of a species (VC); and (3) proportional bias in AOO estimates (PBias: the difference between the expected value of the estimate of AOO and true AOO, divided by true AOO). We tested predictions from these hypotheses using computer simulations based on a dataset of more than 29,000 herbarium specimen records documenting occurrences of 377 plant species in the tribe Bignonieae (Bignoniaceae). Results: The working hypotheses predicted that the mean of the inter-specific distribution of MC, VC and PBias were independent of AOO, but that the respective variance and skewness decreased with increasing AOO. Computer simulations supported all but one prediction: the variance of the inter-specific distribution of VC did not decrease with increasing AOO. Main conclusions: Our results suggest that, despite an invariant mean, the dispersion and symmetry of the inter-specific distribution of PBias decrease as AOO increases. As AOO increased, range size was less severely underestimated for a large proportion of simulated species. However, as AOO increased, range size estimates having extremely low bias were less common.
Abstract:
Monte Carlo simulation was used to evaluate properties of a simple Bayesian MCMC analysis of the random-effects model for single-group Cormack-Jolly-Seber capture-recapture data. The MCMC method is applied to the model via a logit link, so the parameters p, S are on a logit scale, where logit(S) is assumed to have, and is generated from, a normal distribution with mean μ and variance σ². Marginal prior distributions on logit(p) and μ were independent normal with mean zero and standard deviation 1.75 for logit(p) and 100 for μ, hence minimally informative. The marginal prior distribution on σ² was placed on τ² = 1/σ² as a gamma distribution with α = β = 0.001. The study design has 432 points spread over 5 factors: occasions (t), new releases per occasion (u), p, μ, and σ. At each design point 100 independent trials were completed (hence 43,200 trials in total), each with sample size n = 10,000 from the parameter posterior distribution. At 128 of these design points, comparisons are made to previously reported results from a method-of-moments procedure. We looked at properties of point and interval inference on μ and σ based on the posterior mean, median, and mode and the equal-tailed 95% credibility interval. Bayesian inference did very well for the parameter μ, but under the conditions used here, MCMC inference performance for σ was mixed: poor for sparse data (i.e., only 7 occasions) or σ = 0, but good when there were sufficient data and σ was not small.
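The data-generating structure described above is easy to sketch: occasion-specific survival probabilities drawn on the logit scale from N(μ, σ²) and a constant capture probability. The design values below are illustrative choices, not the paper's grid.

```python
# Sketch: simulate single-group CJS capture-recapture data with logit(S_t) ~ N(mu, sigma^2).
import numpy as np

rng = np.random.default_rng(0)
expit = lambda x: 1 / (1 + np.exp(-x))

t, u = 10, 50                    # occasions, new releases per occasion (illustrative)
p, mu, sigma = 0.6, 1.0, 0.5     # capture probability and random-effect hyperparameters
S = expit(rng.normal(mu, sigma, size=t - 1))   # occasion-specific survival probabilities

histories = []
for release_occ in range(t - 1):
    for _ in range(u):
        h = np.zeros(t, dtype=int)
        h[release_occ] = 1                     # marked and released at this occasion
        alive = True
        for occ in range(release_occ + 1, t):
            alive = alive and rng.random() < S[occ - 1]   # survive the interval
            if alive and rng.random() < p:                # detected with probability p
                h[occ] = 1
        histories.append(h)
print(np.array(histories)[:5])   # first few capture histories
```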
Abstract:
Recently, considerable effort has been devoted to the efficient computation of kriging predictors when observations are assimilated sequentially. In particular, kriging update formulae enabling significant computational savings were derived. Taking advantage of the previous kriging mean and variance computations avoids a costly matrix inversion when one observation is added to those already available. In addition to traditional update formulae accounting for a single new observation, Emery (2009) proposed formulae for the batch-sequential case, i.e. when several new observations are assimilated simultaneously. However, the kriging variance and covariance formulae given in Emery (2009) for the batch-sequential case are not correct. In this paper, we fix this issue and establish correct expressions for updated kriging variances and covariances when assimilating observations in parallel. An application to sequential conditional simulation finally shows that coupling the update and residual substitution approaches can enable significant speed-ups.
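For intuition, the single-observation update referred to above can be sketched as follows, in a simple-kriging setting where everything is expressed through previously computed kriging quantities. This is the classical one-point formula, not the corrected batch formulae established in the paper, and the variable names are our own.

```python
# Sketch: one-point kriging update at a batch of prediction points.
import numpy as np

def kriging_update(mean_old, var_old, cov_old_new, mean_new_pred, var_new_pred, y_new):
    """mean_old, var_old   : previous kriging mean/variance at the prediction points
    cov_old_new            : previous kriging covariance between those points and the new site
    mean_new_pred,
    var_new_pred           : previous kriging mean/variance at the new site
    y_new                  : the newly assimilated observation"""
    gain = cov_old_new / var_new_pred                      # weight given to the new residual
    mean_upd = mean_old + gain * (y_new - mean_new_pred)   # updated kriging mean
    var_upd = var_old - cov_old_new ** 2 / var_new_pred    # updated kriging variance (decreases)
    return mean_upd, var_upd

# Toy numbers: two prediction points, one new observation.
m, v = kriging_update(np.array([0.2, -0.1]), np.array([1.0, 0.8]),
                      np.array([0.5, 0.3]), 0.0, 1.2, y_new=0.7)
print(m, v)
```

The batch (parallel) case treated in the paper generalizes this scalar gain to a linear solve against the kriging covariance matrix of the new batch, which is precisely where the formulae of Emery (2009) needed correction.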