901 results for residual variance classes
Resumo:
In this paper we propose exact likelihood-based mean-variance efficiency tests of the market portfolio in the context of the Capital Asset Pricing Model (CAPM), allowing for a wide class of error distributions which includes normality as a special case. These tests are developed in the framework of multivariate linear regressions (MLR). It is well known, however, that despite their simple statistical structure, standard asymptotically justified MLR-based tests are unreliable. In financial econometrics, exact tests have been proposed for a few specific hypotheses [Jobson and Korkie (Journal of Financial Economics, 1982), MacKinlay (Journal of Financial Economics, 1987), Gibbons, Ross and Shanken (Econometrica, 1989), Zhou (Journal of Finance, 1993)], most of which depend on normality. For the Gaussian model, our tests correspond to Gibbons, Ross and Shanken's mean-variance efficiency tests. In non-Gaussian contexts, we reconsider mean-variance efficiency tests allowing for multivariate Student-t and Gaussian mixture errors. Our framework allows us to shed more light on whether the normality assumption is too restrictive when testing the CAPM. We also propose exact multivariate diagnostic checks (including tests for multivariate GARCH and a multivariate generalization of the well-known variance ratio tests) and goodness-of-fit tests, as well as a set estimate for the intervening nuisance parameters. Our results [over five-year subperiods] show the following: (i) multivariate normality is rejected in most subperiods, (ii) residual checks reveal no significant departures from the multivariate i.i.d. assumption, and (iii) mean-variance efficiency of the market portfolio is not rejected as frequently once the possibility of non-normal errors is allowed for.
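Under normal errors, the mean-variance efficiency test discussed above reduces to the Gibbons, Ross and Shanken (GRS) F statistic. A minimal sketch for the single-factor (CAPM) case, assuming excess returns are already computed; the function name and the simulated inputs below are illustrative, not from the paper:

```python
import numpy as np
from scipy import stats

def grs_test(excess_returns, factor):
    """Gibbons-Ross-Shanken (1989) mean-variance efficiency test, single factor.

    excess_returns: (T, N) array of test-asset excess returns.
    factor: (T,) array of market excess returns.
    Returns the F statistic and its p-value under F(N, T - N - 1).
    """
    T, N = excess_returns.shape
    X = np.column_stack([np.ones(T), factor])          # intercept + market factor
    B, *_ = np.linalg.lstsq(X, excess_returns, rcond=None)
    alpha = B[0]                                       # intercepts (pricing errors)
    resid = excess_returns - X @ B
    Sigma = resid.T @ resid / (T - 2)                  # residual covariance estimate
    mu_f, var_f = factor.mean(), factor.var(ddof=1)
    quad = alpha @ np.linalg.solve(Sigma, alpha)       # alpha' Sigma^{-1} alpha
    F = (T - N - 1) / N * quad / (1 + mu_f**2 / var_f)
    return F, 1 - stats.f.cdf(F, N, T - N - 1)
```

A joint test of all N intercepts being zero, exact under Gaussian errors; the non-Gaussian extensions in the paper relax exactly this distributional assumption.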
Resumo:
The variogram is essential for local estimation and mapping of any variable by kriging. The variogram itself must usually be estimated from sample data. The sampling density is a compromise between precision and cost, but it must be sufficiently dense to encompass the principal spatial sources of variance. A nested, multi-stage sampling scheme with separating distances increasing in geometric progression from stage to stage will do that. The data may then be analyzed by a hierarchical analysis of variance to estimate the components of variance for every stage, and hence for every lag. By accumulating the components starting from the shortest lag one obtains a rough variogram for modest effort. For balanced designs the analysis of variance is optimal; for unbalanced ones, however, these estimators are not necessarily the best, and analysis by residual maximum likelihood (REML) will usually be preferable. The paper summarizes the underlying theory and illustrates its application with data from three surveys, one in which the design had four stages and was balanced, and two implemented with unbalanced designs to economize when there were more stages. A Fortran program is available for the analysis of variance, and code for the REML analysis is listed in the paper. (c) 2005 Elsevier Ltd. All rights reserved.
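The accumulation step described above is mechanically simple: order the estimated variance components from the finest stage upward and take partial sums. A sketch under hypothetical component values (this is not the paper's Fortran or REML code):

```python
import numpy as np

def rough_variogram(components, lags):
    """Accumulate per-stage variance components into a rough variogram.

    components: estimated variance component for each sampling stage,
        ordered from the longest separating distance (stage 1) down to
        the shortest (finest stage).
    lags: matching separating distances, same order.
    Returns (lags_ascending, gamma): gamma at a given lag is the sum of
    the components at that stage and all finer stages.
    """
    comp = np.asarray(components, float)[::-1]   # finest stage first
    lag = np.asarray(lags, float)[::-1]
    gamma = np.cumsum(comp)                      # accumulate from shortest lag
    return lag, gamma
```

With components (2, 1, 0.5, 0.25) at lags (90, 30, 10, 3) m, the rough variogram rises from 0.25 at 3 m to the total variance 3.75 at 90 m.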
Resumo:
An unbalanced nested sampling design was used to investigate the spatial scale of soil and herbicide interactions at the field scale. A hierarchical analysis of variance based on residual maximum likelihood (REML) was used to analyse the data and provide a first estimate of the variogram. Soil samples were taken at 108 locations at a range of separating distances in a 9 ha field to explore small- and medium-scale spatial variation. Soil organic matter content, pH, particle size distribution, microbial biomass and the degradation and sorption of the herbicide, isoproturon, were determined for each soil sample. A large proportion of the spatial variation in isoproturon degradation and sorption occurred at sampling intervals less than 60 m; however, the sampling design did not resolve the variation present at scales greater than this. A sampling interval of 20-25 m should ensure that the main spatial structures are identified for isoproturon degradation rate and sorption without too great a loss of information in this field.
Resumo:
The performance of an adaptive filter may be studied through the behaviour of the optimal and adaptive coefficients in a given environment. This thesis investigates the performance of finite impulse response adaptive lattice filters for two classes of input signals: (a) frequency modulated signals with polynomial phases of order p in complex Gaussian white noise (as nonstationary signals), and (b) impulsive autoregressive processes with alpha-stable distributions (as non-Gaussian signals). Initially, an overview is given of linear prediction and adaptive filtering. The convergence and tracking properties of stochastic gradient algorithms are discussed for stationary and nonstationary input signals. It is explained that the stochastic gradient lattice algorithm has many advantages over the least-mean square algorithm. These advantages include a modular structure, easily guaranteed stability, lower sensitivity to the eigenvalue spread of the input autocorrelation matrix, and easy quantization of filter coefficients (normally called reflection coefficients). We then characterize the performance of the stochastic gradient lattice algorithm for frequency modulated signals through the optimal and adaptive lattice reflection coefficients. This is a difficult task due to the nonlinear dependence of the adaptive reflection coefficients on the preceding stages and the input signal. To ease the derivations, we assume that the reflection coefficients of each stage are independent of the inputs to that stage. The optimal lattice filter is then derived for frequency modulated signals, by computing the optimal values of the residual errors, reflection coefficients, and recovery errors. Next, we show the tracking behaviour of the adaptive reflection coefficients for frequency modulated signals, by computing an averaged tracking model of these coefficients for the stochastic gradient lattice algorithm.
The second-order convergence of the adaptive coefficients is investigated by modeling the theoretical asymptotic variance of the gradient noise at each stage. The accuracy of the analytical results is verified by computer simulations. Using these analytical results, we show a new property, the polynomial-order-reducing property of adaptive lattice filters. This property may be used to reduce the order of the polynomial phase of input frequency modulated signals. Considering two examples, we show how this property may be used in processing frequency modulated signals. In the first example, a detection procedure is carried out on a frequency modulated signal with a second-order polynomial phase in complex Gaussian white noise. We show that, using this technique, a better probability of detection is obtained for the reduced-order phase signals than with the traditional energy detector. It is also empirically shown that the distribution of the gradient noise in the first adaptive reflection coefficients approximates the Gaussian law. In the second example, the instantaneous frequency of the same observed signal is estimated. We show that this technique achieves a lower mean square error for the estimated frequencies at high signal-to-noise ratios than the adaptive line enhancer. The performance of adaptive lattice filters is then investigated for the second type of input signal, i.e., impulsive autoregressive processes with alpha-stable distributions. The concept of alpha-stable distributions is first introduced. We discuss how the stochastic gradient algorithm, which performs well for finite-variance input signals (like frequency modulated signals in noise), does not achieve fast convergence for infinite-variance stable processes (because it uses the minimum mean-square error criterion).
To deal with such problems, the minimum dispersion criterion, fractional lower order moments, and recently developed algorithms for stable processes are introduced. We then study the possibility of using the lattice structure for impulsive stable processes. Accordingly, two new algorithms, the least-mean p-norm lattice algorithm and its normalized version, are proposed for lattice filters based on fractional lower order moments. Simulation results show that the proposed algorithms achieve faster convergence in parameter estimation of autoregressive stable processes with low to moderate degrees of impulsiveness than many other algorithms. We also discuss how the impulsiveness of stable processes generates some misalignment between the estimated parameters and the true values. Due to the infinite variance of stable processes, the performance of the proposed algorithms is investigated only through extensive computer simulations.
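The least-mean p-norm idea above replaces the squared error with |e|^p for p below the stable characteristic exponent alpha, so the cost stays finite for infinite-variance inputs. A minimal sketch of the update in the simpler transversal (tapped-delay-line) form, not the lattice structure proposed in the thesis; the function name, step size and exponent are illustrative:

```python
import numpy as np

def lmp_filter(x, d, order=2, mu=0.01, p=1.2):
    """Least-mean p-norm (LMP) transversal adaptive filter sketch.

    Minimizes the p-th order moment E|e|^p (p < alpha) instead of the
    mean-square error. Returns the final weights and the error signal.
    """
    w = np.zeros(order)
    e = np.zeros(len(x))
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]                   # regressor, most recent first
        e[n] = d[n] - w @ u                        # prediction error
        w += mu * np.abs(e[n]) ** (p - 1) * np.sign(e[n]) * u   # LMP update
    return w, e
```

For p = 2 the update reduces to ordinary LMS; the lattice versions in the thesis apply the same p-norm criterion stage by stage to the forward and backward residuals.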
Resumo:
Time series models with conditionally heteroskedastic variances have become nearly indispensable for modelling time series in the context of financial data. In many applications, verifying the existence of a relationship between two time series is an important issue. In this thesis, we generalize in several directions, and in a multivariate setting, the procedure developed by Cheung and Ng (1996) for examining causality in variance between two univariate series. Building on the work of El Himdi and Roy (1997) and Duchesne (2004), we propose a test based on the cross-correlation matrices of the squared standardized residuals and of the cross-products of these residuals. Under the null hypothesis of no causality in variance, we establish that the test statistics converge in distribution to chi-square random variables. In a second approach, we define, as in Ling and Li (1997), a transformation of the residuals for each vector residual series. The test statistics are built from the cross-correlations of these transformed residuals. In both approaches, test statistics for individual lags are proposed, as well as portmanteau-type tests. This methodology is also used to determine the direction of causality in variance. Simulation results show that the proposed tests have satisfactory empirical properties. An application with real data is also presented to illustrate the methods.
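The first approach can be illustrated in the simplest univariate case: cross-correlate the squared standardized residuals of the two fitted models and refer a portmanteau sum of squared cross-correlations to a chi-square law. A sketch (the scalar simplification and the function name are illustrative; the thesis works with cross-correlation matrices):

```python
import numpy as np
from scipy import stats

def causality_in_variance(u, v, max_lag=5):
    """Cheung-Ng style test sketch, univariate case.

    u, v: standardized residuals from fitted conditional-variance models.
    Cross-correlates u_t^2 with v_{t-k}^2 for k = 1..max_lag; under no
    causality in variance from v to u, T * sum_k r_k^2 is asymptotically
    chi-square with max_lag degrees of freedom.
    """
    a = u**2 - np.mean(u**2)
    b = v**2 - np.mean(v**2)
    T = len(a)
    denom = np.sqrt(np.sum(a**2) * np.sum(b**2))
    r = np.array([np.sum(a[k:] * b[:T - k]) / denom
                  for k in range(1, max_lag + 1)])   # lag-k cross-correlations
    S = T * np.sum(r**2)                             # portmanteau statistic
    return S, 1 - stats.chi2.cdf(S, max_lag)
```

Running the test in both directions (u on lagged v, then v on lagged u) is what lets the methodology identify the direction of causality in variance.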
Resumo:
The present study concerns the characterization of probability distributions using the residual entropy function. The concept of entropy is extensively used in the literature as a quantitative measure of the uncertainty associated with a random phenomenon. The commonly used lifetime models in reliability theory are the exponential, Pareto, beta, Weibull and gamma distributions. Several characterization theorems are obtained for these models using reliability concepts such as the failure rate, mean residual life function, vitality function and variance residual life function. Most work on the characterization of distributions in the reliability context centres on the failure rate or the residual life function. An important aspect of interest in the study of entropy is locating distributions for which Shannon's entropy is maximal subject to certain restrictions on the underlying random variable. The geometric vitality function is introduced and its properties are examined; it is established that the geometric vitality function determines the distribution uniquely. The problem of averaging the residual entropy function is examined, and truncated versions of higher-order entropies are defined. It is established that the residual entropy function determines the distribution uniquely and that its constancy is characteristic of the geometric distribution.
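The residual entropy function referred to above is the Shannon entropy of the remaining life given survival past t, H(t) = -∫ g log g with g(x) = f(x)/S(t) on (t, ∞). A numerical sketch verifying its constancy for the exponential law, the continuous counterpart of the geometric characterization stated above (names and grid parameters are illustrative):

```python
import numpy as np

def _trapz(y, x):
    # trapezoid rule (avoids relying on np.trapz, removed in NumPy 2.0)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def residual_entropy(pdf, t, upper=60.0, n=200_000):
    """H(t) = -integral_t^inf g(x) log g(x) dx with g = f/S(t),
    the density of the residual life given X > t (numerical)."""
    x = np.linspace(t, upper, n)
    f = pdf(x)
    S = _trapz(f, x)                       # survival probability S(t)
    g = f / S                              # conditional density of residual life
    integrand = np.where(g > 0, -g * np.log(g), 0.0)
    return _trapz(integrand, x)

lam = 2.0
expo = lambda x: lam * np.exp(-lam * x)
# For the exponential law, H(t) = 1 - log(lam) for every t (memorylessness).
h0, h1 = residual_entropy(expo, 0.0), residual_entropy(expo, 1.5)
```

Constancy of H(t) pins down the memoryless laws, which is exactly the style of uniqueness result the study establishes.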
Resumo:
The Southern Ocean circulation consists of a complicated mixture of processes and phenomena that arise at different temporal and spatial scales and need to be parametrized in state-of-the-art climate models. The temporal and spatial scales that give rise to the present-day residual mean circulation are investigated here by calculating the Meridional Overturning Circulation (MOC) in density coordinates from an eddy-permitting global model. The region sensitive to the temporal decomposition is located between 38°S and 63°S, associated with the eddy-induced transport. The "Bolus" component of the residual circulation corresponds to the eddy-induced transport. It is dominated by timescales between 1 month and 1 year. The temporal behavior of the transient eddies is examined by splitting the "Bolus" component into a "Seasonal", an "Eddy" and an "Inter-monthly" component, respectively representing the correlation between density and velocity fluctuations due to the average seasonal cycle, due to mesoscale eddies, and due to large-scale motion on timescales longer than one month that is not due to the seasonal cycle. The "Seasonal" bolus cell is important at all latitudes near the surface. The "Eddy" bolus cell is dominant in the thermocline between 50°S and 35°S and over the whole ocean depth at the latitude of the Drake Passage. The "Inter-monthly" bolus cell is important in all density classes and is maximal in the Brazil–Malvinas Confluence and the Agulhas Return Current. The spatial decomposition indicates that a large part of the Eulerian mean circulation is recovered for spatial scales larger than 11.25°, implying that small-scale meanders in the Antarctic Circumpolar Current (ACC), near the Subantarctic and Polar Fronts, and near the Subtropical Front are important in the compensation of the Eulerian mean flow.
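The split between the Eulerian-mean and bolus transports used above can be illustrated for a single density layer: the time-mean layer transport ⟨vh⟩ separates into ⟨v⟩⟨h⟩ plus the eddy correlation ⟨v′h′⟩, and the bolus part is what remains after removing the mean part. A toy sketch with synthetic series (not the model's actual diagnostics):

```python
import numpy as np

def mean_and_bolus(v, h):
    """Decompose time-mean layer transport <v*h> into the 'Eulerian mean'
    part <v><h> and the 'bolus' (eddy-induced) part <v'h'>."""
    total = np.mean(v * h)            # <vh>
    mean_part = np.mean(v) * np.mean(h)
    bolus = total - mean_part         # covariance of v and h fluctuations
    return mean_part, bolus
```

Applying this with v and h filtered to different frequency bands is, in essence, how the "Seasonal", "Eddy" and "Inter-monthly" bolus components are separated.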
Resumo:
The objective was to evaluate the best modelling of the additive genetic, permanent environmental and residual variances of test-day milk yield (PLDC) in goats. Random regression models on orthogonal Legendre polynomials with different orders of fit and heterogeneous residual variance were used. The fixed effects were contemporary group, age of doe at kidding (covariate) and the fixed regression of PLDC on Legendre polynomials, to model the mean trajectory of the population; the random effects were the additive genetic and permanent environmental effects. The model with four residual variance classes provided the best fit. The log-likelihood, AIC and BIC values pointed to the selection of models of higher order (five for the genetic effect and seven for the permanent environmental effect). However, the eigenvalues associated with the covariance matrices of the regression coefficients indicated the possibility of reducing the dimensionality. The high orders of fit yielded estimates of genetic variances and of genetic and permanent environmental correlations inconsistent with the biological phenomenon under study. The model of fifth order for the additive genetic variance and seventh order for the permanent environment was indicated. However, a more parsimonious model, of fourth order for the additive genetic effect and sixth order for the permanent environmental effect, was sufficient to fit the variances in the data.
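The random-regression covariates in models like these are Legendre polynomials evaluated at age standardized to [-1, 1]. A minimal sketch of building the basis matrix, assuming the normalization sqrt((2j+1)/2) conventional in animal-breeding random-regression models (function and variable names are illustrative):

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_basis(age, age_min, age_max, order):
    """Basis matrix Phi for a random-regression model.

    Each age is rescaled to t in [-1, 1]; column j holds the j-th
    Legendre polynomial L_j(t) scaled by sqrt((2j+1)/2).
    """
    t = 2.0 * (np.asarray(age, float) - age_min) / (age_max - age_min) - 1.0
    Phi = np.empty((len(t), order))
    for j in range(order):
        coef = np.zeros(j + 1)
        coef[j] = 1.0                                  # select L_j
        Phi[:, j] = np.sqrt((2 * j + 1) / 2.0) * legendre.legval(t, coef)
    return Phi
```

The additive genetic and permanent environmental curves of an animal are then linear combinations of these columns, which is why the order of fit directly controls how flexibly the variances change along the trajectory.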
Resumo:
A total of 15,901 scrotal circumference (SC) records from 5300 Nelore bulls, ranging from 229 to 560 days of age, were used with the objective of estimating (co)variance functions for SC using random regression models. Models included the fixed effects of contemporary group and age of dam at calving as a covariable (linear and quadratic effects). To model the population mean trend, a third-order Legendre polynomial on animal age was utilized. The direct additive genetic and animal permanent environmental random effects were modeled by Legendre polynomials on animal age, with orders of fit ranging from 1 to 5. Residual variances were modeled considering 1 (homogeneity of variance) or 4 age classes. Results obtained with the random regression models were compared to multi-trait analysis. (Co)variance estimates using multi-trait and random regression models were similar. The model considering third- and fifth-order Legendre polynomials for additive genetic and animal permanent environmental effects, respectively, was the most adequate to model changes in the variance of SC with age. Heritability estimates for SC ranged from 0.24 (229 days of age) to 0.47 (300 days of age), remained almost constant until 500 days of age (0.52), decreasing thereafter (0.44). In general, the genetic correlations between measures of scrotal circumference obtained from 229 to 560 days of age decreased with increasing distance between ages. For genetic evaluation, scrotal circumference could be measured between 400 and 500 days of age. (C) 2010 Elsevier B.V. All rights reserved.
Resumo:
The main objective of this work was to evaluate the importance of including the maternal genetic, common litter and permanent environmental effects in the model for estimating variance components for the farrowing interval trait in sows. The data consisted of 1,013 records of Dalland (C-40) females from two herds. Variance components were estimated by derivative-free restricted maximum likelihood. Eight models were tested, all containing the fixed effects (contemporary groups and covariates) and the direct additive genetic and residual effects, but differing in the inclusion of the random maternal genetic, common litter environmental and permanent environmental effects. The likelihood ratio test (LR) indicated that including these effects in the model was not necessary. However, the permanent environmental effect changed the heritability estimates, which ranged from 0.00 to 0.03. It is concluded that the heritability values obtained indicate that this trait would show no genetic gain in response to selection. The common litter environmental and maternal genetic effects had no influence on this trait. The permanent environmental effect, even though not significant by the LR test, should be considered in genetic models for this trait, since its presence changed the estimates of additive genetic variance.
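The likelihood-ratio comparison used above is a one-line computation once the REML log-likelihoods of the nested models are available. A sketch with hypothetical log-likelihood values (note that when a variance component is tested on its boundary, sigma^2 = 0, the usual chi-square reference is conservative):

```python
from scipy import stats

def lr_test(loglik_full, loglik_reduced, df=1):
    """Likelihood-ratio test for dropping a random effect.

    loglik_full: REML log-likelihood of the model with the extra effect.
    loglik_reduced: REML log-likelihood without it.
    df: number of additional (co)variance parameters in the full model.
    """
    lr = 2.0 * (loglik_full - loglik_reduced)
    return lr, 1 - stats.chi2.cdf(lr, df)
```

A non-significant LR, as found here for the permanent environmental effect, does not by itself settle whether the effect should be dropped when its inclusion shifts the other variance estimates, which is exactly the caveat the abstract raises.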