970 results for "Covariance estimate"
Abstract:
In moment structure analysis with nonnormal data, asymptotically valid inferences require the computation of a consistent (under general distributional assumptions) estimate of the matrix $\Gamma$ of asymptotic variances of sample second-order moments. Such a consistent estimate involves the fourth-order sample moments of the data. In practice, the use of fourth-order moments leads to computational burden and lack of robustness against small samples. In this paper we show that, under certain assumptions, correct asymptotic inferences can be attained when $\Gamma$ is replaced by a matrix $\Omega$ that involves only the second-order moments of the data. The present paper extends, to the context of multi-sample analysis of second-order moment structures, results derived in the context of (single-sample) covariance structure analysis (Satorra and Bentler, 1990). The results apply to a variety of estimation methods and general types of statistics. An example involving a test of equality of means under covariance restrictions illustrates theoretical aspects of the paper.
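To make the contrast concrete, the following single-sample numpy sketch (an illustration only, not the paper's multi-sample machinery) computes a distribution-free estimate of $\Gamma$ from fourth-order moments alongside a normal-theory matrix built from second-order moments only, here taken as 2 D+ (S kron S) D+'; whether that matrix coincides exactly with the paper's $\Omega$ is an assumption of this sketch.

```python
import numpy as np

def duplication_matrix(p):
    """D_p with vec(A) = D_p @ vech(A) for a symmetric p x p matrix A (column-major vec)."""
    D = np.zeros((p * p, p * (p + 1) // 2))
    k = 0
    for j in range(p):
        for i in range(j, p):
            D[j * p + i, k] = 1.0
            D[i * p + j, k] = 1.0
            k += 1
    return D

def gamma_adf(X):
    """Distribution-free estimate of Gamma: covariance matrix of
    d_i = vech((x_i - xbar)(x_i - xbar)'), which involves fourth-order sample moments."""
    Xc = X - X.mean(axis=0)
    p = X.shape[1]
    idx = [(i, j) for j in range(p) for i in range(j, p)]
    D = np.column_stack([Xc[:, i] * Xc[:, j] for (i, j) in idx])
    return np.cov(D, rowvar=False, bias=True)

def omega_normal(S):
    """Normal-theory matrix 2 D+ (S kron S) D+', built from second-order moments only."""
    Dplus = np.linalg.pinv(duplication_matrix(S.shape[0]))
    return 2.0 * Dplus @ np.kron(S, S) @ Dplus.T

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 3))
print(np.round(gamma_adf(X), 2))
print(np.round(omega_normal(np.cov(X, rowvar=False)), 2))
```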
Abstract:
This paper analyzes whether standard covariance matrix tests work when dimensionality is large, and in particular larger than sample size. In the latter case, the singularity of the sample covariance matrix makes likelihood ratio tests degenerate, but other tests based on quadratic forms of sample covariance matrix eigenvalues remain well-defined. We study the consistency property and limiting distribution of these tests as dimensionality and sample size go to infinity together, with their ratio converging to a finite non-zero limit. We find that the existing test for sphericity is robust against high dimensionality, but not the test for equality of the covariance matrix to a given matrix. For the latter test, we develop a new correction to the existing test statistic that makes it robust against high dimensionality.
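As a rough illustration, the two quadratic forms of sample covariance eigenvalues mentioned above can be computed even when the sample covariance matrix is singular (p > n); the centering and scaling required for their high-dimensional limiting distributions, including the paper's correction, are omitted here.

```python
import numpy as np

def sphericity_stat(S):
    """John-type statistic: mean squared deviation of normalized eigenvalues from 1
    (tests Sigma proportional to the identity)."""
    lam = np.linalg.eigvalsh(S)
    return np.mean((lam / lam.mean() - 1.0) ** 2)

def identity_stat(S):
    """Nagao-type statistic: mean squared deviation of eigenvalues from 1
    (tests Sigma equal to a given matrix, taken as the identity after standardization)."""
    lam = np.linalg.eigvalsh(S)
    return np.mean((lam - 1.0) ** 2)

# p > n: the sample covariance matrix is singular, yet both statistics remain finite
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 200))          # n = 50 observations, p = 200 variables
S = np.cov(X, rowvar=False)
print(sphericity_stat(S), identity_stat(S))
```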
Abstract:
Despite the advancement of phylogenetic methods to estimate speciation and extinction rates, their power can be limited under variable rates, in particular for clades with high extinction rates and a small number of extant species. Fossil data can provide a powerful alternative source of information to investigate diversification processes. Here, we present PyRate, a computer program to estimate speciation and extinction rates and their temporal dynamics from fossil occurrence data. The rates are inferred in a Bayesian framework and are comparable to those estimated from phylogenetic trees. We describe how PyRate can be used to explore different models of diversification. In addition to the diversification rates, it provides estimates of the parameters of the preservation process (fossilization and sampling) and the times of speciation and extinction of each species in the data set. Moreover, we develop a new birth-death model to correlate the variation of speciation/extinction rates with changes of a continuous trait. Finally, we demonstrate the use of Bayes factors for model selection and show how the posterior estimates of a PyRate analysis can be used to generate calibration densities for Bayesian molecular clock analysis. PyRate is an open-source command-line Python program available at http://sourceforge.net/projects/pyrate/.
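PyRate's own command-line options are documented at the project page linked above; as a generic sketch of the constant-rate birth-death model underlying this kind of inference (not PyRate code, and without the preservation process), speciation and extinction times can be forward-simulated as follows.

```python
import numpy as np

def simulate_birth_death(lam, mu, t_max, rng=None):
    """Forward-simulate a constant-rate birth-death process starting from one species.
    Returns a list of (speciation_time, extinction_time or None) tuples, one per species."""
    rng = rng or np.random.default_rng()
    events = [(0.0, None)]        # species 0 originates at time 0
    alive = [0]                   # indices of currently extant species
    t = 0.0
    while alive:
        t += rng.exponential(1.0 / (len(alive) * (lam + mu)))
        if t >= t_max:
            break
        sp = int(rng.choice(alive))
        if rng.random() < lam / (lam + mu):     # speciation event
            events.append((t, None))
            alive.append(len(events) - 1)
        else:                                   # extinction event
            events[sp] = (events[sp][0], t)
            alive.remove(sp)
    return events

history = simulate_birth_death(lam=0.3, mu=0.1, t_max=50.0, rng=np.random.default_rng(1))
print(len(history), "species simulated")
```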
Abstract:
The central message of this paper is that nobody should be using the sample covariance matrix for the purpose of portfolio optimization. It contains estimation error of the kind most likely to perturb a mean-variance optimizer. In its place, we suggest using the matrix obtained from the sample covariance matrix through a transformation called shrinkage. This tends to pull the most extreme coefficients towards more central values, thereby systematically reducing estimation error where it matters most. Statistically, the challenge is to know the optimal shrinkage intensity, and we give the formula for that. Without changing any other step in the portfolio optimization process, we show on actual stock market data that shrinkage reduces tracking error relative to a benchmark index, and substantially increases the realized information ratio of the active portfolio manager.
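A minimal sketch of the idea using scikit-learn's LedoitWolf estimator; note that this implementation shrinks toward a scaled identity target, which may differ from the shrinkage target used in the paper, and the returns below are simulated placeholders.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

# X: T x N matrix of returns (rows = dates, columns = assets); simulated placeholder here
rng = np.random.default_rng(0)
X = rng.standard_normal((60, 100))            # more assets than observations

lw = LedoitWolf().fit(X)
Sigma_shrunk = lw.covariance_                 # shrunk covariance estimate
print("estimated shrinkage intensity:", round(lw.shrinkage_, 3))
```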
Abstract:
The goal of this paper is to estimate time-varying covariance matrices. Since the covariance matrix of financial returns is known to change through time and is an essential ingredient in risk measurement, portfolio selection, and tests of asset pricing models, this is a very important problem in practice. Our model of choice is the Diagonal-Vech version of the Multivariate GARCH(1,1) model. The problem is that the estimation of the general Diagonal-Vech model is numerically infeasible in dimensions higher than 5. The common approach is to estimate more restrictive models which are tractable but may not conform to the data. Our contribution is to propose an alternative estimation method that is numerically feasible, produces positive semi-definite conditional covariance matrices, and does not impose unrealistic a priori restrictions. We provide an empirical application in the context of international stock markets, comparing the new estimator to a number of existing ones.
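For concreteness, here is a small sketch of the Diagonal-Vech GARCH(1,1) covariance recursion itself (the filtering step only); the paper's contribution is an estimation method for the parameter matrices, which is not reproduced here, and arbitrary C, A, B need not yield positive semi-definite covariances.

```python
import numpy as np

def diagonal_vech_garch_filter(returns, C, A, B):
    """Filter conditional covariances H_t = C + A * (eps_{t-1} eps_{t-1}') + B * H_{t-1},
    where C, A, B are symmetric N x N parameter matrices and '*' is elementwise."""
    T, N = returns.shape
    H = np.empty((T, N, N))
    H[0] = np.cov(returns, rowvar=False)       # initialize at the sample covariance
    for t in range(1, T):
        eps = returns[t - 1][:, None]
        H[t] = C + A * (eps @ eps.T) + B * H[t - 1]
    return H

# toy usage with hypothetical 2-asset parameter matrices
rng = np.random.default_rng(0)
r = rng.standard_normal((250, 2)) * 0.01
C = np.array([[2e-5, 1e-5], [1e-5, 2e-5]])
A = np.full((2, 2), 0.05)
B = np.full((2, 2), 0.90)
H = diagonal_vech_garch_filter(r, C, A, B)
print(H[-1])
```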
Abstract:
INTRODUCTION: Interindividual variations in regional structural properties covary across the brain, thus forming networks that change as a result of aging and accompanying neurological conditions. The alterations of superficial white matter (SWM) in Alzheimer's disease (AD) are of special interest, since they follow the AD-specific pattern characterized by the strongest neurodegeneration of the medial temporal lobe and association cortices. METHODS: Here, we present an SWM network analysis in comparison with SWM topography based on the myelin content quantified with magnetization transfer ratio (MTR) for 39 areas in each hemisphere in 15 AD patients and 15 controls. The networks are represented by graphs, in which nodes correspond to the areas, and edges denote statistical associations between them. RESULTS: In both groups, the networks were characterized by asymmetrically distributed edges (predominantly in the left hemisphere). The AD-related differences were also leftward. The edges lost due to AD tended to connect nodes in the temporal lobe to other lobes or nodes within or between the latter lobes. The newly gained edges were mostly confined to the temporal and paralimbic regions, which manifest demyelination of SWM already in mild AD. CONCLUSION: This pattern suggests that the AD pathological process coordinates SWM demyelination in the temporal and paralimbic regions, but not elsewhere. A comparison of the MTR maps with MTR-based networks shows that although, in general, the changes in network architecture in AD recapitulate the topography of (de)myelination, some aspects of structural covariance (including the interhemispheric asymmetry of networks) have no immediate reflection in the myelination pattern.
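A schematic sketch of how such a structural covariance graph might be assembled from regional MTR values; the paper's actual edge definition and group-comparison procedure are more involved, and the 0.5 correlation threshold below is purely hypothetical.

```python
import numpy as np
import networkx as nx

def covariance_network(mtr, threshold=0.5):
    """Build a structural covariance graph from a (subjects x areas) MTR matrix:
    nodes are cortical areas, edges join areas whose MTR values correlate strongly
    across subjects (|r| >= threshold)."""
    R = np.corrcoef(mtr, rowvar=False)        # areas x areas correlation matrix
    A = (np.abs(R) >= threshold).astype(int)
    np.fill_diagonal(A, 0)
    return nx.from_numpy_array(A)

# hypothetical data: 15 subjects, 39 areas per hemisphere (78 nodes in total)
rng = np.random.default_rng(0)
G = covariance_network(rng.normal(size=(15, 78)))
print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
```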
Abstract:
Chrysomya albiceps (Wiedemann) and Hemilucilia segmentaria (Fabricius) (Diptera, Calliphoridae) were used to estimate the postmortem interval in a forensic case in Minas Gerais, Brazil. The corpse of a man was found in a Brazilian highland savanna (cerrado) in the state of Minas Gerais. Fly larvae were collected at the crime scene and arrived at the laboratory three days afterwards. From the eight pre-pupae, seven adults of Chrysomya albiceps (Wiedemann, 1819) emerged and, from the two larvae, two adults of Hemilucilia segmentaria (Fabricius, 1805) were obtained. As necrophagous insects use corpses as a feeding resource, their development rate can be used as a tool to estimate the postmortem interval. The post-embryonic development stage of the immatures collected on the body was estimated as the difference between the total development time and the time required for them to become adults in the lab. The estimated age of the maggots from both species and the minimum postmortem interval were four days. This is the first time that H. segmentaria has been used to estimate the postmortem interval in a forensic case.
Abstract:
The classical binary classification problem is investigated when it is known in advance that the posterior probability function (or regression function) belongs to some class of functions. We introduce and analyze a method which effectively exploits this knowledge. The method is based on minimizing the empirical risk over a carefully selected "skeleton" of the class of regression functions. The skeleton is a covering of the class based on a data-dependent metric, especially fitted for classification. A new scale-sensitive dimension is introduced which is more useful for the studied classification problem than other, previously defined, dimension measures. This fact is demonstrated by performance bounds for the skeleton estimate in terms of the new dimension.
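A toy sketch of the final step, empirical risk minimization over a finite set of candidate posterior-probability functions; constructing the data-dependent skeleton (the covering itself) is the substantive part of the method and is not shown here.

```python
import numpy as np

def skeleton_erm(candidates, X, y):
    """Return the candidate regression function whose plug-in classifier
    minimizes the empirical classification risk on (X, y)."""
    def empirical_risk(f):
        preds = (f(X) >= 0.5).astype(int)
        return np.mean(preds != y)
    return min(candidates, key=empirical_risk)

# toy usage with two hypothetical candidates for the posterior probability function
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 1))
y = (X[:, 0] > 0.6).astype(int)
candidates = [lambda X: X[:, 0], lambda X: 1.0 - X[:, 0]]
best = skeleton_erm(candidates, X, y)
```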
Abstract:
This study reviewed the data on the Brazilian Ephemeroptera, based on the studies published before July 2013, estimated the number of species still to be described, and identified which regions of the country have been the subject of the least research. More than half the species are known from the description of only one developmental stage, with imagoes being described more frequently than nymphs. The Brazilian Northeast is the region with the weakest database. Body size affected description rates, with a strong tendency for the larger species to be described first. The estimated number of unknown Brazilian species was accentuated by the fact that so few species have been described so far. The steep slope of the asymptote and the considerable confidence interval of the estimate reinforce the conclusion that a large number of species are still to be described. This emphasizes the need for investments in the training of specialists in systematics and ecology for all regions of Brazil to correct these deficiencies, given the role of published papers as a primary source of information, and the fundamental importance of taxonomic knowledge for the development of effective measures for the conservation of ephemeropterans and the aquatic ecosystems they depend on.
Abstract:
In this article, starting from the inverse of the variance-covariance matrix, the Markowitz mean-variance model is derived along a shorter and mathematically rigorous path. The equilibrium equation of Sharpe's CAPM is also obtained.
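As a numerical companion (not taken from the article), the standard mean-variance weights can be written directly in terms of the inverse covariance matrix.

```python
import numpy as np

def frontier_weights(mu, Sigma, target_return):
    """Mean-variance efficient weights for a fully invested portfolio,
    obtained from the inverse of the covariance matrix (no short-sale constraint)."""
    ones = np.ones(len(mu))
    Si = np.linalg.inv(Sigma)
    A = ones @ Si @ ones
    B = ones @ Si @ mu
    C = mu @ Si @ mu
    D = A * C - B * B
    lam = (C - B * target_return) / D      # Lagrange multiplier for w'1 = 1
    gam = (A * target_return - B) / D      # Lagrange multiplier for w'mu = target
    return Si @ (lam * ones + gam * mu)

# hypothetical three-asset example
mu = np.array([0.05, 0.07, 0.10])
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
print(frontier_weights(mu, Sigma, target_return=0.08))
```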
Abstract:
In this paper we analyse, using Monte Carlo simulation, the possible consequences of incorrect assumptions about the true structure of the random effects covariance matrix and the true correlation pattern of residuals on the performance of an estimation method for nonlinear mixed models. The procedure under study is the well-known linearization method due to Lindstrom and Bates (1990), implemented in the nlme library of S-Plus and R. Its performance is studied in terms of bias, mean square error (MSE), and true coverage of the associated asymptotic confidence intervals. Ignoring other criteria, such as the convenience of avoiding over-parameterised models, it appears worse to erroneously assume some structure than to assume no structure at all when the latter would be adequate.
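A schematic of how bias, MSE, and coverage might be tallied in such a Monte Carlo study; fitting the nonlinear mixed model via the Lindstrom-Bates linearization is abstracted into the user-supplied estimator function, and the toy usage below (a normal mean) is only a placeholder.

```python
import numpy as np

def mc_performance(estimator, simulate, theta_true, n_rep=1000, seed=0):
    """Tally bias, mean square error and 95% confidence-interval coverage over
    n_rep simulated data sets. 'simulate(rng)' draws one data set and
    'estimator(data)' returns (theta_hat, se_hat) for the parameter of interest."""
    rng = np.random.default_rng(seed)
    z = 1.959963984540054                     # approx. 97.5% standard normal quantile
    estimates, covered = [], []
    for _ in range(n_rep):
        theta_hat, se_hat = estimator(simulate(rng))
        estimates.append(theta_hat)
        covered.append(abs(theta_hat - theta_true) <= z * se_hat)
    estimates = np.asarray(estimates)
    return {"bias": estimates.mean() - theta_true,
            "mse": np.mean((estimates - theta_true) ** 2),
            "coverage": np.mean(covered)}

# toy usage: estimating a normal mean
res = mc_performance(
    estimator=lambda x: (x.mean(), x.std(ddof=1) / np.sqrt(len(x))),
    simulate=lambda rng: rng.normal(loc=2.0, scale=1.0, size=30),
    theta_true=2.0)
print(res)
```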
Abstract:
The Proctor test is time-consuming and requires sampling of several kilograms of soil. Proctor test parameters were predicted in Mollisols, Entisols and Vertisols of the Pampean region of Argentina under different management systems. They were estimated from a minimum number of readily available soil properties (soil texture, total organic C) and management (training data set; n = 73). The results were used to generate a soil compaction susceptibility model, which was subsequently validated using a second group of independent data (test data set; n = 24). Soil maximum bulk density was estimated as follows: Maximum bulk density (Mg m-3) = 1.4756 - 0.00599 total organic C (g kg-1) + 0.0000275 sand (g kg-1) + 0.0539 management. Management was equal to 0 for uncropped and untilled soils and 1 for conventionally tilled soils. The established models predicted the Proctor test parameters reasonably well, based on readily available soil properties. Tillage systems induced changes in the maximum bulk density regardless of total organic matter content or soil texture. The lower maximum apparent bulk density values under no-tillage require a revision of the relative compaction thresholds for different no-tillage crops.
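The fitted regression reported above translates directly into a small helper function; the example inputs below are hypothetical.

```python
def max_bulk_density(total_organic_c, sand, tilled):
    """Maximum bulk density (Mg m-3) from the regression in the abstract.
    total_organic_c and sand in g kg-1; tilled = 1 for conventionally tilled soil,
    0 for uncropped and untilled soil."""
    return 1.4756 - 0.00599 * total_organic_c + 0.0000275 * sand + 0.0539 * tilled

# example with hypothetical inputs
print(round(max_bulk_density(total_organic_c=20.0, sand=600.0, tilled=1), 3))
```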
Abstract:
Macroporosity is often used in the determination of soil compaction. Reduced macroporosity can lead to poor drainage, low root aeration and soil degradation. The aim of this study was to develop and test different models to estimate macro- and microporosity efficiently, using multiple regression. Ten soils were selected within a large range of textures: sand (Sa) 0.07-0.84; silt 0.03-0.24; clay 0.13-0.78 kg kg-1, and subjected to three compaction levels (three bulk densities, BD). Two models with similar accuracy were selected, with a mean error of about 0.02 m³ m-3 (2 %). The model y = a + b.BD + c.Sa, named model 2, was selected for its simplicity to estimate Macro (Ma), Micro (Mi) or total porosity (TP): Ma = 0.693 - 0.465 BD + 0.212 Sa; Mi = 0.337 + 0.120 BD - 0.294 Sa; TP = 1.030 - 0.345 BD - 0.082 Sa; porosity values were expressed in m³ m-3, BD in kg dm-3, and Sa in kg kg-1. The model was tested with 76 data sets from several other authors. An error of about 0.04 m³ m-3 (4 %) was observed. Simulations of variations in BD as a function of Sa are presented for Ma = 0 and Ma = 0.10 (10 %). The macroporosity equation was remodeled to obtain other compaction indexes: a) to simulate maximum bulk density (MBD) as a function of Sa (Equation 11), in agreement with literature data; b) to simulate relative bulk density (RBD) as a function of BD and Sa (Equation 13); c) another model to simulate RBD as a function of Ma and Sa (Equation 16), confirming the independence of this variable in relation to Sa for a fixed value of macroporosity and also proving the hypothesis of Hakansson & Lipiec that RBD = 0.87 corresponds approximately to 10 % macroporosity (Ma = 0.10 m³ m-3).
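The model 2 equations above translate directly into code; a short sketch with hypothetical inputs:

```python
def porosity_model_2(bulk_density, sand):
    """Model 2 from the abstract: macro-, micro- and total porosity (m3 m-3)
    from bulk density (kg dm-3) and sand content (kg kg-1)."""
    ma = 0.693 - 0.465 * bulk_density + 0.212 * sand
    mi = 0.337 + 0.120 * bulk_density - 0.294 * sand
    tp = 1.030 - 0.345 * bulk_density - 0.082 * sand   # equals ma + mi
    return ma, mi, tp

# hypothetical input: BD = 1.5 kg dm-3, Sa = 0.40 kg kg-1
print(porosity_model_2(1.5, 0.40))
```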