1000 results for Test de matrices progresivas
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
"Supported in part by the Atomic Energy Commission under Grant U.S. AEC AT(11-1)1469."
Abstract:
This article presents maximum likelihood estimators (MLEs) and log-likelihood ratio (LLR) tests for the eigenvalues and eigenvectors of Gaussian random symmetric matrices of arbitrary dimension, where the observations are independent repeated samples from one or two populations. These inference problems are relevant in the analysis of diffusion tensor imaging data and polarized cosmic background radiation data, where the observations are, respectively, 3 × 3 and 2 × 2 symmetric positive definite matrices. The parameter sets involved in the inference problems for eigenvalues and eigenvectors are subsets of Euclidean space that are either affine subspaces, embedded submanifolds invariant under orthogonal transformations, or polyhedral convex cones. We show that for a class of sets that includes the ones considered in this paper, the MLEs of the mean parameter do not depend on the covariance parameters if and only if the covariance structure is orthogonally invariant. Closed-form expressions for the MLEs and the associated LLRs are derived for this covariance structure.
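The key claim above (under an orthogonally invariant covariance, the MLE of the mean does not depend on the covariance parameters) can be illustrated numerically. The sketch below is not from the article: the true mean matrix, noise scale, and sample size are invented for illustration. Under this simple model the sample mean is the MLE of the mean matrix, and its eigenvalues estimate the population eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented true mean matrix with eigenvalues 3, 2, 1 (illustration only).
A = np.diag([3.0, 2.0, 1.0])

# n i.i.d. Gaussian symmetric observations around A. With an
# orthogonally invariant covariance structure, the MLE of the mean is
# the sample mean, independent of the covariance parameters.
n = 2000
samples = []
for _ in range(n):
    N = rng.normal(scale=0.5, size=(3, 3))
    samples.append(A + (N + N.T) / 2)       # symmetrised noise

Mbar = np.mean(samples, axis=0)             # MLE of the mean matrix
eig = np.sort(np.linalg.eigvalsh(Mbar))[::-1]
print(np.round(eig, 2))                     # near [3, 2, 1]
```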
Abstract:
Unraveling the effects of selection vs. drift on the evolution of quantitative traits is commonly achieved by one of two methods: either one contrasts population differentiation estimates for genetic markers and quantitative traits (the Qst-Fst contrast), or multivariate methods are used to study the covariance between sets of traits. In particular, many studies have focused on the genetic variance-covariance matrix (the G matrix). However, both drift and selection can cause changes in G. To understand their joint effects, we recently combined the two methods into a single test (accompanying article by Martin et al.), which we apply here to a network of 16 natural populations of the freshwater snail Galba truncatula. Using this new neutrality test, extended to hierarchical population structures, we studied the multivariate equivalent of the Qst-Fst contrast for several life-history traits of G. truncatula. We found strong evidence of selection acting on multivariate phenotypes. Selection was homogeneous among populations within each habitat and heterogeneous between habitats. We found that the G matrices were relatively stable within each habitat, with proportionality between the among-populations (D) and the within-populations (G) covariance matrices. The effect of habitat heterogeneity is to break this proportionality because of selection for habitat-dependent optima. Individual-based simulations mimicking our empirical system confirmed that these patterns are expected under the inferred selective regime. We show that homogenizing selection can mimic some effects of drift on the G matrix (G and D almost proportional), but that incorporating information from molecular markers (multivariate Qst-Fst) allows the two effects to be disentangled.
Abstract:
Neutrality tests in quantitative genetics provide a statistical framework for the detection of selection on polygenic traits in wild populations. However, the existing method based on comparisons of divergence at neutral markers and quantitative traits (Qst-Fst) suffers from several limitations that hinder a clear interpretation of the results with typical empirical designs. In this article, we propose a multivariate extension of this neutrality test based on empirical estimates of the among-populations (D) and within-populations (G) covariance matrices by MANOVA. A simple pattern is expected under neutrality, D = 2Fst/(1 - Fst) G, so that neutrality implies both proportionality of the two matrices and a specific value of the proportionality coefficient. This pattern is tested using Flury's framework for matrix comparison [common principal-component (CPC) analysis], a well-known tool in G-matrix evolution studies. We show the importance of using a Bartlett adjustment of the test for the small sample sizes typically found in empirical studies. We propose a dual test: (i) that the proportionality coefficient does not differ from its neutral expectation [2Fst/(1 - Fst)] and (ii) that the MANOVA estimates of the among- and within-population mean square matrices are proportional. These two tests combined provide a more stringent test for neutrality than the classic Qst-Fst comparison and avoid several statistical problems. Extensive simulations of realistic empirical designs suggest that these tests correctly detect the expected pattern under neutrality and have enough power to efficiently detect mild to strong selection (homogeneous, heterogeneous, or mixed) when it is acting on a set of traits. This method also provides a rigorous and quantitative framework for disentangling the effects of different selection regimes and of drift on the evolution of the G matrix. We discuss practical requirements for the proper application of our test in empirical studies and potential extensions.
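The neutral expectation stated in the abstract, D = 2Fst/(1 - Fst) G, can be sketched numerically. The G matrix and the Fst value below are illustrative inventions, not estimates from the article; the snippet builds D under perfect neutrality and checks both parts of the dual test (proportionality, and the value of the proportionality coefficient):

```python
import numpy as np

# Illustrative within-population genetic covariance matrix G and a
# neutral differentiation level Fst (invented values, not estimates).
G = np.array([[1.0, 0.3],
              [0.3, 0.5]])
fst = 0.2

# Neutral expectation: D = 2*Fst/(1 - Fst) * G
c_neutral = 2 * fst / (1 - fst)             # 0.5 here
D = c_neutral * G                           # what drift alone predicts

# Dual check: D @ inv(G) should equal c*I, with c at its neutral value.
M = D @ np.linalg.inv(G)
c_hat = np.trace(M) / M.shape[0]
off = M - c_hat * np.eye(M.shape[0])
print(c_hat, np.allclose(off, 0))           # -> 0.5 True
```

In real data D and G are noisy MANOVA estimates, so the comparison is made with formal matrix-proportionality tests (the CPC framework) rather than an exact identity as here.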
Abstract:
This review paper reports the consensus of a technical workshop hosted by the European network, NanoImpactNet (NIN). The workshop aimed to review the collective experience of working at the bench with manufactured nanomaterials (MNMs), and to recommend modifications to existing experimental methods and OECD protocols. Current procedures for cleaning glassware are appropriate for most MNMs, although interference with electrodes may occur. Maintaining exposure is more difficult with MNMs compared to conventional chemicals. A metal salt control is recommended for experiments with metallic MNMs that may release free metal ions. Dispersing agents should be avoided, but if they must be used, then natural or synthetic dispersing agents are possible, and dispersion controls are essential. Time constraints and technology gaps indicate that full characterisation of test media during ecotoxicity tests is currently not practical. Details of electron microscopy, dark-field microscopy, a range of spectroscopic methods (EDX, XRD, XANES, EXAFS), light scattering techniques (DLS, SLS) and chromatography are discussed. The development of user-friendly software to predict particle behaviour in test media according to DLVO theory is in progress, and simple optical methods are available to estimate the settling behaviour of suspensions during experiments. However, for soil matrices such simple approaches may not be applicable. Alternatively, a Critical Body Residue approach may be taken in which body concentrations in organisms are related to effects, and toxicity thresholds derived. For microbial assays, the cell wall is a formidable barrier to MNMs, and end points that rely on the test substance penetrating the cell may be insensitive. Instead, assays based on the cell envelope should be developed for MNMs. In algal growth tests, the abiotic factors that promote particle aggregation in the media (e.g. ionic strength) are also important in providing nutrients, and manipulation of the media to control the dispersion may also inhibit growth. Controls to quantify shading effects, and precise details of lighting regimes, shaking or mixing, should be reported in algal tests. Photosynthesis may be more sensitive than traditional growth end points for algae and plants. Tests with invertebrates should consider non-chemical toxicity from particle adherence to the organisms. The use of semi-static exposure methods with fish can reduce the logistical issues of waste water disposal and facilitate aspects of animal husbandry relevant to MNMs. There are concerns that the existing bioaccumulation tests are conceptually flawed for MNMs and that new test(s) are required. In vitro testing strategies, as exemplified by genotoxicity assays, can be modified for MNMs, but the risk of false negatives in some assays is highlighted. In conclusion, most protocols will require some modifications, and recommendations are made to aid the researcher at the bench.
Abstract:
Hierarchical clustering is a popular method for finding structure in multivariate data, resulting in a binary tree constructed on the particular objects of the study, usually sampling units. The user faces the decision of where to cut the binary tree in order to determine the number of clusters to interpret, and there are various ad hoc rules for arriving at a decision. A simple permutation test is presented that diagnoses whether non-random levels of clustering are present in the set of objects and, if so, indicates the specific level at which the tree can be cut. The test is validated against random matrices to verify the type I error probability, and a power study is performed on data sets with known clusteredness to study the type II error.
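One hedged way to sketch such a permutation diagnostic (not the article's exact procedure: the test statistic and the column-wise shuffling scheme here are assumptions chosen for illustration) is to compare the largest jump between successive single-linkage merge heights against its distribution when each variable is permuted independently:

```python
import numpy as np

def merge_heights(X):
    """Single-linkage merge heights: these equal the sorted edge
    lengths of the Euclidean minimum spanning tree (Prim's algorithm)."""
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    n = len(X)
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    dist = D[0].copy()
    edges = []
    for _ in range(n - 1):
        dist[in_tree] = np.inf          # never re-attach tree vertices
        j = int(np.argmin(dist))
        edges.append(dist[j])
        in_tree[j] = True
        dist = np.minimum(dist, D[j])   # update distances to the tree
    return np.sort(np.array(edges))

def cluster_perm_test(X, n_perm=99, seed=0):
    """Permutation diagnostic for non-random clustering (a sketch, not
    the article's exact test). Statistic: the largest jump between
    successive merge heights; shuffling each variable independently
    destroys joint cluster structure but keeps the marginals."""
    rng = np.random.default_rng(seed)
    obs = np.max(np.diff(merge_heights(X)))
    null = np.array([
        np.max(np.diff(merge_heights(
            np.column_stack([rng.permutation(c) for c in X.T]))))
        for _ in range(n_perm)])
    return obs, (1 + np.sum(null >= obs)) / (n_perm + 1)

# Demo on synthetic data with two well-separated groups.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.5, (20, 3)),
               rng.normal(5.0, 0.5, (20, 3))])
obs, p = cluster_perm_test(X)
print(obs, p)   # a small p flags non-random clustering
```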
Abstract:
In a weighted spatial network, as specified by an exchange matrix, the variances of the spatial values are inversely proportional to the size of the regions. Spatial values are no longer exchangeable under independence, thus weakening the rationale for ordinary permutation and bootstrap tests of spatial autocorrelation. We propose an alternative permutation test for spatial autocorrelation, based upon exchangeable spatial modes, constructed as linear orthogonal combinations of spatial values. The coefficients are obtained as eigenvectors of the standardised exchange matrix appearing in spectral clustering, and generalise to the weighted case the concept of spatial filtering for connectivity matrices. Also, two proposals aimed at transforming an accessibility matrix into an exchange matrix with a priori fixed margins are presented. Two examples (inter-regional migratory flows and binary adjacency networks) illustrate the formalism, rooted in the theory of spectral decomposition for reversible Markov chains.
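A minimal numerical sketch of the standardised exchange matrix and its spatial modes, assuming an invented 4-region symmetric exchange matrix (the values are not from the article). The trivial mode, the square root of the margins, always has eigenvalue 1; the remaining orthogonal modes supply the exchangeable coordinates for the permutation test:

```python
import numpy as np

# Invented symmetric exchange matrix E for four regions; its margins
# f = row sums act as regional weights (values are illustrative only).
E = np.array([[0.10, 0.05, 0.02, 0.01],
              [0.05, 0.20, 0.04, 0.02],
              [0.02, 0.04, 0.15, 0.05],
              [0.01, 0.02, 0.05, 0.10]])
f = E.sum(axis=1)

# Standardised exchange matrix diag(f)^{-1/2} E diag(f)^{-1/2}, as in
# spectral clustering; its eigenvectors define the spatial modes.
Es = E / np.sqrt(np.outer(f, f))
vals, U = np.linalg.eigh(Es)

# The trivial mode sqrt(f) has eigenvalue 1; the other eigenvectors
# give the linear orthogonal combinations of spatial values.
print(np.round(vals, 3))
```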
Abstract:
The safe and responsible development of engineered nanomaterials (ENMs), nanotechnology-based materials and products, together with the definition of regulatory measures and implementation of "nano"-legislation in Europe, requires a widely supported scientific basis and sufficient high-quality data upon which to base decisions. At the very core of such a scientific basis is a general agreement on key issues related to risk assessment of ENMs, which encompass the key parameters to characterise ENMs, appropriate methods of analysis, and the best approach to express the effect of ENMs in widely accepted dose-response toxicity tests. The following major conclusions were drawn: Due to the high batch variability in the characteristics of commercially available and, to a lesser degree, laboratory-made ENMs, it is not possible to make general statements regarding the toxicity resulting from exposure to ENMs. 1) Concomitant with using the OECD priority list of ENMs, other criteria for selection of ENMs, such as relevance for mechanistic (scientific) studies or risk-assessment-based studies, widespread availability (and thus high expected volumes of use), or consumer concern (route of consumer exposure depending on application), could be helpful. The OECD priority list focuses on the validity of OECD tests; therefore source material will be first in scope for testing. However, for risk assessment it is much more relevant to have toxicity data for the material as present in the products/matrices to which humans and the environment are exposed. 2) For most, if not all, characteristics of ENMs, standardized analytical methods, though not necessarily validated, are available. Generally these methods are only able to determine one single characteristic, and some of them can be rather expensive. Practically, it is currently not feasible to fully characterise ENMs. Many techniques that are available to measure the same nanomaterial characteristic produce contrasting results (e.g. reported sizes of ENMs). It was recommended that at least two complementary techniques should be employed to determine a metric of ENMs. The first great challenge is to prioritise the metrics that are relevant in the assessment of biological dose-response relations and to develop analytical methods for characterising ENMs in biological matrices. It was generally agreed that one metric is not sufficient to fully describe ENMs. 3) Characterisation of ENMs in biological matrices starts with sample preparation. It was concluded that there is currently no standard approach/protocol for sample preparation to control agglomeration/aggregation and (re)dispersion. It was recommended that harmonisation should be initiated and that exchange of protocols should take place. The precise methods used to disperse ENMs should be specifically, yet succinctly, described within the experimental section of a publication. 4) ENMs need to be characterised in the matrix as it is presented to the test system (in vitro/in vivo). 5) Alternative approaches (e.g. biological or in silico systems) for the characterisation of ENMs are simply not possible with current knowledge. Contributors: Iseult Lynch, Hans Marvin, Kenneth Dawson, Markus Berges, Diane Braguer, Hugh J. Byrne, Alan Casey, Gordon Chambers, Martin Clift, Giuliano Elia, Teresa F. Fernandes, Lise Fjellsbø, Peter Hatto, Lucienne Juillerat, Christoph Klein, Wolfgang Kreyling, Carmen Nickel, and Vicki Stone.
Abstract:
The purpose of this work is to compile information on all test facilities around the world that have been used to study the blowdown phase of a large-break LOCA. The work is also intended to provide a basis for deciding whether it is necessary to build a new test facility for validating the computations of fluid-structure interaction codes. Before constructing the actual test facility, it would also be appropriate to build a smaller pilot facility for testing the measurement techniques to be used. Suitable measurement data are needed for validating the coupled computation of new CFD codes and structural analysis codes. These codes can be used, for example, to assess the structural integrity of reactor internals during the blowdown phase of a large-break LOCA. The report focuses on the test facilities found around the world, on the design criteria for a new test facility, and on general issues related to the topic. The report does not replace existing validation matrices, but it can be used as an aid in finding a large-break LOCA blowdown test facility suitable for validation purposes.
Abstract:
In the analysis of the stability of a variant of the Crank-Nicolson (C-N) method for the heat equation on a staggered grid, a class of non-symmetric matrices appears that has an interesting property: their eigenvalues are all real and lie within the unit circle. In this note we show how this class of matrices is derived from the C-N method and prove that their eigenvalues lie inside [-1, 1] for all values of m (the order of the matrix) and all values of a positive parameter, the stability parameter sigma. As the order of the matrix is general and the parameter sigma lies on the positive real line, this class of matrices turns out to be quite general and could be of interest as a test set for eigenvalue solvers, especially as examples of very large matrices. (C) 2010 Elsevier B.V. All rights reserved.
Abstract:
A MATHEMATICA notebook to compute the elements of the matrices which arise in the solution of the Helmholtz equation by the finite element method (nodal approximation) for tetrahedral elements of any approximation order is presented. The results of the notebook enable a fast computational implementation of finite element codes for high-order simplex 3D elements, reducing the overheads due to implementation and testing of the complex mathematical expressions obtained from the analytical integrations. These matrices can be used in a large number of applications related to physical phenomena described by the Poisson, Laplace and Schrödinger equations with anisotropic physical properties.
Abstract:
The mineral phase of dentin is located primarily within collagen fibrils. During development, bone or dentin collagen fibrils are formed first, and then water within the fibril is replaced with apatite crystallites. Mineralized collagen contains very little water. During dentin bonding, acid-etching of mineralized dentin solubilizes the mineral crystallites and replaces them with water. During the infiltration phase of dentin bonding, adhesive comonomers are supposed to replace all of the collagen water with adhesive monomers that are then polymerized into copolymers. The authors of a recently published review suggested that dental monomers were too large to enter and displace water from collagen fibrils. If that were true, the endogenous proteases bound to dentin collagen could be responsible for the unimpeded collagen degradation that underlies the poor durability of resin-dentin bonds. The current work studied the size-exclusion characteristics of dentin collagen using a gel-filtration-like column chromatography technique, with dentin powder in place of Sephadex. The elution volumes of test molecules, including adhesive monomers, revealed that adhesive monomers smaller than ∼1000 Da can freely diffuse into collagen water, while molecules of 10,000 Da begin to be excluded, and bovine serum albumin (66,000 Da) was fully excluded. These results validate the concept that dental monomers can permeate between collagen molecules during infiltration by etch-and-rinse adhesives in water-saturated matrices. © 2013 Acta Materialia Inc.
Abstract:
We analyzed 46,161 monthly test-day records of milk production from 7453 first lactations of crossbred dairy Gyr (Bos indicus) x Holstein cows. The following seven models were compared: standard multivariate model (M10), three reduced rank models fitting the first 2, 3, or 4 genetic principal components, and three models considering a 2-, 3-, or 4-factor structure for the genetic covariance matrix. Full rank residual covariance matrices were considered for all models. The model fitting the first two principal components (PC2) was the best according to the model selection criteria. Similar phenotypic, genetic, and residual variances were obtained with models M10 and PC2. The heritability estimates ranged from 0.14 to 0.21 and from 0.13 to 0.21 for models M10 and PC2, respectively. The genetic correlations obtained with model PC2 were slightly higher than those estimated with model M10. PC2 markedly reduced the number of parameters estimated and the time spent to reach convergence. We concluded that two principal components are sufficient to model the structure of genetic covariances between test-day milk yields. © FUNPEC-RP.
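The idea of replacing a full genetic covariance matrix with its leading principal components can be sketched as follows. The covariance matrix below is an invented AR(1)-like stand-in for a matrix of genetic covariances between monthly test-day yields, not the paper's estimated matrix; the snippet builds the rank-2 approximation analogous to model PC2 and reports the share of variance it captures:

```python
import numpy as np

# Invented 10x10 "genetic" covariance matrix for 10 monthly test-day
# yields, with a smooth AR(1)-like correlation decay (an assumption
# for illustration, not the paper's estimated matrix).
t = np.arange(10)
K = np.exp(-0.3 * np.abs(t[:, None] - t[None, :]))

# Rank-2 approximation from the two leading principal components,
# analogous to fitting only the first two genetic PCs (model PC2).
vals, vecs = np.linalg.eigh(K)
idx = np.argsort(vals)[::-1][:2]            # indices of the top 2 PCs
K2 = (vecs[:, idx] * vals[idx]) @ vecs[:, idx].T

explained = vals[idx].sum() / vals.sum()    # variance share captured
print(round(float(explained), 3))
```

The payoff mirrored in the abstract: the reduced-rank model estimates far fewer parameters than the full 10 × 10 matrix while retaining most of the genetic covariance structure.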