103 results for sample covariance matrix
Abstract:
The applicability of the protein phosphatase inhibition assay (PPIA) to the determination of okadaic acid (OA) and its acyl derivatives in shellfish samples has been investigated, using a recombinant PP2A and a commercial one. Mediterranean mussel, wedge clam, Pacific oyster and flat oyster have been chosen as model species. Shellfish matrix loading limits for the PPIA have been established according to the shellfish species and the enzyme source. A synergistic inhibitory effect has been observed in the presence of OA and shellfish matrix, which has been overcome by the application of a correction factor (0.48). Finally, Mediterranean mussel samples obtained from Ría de Arousa during a DSP closure associated with Dinophysis acuminata, determined as positive by the mouse bioassay, have been analysed with the PPIAs. The OA equivalent contents provided by the PPIAs correlate satisfactorily with those obtained by liquid chromatography–tandem mass spectrometry (LC–MS/MS).
Abstract:
We characterize the capacity-achieving input covariance for multi-antenna channels known instantaneously at the receiver and in distribution at the transmitter. Our characterization, valid for arbitrary numbers of antennas, encompasses both the eigenvectors and the eigenvalues. The eigenvectors are found for zero-mean channels with arbitrary fading profiles and a wide range of correlation and keyhole structures. For the eigenvalues, in turn, we present necessary and sufficient conditions as well as an iterative algorithm that exhibits remarkable properties: universal applicability, robustness and rapid convergence. In addition, we identify channel structures for which an isotropic input achieves capacity.
Abstract:
Biplots are graphical displays of data matrices based on the decomposition of a matrix as the product of two matrices. Elements of these two matrices are used as coordinates for the rows and columns of the data matrix, with an interpretation of the joint presentation that relies on the properties of the scalar product. Because the decomposition is not unique, there are several alternative ways to scale the row and column points of the biplot, which can cause confusion amongst users, especially when software packages are not united in their approach to this issue. We propose a new scaling of the solution, called the standard biplot, which applies equally well to a wide variety of analyses such as correspondence analysis, principal component analysis, log-ratio analysis and the graphical results of a discriminant analysis/MANOVA, in fact to any method based on the singular-value decomposition. The standard biplot also handles data matrices with widely different levels of inherent variance. Two concepts taken from correspondence analysis are important to this idea: the weighting of row and column points, and the contributions made by the points to the solution. In the standard biplot one set of points, usually the rows of the data matrix, optimally represents the positions of the cases or sample units, which are weighted and usually standardized in some way unless the matrix contains values that are comparable in their raw form. The other set of points, usually the columns, is represented in accordance with its contributions to the low-dimensional solution. As for any biplot, the projections of the row points onto vectors defined by the column points approximate the centred and (optionally) standardized data.
The method is illustrated with several examples to demonstrate how the standard biplot copes in different situations to give a joint map which needs only one common scale on the principal axes, thus avoiding the problem of enlarging or contracting the scale of one set of points to make the biplot readable. The proposal also solves the problem in correspondence analysis of low-frequency categories that are located on the periphery of the map, giving the false impression that they are important.
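As context for this abstract, the factorization it builds on can be sketched in a few lines. The scaling below is a generic "principal rows / standard columns" textbook choice, not the exact standard-biplot scaling the paper proposes, and the data are invented:

```python
import numpy as np

# Hypothetical sketch of the SVD decomposition underlying any biplot:
# the centred data matrix is factored into row coordinates F and
# column coordinates G, so that F @ G.T approximates the data.
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 4))           # 10 cases x 4 variables
Xc = X - X.mean(axis=0)                # centre each column

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 2                                  # retain two principal axes
F = U[:, :k] * s[:k]                   # row points, principal coordinates
G = Vt[:k].T                           # column points, standard coordinates

# Projections of row points onto column vectors approximate Xc;
# retaining all four axes would reconstruct Xc exactly.
print("rank-2 approximation error:", np.linalg.norm(Xc - F @ G.T))
```

With this scaling the row points are drawn in principal coordinates and the column points in standard coordinates; other biplot variants differ only in how the singular values are allocated between the two factors.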
Abstract:
The use of simple and multiple correspondence analysis is well established in social science research for understanding relationships between two or more categorical variables. By contrast, canonical correspondence analysis, which is a correspondence analysis with linear restrictions on the solution, has become one of the most popular multivariate techniques in ecological research. Multivariate ecological data typically consist of frequencies of observed species across a set of sampling locations, as well as a set of observed environmental variables at the same locations. In this context the principal dimensions of the biological variables are sought in a space that is constrained to be related to the environmental variables. This restricted form of correspondence analysis has many uses in social science research as well, as is demonstrated in this paper. We first illustrate the result that canonical correspondence analysis of an indicator matrix, restricted to be related to an external categorical variable, reduces to a simple correspondence analysis of a set of concatenated (or stacked) tables. Then we show how canonical correspondence analysis can be used to focus on, or partial out, a particular set of response categories in sample survey data. For example, the method can be used to partial out the influence of missing responses, which usually dominate the results of a multiple correspondence analysis.
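For context, the simple correspondence analysis to which the concatenated-table result reduces can be sketched directly from the SVD of standardized residuals. The counts below are invented for illustration:

```python
import numpy as np

# Minimal sketch of simple correspondence analysis (CA) of a
# contingency table, using hypothetical counts.
N = np.array([[20.,  5., 10.],
              [ 8., 15.,  7.],
              [ 4.,  6., 25.]])

P = N / N.sum()                        # correspondence matrix
r = P.sum(axis=1)                      # row masses
c = P.sum(axis=0)                      # column masses

# Standardized residuals and their SVD
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
U, s, Vt = np.linalg.svd(S)

F = (U * s) / np.sqrt(r)[:, None]      # row principal coordinates
G = (Vt.T * s) / np.sqrt(c)[:, None]   # column principal coordinates

print("total inertia:", (s ** 2).sum())
```

The sum of squared singular values equals the table's total inertia (the Pearson chi-square statistic divided by the grand total), and the last singular value is zero because the residuals are centred with respect to the masses.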
Abstract:
We introduce several exact nonparametric tests for finite-sample multivariate linear regressions, and compare their powers. This fills an important gap in the literature, where the only known nonparametric tests are either asymptotic or assume one covariate only.
Abstract:
We extend to score, Wald and difference test statistics the scaled and adjusted corrections to goodness-of-fit test statistics developed in Satorra and Bentler (1988a,b). The theory is framed in the general context of multisample analysis of moment structures, under general conditions on the distribution of observable variables. Computational issues, as well as the relation of the scaled and corrected statistics to the asymptotic robust ones, are discussed. A Monte Carlo study illustrates the comparative performance in finite samples of corrected score test statistics.
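For context, the scaled statistic in this family divides the raw statistic by an estimated scaling constant. One standard textbook form (the notation here is generic and not necessarily the paper's own) is:

```latex
\bar{T} \;=\; \frac{T}{\hat{c}},
\qquad
\hat{c} \;=\; \frac{\operatorname{tr}\!\big(\hat{U}\hat{\Gamma}\big)}{d},
```

where \(T\) is the uncorrected test statistic, \(d\) its degrees of freedom, \(\hat{\Gamma}\) the estimated asymptotic covariance matrix of the sample moments, and \(\hat{U}\) a weight matrix determined by the model and estimation method; the scaling makes the mean of \(\bar{T}\) match that of the reference chi-square distribution.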
Abstract:
Small sample properties are of fundamental interest when only limited data is available. Exact inference is limited by constraints imposed by specific nonrandomized tests and of course also by lack of more data. These effects can be separated, as we propose to evaluate a test by comparing its type II error to the minimal type II error among all tests for the given sample. Game theory is used to establish this minimal type II error; the associated randomized test is characterized as part of a Nash equilibrium of a fictitious game against nature. We use this method to investigate sequential tests for the difference between two means when outcomes are constrained to belong to a given bounded set. Tests of inequality and of noninferiority are included. We find that inference in terms of type II error based on a balanced sample cannot be improved by sequential sampling, or even by observing counterfactual evidence, provided there is a reasonable gap between the hypotheses.
Abstract:
In order to interpret the biplot it is necessary to know which points (usually the variables) are the important contributors to the solution, and this information is available separately as part of the biplot's numerical results. We propose a new scaling of the display, called the contribution biplot, which incorporates this diagnostic directly into the graphical display, showing the important contributors visually and thus facilitating the interpretation of the biplot and often simplifying the graphical representation considerably. The contribution biplot can be applied to a wide variety of analyses such as correspondence analysis, principal component analysis, log-ratio analysis and the graphical results of a discriminant analysis/MANOVA, in fact to any method based on the singular-value decomposition. In the contribution biplot one set of points, usually the rows of the data matrix, optimally represents the spatial positions of the cases or sample units, according to some distance measure that usually incorporates some form of standardization, unless all data are comparable in scale. The other set of points, usually the columns, is represented by vectors that are related to their contributions to the low-dimensional solution. A fringe benefit is that usually only one common scale for row and column points is needed on the principal axes, thus avoiding the problem of enlarging or contracting the scale of one set of points to make the biplot legible. Furthermore, this version of the biplot also solves the problem in correspondence analysis of low-frequency categories that are located on the periphery of the map, giving the false impression that they are important, when they are in fact contributing minimally to the solution.
Abstract:
In this paper I explore the issue of nonlinearity (both in the data-generation process and in the functional form that establishes the relationship between the parameters and the data) regarding the poor performance of the Generalized Method of Moments (GMM) in small samples. For this purpose I build a sequence of models, starting with a simple linear model and enlarging it progressively until I approximate a standard (nonlinear) neoclassical growth model. I then use simulation techniques to find the small-sample distribution of the GMM estimators in each of the models.
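The simulation strategy can be illustrated on the simplest member of such a sequence: a just-identified linear instrumental-variables model, whose GMM estimator has a closed form. Everything below (the data-generating parameters, sample size and replication count) is a hypothetical sketch, not the paper's actual design:

```python
import numpy as np

# Monte Carlo sketch of the small-sample distribution of a
# just-identified GMM/IV estimator, beta_hat = (z'y)/(z'x),
# in a linear model with an endogenous regressor.
rng = np.random.default_rng(42)
beta, n, reps = 1.0, 25, 5000          # true parameter, sample size, replications
draws = np.empty(reps)

for i in range(reps):
    z = rng.normal(size=n)             # instrument
    v = rng.normal(size=n)
    x = z + v                          # endogenous regressor
    u = 0.8 * v + rng.normal(size=n)   # error correlated with x
    y = beta * x + u
    draws[i] = (z @ y) / (z @ x)       # GMM estimate for this sample

print("median estimate:", np.median(draws))
```

Collecting the estimates across replications gives an empirical picture of the finite-sample distribution, which can then be compared with the asymptotic approximation; the same recipe extends to the nonlinear models in the sequence, replacing the closed form with a numerical minimization of the GMM criterion.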
Abstract:
We derive a new inequality for uniform deviations of averages from their means. The inequality is a common generalization of previous results of Vapnik and Chervonenkis (1974) and Pollard (1986). Using the new inequality we obtain tight bounds for empirical loss minimization learning.
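For context, one standard statement of the Vapnik-Chervonenkis result being generalized here (the constants vary across sources, and this particular form is not taken from the paper) is:

```latex
\Pr\!\left\{ \sup_{A \in \mathcal{A}}
\left| \frac{1}{n}\sum_{i=1}^{n} \mathbf{1}\{X_i \in A\} - P(A) \right| > \varepsilon \right\}
\;\le\; 8\, S_{\mathcal{A}}(n)\, e^{-n\varepsilon^{2}/32},
```

where \(S_{\mathcal{A}}(n)\) is the shatter coefficient of the class \(\mathcal{A}\); Pollard's extension replaces indicator functions by bounded real-valued functions and the shatter coefficient by covering numbers.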
Abstract:
In Duchenne muscular dystrophy (DMD), a persistently altered and reorganizing extracellular matrix (ECM) within inflamed muscle promotes damage and dysfunction. However, the molecular determinants of the ECM that mediate inflammatory changes and faulty tissue reorganization remain poorly defined. Here, we show that fibrin deposition is a conspicuous consequence of muscle-vascular damage in dystrophic muscles of DMD patients and mdx mice and that elimination of fibrin(ogen) attenuated dystrophy progression in mdx mice. These benefits appear to be tied to: (i) a decrease in leukocyte integrin α(M)β(2)-mediated proinflammatory programs, thereby attenuating counterproductive inflammation and muscle degeneration; and (ii) a release of satellite cells from persistent inhibitory signals, thereby promoting regeneration. Remarkably, Fibγ(390-396A) mice expressing a mutant form of fibrinogen with normal clotting function, but lacking the α(M)β(2) binding motif, ameliorated dystrophic pathology. Delivery of a fibrinogen/α(M)β(2) blocking peptide was similarly beneficial. Conversely, intramuscular fibrinogen delivery sufficed to induce inflammation and degeneration in fibrinogen-null mice. Thus, local fibrin(ogen) deposition drives dystrophic muscle inflammation and dysfunction, and disruption of fibrin(ogen)-α(M)β(2) interactions may provide a novel strategy for DMD treatment.
Abstract:
This work proposes novel network analysis techniques for multivariate time series. We define the network of a multivariate time series as a graph where vertices denote the components of the process and edges denote non-zero long-run partial correlations. We then introduce a two-step LASSO procedure, called NETS, to estimate high-dimensional sparse long-run partial correlation networks. This approach is based on a VAR approximation of the process and allows the long-run linkages to be decomposed into the contributions of the dynamic and contemporaneous dependence relations of the system. The large-sample properties of the estimator are analysed and we establish conditions for consistent selection and estimation of the non-zero long-run partial correlations. The methodology is illustrated with an application to a panel of U.S. blue chips.
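A much-simplified, hypothetical sketch of the first step of such a procedure is equation-by-equation LASSO estimation of a sparse VAR, here implemented with plain coordinate descent rather than the authors' own algorithm (the data, penalty level and dimensions are all invented):

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """LASSO by coordinate descent: min ||y - Xb||^2/(2T) + lam*||b||_1."""
    T, p = X.shape
    b = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0) / T
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]            # partial residual
            rho = X[:, j] @ r / T
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_ss[j]
    return b

# Simulate a sparse, stable VAR(1): y_t = A y_{t-1} + noise
rng = np.random.default_rng(1)
p, T = 5, 400
A = np.zeros((p, p)); A[0, 1] = 0.5; A[2, 3] = -0.4   # two true links
y = np.zeros((T + 1, p))
for t in range(T):
    y[t + 1] = y[t] @ A.T + rng.normal(scale=0.5, size=p)

# Step 1: sparse VAR coefficients, one penalized regression per equation
X, Y = y[:-1], y[1:]
A_hat = np.vstack([lasso_cd(X, Y[:, i], lam=0.05) for i in range(p)])
print(np.round(A_hat, 2))
```

The full procedure would add a second penalized step for the contemporaneous (residual) dependence and then combine the two into long-run partial correlations; the sketch only shows how the L1 penalty zeroes out the links that are absent from the true dynamics.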
Abstract:
The mechanical properties of the living cell are intimately related to cell signaling biology through cytoskeletal tension. The tension borne by the cytoskeleton (CSK) is in part generated internally by the actomyosin machinery and externally by stretch. Here we studied how cytoskeletal tension is modified during stretch and the tensional changes undergone by the sites of cell-matrix interaction. To this end we developed a novel technique to map cell-matrix stresses during application of stretch. We found that cell-matrix stresses increased with imposition of stretch but dropped below baseline levels on stretch release. Inhibition of the actomyosin machinery resulted in a larger relative increase in CSK tension with stretch and in a smaller drop in tension after stretch release. Cell-matrix stress maps showed that the loci of cell adhesion initially bearing greater stress also exhibited larger drops in traction forces after stretch removal. Our results suggest that stretch partially disrupts the actin-myosin apparatus and the cytoskeletal structures that support the largest CSK tension. These findings indicate that cells use the mechanical energy injected by stretch to rapidly reorganize their structure and redistribute tension.
Abstract:
Commuting refers to the fact that an important fraction of workers in developed countries do not reside close to their workplaces and therefore have to travel daily between home and work. Although most workers hold a job in the same municipality where they live or in a neighbouring one, an important fraction face long daily trips to get to their workplace and back home. Even if we divide Catalonia (Spain) into small aggregations of municipalities, trying to make them as close to local labour markets as possible, we find that some of them have a positive commuting balance, attracting many workers from other areas and providing local jobs for almost all their resident workers. Other zones, on the other hand, seem to be mostly residential, so an important fraction of their resident workers hold jobs in different local labour markets. Which variables influence an area's role as an attraction pole or a residential zone? In previous papers (Artís et al., 1998a, 2000; Romaní, 1999) we have identified the main individual variables that influence commuting by analysing a sample of Catalan workers and their commuting decisions. In this paper we analyse the territorial variables that influence commuting, using data for aggregate commuting flows in Catalonia from the 1991 and 1996 Spanish Population Censuses. These variables influence commuting in two different ways: a zone with a dense, well-developed economic structure will have a high density of jobs, so labour demand cannot be fulfilled with resident workers and spills over local boundaries. On the other hand, this economic activity has a series of side effects, such as pollution, congestion and high land prices, which make these areas less desirable to live in. Workers who can afford it may prefer to live in less populated, less congested zones, where they can find cheaper land, larger homes and a better quality of life.
The penalty of this decision is an increased commuting time. Our aim in this paper is to highlight the influence of local economic structure and amenities endowment on the workplace-residence location decision. A place-to-place logit commuting model is estimated for 1991 and 1996 in order to find the economic and amenities variables with the greatest influence on commuting decisions. From these models, we can outline a first approximation to the evolution of these variables in the 1986-1996 period. Data have been obtained from the aggregate flow travel matrices of the 1986, 1991 and 1996 Spanish Population Censuses.
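One simple way to fit such a model from aggregate flows is the grouped ("Berkson") logit, which regresses the empirical log-odds of commuting between zone pairs on zone characteristics. The sketch below uses invented variable names and noise-free synthetic data, so it illustrates the mechanics only, not the paper's specification:

```python
import numpy as np

# Grouped-logit sketch for aggregate commuting shares (hypothetical data).
rng = np.random.default_rng(7)
n = 200
distance    = rng.uniform(1, 60, n)    # km between zone pairs (invented)
job_density = rng.uniform(0, 5, n)     # jobs per resident at destination (invented)

# Synthetic commuting shares generated from a known logit
true_logit = 1.0 - 0.08 * distance + 0.6 * job_density
share = 1 / (1 + np.exp(-true_logit))

# Berkson's method: least squares on the empirical log-odds
X = np.column_stack([np.ones(n), distance, job_density])
z = np.log(share / (1 - share))
coef, *_ = np.linalg.lstsq(X, z, rcond=None)
print(coef)
```

Because the shares here are noise-free, the regression recovers the generating coefficients exactly; with real census flows one would weight the observations by flow size and add the full set of economic and amenity variables.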
Abstract:
This research provides a description of the process followed in order to assemble a "Social Accounting Matrix" for Spain corresponding to the year 2000 (SAMSP00). As argued in the paper, this process attempts to reconcile ESA95 conventions with the requirements of applied general equilibrium modelling. In particular, problems related to the level of aggregation of net taxation data, and to the valuation system used for expressing the monetary value of input-output transactions, have received special attention. Since the adoption of ESA95 conventions, input-output transactions have preferably been valued at basic prices, which imposes additional difficulties on modellers interested in computing applied general equilibrium models. This paper addresses these difficulties by developing a procedure that allows SAM-builders to change the valuation system of input-output transactions conveniently. In addition, this procedure produces new data related to net taxation information.
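The valuation change at the heart of such a procedure can be illustrated with a toy calculation: converting flows from purchasers' prices to basic prices by netting out taxes on products and trade-and-transport margins. All the numbers below are invented:

```python
import numpy as np

# Toy illustration of revaluing input-output flows:
# basic price = purchasers' price - net taxes on products - margins.
purchasers = np.array([110.0, 55.0, 230.0])   # flows at purchasers' prices
net_taxes  = np.array([ 10.0,  5.0,  20.0])   # taxes less subsidies on products
margins    = np.array([  8.0,  2.0,  15.0])   # trade and transport margins

basic = purchasers - net_taxes - margins
print(basic)   # 92, 48 and 195 at basic prices
```

In an actual SAM the netted-out taxes and margins are not discarded but reallocated to their own rows and columns, which is what generates the additional net taxation data the abstract mentions.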