933 results for Covariance matrices


Relevance:

20.00%

Abstract:

Four algorithms, all variants of Simultaneous Perturbation Stochastic Approximation (SPSA), are proposed. The original one-measurement SPSA uses an estimate of the gradient of the objective function L containing an additional bias term not seen in two-measurement SPSA. As a result, the asymptotic covariance matrix of the iterate convergence process has a bias term. We propose a one-measurement algorithm that eliminates this bias and has asymptotic convergence properties that make for easier comparison with two-measurement SPSA. The algorithm, under certain conditions, outperforms both forms of SPSA with the only overhead being the storage of a single measurement. We also propose a similar algorithm that uses perturbations obtained from normalized Hadamard matrices. Convergence with probability one (w.p. 1) of both algorithms is established. We extend measurement reuse to design two second-order SPSA algorithms and sketch the convergence analysis. Finally, we present simulation results on an illustrative minimization problem.
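
To make the distinction concrete, the sketch below (not the authors' exact estimator) contrasts the two-measurement and original one-measurement SPSA gradient estimates with a simple measurement-reuse variant, in which the stored value from the previous iterate replaces the second measurement. The step-size and perturbation-size sequences, the toy objective, and the function names are illustrative assumptions.

```python
import numpy as np

def spsa_gradient_estimates(L, theta, c, delta, y_prev=None):
    """Sketch of SPSA gradient estimates at theta.

    L      : objective function R^p -> R
    theta  : current iterate
    c      : perturbation size c_k
    delta  : +/-1 Bernoulli (or Hadamard-derived) perturbation vector
    y_prev : stored measurement from the previous iterate (for reuse)
    """
    y_plus = L(theta + c * delta)
    y_minus = L(theta - c * delta)

    g_two = (y_plus - y_minus) / (2.0 * c * delta)   # two-measurement SPSA
    g_one = y_plus / (c * delta)                     # original one-measurement SPSA (biased)
    # Measurement-reuse idea: difference the new measurement against the stored
    # one from the previous iteration instead of using it alone.
    g_reuse = None if y_prev is None else (y_plus - y_prev) / (c * delta)
    return g_two, g_one, g_reuse, y_plus

# Toy usage: minimise ||theta||^2 with the reuse estimate only.
rng = np.random.default_rng(0)
L = lambda th: float(th @ th)
theta, y_prev = np.ones(4), None
for k in range(1, 200):
    a_k, c_k = 0.1 / k, 0.1 / k**0.25
    delta = rng.choice([-1.0, 1.0], size=theta.size)
    _, _, g, y_prev = spsa_gradient_estimates(L, theta, c_k, delta, y_prev)
    if g is not None:
        theta = theta - a_k * g
```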

Relevance:

20.00%

Abstract:

Polymer nanocomposites containing different concentrations of Au nanoparticles have been investigated by small-angle X-ray scattering and electronic absorption spectroscopy. The variation in the surface plasmon resonance (SPR) band of the Au nanoparticles with concentration is described by a scaling law. The variation in the plasmon band of ReO3 nanoparticles embedded in polymers also follows a similar scaling law. The scaling behaviour reflects the distance dependence of plasmon coupling between metal nanoparticles in the polymer composites. (C) 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

Relevance:

20.00%

Abstract:

Given an n x n complex matrix A, let $\mu_A(x, y) := \frac{1}{n}\left|\{1 \le i \le n : \operatorname{Re}\lambda_i \le x,\ \operatorname{Im}\lambda_i \le y\}\right|$ be the empirical spectral distribution (ESD) of its eigenvalues $\lambda_i \in \mathbb{C}$, $i = 1, \ldots, n$. We consider the limiting distribution (both in probability and in the almost sure convergence sense) of the normalized ESD $\mu_{\frac{1}{\sqrt{n}} A_n}$ of a random matrix $A_n = (a_{ij})_{1 \le i, j \le n}$, where the random variables $a_{ij} - \mathbb{E}(a_{ij})$ are i.i.d. copies of a fixed random variable $x$ with unit variance. We prove a universality principle for such ensembles, namely, that the limit distribution in question is independent of the actual choice of $x$. In particular, in order to compute this distribution, one can assume that $x$ is real or complex Gaussian. As a related result, we show how laws for this ESD follow from laws for the singular value distribution of $\frac{1}{\sqrt{n}} A_n - zI$ for complex $z$. As a corollary, we establish the circular law conjecture (both almost surely and in probability), which asserts that $\mu_{\frac{1}{\sqrt{n}} A_n}$ converges to the uniform measure on the unit disc when the $a_{ij}$ have zero mean.
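
As a rough numerical illustration of the definitions above (not part of the paper), the following sketch computes the ESD of a scaled random matrix with Rademacher entries and checks the circular-law prediction that the eigenvalue mass within radius r of the origin approaches r^2; the matrix size and entry distribution are arbitrary choices.

```python
import numpy as np

def empirical_spectral_distribution(A, x, y):
    """mu_A(x, y): fraction of eigenvalues with Re <= x and Im <= y."""
    lam = np.linalg.eigvals(A)
    return np.mean((lam.real <= x) & (lam.imag <= y))

n = 500
rng = np.random.default_rng(1)
# Any zero-mean, unit-variance entry distribution should give the same limit
# (universality); here we use Rademacher +/-1 entries.
A_n = rng.choice([-1.0, 1.0], size=(n, n))
lam = np.linalg.eigvals(A_n / np.sqrt(n))

# Circular law check: the fraction of eigenvalues inside radius r of the origin
# should approach r^2 for 0 <= r <= 1 (uniform measure on the unit disc).
for r in (0.5, 0.8, 1.0):
    print(r, np.mean(np.abs(lam) <= r), r**2)

# The quarter-plane value mu(0, 0) should approach 1/4 under the circular law.
print(empirical_spectral_distribution(A_n / np.sqrt(n), 0.0, 0.0))
```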

Relevance:

20.00%

Abstract:

In this article we introduce and evaluate testing procedures for specifying the number k of nearest neighbours in the weights matrix of spatial econometric models. The spatial J-test is used for the specification search. Two testing procedures are suggested: an increasing neighbours testing procedure and a decreasing neighbours testing procedure. Simulations show that the increasing neighbours testing procedure can be used in large samples to determine k. The decreasing neighbours testing procedure is found to have low power and is not recommended for use in practice. An empirical example involving house price data shows how to use the testing procedures with real data.
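
The sketch below only constructs the row-standardised k-nearest-neighbour weights matrix W(k) that such procedures iterate over; the spatial J-test statistic itself is not shown, and the coordinates, function name and stopping idea in the comment are illustrative assumptions.

```python
import numpy as np

def knn_weights(coords, k):
    """Row-standardised k-nearest-neighbour spatial weights matrix W(k)."""
    n = coords.shape[0]
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # exclude self-neighbours
    W = np.zeros((n, n))
    for i in range(n):
        W[i, np.argsort(d[i])[:k]] = 1.0
    return W / W.sum(axis=1, keepdims=True)     # row-standardise

# Increasing-neighbours idea (sketch only): compare the model built with W(k)
# against the alternative built with W(k+1) and stop at the first non-rejection.
coords = np.random.default_rng(2).uniform(size=(50, 2))
W3, W4 = knn_weights(coords, 3), knn_weights(coords, 4)
```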

Relevance:

20.00%

Abstract:

Reorganizing a dataset so that its hidden structure can be observed is useful in any data analysis task. For example, detecting a regularity in a dataset helps us to interpret the data, compress the data, and explain the processes behind the data. We study datasets that come in the form of binary matrices (tables with 0s and 1s). Our goal is to develop automatic methods that bring out certain patterns by permuting the rows and columns. We concentrate on the following patterns in binary matrices: consecutive-ones (C1P), simultaneous consecutive-ones (SC1P), nestedness, k-nestedness, and bandedness. These patterns reflect specific types of interplay and variation between the rows and columns, such as continuity and hierarchies. Furthermore, their combinatorial properties are interlinked, which helps us to develop the theory of binary matrices and efficient algorithms. Indeed, we can detect all these patterns in a binary matrix efficiently, that is, in time polynomial in the size of the matrix.

Since real-world datasets often contain noise and errors, we rarely witness perfect patterns. Therefore we also need to assess how far an input matrix is from a pattern: we count the number of flips (from 0s to 1s or vice versa) needed to bring out the perfect pattern in the matrix. Unfortunately, for most patterns it is an NP-complete problem to find the minimum distance to a matrix that has the perfect pattern, which means that the existence of a polynomial-time algorithm is unlikely. To find patterns in datasets with noise, we need methods that are noise-tolerant and work in practical time with large datasets. The theory of binary matrices gives rise to robust heuristics that perform well on synthetic data and discover easily interpretable structures in real-world datasets: dialectal variation in spoken Finnish, a division of European locations by the hierarchies found in mammal occurrences, and co-occurring groups in network data.

In addition to determining the distance from a dataset to a pattern, we need to determine whether the pattern is significant or a mere occurrence of random chance. To this end, we use significance testing: we deem a dataset significant if it appears exceptional when compared to datasets generated from a certain null hypothesis. After detecting a significant pattern in a dataset, it is up to domain experts to interpret the results in terms of the application.
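
As a small illustration of two of the notions above (under the simplifying assumption that the column order is fixed, so no PQ-tree style permutation search is performed), the sketch below checks whether a binary matrix has the consecutive-ones form as given and counts the flips needed to make a single row's 1s contiguous.

```python
import numpy as np

def has_consecutive_ones(M):
    """True if, in the current column order, every row's 1s form one contiguous run."""
    for row in M:
        idx = np.flatnonzero(row)
        if idx.size and (idx[-1] - idx[0] + 1 != idx.size):
            return False
    return True

def flips_to_c1p_row(row):
    """Minimum 0->1/1->0 flips to make this row's 1s consecutive, column order fixed."""
    n, ones = row.size, int(row.sum())
    best = ones                                   # flip every 1 to 0: trivially consecutive
    for start in range(n):                        # try every window as the run of 1s
        for length in range(1, n - start + 1):
            window = row[start:start + length]
            flips = (length - int(window.sum())) + (ones - int(window.sum()))
            best = min(best, flips)
    return best

M = np.array([[0, 1, 1, 0], [1, 0, 1, 1]])
print(has_consecutive_ones(M))                    # False: second row has a gap
print(flips_to_c1p_row(M[1]))                     # 1 flip closes the gap
```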

Relevance:

20.00%

Abstract:

The eddy covariance (EC) flux measurement technique is based on measuring turbulent motions of air with accurate and fast instruments. For instance, measuring methane flux requires a fast methane gas analyser that samples the methane concentration at least ten times per second, together with a sonic anemometer that measures the three wind components at the same sampling rate. Previously, methane flux was almost impossible to measure with the EC technique owing to the lack of sufficiently fast gas analysers; during the last decade, however, new instruments have been developed and methane EC flux measurements have become more common. The performance of four methane gas analysers suitable for eddy covariance measurements is assessed in this thesis. The assessment and comparison were performed by analysing EC data obtained during summer 2010 (1.4.-26.10.) at Siikaneva fen. The four participating methane gas analysers are TGA-100A (Campbell Scientific Inc., USA), RMT-200 (Los Gatos Research, USA), G1301-f (Picarro Inc., USA) and Prototype-7700 (LI-COR Biosciences, USA). RMT-200 functioned most reliably throughout the measurement campaign, and the corresponding methane flux data had the smallest random error. In addition, methane fluxes calculated from the G1301-f and RMT-200 data agree remarkably well throughout the measurement campaign. The calculated cospectra and power spectra agree well with the corresponding temperature spectra. Prototype-7700 functioned for only slightly over one month at the beginning of the measurement campaign, so its accuracy and long-term performance are difficult to assess.
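
For reference, a minimal sketch of the core computation (not the thesis' full processing chain, which would also include despiking, coordinate rotation, detrending and spectral corrections): the EC flux is essentially the covariance of vertical wind speed and gas concentration over an averaging period sampled at roughly 10 Hz. The synthetic data and numbers below are assumptions.

```python
import numpy as np

def eddy_covariance_flux(w, c):
    """Turbulent flux as the covariance of vertical wind speed w (m/s) and a
    scalar concentration c over one averaging period (block averaging for the
    mean removal)."""
    w_prime = w - w.mean()
    c_prime = c - c.mean()
    return np.mean(w_prime * c_prime)

# Toy 30-minute block sampled at 10 Hz (the >= 10 samples per second mentioned above).
rng = np.random.default_rng(3)
n = 30 * 60 * 10
w = rng.normal(0.0, 0.3, n)
c = 1.9 + 0.05 * w + rng.normal(0.0, 0.02, n)   # CH4 signal correlated with w
print(eddy_covariance_flux(w, c))
```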

Relevance:

20.00%

Abstract:

A symmetrizer of a matrix A is a symmetric solution X of the matrix equation XA = A'X, where A' denotes the transpose of A. An exact matrix symmetrizer is computed by obtaining a general algorithm and superimposing a modified multiple-modulus residue arithmetic on it. A procedure is presented that uses a computed symmetrizer to obtain a symmetric matrix, called here an equivalent symmetric matrix, whose eigenvalues are the same as those of a given real nonsymmetric matrix.
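
A minimal dense-linear-algebra sketch of the defining equation (not the article's exact residue-arithmetic algorithm): find a symmetric X in the null space of the map X -> XA - A'X via an SVD over a basis of symmetric matrices. The function name and the example matrix are illustrative.

```python
import numpy as np

def symmetrizer(A):
    """A symmetric X with X A = A' X, found by solving the homogeneous linear
    system X A - A' X = 0 over the space of symmetric matrices (null space via SVD)."""
    n = A.shape[0]
    basis, cols = [], []
    for i in range(n):
        for j in range(i, n):
            E = np.zeros((n, n))
            E[i, j] = E[j, i] = 1.0
            basis.append(E)
            cols.append((E @ A - A.T @ E).ravel())
    M = np.column_stack(cols)          # maps symmetric coordinates to the residual
    _, _, Vt = np.linalg.svd(M)
    coeffs = Vt[-1]                    # direction with the smallest residual
    return sum(c * E for c, E in zip(coeffs, basis))

A = np.array([[2.0, 1.0], [0.0, 3.0]])
X = symmetrizer(A)
print(np.allclose(X @ A, A.T @ X), np.allclose(X, X.T))
```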

Relevance:

20.00%

Abstract:

The effect of using a spatially smoothed forward-backward covariance matrix on the performance of weighted eigen-based state-space methods/ESPRIT and weighted MUSIC for direction-of-arrival (DOA) estimation is analyzed. Expressions for the mean-squared error in the estimates of the signal zeros and the DOA estimates, along with some general properties of the estimates and optimal weighting matrices, are derived. A key result is that optimally weighted MUSIC and weighted state-space methods/ESPRIT have identical asymptotic performance. Moreover, by properly choosing the number of subarrays, the performance of unweighted state-space methods can be significantly improved. It is also shown that the mean-squared error in the DOA estimates is independent of the exact distribution of the source amplitudes. This provides a unified framework for DOA estimation with a uniformly spaced linear sensor array and for the time-series frequency estimation problem.
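
The covariance estimate analysed above can be formed as in the generic sketch below: forward-backward averaging of the sample covariance followed by spatial smoothing over overlapping subarrays. The array size, subarray size, source angles and noise level are chosen arbitrarily for illustration.

```python
import numpy as np

def fb_spatially_smoothed_covariance(X, m):
    """Forward-backward, spatially smoothed covariance estimate.

    X : snapshots of an M-element uniform linear array, shape (M, N)
    m : subarray size; L = M - m + 1 overlapping subarrays are averaged
    """
    M, N = X.shape
    R = X @ X.conj().T / N                       # sample covariance
    J = np.eye(M)[::-1]                          # exchange (flip) matrix
    R_fb = 0.5 * (R + J @ R.conj() @ J)          # forward-backward averaging
    L = M - m + 1
    return sum(R_fb[l:l + m, l:l + m] for l in range(L)) / L

# Toy usage: two narrowband sources on an 8-element half-wavelength ULA.
rng = np.random.default_rng(4)
M, N = 8, 200
angles = np.deg2rad([10.0, 25.0])
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(angles)))   # steering matrix
S = (rng.normal(size=(2, N)) + 1j * rng.normal(size=(2, N))) / np.sqrt(2)
X = A @ S + 0.1 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))
R_ss = fb_spatially_smoothed_covariance(X, m=6)
```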

Relevance:

20.00%

Abstract:

Presented here is a stable algorithm that uses Zohar's formulation of Trench's algorithm and computes the inverse of a symmetric Toeplitz matrix, including those with vanishing or near-vanishing leading minors. The algorithm is based on a diagonal modification of the matrix, and exploits the symmetry and persymmetry properties of the inverse matrix.
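
A rough sketch of the setting (not the Trench/Zohar recursion itself): solve T x = e_j columnwise with a Levinson-type Toeplitz solver, optionally after a diagonal modification T + alpha*I when leading minors (nearly) vanish, and check the persymmetry of the inverse. SciPy's solve_toeplitz, the alpha parameter and the sample numbers are assumptions, not the paper's algorithm.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def sym_toeplitz_inverse(t, alpha=0.0):
    """Inverse of the symmetric Toeplitz matrix with first row t, via a
    Levinson-type solve of T x = e_j for every unit vector e_j.  The alpha
    shift exposes the idea of a diagonal modification for (nearly) singular
    leading minors; how it is chosen and handled follows the paper, not shown."""
    n = len(t)
    t_mod = np.asarray(t, dtype=float).copy()
    t_mod[0] += alpha                           # diagonal modification
    return solve_toeplitz(t_mod, np.eye(n))

t = np.array([4.0, 1.0, 0.5, 0.25])
Tinv = sym_toeplitz_inverse(t)
# Persymmetry of the inverse: Tinv equals its own flip about the anti-diagonal.
print(np.allclose(Tinv, Tinv[::-1, ::-1].T))
```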

Relevance:

20.00%

Abstract:

We examine quark flavour mixing matrices for three and four generations using the recursive parametrization of U(n) and SU(n) matrices developed earlier. After a brief summary of the recursive parametrization, we obtain expressions for the independent rephasing invariants and also the constraints on them that arise from the requirement of mod symmetry of the flavour mixing matrix.
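
To illustrate what a rephasing invariant is (a generic example, not the article's recursive parametrization), the sketch below builds a quartet invariant V_ij V_kl V*_il V*_kj of a random 3x3 unitary matrix and verifies that it is unchanged when rows and columns are rephased by diagonal phase matrices.

```python
import numpy as np

def quartet_invariant(V, i, k, j, l):
    """Rephasing-invariant quartet V_ij V_kl V*_il V*_kj of a mixing matrix V."""
    return V[i, j] * V[k, l] * np.conj(V[i, l]) * np.conj(V[k, j])

# Toy 3x3 unitary "mixing matrix" from the QR decomposition of a random complex matrix.
rng = np.random.default_rng(5)
V, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))

# The moduli |V_ij| and the quartets are unchanged under V -> P V Q with
# diagonal phase matrices P, Q (rephasing of the quark fields).
P = np.diag(np.exp(1j * rng.uniform(0, 2 * np.pi, 3)))
Q = np.diag(np.exp(1j * rng.uniform(0, 2 * np.pi, 3)))
print(np.isclose(quartet_invariant(V, 0, 1, 0, 1),
                 quartet_invariant(P @ V @ Q, 0, 1, 0, 1)))
```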

Relevance:

20.00%

Abstract:

We give an elementary treatment of the defining representation and Lie algebra of the three-dimensional unitary unimodular group SU(3). The geometrical properties of the Lie algebra, which is an eight-dimensional real linear vector space, are developed in an SU(3)-covariant manner. The f and d symbols of SU(3) lead to two ways of 'multiplying' two vectors to produce a third, and several useful geometric and algebraic identities are derived. The axis-angle parametrization of SU(3) is developed as a generalization of that for SU(2), and the specifically new features are brought out. Application to the dynamics of three-level systems is outlined.
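
For concreteness (a standard construction whose normalisation conventions may differ from the article's), the f and d symbols can be read off from traces of commutators and anticommutators of the Gell-Mann matrices, as sketched below.

```python
import numpy as np

# Gell-Mann matrices lambda_1..lambda_8 (defining representation of su(3)).
l = np.zeros((8, 3, 3), dtype=complex)
l[0] = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
l[1] = [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]]
l[2] = [[1, 0, 0], [0, -1, 0], [0, 0, 0]]
l[3] = [[0, 0, 1], [0, 0, 0], [1, 0, 0]]
l[4] = [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]]
l[5] = [[0, 0, 0], [0, 0, 1], [0, 1, 0]]
l[6] = [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]]
l[7] = np.diag([1, 1, -2]) / np.sqrt(3)

def f_symbol(a, b, c):
    """Antisymmetric structure constant: f_abc = (1/4i) Tr([l_a, l_b] l_c)."""
    return np.trace((l[a] @ l[b] - l[b] @ l[a]) @ l[c]).imag / 4.0

def d_symbol(a, b, c):
    """Symmetric symbol: d_abc = (1/4) Tr({l_a, l_b} l_c)."""
    return np.trace((l[a] @ l[b] + l[b] @ l[a]) @ l[c]).real / 4.0

print(f_symbol(0, 1, 2))   # f_123 = 1
print(d_symbol(0, 0, 7))   # d_118 = 1/sqrt(3)
```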

Relevance:

20.00%

Abstract:

We study the problem of uncertainty in the entries of the kernel matrix arising in the SVM formulation. Using Chance Constraint Programming and a novel large-deviation inequality, we derive a formulation that is robust to such noise. The resulting formulation applies when the noise is Gaussian or has finite support. The formulation is in general non-convex, but in several cases of interest it reduces to a convex program. The problem of uncertainty in the kernel matrix is motivated by the real-world problem of classifying proteins when the structures are provided with some uncertainty. The formulation derived here naturally incorporates such uncertainty in a principled manner, leading to significant improvements over the state of the art.
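
The toy sketch below only illustrates the uncertain-kernel setting, not the paper's chance-constrained formulation: a nominal kernel is observed with Gaussian entry-wise noise, projected back onto the psd cone, and fed to an SVM with a precomputed kernel. The data, noise scale and the projection heuristic are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def psd_projection(K):
    """Project a (possibly indefinite, noisy) kernel matrix onto the psd cone
    by symmetrising and clipping negative eigenvalues."""
    K = 0.5 * (K + K.T)
    w, U = np.linalg.eigh(K)
    return (U * np.clip(w, 0.0, None)) @ U.T

# Toy setting: a nominal RBF kernel observed with Gaussian entry-wise noise.
rng = np.random.default_rng(6)
X = rng.normal(size=(60, 5))
y = (X[:, 0] + 0.3 * rng.normal(size=60) > 0).astype(int)
K_true = np.exp(-0.5 * np.sum((X[:, None] - X[None, :]) ** 2, axis=-1))
noise = rng.normal(scale=0.05, size=K_true.shape)
K_noisy = psd_projection(K_true + 0.5 * (noise + noise.T))

clf = SVC(kernel="precomputed", C=1.0).fit(K_noisy, y)
print(clf.score(K_noisy, y))
```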

Relevance:

20.00%

Abstract:

In this paper we consider the problem of learning an n × n kernel matrix from m (≥ 1) similarity matrices under a general convex loss. Past research has extensively studied the m = 1 case and has derived several algorithms that require sophisticated techniques like ACCP, SOCP, etc. The existing algorithms do not apply if one uses arbitrary losses and often cannot handle the m > 1 case. We present several provably convergent iterative algorithms, where each iteration requires either an SVM or a Multiple Kernel Learning (MKL) solver for the m > 1 case. One of the major contributions of the paper is to extend the well-known Mirror Descent (MD) framework to handle the Cartesian product of psd matrices. This novel extension leads to an algorithm, called EMKL, which solves the problem in O(m² log n²) iterations; in each iteration one solves an MKL problem involving m kernels and m eigen-decompositions of n × n matrices. By suitably defining a restriction on the objective function, a faster version of EMKL is proposed, called REKL, which avoids the eigen-decompositions. An alternative to both EMKL and REKL is also suggested, which requires only an SVM solver. Experimental results on a real-world protein data set involving several similarity matrices illustrate the efficacy of the proposed algorithms.
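
As a much-simplified scalar analogue of the mirror-descent idea (weights on the simplex rather than the Cartesian product of psd matrices used by EMKL), the sketch below learns convex combination weights for m similarity matrices by exponentiated-gradient steps on a squared Frobenius loss to a target kernel; the loss, step size and data are illustrative assumptions.

```python
import numpy as np

def emd_combine_kernels(Ks, K_target, steps=500, eta=0.05):
    """Entropic mirror descent on the simplex: learn weights mu for a convex
    combination sum_i mu_i K_i minimising the squared Frobenius distance to a
    target kernel.  A scalar-weight stand-in for matrix mirror descent."""
    m = len(Ks)
    mu = np.ones(m) / m
    for _ in range(steps):
        K = sum(w * Ki for w, Ki in zip(mu, Ks))
        grad = np.array([2.0 * np.sum((K - K_target) * Ki) for Ki in Ks])
        mu = mu * np.exp(-eta * grad)            # exponentiated-gradient step
        mu /= mu.sum()                           # re-normalise onto the simplex
    return mu

# Toy usage with m = 3 random psd similarity matrices.
rng = np.random.default_rng(7)
def rand_psd(n):
    B = rng.normal(size=(n, n))
    return B @ B.T / n
Ks = [rand_psd(20) for _ in range(3)]
K_target = 0.7 * Ks[0] + 0.3 * Ks[2]
print(emd_combine_kernels(Ks, K_target))
```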