109 results for Covariance matrix decomposition
Abstract:
In the economic literature, information deficiencies and computational complexities have traditionally been addressed through the aggregation of agents and institutions. In input-output modelling, researchers have been interested in the aggregation problem since the beginning of the 1950s. Extending the conventional input-output aggregation approach to social accounting matrix (SAM) models may help to identify the effects caused by the information problems and data deficiencies that usually appear in the SAM framework. This paper develops the theory of aggregation and applies it to the social accounting matrix model of multipliers. First, we define the concept of linear aggregation in a SAM database context. Second, we define the aggregated partitioned matrices of multipliers which are characteristic of the SAM approach. Third, we extend the analysis to other related concepts, such as aggregation bias and consistency in aggregation. Finally, we provide an illustrative example that shows the effects of aggregating a social accounting matrix model.
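As a rough illustration of the linear aggregation the abstract refers to, the NumPy sketch below aggregates a toy multiplier matrix with a hypothetical grouping matrix S and weighting matrix W, and computes a Theil-style aggregation-bias matrix; the numbers, the grouping and the specific bias definition are assumptions for illustration, not taken from the paper.

```python
import numpy as np

# Toy 4-account matrix of average expenditure propensities and its multipliers.
A = np.array([[0.10, 0.20, 0.05, 0.10],
              [0.15, 0.05, 0.20, 0.10],
              [0.05, 0.10, 0.10, 0.25],
              [0.20, 0.15, 0.10, 0.05]])
M = np.linalg.inv(np.eye(4) - A)            # disaggregated multiplier matrix

# Aggregate accounts {0,1} and {2,3}: S sums accounts, W spreads them back
# with hypothetical base-year within-group shares (so that S @ W = I).
S = np.array([[1., 1., 0., 0.],
              [0., 0., 1., 1.]])
w = np.array([0.6, 0.4, 0.7, 0.3])
W = S.T * w[:, None]

A_agg = S @ A @ W                           # aggregated coefficient matrix
M_agg = np.linalg.inv(np.eye(2) - A_agg)    # multipliers of the aggregated model

# Aggregation bias: difference between predicting with the aggregated model
# and aggregating the predictions of the full model. Consistency <=> zero bias.
bias = M_agg @ S - S @ M
print(np.round(bias, 4))
```

With this toy SAM the bias matrix is non-zero, which is the typical situation; it vanishes only under special conditions on the grouped accounts.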
Abstract:
Report on the scientific sojourn at the Philipps-Universität Marburg, Germany, from September to December 2007. In the first work, we employed the Energy-Decomposition Analysis (EDA) to investigate aromaticity in Fischer carbenes as it relates to the reaction mechanisms studied in my PhD thesis. This powerful tool, compared with other well-known aromaticity indices in the literature such as NICS, is useful not only for obtaining quantitative results but also for measuring the degree of conjugation or hyperconjugation in molecules. Our results showed, for the annelated benzenoid systems studied here, that the electron density is more concentrated on the outer rings than on the central one. Strain-induced bond localization plays a major role as a driving force in keeping the more substituted ring the less aromatic one. The discussion presented in this work was contrasted at different levels of theory to calibrate the method and ensure the consistency of our results. We think these conclusions can also be extended to arene chemistry to explain the aromaticity and regioselectivity of reactions found in those systems. In the second work, we employed the Turbomole program package and density functionals with the best performance in the current state of the art to explore reaction mechanisms in noble gas chemistry. In particular, we were interested in compounds of the form H--Ng--Ng--F (where Ng (noble gas) = Ar, Kr and Xe) and investigated the relative stability of these species. Our quantum chemical calculations predict that the dixenon compound HXeXeF has an activation barrier for decomposition of 11 kcal/mol, which should be large enough to identify the molecule in a low-temperature matrix. The other noble gas compounds present lower activation barriers and are therefore more labile and harder to observe experimentally.
Abstract:
Epipolar geometry is a key concept in computer vision, and estimating the fundamental matrix is the only way to compute it. This article surveys several methods of fundamental matrix estimation, which have been classified into linear methods, iterative methods and robust methods. All of these methods have been programmed and their accuracy analysed using real images. A summary, accompanied by experimental results, is given.
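As a concrete instance of one of the linear methods surveyed, the sketch below implements the classical normalized eight-point algorithm in NumPy; the formulation (Hartley normalization, SVD solution, rank-2 enforcement) is the textbook one and the input correspondences are assumed to be given, so it is an illustration rather than a reproduction of the article's code.

```python
import numpy as np

def normalize(pts):
    """Translate to the centroid and scale so the mean distance from the origin is sqrt(2)."""
    c = pts.mean(axis=0)
    s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
    T = np.array([[s, 0, -s * c[0]],
                  [0, s, -s * c[1]],
                  [0, 0, 1.0]])
    return np.c_[pts, np.ones(len(pts))] @ T.T, T

def eight_point(x1, x2):
    """Linear estimate of the fundamental matrix from >= 8 point correspondences."""
    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    # Each correspondence contributes one row of the design matrix, from x2' F x1 = 0.
    A = np.column_stack([p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
                         p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
                         p1[:, 0], p1[:, 1], np.ones(len(p1))])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)     # least-squares solution of A f = 0
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt       # enforce rank 2
    F = T2.T @ F @ T1                             # undo the normalization
    return F / F[2, 2]
```

Iterative and robust estimators (e.g. RANSAC or LMedS variants) typically wrap this linear solver inside a loop over minimal samples.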
Abstract:
Evolution of compositions in time, space, temperature or other covariates is frequent in practice. For instance, the radioactive decomposition of a sample changes its composition with time. Some of the involved isotopes decompose into other isotopes of the sample, thus producing a transfer of mass from some components to other ones, but preserving the total mass present in the system. This evolution is traditionally modelled as a system of ordinary differential equations for the mass of each component. However, this kind of evolution can be decomposed into a compositional change, expressed in terms of simplicial derivatives, and a mass evolution (constant in this example). A first result is that the simplicial system of differential equations is non-linear, despite some subcompositions behaving linearly. The goal is to study the characteristics of such simplicial systems of differential equations, such as linearity and stability. This is performed by extracting the compositional differential equations from the mass equations. Then, simplicial derivatives are expressed in coordinates of the simplex, thus reducing the problem to the standard theory of systems of differential equations, including stability. The characterisation of stability of these non-linear systems relies on the linearisation of the system of differential equations at the stationary point, if any. The eigenvalues of the linearised matrix and the associated behaviour of the orbits are the main tools. For a three-component system, these orbits can be plotted both in coordinates of the simplex and in a ternary diagram. A characterisation of processes with transfer of mass in closed systems in terms of stability is thus concluded. Two examples are presented for illustration, one of them being a radioactive decay.
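A minimal numerical sketch of the procedure described above, assuming a closed three-component system with reversible mass transfer and a standard ilr coordinate system for the simplex; the rate constants and the particular ilr basis are invented for illustration.

```python
import numpy as np

# Closed three-component system with reversible mass transfer A <-> B <-> C.
k1, k2, k3, k4 = 0.5, 0.3, 0.4, 0.2
K = np.array([[-k1,         k2,  0.0],
              [ k1, -(k2 + k3),  k4 ],
              [ 0.0,        k3, -k4 ]])      # columns sum to zero: total mass preserved

# An isometric log-ratio (ilr) basis for 3-part compositions.
V = np.array([[ 1/np.sqrt(2),  1/np.sqrt(6)],
              [-1/np.sqrt(2),  1/np.sqrt(6)],
              [ 0.0,          -2/np.sqrt(6)]])

def ilr(x):
    return np.log(x) @ V

def ilr_inv(z):
    x = np.exp(V @ z)
    return x / x.sum()

def rhs_ilr(z):
    """Compositional differential equation in ilr coordinates: non-linear in z."""
    x = ilr_inv(z)
    dx = K @ x                # mass equation restated for the closed composition
    return (dx / x) @ V       # chain rule for d/dt log(x), projected on the ilr basis

# Interior stationary composition: the null space of K, closed to sum one.
x_star = np.abs(np.linalg.svd(K)[2][-1])
x_star /= x_star.sum()

# Linearise numerically at the stationary point; the eigenvalues govern stability.
z_star, eps = ilr(x_star), 1e-6
J = np.column_stack([(rhs_ilr(z_star + eps * e) - rhs_ilr(z_star - eps * e)) / (2 * eps)
                     for e in np.eye(2)])
print(np.linalg.eigvals(J))   # negative real parts => stable stationary composition
```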
Abstract:
A novel technique for estimating the rank of the trajectory matrix in the local subspace affinity (LSA) motion segmentation framework is presented. This new rank estimation is based on the relationship between the estimated rank of the trajectory matrix and the affinity matrix built with LSA. The result is an enhanced model selection technique for trajectory matrix rank estimation by which it is possible to automate LSA, without requiring any a priori knowledge, and to improve the final segmentation.
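The paper's own criterion is built on the affinity matrix, but a minimal sketch of the kind of singular-value model selection it improves upon may clarify the setting: pick the rank that trades off the first discarded singular value against model complexity. Both the criterion form and the penalty value below are assumptions for illustration.

```python
import numpy as np

def estimate_rank(W, kappa=1e-3):
    """Heuristic rank selection for a trajectory matrix W: minimise
    sigma_{r+1}^2 / sum_{k<=r} sigma_k^2 + kappa * r over candidate ranks r."""
    s = np.linalg.svd(W, compute_uv=False)
    costs = [s[r] ** 2 / np.sum(s[:r] ** 2) + kappa * r for r in range(1, len(s))]
    return int(np.argmin(costs)) + 1

# Hypothetical noisy rank-4 trajectory matrix (2F x P, a single rigid motion).
rng = np.random.default_rng(0)
W = rng.normal(size=(20, 4)) @ rng.normal(size=(4, 30)) + 0.01 * rng.normal(size=(20, 30))
print(estimate_rank(W))   # expected output: 4
```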
Abstract:
Piecewise linear systems arise as mathematical models in many practical applications, often from the linearization of nonlinear systems. There are two main approaches to dealing with these systems, according to their continuous- or discrete-time nature. We propose an approach based on a state transformation, more particularly a partition of the phase portrait into different regions, where each subregion is modelled as a two-dimensional linear time-invariant system. Then the Takagi-Sugeno model, which is a combination of the local models, is calculated. The simulation results show that the Alpha partition is well suited for dealing with such systems.
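The sketch below illustrates the modelling idea in its simplest form: two hypothetical two-dimensional local LTI models blended by normalized membership functions into a Takagi-Sugeno model, simulated with forward Euler. The local matrices, the scheduling variable and the membership shape are invented and do not reproduce the paper's Alpha partition.

```python
import numpy as np

# Two hypothetical local two-dimensional LTI models (one per region of the phase portrait).
A1 = np.array([[0.0, 1.0], [-2.0, -0.5]])
A2 = np.array([[0.0, 1.0], [-5.0, -1.0]])

def memberships(x1, x1_min=-1.0, x1_max=1.0):
    """Normalized weights over the scheduling variable (here, the first state)."""
    h1 = np.clip((x1_max - x1) / (x1_max - x1_min), 0.0, 1.0)
    return np.array([h1, 1.0 - h1])          # h1 + h2 = 1

def ts_dynamics(x):
    """Takagi-Sugeno blend of the local models: x_dot = sum_i h_i(x) * A_i @ x."""
    h = memberships(x[0])
    return h[0] * (A1 @ x) + h[1] * (A2 @ x)

# Simple forward-Euler simulation of the blended system.
x, dt = np.array([0.8, 0.0]), 0.01
for _ in range(1000):
    x = x + dt * ts_dynamics(x)
print(x)
```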
Abstract:
The applicability of the protein phosphatase inhibition assay (PPIA) to the determination of okadaic acid (OA) and its acyl derivatives in shellfish samples has been investigated, using a recombinant PP2A and a commercial one. Mediterranean mussel, wedge clam, Pacific oyster and flat oyster have been chosen as model species. Shellfish matrix loading limits for the PPIA have been established, according to the shellfish species and the enzyme source. A synergistic inhibitory effect has been observed in the presence of OA and shellfish matrix, which has been overcome by the application of a correction factor (0.48). Finally, Mediterranean mussel samples obtained from Ría de Arousa during a DSP closure associated with Dinophysis acuminata, determined as positive by the mouse bioassay, have been analysed with the PPIAs. The OA equivalent contents provided by the PPIAs correlate satisfactorily with those obtained by liquid chromatography–tandem mass spectrometry (LC–MS/MS).
Abstract:
This paper performs an empirical decomposition of international inequality in the Ecological Footprint in order to quantify to what extent explanatory variables such as a country's affluence, economic structure, demographic characteristics, climate and technology contributed to international differences in natural resource consumption during the period 1993-2007. We use a Regression-Based Inequality Decomposition approach. As a result, the methodology qualitatively extends the results obtained in standard environmental impact regressions, as it encompasses further social dimensions of the Sustainable Development concept, i.e. equity within generations. The results obtained point to prioritizing policies that take into account both future and present generations.
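As a hedged illustration of what a regression-based inequality decomposition does, the sketch below applies a Fields/Shorrocks-style variance decomposition to synthetic country data: the share of overall inequality attributed to each regressor is cov(beta_k * x_k, y) / var(y). The data, the regressors and the choice of variance as the inequality index are stand-ins, not the paper's specification.

```python
import numpy as np

# Hypothetical country-level data: log footprint regressed on proxies for
# affluence, economic structure and demography.
rng = np.random.default_rng(1)
n = 120
X = np.column_stack([np.ones(n),
                     rng.normal(9.0, 1.0, n),     # log GDP per capita
                     rng.normal(0.3, 0.1, n),     # industry share
                     rng.normal(0.6, 0.2, n)])    # urbanisation rate
y = X @ np.array([0.5, 0.6, 1.5, 0.8]) + rng.normal(0.0, 0.3, n)

beta = np.linalg.lstsq(X, y, rcond=None)[0]

# Share of the variance of y attributed to regressor k: cov(beta_k * x_k, y) / var(y).
shares = [np.cov(beta[k] * X[:, k], y)[0, 1] / np.var(y, ddof=1)
          for k in range(1, X.shape[1])]
print({"affluence": round(shares[0], 3),
       "structure": round(shares[1], 3),
       "demography": round(shares[2], 3),
       "residual": round(1.0 - sum(shares), 3)})   # shares sum to one with the residual
```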
Abstract:
In the present paper we discuss and compare two different energy decomposition schemes: Mayer's Hartree-Fock energy decomposition into diatomic and monoatomic contributions [Chem. Phys. Lett. 382, 265 (2003)], and the Ziegler-Rauk dissociation energy decomposition [Inorg. Chem. 18, 1558 (1979)]. The Ziegler-Rauk scheme is based on a separation of a molecule into fragments, while Mayer's scheme can be used in cases where a fragmentation of the system into clearly separable parts is not possible. In the Mayer scheme, the density of a free atom is deformed to give the one-atom Mulliken density that subsequently interacts to give rise to the diatomic interaction energy. We give a detailed analysis of the diatomic energy contributions in the Mayer scheme and a closer look at the one-atom Mulliken densities. The Mulliken density ρA has a single large maximum around the nuclear position of the atom A, but exhibits slightly negative values in the vicinity of neighboring atoms. The main connecting point between both analysis schemes is the electrostatic energy. Both decomposition schemes utilize the same electrostatic energy expression, but differ in how fragment densities are defined. In the Mayer scheme, the electrostatic component originates from the interaction of the Mulliken densities, while in the Ziegler-Rauk scheme, the undisturbed fragment densities interact. The values of the electrostatic energy resulting from the two schemes differ significantly but typically have the same order of magnitude. Both methods are useful and complementary, since Mayer's decomposition focuses on the energy of the finally formed molecule, whereas the Ziegler-Rauk scheme describes the bond formation starting from undeformed fragment densities.
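Both schemes share the textbook Coulomb expression for the electrostatic interaction between two fragments, differing only in which densities ρ_A and ρ_B are inserted (one-atom Mulliken densities versus undisturbed fragment densities). A sketch of that common expression, written in atomic units and not quoted from either paper, is:

```latex
\begin{equation*}
E_{\mathrm{elst}}^{AB} =
  \sum_{a \in A}\sum_{b \in B} \frac{Z_a Z_b}{|\mathbf{R}_a - \mathbf{R}_b|}
  - \sum_{a \in A} \int \frac{Z_a\,\rho_B(\mathbf{r})}{|\mathbf{r} - \mathbf{R}_a|}\,d\mathbf{r}
  - \sum_{b \in B} \int \frac{Z_b\,\rho_A(\mathbf{r})}{|\mathbf{r} - \mathbf{R}_b|}\,d\mathbf{r}
  + \iint \frac{\rho_A(\mathbf{r}_1)\,\rho_B(\mathbf{r}_2)}{|\mathbf{r}_1 - \mathbf{r}_2|}\,d\mathbf{r}_1\,d\mathbf{r}_2 .
\end{equation*}
```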
Abstract:
The present work provides a generalization of Mayer's energy decomposition for the density-functional theory (DFT) case. It is shown that one- and two-atom Hartree-Fock energy components in Mayer's approach can be represented as the action of a one-atom potential VA on a one-atom density ρA or ρB. To treat the exchange-correlation term in the DFT energy expression in a similar way, the exchange-correlation energy density per electron is expanded into a linear combination of basis functions. Calculations carried out for a number of density functionals demonstrate that the DFT and Hartree-Fock two-atom energies agree to a reasonable extent with each other. The two-atom energies for strong covalent bonds are within the range of typical bond dissociation energies and are therefore a convenient computational tool for assessing individual bond strength in polyatomic molecules. For nonspecific nonbonding interactions, the two-atom energies are low. They can be either repulsive or slightly attractive, but the DFT results more frequently yield small attractive values compared to the Hartree-Fock case. The hydrogen bond in the water dimer is calculated to lie between the strong covalent and nonbonding interactions on the energy scale.
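The key step described here can be sketched as follows (a hedged reconstruction, assuming an auxiliary basis {χ_μ}; the paper's actual expansion and atomic assignment may differ): fitting the exchange-correlation energy density per electron in a basis gives the exchange-correlation energy the same one- and two-centre structure as the other energy terms,

```latex
\begin{align*}
\varepsilon_{\mathrm{xc}}(\mathbf{r}) &\approx \sum_{\mu} c_{\mu}\,\chi_{\mu}(\mathbf{r}), \\
E_{\mathrm{xc}} = \int \rho(\mathbf{r})\,\varepsilon_{\mathrm{xc}}(\mathbf{r})\,d\mathbf{r}
 &\approx \sum_{A}\sum_{\mu} c_{\mu} \int \rho_{A}(\mathbf{r})\,\chi_{\mu}(\mathbf{r})\,d\mathbf{r},
\end{align*}
```

where ρ is split into one-atom densities ρ_A; terms with χ_μ centred on A give one-atom contributions and terms with χ_μ centred on another atom B give two-atom contributions.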
Abstract:
We characterize the capacity-achieving input covariance for multi-antenna channels known instantaneously at the receiver and in distribution at the transmitter. Our characterization, valid for arbitrary numbers of antennas, encompasses both the eigenvectors and the eigenvalues. The eigenvectors are found for zero-mean channels with arbitrary fading profiles and a wide range of correlation and keyhole structures. For the eigenvalues, in turn, we present necessary and sufficient conditions as well as an iterative algorithm that exhibits remarkable properties: universal applicability, robustness and rapid convergence. In addition, we identify channel structures for which an isotropic input achieves capacity.
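A naive numerical stand-in for the kind of iterative eigenvalue (power) optimization described, assuming a transmit-correlated Rayleigh channel where the optimal input eigenvectors coincide with those of the transmit correlation matrix; the correlation matrix, step sizes and the crude projected-gradient search are all invented for illustration and are not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
nt, nr, snr, n_mc = 4, 4, 10.0, 500

# Transmit-correlated Rayleigh channel: H = Hw @ Rt^{1/2} (hypothetical Rt).
Rt = np.array([[1.0, 0.7, 0.4, 0.2],
               [0.7, 1.0, 0.7, 0.4],
               [0.4, 0.7, 1.0, 0.7],
               [0.2, 0.4, 0.7, 1.0]])
w, U = np.linalg.eigh(Rt)                     # input eigenvectors: those of Rt
Hw = (rng.normal(size=(n_mc, nr, nt)) + 1j * rng.normal(size=(n_mc, nr, nt))) / np.sqrt(2)
H = Hw @ (U @ np.diag(np.sqrt(w)) @ U.T)

def ergodic_rate(p):
    """Monte Carlo estimate of E[log det(I + snr * H Q H^H)] with Q = U diag(p) U^H."""
    Q = U @ np.diag(p) @ U.T
    M = np.eye(nr) + snr * H @ Q @ H.conj().transpose(0, 2, 1)
    return np.linalg.slogdet(M)[1].mean()

# Crude projected-gradient search over the input eigenvalues (powers).
p = np.full(nt, 1.0 / nt)
for _ in range(100):
    g = np.array([(ergodic_rate(p + 1e-4 * e) - ergodic_rate(p)) / 1e-4 for e in np.eye(nt)])
    p = np.maximum(p + 0.05 * g, 0.0)
    p /= p.sum()                              # keep the unit transmit-power constraint
print(np.round(p, 3), round(float(ergodic_rate(p)), 3))
```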
Abstract:
Biplots are graphical displays of data matrices based on the decomposition of a matrix as the product of two matrices. Elements of these two matrices are used as coordinates for the rows and columns of the data matrix, with an interpretation of the joint presentation that relies on the properties of the scalar product. Because the decomposition is not unique, there are several alternative ways to scale the row and column points of the biplot, which can cause confusion amongst users, especially when software packages are not united in their approach to this issue. We propose a new scaling of the solution, called the standard biplot, which applies equally well to a wide variety of analyses such as correspondence analysis, principal component analysis, log-ratio analysis and the graphical results of a discriminant analysis/MANOVA, in fact to any method based on the singular-value decomposition. The standard biplot also handles data matrices with widely different levels of inherent variance. Two concepts taken from correspondence analysis are important to this idea: the weighting of row and column points, and the contributions made by the points to the solution. In the standard biplot one set of points, usually the rows of the data matrix, optimally represent the positions of the cases or sample units, which are weighted and usually standardized in some way unless the matrix contains values that are comparable in their raw form. The other set of points, usually the columns, is represented in accordance with their contributions to the low-dimensional solution. As for any biplot, the projections of the row points onto vectors defined by the column points approximate the centred and (optionally) standardized data. The method is illustrated with several examples to demonstrate how the standard biplot copes in different situations to give a joint map which needs only one common scale on the principal axes, thus avoiding the problem of enlarging or contracting the scale of one set of points to make the biplot readable. The proposal also solves the problem in correspondence analysis of low-frequency categories that are located on the periphery of the map, giving the false impression that they are important.
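A compact sketch of the underlying mechanics, using a plain SVD on a centred and standardized toy matrix: the two factor matrices provide row and column coordinates, squared column coordinates give contributions, and the product of the two sets of coordinates approximates the transformed data. The scaling chosen here only roughly mimics the standard biplot proposed in the paper.

```python
import numpy as np

# Toy data matrix: 10 samples x 4 variables with widely different variances.
rng = np.random.default_rng(2)
X = rng.normal(size=(10, 4)) * np.array([1.0, 5.0, 0.1, 50.0])

# Centre and standardize the columns, then decompose: the biplot rests on
# writing the (transformed) matrix as a product of two coordinate matrices.
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
U, s, Vt = np.linalg.svd(Z / np.sqrt(len(Z)), full_matrices=False)

rows = np.sqrt(len(Z)) * U[:, :2] * s[:2]   # sample points in principal coordinates
cols = Vt.T[:, :2]                          # variable points in standard coordinates
contrib = cols ** 2                         # contributions of variables to axes 1 and 2

# Projections of row points onto column directions approximate Z (rank-2).
print(np.round(rows @ cols.T - Z, 2))
print(np.round(contrib.sum(axis=0), 2))     # each axis's contributions sum to 1
```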
Abstract:
When continuous data are coded to categorical variables, two types of coding are possible: crisp coding in the form of indicator, or dummy, variables with values either 0 or 1; or fuzzy coding, where each observation is transformed to a set of "degrees of membership" between 0 and 1, using so-called membership functions. It is well known that the correspondence analysis of crisp coded data, namely multiple correspondence analysis, yields principal inertias (eigenvalues) that considerably underestimate the quality of the solution in a low-dimensional space. Since the crisp data only code the categories to which each individual case belongs, an alternative measure of fit is simply to count how well these categories are predicted by the solution. Another approach is to consider multiple correspondence analysis equivalently as the analysis of the Burt matrix (i.e., the matrix of all two-way cross-tabulations of the categorical variables), and then perform a joint correspondence analysis to fit just the off-diagonal tables of the Burt matrix - the measure of fit is then computed as the quality of explaining these tables only. The correspondence analysis of fuzzy coded data, called "fuzzy multiple correspondence analysis", suffers from the same problem, albeit attenuated. Again, one can count how many correct predictions are made of the categories which have the highest degree of membership. But here one can also defuzzify the results of the analysis to obtain estimated values of the original data, and then calculate a measure of fit in the familiar percentage form, thanks to the resultant orthogonal decomposition of variance. Furthermore, if one thinks of fuzzy multiple correspondence analysis as explaining the two-way associations between variables, a fuzzy Burt matrix can be computed and the same strategy as in the crisp case can be applied to analyse the off-diagonal part of this matrix. In this paper these alternative measures of fit are defined and applied to a data set of continuous meteorological variables, which are coded crisply and fuzzily into three categories. Measuring the fit is further discussed when the data set consists of a mixture of discrete and continuous variables.
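A small sketch of the fuzzy coding step for one continuous variable, using triangular membership functions hinged at the minimum, median and maximum; the hinge choice and the example values are assumptions, not necessarily those used for the meteorological data in the paper.

```python
import numpy as np

def fuzzy_code(x):
    """Fuzzy-code a continuous variable into three categories (low, mid, high)
    with triangular membership functions; the three memberships sum to 1."""
    lo, mid, hi = np.min(x), np.median(x), np.max(x)
    m_low = np.clip((mid - x) / (mid - lo), 0.0, 1.0)
    m_high = np.clip((x - mid) / (hi - mid), 0.0, 1.0)
    return np.column_stack([m_low, 1.0 - m_low - m_high, m_high])

def crisp_code(x):
    """Crisp (indicator) coding of the same variable, for comparison."""
    F = fuzzy_code(x)
    C = np.zeros_like(F)
    C[np.arange(len(x)), F.argmax(axis=1)] = 1.0
    return C

temps = np.array([2.0, 7.5, 11.0, 15.5, 21.0, 26.0])   # e.g. a temperature variable
print(fuzzy_code(temps))
print(crisp_code(temps))
```

A fuzzy Burt matrix is then obtained from the cross-products of such fuzzy-coded blocks, in the same way the ordinary Burt matrix is built from the indicator blocks.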