871 results for Matrix decomposition
Abstract:
This work reports the sensitization effect of polymer matrices on the photoluminescence properties of diaquatris(thenoyltrifluoroacetonate)europium(III), [Eu(tta)3(H2O)2], doped into poly-beta-hydroxybutyrate (PHB) films at 1, 3, 5, 7 and 10% doping (by mass). TGA results indicated that the Eu3+ complex precursor was immobilized in the polymer matrix through interaction between the Eu3+ complex and the oxygen atoms of the PHB polymer when the rare earth complex was incorporated into the polymeric host. The thermal behaviour of these luminescent systems is similar to that of the undoped polymer; however, the onset temperature of decomposition (T_onset) decreases as the complex doping concentration increases. The emission spectra of the Eu3+ complex doped PHB films recorded at 298 K exhibited the five characteristic bands arising from the 5D0 -> 7FJ intraconfigurational transitions (J = 0-4). The significant increase in the quantum efficiency (eta) of the doped films revealed that the polymer matrix acts as an efficient co-sensitizer for the Eu3+ luminescent centres and therefore enhances the quantum efficiency of the emitting 5D0 level. The luminescence intensity decreases, however, when the precursor concentration in the doped polymer exceeds 5%, where a saturation effect is observed, indicating that changes in the polymeric matrix alter the absorption properties of the film and consequently quench the luminescence.
Abstract:
A Fe-22.5%Cr-4.53%Ni-3.0%Mo duplex stainless steel was solution treated at 1325 °C for 1 h, quenched in water and isothermally treated at 900 °C for 5,000 s. The crystallography of austenite was studied using the EBSD technique. Intragranular austenite particles formed from delta ferrite are shown to nucleate on inclusions and to be subdivided into twin-related sub-particles. Intragranular austenite appears to have planar-only orientation relationships with the ferrite matrix, close to Kurdjumov-Sachs and Nishiyama-Wassermann, but not related to a conjugate direction. Samples treated at 900 °C underwent sparse formation of sigma phase and pronounced growth of elongated austenite particles, very similar to acicular ferrite.
Abstract:
A multilayer organic film containing poly(acrylic acid) and chitosan was fabricated on a metallic support by means of the layer-by-layer technique. This film was used as a template for calcium carbonate crystallization and presents two possible binding sites where nucleation may be initiated: either calcium ions acting as counterions of the polyelectrolyte, or those trapped in the template gel network formed by the polyelectrolyte chains. Calcium carbonate formation was carried out by carbon dioxide diffusion, where CO2 was generated from ammonium carbonate decomposition. The CaCO3 nanocrystals obtained formed a dense, homogeneous, and continuous film. Vaterite and calcite crystalline forms of CaCO3 were detected.
Abstract:
Distribution systems, eigenvalue analysis, nodal admittance matrix, power quality, spectral decomposition
Abstract:
This paper introduces a new neurofuzzy model construction and parameter estimation algorithm for observed finite data sets, based on a Takagi-Sugeno (T-S) inference mechanism and a new extended Gram-Schmidt orthogonal decomposition algorithm, for the modeling of a priori unknown dynamical systems in the form of a set of fuzzy rules. The first contribution of the paper is the introduction of a one-to-one mapping between a fuzzy rule-base and a model matrix feature subspace using the T-S inference mechanism. This link enables the numerical properties associated with a rule-based matrix subspace, the relationships among these matrix subspaces, and the correlation between the output vector and a rule-base matrix subspace to be investigated and extracted as rule-based knowledge to enhance model transparency. The matrix subspace spanned by a fuzzy rule is initially derived as the input regression matrix multiplied by a weighting matrix consisting of the corresponding fuzzy membership functions over the training data set. Model transparency is explored by deriving an equivalence between an A-optimality experimental design criterion of the weighting matrix and the average model output sensitivity to the fuzzy rule, so that rule-bases can be effectively measured by their identifiability via the A-optimality criterion. The A-optimality criterion of the weighting matrices of the fuzzy rules is used to construct an initial model rule-base. An extended Gram-Schmidt algorithm is then developed to estimate the parameter vector for each rule. This new algorithm decomposes the model rule-bases via an orthogonal subspace decomposition approach, so as to enhance model transparency with the capability of interpreting the derived rule-base energy level. This new approach is computationally simpler than the conventional Gram-Schmidt algorithm for resolving high-dimensional regression problems, for which it is desirable to decompose complex models into a few submodels rather than a single model with a large number of input variables and the associated curse of dimensionality. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
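The extended Gram-Schmidt procedure described above builds on the classical column-wise orthogonalization. A minimal numpy sketch of that classical step (a generic illustration only; the paper's extended variant operates on whole rule-base subspaces rather than single columns):

```python
import numpy as np

def gram_schmidt(A):
    """Classical Gram-Schmidt QR factorization: A = Q @ R with
    orthonormal columns in Q. Each column of A is stripped of its
    projections onto the previously orthogonalized columns."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for k in range(n):
        v = A[:, k].copy()
        for j in range(k):
            R[j, k] = Q[:, j] @ A[:, k]   # projection coefficient
            v -= R[j, k] * Q[:, j]        # remove that component
        R[k, k] = np.linalg.norm(v)
        Q[:, k] = v / R[k, k]
    return Q, R

# Toy regression matrix with linearly independent columns.
A = np.array([[1.0, 1.0], [0.0, 1.0], [1.0, 0.0]])
Q, R = gram_schmidt(A)
```

The factorization reproduces A and yields orthonormal columns, which is what makes the per-subspace energy interpretation in the paper possible.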
Abstract:
A new robust neurofuzzy model construction algorithm is introduced for the modeling of a priori unknown dynamical systems from observed finite data sets in the form of a set of fuzzy rules. Based on a Takagi-Sugeno (T-S) inference mechanism, a one-to-one mapping between a fuzzy rule-base and a model matrix feature subspace is established. This link enables rule-based knowledge to be extracted from the matrix subspace to enhance model transparency. In order to achieve maximal model robustness and sparsity, a new robust extended Gram-Schmidt (G-S) method is introduced via two effective and complementary approaches: regularization and D-optimality experimental design. Model rule-bases are decomposed into orthogonal subspaces, so as to enhance model transparency with the capability of interpreting the derived rule-base energy level. A locally regularized orthogonal least squares algorithm, combined with D-optimality for subspace-based rule selection, is extended for fuzzy rule regularization and subspace-based information extraction. By using a weighting for the D-optimality cost function, the entire model construction procedure becomes automatic. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
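The D-optimality criterion used for rule selection scores a candidate subspace by the determinant of its information matrix. A minimal sketch with hypothetical candidate matrices (the paper's weighting scheme and local regularization are not reproduced):

```python
import numpy as np

def d_optimality(W):
    """D-optimality score of a candidate weighting/regression matrix W:
    det(W^T W). Larger values indicate a better-conditioned, more
    identifiable rule subspace."""
    W = np.asarray(W, dtype=float)
    return np.linalg.det(W.T @ W)

# Hypothetical candidates: well-spread columns vs nearly collinear ones.
well_spread = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
collinear = np.array([[1.0, 1.0], [1.0, 1.001], [1.0, 0.999]])
```

D-optimality favors the well-spread candidate, which is why it serves as an automatic identifiability-based selection rule.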
Abstract:
Increasing efforts exist in integrating different levels of detail in models of the cardiovascular system. For instance, one-dimensional representations are employed to model the systemic circulation. In this context, effective black-box decomposition strategies for one-dimensional networks are needed, so as to: (i) employ domain decomposition strategies for large systemic models (1D-1D coupling) and (ii) provide the conceptual basis for dimensionally-heterogeneous representations (1D-3D coupling, among other possibilities). The strategy proposed in this article works for both scenarios, though the several applications shown to illustrate its performance focus on the 1D-1D coupling case. A one-dimensional network is decomposed in such a way that each coupling point connects exactly two of the sub-networks. At each of the M connection points two unknowns are defined: the flow rate and the pressure. These 2M unknowns are determined by 2M equations, since each sub-network provides one (non-linear) equation per coupling point. It is shown how to build the 2M x 2M non-linear system with an arbitrary and independent choice of boundary conditions for each of the sub-networks. The idea is then to solve this non-linear system to convergence, which guarantees strong coupling of the complete network. In other words, if the non-linear solver converges at each time step, the solution coincides with what would be obtained by modeling the whole network monolithically. The decomposition thus imposes no stability restriction on the choice of the time step size. Effective iterative strategies for the non-linear system that preserve the black-box character of the decomposition are then explored. Several variants of matrix-free Broyden's and Newton-GMRES algorithms are assessed as numerical solvers by comparing their performance on sub-critical wave propagation problems ranging from academic test cases to realistic cardiovascular applications. A specific variant of Broyden's algorithm is identified and recommended on the basis of its computational cost and reliability.
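A textbook sketch of the matrix-free "good" Broyden iteration mentioned above, applied to a toy two-unknown system standing in for the coupling residuals (this is a generic illustration, not the specific variant the paper recommends):

```python
import numpy as np

def fd_jacobian(F, x, h=1e-7):
    """Forward-difference Jacobian, used only to seed the iteration."""
    f0 = F(x)
    J = np.zeros((len(f0), len(x)))
    for i in range(len(x)):
        e = np.zeros(len(x))
        e[i] = h
        J[:, i] = (F(x + e) - f0) / h
    return J

def broyden(F, x0, tol=1e-10, max_iter=100):
    """'Good' Broyden iteration: after the initial seed, the Jacobian
    approximation B is updated from residual differences alone, so F
    can stay a black box, the property the decomposition relies on."""
    x = np.asarray(x0, dtype=float)
    B = fd_jacobian(F, x)
    f = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(f) < tol:
            break
        dx = np.linalg.solve(B, -f)
        x = x + dx
        f_new = F(x)
        # rank-one Broyden update from the residual difference
        B += np.outer(f_new - f - B @ dx, dx) / (dx @ dx)
        f = f_new
    return x

# Toy 2-unknown "coupling" system with root (1, 1); each equation plays
# the role of one sub-network's pressure/flow residual.
F = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0, x[0] - x[1]])
root = broyden(F, np.array([1.5, 0.8]))
```

The solver only ever evaluates F, never its analytic Jacobian, which is what "black-box" means in the 1D-1D coupling setting.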
Abstract:
The eigenvalue densities of two random matrix ensembles, the Wigner Gaussian matrices and the Wishart covariance matrices, are decomposed into the contributions of each individual eigenvalue distribution. It is shown that the fluctuations of all eigenvalues, for medium matrix sizes, are described with good precision by nearly normal distributions.
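The per-eigenvalue decomposition itself is not reproduced here, but the Wigner ensemble it starts from is easy to sample; a minimal numpy sketch, scaled so the spectrum converges to the semicircle on [-2, 2]:

```python
import numpy as np

def wigner_eigenvalues(n, rng):
    """Eigenvalues of an n x n GOE (Wigner Gaussian) matrix. Dividing
    by sqrt(2n) gives off-diagonal variance 1/n, so for large n the
    eigenvalue density approaches the semicircle law on [-2, 2]."""
    A = rng.normal(size=(n, n))
    H = (A + A.T) / np.sqrt(2.0 * n)  # symmetric Gaussian matrix
    return np.linalg.eigvalsh(H)      # sorted real eigenvalues

ev = wigner_eigenvalues(200, np.random.default_rng(0))
```

Sorting the sampled eigenvalues over many realizations and histogramming each position separately is the kind of individual-eigenvalue statistic the abstract refers to.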
Abstract:
The organic fraction of urban solid residues disposed of in sanitary landfills yields biogas and leachate during decomposition, both of which are sources of pollution. Leachate is the liquid resulting from the decomposition of substances contained in solid residues, and its composition includes both organic and inorganic substances. The literature shows increasing use of thermoanalytical techniques to study samples of environmental interest, and thermogravimetry is accordingly used in this research. Thermogravimetric studies (TG curves) carried out on leachate and residues show similarities in thermal behavior despite the complex composition of the samples. Residue samples were collected from landfills, composting plants and sewage treatment stations; the leachate, after treatment, was submitted for thermal analysis. Kinetic parameters were determined using the Flynn-Wall-Ozawa method. They show little divergence, which can be attributed to different decomposition reactions, to the presence of organic compounds in different phases of decomposition with structures modified during the degradation process, and to the experimental conditions of the analysis.
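The Flynn-Wall-Ozawa step reduces to an isoconversional straight-line fit; in Doyle's approximation, log10(beta) is linear in 1/T at fixed conversion with slope -0.4567*Ea/R. A sketch with hypothetical numbers, generating temperatures from an assumed activation energy and fitting it back:

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def fwo_activation_energy(betas, T_alpha):
    """Flynn-Wall-Ozawa fit: straight line of log10(beta) against 1/T
    at fixed conversion; the slope -0.4567*Ea/R yields Ea."""
    slope = np.polyfit(1.0 / np.asarray(T_alpha), np.log10(betas), 1)[0]
    return -slope * R / 0.4567

# Hypothetical synthetic check: an assumed Ea of 120 kJ/mol and an
# arbitrary FWO intercept generate the iso-conversion temperatures.
Ea_true = 120e3                             # J/mol, assumed
betas = np.array([5.0, 10.0, 20.0, 40.0])   # heating rates, K/min
intercept = 12.0                            # arbitrary intercept
T_alpha = 0.4567 * Ea_true / R / (intercept - np.log10(betas))
Ea_est = fwo_activation_energy(betas, T_alpha)
```

On exactly linear synthetic data the fit returns the assumed Ea; real TG data scatter around the line, which is where the divergence the abstract mentions shows up.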
Abstract:
This paper presents an analysis of the numerical conditioning of the Hessian matrix of the Lagrangian in the modified barrier function Lagrangian (MBFL) method and the primal-dual logarithmic barrier (PDLB) method, both obtained in the course of solving an optimal power flow (OPF) problem. The analysis is carried out as a comparative study through the singular value decomposition (SVD) of those matrices. In the MBFL method the inequality constraints are treated by the modified barrier and PDLB approaches: they are transformed into equalities by introducing positive auxiliary variables, which are perturbed by the barrier parameter. The first-order necessary conditions of the Lagrangian function are solved by Newton's method. The perturbation of the auxiliary variables results in an expansion of the feasible set of the original problem, allowing the limits of the inequality constraints to be reached. The IEEE 14-, 162- and 300-bus electric systems were used in the comparative analysis.
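The SVD-based conditioning comparison reduces to computing spectral condition numbers; a minimal sketch (the 2x2 matrices below are toy stand-ins, not OPF Hessians):

```python
import numpy as np

def condition_number(H):
    """Spectral condition number from the SVD: the ratio of the largest
    to the smallest singular value. Large ratios flag the numerical
    ill-conditioning compared between barrier-method Hessians."""
    s = np.linalg.svd(np.asarray(H, dtype=float), compute_uv=False)
    return s[0] / s[-1]

# Toy illustration: the identity vs a nearly singular matrix.
well_conditioned = np.eye(2)
nearly_singular = np.array([[1.0, 1.0], [1.0, 1.0 + 1e-8]])
```

A Newton step computed from a Hessian with a huge condition number amplifies rounding errors, which is why the comparison matters for barrier methods near the constraint boundary.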
Abstract:
A natural generalization of the classical Moore-Penrose inverse is presented. The so-called S-Moore-Penrose inverse of an m x n complex matrix A, denoted by A_S, is defined for any linear subspace S of the matrix vector space C^{n x m}. The S-Moore-Penrose inverse A_S is characterized using either the singular value decomposition or (for the nonsingular square case) orthogonal complements with respect to the Frobenius inner product. These results are applied to the preconditioning of linear systems based on Frobenius norm minimization and to the linearly constrained linear least squares problem.
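The classical Moore-Penrose inverse that the paper generalizes is itself defined through the SVD; a minimal sketch of that classical construction only (the subspace-restricted S-Moore-Penrose inverse is not reproduced here):

```python
import numpy as np

def pinv_svd(A, tol=1e-12):
    """Classical Moore-Penrose inverse via the SVD, A+ = V S+ U^T,
    inverting only the singular values above tol so that rank-deficient
    matrices are handled correctly."""
    U, s, Vt = np.linalg.svd(np.asarray(A, dtype=float),
                             full_matrices=False)
    s_inv = np.array([1.0 / x if x > tol else 0.0 for x in s])
    return Vt.T @ np.diag(s_inv) @ U.T

# Rank-deficient example: the Moore-Penrose identities still hold.
A = np.array([[1.0, 0.0], [0.0, 0.0], [2.0, 0.0]])
Ap = pinv_svd(A)
```

The S-Moore-Penrose inverse reduces to this construction when S is the whole matrix space C^{n x m}.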
Abstract:
A basic approach to studying an NVH problem is to break the system down into three basic elements: source, path and receiver. While the receiver (response) and the transfer path can be measured, it is difficult to measure the source (forces) acting on the system. It becomes necessary to predict these forces to know how they influence the responses, which requires inverting the transfer path. The singular value decomposition (SVD) method is used to decompose the transfer path matrix into its principal components, which is required for the inversion. The usual approach to force prediction rejects the small singular values obtained during the SVD by setting a threshold, as these small values dominate the inverse matrix. This choice of threshold risks rejecting important singular values and severely affecting force prediction. The new approach discussed in this report looks at the column space of the transfer path matrix, which is the basis for the predicted response. The response participation indicates how the small singular values influence the force participation. The ability to accurately reconstruct the response vector is important to establish confidence in the predicted force vector. The goal of this report is to suggest, through examples, a solution that is mathematically feasible, physically meaningful, and numerically more efficient. This understanding adds new insight into the effects of the current code and how to apply the algorithms and understanding to new codes.
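The thresholded inversion described above can be sketched as a truncated-SVD pseudoinverse; the 3-response, 2-force transfer-path matrix below is a hypothetical example, not data from the report:

```python
import numpy as np

def truncated_pinv(H, k):
    """Pseudoinverse of a transfer-path matrix keeping only the k
    largest singular values; the discarded small singular values would
    otherwise dominate the inverse and corrupt the predicted forces."""
    U, s, Vt = np.linalg.svd(np.asarray(H, dtype=float),
                             full_matrices=False)
    return Vt[:k].T @ np.diag(1.0 / s[:k]) @ U[:, :k].T

# Hypothetical 3-response, 2-force system.
H = np.array([[1.0, 0.2], [0.1, 1.0], [0.3, 0.4]])  # transfer paths
f_true = np.array([2.0, -1.0])                      # acting forces
x = H @ f_true                                      # measured responses
f_est = truncated_pinv(H, 2) @ x                    # predicted forces
```

With k equal to the full rank and noise-free responses the forces are recovered exactly; the report's concern is choosing k when noise makes the small singular values unreliable.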
Abstract:
This thesis develops high performance real-time signal processing modules for direction of arrival (DOA) estimation in localization systems. It proposes highly parallel algorithms for performing subspace decomposition and polynomial rooting, which are otherwise traditionally implemented using sequential algorithms. The proposed algorithms address the emerging need for real-time localization in a wide range of applications. As the antenna array size increases, the complexity of the signal processing algorithms increases, making it increasingly difficult to satisfy real-time constraints. This thesis addresses real-time implementation by proposing parallel algorithms that offer considerable improvement over traditional algorithms, especially for systems with a larger number of antenna array elements. Singular value decomposition (SVD) and polynomial rooting are two computationally complex steps and act as the bottleneck to achieving real-time performance. The proposed algorithms are suitable for implementation on field programmable gate arrays (FPGAs), single instruction multiple data (SIMD) hardware or application specific integrated circuits (ASICs), which offer a large number of processing elements that can be exploited for parallel processing. The designs proposed in this thesis are modular, easily expandable and easy to implement. Firstly, this thesis proposes a fast-converging SVD algorithm. The proposed method reduces the number of iterations needed to converge to the correct singular values, thus achieving closer to real-time performance. A general algorithm and a modular system design are provided, making it easy for designers to replicate and extend the design to larger matrix sizes. Moreover, the method is highly parallel, which can be exploited on the various hardware platforms mentioned earlier. A fixed-point implementation of the proposed SVD algorithm is presented. The FPGA design is pipelined to the maximum extent to increase the maximum achievable frequency of operation. The system was developed with the objective of achieving high throughput, and various modern cores available in FPGAs were used to maximize performance; these modules are described in detail. Finally, a parallel polynomial rooting technique based on Newton's method, applicable exclusively to root-MUSIC polynomials, is proposed. Unique characteristics of the root-MUSIC polynomial's complex dynamics were exploited to derive this polynomial rooting method. The technique exhibits parallelism and converges to the desired root within a fixed number of iterations, making it suitable for rooting polynomials of large degree. We believe this is the first time that the complex dynamics of the root-MUSIC polynomial have been analyzed to propose an algorithm. In all, the thesis addresses two major bottlenecks in a direction of arrival estimation system by providing simple, high-throughput, parallel algorithms.
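The Newton-based rooting step builds on the basic complex-valued iteration z <- z - p(z)/p'(z); a scalar sketch on a toy polynomial (the thesis's parallel, root-MUSIC-specific variant is not reproduced here):

```python
import numpy as np

def newton_poly_root(coeffs, z0, tol=1e-12, max_iter=100):
    """Scalar Newton iteration on a polynomial whose coefficients are
    listed highest degree first; converges quadratically to the root
    in whose basin of attraction z0 lies."""
    dcoeffs = np.polyder(np.asarray(coeffs))  # derivative coefficients
    z = complex(z0)
    for _ in range(max_iter):
        step = np.polyval(coeffs, z) / np.polyval(dcoeffs, z)
        z -= step
        if abs(step) < tol:
            break
    return z

# z^2 + 1 has roots +/- 1j; a start in the upper half plane finds +1j.
root = newton_poly_root([1.0, 0.0, 1.0], 0.5 + 1.0j)
```

Running many such iterations from different starting points is embarrassingly parallel, which is the property the hardware implementations exploit.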