974 results for R-MATRIX METHOD
Abstract:
The process involves encapsulation or immobilization of an active solid substance in a cellulose framework by regenerating cellulose, dissolved in an ionic liquid solvent, in a regenerating solution. The active substance can initially be present either in the ionic liquid or in the regenerating solvent, as a solution or a dispersion. The invention is applicable to molecular encapsulation, to the entrapment of larger particles including enzymes, nanoparticles, and macroscopic components, and to the formation of bulk materials with a wide range of morphological forms. For example, encapsulation of carbamoylmethylphosphine oxide (I) in a cellulose matrix was realized by adding I to a 10% solution of cellulose in 1-butyl-3-methylimidazolium chloride (the ionic liquid) under vigorous stirring and then removing the ionic liquid with water.
Abstract:
Nickel nanoparticles embedded in silica-carbon matrix composites were prepared by the polymeric precursor method. The effects of the polyester type and the pyrolysis time on the mesoporosity and on the dispersion of nickel particles in the non-aqueous amorphous silica-carbon matrix were investigated by thermogravimetric analysis, adsorption/desorption isotherms, and TEM. A well-dispersed metallic phase could be obtained only with ethylene glycol. Heavier polyesters affected the pyrolysis process through a combination of larger amounts of carbonaceous residue and a delayed pyrolysis. The post-pyrolyzed composites were successfully cleaned at 200 °C for 1 h in an oxygen atmosphere, which increased the surface area without carbon combustion or oxidation of the nickel nanoparticles. The matrix composites were predominantly mesoporous, with a well-defined pore size of 38 Å, especially when tetraethylene glycol was used as the polymerizing agent.
Measurement of the top quark mass in the lepton plus jets final state with the matrix element method
Abstract:
We present a measurement of the top quark mass with the matrix element method in the lepton+jets final state. As the energy scale for calorimeter jets represents the dominant source of systematic uncertainty, the matrix element likelihood is extended by an additional parameter, defined as a global multiplicative factor applied to the standard energy scale. The top quark mass is obtained from a fit that yields the combined statistical and systematic jet energy scale uncertainty. Using a data set of 0.4 fb$^{-1}$ taken with the D0 experiment during Run II of the Fermilab Tevatron Collider, the mass of the top quark measured using topological information is $m_{\mathrm{top}}^{\ell+\mathrm{jets}}(\mathrm{topo}) = 169.2\,^{+5.0}_{-7.4}\,(\mathrm{stat{+}JES})\,^{+1.5}_{-1.4}\,(\mathrm{syst})$ GeV, and when information about identified $b$ jets is included, $m_{\mathrm{top}}^{\ell+\mathrm{jets}}(b\text{-tag}) = 170.3\,^{+4.1}_{-4.5}\,(\mathrm{stat{+}JES})\,^{+1.2}_{-1.8}\,(\mathrm{syst})$ GeV. The measurements yield a jet energy scale consistent with the reference scale.
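As a schematic illustration of the extension described above (not the D0 collaboration's exact likelihood, whose ingredients are not reproduced in this abstract), a matrix element likelihood with a global jet energy scale (JES) parameter takes the form

$$
L(x_1,\dots,x_N;\,m_{\mathrm{top}},\mathrm{JES}) \;=\; \prod_{i=1}^{N}\Big[\,f_{\mathrm{top}}\,P_{\mathrm{sig}}(x_i;\,m_{\mathrm{top}},\mathrm{JES}) \;+\; (1-f_{\mathrm{top}})\,P_{\mathrm{bkg}}(x_i;\,\mathrm{JES})\,\Big],
$$

where $x_i$ are the observed event kinematics, $P_{\mathrm{sig}}$ and $P_{\mathrm{bkg}}$ are signal and background event probabilities, and $f_{\mathrm{top}}$ is the signal fraction; maximising over both $m_{\mathrm{top}}$ and JES is what folds the energy-scale uncertainty into the quoted stat+JES error.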
Abstract:
The growing number of sequences stored in genomic databases has made sequential analysis infeasible, so parallel computing has been brought to bear on bioinformatics through parallel algorithms for aligning and analyzing sequences, improving mainly the running time of these algorithms. In many situations, a parallel strategy also helps reduce the computational complexity of large problems. This work presents results obtained with a parallel score-estimating technique for the score matrix calculation stage, the first stage of a progressive multiple sequence alignment. The performance and quality of the parallel score estimation are compared with the results of a dynamic programming approach, also implemented in parallel; the comparison shows a significant reduction in running time. Moreover, the quality of the final alignment produced with the new strategy is analyzed and compared with the quality of the dynamic programming approach.
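A minimal sketch of the computation described above, assuming a toy k-mer overlap measure in place of the paper's (unspecified) score estimator and Python's multiprocessing for the parallelism: the entries of the score matrix are mutually independent, so the pair scores can be computed concurrently.

```python
from itertools import combinations
from multiprocessing import Pool

def kmer_score(pair, k=3):
    """Crude similarity estimate from shared k-mers; a stand-in for the
    paper's score estimator, which the abstract does not specify."""
    a, b = pair
    ka = {a[i:i + k] for i in range(len(a) - k + 1)}
    kb = {b[i:i + k] for i in range(len(b) - k + 1)}
    return len(ka & kb) / max(1, min(len(ka), len(kb)))

def score_matrix(seqs, workers=4):
    """First stage of progressive alignment: fill the pairwise score matrix.
    The pairs are independent, so they are scored in parallel."""
    idx = list(combinations(range(len(seqs)), 2))
    with Pool(workers) as pool:
        scores = pool.map(kmer_score, [(seqs[i], seqs[j]) for i, j in idx])
    m = [[0.0] * len(seqs) for _ in range(len(seqs))]
    for (i, j), s in zip(idx, scores):
        m[i][j] = m[j][i] = s
    return m

if __name__ == "__main__":
    seqs = ["ACGTACGTGA", "ACGTTCGTGA", "TTGCACGTAA"]
    for row in score_matrix(seqs, workers=2):
        print(row)
```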
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Previous investigations have shown that the modal strain energy correlation method (MSEC) can successfully identify damage in truss bridge structures. However, it has to incorporate the sensitivity matrix to estimate damage and is not reliable in certain damage detection cases. This paper presents an improved MSEC method in which the prediction of the modal strain energy change vector is instead obtained by running the eigensolutions online within the optimisation iterations. The trial damage treatment group that maximises the fitness function, bringing it close to unity, is identified as the detected damage location. The improvement is then compared with the original MSEC method, along with other typical correlation-based methods, on a finite element model of a simple truss bridge. The contribution of each considered mode to damage detection accuracy is also weighed and discussed. The iterative search is driven by a genetic algorithm. The results demonstrate that the improved MSEC method meets the demands of detecting damage in truss bridge structures, even when noisy measurements are considered.
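A toy illustration of the improved scheme's structure; the spring-mass chain, the correlation fitness, and the bare-bones genetic algorithm below are all assumptions made for the sketch, not the paper's formulation. The point is the online eigensolution: each trial damage pattern is evaluated by re-solving the eigenproblem and comparing predicted and measured modal strain energy change vectors.

```python
import numpy as np

def chain_matrices(alphas, k=1e4, m=1.0):
    """n-DOF spring-mass chain; alphas[i] in (0, 1] scales element i's
    stiffness (alpha < 1 models damage). Returns global K, M, element Ks."""
    n = len(alphas)
    K, Kel = np.zeros((n, n)), []
    for i, a in enumerate(alphas):
        ke = np.zeros((n, n))
        ke[i, i] = a * k
        if i > 0:                      # spring between DOF i-1 and DOF i
            ke[i - 1, i - 1] = a * k
            ke[i - 1, i] = ke[i, i - 1] = -a * k
        Kel.append(ke)
        K += ke
    return K, m * np.eye(n), Kel

def mse_vector(alphas, n_modes=3):
    """Elemental modal strain energies summed over the first few modes."""
    K, M, Kel = chain_matrices(alphas)
    w, phi = np.linalg.eigh(K)         # M is a scaled identity here
    modes = phi[:, :n_modes]
    return np.array([sum(p @ ke @ p for p in modes.T) for ke in Kel])

def detect(mse_meas, n, pop=60, gens=150, seed=0):
    """Tiny GA: evolve stiffness factors whose predicted MSE-change vector
    correlates best (fitness -> 1) with the measured change vector."""
    rng = np.random.default_rng(seed)
    base = mse_vector(np.ones(n))
    target = mse_meas - base
    def fitness(a):
        c = np.corrcoef(mse_vector(a) - base, target)[0, 1]
        return -1.0 if np.isnan(c) else c
    P = rng.uniform(0.5, 1.0, (pop, n))
    for _ in range(gens):
        f = np.array([fitness(a) for a in P])
        elite = P[np.argsort(f)[-pop // 2:]]                 # keep best half
        kids = elite[rng.integers(0, len(elite), pop // 2)]  # mutate copies
        P = np.vstack([elite, np.clip(kids + rng.normal(0, 0.05, kids.shape),
                                      0.05, 1.0)])
    return P[np.argmax([fitness(a) for a in P])]

truth = np.ones(6); truth[2] = 0.7        # 30% stiffness loss in element 3
print(np.round(detect(mse_vector(truth), n=6), 2))
```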
Abstract:
Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive semidefinite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space - classical model selection problems in machine learning. In this paper we show how the kernel matrix can be learned from data via semidefinite programming (SDP) techniques. When applied to a kernel matrix associated with both training and test data this gives a powerful transductive algorithm - using the labeled part of the data one can learn an embedding also for the unlabeled part. The similarity between test points is inferred from training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Furthermore, this approach leads directly to a convex method for learning the 2-norm soft margin parameter in support vector machines, solving an important open problem.
Abstract:
Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive semidefinite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space -- classical model selection problems in machine learning. In this paper we show how the kernel matrix can be learned from data via semi-definite programming (SDP) techniques. When applied to a kernel matrix associated with both training and test data this gives a powerful transductive algorithm -- using the labelled part of the data one can learn an embedding also for the unlabelled part. The similarity between test points is inferred from training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Furthermore, this approach leads directly to a convex method to learn the 2-norm soft margin parameter in support vector machines, solving another important open problem. Finally, the novel approach presented in the paper is supported by positive empirical results.
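A compact sketch of the idea in code, assuming cvxpy, a toy kernel-target-alignment objective, and the restriction (one of the formulations discussed in the paper) to nonnegative combinations of fixed candidate kernels, which keeps the learned matrix positive semidefinite without an explicit conic constraint:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
# Six labelled training points, two unlabelled test points (transductive).
X = np.vstack([rng.normal(-1, 0.3, (3, 2)), rng.normal(1, 0.3, (3, 2)),
               rng.normal(0, 0.6, (2, 2))])
y = np.array([-1, -1, -1, 1, 1, 1])
n, n_tr = len(X), len(y)

def rbf(X, g):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-g * d2)

# Fixed candidate kernels evaluated on training AND test points together.
Ks = [rbf(X, g) for g in (0.1, 1.0, 10.0)] + [X @ X.T]

# K = sum_i mu_i K_i with mu >= 0 stays positive semidefinite; choose mu to
# align the training block of K with the label matrix y y^T, under a trace
# normalisation that keeps the problem bounded.
mu = cp.Variable(len(Ks), nonneg=True)
K = sum(mu[i] * Ks[i] for i in range(len(Ks)))
align = cp.sum(cp.multiply(np.outer(y, y), K[:n_tr, :n_tr]))
cp.Problem(cp.Maximize(align), [cp.trace(K) == n]).solve()

K_opt = sum(m * k for m, k in zip(mu.value, Ks))
print("kernel weights:", np.round(mu.value, 3))
print("test-vs-train block of learned K:\n", np.round(K_opt[n_tr:, :n_tr], 2))
```

Because the candidate kernels are evaluated on training and test points together, the learned weights immediately give kernel values involving the unlabelled test points, which is the transductive step the abstract describes.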
Abstract:
The R statistical environment and language has demonstrated particular strengths for interactive development of statistical algorithms, as well as data modelling and visualisation. Its current implementation has an interpreter at its core, which may result in a performance penalty compared with directly executing user algorithms in the native machine code of the host CPU. In contrast, the C++ language has no built-in visualisation capabilities and no handling of linear algebra or even basic statistical algorithms; however, user programs are converted to high-performance machine code ahead of execution. A new method avoids possible speed penalties in R by using the Rcpp extension package in conjunction with the Armadillo C++ matrix library. In addition to the inherent performance advantages of compiled code, Armadillo provides an easy-to-use, template-based meta-programming framework that allows several linear algebra operations to be automatically pooled into one, which in turn can lead to further speedups. With the aid of Rcpp and Armadillo, conversion of linear-algebra-centered algorithms from R to C++ becomes straightforward. The algorithms retain their overall structure as well as their readability, all while maintaining a bidirectional link with the host R environment. Empirical timing comparisons of R and C++ implementations of a Kalman filtering algorithm indicate a speedup of several orders of magnitude.
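Neither the R nor the C++ source of the paper's benchmark is reproduced in the abstract, so the NumPy sketch below merely shows the dense-linear-algebra recursion of a Kalman filter, the pattern that makes such code a natural candidate for an Rcpp/Armadillo port:

```python
import numpy as np

def kalman_filter(zs, F, H, Q, R, x0, P0):
    """Textbook Kalman filter: each step is a handful of dense matrix
    products and one small inverse, the kind of expression sequence that
    Armadillo's templates can pool into fewer operations after a C++ port."""
    x, P, states = x0, P0, []
    I = np.eye(len(x0))
    for z in zs:
        x, P = F @ x, F @ P @ F.T + Q          # predict
        S = H @ P @ H.T + R                    # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x = x + K @ (z - H @ x)                # measurement update
        P = (I - K @ H) @ P                    # covariance update
        states.append(x)
    return np.array(states)

# Toy constant-velocity tracker: noisy 1-D position measurements.
rng = np.random.default_rng(0)
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
zs = [np.array([t + rng.normal(0, 0.5)]) for t in range(20)]
xs = kalman_filter(zs, F, H, Q=0.01 * np.eye(2), R=np.array([[0.25]]),
                   x0=np.zeros(2), P0=np.eye(2))
print(np.round(xs[-1], 2))   # estimated [position, velocity]
```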
Abstract:
A method for forming a material comprising a metal oxide supported on a support particle, comprising the steps of: (a) providing a precursor mixture comprising a solution containing one or more metal cations and (i) a surfactant or (ii) a hydrophilic polymer, said precursor mixture further including support particles; and (b) treating the precursor mixture from (a) by heating to remove the surfactant or hydrophilic polymer and form a metal oxide having nanosized grains, wherein at least some of the metal oxide formed in step (b) is deposited on or supported by the support particles, and the metal oxide has an oxide matrix that includes metal atoms derived solely from sources other than the support particles. The disclosure and examples pertain to emission control catalysts.
Abstract:
Trees can portray the semi-structured data that is common in the web domain. Finding similarities between trees is therefore essential for several applications that deal with semi-structured data. Existing similarity methods examine a pair of trees by comparing their nodes and paths, and compute the similarity between them; however, these methods give unfavorable results for unordered tree data and are NP-hard or MAX-SNP hard in complexity. In this paper, we present a novel method that first encodes a tree with an optimal traversal approach and then uses the encoding to model the tree as an equivalent matrix representation, so that similarity between unordered trees can be found efficiently. Empirical analysis shows that the proposed method achieves high accuracy even on large data sets.
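The abstract does not spell out the traversal-based encoding or the matrix model, so the sketch below substitutes one standard order-invariant choice, the Laplacian spectrum of the tree's adjacency structure, purely to illustrate matrix-based comparison of unordered trees:

```python
import numpy as np

def laplacian_spectrum(edges, n):
    """Sorted Laplacian eigenvalues of an n-node tree given as edges.
    The spectrum is invariant to node relabelling, so the order in which
    an unordered tree's siblings are listed cannot affect it."""
    A = np.zeros((n, n))
    for u, v in edges:
        A[u, v] = A[v, u] = 1.0
    return np.sort(np.linalg.eigvalsh(np.diag(A.sum(1)) - A))

def tree_similarity(t1, t2):
    """Similarity in (0, 1] from the distance between zero-padded spectra."""
    s1, s2 = laplacian_spectrum(*t1), laplacian_spectrum(*t2)
    k = max(len(s1), len(s2))
    s1, s2 = np.pad(s1, (0, k - len(s1))), np.pad(s2, (0, k - len(s2)))
    return 1.0 / (1.0 + np.linalg.norm(s1 - s2))

# The same unordered tree, written down with children in different orders.
a = ([(0, 1), (0, 2), (1, 3)], 4)
b = ([(0, 2), (0, 1), (2, 3)], 4)   # relabelled: node 2 plays node 1's role
print(tree_similarity(a, b))        # 1.0: sibling order does not matter
```

Spectra are a lossy signature (rare non-isomorphic trees can share one), which is one reason a carefully chosen encoding, as in the paper, matters.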
Abstract:
Articular cartilage is a load-bearing tissue consisting of proteoglycan macromolecules entrapped between collagen fibrils in a three-dimensional architecture. To date, the search for mathematical models to represent the biomechanics of such a system has continued without providing a fitting description of its functional response to load at the micro-scale level. We believe that the major complication arose when cartilage was first envisaged as a multiphasic model with distinguishable components, and that quantifying those components and searching for the laws that govern their interaction is inadequate. Central to the thesis of this paper, cartilage as a bulk is as much a continuum as is the response of its components to external stimuli. For this reason, we framed the fundamental question: what would be the mechano-structural functionality of such a system in the total absence of one of its key constituents, the proteoglycans? To answer this, hydrated normal and proteoglycan-depleted samples were tested under confined compression, while finite element models were reproduced, for the first time, based on the structural microarchitecture of the cross-sectional profiles of the matrices. These micro-porous in silico models served as virtual transducers, providing an internal, noninvasive probing mechanism beyond experimental capabilities for rendering the micromechanics of the matrices and several other properties, such as permeability and orientation. The results demonstrated that load transfer was closely related to the microarchitecture of the hyperelastic models, which represent solid-skeleton stress and fluid response based on the state of the collagen network with and without the swollen proteoglycans. In other words, the stress gradient during deformation was a function of the structural pattern of the network and acted in concert with the position-dependent compositional state of the matrix. This reveals that the interaction between indistinguishable components in real cartilage is mediated by its microarchitectural state, which directly influences macromechanical behavior.
Abstract:
The generation of a correlation matrix for a set of genomic sequences is a common requirement in many bioinformatics problems, such as phylogenetic analysis. Each sequence may be millions of bases long, and there may be thousands of such sequences to compare, so not all sequences may fit into main memory at the same time. Each sequence needs to be compared with every other sequence, so some sequences will generally need to be paged in and out more than once; to minimize execution time, this I/O must be minimized. This paper develops an approach for faster and scalable computation of large correlation matrices through maximal exploitation of available memory and a reduction in the number of I/O operations. The approach is scalable in the sense that the same algorithms can be executed on computing platforms with different amounts of memory and can be applied to bioinformatics problems with different correlation matrix sizes. The significant performance improvement of the approach over previous work is demonstrated through benchmark examples.
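A schematic of one tiled schedule in this spirit; the block reader, the identity-fraction score, and the fixed block size below are illustrative stand-ins, whereas the paper adapts its blocking to the memory actually available. With tiling, each block is re-read on the order of the number of blocks times, rather than once per sequence pair:

```python
import numpy as np

def blocked_correlation(load_block, n_blocks, block_size, score):
    """Pairwise correlation matrix computed tile by tile so that only two
    blocks of sequences are resident in memory at once. A naive
    pair-at-a-time loop reloads each sequence O(n) times; this schedule
    reloads each block O(n_blocks) times, which is the I/O reduction."""
    n = n_blocks * block_size
    C = np.eye(n)
    for bi in range(n_blocks):
        left = load_block(bi)                      # paged in once per tile row
        for bj in range(bi, n_blocks):
            right = left if bi == bj else load_block(bj)
            for i, a in enumerate(left):
                for j, b in enumerate(right):
                    gi, gj = bi * block_size + i, bj * block_size + j
                    if gi < gj:
                        C[gi, gj] = C[gj, gi] = score(a, b)
    return C

# Tiny demo with an in-memory "disk" and an identity-fraction score.
seqs = ["ACGT", "ACGA", "TCGT", "TTTT"]
score = lambda a, b: sum(x == y for x, y in zip(a, b)) / len(a)
C = blocked_correlation(lambda b: seqs[2 * b:2 * b + 2],
                        n_blocks=2, block_size=2, score=score)
print(np.round(C, 2))
```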
Inverse Sensitivity Analysis of Singular Solutions of FRF Matrix in Structural System Identification
Abstract:
The problem of structural damage detection based on measured frequency response functions of the structure in its damaged and undamaged states is considered. A novel procedure based on the inverse sensitivity of the singular solutions of the system FRF matrix is proposed. The treatment of possibly ill-conditioned sets of equations via a regularization scheme and questions of spatial incompleteness of measurements are considered. The application of the method to systems with repeated natural frequencies and/or packets of closely spaced modes is demonstrated. The relationship between the proposed method and methods based on the inverse sensitivity of eigensolutions and of frequency response functions is noted. Numerical examples on a 5-degree-of-freedom system, a one-span free-free beam, and a spatially periodic multi-span beam demonstrate the efficacy of the proposed method and its superior performance compared with methods based on inverse eigensensitivity.
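The governing relations are not reproduced in the abstract; as a hedged sketch of the standard first-order machinery such a method builds on (not necessarily the authors' exact formulation), the sensitivity of a singular value $\sigma_k$ of the FRF matrix $H(\omega;\theta)$, with SVD $H = U\,\Sigma\,V^{\mathsf{H}}$, to a structural parameter $\theta_j$ is

$$
\frac{\partial\sigma_k}{\partial\theta_j} \;=\; \operatorname{Re}\!\left[\,u_k^{\mathsf{H}}\,\frac{\partial H(\omega;\theta)}{\partial\theta_j}\,v_k\,\right].
$$

Stacking such relations over frequencies and singular solutions yields a linear system $S\,\Delta\theta \approx \Delta s$ for the damage parameters, solved in the ill-conditioned case with, for example, Tikhonov regularization, $\min_{\Delta\theta}\,\lVert S\,\Delta\theta - \Delta s\rVert^2 + \lambda^2\lVert\Delta\theta\rVert^2$.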