196 results for decompositions


Relevance: 10.00%

Abstract:

The simplest multiplicative systems in which arithmetical ideas can be defined are semigroups. For such systems irreducible (prime) elements can be introduced, and conditions under which the fundamental theorem of arithmetic holds have been investigated (Clifford (3)). After identifying associates, the elements of the semigroup form a partially ordered set with respect to the ordinary division relation. This suggests the possibility of an analogous arithmetical result for abstract partially ordered sets. Although nothing corresponding to product exists in a partially ordered set, there is a notion similar to g.c.d. This is the meet operation, defined as greatest lower bound. Thus irreducible elements, namely those elements not expressible as meets of proper divisors, can be introduced. The assumption of the ascending chain condition then implies that each element is representable as a reduced meet of irreducibles. The central problem of this thesis is to determine conditions on the structure of the partially ordered set in order that each element have a unique such representation.
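
For concreteness, the notions used above can be written out as follows; this formalization is illustrative and not quoted from the thesis:

```latex
% Illustrative formalization (not from the thesis): meet, irreducibility,
% and reduced meet decompositions in a partially ordered set (P, \le).
\[
a \wedge b = \mathrm{g.l.b.}\{a, b\}, \qquad
q \text{ is irreducible} \iff \big(q = a \wedge b \Rightarrow q = a \text{ or } q = b\big).
\]
\[
\text{Ascending chain condition} \;\Longrightarrow\;
\forall x \in P:\; x = q_1 \wedge q_2 \wedge \cdots \wedge q_n
\quad\text{with each } q_i \text{ irreducible and no } q_i \text{ redundant.}
\]
```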

Part I contains preliminary results and introduces the principal tools of the investigation. In the second part, basic properties of the lattice of ideals and the connection between its structure and the irreducible decompositions of elements are developed. The proofs of these results are identical with the corresponding ones for the lattice case (Dilworth (2)). The last part contains those results whose proofs are peculiar to partially ordered sets and also contains the proof of the main theorem.

Relevance: 10.00%

Abstract:

Stochastic methods offer a powerful tool for performing data compression and matrix decompositions. The stochastic method for matrix decomposition studied here uses random sampling to identify a subspace that approximately captures the range of a matrix, preserving an essential part of its information. These approximations compress the information, making it possible to solve practical problems efficiently. In this dissertation a singular value decomposition (SVD) is computed using stochastic techniques. This randomized SVD is applied to the task of face recognition. Face recognition works by projecting face images onto a feature space that best describes the variation among known face images. These significant features are known as eigenfaces, since they are the eigenvectors of a matrix associated with a set of faces. This projection characterizes the face of an individual approximately as a weighted sum of the characteristic eigenfaces. Thus, the task of recognizing a new face consists of comparing the weights of its projection with the weights of the projections of known individuals. Principal component analysis (PCA) is a widely used method for determining the characteristic eigenfaces; it provides the eigenfaces that represent the greatest variability of information in a set of faces. In this dissertation we assess the quality of the eigenfaces obtained by the randomized SVD (which are the left singular vectors of a matrix containing the images) by comparing their similarity with the eigenfaces obtained by PCA. To this end, two image databases of different sizes were used, and several random samplings were applied to the matrix containing the images.
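
A minimal sketch of the randomized range-finding approach described above, assuming NumPy; the matrix sizes and the helper name are illustrative, not the dissertation's code:

```python
import numpy as np

def randomized_svd(A, k, oversample=10, seed=0):
    """Approximate rank-k SVD of A via random range sampling (sketch)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # Sample the range of A with a Gaussian test matrix.
    Omega = rng.standard_normal((n, k + oversample))
    Y = A @ Omega
    # Orthonormal basis Q for the sampled subspace.
    Q, _ = np.linalg.qr(Y)
    # Project A onto the subspace and take an exact SVD of the small matrix.
    B = Q.T @ A
    U_hat, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ U_hat
    return U[:, :k], s[:k], Vt[:k, :]

# Illustrative use: columns of A are mean-centered, vectorized face images;
# the left singular vectors U play the role of eigenfaces.
A = np.random.rand(4096, 200)            # placeholder data, e.g. 64x64 images
A = A - A.mean(axis=1, keepdims=True)    # center the images
U, s, Vt = randomized_svd(A, k=20)
```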

Relevance: 10.00%

Abstract:

There are many methods for decomposing signals into a sum of amplitude- and frequency-modulated sinusoids. In this paper we take a new estimation-based approach. Identifying the problem as ill-posed, we show how to regularize the solution by imposing soft constraints on the amplitude and phase variables of the sinusoids. Estimation proceeds using a version of Kalman smoothing. We evaluate the method on synthetic and natural, clean and noisy signals, showing that it outperforms previous decompositions, but at a higher computational cost. © 2012 IEEE.
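
A sketch of the kind of signal model and soft constraints the abstract alludes to, written here in a generic form that may differ in detail from the paper's formulation:

```latex
% Illustrative AM/FM signal model and regularized objective
% (a generic reconstruction, not the paper's exact formulation).
\[
y_t \;=\; \sum_{k=1}^{K} a_{k,t}\,\cos\!\big(\phi_{k,t}\big) \;+\; \varepsilon_t ,
\qquad \varepsilon_t \sim \mathcal{N}(0,\sigma^2),
\]
\[
\min_{\{a,\phi\}} \;\sum_t \Big(y_t - \sum_k a_{k,t}\cos\phi_{k,t}\Big)^2
\;+\; \lambda_a \sum_{k,t} \big(a_{k,t+1}-a_{k,t}\big)^2
\;+\; \lambda_\phi \sum_{k,t} \big(\phi_{k,t+1}-\phi_{k,t}-\omega_k\big)^2 ,
\]
% where the quadratic penalties act as soft smoothness constraints on the
% amplitude and phase trajectories, and the resulting state-space posterior
% can be tracked with a Kalman smoother.
```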

Relevance: 10.00%

Abstract:

The acid-base stabilities of Al-13 and Al-30 in polyaluminum coagulants during aging and after dosing into water were studied systematically using batch and flow-through acid-base titration experiments. The acid decomposition rates of both Al-13 and Al-30 increase rapidly as the solution pH decreases. The acid decompositions of Al-13 and Al-30 with respect to H+ concentration are composed of two parallel reactions, first order and second order in H+, with apparent reaction orders of 1.169 and 1.005, respectively. The acid decomposition rates of Al-13 and Al-30 increase slightly when the temperature rises from 20 to ca. 35 °C, but decrease when the temperature increases further. Al-30 is more stable than Al-13 in acidic solution, and the stability difference increases as the pH decreases; Al-30 is therefore more likely than Al-13 to become the dominant species in polyaluminum coagulants. Acid-catalyzed decomposition followed by recrystallization to form bayerite is one of the main processes responsible for the decrease of Al-13 and Al-30 in polyaluminum coagulants during storage. The deprotonation and polymerization of Al-13 and Al-30 depend on solution pH. The hydrolysis products are positively charged and consist mainly of repeated Al-13 and Al-30 units rather than amorphous Al(OH)3 precipitates. Al-30 is less stable than Al-13 upon alkaline hydrolysis: Al-13 is stable at pH < 5.9, while Al-30 loses one proton over the pH range 4.6-5.75. Al-13 and Al-30 lose 5 and 10 protons, respectively, and form [Al-13]n and [Al-30]n clusters within the pH regions 5.9-6.25 and 5.75-6.65. This indicates that Al-30 aggregates more readily than Al-13 on the acidic side, but [Al-13]n converts to an Al sol-gel much more easily than [Al-30]n. Al-30 possesses better characteristics than Al-13 when used as a coagulant, because the hydrolysis products of Al-30 carry higher charges than those of Al-13 and the [Al-30]n clusters exist over a wider pH range.
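
Read literally, the two parallel pathways described above correspond to a rate law of roughly the following form; this is a reconstruction from the abstract, not the authors' fitted equation:

```latex
% Illustrative reconstruction of the parallel acid-decomposition rate law
% (inferred from the abstract's description, not the authors' fitted equation).
\[
-\frac{d[\mathrm{Al}]}{dt} \;=\; k_1\,[\mathrm{H}^+] \;+\; k_2\,[\mathrm{H}^+]^2 ,
\qquad
n_{\mathrm{app}} \;=\; \frac{\partial \ln r}{\partial \ln [\mathrm{H}^+]}
\;\approx\; 1.169 \ (\text{Al-13}),\quad 1.005 \ (\text{Al-30}).
\]
```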

Relevance: 10.00%

Abstract:

The unimolecular charge separations and neutral-loss decompositions of the doubly charged ions [C7H7Cl]2+, [C7H6Cl]2+ and [C7H5Cl]2+, produced in the ion source by 70 eV electron impact from the three chlorotoluene isomers and benzyl chloride, were studied.

Relevance: 10.00%

Abstract:

The ionospheric parameter M(3000)F2 (the so-called transmission factor or propagation factor) is important not only in practical applications such as frequency planning for radio communication but also in ionospheric modeling. This parameter is strongly anti-correlated with the ionospheric F2-layer peak height hmF2, a parameter often used as a key anchor point in some widely used empirical models of the ionospheric electron density profile (e.g., in the IRI and NeQuick models). Since hmF2 is not easy to obtain from measurements, whereas M(3000)F2 can be routinely scaled from ionograms recorded by ionosonde/digisonde stations distributed globally and a long record of its data has been accumulated, the value of hmF2 is usually calculated from M(3000)F2 using the empirical formula connecting them. In practice, the CCIR M(3000)F2 model is widely used to obtain the M(3000)F2 value. However, some authors have recently found that the CCIR M(3000)F2 model shows considerable discrepancies from the measured M(3000)F2, especially in low-latitude and equatorial regions. For this reason, the International Reference Ionosphere (IRI) research community proposes to improve or update the currently used CCIR M(3000)F2 model, and any effort toward improving and updating the current M(3000)F2 model or toward developing a new global hmF2 model is encouraged. In this dissertation, an effort is made to construct empirical models of M(3000)F2 and hmF2 based on empirical orthogonal function (EOF) analysis combined with a regression analysis method. The main results are as follows:

1. A single-station model was constructed using monthly median hourly values of M(3000)F2 observed at Wuhan Ionospheric Observatory during the years 1957-1991 and compared with the IRI model. The result shows that the EOF method needs only a few EOF components to represent most of the variance of the original data set; it is a powerful method for ionospheric modeling.

2. Using the values of M(3000)F2 observed by ionosondes distributed globally, data at grid points uniformly distributed over the globe were obtained using the Kriging interpolation method. The gridded data were then decomposed into EOF components in two different coordinate systems: (1) geographic longitude and latitude; (2) modified dip (Modip) and local time. Based on the EOF decompositions of the gridded data in these two coordinate systems, two types of global M(3000)F2 model were constructed. Statistical analysis showed that both constructed M(3000)F2 models agree better with the observed M(3000)F2 than the M(3000)F2 model currently used by IRI; the constructed models represent the global variations of M(3000)F2 better.

3. The hmF2 data used to construct the hmF2 model were converted from the observed M(3000)F2 using the empirical formula connecting them. Two types of global hmF2 model were also constructed using a method similar to that used for M(3000)F2. Statistical analysis showed that the predictions of our models are more accurate than those of the IRI model, demonstrating that it is feasible to construct a global hmF2 model directly with the EOF analysis method.

The results in this thesis indicate that the modeling technique based on EOF expansion combined with regression analysis is very promising for constructing global models of M(3000)F2 and hmF2. It is worth investigating further and has the potential to be applied to the global modeling of other ionospheric parameters.
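
The modeling recipe described above can be sketched generically; the following Python fragment (array shapes, predictors, and variable names are illustrative assumptions, not the dissertation's exact procedure) shows the EOF-plus-regression pattern:

```python
import numpy as np

# Illustrative EOF modeling sketch: rows of X are time samples (e.g. month/UT),
# columns are spatial grid points of M(3000)F2; the values here are placeholders.
n_time, n_grid = 288, 500
X = np.random.rand(n_time, n_grid)

# EOF decomposition via SVD of the anomaly field.
mean_field = X.mean(axis=0)
A = X - mean_field
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 4                                # keep the first few EOF modes
eofs = Vt[:k]                        # spatial patterns (EOFs)
pcs = U[:, :k] * s[:k]               # associated time coefficients

# Regression step (sketch): model each coefficient series with predictors
# such as season or solar activity; here a simple least-squares seasonal fit.
t = np.arange(n_time)
predictors = np.column_stack([np.ones(n_time),
                              np.sin(2 * np.pi * t / 12),
                              np.cos(2 * np.pi * t / 12)])
coef_models, *_ = np.linalg.lstsq(predictors, pcs, rcond=None)

# Reconstruction: predicted coefficients times EOFs plus the mean field.
X_model = predictors @ coef_models @ eofs + mean_field
```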

Relevance: 10.00%

Abstract:

Seismic wave-field numerical modeling and seismic migration imaging based on the wave equation have become useful and indeed indispensable tools for imaging complex geological objects. An important task in numerical modeling is dealing with the matrix exponential approximation in wave-field extrapolation. For a matrix exponential of small size, the square-root operator in the exponential can be approximated using different splitting algorithms. Splitting algorithms are usually applied to the order or the dimension of the one-way wave equation to reduce the complexity of the problem. In this paper, we obtain an approximate equation for 2-D Helmholtz operator inversion using a multi-way splitting operation. Analysis of the Gauss integral and the coefficients of the optimized partial fractions shows that dispersion may accumulate in splitting algorithms for steep-dip imaging. A high-order symplectic Padé approximation may deal with this problem; however, approximating the square-root operator in the exponential with a splitting algorithm cannot solve the dispersion problem in one-way wave-field migration imaging. We therefore attempt an exact approximation through an eigenfunction expansion of the matrix. The Fast Fourier Transform (FFT) method is selected because of its low computational cost. An eighth-order Laplace matrix splitting is performed to obtain an assemblage of small matrices using the FFT method. With the introduction of Lie-group and symplectic methods into seismic wave-field extrapolation, accurate approximation of the matrix exponential based on these methods has become an active research topic. To solve the matrix exponential approximation problem, the Second-kind Coordinates (SKC) method and the Generalized Polar Decompositions (GPD) method of Lie-group theory are the methods of choice. The SKC method uses a generalized Strang-splitting algorithm, while the GPD method uses polar-type and symmetric polar-type splitting algorithms. Compared with Padé approximation, these two methods require less computation, and both preserve the Lie-group structure. We consider the SKC and GPD methods promising and attractive for both research and practice.
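
As a small, generic illustration of the matrix-exponential step being discussed (using SciPy's Padé-based expm and an eigendecomposition for comparison; this is not the paper's splitting or Lie-group scheme):

```python
import numpy as np
from scipy.linalg import expm

# Generic sketch: approximate exp(i*dt*L) for a small 1-D Laplacian-like
# operator, comparing SciPy's Pade-based expm with an exact eigendecomposition.
n, dt = 64, 0.01
L = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)   # discrete Laplacian

# Pade-based approximation (scipy.linalg.expm uses scaling-and-squaring + Pade).
E_pade = expm(1j * dt * L)

# Eigendecomposition-based exponential: L = V diag(w) V^T since L is symmetric.
w, V = np.linalg.eigh(L)
E_eig = (V * np.exp(1j * dt * w)) @ V.T

print("max |difference| =", np.max(np.abs(E_pade - E_eig)))
```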

Relevance: 10.00%

Abstract:

Faculty of Chemistry: Department of Biochemistry (Wydział Chemii: Zakład Biochemii)

Relevance: 10.00%

Abstract:

We propose the development of a world wide web image search engine that crawls the web collecting information about the images it finds, computes the appropriate image decompositions and indices, and stores this extracted information for searches based on image content. Indexing and searching images need not require solving the image understanding problem. Instead, the general approach should be to provide an arsenal of image decompositions and discriminants that can be precomputed for images. At search time, users can select a weighted subset of these decompositions to be used for computing image similarity measurements. While this approach avoids the search-time-dependent problem of labeling what is important in images, it still poses several important problems that require further research in the area of query by image content. We briefly explore some of these problems as they pertain to shape.
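
The weighted-combination idea can be sketched as follows; feature names, dimensions, and the distance normalization are illustrative assumptions, not the proposed system's actual index format:

```python
import numpy as np

# Precomputed per-image feature vectors from several decompositions
# (names and dimensions are illustrative, not the system's actual index).
features = {
    "color_histogram": np.random.rand(1000, 64),
    "texture":         np.random.rand(1000, 32),
    "shape_moments":   np.random.rand(1000, 16),
}

def weighted_distance(query, db, weights):
    """Combine per-decomposition L2 distances with user-chosen weights."""
    total = np.zeros(next(iter(db.values())).shape[0])
    for name, w in weights.items():
        d = np.linalg.norm(db[name] - query[name], axis=1)
        total += w * d / (d.max() + 1e-12)   # normalize each distance term
    return total

# Query time: the user selects a weighted subset of the decompositions.
query = {k: v[0] for k, v in features.items()}            # use image 0 as query
weights = {"color_histogram": 0.6, "shape_moments": 0.4}  # texture unused here
ranking = np.argsort(weighted_distance(query, features, weights))[:10]
```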

Relevance: 10.00%

Abstract:

ImageRover is a search-by-image-content navigation tool for the world wide web. To gather images expediently, the image collection subsystem utilizes a distributed fleet of WWW robots running on different computers. The image robots gather information about the images they find, compute the appropriate image decompositions and indices, and store this extracted information in vector form for searches based on image content. At search time, users can iteratively guide the search through the selection of relevant examples. Search performance is made efficient through the use of an approximate, optimized k-d tree algorithm. The system employs a novel relevance feedback algorithm that selects the distance metrics appropriate for a particular query.

Relevance: 10.00%

Abstract:

ImageRover is a search-by-image-content navigation tool for the world wide web. The staggering size of the WWW dictates certain strategies and algorithms for image collection, digestion, indexing, and user interface. This paper describes two key components of the ImageRover strategy: image digestion and relevance feedback. Image digestion occurs during image collection; robots digest the images they find, computing image decompositions and indices, and storing this extracted information in vector form for searches based on image content. Relevance feedback occurs during index search; users can iteratively guide the search through the selection of relevant examples. ImageRover employs a novel relevance feedback algorithm to determine the weighted combination of image similarity metrics appropriate for a particular query. ImageRover is available and running on the web site.
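
One common way to realize such relevance feedback is to weight feature dimensions by the spread of the user-selected relevant examples; the sketch below illustrates that general scheme, not ImageRover's published algorithm:

```python
import numpy as np

def relevance_feedback_weights(relevant, eps=1e-6):
    """Weight each feature dimension inversely to its spread among the
    user-selected relevant examples (illustrative scheme only)."""
    relevant = np.asarray(relevant, dtype=float)   # shape: (n_relevant, n_dims)
    spread = relevant.std(axis=0) + eps
    w = 1.0 / spread
    return w / w.sum()                             # normalize the weights

def weighted_l1(query, database, w):
    """Weighted L1 distance from the query to every database vector."""
    return np.abs(database - query) @ w

# Iterative use: the user marks relevant results, weights are re-estimated,
# and the search is re-run with the updated metric.
db = np.random.rand(500, 48)                       # placeholder feature vectors
relevant_examples = db[[3, 17, 42]]
w = relevance_feedback_weights(relevant_examples)
ranking = np.argsort(weighted_l1(relevant_examples.mean(axis=0), db, w))[:10]
```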

Relevance: 10.00%

Abstract:

Chapter 1 of this thesis is a brief introduction to the preparation and reactions of α-diazocarbonyl compounds, with particular emphasis on the areas relating to the research undertaken: C-H insertion, addition to aromatics, and oxonium ylide generation and rearrangement. A short summary of catalyst development illustrates the importance of rhodium(II) carboxylates for α-diazocarbonyl decomposition. Chapter 2 describes intramolecular C-H insertion reactions of α-diazo-β-keto sulphones to form substituted cyclopentanones. Rhodium(II) carboxylates derived from homochiral carboxylic acids were used as catalysts in these reactions, and the enantioselection achieved through their use is discussed. Chapter 3 describes intramolecular Buchner cyclisation of aryl diazoketones, with emphasis on the stereochemical aspects of the cyclisation and subsequent reaction of the bicyclo[5.3.0]decatrienones produced. The partial asymmetric synthesis achieved through use of chiral rhodium(II) carboxylates as catalysts is discussed. The application of the intramolecular Buchner reaction to the synthesis of hydroazulene lactones is illustrated. Chapter 4 demonstrates oxonium ylide formation and rearrangement in the decomposition of an α-diazoketone. The consequences of the use of chiral rhodium(II) carboxylates as catalysts are described. Particularly significant was the discovery that rhodium(II) (S)-mandelate acts as a very efficient catalyst for α-diazoketone decompositions in general. Moderate asymmetric induction was possible in the decomposition of α-diazoketones with chiral rhodium(II) carboxylates, with rhodium(II) (S)-mandelate being one of the more enantioselective catalysts investigated. However, the asymmetric induction obtained was very dependent on the exact structure of the α-diazoketone, the catalyst, and the nature of the reaction. Chapter 5 contains the experimental details, and the spectral and analytical data for all new compounds reported.

Relevance: 10.00%

Abstract:

Let A be a self-adjoint operator on a Hilbert space. It is well known that A admits a unique decomposition into a direct sum of three self-adjoint operators A(p), A(ac) and A(sc) such that there exists an orthonormal basis of eigenvectors for the operator A(p), the operator A(ac) has purely absolutely continuous spectrum, and the operator A(sc) has purely singular continuous spectrum. We show the existence of a natural further decomposition of the singular continuous component A(sc) into a direct sum of two self-adjoint operators A(sc)(D) and A(sc)(ND). The corresponding subspaces and spectra are called decaying and purely non-decaying singular subspaces and spectra. Similar decompositions are also shown for unitary operators and for general normal operators.
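
In symbols, the decompositions described above read as follows (the notation is illustrative):

```latex
% Decompositions described in the abstract (notation illustrative).
\[
A \;=\; A_{\mathrm{p}} \oplus A_{\mathrm{ac}} \oplus A_{\mathrm{sc}},
\qquad
\mathcal{H} \;=\; \mathcal{H}_{\mathrm{p}} \oplus \mathcal{H}_{\mathrm{ac}} \oplus \mathcal{H}_{\mathrm{sc}},
\]
\[
A_{\mathrm{sc}} \;=\; A_{\mathrm{sc}}^{\mathrm{D}} \oplus A_{\mathrm{sc}}^{\mathrm{ND}}
\quad\text{(decaying and purely non-decaying singular parts).}
\]
```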

Relevance: 10.00%

Abstract:

Matrix algorithms are important in many types of applications, including image and signal processing. A close examination of the algorithms used in these and related applications reveals that many of the fundamental actions involve matrix algorithms such as matrix multiplication. This paper presents an investigation into the design and implementation of different matrix algorithms, such as matrix operations, matrix transforms and matrix decompositions, using a novel custom coprocessor system for MATrix algorithms based on Reconfigurable Computing (RCMAT). The proposed RCMAT architectures are scalable and modular, and they require less area, lower time complexity and reduced latency when compared with existing structures.

Relevance: 10.00%

Abstract:

This paper explores relationships between classical and parametric measures of graph (or network) complexity. Classical measures are based on vertex decompositions induced by equivalence relations. Parametric measures, on the other hand, are constructed by using information functions to assign probabilities to the vertices. The inequalities established in this paper relating classical and parametric measures lay a foundation for systematic classification of entropy-based measures of graph complexity.
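
To make the parametric construction concrete, a tiny sketch using a degree-based information function is given below; this particular choice is only illustrative, whereas the paper treats general information functions:

```python
import math
from collections import defaultdict

# Tiny sketch of an entropy-based (parametric) graph complexity measure using
# a degree-based information function.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]

degree = defaultdict(int)
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

# Information function f(v) = deg(v); probabilities p(v) = f(v) / sum_w f(w).
total = sum(degree.values())
p = {v: d / total for v, d in degree.items()}

# Parametric (entropy-based) complexity: H = -sum_v p(v) log2 p(v).
H = -sum(pv * math.log2(pv) for pv in p.values())
print(f"degree-based graph entropy: {H:.4f} bits")
```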