940 results for Graph Decomposition
Abstract:
Directed hypergraphs have been employed in problems related to propositional logic, relational databases, computational linguistics, and machine learning. They have also been used as an alternative to directed (bipartite) graphs to facilitate the study of the interactions between components of complex systems that cannot easily be modelled using binary relations alone; in this context, this kind of representation is known as a hyper-network. A directed hypergraph is a generalization of a directed graph that is especially well suited to representing many-to-many relationships. While an edge in a directed graph defines a relation between two of its nodes, a hyperedge in a directed hypergraph defines a relation between two sets of its nodes. Strong connectivity is an equivalence relation that partitions the set of nodes of a directed hypergraph, and each part defines an equivalence class known as a strongly-connected component. The study of the strongly-connected components of a directed hypergraph can help to achieve a better understanding of the structure of this kind of hypergraph when its size is considerable. For directed graphs, there are very efficient algorithms for computing the strongly-connected components of large graphs. Thanks to these algorithms, it has been found that the structure of the WWW has the shape of a "bow tie", in which more than 70% of the nodes are distributed over three large sets, one of which is a strongly-connected component. This kind of structure has also been observed in complex networks in other areas such as biology. Studies of a similar nature have not been possible for directed hypergraphs because no algorithms exist for computing the strongly-connected components of this kind of hypergraph. In this doctoral thesis, we have investigated how to compute the strongly-connected components of a directed hypergraph. Specifically, we have developed two algorithms for this problem, and we have established their correctness and their computational complexity. Both algorithms have been evaluated empirically to compare their running times. For the evaluation, we have produced a selection of randomly generated directed hypergraphs inspired by well-known random graph models such as Erdős-Rényi, Newman-Watts-Strogatz and Barabási-Albert. Several optimizations of both algorithms have been implemented and analysed in the thesis. In particular, collapsing the strongly-connected components of the directed graph that can be built by removing certain complex hyperedges from the original directed hypergraph notably improves the running times of the algorithms for several of the hypergraphs used in the evaluation. Apart from the application examples mentioned above, directed hypergraphs have also been employed in the area of knowledge representation; in particular, they have been used to compute the modules of an ontology. An ontology can be defined as a set of axioms that formally specifies a set of symbols and their relationships, while a module can be understood as a subset of the axioms of an ontology that captures all the knowledge the ontology holds about a specific set of symbols and their relationships.
In the thesis we have focused exclusively on modules computed using the technique of syntactic locality. Since ontologies can be very large, the computation of modules can facilitate their reuse and maintenance. However, analysing all the possible modules of an ontology is, in general, very costly, because the number of modules grows exponentially with the number of symbols and axioms of the ontology. Fortunately, the axioms of an ontology can be divided into partitions known as atoms. Each atom represents a maximal set of axioms that always appear together in a module. The atomic decomposition of an ontology is defined as a directed graph in which each node corresponds to an atom and each edge defines a dependency between a pair of atoms. In this thesis we introduce the concept of an "axiom dependency hypergraph", which generalizes the concept of the atomic decomposition of an ontology. A module of an ontology corresponds to a connected component in this kind of hypergraph, and an atom of an ontology to a strongly-connected component. We have adapted the implementation of our algorithms so that they also work on axiom dependency hypergraphs and can thereby compute the atoms of an ontology. To demonstrate the viability of this idea, we have incorporated our algorithms into an application that we have developed for extracting modules and computing the atomic decomposition of ontologies. We have named this application HyS, and we have studied its running times using a selection of well-known biomedical ontologies, most of them available on the NCBO BioPortal. The results of the evaluation show that the running times of HyS are much better than those of the fastest known applications.
ABSTRACT
Directed hypergraphs are an intuitive modelling formalism that has been used in problems related to propositional logic, relational databases, computational linguistics and machine learning. Directed hypergraphs are also presented as an alternative to directed (bipartite) graphs to facilitate the study of the interactions between components of complex systems that cannot naturally be modelled as binary relations. In this context, they are known as hyper-networks. A directed hypergraph is a generalization of a directed graph suitable for representing many-to-many relationships. While an edge in a directed graph defines a relation between two nodes of the graph, a hyperedge in a directed hypergraph defines a relation between two sets of nodes. Strong-connectivity is an equivalence relation that induces a partition of the set of nodes of a directed hypergraph into strongly-connected components. These components can be collapsed into single nodes; as a result, the size of the original hypergraph can be significantly reduced if the strongly-connected components have many nodes. This approach can contribute to a better understanding of how the nodes of a hypergraph are connected, in particular when the hypergraphs are large. In the case of directed graphs, there are efficient algorithms that can be used to compute the strongly-connected components of large graphs. For instance, it has been shown that the macroscopic structure of the World Wide Web can be represented as a "bow-tie" diagram where more than 70% of the nodes are distributed into three large sets, one of which is a large strongly-connected component.
This particular structure has also been observed in complex networks in other fields such as biology. Similar studies cannot be conducted on directed hypergraphs because no algorithm exists for computing the strongly-connected components of a directed hypergraph. In this thesis, we investigate ways to compute the strongly-connected components of directed hypergraphs. We present two new algorithms and we show their correctness and computational complexity. One of these algorithms is inspired by Tarjan's algorithm for directed graphs. The second algorithm follows a simple approach to compute the strongly-connected components. This approach is based on the fact that two nodes of a graph that are strongly connected can also reach the same nodes; in other words, the connected component of each node is the same. Both algorithms are empirically evaluated to compare their performance. To this end, we have produced a selection of random directed hypergraphs inspired by existing and well-known random graph models like Erdős-Rényi and Newman-Watts-Strogatz. Besides the application examples mentioned earlier, directed hypergraphs have also been employed in the field of knowledge representation. In particular, they have been used to compute the modules of an ontology. An ontology is defined as a collection of axioms that provides a formal specification of a set of terms and their relationships, and a module is a subset of an ontology that completely captures the meaning of certain terms as defined in the ontology. In particular, we focus on the modules computed using the notion of syntactic locality. As ontologies can be very large, the computation of modules facilitates the reuse and maintenance of these ontologies. Analysing all modules of an ontology, however, is in general not feasible, as the number of modules grows exponentially in the number of terms and axioms of the ontology. Nevertheless, the modules can be succinctly represented using the Atomic Decomposition of an ontology. Using this representation, an ontology can be partitioned into atoms, which are maximal sets of axioms that co-occur in every module. The Atomic Decomposition is then defined as a directed graph such that each node corresponds to an atom and each edge represents a dependency relation between two atoms. In this thesis, we introduce the notion of an axiom dependency hypergraph, which is a generalization of the atomic decomposition of an ontology. A module in the ontology corresponds to a connected component in the hypergraph, and the atoms of the ontology to the strongly-connected components. We apply our algorithms for directed hypergraphs to axiom dependency hypergraphs and, in this manner, compute the atoms of an ontology. To demonstrate the viability of this approach, we have implemented the algorithms in the application HyS, which computes the modules of ontologies and their atomic decomposition. In the thesis, we provide an experimental evaluation of HyS with a selection of large and prominent biomedical ontologies, most of which are available in the NCBO BioPortal. HyS outperforms state-of-the-art implementations in the tasks of extracting modules and computing the atomic decomposition of these ontologies.
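The "simple approach" sketched in this abstract can be illustrated in a few lines of Python. The sketch below is illustrative only: the hyperedge model (a tail set pointing to a head set) and the forward B-reachability rule are common conventions for directed hypergraphs rather than the thesis's exact definitions, and the all-pairs comparison of reachable sets deliberately ignores the optimizations (such as collapsing components of an auxiliary graph) that the thesis analyses.

```python
# Illustrative sketch: strongly-connected components of a directed
# hypergraph obtained by comparing reachable sets. Hyperedges are
# (tail_set, head_set) pairs; a hyperedge "fires" only once its whole
# tail has been reached (forward B-reachability). These conventions are
# assumptions made for the example, not the thesis's definitions.

def b_reachable(hyperedges, source):
    reached = {source}
    changed = True
    while changed:
        changed = False
        for tail, head in hyperedges:
            if set(tail) <= reached and not set(head) <= reached:
                reached |= set(head)
                changed = True
    return reached

def strongly_connected_components(nodes, hyperedges):
    reach = {v: b_reachable(hyperedges, v) for v in nodes}
    components, assigned = [], set()
    for v in nodes:
        if v in assigned:
            continue
        # u and v are strongly connected when each can reach the other.
        comp = {u for u in nodes if u in reach[v] and v in reach[u]}
        components.append(comp)
        assigned |= comp
    return components

# ({'a'}, {'b'}) is an ordinary arc; ({'b', 'c'}, {'a'}) needs both
# 'b' and 'c' before it contributes 'a' to the reachable set.
nodes = {'a', 'b', 'c'}
hyperedges = [({'a'}, {'b'}), ({'b'}, {'a'}), ({'b', 'c'}, {'a'})]
print(strongly_connected_components(nodes, hyperedges))
# two components: {'a', 'b'} and {'c'}
```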
Abstract:
"UILU-ENG 77 1715."
Abstract:
Necessary conditions for the complete graph on n vertices to have a decomposition into 5-cubes are that 5 divides n - 1 and 80 divides n(n - 1)/2. These are known to be sufficient when n is odd. We prove them also sufficient for n even, thus completing the spectrum problem for the 5-cube and lending further weight to a long-standing conjecture of Kotzig. (c) 2005 Wiley Periodicals, Inc.
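A quick Python check of the two stated divisibility conditions (nothing beyond the abstract is assumed) lists the admissible orders n:

```python
# Necessary conditions for K_n to decompose into 5-cubes:
# 5 divides n - 1 and 80 divides n(n - 1)/2.
def admissible(n):
    return (n - 1) % 5 == 0 and (n * (n - 1) // 2) % 80 == 0

print([n for n in range(2, 400) if admissible(n)])  # [96, 161, 256, 321]
```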
Abstract:
The circulant graph S_n, where S ⊆ Z_n \ {0}, has vertex set Z_n and edge set {{x, x + s} | x ∈ Z_n, s ∈ S}. It is shown that there is a Hamilton cycle decomposition of every 6-regular circulant graph S_n in which S has an element of order n.
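As a small illustration of the definition above (the parameters chosen here are arbitrary), the edge set of a circulant graph can be generated directly:

```python
# Circulant graph on Z_n: x is joined to x + s (mod n) for each s in S.
def circulant_edges(n, S):
    return {frozenset({x, (x + s) % n}) for x in range(n) for s in S}

# A 6-regular example: S = {1, 2, 3} on Z_10; the element 1 has order n,
# the case covered by the Hamilton-decomposition result above.
edges = circulant_edges(10, {1, 2, 3})
print(len(edges))  # 30 edges, i.e. every vertex has degree 6
```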
Abstract:
As the efficiency of parallel software increases, it is becoming common to measure near-linear speedup for many applications. For a problem of size N on P processors, with the software running at O(N/P), the performance restrictions due to file I/O systems and mesh decomposition running at O(N) become increasingly apparent, especially for large P. For distributed-memory parallel systems, an additional limit to scalability results from the finite memory size available for I/O scatter/gather operations. Simple strategies developed to address the scalability of scatter/gather operations for unstructured-mesh-based applications have been extended to provide scalable mesh decomposition through the development of a parallel graph partitioning code, JOSTLE [8]. The focus of this work is directed towards the development of generic strategies that can be incorporated into the Computer Aided Parallelisation Tools (CAPTools) project.
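As a toy illustration of the O(N/P) argument (this is plain Python, not CAPTools or JOSTLE code), a block decomposition hands each of the P processes an O(N/P) slice of the N mesh entities, whereas a serial gather buffer on a single process remains O(N):

```python
# Block decomposition: contiguous index ranges of size ~N/P per process.
def block_partition(n_entities, n_procs):
    base, extra = divmod(n_entities, n_procs)
    ranges, start = [], 0
    for p in range(n_procs):
        size = base + (1 if p < extra else 0)
        ranges.append((start, start + size))
        start += size
    return ranges

print(block_partition(10, 3))  # [(0, 4), (4, 7), (7, 10)]
```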
Abstract:
This dissertation investigates the connection between spectral analysis and frame theory. When considering the spectral properties of a frame, we present a few novel results relating to the spectral decomposition. We first show that scalable frames have the property that the inner product of the scaling coefficients and the eigenvectors must equal the inverse eigenvalues. From this, we prove a similar result when an approximate scaling is obtained. We then focus on the optimization problems inherent to scalable frames by first showing that there is an equivalence between scaling a frame and optimization problems with a non-restrictive objective function. Various objective functions are considered, and an analysis of the solution type is presented. For linear objectives, we can encourage sparse scalings, and with barrier objective functions, we force dense solutions. We further consider frames in high dimensions, and derive various solution techniques. From here, we restrict ourselves to various frame classes, to add more specificity to the results. Using frames generated from distributions allows for the placement of probabilistic bounds on scalability. For discrete distributions (Bernoulli and Rademacher), we bound the probability of encountering an ONB, and for continuous symmetric distributions (uniform and Gaussian), we show that symmetry is retained in the transformed domain. We also prove several hyperplane-separation results. With the theory developed, we discuss graph applications of the scalability framework. We make a connection with graph conditioning, and show the infeasibility of the problem in the general case. After a modification, we show that any complete graph can be conditioned. We then present a modification of standard PCA (robust PCA) developed by Candès, and give some background on Electron Energy-Loss Spectroscopy (EELS). We design a novel scheme for the processing of EELS through robust PCA and least-squares regression, and test this scheme on biological samples. Finally, we take the idea of robust PCA and apply the technique of kernel PCA to perform robust manifold learning. We derive the problem and present an algorithm for its solution. There is also a discussion of the differences with RPCA that make theoretical guarantees difficult.
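As a small numerical illustration of scalability (assuming the standard notion of a scalable frame: nonnegative weights c_i such that the scaled frame operator sum_i c_i^2 f_i f_i^T equals the identity), the sketch below checks a candidate scaling; the example frame and weights are chosen for illustration only.

```python
import numpy as np

def scaled_frame_operator(F, c):
    """F: d x k matrix of frame vectors as columns; c: k scaling weights."""
    return (F * c**2) @ F.T

def is_tight_scaling(F, c, tol=1e-10):
    return np.allclose(scaled_frame_operator(F, c), np.eye(F.shape[0]), atol=tol)

# The Mercedes-Benz frame in R^2 (three unit vectors 120 degrees apart)
# is scalable with equal weights c_i = sqrt(2/3).
angles = np.pi / 2 + 2 * np.pi * np.arange(3) / 3
F = np.vstack([np.cos(angles), np.sin(angles)])
c = np.full(3, np.sqrt(2.0 / 3.0))
print(is_tight_scaling(F, c))  # True
```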
Abstract:
A temperature pause introduced in a simple single-step thermal decomposition of iron, in the presence of silver seeds formed in the same reaction mixture, gives rise to novel compact heterostructures: brick-like Ag@Fe3O4 core-shell nanoparticles. This novel method is relatively easy to implement and could contribute to overcoming the challenge of obtaining a multifunctional heteroparticle in which a noble metal is surrounded by magnetite. Structural analyses of the samples show 4 nm silver nanoparticles wrapped within compact cubic external structures of Fe oxide, with a curious rectangular shape. The magnetic properties indicate nearly superparamagnetic behavior with a weak hysteresis at room temperature. The value of the anisotropy involved makes these particles candidates for potential applications in nanomedicine.
Abstract:
Cellulose acetates with different degrees of substitution (DS, from 0.6 to 1.9) were prepared from previously mercerized linter cellulose, in a homogeneous medium, using N,N-dimethylacetamide/lithium chloride as a solvent system. The influence of different degrees of substitution on the properties of cellulose acetates was investigated using thermogravimetric analyses (TGA). Quantitative methods were applied to the thermogravimetric curves in order to determine the apparent activation energy (Ea) related to the thermal decomposition of untreated and mercerized celluloses and cellulose acetates. Ea values were calculated using Broido's method and considering dynamic conditions. Ea values of 158 and 187 kJ mol-1 were obtained for untreated and mercerized cellulose, respectively. A previous study showed that C6OH is the most reactive site for acetylation, probably due to the steric hindrance of C2 and C3. The C6OH takes part in the first step of cellulose decomposition, leading to the formation of levoglucosan and, when it is changed to C6OCOCH3, the results indicate that the mechanism of thermal decomposition changes to one with a lower Ea. A linear correlation between Ea and the DS of the acetates prepared in the present work was identified.
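As an illustration of the dynamic-condition fit (assuming the commonly quoted Broido form ln(ln(1/y)) = -Ea/(RT) + const, where y is the fraction of material not yet decomposed; the temperatures and curve below are synthetic, generated only to show that the fit recovers the slope):

```python
import numpy as np

R = 8.314  # J mol^-1 K^-1

def broido_activation_energy(T_kelvin, y):
    """Fit ln(ln(1/y)) against 1/T; Ea in kJ/mol follows from the slope."""
    x = 1.0 / np.asarray(T_kelvin, dtype=float)
    z = np.log(np.log(1.0 / np.asarray(y, dtype=float)))
    slope, _ = np.polyfit(x, z, 1)
    return -slope * R / 1000.0

# Synthetic TGA-like data built with Ea = 158 kJ/mol (the value reported
# above for untreated cellulose).
T = np.linspace(550.0, 650.0, 20)
y = np.exp(-np.exp(-158_000.0 / (R * T) + 30.0))
print(round(broido_activation_energy(T, y), 1))  # 158.0
```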
Abstract:
The thermal behavior of two polymorphic forms of rifampicin was studied by DSC and TG/DTG. The thermoanalytical results clearly showed the differences between the two crystalline forms. Polymorph I was the most thermally stable form; the DSC curve showed no fusion for this species and the thermal decomposition process occurred around 245 ºC. The DSC curve of polymorph II showed two consecutive events, an endothermic event (Tpeak = 193.9 ºC) and an exothermic event (Tpeak = 209.4 ºC), due to a melting process followed by recrystallization, which was attributed to the conversion of form II to form I. Isothermal and non-isothermal thermogravimetric methods were used to determine the kinetic parameters of the thermal decomposition process. For the non-isothermal experiments, the activation energy (Ea) was derived from the plot of log β vs 1/T, yielding values for polymorphic forms I and II of 154 and 123 kJ mol-1, respectively. In the isothermal experiments, Ea was obtained from the plot of ln t vs 1/T at a constant conversion level. The mean values found for form I and form II were 137 and 144 kJ mol-1, respectively.
Abstract:
The thermodynamic properties of the magnetic semiconductors GaMnAs and GaCrAs are studied under biaxial strain. The calculations are based on the projector augmented wave method combined with the generalized quasichemical approach to treat the disorder and composition effects. Considering the influence of biaxial strain, we find a tendency toward the suppression of binodal decomposition, mainly for GaMnAs under compressive strain. For a substrate with a lattice constant 5% smaller than that of GaAs, the solubility limit of GaMnAs increases up to 40%. Thus, strain can be a useful tool for controlling whether or not embedded nanoclusters form in magnetic semiconductors. (C) 2010 American Institute of Physics. [doi:10.1063/1.3448025]
Abstract:
The decomposition of peroxynitrite to nitrite and dioxygen at neutral pH follows complex kinetics, compared to its isomerization to nitrate at low pH. Decomposition may involve radicals or proceed by way of the classical peracid decomposition mechanism. Peroxynitrite (ONOOH/ONOO-) decomposition has been proposed to involve formation of peroxynitrate (O2NOOH/O2NOO-) at neutral pH (D. Gupta, B. Harish, R. Kissner and W. H. Koppenol, Dalton Trans., 2009, DOI: 10.1039/b905535e, see accompanying paper in this issue). Peroxynitrate is unstable and decomposes to nitrite and dioxygen. This study aimed to investigate whether O2NOO- formed upon ONOOH/ONOO- decomposition generates singlet molecular oxygen [O2 (1Δg)]. As unequivocally revealed by the measurement of monomol light emission in the near-infrared region at 1270 nm and by chemical trapping experiments, the decomposition of ONOO- or O2NOOH at neutral to alkaline pH generates O2 (1Δg) at a yield of ca. 1% and 2-10%, respectively. Characteristic light emission, corresponding to O2 (1Δg) monomolecular decay, was observed for ONOO- and for O2NOOH prepared by reaction of H2O2 with NO2BF4 and of H2O2 with NO2- in HClO4. The generation of O2 (1Δg) from ONOO- increased in a concentration-dependent manner in the range of 0.1-2.5 mM and was dependent on pH, giving a sigmoid profile with an apparent pKa around pD 8.1 (pH 7.7). Taken together, our results clearly identify the generation of O2 (1Δg) from peroxynitrate [O2NOO- -> NO2- + O2 (1Δg)] generated from peroxynitrite and also from the reactions of H2O2 with either NO2BF4 or NO2- in acidic media.
Abstract:
Due to the worldwide increase in demand for biofuels, the area cultivated with sugarcane is expected to increase. For environmental and economic reasons, an increasing proportion of the areas are being harvested without burning, leaving the residues on the soil surface. This periodic input of residues affects soil physical, chemical and biological properties, as well as plant growth and nutrition. Modeling can be a useful tool in the study of the complex interactions between the climate, residue quality, and the biological factors controlling plant growth and residue decomposition. The approach taken in this work was to parameterize the CENTURY model for the sugarcane crop, to simulate the temporal dynamics of aboveground phytomass and litter decomposition, and to validate the model with field experiment data. When studying aboveground growth, burned and unburned harvest systems were compared, as well as the effect of mineral fertilizer and organic residue applications. The simulations were performed with data from experiments with different durations, from 12 months to 60 years, in Goiana, Timbaúba and Pradópolis, Brazil; Harwood, Mackay and Tully, Australia; and Mount Edgecombe, South Africa. The differentiation of two pools in the litter, with different decomposition rates, was found to be a relevant factor in the simulations. Originally, the model had a basically unlimited layer of mulch directly available for decomposition, 5,000 g m-2. Through a parameter optimization process, the thickness of the mulch layer closer to the soil, more vulnerable to decomposition, was set to 110 g m-2. By changing the layer of mulch available for decomposition at any given time, the simulated decomposition of sugarcane residues came close to measured values (R2 = 0.93), contributing to making the CENTURY model a tool for the study of sugarcane litter decomposition patterns. The CENTURY model accurately simulated aboveground carbon stalk values (R2 = 0.76), considering burned and unburned harvest systems, plots with and without nitrogen fertilizer and organic amendment applications, in different climates and soil conditions.
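As a toy illustration of the two-pool idea (this is not the CENTURY model itself; the pool sizes and rate constants below are arbitrary), each litter pool can be treated as decaying with its own first-order rate:

```python
import numpy as np

def two_pool_remaining(t_days, m_fast, k_fast, m_slow, k_slow):
    """Residue mass remaining when fast and slow pools decay exponentially."""
    t = np.asarray(t_days, dtype=float)
    return m_fast * np.exp(-k_fast * t) + m_slow * np.exp(-k_slow * t)

# Arbitrary example: 300 g m-2 fast pool, 700 g m-2 slow pool.
print(two_pool_remaining([0, 90, 180, 365], 300.0, 0.02, 700.0, 0.002))
```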
Abstract:
In this paper a bond graph methodology is used to model incompressible fluid flows with viscous and thermal effects. The distinctive characteristic of these flows is the role of pressure, which does not behave as a state variable but as a function that must act in such a way that the resulting velocity field has zero divergence. Velocity and entropy per unit volume are used as independent variables for a single-phase, single-component flow. Time-dependent nodal values and interpolation functions are introduced to represent the flow field, from which nodal vectors of velocity and entropy are defined as state variables. The system of momentum and continuity equations coincides with the one obtained by using the Galerkin method for the weak formulation of the problem in finite elements. The integral incompressibility constraint is derived based on the integral conservation of mechanical energy. The weak formulation of the thermal energy equation is modeled with true bond graph elements in terms of nodal vectors of temperature and entropy rates, resulting in a Petrov-Galerkin method. The resulting bond graph shows the coupling between the mechanical and thermal energy domains through the viscous dissipation term. All kinds of boundary conditions are handled consistently and can be represented as generalized effort or flow sources. A procedure for causality assignment is derived for the resulting graph, satisfying the second principle of thermodynamics. (C) 2007 Elsevier B.V. All rights reserved.
Abstract:
A Fe-22.5%Cr-4.53%Ni-3.0%Mo duplex stainless steel was solution treated at 1,325 °C for 1 h, quenched in water and isothermally treated at 900 °C for 5,000 s. The crystallography of austenite was studied using the EBSD technique. Intragranular austenite particles formed from delta ferrite are shown to nucleate on inclusions, and to be subdivided into twin-related sub-particles. Intragranular austenite appears to have planar-only orientation relationships with the ferrite matrix, close to Kurdjumov-Sachs and Nishiyama-Wassermann, but not related to a conjugate direction. Samples treated at 900 °C underwent sparse formation of sigma phase and pronounced growth of elongated austenite particles, very similar to acicular ferrite.
Abstract:
This letter addresses the optimization and complexity reduction of switch-reconfigured antennas. A new optimization technique based on graph models is investigated. This technique is used to minimize the redundancy in a reconfigurable antenna structure and reduce its complexity. A graph modeling rule for switch-reconfigured antennas is proposed, and examples are presented.