924 results for Graph operations
Abstract:
The eccentric connectivity index of a graph G, ξ^C, was proposed by Sharma, Goswami and Madan. It is defined as ξ^C(G) = ∑_{u ∈ V(G)} deg_G(u) ε_G(u), where deg_G(u) denotes the degree of the vertex u in G and ε_G(u) = max{d(u, x) | x ∈ V(G)}. The eccentric connectivity polynomial is a polynomial version of this topological index. In this paper, exact formulas for the eccentric connectivity polynomial of the Cartesian product, symmetric difference, disjunction and join of graphs are presented.
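As a quick illustration of the definitions above (not taken from the paper), the index and the polynomial can be computed directly for a small example graph; this is a minimal Python sketch using networkx, with the Petersen graph chosen arbitrarily.

```python
# Minimal sketch (not from the paper): the eccentric connectivity index
# xi^C(G) = sum_u deg_G(u) * eps_G(u), and the eccentric connectivity
# polynomial, commonly defined as ECP(G, x) = sum_u deg_G(u) * x^eps_G(u).
import networkx as nx

def eccentric_connectivity_index(G):
    ecc = nx.eccentricity(G)                     # eps_G(u) = max distance from u
    return sum(G.degree(u) * ecc[u] for u in G.nodes)

def eccentric_connectivity_polynomial(G, x):
    ecc = nx.eccentricity(G)
    return sum(G.degree(u) * x ** ecc[u] for u in G.nodes)

G = nx.petersen_graph()                          # arbitrary connected example
print(eccentric_connectivity_index(G))           # 60: ten vertices of degree 3, eccentricity 2
print(eccentric_connectivity_polynomial(G, 2))   # polynomial evaluated at x = 2
```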
Abstract:
A weighted Bethe graph $B$ is obtained from a weighted generalized Bethe tree by identifying each set of children with the vertices of a graph belonging to a family $F$ of graphs. The operation of identifying the root vertex of each of $r$ weighted Bethe graphs to the vertices of a connected graph $\mathcal{R}$ of order $r$ is introduced as the $\mathcal{R}$-concatenation of a family of $r$ weighted Bethe graphs. It is shown that the Laplacian eigenvalues (when $F$ has arbitrary graphs) as well as the signless Laplacian and adjacency eigenvalues (when the graphs in $F$ are all regular) of the $\mathcal{R}$-concatenation of a family of weighted Bethe graphs can be computed (in a unified way) using the stable and low computational cost methods available for the determination of the eigenvalues of symmetric tridiagonal matrices. Unlike the previous results already obtained on this topic, the more general context of families of distinct weighted Bethe graphs is herein considered.
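The computational point of this abstract is that the relevant eigenvalues reduce to those of symmetric tridiagonal matrices, for which stable, low-cost solvers exist. The sketch below only illustrates such a solver on an arbitrary invented matrix; it is not the paper's actual reduction from Bethe graphs.

```python
# Sketch only: eigenvalues of a symmetric tridiagonal matrix via the dedicated
# stable solver, checked against the dense symmetric solver.
import numpy as np
from scipy.linalg import eigh_tridiagonal, eigh

d = np.array([2.0, 3.0, 4.0, 3.0, 2.0])   # arbitrary diagonal entries
e = np.array([1.0, 0.5, 0.5, 1.0])        # arbitrary off-diagonal entries

w = eigh_tridiagonal(d, e, eigvals_only=True)     # O(n) storage, stable

T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)   # same matrix, dense form
w_dense = eigh(T, eigvals_only=True)
print(np.allclose(w, w_dense))                    # True
```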
Abstract:
Consider two graphs G and H. Let H^k[G] be the lexicographic product of H^k and G, where H^k is the lexicographic product of the graph H by itself k times. In this paper, we determine the spectra of H^k[G] and H^k when G and H are regular, and the Laplacian spectra of H^k[G] and H^k for arbitrary G and H. Particular emphasis is given to the least eigenvalue of the adjacency matrix in the case of lexicographic powers of regular graphs, and to the algebraic connectivity and the largest Laplacian eigenvalues in the case of lexicographic powers of arbitrary graphs. This approach allows the determination of the spectrum (for regular graphs) and the Laplacian spectrum (for arbitrary graphs) of huge graphs. As an example, the spectrum of the lexicographic power of the Petersen graph with a googol (that is, 10^100) of vertices is determined. The paper finishes with the extension of some well-known spectral and combinatorial invariant properties of graphs to their lexicographic powers.
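For small k the products can be built explicitly and their spectra read off by brute force, which is exactly what the paper's formulas make unnecessary for huge powers. A minimal sketch (not the paper's method), with the Petersen graph and a 4-cycle chosen as arbitrary examples:

```python
# Sketch: building H^k[G] explicitly for small graphs and computing its
# adjacency and Laplacian spectra directly (only feasible for small k).
import networkx as nx
import numpy as np
from functools import reduce

def lexicographic_power(H, k):
    return reduce(nx.lexicographic_product, [H] * k)

H = nx.petersen_graph()
G = nx.cycle_graph(4)
P = nx.lexicographic_product(lexicographic_power(H, 2), G)   # H^2[G], 400 vertices

adj_spec = np.sort(np.linalg.eigvalsh(nx.to_numpy_array(P)))
lap_spec = np.sort(nx.laplacian_spectrum(P))
print(P.number_of_nodes(), adj_spec[0], lap_spec[1], lap_spec[-1])
# least adjacency eigenvalue, algebraic connectivity, largest Laplacian eigenvalue
```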
Abstract:
A model based on graph isomorphisms is used to formalize software evolution. Step by step, we narrow the search space by an informed selection of attributes based on the current state of the art in software engineering and generate a seed solution. We then traverse the resulting space using graph isomorphisms and other set operations over the vertex sets. The new solutions preserve the desired attributes. The goal of defining an isomorphism-based search mechanism is to construct predictors of evolution that can facilitate the automation of the 'software factory' paradigm. The model allows for automation via software tools implementing the concepts.
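The abstract names two concrete primitives: isomorphism tests and set operations over vertex sets. A hypothetical sketch of those primitives follows; the two module dependency graphs and the "evolution step" vertex sets are invented purely for illustration and are not from the paper.

```python
# Hypothetical sketch: the primitive operations the abstract refers to,
# applied to two invented module dependency graphs.
import networkx as nx

seed = nx.DiGraph([("ui", "core"), ("core", "db"), ("core", "log")])
candidate = nx.DiGraph([("view", "engine"), ("engine", "store"), ("engine", "trace")])

# Isomorphism test between a seed solution and a candidate in the search space.
print(nx.is_isomorphic(seed, candidate))       # True: same dependency structure

# Set operations over the vertex sets of two evolution steps.
step1, step2 = set(seed.nodes), {"ui", "core", "db", "cache"}
print(step2 - step1)                           # modules added in the new step
print(step1 & step2)                           # modules preserved across steps
```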
Abstract:
In this paper, Bond Graphs are employed to develop a novel mathematical model of conventional switched-mode DC-DC converters valid for both continuous and discontinuous conduction modes. A unique causality bond graph model of hybrid models is suggested, with the operation of the switch and the diode represented by a Modulated Transformer with a binary input and a resistor with fixed conductance causality. The operation of the diode is controlled using an if-then function within the model. The extracted hybrid model is implemented on a Boost and a Buck converter, with their operation changing from CCM to DCM and back to CCM. The vector fields of the models show validity over a wide operating area, and comparison with simulation of the converters in PSPICE reveals high accuracy of the proposed model, with the Normalised Root Mean Square Error and the Maximum Absolute Error remaining adequately low. The model is also experimentally tested on a Buck topology.
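The sketch below is not the paper's bond-graph model; it is a plain state-equation simulation of a boost converter in which the diode is handled by an if-then rule, the same idea the abstract describes for covering CCM and DCM with one model. All component values are invented and chosen so the converter actually enters DCM.

```python
# Sketch only: switched boost converter with the diode handled by an if-then
# rule, so the same loop covers CCM and DCM. Parameters are arbitrary.
import numpy as np

Vin, L, C, R = 12.0, 100e-6, 220e-6, 50.0        # invented component values
fs, duty, dt = 20e3, 0.4, 1e-7                   # switching frequency, duty cycle, time step

iL, vC = 0.0, 0.0                                # inductor current, capacitor voltage
for tk in np.arange(0.0, 0.05, dt):
    s = 1.0 if (tk * fs) % 1.0 < duty else 0.0   # binary switch signal
    if s == 1.0:                                 # switch on: diode blocking
        diL = Vin / L
        dvC = -vC / (R * C)
    elif iL > 0.0:                               # switch off, diode conducting
        diL = (Vin - vC) / L
        dvC = (iL - vC / R) / C
    else:                                        # DCM: diode blocks, iL clamped at zero
        diL, iL = 0.0, 0.0
        dvC = -vC / (R * C)
    iL += diL * dt                               # explicit Euler update
    vC += dvC * dt

print(f"quasi-steady output approx {vC:.2f} V, inductor current approx {iL:.3f} A")
```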
Abstract:
The skewness sk(G) of a graph G = (V, E) is the smallest integer sk(G) >= 0 such that a planar graph can be obtained from G by the removal of sk(G) edges. The splitting number sp(G) of G is the smallest integer sp(G) >= 0 such that a planar graph can be obtained from G by sp(G) vertex splitting operations. The vertex deletion vd(G) of G is the smallest integer vd(G) >= 0 such that a planar graph can be obtained from G by the removal of vd(G) vertices. Regular toroidal meshes are popular topologies for the connection networks of SIMD parallel machines. The best known of these meshes is the rectangular toroidal mesh C(m) x C(n), for which the skewness, the splitting number and the vertex deletion are known. In this work we consider two related families: a triangulation T(m, n) of C(m) x C(n) in the torus, and a hexagonal mesh H(m, n), the dual of T(m, n) in the torus. It is established that sp(T(m, n)) = vd(T(m, n)) = sk(H(m, n)) = sp(H(m, n)) = vd(H(m, n)) = min{m, n} and that sk(T(m, n)) = 2 min{m, n}.
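The rectangular case mentioned above can be sanity-checked computationally: deleting one "ring" of min{m, n} vertices from C(m) x C(n) leaves a planar cylinder grid, which illustrates the upper-bound direction vd(C(m) x C(n)) <= min{m, n}. A small sketch, not from the paper, with arbitrary m and n:

```python
# Sketch: the toroidal mesh C(m) x C(n) is non-planar, but removing the
# min(m, n) vertices of one ring leaves a planar cylinder grid.
import networkx as nx

m, n = 5, 7                                          # arbitrary sizes, m <= n
torus = nx.cartesian_product(nx.cycle_graph(m), nx.cycle_graph(n))
print(nx.check_planarity(torus)[0])                  # False: torus grid is not planar

ring = [(i, j) for (i, j) in torus.nodes if j == 0]  # one ring of m vertices
cylinder = torus.copy()
cylinder.remove_nodes_from(ring)
print(len(ring), nx.check_planarity(cylinder)[0])    # 5 True
```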
Abstract:
In this paper a bond graph methodology is used to model incompressible fluid flows with viscous and thermal effects. The distinctive characteristic of these flows is the role of pressure, which does not behave as a state variable but as a function that must act in such a way that the resulting velocity field has divergence zero. Velocity and entropy per unit volume are used as independent variables for a single-phase, single-component flow. Time-dependent nodal values and interpolation functions are introduced to represent the flow field, from which nodal vectors of velocity and entropy are defined as state variables. The system of momentum and continuity equations coincides with the one obtained by using the Galerkin method for the weak formulation of the problem in finite elements. The integral incompressibility constraint is derived based on the integral conservation of mechanical energy. The weak formulation of the thermal energy equation is modeled with true bond graph elements in terms of nodal vectors of temperature and entropy rates, resulting in a Petrov-Galerkin method. The resulting bond graph shows the coupling between the mechanical and thermal energy domains through the viscous dissipation term. All kinds of boundary conditions are handled consistently and can be represented as generalized effort or flow sources. A procedure for causality assignment is derived for the resulting graph, satisfying the second principle of thermodynamics. (C) 2007 Elsevier B.V. All rights reserved.
Abstract:
This letter addresses the optimization and complexity reduction of switch-reconfigured antennas. A new optimization technique based on graph models is investigated. This technique is used to minimize the redundancy in a reconfigurable antenna structure and reduce its complexity. A graph modeling rule for switch-reconfigured antennas is proposed, and examples are presented.
Abstract:
Current theoretical thinking about dual processes in recognition relies heavily on the measurement operations embodied within the process dissociation procedure. We critically evaluate the ability of this procedure to support this theoretical enterprise. We show that there are alternative processes that would produce a rough invariance in familiarity (a key prediction of the dual-processing approach) and that the process dissociation procedure does not have the power to differentiate between these alternative possibilities. We also show that attempts to relate parameters estimated by the process dissociation procedure to subjective reports (remember-know judgments) cannot differentiate between alternative dual-processing models and that there are problems with some of the historical evidence and with obtaining converging evidence. Our conclusion is that more specific theories incorporating ideas about representation and process are required.
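For readers unfamiliar with the measurement operations being critiqued, the standard process dissociation estimates (the usual independence-model equations, not anything introduced by this paper) fit in a few lines. The inclusion and exclusion rates below are invented example numbers.

```python
# Standard process dissociation estimates (independence model), not specific
# to this paper: P(inclusion) = R + (1 - R) * F and P(exclusion) = (1 - R) * F,
# so R = P(inclusion) - P(exclusion) and F = P(exclusion) / (1 - R).
def process_dissociation(p_inclusion, p_exclusion):
    recollection = p_inclusion - p_exclusion
    familiarity = p_exclusion / (1.0 - recollection) if recollection < 1.0 else float("nan")
    return recollection, familiarity

R, F = process_dissociation(0.75, 0.25)   # invented example rates
print(f"R = {R:.2f}, F = {F:.2f}")        # R = 0.50, F = 0.50
```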
Abstract:
We introduced a spectral clustering algorithm based on the bipartite graph model for the Manufacturing Cell Formation problem in [Oliveira S, Ribeiro JFF, Seok SC. A spectral clustering algorithm for manufacturing cell formation. Computers and Industrial Engineering. 2007 [submitted for publication]]. It constructs two similarity matrices; one for parts and one for machines. The algorithm executes a spectral clustering algorithm on each separately to find families of parts and cells of machines. The similarity measure in the approach utilized limited information between parts and between machines. This paper reviews several well-known similarity measures which have been used for Group Technology. Computational clustering results are compared by various performance measures. (C) 2008 The Society of Manufacturing Engineers. Published by Elsevier Ltd. All rights reserved.
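The following is a generic illustration of the spectral step on a machine-machine similarity matrix built from a small invented machine-part incidence matrix; it is not the authors' algorithm or any of the similarity measures reviewed in the paper.

```python
# Generic spectral-clustering sketch (not the authors' method): machines that
# share many parts end up in the same cell. The incidence matrix is invented.
import numpy as np

# rows = machines, columns = parts (1 = machine processes the part)
A = np.array([[1, 1, 0, 0, 0],
              [1, 1, 1, 0, 0],
              [0, 0, 0, 1, 1],
              [0, 0, 1, 1, 1]])

S = A @ A.T                        # machine-machine similarity: number of shared parts
np.fill_diagonal(S, 0)             # ignore self-similarity
L = np.diag(S.sum(axis=1)) - S     # Laplacian of the weighted similarity graph
_, vecs = np.linalg.eigh(L)
fiedler = vecs[:, 1]               # eigenvector of the second-smallest eigenvalue
cells = (fiedler > 0).astype(int)  # sign split gives two machine cells
print(cells)                       # e.g. [0 0 1 1] (labels may be swapped)
```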
Abstract:
The elevated plus-maze is a device widely used to assess rodent anxiety under the effect of several treatments, including pharmacological agents. The animal is placed at the center of the apparatus, which consists of two open arms and two arms enclosed by walls, and the number of entries and duration of stay in each arm are measured for a 5-min exposure period. The effect of an anxiolytic drug is to increase the percentage of time spent and number of entries into the open arms. In this work, we propose a new measure of anxiety levels in the rat submitted to the elevated plus-maze. We represented the spatial structure of the elevated plus-maze in terms of a directed graph and studied the statistics of the rat's transitions between the nodes of the graph. By counting the number of times each transition is made and ordering them in descending frequency we represented the rat's behavior in a rank-frequency plot. Our results suggest that the curves obtained under different pharmacological conditions can be well fitted by a power law with an exponent sensitive to both the drug type and the dose used. (C) 2009 Elsevier B.V. All rights reserved.
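The counting-and-ranking step described above is generic and easy to sketch. The transition sequence below is invented, and the power-law exponent is fitted by simple least squares in log-log coordinates; this is only an illustration of the procedure, not the authors' data or fitting method.

```python
# Sketch of the rank-frequency analysis described above, on an invented
# sequence of visited maze nodes (graph vertices).
import numpy as np
from collections import Counter

visits = ["C", "O1", "C", "E1", "C", "O1", "C", "E2", "E2", "C", "O2", "C", "E1", "C"]
transitions = Counter(zip(visits, visits[1:]))     # count each directed transition

freqs = np.array(sorted(transitions.values(), reverse=True), dtype=float)
ranks = np.arange(1, len(freqs) + 1)

slope, intercept = np.polyfit(np.log(ranks), np.log(freqs), 1)
print(f"fitted power-law exponent approx {-slope:.2f}")
```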
Abstract:
Spleen removal may be recommended during organ transplantation in ABO-incompatible recipients as well as for hypoperfusion of the grafted liver, besides conventional surgical indications, but elevation of serum lipids has been observed in certain contexts. Aiming to analyze the influence of two dietary regimens on lipid profile, an experimental study was conducted. Methods: Male Wistar rats (n = 86, 333.0 +/- 32.2 g) were divided into four groups: group 1: controls; group 2: sham operation; group 3: total splenectomy; group 4: subtotal splenectomy with upper pole preservation. Subgroups A (cholesterol-reducing chow) and B (cholesterol-rich mixture) were established, and the diets were given for 90 days. Total cholesterol (Tchol), high-density lipoprotein (HDL), low-density lipoprotein (LDL), very-low-density lipoprotein (VLDL), and triglycerides were documented. Results: After total splenectomy, hyperlipidemia ensued with cholesterol-reducing chow. Tchol, LDL, VLDL, triglycerides, and HDL changed from 56.4 +/- 9.2, 24.6 +/- 4.7, 9.7 +/- 2.2, 48.6 +/- 11.1, and 22.4 +/- 4.3 mg/dL to 66.9 +/- 11.4, 29.9 +/- 5.9, 10.9 +/- 2.3, 54.3 +/- 11.4, and 26.1 +/- 5.1 mg/dL, respectively. Upper pole preservation inhibited abnormalities of Tchol, HDL, VLDL, and triglycerides, and LDL decreased (23.6 +/- 4.9 vs. 22.1 +/- 5.1, P = 0.002). Higher concentrations were triggered by splenectomy and the cholesterol-enriched diet (Tchol 59.4 +/- 10.1 vs. 83.9 +/- 14.3 mg/dL, P = 0.000), and upper pole preservation diminished without abolishing hyperlipidemia (Tchol 55.9 +/- 10.0 vs. 62.3 +/- 7.8, P = 0.002). Conclusions: After splenectomy, hyperlipidemia occurred with both diets. Preservation of the upper pole tended to correct dyslipidemia in modality A and to attenuate it in subgroup B. (c) 2008 Wiley-Liss, Inc. Microsurgery 29:154-160, 2009.
Abstract:
Transanal access is one of many currently used procedures for rectal cancer treatment. The techniques used for local excision include conventional transanal excision, posterior access, therapeutic colonoscopy and transanal endoscopic approaches. The aim of the present study was to present a new surgical proctoscope for the endoscopic transanal excision of rectal lesions. A cylindrical proctoscope with a diameter of 4 cm was devised and built. The end inserted into the anus has a bevelled aspect and rounded borders, allowing correct exposure of the anal lesion. The proctoscope is fixed to the anal border with surgical thread through perforations in the external end. A base screw holds a fibre-light which illuminates the operative field. Part of the equipment is a guide which is positioned inside the proctoscope on insertion into the anus. In operations utilizing this proctoscope, 17 adenomas, 25 adenocarcinomas, 1 carcinoid and 1 endometrioma were excised. The diameter of the lesions varied from 1 to 6 cm. The range of procedures that are possible with this new proctoscope is similar to that achieved with conventional techniques, which, however, require more expensive equipment. Hence, the present study demonstrates that this newly devised low-cost proctoscope is an efficient tool for the transanal endoscopic excision of rectal lesions.