49 results for Vertex Folkman Graph
Abstract:
Quantitatively assessing the importance or criticality of each link in a network is of practical value to operators, as it can help them to increase the network's resilience, provide more efficient services, or improve some other aspect of the service. Betweenness is a graph-theoretical measure of centrality that can be applied to communication networks to evaluate link importance. However, as we illustrate in this paper, the basic definition of betweenness centrality produces inaccurate estimates, as it does not take into account some aspects relevant to networking, such as the heterogeneity in link capacity or the difference between node pairs in their contribution to the total traffic. A new algorithm for discovering link centrality in transport networks is proposed in this paper. It requires only static or semi-static network and topology attributes, and yet produces estimates of good accuracy, as verified through extensive simulations. Its potential value is demonstrated by an example application, in which the simple shortest-path routing algorithm is improved in such a way that it outperforms other, more advanced algorithms in terms of blocking ratio.
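As a point of reference for the baseline measure this abstract argues against, below is a minimal sketch of how classical edge betweenness can be computed with the networkx library; the topology, link capacities, and routing weights are hypothetical illustration values, not data from the paper.

```python
# Minimal sketch of classical edge betweenness centrality, the baseline
# measure the abstract argues is insufficient for transport networks.
# The topology and capacities below are hypothetical illustration values.
import networkx as nx

G = nx.Graph()
# (u, v, capacity in Gb/s) -- heterogeneous capacities, which plain
# betweenness ignores.
links = [("A", "B", 10), ("B", "C", 10), ("A", "C", 40), ("C", "D", 40), ("B", "D", 10)]
for u, v, cap in links:
    # Use the inverse of capacity as the routing cost, so high-capacity
    # links are preferred by shortest-path routing.
    G.add_edge(u, v, capacity=cap, weight=1.0 / cap)

# Classical (topology-only) edge betweenness: ignores capacity and traffic.
plain = nx.edge_betweenness_centrality(G)

# Betweenness over capacity-aware shortest paths: still ignores how much
# traffic each node pair actually offers, which is the paper's point.
weighted = nx.edge_betweenness_centrality(G, weight="weight")

for edge in G.edges():
    print(edge, round(plain[edge], 3), round(weighted[edge], 3))
```

Comparing the two columns already shows how link capacity changes the ranking of links, which is the kind of gap the proposed estimator addresses with additional static attributes.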
Abstract:
Most network operators have considered reducing Label Switched Router (LSR) label spaces (i.e. the number of labels that can be used) as a means of simplifying the management of the underlying Virtual Private Networks (VPNs) and, hence, reducing operational expenditure (OPEX). This letter discusses the problem of reducing the label spaces in Multiprotocol Label Switching (MPLS) networks using label merging - better known as MultiPoint-to-Point (MP2P) connections. Because of their origins in IP, MP2P connections have been considered to have tree shapes with Label Switched Paths (LSPs) as branches. Due to this fact, previous works by many authors affirm that the problem of minimizing the label space using MP2P in MPLS - the Merging Problem - cannot be solved optimally with a polynomial algorithm (it is NP-complete), since it involves a hard decision problem. However, in this letter the Merging Problem is analyzed from the perspective of MPLS, and it is deduced that tree shapes in MP2P connections are irrelevant. By overriding this tree-shape consideration, it is possible to perform label merging in polynomial time. Based on how MPLS signaling works, this letter proposes an algorithm to compute the minimum number of labels using label merging: the Full Label Merging algorithm. In conclusion, we reclassify the Merging Problem as polynomial-time solvable, instead of NP-complete. In addition, simulation experiments confirm that, without the tree-branch selection problem, the label space can be reduced further.
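To illustrate the effect the letter quantifies, the hedged sketch below counts labels on a toy set of LSPs with and without MP2P label merging; it only illustrates the merging idea, not the Full Label Merging algorithm itself, and the paths are hypothetical.

```python
# Hypothetical illustration of MP2P label merging: without merging, every
# LSP consumes one label on each link it traverses; with full merging,
# all LSPs towards the same egress can share a single label per link.

# Each LSP is a list of hops (node sequence); the last node is the egress.
lsps = [
    ["A", "B", "D", "F"],
    ["C", "B", "D", "F"],
    ["E", "D", "F"],
    ["A", "B", "D", "G"],
]

def labels_without_merging(paths):
    # One label per LSP per traversed link.
    return sum(len(p) - 1 for p in paths)

def labels_with_merging(paths):
    # One label per (link, egress) pair: LSPs to the same egress merge.
    used = set()
    for p in paths:
        egress = p[-1]
        for u, v in zip(p, p[1:]):
            used.add((u, v, egress))
    return len(used)

print("no merging:", labels_without_merging(lsps))   # 11
print("MP2P merging:", labels_with_merging(lsps))    # 8
```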
Abstract:
Fault location has been studied in depth for transmission lines due to its importance in power systems. Nowadays the problem of fault location on distribution systems is receiving special attention, mainly because of power quality regulations. In this context, this paper presents an application software developed in Matlab that automatically calculates the location of a fault in a distribution power system, starting from the voltages and currents measured at the line terminal and the model data of the distribution power system. The application is based on an N-ary tree structure, which is suitable for this application due to the highly branched and non-homogeneous nature of distribution systems, and has been developed for single-phase, two-phase, two-phase-to-ground, and three-phase faults. The implemented application is tested using fault data from a real electrical distribution power system.
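A minimal sketch of the kind of N-ary tree data structure such an application relies on is given below; the node names, line parameters, and path enumeration are hypothetical and only illustrate why a tree fits a highly branched, non-homogeneous feeder.

```python
# Hypothetical sketch of an N-ary tree for a branched distribution feeder.
# Each node is a bus; each node stores the line-section data a fault-location
# routine would need (length, per-unit-length impedance).

class FeederNode:
    def __init__(self, name, length_km=0.0, z_ohm_per_km=complex(0.0, 0.0)):
        self.name = name
        self.length_km = length_km          # length of the section ending at this bus
        self.z_ohm_per_km = z_ohm_per_km    # section impedance (may differ per section)
        self.children = []                  # arbitrary number of downstream branches

    def add_child(self, child):
        self.children.append(child)
        return child

def candidate_paths(root):
    """Enumerate every substation-to-leaf path; a fault-location method
    would evaluate its distance estimate against each of these paths."""
    stack = [(root, [root])]
    while stack:
        node, path = stack.pop()
        if not node.children:
            yield path
        for child in node.children:
            stack.append((child, path + [child]))

# Tiny hypothetical feeder: one main line and two laterals.
root = FeederNode("substation")
b1 = root.add_child(FeederNode("bus1", 1.2, complex(0.21, 0.38)))
b2 = b1.add_child(FeederNode("bus2", 0.8, complex(0.21, 0.38)))
b1.add_child(FeederNode("lateral1", 0.5, complex(0.32, 0.41)))
b2.add_child(FeederNode("lateral2", 0.9, complex(0.32, 0.41)))

for path in candidate_paths(root):
    print(" -> ".join(n.name for n in path))
```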
Abstract:
The program transforms simple lines carrying information into more visual graphs, defining lanes, lane symbologies, and section division lines.
Abstract:
Motivated by the work of Mateu, Orobitg, Pérez and Verdera, who proved inequalities of the form $T_*f\lesssim M(Tf)$ or $T_*f\lesssim M^2(Tf)$ for certain singular integral operators $T$, such as the Hilbert or the Beurling transforms, we study the possibility of establishing this type of control for the Cauchy transform along a Lipschitz graph. We show that this is not possible in general, and we give a partial positive result when the graph is substituted by a Jordan curve.
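For readers unfamiliar with the notation, the LaTeX block below recalls the standard definitions behind the inequalities quoted above (truncated operator and maximal truncated operator, Hardy-Littlewood maximal function, and the Cauchy transform along a curve); the exact normalizations used in the paper may differ.

```latex
% Standard definitions behind inequalities of the form T_* f \lesssim M(Tf);
% normalizations are the usual ones and may differ from the paper's.
\[
  T_\varepsilon f(x) = \int_{|x-y|>\varepsilon} K(x,y)\, f(y)\, d\mu(y),
  \qquad
  T_* f(x) = \sup_{\varepsilon>0} \bigl| T_\varepsilon f(x) \bigr|,
\]
\[
  M f(x) = \sup_{r>0} \frac{1}{\mu\bigl(B(x,r)\bigr)} \int_{B(x,r)} |f|\, d\mu,
  \qquad
  M^2 = M \circ M,
\]
\[
  \mathcal{C}_\Gamma f(z) = \mathrm{p.v.} \int_{\Gamma} \frac{f(w)}{w-z}\, d\mathcal{H}^1(w),
  \qquad z \in \Gamma,
\]
% where \Gamma is a Lipschitz graph (or a Jordan curve) and \mathcal{H}^1 is
% the one-dimensional Hausdorff measure on \Gamma.
```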
Abstract:
In March of 2004, the Observatory of European Foreign Policy published a special monograph about Spain in Europe (1996-2004) in digital format. The objective of the monograph was to analyse Spain’s foreign policy agenda and strategy during the period of José María Aznar’s presidency. As the title suggests, one of the initial suppositions of the analysis is the Europeanization of Spanish foreign activities. Is that how it was? Did Aznar’s Spain see the world and relate to it through Brussels? The publication was well received, considering the number of visits received and above all the institutions which asked to link the publication to their web pages. Among these, the EUobserver published the introduction to the piece in English titled Aznar: thinking locally, acting in Europe (described by the EUobserver as a paper of utmost importance). The fact that the elections were held three days after the tragic events of the 11th of March dramatically increased interest in Spain and the implications for Europe. This publication is the second of its type, in this case analysing the period of the Zapatero government (2004-2008). Once again the starting premise (the Europeanization of the agenda and the methods employed) has been considered by the analysts. And once again the articles collected in this publication serve to “triangulate” the analysis. Spain and Europe are two vertices (more or less distant, in essence and in form) which the authors handle in their analysis of the case (third vertex).
Abstract:
We currently live in a world where everything revolves around new technologies, and one fundamental pillar is leisure and entertainment. This mainly encompasses the film, video game, and virtual reality industries. One of the problems these industries face is how to create the setting where the story takes place. The goal of this final-year project is to create a tool integrated into skylineEngine that creates buildings procedurally, letting the user define the aesthetics of the building by providing its floor plan and the appropriate profiles. What will be implemented is a modeling tool for designers that can create a building from a floor plan and profiles. The project will be developed on top of the building-generation module of skylineEngine, a city-modeling tool that runs on Houdini 3D, a generic platform for procedural modeling of objects. The development of this project involves:
• Studying the Houdini 3D development platform and the libraries needed to incorporate Python scripts; studying Houdini's internal data structures.
• Learning and working with the Python programming language.
• Studying the code of the article Interactive Architectural Modeling with Procedural Extrusions, by Tom Kelly and Peter Wonka, published in ACM Transactions on Graphics (2011).
• Developing algorithms to convert geometry from a face-vertex structure to a half-edge structure, and vice versa (a sketch of this conversion follows the list).
• Modifying the Java code to accept calls without a user interface and with data structures generated from Python.
• Learning how the JPype library works in order to call Java from within Python.
• Studying skylineEngine and its building-creation libraries.
• Integrating the result into skylineEngine.
• Verifying and adjusting the simulation rules and parameters for different buildings.
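As a rough illustration of the face-vertex to half-edge conversion mentioned in the list above, the sketch below builds half-edge records from an indexed face list; the record layout is hypothetical and simplified, and it assumes a closed, consistently oriented manifold mesh.

```python
# Hypothetical, simplified face-vertex -> half-edge conversion.
# Assumes a consistently oriented manifold mesh; boundary edges get twin=None.

def face_vertex_to_half_edge(faces):
    """faces: list of vertex-index loops, e.g. [[0, 1, 2], [0, 2, 3]].
    Returns a list of half-edge dicts with 'origin', 'dest', 'face', 'next',
    'twin' ('next'/'twin' stored as indices into the returned list)."""
    half_edges = []
    edge_map = {}  # (origin, destination) -> half-edge index

    for f_idx, face in enumerate(faces):
        n = len(face)
        first = len(half_edges)
        for i in range(n):
            origin, dest = face[i], face[(i + 1) % n]
            he = {"origin": origin, "dest": dest, "face": f_idx,
                  "next": first + (i + 1) % n, "twin": None}
            edge_map[(origin, dest)] = len(half_edges)
            half_edges.append(he)

    # Pair opposite half-edges: (u, v) twins with (v, u) when it exists.
    for (u, v), idx in edge_map.items():
        twin_idx = edge_map.get((v, u))
        if twin_idx is not None:
            half_edges[idx]["twin"] = twin_idx
    return half_edges

# Two triangles sharing the edge (0, 2).
mesh = [[0, 1, 2], [0, 2, 3]]
for i, he in enumerate(face_vertex_to_half_edge(mesh)):
    print(i, he)
```

The reverse direction (half-edge back to face-vertex) amounts to walking the `next` pointers of each face loop and collecting the `origin` indices.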
Abstract:
This research proposes a data model that can be used in various types of applications related to the structure of property holdings (cadastre, land registry, notaries, etc.). The model, defined in the Unified Modeling Language (UML) and implemented on a graph-oriented database management system (Neo4j), makes it possible to store and query this historical record, which can later be exploited both by desktop applications and by web services.
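A hedged sketch of how such a model could be persisted and queried with the official neo4j Python driver follows; the node labels, property names, and Cypher queries are hypothetical and are not taken from the research.

```python
# Hypothetical sketch: storing a parcel and its ownership history in Neo4j.
# Labels, properties, and queries are illustrative only.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def record_transfer(tx, parcel_id, old_owner, new_owner, date):
    # Keep the full history as accumulated TRANSFERRED relationships.
    tx.run(
        """
        MERGE (p:Parcel {id: $parcel_id})
        MERGE (a:Owner {name: $old_owner})
        MERGE (b:Owner {name: $new_owner})
        CREATE (a)-[:TRANSFERRED {date: $date}]->(p)
        MERGE (b)-[:OWNS]->(p)
        """,
        parcel_id=parcel_id, old_owner=old_owner, new_owner=new_owner, date=date,
    )

def ownership_history(tx, parcel_id):
    result = tx.run(
        "MATCH (o:Owner)-[t:TRANSFERRED]->(p:Parcel {id: $parcel_id}) "
        "RETURN o.name AS owner, t.date AS date ORDER BY t.date",
        parcel_id=parcel_id,
    )
    return [r.data() for r in result]

with driver.session() as session:
    session.execute_write(record_transfer, "parcel-42", "Alice", "Bob", "2015-03-01")
    print(session.execute_read(ownership_history, "parcel-42"))
driver.close()
```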
Abstract:
Bimodal dispersal probability distributions with characteristic distances differing by several orders of magnitude have been derived and favorably compared to observations by Nathan [Nature (London) 418, 409 (2002)]. For such bimodal kernels, we show that two-dimensional molecular dynamics computer simulations are unable to yield accurate front speeds. Analytically, the usual continuous-space random walks (CSRWs) are applied to two dimensions. We also introduce discrete-space random walks and use them to check the CSRW results (because of the inefficiency of the numerical simulations). The physical results reported are shown to predict front speeds high enough to possibly explain Reid's paradox of rapid tree migration. We also show that, for a time-ordered evolution equation, fronts are always slower in two dimensions than in one dimension, and that this difference is important both for unimodal and for bimodal kernels.
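A brief sketch of the kind of bimodal dispersal kernel involved, written as a two-component mixture of exponential jump-length distributions with characteristic distances differing by orders of magnitude, is given below; the mixture weight and distances are hypothetical illustration values, not those fitted by Nathan.

```python
# Hypothetical bimodal dispersal kernel: most jumps are short-range, a small
# fraction are long-range, with characteristic distances differing by orders
# of magnitude (illustrative values only).
import numpy as np

rng = np.random.default_rng(0)

def sample_jumps(n, p_long=0.01, d_short=50.0, d_long=10_000.0):
    """Sample n isotropic 2D jump vectors from a bimodal kernel."""
    long_range = rng.random(n) < p_long
    distances = np.where(long_range,
                         rng.exponential(d_long, n),
                         rng.exponential(d_short, n))
    angles = rng.uniform(0.0, 2.0 * np.pi, n)
    return np.column_stack((distances * np.cos(angles),
                            distances * np.sin(angles)))

jumps = sample_jumps(100_000)
print("mean jump length (m):", np.hypot(jumps[:, 0], jumps[:, 1]).mean())
```

The rare long-range component is what makes brute-force particle simulation of the front so inefficient for such kernels, which is the motivation the abstract gives for checking the CSRW results with discrete-space random walks instead.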
Abstract:
A new graph-based construction of generalized low-density codes (GLD-Tanner) with binary BCH constituents is described. The proposed family of GLD codes is optimal on block erasure channels and quasi-optimal on block fading channels. Optimality is considered in the outage probability sense. A classical GLD code for ergodic channels (e.g., the AWGN channel, the i.i.d. Rayleigh fading channel, and the i.i.d. binary erasure channel) is built by connecting bit nodes and subcode nodes via a unique random edge permutation. In the proposed construction of full-diversity GLD codes (referred to as root GLD), bit nodes are divided into 4 classes, subcodes are divided into 2 classes, and finally both sides of the Tanner graph are linked via 4 random edge permutations. The study focuses on non-ergodic channels with two states and can be easily extended to channels with 3 states or more.
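As a hedged sketch of the classical GLD construction recalled above (bit nodes connected to subcode nodes through a single random edge permutation), the code below builds such a bipartite Tanner graph; the code length, constituent subcode length, and bit-node degree are hypothetical, and the root-GLD class structure with its 4 permutations is not reproduced.

```python
# Hypothetical sketch of a classical GLD Tanner graph: N bit nodes, each of
# degree 2, connected to subcode (e.g. BCH) nodes of length n through one
# random permutation of the edge sockets. Parameters are illustrative only.
import random

random.seed(0)

N = 12          # number of bit nodes
bit_degree = 2  # each bit participates in two constituent subcodes
n = 4           # length of each constituent subcode
assert (N * bit_degree) % n == 0
num_subcodes = N * bit_degree // n

# Edge "sockets" on each side; a single random permutation pairs them.
bit_sockets = [b for b in range(N) for _ in range(bit_degree)]
subcode_sockets = [c for c in range(num_subcodes) for _ in range(n)]
random.shuffle(subcode_sockets)

tanner_edges = list(zip(bit_sockets, subcode_sockets))
for bit, subcode in tanner_edges[:8]:
    print(f"bit {bit} <-> subcode node {subcode}")
```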
Abstract:
This paper presents our investigation of the iterative decoding performance of some sparse-graph codes on block-fading Rayleigh channels. The considered code ensembles are standard LDPC codes and Root-LDPC codes, first proposed in and shown to be able to attain full transmission diversity. We study the iterative threshold performance of those codes as a function of the fading gains of the transmission channel and propose a numerical approximation of the iterative threshold versus the fading gains, for both LDPC and Root-LDPC codes. Also, we show analytically that, in the case of 2 fading blocks, the iterative threshold of Root-LDPC codes is proportional to $(\alpha_1 \alpha_2)^{-1}$, where $\alpha_1$ and $\alpha_2$ are the corresponding fading gains. From this result, the full diversity property of Root-LDPC codes immediately follows.
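A brief sketch of that final step, under the standard assumption of independent Rayleigh fading (so the squared gains are unit-mean exponential random variables) and reading the threshold as a threshold on the average SNR, is the following; the notation and constants are illustrative rather than those of the paper.

```latex
% Sketch (illustrative): a threshold proportional to (\alpha_1\alpha_2)^{-1}
% yields full diversity on a two-block block-fading channel. Assume
% independent Rayleigh fading, so G_i = \alpha_i^2 are unit-mean exponential
% random variables, and let \gamma denote the average SNR. Then
\[
  \text{decoding fails} \;\iff\; \gamma \lesssim \frac{c}{\alpha_1\alpha_2}
  \;\iff\; G_1 G_2 \lesssim \frac{c^2}{\gamma^2},
\]
\[
  P\bigl(G_1 G_2 \le t\bigr) \;=\; 1 - 2\sqrt{t}\,K_1\!\bigl(2\sqrt{t}\bigr)
  \;\sim\; t\,\log\tfrac{1}{t} \qquad (t \to 0).
\]
```

So the outage probability decays essentially as $\gamma^{-2}$ (up to a logarithmic factor), which is the full diversity order of a two-block block-fading channel.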
Abstract:
This article reports on the results of the research done towards the fully automatic merging of lexical resources. Our main goal is to show the generality of the proposed approach, which has previously been applied to merge Spanish subcategorization frame lexica. In this work we extend and apply the same technique to perform the merging of morphosyntactic lexica encoded in LMF. The experiments showed that the technique is general enough to obtain good results in these two different tasks, which is an important step towards performing the merging of lexical resources fully automatically.
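For orientation only, the sketch below shows the simplest possible form of lexicon merging (unifying the feature sets of entries keyed by lemma and part of speech); it is a generic illustration with hypothetical entries, not the technique evaluated in the article.

```python
# Hypothetical illustration of merging two morphosyntactic lexica: entries
# are keyed by (lemma, part of speech) and their feature sets are unified.

lexicon_a = {("cantar", "verb"): {"conjugation": "1st"},
             ("casa", "noun"): {"gender": "fem"}}
lexicon_b = {("cantar", "verb"): {"transitivity": "transitive"},
             ("perro", "noun"): {"gender": "masc"}}

def merge_lexica(a, b):
    # Start from a copy of lexicon A, then fold in B's entries and features.
    merged = {key: dict(feats) for key, feats in a.items()}
    for key, feats in b.items():
        merged.setdefault(key, {}).update(feats)
    return merged

for entry, features in sorted(merge_lexica(lexicon_a, lexicon_b).items()):
    print(entry, features)
```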
Abstract:
The paper presents a competence-based instructional design system and a way to provide personalized navigation of the course content. The navigation aid tool builds on the competence graph and the student model, which includes elements of uncertainty in the assessment of students. An individualized navigation graph is constructed for each student, suggesting the competences the student is best prepared to study. We use fuzzy set theory for dealing with uncertainty. The marks of the assessment tests are transformed into linguistic terms and used for assigning values to linguistic variables. For each competence, the level of difficulty and the level of knowledge of its prerequisites are calculated based on the assessment marks. Using these linguistic variables and approximate reasoning (fuzzy IF-THEN rules), a crisp category is assigned to each competence regarding its level of recommendation.
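A self-contained, hedged sketch of the kind of fuzzy IF-THEN evaluation described above follows; the membership functions, rule base, and category labels are hypothetical simplifications, not those of the system.

```python
# Hypothetical sketch of assigning a crisp recommendation category to a
# competence from two linguistic variables: difficulty and prerequisite
# knowledge. Membership functions and rules are illustrative only.

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(value):
    # Linguistic terms over a 0..10 assessment-derived scale.
    return {"low": tri(value, -1, 0, 5),
            "medium": tri(value, 2, 5, 8),
            "high": tri(value, 5, 10, 11)}

def recommend(difficulty, prereq_knowledge):
    d, k = fuzzify(difficulty), fuzzify(prereq_knowledge)
    # Mamdani-style rules: rule strength = min of antecedent memberships.
    rules = {
        "recommended":     min(d["low"], k["high"]),
        "possible":        max(min(d["medium"], k["medium"]),
                               min(d["low"], k["medium"])),
        "not_recommended": max(d["high"], k["low"]),
    }
    # Crisp category = consequent with the highest rule strength.
    return max(rules, key=rules.get), rules

category, strengths = recommend(difficulty=3.0, prereq_knowledge=8.0)
print(category, {name: round(v, 2) for name, v in strengths.items()})
```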
Abstract:
This article presents the results and conclusions of the research work carried out on software tools for representing graphs of finite-state automata. The main result of this research is the development of a new tool that draws the graph fully automatically, starting from a transition table describing the automaton in question.
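As an illustration of the kind of automatic drawing involved (not the tool developed in the article), the sketch below turns a transition table into Graphviz DOT text, which a layout engine such as `dot` can then render; the example automaton is hypothetical.

```python
# Hypothetical sketch: from a finite-state automaton transition table to
# Graphviz DOT text, which a layout engine can draw automatically.

transitions = {            # state -> {input symbol -> next state}
    "q0": {"0": "q0", "1": "q1"},
    "q1": {"0": "q2", "1": "q0"},
    "q2": {"0": "q1", "1": "q2"},
}
accepting = {"q0"}

def to_dot(table, accepting_states, start="q0"):
    lines = ["digraph automaton {", "  rankdir=LR;"]
    for state in table:
        shape = "doublecircle" if state in accepting_states else "circle"
        lines.append(f'  "{state}" [shape={shape}];')
    lines.append(f'  start [shape=point]; start -> "{start}";')
    for state, row in table.items():
        for symbol, target in row.items():
            lines.append(f'  "{state}" -> "{target}" [label="{symbol}"];')
    lines.append("}")
    return "\n".join(lines)

print(to_dot(transitions, accepting))
```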
Abstract:
Statistical computing when input/output is driven by a Graphical User Interface is considered. A proposal is made for automatic control of computational flow to ensure that only strictly required computations are actually carried out. The computational flow is modeled by a directed graph for implementation in any object-oriented programming language with symbolic manipulation capabilities. A complete implementation example is presented to compute and display frequency-based piecewise linear density estimators such as histograms or frequency polygons.
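A compact sketch of the idea of modeling the computational flow as a directed graph so that only strictly required computations are re-run is given below; the node names and the invalidation policy are hypothetical and much simpler than a full GUI-driven system.

```python
# Hypothetical sketch: a directed dependency graph with lazy recomputation.
# A node recomputes only if it is marked dirty; changing an input marks all
# downstream nodes dirty, so a GUI event triggers only the required work.

class Node:
    def __init__(self, name, func, deps=()):
        self.name, self.func, self.deps = name, func, list(deps)
        self.dirty, self.value = True, None
        self.dependents = []
        for d in self.deps:
            d.dependents.append(self)

    def invalidate(self):
        if not self.dirty:
            self.dirty = True
            for d in self.dependents:
                d.invalidate()

    def set(self, value):                 # for input/source nodes
        self.value, self.dirty = value, False
        for d in self.dependents:
            d.invalidate()

    def get(self):
        if self.dirty:
            print(f"recomputing {self.name}")
            self.value = self.func(*[d.get() for d in self.deps])
            self.dirty = False
        return self.value

# data -> bin counts -> normalized heights (a frequency polygon would reuse counts)
data = Node("data", None)
counts = Node("counts", lambda xs: {x: xs.count(x) for x in set(xs)}, [data])
heights = Node("heights", lambda c: {k: v / sum(c.values()) for k, v in c.items()}, [counts])

data.set([1, 1, 2, 3, 3, 3])
print(heights.get())      # recomputes counts and heights
print(heights.get())      # cached: nothing recomputed
```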