50 results for Graph-theoretical descriptors
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
Quantitatively assessing the importance or criticality of each link in a network is of practical value to operators, as it can help them increase the network's resilience, provide more efficient services, or improve other aspects of the service. Betweenness is a graph-theoretical measure of centrality that can be applied to communication networks to evaluate link importance. However, as we illustrate in this paper, the basic definition of betweenness centrality produces inaccurate estimations because it does not take into account aspects relevant to networking, such as the heterogeneity in link capacity or the differences between node pairs in their contribution to the total traffic. This paper proposes a new algorithm for discovering link centrality in transport networks. It requires only static or semi-static network and topology attributes, yet produces estimations of good accuracy, as verified through extensive simulations. Its potential value is demonstrated by an example application, in which the simple shortest-path routing algorithm is improved so that it outperforms other, more advanced algorithms in terms of blocking ratio.
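The capacity- and traffic-aware weighting the abstract alludes to can be illustrated with a minimal sketch (not the paper's actual algorithm): score each link by the demand routed over it along shortest paths, normalized by the link's capacity. The topology, demands, and capacities below are hypothetical.

```python
from collections import deque

def shortest_path(adj, s, t):
    """Unweighted shortest path via BFS (hop count)."""
    prev = {s: None}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)
    path = []
    while t is not None:
        path.append(t)
        t = prev[t]
    return path[::-1]

def link_criticality(adj, demand, capacity):
    """Accumulate each node pair's traffic on the links of its shortest
    path, normalized by link capacity (a crude stand-in for the paper's
    method, which uses further network attributes)."""
    score = {e: 0.0 for e in capacity}
    for (s, t), d in demand.items():
        p = shortest_path(adj, s, t)
        for u, v in zip(p, p[1:]):
            e = tuple(sorted((u, v)))
            score[e] += d / capacity[e]
    return score

# Hypothetical 4-node chain 0-1-2-3 with uniform capacities
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
capacity = {(0, 1): 10.0, (1, 2): 10.0, (2, 3): 10.0}
demand = {(0, 3): 5.0, (1, 2): 2.0}
scores = link_criticality(adj, demand, capacity)
# link (1, 2) carries both demands, so it scores highest
```

Unlike plain betweenness, unevenly distributed demands or heterogeneous capacities immediately shift the ranking of links here.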
Abstract:
We survey the main theoretical aspects of models for Mobile Ad Hoc Networks (MANETs). We present theoretical characterizations of mobile network structural properties, different dynamic graph models of MANETs, and finally we give detailed summaries of a few selected articles. In particular, we focus on articles dealing with connectivity of mobile networks, and on articles which show that mobility can be used to propagate information between nodes of the network while at the same time maintaining small transmission distances, and thus saving energy.
Abstract:
We present a computer-assisted analysis of combinatorial properties of the Cayley graphs of certain finitely generated groups: Given a group with a finite set of generators, we study the density of the corresponding Cayley graph, that is, the least upper bound for the average vertex degree (i.e., the number of incident edges) of any finite subgraph. It is known that an m-generated group is amenable if and only if the density of the corresponding Cayley graph equals 2m. We test amenable and non-amenable groups, and also groups for which amenability is unknown. In the latter class we focus on Richard Thompson’s group F.
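The density criterion can be checked numerically for a group whose amenability is known. The sketch below (an illustration, not the paper's computation) measures the average vertex degree of balls in the Cayley graph of Z^2 with the standard two generators; since Z^2 is amenable and m = 2, the average degree should approach 2m = 4 as the ball grows.

```python
def ball_z2(r):
    """Ball of radius r in the Cayley graph of Z^2 (standard generators)."""
    return {(x, y)
            for x in range(-r, r + 1)
            for y in range(-r, r + 1)
            if abs(x) + abs(y) <= r}

def average_degree(vertices):
    """Average vertex degree of the subgraph induced on `vertices`."""
    gens = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    ends = 0  # counts each induced edge twice, once per endpoint
    for (x, y) in vertices:
        for (dx, dy) in gens:
            if (x + dx, y + dy) in vertices:
                ends += 1
    return ends / len(vertices)

# average degree approaches 2m = 4 from below as the radius grows
print(average_degree(ball_z2(30)))
```

For non-amenable groups the analogous averages stay bounded away from 2m, which is what makes the density test computable in principle.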
Abstract:
The goal of this paper is to develop a model of financial intermediation and analyze the impact of various forms of taxation. The model considers, in a unified framework, various functions of banks: monitoring, transaction services, and asset transformation. Particular attention is devoted to the conditions for separability between deposits and loans. The analysis focuses on: (i) competition between banks and alternative financial arrangements (investment funds and organized security markets), (ii) regulation, and (iii) banks' monopoly power and risk-taking behavior.
Abstract:
We extend the linear reforms introduced by Pfähler (1984) to the case of dual taxes. We study the relative effect that linear dual tax cuts have on the inequality of the income distribution (a symmetrical study can be made for linear dual tax hikes). We also introduce measures of the degree of progressivity for dual taxes and show that they can be connected to the Lorenz dominance criterion. Additionally, we study the tax liability elasticity of each of the proposed reforms. Finally, by means of a microsimulation model and a large data set of taxpayers drawn from the 2004 Spanish Income Tax Return population, (1) we compare different yield-equivalent tax cuts applied to the Spanish dual income tax and (2) we investigate how much income redistribution the dual tax reform (Act 35/2006) introduced with respect to the previous tax.
Abstract:
This paper develops a simple model that can be used to estimate the effectiveness of Cohesion expenditure relative to similar but unsubsidized projects, thereby making it possible to explicitly test an important assumption that is often implicit in estimates of the impact of Cohesion policies. Some preliminary results are reported for the case of infrastructure investment in the Spanish regions.
Abstract:
Graph pebbling is a network model for studying whether or not a given supply of discrete pebbles can satisfy a given demand via pebbling moves. A pebbling move across an edge of a graph takes two pebbles from one endpoint and places one pebble at the other endpoint; the other pebble is lost in transit as a toll. It has been shown that deciding whether a supply can meet a demand on a graph is NP-complete. The pebbling number of a graph is the smallest t such that every supply of t pebbles can satisfy every demand of one pebble. Deciding if the pebbling number is at most k is Π_2^P-complete. In this paper we develop a tool, called the Weight Function Lemma, for computing upper bounds and sometimes exact values for pebbling numbers with the assistance of linear optimization. With this tool we are able to calculate the pebbling numbers of much larger graphs than was possible with previous algorithms, and much more quickly as well. We also obtain results for many families of graphs, in many cases by hand, with much simpler and remarkably shorter proofs than those given in previously existing arguments (certificates typically of size at most the number of vertices times the maximum degree), especially for highly symmetric graphs. Here we apply the Weight Function Lemma to several specific graphs, including the Petersen, Lemke, 4th weak Bruhat, Lemke squared, and two random graphs, as well as to a number of infinite families of graphs, such as trees, cycles, graph powers of cycles, cubes, and some generalized Petersen and Coxeter graphs. This partly answers a question of Pachter et al. by computing the pebbling exponent of cycles to within an asymptotically small range. It is conceivable that this method yields an approximation algorithm for graph pebbling.
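For intuition, pebbling reachability and the pebbling number can be brute-forced on very small graphs; the Weight Function Lemma exists precisely because this does not scale, so the sketch below is illustrative only.

```python
from itertools import combinations_with_replacement

def reachable(adj, conf, target):
    """Can some sequence of pebbling moves put a pebble on `target`?"""
    seen, stack = set(), [conf]
    while stack:
        c = stack.pop()
        if c[target] >= 1:
            return True
        if c in seen:
            continue
        seen.add(c)
        for u, n in enumerate(c):
            if n >= 2:                 # a move spends two pebbles...
                for v in adj[u]:       # ...delivering one, losing one
                    nc = list(c)
                    nc[u] -= 2
                    nc[v] += 1
                    stack.append(tuple(nc))
    return False

def distributions(n, t):
    """Every placement of t pebbles on n vertices."""
    for slots in combinations_with_replacement(range(n), t):
        conf = [0] * n
        for s in slots:
            conf[s] += 1
        yield tuple(conf)

def pebbling_number(adj):
    """Smallest t such that every t-pebble supply reaches every target."""
    n, t = len(adj), 1
    while not all(reachable(adj, c, v)
                  for v in range(n)
                  for c in distributions(n, t)):
        t += 1
    return t

# path on 3 vertices: pebbling number 2^(3-1) = 4
print(pebbling_number([[1], [0, 2], [1]]))
```

Each move strictly decreases the pebble count, so the search space is finite, but the number of distributions explodes combinatorially, which motivates the linear-optimization bounds of the paper.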
Abstract:
This work analyzes the steric contribution of molecules to their chemical and physical properties through the evaluation of their volume and their similarity measure, hereafter defined as first-order molecular descriptors. The difference between these two concepts is clarified: while the volume is the magnitude of the space occupied by the molecule as a global entity, the similarity measure gives an idea of how the electron density is distributed throughout this volume, and better reflects the existing local differences. The use of several approximations for obtaining both values is analyzed on different classes of isomers.
Abstract:
This article defines new three-dimensional indexes for the description of molecules, based on parameters derived from Molecular Similarity Theory, on the Euclidean distances between atoms, and on the effective atomic charges. These indexes, called 3D indexes, have been applied to the study of the structure-property relationships of a family of hydrocarbons, and have shown a much more accurate description of three properties of the family (boiling point, melting point and density) than the classical 2D indexes.
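As a rough illustration of a distance-based 3D descriptor in this spirit (not the article's exact definition), one can sum the Euclidean distances over all atom pairs, optionally weighting each pair by effective atomic charges; the charge weighting here is a hypothetical stand-in.

```python
import math
from itertools import combinations

def distance_index_3d(coords, charges=None):
    """Sum of pairwise Euclidean interatomic distances; the optional
    charge weighting loosely mimics the use of effective atomic
    charges described above (illustrative, not the article's index)."""
    total = 0.0
    for i, j in combinations(range(len(coords)), 2):
        d = math.dist(coords[i], coords[j])
        w = charges[i] * charges[j] if charges is not None else 1.0
        total += w * d
    return total

# three collinear dummy atoms one unit apart: distances 1 + 2 + 1 = 4
index = distance_index_3d([(0, 0, 0), (1, 0, 0), (2, 0, 0)])
```

Unlike 2D (topological) indexes, such a descriptor distinguishes conformers and geometric isomers, which is what gives 3D indexes their extra descriptive power.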
Abstract:
Our purpose is to provide a set-theoretical frame for clustering fuzzy relational data, based on the cardinality of the fuzzy subsets that represent objects and their complements, without applying any crisp property. From this perspective we define a family of fuzzy similarity indexes, which includes a set of fuzzy indexes introduced by Tolias et al., and we analyze under which conditions a fuzzy proximity relation is defined. Following an original idea due to S. Miyamoto, we evaluate the similarity between objects and features by means of the same mathematical procedure. Joining these concepts and methods, we establish an algorithm for clustering fuzzy relational data. Finally, we present an example to clarify the whole process.
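A minimal example of a cardinality-based fuzzy similarity index in this spirit (one simple member of such a family; the paper's family is more general and also involves the complements of the fuzzy subsets):

```python
def fuzzy_similarity(a, b):
    """Fuzzy Jaccard index: cardinality of the min-intersection over
    the cardinality of the max-union, where the cardinality of a
    fuzzy set is the sum of its membership degrees."""
    inter = sum(min(x, y) for x, y in zip(a, b))
    union = sum(max(x, y) for x, y in zip(a, b))
    return inter / union if union else 1.0

# two objects described by membership degrees over three features
s = fuzzy_similarity([1.0, 0.5, 0.0], [0.5, 0.5, 0.5])
```

Note that no crisp thresholding is applied: the index is computed directly from membership degrees, matching the set-theoretical frame described above.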
Abstract:
This final-year project falls within the HRIMAC project (Herramienta de Recuperación de Imágenes Mamográficas por Análisis de Contenido), started in 2003 and funded by the Ministerio de Ciencia y Tecnología and FEDER funds. The HRIMAC project involves the Universitat de Girona, the Universitat Ramon Llull and specialists from the Hospital Josep Trueta in Girona. This project aims to be a tool for testing different feature-extraction methods that are useful when retrieving cases from the HRIMAC database. The characterization of lesions according to their shape has been studied, discussed, analyzed and implemented. Different shape descriptors have been evaluated in order to determine which are best suited to dealing with mammographic lesions.
Abstract:
This paper presents an application of the Multi-Scale Integrated Analysis of Societal and Ecosystem Metabolism (MuSIASEM) approach to the estimation of quantities of Gross Value Added (GVA) referring to economic entities defined at different scales of study. The method first estimates benchmark values of the pace of GVA generation per hour of labour across economic sectors. These values are estimated as intensive variables (e.g. €/hour) by dividing the various sectorial GVA of the country (expressed in € per year) by the hours of paid work in that same sector per year. This assessment is obtained using data referring to national statistics (top-down information referring to the national level). Then, the approach uses bottom-up information (the number of hours of paid work in the various economic sectors of an economic entity, e.g. a city or a province, operating within the country) to estimate the amount of GVA produced by that entity. This estimate is obtained by multiplying the number of hours of work in each sector in the economic entity by the benchmark value of GVA generation per hour of work of that particular sector (national average). This method is applied and tested on two different socio-economic systems: (i) Catalonia (considered level n) and Barcelona (considered level n-1); and (ii) the region of Lima (considered level n) and Lima Metropolitan Area (considered level n-1). In both cases, the GVA per year of the local economic entity (Barcelona and Lima Metropolitan Area) is estimated and the resulting value is compared with GVA data provided by statistical offices. The empirical analysis seems to validate the approach, even though the case of Lima Metropolitan Area indicates a need for additional care when dealing with the estimate of GVA in primary sectors (agriculture and mining).
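The two-step estimate described above reduces to simple arithmetic: benchmark = national GVA / national hours per sector, then local GVA = Σ benchmark × local hours. The sketch below uses made-up sector figures purely to show the mechanics.

```python
# Top-down (level n): hypothetical national GVA (EUR/year) and paid
# hours (hours/year) per sector, used to derive benchmark paces.
national_gva = {"industry": 500e9, "services": 900e9}
national_hours = {"industry": 10e9, "services": 30e9}
benchmark = {s: national_gva[s] / national_hours[s]   # EUR per hour
             for s in national_gva}

# Bottom-up (level n-1): hypothetical paid hours in a local entity
# (e.g. a city), multiplied by the national benchmark of its sector.
local_hours = {"industry": 0.5e9, "services": 2.0e9}
local_gva = sum(benchmark[s] * local_hours[s] for s in local_hours)
# the estimate is then compared with the statistical office's GVA figure
```

The validation step in the paper amounts to comparing `local_gva` against the officially reported value for the same entity and year.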
Abstract:
HEMOLIA (a project under the European Community's 7th Framework Programme) is a new-generation Anti-Money Laundering (AML) intelligent multi-agent alert and investigation system which, in addition to traditional financial data, makes extensive use of modern society's huge telecom data source, thereby opening up a new dimension of capabilities to all money-laundering fighters (FIUs, LEAs) and financial institutions (banks, insurance companies, etc.). This Master's thesis project was done at AIA, one of the partners of the HEMOLIA project in Barcelona. The objective of this thesis is to find the clusters in a network drawn from the financial data. An extensive literature survey has been carried out, and several standard network algorithms have been studied and implemented. The clustering problem is NP-hard, and algorithms such as K-Means and hierarchical clustering have been applied to problems in sociology, evolution, anthropology, etc. However, these algorithms have certain drawbacks that limit their applicability. The thesis proposes (a) a possible improvement to the K-Means algorithm, (b) a novel approach to the clustering problem using genetic algorithms, and (c) a new algorithm for finding the cluster of a node using a genetic algorithm.
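For reference, the K-Means baseline the thesis starts from can be sketched in a few lines (plain Lloyd's algorithm; the thesis's improvement and genetic-algorithm variants are not reproduced here, and the data points are made up):

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    """Coordinate-wise mean of a non-empty list of points."""
    return tuple(sum(xs) / len(pts) for xs in zip(*pts))

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's algorithm: assign each point to its nearest
    center, recompute centers as cluster means, repeat."""
    centers = random.Random(seed).sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: dist2(p, centers[c]))
            clusters[i].append(p)
        centers = [mean(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# two well-separated groups of points
centers, clusters = kmeans([(0, 0), (0, 1), (10, 10), (10, 11)], 2)
```

The sensitivity to the random initial centers visible in the first line of `kmeans` is one of the drawbacks the thesis's genetic-algorithm approach is meant to address.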