25 results for scale free network
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
Random scale-free networks have the peculiar property of being prone to the spreading of infections. Here we provide, for the susceptible-infected-susceptible model, an exact result showing that a scale-free degree distribution with diverging second moment is a sufficient condition for a null epidemic threshold in unstructured networks with either assortative or disassortative mixing. Degree correlations are therefore irrelevant for the epidemic spreading picture in these scale-free networks. The present result is related to the divergence of the average nearest-neighbor degree, enforced by the degree detailed balance condition.
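For context, the standard heterogeneous mean-field estimate for uncorrelated networks, which the exact result above extends to networks with degree correlations, can be stated as follows (the symbols P(k), ⟨k⟩, ⟨k²⟩, γ are standard notation, not taken from the abstract):

```latex
% Mean-field SIS epidemic threshold for uncorrelated networks:
\[
  \lambda_c = \frac{\langle k \rangle}{\langle k^2 \rangle},
  \qquad
  P(k) \sim k^{-\gamma},\; 2 < \gamma \le 3
  \;\Longrightarrow\;
  \langle k^2 \rangle \to \infty
  \;\Longrightarrow\;
  \lambda_c \to 0 .
\]
```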
Abstract:
Uncorrelated random scale-free networks are useful null models to check the accuracy and the analytical solutions of dynamical processes defined on complex networks. We propose and analyze a model capable of generating random uncorrelated scale-free networks with no multiple or self-connections. The model is based on the classical configuration model, with an additional restriction on the maximum possible degree of the vertices. We check numerically that the proposed model indeed generates scale-free networks with no two- and three-vertex correlations, as measured by the average degree of the nearest neighbors and the clustering coefficient of the vertices of degree k, respectively.
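A minimal sketch of the idea, assuming a power-law exponent gamma and a square-root-of-N degree cutoff as illustrative choices; collapsing multi-edges and self-loops after the fact is a simplification, not the paper's exact construction:

```python
# Generate a configuration-model network with a restricted maximum degree.
import random
import networkx as nx

def uncorrelated_scale_free(n, gamma=2.5, k_min=2, seed=0):
    rng = random.Random(seed)
    k_max = int(n ** 0.5)                        # assumed structural cutoff ~ sqrt(N)
    ks = list(range(k_min, k_max + 1))
    weights = [k ** (-gamma) for k in ks]        # P(k) ~ k^(-gamma) on [k_min, k_max]
    degrees = rng.choices(ks, weights=weights, k=n)
    if sum(degrees) % 2:                         # the degree sum must be even
        degrees[0] += 1
    g = nx.configuration_model(degrees, seed=seed)
    g = nx.Graph(g)                              # collapse multi-edges (simplification)
    g.remove_edges_from(nx.selfloop_edges(g))    # drop self-loops
    return g

g = uncorrelated_scale_free(10_000)
print(g.number_of_nodes(), g.number_of_edges())
```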
Abstract:
We demonstrate that the self-similarity of some scale-free networks with respect to a simple degree-thresholding renormalization scheme finds a natural interpretation in the assumption that network nodes exist in hidden metric spaces. Clustering, i.e., cycles of length three, plays a crucial role in this framework as a topological reflection of the triangle inequality in the hidden geometry. We prove that a class of hidden variable models with underlying metric spaces are able to accurately reproduce the self-similarity properties that we measured in the real networks. Our findings indicate that hidden geometries underlying these real networks are a plausible explanation for their observed topologies and, in particular, for their self-similarity with respect to the degree-based renormalization.
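A hedged sketch of the degree-thresholding renormalization the abstract refers to: keep only nodes whose degree exceeds a threshold and inspect the induced subgraph. The stand-in network and the observables printed below are illustrative, not the paper's data or analysis:

```python
# Degree-thresholding renormalization on a stand-in scale-free network.
import networkx as nx

def degree_threshold_subgraph(g, k_t):
    keep = [v for v, d in g.degree() if d > k_t]
    return g.subgraph(keep).copy()

g = nx.barabasi_albert_graph(5000, 3, seed=1)    # placeholder scale-free graph
for k_t in (0, 3, 6, 12):
    sub = degree_threshold_subgraph(g, k_t)
    clust = nx.average_clustering(sub) if sub.number_of_nodes() else 0.0
    print(k_t, sub.number_of_nodes(), clust)     # size and clustering of renormalized net
```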
Abstract:
Economy, and consequently trade, is a fundamental part of human social organization which, until now, has not been studied within the network modeling framework. Here we present the first, to the best of our knowledge, empirical characterization of the world trade web, that is, the network built upon the trade relationships between different countries in the world. This network displays the typical properties of complex networks, namely, scale-free degree distribution, the small-world property, a high clustering coefficient, and, in addition, degree-degree correlation between different vertices. All these properties make the world trade web a complex network, which is far from being well described through a classical random network description.
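The topological measures mentioned above are standard; a minimal sketch of how they could be computed with networkx on a placeholder graph (the actual world-trade-web data are not included here):

```python
# Standard characterization measures for a complex network.
import networkx as nx

g = nx.barabasi_albert_graph(200, 4, seed=2)     # placeholder for the trade network

degree_dist = nx.degree_histogram(g)                       # degree distribution
clustering = nx.average_clustering(g)                      # clustering coefficient
path_len = nx.average_shortest_path_length(g)              # small-world check
assortativity = nx.degree_assortativity_coefficient(g)     # degree-degree correlations

print(clustering, path_len, assortativity, len(degree_dist) - 1)
```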
Abstract:
We analyze the process of informational exchange through complex networks by measuring network efficiencies. Aiming to study nonclustered systems, we propose a modification of this measure on the local level. We apply this method to an extension of the class of small worlds that includes declustered networks and show that they are locally quite efficient, although their clustering coefficient is practically zero. Unweighted systems with small-world and scale-free topologies are shown to be both globally and locally efficient. Our method is also applied to characterize weighted networks. In particular we examine the properties of underground transportation systems of Madrid and Barcelona and reinterpret the results obtained for the Boston subway network.
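For reference, a sketch of the standard global and local efficiency measures this line of work builds on; the paper's modified local measure for nonclustered systems and the weighted-network analysis are not reproduced here:

```python
# Global and local efficiency of small-world and scale-free example graphs.
import networkx as nx

g_sw = nx.watts_strogatz_graph(500, 6, 0.1, seed=3)   # small-world example
g_sf = nx.barabasi_albert_graph(500, 3, seed=3)       # scale-free example

for name, g in [("small-world", g_sw), ("scale-free", g_sf)]:
    print(name, nx.global_efficiency(g), nx.local_efficiency(g))
```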
Abstract:
In this paper we study the reconstruction of a network topology from the values of its betweenness centrality, a measure of the influence of each of its nodes in the dissemination of information over the network. We consider a simple metaheuristic, simulated annealing, as the combinatorial optimization method to generate the network from the values of the betweenness centrality. We compare the performance of this technique when reconstructing different categories of networks (random, regular, small-world, scale-free, and clustered). We show that the method allows an exact reconstruction of small networks and leads to good topological approximations in the case of networks with larger orders. The method can be used to generate a quasi-optimal topology for a communication network from a list of the maximum allowable traffic values for each node.
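A highly simplified sketch of the reconstruction idea: simulated annealing over edge rewirings, with the cost being the squared error between the target betweenness values and those of the candidate graph. The move set, cooling schedule, and parameters below are illustrative assumptions, not the paper's exact choices:

```python
# Simulated annealing reconstruction of a small graph from betweenness values.
import math
import random
import networkx as nx

def cost(g, target):
    bc = nx.betweenness_centrality(g)
    return sum((bc[v] - target[v]) ** 2 for v in g)

def reconstruct(target, n, m, steps=2000, t0=1e-3, seed=4):
    rng = random.Random(seed)
    g = nx.gnm_random_graph(n, m, seed=seed)      # random initial topology
    c = cost(g, target)
    for step in range(steps):
        t = t0 * (1 - step / steps)               # linear cooling schedule
        h = g.copy()
        u, v = rng.choice(list(h.edges()))        # move: rewire one edge
        h.remove_edge(u, v)
        a, b = rng.sample(list(h.nodes()), 2)
        if h.has_edge(a, b):
            continue
        h.add_edge(a, b)
        c_new = cost(h, target)
        if c_new < c or rng.random() < math.exp(-(c_new - c) / max(t, 1e-12)):
            g, c = h, c_new                       # accept improving or lucky moves
    return g

true_g = nx.erdos_renyi_graph(20, 0.2, seed=5)
target = nx.betweenness_centrality(true_g)
rec = reconstruct(target, true_g.number_of_nodes(), true_g.number_of_edges())
print(cost(rec, target))                          # residual error of the reconstruction
```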
Abstract:
We compare rain event size distributions derived from measurements in climatically different regions, which we find to be well approximated by power laws of similar exponents over broad ranges. Differences can be seen in the large-scale cutoffs of the distributions. Event duration distributions suggest that the scale-free aspects are related to the absence of characteristic scales in the meteorological mesoscale.
Abstract:
We present the derivation of the continuous-time equations governing the limit dynamics of discrete-time reaction-diffusion processes defined on heterogeneous metapopulations. We show that, when a rigorous time limit is performed, the lack of an epidemic threshold in the spread of infections is not limited to metapopulations with a scale-free architecture, as had been predicted from dynamical equations in which reaction and diffusion occur sequentially in time.
Abstract:
The Aitchison vector space structure for the simplex is generalized to a Hilbert space structure A2(P) for distributions and likelihoods on arbitrary spaces. Central notions of statistics, such as information or likelihood, can be identified in the algebraic structure of A2(P) together with their corresponding notions in compositional data analysis, such as the Aitchison distance or the centered log-ratio transform. In this way, elaborate aspects of mathematical statistics can be understood easily in the light of a simple vector space structure and of compositional data analysis. For example, combinations of statistical information, such as Bayesian updating, combination of likelihoods, and robust M-estimation functions, are simple additions/perturbations in A2(Pprior). Weighting observations corresponds to a weighted addition of the corresponding evidence. Likelihood-based statistics for general exponential families turns out to have a particularly easy interpretation in terms of A2(P). Regular exponential families form finite-dimensional linear subspaces of A2(P), and they correspond to finite-dimensional subspaces formed by their posteriors in the dual information space A2(Pprior). The Aitchison norm can be identified with mean Fisher information. The closing constant itself is identified with a generalization of the cumulant function and shown to be the Kullback-Leibler directed information. Fisher information is the local geometry of the manifold induced by the A2(P) derivative of the Kullback-Leibler information, and the space A2(P) can therefore be seen as the tangential geometry of statistical inference at the distribution P. The discussion of A2(P)-valued random variables, such as estimation functions or likelihoods, gives a further interpretation of Fisher information as the expected squared norm of evidence and a scale-free understanding of unbiased reasoning.
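For reference, the standard finite-dimensional Aitchison-geometry definitions that the A2(P) construction above generalizes to arbitrary spaces (stated here for a D-part composition x in the simplex; notation is standard, not taken from the abstract):

```latex
% Geometric mean g, centered log-ratio transform clr, Aitchison distance d_A,
% and perturbation (the simplex analogue of vector addition):
\[
  g(x) = \Bigl(\textstyle\prod_{i=1}^{D} x_i\Bigr)^{1/D},
  \qquad
  \operatorname{clr}(x) = \Bigl(\ln\tfrac{x_1}{g(x)}, \dots, \ln\tfrac{x_D}{g(x)}\Bigr),
\]
\[
  d_A(x, y) = \bigl\lVert \operatorname{clr}(x) - \operatorname{clr}(y) \bigr\rVert_2,
  \qquad
  (x \oplus y)_i = \frac{x_i\, y_i}{\sum_{j=1}^{D} x_j\, y_j}.
\]
```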
Abstract:
The formation of a hollow cellular sphere is often one of the first steps of multicellular embryonic development. In the case of Hydra, the sphere breaks its initial symmetry to form a foot-head axis. During this process a gene, ks1, is increasingly expressed in localized cell domains whose size distribution becomes scale-free at the axis-locking moment. We show that a physical model based solely on the production and exchange of ks1-promoting factors among neighboring cells robustly reproduces the scaling behavior as well as the experimentally observed spontaneous and temperature-directed symmetry breaking.
Abstract:
We develop a theoretical approach to percolation in random clustered networks. We find that, although clustering in scale-free networks can strongly affect some percolation properties, such as the size and the resilience of the giant connected component, it cannot restore a finite percolation threshold. In turn, this implies the absence of an epidemic threshold in this class of networks, thus extending this result to a wide variety of real scale-free networks which show a high level of transitivity. Our findings are in good agreement with numerical simulations.
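For reference, the Molloy-Reed criterion for the unclustered, uncorrelated case, which is the baseline the clustered result above is compared to (standard notation, not taken from the abstract):

```latex
% A giant component exists in an uncorrelated random network when
\[
  \frac{\langle k^2 \rangle}{\langle k \rangle} > 2 ,
  \qquad
  P(k) \sim k^{-\gamma},\; 2 < \gamma \le 3
  \;\Longrightarrow\;
  \langle k^2 \rangle \to \infty
  \;\Longrightarrow\;
  \text{percolation threshold} \to 0 .
\]
```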
Abstract:
We study a model where agents, located in a social network, decide whether to exert effort or not in experimenting with a new technology (or acquiring a new skill, innovating, etc.). We assume that agents have strong incentives to free ride on their neighbors' effort decisions. In the static version of the model efforts are chosen simultaneously. In equilibrium, agents exerting effort are never connected with each other and all other agents are connected with at least one agent exerting effort. We propose a mean-field dynamics in which agents choose in each period the best response to the last period's decisions of their neighbors. We characterize the equilibrium of such a dynamics and show how the pattern of free riders in the network depends on properties of the connectivity distribution.
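A hedged sketch of best-response dynamics in such a free-riding game: the payoff assumption here is that an agent's best response is to exert effort only if no neighbor does, which yields the equilibrium pattern described above (effort-exerting agents form a maximal independent set). Updates are applied sequentially for convergence, a simplification of the paper's synchronous mean-field dynamics:

```python
# Sequential best-response dynamics for a networked free-riding game.
import random
import networkx as nx

def best_response_dynamics(g, rounds=20, seed=7):
    rng = random.Random(seed)
    effort = {v: rng.random() < 0.5 for v in g}       # random initial effort profile
    for _ in range(rounds):
        changed = False
        for v in g:
            br = not any(effort[u] for u in g.neighbors(v))  # exert effort iff no neighbor does
            if br != effort[v]:
                effort[v] = br
                changed = True
        if not changed:                               # no agent wants to deviate: equilibrium
            break
    return effort

g = nx.barabasi_albert_graph(200, 2, seed=7)
effort = best_response_dynamics(g)
workers = {v for v, e in effort.items() if e}
# In equilibrium no two workers are adjacent and every free rider has a working neighbor,
# i.e. the workers form a dominating independent set.
print(len(workers), nx.is_dominating_set(g, workers))
```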
Abstract:
This paper describes a Computer-Supported Collaborative Learning (CSCL) case study in engineering education carried out within the context of a network management course. The case study shows that the use of two computing tools developed by the authors and based on Free- and Open-Source Software (FOSS) provides significant educational benefits over traditional engineering pedagogical approaches in terms of both concepts and engineering competencies acquisition. First, the Collage authoring tool guides and supports the course teacher in the process of authoring computer-interpretable representations (using the IMS Learning Design standard notation) of effective collaborative pedagogical designs. In addition, the Gridcole system supports the enactment of those designs by guiding the students throughout the prescribed sequence of learning activities. The paper introduces the goals and context of the case study, elaborates on how Collage and Gridcole were employed, describes the applied evaluation methodology, and discusses the most significant findings derived from the case study.
Abstract:
Background: Network reconstructions at the cell level are a major development in Systems Biology. However, we are far from fully exploiting their potential. Often, the incremental complexity of the pursued systems overrides experimental capabilities, or increasingly sophisticated protocols are underutilized to merely refine confidence levels of already established interactions. For metabolic networks, the currently employed confidence scoring system rates reactions discretely according to nested categories of experimental evidence or model-based likelihood. Results: Here, we propose a complementary network-based scoring system that exploits the statistical regularities of a metabolic network as a bipartite graph. As an illustration, we apply it to the metabolism of Escherichia coli. The model is adjusted to the observations to derive connection probabilities between individual metabolite-reaction pairs and, after validation, to assess the reliability of each reaction in probabilistic terms. This network-based scoring system uncovers very specific reactions that could be functionally or evolutionarily important, identifies prominent experimental targets, and enables further confirmation of modeling results. Conclusions: We foresee a wide range of potential applications at different sub-cellular or supra-cellular levels of biological interactions, given the natural bipartivity of many biological networks.
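A minimal sketch of the bipartite metabolite-reaction representation the abstract relies on; the metabolite and reaction names are hypothetical, and the paper's probabilistic scoring model itself is not reproduced here:

```python
# Represent a (toy) metabolic network as a bipartite metabolite-reaction graph.
import networkx as nx

b = nx.Graph()
b.add_nodes_from(["glc", "g6p", "f6p"], bipartite=0)    # metabolites (hypothetical names)
b.add_nodes_from(["HEX1", "PGI"], bipartite=1)          # reactions (hypothetical names)
b.add_edges_from([("glc", "HEX1"), ("g6p", "HEX1"), ("g6p", "PGI"), ("f6p", "PGI")])

print(nx.is_bipartite(b))                               # sanity check on bipartivity
metabolites = {n for n, d in b.nodes(data=True) if d["bipartite"] == 0}
print(dict(b.degree(metabolites)))                      # reactions each metabolite takes part in
```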