936 results for "Real 3G networks"
Abstract:
Complex networks have recently attracted a significant amount of research attention due to their ability to model real-world phenomena. One important problem often encountered is to limit diffusive processes spreading over the network, for example mitigating the spread of pandemic disease or computer viruses. A number of problem formulations have been proposed that aim to solve such problems based on desired network characteristics, such as the size of the largest network component remaining after node removal. The recently formulated critical node detection problem aims to remove a small subset of vertices from the network such that the residual network has minimum pairwise connectivity. Unfortunately, the problem is NP-hard, and the number of constraints is cubic in the number of vertices, making very large-scale instances impossible to solve with traditional mathematical programming techniques. Even approximation strategies such as dynamic programming and evolutionary algorithms are unusable for networks that contain thousands to millions of vertices. A computationally efficient and simple approach is required in such circumstances, but none currently exists. In this thesis, such an algorithm is proposed. The methodology is based on a depth-first search traversal of the network and a specially designed ranking function that considers information local to each vertex. Due to the variety of network structures, a number of characteristics must be taken into consideration and combined into a single rank that measures the utility of removing each vertex. Since removing vertices sequentially alters the network structure, an efficient post-processing algorithm is also proposed to quickly re-rank vertices. Experiments are conducted on a range of common complex network models with varying numbers of vertices, in addition to real-world networks. The proposed algorithm, DFSH, is shown to be highly competitive and often outperforms existing strategies such as Google PageRank for minimizing pairwise connectivity.
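The pairwise connectivity minimized by the critical node detection problem is simply the number of vertex pairs that remain connected after removal. A minimal sketch using networkx, with a plain degree-based ranking standing in for the DFSH ranking function (which the abstract does not specify):

```python
# Pairwise connectivity: the number of vertex pairs joined by a path, i.e.
# the sum of |C|*(|C|-1)/2 over connected components C. The degree-based
# ranking below is a placeholder, not the thesis's DFSH ranking function.
import networkx as nx

def pairwise_connectivity(G):
    return sum(len(c) * (len(c) - 1) // 2 for c in nx.connected_components(G))

def remove_top_k(G, k):
    """Delete the k highest-degree vertices (placeholder ranking)."""
    H = G.copy()
    for _ in range(k):
        v = max(H.degree, key=lambda nd: nd[1])[0]
        H.remove_node(v)
    return H

G = nx.barabasi_albert_graph(200, 2, seed=1)
H = remove_top_k(G, 10)
print(pairwise_connectivity(G), "->", pairwise_connectivity(H))
```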
Abstract:
We explore the influence of the choice of attenuation factor on Katz centrality indices for evolving communication networks. For given snapshots of a network observed over a period of time, recently developed communicability indices aim to identify the best broadcasters and listeners in the network. In this article, we examine the sensitivity of communicability indices to the attenuation factor constraint, in relation to the spectral radius (the largest eigenvalue) of the network at any point in time, and their computation in the case of large networks. We propose relaxed communicability measures in which the spectral-radius bound on the attenuation factor is relaxed and the adjacency matrix is normalised in order to maintain the convergence of the measure. Using a vitality-based variant of both the standard and relaxed communicability indices, we examine ways of establishing the most important individuals for broadcasting and receiving messages, related to community bridging roles. We illustrate our findings with two examples of real-life networks: the MIT Reality Mining data set of daily communications among 106 individuals during one year, and a UK Twitter mentions network of direct messages among 12.4k individuals during one week.
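For reference, the standard Katz index that the attenuation-factor constraint applies to; a sketch of the textbook computation, with the convergence bound made explicit (the article's relaxed, normalised variant is not reproduced here):

```python
# Textbook Katz centrality x = (I - a*A)^{-1} 1, which converges only while
# the attenuation factor a stays below 1/rho(A), rho(A) being the spectral
# radius. This is the constraint the article proposes to relax.
import numpy as np

def katz_centrality(A, alpha):
    rho = max(abs(np.linalg.eigvals(A)))
    if alpha >= 1.0 / rho:
        raise ValueError("alpha must stay below 1/spectral-radius")
    n = A.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * A, np.ones(n))

# Toy 4-node path network; its spectral radius is about 1.618.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(katz_centrality(A, 0.3))
```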
Abstract:
Complex networks obtained from real-world systems are often characterized by incompleteness and noise, consequences of imperfect sampling as well as artifacts in the acquisition process. Because the characterization, analysis and modeling of complex systems underlain by complex networks are critically affected by the quality and completeness of the respective initial structures, it becomes imperative to devise methodologies for identifying and quantifying the effects of sampling on the network structure. One way to evaluate these effects is through an analysis of the sensitivity of complex network measurements to perturbations in the topology of the network. In this paper, measurement sensitivity is quantified in terms of the relative entropy of the respective distributions. Three particularly important kinds of progressive perturbations to the network are considered, namely edge suppression, addition and rewiring. The measurements allowing the best balance of stability (smaller sensitivity to perturbations) and discriminability (separation between different network topologies) are identified with respect to each type of perturbation. The analysis covers eight different measurements applied to six different complex network models and three real-world networks. This approach allows one to choose the appropriate measurements in order to obtain accurate results for networks where sampling bias cannot be avoided, a very frequent situation in research on complex networks.
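As a concrete instance of the sensitivity analysis described, a sketch scoring how much one measurement distribution (here the degree distribution, standing in for the paper's eight measurements) shifts under progressive edge suppression, using relative entropy:

```python
# Relative entropy (Kullback-Leibler divergence) between a measurement
# distribution on the original network and on a perturbed copy. Degree stands
# in for the paper's measurements; edge suppression is one of its three
# progressive perturbations (suppression, addition, rewiring).
import random
import networkx as nx
import numpy as np

def degree_hist(G, bins):
    h, _ = np.histogram([d for _, d in G.degree()], bins=bins)
    h = h.astype(float) + 1e-12          # smooth to avoid log(0)
    return h / h.sum()

def relative_entropy(p, q):
    return float(np.sum(p * np.log(p / q)))

G = nx.watts_strogatz_graph(500, 6, 0.1, seed=7)
bins = np.arange(0, 20)
for frac in (0.05, 0.10, 0.20):          # progressively stronger suppression
    H = G.copy()
    drop = random.Random(7).sample(list(G.edges()), int(frac * G.number_of_edges()))
    H.remove_edges_from(drop)
    print(frac, relative_entropy(degree_hist(H, bins), degree_hist(G, bins)))
```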
Abstract:
Deviations from the average can provide valuable insights about the organization of natural systems. The present article extends this important principle to the systematic identification and analysis of singular motifs in complex networks. Six measurements quantifying different and complementary features of the connectivity around each node of a network were calculated, and multivariate statistical methods were applied to identify singular nodes. The potential of the presented concepts and methodology was illustrated with respect to different types of complex real-world networks, namely the US air transportation network, the protein-protein interaction network of the yeast Saccharomyces cerevisiae, and the Roget thesaurus network. The obtained singular motifs possessed unique functional roles in the networks. Three classic theoretical network models were also investigated, with the Barabási-Albert model resulting in singular motifs corresponding to hubs, confirming the potential of the approach. Interestingly, the number of different types of singular node motifs, as well as the number of their instances, was found to be considerably higher in the real-world networks than in any of the benchmark networks.
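One simple way to operationalize the idea of singular nodes is to treat each node as a point in measurement space and flag multivariate outliers; a sketch using the Mahalanobis distance, with three placeholder features in place of the paper's six measurements:

```python
# Treat each node as a point in measurement space and flag "singular" nodes
# as multivariate outliers via the squared Mahalanobis distance. The three
# features below are placeholders for the paper's six measurements.
import networkx as nx
import numpy as np

G = nx.barabasi_albert_graph(300, 3, seed=2)
feats = np.array([[G.degree(v),
                   nx.clustering(G, v),
                   np.mean([G.degree(u) for u in G[v]])] for v in G])

centered = feats - feats.mean(axis=0)
cov_inv = np.linalg.pinv(np.cov(feats, rowvar=False))
d2 = np.einsum('ij,jk,ik->i', centered, cov_inv, centered)

print("most singular nodes:", np.argsort(d2)[-5:])   # in the BA case: hubs
```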
Abstract:
In this work, we propose a hierarchical extension of the polygonality index as the means to characterize geographical planar networks. By considering successive neighborhoods around each node, it is possible to obtain more complete information about the spatial order of the network at progressive spatial scales. The potential of the methodology is illustrated with respect to synthetic and real geographical networks.
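The successive neighbourhoods underlying the hierarchical extension are simply the rings of nodes at increasing shortest-path distance from each reference node; a minimal sketch of their extraction (the polygonality index itself is not defined in the abstract and is not reproduced):

```python
# Successive neighbourhoods ("rings") around a node: the set of nodes at
# exactly shortest-path distance d, for d = 1, 2, ... A hierarchical index
# would then be evaluated ring by ring at progressive spatial scales.
import networkx as nx

def rings(G, source, max_depth):
    dist = nx.single_source_shortest_path_length(G, source, cutoff=max_depth)
    out = {d: [] for d in range(1, max_depth + 1)}
    for node, d in dist.items():
        if d >= 1:
            out[d].append(node)
    return out

G = nx.grid_2d_graph(10, 10)   # a toy planar, geographical-like network
print(rings(G, (5, 5), 3))
```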
Abstract:
The comprehensive characterization of the structure of complex networks is essential to understand the dynamical processes which guide their evolution. The discovery of the scale-free distribution and the small-world properties of real networks was fundamental to stimulate more realistic models and to understand important dynamical processes related to network growth. However, the properties of the network borders (nodes with degree equal to 1), one of the most fragile parts of a network, have remained little investigated and understood. Border nodes may be involved in the evolution of structures such as geographical networks. Here we analyze the border trees of complex networks, defined as the subgraphs without cycles that are connected to the remainder of the network (containing cycles) and terminate at border nodes. In addition to describing an algorithm for the identification of such tree subgraphs, we also consider how their topological properties can be quantified in terms of their depth and number of leaves. We investigate the properties of border trees for several theoretical models as well as real-world networks. Among the obtained results, we found that more than half of the nodes of some real-world networks belong to border trees. A power law with cut-off was observed for the distributions of the depth and number of leaves of the border trees. An analysis of the local role of the nodes in the border trees was also performed.
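Given the definition above, border trees can be identified by iteratively pruning degree-1 nodes until only the cycle-containing core remains; everything pruned belongs to some border tree. A sketch (the abstract's own identification algorithm is not reproduced):

```python
# Iteratively remove nodes of degree <= 1; what remains is the cycle-containing
# core, and every pruned node belongs to one of the border trees.
import networkx as nx

def border_tree_nodes(G):
    H = G.copy()
    pruned = set()
    leaves = [v for v in H if H.degree(v) <= 1]
    while leaves:
        pruned.update(leaves)
        H.remove_nodes_from(leaves)
        leaves = [v for v in H if H.degree(v) <= 1]
    return pruned

G = nx.gnp_random_graph(500, 0.003, seed=3)   # sparse, with many tree appendages
n_border = len(border_tree_nodes(G))
print(n_border, "of", G.number_of_nodes(), "nodes lie in border trees")
```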
Abstract:
Complex networks exist in many areas of science such as biology, neuroscience, engineering, and sociology. The growing development of this area has led to the introduction of several topological and dynamical measurements which describe and quantify the structure of networks. Such characterization is essential not only for the modeling of real systems but also for the study of dynamical processes that may take place in them. However, it is not easy to use several measurements in the analysis of complex networks, due to the correlations between them and the difficulty of visualizing them. To overcome these limitations, we propose an effective and comprehensive approach for the analysis of complex networks which allows the visualization of several measurements in a few projections that contain the largest data variance, and the classification of networks at three levels of detail: vertices, communities, and the global topology. We also demonstrate the efficiency and universality of the proposed methods on a series of real-world networks at all three levels.
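The projection step described, compressing several correlated measurements into the few directions holding most of the variance, is essentially principal component analysis; a minimal vertex-level sketch using scikit-learn, with placeholder features:

```python
# Project a vertex-by-measurement matrix onto its leading principal
# components so correlated measurements can be inspected in a 2-D view.
# The three features are placeholders for the measurements of the paper.
import networkx as nx
import numpy as np
from sklearn.decomposition import PCA

G = nx.karate_club_graph()
bet = nx.betweenness_centrality(G)
feats = np.array([[G.degree(v), nx.clustering(G, v), bet[v]] for v in G])
feats = (feats - feats.mean(0)) / (feats.std(0) + 1e-12)   # standardize

pca = PCA(n_components=2)
proj = pca.fit_transform(feats)
print("explained variance ratio:", pca.explained_variance_ratio_)
print(proj[:5])   # coordinates of the first five vertices in the projection
```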
Abstract:
The properties of complex networks are highly influenced by border effects, frequently found as a consequence of the finite nature of real-world networks as well as of network sampling. Therefore, it becomes critical to devise effective means for sound estimation of network topological and dynamical properties while avoiding these types of artifacts. In the current work, an algorithm for the minimization of border effects is proposed and discussed, and its potential is illustrated with respect to two real-world networks, namely bone canals and air transportation.
Abstract:
The capacitated redistricting problem (CRP) aims to redefine, under a given criterion, an initial set of districts of an urban area represented by a geographic network. Each node in the network has different types of demands and each district has a limited capacity. Real-world applications consider more than one criterion in the design of the districts, leading to a multicriteria CRP (MCRP). Examples are found in political districting, sales design, street sweeping, garbage collection and mail delivery. This work addresses the MCRP applied to power meter reading, with two criteria considered: compactness and homogeneity of districts. The proposed solution framework is based on a greedy randomized adaptive search procedure (GRASP) and multicriteria scalarization techniques to approximate the Pareto frontier. The computational experiments show the effectiveness of the method on a set of randomly generated networks and on a real-world network extracted from the city of São Paulo.
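For readers unfamiliar with the metaheuristic, a generic GRASP skeleton of the kind the framework builds on (greedy randomized construction from a restricted candidate list, then local search); the districting-specific moves and the multicriteria scalarization are not reproduced:

```python
# Generic GRASP: repeat (greedy randomized construction -> local search) and
# keep the best solution. The cost function and the omitted local search are
# toy stubs, not the paper's districting operators or scalarized criteria.
import random

def grasp_select(items, cost, k, iters=50, alpha=0.3, seed=0):
    rng, best, best_cost = random.Random(seed), None, float("inf")
    for _ in range(iters):
        pool, sol = sorted(items, key=cost), []
        while len(sol) < k:
            rcl = pool[:max(1, int(alpha * len(pool)))]   # restricted candidate list
            pick = rng.choice(rcl)
            pool.remove(pick)
            sol.append(pick)
        # a problem-specific local search would improve `sol` here
        c = sum(cost(x) for x in sol)
        if c < best_cost:
            best, best_cost = sol, c
    return best, best_cost

print(grasp_select(range(100), cost=lambda x: (37 * x) % 101, k=5))
```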
Abstract:
Wavelength-routed networks (WRNs) are very promising candidates for next-generation Internet and telecommunication backbones. In such networks, optical-layer protection is of paramount importance due to the risk of losing large amounts of data under a failure. To protect against this risk, service providers usually provision a pair of risk-independent working and protection paths for each optical connection. However, the investment made in optical-layer protection increases network cost. To reduce capital expenditure, service providers need to utilize their network resources efficiently. Among the existing approaches, shared-path protection has proven to be practical and cost-efficient [1]. In shared-path protection, several protection paths can share a wavelength on a fiber link if their working paths are risk-independent. In real-world networks, provisioning is usually implemented without knowledge of future network resource utilization; as connections are added and deleted, network utilization becomes sub-optimal. Reconfiguration, i.e., re-provisioning the existing connections, is an attractive solution to close the gap between the current network utilization and its optimal value [2]. In this paper, we propose a new shared-protection-path reconfiguration approach. Unlike some previous reconfiguration approaches that alter the working paths, our approach changes only the protection paths; it therefore does not interfere with ongoing services on the working paths and is risk-free. Previous studies have verified the benefits arising from the reconfiguration of existing connections [2] [3] [4]. Most of them aim at minimizing the total number of used wavelength-links or ports. However, this objective does not directly relate to cost saving, because minimizing total network resource consumption does not necessarily maximize the capability of accommodating future connections. As a result, service providers may still need to pay for early network upgrades. Alternatively, our proposed shared-protection-path reconfiguration approach is based on a load-balancing objective, which minimizes the network load distribution vector (LDV, see Section 2). This new objective is designed to postpone network upgrades, thus bringing extra cost savings to service providers. In other words, by using the new objective, service providers can establish as many connections as possible before network upgrades are required, resulting in increased revenue. We develop a heuristic load-balancing (LB) reconfiguration approach based on this new objective and compare its performance with an approach previously introduced in [2] and [4], whose objective is minimizing total network resource consumption.
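The sharing rule stated in the abstract reduces to a disjointness test between working paths; a minimal sketch, with plain link-disjointness standing in for full risk independence (real deployments would also consider shared-risk link groups):

```python
# Shared-path protection: two protection paths may share a wavelength on a
# common fiber link only if their working paths cannot fail together. Plain
# link-disjointness stands in here for full risk independence.
def links(path):
    """Path given as a node list -> set of undirected links."""
    return {frozenset(edge) for edge in zip(path, path[1:])}

def can_share_wavelength(working_a, working_b):
    return links(working_a).isdisjoint(links(working_b))

w1, p1 = ["A", "B", "C"], ["A", "X", "Y", "C"]   # connection 1: working, protection
w2, p2 = ["E", "B", "F"], ["E", "X", "Y", "F"]   # both protections use link X-Y
print(can_share_wavelength(w1, w2))   # True: the working paths share no link,
                                      # so p1 and p2 may share a wavelength on X-Y
```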
Abstract:
The proliferation of wireless sensor networks and the variety of envisioned applications associated with them have motivated the development of distributed algorithms for collaborative processing over networked systems. One application that has attracted the attention of researchers is target localization, where the nodes of the network try to estimate the position of an unknown target that lies within its coverage area. Particularly challenging is the problem of estimating the target's position from received signal strength indicator (RSSI) readings, due to the nonlinear relationship between the measured signal and the true position of the target. Many of the existing approaches suffer either from high computational complexity (e.g., particle filters) or from lack of accuracy. Further, many of the proposed solutions are centralized, which makes their application to a sensor network questionable. Depending on the application at hand, and from a practical perspective, it can be convenient to find a balance between localization accuracy and complexity. In this direction, we approach the maximum likelihood location estimation problem by solving a suboptimal (and more tractable) problem. One of the main advantages of the proposed scheme is that it allows for a decentralized implementation using distributed processing tools (e.g., consensus and convex optimization) and is therefore very suitable for implementation in real sensor networks. If further accuracy is needed, an additional refinement step can be performed around the found solution. Under the assumption of independent noise among the nodes, such a local search can be done in a fully distributed way using a distributed version of the Gauss-Newton method based on consensus. Regardless of the underlying application or function of the sensor network, it is always necessary to have a mechanism for data reporting. While some approaches use a special kind of node (called sink nodes) for data harvesting and forwarding to the outside world, there are scenarios where such an approach is impractical or even impossible to deploy. Further, such sink nodes become a bottleneck in terms of traffic flow and power consumption. To overcome these issues, instead of using sink nodes for data reporting, one can use collaborative beamforming techniques to forward the generated data directly to a base station or gateway to the outside world. In a distributed environment like a sensor network, nodes cooperate in order to form a virtual antenna array that can exploit the benefits of multi-antenna communications. In collaborative beamforming, nodes synchronize their phases so that their transmissions add constructively at the receiver. One of the inconveniences associated with collaborative beamforming techniques is that there is no control over the radiation pattern, since it is treated as a random quantity. This may cause interference to other coexisting systems and fast battery depletion at the nodes. Since energy efficiency is a major design issue, we consider the development of a distributed collaborative beamforming scheme that maximizes the network lifetime while meeting a quality of service (QoS) requirement at the receiver side. Using local information about battery status and channel conditions, we derive distributed algorithms that converge to the optimal centralized beamformer.
While in the first part we consider only the battery depletion due to communications beamforming, we extend the model to account for more realistic scenarios by introducing an additional random energy consumption. It is shown how the new problem generalizes the original one and under which conditions it is easily solvable. By formulating the problem from the energy-efficiency perspective, the network lifetime is significantly improved.
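Several of the distributed tools the thesis relies on reduce to average consensus over the communication graph; a minimal sketch of the classic linear iteration with Metropolis weights (a standard choice, not necessarily the one used in the thesis):

```python
# Average consensus: each node repeatedly replaces its value with a weighted
# average over its neighbours; with Metropolis weights every value converges
# to the network-wide mean using only local communication.
import networkx as nx
import numpy as np

G = nx.connected_watts_strogatz_graph(20, 4, 0.2, seed=5)
nodes = list(G)
n = len(nodes)
W = np.zeros((n, n))
for i, u in enumerate(nodes):                    # Metropolis weight rule
    for v in G[u]:
        W[i, nodes.index(v)] = 1.0 / (1 + max(G.degree(u), G.degree(v)))
    W[i, i] = 1.0 - W[i].sum()

x = np.random.default_rng(5).normal(size=n)      # local measurements
target = x.mean()
for _ in range(200):
    x = W @ x                                    # one local exchange per step
print(target, x[:3])                             # entries approach the mean
```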
Abstract:
Wireless sensor networks are emerging as a new communication paradigm in which small, low-complexity, and low-power devices are preferred over costly centralized systems. The spectrum of potential applications of sensor networks is very wide, including monitoring, surveillance, and localization, among others. Localization is a key application in sensor networks, and the use of simple, efficient, and distributed algorithms is of paramount practical importance. Combining convex optimization tools with consensus algorithms, we propose a distributed localization algorithm for scenarios where received signal strength indicator readings are used. We approach the localization problem by formulating an alternative problem that uses distance estimates locally computed at each node. The formulated problem is solved through a relaxed version obtained via a semidefinite relaxation technique. Conditions under which the relaxed problem yields the same solution as the original problem are given, and a distributed consensus-based implementation of the algorithm is proposed based on an augmented Lagrangian approach and primal-dual decomposition methods. Although suboptimal, the proposed approach is very suitable for implementation in real sensor networks: it is scalable, robust against node failures, and requires only local communication among neighboring nodes. Simulation results show that running an additional local search around the found solution can yield performance close to the maximum likelihood estimate.
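The local distance estimates the algorithm starts from are typically obtained with a log-distance path-loss model; a sketch of that measurement model followed by a plain centralized least-squares fix (not the paper's semidefinite relaxation or its consensus-based implementation):

```python
# RSSI -> distance via a log-distance path-loss model, then a centralized
# nonlinear least-squares position fit as a baseline. P0 and ETA below are
# assumed values; this is not the paper's SDP/consensus algorithm.
import numpy as np
from scipy.optimize import least_squares

P0, ETA = -40.0, 2.5        # assumed power at 1 m (dBm) and path-loss exponent

def rssi_to_distance(rssi):
    return 10 ** ((P0 - rssi) / (10 * ETA))

anchors = np.array([[0, 0], [10, 0], [0, 10], [10, 10]], dtype=float)
target = np.array([3.0, 6.0])
true_d = np.linalg.norm(anchors - target, axis=1)
rssi = P0 - 10 * ETA * np.log10(true_d) + np.random.default_rng(0).normal(0, 1, 4)

d_hat = rssi_to_distance(rssi)
fit = least_squares(lambda p: np.linalg.norm(anchors - p, axis=1) - d_hat,
                    x0=np.array([5.0, 5.0]))
print(fit.x)                # estimate close to the true (3, 6)
```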
Abstract:
Modular organization and degree-degree correlations are ubiquitous in the connectivity structure of biological, technological, and social interacting systems. So far, most studies have concentrated on unveiling both features in real-world networks, but a model that succeeds in generating them simultaneously is needed. We consider a network of interacting phase oscillators, and an adaptation mechanism for the coupling that promotes the connection strengths between those elements that are dynamically correlated. We show that, under these circumstances, the dynamical organization of the oscillators shapes the topology of the graph in such a way that modularity and assortativity features emerge spontaneously and simultaneously. In turn, we prove that such an emergent structure is associated with an asymptotic arrangement of the collective dynamical state of the network into cluster synchronization.
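A numerical sketch of the kind of system described: Kuramoto phase oscillators whose coupling strengths are reinforced between dynamically correlated pairs (the Hebbian-like rule below is illustrative, not necessarily the paper's exact adaptation mechanism):

```python
# Kuramoto oscillators with adaptive coupling: weights grow toward the
# instantaneous phase correlation cos(theta_j - theta_i) and decay otherwise.
import numpy as np

rng = np.random.default_rng(1)
n, dt, eps = 50, 0.01, 0.1
theta = rng.uniform(0, 2 * np.pi, n)     # oscillator phases
omega = rng.normal(0, 0.5, n)            # natural frequencies
K = rng.uniform(0, 1, (n, n))
K = (K + K.T) / 2                        # symmetric coupling matrix
np.fill_diagonal(K, 0)

for _ in range(20000):
    diff = theta[None, :] - theta[:, None]         # diff[i, j] = theta_j - theta_i
    theta += dt * (omega + (K * np.sin(diff)).sum(axis=1) / n)
    K += dt * eps * (np.cos(diff) - K)             # Hebbian-like adaptation
    np.fill_diagonal(K, 0)

r = abs(np.exp(1j * theta).mean())       # order parameter: 1 means full sync
print("order parameter:", round(r, 3))
```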
Abstract:
We propose a new measure to characterize the dimension of complex networks, based on the ergodic theory of dynamical systems. This measure is derived from the correlation sum of a trajectory generated by a random walker navigating the network, and extends the classical Grassberger-Procaccia algorithm to the context of complex networks. The method is validated with reliable results for both synthetic networks and real-world networks such as the world air-transportation network or urban networks, and provides a computationally fast way of estimating the dimensionality of networks that relies only on the local information provided by the walkers.
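A sketch of a Grassberger-Procaccia-style correlation sum computed on a walker's trajectory, with shortest-path distance between visited nodes serving as the metric (one plausible reading of the construction; the paper's exact estimator may differ):

```python
# Correlation sum C(r) over a random-walk trajectory: the fraction of pairs
# of visited nodes whose shortest-path distance is below r. A power law
# C(r) ~ r^D would indicate an effective dimension D.
import random
import networkx as nx

def correlation_sum(G, steps, r, seed=0):
    rng = random.Random(seed)
    v = rng.choice(list(G))
    walk = [v]
    for _ in range(steps):
        v = rng.choice(list(G[v]))
        walk.append(v)
    close = total = 0
    for i, u in enumerate(walk):
        dist = nx.single_source_shortest_path_length(G, u, cutoff=r)
        for w in walk[i + 1:]:
            total += 1
            close += w in dist and dist[w] < r
    return close / total

G = nx.grid_2d_graph(20, 20)             # a 2-D lattice: dimension should be ~2
for r in (2, 4, 8):
    print(r, correlation_sum(G, 300, r))
```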
Abstract:
In this paper, a combined algorithm for analyzing the structural controllability and observability of complex networks is presented. The algorithm addresses the two fundamental properties that guarantee structural controllability of a system: the absence of dilations and the accessibility of all nodes. The first problem is reformulated as a maximum matching search and addressed via the Hopcroft-Karp algorithm; the second problem is solved via a new wiring algorithm. Both algorithms can be combined to efficiently determine the number of required controllers and observers, as well as the new connections required to guarantee controllability and observability in real complex networks. An application to a Twitter social network with over 100,000 nodes illustrates the proposed algorithms.
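The matching step follows the standard structural-controllability construction: in the bipartite split of a directed network, every node whose in-copy is left unmatched by a maximum matching must be driven directly. A sketch using networkx's Hopcroft-Karp implementation (the paper's wiring algorithm for accessibility is not reproduced):

```python
# Driver nodes via maximum matching: split each node v into an out-copy and an
# in-copy, add one bipartite edge per directed link, run Hopcroft-Karp; nodes
# whose in-copy is unmatched need a dedicated controller (at least one overall).
import networkx as nx
from networkx.algorithms import bipartite

def driver_nodes(D):
    B = nx.Graph()
    outs = [("out", v) for v in D]
    B.add_nodes_from(outs, bipartite=0)
    B.add_nodes_from((("in", v) for v in D), bipartite=1)
    B.add_edges_from((("out", u), ("in", v)) for u, v in D.edges())
    match = bipartite.hopcroft_karp_matching(B, top_nodes=outs)
    return [v for v in D if ("in", v) not in match] or [next(iter(D))]

D = nx.gnp_random_graph(100, 0.03, seed=4, directed=True)
print(len(driver_nodes(D)), "driver nodes out of", D.number_of_nodes())
```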