980 results for Wijsman topology


Relevance:

10.00%

Publisher:

Abstract:

The cost and complexity of deploying measurement infrastructure in the Internet for the purpose of analyzing its structure and behavior is considerable. Basic questions about the utility of increasing the number of measurements and/or measurement sites have not yet been addressed, which has led to a "more is better" approach to wide-area measurements. In this paper, we quantify the marginal utility of performing wide-area measurements in the context of Internet topology discovery. We characterize topology in terms of nodes, links, node degree distribution, and end-to-end flows using statistical and information-theoretic techniques. We classify nodes discovered on the routes between a set of 8 sources and 1277 destinations to differentiate nodes which make up the so-called "backbone" from those which border the backbone and those on links between the border nodes and destination nodes. This process includes reducing nodes that advertise multiple interfaces to single IP addresses. We show that the utility of adding sources drops significantly after the second source from the perspective of interface, node, link, and node degree discovery. We show that the utility of adding destinations is constant for interfaces, nodes, links, and node degree, indicating that it is more important to add destinations than sources. Finally, we analyze paths through the backbone and show that shared link distributions approximate a power law, indicating that a small number of backbone links in our study are very heavily utilized.
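
To make the marginal-utility idea concrete, the short sketch below (with entirely invented trace data) counts how many new nodes and links each additional measurement source contributes; the paper's actual datasets and classification pipeline are not reproduced here.

```python
# Hypothetical sketch: marginal utility of adding measurement sources.
# The trace data, node names, and counting below are invented for
# illustration; they are not the paper's dataset or tooling.

# traces[source] -> list of destination paths, each a tuple of router IDs
traces = {
    "src1": [("a", "b", "c", "d1"), ("a", "b", "e", "d2")],
    "src2": [("f", "b", "c", "d1"), ("f", "b", "e", "d2")],
    "src3": [("g", "b", "c", "d1"), ("g", "b", "e", "d2")],
}

def discovered(paths):
    """Return the sets of nodes and links seen on a collection of paths."""
    nodes, links = set(), set()
    for p in paths:
        nodes.update(p)
        links.update(zip(p, p[1:]))
    return nodes, links

seen_nodes, seen_links = set(), set()
for i, (src, paths) in enumerate(traces.items(), start=1):
    nodes, links = discovered(paths)
    new_nodes, new_links = nodes - seen_nodes, links - seen_links
    seen_nodes |= nodes
    seen_links |= links
    # Marginal utility: what the i-th source adds beyond the first i-1.
    print(f"source {i}: +{len(new_nodes)} nodes, +{len(new_links)} links")
```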

Relevance:

10.00%

Publisher:

Abstract:

Existing type systems for object calculi are based on invariant subtyping. Subtyping invariance is required for soundness of static typing in the presence of method overrides, but it often limits the expressive power of the type system. Flexibility of static typing can be recovered in different ways: in first-order systems by adopting object types with variance annotations, and in second-order systems by resorting to Self types. Type inference is known to be P-complete for first-order systems of finite and recursive object types, and NP-complete for a restricted version of Self types. The complexity of type inference for systems with variance annotations is still unknown. This paper presents a new object type system based on the notion of Split types, a form of object types where every method is assigned two types, namely an update type and a select type. The subtyping relation that arises for Split types is variant and, as a result, subtyping can be performed both in width and in depth. The new type system generalizes all the existing first-order type systems for objects, including systems based on variance annotations. Interestingly, the additional expressive power does not affect the complexity of the type inference problem, as we show by presenting an O(n^3) inference algorithm.
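
The following toy sketch illustrates, under our own simplified variance rules rather than the paper's formal system, how width and depth subtyping can coexist once each method carries separate update and select types.

```python
# Toy width + depth subtyping check over "split" object types, where each
# method carries an update type and a select type.  The variance rules used
# here (contravariant updates, covariant selects) are our own illustration
# of the general idea, not the paper's exact system.

BASE_SUB = {("Int", "Int"), ("Num", "Num"), ("Int", "Num")}  # Int <: Num

def base_sub(a, b):
    return (a, b) in BASE_SUB

def split_sub(s, t):
    """s <: t, where s and t map method names to (update_type, select_type)."""
    for name, (t_upd, t_sel) in t.items():
        if name not in s:
            return False                  # width: every method of t must be in s
        s_upd, s_sel = s[name]
        if not base_sub(t_upd, s_upd):    # depth: contravariant in the update type
            return False
        if not base_sub(s_sel, t_sel):    # depth: covariant in the select type
            return False
    return True

point3d = {"x": ("Num", "Int"), "y": ("Num", "Int"), "z": ("Num", "Int")}
point2d = {"x": ("Int", "Num"), "y": ("Int", "Num")}
print(split_sub(point3d, point2d))        # True: wider object, variant method types
```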

Relevance:

10.00%

Publisher:

Abstract:

Research on the construction of logical overlay networks has gained significance in recent times. This is partly due to work on peer-to-peer (P2P) systems for locating and retrieving distributed data objects, and to scalable content distribution using end-system multicast techniques. However, there are emerging applications that require the real-time transport of data from various sources to potentially many thousands of subscribers, each having their own quality-of-service (QoS) constraints. This paper primarily focuses on the properties of two popular topologies found in interconnection networks, namely k-ary n-cubes and de Bruijn graphs. The regular structure of these graph topologies makes it easier to analyze them and to determine possible routes for real-time data than for complete or irregular graphs. We show how these overlay topologies compare in their ability to deliver data according to the QoS constraints of many subscribers, each receiving data from specific publishing hosts. Comparisons are drawn on the ability of each topology to route data in the presence of dynamic system effects, due to end-hosts joining and departing the system. Finally, experimental results show the service guarantees and physical link stress resulting from efficient multicast trees constructed over both kinds of overlay networks.
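
As a concrete illustration of why such regular topologies are easy to route over, the sketch below builds neighbor sets and routes on a binary de Bruijn overlay; node identifiers and parameters are illustrative only.

```python
# Hypothetical sketch of routing on a binary de Bruijn overlay with 2**n nodes.
# Node ids are n-bit strings; node u has out-edges to u shifted left by one bit
# with a 0 or 1 appended, so any route needs at most n hops.  Identifiers and
# parameters are illustrative.

def de_bruijn_neighbors(u):
    """Out-neighbors of node u (an n-bit string) in the binary de Bruijn graph."""
    return [u[1:] + b for b in "01"]

def de_bruijn_route(src, dst):
    """Route from src to dst by successively shifting in dst's bits."""
    path, cur = [src], src
    for i in range(len(dst)):
        if cur == dst:
            break
        cur = cur[1:] + dst[i]            # follow the edge that appends dst[i]
        path.append(cur)
    return path

print(de_bruijn_neighbors("0110"))        # ['1100', '1101']
print(de_bruijn_route("0110", "1011"))    # at most 4 hops for 4-bit identifiers
```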

Relevance:

10.00%

Publisher:

Abstract:

The Border Gateway Protocol (BGP) is the current inter-domain routing protocol used to exchange reachability information between Autonomous Systems (ASes) in the Internet. BGP supports policy-based routing, which allows each AS to independently adopt a set of local policies that specify which routes it accepts and advertises from/to other networks, as well as which route it prefers when more than one route becomes available. However, independently chosen local policies may cause global conflicts, which result in protocol divergence. In this paper, we propose a new algorithm, called the Adaptive Policy Management Scheme (APMS), to resolve policy conflicts in a distributed manner. Akin to distributed feedback control systems, each AS independently classifies the state of the network as either conflict-free or potentially conflicting by observing its local history only (namely, route flaps). Based on the degree of measured conflicts (policy conflict-avoidance vs. conflict-control mode), each AS dynamically adjusts its own path preferences, increasing its preference for observably stable paths over flapping paths. APMS also includes a mechanism to distinguish route flaps due to topology changes, so as not to confuse them with those due to policy conflicts. A correctness and convergence analysis of APMS based on the substability property of chosen paths is presented. Implementation in the SSF network simulator is performed, and simulation results for different performance metrics are presented. The metrics capture the dynamic performance (in terms of instantaneous throughput, delay, routing load, etc.) of APMS and other competing solutions, thus exposing the often neglected aspects of performance.
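
The sketch below is a hypothetical, heavily simplified rendering of the feedback idea (flap counting, threshold-triggered preference adjustment); the threshold, the preference increments, and the class structure are our assumptions, not the APMS specification.

```python
# Heavily simplified, hypothetical rendering of the APMS feedback idea: an AS
# watches its own route flaps and, past a threshold, shifts local preference
# toward observably stable paths.  The threshold, increments, and structure
# are assumptions, not the authors' implementation.

FLAP_THRESHOLD = 3        # flaps before switching to conflict-control mode

class LocalPolicy:
    def __init__(self, paths):
        # Higher preference wins, as with BGP local preference.
        self.pref = {p: 100 for p in paths}
        self.flaps = {p: 0 for p in paths}

    def record_flap(self, path):
        self.flaps[path] += 1

    def adapt(self):
        """Demote flapping paths and promote stable ones once flapping is heavy."""
        if max(self.flaps.values()) < FLAP_THRESHOLD:
            return                        # conflict-avoidance mode: leave policy alone
        for p, f in self.flaps.items():
            self.pref[p] += -10 if f >= FLAP_THRESHOLD else 10

    def best_path(self):
        return max(self.pref, key=self.pref.get)

policy = LocalPolicy(["via_AS2", "via_AS3"])
for _ in range(4):
    policy.record_flap("via_AS2")         # the route through AS2 keeps flapping
policy.adapt()
print(policy.best_path())                 # -> 'via_AS3'
```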

Relevance:

10.00%

Publisher:

Abstract:

The Border Gateway Protocol (BGP) is the current inter-domain routing protocol used to exchange reachability information between Autonomous Systems (ASes) in the Internet. BGP supports policy-based routing, which allows each AS to independently define a set of local policies on which routes it accepts and advertises from/to other networks, as well as on which route it prefers when more than one route becomes available. However, independently chosen local policies may cause global conflicts, which result in protocol divergence. In this paper, we propose a new algorithm, called the Adaptive Policy Management Scheme (APMS), to resolve policy conflicts in a distributed manner. Akin to distributed feedback control systems, each AS independently classifies the state of the network as either conflict-free or potentially conflicting by observing its local history only (namely, route flaps). Based on the degree of measured conflicts, each AS dynamically adjusts its own path preferences, increasing its preference for observably stable paths over flapping paths. APMS also includes a mechanism to distinguish route flaps due to topology changes, so as not to confuse them with those due to policy conflicts. A correctness and convergence analysis of APMS based on the sub-stability property of chosen paths is presented. Implementation in the SSF network simulator is performed, and simulation results for different performance metrics are presented. The metrics capture the dynamic performance (in terms of instantaneous throughput, delay, etc.) of APMS and other competing solutions, thus exposing the often neglected aspects of performance.

Relevance:

10.00%

Publisher:

Abstract:

The effectiveness of service provisioning in large-scale networks is highly dependent on the number and location of service facilities deployed at various hosts. The classical, centralized approach to determining the latter would amount to formulating and solving the uncapacitated k-median (UKM) problem (if the requested number of facilities is fixed), or the uncapacitated facility location (UFL) problem (if the number of facilities is also to be optimized). Clearly, such centralized approaches require knowledge of global topological and demand information, and thus do not scale and are not practical for large networks. The key question posed and answered in this paper is the following: "How can we determine in a distributed and scalable manner the number and location of service facilities?" We propose an innovative approach in which topology and demand information is limited to neighborhoods, or balls of small radius around selected facilities, whereas demand information is captured implicitly for the remaining (remote) clients outside these neighborhoods by mapping them to clients on the edge of the neighborhood; the ball radius regulates the trade-off between scalability and performance. We develop a scalable, distributed approach that answers our key question through an iterative reoptimization of the location and the number of facilities within such balls. We show that even for small values of the radius (1 or 2), our distributed approach achieves, under various synthetic and real Internet topologies, performance comparable to that of optimal, centralized approaches requiring full topology and demand information.
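
A toy version of the ball-restricted reoptimization step might look like the following; the graph, demands, radius, and the mapping of remote demand onto border nodes are illustrative assumptions rather than the paper's algorithm.

```python
# Toy sketch of re-optimising a facility inside a small ball: topology and
# demand are only known within r hops, and clients outside the ball are mapped
# onto the ball node through which their traffic would enter.  The graph,
# demands, and radius below are illustrative assumptions.

from collections import deque

def bfs_dist(adj, src, limit=None):
    """Hop distances from src, optionally truncated at `limit` hops."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        if limit is not None and dist[u] >= limit:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def reoptimise_in_ball(adj, demand, facility, r):
    """Move the facility to the 1-median of its radius-r ball."""
    ball = set(bfs_dist(adj, facility, limit=r))
    # Map each remote client's demand onto the ball node closest to it.
    mapped = dict.fromkeys(ball, 0)
    for client, w in demand.items():
        if client in ball:
            mapped[client] += w
        else:
            d = bfs_dist(adj, client)
            border = min(ball, key=lambda b: d.get(b, float("inf")))
            mapped[border] += w
    # Exhaustively pick the ball node minimising weighted distance to demand.
    def cost(candidate):
        d = bfs_dist(adj, candidate)
        return sum(w * d.get(v, 0) for v, w in mapped.items())
    return min(ball, key=cost)

adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}   # a small path graph
demand = {1: 1, 4: 5, 5: 5}                               # most demand on the right
print(reoptimise_in_ball(adj, demand, facility=1, r=2))   # facility drifts toward 3
```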

Relevance:

10.00%

Publisher:

Abstract:

A new family of neural network architectures is presented. This family of architectures solves the problem of constructing and training minimal neural network classification expert systems by using switching theory. The primary insight that leads to the use of switching theory is that the problem of minimizing the number of rules and the number of IF statements (antecedents) per rule in a neural network expert system can be recast into the problem of minimizing the number of digital gates and the number of connections between digital gates in a Very Large Scale Integrated (VLSI) circuit. The rules that the neural network generates to perform a task are readily extractable from the network's weights and topology. Analysis and simulations on the Mushroom database illustrate the system's performance.

Relevance:

10.00%

Publisher:

Abstract:

Global biodiversity is eroding at an alarming rate, through a combination of anthropogenic disturbance and environmental change. Ecological communities are bewildering in their complexity. Experimental ecologists strive to understand the mechanisms that drive the stability and structure of these complex communities in a bid to inform nature conservation and management. Two fields of research have had high profile success at developing theories related to these stabilising structures and testing them through controlled experimentation. Biodiversity-ecosystem functioning (BEF) research has explored the likely consequences of biodiversity loss on the functioning of natural systems and the provision of important ecosystem services. Empirical tests of BEF theory often consist of simplified laboratory and field experiments, carried out on subsets of ecological communities. Such experiments often overlook key information relating to patterns of interactions, important relationships, and fundamental ecosystem properties. The study of multi-species predator-prey interactions has also contributed much to our understanding of how complex systems are structured, particularly through the importance of indirect effects and predator suppression of prey populations. A growing number of studies describe these complex interactions in detailed food webs, which encompass all the interactions in a community. This has led to recent calls for an integration of BEF research with the comprehensive study of food web properties and patterns, to help elucidate the mechanisms that allow complex communities to persist in nature. This thesis adopts such an approach, through experimentation at Lough Hyne marine reserve, in southwest Ireland. Complex communities were allowed to develop naturally in exclusion cages, with only the diversity of top trophic levels controlled. Species removals were carried out and the resulting changes to predator-prey interactions, ecosystem functioning, food web properties, and stability were studied in detail. The findings of these experiments contribute greatly to our understanding of the stability and structure of complex natural communities.

Relevance:

10.00%

Publisher:

Abstract:

Existing Building/Energy Management Systems (BMS/EMS) fail to convey holistic performance to the building manager. A 20% reduction in energy consumption can be achieved by efficiently operated buildings compared with current practice. However, in the majority of buildings, occupant comfort and energy consumption analysis is primarily restricted by the available sensor and meter data. Installation of a continuous monitoring process can significantly improve the building systems' performance. We present WSN-BMDS, an IP-based wireless sensor network building monitoring and diagnostic system. The main focus of WSN-BMDS is to obtain a much higher degree of information about the building operation than current BMSs are able to provide. Our system integrates a heterogeneous set of wireless sensor nodes with IEEE 802.11 backbone routers and the Global Sensor Network (GSN) web server. Sensing data is stored in a database at the back office via the UDP protocol and can be accessed over the Internet using GSN. Through this demonstration, we show that WSN-BMDS provides accurate measurements of air temperature, air humidity, light, and energy consumption for particular rooms in our target building. Our interactive graphical user interface provides a user-friendly environment that shows the live network topology, monitors network statistics, and supports run-time management actions on the network. We also demonstrate actuation by changing the artificial light level in one of the rooms.
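
As an illustration of the described data path (sensor node or gateway pushing a reading to the back-office database over UDP), here is a minimal hypothetical sender; the address, room name, and JSON payload are placeholders, not WSN-BMDS's actual wire format or the GSN API.

```python
# Minimal, hypothetical sketch of the data path described above: a node (or
# gateway) pushing a reading to the back-office store over UDP.  The host,
# port, room name, and JSON payload are illustrative, not WSN-BMDS's actual
# wire format or the GSN server's API.

import json
import socket
import time

BACK_OFFICE = ("192.0.2.10", 9000)       # placeholder address of the database front end

def send_reading(room, sensor, value, unit):
    payload = json.dumps({
        "room": room,
        "sensor": sensor,                 # e.g. air-temperature, air-humidity, light, energy
        "value": value,
        "unit": unit,
        "ts": time.time(),
    }).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, BACK_OFFICE)  # fire-and-forget, as UDP implies

send_reading(room="room-2.07", sensor="air-temperature", value=21.4, unit="degC")
```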

Relevance:

10.00%

Publisher:

Abstract:

Lacticin 3147, enterocin AS-48, lacticin 481, variacin, and sakacin P are bacteriocins offering promising perspectives in terms of preservation and shelf-life extension of food products and should find commercial application in the near future. The studies detailing their characterization and bio-preservative applications are reviewed. Transcriptomic analyses showed a cell wall-targeted response of Lactococcus lactis IL1403 during the early stages of infection with the lytic bacteriophage c2, which is probably orchestrated by a number of membrane stress proteins and involves D-alanylation of membrane lipoteichoic acids, restoration of the physiological proton motive force disrupted following bacteriophage infection, and energy conservation. Sequencing of the eight plasmids of L. lactis subsp. cremoris DPC3758 from raw milk cheese revealed three anti-phage restriction/modification (R/M) systems, immunity/resistance to nisin, lacticin 481, cadmium and copper, and six conjugative/mobilization regions. A food-grade derivative strain with enhanced bacteriophage resistance was generated via stacking of R/M plasmids. Sequencing and functional analysis of the four plasmids of L. lactis subsp. lactis biovar. diacetylactis DPC3901 from raw milk cheese revealed genes novel to Lactococcus and typical of bacteria associated with plants, in addition to genes associated with plant-derived lactococcal strains. The functionality of a novel high-affinity regulated system for cobalt uptake was demonstrated. The bacteriophage resistant and bacteriocin-producing plasmid pMRC01 places a metabolic burden on lactococcal hosts resulting in lowered growth rates and increased cell permeability and autolysis. The magnitude of these effects is strain dependent but not related to bacteriocin production. Starters’ acidification capacity is not significantly affected. Transcriptomic analyses showed that pMRC01 abortive infection (Abi) system is probably subjected to a complex regulatory control by Rgg-like ORF51 and CopG-like ORF58 proteins. These regulators are suggested to modulate the activity of the putative Abi effectors ORF50 and ORF49 exhibiting topology and functional similarities to the Rex system aborting bacteriophage λ lytic growth.

Relevance:

10.00%

Publisher:

Abstract:

In this research we focus on energy-aware topology management for the Tyndall 25mm and 10mm nodes, to extend sensor network lifespan and optimise node power consumption. The two-tiered Tyndall Heterogeneous Automated Wireless Sensors (THAWS) tool is used to quickly create and configure application-specific sensor networks. To this end, we propose to implement a distributed route discovery algorithm and a practical energy-aware reaction model on the 25mm nodes. Triggered by energy-warning events, the miniaturised Tyndall 10mm data-collector nodes adaptively and periodically change their association to 25mm base station nodes, while the 25mm nodes also change the interconnections between themselves, which results in reconfiguration of the 25mm node tier topology. The distributed routing protocol uses combined weight functions to balance the sensor network traffic. A system-level simulation is used to quantify the benefit of the route management framework, in terms of system power savings, when compared to other state-of-the-art approaches.
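
A minimal sketch of routing with a combined weight function, in the spirit of the traffic balancing described above, is given below; the weight mix, energy model, and topology are assumptions, not the THAWS implementation.

```python
# Illustrative sketch of routing with a combined weight function that trades
# hop cost against residual node energy.  The alpha/beta mix, energy values,
# and topology are assumptions, not the THAWS implementation.

import heapq

def combined_weight(u, v, energy, alpha=1.0, beta=1.0):
    # Links through energy-rich nodes stay cheap; nearly-drained relays
    # become expensive, so traffic is steered around them.
    return alpha * 1.0 + beta * (1.0 / max(energy[v], 1e-6))

def best_route(adj, energy, src, dst):
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v in adj[u]:
            nd = d + combined_weight(u, v, energy)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

adj = {"s": ["a", "b"], "a": ["bs"], "b": ["bs"], "bs": []}
energy = {"s": 1.0, "a": 0.9, "b": 0.1, "bs": 1.0}   # node b is nearly drained
print(best_route(adj, energy, "s", "bs"))            # -> ['s', 'a', 'bs']
```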

Relevance:

10.00%

Publisher:

Abstract:

Embedded wireless sensor network (WSN) systems have been developed and used in a wide variety of applications, such as local automatic environmental monitoring; medical applications analysing aspects of fitness and health; energy metering and management in the built environment; and traffic pattern analysis and control. While the purpose and functions of embedded wireless sensor networks have a myriad of applications and possibilities in the future, a particular implementation of these ambient sensors is in the area of wearable electronics incorporated into body area networks and everyday garments. Some of these systems will incorporate inertial sensing devices and other physical and physiological sensors, with a particular focus on the application areas of athlete performance monitoring and e-health. Important physical requirements for wearable antennas are that they are lightweight, small, and robust, and that they use materials compatible with a standard manufacturing process, such as flexible polyimide or FR4 material, where low-cost, consumer-market-oriented products are being produced. The substrate material is required to be low-loss and flexible, and often necessitates the use of thin dielectric and metallization layers. This paper describes the development of such a wearable, flexible antenna system for ISM-band wearable wireless sensor networks. The material selected for the development of the wearable system in question is DE104i, characterized by a dielectric constant of 3.8 and a loss tangent of 0.02. The antenna feed line is a 50 Ohm microstrip topology suitable for use with standard, high-performance, low-cost SMA-type RF connectors widely used for these types of applications. The desired centre frequency is the 2.4 GHz ISM band, to be compatible with the IEEE 802.15.4 Zigbee communication protocols and the Bluetooth standard, which operate in this band.
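
For orientation, the sketch below sizes a 50 Ohm microstrip feed line using the standard Wheeler/Hammerstad synthesis formulas found in microwave engineering textbooks; only the dielectric constant (3.8) and the target impedance come from the text, and the substrate height is an assumed value.

```python
# Back-of-the-envelope sizing of a 50 Ohm microstrip feed using the standard
# Wheeler/Hammerstad synthesis formulas.  The substrate height is an assumed
# value; only er = 3.8 and the 50 Ohm target come from the text above.

import math

def microstrip_width(z0, er, h):
    """Return the strip width (same units as h) for characteristic impedance z0."""
    A = z0 / 60 * math.sqrt((er + 1) / 2) + (er - 1) / (er + 1) * (0.23 + 0.11 / er)
    w_h = 8 * math.exp(A) / (math.exp(2 * A) - 2)
    if w_h > 2:                           # wide-strip branch of the synthesis formula
        B = 377 * math.pi / (2 * z0 * math.sqrt(er))
        w_h = (2 / math.pi) * (B - 1 - math.log(2 * B - 1)
                               + (er - 1) / (2 * er) * (math.log(B - 1) + 0.39 - 0.61 / er))
    return w_h * h

h_mm = 0.8                                # assumed substrate thickness in mm
print(f"~{microstrip_width(50, 3.8, h_mm):.2f} mm wide for 50 Ohm on er = 3.8")
```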

Relevance:

10.00%

Publisher:

Abstract:

Since Wireless Sensor Networks (WSNs) are subject to failures, fault-tolerance becomes an important requirement for many WSN applications. Fault-tolerance can be enabled in different areas of WSN design and operation, including the Medium Access Control (MAC) layer and the initial topology design. To be robust to failures, a MAC protocol must be able to adapt to traffic fluctuations and topology dynamics. We design ER-MAC, a MAC protocol that can switch from energy-efficient operation in normal monitoring to reliable and fast delivery for emergency monitoring, and vice versa. It can also prioritise high-priority packets and guarantee fair packet deliveries from all sensor nodes. Topology design supports fault-tolerance by ensuring that there are alternative acceptable routes to data sinks when failures occur. We provide solutions for four topology planning problems: Additional Relay Placement (ARP), Additional Backup Placement (ABP), Multiple Sink Placement (MSP), and Multiple Sink and Relay Placement (MSRP). Our solutions use a local search technique based on Greedy Randomized Adaptive Search Procedures (GRASP). GRASP-ARP deploys relays for (k,l)-sink-connectivity, where each sensor node must have k vertex-disjoint paths of length ≤ l. To count how many disjoint paths a node has, we propose Counting-Paths. GRASP-ABP deploys fewer relays than GRASP-ARP by focusing only on the most important nodes: those whose failure has the worst effect. To identify such nodes, we define Length-constrained Connectivity and Rerouting Centrality (l-CRC). Greedy-MSP and GRASP-MSP place minimal-cost sinks to ensure that each sensor node in the network is double-covered, i.e. has two length-bounded paths to two sinks. Greedy-MSRP and GRASP-MSRP deploy sinks and relays with minimal cost to make the network double-covered and non-critical, i.e. all sensor nodes must have length-bounded alternative paths to sinks when an arbitrary sensor node fails. We then evaluate the fault-tolerance of each topology in data gathering simulations using ER-MAC.
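
The sketch below is a simple greedy stand-in (not the thesis's Counting-Paths algorithm) for checking bounded-length, vertex-disjoint connectivity from a node to a sink.

```python
# Greedy stand-in for (k,l)-sink-connectivity checking: repeatedly find a path
# of length <= l from a node to the sink and remove its interior vertices,
# counting how many vertex-disjoint bounded-length paths can be pulled out.
# This is a greedy lower bound, not the thesis's Counting-Paths algorithm.

from collections import deque

def shortest_path(adj, src, dst, banned):
    """BFS shortest path from src to dst avoiding `banned` interior nodes."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev and (v == dst or v not in banned):
                prev[v] = u
                q.append(v)
    return None

def count_disjoint_paths(adj, node, sink, l):
    """Greedy lower bound on vertex-disjoint node->sink paths of length <= l."""
    banned, count = set(), 0
    while True:
        p = shortest_path(adj, node, sink, banned)
        if p is None or len(p) - 1 > l:
            return count
        banned.update(p[1:-1])            # interior vertices may not be reused
        count += 1

adj = {"n": ["a", "b"], "a": ["sink"], "b": ["sink"], "sink": []}
print(count_disjoint_paths(adj, "n", "sink", l=2))   # -> 2, so (2,2)-connected
```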

Relevance:

10.00%

Publisher:

Abstract:

This thesis is concerned with inductive charging of electric vehicle batteries. Rectified power from the 50/60 Hz utility feeds a dc-ac converter which delivers high-frequency ac power to the electric vehicle inductive coupling inlet. The inlet configuration has been defined by the Society of Automotive Engineers in Recommended Practice J-1773. This thesis studies converter topologies related to the series resonant converter. When coupled to the vehicle inlet, the frequency-controlled series-resonant converter results in a capacitively-filtered series-parallel LCLC (SP-LCLC) resonant converter topology with zero-voltage switching and many other desirable features. A novel time-domain transformation analysis, termed Modal Analysis, is developed, using a state variable transformation, to analyze and characterize this multi-resonant fourth-order converter. Next, Fundamental Mode Approximation (FMA) Analysis, based on a voltage-source model of the load, and its novel extension, Rectifier-Compensated FMA (RCFMA) Analysis, are developed and applied to the SP-LCLC converter. RCFMA Analysis is simpler and more intuitive than Modal Analysis, and provides a relatively accurate closed-form solution for the converter behavior. Phase control of the SP-LCLC converter is investigated as a control option. FMA and RCFMA Analyses are used for detailed characterization. The analyses identify areas of operation, which are also validated experimentally, where it is advantageous to phase control the converter. A novel hybrid control scheme is proposed which integrates frequency and phase control and achieves a reduced operating frequency range and improved partial-load efficiency. The phase-controlled SP-LCLC converter can also be configured with a parallel load and is an excellent option for the application. The resulting topology implements soft-switching over the entire load range and has high full-load and partial-load efficiencies. RCFMA Analysis is used to analyze and characterize the new converter topology, and good correlation is shown with experimental results. Finally, a novel single-stage power-factor-corrected ac-dc converter is introduced, which uses the current-source characteristic of the SP-LCLC topology to provide power factor correction over a wide output power range from zero to full load. This converter exhibits all the advantageous characteristics of its dc-dc counterpart, with a reduced parts count and cost. Simulation and experimental results verify the operation of the new converter.
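
To illustrate the flavour of fundamental-mode analysis, the sketch below computes the first-harmonic voltage gain of an LCLC-style tank with the capacitively filtered rectifier replaced by the textbook equivalent resistance Rac = (8/pi^2)*Rload; the component values are placeholders, and this is the generic first-harmonic approximation, not the thesis's Modal or RCFMA analysis.

```python
# Generic first-harmonic (fundamental-mode) gain sketch for an LCLC-style tank
# feeding a capacitively filtered rectifier, modeled as Rac = (8/pi^2)*Rload.
# Component values are placeholders chosen only for illustration.

import math

def parallel(*zs):
    return 1 / sum(1 / z for z in zs)

def tank_gain(f, Ls, Cs, Lp, Cp, Rload):
    """|Vout/Vin| of a series Ls-Cs branch driving Lp || Cp || Rac."""
    w = 2 * math.pi * f
    Rac = 8 * Rload / math.pi ** 2        # FHA model of rectifier + capacitive filter
    z_series = 1j * w * Ls + 1 / (1j * w * Cs)
    z_shunt = parallel(1j * w * Lp, 1 / (1j * w * Cp), Rac)
    return abs(z_shunt / (z_series + z_shunt))

# Placeholder component values for illustration only.
Ls, Cs, Lp, Cp, Rload = 60e-6, 100e-9, 300e-6, 20e-9, 10.0
for f_khz in (40, 60, 80, 100):
    print(f"{f_khz} kHz: gain = {tank_gain(f_khz * 1e3, Ls, Cs, Lp, Cp, Rload):.2f}")
```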

Relevance:

10.00%

Publisher:

Abstract:

In this work we introduce a new mathematical tool for optimization of routes, topology design, and energy efficiency in wireless sensor networks. We introduce a vector field formulation that models communication in the network, and routing is performed in the direction of this vector field at every location of the network. The magnitude of the vector field at every location represents the density of data being transited through that location. We define the total communication cost in the network as the integral of a quadratic form of the vector field over the network area. With the above formulation, we introduce a mathematical machinery based on partial differential equations very similar to Maxwell's equations in electrostatics. We show that in order to minimize the cost, the routes should be found based on the solution of these partial differential equations. In our formulation, the sensors are sources of information and are similar to the positive charges in electrostatics, the destinations are sinks of information and are similar to negative charges, and the network is similar to a non-homogeneous dielectric medium with a variable dielectric constant (or permittivity coefficient). In one application of our vector-field model, we offer a scheme for energy-efficient routing. Our routing scheme is based on setting the permittivity coefficient to a higher value in the places of the network where nodes have high residual energy, and to a low value in the places where the nodes do not have much energy left. Our simulations show that our method gives a significant increase in network lifetime compared to the shortest path and weighted shortest path schemes. Our initial focus is on the case where there is only one destination in the network; later we extend our approach to the case where there are multiple destinations. In the case of multiple destinations, we need to partition the network into several areas known as regions of attraction of the destinations. Each destination is responsible for collecting all messages being generated in its region of attraction. The difficulty of the optimization problem in this case is how to define the regions of attraction of the destinations and how much communication load to assign to each destination to optimize the performance of the network. We use our vector field model to solve the optimization problem for this case. We define a vector field which is conservative, and hence can be written as the gradient of a scalar field (also known as a potential field). We then show that in the optimal assignment of the communication load of the network to the destinations, the value of that potential field should be equal at the locations of all the destinations. Another application of our vector field model is to find the optimal locations of the destinations in the network. We show that the vector field gives the gradient of the cost function with respect to the locations of the destinations. Based on this fact, we suggest an algorithm to be applied during the design phase of a network to relocate the destinations so as to reduce the communication cost. The performance of our proposed schemes is confirmed by several examples and simulation experiments. In another part of this work we focus on the notions of responsiveness and conformance of TCP traffic in communication networks.
We introduce the notion of responsiveness for TCP aggregates and define it as the degree to which a TCP aggregate reduces its sending rate to the network as a response to packet drops. We define metrics that describe the responsiveness of TCP aggregates, and suggest two methods for determining the values of these quantities. The first method is based on a test in which we drop a few packets from the aggregate intentionally and measure the resulting rate decrease of that aggregate. This kind of test is not robust to multiple simultaneous tests performed at different routers. We make the test robust to multiple simultaneous tests by using ideas from the CDMA approach to multiple-access channels in communication theory. Based on this approach, we introduce tests of responsiveness for aggregates, and call the resulting scheme the CDMA-based Aggregate Perturbation Method (CAPM). We use CAPM to perform congestion control. A distinguishing feature of our congestion control scheme is that it maintains a degree of fairness among different aggregates. In the next step we modify CAPM to offer methods for estimating the proportion of an aggregate of TCP traffic that does not conform to protocol specifications, and hence may belong to a DDoS attack. Our methods work by intentionally perturbing the aggregate, dropping a very small number of packets from it and observing the response of the aggregate. We offer two methods for conformance testing. In the first method, we apply the perturbation tests to SYN packets being sent at the start of the TCP three-way handshake, and we use the fact that the rate of ACK packets being exchanged in the handshake should follow the rate of perturbations. In the second method, we apply the perturbation tests to the TCP data packets and use the fact that the rate of retransmitted data packets should follow the rate of perturbations. In both methods, we use signature-based perturbations, which means packet drops are performed at a rate given by a function of time. We use the analogy between our problem and multiple-access communication to find signatures. Specifically, we assign orthogonal CDMA-based signatures to different routers in a distributed implementation of our methods. As a result of orthogonality, the performance does not degrade because of cross-interference caused by simultaneously testing routers. We have shown the efficacy of our methods through mathematical analysis and extensive simulation experiments.
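
As a toy illustration of the CDMA idea behind CAPM, the sketch below gives two routers mutually orthogonal Walsh signatures and recovers each router's measured response by correlation, so that simultaneous tests do not interfere; the signature length, responsiveness values, and noise model are invented for the example.

```python
# Toy sketch of the CDMA-style idea: routers perturb an aggregate with drop
# rates modulated by mutually orthogonal signatures, and each router recovers
# the aggregate's response to *its own* test by correlating with its signature.
# Signature length, responsiveness values, and noise are illustrative.

import random

def walsh_codes(order):
    """Walsh-Hadamard +/-1 signatures of length 2**order (mutually orthogonal)."""
    h = [[1]]
    for _ in range(order):
        h = [row + row for row in h] + [row + [-x for x in row] for row in h]
    return h

codes = walsh_codes(3)                        # 8 chips per signature
sig_a, sig_b = codes[1], codes[2]             # signatures of two testing routers

# Ground-truth "responsiveness" of the aggregate to each router's drops.
resp_a, resp_b = 0.8, 0.3

# Observed rate change per chip interval: superposition of both tests + noise.
random.seed(1)
observed = [resp_a * a + resp_b * b + random.gauss(0, 0.05)
            for a, b in zip(sig_a, sig_b)]

def correlate(signal, code):
    return sum(s * c for s, c in zip(signal, code)) / len(code)

print(f"router A estimate: {correlate(observed, sig_a):.2f}")   # ~0.8
print(f"router B estimate: {correlate(observed, sig_b):.2f}")   # ~0.3
```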