535 results for Hyperspaces Topologies
Abstract:
In this paper we present an architecture for network and application management based on the Active Networks paradigm, which demonstrates the advantages of network programmability. The stimulus to develop this architecture arises from an actual need to manage a cluster of active nodes, where it is often necessary to redeploy network assets and modify node connectivity. In our architecture, a remote front-end of the managing entity allows the operator to design new network topologies, to check the status of the nodes, and to configure them. Moreover, the proposed framework allows the operator to explore an active network, to monitor the active applications, to query each node, and to install programmable traps. To take advantage of Active Networks technology, we introduce active SNMP-like MIBs and agents, which are dynamic and programmable. The programmable management agents make tracing distributed applications a feasible task. We propose a general framework that can inter-operate with any active execution environment. In this framework, both the manager and the monitor front-ends communicate with an active node (the Active Network Access Point) through the XML language. A gateway service translates the queries from XML to an active packet language and injects the code into the network. We demonstrate the implementation of an active network gateway for PLAN (Packet Language for Active Networks) in a testbed of forty active nodes. Finally, we discuss an application of the active management architecture to detect the causes of network failures by tracing network events in time.
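The XML gateway service described in this abstract can be sketched as a small translator. The query schema and the `onRemote`/`getMIB`/`installTrap` names below are illustrative assumptions, not the paper's actual PLAN gateway interface:

```python
import xml.etree.ElementTree as ET

def translate_query(xml_query: str) -> str:
    """Translate an XML management query into a textual active-packet
    program. Schema and target-language names are hypothetical."""
    root = ET.fromstring(xml_query)
    node = root.get("node")
    actions = []
    for child in root:
        if child.tag == "get":
            # Query a variable of the (dynamic, programmable) active MIB.
            actions.append(f"getMIB(\"{child.get('oid')}\")")
        elif child.tag == "trap":
            # Install a programmable trap on the target node.
            actions.append(f"installTrap(\"{child.get('event')}\")")
    return f"onRemote({node}, [{', '.join(actions)}])"
```

A manager front-end would hand the returned program to the Active Network Access Point, which injects it into the network.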
Abstract:
Suprathermal electrons (>70 eV) form a small fraction of the total solar wind electron density but serve as valuable tracers of heliospheric magnetic field topology. Their usefulness as tracers of magnetic loops with both feet rooted on the Sun, however, most likely fades as the loops expand beyond some distance owing to scattering. As a first step toward quantifying that distance, we construct an observationally constrained model for the evolution of the suprathermal electron pitch-angle distributions on open field lines. We begin with a near-Sun isotropic distribution moving antisunward along a Parker spiral magnetic field while conserving magnetic moment, resulting in a field-aligned strahl within a few solar radii. Past this point, the distribution undergoes little evolution with heliocentric distance. We then add constant (with heliocentric distance, energy, and pitch angle) ad-hoc pitch-angle scattering. Close to the Sun, pitch-angle focusing still dominates, again resulting in a narrow strahl. Farther from the Sun, however, pitch-angle scattering dominates because focusing is effectively weakened by the increasing angle between the magnetic field direction and intensity gradient, a result of the spiral field. We determine the amount of scattering required to match Ulysses observations of strahl width in the fast solar wind, providing an important tool for inferring the large-scale properties and topologies of field lines in the interplanetary medium. Although the pitch-angle scattering term is independent of energy, time-of-flight effects in the spiral geometry result in an energy dependence of the strahl width that is in the observed sense although weaker in magnitude.
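The near-Sun focusing invoked in this abstract follows from conservation of the magnetic moment of an electron of speed $v$ and pitch angle $\alpha$ in field strength $B$:

```latex
\mu = \frac{m v^{2} \sin^{2}\alpha}{2B} = \mathrm{const}
\quad\Longrightarrow\quad
\frac{\sin^{2}\alpha_{1}}{B_{1}} = \frac{\sin^{2}\alpha_{2}}{B_{2}} .
```

As $B$ decreases along the antisunward trajectory, $\sin\alpha$ must decrease with it, collapsing the initially isotropic distribution into a narrow field-aligned strahl within a few solar radii.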
Abstract:
Locality to other nodes on a peer-to-peer overlay network can be established by means of a set of landmarks shared among the participating nodes. Each node independently collects a set of latency measures to the landmark nodes, which are used as a multi-dimensional feature vector. Each peer node uses the feature vector to generate a unique scalar index which is correlated to its topological locality. A popular dimensionality-reduction technique is the space-filling Hilbert curve, as it possesses good locality-preserving properties. However, there exists little comparison between the Hilbert curve and other techniques for dimensionality reduction. This work carries out a quantitative analysis of their properties. Linear and non-linear techniques for scaling the landmark vectors to a single dimension are investigated. The Hilbert curve, Sammon's mapping, and Principal Component Analysis have been used to generate a 1-D space with locality-preserving properties. This work provides empirical evidence to support the use of the Hilbert curve in the context of locality preservation when generating peer identifiers by means of landmark vector analysis. A comparative analysis is carried out with an artificial 2-D network model and with a realistic network topology model with a typical power-law distribution of node connectivity in the Internet. Nearest-neighbour analysis confirms the Hilbert curve to be very effective in both artificial and realistic network topologies. Nevertheless, the results in the realistic network model show that there is scope for improvement, and better techniques to preserve locality information are required.
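The landmark-based identifier generation discussed in this abstract can be sketched for the two-landmark case with the standard iterative 2-D Hilbert-curve mapping. The grid size, the 500 ms latency cap, and the `peer_id` helper are illustrative assumptions:

```python
def rot(n, x, y, rx, ry):
    # Rotate/flip a quadrant so the curve orientation stays consistent.
    if ry == 0:
        if rx == 1:
            x = n - 1 - x
            y = n - 1 - y
        x, y = y, x
    return x, y

def xy2d(n, x, y):
    """Map (x, y) on an n x n grid (n a power of 2) to its 1-D Hilbert index."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        x, y = rot(s, x, y, rx, ry)
        s //= 2
    return d

def peer_id(latencies, grid=256, max_latency=500.0):
    """Derive a scalar peer identifier from a two-landmark latency vector (ms)
    by quantising each RTT to a grid coordinate and taking the Hilbert index."""
    x, y = (min(grid - 1, int(l / max_latency * grid)) for l in latencies)
    return xy2d(grid, x, y)
```

Nodes with similar latency vectors tend to receive numerically close identifiers, which is the locality-preservation property the paper evaluates.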
Abstract:
The node-density effect is an artifact of phylogeny reconstruction that can cause branch lengths to be underestimated in areas of the tree with fewer taxa. Webster, Payne, and Pagel (2003, Science 301:478) introduced a statistical procedure (the "delta" test) to detect this artifact, and here we report the results of computer simulations that examine the test's performance. In a sample of 50,000 random data sets, we find that the delta test detects the artifact in 94.4% of cases in which it is present. When the artifact is not present (n = 10,000 simulated data sets) the test showed a type I error rate of approximately 1.69%, incorrectly reporting the artifact in 169 data sets. Three measures of tree shape or "balance" failed to predict the size of the node-density effect. This may reflect the relative homogeneity of our randomly generated topologies, but emphasizes that nearly any topology can suffer from the artifact, the effect not being confined only to highly unevenly sampled or otherwise imbalanced trees. The ability to screen phylogenies for the node-density artifact is important for phylogenetic inference and for researchers using phylogenetic trees to infer evolutionary processes, including their use in molecular clock dating. [Delta test; molecular clock; molecular evolution; node-density effect; phylogenetic reconstruction; speciation; simulation.]
Abstract:
Traditionally, applications and tools supporting collaborative computing have been designed only with personal computers in mind and support a limited range of computing and network platforms. These applications are therefore not well equipped to deal with network heterogeneity and, in particular, do not cope well with dynamic network topologies. Progress in this area must be made if we are to fulfil the needs of users and support the diversity, mobility, and portability that are likely to characterise group work in future. This paper describes a groupware platform called Coco that is designed to support collaboration in a heterogeneous network environment. The work demonstrates that progress in the development of a generic supporting groupware is achievable, even in the context of heterogeneous and dynamic networks. It also demonstrates the progress made in the development of an underlying communications infrastructure, building on peer-to-peer concepts and topologies to improve scalability and robustness.
Abstract:
Dense deployments of wireless local area networks (WLANs) are fast becoming a permanent feature of all developed cities around the world. While this increases capacity and coverage, the problem of increased interference, which is exacerbated by the limited number of channels available, can severely degrade the performance of WLANs if an effective channel assignment scheme is not employed. In an earlier work, an asynchronous, distributed and dynamic channel assignment scheme has been proposed that (1) is simple to implement, (2) does not require any knowledge of the throughput function, and (3) allows asynchronous channel switching by each access point (AP). In this paper, we present an extensive performance evaluation of this scheme when it is deployed in the more practical non-uniform and dynamic topology scenarios. Specifically, we investigate its effectiveness (1) when APs are deployed in a non-uniform fashion, resulting in some APs suffering from higher levels of interference than others, and (2) when APs are effectively switched 'on/off' due to the availability/lack of traffic at different times, which creates a dynamically changing network topology. Simulation results based on actual WLAN topologies show that robust performance gains over other channel assignment schemes can still be achieved even in these realistic scenarios.
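The abstract does not give the details of the evaluated channel assignment scheme, so as a minimal stand-in, the following greedy rule has each AP independently switch to the channel on which it currently measures the least interference (the channel set and interference units are illustrative assumptions):

```python
def choose_channel(interference, current, channels=(1, 6, 11)):
    """Greedy channel pick for one AP: move to the least-interfered channel,
    breaking ties in favour of staying on the current channel.
    `interference` maps channel -> locally measured interference level."""
    return min(channels, key=lambda c: (interference.get(c, 0.0), c != current))
```

Because each AP decides from purely local measurements, such a rule can run asynchronously at every AP, which is the deployment style the paper's scheme allows.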
Abstract:
The popularity of wireless local area networks (WLANs) has resulted in their dense deployments around the world. While this increases capacity and coverage, the problem of increased interference can severely degrade the performance of WLANs. However, the impact of interference on throughput in dense WLANs with multiple access points (APs) has had very limited prior research. This is believed to be due to 1) the inaccurate assumption that throughput is always a monotonically decreasing function of interference and 2) the prohibitively high complexity of an accurate analytical model. In this work, firstly we provide a useful classification of commonly found interference scenarios. Secondly, we investigate the impact of interference on throughput for each class based on an approach that determines the possibility of parallel transmissions. Extensive packet-level simulations using OPNET have been performed to support the observations made. Interestingly, results have shown that in some topologies, increased interference can lead to higher throughput and vice versa.
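The idea of classifying interference scenarios by whether parallel transmissions are possible can be illustrated with a toy carrier-sense model. The category names and distance thresholds below are illustrative assumptions, not the paper's taxonomy:

```python
import math

def classify(link_a, link_b, cs_range, interf_range):
    """Toy classifier for two AP->client links, each given as a pair of
    (x, y) positions (transmitter, receiver)."""
    (txa, rxa), (txb, rxb) = link_a, link_b
    senses = math.dist(txa, txb) <= cs_range          # transmitters defer to each other
    harms = (math.dist(txa, rxb) <= interf_range or   # one transmitter corrupts the
             math.dist(txb, rxa) <= interf_range)     # other link's receiver
    if senses:
        return "serialised"        # carrier sense forces one-at-a-time transmission
    if harms:
        return "hidden-terminal"   # parallel attempts collide at a receiver
    return "parallel"              # both links can be active concurrently
```

Only the last class gains throughput from concurrency, which is why increased interference can raise or lower throughput depending on the topology class.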
Abstract:
The hypercube is one of the most popular topologies for connecting processors in multicomputer systems. In this paper we address the maximum order of a connected component in a faulty cube. The results established include several known conclusions as special cases. We conclude that the hypercube structure is resilient, as it retains a large connected component in the presence of a large number of faulty vertices.
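The quantity studied here, the order of the largest connected component of a hypercube with faulty vertices, can be computed directly by breadth-first search for small dimensions (a brute-force sketch, not the paper's analytical method):

```python
from collections import deque

def largest_component(dim, faulty):
    """Size of the largest connected component of the dim-cube after
    deleting the vertex set `faulty` (vertices are ints 0 .. 2**dim - 1)."""
    faulty = set(faulty)
    seen, best = set(), 0
    for start in range(2 ** dim):
        if start in faulty or start in seen:
            continue
        seen.add(start)
        queue, size = deque([start]), 0
        while queue:
            v = queue.popleft()
            size += 1
            for i in range(dim):            # neighbours differ in exactly one bit
                u = v ^ (1 << i)
                if u not in faulty and u not in seen:
                    seen.add(u)
                    queue.append(u)
        best = max(best, size)
    return best
```

For example, deleting the three neighbours of vertex 0 in the 3-cube isolates it, leaving a largest component of four vertices.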
Abstract:
We have investigated the dynamic mechanical behavior of two cross-linked polymer networks with very different topologies: one made of backbones randomly linked along their length; the other with fixed-length strands uniformly cross-linked at their ends. The samples were analyzed using oscillatory shear, at very small strains corresponding to the linear regime. This was carried out at a range of frequencies, and at temperatures ranging from the glass plateau, through the glass transition, and well into the rubbery region. Through the glass transition, the data obeyed the time-temperature superposition principle, and could be analyzed using WLF treatment. At higher temperatures, in the rubbery region, the storage modulus was found to deviate from this, taking a value that is independent of frequency. This value increased linearly with temperature, as expected for the entropic rubber elasticity, but with a substantial negative offset inconsistent with straightforward enthalpic effects. Conversely, the loss modulus continued to follow time-temperature superposition, decreasing with increasing temperature, and showing a power-law dependence on frequency.
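The WLF treatment mentioned in this abstract superposes isothermal frequency sweeps via a temperature-dependent shift factor $a_T$, given relative to a reference temperature $T_{\mathrm{ref}}$ (with empirical constants $C_1$ and $C_2$) by the Williams-Landel-Ferry equation:

```latex
\log_{10} a_T = \frac{-C_{1}\,(T - T_{\mathrm{ref}})}{C_{2} + T - T_{\mathrm{ref}}}
```

Data obeying time-temperature superposition collapse onto a single master curve when frequencies are rescaled by $a_T$; the reported deviation of the storage modulus in the rubbery region is a breakdown of exactly this collapse.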
Abstract:
Stochastic Diffusion Search is an efficient probabilistic best-fit search technique, capable of transformation-invariant pattern matching. Although inherently parallel in operation, it is difficult to implement efficiently in hardware because it requires full inter-agent connectivity. This paper describes a lattice implementation which, while qualitatively retaining the properties of the original algorithm, restricts connectivity, enabling simpler implementation on parallel hardware. Diffusion times are examined for different network topologies, ranging from ordered lattices through small-world networks to random graphs.
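A minimal sketch of Stochastic Diffusion Search with lattice-restricted diffusion, here searching for a substring: agents test one randomly chosen component of their hypothesis, and inactive agents may only poll neighbours on a ring lattice of degree 2k. The parameters and the string-matching task are illustrative, not the paper's setup:

```python
import random

def lattice_sds(model, space, n_agents=50, k=2, iters=200, seed=1):
    """Locate `model` in `space` by SDS with diffusion restricted to a
    ring lattice: agent a may only communicate with agents a-k .. a+k."""
    rng = random.Random(seed)
    positions = list(range(len(space) - len(model) + 1))
    hyp = [rng.choice(positions) for _ in range(n_agents)]
    active = [False] * n_agents
    for _ in range(iters):
        # Test phase: each agent checks one randomly chosen model component.
        for a in range(n_agents):
            i = rng.randrange(len(model))
            active[a] = space[hyp[a] + i] == model[i]
        # Diffusion phase: each inactive agent polls one random lattice
        # neighbour and copies its hypothesis if that neighbour is active.
        new_hyp = hyp[:]
        for a in range(n_agents):
            if not active[a]:
                nb = (a + rng.choice([d for d in range(-k, k + 1) if d])) % n_agents
                new_hyp[a] = hyp[nb] if active[nb] else rng.choice(positions)
        hyp = new_hyp
    return max(set(hyp), key=hyp.count)   # hypothesis backed by the largest cluster
```

Restricting diffusion to lattice neighbours slows the spread of good hypotheses, which is why the paper studies diffusion times as a function of topology.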
Abstract:
Overall phylogenetic relationships within the genus Pelargonium (Geraniaceae) were inferred based on DNA sequences from mitochondrial(mt)-encoded nad1 b/c exons and from chloroplast(cp)-encoded trnL (UAA) 5' exon-trnF (GAA) exon regions using two species of Geranium and Sarcocaulon vanderetiae as outgroups. The group II intron between nad1 exons b and c was found to be absent from the Pelargonium, Geranium, and Sarcocaulon sequences presented here as well as from Erodium, which is the first recorded loss of this intron in angiosperms. Separate phylogenetic analyses of the mtDNA and cpDNA data sets produced largely congruent topologies, indicating linkage between mitochondrial and chloroplast genome inheritance. Simultaneous analysis of the combined data sets yielded a well-resolved topology with high clade support exhibiting a basic split into small and large chromosome species, the first group containing two lineages and the latter three. One large chromosome lineage (x = 11) comprises species from sections Myrrhidium and Chorisma and is sister to a lineage comprising P. mutans (x = 11) and species from section Jenkinsonia (x = 9). Sister to these two lineages is a lineage comprising species from sections Ciconium (x = 9) and Subsucculentia (x = 10). Cladistic evaluation of this pattern suggests that x = 11 is the ancestral basic chromosome number for the genus.
Abstract:
In basic network transactions, a datagram travelling from source to destination is routed through numerous routers and paths depending on the available free and uncongested paths, which can make the transmission route excessively long and thus incur greater delay, jitter, and congestion and reduce throughput. One of the major problems of packet-switched networks is cell delay variation, or jitter, which arises from queuing delay under the applied loading conditions. The accumulation of delay and jitter across the nodes along a transmission route, together with dropped packets, adds further complexity for multimedia traffic, because there is no guarantee that each traffic stream will be delivered according to its own jitter constraints; the effects of jitter therefore need to be analyzed. IP routing uses a single path for the transmission of all packets. Multi-Protocol Label Switching (MPLS), on the other hand, separates packet forwarding from routing, enabling packets to use appropriate routes and allowing the behavior of transmission paths to be optimized and controlled, thereby correcting some of the shortfalls associated with IP routing. MPLS has therefore been utilized in our analysis of effective transmission through the various networks. This paper analyzes the effect of delay, congestion, interference, jitter and packet loss in the transmission of signals from source to destination. In particular, the impact of link failures and repair paths in the various physical topologies, namely bus, star, mesh and hybrid topologies, is analyzed under standard network conditions.
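The cell delay variation discussed in this abstract is commonly estimated with the smoothed interarrival-jitter formula of RFC 3550; this is a standard estimator, not necessarily the metric the paper uses:

```python
def rtp_jitter(transit_times):
    """Interarrival jitter as smoothed in RFC 3550: J += (|D| - J) / 16,
    where D is the difference between successive packets' transit times
    (arrival time minus send timestamp, in any consistent time unit)."""
    j = 0.0
    for prev, cur in zip(transit_times, transit_times[1:]):
        d = abs(cur - prev)
        j += (d - j) / 16.0
    return j
```

A stream with constant transit time yields zero jitter; any variation in queuing delay along the route raises the estimate, which is why jitter grows with the number of congested nodes on the path.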
Abstract:
In Peer-to-Peer (P2P) networks, it is often desirable to assign node IDs which preserve locality relationships in the underlying topology. Node locality can be embedded into node IDs by applying a one-dimensional Hilbert space-filling-curve mapping to a vector of network distances from each node to a subset of reference landmark nodes within the network. This approach is fundamentally limited, however: while robustness and accuracy might be expected to improve with the number of landmarks, the effectiveness of the one-dimensional Hilbert curve mapping suffers from the curse of dimensionality. This work proposes an approach to solve this issue using Landmark Multidimensional Scaling (LMDS) to reduce a large set of landmarks to a smaller set of virtual landmarks. This smaller set has been postulated to represent the intrinsic dimensionality of the network space, and therefore a space-filling curve applied to these virtual landmarks is expected to produce a better mapping of the node ID space. The proposed approach, the Virtual Landmarks Hilbert Curve (VLHC), is particularly suitable for decentralised systems like P2P networks. In the experimental simulations, the effectiveness of the methods is measured by means of the locality preservation derived from node IDs, in terms of latency to nearest neighbours. A variety of realistic network topologies are simulated, and this work provides strong evidence to suggest that VLHC performs better than either Hilbert curves or LMDS used independently of each other.
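The LMDS reduction underlying VLHC can be sketched with the classical-MDS core of Landmark MDS (de Silva and Tenenbaum): embed the landmarks from their mutual squared distances, then triangulate every other node from its squared distances to the landmarks. Variable names and the target dimensionality are illustrative:

```python
import numpy as np

def landmark_mds(D2, d):
    """D2: k x k matrix of squared distances among k landmarks; d: target
    dimensionality (assumes the top-d eigenvalues are positive). Returns
    landmark coordinates and a function embedding any other point from its
    squared distances to the landmarks."""
    k = D2.shape[0]
    J = np.eye(k) - np.ones((k, k)) / k
    B = -0.5 * J @ D2 @ J                      # double-centred Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:d]              # top-d eigenpairs
    w, V = w[idx], V[:, idx]
    L = V * np.sqrt(w)                         # landmark coordinates (k x d)
    pinv = V / np.sqrt(w)                      # rows used for triangulation
    mean_d2 = D2.mean(axis=0)
    def embed(delta2):                         # squared distances to landmarks
        return -0.5 * pinv.T @ (delta2 - mean_d2)
    return L, embed
```

In the VLHC setting, the embedded coordinates play the role of distances to a small set of virtual landmarks, to which the low-dimensional Hilbert mapping is then applied.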
Abstract:
The orientation of the heliospheric magnetic field (HMF) in near-Earth space is generally a good indicator of the polarity of HMF foot points at the photosphere. There are times, however, when the HMF folds back on itself (is inverted), as indicated by suprathermal electrons locally moving sunward, even though they must ultimately be carrying the heat flux away from the Sun. Analysis of the near-Earth solar wind during the period 1998–2011 reveals that inverted HMF is present approximately 5.5% of the time and is generally associated with slow, dense solar wind and relatively weak HMF intensity. Inverted HMF is mapped to the coronal source surface, where a new method is used to estimate coronal structure from the potential-field source-surface model. We find a strong association with bipolar streamers containing the heliospheric current sheet, as expected, but also with unipolar or pseudostreamers, which contain no current sheet. Because large-scale inverted HMF is a widely accepted signature of interchange reconnection at the Sun, this finding provides strong evidence for models of the slow solar wind which involve coronal loop opening by reconnection within pseudostreamer belts as well as the bipolar streamer belt. Occurrence rates of bipolar- and pseudostreamers suggest that they are equally likely to result in inverted HMF and, therefore, presumably undergo interchange reconnection at approximately the same rate. Given the different magnetic topologies involved, this suggests the rate of reconnection is set externally, possibly by the differential rotation rate which governs the circulation of open solar flux.
Abstract:
Epidemic protocols are a bio-inspired communication and computation paradigm for large and extreme-scale networked systems. This work investigates the expansion property of the network overlay topologies induced by epidemic protocols. An expansion quality index for overlay topologies is proposed and adopted for the design of epidemic membership protocols. A novel protocol is proposed, which explicitly aims at improving the expansion quality of the overlay topologies. The proposed protocol is tested with a global aggregation task and compared to other membership protocols. The analysis by means of simulations indicates that the expansion quality directly relates to the speed of dissemination and convergence of epidemic protocols and can be effectively used to design better protocols.
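The abstract does not define the proposed expansion quality index, but a standard spectral proxy for the expansion of an overlay topology is the algebraic connectivity (Fiedler value) of its graph Laplacian, used here as an illustrative stand-in:

```python
import numpy as np

def algebraic_connectivity(adj):
    """Second-smallest eigenvalue of the graph Laplacian L = D - A of an
    undirected overlay graph given by adjacency matrix `adj`. Larger values
    indicate better expansion and hence faster epidemic dissemination."""
    A = np.asarray(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    return np.sort(np.linalg.eigvalsh(L))[1]
```

For instance, a complete overlay on four nodes has far better expansion than a four-node path, consistent with the intuition that well-expanding overlays speed up gossip-based aggregation and convergence.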