161 results for Computer Networks


Relevance:

60.00%

Publisher:

Abstract:

We present a technique, using the Imaginary Smith Chart, for determining the admittance of obstacles introduced into an evanescent waveguide. The admittances of an inductive iris, a capacitive iris, a capacitive post, a variable-width strip, and a length of evanescent waveguide are investigated. © 2012 IEEE.
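
A minimal sketch of the standard Smith-chart relation this kind of analysis builds on, assuming an admittance normalized to the guide's characteristic admittance; it is not the paper's Imaginary Smith Chart procedure, which additionally accounts for the reactive characteristic impedance of evanescent waveguide.

```python
# Hedged sketch (not the paper's method): the conventional Smith-chart
# mapping from a measured reflection coefficient to the normalized
# admittance of an obstacle. The Imaginary Smith Chart extends this idea
# to evanescent guides, whose characteristic impedance is reactive.
def normalized_admittance(gamma: complex) -> complex:
    """Return y = Y/Y0 from the reflection coefficient gamma."""
    return (1 - gamma) / (1 + gamma)

# Example: a purely imaginary reflection coefficient, as might be seen
# looking into a lossless reactive obstacle.
print(normalized_admittance(0.3j))
```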

Relevance:

60.00%

Publisher:

Abstract:

Conventional digital modulation schemes use amplitude, frequency and/or phase as the modulation characteristic to transmit data. In this paper, we exploit circular polarization (CP) of the propagating electromagnetic carrier as the modulation attribute, which is a novel concept in digital communications. CP signals eliminate the requirement of antenna alignment to maximize received power and are unaffected by linearly polarized jamming signals. The work presents the concept of circular polarization modulation for 2, 4 and 8 carrier states, referred to as binary circular polarization modulation (BCPM), quaternary circular polarization modulation (QCPM) and 8-state circular polarization modulation (8CPM), respectively. Modulation, demodulation, 3D symbol constellations and 3D propagating waveforms for the proposed schemes are presented and analyzed in the presence of channel effects. The schemes are shown to match the bit error performance of conventional schemes under AWGN while providing a 3 dB gain in a flat Rayleigh fading channel.
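
As an illustration of the simplest of these schemes, the sketch below maps bits to left- and right-hand circular polarization states expressed as Jones vectors and demodulates by correlation; the sign convention, noise model and symbol mapping are assumptions of ours, not the paper's implementation.

```python
# Hedged sketch of binary circular polarization modulation (BCPM): bits are
# mapped to circular polarization states (Jones vectors) and recovered by
# correlating the received symbol against both basis states.
import numpy as np

LHCP = np.array([1, 1j]) / np.sqrt(2)   # left-hand circular polarization
RHCP = np.array([1, -1j]) / np.sqrt(2)  # right-hand circular polarization

def bcpm_modulate(bits):
    return np.array([LHCP if b == 0 else RHCP for b in bits])

def bcpm_demodulate(symbols):
    # Decide for the polarization state with the larger correlation magnitude.
    return np.array([0 if abs(np.vdot(LHCP, s)) > abs(np.vdot(RHCP, s)) else 1
                     for s in symbols])

bits = np.random.randint(0, 2, 8)
noise = 0.1 * (np.random.randn(len(bits), 2) + 1j * np.random.randn(len(bits), 2))
print(bits, bcpm_demodulate(bcpm_modulate(bits) + noise))
```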

Relevance:

60.00%

Publisher:

Abstract:

Web sites that rely on databases for their content are now ubiquitous. Query result pages are dynamically generated from these databases in response to user-submitted queries. Automatically extracting structured data from query result pages is a challenging problem, as the structure of the data is not explicitly represented. While humans have shown good intuition in visually understanding data records on a query result page as displayed by a web browser, no existing approach to data record extraction has made full use of this intuition. We propose a novel approach that makes use of the common sources of evidence humans use to understand data records on a displayed query result page: structural regularity, and visual and content similarity between the data records displayed on the page. Based on these observations, we propose new techniques that can identify each data record individually while ignoring noise items such as navigation bars and adverts. We have implemented these techniques in a software prototype, rExtractor, and tested it using two datasets. Our experimental results show that our approach achieves significantly higher accuracy than previous approaches. Furthermore, it establishes the case for the use of vision-based algorithms in the context of data extraction from web sites.
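
A hedged sketch of the general idea (not rExtractor itself): candidate page blocks are grouped by crude visual and content signatures, and groups that are too small are discarded as noise. The block schema and thresholds below are illustrative assumptions.

```python
# Group rendered page blocks into candidate data records by visual
# similarity (quantized bounding-box size) and a rough content signature
# (token-count bucket). Small groups are treated as noise such as
# navigation bars and adverts.
from collections import defaultdict

def record_groups(blocks, min_group=3):
    """blocks: list of dicts with 'width', 'height', 'text' (assumed schema)."""
    groups = defaultdict(list)
    for b in blocks:
        key = (round(b["width"], -1), round(b["height"], -1),
               len(b["text"].split()) // 5)
        groups[key].append(b)
    return [g for g in groups.values() if len(g) >= min_group]
```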

Relevance:

60.00%

Publisher:

Abstract:

We consider the problem of self-healing in peer-to-peer networks that are under repeated attack by an omniscient adversary. We assume that, over a sequence of rounds, an adversary either inserts a node with arbitrary connections or deletes an arbitrary node from the network. The network responds to each such change with quick “repairs,” which consist of adding or deleting a small number of edges. These repairs essentially preserve the closeness of nodes after adversarial deletions, without increasing node degrees by too much, in the following sense. At any point in the algorithm, nodes v and w whose distance would have been l in the graph formed by considering only the adversarial insertions (not the adversarial deletions) will be at distance at most l log n in the actual graph, where n is the total number of vertices seen so far. Similarly, at any point, a node v whose degree would have been d in the graph with adversarial insertions only will have degree at most 3d in the actual graph. Our distributed data structure, which we call the Forgiving Graph, has low latency and bandwidth requirements. The Forgiving Graph improves on the Forgiving Tree distributed data structure of Hayes et al. (2008) in the following ways: 1) it ensures low stretch over all pairs of nodes, while the Forgiving Tree only ensures a low increase in diameter; 2) it handles both node insertions and deletions, while the Forgiving Tree only handles deletions; 3) it requires only a very simple and minimal initialization phase, while the Forgiving Tree initially requires construction of a spanning tree of the network.
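
A simplified, centralized illustration of the repair principle behind such structures, assuming access to the full graph (the actual Forgiving Graph is distributed and more involved): when a node is deleted, its former neighbors are reconnected through a balanced binary tree, so each neighbor gains only O(1) edges and pairwise distances stretch by at most O(log n).

```python
# Centralized toy version of a self-healing repair (not the paper's
# distributed data structure): reconnect the deleted node's neighbors as a
# balanced binary tree, where node i's children are nodes 2i+1 and 2i+2 in
# the neighbor ordering.
import networkx as nx

def heal_deletion(g: nx.Graph, victim):
    neighbors = list(g.neighbors(victim))
    g.remove_node(victim)
    for i, u in enumerate(neighbors):
        for j in (2 * i + 1, 2 * i + 2):
            if j < len(neighbors):
                g.add_edge(u, neighbors[j])
    return g
```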

Relevance:

60.00%

Publisher:

Abstract:

This paper investigates a queuing system for QoS optimization of multimedia traffic consisting of aggregated streams with diverse QoS requirements transmitted to a mobile terminal over a common downlink shared channel. The queuing system, proposed for buffer management of aggregated single-user traffic in the base station of High-Speed Downlink Packet Access (HSDPA), allows optimum loss/delay/jitter performance for end-user multimedia traffic with delay-tolerant non-real-time streams and partially loss-tolerant real-time streams. In the queuing system, the real-time stream has non-preemptive service priority, but the number of its packets in the system is restricted by a constant. The non-real-time stream has no service priority but is allowed unlimited access to the system. Both types of packets arrive according to stationary Poisson flows. Service times follow general distributions that depend on the packet type. A stability condition for the model is derived. The queue-length distribution for both types of customers is calculated at arbitrary epochs and at service completion epochs. The loss probability for priority packets is computed. The waiting-time distribution, in terms of its Laplace-Stieltjes transform, is obtained for both types of packets. Mean waiting time and jitter are computed. The numerical examples presented demonstrate the effectiveness of the queuing system for QoS optimization of buffered end-user multimedia traffic with aggregated real-time and non-real-time streams.
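
A toy simulation of the described discipline, with assumed arrival rates, service rate and real-time buffer cap (for simplicity the cap below is applied to the waiting room only); it illustrates the loss behaviour rather than reproducing the paper's analytical results.

```python
# Real-time (RT) packets get non-preemptive service priority but only a
# bounded number may wait; non-real-time (NRT) packets have no priority
# but unlimited buffer space. All parameter values are illustrative.
import random
from collections import deque

def simulate(n=100_000, lam_rt=0.3, lam_nrt=0.4, mu=1.0, rt_cap=5, seed=1):
    random.seed(seed)
    rt, nrt = deque(), deque()
    t, busy_until, rt_lost, rt_total = 0.0, 0.0, 0, 0
    for _ in range(n):
        t += random.expovariate(lam_rt + lam_nrt)
        # Serve queued packets (RT first) whenever the server frees before t.
        while (rt or nrt) and busy_until <= t:
            (rt if rt else nrt).popleft()
            busy_until += random.expovariate(mu)
        is_rt = random.random() < lam_rt / (lam_rt + lam_nrt)
        rt_total += is_rt
        if busy_until <= t and not rt and not nrt:
            busy_until = t + random.expovariate(mu)  # server idle: serve now
        elif is_rt:
            if len(rt) >= rt_cap:
                rt_lost += 1          # RT buffer full: packet is dropped
            else:
                rt.append(t)
        else:
            nrt.append(t)
    return rt_lost / rt_total

print("approx. RT loss probability:", simulate())
```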

Relevance:

60.00%

Publisher:

Abstract:

Blind steganalysis of JPEG images is addressed by modeling the correlations among the DCT coefficients using K-variate (K = 2) probability density function (p.d.f.) estimates constructed by means of Markov random field (MRF) cliques. The rationale for using higher-variate p.d.f.s together with MRF cliques for image steganalysis is explained via a classical detection problem. Although our approach offers many improvements over the current state of the art, it suffers from the high dimensionality and sparseness of the higher-variate p.d.f.s. The dimensionality and sparseness problems are addressed heuristically by means of dimensionality reduction and feature selection algorithms. The detection accuracy of the proposed method(s) is evaluated on Memon's (30,000 images) and Goljan's (1,912 images) image sets. It is shown that practically applicable steganalysis systems are possible with a suitable dimensionality reduction technique, and that such systems can, in general, provide improved detection accuracy over the current state of the art. Experimental results also justify this assertion.
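
A minimal sketch of the kind of bivariate (K = 2) statistic referred to above: a joint histogram of horizontally adjacent quantized DCT coefficients, i.e. pairwise MRF-style cliques. The clipping range and bin layout are illustrative assumptions, not the paper's exact feature set.

```python
# Estimate a bivariate p.d.f. over pairs of horizontally adjacent,
# quantized DCT coefficients, clipped to a small range to keep the joint
# histogram dense enough to be usable.
import numpy as np

def pairwise_dct_histogram(dct_coeffs: np.ndarray, clip: int = 4) -> np.ndarray:
    """dct_coeffs: 2-D array of quantized DCT coefficients for one image."""
    c = np.clip(dct_coeffs, -clip, clip).astype(int) + clip
    left, right = c[:, :-1].ravel(), c[:, 1:].ravel()
    hist = np.zeros((2 * clip + 1, 2 * clip + 1))
    np.add.at(hist, (left, right), 1)
    return hist / hist.sum()   # normalized joint p.d.f. estimate

# Example with random stand-in coefficients.
print(pairwise_dct_histogram(np.random.randint(-8, 9, (64, 64))).shape)
```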

Relevance:

60.00%

Publisher:

Abstract:

Electing a leader is a fundamental task in distributed computing. In its implicit version, only the leader must know who the elected leader is. This paper studies the message and time complexity of randomized implicit leader election in synchronous distributed networks. Surprisingly, the most "obvious" complexity bounds have not been proven for randomized algorithms. The "obvious" lower bounds of Ω(m) messages (m is the number of edges in the network) and Ω(D) time (D is the network diameter) are non-trivial to show for randomized (Monte Carlo) algorithms. (Recent results showing that even Ω(n) (n is the number of nodes in the network) is not a lower bound on the number of messages in complete networks make the above bounds somewhat less obvious.) To the best of our knowledge, these basic lower bounds have not been established even for deterministic algorithms (except for the limited case of comparison algorithms, where it was also required that some nodes may not wake up spontaneously and that D and n were not known).

We establish these fundamental lower bounds in this paper for the general case, even for randomized Monte Carlo algorithms. Our lower bounds are universal in the sense that they hold for all universal algorithms (algorithms that must work on all graphs), apply to every D, m, and n, and hold even if D, m, and n are known, all the nodes wake up simultaneously, and the algorithms can make any use of the nodes' identities. To show that these bounds are tight, we present an O(m)-message algorithm. An O(D)-time algorithm is known. A slight adaptation of our lower bound technique gives rise to an Ω(m) message lower bound for randomized broadcast algorithms.

An interesting fundamental problem is whether both upper bounds (messages and time) can be reached simultaneously in the randomized setting for all graphs. (The answer is known to be negative in the deterministic setting.) We answer this question partially by presenting a randomized algorithm that matches both complexities in some cases. This already separates (for some cases) randomized algorithms from deterministic ones. As first steps towards the general case, we present several universal leader election algorithms with bounds that trade off messages versus time. We view our results as a step towards understanding the complexity of universal leader election in distributed networks.
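
For contrast with these bounds, the toy synchronous simulation below performs implicit leader election by flooding random ranks for D rounds; it is a baseline illustration, not the paper's message-optimal algorithm, and its message cost is O(D·m), well above the Ω(m) lower bound discussed above.

```python
# Every node draws a random rank; in each synchronous round the largest
# rank seen so far is exchanged over every edge. After D rounds only the
# maximum-rank node still considers itself the leader (implicit election).
import random
import networkx as nx

def implicit_election(g: nx.Graph, rounds: int):
    rank = {v: random.random() for v in g}       # distinct w.h.p.
    best = dict(rank)                            # best rank seen so far
    for _ in range(rounds):
        nxt = dict(best)
        for u, v in g.edges():                   # one message per direction
            nxt[u] = max(nxt[u], best[v])
            nxt[v] = max(nxt[v], best[u])
        best = nxt
    return [v for v in g if best[v] == rank[v]]  # self-declared leaders

g = nx.connected_watts_strogatz_graph(30, 4, 0.3, seed=7)
print("self-declared leaders:", implicit_election(g, nx.diameter(g)))
```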

Relevance:

60.00%

Publisher:

Abstract:

The increased complexity and interconnectivity of Supervisory Control and Data Acquisition (SCADA) systems in the Smart Grid have exposed them to a wide range of cyber-security issues, and there is a multitude of potential access points for cyber attackers. This paper presents a SCADA-specific cyber-security test-bed which contains SCADA software and communication infrastructure. This test-bed is used to investigate an Address Resolution Protocol (ARP) spoofing-based man-in-the-middle attack. Finally, the paper proposes a future work plan which focuses on applying intrusion detection and prevention technology to address cyber-security issues in SCADA systems.
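
In the spirit of the intrusion-detection direction proposed as future work (and not the test-bed's own code), here is a small sketch that flags a common symptom of ARP spoofing: a known IP address suddenly answering from a different MAC address.

```python
# Track observed IP-to-MAC bindings and raise an alert when a binding
# changes, which may indicate ARP spoofing on the monitored segment.
def arp_monitor(observations):
    """observations: iterable of (ip, mac) pairs, e.g. parsed from ARP replies."""
    bindings, alerts = {}, []
    for ip, mac in observations:
        if ip in bindings and bindings[ip] != mac:
            alerts.append(f"possible ARP spoofing: {ip} moved "
                          f"{bindings[ip]} -> {mac}")
        bindings[ip] = mac
    return alerts

print(arp_monitor([("10.0.0.1", "aa:aa:aa:aa:aa:aa"),
                   ("10.0.0.1", "bb:bb:bb:bb:bb:bb")]))
```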

Relevance:

60.00%

Publisher:

Abstract:

Greater complexity and interconnectivity across systems embracing Smart Grid technologies have meant that cyber-security issues attract significant attention. This paper describes pertinent cyber-security requirements, in particular the cyber attacks and countermeasures that are critical for reliable Smart Grid operation. Relevant published literature is presented for critical aspects of Smart Grid cyber-security, such as vulnerability, interdependency, simulation, and standards. Furthermore, a preliminary case study is given which demonstrates the impact of a cyber attack that violates the integrity of data on the load management of a real power system. Finally, the paper proposes a future work plan which focuses on applying intrusion detection and prevention technology to address cyber-security issues. The paper also provides an overview of Smart Grid cyber-security with reference to related cross-disciplinary research topics.
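
As a hedged illustration of one countermeasure against the data-integrity attack described in the case study (our assumption, not the paper's proposal), load-management measurements can be authenticated with an HMAC so that tampered values are rejected before they influence control decisions.

```python
# Authenticate measurement strings with HMAC-SHA256; a modified reading
# fails verification. The key and message format are placeholders.
import hmac, hashlib

KEY = b"shared-secret-key"  # illustrative key, not a real deployment secret

def sign(measurement: str) -> str:
    return hmac.new(KEY, measurement.encode(), hashlib.sha256).hexdigest()

def verify(measurement: str, tag: str) -> bool:
    return hmac.compare_digest(sign(measurement), tag)

reading = "feeder42:load=1.37MW:t=1700000000"
tag = sign(reading)
print(verify(reading, tag), verify(reading.replace("1.37", "9.99"), tag))
```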

Relevance:

60.00%

Publisher:

Abstract:

We consider the problem of self-healing in reconfigurable networks, e.g., peer-to-peer and wireless mesh networks. For such networks under repeated attack by an omniscient adversary, we propose a fully distributed algorithm, Xheal, that maintains good expansion and spectral properties of the network while keeping the network connected. Moreover, Xheal does this while allowing only low stretch and low degree increase per node. The algorithm heals global properties like expansion and stretch while making only local changes and using only local information. We also provide bounds on the second-smallest eigenvalue of the Laplacian, which captures key properties such as mixing time, conductance, and congestion in routing. Xheal has low amortized latency and bandwidth requirements. Our work improves over the self-healing algorithms Forgiving Tree [PODC 2008] and Forgiving Graph [PODC 2009] in that we are able to give guarantees on degree and stretch while at the same time preserving the expansion and spectral properties of the network.
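
A small sketch of how the spectral property mentioned above can be monitored (a measurement aid, not the Xheal algorithm itself): the second-smallest eigenvalue of the graph Laplacian, the algebraic connectivity, which relates to expansion, mixing time and conductance, can be tracked before and after repairs.

```python
# Compute the algebraic connectivity (second-smallest Laplacian eigenvalue)
# of a graph, a quick check of whether a healed network retains good
# expansion. Fine for small graphs; sparse solvers would scale further.
import numpy as np
import networkx as nx

def algebraic_connectivity(g: nx.Graph) -> float:
    lam = np.linalg.eigvalsh(nx.laplacian_matrix(g).toarray().astype(float))
    return float(lam[1])   # eigenvalues sorted ascending; lam[0] is ~0

g = nx.random_regular_graph(4, 20, seed=3)
print("lambda_2 before attack:", round(algebraic_connectivity(g), 3))
```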