383 results for Packets
Abstract:
This paper proposes a probabilistic-prediction-based approach for providing Quality of Service (QoS) to delay-sensitive traffic in the Internet of Things (IoT). A joint packet scheduling and dynamic bandwidth allocation scheme is proposed to provide service differentiation and preferential treatment to delay-sensitive traffic. The scheduler focuses on reducing the waiting time of high-priority delay-sensitive services in the queue while simultaneously keeping the waiting time of other services within tolerable limits. The scheme uses the difference between the probabilities of the average queue length of high-priority packets in the previous and current cycles to determine the probability of the average weight required in the current cycle. This yields optimized bandwidth allocation to all services by avoiding the allocation of excess resources to high-priority services while still guaranteeing their service. The performance of the algorithm is investigated using MPEG-4 traffic traces under different system loads. The results show improved waiting time for scheduling high-priority packets while keeping waiting time and packet loss for other services within tolerable limits. Crown Copyright (C) 2015 Published by Elsevier B.V.
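The weight-adaptation idea described in this abstract can be illustrated with a toy sketch. The function below is an assumption-laden stand-in, not the authors' scheme: it nudges the bandwidth weight of the high-priority queue by the cycle-to-cycle change in the probability that its average queue length is high, and clamps the weight so the other services keep a guaranteed share.

```python
# Illustrative sketch (not the paper's algorithm): adjust the bandwidth
# weight of a high-priority queue from the change in the probability that
# its average queue length exceeds a threshold between two cycles.

def update_weight(w_prev, p_prev, p_curr, gain=0.5, w_min=0.1, w_max=0.9):
    """Return the weight for the current cycle.

    w_prev: weight used in the previous cycle
    p_prev, p_curr: P(avg queue length > threshold) in previous/current cycle
    gain: how aggressively the weight tracks the probability change
    """
    w = w_prev + gain * (p_curr - p_prev)   # grow weight if congestion rises
    return max(w_min, min(w_max, w))        # keep other queues serviceable

# Example: congestion probability rose from 0.2 to 0.4
w = update_weight(0.5, 0.2, 0.4)
print(round(w, 2))  # -> 0.6
```

The clamp is what prevents "distribution of excess resources" to the high-priority class: even under sustained congestion the weight saturates at w_max rather than starving the other services.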
Abstract:
Mobile Ad hoc Networks (MANETs) are self-organized, infrastructureless, decentralized wireless networks consisting of a group of heterogeneous mobile devices. The inherent characteristics of MANETs, such as frequent topology changes, node mobility, resource scarcity, and lack of central control, make QoS routing a hard task. QoS routing is the task of routing data packets from source to destination subject to QoS resource constraints such as bandwidth, delay, packet loss rate, and cost. In this paper, we propose a novel scheme for providing QoS routing in MANETs by using Emergent Intelligence (EI). EI is a group intelligence derived from the periodic interaction among a group of agents and nodes. We logically divide the MANET into clusters by a centrally located static agent, and in each cluster a mobile agent is deployed. The mobile agent interacts with the nodes, neighboring mobile agents, and the static agent to collect QoS resource information, carry out negotiations, find secure and reliable nodes, and find an optimal QoS path from source to destination. Simulation and analytical results show the effectiveness of the scheme. (C) 2015 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). Peer-review under responsibility of the Conference Program Chairs
Abstract:
In the context of wireless sensor networks, we are motivated by the design of a tree network spanning a set of source nodes that generate packets, a set of additional relay nodes that only forward packets from the sources, and a data sink. We assume that the paths from the sources to the sink have bounded hop count, that the nodes use the IEEE 802.15.4 CSMA/CA for medium access control, and that there are no hidden terminals. In this setting, starting with a set of simple fixed point equations, we derive explicit conditions on the packet generation rates at the sources, so that the tree network approximately provides certain quality of service (QoS) such as end-to-end delivery probability and mean delay. The structures of our conditions provide insight on the dependence of the network performance on the arrival rate vector, and the topological properties of the tree network. Our numerical experiments suggest that our approximations are able to capture a significant part of the QoS aware throughput region (of a tree network), that is adequate for many sensor network applications. Furthermore, for the special case of equal arrival rates, default backoff parameters, and for a range of values of target QoS, we show that among all path-length-bounded trees (spanning a given set of sources and the data sink) that meet the conditions derived in the paper, a shortest path tree achieves the maximum throughput. (C) 2015 Elsevier B.V. All rights reserved.
Abstract:
The optimal power-delay tradeoff is studied for a time-slotted independently and identically distributed fading point-to-point link, with perfect channel state information at both transmitter and receiver, and with random packet arrivals to the transmitter queue. It is assumed that the transmitter can control the number of packets served by controlling the transmit power in the slot. The optimal tradeoff between average power and average delay is analyzed for stationary and monotone transmitter policies. For such policies, an asymptotic lower bound on the minimum average delay of the packets is obtained, when average transmitter power approaches the minimum average power required for transmitter queue stability. The asymptotic lower bound on the minimum average delay is obtained from geometric upper bounds on the stationary distribution of the queue length. This approach, which uses geometric upper bounds, also leads to an intuitive explanation of the asymptotic behavior of average delay. The asymptotic lower bounds, along with previously known asymptotic upper bounds, are used to identify three new cases where the order of the asymptotic behavior differs from that obtained from a previously considered approximate model, in which the transmit power is a strictly convex function of real valued service batch size for every fade state.
Abstract:
We develop an approximate analytical technique for evaluating the performance of multi-hop networks based on beaconless IEEE 802.15.4 (the ``ZigBee'' PHY and MAC), a popular standard for wireless sensor networks. The network comprises sensor nodes, which generate measurement packets, relay nodes which only forward packets, and a data sink (base station). We consider a detailed stochastic process at each node, and analyse this process taking into account the interaction with neighbouring nodes via certain time averaged unknown variables (e.g., channel sensing rates, collision probabilities, etc.). By coupling the analyses at various nodes, we obtain fixed point equations that can be solved numerically to obtain the unknown variables, thereby yielding approximations of time average performance measures, such as packet discard probabilities and average queueing delays. The model incorporates packet generation at the sensor nodes and queues at the sensor nodes and relay nodes. We demonstrate the accuracy of our model by an extensive comparison with simulations. As an additional assessment of the accuracy of the model, we utilize it in an algorithm for sensor network design with quality-of-service (QoS) objectives, and show that designs obtained using our model actually satisfy the QoS constraints (as validated by simulating the networks), and the predictions are accurate to well within 10% as compared to the simulation results in a regime where the packet discard probability is low. (C) 2015 Elsevier B.V. All rights reserved.
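The phrase "fixed point equations that can be solved numerically" refers to successive substitution of the coupled per-node equations. A minimal sketch of that generic technique, with a made-up scalar map standing in for the paper's actual CSMA/CA equations:

```python
# Generic successive-substitution solver for a fixed-point equation x = f(x).
# The map below is a toy stand-in, not the paper's coupled per-node equations.
import math

def solve_fixed_point(f, x0, tol=1e-10, max_iter=1000):
    """Iterate x <- f(x) until successive values differ by less than tol."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("fixed-point iteration did not converge")

# Toy example: a collision-probability-like map p = 1 - exp(-0.5 * (1 - p))
p = solve_fixed_point(lambda v: 1 - math.exp(-0.5 * (1 - v)), 0.5)
print(round(p, 4))
```

In the multi-node setting the scalar x becomes a vector of the unknown time-averaged variables (sensing rates, collision probabilities, etc.), but the iteration has the same shape.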
Abstract:
Ensuring reliable, energy-efficient data communication in resource-constrained Wireless Sensor Networks (WSNs) is of primary concern. Traditionally, two types of re-transmission have been proposed for data loss, namely End-to-End loss recovery (E2E) and per-hop recovery. In these mechanisms, lost packets are re-transmitted from a source node or an intermediate node with a low success rate. Proliferation routing [1] for QoS provisioning in WSNs has low End-to-End reliability, is not energy efficient, and works only for transmissions from sensors to the sink. This paper proposes a Reliable Proliferation Routing with low Duty Cycle (RPRDC) in WSNs that integrates three core concepts, namely (i) a reliable path finder, (ii) randomized dispersity, and (iii) forwarding. Simulation results demonstrate that the packet delivery rate can be maintained at up to 93% in RPRDC, outperforming proliferation routing [1]. (C) 2015 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
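One way to see why randomized dispersity can lift delivery rates: if copies of a packet travel over (approximately) independent paths, delivery fails only when every copy is lost. A back-of-the-envelope sketch with made-up numbers, not RPRDC itself:

```python
# Back-of-the-envelope sketch (not RPRDC): if a packet is dispersed over
# several independent paths, end-to-end delivery succeeds when at least one
# copy gets through.

def delivery_probability(path_success_probs):
    """P(at least one copy arrives) for independent per-path success probs."""
    p_all_fail = 1.0
    for p in path_success_probs:
        p_all_fail *= (1.0 - p)
    return 1.0 - p_all_fail

# Three moderately lossy paths already push reliability above 90%.
print(round(delivery_probability([0.6, 0.6, 0.6]), 3))  # -> 0.936
```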
Abstract:
For solving complex flow fields with multi-scale structure, higher-order accurate schemes are preferred. Among high-order schemes, compact schemes have higher resolving efficiency. When compact and upwind compact schemes are used to solve aerodynamic problems, numerical oscillations appear near shocks. These oscillations are produced by the non-uniform group velocity of wave packets in the numerical solution. To improve the resolution of the shock, a parameter function is introduced into the compact scheme to control the group velocity. The newly developed method is simple; it has higher accuracy and a smaller stencil of grid points.
Abstract:
We propose a single optical photon source for quantum cryptography based on the acousto-electric effect. Surface acoustic waves (SAWs) propagating through a quasi-one-dimensional channel have been shown to produce packets of electrons which reside in the SAW minima and travel at the velocity of sound. In our scheme these electron packets are injected into a p-type region, resulting in photon emission. Since the number of electrons in each packet can be controlled down to a single electron, a stream of single (or N) photon states, with a creation time strongly correlated with the driving acoustic field, should be generated.
Abstract:
The work presented in this thesis revolves around erasure correction coding, as applied to distributed data storage and real-time streaming communications.
First, we examine the problem of allocating a given storage budget over a set of nodes for maximum reliability. The objective is to find an allocation of the budget that maximizes the probability of successful recovery by a data collector accessing a random subset of the nodes. This optimization problem is challenging in general because of its combinatorial nature, despite its simple formulation. We study several variations of the problem, assuming different allocation models and access models, and determine the optimal allocation and the optimal symmetric allocation (in which all nonempty nodes store the same amount of data) for a variety of cases. Although the optimal allocation can have nonintuitive structure and can be difficult to find in general, our results suggest that, as a simple heuristic, reliable storage can be achieved by spreading the budget maximally over all nodes when the budget is large, and spreading it minimally over a few nodes when it is small. Coding would therefore be beneficial in the former case, while uncoded replication would suffice in the latter case.
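The "spread maximally when the budget is large, minimally when it is small" heuristic can be checked numerically under simplifying assumptions not taken from the thesis: a unit-size file, a budget T split evenly over n nodes, each node accessed independently with probability p, and recovery succeeding when the accessed amount reaches 1.

```python
# Hedged numerical check of the spreading heuristic, under assumptions that
# are ours, not the thesis': unit file size, budget T split evenly over n
# nodes (each stores T/n), each node accessed independently with prob. p,
# recovery iff total accessed storage >= 1.
import math

def symmetric_recovery_prob(n, T, p):
    """P(recovery) for the symmetric allocation of budget T over n nodes."""
    need = math.ceil(n / T)            # smallest k with k * (T/n) >= 1
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(need, n + 1))

# Large budget (T = 5): spreading over 10 nodes beats concentrating on 2.
print(symmetric_recovery_prob(10, 5.0, 0.5) > symmetric_recovery_prob(2, 5.0, 0.5))  # -> True
# Small budget (T = 1): a single full replica beats maximal spreading.
print(symmetric_recovery_prob(1, 1.0, 0.5) > symmetric_recovery_prob(10, 1.0, 0.5))  # -> True
```

With a large budget any couple of accessed nodes suffice, so more nodes mean more chances; with a small budget spreading dilutes each node below usefulness, which is the coding-versus-replication distinction drawn above.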
Second, we study how distributed storage allocations affect the recovery delay in a mobile setting. Specifically, two recovery delay optimization problems are considered for a network of mobile storage nodes: the maximization of the probability of successful recovery by a given deadline, and the minimization of the expected recovery delay. We show that the first problem is closely related to the earlier allocation problem, and solve the second problem completely for the case of symmetric allocations. It turns out that the optimal allocations for the two problems can be quite different. In a simulation study, we evaluated the performance of a simple data dissemination and storage protocol for mobile delay-tolerant networks, and observed that the choice of allocation can have a significant impact on the recovery delay under a variety of scenarios.
Third, we consider a real-time streaming system where messages created at regular time intervals at a source are encoded for transmission to a receiver over a packet erasure link; the receiver must subsequently decode each message within a given delay from its creation time. For erasure models containing a limited number of erasures per coding window, per sliding window, and containing erasure bursts whose maximum length is sufficiently short or long, we show that a time-invariant intrasession code asymptotically achieves the maximum message size among all codes that allow decoding under all admissible erasure patterns. For the bursty erasure model, we also show that diagonally interleaved codes derived from specific systematic block codes are asymptotically optimal over all codes in certain cases. We also study an i.i.d. erasure model in which each transmitted packet is erased independently with the same probability; the objective is to maximize the decoding probability for a given message size. We derive an upper bound on the decoding probability for any time-invariant code, and show that the gap between this bound and the performance of a family of time-invariant intrasession codes is small when the message size and packet erasure probability are small. In a simulation study, these codes performed well against a family of random time-invariant convolutional codes under a number of scenarios.
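The i.i.d. erasure model mentioned above admits a simple closed-form check under a standard assumption (a code that decodes a message iff at most e of the n packets in its window are erased); the numbers below are illustrative, not results from the thesis.

```python
# Hedged illustration of the i.i.d. erasure model: each of n packets is
# erased independently with probability eps, and decoding succeeds iff at
# most e packets are erased. Example numbers only.
import math

def decode_probability(n, e, eps):
    """P(at most e erasures among n i.i.d. packet transmissions)."""
    return sum(math.comb(n, k) * eps**k * (1 - eps)**(n - k)
               for k in range(e + 1))

print(round(decode_probability(10, 2, 0.05), 4))  # -> 0.9885
```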
Finally, we consider the joint problems of routing and caching for named data networking. We propose a backpressure-based policy that employs virtual interest packets to make routing and caching decisions. In a packet-level simulation, the proposed policy outperformed a basic protocol that combines shortest-path routing with least-recently-used (LRU) cache replacement.
Abstract:
This thesis covers a range of topics in numerical and analytical relativity, centered around introducing tools and methodologies for the study of dynamical spacetimes. The scope of the studies is limited to classical (as opposed to quantum) vacuum spacetimes described by Einstein's general theory of relativity. The numerical works presented here are carried out within the Spectral Einstein Code (SpEC) infrastructure, while analytical calculations extensively utilize Wolfram's Mathematica program.
We begin by examining highly dynamical spacetimes such as binary black hole mergers, which can be investigated using numerical simulations. However, there are difficulties in interpreting the output of such simulations. One difficulty stems from the lack of a canonical coordinate system (henceforth referred to as gauge freedom) and tetrad, against which quantities such as Newman-Penrose Psi_4 (usually interpreted as the gravitational wave part of curvature) should be measured. We tackle this problem in Chapter 2 by introducing a set of geometrically motivated coordinates that are independent of the simulation gauge choice, as well as a quasi-Kinnersley tetrad, also invariant under gauge changes in addition to being optimally suited to the task of gravitational wave extraction.
Another difficulty arises from the need to condense the overwhelming amount of data generated by the numerical simulations. In order to extract physical information in a succinct and transparent manner, one may define a version of gravitational field lines and field strength using spatial projections of the Weyl curvature tensor. Introduction, investigation and utilization of these quantities will constitute the main content in Chapters 3 through 6.
For the last two chapters, we turn to the analytical study of a simpler dynamical spacetime, namely a perturbed Kerr black hole. We will introduce in Chapter 7 a new analytical approximation to the quasi-normal mode (QNM) frequencies, and relate various properties of these modes to wave packets traveling on unstable photon orbits around the black hole. In Chapter 8, we study a bifurcation in the QNM spectrum as the spin of the black hole a approaches extremality.
Abstract:
We simulate incompressible, MHD turbulence using a pseudo-spectral code. Our major conclusions are as follows.
1) MHD turbulence is most conveniently described in terms of counter propagating shear Alfvén and slow waves. Shear Alfvén waves control the cascade dynamics. Slow waves play a passive role and adopt the spectrum set by the shear Alfvén waves. Cascades composed entirely of shear Alfvén waves do not generate a significant measure of slow waves.
2) MHD turbulence is anisotropic, with energy cascading more rapidly along k⊥ than along k∥, where k⊥ and k∥ refer to wavevector components perpendicular and parallel to the local magnetic field. Anisotropy increases with increasing k⊥ such that excited modes are confined inside a cone bounded by k∥ ∝ k⊥^γ with γ < 1. The opening angle of the cone, θ(k⊥) ∝ k⊥^−(1−γ), defines the scale-dependent anisotropy.
3) MHD turbulence is generically strong in the sense that the waves which comprise it suffer order-unity distortions on timescales comparable to their periods. Nevertheless, turbulent fluctuations are small deep inside the inertial range. Their energy density is less than that of the background field by a factor θ^2(k⊥) ≪ 1.
4) MHD cascades are best understood geometrically. Wave packets suffer distortions as they move along magnetic field lines perturbed by counter propagating waves. Field lines perturbed by unidirectional waves map planes perpendicular to the local field into each other. Shear Alfvén waves are responsible for the mapping's shear and slow waves for its dilatation. The amplitude of the former exceeds that of the latter by 1/θ(k⊥) which accounts for dominance of the shear Alfvén waves in controlling the cascade dynamics.
5) Passive scalars mixed by MHD turbulence adopt the same power spectrum as the velocity and magnetic field perturbations.
6) Decaying MHD turbulence is unstable to an increase of the imbalance between the flux of waves propagating in opposite directions along the magnetic field. Forced MHD turbulence displays order unity fluctuations with respect to the balanced state if excited at low k by δ(t) correlated forcing. It appears to be statistically stable to the unlimited growth of imbalance.
7) Gradients of the dynamic variables are focused into sheets aligned with the magnetic field whose thickness is comparable to the dissipation scale. Sheets formed by oppositely directed waves are uncorrelated. We suspect that these are vortex sheets which the mean magnetic field prevents from rolling up.
8) Items (1)-(5) lend support to the model of strong MHD turbulence put forth by Goldreich and Sridhar (1995, 1997). Results from our simulations are also consistent with the GS prediction γ = 2/3. The sole notable discrepancy is that the 1D power-law spectra, E(k⊥) ∝ k⊥^−α, determined from our simulations exhibit α ≈ 3/2, whereas the GS model predicts α = 5/3.
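The scale-dependent anisotropy quoted in item (2) follows from a one-line estimate (standard reasoning, not a quotation from the simulations): the opening angle of the cone is the ratio of parallel to perpendicular wavenumber,

```latex
\theta(k_\perp) \sim \frac{k_\parallel}{k_\perp}
  \propto \frac{k_\perp^{\gamma}}{k_\perp}
  = k_\perp^{-(1-\gamma)}, \qquad \gamma < 1,
```

so the GS value γ = 2/3 gives θ(k⊥) ∝ k⊥^(−1/3), a cone that narrows toward small scales.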
Abstract:
The early stage of laminar-turbulent transition in a hypervelocity boundary layer is studied using a combination of modal linear stability analysis, transient growth analysis, and direct numerical simulation. Modal stability analysis is used to clarify the behavior of first and second mode instabilities on flat plates and sharp cones for a wide range of high enthalpy flow conditions relevant to experiments in impulse facilities. Vibrational nonequilibrium is included in this analysis, its influence on the stability properties is investigated, and simple models for predicting when it is important are described.
Transient growth analysis is used to determine the optimal initial conditions that lead to the largest possible energy amplification within the flow. Such analysis is performed for both spatially and temporally evolving disturbances. The analysis again targets flows that have large stagnation enthalpy, such as those found in shock tunnels, expansion tubes, and atmospheric flight at high Mach numbers, and clarifies the effects of Mach number and wall temperature on the amplification achieved. Direct comparisons between modal and non-modal growth are made to determine the relative importance of these mechanisms under different flow regimes.
Conventional stability analysis employs the assumption that disturbances evolve with either a fixed frequency (spatial analysis) or a fixed wavenumber (temporal analysis). Direct numerical simulations are employed to relax these assumptions and investigate the downstream propagation of wave packets that are localized in space and time, and hence contain a distribution of frequencies and wavenumbers. Such wave packets are commonly observed in experiments and hence their amplification is highly relevant to boundary layer transition prediction. It is demonstrated that such localized wave packets experience much less growth than is predicted by spatial stability analysis, and therefore it is essential that the bandwidth of localized noise sources that excite the instability be taken into account in making transition estimates. A simple model based on linear stability theory is also developed which yields comparable results with an enormous reduction in computational expense. This enables the amplification of finite-width wave packets to be taken into account in transition prediction.
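The role of spatial growth rates in transition estimates can be made concrete with the standard e^N bookkeeping: the amplification factor N is the streamwise integral of the local spatial growth rate −α_i. The sketch below uses a made-up constant growth-rate curve for illustration; it is not data or a method from the thesis.

```python
# Standard e^N bookkeeping for spatial stability results: N is the
# streamwise integral of the local growth rate -alpha_i. The growth-rate
# values below are invented for illustration.

def n_factor(x, minus_alpha_i):
    """Trapezoidal integral of the spatial growth rate along x."""
    N = 0.0
    for i in range(1, len(x)):
        N += 0.5 * (minus_alpha_i[i] + minus_alpha_i[i - 1]) * (x[i] - x[i - 1])
    return N

# Uniform growth rate 0.02/mm over 100 mm gives N = 2 (amplification e^2).
x = [0.0, 25.0, 50.0, 75.0, 100.0]
rate = [0.02] * 5
print(n_factor(x, rate))  # -> 2.0
```

A wave packet with finite bandwidth samples a range of frequencies, most of which grow more slowly than the most-amplified one, which is why the single-frequency N computed this way overestimates packet growth.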
Abstract:
Today, the unified European Rail Traffic Management System (ERTMS) signalling system is being deployed across Europe to promote interoperability between different railway networks. The goal of this project is to deploy the ETCS protocol, part of the ERTMS system, in a hybrid simulation environment, producing demonstrators that will speed up the roll-out of ERTMS. To this end, the System-in-the-loop tool of the OPNET simulator was used. Building on this tool, a library of functions was written to integrate real ETCS protocol packets into the simulated environment. Finally, using this library, the performance of the ETCS protocol in the face of network impairments was analysed, and the performance of the new library when translating real packets into simulated ones (and vice versa) was examined.
Abstract:
The objective of this dissertation is to evaluate the performance of virtual routing environments built on x86 machines and on network devices found in today's Internet. Among the most widely used virtualization platforms, we want to identify which one best meets the requirements of a virtual routing environment so as to allow programming of the core of production networks. The Xen and KVM virtualization platforms were installed on modern, high-capacity x86 servers and compared in terms of efficiency, flexibility, and the capacity for isolation between networks, which are the requirements for good performance of a virtual network. The test results show that, despite being a full virtualization platform, KVM outperforms Xen in packet forwarding and routing when VIRTIO is used. Moreover, only Xen exhibited isolation problems between virtual networks. We also evaluated the effect of the NUMA architecture, very common in modern x86 servers, on VM performance when large amounts of memory and many processing cores are allocated to the VMs. The analysis of the results shows that the performance of network Input/Output (I/O) operations can be compromised if the amounts of virtual memory and CPU allocated to a VM do not respect the size of the NUMA nodes present in the hardware. Finally, we studied OpenFlow. It allows networks to be segmented across routers, switches, and x86 machines so that virtual routing environments with different forwarding logics can be created. We verified that, when installed with Xen and with KVM, it enables the migration of virtual networks between different physical nodes without interruption of data flows, and also allows the packet-forwarding performance of the created virtual networks to be increased. It was thus possible to program the core of the network to implement alternatives to the IP protocol.