850 results for Packet switching (Data transmission)


Relevance: 30.00%

Abstract:

Ensuring reliable, energy-efficient data communication in resource-constrained Wireless Sensor Networks (WSNs) is of primary concern. Traditionally, two types of re-transmission have been proposed to handle data loss: End-to-End (E2E) loss recovery and per-hop recovery. In these mechanisms, lost packets are re-transmitted from the source node or an intermediate node with a low success rate. Proliferation routing (1) for QoS provisioning in WSNs suffers from low End-to-End reliability, is not energy efficient, and works only for transmissions from sensors to the sink. This paper proposes a Reliable Proliferation Routing with low Duty Cycle (RPRDC) for WSNs that integrates three core concepts: (i) a reliable path finder, (ii) randomized dispersity, and (iii) forwarding. Simulation results demonstrate that the packet delivery success rate can be maintained at up to 93% in RPRDC, outperforming proliferation routing (1). (C) 2015 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
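
To see why per-hop recovery tends to outperform end-to-end retransmission, consider a minimal sketch assuming independent links with per-transmission delivery probability p and a bounded number of attempts (all values illustrative, not taken from the paper):

```python
# Sketch: end-to-end (E2E) vs per-hop retransmission over a multi-hop path.
# Assumes independent links, per-transmission delivery probability p, and
# at most `attempts` tries per packet (illustrative values only).

def e2e_success(p: float, hops: int, attempts: int) -> float:
    """Each attempt must cross all hops; retries restart from the source."""
    per_try = p ** hops
    return 1.0 - (1.0 - per_try) ** attempts

def per_hop_success(p: float, hops: int, attempts: int) -> float:
    """Each hop retries locally; the packet advances hop by hop."""
    per_hop = 1.0 - (1.0 - p) ** attempts
    return per_hop ** hops

if __name__ == "__main__":
    p, hops, attempts = 0.8, 6, 3
    print(f"E2E:     {e2e_success(p, hops, attempts):.3f}")   # ~0.598
    print(f"Per-hop: {per_hop_success(p, hops, attempts):.3f}")  # ~0.953
```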

Relevance: 30.00%

Abstract:

The non-deterministic relationship between Bit Error Rate and Packet Error Rate is demonstrated for an optical media access layer in common use. We show that frequency components of coded, non-random data can cause this relationship. © 2005 Optical Society of America.
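
For context, the deterministic baseline that this observation departs from assumes independent, uniformly distributed bit errors, in which case BER fixes PER exactly; a quick sketch (the packet length is an assumed example value):

```python
# Baseline i.i.d. model the abstract argues against: with independent bit
# errors, the Bit Error Rate determines the Packet Error Rate exactly.

def per_from_ber(ber: float, packet_bytes: int) -> float:
    """PER = 1 - (1 - BER)^n for n independent bits."""
    n_bits = 8 * packet_bytes
    return 1.0 - (1.0 - ber) ** n_bits

if __name__ == "__main__":
    for ber in (1e-9, 1e-7, 1e-5):
        print(f"BER={ber:.0e} -> PER={per_from_ber(ber, 1500):.3e}")
```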

Relevance: 30.00%

Abstract:

Severe acute respiratory syndrome (SARS) is a serious disease with many puzzling features. We present a simple dynamic model to assess the epidemic potential of SARS and the effectiveness of control measures. With this model, we analysed the SARS epidemic data in Beijing. Fitting the data gives a basic case reproduction number of 2.16 leading to the outbreak, and the variation of the effective reproduction number reflects the effect of control. Notably, our study shows that the response time and the strength of control measures have significant effects on the scale of the outbreak and the duration of the epidemic.
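
The authors' exact model is not reproduced here, but a generic discrete-time SIR iteration using the reported R0 = 2.16 (the infectious period and population size are assumed illustrative values) shows the kind of dynamics being fitted:

```python
# A minimal discrete-time SIR sketch (not the paper's model) driven by the
# reported basic reproduction number R0 = 2.16.

def simulate_sir(r0=2.16, infectious_days=7.0, n=1_000_000, days=365):
    gamma = 1.0 / infectious_days   # recovery rate per day
    beta = r0 * gamma               # transmission rate per day
    s, i, r = n - 1.0, 1.0, 0.0
    peak = 0.0
    for _ in range(days):
        new_inf = beta * s * i / n  # new infections this day
        new_rec = gamma * i         # new recoveries this day
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak, r

if __name__ == "__main__":
    peak, final_size = simulate_sir()
    print(f"peak infectious: {peak:,.0f}, final epidemic size: {final_size:,.0f}")
```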

Relevance: 30.00%

Abstract:

This paper considers the basic present value model of interest rates under rational expectations with two additional features. First, following McCallum (1994), the model assumes a policy reaction function where changes in the short-term interest rate are determined by the long-short spread. Second, the short-term interest rate and the risk premium processes are characterized by a Markov regime-switching model. Using US post-war interest rate data, this paper finds evidence that a two-regime switching model fits the data better than the basic model. The estimation results also show the presence of two alternative states displaying quite different features.
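
A minimal simulation sketch of a two-state Markov regime-switching short-rate process of this kind (the transition probabilities, regime means, and volatilities below are illustrative assumptions, not the paper's estimates):

```python
# Sketch of a two-regime Markov switching short-rate process: the rate
# mean-reverts toward a regime-dependent mean with regime-dependent
# volatility, and the regime follows a two-state Markov chain.

import random

P_STAY = (0.95, 0.90)   # probability of remaining in regime 0 / regime 1
MU = (0.03, 0.08)       # regime-dependent mean short rate
SIGMA = (0.002, 0.010)  # regime-dependent volatility

def simulate(n=200, seed=0):
    rng = random.Random(seed)
    state, rate, path = 0, MU[0], []
    for _ in range(n):
        # mean-reverting step toward the current regime's mean
        rate += 0.2 * (MU[state] - rate) + rng.gauss(0.0, SIGMA[state])
        path.append((state, rate))
        if rng.random() > P_STAY[state]:   # Markov regime switch
            state = 1 - state
    return path

if __name__ == "__main__":
    for state, rate in simulate(5):
        print(f"regime {state}: r = {rate:.4f}")
```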

Relevance: 30.00%

Abstract:

Published as an article in: Studies in Nonlinear Dynamics & Econometrics, 2004, vol. 8, issue 1, article 5.

Relevance: 30.00%

Abstract:

This paper proposes an extended version of the basic New Keynesian monetary (NKM) model that incorporates revision processes for output and inflation data in order to assess the importance of data revisions for the estimated monetary policy rule parameters and the transmission of policy shocks. Our empirical evidence, based on a structural econometric approach, suggests that although the initial announcements of output and inflation are not rational forecasts of revised output and inflation data, ignoring the presence of non-well-behaved revision processes may not be a serious drawback in the analysis of monetary policy in this framework. However, the transmission of inflation-push shocks is strongly affected by the treatment of data revisions, especially when the nominal stickiness parameter is estimated taking data revision processes into account.
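
One standard way to examine the rationality claim is a Mincer-Zarnowitz-style regression of the final revision on the initial announcement; the sketch below uses synthetic placeholder data, not the paper's dataset:

```python
# If initial announcements were rational forecasts, the revision
# r_t = y_final - y_initial would be unpredictable from the announcement.
# Regressing the revision on the announcement tests this.

import numpy as np

rng = np.random.default_rng(0)
y_final = rng.normal(2.0, 1.0, 200)        # "true" revised data (synthetic)
y_initial = 0.8 * y_final + rng.normal(0.0, 0.5, 200)  # non-rational announcement
revision = y_final - y_initial

X = np.column_stack([np.ones_like(y_initial), y_initial])
beta, *_ = np.linalg.lstsq(X, revision, rcond=None)
print(f"intercept={beta[0]:.3f}, slope={beta[1]:.3f}")
# Under rationality both coefficients should be statistically zero; a
# nonzero slope signals a predictable, non-well-behaved revision process.
```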

Relevance: 30.00%

Abstract:

Transmission investments are currently needed to meet increasing electricity demand, to address security-of-supply concerns, and to reach carbon-emissions targets. A key issue when assessing the benefits from an expanded grid is the valuation of the uncertain cash flows that result from the expansion. We propose a valuation model that accommodates both physical and economic uncertainties, following the Real Options approach. It combines optimization techniques with Monte Carlo simulation. We illustrate the use of our model in a simplified two-node grid and assess the decision whether or not to invest in a particular upgrade. The generation mix includes coal- and natural gas-fired stations that operate under carbon constraints. The underlying parameters are estimated from observed market data.
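
A stripped-down sketch of the valuation idea, with the upgrade's cash flow driven by the price spread between the two nodes; every parameter below is an illustrative assumption rather than estimated market data:

```python
# Monte Carlo valuation sketch: simulate uncertain congestion rents earned
# by a transmission upgrade and compare their discounted value with the
# investment cost. All parameters are illustrative assumptions.

import random

def upgrade_npv(paths=10_000, years=20, rate=0.07,
                spread0=8.0, vol=0.25,
                capacity_mw=500, hours=8760, cost=4e8, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(paths):
        spread, pv = spread0, 0.0       # $/MWh price spread between nodes
        for t in range(1, years + 1):
            spread *= 1.0 + rng.gauss(0.0, vol)
            rent = max(spread, 0.0) * capacity_mw * hours   # $/year
            pv += rent / (1.0 + rate) ** t
        total += pv
    return total / paths - cost

if __name__ == "__main__":
    print(f"expected NPV of upgrade: ${upgrade_npv():,.0f}")
```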

Relevance: 30.00%

Abstract:

The work presented in this thesis revolves around erasure correction coding, as applied to distributed data storage and real-time streaming communications.

First, we examine the problem of allocating a given storage budget over a set of nodes for maximum reliability. The objective is to find an allocation of the budget that maximizes the probability of successful recovery by a data collector accessing a random subset of the nodes. This optimization problem is challenging in general because of its combinatorial nature, despite its simple formulation. We study several variations of the problem, assuming different allocation models and access models, and determine the optimal allocation and the optimal symmetric allocation (in which all nonempty nodes store the same amount of data) for a variety of cases. Although the optimal allocation can have nonintuitive structure and can be difficult to find in general, our results suggest that, as a simple heuristic, reliable storage can be achieved by spreading the budget maximally over all nodes when the budget is large, and spreading it minimally over a few nodes when it is small. Coding would therefore be beneficial in the former case, while uncoded replication would suffice in the latter case.
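
The symmetric case admits a closed-form recovery probability. In the sketch below (parameters illustrative), a budget T is spread evenly over m of the n nodes, the collector reads a uniform random r-subset, and recovery is assumed to succeed, under an ideal code, iff the accessed storage totals at least the normalized data size of 1:

```python
# Recovery probability of a symmetric allocation: budget T over m nonempty
# nodes (each stores T/m); recovery needs at least ceil(m/T) of them in the
# collector's random r-subset. Hypergeometric counting does the rest.

from math import comb, ceil

def recovery_prob(n: int, m: int, r: int, budget: float) -> float:
    need = ceil(m / budget)             # nonempty nodes required for recovery
    return sum(comb(m, k) * comb(n - m, r - k)
               for k in range(max(need, r - (n - m)), min(m, r) + 1)) / comb(n, r)

if __name__ == "__main__":
    n, r, budget = 20, 5, 2.0           # small budget relative to access size
    for m in (2, 5, 10, 20):            # how widely the budget is spread
        print(f"m={m:2d}: P(recovery) = {recovery_prob(n, m, r, budget):.3f}")
    # Minimal spreading (m=2, i.e. uncoded replication) wins here, matching
    # the small-budget heuristic described above.
```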

Second, we study how distributed storage allocations affect the recovery delay in a mobile setting. Specifically, two recovery delay optimization problems are considered for a network of mobile storage nodes: the maximization of the probability of successful recovery by a given deadline, and the minimization of the expected recovery delay. We show that the first problem is closely related to the earlier allocation problem, and solve the second problem completely for the case of symmetric allocations. It turns out that the optimal allocations for the two problems can be quite different. In a simulation study, we evaluated the performance of a simple data dissemination and storage protocol for mobile delay-tolerant networks, and observed that the choice of allocation can have a significant impact on the recovery delay under a variety of scenarios.

Third, we consider a real-time streaming system where messages created at regular time intervals at a source are encoded for transmission to a receiver over a packet erasure link; the receiver must subsequently decode each message within a given delay from its creation time. For erasure models containing a limited number of erasures per coding window, per sliding window, and containing erasure bursts whose maximum length is sufficiently short or long, we show that a time-invariant intrasession code asymptotically achieves the maximum message size among all codes that allow decoding under all admissible erasure patterns. For the bursty erasure model, we also show that diagonally interleaved codes derived from specific systematic block codes are asymptotically optimal over all codes in certain cases. We also study an i.i.d. erasure model in which each transmitted packet is erased independently with the same probability; the objective is to maximize the decoding probability for a given message size. We derive an upper bound on the decoding probability for any time-invariant code, and show that the gap between this bound and the performance of a family of time-invariant intrasession codes is small when the message size and packet erasure probability are small. In a simulation study, these codes performed well against a family of random time-invariant convolutional codes under a number of scenarios.
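
For the i.i.d. model, the message-size/reliability trade-off can be sketched by assuming an MDS-style code that decodes iff at least k of the n packets in the coding window arrive; this is an illustrative assumption, not the thesis's exact construction:

```python
# Decoding probability under i.i.d. erasures: each of n packets is erased
# independently with probability p; decoding (assumed MDS-style) succeeds
# iff at least k packets survive. Larger k = larger message, lower reliability.

from math import comb

def decode_prob(n: int, k: int, p: float) -> float:
    """P(at least k of n packets survive i.i.d. erasure with probability p)."""
    return sum(comb(n, a) * (1 - p) ** a * p ** (n - a)
               for a in range(k, n + 1))

if __name__ == "__main__":
    n, p = 16, 0.05
    for k in (10, 12, 14, 16):
        print(f"message size k={k:2d}: P(decode) = {decode_prob(n, k, p):.4f}")
```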

Finally, we consider the joint problems of routing and caching for named data networking. We propose a backpressure-based policy that employs virtual interest packets to make routing and caching decisions. In a packet-level simulation, the proposed policy outperformed a basic protocol that combines shortest-path routing with least-recently-used (LRU) cache replacement.
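
A minimal sketch of the LRU replacement used in the baseline protocol (the backpressure/virtual-interest-packet policy itself is more involved and not reproduced here):

```python
# Least-recently-used (LRU) content store, as in the baseline protocol:
# hits refresh an entry's recency; inserts evict the stalest entry.

from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = OrderedDict()       # oldest entry first

    def get(self, name):
        if name not in self.store:
            return None                  # miss: the interest is forwarded
        self.store.move_to_end(name)     # mark as most recently used
        return self.store[name]

    def put(self, name, data):
        self.store[name] = data
        self.store.move_to_end(name)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)   # evict least recently used
```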

Relevance: 30.00%

Abstract:

A neural network is a highly interconnected set of simple processors. The many connections allow information to travel rapidly through the network, and because the processors are simple, many of them are feasible in one network. Together these properties imply that we can build efficient massively parallel machines using neural networks. The primary problem is how to specify the interconnections in a neural network. The various approaches developed so far, such as the outer product, learning algorithms, or energy functions, suffer from the following deficiencies: long training/specification times; no guarantee of working on all inputs; and a requirement for full connectivity.

Alternatively, we discuss methods of using the topology and constraints of the problems themselves to design the topology and connections of the neural solution. We define several useful circuits, generalizations of the Winner-Take-All circuit, that allow us to incorporate constraints using feedback in a controlled manner. These circuits are proven to be stable and to converge only on valid states. We use the Hopfield electronic model since this is close to an actual implementation. We also discuss methods for incorporating these circuits into larger systems, neural and non-neural. By exploiting regularities in our definition, we can construct efficient networks. To demonstrate the methods, we look at three problems from communications. We first discuss two applications to problems from circuit switching: finding routes in large multistage switches, and the call rearrangement problem. These show both how we can use many neurons to build massively parallel machines and how the Winner-Take-All circuits can simplify our designs.
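
A discrete-time caricature of the Winner-Take-All behaviour, in the style of the classic MAXNET mutual-inhibition iteration rather than the analog Hopfield circuit used in the thesis:

```python
# MAXNET-style Winner-Take-All: units mutually inhibit each other until
# only the unit with the largest input remains active.

def winner_take_all(inputs, eps=None, max_steps=1000):
    """Iterate x_i <- max(0, x_i - eps * sum_{j != i} x_j); return winner index."""
    x = [float(v) for v in inputs]
    n = len(x)
    eps = eps if eps is not None else 1.0 / (2 * (n - 1))  # eps < 1/(n-1)
    for _ in range(max_steps):
        s = sum(x)
        x = [max(0.0, v - eps * (s - v)) for v in x]
        if sum(1 for v in x if v > 0) <= 1:   # a single survivor remains
            break
    return x.index(max(x))

if __name__ == "__main__":
    print(winner_take_all([0.3, 0.9, 0.5, 0.7]))   # -> 1
```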

Next we develop a solution to the contention arbitration problem of high-speed packet switches. We define a useful class of switching networks and then design a neural network to solve the contention arbitration problem for this class. Various aspects of the neural network/switch system are analyzed to measure the queueing performance of this method. Using the basic design, a feasible architecture for a large (1024-input) ATM packet switch is presented. Using the massive parallelism of neural networks, we can consider algorithms that were previously computationally unattainable. These now viable algorithms lead us to new perspectives on switch design.
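
Functionally, contention arbitration reduces to a per-output winner selection among the inputs requesting that output in each time slot; a toy sketch (the tie-break rule here is an arbitrary fixed priority, not the thesis's neural mechanism):

```python
# Crossbar contention arbitration as per-output winner selection: each
# output port grants one of its requesting inputs; losers wait a slot.

def arbitrate(requests):
    """requests: dict input_port -> output_port wanted this slot.
    Returns dict output_port -> winning input_port."""
    contenders = {}
    for inp, out in requests.items():
        contenders.setdefault(out, []).append(inp)
    # one winner per output; lowest-numbered input wins (arbitrary rule)
    return {out: min(inps) for out, inps in contenders.items()}

if __name__ == "__main__":
    reqs = {0: 2, 1: 2, 2: 0, 3: 2}     # three inputs contend for output 2
    print(arbitrate(reqs))              # {2: 0, 0: 2}
```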

Relevance: 30.00%

Abstract:

This paper deals with the convergence of a remote iterative learning control system subject to data dropouts. The system is composed of a set of discrete-time multiple input-multiple output linear models, each one with its corresponding actuator device and sensor. Each actuator applies the input signal vector to its corresponding model at the sampling instants, and the sensor measures the output signal vector. The iterative learning law is processed in a controller located far away from the models, so the control signal vector has to be transmitted from the controller to the actuators through transmission channels. Such a law uses the measurements of each model to generate the input vector to be applied to the subsequent model, so the measurements of the models have to be transmitted from the sensors to the controller. All transmissions are subject to failures, which are described as a binary sequence taking the value 1 or 0. A dropout compensation technique is used to replace the data lost in the transmission processes. The convergence to zero of the errors between the output signal vector and a reference is achieved as the number of models tends to infinity.
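
A minimal sketch of the scheme described above, with a toy linear model, a simple learning gain, and Bernoulli measurement losses compensated by holding the last received value (all values are illustrative assumptions):

```python
# Iterative learning control with dropout compensation: the update law
# u_{k+1} = u_k + L * e_k uses held (last-received) measurements whenever
# the sensor-to-controller transmission fails.

import random

def run_ilc(iterations=30, n=50, gain=0.5, loss_prob=0.2, seed=0):
    rng = random.Random(seed)
    y_ref = [1.0] * n                   # reference output trajectory
    u = [0.0] * n                       # input applied at iteration k
    last_meas = [0.0] * n               # held values for dropout compensation
    for _ in range(iterations):
        y = [0.8 * ui for ui in u]      # toy linear model: y = G u, G = 0.8
        for t in range(n):              # sensor -> controller transmission
            if rng.random() > loss_prob:
                last_meas[t] = y[t]     # received: refresh the held value
        e = [r - m for r, m in zip(y_ref, last_meas)]
        u = [ui + gain * ei for ui, ei in zip(u, e)]
    return max(abs(r - 0.8 * ui) for r, ui in zip(y_ref, u))

if __name__ == "__main__":
    print(f"max tracking error after learning: {run_ilc():.4f}")
```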

Relevance: 30.00%

Abstract:

Nowadays, the simulation of network elements and of complete networks is an essential tool for telecommunications, helping in their dimensioning and analysis as well as in the study of problems and scenarios that may arise. One point of great interest for study is ARQ, and more specifically the Stop & Wait and Go Back N techniques. Hence the need arose within the NQAS research group to build a set of simulations of these techniques, with particular interest in collecting data related to their performance. The aim is to design and simulate a series of network scenarios, starting from simple modules with packet sending, switching, and reception functions, and scaling them gradually to extend their functionality until reaching the design and implementation of networks based on that architecture, whose links are covered by ARQ protocol instances (Stop & Wait, Go Back N). The simulation results will be processed by collecting statistics on the throughput and performance of the techniques.
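
A standalone sketch in the spirit of those simulations: Stop & Wait efficiency as a function of frame loss probability (the timing values are illustrative, and the project itself builds simulator modules rather than this toy loop):

```python
# Stop & Wait ARQ efficiency sketch: the sender transmits one frame and
# waits; a lost frame (or ACK) costs a timeout before retransmission.

import random

def stop_and_wait(frames=10_000, loss_prob=0.1,
                  tx_time=1.0, timeout=3.0, seed=0):
    """Returns efficiency = useful transmission time / total elapsed time."""
    rng = random.Random(seed)
    clock = 0.0
    for _ in range(frames):
        while True:
            if rng.random() > loss_prob:   # frame and ACK both got through
                clock += tx_time
                break
            clock += timeout               # loss: wait out the timer, retry
    return frames * tx_time / clock

if __name__ == "__main__":
    for p in (0.0, 0.1, 0.3):
        print(f"loss={p:.1f}: efficiency = {stop_and_wait(loss_prob=p):.3f}")
```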