867 results for workflow scheduling
Abstract:
Orthogonal frequency division multiple access (OFDMA) systems exploit multiuser diversity and frequency-selectivity to achieve high spectral efficiencies. However, they require considerable feedback for scheduling and rate adaptation, and are sensitive to feedback delays. We develop a comprehensive analysis of the OFDMA system throughput as a function of the feedback scheme, frequency-domain scheduler, and discrete rate adaptation rule in the presence of feedback delays. We analyze the popular best-n and threshold-based feedback schemes. We show that for both the greedy and round-robin schedulers, the throughput degradation, given a feedback delay, depends primarily on the fraction by which the feedback scheme reduces feedback, and not on the feedback scheme itself. Even small feedback delays at low vehicular speeds are shown to significantly degrade the throughput. We also show that optimizing the link adaptation thresholds as a function of the feedback delay can effectively counteract the detrimental effect of delays.
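As a toy illustration of the discrete rate adaptation rule discussed above, the Python sketch below maps a fed-back subchannel SNR to the highest rate whose threshold it meets. The threshold and rate values are placeholders, not the paper's; optimizing such thresholds as a function of the feedback delay is the countermeasure the abstract proposes.

    import bisect

    # Hypothetical SNR thresholds (dB) and the discrete rates (bits/s/Hz)
    # selected when the reported SNR meets each threshold; placeholder values.
    SNR_THRESHOLDS_DB = [0.0, 5.0, 10.0, 15.0, 20.0]
    RATES_BPS_PER_HZ = [0.5, 1.0, 2.0, 4.0, 6.0]

    def adapt_rate(reported_snr_db):
        """Return the largest rate whose threshold the reported SNR meets,
        or 0.0 (no transmission) if even the lowest threshold is not met."""
        idx = bisect.bisect_right(SNR_THRESHOLDS_DB, reported_snr_db) - 1
        return RATES_BPS_PER_HZ[idx] if idx >= 0 else 0.0

    # Example: a reported SNR of 12 dB selects the 10 dB threshold's rate.
    print(adapt_rate(12.0))   # -> 2.0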
Abstract:
We consider the problem of joint routing, scheduling, and power control in a multihop wireless network when the nodes have multiple antennas. We focus on exploiting the multiple degrees of freedom available at each transmitter and receiver due to the multiple antennas. Specifically, we use the multiple antennas at each node to form multiple-access and broadcast links in the network rather than just point-to-point links. We show that such a generic transmission model improves system performance significantly. Since the complexity of the resulting optimization problem is very high, we also develop efficient suboptimal solutions for joint routing, scheduling, and power control in this setup.
Abstract:
We consider a scheduler for the downlink of a wireless channel when only partial channel-state information is available at the scheduler. We characterize the network stability region and provide two throughput-optimal scheduling policies. We also derive a deterministic bound on the mean packet delay in the network. Finally, we provide a throughput-optimal policy for the network under QoS constraints when real-time and rate-guaranteed data traffic may be present.
Abstract:
The broadcast nature of the wireless medium jeopardizes secure transmissions. Cryptographic measures fail to ensure security when eavesdroppers have superior computational capability; security can, however, be guaranteed using information-theoretic approaches. We use physical-layer security to guarantee a non-zero secrecy rate in single-source, single-destination multi-hop networks with eavesdroppers, for two cases: when the eavesdroppers' locations and channel gains are known, and when their positions are unknown. For the case of known eavesdropper locations, we propose a two-phase solution that first finds activation sets and then obtains transmit powers subject to SINR constraints. We introduce methods to find activation sets and compare their performance. Necessary but reasonable approximations are made in the power-minimization formulations for tractability. For scenarios with no eavesdropper location information, we propose minimizing the vulnerability region (the area with zero secrecy rate) over the network. Our results show that, in the absence of location information, the average number of eavesdroppers with access to the data is reduced.
Abstract:
In wireless sensor networks (WSNs), communication traffic is often correlated in time and space, with multiple nodes in close proximity starting to transmit at the same time. Such a situation is known as spatially correlated contention. Random access methods for resolving such contention suffer from high collision rates, whereas traditional distributed TDMA scheduling techniques primarily try to improve network capacity by reducing the schedule length. Usually, spatially correlated contention persists only for a short duration, so generating an optimal or sub-optimal schedule is not very useful. On the other hand, if the algorithm takes a long time to compute a schedule, it not only introduces additional delay in the data transfer but also consumes more energy. To efficiently handle spatially correlated contention in WSNs, we present a distributed TDMA slot scheduling algorithm, called the DTSS algorithm. The DTSS algorithm is designed with the primary objective of reducing the time required to perform scheduling, while restricting the schedule length to the maximum degree of the interference graph. The algorithm uses randomized TDMA channel access as the mechanism to transmit protocol messages, which bounds the message delay and therefore reduces the time required to obtain a feasible schedule. The DTSS algorithm supports unicast, multicast, and broadcast scheduling simultaneously, without any modification to the protocol. The protocol has been simulated using the Castalia simulator to evaluate its runtime performance. Simulation results show that our protocol considerably reduces the time required for scheduling.
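The DTSS protocol itself is distributed and uses randomized TDMA channel access for its protocol messages; the Python sketch below only illustrates, in a centralized form, the invariant such a slot assignment maintains: every node receives a slot different from all of its interference neighbours, so a greedy assignment never needs more than (maximum degree of the interference graph) + 1 slots.

    def greedy_slot_assignment(interference_graph):
        """Centralized sketch (not the distributed DTSS algorithm): give each
        node the smallest TDMA slot unused by its interference neighbours.
        `interference_graph` maps each node to the set of its neighbours."""
        slots = {}
        for node in interference_graph:
            taken = {slots[nbr] for nbr in interference_graph[node] if nbr in slots}
            slot = 0
            while slot in taken:
                slot += 1
            slots[node] = slot
        return slots

    # Example: a four-node line topology a-b-c-d needs only two slots.
    print(greedy_slot_assignment({
        "a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"},
    }))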
Abstract:
We consider the problem of characterizing the minimum average delay, or equivalently the minimum average queue length, of message symbols randomly arriving at the transmitter queue of a point-to-point link which dynamically selects an (n, k) block code from a given collection. The system is modeled by a discrete-time queue with an IID batch arrival process and batch service. We obtain a lower bound on the minimum average queue length, which is the optimal value of a linear program, using only the mean (λ) and variance (σ²) of the batch arrivals. For a finite collection of (n, k) codes, the minimum achievable average queue length is shown to be Θ(1/ε) as ε ↓ 0, where ε is the difference between the maximum code rate and λ. We obtain a sufficient condition for code-rate selection policies to achieve this optimal growth rate. A simple family of policies that each use only one block code, as well as two other heuristic policies, are shown to be weakly optimal in the sense of achieving the 1/ε growth rate. An appropriate selection from the family of single-code policies is also shown to achieve the optimal coefficient σ²/2 of the 1/ε growth rate. We numerically compare the performance of the heuristic policies with the minimum achievable average queue length and the lower bound. For a countable collection of (n, k) codes, the optimal average queue length is shown to be Ω(1/ε). We illustrate how selective the growth-rate optimality criterion is among policies, for both finite and countable collections of (n, k) block codes.
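In the abstract's notation, the growth-rate result can be restated compactly (writing Q̄*(ε) for the minimum achievable average queue length over a finite collection of (n, k) codes):

    \[
      \bar{Q}^{*}(\epsilon) \,=\, \Theta\!\left(\frac{1}{\epsilon}\right)
      \quad\text{as } \epsilon \downarrow 0,
      \qquad\text{with optimal coefficient}\qquad
      \bar{Q}^{*}(\epsilon) \,\approx\, \frac{\sigma^{2}}{2\,\epsilon},
    \]

where ε is the gap between the maximum code rate and λ; for a countable collection of codes, only the lower bound Ω(1/ε) is established.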
Abstract:
We model the communication of bursty sources, 1) over multiaccess channels with either independent or joint decoding, and 2) over degraded broadcast channels, by a discrete-time multiclass processor-sharing queue. We use error exponents to characterize the processor-sharing queue. We analyze the processor-sharing queue model for the stable region of message arrival rates and show the existence of scheduling policies for which the stability region converges to the information-theoretic capacity region in an appropriate limiting sense.
Abstract:
We consider the problem of "fairly" scheduling resources to one of many mobile stations by a centrally controlled base station (BS). The BS is the only entity taking decisions in this framework, based on the information the mobiles report about their radio channels. We study the well-known family of parametric alpha-fair scheduling problems from a game-theoretic perspective in which some of the mobiles may be noncooperative. We first show that if the BS is unaware of the noncooperative behavior of the mobiles, the noncooperative mobiles succeed in snatching resources from the cooperative mobiles, resulting in unfair allocations. If the BS is aware of the noncooperative mobiles, a new game arises with the BS as an additional player. It can then do better by neglecting the signals from the noncooperative mobiles. The BS, however, succeeds in eliciting truthful signals from the mobiles only when it uses additional information (signal statistics). This new policy, along with the truthful signals from the mobiles, forms a Nash equilibrium (NE) that we call a Truth Revealing Equilibrium. Finally, we propose new iterative algorithms to implement fair scheduling policies that robustify the otherwise nonrobust (in the presence of noncooperation) alpha-fair scheduling algorithms.
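The parametric alpha-fair family referenced above has a standard form, sketched below in Python (the game-theoretic analysis itself is not reproduced; the gradient-style scheduling metric is the usual one for this family, and average throughputs are assumed positive):

    import math

    def alpha_fair_utility(x, alpha):
        """Standard alpha-fair utility: log(x) for alpha = 1, otherwise
        x**(1 - alpha) / (1 - alpha). alpha = 0 maximizes throughput,
        alpha = 1 is proportional fair, alpha -> infinity is max-min fair."""
        if alpha == 1.0:
            return math.log(x)
        return x ** (1.0 - alpha) / (1.0 - alpha)

    def gradient_schedule(reported_rates, avg_throughputs, alpha):
        """Serve the mobile maximizing r_i / (avg_i ** alpha), computed from
        the rates the mobiles report to the BS; a mobile that over-reports
        its rate inflates this metric, which is the unfairness studied above."""
        return max(range(len(reported_rates)),
                   key=lambda i: reported_rates[i] / (avg_throughputs[i] ** alpha))

    # Example: proportional-fair (alpha = 1) choice among three mobiles.
    print(gradient_schedule([1.0, 2.0, 0.5], [0.8, 2.5, 0.3], alpha=1.0))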
Abstract:
In this paper, we consider an intrusion detection application for Wireless Sensor Networks. We study the problem of scheduling the sleep times of the individual sensors, where the objective is to maximize the network lifetime while keeping the tracking error to a minimum. We formulate this problem as a partially observable Markov decision process (POMDP) with continuous state-action spaces, in a manner similar to Fuemmeler and Veeravalli (IEEE Trans Signal Process 56(5), 2091-2101, 2008). However, unlike their formulation, we consider infinite-horizon discounted and average-cost objectives as performance criteria. For each criterion, we propose a convergent on-policy Q-learning algorithm that operates on two timescales while employing function approximation. Feature-based representations and function approximation are necessary to handle the curse of dimensionality associated with the underlying POMDP. Our proposed algorithm incorporates a policy-gradient update using a one-simulation simultaneous perturbation stochastic approximation (SPSA) estimate on the faster timescale, while the Q-value parameter (arising from a linear function approximation architecture for the Q-values) is updated in an on-policy temporal-difference fashion on the slower timescale. The feature selection scheme employed in each of our algorithms manages the energy and tracking components in a manner that assists the search for the optimal sleep-scheduling policy. For comparison, in both the discounted and average settings, we also develop a function approximation analogue of the Q-learning algorithm. This algorithm, unlike the two-timescale variant, does not possess theoretical convergence guarantees. Finally, we adapt our algorithms to include a stochastic iterative estimation scheme for the intruder's mobility model, which is useful in settings where the latter is not known. Our simulation results on a synthetic two-dimensional network setting suggest that our algorithms achieve better tracking accuracy at the cost of only a few additional sensors, in comparison to a recent prior work.
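The two-timescale structure described above can be sketched in a toy form (this is an illustration, not the paper's algorithm): a one-simulation SPSA estimate drives the policy-parameter update on the faster timescale, while a linear Q-value parameter is updated with a TD(0)-style rule on the slower timescale. Here evaluate() is a hypothetical stand-in for a single simulated performance measurement of the sleep-scheduling policy.

    import numpy as np

    def two_timescale_step(theta, w, phi, reward, phi_next, evaluate,
                           a_fast, b_slow, gamma=0.95, delta=0.1):
        """theta: policy parameters; w: linear Q-value parameters;
        phi, phi_next: feature vectors of the current and next state."""
        # Slower timescale: on-policy TD(0)-style update of the linear Q values.
        td_error = reward + gamma * np.dot(w, phi_next) - np.dot(w, phi)
        w = w + b_slow * td_error * phi
        # Faster timescale: one-simulation SPSA gradient estimate, obtained
        # from a single evaluation at a randomly perturbed policy parameter.
        d = np.random.choice([-1.0, 1.0], size=theta.shape)
        grad_estimate = evaluate(theta + delta * d) / (delta * d)
        theta = theta + a_fast * grad_estimate
        return theta, w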
Abstract:
In WSNs, communication traffic is often correlated in time and space, with multiple nodes in close proximity starting to transmit simultaneously. Such a situation is known as spatially correlated contention. Random access methods for resolving such contention suffer from high collision rates, whereas traditional distributed TDMA scheduling techniques primarily try to improve network capacity by reducing the schedule length. Usually, spatially correlated contention persists only for a short duration, and therefore generating an optimal or suboptimal schedule is not very useful. Additionally, if an algorithm takes a long time to compute a schedule, it not only introduces additional delay in the data transfer but also consumes more energy. In this paper, we present a distributed TDMA slot scheduling (DTSS) algorithm, which considerably reduces the time required to perform scheduling while restricting the schedule length to the maximum degree of the interference graph. The DTSS algorithm supports unicast, multicast, and broadcast scheduling simultaneously, without any modification to the protocol. We have analyzed the protocol for average-case performance and also simulated it using the Castalia simulator to evaluate its runtime performance. Both the analytical and simulation results show that our protocol considerably reduces the time required for scheduling.
Abstract:
We consider optimal power allocation policies for a single-server, multiuser system. Power is consumed only in the transmission of data. The transmission channel may experience multipath fading. We obtain very efficient, low-computational-complexity algorithms that minimize power and ensure stability of the data queues. We also obtain policies for the case in which users have mean-delay constraints. If the required power is a linear function of the rate, we exploit this linearity and obtain low-complexity linear programs.
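When the required power is linear in the rate, the resulting optimization is a small linear program; the sketch below (not the paper's formulation; all numbers are illustrative) uses scipy to minimize the total power p_i = a_i * r_i subject to per-user stability (r_i >= lambda_i) and a single-server rate budget.

    import numpy as np
    from scipy.optimize import linprog

    a = np.array([1.0, 2.0, 1.5])    # power per unit rate for each user (illustrative)
    lam = np.array([0.2, 0.3, 0.1])  # mean arrival rates; stability needs r_i >= lam_i
    rate_budget = 1.0                # total service rate the single server can offer

    res = linprog(
        c=a,                                          # minimize sum_i a_i * r_i
        A_ub=np.ones((1, 3)), b_ub=[rate_budget],     # sum_i r_i <= rate_budget
        bounds=list(zip(lam, [None] * 3)),            # r_i >= lambda_i
    )
    print(res.x)   # optimal service rates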
Abstract:
Contemporary cellular standards, such as Long Term Evolution (LTE) and LTE-Advanced, employ orthogonal frequency-division multiplexing (OFDM) and use frequency-domain scheduling and rate adaptation. In conjunction with feedback reduction schemes, high downlink spectral efficiencies are achieved while limiting the uplink feedback overhead. One such important scheme that has been adopted by these standards is best-m feedback, in which every user feeds back its m largest subchannel (SC) power gains and their corresponding indices. We analyze the single-cell average throughput of an OFDM system with uniformly correlated SC gains that employs best-m feedback and discrete rate adaptation. Our model incorporates feedback delay and three schedulers that cover a wide range of the throughput-versus-fairness tradeoff. We show that, for small m, correlation significantly reduces the average throughput with best-m feedback. This result is pertinent because correlation is high even in typical dispersive channels. We observe that the schedulers exhibit varied sensitivities to correlation and feedback delay. The analysis also leads to insightful expressions for the average throughput in the asymptotic regime of a large number of users.
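The best-m feedback rule itself is simple to state in code; the sketch below (using numpy) returns the m largest subchannel power gains of a user together with their indices, which is exactly the per-user report described above.

    import numpy as np

    def best_m_feedback(subchannel_gains, m):
        """Return the indices and values of the m largest subchannel (SC)
        power gains, i.e., what a user feeds back under best-m feedback."""
        gains = np.asarray(subchannel_gains)
        top = np.argsort(gains)[-m:][::-1]   # indices of the m largest gains
        return top, gains[top]

    # Example: a user with 8 SCs reporting its best 2.
    print(best_m_feedback([0.3, 1.2, 0.7, 2.1, 0.5, 0.9, 1.8, 0.4], m=2))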
Abstract:
The correctness of a hard real-time system depends on its ability to meet all its deadlines. Existing real-time systems use either a pure real-time scheduler or a real-time scheduler embedded as a real-time scheduling class in the scheduler of an operating system (OS). Existing scheduler implementations in multicore systems that support both real-time and non-real-time tasks permit the execution of non-real-time tasks on all cores with priorities lower than those of real-time tasks, but interrupts and softirqs associated with these non-real-time tasks can execute on any core with priorities higher than those of real-time tasks. As a result, the execution overhead of real-time tasks is quite large in these systems, which in turn affects their runtime. So that hard real-time tasks can be executed in such systems with minimal interference from other Linux tasks, we propose in this paper an integrated scheduler architecture, called SchedISA, which aims to considerably reduce the execution overhead of real-time tasks. To test the efficacy of the proposed scheduler, we implemented the partitioned earliest deadline first (P-EDF) scheduling algorithm in SchedISA on Linux kernel version 3.8 and conducted experiments on an Intel Core i7 processor with eight logical cores. We compared the execution overhead of real-time tasks in this implementation of SchedISA with that in SCHED_DEADLINE's P-EDF implementation, which executes real-time and non-real-time tasks concurrently on all cores in the Linux OS. The experimental results show that the execution overhead of real-time tasks in this implementation of SchedISA is considerably less than that in SCHED_DEADLINE. We believe that, with further refinement of SchedISA, the execution overhead of real-time tasks can be reduced to a predictable maximum, making it suitable for scheduling hard real-time tasks without affecting the CPU share of Linux tasks.
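The dispatching rule behind P-EDF is straightforward and is sketched below for illustration (this is not the SchedISA or SCHED_DEADLINE code): on each core, the ready real-time task of that core's partition with the earliest absolute deadline runs next.

    def edf_pick(ready_tasks):
        """ready_tasks: list of (task_id, absolute_deadline) pairs for one
        core's partition. Return the task with the earliest absolute deadline,
        or None if the core has no ready real-time task."""
        return min(ready_tasks, key=lambda task: task[1], default=None)

    # Example: task "b" has the earliest deadline and is dispatched first.
    print(edf_pick([("a", 30), ("b", 12), ("c", 25)]))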
Abstract:
We consider a server serving a time-slotted queueing system of multiple packet-based flows, where at most one flow can be serviced in a single time slot. The flows have exogenous packet arrivals and time-varying service rates. At each time, the server can observe the instantaneous service rates of only a subset of flows (selected from a fixed collection of observable subsets) before scheduling a flow in that subset for service. We are interested in queue-length-aware scheduling that keeps the queues short. The limited availability of instantaneous service rate information requires the scheduler to make a careful choice of which subset of service rates to sample. We develop scheduling algorithms that use only partial service rate information from subsets of channels and that minimize the likelihood of queue overflow in the system. Specifically, we present a new joint subset-sampling and scheduling algorithm called Max-Exp that uses only the current queue lengths to pick a subset of flows, and subsequently schedules a flow using the Exponential rule. When the collection of observable subsets is disjoint, we show that Max-Exp achieves the best exponential decay rate of the tail of the longest queue, among all scheduling algorithms that base their decisions on the current (or any finite past history of the) system state. To accomplish this, we employ novel analytical techniques for studying the performance of scheduling algorithms using partial state, which may be of independent interest. These include new sample-path large deviations results for processes obtained by non-random, predictable sampling of sequences of independent and identically distributed random variables. A consequence of these results is that scheduling with partial state information yields a rate function significantly different from scheduling with full channel information. In the special case when the observable subsets are singleton flows, i.e., when there is effectively no a priori channel state information, Max-Exp reduces to simply serving the flow with the longest queue; thus, our results show that always serving the longest queue in the absence of any channel state information is large-deviations optimal.
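The structure of Max-Exp can be sketched as follows (an illustration only: the subset-picking criterion shown here and the exact parameterization of the Exponential rule are stand-ins, not necessarily the paper's choices). Each observable subset is a list of flow indices, and observe_rates is a hypothetical callback returning the instantaneous rates of the sampled subset.

    import math

    def exponential_rule(rates, queues, eta=0.5, beta=1.0):
        """One common form of the Exponential rule (parameters illustrative):
        serve the flow maximizing r_i * exp(q_i / (beta + qbar**eta)),
        where qbar is the mean queue length."""
        qbar = sum(queues) / len(queues)
        return max(range(len(rates)),
                   key=lambda i: rates[i] * math.exp(queues[i] / (beta + qbar ** eta)))

    def max_exp(observable_subsets, queues, observe_rates):
        """Pick a subset using only the current queue lengths (here: the subset
        containing the longest queue), observe its rates, then apply the
        Exponential rule within that subset."""
        subset = max(observable_subsets, key=lambda s: max(queues[i] for i in s))
        rates = observe_rates(subset)    # instantaneous rates, sampled subset only
        local = exponential_rule([rates[i] for i in subset],
                                 [queues[i] for i in subset])
        return subset[local]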
Abstract:
In this paper, we design a new dynamic packet scheduling scheme suitable for differentiated services (DiffServ) networks. The proposed dynamic benefit weighted scheduling (DBWS) scheme uses a dynamic weight computation loosely based on the weighted round-robin (WRR) policy. It predicts the weight required by the expedited forwarding (EF) service for the current time slot t based on two criteria: (i) the weight allocated to it at time t-1, and (ii) the average increase in the queue length of the EF buffer. This prediction provides smooth bandwidth allocation to all services by avoiding overbooking of resources for the EF service while still providing it guaranteed service. The performance is analyzed for various scenarios under high, medium, and low traffic conditions. The results show that packet loss and end-to-end delay are minimized and jitter is reduced, thereby meeting the quality-of-service (QoS) requirements of the network.
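The weight-prediction idea can be sketched as follows (the exact prediction formula and the scale factor are illustrative; the abstract only states that the EF weight for slot t is derived from the weight at t-1 and the average increase in the EF queue length):

    def predict_ef_weight(prev_weight, ef_queue_history, total_weight, scale=1.0):
        """Predict the expedited forwarding (EF) weight for the current slot
        from (i) the weight allocated at t-1 and (ii) the average increase in
        the EF queue length, capped so the other classes keep some share."""
        increments = [b - a for a, b in zip(ef_queue_history, ef_queue_history[1:])]
        avg_increase = sum(increments) / len(increments) if increments else 0.0
        weight = prev_weight + scale * avg_increase
        return max(0.0, min(weight, total_weight))   # avoid overbooking for EF

    # Example: the EF queue grew by 2 packets per slot on average.
    print(predict_ef_weight(prev_weight=5.0,
                            ef_queue_history=[10, 13, 12, 16],
                            total_weight=12.0))   # -> 7.0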