957 results for load-balancing scheduling
Abstract:
In this paper, we consider an intrusion detection application for Wireless Sensor Networks. We study the problem of scheduling the sleep times of the individual sensors, where the objective is to maximize the network lifetime while keeping the tracking error to a minimum. We formulate this problem as a partially observable Markov decision process (POMDP) with continuous state-action spaces, in a manner similar to Fuemmeler and Veeravalli (IEEE Trans Signal Process 56(5), 2091-2101, 2008). However, unlike their formulation, we consider infinite-horizon discounted and average cost objectives as performance criteria. For each criterion, we propose a convergent on-policy Q-learning algorithm that operates on two timescales, while employing function approximation. Feature-based representations and function approximation are necessary to handle the curse of dimensionality associated with the underlying POMDP. Our proposed algorithm incorporates a policy gradient update using a one-simulation simultaneous perturbation stochastic approximation (SPSA) estimate on the faster timescale, while the Q-value parameter (arising from a linear function approximation architecture for the Q-values) is updated in an on-policy temporal difference fashion on the slower timescale. The feature selection scheme employed in each of our algorithms manages the energy and tracking components in a manner that assists the search for the optimal sleep-scheduling policy. For the sake of comparison, in both discounted and average settings, we also develop a function approximation analogue of the Q-learning algorithm. This algorithm, unlike the two-timescale variant, does not possess theoretical convergence guarantees. Finally, we adapt our algorithms to include a stochastic iterative estimation scheme for the intruder's mobility model, which is useful in settings where the latter is not known. Our simulation results on a synthetic 2-dimensional network setting suggest that our algorithms achieve better tracking accuracy at the cost of only a few additional sensors, in comparison to a recent prior work.
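A minimal sketch of the two-timescale structure described above; the function names, step sizes, and the simplified one-simulation SPSA form are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def two_timescale_step(theta, w, phi, reward, phi_next,
                       delta=0.1, a_fast=0.01, b_slow=0.001, gamma=0.95):
    """One iteration: SPSA policy-gradient step (fast timescale) and a
    TD(0)-like update of the linear Q-value parameter w (slow timescale)."""
    # Fast timescale: one-simulation SPSA gradient estimate of the cost,
    # using a Rademacher perturbation of the policy parameter theta.
    perturb = rng.choice([-1.0, 1.0], size=theta.shape)
    q_hat = w @ phi                                     # critic's cost estimate
    theta = theta - a_fast * q_hat / (delta * perturb)  # descend the SPSA estimate
    # Slow timescale: on-policy temporal-difference update of w.
    td_error = reward + gamma * (w @ phi_next) - w @ phi
    w = w + b_slow * td_error * phi
    return theta, w
```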
Abstract:
A new generator topology for microhydel power plants, capable of unsupervised operation, is proposed. While conventional microhydel plants operate at constant speed with switched ballast loads, the proposed generator, based on the wound rotor induction machine, operates at variable speed and does away with the need for ballast loads. This increases reliability and substantially decreases system costs and setup times. The proposed generator has a simplified decoupled control structure with stator-referenced voltage control similar to a conventional synchronous generator, and rotor-side frequency control that is facilitated by rotating electronics mounted on the rotor. While this paper describes an isolated plant, the topology can also be tailored for distributed generation enabling conversion of the available hydraulic power into useful electrical power when the grid is present, and supplying local loads in the event of grid outage.
Abstract:
In WSNs, communication traffic is often correlated in time and space, with multiple nodes in close proximity starting to transmit simultaneously. Such a situation is known as spatially correlated contention. Random access methods for resolving such contention suffer from a high collision rate, whereas traditional distributed TDMA scheduling techniques primarily try to improve network capacity by reducing the schedule length. The situation of spatially correlated contention usually persists only for a short duration, so generating an optimal or suboptimal schedule is not very useful. Additionally, if an algorithm takes a very long time to compute a schedule, it not only introduces additional delay in the data transfer but also consumes more energy. In this paper, we present a distributed TDMA slot scheduling (DTSS) algorithm that considerably reduces the time required to perform scheduling, while restricting the schedule length to the maximum degree of the interference graph. The DTSS algorithm supports unicast, multicast, and broadcast scheduling simultaneously, without any modification to the protocol. We have analyzed the protocol's average-case performance and also simulated it using the Castalia simulator to evaluate its runtime performance. Both analytical and simulation results show that our protocol considerably reduces the time required for scheduling.
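DTSS itself is distributed; the following centralized greedy colouring (with hypothetical names and a toy network) is only a sketch of why a slot schedule bounded by the maximum degree of the interference graph always exists:

```python
def greedy_slot_assignment(interference):
    """Greedy slot assignment on an interference graph (dict: node -> set of
    interfering nodes). Each node takes the smallest slot unused by its
    neighbours, so the schedule length never exceeds max degree + 1."""
    slots = {}
    for node in sorted(interference):
        taken = {slots[n] for n in interference[node] if n in slots}
        slot = 0
        while slot in taken:
            slot += 1
        slots[node] = slot
    return slots

# Toy 4-node network.
g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(greedy_slot_assignment(g))   # {0: 0, 1: 1, 2: 2, 3: 0}
```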
Abstract:
We consider optimal power allocation policies for a single-server, multiuser system. Power is consumed in the transmission of data only. The transmission channel may experience multipath fading. We obtain very efficient, low-complexity algorithms that minimize power and ensure stability of the data queues. We also obtain policies for the case where users have mean delay constraints. If the power required is a linear function of the rate, we exploit this linearity to obtain low-complexity linear programs.
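Under the linear power-rate assumption mentioned above, the allocation reduces to a small linear program; a sketch with hypothetical numbers, using scipy:

```python
import numpy as np
from scipy.optimize import linprog

# Per-user power cost per unit rate (the linear power-rate assumption).
a = np.array([1.0, 2.5, 1.8])      # hypothetical channel-dependent coefficients
d = np.array([0.5, 0.3, 0.8])      # minimum rate each queue needs for stability
C = 2.0                            # total service rate the server can offer

res = linprog(c=a,                                   # minimise total transmit power
              A_ub=np.ones((1, len(a))), b_ub=[C],   # all rates share one server
              bounds=[(di, None) for di in d])       # per-user rate floors
print(res.x, res.fun)              # optimal rates and the minimum power
```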
Abstract:
India's energy demand is increasing rapidly with the intensive growth of its economy. Electricity demand in India exceeds availability, both in terms of base-load energy and peak availability. Efficient use of energy sources and their conversion and utilization are the viable alternatives available to utilities and industry. There are essentially two approaches to electrical energy management: one at the supply/utility end (Supply Side Management, or SSM) and the other at the consumer end (Demand Side Management, or DSM). This work is based on Supply Side Management and consists of the design, fabrication, and testing of a control device that automatically regulates the power flow to an individual consumer's premises. The device can detect overuse of electricity (above the connected load or contracted demand) by individual consumers. The present work places particular emphasis on each consumer's contracted demand and aims to curb use beyond it. The control unit comprises both software and hardware and is designed for a 0.5 kW contracted demand. The device has been tested in the laboratory, and the results demonstrate its potential for use in the field.
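A minimal sketch of the kind of supervisory logic such a device could run; the sensor and relay hooks are hypothetical stand-ins, not the paper's actual hardware design:

```python
import random
import time

CONTRACT_DEMAND_W = 500.0   # 0.5 kW contracted demand, as in the prototype

def read_power_w():
    """Stand-in for the metering hardware: instantaneous load in watts."""
    return random.uniform(300.0, 700.0)

def open_relay():
    """Stand-in for the relay driver that interrupts the supply."""
    print("relay opened: demand above contract")

def supervise(cycles=10, grace_cycles=3):
    """Trip the supply if demand stays above contract for a few cycles."""
    over = 0
    for _ in range(cycles):
        over = over + 1 if read_power_w() > CONTRACT_DEMAND_W else 0
        if over >= grace_cycles:
            open_relay()
            over = 0
        time.sleep(0.1)

supervise()
```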
Abstract:
Contemporary cellular standards, such as Long Term Evolution (LTE) and LTE-Advanced, employ orthogonal frequency-division multiplexing (OFDM) and use frequency-domain scheduling and rate adaptation. In conjunction with feedback reduction schemes, high downlink spectral efficiencies are achieved while limiting the uplink feedback overhead. One such important scheme that has been adopted by these standards is best-m feedback, in which every user feeds back its m largest subchannel (SC) power gains and their corresponding indices. We analyze the single-cell average throughput of an OFDM system with uniformly correlated SC gains that employs best-m feedback and discrete rate adaptation. Our model incorporates feedback delay and three schedulers that cover a wide range of the throughput-versus-fairness tradeoff. We show that, for small m, correlation significantly reduces the average throughput with best-m feedback. This result is pertinent because correlation is high even in typical dispersive channels. We observe that the schedulers exhibit varied sensitivities to correlation and feedback delay. The analysis also leads to insightful expressions for the average throughput in the asymptotic regime of a large number of users.
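For concreteness, best-m feedback amounts to each user reporting only its m strongest subchannels; a small sketch in which the exponential fading model is an illustrative assumption:

```python
import numpy as np

def best_m_report(sc_gains, m):
    """Each user reports its m largest subchannel power gains plus indices."""
    idx = np.argsort(sc_gains)[-m:][::-1]   # indices of the m strongest SCs
    return list(zip(idx.tolist(), sc_gains[idx].tolist()))

rng = np.random.default_rng(0)
gains = rng.exponential(size=12)             # Rayleigh-fading SC power gains
print(best_m_report(gains, m=3))             # (index, gain) pairs fed back
```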
Abstract:
The correctness of a hard real-time system depends on its ability to meet all its deadlines. Existing real-time systems use either a pure real-time scheduler or a real-time scheduler embedded as a scheduling class in the scheduler of an operating system (OS). In existing multicore implementations that support both real-time and non-real-time tasks, non-real-time tasks may execute on all cores with priorities lower than those of real-time tasks, but interrupts and softirqs associated with these non-real-time tasks can execute on any core with priorities higher than those of real-time tasks. As a result, the execution overhead of real-time tasks is quite large in these systems, which in turn affects their runtime. So that hard real-time tasks can execute in such systems with minimal interference from other Linux tasks, we propose in this paper an integrated scheduler architecture, called SchedISA, which aims to considerably reduce the execution overhead of real-time tasks. To test the efficacy of the proposed scheduler, we implemented the partitioned earliest deadline first (P-EDF) scheduling algorithm in SchedISA on Linux kernel version 3.8 and conducted experiments on an Intel Core i7 processor with eight logical cores. We compared the execution overhead of real-time tasks in this implementation of SchedISA with that of SCHED_DEADLINE's P-EDF implementation, which concurrently executes real-time and non-real-time tasks on all cores of the Linux OS. The experimental results show that the execution overhead of real-time tasks in SchedISA is considerably less than that in SCHED_DEADLINE. We believe that, with further refinement, the execution overhead of real-time tasks in SchedISA can be bounded by a predictable maximum, making it suitable for scheduling hard real-time tasks without affecting the CPU share of Linux tasks.
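As background, P-EDF first partitions tasks to cores and then runs EDF independently per core; a sketch of one common partitioning pass, where the worst-fit heuristic and the task set are illustrative rather than SchedISA's code:

```python
def partition_worst_fit(tasks, n_cores):
    """Assign (wcet, period) tasks to cores by worst-fit decreasing
    utilisation; EDF then runs independently on each core (P-EDF)."""
    loads = [0.0] * n_cores
    cores = [[] for _ in range(n_cores)]
    for wcet, period in sorted(tasks, key=lambda t: -t[0] / t[1]):
        c = loads.index(min(loads))              # least-loaded core first
        if loads[c] + wcet / period > 1.0:
            raise ValueError("task set not partitionable")
        loads[c] += wcet / period
        cores[c].append((wcet, period))
    return cores

print(partition_worst_fit([(1, 4), (2, 5), (1, 10), (3, 8)], n_cores=2))
```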
Abstract:
A multilevel inverter that generates 17 voltage levels using a three-level flying-capacitor inverter and cascaded H-bridge modules with floating capacitors is proposed. Various aspects of the proposed inverter, such as capacitor voltage balancing, are presented in this paper, and experimental results are included to study the performance of the proposed converter. The stability of the capacitor-balancing algorithm has been verified during both transients and steady-state operation. All the capacitors in this circuit can be balanced instantaneously by using one of the pole-voltage combinations. Another advantage of this topology is its ability to generate all the voltage levels from a single DC-link power supply, which enables back-to-back operation of the converter. The proposed inverter can also be operated at all load power factors and modulation indices. A further advantage is that, if one of the H-bridges fails, the inverter can still be operated at full load with a reduced number of levels. This configuration has very low dv/dt and common-mode voltage variation.
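The instantaneous-balancing claim rests on redundant pole-voltage combinations that produce the same output level but different capacitor currents; a hedged sketch of choosing among such redundant states, with purely hypothetical state data:

```python
def pick_switching_state(states, vc, vc_ref):
    """Among redundant pole-voltage combinations producing the same output
    level, pick the one whose predicted effect drives the capacitor
    voltages toward their references."""
    def cost(state):
        # state["dv"]: predicted per-capacitor voltage change for this combination
        return sum((v + dv - vr) ** 2
                   for v, dv, vr in zip(vc, state["dv"], vc_ref))
    return min(states, key=cost)

# Two hypothetical redundant states for the same output level.
states = [{"name": "A", "dv": (+0.1, -0.1)}, {"name": "B", "dv": (-0.1, +0.1)}]
print(pick_switching_state(states, vc=(99.5, 100.4), vc_ref=(100.0, 100.0))["name"])
```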
Abstract:
A block-structured adaptive mesh refinement (AMR) technique has been used to obtain numerical solutions for many scientific applications. Some block-structured AMR approaches have focused on forming patches of non-uniform sizes, where the size of a patch can be tuned to the geometry of a region of interest. In this paper, we develop strategies for adaptive execution of block-structured AMR applications on GPUs, for hyperbolic directionally split solvers. While effective hybrid execution strategies exist for applications with uniform patches, our work considers efficient execution of non-uniform patches with different workloads. Our techniques include bin-packing work units to load balance GPU computations, adaptive asynchronism between CPU and GPU executions using a knapsack formulation, and scheduling communications for multi-GPU executions. Our experiments with synthetic and real data, for single-GPU and multi-GPU executions, on Tesla S1070 and Fermi C2070 clusters, show that our strategies result in up to a 3.23x speedup over existing strategies.
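The bin-packing step can be illustrated with a classic first-fit-decreasing pass over patch workloads; this is a generic sketch, not the paper's exact load-balancing code:

```python
def first_fit_decreasing(work_units, capacity):
    """Pack patch workloads (e.g. cell counts) into bins of a fixed GPU
    kernel-launch capacity; FFD keeps per-launch work roughly balanced."""
    bins = []
    for w in sorted(work_units, reverse=True):
        for b in bins:
            if sum(b) + w <= capacity:
                b.append(w)
                break
        else:
            bins.append([w])       # no bin fits: open a new one
    return bins

print(first_fit_decreasing([70, 20, 50, 30, 10, 40], capacity=100))
# -> [[70, 30], [50, 40, 10], [20]]
```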
Abstract:
In this paper, we present the design and development of a portable, hand-operated composite compliant mechanism for estimating the failure load of cm-sized stiff objects whose stiffness is of the order of tens of kN/m. The motivation for the design comes from the need to estimate the failure load of mesoscale cemented sand specimens in situ, which is not possible with traditional devices intended for large or very small specimens. The composite compliant device developed in this work consists of two compliant mechanisms: a force-amplifying compliant mechanism (FaCM) to sufficiently amplify the force exerted by hand in order to break the specimen, and a displacement-amplifying compliant mechanism (DaCM) to enable measurement of the force using a proximity sensor. The two mechanisms are designed using the selection-maps technique to amplify the force up to 100 N by about a factor of 3 and to measure the force with a resolution of 15 mN. The composite device, made using a FaCM, a DaCM, and a Hall-effect-based proximity sensor, was tested on mesoscale cemented sand specimens 10 mm in diameter and 20 mm in length, and the results were compared with those of a large commercial instrument. Through the experiments, it was observed that the failure load of the cemented sand specimens varied from 0.95 N to 24.33 N, depending on the percentage of cementation and the curing period. The failure load estimated using the compliant device was found to be within 1.7% of the measurements obtained using the commercial instrument, thus validating the design. The details of the design, prototyping, specimen preparation, testing, and the results comprise the paper.
Abstract:
A new successive-displacement-type load flow method is developed in this paper. The algorithm differs from the conventional Y-bus-based Gauss-Seidel load flow in that the voltage at each bus is updated in every iteration based on the exact solution of the power balance equation at that node, instead of the approximate solution used by the Gauss-Seidel method. It turns out that this modified implementation translates into only a marginal improvement in convergence behaviour for load flow solutions of interconnected systems. However, we demonstrate that the new approach can be adapted, with some additional refinements, into an effective load flow solution technique for radial systems. Numerical results for a number of systems, both interconnected and radial, are provided to validate the proposed approach.
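One way to read the "exact solution of the power balance equation at that node" is to converge the bus-i fixed point with all other voltages frozen; a sketch under that assumption (the conventional Gauss-Seidel method performs only one step of this loop per sweep):

```python
import numpy as np

def local_power_balance_solve(V, Y, S, i, tol=1e-12, max_iter=100):
    """Solve the bus-i power balance equation with all other bus voltages
    held fixed; Gauss-Seidel would perform just one iteration of this loop."""
    c = Y[i] @ V - Y[i, i] * V[i]   # injection from all other buses (fixed)
    v = V[i]
    for _ in range(max_iter):
        v_new = (np.conj(S[i]) / np.conj(v) - c) / Y[i, i]
        if abs(v_new - v) < tol:
            break
        v = v_new
    return v

Y = np.array([[2 - 4j, -2 + 4j],
              [-2 + 4j, 2 - 4j]])      # toy 2-bus admittance matrix (p.u.)
V = np.array([1.0 + 0j, 1.0 + 0j])
S = np.array([0.0 + 0j, -0.5 - 0.2j])  # bus 2 draws 0.5 + j0.2 p.u.
print(local_power_balance_solve(V, Y, S, 1))
```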
Abstract:
We consider a server serving a time-slotted queued system of multiple packet-based flows, where not more than one flow can be serviced in a single time slot. The flows have exogenous packet arrivals and time-varying service rates. At each time, the server can observe instantaneous service rates for only a subset of flows (selected from a fixed collection of observable subsets) before scheduling a flow in the subset for service. We are interested in queue-length-aware scheduling to keep the queues short. The limited availability of instantaneous service rate information requires the scheduler to make a careful choice of which subset of service rates to sample. We develop scheduling algorithms that use only partial service rate information from subsets of channels, and that minimize the likelihood of queue overflow in the system. Specifically, we present a new joint subset-sampling and scheduling algorithm called Max-Exp that uses only the current queue lengths to pick a subset of flows, and subsequently schedules a flow using the Exponential rule. When the collection of observable subsets is disjoint, we show that Max-Exp achieves the best exponential decay rate, among all scheduling algorithms that base their decision on the current (or any finite past history of) system state, of the tail of the longest queue. To accomplish this, we employ novel analytical techniques for studying the performance of scheduling algorithms using partial state, which may be of independent interest. These include new sample-path large deviations results for processes obtained by non-random, predictable sampling of sequences of independent and identically distributed random variables. A consequence of these results is that scheduling with partial state information yields a rate function significantly different from scheduling with full channel information. In the special case when the observable subsets are singleton flows, i.e., when there is effectively no a priori channel state information, Max-Exp reduces to simply serving the flow with the longest queue; thus, our results show that to always serve the longest queue in the absence of any channel state information is large-deviations optimal.
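A rough sketch of the two-stage structure of Max-Exp: pick a subset from queue lengths alone, observe only that subset's rates, then apply the Exponential rule within it. The subset-selection heuristic and the Exp-rule constants below are simplifying assumptions, not the paper's exact rule:

```python
import math
import random

def max_exp(queues, subsets, observe_rates, eta=0.5):
    """Two-stage decision: (1) choose an observable subset using queue
    lengths only, (2) sample that subset's rates and serve by the Exp rule."""
    subset = max(subsets, key=lambda s: max(queues[i] for i in s))
    rates = observe_rates(subset)      # rates are revealed only for this subset
    qbar = sum(queues[i] for i in subset) / len(subset)
    def exp_index(i):                  # Exponential rule: rate times queue boost
        return rates[i] * math.exp((queues[i] - qbar) / (1.0 + qbar) ** eta)
    return max(subset, key=exp_index)

q = {0: 5, 1: 2, 2: 9, 3: 1}
subsets = [{0, 1}, {2, 3}]
print(max_exp(q, subsets, lambda s: {i: random.random() for i in s}))
```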
Abstract:
In India, the low prevalence of HIV-associated dementia (HAD) in Human immunodeficiency virus type 1 (HIV-1) subtype C infection is quite paradoxical given the high rate of macrophage infiltration into the brain. Whether the direct viral burden in individual brain compartments could be associated with the variability of the neurologic manifestations is controversial. To understand this paradox, we examined the proviral DNA load in nine different brain regions and three different peripheral tissues derived from ten human subjects at autopsy. Using a highly sensitive TaqMan probe-based real-time PCR, we determined the proviral load in multiple samples processed in parallel from each site. Unlike previously published reports, the present analysis identified uniform proviral distribution among the brain compartments examined, without preferential accumulation of the DNA in any one of them. The overall viral DNA burden in the brain tissues was very low, approximately 1 viral integration per 1000 cells or less. In a subset of the tissue samples tested, the HIV DNA mostly existed in a free unintegrated form. The V3-V5 envelope sequences demonstrated brain-specific compartmentalization in four of the ten subjects and a phylogenetic overlap between the neural and non-neural compartments in three other subjects. The envelope sequences phylogenetically belonged to subtype C, and the majority of them were R5-tropic. To the best of our knowledge, the present study represents the first analysis of the proviral burden in subtype C postmortem human brain tissues. Future studies should determine the presence of the viral antigens, the viral transcripts, and the proviral DNA, in parallel, in different brain compartments to shed more light on the significance of the viral burden on neurologic consequences of HIV infection.
Abstract:
In this paper, we design a new dynamic packet scheduling scheme suitable for differentiated services (DiffServ) networks. The proposed dynamic benefit-weighted scheduling (DBWS) scheme uses a dynamic weight computation loosely based on the weighted round robin (WRR) policy. It predicts the weight required by the expedited forwarding (EF) service for the current time slot t based on two criteria: (i) the weight allocated to it at time t-1, and (ii) the average increase in the queue length of the EF buffer. This prediction provides smooth bandwidth allocation to all services by avoiding overbooking of resources for the EF service while still providing it guaranteed service. The performance is analyzed for various scenarios under high, medium, and low traffic conditions. The results show that packet loss and end-to-end delay are minimized and jitter is reduced, thereby meeting the quality-of-service (QoS) requirements of a network.
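A sketch of the two-criteria weight prediction described above; the exact DBWS formula is in the paper, so the update rule and the clamping bounds here are illustrative assumptions:

```python
def dbws_ef_weight(prev_weight, ef_queue_history, w_min=1, w_max=8):
    """Predict the EF weight for slot t from (i) the weight at t-1 and
    (ii) the average growth of the EF queue, clamped so that other
    traffic classes are never starved."""
    deltas = [b - a for a, b in zip(ef_queue_history, ef_queue_history[1:])]
    avg_growth = sum(deltas) / len(deltas) if deltas else 0
    w = prev_weight + avg_growth          # grow weight with queue pressure
    return max(w_min, min(w_max, round(w)))

print(dbws_ef_weight(3, [10, 14, 19]))    # rising EF queue -> weight grows to 8
```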
Investigation of schemes for incorporating generator Q limits in the fast decoupled load flow method
Abstract:
Fast Decoupled Load Flow (FDLF) is a very popular and widely used power flow analysis method because of its simplicity and efficiency. Although the basic FDLF algorithm is well investigated, the same is not true of the additional schemes/modifications required to obtain adjusted load flow solutions using the FDLF method. Handling generator Q limits is one such important feature needed in any practical load flow method. This paper presents a comprehensive investigation of two classes of schemes intended to handle this aspect: the bus-type switching scheme and the sensitivity scheme. We propose two new sensitivity-based schemes and assess their performance in comparison with the existing schemes. In addition, a new scheme that avoids the anomalous solutions sometimes encountered with the conventional schemes is also proposed and evaluated. Results from extensive simulation studies are provided to highlight the strengths and weaknesses of the existing and proposed schemes, especially from the point of view of reliability.
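The bus-type switching scheme can be sketched as a post-iteration check; this generic fragment (with hypothetical data) omits the back-switching and anomaly safeguards that the paper studies:

```python
def enforce_q_limits(buses):
    """Bus-type switching after an FDLF iteration: a PV bus whose computed
    reactive output violates a limit becomes a PQ bus pinned at that limit."""
    for bus in buses:
        if bus["type"] == "PV":
            if bus["Q"] > bus["Qmax"]:
                bus["type"], bus["Q"] = "PQ", bus["Qmax"]
            elif bus["Q"] < bus["Qmin"]:
                bus["type"], bus["Q"] = "PQ", bus["Qmin"]
    return buses

gens = [{"type": "PV", "Q": 1.4, "Qmin": -0.5, "Qmax": 1.0}]
print(enforce_q_limits(gens))   # switched to PQ with Q held at Qmax
```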