87 results for Scheduler simulator
Abstract:
In this paper, we propose a novel source/drain (S/D) engineering scheme for the dual-gated bilayer graphene (BLG) field-effect transistor (FET), using doped semiconductors (with a bandgap) as source and drain to obtain unipolar complementary transistors. To simulate the device, a self-consistent Non-Equilibrium Green's Function (NEGF) solver has been developed and validated against published experimental data. Using the simulator, we predict an on-off ratio in excess of 10^4 and a subthreshold slope of ~110 mV/decade, with excellent scalability and current saturation, for a 20 nm gate length unipolar BLG FET. However, the performance of the proposed device is found to be strongly dependent on the S/D series resistance effect. The obtained results show significant improvements over existing reports, marking an important step towards bilayer graphene logic devices.
Abstract:
Frequency-domain scheduling and rate adaptation have helped next-generation orthogonal frequency division multiple access (OFDMA) based wireless cellular systems such as Long Term Evolution (LTE) achieve significantly higher spectral efficiencies. To overcome the severe uplink feedback bandwidth constraints, LTE uses several techniques to reduce the feedback required by a frequency-domain scheduler about the channel state information of all subcarriers of all users. In this paper, we analyze the throughput achieved by the User Selected Subband feedback scheme of LTE, in which a user feeds back only the indices of the best M subbands and a single 4-bit estimate of the average rate achievable over the M selected subbands. In addition, we compare the performance with the subband-level feedback scheme of LTE, and highlight the role of the scheduler by comparing the performances of the unfair greedy scheduler and the proportional fair (PF) scheduler. Our analysis yields several insights into the working of the feedback reduction techniques used in LTE.
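To make the feedback scheme concrete, the following minimal sketch (not the paper's analysis) shows how a User Selected Subband report could be formed: pick the best M subbands and quantise their average achievable rate into a single 4-bit value. The per-subband rates, M, and the quantiser range below are assumed placeholders.

```python
import numpy as np

def uss_report(subband_rates, M=3, cqi_bits=4, max_rate=6.0):
    """Pick the best-M subbands and quantise their average rate (all parameters assumed)."""
    best = np.argsort(subband_rates)[-M:]                 # indices of the best M subbands
    avg_rate = float(np.mean(subband_rates[best]))        # average over the selected subbands
    levels = 2 ** cqi_bits
    cqi = int(round(avg_rate / max_rate * (levels - 1)))  # single 4-bit estimate
    return sorted(best.tolist()), min(cqi, levels - 1)

rates = np.random.rayleigh(scale=2.0, size=12)            # illustrative per-subband rates
print(uss_report(rates))
```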
Abstract:
A new class of nets, called S-nets, is introduced for the performance analysis of scheduling algorithms used in real-time systems. Deterministic timed Petri nets do not adequately model the scheduling of resources encountered in real-time systems, and need to be augmented with resource places, signal places, and a scheduler block to facilitate the modeling of scheduling algorithms. The tokens are colored, and the transition firing rules are suitably modified. Further, the concept of transition folding is used to obtain intuitively simple models of multiframe real-time systems. Two generic performance measures, called the "load index" and the "balance index," which characterize the resource utilization and the uniformity of workload distribution, respectively, are defined. The utility of S-nets for evaluating heuristic-based scheduling schemes is illustrated by considering three heuristics for real-time scheduling. S-nets are useful in tuning the hardware configuration and the underlying scheduling policy, so that the system utilization is maximized and the workload distribution among the computing resources is balanced.
Abstract:
Genetic algorithms (GAs) are search methods that are being employed in a multitude of applications with extremely large search spaces. Recently, there has been considerable interest among GA researchers in understanding and formalizing the working of GAs. In an earlier paper, we introduced the notion of binomially distributed populations as the central idea behind an exact "populationary" model of the large-population dynamics of the GA operators for objective functions called "functions of unitation." In this paper, we extend this populationary model of GA dynamics to a more general class of objective functions called functions of unitation variables. We generalize the notion of a binomially distributed population to a generalized binomially distributed population (GBDP). We show that the effects of selection, crossover, and mutation can be exactly modelled after decomposing the population into GBDPs. Based on this generalized model, we have implemented a GA simulator for functions of two unitation variables, GASIM 2, and the distributions predicted by GASIM 2 match those obtained from actual GA runs. The generalized populationary model of GA dynamics not only presents a novel and natural way of interpreting the workings of GAs with large populations, but also provides for an efficient implementation of the model as a GA simulator.
Abstract:
Although the recently proposed single-implicit-equation-based input voltage equations (IVEs) for the independent double-gate (IDG) MOSFET promise faster computation than the earlier proposed coupled-equations-based IVEs, it is not clear how those equations could be solved inside a circuit simulator, as the conventional Newton-Raphson (NR)-based root-finding method will not always converge due to the presence of a discontinuity at the G-zero point (GZP) and nonremovable singularities in the trigonometric IVE. In this paper, we propose a unique algorithm to solve those IVEs, which combines the Ridders algorithm with the NR-based technique in order to provide assured convergence under any bias condition. Studying the IDG MOSFET operation carefully, we apply an optimized initial guess to the NR component and a minimized solution space to the Ridders component in order to achieve rapid convergence, which is very important for circuit simulation. To reduce the computational budget further, we propose a new closed-form solution of the IVEs in the near vicinity of the GZP. The proposed algorithm is tested with different device parameters over an extended range of bias conditions and successfully implemented in a commercial circuit simulator through its Verilog-A interface.
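A minimal sketch of the hybrid root-finding idea described above (not the authors' code): attempt a Newton-Raphson solve from an initial guess, and fall back to the bracketed Ridders method, which is guaranteed to converge on a sign-changing bracket, when Newton fails or diverges. The toy transcendental equation and the bracket are assumed placeholders standing in for the trigonometric IVE.

```python
from scipy.optimize import newton, ridder
import numpy as np

def solve_ive(f, x0, bracket, fprime=None, tol=1e-12):
    try:
        root = newton(f, x0, fprime=fprime, tol=tol, maxiter=50)
        # Accept the NR result only if it stays inside the physically valid bracket.
        if bracket[0] <= root <= bracket[1] and abs(f(root)) < 1e-9:
            return root
    except RuntimeError:
        pass                                   # NR failed: discontinuity or poor guess
    return ridder(f, *bracket, xtol=tol)       # assured convergence on the bracket

# Toy equation standing in for the trigonometric IVE.
f = lambda x: np.tan(x) - 2.0 * x
print(solve_ive(f, x0=1.2, bracket=(1.0, 1.5)))
```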
Abstract:
In recent years, parallel computers have been attracting attention for simulating artificial neural networks (ANNs), owing to the inherent parallelism in ANNs. This work is aimed at studying ways of parallelizing adaptive resonance theory (ART), a popular neural network algorithm. The core computations of ART are separated and different strategies for parallelizing ART are discussed. We present mapping strategies for the ART 2-A neural network onto ring and mesh architectures. The required parallel architecture is simulated using a parallel architectural simulator, PROTEUS, and parallel programs for the presented algorithms are written in a superset of C. A simulation-based scalability study of the algorithm-architecture match is carried out, and the various overheads are identified in order to suggest ways of improving the performance. Our main objective is to determine the performance of the ART 2-A network on different parallel architectures.
Abstract:
We consider the problem of wireless channel allocation to multiple users. A slot is given to the user with the highest metric (e.g., channel gain) in that slot. The scheduler may not know the channel states of all the users at the beginning of each slot. In this scenario, opportunistic splitting is an attractive solution. However, this algorithm requires that the metrics of different users form independent, identically distributed (i.i.d.) sequences with the same distribution, and that their distribution and number be known to the scheduler. This limits the usefulness of opportunistic splitting. In this paper, we develop a parametric version of this algorithm. The optimal parameters of the algorithm are learnt online through a stochastic approximation scheme. Our algorithm does not require the metrics of different users to have the same distribution. The statistics of these metrics and the number of users can be unknown and may also vary with time. Each metric sequence can be Markov. We prove the convergence of the algorithm and show its utility by scheduling the channel to maximize its throughput while satisfying fairness and/or quality of service constraints.
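A hedged sketch of the flavour of algorithm described above (not the paper's exact scheme): a transmission threshold is adapted online with a Robbins-Monro stochastic-approximation step so that, on average, one user exceeds it per slot. The channel model, step size, and target are assumed placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_slots = 10, 5000
threshold, step = 1.0, 0.01
successes = 0

for t in range(n_slots):
    gains = rng.rayleigh(scale=1.0, size=n_users)   # per-slot channel metrics (assumed model)
    contenders = np.sum(gains > threshold)          # users whose metric exceeds the threshold
    if contenders == 1:
        successes += 1                              # exactly one contender: slot used successfully
    # Stochastic-approximation update: push the expected number of contenders towards 1.
    threshold += step * (contenders - 1)

print(f"success fraction: {successes / n_slots:.2f}, final threshold: {threshold:.2f}")
```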
Abstract:
Simulation is an important means of evaluating new microarchitectures. With the advent of multi-core (CMP) platforms, simulators are becoming larger and more complex. However, with the availability of CMPs with larger caches and higher operating frequencies, the wall-clock time required for simulating an application has become comparatively shorter. Reducing this simulation time further is a great challenge, especially for multi-threaded workloads, owing to the non-determinism introduced by the simultaneously executing threads. In this paper, we propose a technique for speeding up multi-core simulation. The models of the processor core and the cache are replaced with functional models to achieve speedup. A timed Petri net model is used to estimate the execution time of the processor, and the memory access latencies are estimated using hit/miss information obtained from the functional model of the cache. This model can be used to predict the performance of data-parallel applications or multiprogramming workloads on CMP platforms with various cache hierarchies and a shared bus interconnect. The error in the estimated execution time of an application is within 6%, and the achieved speedup averages between 2x and 4x over the cycle-accurate simulator.
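The paper estimates timing with a timed Petri net; the simpler back-of-the-envelope accounting below is only an assumed illustration of the general idea: combine a functional core model's instruction count with hit/miss counts from a functional cache model to estimate execution time. All latency parameters are placeholders.

```python
def estimate_cycles(instructions, l1_hits, l1_misses,
                    base_cpi=1.0, l1_hit_cycles=2, miss_penalty_cycles=100):
    """Estimated cycles = core time + memory stall time (all parameters assumed)."""
    core_cycles = instructions * base_cpi
    memory_cycles = l1_hits * l1_hit_cycles + l1_misses * miss_penalty_cycles
    return core_cycles + memory_cycles

# Example: 10M instructions, 3M memory accesses with a 2% miss rate.
accesses = 3_000_000
misses = int(0.02 * accesses)
print(estimate_cycles(10_000_000, accesses - misses, misses))
```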
Abstract:
We develop analytical models for estimating the energy spent by stations (STAs) in infrastructure WLANs when performing TCP-controlled file downloads. We focus on the energy spent in radio communication when the STAs are in the Continuously Active Mode (CAM) or in the static Power Save Mode (PSM). Our approach is to develop accurate models for obtaining the fractions of time the STA radios spend idling, receiving, and transmitting. We discuss two traffic models for each mode of operation: (i) each STA performs one large file download, and (ii) the STAs perform short file transfers. We evaluate the rate of STA energy expenditure with long file downloads, and show that static PSM is worse than just using CAM. For short file downloads, we compute the number of file downloads that can be completed with a given battery capacity, and show that PSM performs better than CAM in this case. We validate our analytical models using the NS-2 simulator.
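A minimal sketch of the kind of accounting used above (not the paper's analytical model): the STA's average power draw is the power in each radio state weighted by the fraction of time spent in that state. The power numbers and time fractions below are illustrative placeholders, not measured values.

```python
POWER_W = {"tx": 1.4, "rx": 0.9, "idle": 0.8, "sleep": 0.05}   # assumed radio powers (W)

def energy_rate(time_fractions):
    """Average power drawn (W) given fractions of time in each radio state."""
    assert abs(sum(time_fractions.values()) - 1.0) < 1e-9
    return sum(POWER_W[state] * frac for state, frac in time_fractions.items())

# Example: a station with an always-on radio (CAM); the fractions are placeholders.
cam = {"tx": 0.05, "rx": 0.25, "idle": 0.70, "sleep": 0.00}
print(f"average power in CAM: {energy_rate(cam):.2f} W")
```

Which mode wins then depends entirely on the time fractions each mode induces, which is what the paper's models derive.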
Abstract:
We consider the problem of scheduling a wireless channel among multiple users. A slot is given to the user with the highest metric (e.g., channel gain) in that slot. The scheduler may not know the channel states of all the users at the beginning of each slot. In this scenario, opportunistic splitting is an attractive solution. However, this algorithm requires that the metrics of different users form independent, identically distributed (i.i.d.) sequences with the same distribution, and that their distribution and number be known to the scheduler. This limits the usefulness of opportunistic splitting. In this paper, we develop a parametric version of this algorithm. The optimal parameters of the algorithm are learnt online through a stochastic approximation scheme. Our algorithm does not require the metrics of different users to have the same distribution. The statistics of these metrics and the number of users can be unknown and may also vary with time. We prove the convergence of the algorithm and show its utility by scheduling the channel to maximize its throughput while satisfying fairness and/or quality of service constraints.
Abstract:
Background: Fighter pilots are frequently exposed to high temperatures during high-speed low-level flight. Heat strain can result in temporary impairment of cognitive functions and, when severe, loss of consciousness and consequent loss of life and equipment. Induction of stress proteins is a highly conserved stress response mechanism from bacteria to humans. Induced stress protein levels are known to be cytoprotective and have been correlated with stress tolerance. Although many studies on heat shock response mechanisms have been performed in cell culture and animal model systems, there is very limited information on stress protein induction in human subjects. Hypothesis: Heat shock proteins (Hsp), especially Hsp70, may be induced in human subjects exposed to high temperatures in a hot cockpit designed to simulate the heat stress experienced in low-flying sorties. Methods: Six healthy volunteers were subjected to heat stress at 55°C in a high-temperature cockpit simulator for a period of 1 h at 30% humidity. Physiological parameters such as oral and skin temperatures, heart rate, and sweat rate were monitored regularly during this time. The level of Hsp70 in leukocytes was examined before and after the heat exposure in each subject. Conclusions: Hsp70 was found to be significantly induced in all six subjects exposed to heat stress. The level of induced Hsp70 appears to correlate with other strain indicators such as accumulative circulatory strain and Craig's modified index. The usefulness of Hsp70 as a molecular marker of heat stress in humans is discussed.
Abstract:
We develop several hardware and software simulation blocks for the TinyOS-2 (TOSSIM-T2) simulator. The simulated hardware platform chosen is the popular MICA2 mote. While the hardware simulation elements comprise the radio and the external flash memory, the software blocks include an environment noise model, a packet delivery model, and an energy estimator for the complete system. The hardware radio block uses the software environment noise model to sample the noise floor. The packet delivery model is built by establishing the SNR-PRR curve for the MICA2 system. The energy estimator block models the energy consumption of the microcontroller unit (MCU), radio, LEDs, and external flash memory. Using the manufacturer's data sheets, we provide an estimate of the energy consumed by the hardware during transmission and reception, and also track several of the MCU's states with the associated energy consumption. To study the effectiveness of this work, we take up a case study of the paper presented in [1]. We obtain three sets of results for energy consumption: through mathematical analysis, through simulation using the blocks built into PowerTOSSIM-T2, and finally through laboratory measurements. Since there is a significant match between these result sets, we propose our blocks for the T2 community to effectively test their applications' energy requirements and node lifetimes.
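An illustrative sketch of a datasheet-style energy estimator like the one described above: energy per component is the supply voltage times the current drawn in each state times the time spent in that state. The current values below are placeholders, not the actual MICA2 datasheet figures.

```python
VOLTAGE = 3.0  # V (two AA cells, nominal)

CURRENT_A = {                       # assumed per-state currents, in amperes
    "mcu_active": 8e-3, "mcu_idle": 3e-3,
    "radio_rx": 10e-3, "radio_tx": 20e-3,
    "flash_write": 15e-3, "led_on": 2e-3,
}

def energy_joules(state_times_s):
    """Sum V * I * t over all tracked hardware states (times in seconds)."""
    return sum(VOLTAGE * CURRENT_A[state] * t for state, t in state_times_s.items())

# Example: one duty cycle of a sense-and-report application (times are placeholders).
times = {"mcu_active": 0.5, "mcu_idle": 9.5, "radio_rx": 0.2,
         "radio_tx": 0.05, "flash_write": 0.01, "led_on": 0.1}
print(f"energy per cycle: {energy_joules(times):.3f} J")
```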
Abstract:
Vehicular ad hoc network (VANET) applications are principally categorized into safety and commercial applications. Efficient traffic management for routing an emergency vehicle is of paramount importance in safety applications of VANETs. In the first case, a typical example of a highly dense urban scenario is considered to demonstrate the role of the penetration ratio in achieving reduced travel time between source and destination points. The major requirement for testing these VANET applications is a realistic simulation approach that justifies the results prior to actual deployment. A traffic simulator coupled with a network simulator through a feedback loop is apt for realistic simulation of VANETs. Thus, in this paper, we develop the safety application using the traffic control interface (TraCI), which couples SUMO (a traffic simulator) and NS2 (a network simulator). Likewise, the mean throughput is one of the necessary performance measures for commercial applications of VANETs. In the second case, commercial applications are considered wherein data is transferred among vehicles (V2V) and between roadside infrastructure and vehicles (I2V), and the throughput is assessed.
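A hedged sketch of the coupling idea: SUMO is stepped through the TraCI Python bindings and each vehicle's position and speed are handed to the network simulator every step. The paper couples SUMO with NS2; push_to_network_sim below is a hypothetical stand-in for that feedback hook, and the .sumocfg path is a placeholder.

```python
import traci

def push_to_network_sim(vehicle_id, position, speed):
    pass  # hypothetical hook: update the corresponding node's mobility in the network simulator

traci.start(["sumo", "-c", "scenario.sumocfg"])        # placeholder SUMO configuration
while traci.simulation.getMinExpectedNumber() > 0:     # vehicles still present or yet to depart
    traci.simulationStep()                             # advance SUMO by one step
    for vid in traci.vehicle.getIDList():
        push_to_network_sim(vid, traci.vehicle.getPosition(vid),
                            traci.vehicle.getSpeed(vid))
traci.close()
```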
Abstract:
In Universal Mobile Telecommunication Systems (UMTS), the Downlink Shared Channel (DSCH) can be used for providing streaming services. The traffic model for streaming services differs from the commonly used continuously-backlogged model: each connection specifies a required service rate over an interval of time, k, called the "control horizon". In this paper, our objective is to determine how k DSCH frames should be shared among a set of I connections. We need a scheduler that is both efficient and fair, and we introduce the notion of discrepancy to balance the conflicting requirements of aggregate throughput and fairness. Our aim is to schedule the mobiles in such a way that the schedule minimizes the discrepancy over the k frames. We propose an optimal and computationally efficient algorithm, called STEM+. The proof of the optimality of STEM+, when applied to the UMTS rate sets, is the major contribution of this paper. We also show that STEM+ performs better in terms of both fairness and aggregate throughput than other scheduling algorithms. Thus, STEM+ achieves both fairness and efficiency, and is therefore an appealing algorithm for scheduling streaming connections.
Abstract:
Accurate system planning and performance evaluation require knowledge of the joint impact of scheduling, interference, and fading. However, current analyses either require costly numerical simulations or make simplifying assumptions that limit the applicability of the results. In this paper, we derive analytical expressions for the spectral efficiency of cellular systems that use either the channel-unaware but fair round-robin scheduler or the greedy, channel-aware but unfair maximum signal-to-interference-ratio (SIR) scheduler. As is the case in real deployments, non-identical co-channel interference at each user, both Rayleigh fading and lognormal shadowing, and limited modulation constellation sizes are accounted for in the analysis. We show that using a simple moment generating function (MGF)-based lognormal approximation technique and an accurate Gaussian Q-function approximation leads to results that match simulations well. These results are more accurate than earlier results that instead used the moment-matching Fenton-Wilkinson approximation method and bounds on the Q-function. The spectral efficiency of cellular systems is strongly influenced by the channel scheduler and the small constellation sizes that are typically used in third-generation cellular systems.
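The paper's results are analytical; the sketch below is only an assumed Monte Carlo cross-check of the scheduler comparison: per-slot SIRs are drawn with Rayleigh fading and lognormal shadowing, and the slot is served either round-robin or to the user with the maximum SIR, with a rate cap standing in for the limited constellation size. All parameters are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, n_slots = 8, 20_000
shadow = 10 ** (rng.normal(0.0, 8.0, n_users) / 10)    # lognormal shadowing (8 dB std, assumed)
interference = 1.0                                      # simplified co-channel interference power

def spectral_efficiency(scheduler):
    total = 0.0
    for t in range(n_slots):
        fading = rng.exponential(1.0, n_users)          # Rayleigh fading power gains
        sir = shadow * fading / interference
        u = t % n_users if scheduler == "rr" else int(np.argmax(sir))
        total += min(np.log2(1.0 + sir[u]), 6.0)        # cap models the limited constellation size
    return total / n_slots

print(f"round robin: {spectral_efficiency('rr'):.2f} bit/s/Hz, "
      f"max-SIR: {spectral_efficiency('max'):.2f} bit/s/Hz")
```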