146 results for Generation reallocation
Abstract:
A mechanism is presented here for the amplification of large-scale nonaxisymmetric magnetic fields as a manifestation of the dynamo effect. We generalize a result on restrictions on dynamo action by laminar flows originally derived by Zeldovich, Ruzmaikin, and Sokolov [Magnetic Fields in Astrophysics (Gordon and Breach, New York, 1983)]. We show how a screwlike motion having φ and z components of velocity can help to grow a magnetic field. The model postulates a large-scale flow having φ and z components with radial dependences (a helical flow). Shear acting on the radial field, because of a near-flux-freezing condition, causes amplification of the φ component of the magnetic field. The radial and axial components grow due to the presence of turbulent diffusion. The shear in the large-scale flow induces indefinite growth of the magnetic field without the α effect; nevertheless, turbulent diffusion plays an important part in the overall mechanism.
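Schematically, the mechanism can be read off the mean-field induction equation with a turbulent diffusivity η_T. The expressions below are a standard textbook form for an azimuthal shear (Ω-effect) with the flow assumed as stated in the abstract; they are not equations reproduced from the paper.

```latex
% Mean-field induction equation with turbulent diffusivity \eta_T (standard
% textbook form, not reproduced from the paper); U is the prescribed helical flow.
\begin{align}
  \frac{\partial \mathbf{B}}{\partial t}
    &= \nabla \times \left( \mathbf{U} \times \mathbf{B} \right)
       + \eta_T \nabla^{2} \mathbf{B},
  \qquad
  \mathbf{U} = \bigl( 0,\; r\,\Omega(r),\; U_z(r) \bigr), \\
  \frac{\partial B_{\phi}}{\partial t}
    &\sim r\, B_r \,\frac{d\Omega}{dr}
       + \eta_T \left( \nabla^{2} - \frac{1}{r^{2}} \right) B_{\phi}.
\end{align}
% The shear term stretches the radial field into the azimuthal direction under
% near flux freezing; the turbulent diffusion term is what couples the field
% components and lets the radial and axial components regrow.
```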
Abstract:
Trace processors rely on hierarchy, replication, and prediction to dramatically increase the execution speed of ordinary sequential programs. The authors describe some of the ways these processors will meet future technology demands.
Abstract:
This paper looks at the complexity of four different incremental problems: (1) interval partitioning of a flow graph, (2) breadth-first search (BFS) of a directed graph, (3) lexicographic depth-first search (DFS) of a directed graph, and (4) constructing the postorder listing of the nodes of a binary tree. The last problem arises out of the need to incrementally recompute the Sethi-Ullman (SU) ordering [1] of the subtrees of a tree after it has undergone changes of a given type. These problems are among those that claimed our attention while we were designing algorithmic techniques for incremental code generation. BFS and DFS certainly have numerous other applications, but as far as our work is concerned, incremental code generation is the common thread linking these problems. The complexity of these problems is studied from two different perspectives. Reference [2] gives the theory of incremental relative lower bounds (IRLBs); we use this theory to derive the IRLBs of the first three problems. We then use the notion of a bounded incremental algorithm [4] to prove the unboundedness of the fourth problem with respect to the locally persistent model of computation. The lower bound result for lexicographic DFS is possibly the most interesting. In [5] the author considers lexicographic DFS to be a problem whose incremental version may require recomputation of the entire solution from scratch. In that sense, our IRLB result provides further evidence for this possibility, with the proviso that the incremental DFS algorithms considered do not require too much preprocessing.
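To make problem (4) concrete, a minimal, non-incremental sketch of the postorder listing and the Sethi-Ullman numbering it feeds is given below; the Node class, field names, and the from-scratch computation are illustrative assumptions, not material from the paper.

```python
# Minimal, non-incremental sketch: postorder listing of a binary expression
# tree (problem (4)) and the classic Sethi-Ullman numbering computed over it.
# The Node class and names are illustrative only.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Node:
    op: str
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    label: int = 0          # Sethi-Ullman number (registers needed)


def postorder(root: Optional[Node]) -> List[Node]:
    """Postorder listing of the nodes (children before parent)."""
    out: List[Node] = []

    def walk(n: Optional[Node]) -> None:
        if n is None:
            return
        walk(n.left)
        walk(n.right)
        out.append(n)

    walk(root)
    return out


def su_label(n: Optional[Node], is_left: bool = True) -> int:
    """Classic Sethi-Ullman numbering, computed from scratch:
    a left-child leaf needs one register, a right-child leaf none;
    an internal node needs the max of its children's labels, plus one on a tie."""
    if n is None:
        return 0
    if n.left is None and n.right is None:
        n.label = 1 if is_left else 0
        return n.label
    l = su_label(n.left, True)
    r = su_label(n.right, False)
    n.label = max(l, r) if l != r else l + 1
    return n.label


# Example: (a + b) * (c - d) needs two registers.
tree = Node("*", Node("+", Node("a"), Node("b")), Node("-", Node("c"), Node("d")))
print(su_label(tree))                      # -> 2
print([n.op for n in postorder(tree)])     # -> ['a', 'b', '+', 'c', 'd', '-', '*']
```

An incremental variant would have to update the labels and the postorder listing after a local tree change without redoing this traversal, which is exactly the boundedness question the paper studies.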
Abstract:
A single-source network is said to be memory-free if the internal nodes (all nodes except the source and the sinks) do not employ memory but merely send linear combinations of the incoming symbols (received on their incoming edges) on their outgoing edges. Memory-free networks with delay that use network coding are forced to perform inter-generation network coding, with the result that some or all sinks may require a large amount of memory for decoding. In this work, we address this problem by also utilizing memory elements at the internal nodes of the network, which reduces the number of memory elements needed at the sinks. We give an algorithm which employs memory at all the nodes of the network to achieve single-generation network coding. For a fixed latency, our algorithm reduces the total number of memory elements used in the network to achieve single-generation network coding. We also discuss the advantages of combining single-generation network coding with convolutional network-error correction codes (CNECCs) for unit-delay networks, and we illustrate, through simulations on an example network under a probabilistic network error model, the performance gain that CNECCs obtain when memory is used at the intermediate nodes.
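As a toy illustration of memory at an internal node (not the paper's algorithm or code construction), the sketch below shows a unit-delay node over GF(2) that buffers its incoming symbols so that each outgoing symbol combines data from one generation only.

```python
# Toy illustration (not the paper's construction): an internal node with unit
# delay that buffers incoming symbols so each outgoing symbol mixes data from
# a single generation only.  Symbols are bits; the linear combination is XOR.
from collections import deque
from typing import List


class DelayNode:
    def __init__(self, num_inputs: int):
        # One memory slot per incoming edge (the unit delay).
        self.buffers = [deque([0], maxlen=1) for _ in range(num_inputs)]

    def step(self, incoming: List[int]) -> int:
        """Emit a combination of the buffered symbols, then store the new ones.

        The buffered symbols all belong to the same (previous) generation,
        so the output never mixes generations.
        """
        combined = 0
        for buf, sym in zip(self.buffers, incoming):
            combined ^= buf[0]      # use last generation's symbol
            buf.append(sym)         # store this generation's symbol
        return combined


node = DelayNode(num_inputs=2)
print(node.step([1, 0]))  # generation 0 arrives; output still reflects the zero state
print(node.step([1, 1]))  # -> 1, the XOR of the generation-0 symbols 1 and 0
```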
Abstract:
This paper formulates the automatic generation control (AGC) problem as a stochastic multistage decision problem. A strategy for solving this new AGC problem formulation is presented using a reinforcement learning (RL) approach. This method of obtaining an AGC controller does not depend on any knowledge of the system model and, more importantly, it admits considerable flexibility in defining the control objective. Two specific RL-based AGC algorithms are presented. The first algorithm uses the traditional control objective of limiting area control error (ACE) excursions, whereas in the second algorithm the controller can restore the load-generation balance by monitoring only the deviations in tie-line flows and system frequency; it does not need to know or estimate the composite ACE signal, as is done by all current approaches. The effectiveness and versatility of the approaches have been demonstrated using a two-area AGC model. (C) 2002 Elsevier Science B.V. All rights reserved.
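A minimal sketch of the second formulation's flavor is given below: a tabular Q-learning agent that observes only the frequency and tie-line flow deviations and never forms the composite ACE signal. The discretization, reward, action set, and constants are assumptions made for illustration, not the paper's values.

```python
# Illustrative tabular Q-learning skeleton for AGC: the agent sees only the
# frequency deviation (df) and tie-line flow deviation (dptie) and adjusts the
# generation setpoint.  All constants below are placeholder assumptions.
import random
from collections import defaultdict

ACTIONS = [-0.01, 0.0, +0.01]        # change in generation setpoint (p.u.)
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1   # learning rate, discount, exploration

Q = defaultdict(float)               # Q[(state, action_index)] -> value


def discretize(df: float, dptie: float) -> tuple:
    """Map the continuous deviations onto a coarse grid of states."""
    def bucket(x: float) -> int:
        return max(-2, min(2, int(round(x / 0.05))))
    return (bucket(df), bucket(dptie))


def choose_action(state: tuple) -> int:
    """Epsilon-greedy action selection over the tabulated Q-values."""
    if random.random() < EPS:
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: Q[(state, a)])


def reward(df: float, dptie: float) -> float:
    """Penalize any departure from nominal frequency and scheduled tie flow."""
    return -(abs(df) + abs(dptie))


def update(state: tuple, action: int, r: float, next_state: tuple) -> None:
    """One-step Q-learning update."""
    best_next = max(Q[(next_state, a)] for a in range(len(ACTIONS)))
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])

# In a closed loop one would: s = discretize(df, dptie); a = choose_action(s);
# apply ACTIONS[a] to the plant; observe the new (df, dptie); call update(...).
```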
Abstract:
In this paper, a wind energy conversion system (WECS) using a grid-connected wound rotor induction machine controlled from the rotor side is compared with both fixed-speed and variable-speed systems using a cage rotor induction machine. The comparison is made on the basis of (I) the major hardware components required, (II) the operating region, and (III) the energy output for a defined wind function, using the characteristics of a practical wind turbine. Although a fixed-speed system is simpler and more reliable, it severely limits the energy output of a wind turbine. For variable-speed systems, the comparison shows that using a wound rotor induction machine of similar rating can significantly enhance energy capture. This comes about due to the ability to operate with rated torque even at supersynchronous speeds; power is then generated out of the rotor as well as the stator. Moreover, with rotor-side control, the voltage rating of the power devices and the dc bus capacitor bank is reduced. The size of the line-side inductor also decreases. Results are presented to show the substantial advantages of the doubly fed system.
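The energy-capture argument rests on the standard slip-power relations for a doubly fed machine. The expressions below are textbook relations with losses neglected, not figures taken from the paper.

```latex
% Standard slip-power relations for a doubly fed induction machine
% (losses neglected; s is the slip, P_s the stator power).
\begin{align}
  P_r &\approx -\, s\, P_s, \\
  P_{\mathrm{mech}} &\approx (1 - s)\, P_s, \\
  P_{\mathrm{grid}} &= P_s + P_r \approx (1 - s)\, P_s .
\end{align}
% At supersynchronous speed the slip s is negative, so P_r > 0: the rotor
% delivers power to the grid in addition to the stator, which is the basis of
% the enhanced energy capture described above.
```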
Abstract:
The IEEE 802.16/WiMAX standard has fully embraced multi-antenna technology and can thus deliver robust, high transmission rates and higher system capacity. Nevertheless, due to its inherent form-factor constraints and cost concerns, a WiMAX mobile station (MS) should preferably contain fewer radio frequency (RF) chains than antenna elements, because RF chains are often substantially more expensive than antenna elements. Thus, antenna selection, wherein a subset of antennas is dynamically selected to connect to the limited RF chains for transceiving, is a highly appealing performance enhancement technique for multi-antenna WiMAX terminals. In this paper, a novel antenna selection protocol tailored for next-generation IEEE 802.16 mobile stations is proposed. As demonstrated by extensive OPNET simulations, the proposed protocol delivers a significant performance improvement over conventional 802.16 terminals that lack antenna selection capability. Moreover, the new protocol leverages the existing signaling methods defined in 802.16, thereby incurring negligible signaling overhead and requiring only minimal modifications to the standard. To the best of our knowledge, this paper represents the first effort to support antenna selection capability in IEEE 802.16 mobile stations.
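For concreteness, a common norm-based selection rule is sketched below: connect the limited RF chains to the antennas with the strongest estimated channels. The paper's contribution is the 802.16 signaling protocol around such a decision; this particular criterion, the function name, and the NumPy usage are illustrative assumptions.

```python
# Norm-based antenna selection sketch: pick the antennas whose estimated
# channels have the largest gains and connect them to the available RF chains.
import numpy as np


def select_antennas(H: np.ndarray, num_rf_chains: int) -> np.ndarray:
    """Pick the columns (antennas) of channel matrix H with the largest norms.

    H has shape (num_rx, num_antennas); returns the chosen antenna indices.
    """
    gains = np.linalg.norm(H, axis=0)              # per-antenna channel gain
    return np.argsort(gains)[::-1][:num_rf_chains]


rng = np.random.default_rng(0)
H = rng.standard_normal((2, 4)) + 1j * rng.standard_normal((2, 4))
print(select_antennas(H, num_rf_chains=2))         # indices of the 2 strongest of 4
```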
Abstract:
We derive bounds on leptonic double mass insertions of the type $\delta^{l}_{i4}\delta^{l}_{4j}$ in the four-generation MSSM, using the present limits on $l_i \to l_j + \gamma$. Two main features distinguish the rates of these processes in MSSM4 from MSSM3: (a) $\tan\beta$ is restricted to be very small ($\lesssim 3$) and (b) the large masses of the fourth-generation leptons. In spite of the small $\tan\beta$, there is an enhancement in amplitudes with LLRR ($\delta^{LL}_{i4}\delta^{RR}_{4j}$) type insertions, which pick up the mass of the fourth-generation lepton, $m_{\tau'}$. We find these bounds to be at least two orders of magnitude more stringent than those in MSSM3. (C) 2011 Elsevier B.V. All rights reserved.
Abstract:
Technology scaling has caused Negative Bias Temperature Instability (NBTI) to emerge as a major circuit reliability concern. Simultaneously, leakage power is becoming a greater fraction of the total power dissipated by logic circuits. As both NBTI and leakage power are highly dependent on the vectors applied at the circuit’s inputs, they can be minimized by applying carefully chosen input vectors during periods when the circuit is in standby or idle mode. Unfortunately, the input vectors that minimize leakage power are not the ones that minimize NBTI degradation, so there is a need for a methodology to generate input vectors that minimize both. This paper proposes such a systematic methodology for generating input vectors that minimize leakage power under the constraint that NBTI degradation does not exceed a specified limit. These input vectors can be applied at the primary inputs of a circuit when it is in standby/idle mode; they are chosen so that the gates dissipate only a small amount of leakage power while a large majority of the transistors on critical paths are in the “recovery” phase of NBTI degradation. The advantage of this methodology is that allowing circuit designers to constrain NBTI degradation to below a specified limit enables tighter guardbanding, increasing performance. Our methodology guarantees that the generated input vector dissipates the least leakage power among all the input vectors that satisfy the degradation constraint. We formulate the problem as a zero-one integer linear program and show that this formulation produces input vectors whose leakage power is within 1% of a minimum leakage vector selected by a search algorithm, while simultaneously reducing NBTI-induced degradation by about 5.75% of the maximum circuit delay compared to the worst-case NBTI degradation. This paper also proposes two new algorithms for identifying the circuit paths that are affected the most by NBTI degradation. The number of such paths identified by our algorithms is an order of magnitude smaller than with previously proposed heuristics.
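The toy zero-one integer linear program below conveys the flavor of such a formulation: minimize a linear leakage cost over binary input bits subject to a lower bound on an NBTI-recovery measure. The actual formulation encodes the gate-level logic of the circuit; the coefficients, variable names, and use of the PuLP solver here are placeholder assumptions.

```python
# Toy zero-one ILP in the same spirit (placeholder data, not the paper's model):
# choose primary-input bits x_i that minimize a linear leakage cost while
# keeping enough critical-path PMOS devices in the NBTI "recovery" state.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, value

num_inputs = 4
leak_cost = [3.0, 1.0, 4.0, 2.0]   # placeholder leakage contribution per input bit
recovery = [1.0, 2.0, 1.0, 3.0]    # placeholder recovery contribution per input bit
recovery_min = 4.0                 # placeholder NBTI-degradation constraint

prob = LpProblem("min_leakage_under_nbti_constraint", LpMinimize)
x = [LpVariable(f"x{i}", cat="Binary") for i in range(num_inputs)]

# Objective: total leakage of the chosen standby input vector.
prob += lpSum(leak_cost[i] * x[i] for i in range(num_inputs))
# Constraint: enough transistors on critical paths are in recovery.
prob += lpSum(recovery[i] * x[i] for i in range(num_inputs)) >= recovery_min

prob.solve()
print([int(value(xi)) for xi in x], value(prob.objective))
```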
Abstract:
Scalable Networks-on-Chip (NoCs) are needed to match the ever-increasing communication demands of large-scale Multi-Processor Systems-on-Chip (MPSoCs) for multimedia communication applications. The heterogeneous nature of application-specific on-chip cores, along with the specific communication requirements among the cores, calls for the design of application-specific NoCs for improved performance in terms of communication energy, latency, and throughput. In this work, we propose a methodology for the design of customized irregular networks-on-chip. The proposed method exploits a priori knowledge of the application's communication characteristics to generate an optimized network topology and the corresponding routing tables.
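A simplified stand-in for that flow is sketched below: given per-pair bandwidth demands, keep a low-cost connected topology that favors heavy flows and tabulate one route per communicating pair. The demand values, the spanning-tree heuristic, and the networkx usage are illustrative assumptions, not the paper's optimization method.

```python
# Simplified stand-in for an application-specific NoC synthesis flow:
# communication demands in, an optimized topology and routing tables out.
import networkx as nx

# (src, dst, bandwidth demand) -- illustrative application characteristics.
demands = [("cpu", "dsp", 400), ("cpu", "mem", 250),
           ("dsp", "mem", 300), ("dsp", "io", 50)]

comm = nx.Graph()
for src, dst, bw in demands:
    # Use the inverse demand as an edge length so heavy flows get short links.
    comm.add_edge(src, dst, weight=1.0 / bw, bw=bw)

# Keep a cheap connected topology (a spanning tree here, for simplicity).
topology = nx.minimum_spanning_tree(comm, weight="weight")

# Routing tables: for every communicating pair, the hop-by-hop path in the
# generated topology.
routing = {(s, d): nx.shortest_path(topology, s, d) for s, d, _ in demands}
for pair, path in routing.items():
    print(pair, "->", path)
```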