930 results for Reactive optimal power flow
Abstract:
With the proliferation of mobile wireless communication and embedded systems, energy efficiency has become a major design constraint. The dissipated energy is often expressed as the product of power dissipation and input-output delay. Most electronic design automation techniques focus on optimising only one of these parameters, either power or delay. Industry-standard design flows integrate systematic methods for optimising area or timing, while power optimisation often relies on heuristics specific to a particular design. In this work we answer three questions in our quest to provide a systematic approach to joint power and delay optimisation. The first question of our research is: how can we build a design flow which incorporates academic and industry-standard tools for power optimisation? To address this question, we use a reference design flow provided by Synopsys and integrate academic tools and methodologies into it. The proposed design flow is used as a platform for analysing several novel algorithms and methodologies for the optimisation of digital circuits. The second question we answer is: is it possible to apply a systematic approach to power optimisation of combinational digital circuits? The starting point is the selection of a suitable data structure which can easily incorporate information about delay, power and area, and to which optimisation algorithms can then be applied. In particular, we address the implications of systematic power optimisation methodologies and the potential degradation of other (often conflicting) parameters such as area or delay. Finally, the third question this thesis attempts to answer is: is there a systematic approach to multi-objective optimisation of delay and power? A delay-driven power optimisation and a power-driven delay optimisation are proposed in order to obtain balanced delay and power values.
This implies that each power optimisation step is constrained not only by the decrease in power but also by the increase in delay. Similarly, each delay optimisation step is governed not only by the decrease in delay but also by the increase in power. The goal is multi-objective optimisation of digital circuits in which the two conflicting objectives are power and delay. The logic synthesis and optimisation methodology is based on AND-Inverter Graphs (AIGs), which represent the functionality of the circuit. The switching activities and arrival times of circuit nodes are annotated onto an AND-Inverter Graph under both a zero-delay and a non-zero-delay model. We then introduce several reordering rules which are applied to the AIG nodes to minimise the switching power or longest-path delay of the circuit at the pre-technology-mapping level. The academic Electronic Design Automation (EDA) tool ABC is used for the manipulation of AND-Inverter Graphs. We have implemented various combinatorial optimisation algorithms often used in Electronic Design Automation, such as Simulated Annealing and Uniform Cost Search. Simulated Annealing (SA) is a probabilistic metaheuristic for locating a good approximation to the global optimum of a given function in a large search space. We used SA to decide probabilistically between moving from one optimised solution to another, such that dynamic power is optimised under given delay constraints and delay is optimised under given power constraints. A good approximation to the globally optimal solution under the energy constraint is obtained. Uniform Cost Search (UCS) is an algorithm for traversing or searching a weighted tree or graph. We have used UCS to search the AIG network for a specific node order in which to apply the reordering rules.
After the reordering rules are applied, the AIG network is mapped to a netlist using specific library cells. Our approach combines network restructuring, AIG node reordering, dynamic power and longest-path delay estimation and optimisation, and finally technology mapping to a netlist. A set of MCNC benchmark circuits and large combinational circuits of up to 100,000 gates have been used to validate our methodology. Comparisons for power and delay optimisation are made with the best synthesis scripts used in ABC. Reductions of 23% in power and 15% in delay are achieved with minimal overhead, compared to the best known ABC results. Our approach has also been applied to a number of processors with combinational and sequential components, and significant savings are achieved.
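The simulated annealing step described above can be sketched generically. The cost, neighbour, and feasibility functions below are hypothetical stand-ins for the thesis's switching-power estimate, AIG reordering move, and delay constraint; this is a minimal sketch of the technique, not the actual implementation.

```python
import math
import random

def simulated_annealing(init, cost, neighbour, feasible,
                        t0=1.0, cooling=0.95, steps=2000):
    """Generic SA loop: minimise cost(s) over feasible states."""
    state, best = init, init
    t = t0
    for _ in range(steps):
        cand = neighbour(state)
        if not feasible(cand):          # e.g. delay constraint violated
            continue
        delta = cost(cand) - cost(state)
        # accept improvements always; worse moves with probability e^(-delta/t)
        if delta < 0 or random.random() < math.exp(-delta / t):
            state = cand
            if cost(state) < cost(best):
                best = state
        t *= cooling                    # geometric cooling schedule
    return best

# toy usage: minimise a 1-D "power" function under a "delay" bound
random.seed(0)
best = simulated_annealing(
    init=5.0,
    cost=lambda x: (x - 2.0) ** 2,          # stand-in for switching power
    neighbour=lambda x: x + random.uniform(-0.5, 0.5),
    feasible=lambda x: abs(x) <= 10.0,      # stand-in for delay constraint
)
```

In the thesis's setting, `state` would be an AIG node ordering, `cost` the annotated switching power, and `feasible` the longest-path delay bound.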
Abstract:
This dissertation applies a variety of quantitative methods to electricity and carbon market data, utility company accounts data, and capital and operating costs to analyse some of the challenges associated with investment in energy assets. Three distinct research topics are examined within this general theme in successive sections: the efficiency of interconnector trading, the optimal sizing of intermittent wind facilities, and the impact of carbon pricing on the cost of capital for investors.
Abstract:
The synthetic utilities of the diazo and diazonium groups are matched only by their reputation for explosive decomposition. Continuous processing technology offers new opportunities to make and use these versatile intermediates at a range of scales with improved safety over traditional batch processes. In this minireview, the state of the art in the continuous flow processing of reactive diazo and diazonium species is discussed.
Cost savings from relaxation of operational constraints on a power system with high wind penetration
Abstract:
Wind energy is predominantly a nonsynchronous generation source. Large-scale integration of wind generation with existing electricity systems, therefore, presents challenges in maintaining system frequency stability and local voltage stability. Transmission system operators have implemented system operational constraints (SOCs) in order to maintain stability with high wind generation, but imposition of these constraints results in higher operating costs. A mixed integer programming tool was used to simulate generator dispatch in order to assess the impact of various SOCs on generation costs. Interleaved day-ahead scheduling and real-time dispatch models were developed to allow accurate representation of forced outages and wind forecast errors, and were applied to the proposed Irish power system of 2020 with a wind penetration of 32%. Savings of at least 7.8% in generation costs and reductions in wind curtailment of 50% were identified when the most influential SOCs were relaxed. The results also illustrate the need to relax local SOCs together with the system-wide nonsynchronous penetration limit SOC, as savings from increasing the nonsynchronous limit beyond 70% were restricted without relaxation of local SOCs. The methodology and results allow for quantification of the costs of SOCs, allowing the optimal upgrade path for generation and transmission infrastructure to be determined.
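The cost impact of relaxing an operational constraint can be illustrated with a toy merit-order dispatch model. The unit data and the single nonsynchronous penetration cap below are hypothetical and far simpler than the paper's mixed integer programming tool; this only sketches why relaxing such a limit lowers generation cost.

```python
def dispatch_cost(demand, units, nonsync_limit=1.0):
    """Dispatch cheapest units first, capping nonsynchronous output.

    units: list of (capacity_MW, cost_per_MWh, is_nonsync)
    nonsync_limit: max fraction of demand served by nonsynchronous units
    Assumes total capacity covers demand; returns total cost.
    """
    nonsync_cap = nonsync_limit * demand
    remaining, cost = demand, 0.0
    for cap, price, is_ns in sorted(units, key=lambda u: u[1]):
        g = min(cap, remaining)
        if is_ns:                      # curtail wind above the SOC limit
            g = min(g, nonsync_cap)
            nonsync_cap -= g
        cost += g * price
        remaining -= g
    return cost

units = [(500, 5.0, True),    # wind: cheap, nonsynchronous
         (400, 40.0, False),  # gas
         (300, 70.0, False)]  # peaker
relaxed = dispatch_cost(700, units, nonsync_limit=0.9)
tight = dispatch_cost(700, units, nonsync_limit=0.5)
# tighter nonsynchronous limit forces curtailment onto dearer plant
```

Relaxing the cap lets more of the cheap nonsynchronous generation run, so `relaxed < tight`, mirroring the savings the paper quantifies.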
Abstract:
Current methods for large-scale wind collection are unviable in urban areas. In order to investigate the feasibility of generating power from winds in these environments, we sought to optimize placements of small vertical-axis wind turbines in areas of artificially generated winds. We explored both vehicular transportation and architecture as sources of artificial wind, using a combination of anemometer arrays, global positioning system (GPS) data, and weather report data. We determined that transportation-generated winds were not significant enough for turbine implementation. In addition, safety and administrative concerns restricted the installation of such wind turbines along roadways for transportation-generated wind collection. Wind measurements from our architecture collection were applied in models that can help predict other similar areas with artificial wind, as well as the optimal placement of a wind turbine in those areas.
Abstract:
This paper considers the problem of sequencing n jobs in a three-machine flow shop with the objective of minimizing the makespan, which is the completion time of the last job. An O(n log n) time heuristic that is based on Johnson's algorithm is presented. It is shown to generate a schedule with length at most 5/3 times that of an optimal schedule, thereby reducing the previous best available worst-case performance ratio of 2. An application to the general flow shop is also discussed.
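The two-machine building block underlying that heuristic is Johnson's rule, which can be sketched as follows. This is a generic implementation of the classical two-machine algorithm, not the paper's three-machine heuristic; the example data are arbitrary.

```python
def johnson_two_machine(jobs):
    """Johnson's rule for the two-machine flow shop (F2 || Cmax).

    jobs: list of (p1, p2) processing-time pairs.
    Returns an optimal job order as indices into `jobs`.
    """
    # jobs shorter on machine 1 go first, in increasing p1 order
    first = sorted((i for i, (a, b) in enumerate(jobs) if a < b),
                   key=lambda i: jobs[i][0])
    # jobs shorter on machine 2 go last, in decreasing p2 order
    last = sorted((i for i, (a, b) in enumerate(jobs) if a >= b),
                  key=lambda i: jobs[i][1], reverse=True)
    return first + last

def makespan(jobs, order):
    """Completion time of the last job on machine 2 for a given order."""
    c1 = c2 = 0
    for i in order:
        c1 += jobs[i][0]                 # machine 1 runs without idling
        c2 = max(c1, c2) + jobs[i][1]    # machine 2 waits for machine 1
    return c2

jobs = [(3, 6), (5, 2), (1, 2), (6, 6), (2, 4)]
order = johnson_two_machine(jobs)        # [2, 4, 0, 3, 1]
```

Sorting dominates, giving the O(n log n) running time the abstract cites for the heuristic built on this rule.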
Abstract:
We study the special case of the m machine flow shop problem in which the processing time of each operation of job j is equal to pj; this variant of the flow shop problem is known as the proportionate flow shop problem. We show that for any number of machines and for any regular performance criterion we can restrict our search for an optimal schedule to permutation schedules. Moreover, we show that the problem of minimizing total weighted completion time is solvable in O(n²) time. © 1998 John Wiley & Sons, Ltd.
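A convenient property of the proportionate flow shop is that the makespan of any permutation schedule equals the sum of the job times plus (m - 1) times the largest job time. A short simulation, a generic sketch rather than the paper's algorithm, illustrates this with arbitrary data.

```python
def flow_shop_makespan(p):
    """Makespan of the identity permutation; p[i][j] = time of job j on machine i.

    Uses the standard recurrence C[i][j] = max(C[i-1][j], C[i][j-1]) + p[i][j],
    kept as a single rolling row of completion times.
    """
    n = len(p[0])
    c = [0] * n                      # completion times on the previous machine
    for row in p:
        prev = 0                     # completion of the previous job on this machine
        for j in range(n):
            prev = max(c[j], prev) + row[j]
            c[j] = prev
    return c[-1]

# proportionate shop: every machine row is the same vector of job times
times = [4, 1, 3, 2]
m = 3
cmax = flow_shop_makespan([times] * m)
assert cmax == sum(times) + (m - 1) * max(times)   # 10 + 2*4 = 18
```

Because the makespan is order-independent here, attention shifts to criteria like total weighted completion time, which the paper solves in quadratic time.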
Abstract:
This paper considers a special class of flow-shop problems, known as the proportionate flow shop. In such a shop, each job flows through the machines in the same order and has equal processing times on the machines. The processing times of different jobs may be different. It is assumed that all operations of a job may be compressed by the same amount which will incur an additional cost. The objective is to minimize the makespan of the schedule together with a compression cost function which is non-decreasing with respect to the amount of compression. For a bicriterion problem of minimizing the makespan and a linear cost function, an O(n log n) algorithm is developed to construct the Pareto optimal set. For a single criterion problem, an O(n²) algorithm is developed to minimize the sum of the makespan and compression cost. Copyright © 1999 John Wiley & Sons, Ltd.
Abstract:
In this paper, we study a problem of scheduling and batching on two machines in a flow-shop and open-shop environment. Each machine processes operations in batches, and the processing time of a batch is the sum of the processing times of the operations in that batch. A setup time, which depends only on the machine, is required before a batch is processed on a machine, and all jobs in a batch remain at the machine until the entire batch is processed. The aim is to make batching and sequencing decisions, which specify a partition of the jobs into batches on each machine, and a processing order of the batches on each machine, respectively, so that the makespan is minimized. The flow-shop problem is shown to be strongly NP-hard. We demonstrate that there is an optimal solution with the same batches on the two machines; we refer to these as consistent batches. A heuristic is developed that selects the best schedule among several with one, two, or three consistent batches, and is shown to have a worst-case performance ratio of 4/3. For the open-shop, we show that the problem is NP-hard in the ordinary sense. By proving the existence of an optimal solution with one, two or three consistent batches, a close relationship is established with the problem of scheduling two or three identical parallel machines to minimize the makespan. This allows a pseudo-polynomial algorithm to be derived, and various heuristic methods to be suggested.
Abstract:
The growth of computer power allows the solution of complex problems related to compressible flow, an important class of problems in modern-day CFD. Over the last 15 years or so, many review works on CFD have been published. This book concerns both mathematical and numerical methods for compressible flow. In particular, it provides a clear-cut introduction as well as an in-depth treatment of modern numerical methods in CFD. The book is organised in two parts. The first part consists of Chapters 1 and 2 and is mainly devoted to theoretical discussions and results. Chapter 1 concerns fundamental physical concepts and theoretical results in gas dynamics. Chapter 2 describes the basic mathematical theory of compressible flow using the inviscid Euler equations and the viscous Navier–Stokes equations. Existence and uniqueness results are also included. The second part covers modern numerical methods for the Euler and Navier–Stokes equations. Chapter 3 is devoted entirely to the finite volume method for the numerical solution of the Euler equations and covers fundamental concepts such as the order of numerical schemes, stability, and high-order schemes. The finite volume method is illustrated for the 1-D as well as multidimensional Euler equations. Chapter 4 covers the theory of the finite element method and its application to compressible flow. A section is devoted to the combined finite volume–finite element method, and its background theory is also included. Numerous examples are included throughout the book to demonstrate the numerical methods. The book provides good insight into numerical schemes, theoretical analysis, and validation on test problems. It is a very useful reference for applied mathematicians, numerical analysts, and practising engineers, as well as an important reference for postgraduate researchers in the field of scientific computing and CFD.
Abstract:
This paper discusses the reliability of power electronics modules. The approach taken combines numerical modeling techniques with experimentation and accelerated testing to identify failure modes and mechanisms for the power module structure and most importantly the root cause of a potential failure. The paper details results for two types of failure (i) wire bond fatigue and (ii) substrate delamination. Finite element method modeling techniques have been used to predict the stress distribution within the module structures. A response surface optimisation approach has been employed to enable the optimal design and parameter sensitivity to be determined. The response surface is used by a Monte Carlo method to determine the effects of uncertainty in the design.
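The final step, propagating design uncertainty through a response surface with a Monte Carlo method, can be sketched generically. The quadratic surface, nominal values, and tolerances below are hypothetical stand-ins for the paper's fitted stress model.

```python
import random
import statistics

def monte_carlo_stress(surface, nominal, tol, n=10000, seed=0):
    """Propagate parameter uncertainty through a response-surface model.

    surface: callable mapping design parameters to predicted stress
    nominal: nominal parameter values; tol: std. deviation of each parameter
    Returns the mean and standard deviation of the predicted stress.
    """
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        # sample each parameter around its nominal value
        x = [rng.gauss(m, s) for m, s in zip(nominal, tol)]
        samples.append(surface(x))
    return statistics.mean(samples), statistics.stdev(samples)

# hypothetical quadratic response surface: stress vs. two design parameters
surface = lambda x: 50.0 + 4.0 * x[0] + 2.0 * x[1] + 0.8 * x[0] * x[1]
mean, sd = monte_carlo_stress(surface, nominal=[1.0, 2.0], tol=[0.1, 0.2])
```

Because each sample costs only one cheap surface evaluation rather than a finite element solve, thousands of draws are affordable, which is the point of combining the two methods.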
Abstract:
In this paper the reliability of the isolation substrate and chip mount-down solder interconnect of power modules under thermal-mechanical loading is analysed using a numerical modelling approach. Damage indicators such as the peel stress and the accumulated plastic work density in the solder interconnect are calculated for a range of geometrical design parameters, and the effects of these parameters on reliability are studied using a combination of the finite element analysis (FEA) method and optimisation techniques. The sensitivities of the reliability of the isolation substrate and solder interconnect to changes in the design parameters are obtained, and optimal designs are studied using response surface approximation and gradient optimisation methods.
Abstract:
We study the two-machine flow shop problem with an uncapacitated interstage transporter. The jobs have to be split into batches, and upon completion on the first machine, each batch has to be shipped to the second machine by a transporter. The best known heuristic for the problem is a –approximation algorithm that outputs a two-shipment schedule. We design a –approximation algorithm that finds schedules with at most three shipments, and this ratio cannot be improved unless schedules with more shipments are created. This improvement is achieved through a thorough analysis of schedules with two and three shipments by means of linear programming. We formulate the problems of finding an optimal schedule with two or three shipments as integer linear programs and develop strongly polynomial algorithms that find solutions to their continuous relaxations with a small number of fractional variables.
Abstract:
Optimal design of a power electronics module isolation substrate is assessed using a combination of finite element structural mechanics analysis and a response surface optimisation technique. Primary failure modes in power electronics modules include the loss of structural integrity in the ceramic substrate materials due to stresses induced through thermal cycling. The influence of ceramic substrate design parameters is analysed using a design-of-experiments approach. Finite element analysis is used to determine the stress distribution for each design, and the results are used to construct a quadratic response surface function. A particle swarm optimisation algorithm is then used to determine the optimal substrate design. Analysis of response surface function gradients is used to perform sensitivity analysis and develop isolation substrate design rules. The influence of design uncertainties introduced through manufacturing tolerances is assessed using a Monte Carlo algorithm, resulting in a stress distribution histogram. The probability of failure caused by the violation of design constraints has been analysed. Six geometric design parameters are considered in this work, and the most important of them have been identified. The overall results can be used to enhance the design and reliability of the component.
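Searching a fitted response surface with a particle swarm can be sketched generically. The quadratic below is an arbitrary stand-in for a fitted stress response, and the swarm parameters are conventional defaults rather than the paper's settings.

```python
import random

def pso_minimise(f, dim, bounds, particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimise f over a box using a basic particle swarm."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(particles)]
    vs = [[0.0] * dim for _ in range(particles)]
    pbest = [x[:] for x in xs]                 # per-particle best positions
    pval = [f(x) for x in xs]
    g = min(range(particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]         # swarm-wide best
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                # inertia + pull towards personal and global bests
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            v = f(xs[i])
            if v < pval[i]:
                pbest[i], pval[i] = xs[i][:], v
                if v < gval:
                    gbest, gval = xs[i][:], v
    return gbest, gval

# stand-in quadratic "response surface" with minimum at (1, -2)
surface = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2 + 0.5
best_x, best_val = pso_minimise(surface, dim=2, bounds=(-5.0, 5.0))
```

As with the Monte Carlo step, the swarm evaluates only the cheap surrogate, so the expensive finite element model is queried just to build the surface.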