136 results for Graph-Based Linear Programming Modelling
Abstract:
The generation expansion planning (GEP) problem consists of determining the type of technology, size, location, and time at which new generation units must be integrated into the system, over a given planning horizon, to satisfy the forecasted energy demand. Over the past few years, due to increasing awareness of environmental issues, different approaches to solving the GEP problem have included some sort of environmental policy, typically based on emission constraints. This paper presents a dynamic linear model to solve the GEP problem. The main difference between the proposed model and most of the works in the specialized literature is the way the environmental policy is envisaged. Such policy includes: i) the taxation of CO2 emissions, ii) an annual Emissions Reduction Rate (ERR) for the overall system, and iii) the gradual retirement of old, inefficient generation plants. The proposed model is applied to an 11-region US system to design the most cost-effective and sustainable 10-technology energy portfolio for the next 20 years.
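The kind of dynamic GEP model this abstract outlines can be sketched as a small LP. Everything below is illustrative, not from the paper: two candidate technologies, one period, made-up costs, a made-up carbon tax, and an emissions cap standing in for the ERR constraint; scipy's `linprog` stands in for whatever solver the authors used.

```python
from scipy.optimize import linprog

# One-period toy GEP: choose new capacity for two candidate technologies.
# All numbers are illustrative, not from the paper.
DEMAND = 100.0   # MW of new demand to cover
CO2_CAP = 30.0   # tonne cap implementing an emissions-reduction target
CO2_TAX = 40.0   # $/tonne carbon tax

# (annualised cost $/MW, tonne CO2 per MW) per technology
techs = {"coal": (50.0, 1.0), "wind": (80.0, 0.0)}
names = list(techs)

# Objective: investment/operating cost plus taxed emissions
c = [cost + CO2_TAX * emis for cost, emis in techs.values()]

A_ub = [
    [-1.0, -1.0],                  # meet demand: x_coal + x_wind >= DEMAND
    [techs[n][1] for n in names],  # emissions cap
]
b_ub = [-DEMAND, CO2_CAP]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
plan = dict(zip(names, res.x))
```

With the taxed cost of coal (90 $/MW) exceeding wind (80 $/MW), the optimum builds only wind; raising the tax or tightening the cap shifts the portfolio, which is exactly the policy lever the abstract describes.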
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Piecewise-Linear Programming (PLP) is an important area of Mathematical Programming and concerns the minimisation of a convex separable piecewise-linear objective function subject to linear constraints. In this paper, a subarea of PLP called Network Piecewise-Linear Programming (NPLP) is explored. The paper presents four specialised algorithms for NPLP: (Strongly Feasible) Primal Simplex, Dual Method, Out-of-Kilter, and (Strongly Polynomial) Cost-Scaling, and studies their relative efficiency. A statistically designed experiment is used to perform a computational comparison of the algorithms. The response variable observed in the experiment is the CPU time to solve randomly generated network piecewise-linear problems, classified according to problem class (Transportation, Transshipment, and Circulation), problem size, extent of capacitation, and number of breakpoints per arc. Results and conclusions on the performance of the algorithms are reported.
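The standard observation behind convex network PLP is that a convex piecewise-linear arc cost can be split into parallel segments with nondecreasing slopes, which any min-cost routine then fills cheapest-first. A minimal single-arc sketch of that idea (names and numbers are illustrative, not from the paper):

```python
def pwl_min_cost(segments, demand):
    """Minimum cost of pushing `demand` units of flow through one arc whose
    convex piecewise-linear cost is given as (capacity, slope) segments with
    nondecreasing slopes.  Convexity makes cheapest-first filling optimal,
    which is why the LP reformulation with parallel segments is valid."""
    cost, remaining = 0.0, demand
    for cap, slope in segments:
        take = min(cap, remaining)
        cost += take * slope
        remaining -= take
        if remaining <= 0:
            break
    if remaining > 0:
        raise ValueError("demand exceeds total segment capacity")
    return cost

# Slope 1 up to 10 units, slope 3 for the next 5 units (one breakpoint):
# shipping 15 units costs 10*1 + 5*3 = 25.
```

The network algorithms compared in the paper exploit this same structure on every arc simultaneously rather than one arc at a time.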
Abstract:
In this work, the problem of defect location in power systems is formulated as a binary linear programming (BLP) model based on the historical alarm database of control and protection devices from the system control center, the set theory of minimal coverage (AI), and the protection philosophy adopted by the electric utility. In this model, circuit breaker operations are compared with their expected states in a strictly mathematical manner. To solve this BLP problem, which has a great number of decision variables, a dedicated Genetic Algorithm (GA) is proposed. Control parameters of the GA, such as crossover and mutation rates, population size, number of iterations, and population diversification, are calibrated to obtain efficiency and robustness. Results for a test system found in the literature are presented and discussed. © 2004 IEEE.
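A minimal sketch of a GA for the underlying minimal-coverage idea: select a smallest set of fault hypotheses whose expected alarms cover the observed ones. All operators, rates, and data below are illustrative, not the paper's calibrated settings:

```python
import random

def ga_set_cover(sections, alarms, pop_size=30, gens=60, pmut=0.05, seed=1):
    """Tiny GA for minimal alarm coverage.  Fitness penalises uncovered
    alarms heavily, then prefers selecting fewer sections (lower is better)."""
    rng = random.Random(seed)
    n = len(sections)

    def fitness(bits):
        covered = set()
        for i, b in enumerate(bits):
            if b:
                covered |= sections[i]
        return len(alarms - covered) * 100 + sum(bits)

    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        next_pop = pop[:2]                      # elitism: keep the two best
        while len(next_pop) < pop_size:
            a, b = rng.sample(pop[:10], 2)      # parents from the better half
            cut = rng.randrange(1, n) if n > 1 else 0
            child = a[:cut] + b[cut:]           # one-point crossover
            child = [bit ^ (rng.random() < pmut) for bit in child]  # mutation
            next_pop.append(child)
        pop = next_pop
    return min(pop, key=fitness)
```

The paper's calibration of crossover and mutation rates, population size, and diversification corresponds to tuning the `pop_size`, `gens`, and `pmut` knobs above against the utility's real alarm database.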
Abstract:
In this paper, an expert and interactive system for developing protection schemes for overhead, radial distribution feeders is proposed. In this system, the protective devices can be allocated either heuristically or in an optimized way. In the latter, the placement problem is modeled as a mixed-integer non-linear program, which is solved by a genetic algorithm (GA). Using information stored in a database as well as a knowledge base, the computational system is able to obtain excellent conditions of selectivity and coordination, improving the feeder reliability indices. Tests to assess the algorithm's efficiency were carried out using a real-life 660-node feeder. © 2006 IEEE.
Abstract:
This paper presents a methodology to solve the transmission network expansion planning (TNEP) problem considering reliability and uncertainty in the demand. The proposed methodology provides an optimal expansion plan that allows the power system to operate adequately, with an acceptable level of reliability, in an environment with uncertainty. The reliability criterion limits the expected value of the reliability index (LOLE, Loss Of Load Expectation) of the expanded system. Reliability is evaluated for the transmission system using an analytical technique based on enumeration. The mathematical model is solved efficiently using a modified Chu-Beasley specialized genetic algorithm. Detailed results from an illustrative example are presented and discussed. © 2009 IEEE.
Abstract:
This paper presents a new methodology for solving the optimal VAr planning problem in multi-area electric power systems using Dantzig-Wolfe decomposition. The original multi-area problem is decomposed into subproblems (one for each area) and a master problem (coordinator). The solution of the VAr planning problem in each area is based on the application of successive linear programming, and the coordination scheme is based on the reactive power marginal costs at the border buses. The aim of the model is to provide coordinated mechanisms to carry out VAr planning studies, maximizing autonomy and confidentiality for each area while assuring overall economy for the whole system. Using the mathematical model and a computational implementation of the proposed methodology, numerical results are presented for two interconnected systems, each composed of three equal subsystems formed from the IEEE30 and IEEE118 test systems. © 2011 IEEE.
Abstract:
Increased accessibility to high-performance computing resources has created a demand for user support through performance evaluation tools like the iSPD (iconic Simulator for Parallel and Distributed systems), a simulator based on iconic modelling for distributed environments such as computer grids. It was developed to make it easier for general users to create their grid models, including allocation and scheduling algorithms. This paper describes how schedulers are managed by iSPD and how users can easily adopt the scheduling policy that improves the system being simulated. A thorough description of iSPD is given, detailing its scheduler manager. Some comparisons between iSPD and Simgrid simulations, including runs of the simulated environment in a real cluster, are also presented. © 2012 IEEE.
Abstract:
In trickle irrigation systems, the design is based on a pre-established emission uniformity (EU), which is the combined result of the equipment characteristics and its hydraulic configuration. However, this desired EU value may be confirmed neither by the final project (in field conditions) nor by the yield uniformity. The hypotheses of this research were: a) the EU of a trickle irrigation system at field conditions is equal to the emission uniformity pre-established in its design; b) EU always has the lowest value when compared with other uniformity indicators; c) the discharge variation coefficient (VC) is not equal to the production variation coefficient in the operational unit; d) the difference between the discharge variation coefficient and the productivity variation coefficient depends on the water depth applied. This study aimed to evaluate the relationship between the EU used in the irrigation system design and the final yield uniformity. The uniformity indicators evaluated were EU, distribution uniformity (UD), and the index proposed by Barragan & Wu (2005). They were compared by estimating the performance of a trickle irrigation system applied in a citrus orchard with dimensions of 400 m x 600 m. The design of the irrigation system was optimized by a Linear Programming model. The tree rows were leveled in the longer direction, and the spacing adopted in the orchard was 7 m x 4 m. The manifold line was always operating on a slope. The sensitivity analysis involved different slopes (0, 3, 6, 9 and 12%) and different values of emission uniformity (60, 70, 75, 80, 85, 90 and 94%). The citrus yield uniformity was evaluated by the variation coefficient. The emission uniformity (EU) after design differed from the pre-established EU, more sharply for initial values below 90%. Comparing the uniformity indexes, the EU always yielded lower values than the UD and the index proposed by Barragan.
The emitter variation coefficient was always lower than the productivity variation coefficient. To obtain uniformity of production, it is necessary to consider the irrigation system uniformity and mainly the water depth to be applied.
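The uniformity indicators compared above have standard textbook forms: the Keller & Karmeli design EU combines the emitter manufacturing variation with the minimum-to-average discharge ratio, and UD is the low-quarter distribution uniformity. A sketch using those common textbook formulas (not necessarily the exact variants used in this study):

```python
import statistics

def distribution_uniformity(flows):
    """UD (%): mean of the lowest quarter of emitter discharges
    divided by the overall mean (low-quarter distribution uniformity)."""
    q = sorted(flows)
    k = max(1, len(q) // 4)
    return 100.0 * statistics.mean(q[:k]) / statistics.mean(q)

def design_emission_uniformity(cv, emitters_per_plant, qmin, qavg):
    """Keller & Karmeli design EU (%): combines manufacturing variation
    (cv, the emitter coefficient of variation) with hydraulic variation
    (minimum over average discharge along the unit)."""
    return 100.0 * (1.0 - 1.27 * cv / emitters_per_plant ** 0.5) * (qmin / qavg)
```

Because EU multiplies two penalties (manufacturing and hydraulic) while UD only looks at the low quarter of measured discharges, EU tends to come out lower, consistent with the comparison reported above.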
Abstract:
An important tool for heart disease diagnosis is the analysis of electrocardiogram (ECG) signals, given the non-invasive nature and simplicity of the ECG exam. Depending on the application, ECG data analysis consists of steps such as preprocessing, segmentation, feature extraction, and classification, aiming to detect cardiac arrhythmias (i.e., cardiac rhythm abnormalities). Aiming at a fast and accurate cardiac arrhythmia signal classification process, we apply and analyze a recent and robust supervised graph-based pattern recognition technique, the optimum-path forest (OPF) classifier. To the best of our knowledge, this is the first time the OPF classifier has been applied to the ECG heartbeat signal classification task. We then compare the performance (in terms of training and testing time, accuracy, specificity, and sensitivity) of the OPF classifier with that of three other well-known expert system classifiers, i.e., support vector machine (SVM), Bayesian, and multilayer artificial neural network (MLP), using features extracted by six main approaches considered in the literature for ECG arrhythmia analysis. In our experiments, we use the MIT-BIH Arrhythmia Database and the evaluation protocol recommended by the Association for the Advancement of Medical Instrumentation. A discussion of the obtained results shows that the OPF classifier presents a robust performance, i.e., there is no need for parameter setup, as well as high accuracy at an extremely low computational cost. Moreover, on average, the OPF classifier yielded better performance than the MLP and SVM classifiers in terms of classification time and accuracy, and produced quite similar performance to the Bayesian classifier, showing itself to be a promising technique for ECG signal analysis. © 2012 Elsevier Ltd. All rights reserved.
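The core of the OPF classifier is an optimum-path computation with the f_max path cost: training samples are conquered from prototypes along minimum-spanning-forest-like paths, and a test sample takes the label of the training sample offering the cheapest path extension. A simplified sketch (prototypes given by hand and training labels used directly; the real method elects prototypes automatically from the minimum spanning tree):

```python
import heapq, math

def opf_train(samples, prototypes):
    """Compute each training sample's optimum-path cost (f_max: the
    maximum edge weight along the path) from the prototype set, over the
    complete graph with Euclidean edge weights, via a Prim/Dijkstra hybrid."""
    n = len(samples)
    cost = [math.inf] * n
    done = [False] * n
    heap = []
    for p in prototypes:
        cost[p] = 0.0
        heapq.heappush(heap, (0.0, p))
    while heap:
        c, u = heapq.heappop(heap)
        if done[u]:
            continue
        done[u] = True
        for v in range(n):
            if not done[v]:
                new = max(c, math.dist(samples[u], samples[v]))
                if new < cost[v]:
                    cost[v] = new
                    heapq.heappush(heap, (new, v))
    return cost

def opf_classify(x, samples, labels, cost):
    """Label of the training sample minimizing max(path cost, last edge)."""
    best = min(range(len(samples)),
               key=lambda t: max(cost[t], math.dist(samples[t], x)))
    return labels[best]
```

Classification needs no parameter tuning, one reason the abstract highlights OPF's low computational cost compared with SVM and MLP.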
Abstract:
Perhaps due to its origins in a production scheduling software called Optimised Production Technology (OPT), plus the idea of focusing on system constraints, many believe that the Theory of Constraints (TOC) has a vocation for optimal solutions. Those who assess TOC from this perspective indicate that it guarantees an optimal solution only in certain circumstances. In opposition to this view, and founded on a numerical example of a production mix problem, this paper shows, by means of TOC assumptions, why TOC should not be compared with methods intended to seek optimal or best solutions, but rather with those seeking sufficiently good solutions, possible in non-deterministic environments. Moreover, we extend the relevant literature on the product mix decision by introducing a heuristic, based on the uniquely identified work with this aim, for achieving feasible solutions according to the TOC point of view. The proposed heuristic is tested on 100 production mix problems, and the results are compared with the responses obtained using Integer Linear Programming. The results show that the heuristic gives good results on average, but its performance falls sharply in some situations. © 2013 Copyright Taylor and Francis Group, LLC.
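The classic TOC-style product-mix heuristic that this literature builds on ranks products by throughput per minute of the constraint resource and loads the bottleneck in that order. A minimal sketch with illustrative data (not the paper's 100 test problems):

```python
def toc_product_mix(products, bottleneck_capacity):
    """Classic TOC heuristic: rank products by throughput per unit of
    bottleneck time and fill the constraint in that order, capped by demand.
    products: list of (name, throughput_per_unit, bottleneck_min_per_unit,
    demand); bottleneck_capacity: available minutes on the constraint."""
    ranked = sorted(products, key=lambda p: p[1] / p[2], reverse=True)
    mix, remaining = {}, bottleneck_capacity
    for name, throughput, minutes, demand in ranked:
        qty = min(demand, remaining // minutes)
        mix[name] = qty
        remaining -= qty * minutes
    return mix
```

Because the ranking only considers the single constraint, the heuristic can fall sharply behind the Integer Linear Programming optimum when several resources are near capacity, which matches the mixed results reported above.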
Abstract:
Defining the product mix is very important for organisations because it determines how productive resources are allocated among various operations. However, it is often defined subjectively. The methods commonly used for this definition are Integer Linear Programming and heuristics based on the Theory of Constraints, which use maximum throughput as the performance measure. Although this measure provides the maximum throughput for a specific problem, it does not consider aspects of time, such as the days used to generate that throughput. Taking this into account, the aim of this paper is to present a throughput-per-day approach to defining the product mix, as well as to propose a constructive heuristic to help in this process. The results show that the proposed heuristic obtained a satisfactory approximation to the optimum values obtained by enumeration. © 2013 Copyright Taylor and Francis Group, LLC.
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Graduate Program in Electrical Engineering - FEIS