48 results for heuristic algorithms
at Instituto Politécnico do Porto, Portugal
Abstract:
Intensive use of Distributed Generation (DG) represents a change in the power systems operation paradigm, making small-scale energy generation and storage decisions relevant for the whole system. This paradigm led to the smart grid concept, whose efficient management, in both technical and economic terms, must be assured. This paper presents a new approach to solve the economic dispatch in smart grids. The proposed resource management methodology involves two stages. The first uses fuzzy set theory to define the forecast ranges of the natural resources as well as the load forecast. The second stage uses heuristic optimization to determine the economic dispatch considering the generation forecast, storage management and demand response.
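A minimal Python sketch of the idea behind the first stage: a natural resource forecast (here wind power) is represented as a triangular fuzzy number and an alpha-cut gives the range handed to the dispatch stage. All numbers and the alpha level are illustrative assumptions, not values from the paper.

# Sketch: represent a wind-power forecast as a triangular fuzzy number and
# take an alpha-cut to obtain the range passed to the dispatch stage.
# The parameters (low, peak, high, alpha) are illustrative assumptions.

def triangular_membership(x, low, peak, high):
    """Membership degree of x in the triangular fuzzy set (low, peak, high)."""
    if x <= low or x >= high:
        return 0.0
    if x <= peak:
        return (x - low) / (peak - low)
    return (high - x) / (high - peak)

def alpha_cut(low, peak, high, alpha):
    """Interval of values whose membership is at least alpha."""
    return (low + alpha * (peak - low), high - alpha * (high - peak))

# Hypothetical hour-ahead wind forecast: most likely 2.0 MW, between 1.2 and 2.6 MW.
lo, hi = alpha_cut(1.2, 2.0, 2.6, alpha=0.5)
print(f"Dispatchable wind range at alpha=0.5: [{lo:.2f}, {hi:.2f}] MW")
print(f"Membership of 1.8 MW: {triangular_membership(1.8, 1.2, 2.0, 2.6):.2f}")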
Abstract:
Genetic Algorithms (GAs) are adaptive heuristic search algorithms based on the evolutionary ideas of natural selection and genetics. The basic concept of GAs is to simulate the processes in natural systems necessary for evolution, specifically those following the principles of survival of the fittest first laid down by Charles Darwin. Particle Swarm Optimization (PSO), on the other hand, is a population-based stochastic optimization technique inspired by the social behavior of bird flocking or fish schooling. PSO shares many similarities with evolutionary computation techniques such as GAs: the system is initialized with a population of random solutions and searches for optima by updating generations. However, unlike GAs, PSO has no evolution operators such as crossover and mutation. In PSO, the potential solutions, called particles, fly through the problem space by following the current optimum particles. PSO is attractive because there are few parameters to adjust. This paper presents a hybridization of a GA and a PSO algorithm (crossing the two algorithms). The resulting algorithm is applied to the synthesis of combinational logic circuits. With this combination it is possible to take advantage of the best features of each algorithm.
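A simplified sketch of how such a GA/PSO crossing can be organized, shown on a generic real-valued minimization problem (the sphere function) rather than on the circuit encoding used in the paper; population size, inertia and acceleration weights are illustrative assumptions.

# Simplified GA/PSO hybrid sketch on a generic minimization problem (sphere
# function); the circuit-synthesis encoding used in the paper is not shown.
import random

def fitness(x):                       # objective to minimize (assumption)
    return sum(v * v for v in x)

DIM, POP, ITER = 5, 30, 200
W, C1, C2 = 0.7, 1.5, 1.5             # PSO inertia and acceleration weights
pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(POP)]
vel = [[0.0] * DIM for _ in range(POP)]
pbest = [p[:] for p in pos]
gbest = min(pos, key=fitness)[:]

for _ in range(ITER):
    # PSO step: move particles towards personal and global bests.
    for i in range(POP):
        for d in range(DIM):
            vel[i][d] = (W * vel[i][d]
                         + C1 * random.random() * (pbest[i][d] - pos[i][d])
                         + C2 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
    # GA step: arithmetic crossover between random particles plus Gaussian mutation.
    for i in range(0, POP, 2):
        j = random.randrange(POP)
        a = random.random()
        child = [a * pos[i][d] + (1 - a) * pos[j][d] for d in range(DIM)]
        child = [v + random.gauss(0, 0.1) if random.random() < 0.1 else v for v in child]
        if fitness(child) < fitness(pos[i]):   # replace parent only if better
            pos[i] = child
    # Update personal and global bests.
    for i in range(POP):
        if fitness(pos[i]) < fitness(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=fitness)[:]

print("best fitness:", fitness(gbest))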
Abstract:
Fuzzy logic controllers (FLC) are intelligent systems, based on heuristic knowledge, that have been widely applied in numerous areas of everyday life. They can be used to describe linear or nonlinear systems and are suitable when a real system is unknown or its model is too difficult to obtain. FLC provide a formal methodology for representing, manipulating and implementing human heuristic knowledge on how to control a system. These controllers can be seen as artificial decision makers operating in a closed-loop system, in real time. The main aim of this work was to develop a single optimal fuzzy controller, easily adaptable to a wide range of systems, from simple to complex and linear to nonlinear, and able to control all of them. Due to their efficiency in searching for and finding optimal solutions to highly complex problems, GAs were used to tune the FLC by finding the parameters that yield the best responses. The work was performed using the MATLAB/SIMULINK software, a very useful tool that provides an easy way to test and analyse the FLC, the PID and the GAs in the same environment. A Fuzzy PID controller (FL-PID) type, namely the Fuzzy PD+I, was therefore proposed. The controller was compared with the classical PID controller tuned with the heuristic Ziegler-Nichols method, the optimal Zhuang-Atherton method and the GA method itself. The IAE, ISE, ITAE and ITSE criteria, used as the GA fitness functions, were applied to compare the performance of the controllers used in this work. Overall, and for most systems, the results of the FL-PID tuned with GAs were very satisfactory; in some cases they were substantially better than those of the other PID controllers. The best system responses were obtained with the IAE and ITAE criteria used to tune the FL-PID and PID controllers.
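The four error-integral criteria used as GA fitness functions can be computed directly from a sampled closed-loop error signal; a sketch assuming a uniform sampling period dt and rectangle-rule integration is shown below.

# Sketch of the four error-integral criteria used as GA fitness functions,
# computed by rectangle-rule integration of a sampled error signal
# (assumption: uniform sampling period dt).
def control_criteria(errors, dt):
    iae = sum(abs(e) for e in errors) * dt                          # Integral of Absolute Error
    ise = sum(e * e for e in errors) * dt                           # Integral of Squared Error
    itae = sum(k * dt * abs(e) for k, e in enumerate(errors)) * dt  # time-weighted IAE
    itse = sum(k * dt * e * e for k, e in enumerate(errors)) * dt   # time-weighted ISE
    return {"IAE": iae, "ISE": ise, "ITAE": itae, "ITSE": itse}

# Hypothetical step-response error decaying to zero.
errs = [1.0, 0.6, 0.3, 0.1, 0.0, -0.05, 0.0]
print(control_criteria(errs, dt=0.1))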
Abstract:
The best places to locate Gas Supply Units (GSUs) in a natural gas system and their optimal allocation to loads are key factors in organizing an efficient upstream gas infrastructure. The number of GSUs and their optimal location in a gas network is a decision problem that can be formulated as a linear programming problem. Our emphasis is on the formulation and use of a suitable location model, reflecting real-world operations and constraints of a natural gas system. This paper presents a heuristic model, based on a Lagrangian approach, developed to find the optimal GSU locations on a natural gas network, minimizing expenses and maximizing throughput and security of supply. The location model is applied to the Iberian high-pressure natural gas network, a system modelled with 65 demand nodes. These nodes are linked by physical and virtual pipelines (road trucks carrying gas in liquefied form). The model results show the best places to locate the GSUs, together with the optimal demand allocation and the most economical gas transport mode: by pipeline or by road truck.
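A minimal sketch of a Lagrangian relaxation heuristic for a toy uncapacitated location instance (open supply sites against fixed costs, allocate demand nodes against transport costs); the data are illustrative assumptions and the model is far simpler than the Iberian network case described above.

# Minimal Lagrangian-relaxation sketch for a toy uncapacitated location problem:
# choose which supply units to open (fixed cost f) and how to allocate demand
# nodes (transport cost c) while every node must be served. Data are illustrative.
f = [10.0, 12.0, 8.0]                               # opening cost of each candidate GSU site
c = [[2, 9, 7], [6, 3, 8], [5, 8, 2], [9, 4, 3]]    # c[i][j]: cost of serving node i from site j
n_nodes, n_sites = len(c), len(f)
lam = [5.0] * n_nodes                               # multipliers on "serve every node" constraints

for it in range(100):
    # Solve the relaxed problem: each site is opened only if profitable.
    open_site, assign = [], [[0] * n_sites for _ in range(n_nodes)]
    for j in range(n_sites):
        reduced = [min(0.0, c[i][j] - lam[i]) for i in range(n_nodes)]
        if f[j] + sum(reduced) < 0:
            open_site.append(j)
            for i in range(n_nodes):
                if c[i][j] - lam[i] < 0:
                    assign[i][j] = 1
    # Subgradient step on the relaxed assignment constraints.
    step = 1.0 / (1 + it)
    for i in range(n_nodes):
        lam[i] += step * (1 - sum(assign[i]))

# Heuristic repair: every node is finally allocated to its cheapest open site.
if not open_site:
    open_site = [min(range(n_sites), key=lambda j: f[j])]
alloc = [min(open_site, key=lambda j: c[i][j]) for i in range(n_nodes)]
cost = sum(f[j] for j in open_site) + sum(c[i][alloc[i]] for i in range(n_nodes))
print("open sites:", open_site, "allocation:", alloc, "total cost:", cost)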
Abstract:
The introduction of new distributed energy resources, based on naturally intermittent power sources, into power systems imposes the development of new, adequate operation management and control methods. This paper proposes a short-term Energy Resource Management (ERM) methodology performed in two phases. The first addresses the hour-ahead ERM scheduling and the second deals with the five-minute-ahead ERM scheduling. Both phases consider the day-ahead resource scheduling solution. The ERM scheduling is formulated as an optimization problem that aims to minimize the operation costs from the point of view of a virtual power player that manages the network and the existing resources. The optimization problem is solved by a deterministic mixed-integer non-linear programming approach and by a heuristic approach based on genetic algorithms. A case study considering a distribution network with 33 buses, 66 distributed generation units, 32 loads with demand response contracts and 7 storage units has been implemented in a PSCAD-based simulator developed in the scope of the presented work, in order to validate the proposed short-term ERM methodology considering the dynamic power system behavior.
Abstract:
The smart grid concept appears as a suitable solution to guarantee power system operation in the new electricity paradigm, with electricity markets and the integration of large amounts of Distributed Energy Resources (DERs). Virtual Power Players (VPPs) will have significant importance in the management of a smart grid. In the context of this new paradigm, Electric Vehicles (EVs) arise as a readily available resource to be used as a DER by a VPP. This paper presents the application of the Simulated Annealing (SA) technique to solve the Energy Resource Management (ERM) problem of a VPP. A new heuristic approach to intelligently handle the charging and discharging of the EVs is also presented. This heuristic process is incorporated into the SA technique in order to improve the ERM results. The case study shows the ERM results for a 33-bus distribution network with three different EV penetration levels, i.e., with 1000, 2000 and 3000 EVs. The results of the proposed adaptation of the SA technique are compared with a previous SA version and with a deterministic technique.
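A bare-bones sketch of the SA loop with a heuristic neighbour move in the spirit described above: the state is a vector of hourly EV charging powers and the move shifts charge from an expensive period to a cheaper one. Prices, charger limits and the cost model are assumptions, not the paper's case study.

# Bare-bones simulated annealing sketch: the state is a vector of hourly EV
# charging powers and the neighbour move heuristically shifts charge from an
# expensive hour to a cheaper one. Prices, limits and cost model are assumptions.
import math, random

price = [0.06, 0.05, 0.05, 0.07, 0.12, 0.15, 0.14, 0.09]   # EUR/kWh per period
REQUIRED_KWH, P_MAX = 16.0, 3.5                              # energy need and charger limit

def cost(schedule):
    energy = sum(schedule)
    # operation cost plus a penalty if the EV does not reach the required charge
    return sum(p * q for p, q in zip(price, schedule)) + 10.0 * max(0.0, REQUIRED_KWH - energy)

def neighbour(schedule):
    new = schedule[:]
    # heuristic move: take charge away from the priciest period, add to the cheapest
    hi = max(range(len(new)), key=lambda t: price[t] if new[t] > 0 else -1)
    lo = min(range(len(new)), key=lambda t: price[t] if new[t] < P_MAX else 1e9)
    delta = min(0.5, new[hi], P_MAX - new[lo])
    new[hi] -= delta
    new[lo] += delta
    return new

state = [2.0] * len(price)
temp = 1.0
for _ in range(2000):
    cand = neighbour(state)
    if cand != state and (cost(cand) < cost(state)
                          or random.random() < math.exp((cost(state) - cost(cand)) / temp)):
        state = cand
    temp *= 0.995                                            # geometric cooling

print("schedule:", [round(q, 2) for q in state], "cost:", round(cost(state), 3))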
Abstract:
Important research effort has been devoted to the topic of optimal planning of distribution systems. The non-linear nature of the system, the need to consider a large number of scenarios and the increasing necessity to deal with uncertainties make optimal planning in distribution systems a difficult task. Heuristic approaches have been proposed to deal with these issues, overcoming some of the inherent difficulties of classic methodologies. This paper considers several methodologies used to address planning problems of electrical power distribution networks, namely mixed-integer linear programming (MILP), ant colony algorithms (AC), genetic algorithms (GA), tabu search (TS), branch exchange (BE), simulated annealing (SA) and the Benders decomposition deterministic non-linear optimization technique (BD). The adequacy of these techniques to deal with uncertainties is discussed. The behaviour of each optimization technique is compared in terms of the obtained solution and of the methodology's performance. The paper presents results of the application of these optimization techniques to a real case of a 10-kV electrical distribution system with 201 nodes that feeds an urban area.
Abstract:
This paper proposes two meta-heuristics (Genetic Algorithm and Evolutionary Particle Swarm Optimization) for solving a 15-bid case of Ancillary Services Dispatch in an Electricity Market. A Linear Programming approach is also included for comparison purposes. A test case based on the dispatch of Regulation Down, Regulation Up, Spinning Reserve and Non-Spinning Reserve services is used to demonstrate that meta-heuristics are suitable for solving this kind of optimization problem. Faster execution times and lower computational resource requirements are the most relevant advantages of the used meta-heuristics when compared with the Linear Programming approach.
Abstract:
Electricity market players operating in a liberalized environment require access to an adequate decision support tool that allows them to consider all the business opportunities and take strategic decisions. Ancillary services represent a good negotiation opportunity that must be considered by market players. For this, the decision support tool must include ancillary services market simulation. This paper proposes two different methods (Linear Programming and Genetic Algorithm approaches) for ancillary services dispatch. The methodologies are implemented in MASCEM, a multi-agent based electricity market simulator. A test case based on California Independent System Operator (CAISO) data, concerning the dispatch of Regulation Down, Regulation Up, Spinning Reserve and Non-Spinning Reserve services, is included in this paper.
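A toy Linear Programming dispatch sketch, clearing a single reserve product among hypothetical bids with scipy.optimize.linprog (assumed available); the single product stands in for the four services and the prices and capacities are made up.

# Toy LP dispatch sketch: clear a single reserve product among hypothetical bids,
# minimizing payment while meeting the required amount (scipy is assumed available).
from scipy.optimize import linprog

bid_price = [12.0, 9.5, 15.0, 11.0]   # EUR/MW offered by each player (assumption)
bid_cap = [20.0, 15.0, 30.0, 10.0]    # maximum MW offered by each player (assumption)
requirement = 45.0                    # MW of reserve to procure

# minimize total payment subject to sum(accepted) >= requirement, 0 <= x <= cap
res = linprog(c=bid_price,
              A_ub=[[-1.0] * len(bid_price)], b_ub=[-requirement],
              bounds=list(zip([0.0] * len(bid_cap), bid_cap)),
              method="highs")
print("accepted MW per bid:", [round(x, 1) for x in res.x], "cost:", round(res.fun, 2))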
Abstract:
Introduction: Image resizing is a standard feature of Nuclear Medicine digital imaging. Upsampling is applied by manufacturers to better fit the acquired images on the display screen whenever there is a need to increase or decrease the total number of pixels. This paper intends to compare the "hqnx" and "nxSaI" magnification algorithms with two interpolation algorithms, "nearest neighbor" and "bicubic interpolation", in image upsampling operations. Material and Methods: Three distinct Nuclear Medicine images were enlarged 2 and 4 times with the different digital image resizing algorithms (nearest neighbor, bicubic interpolation, nxSaI and hqnx). To evaluate the pixel changes between the different output images, 3D whole-image plot profiles and surface plots were used in addition to the visual assessment of the 4x upsampled images. Results: In the 2x enlarged images the visual differences were not very noticeable, although bicubic interpolation clearly presented the best results. In the 4x enlarged images the differences were significant, with the bicubic interpolated images presenting the best results. Hqnx-resized images presented better quality than the 4xSaI and nearest neighbor interpolated images; however, their intense "halo effect" greatly affects the definition and boundaries of the image contents. Conclusion: The hqnx and nxSaI algorithms were designed for images with clear edges, so their use on Nuclear Medicine images is clearly inadequate. Bicubic interpolation seems, among the algorithms studied, the most suitable, and its increasingly wide application seems to confirm it as an efficient algorithm for multiple image types.
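For reference, the two classical methods compared above can be reproduced with scipy.ndimage.zoom, where order=0 gives nearest neighbour and order=3 a cubic-spline interpolation standing in for bicubic; the hqnx and nxSaI filters are not reproduced here, and the input image is a random placeholder rather than a real Nuclear Medicine acquisition.

# Sketch of the two classical resizing methods compared in the paper, using
# scipy.ndimage.zoom: order=0 is nearest neighbour and order=3 is a cubic
# spline (a stand-in for bicubic); hqnx/nxSaI are not reproduced here.
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(0)
image = rng.integers(0, 255, size=(64, 64)).astype(float)   # placeholder image

nearest_4x = zoom(image, 4, order=0)    # blocky, preserves original pixel values
bicubic_4x = zoom(image, 4, order=3)    # smooth, interpolates new pixel values

print(nearest_4x.shape, bicubic_4x.shape)              # both (256, 256)
print("max abs difference:", np.abs(nearest_4x - bicubic_4x).max())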
Abstract:
Introduction: A major focus of the data mining process, especially in machine learning research, is to automatically learn to recognize complex patterns and help take adequate decisions based strictly on the acquired data. Since imaging techniques like MPI (Myocardial Perfusion Imaging in Nuclear Cardiology) can represent a large part of the daily workflow and generate gigabytes of data, computerized analysis of data could offer advantages over human analysis: shorter time, homogeneity and consistency, automatic recording of analysis results, relatively low cost, etc. Objectives: The aim of this study is to evaluate the efficacy of this methodology in the evaluation of MPI Stress studies and in the decision process concerning whether or not to continue the evaluation of each patient. The objective pursued was to automatically classify a patient test into one of three groups: "Positive", "Negative" and "Indeterminate". "Positive" tests would proceed directly to the Rest part of the exam, "Negative" tests would be directly exempted from continuation, and only the "Indeterminate" group would require the clinician's analysis, thus saving clinicians' effort, increasing workflow fluidity at the technologists' level and probably sparing time to patients. Methods: The WEKA v3.6.2 open source software was used to perform a comparative analysis of three WEKA algorithms ("OneR", "J48" and "Naïve Bayes") in a retrospective study on the "SPECT Heart Dataset", available at the University of California, Irvine, Machine Learning Repository, using the corresponding clinical results, signed by nuclear cardiologist experts, as reference. For evaluation purposes, criteria such as "Precision", "Incorrectly Classified Instances" and "Receiver Operating Characteristic (ROC) Areas" were considered. Results: The interpretation of the data suggests that the Naïve Bayes algorithm has the best performance among the three selected algorithms. Conclusions: It is believed, and apparently supported by the findings, that machine learning algorithms could significantly assist, at an intermediate level, in the analysis of scintigraphic data obtained in MPI, namely after the Stress acquisition, eventually increasing the efficiency of the entire system and potentially easing the roles of both Technologists and Nuclear Cardiologists. In the ongoing continuation of this study, it is planned to use more patient information and to significantly increase the population under study, in order to improve the system's accuracy.
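A sketch of the classification and evaluation step with scikit-learn standing in for WEKA: BernoulliNB plays the role of the Naïve Bayes classifier on binary SPECT-like features, with 10-fold cross-validation of accuracy and ROC area. The data below are random placeholders, not the UCI SPECT Heart Dataset.

# Sketch of the classification/evaluation step with scikit-learn standing in for
# WEKA: BernoulliNB plays the role of Naive Bayes on binary SPECT-like features.
# The data below are random placeholders, not the UCI SPECT Heart Dataset.
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(267, 22))          # 22 binary partial-diagnosis features
y = rng.integers(0, 2, size=267)                # 1 = abnormal study, 0 = normal

clf = BernoulliNB()
accuracy = cross_val_score(clf, X, y, cv=10)                     # 10-fold cross-validation
roc_auc = cross_val_score(clf, X, y, cv=10, scoring="roc_auc")   # ROC area criterion
print(f"accuracy {accuracy.mean():.2f}  ROC AUC {roc_auc.mean():.2f}")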
Abstract:
When defining a product-flow layout, or production line, it is necessary to select the best combinations of tasks to be performed at each workstation so that the work is executed in a feasible sequence and approximately equal amounts of time are required at each workstation. This process is called production line balancing. Workstations and equipment can be combined in many different ways; hence, balancing production lines implies distributing sequential activities among workstations so as to achieve a high utilization of labour and equipment and to minimize idle time. Line balancing problems are typically complex to handle, due to the high number of possible combinations. Methods used to solve these problems include trial and error, heuristic methods, computational methods that evaluate different options until a good solution is found, and optimization methods. The objective of this work was the development of a computational tool to perform production line balancing using genetic algorithms. An application was developed that implements two genetic algorithms, a first one that obtains solutions for the problem and a second one that optimizes those solutions, associated with a graphical interface in C# that allows entering the problem and visualizing the results. Feasible results were obtained, showing advantages over heuristic methods, since more than one solution can be obtained. Moreover, for complex problems the use of the developed application becomes more practical. However, this application allows at most six precedences per operation and results with at most nine workstations.
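A minimal sketch of the kind of fitness evaluation a line-balancing GA relies on: a precedence-feasible task sequence is greedily decoded into workstations bounded by the cycle time and scored by the number of stations plus idle time. Task times, the cycle time and the weighting are illustrative assumptions, not data from the work.

# Sketch of a line-balancing fitness evaluation: decode a precedence-feasible
# task sequence into workstations limited by the cycle time, then score the
# assignment by number of stations plus total idle time. Data are illustrative.
task_time = {"A": 4, "B": 3, "C": 5, "D": 2, "E": 4}
cycle_time = 8

def decode(sequence):
    """Greedily pack the sequence of tasks into stations of at most cycle_time."""
    stations, current, load = [], [], 0
    for t in sequence:
        if load + task_time[t] > cycle_time:
            stations.append(current)
            current, load = [], 0
        current.append(t)
        load += task_time[t]
    stations.append(current)
    return stations

def fitness(sequence):
    stations = decode(sequence)
    idle = sum(cycle_time - sum(task_time[t] for t in s) for s in stations)
    return len(stations) * 100 + idle      # fewer stations first, then less idle time

seq = ["A", "B", "C", "D", "E"]            # assumed to respect task precedences
print(decode(seq), fitness(seq))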
Abstract:
Screening of topologies developed by hierarchical heuristic procedures can be carried out by comparing their optimal performance. In this work we explore mono-objective process optimization using two algorithms, simulated annealing and tabu search, and four different objective functions: two of the net present value type, one of them including environmental costs, and two of the global potential impact type. The hydrodealkylation of toluene to produce benzene was used as a case study, considering five topologies of different complexity, mainly obtained by including or excluding liquid recycling and heat integration. The performance of the algorithms together with the objective functions was observed, analyzed and discussed from various perspectives: average deviation of results for each algorithm, capacity to produce a high-purity product, screening of topologies, robustness of the objective functions in the screening of topologies, trade-offs between economic and environmental objective functions, and variability of the optimum solutions.
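A compact tabu search sketch over a binary vector of topology decisions (for example, recycle on/off and heat integration on/off); the objective below is a placeholder, not the net present value or potential impact models used in the work.

# Compact tabu-search sketch over a binary vector of topology decisions
# (e.g., recycle on/off, heat integration on/off). The objective is a
# placeholder, not the net-present-value or impact models used in the paper.
import random

def objective(x):                          # placeholder economic score to maximize
    weights = [5, -2, 7, 3, -4, 6]
    interaction = -3 * x[0] * x[4]         # a made-up trade-off between two decisions
    return sum(w * xi for w, xi in zip(weights, x)) + interaction

current = [random.randint(0, 1) for _ in range(6)]
best, best_val = current[:], objective(current)
tabu, TENURE = {}, 4                       # tabu list: flipped position -> expiry iteration

for it in range(100):
    # evaluate all single-bit flips that are not tabu (or that beat the best so far)
    moves = []
    for i in range(len(current)):
        cand = current[:]
        cand[i] ^= 1
        val = objective(cand)
        if tabu.get(i, -1) < it or val > best_val:      # aspiration criterion
            moves.append((val, i, cand))
    if not moves:
        continue
    val, i, cand = max(moves)
    current = cand
    tabu[i] = it + TENURE                  # forbid re-flipping this decision for a while
    if val > best_val:
        best, best_val = cand[:], val

print("best topology decisions:", best, "objective:", best_val)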