Abstract:
Nowadays, Heating, Ventilation and Air Conditioning (HVAC) equipment plays a role of great importance in the design, development and maintenance of any building, however small. Hence the pressing need to rationalize energy consumption by optimizing it. The high reliability desired for these systems increasingly forces us to find ways of making their maintenance more efficient, so every failure that could harm the good performance of these installations must be prevented proactively. It therefore becomes necessary to detect such faults/anomalies, and it is essential to anticipate these events by predicting their occurrence within a predefined time horizon, allowing action to be taken as early as possible. It is in this domain that this dissertation seeks solutions so that the maintenance of this equipment can take place proactively and as effectively as possible. The structuring idea is to intervene while the problem is still at an incipient stage, automatically altering the behaviour of the monitored equipment by means of intelligent fault-diagnosis agents. In the case under study, the operation of an Air Handling Unit (AHU) is automatically adapted to the detected deviations/anomalies, with a full shutdown of the system promoted only as a last resort. The applied architecture is based on artificial intelligence techniques, namely multi-agent systems. The algorithm used and tested was built in Labview®, using an intelligent control toolkit for Labview®. The proposed system is validated through a simulator that reproduces the real operating conditions of an AHU.
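To make the adaptive idea concrete, here is a minimal sketch in Python (all names and thresholds hypothetical; the dissertation's implementation is in Labview®) of a diagnostic rule that first adapts the AHU's operation and stops it only as a last resort:

```python
# Minimal sketch of a diagnostic agent that adapts an Air Handling Unit (AHU)
# instead of stopping it outright; all names and thresholds are hypothetical.

def diagnose(supply_temp, setpoint, fan_ok):
    """Return an action for the AHU given simple health indicators."""
    deviation = abs(supply_temp - setpoint)
    if not fan_ok:
        return "stop"       # last resort: hard fault
    if deviation > 5.0:
        return "derate"     # large drift: run in a safe degraded mode
    if deviation > 2.0:
        return "retune"     # small drift: adjust the control setpoint
    return "normal"

# Example: a drifting temperature reading triggers adaptation, not a shutdown.
print(diagnose(supply_temp=24.5, setpoint=21.0, fan_ok=True))  # -> "derate"
```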
Abstract:
This paper addresses the unit commitment problem, considering not only the economic perspective but also the environmental one. We propose a bi-objective approach to handle the conflicting profit and emission objectives. Numerical results based on the standard IEEE 30-bus test system illustrate the effectiveness of the proposed approach.
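The abstract does not detail the bi-objective method; as an illustration only, the following toy sketch traces a profit-emission trade-off with a weighted-sum scalarization over hypothetical three-unit commitment data:

```python
from itertools import product

# Toy data (hypothetical): per-unit profit and emissions if committed.
profit    = [120.0, 90.0, 60.0]   # monetary units per period
emissions = [80.0, 30.0, 10.0]    # tons of CO2 per period

# Weighted-sum scalarization: sweeping the trade-off weight traces
# (an approximation of) the profit-emission Pareto front.
for w in (0.0, 0.25, 0.5, 0.75, 1.0):
    best = max(
        product([0, 1], repeat=3),  # all on/off commitments of 3 units
        key=lambda u: w * sum(p * x for p, x in zip(profit, u))
                      - (1 - w) * sum(e * x for e, x in zip(emissions, u)),
    )
    print(f"w={w:.2f} -> commit {best}")
```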
Abstract:
In this paper we present results on the optimization of device architectures for colour and imaging applications, using a device with a TCO/pinpi'n/TCO configuration. The effect of the applied voltage on the colour selectivity is discussed. Results show that the spectral response curves exhibit rather good separation between the red, green and blue basic colours. By combining the information obtained under positive and negative applied bias, a colour image is acquired without colour filters or pixel architecture. A low-level image processing algorithm is used for the colour image reconstruction.
Abstract:
Master's degree in Radiation Applied to Health Technologies. Specialization area: Digital Imaging with X-Rays.
Abstract:
5th European Congress on Computational Methods in Applied Sciences and Engineering (ECCOMAS 2008); 8th World Congress on Computational Mechanics (WCCM8).
Abstract:
In this paper, a stochastic programming approach is proposed for trading wind energy in a market environment under uncertainty. Uncertainty in energy market prices is the main cause of the high volatility of the profits achieved by power producers. The volatile and intermittent nature of wind energy represents another source of uncertainty. Hence, each uncertain parameter is modeled by scenarios, where each scenario represents a plausible realization of the uncertain parameters with an associated occurrence probability. An appropriate risk measure is also considered. The proposed approach is applied to a realistic case study based on a wind farm in Portugal. Finally, conclusions are duly drawn.
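As an illustration of the scenario representation described above, here is a minimal sketch with hypothetical numbers; the paper's exact risk measure is not stated here, so Conditional Value-at-Risk (CVaR), a common choice, stands in for it:

```python
# Each scenario pairs a plausible (price, wind output) realization with its
# occurrence probability. All numbers are hypothetical.
scenarios = [
    # (probability, price in EUR/MWh, wind energy in MWh)
    (0.2, 55.0, 120.0),
    (0.5, 48.0, 150.0),
    (0.3, 30.0,  60.0),
]

profits = [price * wind for _, price, wind in scenarios]
expected = sum(p * pi for (p, _, _), pi in zip(scenarios, profits))

# CVaR at level alpha: probability-weighted mean of the worst (1-alpha) tail.
alpha = 0.95
tail = sorted(zip(profits, (p for p, _, _ in scenarios)))  # worst profits first
mass, cvar = 0.0, 0.0
for pi, p in tail:
    take = min(p, (1 - alpha) - mass)
    if take <= 0:
        break
    cvar += take * pi
    mass += take
cvar /= (1 - alpha)

print(f"expected profit = {expected:.0f}, CVaR({alpha}) = {cvar:.0f}")
```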
Abstract:
Fluorescence confocal microscopy (FCM) is now one of the most important tools in biomedical research. In fact, it makes it possible to accurately study the dynamic processes occurring inside the cell and its nucleus by following the motion of fluorescent molecules over time. Due to the small amount of acquired radiation and the huge optical and electronic amplification, FCM images are usually corrupted by a severe type of Poisson noise. This noise may be even more damaging when very low intensity incident radiation is used to avoid phototoxicity. In this paper, a Bayesian algorithm is proposed to remove the intensity-dependent Poisson noise corrupting FCM image sequences. The observations are organized in a 3-D tensor where each plane is one of the images of a cell nucleus acquired over time using the fluorescence loss in photobleaching (FLIP) technique. The method removes the noise by simultaneously taking into account spatial and temporal correlations. This is accomplished by using an anisotropic 3-D filter that may be separately tuned in the space and time dimensions. Tests using synthetic and real data are presented to illustrate the application of the algorithm, together with a comparison against several state-of-the-art algorithms.
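The following sketch illustrates only the "separately tuned in space and in time" filtering idea on a synthetic Poisson-noisy stack, using a plain anisotropic Gaussian filter rather than the paper's Bayesian algorithm:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# A 3-D stack (time, rows, cols) smoothed with different strengths per axis.
rng = np.random.default_rng(0)
clean = np.ones((10, 64, 64)) * 20.0        # hypothetical FLIP sequence
noisy = rng.poisson(clean).astype(float)    # Poisson (photon-count) noise

# sigma = (time, row, col): weaker temporal than spatial smoothing.
denoised = gaussian_filter(noisy, sigma=(0.5, 1.5, 1.5))

print(f"noise std before: {noisy.std():.2f}, after: {denoised.std():.2f}")
```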
Abstract:
In practical applications of optimization it is common to have several conflicting objective functions to optimize. Frequently, these functions are subject to noise or can be of black-box type, preventing the use of derivative-based techniques. We propose a novel multiobjective derivative-free methodology, calling it direct multisearch (DMS), which does not aggregate any of the objective functions. Our framework is inspired by the search/poll paradigm of direct-search methods of directional type and uses the concept of Pareto dominance to maintain a list of nondominated points (from which the new iterates or poll centers are chosen). The aim of our method is to generate as many points in the Pareto front as possible from the polling procedure itself, while keeping the whole framework general enough to accommodate other disseminating strategies, in particular, when using the (here also) optional search step. DMS generalizes to multiobjective optimization (MOO) all direct-search methods of directional type. We prove under the common assumptions used in direct search for single objective optimization that at least one limit point of the sequence of iterates generated by DMS lies in (a stationary form of) the Pareto front. However, extensive computational experience has shown that our methodology has an impressive capability of generating the whole Pareto front, even without using a search step. Two by-products of this paper are (i) the development of a collection of test problems for MOO and (ii) the extension of performance and data profiles to MOO, allowing a comparison of several solvers on a large set of test problems, in terms of their efficiency and robustness to determine Pareto fronts.
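A minimal sketch of the Pareto-dominance bookkeeping at the heart of DMS (poll directions, step-size control, and convergence safeguards omitted):

```python
# Maintain a list of nondominated points, updated as poll points are evaluated.

def dominates(fa, fb):
    """fa dominates fb if it is no worse everywhere and better somewhere."""
    return all(a <= b for a, b in zip(fa, fb)) and any(a < b for a, b in zip(fa, fb))

def update_front(front, point, fvals):
    """Insert (point, fvals) into the nondominated list, pruning dominated ones."""
    if any(dominates(f, fvals) for _, f in front):
        return front        # the new point is dominated: discard it
    return [(x, f) for x, f in front if not dominates(fvals, f)] + [(point, fvals)]

# Example with two objectives f(x) = (x^2, (x - 2)^2):
front = []
for x in (0.0, 0.5, 1.0, 1.5, 2.0, 3.0):
    front = update_front(front, x, (x**2, (x - 2) ** 2))
print([x for x, _ in front])   # 3.0 is dominated; the rest lie on the front
```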
Abstract:
Long-term contractual decisions are the basis of efficient risk management. However, such decisions have to be supported by a robust price forecast methodology, and this paper reports a different approach to long-term price forecasting that tries to answer that need. Making use of regression models, the proposed methodology has as its main objective to find the maximum and minimum Market Clearing Price (MCP) for a specific programming period, with a desired confidence level α. Due to the problem complexity, the Particle Swarm Optimization (PSO) metaheuristic was used to find the best regression parameters, and the results were compared with those obtained using a Genetic Algorithm (GA). To validate these models, results from realistic data are presented and discussed in detail.
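A generic PSO sketch in the spirit of the approach above, fitting linear regression parameters to a synthetic price series; the paper's exact confidence-level formulation is not reproduced, and all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(50.0)
price = 40.0 + 0.3 * t + rng.normal(0, 2.0, t.size)   # synthetic MCP series

def loss(theta):
    """Mean squared error of the linear model price = a + b*t."""
    a, b = theta
    return np.mean((price - (a + b * t)) ** 2)

n, dim = 30, 2
x = rng.uniform(-1, 1, (n, dim)) * 50     # particle positions
v = np.zeros((n, dim))                    # velocities
pbest = x.copy()
pbest_f = np.array([loss(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = x + v
    f = np.array([loss(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print(f"fitted intercept/slope: {gbest.round(2)}")   # close to (40, 0.3)
```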
Abstract:
A novel hybrid approach, combining wavelet transform, particle swarm optimization, and adaptive-network-based fuzzy inference system, is proposed in this paper for short-term electricity prices forecasting in a competitive market. Results from a case study based on the electricity market of mainland Spain are presented. A thorough comparison is carried out, taking into account the results of previous publications. Finally, conclusions are duly drawn.
Abstract:
This paper presents a methodology that aims to increase the probability of delivering power to any load point of the electrical distribution system by identifying new investments in distribution components. The methodology is based on statistical failure and repair data of the distribution power system components and uses fuzzy-probabilistic modelling of the system component outage parameters, whose fuzzy membership functions are obtained from statistical records. A mixed-integer non-linear optimization technique is developed to identify investments in distribution network components that increase the availability level for any customer in the distribution system at minimum cost for the system operator. To illustrate the application of the proposed methodology, the paper includes a case study that considers a real distribution network.
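As an illustration of how fuzzy membership functions might be built from statistical records, the sketch below constructs a triangular fuzzy number from hypothetical repair-time data; the paper's exact construction may differ:

```python
# One common construction: a triangular fuzzy number spanning the observed
# minimum, most frequent value, and maximum of the outage records.

def triangular(a, b, c):
    """Membership function of the triangular fuzzy number (a, b, c)."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

# Hypothetical repair-time records (hours) for one component type:
records = [2.0, 3.0, 3.0, 3.5, 4.0, 6.0]
mode = max(set(records), key=records.count)
mu = triangular(min(records), mode, max(records))

for x in (2.0, 3.0, 4.5, 6.0):
    print(f"mu({x}) = {mu(x):.2f}")
```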
Abstract:
This paper presents an algorithm to efficiently generate the state-space of systems specified using the IOPT Petri-net modeling formalism. IOPT nets are a non-autonomous Petri-net class, based on Place-Transition nets with an extended set of features designed to allow the rapid prototyping and synthesis of system controllers through an existing hardware-software co-design framework. To obtain coherent and deterministic operation, IOPT nets use a maximal-step execution semantics where, in a single execution step, all enabled transitions fire simultaneously. This fact increases the resulting state-space complexity and can cause an arc "explosion" effect: real-world applications with several million states can reach a number of arcs an order of magnitude higher, leading to the need for high-performance state-space generator algorithms. The proposed algorithm applies a compilation approach: it reads a PNML file containing one IOPT model and automatically generates an optimized C program that calculates the corresponding state-space.
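A minimal sketch of maximal-step state-space generation on a toy Place-Transition net; conflict resolution between transitions competing for tokens, IOPT input/output events, and the compiled-C optimization are all omitted:

```python
from collections import deque

# Each transition: (consume, produce) as {place: tokens} maps.
transitions = {
    "t1": ({"p0": 1}, {"p1": 1}),
    "t2": ({"p1": 1}, {"p2": 1}),
}

def maximal_step(marking):
    """Fire all enabled transitions at once; return the successor marking."""
    enabled = [t for t, (pre, _) in transitions.items()
               if all(marking.get(p, 0) >= n for p, n in pre.items())]
    if not enabled:
        return None
    m = dict(marking)
    for t in enabled:
        pre, post = transitions[t]
        for p, n in pre.items():
            m[p] -= n
        for p, n in post.items():
            m[p] = m.get(p, 0) + n
    return m

# Breadth-first generation over markings, keyed on a hashable representation.
initial = {"p0": 2, "p1": 0, "p2": 0}
seen, queue = {tuple(sorted(initial.items()))}, deque([initial])
while queue:
    m = queue.popleft()
    succ = maximal_step(m)
    if succ and tuple(sorted(succ.items())) not in seen:
        seen.add(tuple(sorted(succ.items())))
        queue.append(succ)
print(f"{len(seen)} states reached")
```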
Abstract:
This paper addresses the problem of energy resources management using modern metaheuristic approaches, namely Particle Swarm Optimization (PSO), New Particle Swarm Optimization (NPSO) and Evolutionary Particle Swarm Optimization (EPSO). The problem addressed in this research paper is intended for use by aggregators operating in a smart grid context, dealing with Distributed Generation (DG) and with gridable vehicles intelligently managed on a multi-period basis according to their users' profiles and requirements. The aggregator can also purchase additional energy from external suppliers. The paper includes a case study considering a 30 kV distribution network with one substation, 180 buses and 90 load points. The distribution network in the case study features intense penetration of DG, including 116 units from several technologies, and one external supplier. A scenario of 6000 EVs on the given network is simulated over 24 periods, corresponding to one day. The results of applying the PSO approaches to this case study are discussed in detail in the paper.
Abstract:
The large penetration of intermittent resources, such as solar and wind generation, calls for storage systems in order to improve power system operation. Electric Vehicles (EVs) with gridable capability (V2G) can operate as a means of storing energy. This paper proposes an algorithm, to be included in a SCADA (Supervisory Control and Data Acquisition) system, that performs intelligent management of three types of consumers (domestic, commercial and industrial), including the joint management of loads and of the charging/discharging of EV batteries. The proposed methodology has been implemented in a SCADA system developed by the authors of this paper, the SCADA House Intelligent Management (SHIM). Any event in the system, such as a Demand Response (DR) event, triggers an optimization algorithm that performs the optimal scheduling of energy resources (including loads and EVs), taking into account the priorities of each load as defined by the installation users. A case study considering a specific consumer with several loads and EVs is presented in this paper.
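As an illustration of the priority-driven response to a DR event, the sketch below keeps loads in user-defined priority order under a hypothetical power cap; SHIM's actual optimization model is not reproduced, and all load data are hypothetical:

```python
loads = [
    # (name, power in kW, priority: lower number = more important)
    ("fridge",      0.3, 1),
    ("ev_charger",  3.5, 3),
    ("heating",     2.0, 2),
    ("dishwasher",  1.2, 4),
]

def respond_to_dr(loads, cap_kw):
    """Keep loads in priority order while staying under the power cap."""
    kept, used = [], 0.0
    for name, kw, _ in sorted(loads, key=lambda l: l[2]):
        if used + kw <= cap_kw:
            kept.append(name)
            used += kw
    return kept, used

kept, used = respond_to_dr(loads, cap_kw=4.0)
print(f"kept {kept} using {used:.1f} kW")   # EV charging is deferred
```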
Abstract:
The performance of metaheuristics is highly dependent on their parameters, which need to be tuned. Parameter tuning may allow greater flexibility and robustness, but requires careful initialization, and the process of deciding which parameter setting should be used is not obvious. The values of the parameters depend mainly on the problem, the instance to be solved, the search time available for solving the problem, and the required solution quality. This paper proposes a learning module for the autonomous parameterization of metaheuristics, integrated into a Multi-Agent System for the resolution of dynamic scheduling problems. The proposed learning module is inspired by the Autonomic Computing Self-Optimization concept, which states that systems must continuously and proactively improve their performance. The learning is implemented with Case-based Reasoning, which uses similar past cases to solve new ones, under the assumption that similar cases have similar solutions. After a literature review of the topics involved, both the AutoDynAgents system and the Self-Optimization module are described. Finally, a computational study is presented in which the proposed module is evaluated, the results obtained are compared with previous ones, conclusions are drawn, and future work is outlined. It is expected that this proposal can be a significant contribution to the self-parameterization of metaheuristics and to the resolution of scheduling problems in dynamic environments.
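A minimal sketch of the Case-based Reasoning retrieval step: find the stored case whose problem features are closest to the new instance and reuse its parameters. Features, distances, and parameter names are all hypothetical:

```python
import math

case_base = [
    # ((num_jobs, num_machines), parameters that worked well before)
    ((20,   5), {"population": 50,  "mutation": 0.10}),
    ((100, 10), {"population": 200, "mutation": 0.05}),
    ((500, 20), {"population": 400, "mutation": 0.02}),
]

def retrieve(features):
    """Nearest neighbour in feature space: 'similar cases, similar solutions'."""
    return min(case_base, key=lambda c: math.dist(c[0], features))[1]

new_instance = (90, 12)
print(retrieve(new_instance))   # reuses the 100-job case's parameters
```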