133 results for dynamic optimization
Abstract:
The main purpose of this paper is to propose a Multi-Agent, Autonomic, and Bio-Inspired framework with self-managing capabilities to solve complex scheduling problems through cooperative negotiation. Scheduling resolution requires the intervention of highly skilled human problem-solvers. This is a very hard and challenging domain because current systems are becoming more and more complex, distributed, interconnected, and subject to rapid change. A natural evolution from current computing toward Autonomic Computing (AC) is to provide systems with self-managing abilities that require minimal human interference.
Abstract:
This paper describes a Multi-Agent Scheduling System that assumes the existence of several Machine Agents (decision-making entities) distributed inside the Manufacturing System, which interact and cooperate with other agents in order to obtain optimal or near-optimal global performance. Agents have to manage their internal behaviors and their relationships with other agents via cooperative negotiation, in accordance with business policies defined by the user manager. Some Multi-Agent System (MAS) organizational aspects are considered. An original Cooperation Mechanism for a Teamwork-based Architecture is proposed to address dynamic scheduling using Meta-Heuristics.
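A minimal sketch of agent-based task allocation in this spirit (the bidding rule, agent names, and data shapes are illustrative assumptions, not the paper's actual cooperation mechanism):

```python
def allocate_by_bidding(task_duration, machine_loads):
    """One contract-net style negotiation round (illustrative sketch):
    each machine agent bids its estimated completion time for the task,
    and the lowest bidder wins and updates its local schedule."""
    bids = {m: load + task_duration for m, load in machine_loads.items()}
    winner = min(bids, key=bids.get)
    machine_loads[winner] += task_duration
    return winner
```

In a real MAS each agent would compute its bid privately and negotiation could span several rounds; this collapses one round into a single function for clarity.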
Abstract:
Cloud computing is increasingly being adopted in different scenarios, such as social networking, business applications, and scientific experiments. Relying on virtualization technology, the construction of these computing environments targets improvements in the infrastructure, such as power-efficiency and fulfillment of users' SLA specifications. The methodology usually applied is to pack all the virtual machines onto the appropriate physical servers. However, failure occurrences in these networked computing systems can have a substantial negative impact on system performance, deviating the system from our initial objectives. In this work, we propose adapted algorithms to dynamically map virtual machines to physical hosts, in order to improve cloud infrastructure power-efficiency with low impact on users' required performance. Our decision-making algorithms leverage proactive fault-tolerance techniques to deal with system failures, combined with virtual machine technology to share node resources in an accurate and controlled manner. The results indicate that our algorithms achieve better power-efficiency and SLA fulfillment in the face of cloud infrastructure failures.
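A hedged sketch of the kind of consolidation such mapping involves (first-fit-decreasing packing is a common power-efficiency heuristic; the paper's actual algorithms, fault-tolerance logic, and SLA model are not reproduced here):

```python
def first_fit_decreasing(vm_loads, host_capacity):
    """Consolidate VM loads onto as few hosts as possible, so that
    idle hosts can be powered down (a power-efficiency heuristic)."""
    hosts = []  # each host holds a list of placed VM loads
    for load in sorted(vm_loads, reverse=True):
        for host in hosts:
            if sum(host) + load <= host_capacity:
                host.append(load)
                break
        else:  # no existing host fits: power on a new one
            hosts.append([load])
    return hosts
```

A fault-aware variant would additionally avoid hosts predicted to fail, trading some consolidation for reliability.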
Abstract:
In real optimization problems, the analytical expression of the objective function is usually not known, nor are its derivatives, or they are complex. In these cases it becomes essential to use optimization methods where the calculation of the derivatives, or the verification of their existence, is not necessary: Direct Search Methods or Derivative-free Methods are one solution. When the problem has constraints, penalty functions are often used. Unfortunately, the choice of the penalty parameters is frequently very difficult, because most strategies for choosing them are heuristic. Filter methods appeared as an alternative to penalty functions. A filter algorithm introduces a function that aggregates the constraint violations and constructs a biobjective problem, in which a step is accepted if it reduces either the objective function or the constraint violation. This implies that filter methods are less parameter-dependent than penalty functions. In this work, we present a new direct search method for general constrained optimization, based on simplex methods, that combines the features of the simplex method and filter methods. This method does not compute or approximate any derivatives, penalty constants, or Lagrange multipliers. The basic idea of the simplex filter algorithm is to construct an initial simplex and use it to drive the search. We illustrate the behavior of our algorithm through some examples. The proposed methods were implemented in Java.
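The filter acceptance rule described above can be sketched as follows (a minimal illustration with assumed helper names; constraints are taken as functions g(x) <= 0):

```python
def constraint_violation(x, constraints):
    """Aggregate infeasibility h(x): sum of violations of g(x) <= 0."""
    return sum(max(g(x), 0.0) for g in constraints)

def filter_accepts(filter_entries, f_new, h_new):
    """A trial point (f_new, h_new) is acceptable if no stored pair
    (f, h) dominates it, i.e. every entry is beaten either in the
    objective f or in the constraint violation h."""
    return all(f_new < f or h_new < h for f, h in filter_entries)
```

Accepted points are then typically added to the filter, and entries they dominate are removed; that bookkeeping is omitted here.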
Abstract:
The filter method is a technique for solving nonlinear programming problems. The filter algorithm has two phases in each iteration: the first reduces a measure of infeasibility, while the second reduces the objective function value. In real optimization problems, the objective function is often not differentiable or its derivatives are unknown. In these cases it becomes essential to use optimization methods where the calculation of the derivatives, or the verification of their existence, is not necessary: direct search methods or derivative-free methods are examples of such techniques. In this work we present a new direct search method, based on simplex methods, for general constrained optimization that combines the features of simplex and filter methods. This method neither computes nor approximates derivatives, penalty constants, or Lagrange multipliers.
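A minimal sketch of one simplex-driven search step (a Nelder-Mead-style reflection; the method described above combines such derivative-free steps with a filter acceptance test, which is omitted here):

```python
def reflect_worst(simplex, f):
    """Replace the worst vertex of the simplex by its reflection
    through the centroid of the remaining vertices, if the
    reflected point has a better objective value."""
    simplex = sorted(simplex, key=f)
    worst = simplex[-1]
    n = len(worst)
    centroid = [sum(v[i] for v in simplex[:-1]) / (len(simplex) - 1)
                for i in range(n)]
    reflected = tuple(2.0 * centroid[i] - worst[i] for i in range(n))
    if f(reflected) < f(worst):
        simplex[-1] = reflected
    return simplex
```

Iterating this step (together with expansion, contraction, and shrink moves in a full simplex method) drives the search without any derivative information.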
Abstract:
In this work we solve Mathematical Programs with Complementarity Constraints using the hyperbolic smoothing strategy. Under this approach, the complementarity condition is relaxed through the use of the hyperbolic smoothing function, which involves a positive parameter that can be decreased to zero. An iterative algorithm was implemented in the MATLAB language, and a set of AMPL problems from the MacMPEC database was tested.
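One common hyperbolic smoothing of the complementarity condition, sketched below (the paper's exact smoothing function is not specified here and may differ): the condition x·y = 0 with x, y >= 0 is equivalent to min(x, y) = 0, and min can be smoothed so that the exact minimum is recovered as the positive parameter mu decreases to zero.

```python
import math

def smoothed_min(a, b, mu):
    """Hyperbolic smoothing of min(a, b); tends to min(a, b) as
    mu -> 0, so smoothed_min(x, y, mu) = 0 relaxes x * y = 0."""
    return 0.5 * (a + b - math.sqrt((a - b) ** 2 + 4.0 * mu ** 2))
```

The smoothed function is differentiable everywhere for mu > 0, which is what lets standard nonlinear programming solvers handle the relaxed problem.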
Abstract:
Master's in Informatics Engineering
Abstract:
Master's in Informatics Engineering
Abstract:
We have developed a new method of single-drop microextraction (SDME) for the preconcentration of organochlorine pesticides (OCPs) from complex matrices. It is based on the use of a silicone ring at the tip of the syringe. A 5 μL drop of n-hexane applied to an aqueous extract containing the OCPs was found to be adequate to preconcentrate the OCPs prior to analysis by GC in combination with tandem mass spectrometry. Fourteen OCPs were determined using this technique in combination with programmable temperature vaporization, which is shown to have many advantages over traditional split/splitless injection. The effects of organic solvent type, exposure time, agitation, and organic drop volume were optimized. Relative recoveries ranged from 59 to 117 %, and repeatabilities of <15 % (coefficient of variation) were achieved. The limits of detection ranged from 0.002 to 0.150 μg kg−1. The method was applied to the preconcentration of OCPs in fresh strawberry, strawberry jam, and soil.
Abstract:
A QuEChERS method has been developed for the determination of 14 organochlorine pesticides in 14 soils from different Portuguese regions with a wide range of compositions. The extracts were analysed by GC-ECD (gas chromatography with electron-capture detection) and confirmed by GC-MS/MS (where MS/MS is tandem mass spectrometry). The organic matter content is a key factor in the process efficiency. An optimization was carried out according to the soils' organic carbon level, divided into two groups: HS (organic carbon >2.3%) and LS (organic carbon <2.3%). The method was validated through linearity, recovery, precision, and accuracy studies. Quantification was carried out using a matrix-matched calibration to minimize matrix effects. Acceptable recoveries were obtained (70–120%) with a relative standard deviation of ≤16% for the three levels of contamination. In HS soils, the limits of detection ranged from 3.42 to 23.77 μg kg−1 and the limits of quantification from 11.41 to 79.23 μg kg−1. For LS soils, the limits of detection ranged from 6.11 to 14.78 μg kg−1 and the limits of quantification from 20.37 to 49.27 μg kg−1. Of the 14 collected soil samples, only one showed a residue of dieldrin (45.36 μg kg−1) above the limit of quantification. This methodology combines the advantages of QuEChERS, GC-ECD detection, and GC-MS/MS confirmation, producing a very rapid, sensitive, and reliable procedure that can be applied in routine analytical laboratories.
Abstract:
Scientific evidence has shown an association between organochlorine compound (OCC) exposure and human health hazards. OCC detection in human adipose samples must therefore be considered a public health priority. This study evaluated the efficacy of various solid-phase extraction (SPE) and cleanup methods for OCC determination in human adipose tissue. Octadecylsilyl endcapped (C18-E), benzenesulfonic acid modified silica cation exchanger (SA), poly(styrene-divinylbenzene) (EN), and EN/RP18 SPE sorbents were evaluated. The relative sample cleanup provided by these SPE columns was evaluated using gas chromatography with electron capture detection (GC-ECD). The C18-E columns with strong homogenization were found to provide the most effective cleanup, removing the greatest amount of interfering substances while simultaneously ensuring good analyte recoveries higher than 70%. Recoveries >70% with standard deviations (SD) <15% were obtained for all compounds under the selected conditions. Method detection limits were in the 0.003–0.009 mg/kg range. The positive samples were confirmed by gas chromatography coupled with tandem mass spectrometry (GC-MS/MS). The OCC found at the highest percentages in real samples were HCB, o,p′-DDT, and methoxychlor, detected in 80 and 95% of the samples analyzed. Copyright © 2012 John Wiley & Sons, Ltd.
Abstract:
A multiclass analysis method was optimized in order to analyze pesticide traces by gas chromatography with ion-trap and tandem mass spectrometry (GC-MS/MS). The influence of several analytical parameters on the pesticide signal response was explored. Five ion trap mass spectrometry (IT-MS) operating parameters, including isolation time (IT), excitation voltage (EV), excitation time (ET), maximum excitation energy or "q" value (q), and isolation mass window (IMW), were numerically tested in order to maximize the instrument's analytical signal response. For this, multiple linear regression was used in the data analysis to evaluate the influence of the five parameters on the analytical response of the ion trap mass spectrometer and to predict its response. The assessment of the five parameters based on the regression equations substantially increased the sensitivity of IT-MS/MS in MS/MS mode. The results show that, for most of the pesticides, these parameters have a strong influence on both the signal response and the detection limit. Using the optimized method, a multiclass pesticide analysis was performed for 46 pesticides in a strawberry matrix. Levels higher than the limit established for strawberries by the European Union were found in some samples.
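The regression step can be sketched with ordinary least squares (a sketch assuming a simple first-order model in the five parameters; the model terms actually used in the paper are not specified here):

```python
import numpy as np

def fit_response_model(settings, responses):
    """Least-squares fit of response ~ b0 + b1*IT + b2*EV + b3*ET
    + b4*q + b5*IMW, where `settings` is an (n, 5) array of operating
    parameter values and `responses` the measured signal."""
    X = np.column_stack([np.ones(len(settings)), settings])
    coeffs, *_ = np.linalg.lstsq(X, responses, rcond=None)
    return coeffs

def predict_response(coeffs, setting):
    """Predict the analytical signal for one parameter setting."""
    return coeffs[0] + float(np.dot(coeffs[1:], setting))
```

Maximizing the fitted model over the feasible parameter ranges then suggests settings that should improve sensitivity, to be verified experimentally.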
Abstract:
This work was carried out within the Dissertation/Internship course of the Energy Optimization in the Chemical Industry branch of the Master's in Chemical Engineering at the Instituto Superior de Engenharia do Porto, and was developed at the company GreenWatt. The main objective was to carry out an energy audit and an indoor air quality (IAQ) audit of a physiatry clinic, in order to prepare the tools needed for Energy and IAQ Certification within the framework of the Energy Certification System. In the IAQ audit, physical parameters (temperature, relative humidity, and respirable PM10 particles), chemical parameters (CO2, CO, O3, VOCs, HCOH, and radon), and microbiological parameters (bacteria, fungi, and legionella) were analyzed. In the energy audit, the energy vectors used in the building, namely natural gas and electricity, were characterized. For this characterization, all available information on the fuels used, the installed lighting, other energy-consuming equipment, and usage profiles was surveyed. The building's electricity consumption was also measured with energy analyzers. Based on the data from the energy audit and the annual bills, the dynamic simulation of the building was validated; this simulation is the basis for calculating the building's nominal IEE (energy efficiency index). The IAQ audit results showed non-compliant values for volatile organic compounds, fungi, and bacteria. The energy audit concluded that the main energy consumption is the natural gas used by the existing boilers, representing about 81% of total energy consumption and reproducing the results obtained by disaggregating the energy bills.
Regarding electricity, it was concluded that the water pumps and the electrical equipment are the largest consumers of this vector, with 53% and 23% of total electricity consumption, respectively. After the dynamic simulation, based on the building surveys and the energy audit carried out, a picture of the building's energy performance was obtained; a nominal IEE of 40.54 kgoe/m²·year was calculated, which rates the building as Energy Class E. In nominal terms, this building emits 76.39 tonnes of CO2 per year.
Abstract:
The objective of this work was the optimization of an industrial air-conditioning system consisting of four adiabatic air-handling units, which show limitations in cooling capacity, control, and efficiency. Initially, a literature review and the collection of information on the textile industry and the evaporative cooling process were required. In a later phase, the various data essential to understanding the building/air-conditioning-system pairing were collected and analyzed, in order to derive possible optimization hypotheses. The information- and data-gathering phase also included analyses of indoor air quality (IAQ). The optimizations selected as feasible to implement were studied and analyzed with the help of the dynamic energy simulation software DesignBuilder, and the results obtained were processed and adjusted to allow a friendly, easily interpreted assessment of their advantages and disadvantages; they were also the subject of an economic feasibility study. The proposed optimization reflects a substantial improvement of the indoor temperature and relative humidity conditions while still yielding a reduction in energy consumption of around 23% (490,337 kWh), that is, annual savings of €42,169 in operating costs, with a payback period of 1 year and 11 months.
Abstract:
Over the last two decades, research and development of legged locomotion robots has grown steadily. Legged systems present major advantages when compared with ‘traditional’ vehicles, because they allow locomotion in terrain inaccessible to vehicles with wheels and tracks. However, the robustness of legged robots, and especially their energy consumption, among other aspects, still lag behind mechanisms that use wheels and tracks. Therefore, in the present state of development, there are several aspects that need to be improved and optimized. With these ideas in mind, this paper reviews the literature on different methods adopted for the optimization of the structure and locomotion gaits of walking robots. Among the strategies often used for these tasks are the mimicking of biological animals, the use of evolutionary schemes to find optimal parameters and structures, the adoption of sound mechanical design rules, and the optimization of power-based indexes.