961 results for Optimization methods
Abstract:
Meshless methods are valued for producing accurate solutions without requiring a mesh, thus avoiding the mesh-related problems encountered in other numerical methods such as finite elements. However, node placement remains an open question, especially in strong-form collocation meshless methods. The number of nodes strongly influences the size of the collocation matrix and can therefore lead to ill-conditioned systems. To optimize both node position and node count, a direct multisearch technique for multiobjective optimization is applied to the node distribution of the global collocation method with radial basis functions. The approach is applied to the bending of isotropic simply supported plates. Starting from a uniformly distributed grid, the results show that the method reduces the number of nodes in the grid without compromising the accuracy of the solution. (C) 2013 Elsevier Ltd. All rights reserved.
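To illustrate the conditioning issue this abstract refers to, the following minimal sketch sets up a global RBF collocation for a 1D model problem and reports how the condition number of the collocation matrix grows with the number of nodes. It uses a multiquadric basis with a fixed shape parameter and is not the plate-bending or direct multisearch code of the paper.

```python
# Illustrative sketch only: 1D global RBF (multiquadric) collocation for
# u''(x) = f(x), u(0) = u(1) = 0, showing how the node count drives the
# conditioning of the collocation matrix.
import numpy as np

def mq(r, c):
    """Multiquadric RBF."""
    return np.sqrt(r**2 + c**2)

def mq_xx(r, c):
    """Second derivative of the 1D multiquadric with respect to x."""
    return c**2 / (r**2 + c**2)**1.5

def solve_poisson_rbf(nodes, c):
    """Global RBF collocation for u'' = f on [0, 1] with u(0) = u(1) = 0."""
    r = np.abs(nodes[:, None] - nodes[None, :])
    A = mq_xx(r, c)                           # PDE operator applied at every node...
    A[0, :] = mq(r[0, :], c)                  # ...except the boundary rows, which impose u = 0
    A[-1, :] = mq(r[-1, :], c)
    b = -np.pi**2 * np.sin(np.pi * nodes)     # f for the exact solution u = sin(pi x)
    b[0] = b[-1] = 0.0
    alpha = np.linalg.solve(A, b)
    u = mq(r, c) @ alpha                      # reconstruct u at the nodes
    return u, np.linalg.cond(A)

for n in (11, 21, 41, 81):
    x = np.linspace(0.0, 1.0, n)
    u, cond = solve_poisson_rbf(x, c=0.5)
    err = np.max(np.abs(u - np.sin(np.pi * x)))
    print(f"n={n:3d}  cond(A)={cond:.2e}  max error={err:.2e}")
```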
Abstract:
Master's degree in Electrical Engineering – Electrical Power Systems
Abstract:
Glass fibre-reinforced plastics (GFRP), nowadays commonly used in the construction, transportation and automobile sectors, have been considered inherently difficult to recycle due both to the cross-linked nature of thermoset resins, which cannot be remolded, and to the complex composition of the composite itself, which includes glass fibres, matrix and different types of inorganic fillers. Presently, most GFRP waste is landfilled, leading to negative environmental impacts and additional costs. With increasing awareness of environmental matters and the consequent desire to save resources, recycling would convert an expensive waste disposal problem into a profitable reusable material. There are several methods to recycle GFRP thermoset materials: (a) incineration, with partial energy recovery from the heat generated during combustion of the organic fraction; (b) thermal and/or chemical recycling, such as solvolysis, pyrolysis and similar thermal decomposition processes, with recovery of the glass fibres; and (c) mechanical recycling or size reduction, in which the material is milled to a specific grain size that makes it suitable as reinforcement in new formulations. This last method has important advantages over the previous ones: there is no atmospheric pollution from gas emissions, much simpler equipment is required compared with the ovens needed for thermal recycling, and no chemical solvents, with their associated environmental impacts, are used. In this study, the effect of incorporating recycled GFRP waste, obtained by milling, on the mechanical behavior of polyester polymer mortars was assessed. For this purpose, different contents of recycled GFRP waste, with distinct size gradings, were incorporated into polyester polymer mortars as replacements for sand aggregates and filler. The effect of treating the GFRP waste with a silane coupling agent was also assessed. The design of experiments and the data treatment were accomplished by means of a factorial design and analysis of variance (ANOVA). The use of a factorial experimental design, instead of the one-factor-at-a-time method, efficiently allows the evaluation of the effects and possible interactions of the different material factors involved. The experimental results were promising regarding the recyclability of GFRP waste as polymer mortar aggregate, without significant loss of mechanical properties compared with non-modified polymer mortars.
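As a rough illustration of the factorial-design-plus-ANOVA workflow mentioned above, the sketch below fits a two-factor model with interaction and prints the ANOVA table. The factor names, levels and strength values are hypothetical placeholders, not the study's data, and pandas/statsmodels are assumed to be available.

```python
# Illustrative two-factor factorial analysis with ANOVA, in the spirit of the
# design-of-experiments approach described above. All data are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

data = pd.DataFrame({
    "waste_content": ["4%", "4%", "8%", "8%"] * 2,     # GFRP waste content (hypothetical levels)
    "size_grading":  ["fine", "coarse"] * 4,            # waste particle grading (hypothetical levels)
    "flex_strength": [21.3, 20.8, 19.9, 20.5, 21.1, 20.6, 20.2, 20.1],  # MPa (hypothetical)
})

# Full factorial model with interaction: strength ~ content + grading + content:grading
model = smf.ols("flex_strength ~ C(waste_content) * C(size_grading)", data=data).fit()
print(anova_lm(model, typ=2))   # ANOVA table: main effects and interaction
```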
Abstract:
The elastic behavior of demand, used jointly with other available resources such as distributed generation (DG), can play a crucial role in the success of smart grids. The intensive use of Distributed Energy Resources (DER) and the technical and contractual constraints result in large-scale nonlinear optimization problems that require computational intelligence methods to be solved. This paper proposes a Particle Swarm Optimization (PSO) based methodology to support the minimization of the operation costs of a virtual power player that manages the resources in a distribution network as well as the network itself. The resources include the DER available in the considered time period and the energy that can be bought from external suppliers; network constraints are taken into account. The proposed approach uses Gaussian mutation of the strategic parameters and contextual self-parameterization of the maximum and minimum particle velocities. The case study considers a real 937-bus distribution network with 20310 consumers and 548 distributed generators. The obtained solutions are compared with a deterministic approach, with PSO without mutation and with Evolutionary PSO, the latter two also using self-parameterization.
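A minimal sketch of the kind of PSO variant described here, with Gaussian mutation of per-particle strategic parameters, is given below. The quadratic toy objective stands in for the virtual power player's cost model, a simple fixed velocity limit replaces the paper's contextual self-parameterization, and constraint handling is omitted.

```python
# Minimal PSO sketch with Gaussian mutation of the strategic parameters
# (inertia and acceleration weights). Toy objective, no constraints.
import numpy as np

rng = np.random.default_rng(0)
cost = lambda x: np.sum(x**2, axis=-1)          # placeholder objective

n_particles, dim, iters = 30, 10, 200
x = rng.uniform(-5, 5, (n_particles, dim))
v = np.zeros_like(x)
w = np.full(n_particles, 0.7)                   # per-particle strategic parameters
c1 = np.full(n_particles, 1.5)
c2 = np.full(n_particles, 1.5)

pbest, pbest_cost = x.copy(), cost(x)
gbest = pbest[np.argmin(pbest_cost)]

for _ in range(iters):
    # Gaussian mutation of strategic parameters (self-adapting behaviour)
    w  = np.clip(w  + 0.05 * rng.standard_normal(n_particles), 0.4, 0.9)
    c1 = np.clip(c1 + 0.05 * rng.standard_normal(n_particles), 0.5, 2.5)
    c2 = np.clip(c2 + 0.05 * rng.standard_normal(n_particles), 0.5, 2.5)

    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w[:, None] * v + c1[:, None] * r1 * (pbest - x) + c2[:, None] * r2 * (gbest - x)
    v = np.clip(v, -2.0, 2.0)                   # fixed velocity limit (self-parameterized in the paper)
    x = x + v

    c = cost(x)
    improved = c < pbest_cost
    pbest[improved], pbest_cost[improved] = x[improved], c[improved]
    gbest = pbest[np.argmin(pbest_cost)]

print("best cost:", pbest_cost.min())
```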
Abstract:
Trihalomethanes (THMs) are widely reported and studied as disinfection by-products (DBPs). The THMs most commonly detected are chloroform (TCM), bromodichloromethane (BDCM), chlorodibromomethane (CDBM), and bromoform (TBM). Several studies regarding the determination of THMs in swimming pool water and air samples have been published. This paper reviews the most recent work in this field, with a special focus on water and air sampling, sample preparation and analytical determination methods. An experimental study was carried out to optimize the headspace solid-phase microextraction (HS-SPME) conditions for TCM, BDCM, CDBM and TBM in water samples using a 2³ factorial design. An extraction temperature of 45 °C for 25 min and a desorption time of 5 min were found to be the best conditions. Analysis was performed by gas chromatography with an electron capture detector (GC-ECD). The method was successfully applied to a set of 27 swimming pool water samples collected in the Oporto area (Portugal). TCM was the only THM detected, at levels between 4.5 and 406.5 μg L⁻¹. Four of the samples exceeded the guideline value for total THMs in swimming pool water (100 μg L⁻¹) indicated by the Portuguese Health Authority.
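The sketch below lays out a 2³ full factorial design for the three HS-SPME factors named in the abstract and computes simple main effects. The factor levels follow the text, but the response values are hypothetical placeholders rather than measured peak areas.

```python
# 2^3 full factorial layout and main-effect calculation for the HS-SPME
# screening described above. Responses are hypothetical placeholders.
import numpy as np
from itertools import product

levels = {
    "extraction_temp_C":   (35, 45),
    "extraction_time_min": (15, 25),
    "desorption_time_min": (5, 10),
}
design = list(product((-1, 1), repeat=3))                      # coded 2^3 design matrix
response = np.array([52, 61, 58, 70, 55, 63, 57, 68], float)   # hypothetical peak areas

for j, name in enumerate(levels):
    signs = np.array([run[j] for run in design], float)
    effect = np.mean(response[signs == 1]) - np.mean(response[signs == -1])
    print(f"main effect of {name}: {effect:+.1f}")
```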
Abstract:
Locating and identifying points as global minimizers is, in general, a hard and time-consuming task. The difficulty increases when the derivatives of the functions defining the problem cannot be used. In this work, we propose a new class of methods suited for global derivative-free constrained optimization. Using direct search of directional type, the algorithm alternates between a search step, where potentially good regions are located, and a poll step, where the previously located promising regions are explored. This is done by launching several instances of directional direct searches, one in each region of interest. Unlike a simple multistart strategy, direct searches merge when they come sufficiently close. The goal is to end with as many direct searches as there are local minimizers, which would make it easy to identify the global minimum. We describe the algorithmic structure considered, present the corresponding convergence analysis and report numerical results, showing that the proposed method is competitive with commonly used global derivative-free optimization solvers.
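The following is a deliberately simplified sketch of the multistart-with-merging idea described above: several directional direct-search instances are polled in parallel and nearby instances are merged. It uses only coordinate poll directions, omits the search step and constraint handling, and is not the authors' algorithm.

```python
# Simplified multistart directional direct search with merging of nearby instances.
import numpy as np

def f(x):                                            # toy multimodal objective
    return np.sum(x**2) + 2.0 * np.sum(np.cos(3.0 * x))

def poll(center, step):
    """One poll step along +/- coordinate directions; return the best point found."""
    best_x, best_f = center, f(center)
    for d in np.vstack((np.eye(len(center)), -np.eye(len(center)))):
        trial = center + step * d
        if f(trial) < best_f:
            best_x, best_f = trial, f(trial)
    return best_x, best_f

rng = np.random.default_rng(1)
searches = [{"x": rng.uniform(-3, 3, 2), "step": 0.5} for _ in range(8)]

for _ in range(60):
    for s in searches:
        new_x, new_f = poll(s["x"], s["step"])
        if new_f < f(s["x"]):
            s["x"] = new_x                           # successful poll: move
        else:
            s["step"] *= 0.5                         # unsuccessful poll: shrink the step
    # merge searches whose centers are sufficiently close (keep the better one)
    merged = []
    for s in sorted(searches, key=lambda s: f(s["x"])):
        if all(np.linalg.norm(s["x"] - m["x"]) > 0.3 for m in merged):
            merged.append(s)
    searches = merged

for s in searches:
    print("minimizer candidate:", np.round(s["x"], 3), " f =", round(f(s["x"]), 4))
```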
Abstract:
The bending of simply supported composite plates is analyzed using a direct collocation meshless numerical method. To optimize the node distribution, the Direct MultiSearch (DMS) multi-objective optimization method is applied. In addition, the method optimizes the shape parameter of the radial basis functions. The optimization algorithm was able to find good solutions for a large variety of node distributions.
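To illustrate the multi-objective bookkeeping behind such an approach, the sketch below scores a few hypothetical candidate node distributions on two conflicting objectives (node count and solution error) and keeps the nondominated set. It is not the DMS solver itself, and the candidate values are invented.

```python
# Nondominated (Pareto) filter over hypothetical (n_nodes, solution_error) candidates.
import numpy as np

candidates = np.array([
    [121, 1.2e-3],
    [ 81, 1.3e-3],
    [ 81, 2.0e-3],
    [ 49, 4.5e-3],
    [ 49, 3.9e-3],
    [ 25, 2.1e-2],
])

def nondominated(points):
    """Keep the points not dominated by any other (both objectives minimized)."""
    keep = []
    for i, p in enumerate(points):
        dominated = any(np.all(q <= p) and np.any(q < p)
                        for j, q in enumerate(points) if j != i)
        if not dominated:
            keep.append(i)
    return points[keep]

for n, err in nondominated(candidates):
    print(f"nodes: {int(n):4d}   error: {err:.1e}")
```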
Abstract:
The smart grid concept is a key issue in future power systems, namely at the distribution level, with profound implications for the operation and planning of these systems. Several advantages and benefits for both the technical and the economic operation of the power system and of the electricity markets are recognized. The increasing integration of demand response and distributed generation resources, most of them small-scale and distributed, leads to the need for aggregating entities such as Virtual Power Players. Operation business models become more complex in the context of smart grid operation. Computational intelligence methods can provide suitable solutions to the resource scheduling problem within the required time constraints. This paper proposes a methodology for the joint dispatch of demand response and distributed generation to provide energy and reserve by a virtual power player that operates a distribution network. The optimal schedule minimizes the operation costs and is obtained using a particle swarm optimization approach, which is compared with a deterministic approach used as the reference methodology. The proposed method is applied to a 33-bus distribution network with 32 medium-voltage consumers and 66 distributed generation units.
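As an illustration of the kind of joint energy-and-reserve dispatch referred to above, the sketch below solves a tiny linear formulation in the spirit of a deterministic reference approach. The unit costs, capacities, demand and reserve requirement are hypothetical, and network constraints and demand response shifting are omitted.

```python
# Tiny joint energy-and-reserve dispatch as a linear program. All data hypothetical.
from scipy.optimize import linprog

# Three DG units: energy cost and reserve cost (EUR/MWh), capacity (MW)
energy_cost  = [40.0, 55.0, 70.0]
reserve_cost = [ 5.0,  4.0,  6.0]
capacity     = [10.0,  8.0,  6.0]
demand, reserve_req = 15.0, 3.0

# Decision vector: [p1, p2, p3, r1, r2, r3]
c = energy_cost + reserve_cost
A_eq = [[1, 1, 1, 0, 0, 0]]                 # energy balance: sum(p) = demand
b_eq = [demand]
A_ub = [[0, 0, 0, -1, -1, -1],              # sum(r) >= reserve requirement
        [1, 0, 0, 1, 0, 0],                 # p_i + r_i <= capacity_i
        [0, 1, 0, 0, 1, 0],
        [0, 0, 1, 0, 0, 1]]
b_ub = [-reserve_req] + capacity
bounds = [(0, None)] * 6

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("dispatch (MW):", [round(v, 2) for v in res.x], " cost:", round(res.fun, 2))
```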
Abstract:
This paper presents a modified Particle Swarm Optimization (PSO) methodology to solve the problem of energy resources management with high penetration of distributed generation and Electric Vehicles (EVs) with gridable capability (V2G). The objective of the day-ahead scheduling problem in this work is to minimize operation costs, namely energy costs, regarding the management of these resources in the smart grid context. The modifications applied to the PSO aim to improve its suitability for this problem. The proposed Application Specific Modified Particle Swarm Optimization (ASMPSO) includes an intelligent mechanism to adjust velocity limits during the search process, as well as self-parameterization of the PSO parameters, making it more user-independent. It presents better robustness, convergence characteristics and constraint handling than the tested PSO variants. This enables its use for real-world large-scale problems in much shorter times than deterministic methods, providing system operators with adequate decision support and achieving efficient resource scheduling even when a significant number of alternative scenarios must be considered. The paper includes two realistic case studies with different penetrations of gridable vehicles (1000 and 2000). The proposed methodology is about 2600 times faster than the Mixed-Integer Non-Linear Programming (MINLP) reference technique, reducing the time required from 25 h to 36 s for the scenario with 2000 vehicles, with a difference of about one percent in the objective function value.
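A hedged sketch of one ingredient mentioned here, an adaptive velocity-limit rule, is shown below. The stagnation trigger, window and scaling factors are illustrative assumptions and do not reproduce the ASMPSO mechanism from the paper.

```python
# Illustrative adaptive velocity-limit rule: shrink the limit when the global
# best stagnates, relax it while the search is still improving.
def adjust_velocity_limit(v_max, best_history, window=10, shrink=0.8, grow=1.1,
                          v_min_abs=1e-3, v_max_abs=10.0):
    """Return the updated velocity limit given the history of global-best costs."""
    if len(best_history) < window + 1:
        return v_max
    recent_gain = best_history[-window - 1] - best_history[-1]   # cost decrease over the window
    factor = grow if recent_gain > 0 else shrink
    return min(max(v_max * factor, v_min_abs), v_max_abs)

# Example: a stagnating search leads to a tighter limit
history = [100.0] * 15
print(adjust_velocity_limit(2.0, history))   # 1.6
```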
Abstract:
Breast cancer is the most common cancer among women and a major public health problem. Worldwide, X-ray mammography is the current gold standard for medical imaging of breast cancer. However, it has some well-known limitations. The false-negative rates, up to 66% in symptomatic women, and the false-positive rates, up to 60%, are a continued source of concern and debate. These drawbacks prompt the development of other imaging techniques for breast cancer detection, among which is Digital Breast Tomosynthesis (DBT). DBT is a 3D radiographic technique that reduces the obscuring effect of tissue overlap and appears to address both the false-negative and the false-positive rates. The 3D images in DBT are only obtained through image reconstruction methods, which play an important role in the clinical setting since the reconstruction process must be both accurate and fast. This dissertation deals with accelerating iterative reconstruction algorithms through parallel computing on Graphics Processing Units (GPUs), using the Compute Unified Device Architecture (CUDA), to make the 3D reconstruction faster. Iterative algorithms have been shown to produce the highest-quality DBT images and have the potential to reduce patient dose in DBT scans, but their computational cost currently precludes clinical use. A method of integrating CUDA in Interactive Data Language (IDL) is proposed in order to accelerate the DBT image reconstructions; this approach had never been attempted before for DBT. In this work the system matrix calculation, the most computationally expensive part of the iterative algorithms, is accelerated. A speedup of 1.6 is achieved, showing that GPUs can accelerate the IDL implementation.
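The sketch below gives a highly simplified picture of the per-element work in a system matrix: each entry approximates the intersection length of a projection ray with a voxel. The 2D parallel-ray, unit-voxel geometry is a toy stand-in for the DBT cone-beam geometry, and the Python loop merely illustrates the kind of computation that the dissertation offloads to a CUDA kernel called from IDL.

```python
# Toy system-matrix row: approximate ray-voxel intersection lengths for one ray
# crossing an n x n grid of unit voxels (2D), by dense sampling along the ray.
import numpy as np

def ray_voxel_lengths(y0, angle_deg, grid_n=8):
    """Approximate intersection length of one ray with each voxel of an n x n unit grid."""
    t = np.linspace(0.0, grid_n, 2000)                 # sample points along the ray
    theta = np.deg2rad(angle_deg)
    xs = t * np.cos(theta)
    ys = y0 + t * np.sin(theta)
    inside = (xs >= 0) & (xs < grid_n) & (ys >= 0) & (ys < grid_n)
    ix, iy = xs[inside].astype(int), ys[inside].astype(int)
    lengths = np.zeros((grid_n, grid_n))
    step = t[1] - t[0]                                 # approximate path length per sample
    np.add.at(lengths, (iy, ix), step)                 # accumulate length in each crossed voxel
    return lengths

row = ray_voxel_lengths(y0=3.5, angle_deg=10.0)
print("nonzero system-matrix entries in this row:", np.count_nonzero(row))
```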
Abstract:
The goal of this thesis is the investigation and optimization of the synthesis of potential fragrances. The work was carried out as a collaboration between the University of Applied Sciences in Merseburg and the company Miltitz Aromatics GmbH in Bitterfeld-Wolfen (Germany). Fragrance compounds can be synthesized in different ways and by various methods. In this work, methods such as phase-transfer catalysis and the Cope rearrangement were investigated and applied in order to obtain a high yield of the desired substances without by-products or side reactions. This involved studying the syntheses under different process parameters such as temperature, solvent, pressure and reaction time. The main focus was on the Cope rearrangement, which is a common method in the synthesis of new potential fragrance compounds. The substances synthesized in this work have a hepta-1,5-diene structure and can therefore easily undergo this [3,3]-sigmatropic rearrangement. The lead compound of this research was 2,5-dimethyl-2-vinyl-4-hexenenitrile (Neronil). Neronil is synthesized by alkylation of 2-methyl-3-butenenitrile with prenyl chloride under basic conditions in a phase-transfer system. In this work the yield of isolated Neronil was improved from about 35% to 46% by adjusting the reaction conditions, and the amount of side product was decreased. This hexenenitrile contains not only the aforementioned 1,5-diene structure but also a cyano group, which makes it a suitable base for the synthesis of new potential fragrance compounds. It was observed that Neronil can be converted into 2,5-dimethyl-2-vinyl-4-hexenoic acid by hydrolysis under basic conditions; after five hours the acid is obtained in a yield of 96%. The subsequent esterification with isobutanol produces 2,5-dimethyl-2-vinyl-4-hexenoic acid isobutyl ester with quantitative conversion. It was observed that Neronil and the corresponding ester can be converted into the corresponding Cope products, with conversions of 30% and 80%, respectively. When the Cope rearrangement was attempted with the acid, heating led to an unexpected decarboxylated product. Reaction progress and product structures were verified by thorough analyses using GC-MS, 1H-NMR and 13C-NMR.
Abstract:
Phosphorus (P) is becoming a scarce element due to the decreasing availability of primary sources. Therefore, recovering P from secondary sources, e.g. waste streams, has become extremely important. Sewage sludge ash (SSA) is a reliable secondary source of P. However, the direct use of SSAs as fertilizer is heavily restricted by legislation due to the presence of inorganic contaminants, and the P present in SSAs is not in a plant-available form. The electrodialytic (ED) process is one of the methods under development to recover P and simultaneously remove heavy metals. The present work aimed to optimize P recovery using a two-compartment electrodialytic cell. The research was divided into three independent phases. In the first phase, ED experiments were carried out on two SSAs from different seasons, varying the duration of the ED process (2, 4, 6 and 9 days). During the ED treatment the SSA was suspended in distilled water in the anolyte, which was separated from the catholyte by a cation exchange membrane. From both ashes 90% of the P was successfully extracted after 6 days of treatment. Regarding heavy metal removal, one of the SSAs performed better than the other. It was therefore possible to conclude that SSAs from different seasons can be subjected to the ED process under the same parameters. In the second phase, the two SSAs were exposed to humidity and air prior to ED in order to carbonate them. Although the carbonation was not successful, ED experiments were carried out varying the duration of the treatment (2 and 6 days) and the period of air exposure (7, 14 and 30 days). After 6 days of treatment and 30 days of air exposure, 90% of the phosphorus was successfully extracted from both ashes, and no differences were identified between carbonated and non-carbonated SSAs. Thus, SSAs that have been exposed to air and humidity, e.g. SSAs stored for 30 days in an open deposit, can be treated under the same parameters as SSAs collected directly from the incineration process. In the third phase, ED experiments were carried out for 6 days varying the stirring time (0, 1, 2 and 4 h/day) in order to investigate whether energy can be saved in the stirring process. After 6 days of treatment with 4 h/day of stirring, 80% and 90% of the P was successfully extracted from SSA-A and SSA-B, respectively, values very similar to those obtained with 24 h/day of stirring.
Abstract:
Polysaccharides are gaining increasing attention as potential environmentally friendly and sustainable building blocks in many fields of the (bio)chemical industry. The microbial production of polysaccharides is envisioned as a promising path, since higher biomass growth rates are possible and therefore higher productivities may be achieved compared to vegetable or animal polysaccharide sources. This Ph.D. thesis focuses on the modeling and optimization of the production of a particular microbial polysaccharide: the extracellular polysaccharides (EPS) produced by the bacterial strain Enterobacter A47. Enterobacter A47 was found to be a metabolically versatile organism in terms of its adaptability to complex media, notably capable of achieving high growth rates in media containing glycerol byproduct from the biodiesel industry. However, the industrial implementation of this production process is still hampered by a largely unoptimized process. The kinetic rates of the bioreactor operation are heavily dependent on operational parameters such as temperature, pH, stirring and aeration rate, and the increase of culture broth viscosity is a common feature of this culture with a major impact on overall performance. These facts complicate the mathematical modeling of the process, limiting the ability to understand, control and optimize productivity. To tackle this difficulty, data-driven mathematical methodologies such as Artificial Neural Networks can be employed to incorporate additional process data and complement the known mathematical description of the fermentation kinetics. In this Ph.D. thesis, we adopted such a hybrid modeling framework, which enabled the incorporation of temperature, pH and viscosity effects on the fermentation kinetics in order to improve the dynamic modeling and optimization of the process. A model-based optimization method was implemented to design optimal bioreactor control strategies aimed at maximizing EPS productivity. It is also critical to understand EPS synthesis at the level of the bacterial metabolism, since the production of EPS is a tightly regulated process, and methods of pathway analysis provide a means to unravel the fundamental pathways and their controls in bioprocesses. In this thesis, a novel methodology called Principal Elementary Mode Analysis (PEMA) was developed and implemented, enabling the identification of the cellular fluxes that are activated under different conditions of temperature and pH. It is shown that differences in these two parameters affect the chemical composition of the EPS and are therefore critical for the regulation of product synthesis. In future studies, the knowledge provided by PEMA could foster the development of metabolically meaningful control strategies that target the EPS sugar content and other product quality parameters.
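A conceptual sketch of a hybrid (grey-box) model of the kind described above is given below: a simple mass-balance ODE whose specific rate is supplied by a small data-driven block taking temperature, pH and viscosity as inputs. The network weights, normalizations and the balance itself are placeholders, not the thesis model.

```python
# Conceptual hybrid model: mechanistic mass balance dX/dt = mu * X, with the
# kinetic rate mu supplied by a small (untrained, toy) neural block.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(5, 3)) * 0.3, np.zeros(5)
W2, b2 = rng.normal(size=(1, 5)) * 0.3, np.zeros(1)

def mu_ann(temp, pH, viscosity):
    """Data-driven specific rate mu(T, pH, viscosity); in practice trained on process data."""
    z = np.array([temp / 40.0, pH / 8.0, viscosity / 10.0])   # crude normalization
    h = np.tanh(W1 @ z + b1)
    return float(np.exp((W2 @ h + b2).item()))                # keep the rate positive

def simulate(X0=0.1, t_end=24.0, dt=0.1, temp=30.0, pH=7.0, viscosity=2.0):
    """Euler integration of the mass balance with the hybrid rate."""
    X, t = X0, 0.0
    while t < t_end:
        X += dt * mu_ann(temp, pH, viscosity) * X
        t += dt
    return X

print("biomass after 24 h (arbitrary units):", round(simulate(), 3))
```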
Abstract:
PhD Thesis in Bioengineering
Abstract:
PhD thesis in Bioengineering