30 results for Well-Posed Optimization Problems
at the Universidade Federal do Rio Grande do Norte (UFRN)
Abstract:
The topology optimization problem characterizes and determines the optimal distribution of material within a domain. In other words, after the boundary conditions are defined on a pre-established domain, the problem is how to distribute the material so as to solve the minimization problem. The objective of this work is to propose a competitive formulation for determining optimal structural topologies in 3D problems, one able to provide high-resolution layouts. The procedure combines the Galerkin Finite Element Method with an optimization method, seeking the best material distribution over the fixed design domain. The layout topology optimization method is based on the material approach proposed by Bendsoe & Kikuchi (1988) and considers a homogenized constitutive equation that depends only on the relative density of the material. The finite element used in the approach is a four-node tetrahedron with a selective integration scheme, which interpolates not only the components of the displacement field but also the relative density field. The proposed procedure consists of solving a sequence of layout optimization problems, applied to compliance minimization problems and to mass minimization problems under local stress constraints. The microstructure used in this procedure was SIMP (Solid Isotropic Material with Penalty). The approach considerably reduces the computational cost, proving to be efficient and robust. The results provided a well-defined structural layout, with a sharp material distribution and well-defined boundaries. The layout quality was proportional to the mean element size, and a considerable reduction in the number of design variables was observed thanks to the tetrahedral element.
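As a rough illustration of the material (SIMP) approach this abstract describes, the sketch below shows the penalized density-to-stiffness interpolation together with a classic optimality-criteria density update of the kind commonly paired with SIMP for compliance minimization. It is a minimal sketch under assumed names and parameters, not the thesis implementation.

```python
import numpy as np

def simp_modulus(rho, e0=1.0, p=3.0, e_min=1e-9):
    """SIMP interpolation: stiffness scales with density**p, which
    penalizes intermediate densities toward a 0/1 (void/solid) layout."""
    return e_min + rho**p * (e0 - e_min)

def oc_update(rho, dc, dv, vol_frac, move=0.2):
    """Classic optimality-criteria update for compliance minimization.
    dc: compliance sensitivities (<= 0); dv: volume sensitivities (> 0).
    A bisection on the Lagrange multiplier enforces the volume fraction."""
    l1, l2 = 0.0, 1e9
    while (l2 - l1) / (l1 + l2 + 1e-12) > 1e-4:
        lmid = 0.5 * (l1 + l2)
        # Multiplicative update clipped to the move limit and to [0, 1].
        rho_new = np.clip(rho * np.sqrt(-dc / (dv * lmid)),
                          np.maximum(rho - move, 0.0),
                          np.minimum(rho + move, 1.0))
        if rho_new.mean() > vol_frac:
            l1 = lmid
        else:
            l2 = lmid
    return rho_new
```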
Abstract:
Multi-objective combinatorial optimization problems have peculiar characteristics that require optimization methods to adapt to this context. Since many of these problems are NP-hard, the use of metaheuristics has grown in recent years. In particular, many different approaches using Ant Colony Optimization (ACO) have been proposed. In this work, an ACO is proposed for the Multi-objective Shortest Path Problem and is compared with two other optimizers found in the literature. A set of 18 instances from two distinct types of graphs is used, together with a specific multiobjective performance assessment methodology. Initial experiments showed that the proposed algorithm is able to generate better approximation sets than the other optimizers for all instances. In the second part of this work, an experimental analysis is conducted using several recently published multiobjective ACO proposals and the same instances used in the first part. Results show that each type of instance benefits from a particular algorithmic approach. A new metaphor for the development of multiobjective ACOs is then proposed. Usually, ants share the same characteristics, and only a few works address multi-species approaches. This work proposes an approach in which ants of multiple species compete for food resources. Each species has its own search strategy, and different species do not access each other's pheromone information. As in nature, successful ant populations are allowed to grow, whereas unsuccessful ones shrink. The approach introduced here is shown to inherit the behavior of strategies that are successful for different types of problems. Results of computational experiments are reported and show that the proposed approach is able to produce significantly better approximation sets than the other methods.
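The grow/shrink rule for the competing species could look like the sketch below, which resizes each species' population in proportion to a success score (for instance, how many nondominated solutions its ants contributed in the last iteration). The function name, the scoring, and the floor on population size are illustrative assumptions, not the published mechanism.

```python
def reallocate_ants(species_scores, total_ants, min_ants=2):
    """Grow successful species and shrink unsuccessful ones while keeping
    the overall number of ants roughly constant.
    species_scores: dict mapping species name -> success score."""
    total = sum(species_scores.values())
    if total == 0:  # no species scored this round: keep an even split
        return {s: total_ants // len(species_scores) for s in species_scores}
    return {s: max(min_ants, round(total_ants * v / total))
            for s, v in species_scores.items()}
```

Each species would then run its own colony, with its own pheromone information and search strategy, for the next iteration.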
Abstract:
The key aspect limiting resolution in crosswell traveltime tomography is illumination, a well-known but not so well exemplified result. Resolution in the 2D case is revisited using a simple geometric approach based on the angular aperture distribution and the properties of the Radon Transform. It is shown analytically that if an interface has dips contained within the angular aperture limits at all points, it is correctly imaged in the tomogram. This result is confirmed by inversion of synthetic data, which also shows that isolated artifacts may be present when the dip is near the illumination limit. In the inverse sense, however, if an interface is interpretable from a tomogram, even an approximately horizontal interface, there is no guarantee that it corresponds to a true interface. Similarly, if a body is present in the interwell region it is diffusely imaged in the tomogram, but its interfaces, particularly vertical edges, cannot be resolved, and additional artifacts may be present. Again, in the inverse sense, there is no guarantee that an isolated anomaly corresponds to a true anomalous body, because the anomaly may also be an artifact. Together, these results state the dilemma of ill-posed inverse problems: the absence of any guarantee of correspondence to the true distribution. The limitations due to illumination may not be overcome by the use of mathematical constraints. It is shown that crosswell tomograms derived with sparsity constraints, using both Discrete Cosine Transform and Daubechies bases, basically reproduce the same features seen in tomograms obtained with the classic smoothness constraint. Interpretation must always take into account the a priori information and the particular limitations due to illumination. An example of interpreting a real data survey in this context is also presented.
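As a toy illustration of the aperture argument, the straight-ray sketch below computes the range of ray direction angles available for a crosswell source/receiver layout and checks whether a given interface dip falls inside it. The geometry, names, and the straight-ray simplification are assumptions for illustration only.

```python
import numpy as np

def angular_aperture(sources, receivers):
    """Range of straight-ray direction angles (degrees from horizontal)
    for sources in one well and receivers in the other."""
    angles = [np.degrees(np.arctan2(r[1] - s[1], r[0] - s[0]))
              for s in sources for r in receivers]
    return min(angles), max(angles)

def dip_illuminated(dip_deg, aperture):
    """The analytic result: a dip is correctly imaged only when it lies
    within the angular aperture limits at every point."""
    lo, hi = aperture
    return lo <= dip_deg <= hi

# Wells 200 m apart, 10 source and 10 receiver depths (illustrative).
srcs = [(0.0, z) for z in np.linspace(0, 500, 10)]
recs = [(200.0, z) for z in np.linspace(0, 500, 10)]
print(angular_aperture(srcs, recs))  # roughly (-68.2, 68.2) degrees here
```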
Abstract:
Oil well cementing is a critical step in well drilling, because problems during slurry pumping and incomplete filling of the annular space between the metal casing and the formation can lead to slurry loss. Therefore, the slurry used in the primary cementing of an oil well must be properly formulated so that these problems are avoided during pumping. Drilling a well in a weak rock formation requires even more care, because the hydrostatic pressure exerted during cementing must be limited so that rock collapse does not occur. Lightweight slurries are employed to cement wells in weak or unconsolidated formations. Accordingly, this study used slurries with sodium silicate and nanosilica at concentrations of 0.1, 0.4, 0.7, and 1.0 gpc. The nanosilica slurries showed the higher rheological parameters at concentrations of up to 0.7 gpc, while at 1.0 gpc the sodium silicate slurry showed the highest values, remaining above the limits for field application, mainly in wells with a low fracture gradient, since a significant increase in viscosity may raise the pumping pressure in secondary cementing operations. Furthermore, there was no decrease in strength with increasing additive concentration. It is therefore possible to use these additives to formulate lightweight slurries.
Abstract:
The cementing operation is an extremely important task in the drilling and completion phases of oil wells, with a great impact on well productivity. Several problems can occur with the cement during primary cementing, as well as throughout the productive period. Corrective operations are frequent, but they are expensive and demand production time. Besides the direct cost, losses from the interruption of oil and gas production until a corrective operation is implemented must also be taken into account. The purpose of this work is the development of an alternative cement paste composed of Portland cement and porcelainized stoneware residue produced by the ceramic industry, in order to achieve characteristics such as low permeability, high tenacity, and high mechanical strength, capable of supporting various operations such as production or oil well recovery. Four different compositions of hydrated paste were evaluated: a reference paste and three additional ones with ceramic residue at concentrations of the order of 10%, 20%, and 30% relative to the cement mass. High strength and low permeability were found at high residue concentrations, and the pozzolanic reactivity of the residue with respect to Portland cement was confirmed, characterized through X-ray diffraction and thermogravimetric assays. A decrease in calcium hydroxide content was evident, as it was replaced by the formation of new hydrated products as ceramic residue was added.
Abstract:
The cementing operation is one of the most important stages of the oil well drilling process, and its main function is to form a hydraulic seal between the various permeable zones traversed by the well. However, several problems may occur with the cement sheath, either during primary cementing or during the well's production period. Low cement strength can cause fissures in the cement sheath and compromise the mechanical integrity of the annulus, resulting in contamination of groundwater and producing zones. Several studies show that biomass ashes, in particular those generated by the sugarcane industry, have pozzolanic activity and can be added to the composition of cementing slurries in diverse applications, providing improvements in mechanical properties, yield, and cement durability. Given the importance of a low-cost additive that improves mechanical properties in well cementing operations, this study aimed to promote the use of sugarcane bagasse ash as a pozzolanic material, to evaluate its mechanisms of action on cement paste properties, and to apply it in slurry systems designed for cementing a well 800 m deep with a geothermal gradient of 1.7 °F/100 ft, in both primary cementing and squeeze operations. To this end, the ash was beneficiated by grinding, sieving, and reburning (calcination), and then characterized by X-ray fluorescence, XRD, TG/DTG, specific surface area, particle size distribution by laser diffraction, and specific mass. Moreover, the pozzolanic activity of the ash added to cement at concentrations of 0%, 20%, and 40% BWOC was evaluated by the pozzolanic activity index with lime and with Portland cement. The evaluation of pozzolanic activity by XRD, TG/DTG, and compressive strength confirmed the reactivity of the ash and indicated that adding 20% to the composition of cement slurries improves the mechanical properties of the cured slurry by 34%. The slurry properties evaluated by rheological measurements, fluid loss, free fluid, slurry sedimentation, thickening time, and sonic strength (UCA) were satisfactory and demonstrated the viability of using sugarcane ash in cement slurry compositions for well cementing.
Abstract:
Optimization techniques known as metaheuristics have achieved success in solving many problems classified as NP-hard. These methods use non-deterministic approaches that find very good solutions but do not guarantee the determination of the global optimum. Beyond the inherent difficulties related to the complexity of optimization problems, metaheuristics still face the exploration/exploitation dilemma, which consists of choosing between a greedy search and a wider exploration of the solution space. One way to guide such algorithms while searching for better solutions is to supply them with more knowledge of the problem through an intelligent agent able to recognize promising regions and to identify when the search direction should be diversified. Thus, this work proposes the use of a reinforcement learning technique, the Q-learning algorithm, as an exploration/exploitation strategy for the GRASP (Greedy Randomized Adaptive Search Procedure) and Genetic Algorithm metaheuristics. The GRASP metaheuristic uses Q-learning instead of the traditional greedy-randomized algorithm in its construction phase. This replacement aims to improve the quality of the initial solutions used in GRASP's local search phase, and it also provides the metaheuristic with an adaptive memory mechanism that allows good previous decisions to be reused and bad decisions to be avoided. In the Genetic Algorithm, the Q-learning algorithm was used to generate an initial population of high fitness and, after a given number of generations in which the population diversity rate falls below a certain limit L, it was also applied to supply one of the parents used in the genetic crossover operator. Another significant change in the hybrid genetic algorithm is the proposal of a mutually interactive cooperation process between the genetic operators and the Q-learning algorithm. In this interactive/cooperative process, the Q-learning algorithm receives an additional update to its matrix of Q-values based on the current best solution of the Genetic Algorithm. The computational experiments presented in this thesis compare the results obtained with traditional implementations of the GRASP metaheuristic and the Genetic Algorithm against those obtained with the proposed hybrid methods. Both algorithms were applied successfully to the symmetric Traveling Salesman Problem, which was modeled as a Markov decision process.
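For concreteness, a minimal sketch of the two ingredients such a construction phase needs is given below: the standard Q-learning value update and an epsilon-greedy action choice. The TSP encoding (states as cities, actions as the next city to visit), the reward, and all names are assumptions for illustration, not the thesis code.

```python
from collections import defaultdict
import random

Q = defaultdict(lambda: defaultdict(float))  # Q[state][action], e.g. city -> next city

def q_update(s, a, reward, s_next, alpha=0.1, gamma=0.9):
    """Standard Q-learning: move Q(s, a) toward the received reward plus
    the discounted best value reachable from the next state."""
    best_next = max(Q[s_next].values(), default=0.0)
    Q[s][a] += alpha * (reward + gamma * best_next - Q[s][a])

def choose_next(s, candidates, eps=0.1):
    """Epsilon-greedy choice replacing GRASP's greedy-randomized
    construction step: explore with probability eps, else exploit."""
    if random.random() < eps:
        return random.choice(candidates)
    return max(candidates, key=lambda a: Q[s][a])
```

In a TSP setting the reward could be, say, the negative edge length, so the learned Q-values bias construction toward short tours while the eps term preserves diversification.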
Abstract:
Frequency selective surfaces (FSS) are structures consisting of periodic arrays of conductive elements, called patches, which are usually very thin and printed on dielectric layers, or of apertures perforated in very thin metallic surfaces, for applications in the microwave and millimeter-wave bands. These structures are often used in aircraft, missiles, satellites, radomes, reflector antennas, high-gain antennas, and microwave ovens, for example. Their main purpose is to filter frequency bands, which may be transmitted or rejected depending on the requirements of the application. In turn, modern communication systems such as GSM (Global System for Mobile Communications), RFID (Radio Frequency Identification), Bluetooth, Wi-Fi, and WiMAX, whose services are in high demand by society, have required the development of antennas whose main features are low cost, low profile, and reduced dimensions and weight. In this context, the microstrip antenna presents itself as an excellent choice for today's communication systems, since, in addition to intrinsically meeting the requirements mentioned, planar structures are easy to manufacture and to integrate with other microwave circuit components. Consequently, the analysis and synthesis of these devices, mainly due to the wide variety of shapes, sizes, and operating frequencies of their elements, have been carried out with full-wave models such as the finite element method, the method of moments, and the finite-difference time-domain method. However, these methods are accurate at the cost of great computational effort. In this context, computational intelligence (CI) has been used successfully, as a very appropriate auxiliary tool, in the design and optimization of planar microwave structures, given the complexity of the geometry of the antennas and FSS considered. Computational intelligence is inspired by natural phenomena such as learning, perception, and decision making, using techniques such as artificial neural networks, fuzzy logic, fractal geometry, and evolutionary computation. This work studies the application of computational intelligence, using metaheuristics such as genetic algorithms and swarm intelligence, to the optimization of antennas and frequency selective surfaces. Genetic algorithms are computational search methods, based on genetics and on the theory of natural selection proposed by Darwin, used to solve complex problems, e.g., problems whose search space grows with the size of the problem. Particle swarm optimization is characterized by the use of collective intelligence and has been applied to optimization problems in many research areas. The main objective of this work is the use of computational intelligence in the analysis and synthesis of antennas and FSS. The structures considered were a ring-type planar microstrip monopole and a cross-dipole FSS. Optimization algorithms were developed, and results were obtained for the optimized geometries of the antennas and FSS considered. To validate the results, several prototypes were designed, constructed, and measured. The measured results showed excellent agreement with the simulated ones. Moreover, the results obtained in this study were compared with those simulated using commercial software, and excellent agreement was also observed.
Specifically, the efficiency of the CI techniques used was evidenced by simulated and measured results: the bandwidth of an antenna for wideband, or UWB (Ultra Wideband), operation was optimized using a genetic algorithm, and the bandwidth of a pair of frequency selective surfaces was optimized, by specifying the length of the air gap between them, using a particle swarm optimization algorithm.
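The air-gap optimization mentioned last is a one-variable continuous problem, which makes it a convenient place to sketch the classic PSO update this kind of work builds on. The fitness would come from an electromagnetic simulation of the FSS pair; every name and parameter below is an illustrative assumption.

```python
import random

def pso_1d(fitness, lo, hi, n=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Classic PSO on one variable (e.g., an air-gap length): each
    particle is pulled toward its own best and the swarm's best."""
    x = [random.uniform(lo, hi) for _ in range(n)]
    v = [0.0] * n
    pbest, pbest_f = x[:], [fitness(xi) for xi in x]
    g = max(range(n), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g], pbest_f[g]
    for _ in range(iters):
        for i in range(n):
            v[i] = (w * v[i]
                    + c1 * random.random() * (pbest[i] - x[i])
                    + c2 * random.random() * (gbest - x[i]))
            x[i] = min(max(x[i] + v[i], lo), hi)  # keep inside the bounds
            f = fitness(x[i])
            if f > pbest_f[i]:
                pbest[i], pbest_f[i] = x[i], f
                if f > gbest_f:
                    gbest, gbest_f = x[i], f
    return gbest, gbest_f
```

Here `fitness` is assumed to return the simulated bandwidth for a given gap length, so larger is better.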
Abstract:
Metaheuristic techniques are known for solving optimization problems classified as NP-complete, and they are successful in obtaining good-quality solutions. They use non-deterministic approaches to generate solutions that are close to the optimum, without any guarantee of finding the global optimum. Motivated by the difficulties in solving these problems, this work proposes the development of parallel hybrid methods using reinforcement learning and the metaheuristics GRASP and Genetic Algorithms. With these techniques, we aim to contribute to greater efficiency in obtaining good solutions. Instead of using the Q-learning reinforcement learning algorithm merely as a technique for generating the initial solutions of the metaheuristics, we use it in a cooperative and competitive approach with the Genetic Algorithm and GRASP, in a parallel implementation. In this context, it was possible to verify that the implementations in this study showed satisfactory results under both strategies, that is, cooperation and competition between the algorithms and cooperation and competition between groups. For some instances the global optimum was found; for others, these implementations came close to it. A performance analysis of the proposed approach was carried out, and it shows good results on the measurements that attest to the efficiency and speedup (the speed gain from parallel processing) of the implementations.
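The cooperation/competition loop could be organized along the lines of the skeleton below: independent search processes run in parallel, the best result of a round wins (competition), and the winner is re-seeded to everyone (cooperation). Here `worker` is a degenerate stand-in for a real GRASP, GA, or Q-learning process, and the whole structure is an assumption, not the thesis code.

```python
from multiprocessing import Pool
import random

def worker(seed_tour):
    """Stand-in for one search process (GRASP, GA, or Q-learning):
    here it just tries a random 2-opt-style move on the shared elite."""
    tour = list(seed_tour)
    i, j = sorted(random.sample(range(len(tour)), 2))
    tour[i:j + 1] = reversed(tour[i:j + 1])
    return tour

def cooperative_search(init_tour, cost, rounds=10, n_workers=4):
    elite = list(init_tour)
    with Pool(n_workers) as pool:
        for _ in range(rounds):
            candidates = pool.map(worker, [elite] * n_workers)
            best = min(candidates, key=cost)  # competition between runs
            if cost(best) < cost(elite):
                elite = best                  # cooperation: share the winner
    return elite
```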
Abstract:
In this work, Markov chains are the tool used to model and analyze the convergence of the genetic algorithm, both in its standard version and in the other versions the genetic algorithm admits. In addition, we intend to compare the performance of the standard version with a fuzzy version, in the belief that the fuzzy version gives the genetic algorithm a strong ability to find a global optimum, a characteristic of global optimization algorithms. The choice of this algorithm is due to the fact that, over the past thirty years, it has become one of the most important tools for solving optimization problems. This choice also reflects its effectiveness in finding good-quality solutions, since a good-quality solution becomes acceptable given that there may be no other algorithm able to obtain the optimal solution for many of these problems. The algorithm can be configured in several ways: it depends not only on how the problem is represented and on how some of its operators are defined, but also ranges from the standard version, in which the parameters are kept fixed, to versions with variable parameters. Therefore, to achieve good performance the algorithm needs an adequate criterion for choosing its parameters, especially the mutation rate and the crossover rate, or even the population size. It is important to note that in implementations where the parameters are kept fixed throughout the execution, modeling the algorithm by a Markov chain results in a homogeneous chain, whereas when the parameters are allowed to vary during execution, the modeling Markov chain becomes non-homogeneous. Hence, in an attempt to improve the algorithm's performance, some studies have tried to set the parameters through strategies that capture intrinsic characteristics of the problem. These characteristics are extracted from the current state of the execution, in order to identify and preserve patterns related to good-quality solutions while discarding low-quality ones. Feature extraction strategies can use either crisp or fuzzy techniques, in the latter case through a fuzzy controller. To evaluate the performance of the non-homogeneous algorithm, tests are applied comparing the standard genetic algorithm with the fuzzy version, in which the mutation rate is adjusted by a fuzzy controller. To do so, we choose optimization problems whose number of solutions grows exponentially with the number of variables.
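As an illustration of the kind of fuzzy control the comparison involves, the toy controller below maps a normalized population-diversity measure to a mutation rate with two linguistic rules. The membership functions, rule base, and bounds are assumptions, far simpler than a full Mamdani controller.

```python
def fuzzy_mutation_rate(diversity, low=0.01, high=0.25):
    """Two-rule toy controller: IF diversity is LOW THEN mutation is HIGH;
    IF diversity is HIGH THEN mutation is LOW. On a normalized input the
    memberships reduce to (1 - d) and d, and the output is a weighted
    average (a centroid-like defuzzification)."""
    d = min(max(diversity, 0.0), 1.0)  # clamp to [0, 1]
    mu_low, mu_high = 1.0 - d, d       # memberships of LOW / HIGH diversity
    return (mu_low * high + mu_high * low) / (mu_low + mu_high)
```

Because the rate now depends on the population state, the transition matrix changes from generation to generation, which is precisely what makes the modeling chain non-homogeneous.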
Abstract:
This work presents an evaluative study of the effects of using a machine learning technique on the main features of a self-organizing, multiobjective genetic algorithm (GA). A typical GA can be seen as a search technique usually applied to problems of non-polynomial complexity. Originally, these algorithms were designed as methods that seek acceptable solutions to problems where the global optimum is inaccessible or difficult to obtain. At first, GAs considered only one evaluation function and single-objective optimization. Today, however, implementations that consider several optimization objectives simultaneously (multiobjective algorithms) are common, as are those that allow many components of the algorithm to change dynamically (self-organizing algorithms). Combinations of GAs with machine learning techniques to improve aspects of their performance and usability are also common. In this work, a GA with a machine learning technique was analyzed and applied to an antenna design problem. We used a variant of the bicubic interpolation technique, called 2D Spline, as the machine learning technique to estimate the behavior of a dynamic fitness function, based on knowledge obtained from a set of laboratory experiments. This fitness function, also called the evaluation function, is responsible for determining the fitness degree of a candidate solution (individual) relative to the others in the same population. The algorithm can be applied in many areas, including telecommunications, such as the design of antennas and frequency selective surfaces. In this particular work, the presented algorithm was developed to optimize the design of a microstrip antenna, commonly used in wireless communication systems, for Ultra-Wideband (UWB) applications. The algorithm optimized two antenna geometry variables, the length (Ls) and width (Ws) of a slit in the ground plane, with respect to three objectives: radiated signal bandwidth, return loss, and central frequency deviation. These two dimensions (Ws and Ls) are used as variables in three different interpolation functions, one Spline for each optimization objective, to compose a multiobjective, aggregate fitness function. The final result proposed by the algorithm was compared with the simulation program result and with the measured result of a physical antenna prototype built in the laboratory. In the present study, the algorithm was analyzed with respect to its degree of success on four important characteristics of a self-organizing multiobjective GA: performance, flexibility, scalability, and accuracy. At the end of the study, an increase in execution time compared with a common GA was observed, due to the time required for the machine learning process. On the plus side, we noticed an appreciable gain in flexibility and accuracy of results, and a promising path that indicates directions for extending the algorithm to optimization problems with "η" variables.
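The aggregate fitness built from per-objective splines could be assembled along the lines below, using SciPy's bicubic RectBivariateSpline over a grid of (Ws, Ls) measurements. The grid, the placeholder data, the weight, and all names are illustrative assumptions; only the bandwidth term is shown, and the return-loss and frequency-deviation splines would add analogous terms.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Hypothetical lab grid: objective values measured on a (Ws, Ls) mesh.
ws = np.linspace(2.0, 12.0, 6)   # slit widths (mm), assumed
ls = np.linspace(1.0, 9.0, 5)    # slit lengths (mm), assumed
bw = np.random.rand(6, 5)        # placeholder for measured bandwidths

# Bicubic by default (kx=ky=3), matching the "2D Spline" idea.
bw_spline = RectBivariateSpline(ws, ls, bw)

def fitness(w_s, l_s, w_bw=1.0):
    """Aggregate fitness evaluated at an individual's (Ws, Ls); a full
    version would sum three weighted spline terms, one per objective."""
    return w_bw * bw_spline(w_s, l_s)[0, 0]
```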
Abstract:
Biosurfactants are molecules produced by microorganisms, mainly bacteria such as Pseudomonas and Bacillus. Among the biosurfactants, rhamnolipids play an important role due to their surface-active and emulsifying properties. Although they can be produced in a well-consolidated way, the production costs of biosurfactants are quite high, mainly when downstream processing is considered. Currently, attention has been given to the identification of biosurfactants as well as to the optimization of their fermentation processes, including the downstream ones. This work deals with the development of strategies for the recovery and purification of rhamnolipids produced by Pseudomonas aeruginosa P029-GVIIA using sugar-cane molasses as substrate. Cell-free broth was used in order to investigate the best recovery and purification strategies for this system. Of the acids studied (HCl and H2SO4) for the acid precipitation step, HCl was the best one, as shown by a 2^4 experimental design. Extraction was carried out using petroleum ether, and quantification was done using the thioglycolic acid method. Adsorption studies were carried out with activated carbon in batch mode, using a 2^4 experimental design, as well as in combination with the hydrophobic resin Streamline Phenyl, aiming to separate the produced biosurfactant. Partial identification of the biosurfactant was carried out using High Performance Liquid Chromatography (HPLC). Batch-mode experiments showed that adsorption is controlled mainly by pH and temperature. A reduction of 41.4% was observed in the liquid phase, and in the solid phase it was possible to adsorb up to 15 mg of rhamnolipid per gram of activated carbon. The adsorption kinetics were well fitted by a pseudo-first-order model with a rate constant (k1) of 1.93 x 10^-2 min^-1. Packed-bed experiments varying the eluent (acetone) concentration showed the highest recovery factor, 98%, when pure acetone was used. The combined use of activated carbon with the hydrophobic resin Streamline Phenyl proved successful for rhamnolipid purification. It was possible to purify a fraction of the crude broth to 98% purity when the activated carbon packed bed was eluted with pure acetone.
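The pseudo-first-order fit can be reproduced in a few lines. Below is a minimal sketch with invented uptake data chosen to be roughly consistent with a rate constant near the reported 1.93 x 10^-2 min^-1; the numbers and names are illustrative, not the thesis measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_first_order(t, qe, k1):
    """Lagergren pseudo-first-order model: qt = qe * (1 - exp(-k1 * t))."""
    return qe * (1.0 - np.exp(-k1 * t))

# Hypothetical uptake data (mg/g) over time (min).
t = np.array([0, 10, 20, 40, 80, 160, 320], dtype=float)
qt = np.array([0.0, 2.6, 4.7, 7.9, 11.3, 13.8, 14.8])

(qe_fit, k1_fit), _ = curve_fit(pseudo_first_order, t, qt, p0=(15.0, 0.02))
print(f"qe = {qe_fit:.1f} mg/g, k1 = {k1_fit:.2e} 1/min")
```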
Abstract:
This thesis proposes the architecture of a new multiagent-system framework for the hybridization of metaheuristics, inspired by the general Particle Swarm Optimization (PSO) framework. The main contribution is an effective approach to solving hard combinatorial optimization problems. PSO was chosen as the inspiration because it is inherently multiagent, which allows the features of multiagent systems, such as learning and cooperation techniques, to be explored. In the proposed architecture, particles are autonomous agents with memory and with methods for learning and decision making, using search strategies to move through the solution space. The concepts of position and velocity originally defined in PSO are redefined in this approach. The proposed architecture was applied to the Traveling Salesman Problem and to the Quadratic Assignment Problem, and computational experiments were performed to test its effectiveness. The experimental results were promising, with satisfactory performance, even though the potential of the proposed architecture has not been fully explored. In future research, the proposed approach will also be applied to multiobjective combinatorial optimization problems, which are closer to real-world problems. In the context of applied research, we intend to work with both undergraduate and technical-level students on applying the proposed architecture to real-world problems.
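One way to read the redefinition of position and velocity is sketched below: an agent's "position" is its current tour, its memory is its personal best, and its "velocity" becomes a choice among search strategies applied relative to that memory and the global best. The class, strategy signature, and decision rule are assumptions for illustration only.

```python
import random

class ParticleAgent:
    """Particle as an autonomous agent: 'position' is its current tour,
    memory is its personal best, and 'velocity' is replaced by a chosen
    search strategy (e.g., a local move or a path toward pbest/gbest)."""
    def __init__(self, tour, strategies):
        self.tour = list(tour)
        self.best = list(tour)
        self.strategies = strategies  # callables: (tour, pbest, gbest) -> tour

    def move(self, gbest, cost):
        strategy = random.choice(self.strategies)  # the agent's decision step
        candidate = strategy(self.tour, self.best, gbest)
        if cost(candidate) < cost(self.tour):
            self.tour = candidate
        if cost(self.tour) < cost(self.best):
            self.best = list(self.tour)
```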
Abstract:
Because of the great difficulty of solving Combinatorial Optimization Problems exactly, heuristic methods have been developed, and for many years the performance of these approaches was not analyzed in a systematic way. The proposal of this work is a statistical analysis of heuristic approaches to the Traveling Salesman Problem (TSP). The focus of the analysis is to evaluate the performance of each approach with respect to the computational time needed to reach the optimal solution of a given TSP instance. Survival analysis was used, assisted by methods for testing the hypothesis of equality between survival functions. The evaluated approaches were divided into three classes: Lin-Kernighan algorithms, evolutionary algorithms, and Particle Swarm Optimization. Beyond those approaches, the analysis also included a memetic algorithm (for symmetric and asymmetric TSP instances) that uses the Lin-Kernighan heuristic as its local search procedure.
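In this setting, a run's "survival time" is its runtime until the optimum is reached, and runs that hit the time limit without finding it are censored observations. A minimal sketch of such a comparison with the lifelines library is shown below, on invented runtimes for two hypothetical solver classes; all names and numbers are assumptions.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical runtimes (s) to reach the optimum; a run stopped at the
# 60 s limit without the optimum is censored (event flag 0).
t_lk = np.array([3.1, 4.8, 5.2, 7.9, 60.0]);   hit_lk = [1, 1, 1, 1, 0]
t_ev = np.array([9.4, 12.1, 25.7, 60.0, 60.0]); hit_ev = [1, 1, 1, 0, 0]

km = KaplanMeierFitter().fit(t_lk, event_observed=hit_lk, label="Lin-Kernighan")
print(km.median_survival_time_)  # median time to reach the optimum

# Hypothesis test for equality of the two survival functions.
res = logrank_test(t_lk, t_ev, event_observed_A=hit_lk, event_observed_B=hit_ev)
print(res.p_value)  # small p-value => the runtime distributions differ
```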