893 results for "Shape optimization method"
Abstract:
This paper addresses the use of optimization techniques in the design of a steel riser. Two methods are used: the genetic algorithm, which imitates the process of natural selection, and simulated annealing, which is based on the metallurgical annealing process. Both are capable of searching a given solution space for the best feasible riser configuration according to predefined criteria. Optimization issues such as problem codification, parameter selection, and the definition of the objective function and constraints are discussed. A comparison between the results obtained for economic and structural objective functions is made for a case study. Parallelization of the optimization methods is also addressed. [DOI: 10.1115/1.4001955]
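To make the second technique concrete, here is a minimal simulated-annealing sketch in Python; the additive cost function and the integer segment-length encoding are hypothetical placeholders, not the paper's riser model.

```python
import math
import random

def cost(config):
    # Hypothetical economic objective: penalize total segment length.
    return sum(config)

def neighbor(config):
    # Perturb one randomly chosen segment length by one unit.
    c = list(config)
    i = random.randrange(len(c))
    c[i] = max(1, c[i] + random.choice((-1, 1)))
    return c

def simulated_annealing(initial, t0=100.0, alpha=0.95, steps=1000):
    current, best = initial, initial
    t = t0
    for _ in range(steps):
        candidate = neighbor(current)
        delta = cost(candidate) - cost(current)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = candidate
            if cost(current) < cost(best):
                best = current
        t *= alpha  # geometric cooling schedule
    return best

print(simulated_annealing([10, 12, 8, 15]))
```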
Abstract:
Compliant mechanisms achieve a specified motion without relying on joints and pins. They have broad application in precision mechanical devices and Micro-Electro-Mechanical Systems (MEMS), but may lose accuracy and produce undesirable displacements when subjected to temperature changes. These undesirable effects can be reduced by using sensors in combination with control techniques and/or by applying special design techniques at the design stage, a process generally termed "design for precision". This paper describes a design-for-precision method based on a topology optimization method (TOM) for compliant mechanisms that includes thermal compensation features. The optimization problem emphasizes actuator accuracy and is formulated to yield optimal compliant mechanism configurations that maximize the desired output displacement when a force is applied, while minimizing undesirable thermal effects. To demonstrate the effectiveness of the method, two-dimensional compliant mechanisms are designed considering thermal compensation, and their performance is compared with compliant mechanism designs that do not consider thermal compensation. (C) 2010 Elsevier B.V. All rights reserved.
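For context, a hedged sketch of one common building block of topology optimization codes: the optimality-criteria density update under a volume constraint. It does not reproduce the paper's thermal-compensation objective; the sensitivities fed in below are mock values.

```python
import numpy as np

def oc_update(x, dc, volfrac, move=0.2):
    # x: current element densities; dc: (negative) objective sensitivities.
    # Bisection on the Lagrange multiplier enforces the volume fraction.
    l1, l2 = 0.0, 1e9
    while (l2 - l1) / (l1 + l2 + 1e-12) > 1e-3:
        lmid = 0.5 * (l1 + l2)
        xnew = np.clip(x * np.sqrt(np.maximum(-dc, 0) / lmid),
                       np.maximum(x - move, 0.0),
                       np.minimum(x + move, 1.0))
        if xnew.mean() > volfrac:
            l1 = lmid   # too much material: raise the multiplier
        else:
            l2 = lmid
    return xnew

x = np.full(100, 0.4)
dc = -np.linspace(1.0, 2.0, 100)  # mock sensitivities (more negative = more useful material)
print(oc_update(x, dc, volfrac=0.4).mean())
```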
Abstract:
The well-known modified Garabedian-McFadden (MGM) method is an attractive alternative for aerodynamic inverse design, owing to its simplicity and effectiveness (P. Garabedian and G. McFadden, Design of supercritical swept wings, AIAA J. 20(3) (1982), 289-291; J.B. Malone, J. Vadyak, and L.N. Sankar, Inverse aerodynamic design method for aircraft components, J. Aircraft 24(2) (1987), 8-9; Santos, A hybrid optimization method for aerodynamic design of lifting surfaces, PhD Thesis, Georgia Institute of Technology, 1993). Because of these characteristics, the method has been studied by several authors over the years (G.S. Dulikravich and D.P. Baker, Aerodynamic shape inverse design using a Fourier series method, in AIAA paper 99-0185, AIAA Aerospace Sciences Meeting, Reno, NV, January 1999; D.H. Silva and L.N. Sankar, An inverse method for the design of transonic wings, in 1992 Aerospace Design Conference, No. 92-1025 in proceedings, AIAA, Irvine, CA, February 1992, 1-11; W. Bartelheimer, An Improved Integral Equation Method for the Design of Transonic Airfoils and Wings, AIAA Inc., 1995). More recently, a hybrid formulation and a multi-point algorithm were developed on the basis of the original MGM. This article discusses applications of those latest developments to airfoil and wing design. The test cases focus on wing-body aerodynamic interference and shock-wave removal applications. The DLR-F6 geometry is chosen as the baseline for the analysis.
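As a hedged illustration of the residual-correction idea behind such inverse design methods (not the MGM formulation itself, which drives the geometry update through an auxiliary differential equation in the pressure defect), the sketch below iterates a geometry until its computed pressure distribution matches a target; solve_cp and the update law are toy stand-ins.

```python
import numpy as np

def inverse_design(y, target_cp, solve_cp, relax=0.1, tol=1e-4, max_iter=200):
    # y: surface ordinates; solve_cp: user-supplied analysis code mapping
    # a geometry to its pressure coefficient distribution.
    for _ in range(max_iter):
        dcp = solve_cp(y) - target_cp   # pressure defect on the surface
        if np.max(np.abs(dcp)) < tol:
            break
        y = y - relax * dcp             # toy update: nudge geometry against the defect
    return y

x = np.linspace(0.0, 1.0, 50)
shape = 0.1 * np.sin(np.pi * x)         # the geometry we want to recover
solve_cp = lambda y: 2.0 * y            # toy analysis: Cp rises with the ordinate
y = inverse_design(np.zeros_like(x), 2.0 * shape, solve_cp)
print(np.max(np.abs(y - shape)))        # residual error of the recovery
```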
Abstract:
A simplex-lattice statistical design was employed to study an optimization method for the preservative system of an ophthalmic suspension of dexamethasone and polymyxin B. The assay matrix generated 17 formulas, differentiated by the preservatives and EDTA (disodium ethylenediaminetetraacetate), with the independent variables: X-1 = chlorhexidine digluconate (0.010 % w/v); X-2 = phenylethanol (0.500 % w/v); X-3 = EDTA (0.100 % w/v). The dependent variable was the D-value obtained from the microbial challenge of the formulas, calculated by modeling the microbial killing process with an exponential function. The analysis of the dependent variable, performed using the Design Expert/W software, yielded cubic equations with terms derived from a stepwise adjustment method for the challenge microorganisms: Pseudomonas aeruginosa, Burkholderia cepacia, Staphylococcus aureus, Candida albicans and Aspergillus niger. Besides the mathematical expressions, response surfaces and contour graphics were obtained for each assay. The contour graphs were overlaid in order to identify a region containing the most adequate formulas (graphic strategy), with representatives: X-1 = 0.10 (0.001 % w/v); X-2 = 0.80 (0.400 % w/v); X-3 = 0.10 (0.010 % w/v). Additionally, in order to minimize the responses (D-values), a numerical strategy based on the desirability function was used, which resulted in the following combination of independent variables: X-1 = 0.25 (0.0025 % w/v); X-2 = 0.75 (0.375 % w/v); X-3 = 0. The formulas derived from the two strategies (graphic and numerical) were submitted to microbial challenge, and the experimental D-value obtained was compared to the theoretical D-value calculated from the cubic equation. The D-values were similar for all the assays except the one related to Staphylococcus aureus. This microorganism, as well as Pseudomonas aeruginosa, was highly susceptible to the formulas independently of the preservative and EDTA concentrations. Both formulas, derived from the graphic and numerical strategies, met the recommended criteria adopted by the official method. It was concluded that the proposed model allowed the optimization of the formulas with respect to their preservation.
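A hedged sketch of the numerical strategy: a Derringer-Suich-style desirability transform maps each smaller-is-better response (here a D-value) onto [0, 1], and the transformed responses are combined by a geometric mean. The target and upper-bound values below are hypothetical, not the study's data.

```python
import numpy as np

def desirability_minimize(y, target, upper):
    # Smaller-is-better desirability: 1 at or below the target,
    # 0 at or above the upper bound, linear in between.
    return np.clip((upper - y) / (upper - target), 0.0, 1.0)

def overall_desirability(responses, targets, uppers):
    d = [desirability_minimize(y, t, u) for y, t, u in zip(responses, targets, uppers)]
    return float(np.prod(d) ** (1.0 / len(d)))  # geometric mean across microorganisms

# Hypothetical D-values for five challenge organisms in one candidate formula:
print(overall_desirability([2.0, 3.5, 1.0, 4.0, 6.0],
                           targets=[1.0] * 5, uppers=[10.0] * 5))
```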
Abstract:
Electrical impedance tomography is a technique to estimate the impedance distribution within a domain, based on measurements on its boundary. In other words, given the mathematical model of the domain, its geometry and boundary conditions, a nonlinear inverse problem of estimating the electric impedance distribution can be solved. Several impedance estimation algorithms have been proposed to solve this problem. In this paper, we present a three-dimensional algorithm, based on the topology optimization method, as an alternative. This method solves a sequence of linear programming problems, allowing for constraints. In each iteration, the finite element method provides the electric potential field within the model of the domain. An electrode model is also proposed (thus increasing the accuracy of the finite element results). The algorithm is tested using both numerically simulated and experimental data, and absolute resistivity values are obtained. These results, corresponding to phantoms with two different conductive materials, exhibit relatively well-defined boundaries between them, and show that this is a practical and potentially useful technique for monitoring lung aeration, including the possibility of imaging a pneumothorax.
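As a hedged illustration of the sequential-linear-programming idea (not the paper's implementation), a single linearized step can be posed as a bounded LP on the conductivity update; the sensitivity vector below is random stand-in data for what the finite element model would supply.

```python
import numpy as np
from scipy.optimize import linprog

# One linearized step: minimize a first-order model of the data-mismatch
# functional over box-bounded conductivity updates.
rng = np.random.default_rng(0)
grad = rng.normal(size=50)              # stand-in dF/dsigma at the current estimate
step = 0.05                             # move limit per iteration
res = linprog(c=grad, bounds=[(-step, step)] * grad.size, method="highs")
delta_sigma = res.x                     # conductivity update for this iteration
print(delta_sigma[:5])
```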
Abstract:
Meshless methods are valued for their capability of producing excellent solutions without requiring a mesh, avoiding the mesh-related problems encountered in other numerical methods, such as finite elements. However, node placement is still an open question, especially in strong-form collocation meshless methods. The number of nodes used strongly influences matrix size and can therefore produce ill-conditioned matrices. In order to optimize node position and number, a direct multisearch technique for multiobjective optimization is used to optimize the node distribution in the global collocation method using radial basis functions. The optimization method is applied to the bending of isotropic simply supported plates. Starting from a uniformly distributed grid, results show that the method is capable of reducing the number of nodes in the grid without compromising the accuracy of the solution. (C) 2013 Elsevier Ltd. All rights reserved.
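A hedged, one-dimensional sketch of the trade-off being optimized: the multiquadric collocation matrix grows, and its conditioning worsens, as nodes are added. The plate problem and the direct multisearch optimizer themselves are not reproduced here.

```python
import numpy as np

def rbf_matrix(nodes, c=0.5):
    # Multiquadric basis phi(r) = sqrt(r^2 + c^2) evaluated at all node pairs.
    r2 = (nodes[:, None] - nodes[None, :]) ** 2
    return np.sqrt(r2 + c * c)

for n in (10, 20, 40):
    nodes = np.linspace(0.0, 1.0, n)    # uniformly distributed starting grid
    print(n, np.linalg.cond(rbf_matrix(nodes)))
```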
Abstract:
Energy systems worldwide are complex and challenging environments. Multi-agent simulation platforms are increasing at a high rate, as they have proven to be a good option for studying many issues related to these systems, as well as the players that act in this domain. In this scope, the authors' research group has developed a multi-agent system: MASCEM (Multi-Agent System for Competitive Electricity Markets), which performs realistic simulations of the electricity markets. MASCEM is integrated with ALBidS (Adaptive Learning Strategic Bidding System), which works as a decision support system for market players. The ALBidS system allows MASCEM market negotiating players to take the best possible advantage of each market context. However, it is still necessary to adequately optimize the players' portfolio investment. For this purpose, this paper proposes a market portfolio optimization method, based on particle swarm optimization, which provides the best investment profile for a market player, considering different market opportunities (bilateral negotiation, market sessions, and operation in different markets) and the negotiation context, such as the peak and off-peak periods of the day, the type of day (business day, weekend, holiday, etc.) and, most importantly, the forecast of renewable-based distributed generation. The proposed approach is tested and validated using real electricity market data from the Iberian operator, MIBEL.
Abstract:
Energy systems worldwide are complex and challenging environments. Multi-agent simulation platforms are increasing at a high rate, as they have proven to be a good option for studying many issues related to these systems, as well as the players that act in this domain. In this scope, the authors' research group has developed a multi-agent system: MASCEM (Multi-Agent System for Competitive Electricity Markets), which simulates the electricity markets. MASCEM is integrated with ALBidS (Adaptive Learning Strategic Bidding System), which works as a decision support system for market players. The ALBidS system allows MASCEM market negotiating players to take the best possible advantage of the market context. However, it is still necessary to adequately optimize the player's portfolio investment. For this purpose, this paper proposes a market portfolio optimization method, based on particle swarm optimization, which provides the best investment profile for a market player, considering the different markets the player is acting on at each moment, and depending on different negotiation contexts, such as the peak and off-peak periods of the day and the type of day (business day, weekend, holiday, etc.). The proposed approach is tested and validated using real electricity market data from the Iberian operator, OMIE.
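A hedged sketch of the particle-swarm machinery behind the two portfolio papers above: particles encode candidate allocations of energy across markets, normalized to sum to one, and the fitness below (portfolio value at hypothetical forecast prices) is a deliberately simple stand-in for the papers' objective.

```python
import numpy as np

rng = np.random.default_rng(1)
n_particles, n_markets = 30, 4
expected_price = np.array([42.0, 45.5, 39.8, 44.1])  # hypothetical EUR/MWh forecasts

def fitness(w):
    return w @ expected_price  # value of selling the portfolio at forecast prices

def normalize(w):
    # Keep allocations non-negative and summing to one.
    w = np.clip(w, 0.0, None)
    return w / w.sum() if w.sum() > 0 else np.full_like(w, 1.0 / w.size)

x = np.apply_along_axis(normalize, 1, rng.random((n_particles, n_markets)))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.apply_along_axis(fitness, 1, x)
gbest = pbest[pbest_f.argmax()].copy()

for _ in range(100):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    # Standard velocity update: inertia + cognitive + social terms.
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.apply_along_axis(normalize, 1, x + v)
    f = np.apply_along_axis(fitness, 1, x)
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmax()].copy()

print(gbest, fitness(gbest))
```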
Abstract:
European Journal of Operational Research, nº 73 (1994)
Abstract:
Polysaccharides are gaining increasing attention as potentially environmentally friendly and sustainable building blocks in many fields of the (bio)chemical industry. The microbial production of polysaccharides is envisioned as a promising path, since higher biomass growth rates are possible and therefore higher productivities may be achieved compared to vegetable or animal polysaccharide sources. This Ph.D. thesis focuses on the modeling and optimization of the production of a particular microbial polysaccharide, namely the extracellular polysaccharides (EPS) produced by the bacterial strain Enterobacter A47. Enterobacter A47 was found to be a metabolically versatile organism in terms of its adaptability to complex media, notably capable of achieving high growth rates in media containing the glycerol byproduct of the biodiesel industry. However, the industrial implementation of this production process is still hampered by a largely unoptimized process. Kinetic rates in the bioreactor are heavily dependent on operational parameters such as temperature, pH, stirring and aeration rate. An increase in culture broth viscosity is a common feature of this culture and has a major impact on overall performance. This fact complicates the mathematical modeling of the process, limiting the possibility to understand, control and optimize productivity. In order to tackle this difficulty, data-driven mathematical methodologies such as artificial neural networks can be employed to incorporate additional process data that complements the known mathematical description of the fermentation kinetics. In this Ph.D. thesis, we have adopted such a hybrid modeling framework, which enabled the incorporation of temperature, pH and viscosity effects on the fermentation kinetics in order to improve the dynamic modeling and optimization of the process. A model-based optimization method was implemented that enabled the design of optimal bioreactor control strategies in the sense of EPS productivity maximization. It is also critical to understand EPS synthesis at the level of the bacterial metabolism, since the production of EPS is a tightly regulated process. Methods of pathway analysis provide a means to unravel the fundamental pathways and their controls in bioprocesses. In the present Ph.D. thesis, a novel methodology called Principal Elementary Mode Analysis (PEMA) was developed and implemented, which enabled the identification of the cellular fluxes that are activated under different conditions of temperature and pH. It is shown that differences in these two parameters affect the chemical composition of EPS; hence they are critical for the regulation of product synthesis. In future studies, the knowledge provided by PEMA could foster the development of metabolically meaningful control strategies that target the EPS sugar content and other product quality parameters.
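A hedged sketch of the hybrid-modeling idea: a first-principles mass balance is integrated numerically while the specific growth rate comes from a data-driven surrogate of temperature and pH (here a hypothetical fitted function standing in for the thesis's neural network).

```python
import numpy as np

def mu_datadriven(T, pH):
    # Placeholder surrogate for the ANN: peak growth assumed near T = 30 C, pH = 7.
    return 0.3 * np.exp(-((T - 30.0) / 5.0) ** 2 - ((pH - 7.0) / 1.0) ** 2)

def simulate(X0=0.1, T=30.0, pH=7.0, dt=0.1, hours=24.0):
    # Euler integration of the biomass balance dX/dt = mu(T, pH) * X.
    X = X0
    for _ in range(int(hours / dt)):
        X += dt * mu_datadriven(T, pH) * X
    return X

print(simulate())
```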
Abstract:
Centrifugal pumps are a notable end-consumer of electrical energy. A typical application of a centrifugal pump is the filling or emptying of a reservoir tank, where the pump is often operated at a constant speed until the process is completed. Installing a frequency converter to control the motor replaces the traditional fixed-speed pumping system, allows the rotational speed profile to be optimized for the pumping task, and enables the estimation of the rotational speed and shaft torque of an induction motor without any additional measurements from the motor shaft. Variable-speed operation provides the possibility of decreasing the overall energy consumption of the pumping task. The static head of the pumping process may change during the pumping task. In such systems, the minimum rotational speed changes as the reservoir fills or empties, and the minimum energy consumption cannot be achieved with a fixed rotational speed. This thesis presents embedded algorithms to automatically identify, optimize and monitor pumping processes between supply and destination reservoirs, and evaluates the optimization method based on the changing static head.
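A hedged sketch of why the fixed-speed strategy is suboptimal under a changing static head: by the pump affinity laws head scales with the square of speed, so the lowest feasible speed rises as the destination tank fills. The quadratic pump curve below is illustrative, not from the thesis.

```python
import numpy as np

def pump_head(q, n):
    # Head (m) delivered at relative flow q and relative speed n (n = 1 is nominal).
    return 30.0 * n ** 2 - 20.0 * q ** 2

def min_speed(static_head):
    # Smallest n with pump_head(0, n) >= static_head, i.e. 30 n^2 >= H_static.
    return np.sqrt(static_head / 30.0)

for h in (5.0, 15.0, 25.0):  # static head rising as the destination tank fills
    n = min_speed(h)
    print(f"H_static = {h:4.1f} m -> min relative speed {n:.3f}, "
          f"shutoff head {pump_head(0.0, n):.1f} m")
```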
Abstract:
Identification of low-dimensional structures and of the main sources of variation in multivariate data are fundamental tasks in data analysis. Many methods aimed at these tasks involve the solution of an optimization problem. Thus, the objective of this thesis is to develop computationally efficient and theoretically justified methods for solving such problems. Most of the thesis is based on a statistical model in which ridges of the density estimated from the data are considered the relevant features. Finding ridges, which are generalized maxima, necessitates the development of advanced optimization methods. For this purpose, an efficient and convergent trust-region Newton method for projecting a point onto a ridge of the underlying density is developed. The method is utilized in a differential-equation-based approach for tracing ridges and computing projection coordinates along them. The density estimation is done nonparametrically using Gaussian kernels. This allows the application of ridge-based methods with only mild assumptions on the underlying structure of the data. The statistical model and the ridge-finding methods are adapted to two different applications. The first is the extraction of curvilinear structures from noisy data mixed with background clutter. The second is a novel nonlinear generalization of principal component analysis (PCA) and its extension to time series data. The methods have a wide range of potential applications where most of the earlier approaches are inadequate. Examples include the identification of faults from seismic data and the identification of filaments from cosmological data. The applicability of the nonlinear PCA to climate analysis and to the reconstruction of periodic patterns from noisy time series data is also demonstrated. Other contributions of the thesis include the development of an efficient semidefinite optimization method for embedding graphs into Euclidean space. The method produces structure-preserving embeddings that maximize interpoint distances. It is primarily developed for dimensionality reduction, but also has potential applications in graph theory and various areas of physics, chemistry and engineering. The asymptotic behaviour of ridges and maxima of Gaussian kernel densities is also investigated as the kernel bandwidth approaches infinity. The results are applied to the nonlinear PCA and to finding significant maxima of such densities, which is a typical problem in visual object tracking.
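A hedged sketch of the kernel-density machinery underlying ridge finding: the gradient and Hessian of a Gaussian kernel density at a query point, followed by a plain Newton step toward a critical point. The actual ridge projection additionally restricts the step to the span of the smallest-curvature eigenvectors, which is omitted here.

```python
import numpy as np

def kde_grad_hess(x, data, h):
    # Gradient and Hessian (up to a constant factor) of a Gaussian KDE at x.
    d = data - x                                    # (n, dim) displacements
    w = np.exp(-0.5 * np.sum(d * d, axis=1) / h**2)
    grad = (w[:, None] * d).sum(axis=0) / h**2
    hess = (np.einsum('n,ni,nj->ij', w, d, d) / h**4
            - w.sum() * np.eye(x.size) / h**2)
    return grad, hess

rng = np.random.default_rng(2)
data = rng.normal(size=(200, 2))
x = np.array([0.5, -0.3])
g, H = kde_grad_hess(x, data, h=0.5)
x_new = x - np.linalg.solve(H, g)   # plain Newton step toward a critical point
print(x_new)
```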
Abstract:
Factors involved in the determination of PAHs (the 16 priority PAHs, as an example) and PCBs (10 PCB congeners, representing 10 isomeric groups) by capillary gas chromatography coupled with mass spectrometry (GC/MS, for PAHs) and electron capture detection (GC/ECD, for PCBs) were studied, with emphasis on the effect of the solvent. Covering various volatilities and polarities, the solvents studied included dichloromethane, acetonitrile, hexane, cyclohexane, isooctane, octane, nonane, dodecane, benzene, toluene, p-xylene, o-xylene, and mesitylene. The temperatures of the capillary column, the injection port and the GC/MS interface, the flow rates of the carrier gas and make-up gas, and the injection volume were optimized by the one-factor-at-a-time method or the simplex optimization method. Under the optimized conditions, both peak height and peak area of the 16 PAHs, especially the late-eluting PAHs, were significantly enhanced (1 to 500 times) by using relatively high boiling point solvents such as p-xylene and nonane, compared with commonly used solvents like benzene and isooctane. With the improved sensitivity, detection limits between 4.4 pg for naphthalene and 30.8 pg for benzo[g,h,i]perylene were obtained when p-xylene was used as the injection solvent. The effects of the solvent on peak shape and peak intensity were found to depend greatly on the temperature parameters, especially the initial temperature of the capillary column. The relationship between the initial temperature and the shape of the peaks from the 16 PAHs and 10 PCBs was studied and compared when toluene, p-xylene, isooctane, and nonane were used as injection solvents. If too low an initial temperature was used, fronting or splitting of peaks was observed. On the other hand, peak tailing occurred at too high an initial column temperature. The optimum initial temperature, at which both peak fronting and tailing were avoided and symmetrical peaks were obtained, depended on both the solvent and the stationary phase of the column used. On a methyl silicone column, the alkane solvents provided wider optimum ranges of initial temperature than the aromatic solvents for achieving well-shaped, symmetrical GC peaks. On a 5% diphenyl: 1% vinyl: 94% dimethyl polysiloxane column, when the aromatic solvents were used, the optimum initial temperature ranges for solutes to form symmetrical peaks improved to a degree similar to that obtained when the alkanes were used as injection solvents. A mechanism, based on the properties of and possible interactions among the analyte, the injection solvent, and the stationary phase of the capillary column, was proposed to explain these observations. The effect of the initial temperature on the peak height and peak area of the 16 PAHs and the 10 PCBs was also studied. The optimum initial temperature was found to depend on the physical properties of the solvent used and the amount of solvent injected. Generally, the optimum range of initial temperature, at which the highest peak height and peak area were obtained, extended from the boiling point of the solvent to 10 °C above it.
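A hedged sketch of the simplex strategy applied to instrument settings: here Nelder-Mead, a derivative-free simplex search of the same family as the sequential simplex used in analytical chemistry, maximizes a hypothetical peak-height response surface standing in for measured data.

```python
from scipy.optimize import minimize

def negative_peak_height(params):
    t_init, flow = params
    # Hypothetical response surface peaking at t_init = 100 C, flow = 1.2 mL/min.
    return -1.0 / (1.0 + (t_init - 100.0) ** 2 / 400.0 + (flow - 1.2) ** 2)

res = minimize(negative_peak_height, x0=[80.0, 1.0], method="Nelder-Mead")
print(res.x)  # settings that maximize the modeled peak height
```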
Abstract:
To ensure the quality of machined products at minimum machining cost and maximum machining effectiveness, it is very important to select optimum parameters when metal cutting machine tools are employed. Traditionally, the experience of the operator plays a major role in the selection of optimum metal cutting conditions. However, attaining optimum values every time is difficult, even for a skilled operator. The non-linear nature of the machining process has compelled engineers to search for more effective methods of optimization. The design objective preceding most engineering design activities is simply to minimize the cost of production or to maximize production efficiency. The main aim of the research work reported here is to build robust optimization algorithms by exploiting ideas that nature has to offer and using them to solve real-world optimization problems in manufacturing processes. In this thesis, after an exhaustive literature review, several optimization techniques used in various manufacturing processes were identified. The selection of optimal cutting parameters, such as depth of cut, feed and speed, is a very important issue for every machining process. Experiments were designed using the Taguchi technique, and dry turning of SS420 was performed on a Kirloskar Turnmaster 35 lathe. S/N and ANOVA analyses were performed to find the optimum level and the percentage contribution of each parameter. Using S/N analysis, the optimum machining parameters were obtained from the experimentation. Optimization algorithms begin with one or more design solutions supplied by the user and then iteratively explore new design solutions in their search spaces in order to reach the true optimum solution. A mathematical model for surface roughness was developed using response surface analysis, and the model was validated using published results from the literature. Optimization methodologies such as simulated annealing (SA), particle swarm optimization (PSO), the conventional genetic algorithm (CGA) and an improved genetic algorithm (IGA) were applied to optimize the machining parameters for dry turning of SS420 material. All the above algorithms were tested for efficiency, robustness and accuracy, and it was observed how they often outperform conventional optimization methods applied to difficult real-world problems. The SA, PSO, CGA and IGA codes were developed using MATLAB. For each evolutionary algorithmic method, optimum cutting conditions are provided to achieve better surface finish. The computational results using SA clearly demonstrated that the proposed solution procedure is quite capable of solving such complicated problems effectively and efficiently. Particle swarm optimization is a relatively recent heuristic search method whose mechanics are inspired by the swarming, collaborative behavior of biological populations. From the results it was observed that PSO provides better results and is also more computationally efficient. Based on the results obtained using CGA and IGA for the optimization of the machining process, the proposed IGA provides better results than the conventional GA. The improved genetic algorithm, incorporating a stochastic crossover technique and an artificial initial population scheme, was developed to provide a faster search mechanism.
Finally, a comparison among these algorithms was made for the specific example of dry turning of SS420 material, arriving at the optimum machining parameters of feed, cutting speed, depth of cut and tool nose radius with minimum surface roughness as the criterion. To summarize, the research work fills conspicuous gaps between research prototypes and industry requirements by simulating evolutionary procedures seen in nature that optimize its own systems.
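A hedged sketch of one leg of that comparison: a hypothetical second-order response-surface model of surface roughness minimized with an annealing-type global optimizer from SciPy. The thesis used custom MATLAB codes; the model coefficients and parameter bounds below are illustrative only.

```python
from scipy.optimize import dual_annealing

def ra(x):
    # Hypothetical response-surface model of surface roughness Ra.
    f, v, d = x  # feed (mm/rev), cutting speed (m/min), depth of cut (mm)
    return 2.0 + 8.0 * f - 0.01 * v + 0.5 * d + 15.0 * f * f + 2e-5 * v * v

bounds = [(0.05, 0.3), (50.0, 200.0), (0.5, 2.0)]  # illustrative machining ranges
res = dual_annealing(ra, bounds, seed=3)
print(res.x, res.fun)  # cutting conditions minimizing the modeled Ra
```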
Abstract:
This thesis presents a new method for the inverse design of reflectors. We have focused on three main topics: the use of real, complex light sources; the definition of a fast algorithm to compute the illumination produced by a reflector; and the definition of an optimization algorithm to find the desired reflector more efficiently. The light sources are represented by near-field models, which are compressed with a very small error, even for light sources with millions of rays and objects to be illuminated at very close range. We then propose a fast method, running entirely on the GPU, to obtain the illumination distribution of a reflector and compare it with the desired illumination. Finally, we propose a new global optimization method that finds the solution in fewer steps than many other classical optimization methods, while avoiding local minima.
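A hedged sketch of the comparison step described above: scoring a candidate reflector by the pixel-wise difference between the illumination map it produces and the desired one. The maps below are random stand-ins; in the thesis the actual map is computed on the GPU from a compressed near-field source model.

```python
import numpy as np

rng = np.random.default_rng(4)
desired = rng.random((64, 64))     # target illumination on the receiver plane
achieved = rng.random((64, 64))    # map rendered for one candidate reflector

def illumination_error(achieved, desired):
    # Mean squared pixel difference: the objective a global optimizer would minimize.
    return float(np.mean((achieved - desired) ** 2))

print(illumination_error(achieved, desired))
```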