963 results for Objective functions


Relevance:

70.00%

Publisher:

Abstract:

Numerical optimization is a technique where a computer is used to explore design parameter combinations to find extremes in performance factors. In multi-objective optimization several performance factors can be optimized simultaneously. The solution to a multi-objective optimization problem is not a single design, but a family of optimized designs referred to as the Pareto frontier. The Pareto frontier is a trade-off curve in the objective function space composed of solutions where performance in one objective function is traded for performance in others. A Multi-Objective Hybridized Optimizer (MOHO) was created for the purpose of solving multi-objective optimization problems by utilizing a set of constituent optimization algorithms. MOHO tracks the progress of the Pareto frontier approximation and automatically switches amongst its constituent evolutionary optimization algorithms to speed the formation of an accurate Pareto frontier approximation.

Aerodynamic shape optimization is one of the oldest applications of numerical optimization. MOHO was used to perform shape optimization on a 0.5-inch ballistic penetrator traveling at Mach 2.5. Two objectives were optimized simultaneously: minimize aerodynamic drag and maximize penetrator volume. This problem was solved twice. The first time, Modified Newtonian Impact Theory (MNIT) was used to determine the pressure drag on the penetrator. In the second solution, a Parabolized Navier-Stokes (PNS) solver that includes viscosity was used to evaluate the drag. The studies show how the optimized penetrator shapes differ when viscosity is absent from, or present in, the optimization.

In modern optimization problems, a single objective function evaluation may require many hours on a computer cluster. One solution is to create a response surface that models the behavior of the objective function. Once enough data about the behavior of the objective function has been collected, a response surface can stand in for the actual objective function in the optimization process. The Hybrid Self-Organizing Response Surface Method (HYBSORSM) algorithm was developed and used to build response surfaces of objective functions. HYBSORSM was evaluated using a suite of 295 nonlinear functions involving from 2 to 100 variables, demonstrating its robustness and accuracy.
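As a concrete illustration of the Pareto frontier described above, here is a minimal Python sketch that filters a set of candidate designs, evaluated on two objectives (drag to minimize, volume to maximize), down to their non-dominated members. The candidate values are invented for illustration and are not results from the study.

```python
def dominates(a, b):
    """True if design a = (drag, volume) is at least as good as b in both
    objectives (lower drag, higher volume) and strictly better in one."""
    return (a[0] <= b[0] and a[1] >= b[1]) and (a[0] < b[0] or a[1] > b[1])

def pareto_front(designs):
    """Keep only designs not dominated by any other design."""
    return [d for d in designs
            if not any(dominates(o, d) for o in designs if o is not d)]

# Hypothetical (drag, volume) pairs for candidate penetrator shapes.
candidates = [(1.00, 0.80), (0.90, 0.70), (1.10, 0.95), (0.95, 0.60)]
print(pareto_front(candidates))  # only trade-off curve members survive
```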

Relevance:

70.00%

Publisher:

Abstract:

The use of multi-material structures in industry, especially the automotive industry, is increasing. To overcome the difficulties in joining these structures, adhesives have several benefits over traditional joining methods. Accurate simulation of the entire fracture process, including the adhesive layer, is therefore crucial. In this paper, material parameters of a previously developed meso-mechanical finite element (FE) model of a thin adhesive layer are optimized using the Strength Pareto Evolutionary Algorithm (SPEA2). The objective functions are defined as the error between experimental data and simulation data. The experimental data come from previously performed experiments in which an adhesive layer was loaded in monotonically increasing peel and shear. The two objective functions depend on 9 model parameters (decision variables) in total and are evaluated by running two FE simulations, one loading the adhesive layer in peel and the other in shear. The original study converted the two objective functions into one function, which resulted in a single optimal solution. In this study, however, a Pareto front is obtained by employing the SPEA2 algorithm. Thus, more insight into the material model, the objective functions, the optimal solutions and the decision space is acquired from the Pareto front. We compare the results and show good agreement with the experimental data.
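The error-type objective functions described above can be sketched as follows; `run_fe_simulation` and the interpolation onto the experimental displacement grid are hypothetical stand-ins, since the paper's exact error measure is not given here.

```python
import numpy as np

def objective(params, mode, exp_disp, exp_force):
    """Normalized L2 error between experiment and simulation for one
    loading mode ('peel' or 'shear')."""
    sim_disp, sim_force = run_fe_simulation(params, mode)  # hypothetical FE call
    # Resample the simulated curve at the experimental displacements.
    sim_on_exp = np.interp(exp_disp, sim_disp, sim_force)
    return np.linalg.norm(sim_on_exp - exp_force) / np.linalg.norm(exp_force)

# A multi-objective optimizer such as SPEA2 would then minimize the vector
# [objective(p, 'peel', ...), objective(p, 'shear', ...)] over the 9
# model parameters p.
```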

Relevance:

60.00%

Publisher:

Abstract:

There are many applications in aeronautical/aerospace engineering where some values of the design parameters cannot be provided or determined accurately. These values can be related to the geometry (wingspan, length, angles) and/or to operational flight conditions that vary due to uncertainty (Mach number, angle of attack, air density and temperature, etc.). These uncertain design parameters cannot be ignored in engineering design and must be taken into account in the optimisation task to produce more realistic and reliable solutions. In this paper, a robust/uncertainty design method with statistical constraints is introduced to produce a set of reliable solutions which have high performance and low sensitivity. The robust design concept is coupled with Multi Objective Evolutionary Algorithms (MOEAs) by applying two statistical sampling formulas, the mean and the variance/standard deviation, to the optimisation fitness/objective functions. The methodology is based on a canonical evolution strategy and incorporates the concepts of hierarchical topology, parallel computing and asynchronous evaluation. It is applied to two practical Unmanned Aerial System (UAS) design problems: the first case considers robust multi-objective (single-disciplinary: aerodynamics) design optimisation and the second considers robust multidisciplinary (aero-structural) design optimisation. Numerical results show that the solutions obtained by the robust design method with statistical constraints have more reliable performance and sensitivity in both aerodynamics and structures when compared to the baseline design.
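A minimal sketch of the mean/variance fitness statistics, assuming a simple sampling loop over perturbed flight conditions; `evaluate_design` and the uncertainty distributions are hypothetical placeholders, not the paper's models.

```python
import numpy as np

def robust_fitness(design, n_samples=30, rng=np.random.default_rng(0)):
    """Sample the objective over uncertain operating conditions and
    return the two robustness statistics used as MOEA objectives."""
    samples = []
    for _ in range(n_samples):
        mach = rng.normal(0.75, 0.02)    # assumed uncertainty model
        alpha = rng.normal(2.0, 0.25)    # assumed uncertainty model
        samples.append(evaluate_design(design, mach, alpha))  # hypothetical
    samples = np.asarray(samples)
    # High mean performance and low sensitivity (standard deviation)
    # become the two objectives of the robust optimisation.
    return samples.mean(), samples.std(ddof=1)
```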

Relevance:

60.00%

Publisher:

Abstract:

With the advent of large-scale wind farms and their integration into electrical grids, more uncertainties, constraints and objectives must be considered in power system development. It is therefore necessary to introduce risk-control strategies into the planning of transmission systems connected with wind power generators. This paper presents a probability-based multi-objective model equipped with three risk-control strategies. The model is developed to evaluate and enhance the ability of the transmission system to protect against overload risks when wind power is integrated into the power system. The model involves: (i) defining the uncertainties associated with wind power generators with probability measures and calculating the probabilistic power flow with the combined use of cumulants and the Gram-Charlier series; (ii) developing three risk-control strategies by specifying the smallest acceptable non-overload probability for each branch and for the whole system, and specifying the non-overload margin for all branches in the whole system; (iii) formulating an overload risk index based on the non-overload probability and the non-overload margin so defined; and (iv) developing a multi-objective transmission system expansion planning (TSEP) model with objective functions composed of the transmission investment and the overload risk index. The presented work represents a superior risk-control model for TSEP in terms of security, reliability and economy. The transmission expansion planning model with the three risk-control strategies demonstrates its feasibility in case studies using two typical power systems.
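The Gram-Charlier step in (i) and the non-overload probability in (ii) can be sketched as follows, assuming the first four cumulants of a branch flow are available from the cumulant-based probabilistic power flow; the numerical values are hypothetical.

```python
import math

def phi(x):   # standard normal pdf
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):   # standard normal cdf
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def non_overload_probability(limit, k1, k2, k3, k4):
    """P(flow <= limit) via a four-term Gram-Charlier A series,
    with k1..k4 the first four cumulants of the branch flow."""
    s = math.sqrt(k2)
    x = (limit - k1) / s                 # standardized branch limit
    g3, g4 = k3 / s**3, k4 / s**4        # standardized cumulants
    he2 = x * x - 1                      # probabilists' Hermite He2
    he3 = x**3 - 3 * x                   # probabilists' Hermite He3
    return Phi(x) - phi(x) * (g3 / 6 * he2 + g4 / 24 * he3)

# Hypothetical cumulants of one branch flow:
print(non_overload_probability(limit=120.0, k1=90.0, k2=64.0, k3=40.0, k4=500.0))
```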

Relevance:

60.00%

Publisher:

Abstract:

The selection of optimal camera configurations (camera locations, orientations, etc.) for multi-camera networks remains an unsolved problem. Previous approaches largely focus on proposing various objective functions to achieve different tasks. Most of them, however, do not generalize well to large-scale networks. To tackle this, we introduce a statistical formulation of the optimal selection of camera configurations and propose a Trans-Dimensional Simulated Annealing (TDSA) algorithm to effectively solve the problem. We compare our approach with a state-of-the-art method based on Binary Integer Programming (BIP) and show that our approach offers similar performance on small-scale problems. However, we also demonstrate the capability of our approach in dealing with large-scale problems and show that it produces better results than two alternative heuristics designed to deal with the scalability issue of BIP.
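A minimal sketch of the trans-dimensional simulated annealing idea, not the paper's exact algorithm: moves can add or remove a camera as well as perturb one, so the search jumps between configuration spaces of different dimension. `random_camera`, `perturb_camera` and the coverage objective are hypothetical stand-ins for the task-specific objective functions.

```python
import math, random

def tdsa(initial, coverage, steps=10000, t0=1.0, cooling=0.999):
    state, score, t = list(initial), coverage(initial), t0
    for _ in range(steps):
        move = random.choice(['perturb', 'add', 'remove'])
        cand = list(state)
        if move == 'add':
            cand.append(random_camera())           # hypothetical sampler
        elif move == 'remove' and len(cand) > 1:
            cand.pop(random.randrange(len(cand)))  # dimension-changing move
        else:
            i = random.randrange(len(cand))
            cand[i] = perturb_camera(cand[i])      # hypothetical
        cand_score = coverage(cand)
        # Metropolis acceptance: always take improvements, sometimes
        # accept worse states to escape local optima.
        if cand_score >= score or random.random() < math.exp((cand_score - score) / t):
            state, score = cand, cand_score
        t *= cooling
    return state, score
```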

Relevance:

60.00%

Publisher:

Abstract:

The selection of optimal camera configurations (camera locations, orientations, etc.) for multi-camera networks remains an unsolved problem. Previous approaches largely focus on proposing various objective functions to achieve different tasks. Most of them, however, do not generalize well to large-scale networks. To tackle this, we propose a statistical framework for the problem as well as a trans-dimensional simulated annealing algorithm to effectively deal with it. We compare our approach with a state-of-the-art method based on binary integer programming (BIP) and show that our approach offers similar performance on small-scale problems. However, we also demonstrate the capability of our approach in dealing with large-scale problems and show that it produces better results than two alternative heuristics designed to deal with the scalability issue of BIP. Lastly, we show the versatility of our approach in a number of specific scenarios.

Relevance:

60.00%

Publisher:

Abstract:

This paper presents an efficient algorithm for multi-objective distribution feeder reconfiguration based on a Modified Honey Bee Mating Optimization (MHBMO) approach. The main objectives of distribution feeder reconfiguration (DFR) are to minimize the real power loss and the deviation of the nodes' voltages. Because the objectives are different and not commensurable, it is difficult to solve the problem with conventional approaches that optimize a single objective. A metaheuristic algorithm has therefore been applied to this problem. This paper describes the full algorithm and the objective functions employed. The results of simulations on a 32-bus distribution system are given and show the high accuracy of the proposed algorithm and its effectiveness in power loss minimization.
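The two objectives named above can be sketched as follows, assuming a load-flow solver for candidate configurations; `run_load_flow` is a hypothetical stand-in.

```python
def dfr_objectives(configuration, v_nominal=1.0):
    """Evaluate one candidate feeder configuration on the two DFR
    objectives: real power loss and node voltage deviation."""
    branches, node_voltages = run_load_flow(configuration)  # hypothetical
    # Real power loss: sum of I^2 * R over all closed branches.
    p_loss = sum(b.current**2 * b.resistance for b in branches)
    # Voltage deviation: worst departure from the nominal voltage.
    v_dev = max(abs(v - v_nominal) for v in node_voltages)
    return p_loss, v_dev
```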

Relevance:

60.00%

Publisher:

Abstract:

The random early detection (RED) technique has seen a lot of research over the years. However, the functional relationship between RED performance and its parameters, viz., queue weight (omega_q), marking probability (max_p), minimum threshold (min_th) and maximum threshold (max_th), is not analytically available. In this paper, we formulate a probabilistic constrained optimization problem by assuming a nonlinear relationship between the RED average queue length and its parameters. This problem involves all the RED parameters as the variables of the optimization problem. We use the barrier and the penalty function approaches for its solution. However, as above, the exact functional relationship between the barrier and penalty objective functions and the optimization variables is not known, but noisy samples of these are available for different parameter values. Thus, for obtaining the gradient and Hessian of the objective, we use certain recently developed simultaneous perturbation stochastic approximation (SPSA) based estimates of these. We propose two four-timescale stochastic approximation algorithms based on certain modified second-order SPSA updates for finding the optimum RED parameters. We present the results of detailed simulation experiments conducted over different network topologies and network/traffic conditions/settings, comparing the performance of our algorithms with variants of RED and a few other well-known active queue management (AQM) techniques discussed in the literature.
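For orientation, here is a minimal sketch of the basic simultaneous perturbation gradient estimate that underlies the SPSA-based updates mentioned above (the paper's modified second-order, four-timescale versions are more involved); `noisy_objective` is a hypothetical stand-in for a noisy queue measurement.

```python
import numpy as np

def spsa_gradient(theta, noisy_objective, c=0.1, rng=np.random.default_rng(0)):
    """One SPSA gradient estimate: both perturbed evaluations share a
    single random +/-1 direction, so estimating the gradient of an
    n-dimensional parameter vector costs only two noisy measurements."""
    delta = rng.choice([-1.0, 1.0], size=theta.shape)
    y_plus = noisy_objective(theta + c * delta)
    y_minus = noisy_objective(theta - c * delta)
    return (y_plus - y_minus) / (2 * c * delta)
```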

Relevance:

60.00%

Publisher:

Abstract:

Multi-objective optimization is an active field of research with broad applicability in aeronautics. This report details a variant of the original NSGA-II software aimed at improving the performance of this widely used genetic algorithm in finding the optimal Pareto front of a multi-objective optimization problem, for use in UAV and aircraft design and optimisation. The original NSGA-II works on a population of predetermined constant size, and its computational cost to evaluate one generation is O(mn^2), where m is the number of objective functions and n is the population size. The basic idea motivating this work is to reduce the computational cost of the NSGA-II algorithm by making it work on a population of variable size, in order to obtain better convergence towards the Pareto front in less time. In this work, several test functions are run with both the original NSGA-II and the VPNSGA-II algorithms; each test is timed in order to measure the computational cost of each trial, and the results are compared.
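The O(mn^2) term comes from pairwise dominance comparisons, sketched below for minimization; shrinking the population size n, as VPNSGA-II does with its variable population, attacks the quadratic factor directly.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (all m objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def first_front(population):
    """First non-dominated front: the ~n^2 dominance checks over
    m-dimensional objective vectors give the O(m n^2) cost."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q is not p)]
```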

Relevance:

60.00%

Publisher:

Abstract:

The paradigm of computational vision hypothesizes that any visual function, such as the recognition of your grandparent, can be replicated by computational processing of the visual input. What are these computations that the brain performs? What should or could they be? Working on the latter question, this dissertation takes the statistical approach, where the suitable computations are learned from natural visual data itself. In particular, we empirically study the computational processing that emerges from the statistical properties of the visual world and the constraints and objectives specified for the learning process.

This thesis consists of an introduction and 7 peer-reviewed publications, where the purpose of the introduction is to illustrate the area of study to a reader who is not familiar with computational vision research. In the introduction, we briefly overview the primary challenges to visual processing, and recall some of the current opinions on visual processing in the early visual systems of animals. Next, we describe the methodology we have used in our research and discuss the presented results. We have included in this discussion some additional remarks, speculations and conclusions that were not featured in the original publications.

We present the following results in the publications of this thesis. First, we empirically demonstrate that luminance and contrast are strongly dependent in natural images, contradicting previous theories suggesting that luminance and contrast were processed separately in natural systems due to their independence in the visual data. Second, we show that simple-cell-like receptive fields of the primary visual cortex can be learned in the nonlinear contrast domain by maximization of independence. Further, we provide first-time reports of the emergence of conjunctive (corner-detecting) and subtractive (opponent-orientation) processing due to nonlinear projection pursuit with simple objective functions related to sparseness and response energy optimization. Then, we show that attempting to extract independent components of nonlinear histogram statistics of a biologically plausible representation leads to projection directions that appear to differentiate between visual contexts. Such processing might be applicable to priming, i.e., the selection and tuning of later visual processing. We continue by showing that a different kind of thresholded low-frequency priming can be learned and used to make object detection faster with little loss in accuracy. Finally, we show that in a computational object detection setting, nonlinearly gain-controlled visual features of medium complexity can be acquired sequentially as images are encountered and discarded. We present two online algorithms to perform this feature selection, and propose the idea that for artificial systems, some processing mechanisms could be selected from the environment without optimizing the mechanisms themselves.

In summary, this thesis explores learning visual processing on several levels. The learning can be understood as an interplay of input data, model structures, learning objectives, and estimation algorithms. The presented work adds to the growing body of evidence showing that statistical methods can be used to acquire intuitively meaningful visual processing mechanisms. The work also presents some predictions and ideas regarding biological visual processing.
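As a loose illustration of projection pursuit with a sparseness-related objective (not the thesis's exact methods), the sketch below ascends the kurtosis of a projection of whitened data under a unit-norm constraint; for natural image patches, high-kurtosis directions are the sparse, simple-cell-like ones. Data loading and whitening are left as assumptions.

```python
import numpy as np

def sparse_direction(X, steps=200, lr=0.1, rng=np.random.default_rng(0)):
    """X: whitened data, shape (n_samples, n_dims). Gradient ascent on
    E[(w.x)^4], a kurtosis-style sparseness objective, with ||w|| = 1."""
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(steps):
        s = X @ w                        # projections onto w
        grad = 4 * (s**3) @ X / len(X)   # gradient of E[s^4]
        w += lr * grad
        w /= np.linalg.norm(w)           # renormalize (unit projection variance)
    return w
```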

Relevance:

60.00%

Publisher:

Abstract:

Fuel cells are emerging as alternative green power producers both for large-scale power production and for use in automobiles. Hydrogen is seen as the best option as a fuel; however, hydrogen fuel cells require recirculation of unspent hydrogen. A supersonic ejector is an apt device for recirculation in the operating regimes of a hydrogen fuel cell, and optimal ejectors have to be designed to achieve the best performance. The use of the vector evaluated particle swarm optimization technique to optimize supersonic ejectors, with a focus on its application to hydrogen recirculation in fuel cells, is presented here. Two parameters, compression ratio and efficiency, have been identified as the objective functions to be optimized. Their relation to the operating and design parameters of the ejector is obtained by a control-volume-based analysis using a constant-area mixing approximation. The independent parameters considered are the area ratio and the exit Mach number of the nozzle. The optimization is carried out at a particular entrainment ratio and results in a set of nondominated solutions, the Pareto front. A set of such curves can be used for choosing the optimal design parameters of the ejector.
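A minimal sketch of the vector evaluated PSO scheme under common assumptions: two subswarms, one scored on each objective (both maximized here), with each subswarm's velocity update guided by the best particle of the other, which pushes the population toward the trade-off region. The objective functions and bounds are hypothetical stand-ins for the control-volume ejector model.

```python
import numpy as np

def vepso(f1, f2, bounds, n=20, iters=100, w=0.7, c1=1.5, c2=1.5,
          rng=np.random.default_rng(0)):
    lo, hi = bounds                      # arrays: (area ratio, exit Mach)
    d = len(lo)
    pos = [rng.uniform(lo, hi, (n, d)) for _ in range(2)]
    vel = [np.zeros((n, d)) for _ in range(2)]
    pbest = [p.copy() for p in pos]
    objs = [f1, f2]                      # e.g. compression ratio, efficiency
    pb_val = [np.array([objs[k](x) for x in pos[k]]) for k in range(2)]
    for _ in range(iters):
        gbest = [pbest[k][pb_val[k].argmax()] for k in range(2)]
        for k in range(2):
            other = gbest[1 - k]         # cross-swarm guidance (VEPSO)
            r1, r2 = rng.random((n, d)), rng.random((n, d))
            vel[k] = (w * vel[k] + c1 * r1 * (pbest[k] - pos[k])
                      + c2 * r2 * (other - pos[k]))
            pos[k] = np.clip(pos[k] + vel[k], lo, hi)
            val = np.array([objs[k](x) for x in pos[k]])
            better = val > pb_val[k]
            pbest[k][better], pb_val[k][better] = pos[k][better], val[better]
    return pos                           # candidates near the trade-off region
```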

Relevance:

60.00%

Publisher:

Abstract:

The optimal design of a multiproduct batch chemical plant is formulated as a multiobjective optimization problem, and the resulting constrained mixed-integer nonlinear program (MINLP) is solved by the nondominated sorting genetic algorithm approach (NSGA-II). By putting bounds on the objective function values, the constrained MINLP problem can be solved efficiently by NSGA-II to generate a set of feasible nondominated solutions in the range desired by the decision-maker in a single run of the algorithm. The evolution of the entire set of nondominated solutions helps the decision-maker make a better choice of the appropriate design from among several alternatives. The large set of solutions also provides a rich source of excellent initial guesses for solving the same problem by alternative approaches to achieve any specific target for the objective functions.
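One common way to realize bounds on the objective function values in NSGA-II is constrained domination, sketched below under that assumption; the paper may implement the bounding differently. The bound values themselves are hypothetical.

```python
def bound_violation(objs, bounds):
    """Total amount by which objective values exceed their bounds."""
    return sum(max(0.0, f - b) for f, b in zip(objs, bounds))

def constrained_dominates(a, b, bounds):
    """NSGA-II constrained domination: solutions inside the bounds beat
    those outside; among in-bound solutions, ordinary dominance applies
    (all objectives minimized)."""
    va, vb = bound_violation(a, bounds), bound_violation(b, bounds)
    if va == 0.0 and vb > 0.0:
        return True
    if va > 0.0:
        return va < vb
    return all(x <= y for x, y in zip(a, b)) and list(a) != list(b)
```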

Relevance:

60.00%

Publisher:

Abstract:

We present the theoretical foundations for the multiple rendezvous problem, involving the design of local control strategies that enable groups of visibility-limited mobile agents to split into subgroups, exhibit simultaneous taxis behavior towards, and eventually rendezvous at, multiple unknown locations of interest. The theoretical results are proved under a certain restricted set of assumptions. The algorithm used to solve the above problem is based on a glowworm swarm optimization (GSO) technique, developed earlier, that finds multiple optima of multimodal objective functions. The significant difference between our work and most earlier approaches to agreement problems is the agents' use of a virtual local-decision domain to compute their movements. The range of the virtual domain is adaptive in nature and is bounded above by the maximum sensor/visibility range of the agent. We introduce a new decision domain update rule that enhances the rate of convergence by a factor of approximately two. We use some illustrative simulations to support the algorithmic correctness and theoretical findings of the paper.
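For orientation, the two standard GSO updates most relevant here are the luciferin update, which encodes the objective value at an agent's position, and the adaptive local decision domain, bounded above by the sensor range. The constants follow the baseline GSO formulation; the paper's new decision domain update rule differs, so this sketch shows only the baseline form.

```python
def luciferin_update(ell, J_x, rho=0.4, gamma=0.6):
    """ell(t+1) = (1 - rho) * ell(t) + gamma * J(x(t+1)):
    decay the old luciferin and add the current objective value."""
    return (1 - rho) * ell + gamma * J_x

def decision_range_update(r_d, n_neighbors, r_s, beta=0.08, n_target=5):
    """Grow the decision domain when neighbors are scarce, shrink it
    when they are plentiful, clamped to [0, r_s] (visibility range)."""
    return min(r_s, max(0.0, r_d + beta * (n_target - n_neighbors)))
```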

Relevance:

60.00%

Publisher:

Abstract:

This study aims to determine optimal locations of dual trailing-edge flaps and blade stiffness to achieve minimum hub vibration levels in a helicopter, with a low penalty in terms of required trailing-edge flap control power. An aeroelastic analysis based on finite elements in space and time is used in conjunction with an optimal control algorithm to determine the flap time history for vibration minimization. Using the aeroelastic analysis, it is found that the objective functions are highly nonlinear and that polynomial response surface approximations cannot describe the objectives adequately. A neural network is therefore used to approximate the objective functions for optimization. Pareto-optimal points minimizing both helicopter vibration and flap power are obtained using the response surface and neural network metamodels. The two metamodels give useful improved designs, resulting in about a 27% reduction in hub vibration and about a 45% reduction in flap power. However, the design obtained using the response surface is less sensitive to small perturbations in the design variables.
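A minimal sketch of the neural-network metamodel step, using synthetic training data in place of the aeroelastic analysis samples; the design-variable dimension and network size are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Fit a small neural network to sampled (design variables -> objective)
# pairs, then query the cheap surrogate inside the optimizer instead of
# the expensive simulation. The training data below is synthetic.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 4))             # stand-in design variables
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.standard_normal(200)

surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                         random_state=0).fit(X, y)

# The optimizer now evaluates surrogate.predict(...) for candidate
# designs instead of running the aeroelastic analysis each time.
print(surrogate.predict(X[:3]))
```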

Relevance:

60.00%

Publisher:

Abstract:

Genetic algorithms (GAs) are search methods that are being employed in a multitude of applications with extremely large search spaces. Recently, there has been considerable interest among GA researchers in understanding and formalizing the working of GAs. In an earlier paper, we introduced the notion of binomially distributed populations as the central idea behind an exact "populationary" model of the large-population dynamics of the GA operators for objective functions called "functions of unitation". In this paper, we extend this populationary model of GA dynamics to a more general class of objective functions called functions of unitation variables. We generalize the notion of a binomially distributed population to a generalized binomially distributed population (GBDP). We show that the effects of selection, crossover, and mutation can be exactly modelled after decomposing the population into GBDPs. Based on this generalized model, we have implemented a GA simulator for functions of two unitation variables, GASIM 2, and the distributions predicted by GASIM 2 match those obtained from actual GA runs. The generalized populationary model of GA dynamics not only presents a novel and natural way of interpreting the workings of GAs with large populations, but also provides for an efficient implementation of the model as a GA simulator.
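For a function of unitation f(j), where j counts the ones in an l-bit string, the populationary model works on class proportions rather than individual strings. The sketch below applies exact proportional selection to a binomially distributed population; the fitness function is illustrative, not taken from the paper.

```python
import numpy as np
from scipy.stats import binom

l = 10                                    # string length
f = np.arange(l + 1, dtype=float) + 1.0   # illustrative f(j) = j + 1

# Initial population: each bit is 1 with probability 0.5, so the
# proportions over the unitation classes j = 0..l are exactly binomial.
p = binom.pmf(np.arange(l + 1), l, 0.5)

for _ in range(5):                        # a few selection steps
    p = p * f / np.dot(p, f)              # exact proportional selection:
                                          # p'_j = p_j f(j) / sum_k p_k f(k)

print(p.argmax(), p.max())                # mass shifts to high-fitness classes
```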