940 results for Evolutionary optimization methods


Relevance: 100.00%

Abstract:

Earthworks tasks are often regarded as some of the most demanding processes in transportation projects. In fact, sequential tasks such as excavation, transportation, spreading and compaction rely heavily on mechanical equipment and repetitive processes, and are therefore as economically demanding as they are time-consuming. Moreover, current construction requirements impose higher demands for productivity and safety in earthwork construction. Given the share of infrastructure construction costs and duration attributable to earthworks, the optimal use of every resource in these tasks is paramount. Given its characteristics, an earthwork construction can be regarded as a production line based on resources (mechanical equipment) and dependency relations between sequential tasks, and is therefore amenable to optimization. Up to the present, the steady development of Information Technology areas such as databases, artificial intelligence and operations research has resulted in the emergence of several technologies with potential application to that end. Among these, modern optimization methods (also known as metaheuristics), such as evolutionary computation, have the potential to find high-quality, near-optimal solutions with a reasonable use of computational resources. In this context, this work describes an optimization algorithm for earthwork equipment allocation based on a modern optimization approach, which exploits the notion that an earthwork construction can be regarded as a production line.
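
The production-line view lends itself to a small illustration. The sketch below is not the algorithm described above; it is a minimal genetic-algorithm toy that allocates a fixed fleet of machines to the four sequential tasks so that the slowest (bottleneck) task is as fast as possible. The workloads, production rates and fleet size are invented for the example.

```python
import random

# Hypothetical data: workload (m^3) and per-machine hourly rate for each
# sequential earthwork task; FLEET is the total number of machines available.
TASKS = {"excavation": (12000, 90), "transport": (12000, 60),
         "spreading": (12000, 110), "compaction": (12000, 130)}
FLEET = 20

def fitness(alloc):
    """Production-line proxy: the slowest task sets the pace (lower is better)."""
    if sum(alloc) > FLEET or min(alloc) < 1:
        return float("inf")                      # infeasible allocation
    return max(work / (n * rate)                 # hours needed by each task
               for n, (work, rate) in zip(alloc, TASKS.values()))

def evolve(pop_size=30, generations=200, mut_prob=0.3):
    n_tasks = len(TASKS)
    pop = [[random.randint(1, FLEET // n_tasks) for _ in range(n_tasks)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[:pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_tasks)
            child = a[:cut] + b[cut:]            # one-point crossover
            if random.random() < mut_prob:       # +/-1 machine mutation
                i = random.randrange(n_tasks)
                child[i] = max(1, child[i] + random.choice((-1, 1)))
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
print(dict(zip(TASKS, best)), "bottleneck hours:", round(fitness(best), 1))
```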

Relevance: 100.00%

Abstract:

Doctoral thesis in Civil Engineering.

Relevance: 100.00%

Abstract:

Strut-and-tie models are widely used for certain types of reinforced concrete structural elements and for regions with a complex stress state, called D-regions, where the distribution of strains in the cross section is not linear. This paper introduces a numerical technique to determine strut-and-tie models using a variant of classical Evolutionary Structural Optimization called Smooth Evolutionary Structural Optimization. The basic idea of this technique is to identify the numerical flow of stresses generated in the structure, defining the strut and tie members in a more technical and rational way, and to quantify their values for subsequent structural design. The paper presents a performance index, based on the evolutionary topology optimization method, for automatically generating optimal strut-and-tie models in reinforced concrete structures with stress constraints. In the proposed approach, the element with the lowest von Mises stress is selected for removal, while the performance index is used to monitor the evolutionary optimization process. A comparative analysis of strut-and-tie models for beams is then presented, with examples from the literature that demonstrate the efficiency of this formulation. © 2013 Elsevier Ltd.
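
As a rough illustration of the evolutionary removal loop (not the paper's SESO formulation or its specific performance index), the sketch below removes the elements with the lowest von Mises stress under a growing rejection ratio; the finite element analysis is replaced by a placeholder stress field, so the numbers only show the control flow.

```python
import numpy as np

def von_mises_stress(active):
    """Placeholder for a finite element analysis of the current topology.
    A real SESO/ESO implementation would re-solve the FE model and return
    one von Mises stress per active element."""
    rng = np.random.default_rng(0)
    base = rng.uniform(10.0, 100.0, size=active.size)   # fixed fake stress field
    return np.where(active, base, 0.0)

def eso_strut_and_tie(n_elem=400, rr0=0.01, er=0.01, target_fraction=0.3):
    """Evolutionary removal of the least-stressed elements.

    rr0: initial rejection ratio; er: evolutionary rate (classic ESO parameters).
    Elements whose stress falls below rr * max stress are removed until only
    `target_fraction` of the domain remains."""
    active = np.ones(n_elem, dtype=bool)
    rr = rr0
    while active.mean() > target_fraction:
        stress = von_mises_stress(active)
        s_max = stress[active].max()
        removable = active & (stress < rr * s_max)
        if not removable.any():          # steady state: raise the threshold
            rr += er
            continue
        active[removable] = False
        # a simple monitoring proxy: surviving material fraction
        print(f"rr={rr:.2f}  elements left={active.sum():4d}")
    return active

remaining = eso_strut_and_tie()
```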

Relevance: 100.00%

Abstract:

In this paper we deal with the problem of boosting the Optimum-Path Forest (OPF) clustering approach using evolutionary optimization techniques. Since the OPF classifier performs an exhaustive search to find the size of the sample neighborhood that reaches the minimum graph cut, used as a quality measure, we compared several optimization techniques that can obtain graph-cut values close to those obtained by brute force. Experiments on two public datasets in the context of unsupervised network intrusion detection have shown that evolutionary optimization techniques can find suitable neighborhood sizes faster than the exhaustive search. Additionally, we have shown that it is not necessary to employ many agents for this task, since the neighborhood size takes discrete values, which constrains the set of possible solutions to a few candidates.
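
Since the neighborhood size is a single discrete parameter, the search can be illustrated compactly. The sketch below is not the OPF pipeline itself: the graph-cut measure is replaced by a placeholder function, and the evolutionary search is a generic small-population scheme, only meant to show why few agents suffice against an exhaustive scan.

```python
import random

def normalized_cut(k):
    """Placeholder for the OPF clustering quality measure: in practice this
    would build the k-nn graph, run OPF clustering and return the graph cut."""
    return (k - 7) ** 2 + 0.1 * k        # fake landscape with its minimum near k = 7

K_MAX = 50

def exhaustive():
    return min(range(1, K_MAX + 1), key=normalized_cut)

def evolutionary(pop_size=6, generations=15):
    """Tiny (mu + lambda)-style search over the discrete k values; few agents
    are enough because the solution space has only K_MAX candidates."""
    pop = random.sample(range(1, K_MAX + 1), pop_size)
    for _ in range(generations):
        offspring = [max(1, min(K_MAX, k + random.choice((-2, -1, 1, 2))))
                     for k in pop]
        pop = sorted(set(pop + offspring), key=normalized_cut)[:pop_size]
    return pop[0]

print("exhaustive best k:", exhaustive())
print("evolutionary best k:", evolutionary())
```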

Relevance: 100.00%

Abstract:

The design of a network is a solution to several engineering and science problems. Several network design problems are known to be NP-hard, and population-based metaheuristics like evolutionary algorithms (EAs) have been widely investigated for such problems. These optimization methods simultaneously generate a large number of potential solutions to explore the search space in breadth and, consequently, to avoid local optima. Obtaining a potential solution usually involves the construction and maintenance of several spanning trees or, more generally, spanning forests. To efficiently explore the search space, special data structures have been developed to provide operations that manipulate a set of spanning trees (a population). For a tree with n nodes, the most efficient data structures available in the literature require O(n) time to generate a new spanning tree that modifies an existing one and to store the new solution. We propose a new data structure, called the node-depth-degree representation (NDDR), and we demonstrate that, using this encoding, generating a new spanning forest requires O(√n) time on average. Experiments with an EA based on the NDDR applied to large-scale instances of the degree-constrained minimum spanning tree problem show that the implementation adds only small constants and lower-order terms to the theoretical bound.
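
A flavour of why such encodings are efficient: if the tree is stored in depth-first order as (node, depth) pairs, every subtree is a contiguous slice, so moving a subtree is a splice rather than a full rebuild. The sketch below illustrates only that idea; it is not the NDDR itself (it omits the degree information and the array layout behind the O(√n) average-time bound).

```python
def subtree_slice(nd, root_index):
    """Return the slice of the node-depth list covering the subtree rooted at
    nd[root_index]: all following entries with strictly greater depth."""
    root_depth = nd[root_index][1]
    end = root_index + 1
    while end < len(nd) and nd[end][1] > root_depth:
        end += 1
    return nd[root_index:end]

def transfer_subtree(nd, root_index, new_parent):
    """Detach the subtree rooted at nd[root_index] and reattach it under the
    node `new_parent` (which must lie outside that subtree), adjusting depths.
    This is the kind of move a forest-based EA applies to derive a new
    spanning tree from an existing one."""
    sub = subtree_slice(nd, root_index)
    rest = nd[:root_index] + nd[root_index + len(sub):]
    pos = next(i for i, (n, _) in enumerate(rest) if n == new_parent)
    delta = rest[pos][1] + 1 - sub[0][1]
    shifted = [(n, d + delta) for n, d in sub]
    return rest[:pos + 1] + shifted + rest[pos + 1:]

nd = [(0, 0), (1, 1), (2, 2), (3, 2), (4, 1)]   # DFS encoding of 0 -> {1 -> {2, 3}, 4}
print(transfer_subtree(nd, 1, 4))                # moves node 1's subtree under node 4
```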

Relevance: 100.00%

Abstract:

This paper presents an improved NSGA-II (Non-Dominated Sorting Genetic Algorithm II) that incorporates a parameter-free self-tuning approach based on a reinforcement learning technique, called the Non-Dominated Sorting Genetic Algorithm Based on Reinforcement Learning (NSGA-RL). The proposed method is compared in particular with the classical NSGA-II when applied to a satellite coverage problem. Furthermore, the optimization results are not only compared with those obtained by other multiobjective optimization methods, but the method also offers the advantage of requiring no time-consuming and complex parameter tuning.
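
The paper's exact reinforcement learning scheme is not detailed in the abstract, so the sketch below is only a generic stand-in: an epsilon-greedy agent picks the mutation rate for each generation of a toy single-objective hill climber and is rewarded by the improvement it produces. The action set, reward and objective are all invented for the illustration.

```python
import random

ACTIONS = [0.01, 0.05, 0.1, 0.2]            # candidate mutation rates
q = {a: 0.0 for a in ACTIONS}                # estimated value of each rate
counts = {a: 0 for a in ACTIONS}

def choose(eps=0.1):
    """Epsilon-greedy selection of the mutation rate for this generation."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(q, key=q.get)

def update(action, reward):
    counts[action] += 1
    q[action] += (reward - q[action]) / counts[action]   # incremental mean

def sphere(x):                               # toy objective to be minimized
    return sum(v * v for v in x)

best = [random.uniform(-5, 5) for _ in range(10)]
for generation in range(300):
    rate = choose()
    child = [v + random.gauss(0, 1) if random.random() < rate else v for v in best]
    reward = sphere(best) - sphere(child)    # positive if the child is better
    update(rate, reward)
    if reward > 0:
        best = child

print("best value:", round(sphere(best), 4),
      "learned rate values:", {a: round(v, 3) for a, v in q.items()})
```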

Relevance: 100.00%

Abstract:

This paper presents a new approach to the delineation of local labor markets based on evolutionary computation. The aim of the exercise is the division of a given territory into functional regions based on travel-to-work flows. Such regions are defined so that a high degree of inter-regional separation and of intra-regional integration, both in terms of commuting flows, is guaranteed. Additional requirements include the absence of overlap between delineated regions and the exhaustive coverage of the whole territory. The procedure is based on the maximization of a fitness function that measures aggregate intra-region interaction under constraints of inter-region separation and minimum size. In the experimentation stage, two variants of the fitness function are used, and the process is also applied as a final stage to optimize the results of one of the most successful existing methods, used by the British authorities for the delineation of travel-to-work areas (TTWAs). The empirical exercise is conducted using real data for a sufficiently large territory that is considered representative given the density and variety of travel-to-work patterns it embraces. The paper includes a quantitative comparison with traditional alternative methods, an assessment of the performance of the set of operators specifically designed to handle the regionalization problem, and an evaluation of the convergence process. The robustness of the solutions, crucial in a research and policy-making context, is also discussed.
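
A hedged sketch of the kind of fitness such a procedure maximizes is given below: aggregate self-containment of the regions, computed from a travel-to-work flow matrix, with candidate groupings rejected when a region falls below a minimum size. The flow matrix, threshold and exact formula are illustrative assumptions, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(1)
flows = rng.integers(0, 200, size=(8, 8))        # flows[i, j]: commuters from zone i to zone j

def fitness(assignment, flows, min_workers=300):
    """Average self-containment of the regions defined by `assignment`."""
    regions = np.unique(assignment)
    score = 0.0
    for r in regions:
        mask = assignment == r
        internal = flows[np.ix_(mask, mask)].sum()   # commuters who live and work inside r
        workers = flows[:, mask].sum()               # everyone who works in r
        if workers < min_workers:
            return 0.0                               # violates the minimum-size constraint
        score += internal / workers                  # self-containment of region r
    return score / len(regions)

assignment = np.array([0, 0, 0, 1, 1, 1, 2, 2])      # candidate grouping of the 8 zones
print(round(fitness(assignment, flows), 3))
```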

Relevance: 90.00%

Abstract:

Aims. We determine the age and mass of the three best solar twin candidates in the open cluster M 67 through lithium evolutionary models. Methods. We computed a grid of evolutionary models with non-standard mixing at metallicity [Fe/H] = 0.01 with the Toulouse-Geneva evolution code for a range of stellar masses. We estimated the mass and age of 10 solar analogs belonging to the open cluster M 67 and made a detailed study of the three solar twins of the sample: YPB637, YPB1194, and YPB1787. Results. We obtained a very accurate estimate of the mass of our solar analogs in M 67 by interpolating in the grid of evolutionary models. The three solar twins allowed us to estimate the age of the open cluster as 3.87 (+0.55, -0.66) Gyr, which is better constrained than former estimates. Conclusions. Our results show that the three solar twin candidates have one solar mass within the errors and that M 67 has a solar age within the errors, validating its use as a solar proxy. M 67 is an important cluster when searching for solar twins.

Relevance: 90.00%

Abstract:

Context. Previous analyses of lithium abundances in main-sequence and red giant stars have revealed the action of mixing mechanisms other than convection in stellar interiors. Beryllium abundances in stars with Li abundance determinations can offer valuable complementary information on the nature of these mechanisms. Aims. Our aim is to derive Be abundances along the whole evolutionary sequence of an open cluster. We focus on the well-studied open cluster IC 4651. These Be abundances are used with previously determined Li abundances, in the same sample stars, to investigate the mixing mechanisms in a range of stellar masses and evolutionary stages. Methods. Atmospheric parameters were adopted from a previous abundance analysis by the same authors. New Be abundances have been determined from high-resolution, high signal-to-noise UVES spectra using spectrum synthesis and model atmospheres. Careful synthetic modeling of the Be line region is used to calculate reliable abundances in rapidly rotating stars. The observed behavior of Be and Li is compared to theoretical predictions from stellar models including rotation-induced mixing, internal gravity waves, atomic diffusion, and thermohaline mixing. Results. Beryllium is detected in all the main-sequence and turn-off sample stars, both slow- and fast-rotating stars, including the Li-dip stars, but is not detected in the red giants. Confirming previous results, we find that the Li dip is also a Be dip, although the depletion of Be is more modest than that of Li in the corresponding effective temperature range. For post-main-sequence stars, the Be dilution starts earlier within the Hertzsprung gap than expected from classical predictions, as does the Li dilution. A clear dispersion in the Be abundances is also observed. Theoretical stellar models including the hydrodynamical transport processes mentioned above are able to reproduce all the observed features well. These results show a good theoretical understanding of the Li and Be behavior along the color-magnitude diagram of this intermediate-age cluster for stars more massive than 1.2 M⊙.

Relevance: 90.00%

Abstract:

Considerable research effort has been devoted to the optimal planning of distribution systems. The nonlinear nature of the system, the need to consider a large number of scenarios, and the increasing necessity to deal with uncertainties make optimal planning of distribution systems a difficult task. Heuristic approaches have been proposed to deal with these issues, overcoming some of the inherent difficulties of classical methodologies. This paper considers several methodologies used to address planning problems in electrical power distribution networks, namely mixed-integer linear programming (MILP), ant colony algorithms (AC), genetic algorithms (GA), tabu search (TS), branch exchange (BE), simulated annealing (SA), and the Benders decomposition deterministic nonlinear optimization technique (BD). The adequacy of these techniques to deal with uncertainties is discussed. The behaviour of each optimization technique is compared in terms of the quality of the solution obtained and the performance of the methodology. The paper presents results of the application of these optimization techniques to a real case: a 10 kV electrical distribution system with 201 nodes that feeds an urban area.

Relevance: 90.00%

Abstract:

The filter method is a technique for solving nonlinear programming problems. The filter algorithm has two phases in each iteration: the first reduces a measure of infeasibility, while the second reduces the objective function value. In real optimization problems, the objective function is often not differentiable or its derivatives are unknown. In these cases it becomes essential to use optimization methods for which calculating the derivatives, or verifying their existence, is not necessary: direct search methods and derivative-free methods are examples of such techniques. In this work we present a new direct search method, based on simplex methods, for general constrained optimization that combines the features of simplex and filter methods. The method neither computes nor approximates derivatives, penalty constants, or Lagrange multipliers.
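
The filter idea itself is compact enough to sketch: a trial point, summarized by its infeasibility measure and objective value, is accepted only if no previously stored pair dominates it. The code below shows just that acceptance test; the surrounding simplex machinery of the proposed method is omitted.

```python
def dominates(a, b):
    """Pair a = (infeasibility, objective) dominates b if it is no worse in both entries."""
    return a[0] <= b[0] and a[1] <= b[1]

class Filter:
    def __init__(self):
        self.entries = []                       # stored (infeasibility, objective) pairs

    def acceptable(self, h, f):
        """A trial point is acceptable if no stored entry dominates it."""
        return not any(dominates(e, (h, f)) for e in self.entries)

    def add(self, h, f):
        """Insert the pair and drop any entries it dominates."""
        self.entries = [e for e in self.entries if not dominates((h, f), e)]
        self.entries.append((h, f))

flt = Filter()
flt.add(2.0, 10.0)                              # infeasible but good objective
flt.add(0.0, 15.0)                              # feasible but worse objective
print(flt.acceptable(1.0, 12.0))                # True: dominated by neither entry
print(flt.acceptable(3.0, 11.0))                # False: (2.0, 10.0) dominates it
```

The appeal of the filter is that it removes the need to choose a penalty parameter: feasibility and objective are traded off by dominance rather than by a weighted sum.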

Relevance: 90.00%

Abstract:

Penalty and barrier methods are normally used to solve constrained nonlinear optimization problems. Such problems appear in areas such as engineering and are often characterised by the fact that the functions involved (objective and constraints) are non-smooth and/or their derivatives are unknown, which means that optimization methods based on derivatives cannot be used. A Java-based API, including only derivative-free optimization methods, was implemented to solve both constrained and unconstrained problems; it includes penalty and barrier methods. In this work a new penalty function, based on fuzzy logic, is presented. This function imposes a progressive penalization on solutions that violate the constraints: the penalization is low when the violation of the constraints is low and heavy when the violation is high. The value of the penalization is not known beforehand; it is the outcome of a fuzzy inference engine. Numerical results comparing the proposed function with two classic penalty/barrier functions are presented. The results show that the proposed penalty function, besides being very robust, also exhibits very good performance.
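
The fuzzy inference engine is not described in the abstract, so the sketch below replaces it with a simple smooth ramp, purely to illustrate the notion of a progressive penalty: a small weight for small violations and a heavy one for large violations. The breakpoints and weights are illustrative assumptions.

```python
def violation(x, constraints):
    """Total constraint violation for constraints written as g(x) <= 0."""
    return sum(max(0.0, g(x)) for g in constraints)

def progressive_weight(v, low=0.1, high=1.0):
    """Small weight for small violations, heavy weight for large ones,
    with a smooth ramp in between (a crude stand-in for fuzzy inference)."""
    if v <= low:
        return 1.0
    if v >= high:
        return 1000.0
    t = (v - low) / (high - low)            # position inside the ramp, 0..1
    return 1.0 + t * (1000.0 - 1.0)

def penalized(f, x, constraints):
    v = violation(x, constraints)
    return f(x) + progressive_weight(v) * v

# toy problem: minimize (x - 2)^2 subject to x <= 1
f = lambda x: (x - 2.0) ** 2
constraints = [lambda x: x - 1.0]
for x in (0.5, 1.05, 1.8):
    print(x, round(penalized(f, x, constraints), 3))
```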

Relevance: 90.00%

Abstract:

Search optimization methods are needed to solve optimization problems in which the objective function and/or constraint functions may be non-differentiable or non-convex, or in which it may not be possible to determine their analytical expressions, either because of their complexity or their cost (monetary, computational, time, ...). Many optimization problems in engineering and other fields have these characteristics, because function values may result from experimental or simulation processes, may be modelled by functions with complex expressions or by noisy functions, and it may be impossible or very difficult to calculate their derivatives. Direct search optimization methods use only function values and do not need derivatives or approximations of them. In this work we present a Java API including several derivative-free methods and algorithms to solve constrained and unconstrained optimization problems. Traditional API access, by installing it on the developer's and/or user's computer, and remote access to it, using Web Services, are both presented. Remote access to the API has the advantage of always giving access to its latest version. For users who simply want a tool to solve nonlinear optimization problems and do not want to integrate these methods into applications, two applications were also developed: a standalone Java application and a Web-based application, both using the developed API.

Relevance: 90.00%

Abstract:

Constrained nonlinear optimization problems are usually solved using penalty or barrier methods combined with unconstrained optimization methods. Another alternative for solving constrained nonlinear optimization problems is the filter method. Filter methods, introduced by Fletcher and Leyffer in 2002, have been widely used in several areas of constrained nonlinear optimization. These methods treat the optimization problem as a bi-objective one, attempting to minimize both the objective function and a continuous function that aggregates the constraint violation functions. Audet and Dennis presented the first filter method for derivative-free nonlinear programming, based on pattern search methods. Motivated by this work, we have developed a new direct search method, based on simplex methods, for general constrained optimization that combines the features of the simplex method and the filter method. This work presents a new variant of these methods which combines the filter method with other direct search methods, and proposes some alternatives for aggregating the constraint violation functions.

Relevance: 90.00%

Abstract:

Constrained and unconstrained nonlinear optimization problems often appear in many engineering areas. In some of these cases it is not possible to use derivative-based optimization methods because the objective function is unknown, too complex, or non-smooth. In these cases, direct search methods may be the most suitable optimization methods. An Application Programming Interface (API) including some of these methods was implemented using Java technology. This API can be accessed either by applications running on the same computer where it is installed or remotely, through a LAN or the Internet, using web services. From the engineering point of view, the information needed from the API is the solution to the given problem. From the point of view of optimization researchers, however, the solution alone is not enough: additional information about the iterative process is also useful, such as the number of iterations, the value of the solution at each iteration, and the stopping criteria. This paper presents the features added to the API to allow users to access the iterative process data.