16 results for Lagrangian bounds in optimization problems
in Biblioteca Digital da Produção Intelectual da Universidade de São Paulo
Abstract:
Many engineering sectors are challenged by multi-objective optimization problems. Although the idea behind these problems is simple and well established, implementing a procedure to solve them is not a trivial task. The use of evolutionary algorithms to find candidate solutions is widespread; usually they supply a discrete picture of the non-dominated solutions, a Pareto set. Although it is very interesting to know the non-dominated solutions, an additional criterion is needed to select the one solution to be deployed. To better support the design process, this paper presents a new method for solving non-linear multi-objective optimization problems by adding a control function that guides the optimization process over a Pareto set that never needs to be found explicitly. The proposed methodology differs from the classical methods that combine the objective functions into a single scalar measure, and is based on a single run of non-linear single-objective optimizers.
Abstract:
Recently, research has shown that the performance of metaheuristics can be affected by population initialization. Opposition-based Differential Evolution (ODE), Quasi-Oppositional Differential Evolution (QODE), and Uniform-Quasi-Opposition Differential Evolution (UQODE) are three state-of-the-art methods that improve the performance of the Differential Evolution algorithm through population initialization and different search strategies. Taking a different approach to achieve similar results, this paper presents a technique to discover promising regions in the continuous search space of an optimization problem. Using machine-learning techniques, the algorithm, named Smart Sampling (SS), finds regions with a high probability of containing a global optimum. A metaheuristic can then be initialized inside each region to find that optimum. SS and DE were combined (originating the SSDE algorithm) to evaluate the approach, and experiments were conducted on the same set of benchmark functions used by the ODE, QODE and UQODE authors. Results show that the total number of function evaluations required by DE to reach the global optimum can be significantly reduced, and that the success rate improves, if SS is employed first. These results agree with the literature on the importance of an adequate starting population. Moreover, SS finds initial populations of superior quality compared with the other three algorithms that employ oppositional learning. Finally, and most importantly, the performance of SS in finding promising regions is independent of the metaheuristic with which it is combined, making SS suitable for improving the performance of a large variety of optimization techniques. (C) 2012 Elsevier Inc. All rights reserved.
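The abstract does not describe Smart Sampling's internals, so the following is only an illustrative sketch of the general "find promising regions first" idea: where SS fits machine-learning models, this toy stand-in simply keeps the best fraction of uniform samples and shrinks the search box to their bounding box. The function name and parameters are assumptions for illustration.

```python
import numpy as np

def promising_region(f, bounds, n=200, keep=0.2, iters=5, rng=None):
    """Toy sketch of region discovery before metaheuristic initialization.

    The actual Smart Sampling algorithm uses machine-learning models to
    delimit promising regions; here we merely keep the best-scoring
    fraction of uniform samples and shrink the box to their bounding
    box, repeating a few times.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    lo, hi = (np.asarray(b, dtype=float) for b in bounds)
    for _ in range(iters):
        X = rng.uniform(lo, hi, size=(n, lo.size))
        y = f(X)
        best = X[np.argsort(y)[: max(2, int(keep * n))]]
        lo, hi = best.min(axis=0), best.max(axis=0)
    return lo, hi   # box in which a metaheuristic such as DE could start
```

A metaheuristic would then be initialized inside the returned box, mirroring the SS-then-DE pipeline the abstract describes.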
Abstract:
This paper proposes two new approaches for the sensitivity analysis of multiobjective design optimization problems whose performance functions are highly susceptible to small variations in the design variables and/or design environment parameters. In both methods, the less sensitive design alternatives are preferred over others during the multiobjective optimization process. In the first approach, the designer chooses the design variables and/or parameters that cause uncertainty, associates a robustness index with each design alternative, and adds each index as an objective function in the optimization problem. For the second approach, the designer must know, a priori, the interval of variation in the design variables or in the design environment parameters, because the resulting interval of variation in the objective functions will have to be accepted. The second method does not require any probability distribution law for the uncontrollable variations. Finally, the authors give two illustrative examples to highlight the contributions of the paper.
Abstract:
In this paper, the effects of uncertainty and of expected costs of failure on optimum structural design are investigated by comparing three distinct formulations of structural optimization problems. Deterministic Design Optimization (DDO) allows one to find the shape or configuration of a structure that is optimum in terms of mechanics, but the formulation grossly neglects parameter uncertainty and its effects on structural safety. Reliability-Based Design Optimization (RBDO) has emerged as an alternative to properly model the safety-under-uncertainty part of the problem. With RBDO, one can ensure that a minimum (and measurable) level of safety is achieved by the optimum structure. However, results depend on the failure probabilities used as constraints in the analysis. Risk Optimization (RO) broadens the scope of the problem by addressing the competing goals of economy and safety. This is accomplished by quantifying the monetary consequences of failure, as well as the costs associated with construction, operation and maintenance. RO yields the optimum topology and the optimum point of balance between economy and safety. Results are compared for some example problems. The broader RO solution is found first, and the optimum results are used as constraints in DDO and RBDO. Results show that even when optimum safety coefficients are used as constraints in DDO, the formulation leads to configurations that respect these design constraints and reduce manufacturing costs, but increase total expected costs (including expected costs of failure). When the (optimum) system failure probability is used as a constraint in RBDO, the solution also reduces manufacturing costs, but again at the expense of increased total expected costs. This happens when the costs associated with different failure modes are distinct. Hence, a general equivalence between the formulations cannot be established.
Optimum structural design considering expected costs of failure cannot be controlled solely by safety factors or by failure probability constraints; it depends on the actual structural configuration. (c) 2011 Elsevier Ltd. All rights reserved.
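The economy-versus-safety balance that RO formalizes can be illustrated with a toy total-expected-cost model. All cost figures and the linear construction-cost law below are hypothetical, not taken from the paper; only the structure of the trade-off (construction cost rising, expected failure cost falling, with an interior optimum) reflects the abstract.

```python
from math import erf, sqrt

def total_expected_cost(beta, c0=1.0, k=0.2, cf=50.0):
    """Toy version of the risk-optimization (RO) trade-off: construction
    cost grows with the safety index beta while the expected cost of
    failure Pf * cf shrinks, so the total has an interior minimum.
    c0, k and cf are illustrative numbers, not data from the paper.
    """
    pf = 0.5 * (1.0 - erf(beta / sqrt(2.0)))    # Pf = Phi(-beta), standard normal
    return c0 + k * beta + pf * cf
```

Scanning beta over, say, [0, 5] locates a minimum strictly between the unsafe and the over-designed extremes, which is the "optimum point of balance" the abstract refers to.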
Abstract:
Over the past few years, the field of global optimization has been very active, producing different kinds of deterministic and stochastic algorithms for optimization in the continuous domain. Nowadays, the use of evolutionary algorithms (EAs) to solve optimization problems is common practice due to their competitive performance on complex search spaces. EAs are well known for their ability to deal with nonlinear and complex optimization problems. Differential evolution (DE) algorithms are a family of evolutionary optimization techniques that use a rather greedy and less stochastic approach to problem solving than classical evolutionary algorithms. The main idea is to construct, at each generation and for each element of the population, a mutant vector through a specific mutation operation: the difference between randomly selected elements of the population is added to another element. Owing to its simple implementation, minimal mathematical processing and good optimization capability, DE has attracted considerable attention. This paper proposes a new approach to solving electromagnetic design problems that combines the DE algorithm with a generator of chaotic sequences. The approach is tested on the design of a loudspeaker model with 17 degrees of freedom, to show its applicability to electromagnetic problems. The results show that the DE algorithm with chaotic sequences delivers better, or at least similar, results compared with the standard DE algorithm and other evolutionary algorithms available in the literature.
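The mutation operation described above is the classic DE/rand/1 rule. A minimal sketch of one generation of trial-vector construction follows; it is the standard algorithm, not the paper's chaotic-sequence variant, and all parameter values are conventional defaults.

```python
import numpy as np

def de_step(pop, F=0.8, CR=0.9, rng=None):
    """One generation of classic DE/rand/1/bin trial-vector construction.

    For each target vector x_i, a mutant v = x_r1 + F * (x_r2 - x_r3) is
    built from three distinct, randomly chosen population members (all
    different from i), then binomially crossed with x_i. Greedy selection
    is left to the caller, since it needs the objective function.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    n, d = pop.shape
    trials = np.empty_like(pop)
    for i in range(n):
        candidates = [j for j in range(n) if j != i]
        r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])
        cross = rng.random(d) < CR
        cross[rng.integers(d)] = True   # at least one component from the mutant
        trials[i] = np.where(cross, mutant, pop[i])
    return trials
```

Greedy selection then keeps `trials[i]` only when it improves on `pop[i]`; pass an explicit `numpy.random.Generator` so successive generations use fresh randomness.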
Abstract:
This paper presents a metaheuristic algorithm inspired by evolutionary computation and swarm intelligence concepts and by the fundamentals of the echolocation of microbats. The aim is to solve the mono- and multiobjective optimization problems related to the brushless DC wheel motor benchmark, which has 5 design parameters and 6 constraints in the mono-objective version and 2 objectives, 5 design parameters, and 5 constraints in the multiobjective version. Furthermore, the results are compared with other optimization approaches proposed in the recent literature, showing the feasibility of this newly introduced technique for highly nonlinear problems in electromagnetics.
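The abstract does not spell out the update equations, so here is a hedged toy sketch of the standard mono-objective bat algorithm (frequency-tuned velocities, loudness and pulse-rate schedules); the paper's multiobjective machinery and motor constraints are omitted, and every parameter value is an illustrative guess.

```python
import numpy as np

def bat_search(f, lo, hi, n=30, iters=200, fmin=0.0, fmax=2.0,
               alpha=0.9, gamma=0.9, rng=None):
    """Toy sketch of the standard bat algorithm (minimization).

    Frequencies tune how strongly velocities pull bats toward the best
    solution; loudness A and pulse rate r trade global for local search.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    d = lo.size
    x = rng.uniform(lo, hi, (n, d))
    v = np.zeros((n, d))
    A = np.ones(n)                      # loudness
    r = np.zeros(n)                     # pulse emission rate
    fit = f(x)
    best = x[np.argmin(fit)].copy()
    for t in range(1, iters + 1):
        freq = fmin + (fmax - fmin) * rng.random((n, 1))
        v += (x - best) * freq
        xnew = np.clip(x + v, lo, hi)
        walk = rng.random(n) > r        # quiet pulse: local walk near best
        k = int(walk.sum())
        if k:
            xnew[walk] = np.clip(
                best + 0.01 * A[walk, None] * rng.standard_normal((k, d)),
                lo, hi)
        fnew = f(xnew)
        accept = (fnew < fit) & (rng.random(n) < A)
        x[accept] = xnew[accept]
        fit[accept] = fnew[accept]
        A[accept] *= alpha                      # accepted bats get quieter...
        r[accept] = 1.0 - np.exp(-gamma * t)    # ...and pulse faster
        if fit.min() < f(best[None])[0]:
            best = x[np.argmin(fit)].copy()
    return best, float(f(best[None])[0])
```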
Abstract:
Biogeography is the science that studies the geographical distribution and migration of species in an ecosystem. Biogeography-based optimization (BBO) is a recently developed global optimization algorithm that generalizes biogeography concepts to evolutionary computation, and it has shown its ability to solve complex optimization problems. BBO employs a migration operator to share information between problem solutions: the solutions are identified with habitats, and the sharing of features is called migration. In this paper, a multiobjective BBO combined with a predator-prey approach (PPBBO) is proposed and validated on the constrained design of a brushless DC wheel motor. The results demonstrate that the proposed PPBBO approach converges to promising solutions in terms of quality and dominance when compared with the classical BBO in a multiobjective version.
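The migration operator can be sketched as follows. This is the textbook single-objective BBO migration with linear rates; the paper's predator-prey and multiobjective layers are not reproduced, and the rate model is an assumption.

```python
import numpy as np

def bbo_migration(pop, fitness, rng=None):
    """Sketch of the BBO migration operator (minimization, n >= 2 habitats).

    Habitats are ranked by fitness; good habitats receive high emigration
    rates (mu) and low immigration rates (lam), so solution features flow
    from good habitats to poor ones.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    n, d = pop.shape
    order = np.argsort(fitness)                # best habitat first
    rank = np.empty(n)
    rank[order] = np.arange(n)
    mu = 1.0 - rank / (n - 1)                  # emigration: 1 for the best
    lam = 1.0 - mu                             # immigration: 1 for the worst
    new = pop.copy()
    for i in range(n):
        for j in range(d):
            if rng.random() < lam[i]:          # habitat i accepts an immigrant
                src = rng.choice(n, p=mu / mu.sum())
                new[i, j] = pop[src, j]
    return new
```

Note two consequences that the test below checks: the best habitat (immigration rate zero) is never altered, and migration only copies existing feature values, so a separate mutation operator is still needed for exploration.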
Abstract:
Our previous results on the nonperturbative calculation of the mean current and of the energy-momentum tensor in QED with the T-constant electric field are generalized to arbitrary dimensions. The renormalized mean values are found, and the vacuum polarization and particle creation contributions to these mean values are isolated in the large-T limit; we also relate the vacuum polarization contributions to the one-loop effective Euler-Heisenberg Lagrangian. Peculiarities in odd dimensions are considered in detail. We adapt general results obtained in 2 + 1 dimensions to the conditions realized in the Dirac model for graphene. We study quantum electronic and energy transport in graphene at low carrier density and low temperatures, when quantum interference effects are important. Our description of quantum transport in graphene is based on the so-called generalized Furry picture in QED, where the strong external field is taken into account nonperturbatively; this approach is not restricted to a semiclassical approximation for the carriers and does not use any statistical assumptions inherent in Boltzmann transport theory. In addition, we consider the evolution of the mean electromagnetic field in graphene, taking into account the backreaction of the matter field to the applied external field. We find solutions of the corresponding Dirac-Maxwell set of equations and, with their help, calculate the effective mean electromagnetic field and the effective mean values of the current and the energy-momentum tensor. The nonlinear and linear I-V characteristics observed experimentally in both low- and high-mobility graphene samples are quite well explained in the framework of the proposed approach, their peculiarities being essentially due to carrier creation from the vacuum by the applied electric field. DOI: 10.1103/PhysRevD.86.125022
Abstract:
This action research article is linked to an ergonomics project in a university hospital. The author's proposal is to focus action on the effective worker involvement required to create spaces and mechanisms within organizations where people can enhance cooperation and deliberation on matters relating to work. For this purpose, a committee was introduced to assist in finding problems and solutions directly in work situations, so that workers could experience the relative autonomy that allows them to develop procedures and choose tools appropriate to their own real needs. Based on this organizational implementation and on subsequent interviews, the practical results are analyzed and related to employee involvement. One can conclude that workers in all areas of an organization can be active elements in improving working conditions and productivity in companies.
Abstract:
Drugs are important risk factors for traffic accidents. In Brazil, truck drivers report using amphetamines to maintain their extensive work schedules and stay awake, and these drugs can easily be obtained without prescription on Brazilian roads. The use of such stimulants can cause health problems and can be associated with traffic accidents. Brazilian studies show that drivers use drugs; however, these studies are questionnaire-based and do not always reflect real-life situations. The purpose of this study was to determine the prevalence of drug use by truck drivers on the roads of Sao Paulo State, Brazil, during 2009. Drivers of large trucks were randomly stopped by police officers on interstate roads during morning hours. After being informed of the goals of the study, the drivers gave written informed consent before providing a urine sample. In addition, a questionnaire concerning sociodemographic characteristics and health information was administered. Urine samples were screened for amphetamines, cocaine, and cannabinoids by immunoassay, and confirmation was performed using gas chromatography-mass spectrometry (GC-MS). Of the 488 drivers stopped, 456 (93.4%) provided urine samples, and 9.3% of them (n = 42) tested positive for drugs. Amphetamines were the most commonly found drug (n = 26), representing 61.9% of the positive samples. Ten cases tested positive for cocaine (23.8%), and five for cannabinoids (11.9%). All drivers were male, with a mean age of 40 ± 10.8 years, and 29.3% of them reported some health problem (diabetes, high blood pressure and/or stress). A high incidence of positive drug tests was found among truck drivers, alongside the health problems reported. Thus, there is an evident need to promote a healthier lifestyle among professional drivers, and a need for preventive measures aimed at controlling the use of drugs by truck drivers in Brazil. (C) 2011 Elsevier Ireland Ltd. All rights reserved.
Abstract:
In this work, a new enrichment space to accommodate jumps in the pressure field at immersed interfaces in finite element formulations is proposed. The new enrichment adds two degrees of freedom per element that can be eliminated by means of static condensation. The new space is tested and compared with the classical P1 space and with the space proposed by Ausas et al. (Comp. Meth. Appl. Mech. Eng., Vol. 199, pp. 1019-1031, 2010) in several problems involving jumps in the viscosity and/or the presence of singular forces at interfaces not conforming with the element edges. The combination of this enrichment space with another enrichment that accommodates discontinuities in the pressure gradient has also been explored, exhibiting excellent results in problems involving jumps in the density or the volume forces. Copyright (c) 2011 John Wiley & Sons, Ltd.
Abstract:
We derive lower bounds on the density of sources of ultra-high energy cosmic rays from the lack of significant clustering in the arrival directions of the highest energy events detected at the Pierre Auger Observatory. The density of uniformly distributed sources of equal intrinsic intensity was found to be larger than ~(0.06-5) × 10^-4 Mpc^-3 at 95% CL, depending on the magnitude of the magnetic deflections. Similar bounds, in the range (0.2-7) × 10^-4 Mpc^-3, were obtained for sources following the local matter distribution.
Abstract:
At each outer iteration of standard Augmented Lagrangian methods, one tries to solve a box-constrained optimization problem to some prescribed tolerance. In the continuous world, using exact arithmetic, this subproblem is always solvable. Therefore, the possibility of finishing the subproblem resolution without satisfying the theoretical stopping conditions is not contemplated in the usual convergence theories. In practice, however, one might not be able to solve the subproblem up to the required precision, for different reasons. One of them is that an excessively large penalty parameter can impair the performance of the box-constrained optimization solver. In this paper, a practical strategy for decreasing the penalty parameter in such situations is proposed. More generally, the different decisions that may be taken when, in practice, one is not able to solve the Augmented Lagrangian subproblem are discussed. As a result, an improved Augmented Lagrangian method is presented, which handles numerical difficulties in a satisfactory way while preserving a suitable convergence theory. Numerical experiments are presented involving all the test problems of the CUTEr collection.
The boundedness of penalty parameters in an augmented Lagrangian method with constrained subproblems
Abstract:
Augmented Lagrangian methods are effective tools for solving large-scale nonlinear programming problems. At each outer iteration, a minimization subproblem with simple constraints, whose objective function depends on the updated Lagrange multipliers and penalty parameters, is approximately solved. When the penalty parameter becomes very large, solving the subproblem becomes difficult; the effectiveness of the approach is therefore associated with the boundedness of the penalty parameters. In this paper, it is proved that the penalty parameters remain bounded under assumptions more natural than those employed until now. To prove the new boundedness result, the original algorithm has been slightly modified. Numerical consequences of the modifications are discussed and computational experiments are presented.
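The outer-iteration scheme shared by the two augmented Lagrangian papers above can be sketched as follows. This is a minimal textbook loop for a single equality constraint, not the authors' algorithm: the multiplier update and the rule "enlarge the penalty only when the constraint violation did not shrink enough" are the classical ingredients whose boundedness behavior these papers study. The inner solver (scipy's BFGS) and all constants are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, h, x0, rho=10.0, lam=0.0, tol=1e-8, max_outer=30):
    """Minimal textbook augmented Lagrangian loop for one equality
    constraint h(x) = 0 (a sketch, not the authors' method).

    Outer loop: approximately minimize
        L(x) = f(x) + lam * h(x) + (rho / 2) * h(x)**2,
    update the multiplier lam <- lam + rho * h(x), and enlarge rho only
    when the constraint violation did not decrease sufficiently.
    """
    x = np.asarray(x0, dtype=float)
    prev_viol = np.inf
    for _ in range(max_outer):
        L = lambda z: f(z) + lam * h(z) + 0.5 * rho * h(z) ** 2
        x = minimize(L, x, method="BFGS", options={"gtol": 1e-10}).x
        viol = abs(h(x))
        if viol < tol:
            break
        lam = lam + rho * h(x)                 # first-order multiplier update
        if viol > 0.5 * prev_viol:             # insufficient progress
            rho *= 10.0
        prev_viol = viol
    return x, lam, rho
```

On a well-behaved problem the multiplier updates alone drive the violation to zero and the penalty parameter stays at its initial value, which is the bounded-penalty behavior the paper's analysis targets.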
Abstract:
This work develops a computational approach to boundary and initial-value problems that uses operational matrices to run an evolutive process in a Hilbert space. In addition, upper bounds for the errors in the solutions and in their derivatives can be estimated, providing accuracy measures.
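The abstract does not detail its operational matrices, so here is only the simplest instance of the idea: a differentiation matrix for a truncated monomial basis, mapping the coefficient vector of a polynomial to that of its derivative. The paper's Hilbert-space setting and error bounds are beyond this sketch.

```python
import numpy as np

def diff_operational_matrix(n):
    """Operational matrix D for the monomial basis {1, x, ..., x^(n-1)}:
    if c holds the coefficients of p(x), then D @ c holds those of p'(x).
    A deliberately simple illustration of the operational-matrix idea.
    """
    D = np.zeros((n, n))
    for k in range(1, n):
        D[k - 1, k] = k                 # d/dx x^k = k * x^(k-1)
    return D
```

Once differentiation (or integration) is expressed as a matrix acting on coefficient vectors, a differential equation turns into an algebraic system in those coefficients, which is what makes an evolutive process in the chosen basis computable.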