971 results for Pareto analysis
Abstract:
The roots of swarm intelligence are deeply embedded in the biological study of self-organized behaviors in social insects. Particle swarm optimization (PSO) is one of the modern metaheuristics of swarm intelligence, which can be used effectively to solve nonlinear and non-continuous optimization problems. The basic principle of the PSO algorithm rests on the assumption that potential solutions (particles) fly through hyperspace, accelerating towards better solutions. Each particle adjusts its flight according to the flying experience of both itself and its companions, using position and velocity update equations. During the process, the coordinates in hyperspace associated with each particle's previous best fitness and the overall best value attained so far by any particle in the group are tracked and recorded in memory. In recent years, PSO approaches have been successfully applied to problem domains with multiple objectives. In this paper, a multiobjective PSO approach, based on the concepts of Pareto optimality, dominance, external archiving of elite particles, and the truncated Cauchy distribution, is proposed and applied to the constrained design of a brushless DC (direct current) wheel motor. Promising results in terms of convergence and spacing performance metrics indicate that the proposed multiobjective PSO scheme is capable of producing good solutions.
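The position and velocity update rule described above can be sketched in a few lines. This is a minimal single-objective PSO for illustration (the inertia weight, acceleration coefficients, and search bounds are arbitrary choices, not the paper's settings):

```python
import random

def pso(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer (minimization)."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm's best position so far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # velocity update: inertia + cognitive + social terms
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, val = pso(lambda x: sum(xi * xi for xi in x), dim=2)
print(val)  # typically very close to 0 for the sphere function
```

The multiobjective variant in the paper replaces the single global best with leaders drawn from an external archive of non-dominated particles; the update rule itself is unchanged.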
Mitigation of the torque ripple of a switched reluctance motor through a multiobjective optimization
Abstract:
The purpose of this work is to perform a multiobjective optimization of a 4:2 switched reluctance motor, aiming both to maximize the mitigation of the torque ripple and to minimize the degradation of the starting and mean torques. To accomplish this task, the Pareto Archived Evolution Strategy was implemented jointly with the Kriging method, which acts as a surrogate function. The technique was applied to the optimization of some rotor geometrical parameters, with finite element simulations used to evaluate the approximation points for the Kriging model. The numerical results were compared with those from experimental tests.
Abstract:
The cost of a new ship design depends heavily on the principal dimensions of the ship; however, minimizing those dimensions often conflicts with minimizing oil outflow (in the event of an accidental spill). This study demonstrates a rational methodology for selecting the optimal dimensions and form coefficients of tankers through the use of a genetic algorithm. A multi-objective optimization problem was formulated using two objective attributes in the evaluation of each design, namely total cost and mean oil outflow. In addition, a procedure that can be used to balance the designs in terms of weight and useful space is proposed. A genetic algorithm was implemented to search for optimal design parameters and to identify the nondominated Pareto frontier. At the end of the study, three real ships are used as case studies. [DOI:10.1115/1.4002740]
Abstract:
Purpose - Using Brandenburger and Nalebuff's 1995 co-opetition model as a reference, the purpose of this paper is to develop a tool that, based on the tenets of classical game theory, would enable scholars and managers to identify which games may be played in response to the different conflict-of-interest situations faced by companies in their business environments. Design/methodology/approach - The literature on game theory and business strategy is reviewed and a conceptual model, the strategic games matrix (SGM), is developed. Two novel games are described and modeled. Findings - The co-opetition model is not sufficient to realistically represent most of the conflict-of-interest situations faced by companies. The paper seeks to address this problem through the development of the SGM, which expands upon Brandenburger and Nalebuff's model by providing a broader perspective, through the incorporation of an additional dimension (the power ratio between players) and three novel postures (rival, individualistic, and associative). Practical implications - The proposed model, based on the concepts of game theory, should be used to train decision- and policy-makers to better understand, interpret and formulate conflict-management strategies. Originality/value - A practical and original tool for using game models in conflict-of-interest situations is generated. Basic classical games, such as Nash, Stackelberg, Pareto, and Minimax, are mapped on the SGM to suggest in which situations they could be useful. Two innovative games are described to fit four different types of conflict situations that so far have no corresponding game in the literature. A test application of the SGM to a classic Intel Corporation strategic management case, in the complex personal computer industry, shows that the proposed method is able to describe, interpret, analyze, and prescribe optimal competitive and/or cooperative strategies for each conflict-of-interest situation.
Abstract:
The following are notes that I have distributed over the last few years to students in Environmental Economics at The University of Queensland. They give particular attention to whether externalities are Pareto or Kaldor-Hicks relevant from a policy point of view. Externalities are Kaldor-Hicks or Pareto irrelevant if no change is possible for which gainers could compensate losers. Both absolute and marginal externalities may be Kaldor-Hicks relevant. Infra-marginal negative externalities are often, but not always, Kaldor-Hicks irrelevant. There are at least two cases where such externalities can be relevant. First, the absolute impact of the negative externality may be so great that the source of the externality should be eliminated. Secondly, if the externality arises from production, its nature may depend on the type of production technique adopted. Although, for the technique adopted, an infra-marginal negative externality may occur that is Paretian-irrelevant given that this technique is the only available choice, alternative techniques may actually be available in practice. Some of these may generate even smaller total external effects and be socially preferable. Both cases are outlined and illustrated in these notes. The analysis reveals the dangers of relying on marginalism when deciding on environmental policy. Total (external) effects are often of great social and economic importance, and appropriate social choices cannot be made on the basis of marginalism alone.
Abstract:
This work aims to discuss the emergence of a research program in economics concerning the analysis of information asymmetries, the epistemological differences involved, and the implications in terms of Pareto-optimal equilibrium, in contrast to the standard neoclassical approach. In pursuit of this objective, it was necessary to highlight the method of both paradigms; it was equally necessary, however, to discuss the philosophy/epistemology of science involved, which would serve as the basis for an approach concerned with paradigm shifts in science. In chapter 1, we discuss the epistemology of science through three authors: Popper, Kuhn and Lakatos. We define the set of hypotheses that can be associated with the method employed by the Neoclassical School, based on the philosophy of science proposed by Lakatos. Next, in chapter 2, we give an extensive exposition of the neoclassical method, defining the axioms inherent to well-behaved preferences, presenting the Walrasian general equilibrium algebraically, illustrating the relaxation of auxiliary hypotheses of the neoclassical model following Friedman and, finally, applying the neoclassical toolkit to the relaxation of the auxiliary hypothesis of perfect information, based on the model developed by Grossman & Stiglitz (1976), as well as on the mathematical extension developed in the present work. Finally, we close this dissertation with chapter 3, in which we present the main contributions of authors such as Stiglitz, Akerlof and Arrow concerning markets permeated by asymmetric information and opportunistic behavior. We seek to show the consequences for the market itself, reaching results in which the market ceases to exist.
We present the second part of the Grossman & Stiglitz model, emphasizing the imperfect nature of the price system and its inability to transmit all the information about goods to the agents as a whole, and, finally, we discuss various topics related to the Economics of Information.
Abstract:
This article first examines the main variables inherent to political struggle in organizations, based on an interpretation of Machiavelli's theories. The analysis is then completed with the application of the systems of power maintenance proposed by Pareto. Finally, the impacts of political games and manipulation on organizations are examined, particularly with regard to efficiency and the pursuit of productivity.
Abstract:
This article integrates elements of agency theory, property-rights theory, and finance theory to develop a theory of the ownership structure of the firm. We define the concept of agency costs, demonstrate their relationship to the issue of "separation and control", investigate the nature of the agency costs arising from the presence of debt and outside equity, demonstrate who bears these costs and why, and investigate the Pareto optimality of their existence. We also provide a definition of the firm and show how our analysis of the factors influencing the creation and issuance of debt and equity claims covers a special case of the supply side of the completeness-of-markets problem.
Abstract:
In practical applications of optimization it is common to have several conflicting objective functions to optimize. Frequently, these functions are subject to noise or can be of black-box type, preventing the use of derivative-based techniques. We propose a novel multiobjective derivative-free methodology, calling it direct multisearch (DMS), which does not aggregate any of the objective functions. Our framework is inspired by the search/poll paradigm of direct-search methods of directional type and uses the concept of Pareto dominance to maintain a list of nondominated points (from which the new iterates or poll centers are chosen). The aim of our method is to generate as many points in the Pareto front as possible from the polling procedure itself, while keeping the whole framework general enough to accommodate other disseminating strategies, in particular, when using the (here also) optional search step. DMS generalizes to multiobjective optimization (MOO) all direct-search methods of directional type. We prove under the common assumptions used in direct search for single objective optimization that at least one limit point of the sequence of iterates generated by DMS lies in (a stationary form of) the Pareto front. However, extensive computational experience has shown that our methodology has an impressive capability of generating the whole Pareto front, even without using a search step. Two by-products of this paper are (i) the development of a collection of test problems for MOO and (ii) the extension of performance and data profiles to MOO, allowing a comparison of several solvers on a large set of test problems, in terms of their efficiency and robustness to determine Pareto fronts.
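The Pareto-dominance bookkeeping that DMS relies on (maintaining a list of nondominated points) reduces to a simple filter. A minimal sketch for minimization, with invented objective vectors for illustration:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Keep only the points not dominated by any other point in the list."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

pts = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
print(nondominated(pts))  # (3.0, 4.0) is dominated by (2.0, 3.0) and dropped
```

In DMS this filter runs after every poll step: newly evaluated points are merged into the list and dominated entries are discarded, so the list approximates the Pareto front as the iteration proceeds.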
Abstract:
A power law is a signature of a nonlinear system, revealing a complex system close to self-organization. Several characteristics of natural and artificial systems, such as the population size of cities, personal income levels, the frequency of word occurrences in texts, and earthquake magnitudes, follow power-law distributions. These distributions indicate that small occurrences are very common while large occurrences are rare, although the latter still happen with non-negligible probability. The purpose of this work is to identify phenomena associated with power laws. The typical behavior of these phenomena is shown, using data drawn from several case studies and with the help of a meta-analysis. Power laws in natural and artificial systems reveal proximity to a common pattern when the values are normalized (relative frequencies) to produce a meta-graph.
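A quick way to see the pattern described above is to regress log-frequency against log-rank: for power-law data the points fall on a straight line whose slope is minus the exponent. A minimal sketch with synthetic Zipf-like data (the exponent 1.2 and the constant 1000 are arbitrary illustrative choices):

```python
import math

# Synthetic rank-frequency data following f(r) = C * r**(-alpha)
alpha_true = 1.2
ranks = range(1, 101)
freqs = [1000.0 * r ** (-alpha_true) for r in ranks]

# Least-squares slope of log(f) vs log(r) recovers -alpha
xs = [math.log(r) for r in ranks]
ys = [math.log(f) for f in freqs]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
print(slope)  # -1.2 for exact power-law data
```

On real data (city sizes, word counts) the fit is only approximate, and the normalization to relative frequencies mentioned in the abstract is what lets different systems be overlaid on one meta-graph.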
Abstract:
Solvent extraction is considered a multi-criteria optimization problem, since several chemical species with similar extraction kinetic properties are frequently present in the aqueous phase and selective extraction is not practicable. This optimization, applied to mixer–settler units, considers the best parameters and operating conditions, as well as the best structure or process flow-sheet. Global process optimization is performed for a specific flow-sheet, and a comparison of Pareto curves for different flow-sheets is made. The positive weight sum approach, linked to the sequential quadratic programming method, is used to obtain the Pareto set. In all investigated structures, recovery increases with hold-up, residence time and agitation speed, while purity shows the opposite behaviour. For the same treatment capacity, counter-current arrangements are shown to promote recovery without significant impairment of purity. Recycling the aqueous phase is shown to be irrelevant, but organic recycling with as many stages as economically feasible clearly improves the design criteria and reduces the most efficient organic flow-rate.
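The positive-weight-sum idea used above can be illustrated independently of the chemistry: each strictly positive weight vector collapses the two criteria into one scalar objective, and sweeping the weights traces out points of the Pareto set. A toy sketch, where a grid search stands in for the SQP solver and the two quadratic objectives are invented for illustration:

```python
def f1(x):  # illustrative criterion, best near x = 0
    return x * x

def f2(x):  # illustrative conflicting criterion, best near x = 2
    return (x - 2.0) ** 2

xs = [i / 100.0 for i in range(-100, 301)]   # candidate decision values
pareto = []
for k in range(1, 10):                        # strictly positive weights
    w = k / 10.0
    best = min(xs, key=lambda x: w * f1(x) + (1 - w) * f2(x))
    pareto.append((f1(best), f2(best)))

# As the weight on f1 grows, f1 improves and f2 worsens along the trade-off.
print(pareto[0], pareto[-1])
```

This mirrors the recovery/purity trade-off in the abstract: each weight choice is one compromise design, and the collection of solutions forms the Pareto curve compared across flow-sheets.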
Abstract:
Multi-objective particle swarm optimization (MOPSO) is a search algorithm based on social behavior. Most existing multi-objective particle swarm optimization schemes are based on Pareto optimality and aim to obtain a representative non-dominated Pareto front for a given problem. Several approaches have been proposed to study the convergence and performance of the algorithm, particularly by assessing the final results. In the present paper, a different approach is proposed: Shannon entropy is used to analyze the MOPSO dynamics along the algorithm's execution. The results indicate that Shannon entropy can be used as an indicator of diversity and convergence for MOPSO problems.
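One way to realize the entropy indicator sketched above is to partition the space into cells and compute the Shannon entropy of the cell-occupancy distribution: a collapsed (converged) swarm gives low entropy, a spread-out one gives high entropy. The grid size and the sample swarms below are illustrative choices, not the paper's setup:

```python
import math
from collections import Counter

def swarm_entropy(points, cell=0.25):
    """Shannon entropy (in bits) of the swarm's occupancy of grid cells."""
    cells = Counter((int(x // cell), int(y // cell)) for x, y in points)
    n = len(points)
    return -sum((c / n) * math.log2(c / n) for c in cells.values())

spread = [(i * 0.3, i * 0.3) for i in range(10)]   # particles spread out
collapsed = [(0.1, 0.1) for _ in range(10)]        # particles converged
print(swarm_entropy(spread), swarm_entropy(collapsed))
```

Tracking this quantity per iteration gives a scalar trace of the swarm dynamics, rather than a judgment based only on the final front.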
Abstract:
Kinematic redundancy occurs when a manipulator possesses more degrees of freedom than those required to execute a given task. Several kinematic techniques for redundant manipulators control the gripper through the pseudo-inverse of the Jacobian, but lead to a kind of chaotic inner motion with unpredictable arm configurations. Such algorithms are not easy to adapt to optimization schemes and, moreover, there are often multiple optimization objectives that can conflict with each other. Unlike single-objective optimization, where one attempts to find the best solution, in multi-objective optimization there is no single solution that is optimal with respect to all indices. Therefore, trajectory planning of redundant robots remains an important area of research, and more efficient optimization algorithms are needed. This paper presents a new technique to solve the inverse kinematics of redundant manipulators, using a multi-objective genetic algorithm. The scheme combines the closed-loop pseudo-inverse method with a multi-objective genetic algorithm to control the joint positions. Simulations of manipulators with three or four rotational joints, considering the optimization of two objectives in workspaces with and without obstacles, are developed. The results reveal that it is possible to choose among several solutions from the Pareto-optimal front according to the importance of each individual objective.
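The closed-loop pseudo-inverse step mentioned above can be sketched for a planar 3R arm (one redundant degree of freedom for a 2D positioning task). The link lengths, gain, and target below are illustrative assumptions; NumPy's `pinv` supplies the Moore-Penrose pseudo-inverse of the Jacobian:

```python
import numpy as np

L = np.array([1.0, 1.0, 1.0])          # illustrative link lengths

def fk(q):
    """Forward kinematics of a planar 3R arm: end-effector (x, y)."""
    angles = np.cumsum(q)
    return np.array([np.sum(L * np.cos(angles)), np.sum(L * np.sin(angles))])

def jacobian(q):
    """2x3 Jacobian of the end-effector position w.r.t. the joint angles."""
    angles = np.cumsum(q)
    J = np.zeros((2, 3))
    for j in range(3):
        J[0, j] = -np.sum(L[j:] * np.sin(angles[j:]))
        J[1, j] = np.sum(L[j:] * np.cos(angles[j:]))
    return J

def clik_step(q, target, gain=0.5):
    """One closed-loop inverse-kinematics step: dq = pinv(J) @ (gain * error)."""
    err = target - fk(q)
    return q + np.linalg.pinv(jacobian(q)) @ (gain * err)

q = np.array([0.3, 0.4, 0.5])
target = np.array([1.5, 1.0])
for _ in range(50):
    q = clik_step(q, target)
print(np.linalg.norm(target - fk(q)))   # position error driven towards zero
```

Because the arm is redundant, infinitely many joint trajectories reach the same target; the paper's genetic algorithm searches among them for Pareto-optimal compromises between the competing objectives.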
Abstract:
Variations in manufacturing process parameters and environmental aspects may affect the quality and performance of composite materials, which consequently affects their structural behaviour. Reliability-based design optimisation (RBDO) and robust design optimisation (RDO) search for safe structural systems with minimal variability of response when subjected to uncertainties in material design parameters. An approach that simultaneously considers reliability and robustness is proposed in this paper. Depending on a given reliability index imposed on composite structures, a trade-off is established between the performance targets and robustness. Robustness is expressed in terms of the coefficient of variation of the constrained structural response weighted by its nominal value. The normed Pareto front is built, and the point nearest the origin is taken as the best solution of the bi-objective optimisation problem.
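The selection rule described above (normalize the front, then take the point nearest the origin) can be sketched as follows, with an invented bi-objective front standing in for the reliability/robustness trade-off:

```python
def best_compromise(front):
    """Normalize each objective to [0, 1] over the front and return the
    point whose normalized image is nearest the origin (Euclidean norm)."""
    lo = [min(p[i] for p in front) for i in (0, 1)]
    hi = [max(p[i] for p in front) for i in (0, 1)]
    def norm(p):
        return [(p[i] - lo[i]) / (hi[i] - lo[i]) for i in (0, 1)]
    return min(front, key=lambda p: sum(v * v for v in norm(p)) ** 0.5)

front = [(0.0, 10.0), (2.0, 4.0), (5.0, 3.0), (9.0, 0.0)]
print(best_compromise(front))  # picks the knee point (2.0, 4.0)
```

Normalizing first matters: without it, the objective with the larger numerical range would dominate the distance and bias the choice of compromise.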
Abstract:
In the last twenty years genetic algorithms (GAs) have been applied in a plethora of fields, such as control, system identification, robotics, planning and scheduling, image processing, and pattern and speech recognition (Bäck et al., 1997). In robotics, the problems of trajectory planning, collision avoidance and manipulator structure design considering a single criterion have been solved using several techniques (Alander, 2003). Most engineering applications, however, require the optimization of several criteria simultaneously. Often the problems are complex, include discrete and continuous variables, and there is no prior knowledge about the search space. Such problems are much more complex, since they consider multiple design criteria simultaneously within the optimization procedure. This is known as multi-criteria (or multiobjective) optimization, which has been addressed successfully through GAs (Deb, 2001). The overall aim of multi-criteria evolutionary algorithms is to achieve a set of non-dominated optimal solutions known as the Pareto front. At the end of the optimization procedure, instead of a single optimal (or near-optimal) solution, the decision maker can select a solution from the Pareto front. Some of the key issues in multi-criteria GAs are: i) the number of objectives, ii) obtaining a Pareto front as wide as possible and iii) achieving a Pareto front that is uniformly spread. Indeed, multi-objective techniques using GAs have been growing in relevance as a research area. In 1989, Goldberg suggested the use of a GA to solve multi-objective problems, and since then other researchers have been developing new methods, such as the multi-objective genetic algorithm (MOGA) (Fonseca & Fleming, 1995), the non-dominated sorting genetic algorithm (NSGA) (Deb, 2001), and the niched Pareto genetic algorithm (NPGA) (Horn et al., 1994), among several other variants (Coello, 1998).
In this work the trajectory planning problem considers: i) robots with 2 and 3 degrees of freedom (dof), ii) the inclusion of obstacles in the workspace and iii) up to five criteria used to qualify the evolving trajectory, namely the joint traveling distance, joint velocity, end-effector Cartesian distance, end-effector Cartesian velocity and the energy involved. These criteria are used to minimize the joint and end-effector traveled distances, the trajectory ripple and the energy required by the manipulator to reach the destination point. Bearing these ideas in mind, the paper addresses the planning of robot trajectories, meaning the development of an algorithm to find a continuous motion that takes the manipulator from a given starting configuration to a desired end position without colliding with any obstacle in the workspace. The chapter is organized as follows. Section 2 describes trajectory planning and several approaches proposed in the literature. Section 3 formulates the problem, namely the representation adopted to solve the trajectory planning and the objectives considered in the optimization. Section 4 studies the algorithm's convergence. Section 5 studies a 2R manipulator (i.e., a robot with two rotational joints/links) when the optimized trajectory considers two and five objectives. Sections 6 and 7 show the results for the 3R redundant manipulator with five goals and for other complementary experiments, respectively. Finally, section 8 draws the main conclusions.