90 results for Multi-objective optimization problem
Abstract:
The objective of this work is to optimize production planning with the goal of reducing the total cost of the electrical energy consumed. The work is divided into three distinct stages: in the first stage, the problem and its constraints were surveyed and the model for its resolution was chosen. In the next stage, the tool to be used was selected, namely Xpress, and the problem was implemented in that tool. Finally, in the third stage, the model was validated and the solutions obtained were analysed and compared with what was previously done. Using integer programming, an optimization model was developed to meet this objective, and code reflecting the mathematical model was subsequently written. All the steps required for its implementation were completed and validated against the previous practice, showing improvements in energy efficiency of around 8%, as well as a better use of the human resources and time previously spent developing production plans manually. This time saving, between four and six hours per week, can be applied to other company activities with higher added value.
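The abstract does not reproduce the mathematical model itself; as a hedged illustration only, with hypothetical symbols rather than the thesis' actual notation, an integer programming model of this kind typically minimizes the electricity bill over tariff periods:

\min \sum_{t \in T} c_t \sum_{p \in P} E_p \, x_{p,t}
\quad \text{s.t.} \quad \sum_{t \in T} x_{p,t} \ge d_p \;\; \forall p \in P, \qquad
\sum_{p \in P} r_p \, x_{p,t} \le K_t \;\; \forall t \in T, \qquad
x_{p,t} \in \mathbb{Z}_{\ge 0},

where c_t is the electricity price in period t, E_p the energy consumed per unit of product p, d_p the demand for p, r_p the capacity used per unit and K_t the capacity available in period t. A model of this shape can be written directly in Xpress's Mosel modelling language and solved by its mixed-integer solver.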
Abstract:
In recent years several countries have set up policies that allow exchange of kidneys between two or more incompatible patient–donor pairs. These policies lead to what is commonly known as kidney exchange programs. The underlying optimization problems can be formulated as integer programming models. Previously proposed models for kidney exchange programs have exponential numbers of constraints or variables, which makes them fairly difficult to solve when the problem size is large. In this work we propose two compact formulations for the problem, explain how these formulations can be adapted to address some problem variants, and provide results on the dominance of some models over others. Finally we present a systematic comparison between our models and two previously proposed ones via thorough computational analysis. Results show that compact formulations have advantages over non-compact ones when the problem size is large.
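For context, the best-known non-compact model in this literature is the cycle formulation, against which compact models are usually contrasted; in standard notation (not necessarily the paper's) it reads:

\max \sum_{c \in \mathcal{C}_k} w_c \, z_c \quad \text{s.t.} \quad \sum_{c \,:\, i \in c} z_c \le 1 \;\; \forall i \in V, \qquad z_c \in \{0, 1\},

where V is the set of incompatible patient–donor pairs, \mathcal{C}_k the set of exchange cycles of length at most k in the compatibility graph, and w_c the weight (e.g. number of transplants) of cycle c. Since |\mathcal{C}_k| grows exponentially with k, formulations with polynomially many variables and constraints become attractive as the instances grow.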
Abstract:
It is generally challenging to determine end-to-end delays of applications so as to maximize the aggregate system utility subject to timing constraints. Many practical approaches suggest the use of intermediate deadlines for tasks in order to control and upper-bound their end-to-end delays. This paper proposes a unified framework for different time-sensitive, global optimization problems and solves them in a distributed manner using Lagrangian duality. The framework uses global viewpoints to assign intermediate deadlines, taking resource contention among tasks into consideration. For soft real-time tasks, the proposed framework effectively addresses the deadline assignment problem while maximizing the aggregate quality of service. For hard real-time tasks, we show that existing heuristic solutions to the deadline assignment problem can be incorporated into the proposed framework, enriching their mathematical interpretation.
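As a hedged sketch of the underlying structure (generic dual decomposition, not the paper's exact model), assigning intermediate deadlines d_i to subtasks subject to end-to-end constraints can be posed and relaxed as:

\max_{d \ge 0} \sum_i U_i(d_i) \quad \text{s.t.} \quad \sum_{i \in \mathcal{P}_j} d_i \le D_j \;\; \forall j,
\qquad
L(d, \lambda) = \sum_i U_i(d_i) + \sum_j \lambda_j \Big( D_j - \sum_{i \in \mathcal{P}_j} d_i \Big),

where U_i is the utility obtained when subtask i is granted deadline d_i, \mathcal{P}_j is the set of subtasks on the path of end-to-end task j, and D_j is its end-to-end deadline. For fixed multipliers \lambda, the Lagrangian separates over subtasks, so each node can choose its own d_i locally while \lambda is updated by subgradient steps; this is the kind of distributed computation that Lagrangian duality makes possible.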
Abstract:
In distributed soft real-time systems, maximizing the aggregate quality-of-service (QoS) is a typical system-wide goal, and addressing the problem through distributed optimization is challenging. Subtasks are subject to unpredictable failures in many practical environments, and this makes the problem much harder. In this paper, we present a robust optimization framework for maximizing the aggregate QoS in the presence of random failures. We introduce the notion of K-failure to bound the effect of random failures on schedulability. Using this notion we define the concept of K-robustness, which quantifies the degree of robustness of the QoS guarantee in a probabilistic sense. The parameter K helps to trade off achievable QoS against robustness. The proposed robust framework produces optimal solutions through distributed computations on the basis of Lagrangian duality, and we present some implementation techniques. Our simulation results show that the proposed framework can probabilistically guarantee a sub-optimal QoS level that remains feasible even in the presence of random failures.
Abstract:
Penalty and Barrier methods are commonly used to solve constrained Nonlinear Optimization Problems. Such problems appear in areas such as engineering and are often characterised by the fact that the functions involved (objective and constraints) are non-smooth and/or their derivatives are not known. This means that optimization methods based on derivatives cannot be used. A Java-based API was implemented, including only derivative-free optimization methods, to solve both constrained and unconstrained problems, which includes Penalty and Barrier methods. In this work a new penalty function, based on Fuzzy Logic, is presented. This function imposes a progressive penalization on solutions that violate the constraints: the penalization is low when the violation of the constraints is small and heavy when the violation is large. The value of the penalization is not known beforehand; it is the outcome of a fuzzy inference engine. Numerical results comparing the proposed function with two of the classic penalty/barrier functions are presented. From these results one can conclude that the proposed penalty function, besides being very robust, also exhibits very good performance.
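For contrast with the fuzzy penalty described above, a classic penalty function (standard textbook form, not necessarily one of the two used for comparison in the paper) replaces the constrained problem by an unconstrained one with a fixed coefficient:

\min_x f(x) \;\; \text{s.t.} \;\; g_i(x) \le 0, \; i = 1, \dots, m
\qquad \rightarrow \qquad
\min_x \; f(x) + \mu \sum_{i=1}^{m} \big[ \max(0, g_i(x)) \big]^2 .

In the fuzzy variant, the magnitude of the added term is not governed by a fixed \mu; it is produced by a fuzzy inference engine as a function of the violation \max(0, g_i(x)), growing progressively as the violation grows.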
Abstract:
Direct Search Optimization methods are needed to solve optimization problems where the objective and/or constraint functions might be non-differentiable or non-convex, or where it might not be possible to determine their analytical expressions, either due to their complexity or their cost (monetary, computational, time, ...). Many optimization problems in engineering and other fields have these characteristics, because function values can result from experimental or simulation processes, can be modelled by functions with complex expressions or by noisy functions, and it is impossible or very difficult to calculate their derivatives. Direct Search Optimization methods only use function values and do not need any derivatives or approximations of them. In this work we present a Java API that includes several derivative-free methods and algorithms to solve constrained and unconstrained optimization problems. Traditional API access, by installing it on the developer's and/or user's computer, and remote access to it using Web Services are both presented. Remote access to the API has the advantage of always providing access to its latest version. For users that simply want a tool to solve Nonlinear Optimization Problems and do not want to integrate these methods in applications, two applications were also developed: a standalone Java application and a Web-based application, both using the developed API.
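The abstract does not show the API itself; a minimal sketch of how a derivative-free solver API of this kind is often exposed in Java might look as follows (all type and method names are hypothetical, not the actual API):

// Hypothetical sketch; each type would normally live in its own file.
interface Problem {
    int dimension();                   // number of decision variables
    double objective(double[] x);      // objective value at x; no derivatives required
    double[] constraints(double[] x);  // constraint values g_i(x); feasible when g_i(x) <= 0
}

interface Solver {
    Result minimize(Problem problem, double[] initialGuess);
}

final class Result {
    final double[] solution;
    final double objectiveValue;
    Result(double[] solution, double objectiveValue) {
        this.solution = solution;
        this.objectiveValue = objectiveValue;
    }
}

A remote client would exchange the same information (problem definition in, Result out) through a Web Service call instead of a local method call.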
Abstract:
Constrained and unconstrained Nonlinear Optimization Problems often appear in many engineering areas. In some of these cases it is not possible to use derivative-based optimization methods, because the objective function is not known, is too complex or is non-smooth. In such cases Direct Search Methods might be the most suitable optimization methods. An Application Programming Interface (API) including some of these methods was implemented using Java technology. This API can be accessed either by applications running on the same computer where it is installed, or remotely through a LAN or the Internet, using web services. From the engineering point of view, the information needed from the API is the solution to the provided problem. From the point of view of researchers in optimization methods, however, the solution alone is not enough: additional information about the iterative process is also useful, such as the number of iterations, the value of the solution at each iteration and the stopping criteria. This paper presents the features added to the API to allow users to access the iterative process data.
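As a hedged sketch of the per-iteration information described above (illustrative names only, not the API's actual types), the data could be exposed through a listener notified at every iteration:

// Hypothetical sketch of per-iteration data made available to API clients.
final class IterationReport {
    final int iteration;             // iteration counter
    final double[] point;            // current approximation of the solution
    final double objectiveValue;     // objective value at this point
    final String stoppingCriterion;  // stopping criterion satisfied, or null if none yet
    IterationReport(int iteration, double[] point,
                    double objectiveValue, String stoppingCriterion) {
        this.iteration = iteration;
        this.point = point;
        this.objectiveValue = objectiveValue;
        this.stoppingCriterion = stoppingCriterion;
    }
}

interface IterationListener {
    void onIteration(IterationReport report);  // called by the solver after each iteration
}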
Abstract:
Finding the optimal value for a problem is common in many areas of knowledge, where it is often necessary to solve Nonlinear Optimization Problems. For some of those problems it is not possible to determine the expression of the objective function and/or constraints, because they result from experimental procedures, might be non-smooth, among other reasons. To solve such problems, an API containing methods for both constrained and unconstrained problems was implemented. This API was developed to be used either locally, on the computer where the application is being executed, or remotely, on a server. To obtain maximum flexibility from both the programmers' and the users' points of view, problems can be defined as a Java class (because this API was developed in Java) or as simple text input that is sent to the API. To make the latter possible, an expression evaluator was also implemented in the API. One of the drawbacks of this expression evaluator is that it is slower than native Java code. In this paper a solution that combines both options is presented: the problem can be expressed at run time as a string of characters that is converted to Java code, compiled and loaded dynamically. To widen the target audience of the API, this new expression evaluator is also compatible with the AMPL format.
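The run-time compilation step can be illustrated with the standard javax.tools compiler API; the sketch below is a minimal, hypothetical example (class and method names are invented for illustration, and this is not the API's actual mechanism) of turning an expression string into a loadable Java class:

import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class RuntimeCompileSketch {
    public static void main(String[] args) throws Exception {
        // Wrap the user-supplied expression in a small Java class.
        String expression = "x[0]*x[0] + x[1]*x[1]";  // hypothetical user input
        String source =
            "public class UserObjective {\n" +
            "    public double value(double[] x) { return " + expression + "; }\n" +
            "}\n";

        Path dir = Files.createTempDirectory("userproblem");
        Path file = dir.resolve("UserObjective.java");
        Files.write(file, source.getBytes(StandardCharsets.UTF_8));

        // Compile the generated source with the platform compiler (requires a JDK).
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        if (compiler.run(null, null, null, file.toString()) != 0) {
            throw new IllegalStateException("compilation failed");
        }

        // Load the compiled class dynamically and evaluate the expression.
        try (URLClassLoader loader = new URLClassLoader(new URL[]{ dir.toUri().toURL() })) {
            Class<?> cls = loader.loadClass("UserObjective");
            Object instance = cls.getDeclaredConstructor().newInstance();
            double value = (double) cls.getMethod("value", double[].class)
                                       .invoke(instance, (Object) new double[]{1.0, 2.0});
            System.out.println("f(1, 2) = " + value);  // prints 5.0
        }
    }
}

Code obtained this way runs as ordinary compiled Java, avoiding the interpretation overhead of the expression evaluator mentioned above.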
Abstract:
Nonlinear Optimization Problems are common in many engineering fields. Due to their characteristics, the objective function of a problem might not be differentiable, or its derivatives might have complex expressions. There are even cases where an analytical expression of the objective function cannot be determined, either due to its complexity or its cost (monetary, computational, time, ...). In these cases, optimization methods that do not require derivatives must be used. An API including several methods and algorithms to solve constrained and unconstrained optimization problems was implemented. This API can be accessed not only in the traditional way, by installing it on the developer's and/or user's computer, but also remotely using Web Services. As long as there is a network connection to the server where the API is installed, applications always access the latest API version. A Web-based application using the proposed API was also developed. This application is intended for users who do not want to integrate the methods in their own applications and simply want a tool to solve Nonlinear Optimization Problems.
Abstract:
Engineering Education includes not only teaching fundamental theoretical concepts but also their verification during practical lessons in laboratories. The usual strategies to carry out this action are frequently based on Problem-Based Learning, starting from a given state and proceeding forward to a target state. The possibility or effectiveness of this procedure depends on previous states and on whether the present state was caused by, or resulted from, earlier ones. This often happens in engineering education when the achieved results do not match the desired ones, e.g. when programming code is being developed or when the cause of the wrong behavior of an electronic circuit is being identified. It is thus important to also prepare students to proceed in the reverse way, i.e. given a start state, generate the explanation or even the principles that underlie it. Later on, this sort of skill will be important, for instance, to a doctor taking a patient's history or to an engineer discovering the source of a malfunction. This learning methodology presents pedagogical advantages besides the enhanced preparation of students for their future work. The work presented in this document describes an automation project developed by a group of students in an engineering polytechnic school laboratory. The main objective was to improve the performance of a Braille machine. However, in a scenario of Reverse Problem-Based Learning, students first had to discover and characterize the entire machine's function before being allowed (and being able) to propose a solution for the existing problem.
Abstract:
Manufacturing processes need to be permanently innovated and optimized, because all of them are amenable to continuous improvement. Innovation and the commitment to developing these new solutions result from existing expertise and from the continuing need to increase productivity and flexibility while ensuring the necessary quality of the manufactured products. To increase flexibility, it is necessary to significantly reduce set-up times and lead time in order to ensure ever faster delivery of products. This objective can be achieved through a normalization of the pultrusion line elements, which implicitly also increases productivity. This work is intended to optimize the pultrusion process of structural profiles. We consider all elements of the system, from the storehouse of the fibers (rack) to the pultrusion die. Particular attention was devoted to (a) the guidance system of the fibers and webs, (b) the resin container where the fibers are impregnated, and (c) the standard plates positioning the fibers at the entrance to the die; (d) the whole process of assembling and fixing the die, as well as its heating system, was also reviewed. With the implementation of these new systems, a significant saving in set-up time was achieved and the unit costs of production were clearly reduced. Quality assurance was also increased.
Abstract:
The environmental management domain is vast and encompasses many identifiable activities: impact assessment, planning, project evaluation, etc. In particular, this paper focuses on the modelling of the project evaluation activity. The environmental decision support system under development aims to assist project developers in the selection of adequate locations, guaranteeing compliance with the applicable regulations and the existing development plans, as well as satisfying the specified project requirements. The inherent multidisciplinary features of this activity lead to the adoption of the Multi-Agent paradigm and, in particular, to the modelling of the involved agencies as a community of cooperative autonomous agents, where each agency contributes its share of problem solving to the final system's recommendation. To achieve this behaviour, the many conclusions of the individual agencies have to be justifiably accommodated: not only may they differ, but they can be interdependent, complementary, irreconcilable or simply independent. We propose different solutions (involving both local and global consistency) to support the adequate merging of the distinct perspectives that inevitably arise during this type of decision making.
Abstract:
The ability to respond sensibly to changing and conflicting beliefs is an integral part of intelligent agency. To this end, we outline the design and implementation of a Distributed Assumption-based Truth Maintenance System (DATMS) appropriate for controlling cooperative problem solving in a dynamic real world multi-agent community. Our DATMS works on the principle of local coherence which means that different agents can have different perspectives on the same fact provided that these stances are appropriately justified. The belief revision algorithm is presented, the meta-level code needed to ensure that all system-wide queries can be uniquely answered is described, and the DATMS’ implementation in a general purpose multi-agent shell is discussed.
Abstract:
Multi-agent architectures are well suited for complex, inherently distributed problem-solving domains. Among the many challenging aspects that arise within this framework, a crucial one emerges: how to incorporate dynamic and conflicting agent beliefs? While the belief revision activity in a single-agent scenario is concentrated on incorporating new information while preserving consistency, in a multi-agent system it also has to deal with possible conflicts between the agents' perspectives. To provide an adequate framework, each agent, built as a combination of an assumption-based belief revision system and a cooperation layer, was enriched with additional features: a distributed search control mechanism allowing dynamic context management, and a set of different distributed consistency methodologies. As a result, a Distributed Belief Revision Testbed (DiBeRT) was developed. This paper is a preliminary report presenting some of DiBeRT's contributions: a concise representation of external beliefs; a simple and innovative methodology to achieve distributed context management; and a reduced inter-agent data exchange format.
Abstract:
In a real world multiagent system, where the agents are faced with partial, incomplete and intrinsically dynamic knowledge, conflicts are inevitable. Frequently, different agents have goals or beliefs that cannot hold simultaneously. Conflict resolution methodologies have to be adopted to overcome such undesirable occurrences. In this paper we investigate the application of distributed belief revision techniques as the support for conflict resolution in the analysis of the validity of the candidate beams to be produced in the CERN particle accelerators. This CERN multiagent system contains a higher hierarchy agent, the Specialist agent, which makes use of meta-knowledge (on how the conflicting beliefs have been produced by the other agents) in order to detect which beliefs should be abandoned. Upon solving a conflict, the Specialist instructs the involved agents to revise their beliefs accordingly. Conflicts in the problem domain are mapped into conflicting beliefs of the distributed belief revision system, where they can be handled by proven formal methods. This technique builds on well established concepts and combines them in a new way to solve important problems. We find this approach generally applicable in several domains.