9 results for Multi-Criteria Problems
at the Universidade Federal do Rio Grande do Norte (UFRN)
Abstract:
The main goal of this dissertation is to develop a multi-criteria decision aid model for choosing oil and gas drilling rig contracts. The model should allow the use of multiple criteria, addressing the shortcomings of models that rely mainly on contract price as the decision criterion. The AHP was chosen because of its wide use, both in academia and in many other areas, its simplicity and flexibility, and because it fulfills all the requirements of the task. The model was developed through interviews and surveys with a specialist in this field, who also acts as the main actor in the decision process. The final model consists of six criteria: cost, mobility, automation, technical support, how quickly the service can be concluded, and availability to start operations. Three rigs were chosen as possible solutions to the problem. The results obtained with the model suggest that using AHP as a decision support system in this kind of situation is feasible, allowing a simplification of the problem; it is also a useful tool for improving the knowledge of everyone involved in the process about the problem and its possible solutions.
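The core AHP step described above, deriving criteria weights from pairwise judgments, can be sketched as follows. This is a minimal illustration using the geometric-mean approximation of the principal eigenvector; the 3×3 matrix of judgments is invented for the example and is not the dissertation's data.

```python
import math

def ahp_weights(matrix):
    """Approximate AHP priority vector via the geometric-mean method."""
    n = len(matrix)
    geo = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(geo)
    return [g / total for g in geo]

# Hypothetical pairwise judgments on Saaty's 1-9 scale
# (e.g., cost vs. mobility vs. automation; illustrative values only).
pairwise = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]

weights = ahp_weights(pairwise)
print([round(w, 3) for w in weights])  # priorities summing to 1
```

With the full six-criterion model, the same computation would simply use a 6×6 judgment matrix.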
Abstract:
Anthropic disturbances in watersheds, such as inappropriate building development, disorderly land occupation, and unplanned land use, may increase sediment yield and the inflow into the estuary, leading to siltation, changes in channel conformation, and ecosystem/water quality problems. In this context, this study aims to assess the applicability of the SWAT model to estimate, even in a preliminary way, the distribution of sediment yield along the Potengi River watershed, as well as its contribution to the estuary. An assessment of erosion susceptibility was also used for comparison. The susceptibility map was developed by overlaying rainfall erosivity, soil erodibility, terrain slope, and land cover; to overlay these maps, a multi-criteria analysis based on the AHP method was applied. SWAT was run for a five-year period (1997-2001), considering three scenarios based on different kinds of human interference: (a) agriculture; (b) pasture; and (c) no interference (background). Results were analyzed in terms of surface runoff, sediment yield, and their propagation along each river section. The regions in the extreme west of the watershed and in the downstream portions returned the highest sediment yields, reaching 2.8 and 5.1 t/ha·year respectively, whereas the central areas, which are less susceptible, returned the lowest values, never exceeding 0.7 t/ha·year. In the western sub-watersheds, which contain the headwaters, sediment yield was naturally forced by steep slopes and weak soils. On the other hand, the results suggest that the eastern part does not contribute significantly to the sediment inflow into the estuary, and that most of the sediment yield there is due to anthropic activities. For the central region, the analysis of sediment propagation indicates a predominance of deposition over transport.
Thus, isolated rain storms in the upstream river portions are not expected to supply the estuary with significant amounts of sediment. Because the model has not yet been calibrated, it must be emphasized that the values presented here should not be used for practical purposes. Even so, this work warns about the risks of further alteration of the natural land cover, mainly in areas close to the headwaters and in the downstream Potengi River.
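The AHP-weighted map overlay used to build the susceptibility map can be sketched as a cell-by-cell weighted sum. The sketch below assumes the four factor maps have already been normalized to a common 0-1 scale; the tiny 2×2 rasters and the weights are invented for illustration and are not the study's data.

```python
def weighted_overlay(layers, weights):
    """Combine normalized raster layers cell-by-cell as a weighted sum."""
    assert abs(sum(weights) - 1.0) < 1e-9, "AHP weights must sum to 1"
    rows, cols = len(layers[0]), len(layers[0][0])
    out = [[0.0] * cols for _ in range(rows)]
    for layer, w in zip(layers, weights):
        for i in range(rows):
            for j in range(cols):
                out[i][j] += w * layer[i][j]
    return out

# Illustrative 2x2 rasters for erosivity, erodibility, slope, land cover.
erosivity   = [[0.9, 0.4], [0.2, 0.7]]
erodibility = [[0.8, 0.5], [0.3, 0.6]]
slope       = [[1.0, 0.2], [0.1, 0.9]]
cover       = [[0.7, 0.3], [0.2, 0.8]]

susceptibility = weighted_overlay(
    [erosivity, erodibility, slope, cover],
    weights=[0.4, 0.25, 0.2, 0.15],  # hypothetical AHP-derived weights
)
print(susceptibility)
```

In a real GIS workflow each layer would be a full raster grid, but the combination rule per cell is the same.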
Abstract:
The objective of this dissertation is to propose a multi-criteria decision aid model to be used by customers of travel agencies to help them choose the best travel package. The main objective is to simplify the travel package choice by identifying the customers' models of values and preferences and applying them to the available packages. The Analytic Hierarchy Process (AHP) is used to structure a hierarchical decision model composed of six criteria (package cost, hotel category, city safety, travel time, direct flight, and position in the ranking of the 10 most visited destinations) and five real alternatives of three-day holiday packages built from travel agency data. The decision analysis was carried out for the choice of a travel package by a group of two couples who regularly travel together, who were asked to make pairwise judgments of the criteria and the alternatives. The main results show that, although the group travels together, there are different models of values in the criteria weights, along with a certain convergence in the preference scales of the alternatives under each criterion. No dominant alternative emerged for every member of the group individually, but an analysis of the total utility of the group yields a ranking of the travel packages, with one alternative clearly ahead of the others. The sensitivity analysis reveals changes in the ranking, but the two best-ranked alternatives of the normal analysis remain the best ranked in the sensitivity analysis, although with their positions swapped. The analysis also led to a simplification of the process by excluding alternatives dominated by others. The main conclusion is that the proposed model and method allow a simplification of the decision process in the choice of travel packages.
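The group aggregation step described above can be sketched as a weighted sum per member followed by an average over the group. The weight vectors and alternative scores below are invented for illustration (two members, two criteria, two packages), not the actual judgments collected from the couples.

```python
def total_utility(criteria_weights, alt_scores):
    """Total utility of one alternative for one decision maker (weighted sum)."""
    return sum(w * s for w, s in zip(criteria_weights, alt_scores))

def group_ranking(members, packages):
    """Rank alternatives by the average of the members' total utilities."""
    ranked = []
    for name, scores_per_member in packages.items():
        utilities = [total_utility(w, s)
                     for w, s in zip(members, scores_per_member)]
        ranked.append((name, sum(utilities) / len(utilities)))
    return sorted(ranked, key=lambda t: t[1], reverse=True)

# One criteria-weight vector per member (hypothetical AHP results).
members = [[0.7, 0.3], [0.4, 0.6]]
# For each package, one score vector per member (local AHP priorities).
packages = {
    "package_A": [[0.6, 0.5], [0.6, 0.5]],
    "package_B": [[0.4, 0.5], [0.4, 0.5]],
}

print(group_ranking(members, packages))
```

Averaging individual utilities is only one possible group aggregation rule; geometric means of judgments are another common choice in AHP group decision making.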
Abstract:
This master's thesis investigates the strategic decision criteria of participants in Local Production Arrangements (LPAs) in Brazil. LPAs are an initiative of support agents for enterprises with the purpose of organizing joint actions for the development of clusters of enterprises. The choice of actions is a decision of the participating enterprises, and this work applies a multi-criteria analysis method to analyze the criteria of entrepreneurs participating in an LPA. The method used is the Analytic Hierarchy Process (AHP), and an application is presented based on questionnaires answered by participants of a ceramics LPA in the northeast of Brazil. The main results show, first, that beyond the implicit strategy of each enterprise there is only one objective for the LPA group, so that, at the outset, an action decided by all of them tends to favor some more than others. Second, it was observed that there are general inconsistencies between the strategic objectives and the importance assigned to the criteria, although there were cases of coherence. The main conclusion is that the use of MCDA methods is useful to improve the decision-making process and to bring more transparency to the logic behind the results.
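AHP questionnaires such as the ones applied here are usually checked for judgment consistency. A sketch of Saaty's consistency ratio for a 3×3 pairwise matrix follows; the matrix is illustrative, not data from the LPA survey, and a ratio below 0.10 is the conventional acceptability threshold.

```python
import math

# Saaty's random consistency index by matrix size.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}

def consistency_ratio(matrix):
    """Saaty's consistency ratio for a pairwise comparison matrix."""
    n = len(matrix)
    geo = [math.prod(row) ** (1.0 / n) for row in matrix]
    w = [g / sum(geo) for g in geo]
    # lambda_max: mean ratio between the rows of (A @ w) and w
    aw = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(aw[i] / w[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    return ci / RI[n]

# A perfectly consistent hypothetical matrix (2 * 2 = 4), so CR is ~0.
m = [[1.0, 2.0, 4.0],
     [0.5, 1.0, 2.0],
     [0.25, 0.5, 1.0]]
print(consistency_ratio(m) < 0.10)  # -> True
```

Judgments with CR above the threshold are typically sent back to the respondent for revision.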
Abstract:
Multi-objective problems may have many optimal solutions, which together form the Pareto optimal set. A class of heuristic algorithms for such problems, here called optimizers, produces approximations of this optimal set. The approximation set kept by the optimizer may be bounded or unbounded. The benefit of an unbounded archive is the guarantee that every nondominated solution generated during the process is saved. However, given the large number of solutions that can be generated, maintaining such an archive and frequently comparing new solutions to the stored ones may demand a high computational cost. The alternative is a bounded archive, which raises the problem of having to discard nondominated solutions when the archive is full. Several techniques have been proposed to handle this problem, but investigations show that none of them can reliably prevent deterioration of the archive. This work investigates a technique to be used together with ideas previously proposed in the literature for bounded archives. The technique consists of keeping discarded solutions in a secondary archive and periodically recycling them, bringing them back into the optimization. Three recycling methods are presented. To verify whether these ideas can improve the archive content during optimization, they were implemented together with other techniques from the literature. A computational experiment with the NSGA-II, SPEA2, PAES, MOEA/D, and NSGA-III algorithms, applied to many classes of problems, is presented. The potential and the difficulties of the proposed techniques are evaluated through statistical tests.
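The secondary-archive idea can be sketched as follows. This is a minimal illustration of the general mechanism only: the acceptance rule on overflow here is a random drop, and the dissertation's three specific recycling methods are not reproduced.

```python
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

class RecyclingArchive:
    """Bounded nondominated archive plus a secondary archive of
    discarded solutions that can be recycled into the optimization."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.archive = []   # current nondominated set (bounded)
        self.recycled = []  # secondary archive of discarded solutions

    def add(self, sol):
        if any(dominates(a, sol) for a in self.archive):
            return  # sol is dominated; nothing to store
        # move newly dominated members to the secondary archive
        dominated = [a for a in self.archive if dominates(sol, a)]
        self.archive = [a for a in self.archive if a not in dominated]
        self.recycled.extend(dominated)
        self.archive.append(sol)
        if len(self.archive) > self.capacity:
            # full: discard one member into the secondary archive
            victim = self.archive.pop(random.randrange(len(self.archive)))
            self.recycled.append(victim)

    def recycle(self, k=1):
        """Bring up to k discarded solutions back as candidate points."""
        out, self.recycled = self.recycled[:k], self.recycled[k:]
        return out

arc = RecyclingArchive(capacity=3)
for point in [(1, 5), (5, 1), (3, 3), (2, 2)]:
    arc.add(point)
# (3, 3) was dominated by (2, 2) and moved to the secondary archive.
print(sorted(arc.archive), arc.recycled)
```

A real optimizer would periodically call `recycle()` and reinsert the returned points into its population, which is the mechanism the dissertation evaluates.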
Abstract:
We propose a new paradigm for collective learning in multi-agent systems (MAS) as a solution to the problem in which several agents acting in the same environment must learn, simultaneously, how to perform tasks based on feedback given by each of the other agents. We introduce the proposed paradigm in the form of a reinforcement learning algorithm, named reinforcement learning with influence values. While learning from rewards, each agent evaluates the relation between its current state and/or the action executed in that state (its current belief) together with the reward obtained after all interacting agents perform their actions; the reward is a result of the interference of the others. The agent considers the opinions of all its colleagues when attempting to update the values of its states and/or actions. The idea is that the system as a whole must reach an equilibrium in which all agents are satisfied with the obtained results, that is, in which the values of the state/action pairs match the rewards obtained by each agent. This dynamic way of setting the values of states and/or actions makes this new reinforcement learning paradigm the first to naturally incorporate the fact that the presence of other agents makes the environment dynamic. As a direct result, the internal state, the actions, and the rewards obtained by all other agents are implicitly included in the internal state of each agent. This makes our proposal the first complete solution to the conceptual problem that arises when applying reinforcement learning to multi-agent systems, caused by the mismatch between the environment and agent models. Based on the proposed model, we create the IVQ-learning algorithm, which is exhaustively tested in repeated games with two, three, and four agents, and in stochastic games that require cooperation or collaboration.
The algorithm proves to be a good option for obtaining solutions that guarantee convergence to the optimal Nash equilibrium in cooperative problems. The experiments clearly show that the proposed paradigm is theoretically and experimentally superior to traditional approaches. Moreover, this new paradigm enlarges the set of reinforcement learning applications in MAS: besides traditional MAS learning problems, such as task coordination in multi-robot systems, it becomes possible to apply reinforcement learning to problems that are essentially collaborative.
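The abstract does not give the exact IVQ-learning update rule, but the general shape of a Q-update that mixes an agent's own temporal-difference target with an influence term built from the other agents' opinions can be sketched as follows. The `opinion` definition, the averaging of influences, and the mixing coefficient `beta` are illustrative assumptions, not the dissertation's formulas.

```python
import random
from collections import defaultdict

class IVAgent:
    """Q-learning agent whose update target is adjusted by an 'influence'
    term aggregated from the other agents. Sketch only: the real
    IVQ-learning rule is defined in the dissertation."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, beta=0.5):
        self.q = defaultdict(float)  # (state, action) -> value
        self.actions = actions
        self.alpha, self.gamma, self.beta = alpha, gamma, beta

    def act(self, state, eps=0.1):
        # epsilon-greedy action selection
        if random.random() < eps:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def opinion(self, state, action, reward):
        """How much this agent's outcome deviated from its expectation."""
        return reward - self.q[(state, action)]

    def update(self, state, action, reward, next_state, influences):
        # influences: the other agents' opinions about the joint outcome
        influence = sum(influences) / len(influences) if influences else 0.0
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        target = reward + self.beta * influence + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])
```

In a repeated game, each agent would act, collect the joint reward, exchange `opinion` values with the others, and then call `update` with the received influences.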
Abstract:
Although individual supervised Machine Learning (ML) techniques, also known as classifiers or classification algorithms, provide solutions that are usually considered efficient, experimental results obtained with large pattern sets, and/or sets containing a significant amount of irrelevant or incomplete data, show a decrease in the accuracy of these techniques. In other words, such techniques cannot perform pattern recognition efficiently in complex problems. With the intention of improving the performance and efficiency of these ML techniques, the idea arose of making several ML algorithms work jointly, giving origin to the term Multi-Classifier System (MCS). An MCS has as its components different ML algorithms, called base classifiers, and combines the results obtained by these algorithms to reach the final result. For an MCS to outperform its base classifiers, the results obtained by the base classifiers must present a certain diversity, that is, a difference between the outputs produced by the classifiers that compose the system; there is no point in an MCS whose base classifiers give identical answers to the same patterns. Although MCSs present better results than individual systems, there is a constant search to further improve their results. Aiming at this improvement, at greater consistency in the results, and at greater diversity among the classifiers of an MCS, methodologies have recently been investigated that use weights, or confidence values. These weights can describe the importance of the output a given classifier supplied when associating each pattern with a certain class. The weights are also used, in combination with the outputs of the classifiers, during the recognition (use) phase of the MCS.
There are different ways of calculating these weights, which can be divided into two categories: static weights and dynamic weights. The first category is characterized by weights whose values do not change during the classification process, unlike the second category, where the values are modified during classification. In this work, an analysis is carried out to verify whether the use of weights, both static and dynamic, can increase the performance of MCSs in comparison with individual systems. Moreover, the diversity obtained by the MCSs is analyzed, in order to verify whether there is a relation between the use of weights in MCSs and different levels of diversity.
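The static/dynamic weighting distinction described above can be sketched with a weighted-voting combiner. The toy predictions, the fixed weights, and the confidence-based dynamic weighting rule are invented for illustration; they are not the specific schemes evaluated in the thesis.

```python
from collections import Counter

def weighted_vote(predictions, weights):
    """Combine base-classifier outputs by weighted voting."""
    scores = Counter()
    for label, w in zip(predictions, weights):
        scores[label] += w
    return scores.most_common(1)[0][0]

def dynamic_weights(confidences):
    """Dynamic weights: derived per pattern from each classifier's
    confidence in its own prediction (normalized to sum to 1)."""
    total = sum(confidences)
    return [c / total for c in confidences]

# Three hypothetical base classifiers voting on one pattern.
preds = ["cat", "dog", "cat"]

static_w = [0.5, 0.3, 0.2]  # fixed for the whole classification process
print(weighted_vote(preds, static_w))  # -> cat (0.7 vs 0.3)

dyn_w = dynamic_weights([0.9, 0.95, 0.6])  # recomputed per pattern
print(weighted_vote(preds, dyn_w))
```

The difference is only in where the weights come from: static weights are fixed (e.g., from validation accuracy), while dynamic weights are recomputed for each pattern being classified.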
Abstract:
Multi-cloud applications are composed of services offered by multiple cloud platforms, where the user/developer has full knowledge of the use of such platforms. Using multiple cloud platforms avoids the following problems: (i) vendor lock-in, i.e., the dependency of the application on a certain cloud platform, which is harmful in case of degradation or failure of platform services, or even price increases in service usage; (ii) degradation or failure of the application due to fluctuations in the quality of service (QoS) provided by some cloud platform, or even due to the failure of some service. In a multi-cloud scenario, it is possible to replace a failed service, or one with QoS problems, by an equivalent service of another cloud platform. For an application to adopt the multi-cloud perspective, it is necessary to create mechanisms able to select which cloud services/platforms should be used in accordance with the requirements determined by the programmer/user. In this context, the major challenges in the development of such applications include: (i) choosing which underlying cloud services and platforms should be used, based on user-defined requirements in terms of functionality and quality; (ii) continually monitoring dynamic information (such as response time, availability, and price) related to cloud services, in addition to the wide variety of services; and (iii) adapting the application when QoS violations affect user-defined requirements. This PhD thesis proposes an approach for the dynamic adaptation of multi-cloud applications, to be applied when a service becomes unavailable or when the requirements set by the user/developer indicate that another available multi-cloud configuration meets them more efficiently. The proposed strategy is composed of two phases.
The first phase consists of modeling the application, exploring the capacity of representing commonalities and variability proposed in the context of the Software Product Line (SPL) paradigm. In this phase, an extended feature model is used to specify the cloud service configuration to be used by the application (commonalities) and the possible providers for each service (variability); furthermore, the non-functional requirements associated with cloud services are specified as properties in this model, describing dynamic information about these services. The second phase consists of an autonomic process based on the MAPE-K control loop, which is responsible for optimally selecting a multi-cloud configuration that meets the established requirements and for performing the adaptation. The proposed adaptation strategy is independent of the programming technique used to perform the adaptation; in this work it is implemented with several techniques, such as aspect-oriented programming, context-oriented programming, and component- and service-oriented programming. Based on the proposed phases, we assess: (i) whether the modeling process and the specification of non-functional requirements can ensure effective monitoring of user satisfaction; (ii) whether the optimal selection process presents significant gains compared to a sequential approach; and (iii) which techniques present the best trade-off between development/modularity effort and performance.
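The analyze/plan steps of the MAPE-K loop described above can be sketched as filtering candidate configurations against the user-defined requirements and then optimizing over the feasible ones. The property names, the monitored values, and the "cheapest feasible" objective are illustrative simplifications; the thesis performs the selection over an extended feature model.

```python
def meets_requirements(config, requirements):
    """Check a candidate configuration against the user-defined
    non-functional requirements."""
    return (config["response_ms"] <= requirements["max_response_ms"]
            and config["availability"] >= requirements["min_availability"]
            and config["price"] <= requirements["max_price"])

def select_configuration(configs, requirements):
    """Keep only feasible configurations and pick the cheapest
    (simplified selection objective)."""
    feasible = [c for c in configs if meets_requirements(c, requirements)]
    if not feasible:
        return None  # no valid configuration: trigger further adaptation
    return min(feasible, key=lambda c: c["price"])

# Hypothetical monitored data for three multi-cloud configurations.
configs = [
    {"name": "A", "response_ms": 120, "availability": 0.999, "price": 0.30},
    {"name": "B", "response_ms": 90,  "availability": 0.990, "price": 0.25},
    {"name": "C", "response_ms": 200, "availability": 0.999, "price": 0.10},
]
reqs = {"max_response_ms": 150, "min_availability": 0.995, "max_price": 0.40}

best = select_configuration(configs, reqs)
print(best["name"])  # -> A (B fails availability, C fails response time)
```

In the full approach this selection would be re-run whenever monitoring detects a QoS violation, and the execute step would rebind the application's services to the chosen configuration.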