10 results for Pareto-optimal solutions
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
Multi-objective optimization algorithms aim at finding Pareto-optimal solutions. Recovering Pareto fronts or Pareto sets from a limited number of function evaluations is a challenging problem. A popular approach in the case of expensive-to-evaluate functions is to appeal to metamodels. Kriging has been shown to be efficient as a basis for sequential multi-objective optimization, notably through infill sampling criteria that balance exploitation and exploration, such as the Expected Hypervolume Improvement. Here we consider Kriging metamodels not only for selecting new points, but as a tool for estimating the whole Pareto front and quantifying how much uncertainty remains about it at any stage of Kriging-based multi-objective optimization algorithms. Our approach relies on the Gaussian process interpretation of Kriging and builds on conditional simulations. Using concepts from random set theory, we propose to adapt the Vorob’ev expectation and deviation to capture the variability of the set of non-dominated points. Numerical experiments illustrate the potential of the proposed workflow, and examples show how Gaussian process simulations and the estimated Vorob’ev deviation can be used to monitor how accurately Kriging-based multi-objective optimization algorithms learn the Pareto front.
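A minimal sketch of the underlying idea, assuming two objectives modeled by independent Gaussian processes on a one-dimensional design space (scikit-learn's GaussianProcessRegressor stands in for a Kriging model; the design points and toy objectives are made up): each conditional simulation yields one plausible Pareto front, and the spread of these fronts across simulations is the raw material from which set-valued summaries such as the Vorob’ev expectation and deviation are built.

```python
# Sketch only: conditional GP simulations as a proxy for Pareto-front
# uncertainty. Toy data; not the authors' implementation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def non_dominated(points):
    """Rows of `points` are [f1, f2]; keep those not dominated
    under component-wise minimization."""
    keep = []
    for i, p in enumerate(points):
        dominated = any(np.all(q <= p) and np.any(q < p)
                        for j, q in enumerate(points) if j != i)
        if not dominated:
            keep.append(p)
    return np.array(keep)

rng = np.random.default_rng(0)
X_train = rng.uniform(0.0, 1.0, size=(8, 1))    # evaluated designs
f1 = np.sin(3.0 * X_train[:, 0])                # toy objective 1
f2 = np.cos(2.0 * X_train[:, 0])                # toy objective 2

gp1 = GaussianProcessRegressor().fit(X_train, f1)
gp2 = GaussianProcessRegressor().fit(X_train, f2)

X_grid = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
fronts = []
for s in range(50):                             # 50 conditional simulations
    y1 = gp1.sample_y(X_grid, random_state=s).ravel()
    y2 = gp2.sample_y(X_grid, random_state=s).ravel()
    fronts.append(non_dominated(np.column_stack([y1, y2])))
# The variability of `fronts` is what the Vorob'ev deviation quantifies.
```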
Abstract:
Agents with single-peaked preferences share a resource coming from different suppliers; each agent is connected to only a subset of the suppliers. Examples include workload balancing, sharing earmarked funds, and rationing utilities after a storm. Unlike in the one-supplier model, in a Pareto-optimal allocation agents who get more than their peak from underdemanded suppliers coexist with agents who get less from overdemanded suppliers. Our Egalitarian solution is the Lorenz-dominant Pareto-optimal allocation. It treats agents with equal demands as equally as the connectivity constraints allow. Together, Strategyproofness, Pareto Optimality, and Equal Treatment of Equals characterize our solution.
Abstract:
We present a real-world staff-assignment problem that was reported to us by a provider of online workforce scheduling software. The problem consists of assigning employees to work shifts subject to a large variety of requirements related to work laws, work-shift compatibility, workload balancing, and personal preferences of employees. A target value is given for each requirement, and all possible deviations from these values are associated with acceptance levels. The objective is to minimize the total number of deviations in ascending order of the acceptance levels. We present an exact lexicographic goal programming MILP formulation and an MILP-based heuristic. The heuristic consists of two phases: in the first phase, a feasible schedule is built; in the second phase, parts of the schedule are iteratively re-optimized by applying an exact MILP model. A major advantage of such MILP-based approaches is the flexibility to account for additional constraints or modified planning objectives, which is important as the requirements may vary depending on the company or planning period. The applicability of the heuristic is demonstrated on a test set derived from real-world data. Our computational results indicate that the heuristic is able to devise optimal solutions to non-trivial problem instances and outperforms the exact lexicographic goal programming formulation on medium- and large-sized problem instances.
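To illustrate the lexicographic principle on a toy instance (hypothetical data, not the paper's model), the following PuLP sketch minimizes deviations level by level, freezing each level's optimum before solving the next:

```python
# Sketch of lexicographic goal programming: 12 workload units over two
# shifts, target 5 each; shift 0's deviation outranks shift 1's.
import pulp

x = [pulp.LpVariable(f"x{i}", lowBound=0) for i in range(2)]
dpos = [pulp.LpVariable(f"dpos{i}", lowBound=0) for i in range(2)]
dneg = [pulp.LpVariable(f"dneg{i}", lowBound=0) for i in range(2)]

def base_model():
    prob = pulp.LpProblem("staffing", pulp.LpMinimize)
    prob += pulp.lpSum(x) == 12                 # total workload to cover
    for i in range(2):
        prob += x[i] - 5 == dpos[i] - dneg[i]   # deviation from target 5
    return prob

frozen = []
for level in range(2):                          # one solve per priority level
    prob = base_model()
    for lvl, val in frozen:                     # keep earlier levels optimal
        prob += dpos[lvl] + dneg[lvl] <= val
    prob += dpos[level] + dneg[level]           # objective: this level only
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    frozen.append((level, pulp.value(prob.objective)))

print([pulp.value(v) for v in x])               # [5.0, 7.0]
```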
Abstract:
Human resources managers often use assessment centers to evaluate candidates for a job position. During an assessment center, the candidates perform a series of exercises. The exercises require one or two assessors (e.g., managers or psychologists) who observe and evaluate the candidate. If an exercise is designed as a role-play, an actor is required as well, who plays, e.g., an unhappy customer with whom the candidate has to deal. Besides performing the exercises, the candidates have a lunch break within a prescribed time window. Each candidate should be observed by approximately half of the assessors. Moreover, an assessor cannot be assigned to a candidate if they know each other personally. The planning problem consists of determining (1) resource-feasible start times for all exercises and lunch breaks and (2) a feasible assignment of assessors to candidates, such that the assessment center duration is minimized. We propose a list-scheduling heuristic that generates feasible schedules for such assessment centers. We develop novel procedures for devising an appropriate scheduling list and for incorporating the problem-specific constraints. Our computational results indicate that our approach is capable of devising optimal or near-optimal solutions to real-world instances within short CPU time.
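A generic list-scheduling skeleton, with hypothetical exercises and resources, gives the flavor of the approach: activities are taken in list order, and each starts as soon as all of its required resources (assessors, actors) are free. The paper's problem-specific list construction and assessor-assignment rules are not reproduced here.

```python
# Sketch only: serial list scheduling with unary resources (toy data).
# activities: (name, duration, required resources), in priority-list order.
activities = [("exercise1", 2, {"assessor_A"}),
              ("roleplay1", 3, {"assessor_A", "actor_1"}),
              ("roleplay2", 2, {"actor_1"})]

busy_until = {}                    # resource -> time at which it becomes free
schedule = {}

for name, duration, resources in activities:
    start = max((busy_until.get(r, 0) for r in resources), default=0)
    for r in resources:            # claim every required resource
        busy_until[r] = start + duration
    schedule[name] = (start, start + duration)

print(schedule)
# {'exercise1': (0, 2), 'roleplay1': (2, 5), 'roleplay2': (5, 7)}
```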
Abstract:
We construct an empirically informed computational model of fiscal federalism, testing whether horizontal or vertical equalization can solve the fiscal externality problem in an environment in which heterogeneous agents can move and vote. The model expands on the literature by considering the case of progressive local taxation. Although the consequences of progressive taxation under fiscal federalism are well understood, they have not been studied in a context with tax equalization, despite its widespread implementation. The model also expands on the literature by comparing the standard median voter model with a realistic alternative voting mechanism. We find that fiscal federalism with progressive taxation naturally leads to segregation as well as inefficient and inequitable public goods provision, while the alternative voting mechanism generates more efficient, though less equitable, public goods provision. Equalization policy, under both types of voting, is largely undermined by micro-actors' choices. For this reason, the model does not find the anticipated effects of vertical equalization discouraging public goods spending among wealthy jurisdictions and of horizontal equalization encouraging it among poor jurisdictions. Finally, we identify two optimal scenarios, superior to both complete centralization and complete devolution. These scenarios are not only Pareto optimal but also conform to a Rawlsian view of justice, offering the best possible outcome for the worst-off. Despite offering the best possible outcomes, both scenarios still entail significant economic segregation and inequitable public goods provision. Under the optimal scenarios, agents shift the bulk of revenue collection to the federal government, with few jurisdictions maintaining a small local tax.
Abstract:
This paper presents a parallel surrogate-based global optimization method for computationally expensive objective functions that is more effective for larger numbers of processors. To reach this goal, we integrated concepts from multi-objective optimization and tabu search into single-objective surrogate optimization. Our proposed derivative-free algorithm, called SOP, uses non-dominated sorting of points at which the expensive function has been previously evaluated. The two objectives are the expensive function value of the point and the minimum distance of the point to previously evaluated points. Based on the results of non-dominated sorting, P points from the sorted fronts are selected as centers, from which many candidate points are generated by random perturbations. Based on surrogate approximation, the best candidate point is subsequently selected for expensive evaluation for each of the P centers, with simultaneous computation on P processors. Centers that previously did not generate good solutions are tabu for a given tenure. We show almost sure convergence of this algorithm under some conditions. The performance of SOP is compared with that of two RBF-based methods. The test results show that SOP is an efficient method that can reduce the time required to find a good near-optimal solution. In a number of cases the efficiency of SOP is so good that SOP with 8 processors found an accurate answer in less wall-clock time than the other algorithms did with 32 processors.
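A schematic rendering of the selection step just described (toy data; a quadratic stands in for both the expensive function and the surrogate, and points are ranked by their number of dominators, a simplification of full non-dominated sorting):

```python
# Sketch only: pick P centers by non-domination on (value, -min distance),
# then one surrogate-best perturbed candidate per center.
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(20, 2))      # previously evaluated points
y = np.sum((X - 0.3) ** 2, axis=1)           # their expensive values (toy)

dist = np.linalg.norm(X[:, None] - X[None, :], axis=2)
np.fill_diagonal(dist, np.inf)
min_dist = dist.min(axis=1)                  # isolation of each point

# Objective 1: low function value; objective 2: high distance to others.
crit = np.column_stack([y, -min_dist])
n_dominators = np.array([
    sum(np.all(crit[j] <= crit[i]) and np.any(crit[j] < crit[i])
        for j in range(len(crit)))
    for i in range(len(crit))
])

P = 4
centers = X[np.argsort(n_dominators)[:P]]    # best-ranked points

def surrogate(points):                       # stand-in for an RBF surrogate
    return np.sum((points - 0.3) ** 2, axis=1)

new_evals = []
for c in centers:                            # one candidate per processor
    cands = c + rng.normal(scale=0.05, size=(30, 2))
    new_evals.append(cands[np.argmin(surrogate(cands))])
```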
Abstract:
Resuscitation from hemorrhagic shock relies on fluid retransfusion. However, the optimal properties of the fluid have not been established. The aim of the present study was to test the influence of the concentration of hydroxyethyl starch (HES) solution on plasma viscosity and colloid osmotic pressure (COP), systemic and microcirculatory recovery, and oxygen delivery and consumption after resuscitation, which were assessed in the hamster window chamber preparation by intravital microscopy. Awake hamsters were subjected to 50% hemorrhage and were resuscitated with 25% of the estimated blood volume using 5%, 10%, or 20% HES solution. The increase in concentration led to an increase in COP (from 20 to 70 and 194 mmHg) and viscosity (from 1.7 to 3.8 and 14.4 cP). Cardiac index and microcirculatory and metabolic recovery were improved with 10% and 20% HES compared with 5% HES. Oxygen delivery and consumption in the dorsal skinfold chamber were more than doubled with 10% and 20% HES compared with 5% HES. This was attributed to the beneficial effect of restored or increased plasma COP and plasma viscosity obtained with 10% and 20% HES, leading to improved microcirculatory blood flow early in the resuscitation period. The increase in COP led to an increase in blood volume, as shown by a reduction in hematocrit. Mean arterial pressure was significantly improved in animals receiving the 10% and 20% solutions. In conclusion, the present results show that increasing the concentration of HES, yielding hyperoncotic and hyperviscous solutions, is beneficial for resuscitation from hemorrhagic shock, because normalization of COP and viscosity led to a rapid recovery of microcirculatory parameters.
Abstract:
We investigate a class of optimal control problems that exhibit constant, exogenously given delays in the control within the equation of motion of the differential states. To this end, we formulate an exemplary optimal control problem with one stock and one control variable and review some analytic properties of an optimal solution. However, analytical considerations are quite limited in the case of delayed optimal control problems. In order to overcome these limits, we reformulate the problem and apply direct numerical methods to calculate approximate solutions that give a better understanding of this class of optimization problems. In particular, we present two possibilities for reformulating the delayed optimal control problem as an instantaneous optimal control problem and show how these can be solved numerically with a state-of-the-art direct method, applying Bock’s direct multiple shooting algorithm. We further demonstrate the strength of our approach with two economic examples.
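Schematically, and in notation of our own choosing rather than necessarily the paper's, the class of problems has a constant delay $\tau > 0$ entering through the control in the state equation:

\begin{align*}
\max_{u(\cdot)}\ & \int_0^T e^{-\rho t}\, F\bigl(x(t), u(t)\bigr)\, dt \\
\text{s.t.}\ \ & \dot{x}(t) = f\bigl(x(t), u(t-\tau)\bigr), \quad t \in [0, T], \\
& x(0) = x_0, \qquad u(s) = u_0(s) \ \text{for } s \in [-\tau, 0).
\end{align*}

If the control is discretized on a grid whose step size divides $\tau$, the delayed term $u(t-\tau)$ coincides with the control variable of an earlier grid interval, so the problem becomes an ordinary (instantaneous) optimal control problem amenable to direct multiple shooting.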
Abstract:
Equipped with state-of-the-art smartphones and mobile devices, today's highly interconnected urban population is increasingly dependent on these gadgets to organize and plan their daily lives. These applications often rely on the current (or preferred) locations of individual users or a group of users to provide the desired service, which jeopardizes their privacy; users do not necessarily want to reveal their current (or preferred) locations to the service provider or to other, possibly untrusted, users. In this paper, we propose privacy-preserving algorithms for determining an optimal meeting location for a group of users. We perform a thorough privacy evaluation by formally quantifying the privacy loss of the proposed approaches. In order to study the performance of our algorithms in a real deployment, we implement them and test their execution efficiency on Nokia smartphones. By means of a targeted user study, we attempt to gain insight into the privacy awareness of users of location-based services and the usability of the proposed solutions.
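Stripped of the cryptography, the underlying objective can be as simple as a fair rendez-vous point. The sketch below (hypothetical coordinates) picks, among candidate venues, the one minimizing the maximum distance any user must travel; the paper's actual contribution is performing such a computation without revealing the users' locations, which this plain version deliberately does not attempt.

```python
# Sketch only: the non-private core of an optimal-meeting-location query.
import math

users = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]    # user locations (sensitive!)
venues = [(1.0, 1.0), (2.0, 1.0), (3.0, 2.0)]   # candidate meeting points

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Minimize the worst-case travel distance over all users.
best = min(venues, key=lambda v: max(dist(u, v) for u in users))
print(best)                                      # (2.0, 1.0)
```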
Abstract:
Vein grafts are still the most commonly used graft material in cardiovascular surgery, and much effort has been spent in recent years on investigating the optimal harvesting technique. A related topic of similar importance has remained more or less incidental: the storage solutions used for vein grafts after procurement and prior to implantation are, despite their assumed impact, a relatively neglected theme. There is no doubt that the endothelium plays a key role in the long-term patency of vein grafts, but the effects of the different storage solutions on the endothelium remain unclear. In a review of the literature, we found 20 papers that specifically address the question of which of the currently available preservation solutions are superior, harmless, damaging, or ineffective. The focus lies on saline and autologous whole blood. Besides these two storage media, novel and alternative solutions have been investigated, with surprising findings. In addition, a few words will be devoted to potential alternatives and novel solutions on the market. As there is currently no randomized clinical trial comparing saline with autologous whole blood, this review compares all previous studies and methods of analysis to provide a certain level of evidence on this topic. In summary, saline has negative effects on the endothelial layers and therefore may compromise graft patency. Related factors, such as distension pressure, may outweigh the initial benefit of autologous whole blood or storage solutions and intensify the harmful effects of warm saline. In addition, there is no uniform consensus on the superiority of autologous whole blood for vein graft storage. This may open the door to alternatives such as the University of Wisconsin solution or one of the specifically designed storage solutions like TiProtec™ or Somaluthion™. Whether these preservation solutions are superior or advantageous remains the subject of further studies.