14 results for Pareto optimality
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
Agents with single-peaked preferences share a resource coming from different suppliers; each agent is connected to only a subset of the suppliers. Examples include workload balancing, sharing earmarked funds, and rationing utilities after a storm. Unlike in the one-supplier model, in a Pareto optimal allocation agents who get more than their peak from underdemanded suppliers coexist with agents who get less than their peak from overdemanded suppliers. Our Egalitarian solution is the Lorenz-dominant Pareto optimal allocation. It treats agents with equal demands as equally as the connectivity constraints allow. Together, Strategyproofness, Pareto Optimality, and Equal Treatment of Equals characterize our solution.
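A minimal sketch of the Lorenz-dominance comparison behind such an Egalitarian solution (the function names and the equal-total assumption are illustrative, not taken from the paper):

    import numpy as np

    def lorenz_curve(alloc):
        # Cumulative sums of the allocation sorted in increasing order.
        return np.cumsum(np.sort(np.asarray(alloc, dtype=float)))

    def lorenz_dominates(a, b):
        # True if allocation a weakly Lorenz-dominates allocation b;
        # assumes both allocations distribute the same total amount.
        return bool(np.all(lorenz_curve(a) >= lorenz_curve(b)))

    # The more equal split Lorenz-dominates the unequal one (totals are both 10).
    print(lorenz_dominates([3, 3, 4], [1, 4, 5]))  # True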
Abstract:
Multi-objective optimization algorithms aim at finding Pareto-optimal solutions. Recovering Pareto fronts or Pareto sets from a limited number of function evaluations is a challenging problem. A popular approach in the case of expensive-to-evaluate functions is to appeal to metamodels. Kriging has been shown to be efficient as a basis for sequential multi-objective optimization, notably through infill sampling criteria that balance exploitation and exploration, such as the Expected Hypervolume Improvement. Here we consider Kriging metamodels not only for selecting new points, but as a tool for estimating the whole Pareto front and quantifying how much uncertainty remains on it at any stage of a Kriging-based multi-objective optimization algorithm. Our approach relies on the Gaussian process interpretation of Kriging and builds upon conditional simulations. Using concepts from random set theory, we propose to adapt the Vorob’ev expectation and deviation to capture the variability of the set of non-dominated points. Numerical experiments illustrate the potential of the proposed workflow, and examples show how Gaussian process simulations and the estimated Vorob’ev deviation can be used to monitor the ability of Kriging-based multi-objective optimization algorithms to accurately learn the Pareto front.
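A rough sketch of the random-set summary step, assuming a collection of Pareto fronts has already been drawn via Gaussian process conditional simulations, each front being the non-dominated subset of one simulation (all names and the grid-based attainment computation are illustrative):

    import numpy as np

    def non_dominated(points):
        # Non-dominated subset for componentwise minimisation.
        pts = np.asarray(points, dtype=float)
        keep = [p for p in pts
                if not np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))]
        return np.array(keep)

    def attainment_probability(simulated_fronts, grid):
        # For each grid point z, the fraction of simulated fronts that
        # attain z, i.e. that contain a point dominating z.
        grid = np.asarray(grid, dtype=float)
        hits = np.zeros(len(grid))
        for front in simulated_fronts:
            front = np.asarray(front, dtype=float)
            hits += np.all(front[None, :, :] <= grid[:, None, :], axis=2).any(axis=1)
        return hits / len(simulated_fronts)

    # The Vorob'ev expectation is the attained set at the threshold whose
    # volume matches the mean attained volume; the Vorob'ev deviation
    # measures the expected symmetric-difference volume around it.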
Abstract:
This paper presents a parallel surrogate-based global optimization method for computationally expensive objective functions that is more effective for larger numbers of processors. To reach this goal, we integrated concepts from multi-objective optimization and tabu search into single-objective surrogate optimization. Our proposed derivative-free algorithm, called SOP, uses non-dominated sorting of points for which the expensive function has been previously evaluated. The two objectives are the expensive function value of the point and the minimum distance of the point to previously evaluated points. Based on the results of non-dominated sorting, P points from the sorted fronts are selected as centers, from which many candidate points are generated by random perturbations. Based on the surrogate approximation, the best candidate point is subsequently selected for expensive evaluation for each of the P centers, with simultaneous computation on P processors. Centers that previously did not generate good solutions are made tabu for a given tenure. We show almost sure convergence of this algorithm under some conditions. The performance of SOP is compared with two RBF-based methods. The test results show that SOP is an efficient method that can reduce the time required to find a good near-optimal solution. In a number of cases the efficiency of SOP is so good that SOP with 8 processors found an accurate answer in less wall-clock time than the other algorithms did with 32 processors.
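A schematic of SOP's center selection using the two objectives named above, with non-dominated sorting simplified to repeated front peeling (all names and the distance-sign convention are illustrative):

    import numpy as np

    def sop_centers(X_evaluated, f_values, P):
        # Rank evaluated points by (expensive value, -distance to the
        # nearest other evaluated point); both objectives are minimised.
        X = np.asarray(X_evaluated, dtype=float)
        d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
        np.fill_diagonal(d, np.inf)
        obj = np.column_stack([f_values, -d.min(axis=1)])
        remaining, centers = list(range(len(X))), []
        while remaining and len(centers) < P:
            front = [i for i in remaining
                     if not any(np.all(obj[j] <= obj[i]) and np.any(obj[j] < obj[i])
                                for j in remaining)]
            centers.extend(front)                      # peel one front
            remaining = [i for i in remaining if i not in front]
        return centers[:P]  # candidates are perturbed around these centers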
Abstract:
We present an analysis of daily extreme precipitation events for the extended winter season (October–March) at 20 Mediterranean coastal sites covering the period 1950–2006. The heavy-tailed behaviour of precipitation extremes and estimated return levels, including associated uncertainties, are derived with a procedure based on the Generalized Pareto Distribution, in combination with recently developed methods. Precipitation extremes make an important contribution to seasonal totals (approximately 60% for all series). Three stations (one in the western Mediterranean and the others in the eastern basin) have a 5-year return level above 100 mm, while the lowest value (estimated for two Italian series) is equal to 58 mm. As for the 50-year return level, an Italian station (Genoa) has the highest value of 264 mm, while the other values range from 82 to 200 mm. Furthermore, six series (from stations located in France, Italy, Greece, and Cyprus) show a significant negative tendency in the probability of observing an extreme event. The relationship between extreme precipitation events and the large-scale atmospheric circulation in the upper, mid and low troposphere is investigated using NCEP/NCAR reanalysis data. A 2-step classification procedure identifies three significant anomaly patterns for both the western-central and the eastern part of the Mediterranean basin. In the western Mediterranean, the anomalous southwesterly surface-to-mid-tropospheric flow is connected with enhanced moisture transport from the Atlantic. During ≥5-year return level events, the subtropical jet stream axis is aligned with the African coastline and interacts with the eddy-driven jet stream. This is connected with enhanced large-scale ascending motions and instability, and leads to the development of severe precipitation events. For eastern Mediterranean extreme precipitation events, the identified anomaly patterns suggest warm air advection connected with anomalous ascent motions and an increase of the low- to mid-tropospheric moisture. Furthermore, the jet stream position during ≥5-year return level events places the eastern basin in a divergence area, where ascent motions are favoured. Our results contribute to an improved understanding of daily precipitation extremes in the cold season and of the associated large-scale atmospheric features.
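For illustration, the N-year return level implied by a fitted Generalized Pareto Distribution has a closed form; the numbers below are invented, and in practice the parameters would be fitted to threshold exceedances (e.g. with scipy.stats.genpareto):

    import numpy as np

    def gpd_return_level(u, sigma, xi, zeta_u, n_per_year, N):
        # z_N = u + (sigma/xi) * ((N * n_per_year * zeta_u)**xi - 1), where
        # u is the threshold, zeta_u the exceedance probability, and
        # n_per_year the observations per season; xi -> 0 gives the log form.
        m = N * n_per_year
        if abs(xi) < 1e-9:
            return u + sigma * np.log(m * zeta_u)
        return u + (sigma / xi) * ((m * zeta_u) ** xi - 1.0)

    # Hypothetical daily winter series: 182 days per season, 5% exceedances.
    print(gpd_return_level(u=30.0, sigma=12.0, xi=0.1, zeta_u=0.05,
                           n_per_year=182, N=50))  # ~131 mm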
Abstract:
The concept of elementary vector is generalised to the case where the steady-state space of the metabolic network is not a flux cone but a general polyhedron, due to additional inhomogeneous constraints on the flows through some of the reactions. On the one hand, this allows one to selectively enumerate elementary modes that satisfy certain optimality criteria, which can yield a large computational gain compared with full enumeration. On the other hand, in contrast to the single optimum found by executing a linear program, it enables a comprehensive description of the set of alternate optima often encountered in flux balance analysis. The concepts are illustrated on a metabolic network model of human cardiac mitochondria.
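A toy illustration of the alternate-optima issue in flux balance analysis: a single linear program returns one point on the optimal face, although many flux distributions can be equally optimal (the three-reaction network and its bounds are invented):

    import numpy as np
    from scipy.optimize import linprog

    # Toy network: two parallel routes v1, v2 feed one output v3.
    # Steady state: v1 + v2 - v3 = 0; maximise v3 (linprog minimises).
    A_eq = np.array([[1.0, 1.0, -1.0]])
    b_eq = np.array([0.0])
    bounds = [(0, 10), (0, 10), (0, 10)]
    res = linprog(c=[0, 0, -1], A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    print(res.x)  # one optimum; every (v1, v2) with v1 + v2 = 10 and
                  # 0 <= v1, v2 <= 10 achieves the same objective value.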
Abstract:
Aims: Phenotypic optimality models neglect genetics. However, especially when heterozygous genotypes are fittest, evolving allele, genotype and phenotype frequencies may not correspond to predicted optima. This was not previously addressed for organisms with complex life histories. Methods: We therefore modelled the evolution of a fitness-relevant trait of clonal plants, stolon internode length. We explored the likely case of an asymmetric unimodal fitness profile with three model types. In constant selection models (CSMs), which are gametic but not spatially explicit, evolving allele frequencies in the one-locus and five-loci cases did not correspond to the optimum stolon internode length predicted by the spatially explicit, but not gametic, phenotypic model. This deviation was due to the asymmetry of the fitness profile. Gametic, spatially explicit individual-based (SEIB) modelling allowed us to relax the CSM assumptions of constant selection with exclusively sexual reproduction. Important findings: For entirely vegetative or entirely sexual reproduction, predictions of the gametic SEIB model were close to those of the spatially explicit, but not gametic, phenotypic model, but for mixed modes of reproduction they approximated those of the gametic, but not spatially explicit, CSMs. Thus, in contrast to gametic SEIB models, phenotypic models and, especially for few loci, also CSMs can be very misleading. We conclude that the evolution of traits governed by few quantitative trait loci is hardly predictable by simple models, that genetic algorithms aiming at technical optimization may actually miss the optimum, and that selection may lead to loci with smaller effects in derived compared with ancestral lines.
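A toy constant-selection iteration for one locus with heterozygote advantage, illustrating how an asymmetric fitness profile pulls the evolving allele frequency away from the symmetric value (the fitness values are invented, not the paper's):

    # Genotype fitnesses for AA, Aa, aa: the heterozygote is fittest and
    # the profile is asymmetric (selection coefficients s=0.10, t=0.20).
    w_AA, w_Aa, w_aa = 0.90, 1.00, 0.80

    p = 0.5  # frequency of allele A
    for _ in range(500):
        w_bar = p*p*w_AA + 2*p*(1-p)*w_Aa + (1-p)*(1-p)*w_aa
        p = (p*p*w_AA + p*(1-p)*w_Aa) / w_bar  # standard selection recursion

    # Analytic equilibrium under overdominance is t/(s+t) = 0.2/0.3.
    print(round(p, 3))  # ~0.667, not 0.5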
Resource-allocation capabilities of commercial project management software. An experimental analysis
Abstract:
When project managers determine schedules for resource-constrained projects, they commonly use commercial project management software packages. Which resource-allocation methods are implemented in these packages is proprietary information. The resource-allocation problem is in general computationally difficult to solve to optimality. Hence, the question arises whether and how various project management software packages differ in quality with respect to their resource-allocation capabilities. None of the few existing papers on this subject uses a sizeable data set and recent versions of common software packages. We experimentally analyze the resource-allocation capabilities of Acos Plus.1, AdeptTracker Professional, CS Project Professional, Microsoft Office Project 2007, Primavera P6, Sciforma PS8, and Turbo Project Professional. Our analysis is based on 1560 instances of the precedence- and resource-constrained project scheduling problem RCPSP. The experiment shows that using the resource-allocation feature of these packages may lead to a project duration increase of almost 115% above the best known feasible schedule. The increase grows with increasing resource scarcity and with an increasing number of activities. We investigate the impact of different complexity scenarios and priority rules on the project duration obtained by the software packages. We provide a decision table to support managers in selecting a software package and a priority rule.
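For clarity on the headline figure, the reported deviation is the relative increase of a package's schedule over the best known makespan; the numbers here are hypothetical:

    best_known = 80   # best known feasible makespan (time units)
    package = 172     # makespan produced by a package's allocation feature
    print(f"{100 * (package - best_known) / best_known:.0f}%")  # 115%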
Abstract:
In process industries, make-and-pack production is used to produce food and beverages, chemicals, and metal products, among others. This type of production process allows the fabrication of a wide range of products in relatively small amounts using the same equipment. In this article, we consider a real-world production process (cf. Honkomp et al. 2000. The curse of reality – why process scheduling optimization problems are difficult in practice. Computers & Chemical Engineering, 24, 323–328.) comprising sequence-dependent changeover times, multipurpose storage units with limited capacities, quarantine times, batch splitting, partial equipment connectivity, and transfer times. The planning problem consists of computing a production schedule such that a given demand for packed products is fulfilled, all technological constraints are satisfied, and the production makespan is minimised. None of the models in the literature covers all of the technological constraints that occur in such make-and-pack production processes. To close this gap, we develop an efficient mixed-integer linear programming model that is based on a continuous time domain and general-precedence variables. We propose novel types of symmetry-breaking constraints and a preprocessing procedure to improve the model performance. In an experimental analysis, we show that small- and moderate-sized instances can be solved to optimality within short CPU times.
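A sketch of the general-precedence sequencing constraints on which such continuous-time models typically rest (the notation is illustrative, not taken from the paper): with start times s_i, processing times p_i, sequence-dependent changeover times c_{ij}, a sufficiently large constant M, and binary variables y_{ij} = 1 if batch i precedes batch j on a shared unit,

    \begin{aligned}
    s_j &\ge s_i + p_i + c_{ij} - M\,(1 - y_{ij}),\\
    s_i &\ge s_j + p_j + c_{ji} - M\,y_{ij},
    \end{aligned}
    \qquad y_{ij} \in \{0, 1\}.

For each pair of batches, one constraint is enforced and the other is relaxed by the big-M term, so the formulation orders batches directly instead of discretising time.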
Abstract:
Greedy routing can be used in mobile ad-hoc networks as a geographic routing protocol. This paper proposes to use greedy routing also in overlay networks, by positioning overlay nodes in a multi-dimensional Euclidean space. Greedy routing can only be applied when a routing decision makes progress towards the final destination. Our proposed overlay network is built such that there is always progress at each forwarding node. This is achieved by constructing at each node a so-called nearest neighbor convex set (NNCS). NNCSs can be used for various applications such as multicast routing, service discovery and Quality-of-Service routing. NNCS has been compared with Pastry, another topology-aware overlay network. NNCS achieves superior relative path stretch, indicating that its paths are close to optimal.
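A compact sketch of greedy forwarding in a Euclidean overlay (the coordinates and neighbor tables are invented, and the NNCS construction that rules out dead ends is not shown):

    import math

    def greedy_route(coords, neighbors, src, dst):
        # Repeatedly forward to the neighbor closest to the destination;
        # with NNCS neighbor sets, progress is guaranteed at every hop.
        path, node = [src], src
        while node != dst:
            nxt = min(neighbors[node],
                      key=lambda n: math.dist(coords[n], coords[dst]))
            if math.dist(coords[nxt], coords[dst]) >= math.dist(coords[node], coords[dst]):
                raise RuntimeError("dead end: no neighbor makes progress")
            path.append(nxt)
            node = nxt
        return path

    coords = {0: (0, 0), 1: (1, 0), 2: (2, 0)}
    neighbors = {0: [1], 1: [0, 2], 2: [1]}
    print(greedy_route(coords, neighbors, 0, 2))  # [0, 1, 2]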
Abstract:
The Solver Add-in of Microsoft Excel is widely used in courses on Operations Research and in industrial applications. Since the 2010 version of Microsoft Excel, the Solver Add-in has included a so-called evolutionary solver. We analyze how this metaheuristic can be applied to the resource-constrained project scheduling problem (RCPSP). We present an implementation of a schedule-generation scheme in a spreadsheet, which, combined with the evolutionary solver, can be used to devise good feasible schedules. Our computational results indicate that with this approach, non-trivial instances of the RCPSP can be solved to optimality or near-optimality.
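A bare-bones serial schedule-generation scheme of the kind such a spreadsheet implementation decodes a priority list with (single resource, invented instance data; the priority list must respect precedences):

    def serial_sgs(priority_list, dur, dem, preds, capacity, horizon=1000):
        # Schedule activities in list order at the earliest precedence-
        # and resource-feasible start time.
        usage = [0] * horizon
        start, finish = {}, {}
        for j in priority_list:
            t = max((finish[p] for p in preds[j]), default=0)
            while any(usage[t + k] + dem[j] > capacity for k in range(dur[j])):
                t += 1
            start[j], finish[j] = t, t + dur[j]
            for k in range(dur[j]):
                usage[t + k] += dem[j]
        return start, max(finish.values())

    dur, dem = {1: 2, 2: 3, 3: 2}, {1: 2, 2: 2, 3: 3}
    preds = {1: [], 2: [], 3: [1]}
    print(serial_sgs([1, 2, 3], dur, dem, preds, capacity=4))
    # ({1: 0, 2: 0, 3: 3}, 5) -- the evolutionary solver searches over lists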
Abstract:
We prove exponential rates of convergence of hp-version discontinuous Galerkin (dG) interior penalty finite element methods for second-order elliptic problems with mixed Dirichlet-Neumann boundary conditions in axiparallel polyhedra. The dG discretizations are based on axiparallel, σ-geometric anisotropic meshes of mapped hexahedra and anisotropic polynomial degree distributions of μ-bounded variation. We consider piecewise analytic solutions which belong to a larger analytic class than those for the pure Dirichlet problem considered in [11, 12]. For such solutions, we establish the exponential convergence of a nonconforming dG interpolant given by local L²-projections on elements away from corners and edges, and by suitable local low-order quasi-interpolants on elements at corners and edges. Due to the appearance of non-homogeneous, weighted norms in the analytic regularity class, new arguments are introduced to bound the dG consistency errors in elements abutting on Neumann edges. The non-homogeneous norms also entail some crucial modifications of the stability and quasi-optimality proofs, as well as of the analysis for the anisotropic interpolation operators. The exponential convergence bounds for the dG interpolant constructed in this paper generalize the results of [11, 12] for the pure Dirichlet case.
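Such exponential convergence bounds typically take the generic form (the 1/5 exponent is the rate usually established for hp-dG methods on three-dimensional polyhedra, as in [11, 12]; C, b > 0 are constants independent of N):

    \| u - u_{\mathrm{DG}} \|_{\mathrm{DG}} \le C \exp\bigl(-b N^{1/5}\bigr),

where N denotes the number of degrees of freedom of the hp-dG space.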
Abstract:
We construct an empirically informed computational model of fiscal federalism, testing whether horizontal or vertical equalization can solve the fiscal externality problem in an environment in which heterogeneous agents can move and vote. The model expands on the literature by considering the case of progressive local taxation. Although the consequences of progressive taxation under fiscal federalism are well understood, they have not been studied in a context with tax equalization, despite its widespread implementation. The model also expands on the literature by comparing the standard median-voter model with a realistic alternative voting mechanism. We find that fiscal federalism with progressive taxation naturally leads to segregation as well as inefficient and inequitable public goods provision, while the alternative voting mechanism generates more efficient, though less equitable, public goods provision. Equalization policy, under both types of voting, is largely undermined by micro-actors' choices. For this reason, the model does not find the anticipated effects of vertical equalization discouraging public goods spending among wealthy jurisdictions and of horizontal equalization encouraging it among poor jurisdictions. Finally, we identify two optimal scenarios, superior to both complete centralization and complete devolution. These scenarios are not only Pareto optimal but also conform to a Rawlsian view of justice, offering the best possible outcome for the worst-off. Despite offering the best possible outcomes, both scenarios still entail significant economic segregation and inequitable public goods provision. Under the optimal scenarios, agents shift the bulk of revenue collection to the federal government, with few jurisdictions maintaining a small local tax.
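A stripped-down sketch of the move-and-vote loop such models are built around (two jurisdictions, a stylised progressive levy, median-voter rate setting, and per-capita public goods; every numeric detail is invented for illustration):

    import random
    import statistics

    random.seed(1)
    incomes = [random.lognormvariate(0.0, 0.5) for _ in range(200)]
    home = [random.randrange(2) for _ in incomes]  # current jurisdiction

    def tax(y, rate, prog=0.3):
        # Stylised progressive levy: the effective rate rises with income.
        return rate * y ** (1.0 + prog)

    rates, goods = [0.1, 0.1], [0.0, 0.0]
    for _ in range(50):
        for j in (0, 1):
            res = [y for y, h in zip(incomes, home) if h == j] or [1.0]
            rates[j] = 0.3 / (1.0 + statistics.median(res))  # median voter
            goods[j] = sum(tax(y, rates[j]) for y in res) / len(res)
        # Each agent moves to the jurisdiction offering the best
        # public-good-minus-tax bundle for its own income.
        home = [max((0, 1), key=lambda j, y=y: goods[j] - tax(y, rates[j]))
                for y in incomes]

    for j in (0, 1):
        res = [y for y, h in zip(incomes, home) if h == j]
        print(j, len(res), round(statistics.median(res), 2) if res else None)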