42 results for Hard combinatorial scheduling
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
The set covering problem is an NP-hard combinatorial optimization problem that arises in applications ranging from crew scheduling in airlines to driver scheduling in public mass transport. In this paper we analyze search space characteristics of a widely used set of benchmark instances through an analysis of the fitness-distance correlation. This analysis shows that there exist several classes of set covering instances that have a largely different behavior. For instances with high fitness-distance correlation, we propose new ways of generating core problems and analyze the performance of algorithms exploiting these core problems.
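As an illustration of the core-problem idea (a minimal sketch, not the paper's own procedure), the following keeps, for each row, only the k columns with the best cost-per-coverage ratio and then runs the classic greedy heuristic on the reduced instance; the scoring rule, the parameter k, and the data layout are assumptions made for the example.

    # Hypothetical data layout: costs[j] = cost of column j, cols[j] = set of rows
    # covered by column j; the instance is assumed to be feasible.
    def build_core(costs, cols, n_rows, k=5):
        score = {j: costs[j] / len(cols[j]) for j in cols}
        core = set()
        for i in range(n_rows):
            covering = sorted((j for j in cols if i in cols[j]), key=lambda j: score[j])
            core.update(covering[:k])            # keep the k best columns per row
        return core

    def greedy_cover(costs, cols, n_rows, candidates=None):
        candidates = set(cols) if candidates is None else set(candidates)
        uncovered, solution = set(range(n_rows)), []
        while uncovered:
            # pick the column with the best cost per newly covered row
            j = min((j for j in candidates if cols[j] & uncovered),
                    key=lambda j: costs[j] / len(cols[j] & uncovered))
            solution.append(j)
            uncovered -= cols[j]
        return solution

    # e.g. core = build_core(costs, cols, n_rows); greedy_cover(costs, cols, n_rows, core)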
Abstract:
The standard one-machine scheduling problem consists in scheduling a set of jobs on one machine that can handle only one job at a time, minimizing the maximum lateness. Each job is available for processing at its release date, requires a known processing time and, after finishing processing, is delivered after a certain time. There can also exist precedence constraints between pairs of jobs, requiring that the first job be completed before the second job can start. An extension of this problem consists in assigning a time interval between the processing of the jobs associated with the precedence constraints, known as finish-start time-lags. In the presence of these constraints, the problem is NP-hard even if preemption is allowed. In this work, we consider a special case of the one-machine preemptive scheduling problem with time-lags, where the time-lags have a chain form, and propose a polynomial algorithm to solve it. The algorithm consists of a polynomial number of calls to the preemptive version of the Longest Tail Heuristic. One application of the method is to obtain lower bounds for NP-hard one-machine and job-shop scheduling problems. We present some computational results of this application, followed by some conclusions.
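A minimal sketch of the preemptive longest-tail (Schrage) rule that the abstract builds on: at every moment, run the available job with the largest delivery time q, preempting whenever a newly released job has a larger tail. The chain-form time-lags and the repeated calls that make up the authors' polynomial algorithm are not modeled here.

    import heapq

    def preemptive_longest_tail(jobs):
        """jobs: list of (release r, processing p, delivery tail q); returns max C_j + q_j."""
        order = sorted(range(len(jobs)), key=lambda j: jobs[j][0])   # by release date
        ready = []                        # max-heap on tail: (-q, job, remaining time)
        t, i, best, n = 0, 0, 0, len(jobs)
        while i < n or ready:
            if not ready:                 # machine idle: jump to the next release
                t = max(t, jobs[order[i]][0])
            while i < n and jobs[order[i]][0] <= t:
                r, p, q = jobs[order[i]]
                heapq.heappush(ready, (-q, order[i], p))
                i += 1
            neg_q, j, rem = heapq.heappop(ready)       # largest tail first
            horizon = jobs[order[i]][0] if i < n else float("inf")
            run = min(rem, horizon - t)                # run until done or next release
            t += run
            if rem > run:
                heapq.heappush(ready, (neg_q, j, rem - run))   # preempted
            else:
                best = max(best, t - neg_q)            # completion time + delivery time
        return best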
Abstract:
We present new metaheuristics for solving real crew scheduling problems in a public transportation bus company. Since the crews of these companies are drivers, we designate the problem as the bus-driver scheduling problem. Crew scheduling problems are well known and several mathematical programming based techniques have been proposed to solve them, in particular using the set-covering formulation. However, in practice, there is a need for improvement in terms of computational efficiency and the capacity to solve large-scale instances. Moreover, the real bus-driver scheduling problems that we consider can present variant aspects of set covering, such as a different objective function, implying that alternative solution methods have to be developed. We propose metaheuristics based on the following approaches: GRASP (greedy randomized adaptive search procedure), tabu search and genetic algorithms. These metaheuristics also present some innovative features based on the structure of the crew scheduling problem, which guide the search efficiently and enable them to find good solutions. Some of these new features can also be applied in the development of heuristics for other combinatorial optimization problems. A summary of computational results with real-data problems is presented.
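As a rough illustration of the GRASP approach mentioned above (a sketch, not the authors' implementation), the following skeleton combines a randomized greedy construction with a restricted candidate list and a simple redundancy-removal local search on a set-covering-style instance; the alpha parameter and data layout are assumptions.

    import random

    def grasp(costs, cols, n_rows, iters=100, alpha=0.3):
        best, best_cost = None, float("inf")
        for _ in range(iters):
            sol = local_search(construct(costs, cols, n_rows, alpha), costs, cols, n_rows)
            c = sum(costs[j] for j in sol)
            if c < best_cost:
                best, best_cost = sol, c
        return best, best_cost

    def construct(costs, cols, n_rows, alpha):
        uncovered, sol = set(range(n_rows)), []
        while uncovered:
            cand = [(costs[j] / len(cols[j] & uncovered), j)
                    for j in cols if cols[j] & uncovered]
            lo, hi = min(cand)[0], max(cand)[0]
            rcl = [j for v, j in cand if v <= lo + alpha * (hi - lo)]
            j = random.choice(rcl)                     # randomized greedy choice
            sol.append(j)
            uncovered -= cols[j]
        return sol

    def local_search(sol, costs, cols, n_rows):
        # drop columns that are redundant (all their rows stay covered without them)
        for j in sorted(sol, key=lambda j: -costs[j]):
            rest = [k for k in sol if k != j]
            if rest and set.union(*(cols[k] for k in rest)) >= set(range(n_rows)):
                sol = rest
        return sol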
Abstract:
Sudoku problems are among the best-known and most enjoyed pastimes, with a never diminishing popularity, but for the last few years these problems have gone from an entertainment to an interesting research area, and in fact an area that is interesting in two respects. On the one side, Sudoku problems, being a variant of Gerechte Designs and Latin Squares, are being actively used for experimental design, as in [8, 44, 39, 9]. On the other hand, Sudoku problems, as simple as they seem, are really hard structured combinatorial search problems, and thanks to their characteristics and behavior they can be used as benchmark problems for refining and testing solving algorithms and approaches. Also, thanks to their high inner structure, their study can contribute more than studies of random problems to our goal of solving real-world problems and applications and understanding the problem characteristics that make them hard to solve. In this work we use two techniques for solving and modeling Sudoku problems, namely the Constraint Satisfaction Problem (CSP) and Satisfiability Problem (SAT) approaches. To this end we define the Generalized Sudoku Problem (GSP), where regions can be of rectangular shape, problems can be of any order, and solution existence is not guaranteed. With respect to the worst-case complexity, we prove that GSP with block regions of m rows and n columns with m = n is NP-complete. For studying the empirical hardness of GSP, we define a series of instance generators that differ in the balancing level they guarantee between the constraints of the problem, by finely controlling how the holes are distributed in the cells of the GSP. Experimentally, we show that the more balanced the constraints are, the higher the complexity of solving the GSP instances, and that GSP is harder than the Quasigroup Completion Problem (QCP), a problem generalized by GSP. Finally, we provide a study of the correlation between backbone variables – variables with the same value in all the solutions of an instance – and the hardness of GSP.
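A minimal sketch, using the textbook SAT encoding rather than necessarily the one in the paper, of a Generalized Sudoku with regions of m rows by n columns (side s = m*n): Boolean variable x(r,c,v) is true when cell (r,c) takes value v, and the clauses below cover the cell, row, column and region constraints plus the pre-filled cells, in a form ready for any DIMACS-style SAT solver.

    from itertools import combinations

    def gsp_clauses(m, n, givens=()):
        s = m * n                                    # side length; values are 0..s-1
        var = lambda r, c, v: r * s * s + c * s + v + 1       # 1-based DIMACS variable
        clauses = []
        for r in range(s):
            for c in range(s):
                cell = [var(r, c, v) for v in range(s)]
                clauses.append(cell)                 # each cell holds at least one value
                clauses += [[-a, -b] for a, b in combinations(cell, 2)]    # at most one
        groups = [[(r, c) for c in range(s)] for r in range(s)]            # rows
        groups += [[(r, c) for r in range(s)] for c in range(s)]           # columns
        groups += [[(br + i, bc + j) for i in range(m) for j in range(n)]  # m-by-n regions
                   for br in range(0, s, m) for bc in range(0, s, n)]
        for g in groups:                             # no value repeated within a group
            for v in range(s):
                clauses += [[-var(r1, c1, v), -var(r2, c2, v)]
                            for (r1, c1), (r2, c2) in combinations(g, 2)]
        clauses += [[var(r, c, v)] for r, c, v in givens]      # pre-filled cells
        return clauses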
Abstract:
We discuss metric and combinatorial properties of Thompson's group T, such as the normal forms for elements and uniqueness of tree pair diagrams. We relate these properties to those of Thompson's group F when possible, and highlight combinatorial differences between the two groups. We define a set of unique normal forms for elements of T arising from minimal factorizations of elements into convenient pieces. We show that the number of carets in a reduced representative of T estimates the word length, that F is undistorted in T, and that cyclic subgroups of T are undistorted. We show that every element of T has a power which is conjugate to an element of F and describe how to recognize torsion elements in T.
Abstract:
Report for the scientific sojourn at the University of California at Berkeley from September 2007 to February 2008. Globalization, combined with the success of containerization, has brought about tremendous increases in the transportation of containers across the world. This leads to increasing container ship sizes, which place higher demands on seaport container terminals and their equipment. In this situation, the success of container terminals resides in a fast transhipment process with reduced costs. For these reasons it is necessary to optimize the terminal's processes. There are three main logistic processes in a seaport container terminal: loading and unloading of container ships, storage, and reception/delivery of containers from/to the hinterland. Moreover, there is an additional process that ensures the interconnection between the previous logistic activities: the internal transport subsystem. The aim of this paper is to optimize the internal transport cycle in a marine container terminal managed by straddle carriers, one of the most widely used container transfer technologies. Three subsystems are analyzed in detail: landside transportation, storage of containers in the yard, and quayside transportation. The conflicts and decisions that arise in these three subsystems are analytically investigated, and optimization algorithms are proposed. Moreover, simulation has been applied to TCB (Barcelona Container Terminal) to test these algorithms and compare different straddle carrier operation strategies, such as single cycle versus double cycle, and different sizes of the handling equipment fleet. The simulation model is explained in detail and the main decision-making algorithms from the model are presented and formulated.
Abstract:
Report for the scientific sojourn carried out at the Max Planck Institute of Molecular Physiology, Germany, from 2006 to 2008. The work carried out during this postdoctoral stage was focused on two different projects: firstly, the identification of D-Ala D-Ala inhibitors, and secondly, the development of new synthetic approaches to obtain lipidated peptides and proteins and the use of these lipidated proteins in biological and biophysical studies. In the first project, new D-Ala D-Ala inhibitors were identified by using structural alignments of the ATP binding sites of the bacterial ligase DDl and protein and lipid kinases in complex with ATP analogs. We tested a series of commercially available kinase inhibitors and found LFM-A13 and Tyrphostine derivatives to inhibit DDl enzyme activity. Based on the initial screening results we synthesized a series of malononitrilamide and salicylamide derivatives and were able to confirm the validity of these scaffolds as inhibitors of DDl. From this investigation we gained a better understanding of the structural requirements and limitations necessary for the preparation of ATP-competitive DDl inhibitors. The compounds in this study may serve as starting points for the development of bi-substrate inhibitors that incorporate both an ATP-competitive and a substrate-competitive moiety. Bisubstrate inhibitors that block the ATP and D-Ala binding sites should exhibit enhanced selectivity and potency profiles by preferentially inhibiting DDl over kinases. In the second project, an optimized synthesis for the alkylation of cysteines using the thiol-ene reaction was established. This new protocol allowed us to obtain large amounts of hexadecylated cysteine, which was required for the synthesis of differently lipidated peptides. Afterwards, the synthesis of various N-Ras peptides bearing different lipid anchors was performed and the peptides were ligated to a truncated N-Ras protein. The influence of these differently lipidated N-Ras proteins on the partitioning and association of N-Ras in model membrane subdomains was studied using Atomic Force Microscopy.
Abstract:
This paper discusses the use of probabilistic or randomized algorithms for solving combinatorial optimization problems. Our approach employs non-uniform probability distributions to add a biased random behavior to classical heuristics so a large set of alternative good solutions can be quickly obtained in a natural way and without complex configuration processes. This procedure is especially useful in problems where properties such as non-smoothness or non-convexity lead to a highly irregular solution space, for which the traditional optimization methods, both of exact and approximate nature, may fail to reach their full potential. The results obtained are promising enough to suggest that randomizing classical heuristics is a powerful method that can be successfully applied in a variety of cases.
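A minimal sketch of the biased-randomization idea described above: instead of always taking the greedy-best candidate, a position in the cost-sorted candidate list is drawn from a skewed distribution, so repeated runs of a classical heuristic yield many distinct, still greedy-like solutions. The geometric distribution and the parameter beta are assumptions, not the paper's exact choices.

    import math, random

    def biased_pick(sorted_candidates, beta=0.3):
        """Return one candidate, strongly favouring the front of the cost-sorted list."""
        u = 1.0 - random.random()                    # u in (0, 1]
        k = int(math.log(u) / math.log(1.0 - beta))  # geometric quantile
        return sorted_candidates[min(k, len(sorted_candidates) - 1)]

    # e.g. replace "take the cheapest element" in a classical greedy heuristic by
    # biased_pick(elements_sorted_by_cost) and rerun the heuristic many times.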
Abstract:
"Vegeu el resum a l'inici del document del fitxer adjunt."
Abstract:
For the execution of scientific applications, different methods have been proposed to dynamically provide execution environments that hide the complexity of the underlying distributed and heterogeneous infrastructures. Recently, virtualization has emerged as a promising technology to provide such environments. Virtualization is a technology that abstracts away the details of physical hardware and provides virtualized resources for high-level scientific applications. Virtualization offers a cost-effective and flexible way to use and manage computing resources. Such an abstraction is appealing in Grid computing and Cloud computing for better matching jobs (applications) to computational resources. This work applies the virtualization concept to the Condor dynamic resource management system by using the Condor Virtual Universe to harvest the existing virtual computing resources to their maximum utility. It allows existing computing resources to be dynamically provisioned at run-time by users based on application requirements, instead of statically at design-time, thereby laying the basis for efficient use of the available resources.
Abstract:
This paper analyzes a spatial model of political competition between two policy-motivated parties in hard times of crisis. Hard times are modeled in terms of policy-making costs carried by a newly elected party. The results predict policy divergence in equilibrium. If the ideological preferences of parties are quite diverse and extreme, there is a unique equilibrium in which the parties announce symmetric platforms and each party wins with probability one half. If one party is extreme while the other is more moderate, there is a unique equilibrium in which the parties announce asymmetric platforms. If the preferred policies of the parties are not very distinct, there are two equilibria with asymmetric platforms. An important property of equilibrium with asymmetric platforms is that a winning party necessarily announces its most preferred policy as a platform. JEL classification: D72. Keywords: Spatial model; Political competition; Two-party system; Policy-motivated parties; Hard times; Crisis.
Abstract:
In this paper, we propose a methodology to determine the most efficient and least costly way of performing crew pairing optimization. The methodology is based on algorithmic optimization, implemented in the Eclipse open-source IDE using the Java programming language, to solve crew scheduling problems.
Abstract:
Business process designers take into account the resources that the processes will need, but, due to the variable cost of certain parameters (such as energy) or other circumstances, this scheduling must be done at business process enactment time. In this report we formalize the energy-aware resource cost, including time- and usage-dependent rates. We also present a constraint programming approach and an auction-based approach for solving this problem, together with a comparison of the two approaches and of the proposed algorithms for solving them.
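A toy sketch of the constraint programming side, using Google OR-Tools CP-SAT as an assumed solver (the report does not name one): unit-length tasks with given usage levels are assigned to distinct time slots whose energy rates vary over time, and the total energy cost is minimized. The report's full cost formalization and the auction-based alternative are not reproduced here.

    from ortools.sat.python import cp_model

    def schedule(rates, usages):
        """rates[t]: energy price in slot t; usages[i]: energy use of unit-length task i."""
        model = cp_model.CpModel()
        n, horizon = len(usages), len(rates)           # assumes n <= horizon
        slot = [model.NewIntVar(0, horizon - 1, f"slot{i}") for i in range(n)]
        model.AddAllDifferent(slot)                    # one task per time slot
        cost_terms = []
        for i in range(n):
            rate_i = model.NewIntVar(0, max(rates), f"rate{i}")
            model.AddElement(slot[i], rates, rate_i)   # rate_i == rates[slot[i]]
            cost_terms.append(rate_i * usages[i])
        model.Minimize(sum(cost_terms))
        solver = cp_model.CpSolver()
        if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
            return [solver.Value(s) for s in slot]
        return None

    # e.g. schedule(rates=[5, 2, 7, 3], usages=[2, 1, 3]) places heavy tasks in cheap slots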
Abstract:
We present a polyhedral framework for establishing general structural properties on optimal solutions of stochastic scheduling problems, where multiple job classes vie for service resources: the existence of an optimal priority policy in a given family, characterized by a greedoid (whose feasible class subsets may receive higher priority), where optimal priorities are determined by class-ranking indices, under restricted linear performance objectives (partial indexability). This framework extends that of Bertsimas and Niño-Mora (1996), which explained the optimality of priority-index policies under all linear objectives (general indexability). We show that, if performance measures satisfy partial conservation laws (with respect to the greedoid), which extend previous generalized conservation laws, then the problem admits a strong LP relaxation over a so-called extended greedoid polytope, which has strong structural and algorithmic properties. We present an adaptive-greedy algorithm (which extends Klimov's) taking as input the linear objective coefficients, which (1) determines whether the optimal LP solution is achievable by a policy in the given family; and (2) if so, computes a set of class-ranking indices that characterize optimal priority policies in the family. In the special case of project scheduling, we show that, under additional conditions, the optimal indices can be computed separately for each project (index decomposition). We further apply the framework to the important restless bandit model (two-action Markov decision chains), obtaining new index policies that extend Whittle's (1988), and simple sufficient conditions for their validity. These results highlight the power of polyhedral methods (the so-called achievable region approach) in dynamic and stochastic optimization.