958 results for MIP Mathematical Programming Job Shop Scheduling
Abstract:
Access to healthcare is a major problem in which patients are deprived of timely admission to care. Poor access has resulted in significant but avoidable healthcare costs, poor quality of healthcare, and deterioration in general public health. Advanced Access is a simple and direct approach to appointment scheduling in which the majority of a clinic's appointment slots are kept open in order to provide access for immediate or same-day healthcare needs, thereby alleviating the problem of poor access to healthcare. This research formulates a non-linear discrete stochastic mathematical model of the Advanced Access appointment scheduling policy. The model's objective is to maximize the expected profit of the clinic subject to constraints on the minimum access to healthcare provided. Patient behavior is characterized by probabilities for no-shows, balking, and related patient choices. Structural properties of the model are analyzed to determine whether Advanced Access patient scheduling is feasible. To solve the complex combinatorial optimization problem, a heuristic that combines a greedy construction algorithm with a neighborhood improvement search was developed. The model and the heuristic were used to evaluate the Advanced Access appointment policy against existing policies. Trade-offs between profit and access to healthcare are established, and a sensitivity analysis of the input parameters was performed. The trade-off curve was observed to be concave, implying that there exists an access level at which the clinic can be operated at optimal profit. The results also show that, in many scenarios, clinics can improve access without any decrease in profit by switching from an existing scheduling policy to the Advanced Access policy. Further, the success of the Advanced Access policy in providing improved access and/or profit depends on the expected value of demand, the variation in demand, and the ratio of demand for same-day versus advance appointments. The contributions of the dissertation are a model of Advanced Access patient scheduling, a heuristic to solve the model, and the use of the model to understand the scheduling policy trade-offs which healthcare clinic managers must make.
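The dissertation's heuristic is not spelled out in the abstract; the following is a minimal Python sketch of the two-phase idea it names (greedy construction followed by neighborhood improvement). The slot types, and the `profit_fn` standing in for the clinic's expected-profit evaluation with no-show and balking probabilities, are illustrative assumptions.

```python
import random

def construct_greedy(n_slots, demand, profit_fn):
    """Greedy construction: designate each slot as same-day or advance,
    picking whichever choice yields the larger expected profit so far."""
    schedule = []
    for s in range(n_slots):
        best = max(("same_day", "advance"),
                   key=lambda kind: profit_fn(schedule + [(s, kind)], demand))
        schedule.append((s, best))
    return schedule

def improve(schedule, demand, profit_fn, iters=1000):
    """Neighborhood improvement: flip the designation of one random slot
    and keep the move whenever expected profit does not decrease."""
    best_profit = profit_fn(schedule, demand)
    for _ in range(iters):
        i = random.randrange(len(schedule))
        s, kind = schedule[i]
        candidate = schedule.copy()
        candidate[i] = (s, "advance" if kind == "same_day" else "same_day")
        p = profit_fn(candidate, demand)
        if p >= best_profit:
            schedule, best_profit = candidate, p
    return schedule, best_profit
```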
Abstract:
This dissertation presents and evaluates a methodology for scheduling medical application workloads in virtualized computing environments. Such environments are being widely adopted by providers of "cloud computing" services. In the context of provisioning resources for medical applications, such environments allow users to deploy applications on distributed computing resources while keeping their data secure. Furthermore, higher-level services that further abstract the infrastructure-related issues can be built on top of such infrastructures. For example, a medical imaging service can allow medical professionals to process their data in the cloud, relieving them of the burden of deploying and managing these resources themselves. In this work, we focus on issues related to scheduling scientific workloads on virtualized environments. We build upon the knowledge base of traditional parallel job scheduling to address the specific case of medical applications while harnessing the benefits afforded by virtualization technology. To this end, we provide the following contributions: (1) an in-depth analysis of the execution characteristics of the target applications when run in virtualized environments; (2) a performance prediction methodology applicable to the target environment; (3) a scheduling algorithm that harnesses application knowledge and virtualization-related benefits to provide strong scheduling performance and quality-of-service guarantees. In the process of addressing these pertinent issues for our target user base (i.e., medical professionals and researchers), we provide insight that benefits a large community of scientific application users in industry and academia. Our execution time prediction and scheduling methodologies are implemented and evaluated on a real system running popular scientific applications. We find that we are able to predict the execution time of a number of these applications with an average error of 15%. Our scheduling methodology, which is tested with medical image processing workloads, is compared to that of two baseline scheduling solutions and we find that it outperforms them in terms of both the number of jobs processed and resource utilization by 20–30%, without violating any deadlines. We conclude that our solution is a viable approach to supporting the computational needs of medical users, even if the cloud computing paradigm is not widely adopted in its current form.
Abstract:
Scheduling optimization is concerned with the optimal allocation of events to time slots. In this paper, we look at one particular example of a scheduling problem: the 2015 Joint Statistical Meetings. We want to assign sessions to time slots so that sessions on similar topics do not overlap, reducing scheduling conflicts. Chapter 1 briefly discusses the motivation for this example as well as the constraints and the optimality criterion. Chapter 2 proposes the use of Latent Dirichlet Allocation (LDA) to identify the topic proportions in each session and describes the fitting of the model. Chapter 3 translates these ideas into a mathematical formulation and introduces a greedy algorithm to minimize conflicts. Chapter 4 demonstrates the improvement in the scheduling achieved with this method.
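The paper's exact conflict measure and constraints are not reproduced in the abstract; below is a minimal sketch, assuming conflict between two sessions is scored as the dot product of their LDA topic-proportion vectors, of a greedy pass that places each session in the slot where it adds the least conflict.

```python
import numpy as np

def greedy_schedule(theta, n_slots):
    """theta: (n_sessions, n_topics) matrix of LDA topic proportions.
    Assign each session to the time slot where it adds the least
    topic-similarity conflict with the sessions already placed there."""
    n_sessions = theta.shape[0]
    slots = [[] for _ in range(n_slots)]
    assignment = np.empty(n_sessions, dtype=int)
    for s in range(n_sessions):
        # Conflict added to a slot = sum of dot products with its members.
        added = [sum(theta[s] @ theta[t] for t in slot) for slot in slots]
        best = int(np.argmin(added))
        slots[best].append(s)
        assignment[s] = best
    return assignment
```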
Abstract:
This paper compares two linear programming (LP) models for shift scheduling in services where homogeneously skilled employees are available at limited times. Although both models are based on set covering approaches, one explicitly matches employees to shifts, while the other imposes this matching implicitly. Each model is used in three forms—one with complete meal break placement flexibility, another with very limited meal break placement flexibility, and a third without meal breaks—to provide initial schedules to a completion/improvement heuristic. The term completion/improvement heuristic is used to describe a construction/improvement heuristic operating on a starting schedule. On 80 test problems varying widely in scheduling flexibility, employee staffing requirements, and employee availability characteristics, all six LP-based procedures generated lower cost schedules than a comparison from-scratch construction/improvement heuristic. This heuristic, which perpetually maintains an explicit matching of employees to shifts, consists of three phases which add, drop, and modify shifts. In terms of schedule cost, schedule generation time, and model size, the procedures based on the implicit model performed better, as a group, than those based on the explicit model. The LP model with complete break placement flexibility and implicit matching of employees to shifts generated schedules costing 6.7% less than those developed by the from-scratch heuristic.
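Both models build on the classical set-covering shift scheduling formulation; as a point of reference, a minimal sketch of that implicit form (symbols illustrative) is:

```latex
\begin{align*}
\min \quad & \sum_{j \in J} c_j\, x_j \\
\text{s.t.} \quad & \sum_{j \in J} a_{tj}\, x_j \;\ge\; r_t \qquad \forall\, t \in T, \\
& x_j \;\ge\; 0 \ \text{and integer} \qquad \forall\, j \in J,
\end{align*}
```

where $x_j$ is the number of employees assigned to shift $j$, $a_{tj} = 1$ if shift $j$ covers period $t$, $r_t$ is the staffing requirement in period $t$, and $c_j$ is the cost of shift $j$. The explicit model in the paper additionally matches individual employees, with their availability windows, to the shifts selected.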
Abstract:
This paper presents an integer programming model for developing optimal shift schedules while allowing extensive flexibility in terms of alternate shift starting times, shift lengths, and break placement. The model combines the work of Moondra (1976) and Bechtold and Jacobs (1990) by implicitly matching meal breaks to implicitly represented shifts. Moreover, the new model extends the work of these authors to enable the scheduling of overtime and the scheduling of rest breaks. We compare the new model to Bechtold and Jacobs' model over a diverse set of 588 test problems. The new model generates optimal solutions more rapidly, solves problems with more shift alternatives, and does not generate schedules violating the operative restrictions on break timing.
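The paper's model is not reproduced in the abstract; the toy sketch below illustrates the implicit break-matching idea in Python/PuLP under simplified, assumed data (two shift types, one break per employee placed inside a shift-specific window). It is a sketch of the general mechanism, not the authors' formulation.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpInteger

T = range(12)                       # planning periods (illustrative)
shifts = {"early": range(0, 8),     # shift -> periods worked
          "late":  range(4, 12)}
windows = {"early": range(3, 5),    # shift -> allowed break periods
           "late":  range(7, 9)}
r = {t: 2 for t in T}               # staffing requirement per period

prob = LpProblem("implicit_breaks", LpMinimize)
x = LpVariable.dicts("shift", shifts, lowBound=0, cat=LpInteger)
b = LpVariable.dicts("brk", [(j, t) for j in shifts for t in windows[j]],
                     lowBound=0, cat=LpInteger)

prob += lpSum(x[j] for j in shifts)          # minimize employees scheduled

for j in shifts:                             # one break per employee,
    prob += lpSum(b[(j, t)] for t in windows[j]) == x[j]   # placed in window

for t in T:                                  # net coverage meets demand
    prob += (lpSum(x[j] for j in shifts if t in shifts[j])
             - lpSum(b[(j, t)] for j in shifts if t in windows[j])) >= r[t]

prob.solve()
```

No break is ever tied to a named employee; because breaks are counted per shift type, any assignment of the chosen break periods to that type's employees is feasible, which is what keeps the implicit model small.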
Abstract:
Fire is a form of uncontrolled combustion which generates heat, smoke, and toxic and irritant gases. All of these products are harmful to man and account for the heavy annual cost of 800 lives and £1,000,000,000 worth of property damage in Britain alone. The new discipline of Fire Safety Engineering has developed as a means of reducing these unacceptable losses. One of the main tools of Fire Safety Engineering is the mathematical model, and over the past 15 years a number of mathematical models have emerged to cater for the needs of this discipline. Part of the difficulty faced by the Fire Safety Engineer is the selection of the most appropriate modelling tool for the job. To make an informed choice it is essential to have a good understanding of the various modelling approaches, their capabilities, and their limitations. In this paper some of the fundamental modelling tools used to predict fire and evacuation are investigated, as are the issues associated with their use and recent developments in modelling technology.
Abstract:
Constraint programming is a powerful technique for solving, among others, large-scale scheduling problems. Scheduling aims to allocate tasks to resources over time. While it executes, a task consumes a resource at a constant rate. Generally, one seeks to optimize an objective function such as the total duration of a schedule. Solving a scheduling problem means finding when each task must start and which resource must execute it. Most scheduling problems are NP-hard. Consequently, no known algorithm can solve them in polynomial time. However, there exist specializations of scheduling problems that are not NP-complete. These problems can be solved in polynomial time using algorithms dedicated to them. Our objective is to explore these scheduling algorithms in several varied contexts. Filtering techniques have evolved considerably in recent years in constraint-based scheduling. The prominence of filtering algorithms rests on their ability to reduce the search tree by excluding domain values that do not participate in any solution to the problem. We propose improvements and present more efficient filtering algorithms for solving classical scheduling problems. In addition, we present adaptations of filtering techniques for the case where tasks can be delayed. We also consider various properties of industrial problems and solve more efficiently problems where the optimization criterion is not necessarily the time at which the last task finishes. For example, we present polynomial-time algorithms for the case where the quantity of resource fluctuates over time, or where the cost of executing a task at time t depends on t.
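The thesis's filtering algorithms are not given in the abstract; as a baseline illustration, here is a minimal sketch of a classic filtering rule for two tasks sharing a unary resource: if task i cannot be scheduled before task j, then j must precede i, so the earliest start time of i can be raised.

```python
from dataclasses import dataclass

@dataclass
class Task:
    est: int  # earliest start time
    lct: int  # latest completion time
    p: int    # processing time

def filter_disjunctive(tasks):
    """Pairwise filtering on a unary resource, run to a fixpoint:
    'i before j' is possible only if est_i + p_i + p_j <= lct_j;
    when it is impossible, j must precede i, so est_i >= est_j + p_j."""
    changed = True
    while changed:
        changed = False
        for i in tasks:
            for j in tasks:
                if i is j:
                    continue
                i_first = i.est + i.p + j.p <= j.lct
                j_first = j.est + j.p + i.p <= i.lct
                if not i_first and not j_first:
                    raise ValueError("no feasible order for this pair")
                if not i_first and i.est < j.est + j.p:
                    i.est = j.est + j.p   # j precedes i
                    changed = True
    return tasks
```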
Abstract:
The aim of this research is twofold: Firstly, to model and solve a complex nurse scheduling problem with an integer programming formulation and evolutionary algorithms. Secondly, to detail a novel statistical method of comparing and hence building better scheduling algorithms by identifying successful algorithm modifications. The comparison method captures the results of algorithms in a single figure that can then be compared using traditional statistical techniques. Thus, the proposed method of comparing algorithms is an objective procedure designed to assist in the process of improving an algorithm. This is achieved even when some results are non-numeric or missing due to infeasibility. The final algorithm outperforms all previous evolutionary algorithms, which relied on human expertise for modification.
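The abstract does not define the single-figure statistic, so the following is only a plausible Python sketch, assuming a rank-based score in which infeasible (non-numeric or missing) runs receive the worst rank; every algorithm variant then still yields one comparable number.

```python
def single_figure(costs_by_variant):
    """costs_by_variant: {variant: [cost or None per instance]}; None marks
    an infeasible run. On each instance, variants are ranked by cost (lower
    is better) and infeasible runs get the worst possible rank; the figure
    of merit is the mean rank, to which standard tests can be applied."""
    variants = list(costs_by_variant)
    n_instances = len(next(iter(costs_by_variant.values())))
    ranks = {v: [] for v in variants}
    for i in range(n_instances):
        results = [(costs_by_variant[v][i], v) for v in variants]
        feasible = sorted(r for r in results if r[0] is not None)
        for rank, (_, v) in enumerate(feasible, start=1):
            ranks[v].append(rank)
        for cost, v in results:
            if cost is None:
                ranks[v].append(len(variants))   # worst rank
    return {v: sum(r) / len(r) for v, r in ranks.items()}
```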
Abstract:
Processors with large numbers of cores are becoming commonplace. In order to utilise the available resources in such systems, the programming paradigm has to move towards increased parallelism. However, increased parallelism does not necessarily lead to better performance. Parallel programming models have to provide not only flexible ways of defining parallel tasks, but also efficient methods to manage the created tasks. Moreover, in a general-purpose system, applications residing in the system compete for the shared resources. Thread and task scheduling in such a multiprogrammed multithreaded environment is a significant challenge. In this thesis, we introduce a new task-based parallel reduction model, called the Glasgow Parallel Reduction Machine (GPRM). Our main objective is to provide high performance while maintaining ease of programming. GPRM supports native parallelism; it provides a modular way of expressing parallel tasks and the communication patterns between them. Compiling a GPRM program results in an Intermediate Representation (IR) containing useful information about tasks, their dependencies, as well as the initial mapping information. This compile-time information helps reduce the overhead of runtime task scheduling and is key to high performance. Generally speaking, the granularity and the number of tasks are major factors in achieving high performance. These factors are even more important in the case of GPRM, as it is highly dependent on tasks, rather than threads. We use three basic benchmarks to provide a detailed comparison of GPRM with Intel OpenMP, Cilk Plus, and Threading Building Blocks (TBB) on the Intel Xeon Phi, and with GNU OpenMP on the Tilera TILEPro64. GPRM shows superior performance in almost all cases, merely by controlling the number of tasks. GPRM also provides a low-overhead mechanism, called “Global Sharing”, which improves performance in multiprogramming situations. We use OpenMP, the most popular model for shared-memory parallel programming, as the main GPRM competitor for solving three well-known problems on both platforms: LU factorisation of Sparse Matrices, Image Convolution, and Linked List Processing. We focus on proposing solutions that best fit GPRM’s model of execution. GPRM outperforms OpenMP in all cases on the TILEPro64. On the Xeon Phi, our solution for the LU Factorisation results in notable performance improvement for sparse matrices with large numbers of small blocks. We investigate the overhead of GPRM’s task creation and distribution for very short computations using the Image Convolution benchmark. We show that this overhead can be mitigated by combining smaller tasks into larger ones. As a result, GPRM can outperform OpenMP for convolving large 2D matrices on the Xeon Phi. Finally, we demonstrate that our parallel worksharing construct provides an efficient solution for Linked List processing and performs better than OpenMP implementations on the Xeon Phi. The results are very promising, as they verify that our parallel programming framework for manycore processors is flexible and scalable, and can provide high performance without sacrificing productivity.
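GPRM's own API is not shown here, so the snippet below is a generic Python sketch (not GPRM code) of the granularity idea the thesis reports: fusing many small tasks into fewer, larger chunks so that each scheduled task does enough work to amortize creation and distribution overhead.

```python
from concurrent.futures import ThreadPoolExecutor

def convolve_row(row):
    # Stand-in for a short per-row computation such as image convolution.
    return sum(row)

def run_chunked(rows, chunk_size, workers=4):
    """Submit one task per chunk of rows rather than one task per row."""
    chunks = [rows[i:i + chunk_size] for i in range(0, len(rows), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(lambda c: [convolve_row(r) for r in c], chunks))
    return [r for chunk in results for r in chunk]
```

Tuning `chunk_size` trades scheduling overhead against load balance, which is the same trade-off the thesis explores for the Image Convolution benchmark.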
Abstract:
Today, an industrial organization seeking to succeed in the global market is strongly influenced by pressures aimed at increasing overall efficiency and consequently reducing operating costs. The challenge is therefore to purge from the product everything that adds no value perceptible to the customer, and to maximize the utilization of the various installed industrial resources. From this challenge arises the Production Planning and Scheduling Problem, which must be answered efficiently. This project aims to study the production scheduling problem in a ceramic floor and wall tile industry, developing a constructive heuristic capable of reliably reflecting the reality of its production process and, if possible, assisting in its resolution. The production scheduling problem under study seeks to answer the questions of what to produce, in what quantity, when, and on which line, so as to satisfy customer needs within a previously stipulated admissible deadline while guaranteeing that the active kilns are kept full. Without major disruption to the normal work of Production, the heuristic is intended to yield viable production plans that minimize the time needed to complete the set of product references with production requirements. The problem is also approached through an exact model, as a capacitated identical parallel machine problem with a compatibility matrix, family and subfamily setups, and minimum production lots. Both the heuristic and the mixed-integer programming model developed yield valid production plans, equivalent to those currently obtained by the company with its existing scheduling tools, but in far less time.
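The heuristic itself is not given in the abstract; the following is a minimal Python sketch of a constructive rule for identical parallel lines with a compatibility matrix and family setups (all names and the single setup time are illustrative assumptions): longest jobs first, each placed on the compatible line that finishes it earliest, paying a setup when the family changes.

```python
def construct_plan(jobs, lines, setup):
    """jobs: list of (job_id, family, proc_time, compatible_lines).
    Assign longest jobs first to the compatible line with the earliest
    completion, adding a family setup whenever the line switches family."""
    finish = {ln: 0 for ln in lines}        # current completion time per line
    last_family = {ln: None for ln in lines}
    plan = {ln: [] for ln in lines}
    for job_id, fam, p, compat in sorted(jobs, key=lambda j: -j[2]):
        def completion(ln):
            s = setup if last_family[ln] not in (None, fam) else 0
            return finish[ln] + s + p
        ln = min((l for l in lines if l in compat), key=completion)
        finish[ln] = completion(ln)
        last_family[ln] = fam
        plan[ln].append(job_id)
    return plan, max(finish.values())       # plan and resulting makespan
```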
Abstract:
Datacenters have emerged as the dominant form of computing infrastructure over the last two decades. The tremendous increase in the requirements of data analysis has led to a proportional increase in power consumption and datacenters are now one of the fastest growing electricity consumers in the United States. Another rising concern is the loss of throughput due to network congestion. Scheduling models that do not explicitly account for data placement may lead to a transfer of large amounts of data over the network causing unacceptable delays. In this dissertation, we study different scheduling models that are inspired by the dual objectives of minimizing energy costs and network congestion in a datacenter. As datacenters are equipped to handle peak workloads, the average server utilization in most datacenters is very low. As a result, one can achieve huge energy savings by selectively shutting down machines when demand is low. In this dissertation, we introduce the network-aware machine activation problem to find a schedule that simultaneously minimizes the number of machines necessary and the congestion incurred in the network. Our model significantly generalizes well-studied combinatorial optimization problems such as hard-capacitated hypergraph covering and is thus strongly NP-hard. As a result, we focus on finding good approximation algorithms. Data-parallel computation frameworks such as MapReduce have popularized the design of applications that require a large amount of communication between different machines. Efficient scheduling of these communication demands is essential to guarantee efficient execution of the different applications. In the second part of the thesis, we study the approximability of the co-flow scheduling problem that has been recently introduced to capture these application-level demands. Finally, we also study the question, "In what order should one process jobs?" Often, precedence constraints specify a partial order over the set of jobs and the objective is to find suitable schedules that satisfy the partial order. However, in the presence of hard deadline constraints, it may be impossible to find a schedule that satisfies all precedence constraints. In this thesis we formalize different variants of job scheduling with soft precedence constraints and conduct the first systematic study of these problems.
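The precise formulations are developed in the dissertation; as a rough illustration only, the network-aware machine activation objective can be sketched as follows, with all symbols assumed and congestion abstracted into the induced link load:

```latex
\begin{align*}
\min \quad & \sum_{i} y_i \;+\; \lambda\,\Lambda \\
\text{s.t.} \quad & \sum_{i} x_{ij} = 1 \quad \forall j
  && \text{(every job is placed)} \\
& x_{ij} \le y_i \quad \forall i,j
  && \text{(jobs run only on activated machines)} \\
& \ell_e(x) \le \Lambda \quad \forall e
  && \text{(congestion bound on every link)} \\
& x_{ij},\, y_i \in \{0,1\},
\end{align*}
```

where $y_i$ activates machine $i$, $x_{ij}$ places job $j$ on machine $i$, $\ell_e(x)$ is the traffic a placement induces on network link $e$, and $\lambda$ weights congestion against the number of active machines.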
Abstract:
The Train Timetabling Problem (TTP) has been widely studied for freight and passenger rail systems. Less effort has been devoted to the study of high-speed rail systems. A modeling issue that has to be addressed is the departure time choice of passengers on railway services. Passengers who use these systems attempt to travel at predetermined hours due to the necessities of daily life (e.g., commuter trips). We incorporate all these features into the TTP, focusing on high-speed railway systems. We propose a Rail Scheduling and Rolling Stock (RSch-RS) model for timetable planning of high-speed railway systems. This model is composed of two essential elements: (i) an infrastructure model representing the railway network, which includes the capacity constraints of the rail network and the Rolling-Stock constraints; and (ii) a demand model that defines how passengers choose their departure times. The resulting model is a mixed-integer program whose objective function maximizes the profit of the rail operator.
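The abstract gives only the objective's intent; a minimal sketch of a profit-maximizing timetabling objective, under the simplifying assumptions of a single fare and a uniform train capacity, is:

```latex
\begin{align*}
\max \quad & \rho \sum_{s}\sum_{w} q_{ws} \;-\; \sum_{s} c_s\, z_s
  && \text{(ticket revenue minus operating cost)} \\
\text{s.t.} \quad & \sum_{s} q_{ws} \le D_w \quad \forall w
  && \text{(demand per preferred departure window)} \\
& \sum_{w} q_{ws} \le \kappa\, z_s \quad \forall s
  && \text{(capacity of each scheduled service)} \\
& z \in \mathcal{Z},\; q \ge 0
  && \text{(track capacity and Rolling-Stock constraints)},
\end{align*}
```

where $z_s \in \{0,1\}$ indicates that candidate service $s$ runs, $q_{ws}$ counts the passengers with preferred departure window $w$ assigned to $s$, $\rho$ is the fare, $\kappa$ the train capacity, and $\mathcal{Z}$ abstracts the infrastructure and Rolling-Stock constraints described above.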
Abstract:
The aim of this note is to formulate an envelope theorem for vector convex programs. This version corrects an earlier work, “The envelope theorem for multiobjective convex programming via contingent derivatives” by Jiménez Guerra et al. (2010) [3]. We first propose a necessary and sufficient condition that allows restating the main result proved in the cited paper. Second, we introduce a new Lagrange multiplier in order to obtain an envelope theorem avoiding the aforementioned error.
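For context, the classical scalar envelope theorem that the vector version generalizes: for a parametrized convex program, the gradient of the optimal-value function is given by the optimal Lagrange multiplier, wherever the value function is differentiable.

```latex
V(b) \;=\; \min_{x}\;\bigl\{\, f(x) \;:\; g(x) \le b \,\bigr\}
\qquad\Longrightarrow\qquad
\nabla V(b) \;=\; -\lambda^{*}(b),
```

where $\lambda^{*}(b)$ is the optimal multiplier vector for the constraint $g(x) \le b$. The note's contribution is the analogous statement for vector-valued (multiobjective) convex programs, via a suitably defined Lagrange multiplier.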