994 results for Scheduling models
Abstract:
This thesis investigates combinatorial and robust optimisation models for solving railway problems. Railway applications represent a challenging area for operations research. In fact, most problems in this context can be modelled as combinatorial optimisation problems, in which the number of feasible solutions is finite. Yet, despite the astonishing success of the field of combinatorial optimisation, the current state of algorithmic research faces severe difficulties with highly complex and data-intensive applications such as those concerning optimisation in large-scale transportation networks. One of the main issues is imperfect information. The idea of Robust Optimisation, as a way to mathematically represent and handle systems whose data are not precisely known, dates back to the 1970s. Unfortunately, none of those techniques has proved successfully applicable in one of the most complex and largest-scale transportation settings: that of railway systems. Railway optimisation deals with planning and scheduling problems over several time horizons. Disturbances are inevitable and severely affect the planning process. Here we focus on two compelling aspects of planning: robust planning and online (real-time) planning.
Abstract:
Crew scheduling and crew rostering are similar, related problems that can be solved by similar procedures. Existing solution methods usually create a separate model for each problem (scheduling and rostering); when the two are solved together, some approaches consider an interaction between the models in order to obtain a better solution. Here, a single set covering model that solves both problems simultaneously is presented, in which the total number of drivers needed is directly modelled and optimized. This integration makes it possible to optimize all depots at the same time, whereas traditional approaches had to work depot by depot; it also exposes the relationship between scheduling and rostering, which was known to some degree but not as easy to quantify as this model permits. Recent research in crew scheduling and rostering has identified as a current challenge the construction of schedules that reduce crew fatigue, which depends mainly on the quality of the rosters created. In this approach, rosters are constructed so that stable working hours are kept within each week of work, and a change to a different shift occurs only with free days in between, easing adaptation to the new working hours. Computational results for real-world-based instances are presented. The instances are geographically diverse, to test the performance of the procedures and the model in different scenarios.
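A minimal sketch of the kind of set covering formulation described above, written with the PuLP modelling library: each column is a complete driver roster covering a set of tasks, and selecting the cheapest set of rosters fixes the schedule and the roster at the same time. The task names, roster columns, and costs below are invented for illustration; the thesis's model generates columns that also encode the weekly-stability and free-day rules.

```python
# Set-covering sketch: each column is a complete driver roster, and
# every trip/task must be covered by at least one selected roster.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

tasks = ["t1", "t2", "t3", "t4"]             # trips to be operated
rosters = {                                   # candidate driver rosters
    "r1": {"covers": {"t1", "t2"}, "cost": 2.0},
    "r2": {"covers": {"t3", "t4"}, "cost": 2.0},
    "r3": {"covers": {"t2", "t3"}, "cost": 1.5},
    "r4": {"covers": {"t1", "t4"}, "cost": 1.5},
}

prob = LpProblem("integrated_crew_cover", LpMinimize)
x = {r: LpVariable(f"use_{r}", cat=LpBinary) for r in rosters}

# Objective: total cost of the selected rosters (driver count is
# implicit, since one roster corresponds to one driver).
prob += lpSum(rosters[r]["cost"] * x[r] for r in rosters)

# Covering constraints: every task appears in at least one chosen roster.
for t in tasks:
    prob += lpSum(x[r] for r in rosters if t in rosters[r]["covers"]) >= 1

prob.solve()
print("chosen rosters:", [r for r in rosters if x[r].value() > 0.5])
# -> ['r3', 'r4'] for this toy instance (cost 3.0)
```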
Abstract:
Antemortem tooth loss (AMTL) can occur as a consequence of dental disease, trauma, tooth extraction, or extreme continuous eruption, and as a concomitant of advanced stages of scurvy or leprosy. After the loss of a tooth, wound healing proceeds by secondary intention: the alveolus fills with blood and a coagulum forms, which is then converted into bone tissue; finally, the alveolus is remodelled to the point that it can no longer be recognized macroscopically. The timeframe of this bony consolidation of the alveolar ridge has been little studied in detail. Given how frequently AMTL occurs in human populations, establishing a time window through which the time since tooth loss (TSL) can be estimated from macroscopic observation of the bone would be extremely valuable, particularly in archaeological contexts. Such a timescale, including information on the variability of the healing processes, could be usefully applied not only in osteology but also in forensics, general dentistry, and implantology.

After the loss of a tooth, the socket is usually filled by a coagulum. The newly formed tissue is rapidly converted into immature bone, which stabilizes the jawbone as well as the adjacent teeth. After maturing, this tissue finally assimilates to the surrounding bone. During this process, the appearance of the socket passes through several stages, which were identified in the present study from clinical radiographs of recent patients and from examinations of archaeological skeletal series. The healing of the socket can be divided into a pre-osseous phase (within one week of tooth loss), an ossification phase (about 14 weeks after tooth loss), and an ossified, completely healed phase (at least 29 weeks after tooth loss). Several factors, such as resorption of the interdental septum, the condition of the alveolar bone, or the sex of the individual, can significantly accelerate or slow the normal healing process, causing differences of up to 19 weeks. Other variables had no significant effect on the timing of the healing process. Relevant dependencies between variables were also tested irrespective of socket filling, and groups of independent variables were examined in multivariable models with regard to the degree of filling and TSL. These results permit a rough estimate, in weeks, of the time elapsed since a tooth was lost, and the inclusion of further parameters allows higher precision.

Although various dental pathologies were taken into account in this study, future work should examine their potential influence on the alveolar healing process more closely. Establishing the causal role of some of the variables that influence the healing rate (such as the presence of neighbouring teeth or dental treatment) would be valuable for future studies of oral bone tissue. Comparative studies on forensic series with known TSL, or on a clinical series observed from the beginning of the healing process, could corroborate these results.
Abstract:
In process industries, make-and-pack production is used to produce food and beverages, chemicals, and metal products, among others. This type of production process allows the fabrication of a wide range of products in relatively small amounts using the same equipment. In this article, we consider a real-world production process (cf. Honkomp et al. 2000. The curse of reality – why process scheduling optimization problems are difficult in practice. Computers & Chemical Engineering, 24, 323–328.) comprising sequence-dependent changeover times, multipurpose storage units with limited capacities, quarantine times, batch splitting, partial equipment connectivity, and transfer times. The planning problem consists of computing a production schedule such that a given demand of packed products is fulfilled, all technological constraints are satisfied, and the production makespan is minimised. None of the models in the literature covers all of the technological constraints that occur in such make-and-pack production processes. To close this gap, we develop an efficient mixed-integer linear programming model that is based on a continuous time domain and general-precedence variables. We propose novel types of symmetry-breaking constraints and a preprocessing procedure to improve the model performance. In an experimental analysis, we show that small- and moderate-sized instances can be solved to optimality within short CPU times.
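The general-precedence idea at the core of such models can be sketched as follows: one continuous start-time variable per batch, one binary variable per pair of batches sharing a unit, and big-M disjunctions carrying the sequence-dependent changeover times. This is a hedged illustration only, not the paper's full model (which also covers storage capacities, quarantine times, batch splitting, connectivity, and transfer times); the data and the PuLP formulation below are illustrative assumptions.

```python
# General-precedence core: continuous start times, one binary per
# unordered pair of batches on one unit, big-M disjunctions with
# sequence-dependent changeovers, and a makespan objective.
from itertools import combinations
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary

batches = ["A", "B", "C"]
proc = {"A": 3.0, "B": 2.0, "C": 4.0}                  # processing times
chg = {(i, j): 1.0 for i in batches for j in batches if i != j}
M = sum(proc.values()) + sum(chg.values())              # safe big-M

prob = LpProblem("general_precedence", LpMinimize)
S = {i: LpVariable(f"start_{i}", lowBound=0) for i in batches}
Cmax = LpVariable("makespan", lowBound=0)
y = {(i, j): LpVariable(f"y_{i}_{j}", cat=LpBinary)
     for i, j in combinations(batches, 2)}              # y=1: i before j

for i, j in combinations(batches, 2):
    # Either i precedes j (paying the changeover), or j precedes i.
    prob += S[j] >= S[i] + proc[i] + chg[i, j] - M * (1 - y[i, j])
    prob += S[i] >= S[j] + proc[j] + chg[j, i] - M * y[i, j]

for i in batches:
    prob += Cmax >= S[i] + proc[i]   # makespan covers every batch

prob += Cmax                         # minimise the makespan
prob.solve()
print({i: S[i].value() for i in batches}, "makespan:", Cmax.value())
```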
Abstract:
The paper deals with batch scheduling problems in process industries where final products arise from several successive chemical or physical transformations of raw materials using multi-purpose equipment. In batch production mode, the total requirements of intermediate and final products are partitioned into batches. The production start of a batch at a given level requires the availability of all input products. We consider the problem of scheduling the production of given batches such that the makespan is minimized. Constraints like minimum and maximum time lags between successive production levels, sequence-dependent facility setup times, finite intermediate storages, production breaks, and time-varying manpower contribute to the complexity of this problem. We propose a new solution approach using models and methods of resource-constrained project scheduling, which (approximately) solves problems of industrial size within a reasonable amount of time.
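As a hedged illustration of the project scheduling machinery such approaches build on, the following is a textbook serial schedule generation scheme with a simple priority rule. It omits the paper's time lags, sequence-dependent setups, storage constraints, and calendars, and all job data are invented.

```python
# Textbook serial schedule-generation scheme (SGS): schedule jobs in a
# precedence-feasible order, each at the earliest start where all
# predecessors are finished and renewable capacity suffices.
jobs = {"a": (2, 2), "b": (3, 1), "c": (2, 2), "d": (1, 1)}  # dur, demand
preds = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
CAPACITY = 3
HORIZON = 20

usage = [0] * HORIZON            # resource usage per time bucket
start, finish = {}, {}

def feasible(t, dur, dem):
    return all(usage[u] + dem <= CAPACITY for u in range(t, t + dur))

# Priority rule: most total work (duration * demand) first, ties by name.
order = sorted(jobs, key=lambda j: (-jobs[j][0] * jobs[j][1], j))
scheduled = []
while len(scheduled) < len(jobs):
    # highest-priority unscheduled job whose predecessors are all done
    j = next(k for k in order
             if k not in scheduled and all(p in scheduled for p in preds[k]))
    dur, dem = jobs[j]
    t = max([finish[p] for p in preds[j]], default=0)
    while not feasible(t, dur, dem):
        t += 1                   # shift right until capacity allows it
    start[j], finish[j] = t, t + dur
    for u in range(t, t + dur):
        usage[u] += dem
    scheduled.append(j)

print(start)   # {'a': 0, 'c': 2, 'b': 2, 'd': 5} for this toy instance
```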
Abstract:
This paper is concerned with the modelling of storage configurations for intermediate products in process industries. Those models form the basis of algorithms for scheduling chemical production plants. Different storage capacity settings (unlimited, finite, and no intermediate storage), storage homogeneity settings (dedicated and shared storage), and storage time settings (unlimited, finite, and no wait) are considered. We discuss a classification of storage constraints in batch scheduling and show how those constraints can be integrated into a general production scheduling model that is based on the concept of cumulative resources.
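The classification can be made concrete as a small data model. The category names below mirror those in the text (unlimited/finite/no intermediate storage; dedicated/shared storage; unlimited/finite/zero wait); the class layout and the helper method are illustrative assumptions, not the paper's notation.

```python
# The storage classification as a data model: capacity, homogeneity,
# and storage-time settings combine into one configuration per product.
from dataclasses import dataclass
from enum import Enum

class Capacity(Enum):
    UNLIMITED = "UIS"   # unlimited intermediate storage
    FINITE = "FIS"      # finite intermediate storage
    NONE = "NIS"        # no intermediate storage

class Homogeneity(Enum):
    DEDICATED = "dedicated"   # one product per storage unit
    SHARED = "shared"         # several products may share a unit

class StorageTime(Enum):
    UNLIMITED = "UW"    # unlimited wait
    FINITE = "FW"       # finite wait
    ZERO = "ZW"         # zero wait (no-wait transfer)

@dataclass
class StorageConfig:
    capacity: Capacity
    homogeneity: Homogeneity
    storage_time: StorageTime
    max_units: int = 0      # only meaningful for FINITE capacity
    max_wait: float = 0.0   # only meaningful for FINITE storage time

    def allows_buffering(self) -> bool:
        """Can an intermediate product wait between production levels?"""
        return (self.capacity is not Capacity.NONE
                and self.storage_time is not StorageTime.ZERO)

cfg = StorageConfig(Capacity.FINITE, Homogeneity.DEDICATED,
                    StorageTime.FINITE, max_units=2, max_wait=4.0)
print(cfg.allows_buffering())   # True
```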
Abstract:
The interactions among three important issues involved in the implementation of logic programs in parallel (goal scheduling, precedence, and memory management) are discussed. A simplified, parallel memory management model and an efficient, load-balancing goal scheduling strategy are presented. It is shown how, for systems which support "don't know" non-determinism, special care has to be taken during goal scheduling if the space recovery characteristics of sequential systems are to be preserved. A solution based on selecting only "newer" goals for execution is described, and an algorithm is proposed for efficiently maintaining and determining precedence relationships and variable ages across parallel goals. It is argued that the proposed schemes and algorithms make it possible to extend the storage performance of sequential systems to parallel execution without the considerable overhead previously associated with it. The results are applicable to a wide class of parallel and coroutining systems, and they represent an efficient alternative to "all heap" or "spaghetti stack" allocation models.
Abstract:
The analysis of concurrent constraint programs is a challenge due to the inherently concurrent behaviour of their computational model. However, most implementations of the concurrent paradigm can be viewed as a computation with a fixed scheduling rule which suspends some goals so that their execution is postponed until some condition awakens them. For a certain kind of properties, an analysis defined in these terms is correct. Furthermore, it is much more tractable, and in addition can make use of existing analysis technology for the underlying fixed computation rule. We show how this can be done when the starting point is a framework for the analysis of sequential programs. The resulting analysis, which incorporates suspensions, is adequate for concurrent models where concurrency is localized, e.g. the Andorra model. We refine the analysis for this particular case. Another model in which concurrency is preferably encapsulated, and thus suspensions are local to parts of the computation, is that of CIAO. Nonetheless, the analysis scheme can be generalized to models with global concurrency. We also sketch how this could be done, and we show how the resulting analysis framework could be used for analyzing typical properties, such as suspension freeness.
Abstract:
In this paper, we examine the issue of memory management in the parallel execution of logic programs. We concentrate on non-deterministic and-parallel schemes which we believe present a relatively general set of problems to be solved, including most of those encountered in the memory management of or-parallel systems. We present a distributed stack memory management model which allows flexible scheduling of goals. Previously proposed models (based on the "Marker model") are lacking in that they impose restrictions on the selection of goals to be executed or may consume a large amount of virtual memory. This paper first presents results which imply that the above-mentioned shortcomings can have significant performance impacts. An extension of the Marker Model is then proposed which allows flexible scheduling of goals while keeping (virtual) memory consumption down. Measurements are presented which show the advantage of this solution. Methods for handling forward and backward execution, cut and roll back are discussed in the context of the proposed scheme. In addition, the paper shows how the same mechanism for flexible scheduling can be applied to allow the efficient handling of the very general form of suspension that can occur in systems which combine several types of and-parallelism and more sophisticated methods of executing logic programs. We believe that the results are applicable to many and- and or-parallel systems.
Abstract:
Over the past few decades, we have been enjoying tremendous benefits thanks to the revolutionary advancement of computing systems, driven mainly by remarkable semiconductor technology scaling and increasingly complicated processor architectures. However, the exponentially increased transistor density has directly led to exponentially increased power consumption and dramatically elevated system temperatures, which not only adversely impact the system's cost, performance, and reliability, but also increase leakage and thus overall power consumption. Today, power and thermal issues pose enormous challenges and threaten to slow the continued evolution of computer technology. Effective power/thermal-aware design techniques are urgently needed at all design abstraction levels, from the circuit level and the logic level to the architectural level and the system level.

In this dissertation, we present our research efforts to employ real-time scheduling techniques to solve resource-constrained power/thermal-aware design-optimization problems. In our research, we developed a set of simple yet accurate system-level models to capture the processor's thermal dynamics as well as the interdependency of leakage power consumption, temperature, and supply voltage. Based on these models, we investigated the fundamental principles of power/thermal-aware scheduling and developed real-time scheduling techniques targeting a variety of design objectives, including peak temperature minimization, overall energy reduction, and performance maximization.

The novelty of this work is that we integrate cutting-edge research on power and thermal behaviour at the circuit and architectural levels into a set of accurate yet simplified system-level models, and are able to conduct system-level analysis and design based on these models. The theoretical study in this work serves as a solid foundation to guide the development of power/thermal-aware scheduling algorithms in practical computing systems.
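The kind of system-level model described can be sketched as a lumped RC thermal model with a linearized temperature-dependent leakage term, which captures the leakage-temperature feedback loop the abstract mentions. All coefficients below are invented for illustration, not the dissertation's fitted values.

```python
# Lumped thermal model sketch: temperature follows T' = (P - T/R) / C,
# and leakage power grows (here, linearly) with temperature, so power
# and temperature feed back on each other.
R, C = 2.0, 40.0            # thermal resistance (K/W), capacitance (J/K)
K0, K1 = 2.0, 0.05          # leakage approximation: P_leak = K0 + K1 * T
P_DYN = {1.0: 30.0, 0.8: 18.0}   # dynamic power per supply-voltage level
DT = 0.1                    # Euler integration step (s)

def simulate(vdd_schedule, t0=45.0):
    """Integrate temperature over a list of (vdd, duration) phases."""
    temp, trace = t0, []
    for vdd, seconds in vdd_schedule:
        for _ in range(round(seconds / DT)):
            p_total = P_DYN[vdd] + K0 + K1 * temp   # dynamic + leakage
            temp += DT * (p_total - temp / R) / C   # RC thermal dynamics
            trace.append(temp)
    return trace

# Evaluate the peak temperature of an alternating voltage schedule.
trace = simulate([(1.0, 20), (0.8, 20), (1.0, 20)])
print(f"peak temperature: {max(trace):.1f}")
```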
Abstract:
This paper compares two linear programming (LP) models for shift scheduling in services where homogeneously skilled employees are available at limited times. Although both models are based on set covering approaches, one explicitly matches employees to shifts, while the other imposes this matching implicitly. Each model is used in three forms (one with complete meal break placement flexibility, another with very limited flexibility, and a third without meal breaks) to provide initial schedules to a completion/improvement heuristic. The term completion/improvement heuristic is used to describe a construction/improvement heuristic operating on a starting schedule. On 80 test problems varying widely in scheduling flexibility, employee staffing requirements, and employee availability characteristics, all six LP-based procedures generated lower-cost schedules than a comparison from-scratch construction/improvement heuristic. This heuristic, which perpetually maintains an explicit matching of employees to shifts, consists of three phases which add, drop, and modify shifts. In terms of schedule cost, schedule generation time, and model size, the procedures based on the implicit model performed better, as a group, than those based on the explicit model. The LP model with complete break placement flexibility and implicit matching of employees to shifts generated schedules costing 6.7% less than those developed by the from-scratch heuristic.
Abstract:
An extensive literature exists on the problems of daily (shift) and weekly (tour) labor scheduling. In representing requirements for employees in these problems, researchers have used formulations based either on the model of Dantzig (1954) or on the model of Keith (1979). We show that both formulations have weaknesses in environments where management knows, or can attempt to identify, how different levels of customer service affect profits. These weaknesses result in lower-than-necessary profits. This paper presents a New Formulation of the daily and weekly Labor Scheduling Problems (NFLSP) designed to overcome the limitations of earlier models. NFLSP incorporates information on how changing the number of employees working in each planning period affects profits. NFLSP uses this information during the development of the schedule to identify the number of employees who, ideally, should be working in each period. In an extensive simulation of 1,152 service environments, NFLSP outperformed the formulations of Dantzig (1954) and Keith (1979) at a significance level of 0.001. Assuming year-round operations and an hourly wage, including benefits, of $6.00, NFLSP's schedules were $96,046 (2.2%) and $24,648 (0.6%) more profitable, on average, than schedules developed using the formulations of Dantzig (1954) and Keith (1979), respectively. Although the average percentage gain over Keith's model was fairly small, it could be much larger in some real cases with different parameters. NFLSP yielded a higher profit than the models of Keith (1979) and Dantzig (1954) in 73 and 100 percent of the simulated cases, respectively.
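The core idea, letting period-by-period profit information drive staffing levels rather than fixed coverage requirements, can be illustrated with a toy calculation. The revenue figures below are invented; only the $6.00 hourly wage is taken from the abstract.

```python
# Toy version of the core idea: for each planning period, pick the
# staffing level that maximizes expected revenue minus labor cost,
# instead of covering an externally fixed requirement.
WAGE = 6.00   # hourly wage including benefits, as in the abstract

# expected_revenue[period][n] = expected revenue with n employees working
expected_revenue = {
    "11:00": [0.0, 40.0, 65.0, 78.0, 84.0],
    "12:00": [0.0, 55.0, 95.0, 120.0, 130.0],
    "13:00": [0.0, 45.0, 75.0, 92.0, 100.0],
}

ideal = {}
for period, rev in expected_revenue.items():
    profit = [r - WAGE * n for n, r in enumerate(rev)]
    ideal[period] = max(range(len(profit)), key=profit.__getitem__)

print(ideal)   # {'11:00': 3, '12:00': 4, '13:00': 4}
```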
Abstract:
The U.S. railroad companies spend billions of dollars every year on railroad track maintenance in order to ensure safety and operational efficiency of their railroad networks. Besides maintenance costs, other costs such as train accident costs, train and shipment delay costs, and rolling stock maintenance costs are also closely related to track maintenance activities. Optimizing the track maintenance process on extensive railroad networks is a very complex problem with major cost implications. Currently, the decision making process for track maintenance planning is largely manual and relies primarily on the knowledge and judgment of experts. There is considerable potential to improve the process by using operations research techniques to develop solutions to the optimization problems on track maintenance. In this dissertation study, we propose a range of mathematical models and solution algorithms for three network-level scheduling problems on track maintenance: the track inspection scheduling problem (TISP), the production team scheduling problem (PTSP), and the job-to-project clustering problem (JTPCP).

TISP involves a set of inspection teams which travel over the railroad network to identify track defects. It is a large-scale routing and scheduling problem in which thousands of tasks are to be scheduled subject to many difficult side constraints, such as periodicity constraints and discrete working time constraints. A vehicle routing problem formulation was proposed for TISP, and a customized heuristic algorithm was developed to solve the model. The algorithm iteratively applies a constructive heuristic and a local search algorithm in an incremental scheduling horizon framework. The proposed model and algorithm have been adopted by a Class I railroad in its decision making process. Real-world case studies show that the proposed approach outperforms the manual approach in short-term scheduling and can be used to conduct long-term what-if analyses to yield managerial insights.

PTSP schedules capital track maintenance projects, which are the largest track maintenance activities and account for the majority of railroad capital spending. A time-space network model was proposed to formulate PTSP. More than ten types of side constraints were considered in the model, including very complex constraints such as mutual exclusion constraints and consecution constraints. A multiple neighborhood search algorithm, including a decomposition and restriction search and a block-interchange search, was developed to solve the model. Various performance enhancement techniques, such as data reduction, an augmented cost function, and subproblem prioritization, were developed to improve the algorithm. The proposed approach has been adopted by a Class I railroad for two years. Our numerical results show that the model solutions are able to satisfy all hard constraints and most soft constraints. Compared with the existing manual procedure, the proposed approach brings significant cost savings and operational efficiency improvements.

JTPCP is an intermediate problem between TISP and PTSP. It focuses on clustering thousands of capital track maintenance jobs (based on the defects identified in track inspection) into projects so that the projects can be scheduled in PTSP. A vehicle routing problem based model and a multiple-step heuristic algorithm were developed to solve this problem. Various side constraints, such as mutual exclusion constraints and rounding constraints, were considered. The proposed approach has been applied in practice and has shown good performance in both solution quality and efficiency.
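As a hedged, much-simplified illustration of the periodicity constraints central to TISP, the sketch below greedily assigns each track segment an inspection day no later than its last inspection plus the required period, subject to a daily team capacity. The data and the greedy rule are invented; the dissertation's algorithm is a customized constructive heuristic with local search, which this does not reproduce.

```python
# Toy periodicity scheduling: each segment must be inspected no later
# than its last inspection plus its required period, and one team can
# perform at most DAILY_CAPACITY inspections per day.
segments = {                # segment -> (last inspection day, period in days)
    "seg1": (0, 30), "seg2": (10, 30), "seg3": (5, 14), "seg4": (20, 14),
}
DAILY_CAPACITY = 1
load = {}                   # day -> inspections already assigned

schedule = {}
# Most urgent first: earliest due date (last inspection + period).
for seg, (last, period) in sorted(segments.items(),
                                  key=lambda kv: kv[1][0] + kv[1][1]):
    day = last + period     # latest day honouring the periodicity rule
    while load.get(day, 0) >= DAILY_CAPACITY:
        day -= 1            # pull earlier if that day is already full
    schedule[seg] = day
    load[day] = load.get(day, 0) + 1

print(schedule)  # {'seg3': 19, 'seg1': 30, 'seg4': 34, 'seg2': 40}
```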
Abstract:
Our research has shown that schedules can be built mimicking a human scheduler by using a set of rules that involve domain knowledge. This chapter presents a Bayesian Optimization Algorithm (BOA) for the nurse scheduling problem that chooses suitable scheduling rules from a set for each nurse's assignment. Based on the idea of using probabilistic models, the BOA builds a Bayesian network over the set of promising solutions and samples this network to generate new candidate solutions. Computational results from 52 real data instances demonstrate the success of this approach. It is also suggested that the learning mechanism in the proposed algorithm may be suitable for other scheduling problems.
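As a deliberately simplified stand-in for the BOA, the sketch below uses a univariate estimation-of-distribution algorithm (PBIL-style) that learns, independently for each nurse, a probability distribution over scheduling rules from the best sampled solutions. The real BOA instead learns a Bayesian network that also captures dependencies between rule choices; the fitness function here is a placeholder.

```python
# Univariate estimation-of-distribution sketch: learn, per nurse, a
# probability over scheduling rules from elite sampled rosters.
import random

N_NURSES, N_RULES = 6, 4
POP, ELITE, GENS, LR = 40, 10, 30, 0.2

def fitness(assignment):
    """Placeholder: replace with a real roster evaluation (coverage,
    preferences, fatigue penalties). Here: prefer lower-index rules."""
    return -sum(assignment)

# probs[n][r] = probability that nurse n's assignment uses rule r
probs = [[1.0 / N_RULES] * N_RULES for _ in range(N_NURSES)]

for _ in range(GENS):
    # Sample a population of rule assignments from the current model.
    pop = [[random.choices(range(N_RULES), weights=probs[n])[0]
            for n in range(N_NURSES)] for _ in range(POP)]
    elite = sorted(pop, key=fitness, reverse=True)[:ELITE]
    # Move each nurse's rule distribution toward the elite frequencies.
    for n in range(N_NURSES):
        for r in range(N_RULES):
            freq = sum(ind[n] == r for ind in elite) / ELITE
            probs[n][r] = (1 - LR) * probs[n][r] + LR * freq

best = max(pop, key=fitness)
print("rule chosen per nurse:", best)
```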
Abstract:
Reconfigurable platforms are a promising technology that offers an interesting trade-off between flexibility and performance, which many recent embedded system applications demand, especially in fields such as multimedia processing. These applications typically involve multiple ad-hoc tasks for hardware acceleration, which are usually represented using formalisms such as Data Flow Diagrams (DFDs), Data Flow Graphs (DFGs), Control and Data Flow Graphs (CDFGs) or Petri Nets. However, none of these models is able to capture at the same time the pipeline behavior between tasks (which can therefore coexist in order to minimize the application execution time), their communication patterns, and their data dependencies. This paper shows that knowledge of all this information can be effectively exploited to reduce the resource requirements and improve the timing performance of modern reconfigurable systems, where a set of hardware accelerators is used to support the computation. For this purpose, this paper proposes a novel task representation model, named Temporal Constrained Data Flow Diagram (TCDFD), which includes all this information. This paper also presents a mapping-scheduling algorithm that is able to take advantage of the new TCDFD model. It aims at minimizing the dynamic reconfiguration overhead while meeting the communication requirements among the tasks. Experimental results show that the presented approach achieves up to 75% resource savings and up to 89% reconfiguration overhead reduction with respect to other state-of-the-art techniques for reconfigurable platforms.