2 results for Scheduling models
Abstract:
We consider the problem of train planning or scheduling for large, busy, complex train stations, which are common in Europe and elsewhere, though not in North America. We develop the constraints and objectives for this problem, but these are too computationally complex to solve by standard combinatorial search or integer programming methods. The problem is also somewhat political in nature; that is, it does not have a clear objective function because it involves multiple train operators with conflicting interests. We therefore develop scheduling heuristics analogous to those successfully adopted by train planners using "manual" methods. We tested the model and algorithms by applying them to a typical large station that exhibits most of the complexities found in practice. The results compare well with those found by traditional methods, and they take account of cost and preference trade-offs not handled by those methods. With successive refinements, the algorithm eventually took only a few seconds to run, the time depending on the version of the algorithm and the scheduling problem. The scheduling models and algorithms developed and tested here can be used on their own, or as key components of a more general system for train scheduling for a rail line or network.

Train scheduling for a busy station includes ensuring that there are no conflicts between several hundred trains per day going in and out of the station on intersecting paths from multiple in-lines and out-lines to multiple platforms, while ensuring that each train is allowed at least its minimum required headways, dwell time, turnaround time and trip time. This has to be done while minimizing the costs of deviations from desired times, platforms or lines, allowing for conflicts due to through-platforms, dead-end platforms, multiple sub-platforms, and possible constraints due to infrastructure, safety or business policy.
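As a rough illustration of the kind of feasibility check a station scheduling heuristic must perform, the sketch below tests whether candidate train movements violate a minimum headway on a shared route segment. The names TrainMove, conflicts and the fixed MIN_HEADWAY value are illustrative assumptions, not the authors' formulation; a minimal sketch only.

```python
from dataclasses import dataclass

# Illustrative sketch: a minimal headway/conflict check of the kind a station
# scheduling heuristic must apply. Field names and the fixed MIN_HEADWAY value
# are assumptions, not taken from the paper.

MIN_HEADWAY = 3  # assumed minimum separation (minutes) on a shared route segment


@dataclass
class TrainMove:
    train_id: str
    segment: str   # in-line/out-line-to-platform route segment occupied
    time: int      # scheduled occupation time, minutes after midnight


def conflicts(a: TrainMove, b: TrainMove, min_headway: int = MIN_HEADWAY) -> bool:
    """Two moves conflict if they use the same segment closer than the headway."""
    return a.segment == b.segment and abs(a.time - b.time) < min_headway


def count_conflicts(moves: list[TrainMove]) -> int:
    """Count pairwise headway violations in a candidate station plan."""
    return sum(
        conflicts(moves[i], moves[j])
        for i in range(len(moves))
        for j in range(i + 1, len(moves))
    )


if __name__ == "__main__":
    plan = [
        TrainMove("IC101", "in1->P4", 480),
        TrainMove("RE205", "in1->P4", 482),  # too close behind IC101 on the same segment
        TrainMove("RE206", "in2->P7", 484),
    ]
    print(count_conflicts(plan))  # -> 1
```

A heuristic of the kind described in the abstract would use such a count, weighted by the costs of deviations from desired times, platforms or lines, to compare and refine candidate plans.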
Abstract:
We propose simple models to predict the performance degradation of disk requests due to storage device contention in consolidated virtualized environments. Model parameters can be deduced from measurements obtained inside Virtual Machines (VMs) from a system where a single VM accesses a remote storage server. The parameterized model can then be used to predict the effect of storage contention when multiple VMs are consolidated on the same server. We first propose a trace-driven approach that evaluates a queueing network with fair share scheduling using simulation. The model parameters consider Virtual Machine Monitor level disk access optimizations and rely on a calibration technique. We further present a measurement-based approach that allows a distinct characterization of read/write performance attributes. In particular, we define simple linear prediction models for I/O request mean response times, throughputs and read/write mixes, as well as a simulation model for predicting response time distributions. We found our models to be effective in predicting such quantities across a range of synthetic and emulated application workloads.
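To make the idea of a simple linear prediction model concrete, the sketch below fits mean I/O response time as a linear function of the number of consolidated VMs and the read fraction of the request mix. The feature choice, function names and the synthetic calibration numbers are assumptions for illustration, not the paper's parameterization or calibration technique.

```python
import numpy as np

# Illustrative sketch: a simple linear model predicting mean I/O response time
# from the number of consolidated VMs and the read fraction of the mix.
# Features and calibration data are assumed, not taken from the paper.


def fit_response_time_model(n_vms, read_frac, mean_rt):
    """Least-squares fit of mean_rt ~ b0 + b1 * n_vms + b2 * read_frac."""
    X = np.column_stack([np.ones_like(n_vms, dtype=float), n_vms, read_frac])
    coeffs, *_ = np.linalg.lstsq(X, mean_rt, rcond=None)
    return coeffs


def predict_response_time(coeffs, n_vms, read_frac):
    """Predict mean response time (ms) for a hypothetical consolidation scenario."""
    return coeffs[0] + coeffs[1] * n_vms + coeffs[2] * read_frac


if __name__ == "__main__":
    # Hypothetical calibration measurements: (number of VMs, read mix, mean ms).
    n = np.array([1, 2, 3, 4])
    rmix = np.array([0.7, 0.7, 0.5, 0.5])
    rt = np.array([2.1, 3.8, 6.0, 7.9])

    b = fit_response_time_model(n, rmix, rt)
    print(predict_response_time(b, n_vms=5, read_frac=0.6))
```

The models in the abstract go further, predicting throughputs, read/write mixes and full response time distributions; this sketch only shows the shape of the mean response time case.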