911 results for Train scheduling
Abstract:
In this paper we present a mixed integer model that integrates lot sizing and lot scheduling decisions for the production planning of a soft drink company. The main contribution of the paper is a model that differs from others in the literature in the constraints related to the scheduling decisions. The proposed strategy is compared to other strategies presented in the literature.
Abstract:
The medium term hydropower scheduling (MTHS) problem involves determining, for each time stage of the planning period, the amount of generation at each hydro plant that maximizes the expected future benefits throughout the planning period, while respecting plant operational constraints. Moreover, this decision-making relies mainly on advance knowledge of the inflows. To forecast the inflows of a given basin, intelligent computational approaches can be used. In this paper, Dynamic Programming (DP) is considered with the inflows set to their average values, turning the problem into a deterministic one whose solution can be obtained by deterministic DP (DDP). The performance of the DDP technique in the MTHS problem was assessed by simulation using the ensemble prediction models. Features and sensitivities of these models are discussed. © 2012 IEEE.
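To make the DDP idea concrete, the sketch below (not the authors' implementation) shows the backward recursion that results once inflows are fixed at their average values, assuming a discretized storage grid and user-supplied benefit and feasible-release functions.

```python
# Minimal deterministic-DP sketch for a single reservoir; the storage grid,
# benefit(t, storage, release) and releases(storage, inflow) callables are
# illustrative assumptions, not the paper's model.
def ddp_schedule(num_stages, storage_levels, avg_inflow, benefit, releases):
    """value[(t, s)] = best benefit from stage t onward when stage t starts
    with storage level s; policy[(t, s)] = the release achieving it."""
    value = {(num_stages, s): 0.0 for s in storage_levels}
    policy = {}
    for t in reversed(range(num_stages)):
        for s in storage_levels:
            best, best_r = float("-inf"), None
            for r in releases(s, avg_inflow[t]):
                # water balance, snapped to the nearest discretized storage level
                s_next = min(storage_levels,
                             key=lambda x: abs(x - (s + avg_inflow[t] - r)))
                cand = benefit(t, s, r) + value[(t + 1, s_next)]
                if cand > best:
                    best, best_r = cand, r
            value[(t, s)], policy[(t, s)] = best, best_r
    return value, policy
```

A stochastic variant would take an expectation over inflow scenarios at each stage; fixing the inflows at their averages is what reduces the recursion to the deterministic form above.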
Abstract:
Software transactional memory (STM) systems have been used as an approach to improve performance by allowing the concurrent execution of atomic blocks. However, under high-contention workloads, STM-based systems can considerably degrade performance, as the transaction conflict rate increases. Contention management policies have been used as a way to select which transaction to abort when a conflict occurs. In general, contention managers are not capable of avoiding conflicts, as they can only select which transaction to abort and the moment it should restart. Since contention managers act only after a conflict is detected, it becomes harder to effectively increase transaction throughput. More proactive approaches have emerged, aiming at predicting when a transaction is likely to abort and postponing its execution. Nevertheless, most of the proposed proactive techniques are limited, as they do not replace the doomed transaction with another or, when they do, they rely on the operating system for that, having little or no control over which transaction to run. This article proposes LUTS, a lightweight user-level transaction scheduler. Unlike other techniques, LUTS provides the means for selecting another transaction to run in parallel, thus improving system throughput. We discuss the design of LUTS and propose a dynamic conflict-avoidance heuristic built around its scheduling capabilities. Experimental results, conducted with the STAMP and STMBench7 benchmark suites, running on TinySTM and SwissTM, show how our conflict-avoidance heuristic can effectively improve STM performance on high-contention applications. © 2012 Springer Science+Business Media, LLC.
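The core scheduling idea can be sketched briefly: instead of stalling on (or aborting) a transaction predicted to conflict with the ones currently running, the scheduler scans its ready queue for a different transaction to start. The Python class below only illustrates that idea; the class name, the conflict_likelihood predictor and the threshold are assumptions, not the LUTS API (which is a C/C++ user-level runtime).

```python
from collections import deque

class ConflictAvoidingScheduler:
    """Toy user-level scheduler: postpones transactions predicted to conflict
    with the currently running set and runs another ready transaction instead."""
    def __init__(self, conflict_likelihood, threshold=0.5):
        self.ready = deque()                            # transactions waiting to run
        self.running = set()                            # transactions currently executing
        self.conflict_likelihood = conflict_likelihood  # (tx, running_set) -> [0, 1]
        self.threshold = threshold

    def submit(self, tx):
        self.ready.append(tx)

    def next_transaction(self):
        # Scan the ready queue once; skip (postpone) likely-to-abort transactions.
        for _ in range(len(self.ready)):
            tx = self.ready.popleft()
            if self.conflict_likelihood(tx, self.running) < self.threshold:
                self.running.add(tx)
                return tx
            self.ready.append(tx)                       # re-queue instead of running it now
        return None                                     # nothing safe to run at the moment

    def finished(self, tx):
        self.running.discard(tx)
```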
Abstract:
This paper tackles a Nurse Scheduling Problem which consists of generating work schedules for a set of nurses while considering their shift preferences and other requirements. The objective is to maximize the satisfaction of nurses' preferences and minimize the violation of soft constraints. This paper presents a new deterministic heuristic algorithm, called MAPA (multi-assignment problem-based algorithm), which is based on successive resolutions of the assignment problem. The algorithm has two phases: a constructive phase and an improvement phase. The constructive phase builds a full schedule by solving successive assignment problems, one for each day in the planning period. The improvement phase uses a couple of procedures that re-solve assignment problems to produce a better schedule. Given the deterministic nature of the algorithm, the same schedule is obtained each time it is applied to the same problem instance. The performance of MAPA is benchmarked against published results for almost 250,000 instances from the NSPLib dataset. In most cases, particularly on large instances of the problem, the results produced by MAPA are better than the best-known solutions from the literature. The experiments reported here also show that MAPA finds more feasible solutions than other algorithms in the literature, which suggests that the proposed approach is effective and robust. © 2013 Springer Science+Business Media New York.
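The constructive phase lends itself to a small sketch: build the roster one day at a time by solving an assignment problem between nurses and that day's shift slots. The cost matrices, data layout and use of SciPy's Hungarian solver below are assumptions for illustration; the paper's formulation also scores soft constraints and is followed by the improvement phase.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def constructive_phase(preference_cost):
    """preference_cost[d] is an (n_nurses x n_slots) penalty matrix for day d
    (lower = more preferred); one assignment problem is solved per day."""
    roster = []
    for day_cost in preference_cost:
        nurses, slots = linear_sum_assignment(day_cost)   # Hungarian method
        roster.append(dict(zip(nurses.tolist(), slots.tolist())))
    return roster

# Toy usage: 3 nurses, 3 shift slots, 2 days.
costs = [np.array([[1, 4, 3], [2, 1, 5], [3, 2, 1]]),
         np.array([[2, 2, 4], [1, 3, 2], [5, 1, 1]])]
print(constructive_phase(costs))   # e.g. [{0: 0, 1: 1, 2: 2}, {0: 1, 1: 0, 2: 2}]
```

The improvement phase can then be viewed as repeatedly freeing part of the roster (for instance, one day at a time) and re-solving the corresponding assignment problem to lower the total penalty.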
Abstract:
In many production processes, a key material is prepared and then transformed into different final products. The lot sizing decisions concern not only the production of final products, but also the preparation of the material, in order to take into account their sequence-dependent setup costs and times. The amount of research in recent years indicates the relevance of this problem in various industrial settings. In this paper, facility location reformulation and strengthening constraints are newly applied to a previous lot-sizing model in order to improve solution quality and computing time. Three alternative metaheuristics are used to fix the setup variables, resulting in much improved performance over previous research, especially regarding the use of the metaheuristics for larger instances. © 2013 Elsevier Ltd. All rights reserved.
Abstract:
This study tested whether 3-4 weeks of classical Live High-Train High (LHTH) altitude training increases swim-specific VO2max through increased hemoglobin mass (Hbmass). Ten swimmers lived and trained for more than 3 weeks between 2,130 and 3,094 m of altitude, and a control group of ten swimmers followed the same training at sea level (SL). Body composition was examined using dual X-ray absorptiometry. Hbmass was determined by carbon monoxide rebreathing. Swimming VO2peak was determined and swimming trials of 4 x 50, 200 and 3,000 m were performed before and after the intervention. Hbmass (n = 10) was increased (P < 0.05) after altitude training by 6.2 ± 3.9 % in the LHTH group, whereas no changes were apparent in the SL group (n = 10). Swimming VO2peak was similar before and after training camps in both groups (LHTH: n = 7, SL: n = 6). Performance of 4 x 50 m at race pace was improved to a similar degree in both groups (LHTH: n = 10, SL: n = 10). Maximal speed reached in an incremental swimming step test (P = 0.051) and time to complete 3,000 m (P = 0.09) tended to be more improved after LHTH (n = 10) than SL training (n = 10). In conclusion, 3-4 weeks of classical LHTH is sufficient to increase Hbmass but exerts no effect on swimming-specific VO2peak. LHTH may improve performance more than SL training.
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Graduate Program in Electrical Engineering - FEIS
Abstract:
Data-intensive Grid applications require huge data transfers between grid computing nodes. These computing nodes, where computing jobs are executed, are usually geographically separated. A grid network that employs optical wavelength division multiplexing (WDM) technology and optical switches to interconnect computing resources with dynamically provisioned multi-gigabit-rate lightpaths is called a Lambda Grid network. A computing task may be executed on any one of several computing nodes that possess the necessary resources. To reflect reality in job scheduling, the allocation of network resources for data transfer should be taken into consideration. However, few scheduling methods consider the communication contention on Lambda Grids. In this paper, we investigate the joint scheduling problem while considering both optical network and computing resources in a Lambda Grid network. The objective of our work is to maximize the total number of jobs that can be scheduled in a Lambda Grid network. An adaptive routing algorithm is proposed and implemented for accomplishing the communication tasks for every job submitted in the network. Four heuristics (FIFO, ESTF, LJF, RS) are implemented for job scheduling of the computational tasks. Simulation results demonstrate the feasibility and efficiency of the proposed solution.
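The four job-ordering heuristics named in the abstract can be illustrated with a short sketch; the Job fields and the usual expansions of the acronyms (first-in-first-out, earliest start time first, largest job first, random selection) are assumptions for illustration, and the adaptive routing and wavelength assignment that follow the ordering step are omitted.

```python
import random
from dataclasses import dataclass

@dataclass
class Job:
    job_id: int
    arrival_time: float     # submission time (FIFO key)
    earliest_start: float   # earliest feasible start time (ESTF key)
    size: float             # computational demand (LJF key)

def order_jobs(jobs, heuristic):
    """Return the order in which submitted jobs are offered to the scheduler."""
    if heuristic == "FIFO":    # first in, first out
        return sorted(jobs, key=lambda j: j.arrival_time)
    if heuristic == "ESTF":    # earliest start time first
        return sorted(jobs, key=lambda j: j.earliest_start)
    if heuristic == "LJF":     # largest job first
        return sorted(jobs, key=lambda j: j.size, reverse=True)
    if heuristic == "RS":      # random selection
        shuffled = list(jobs)
        random.shuffle(shuffled)
        return shuffled
    raise ValueError(f"unknown heuristic: {heuristic}")
```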
Abstract:
Lightpath scheduling is an important capability in next-generation wavelength-division multiplexing (WDM) optical networks to reserve resources in advance for a specified time period while provisioning end-to-end lightpaths. In this study, we propose an approach to support dynamic lightpath scheduling in such networks. To minimize blocking probability in a network that accommodates dynamic scheduled lightpath demands (DSLDs), resource allocation should be optimized in a dynamic manner. However, for the network users who desire deterministic services, resources must be reserved in advance and guaranteed for future use. These two objectives may be mutually incompatible. Therefore, we propose a two-phase dynamic lightpath scheduling approach to tackle this issue. The first phase is the deterministic lightpath scheduling phase. When a lightpath request arrives, the network control plane schedules a path with guaranteed resources so that the user can get a quick response with the deterministic lightpath schedule. The second phase is the lightpath re-optimization phase, in which the network control plane re-provisions some already scheduled lightpaths. Experimental results show that our proposed two-phase dynamic lightpath scheduling approach can greatly reduce WDM network blocking.
Abstract:
Lightpath scheduling is an important capability in next-generation wavelength-division multiplexing (WDM) optical networks to reserve resources in advance for a specified time period while provisioning end-to-end lightpaths. In a dynamic environment, end-user requests for dynamic scheduled lightpath demands (D-SLDs) need to be serviced without knowledge of future requests. Even though the starting time of the request may be hours or days from the current time, the end user nevertheless expects a quick response as to whether the request can be satisfied. We propose a two-phase approach to dynamically schedule and provision D-SLDs. In the first phase, termed the deterministic lightpath scheduling phase, upon arrival of a lightpath request, the network control plane schedules a path with guaranteed resources so that the user can get a quick response with a deterministic lightpath schedule. In the second phase, termed the lightpath re-optimization phase, we re-provision some already scheduled lightpaths to improve network performance. We study two re-optimization scenarios for reallocating network resources while maintaining the existing lightpath schedules. Experimental results show that our proposed two-phase dynamic lightpath scheduling approach can greatly reduce network blocking.
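A structural sketch of the two-phase control flow makes the division of work explicit; the routing and re-provisioning helpers are passed in as hypothetical callables, since the actual routing and wavelength assignment logic is network-specific and not reproduced here.

```python
def handle_request(request, schedule_book, route_with_guarantees):
    """Phase 1 - deterministic lightpath scheduling: try to reserve a lightpath
    with guaranteed resources and answer the user immediately (None = blocked)."""
    lightpath = route_with_guarantees(request, schedule_book)
    if lightpath is not None:
        schedule_book.append(lightpath)      # resources now reserved for the window
    return lightpath

def reoptimize(schedule_book, find_better_provisioning):
    """Phase 2 - re-optimization: re-provision already scheduled, not-yet-active
    lightpaths without changing their agreed time windows, freeing resources
    so that fewer future requests are blocked."""
    for i, lightpath in enumerate(schedule_book):
        better = find_better_provisioning(lightpath, schedule_book)
        if better is not None:
            schedule_book[i] = better
```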
Abstract:
We propose an efficient scheduling scheme that optimizes advance-reserved lightpath services in reconfigurable WDM networks. A re-optimization approach is devised to reallocate network resources for dynamic service demands while keeping the determined schedules unchanged.
Abstract:
This paper proposes three new hybrid mechanisms for the scheduling of grid tasks, which integrate reactive and proactive approaches. They differ in the scheduler used to define the initial schedule of an application and in the scheduler used to reschedule the application. The mechanisms are compared to purely reactive and purely proactive mechanisms. Results show that the hybrid approach produces performance close to that of the reactive mechanisms while requiring fewer migrations.
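The hybrid idea can be summarized in a short sketch: a proactive scheduler fixes the initial mapping, and a (possibly different) rescheduler is invoked reactively, only when a monitor reports degradation and only for tasks that have not started yet, which is what keeps the number of migrations low. The callables and the tick-based interface below are assumptions for illustration, not the paper's interface.

```python
def hybrid_schedule(tasks, resources, initial_scheduler, rescheduler, degraded):
    """initial_scheduler / rescheduler: (tasks, resources) -> {task: resource}.
    degraded: (mapping, resources) -> bool, True when performance has dropped."""
    mapping = initial_scheduler(tasks, resources)           # proactive decision
    def on_monitor_tick(unstarted_tasks):
        nonlocal mapping
        if degraded(mapping, resources):                    # reactive trigger
            # Remap only tasks that have not started, limiting migrations.
            mapping = {**mapping, **rescheduler(unstarted_tasks, resources)}
        return mapping
    return on_monitor_tick                                  # called by the simulator
```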
Abstract:
This article describes a real-world production planning and scheduling problem occurring at an integrated pulp and paper (P&P) mill which manufactures paper for cardboard out of produced pulp. During the cooking of wood chips in the digester, two by-products are produced: the pulp itself (virgin fibers) and the waste stream known as black liquor. The former is then mixed with recycled fibers and processed in a paper machine. Here, due to significant sequence-dependent setups in paper-type changeovers, the sizing and sequencing of lots have to be decided simultaneously in order to use capacity efficiently. The latter is converted into electrical energy using a set of evaporators, recovery boilers and counter-pressure turbines. The planning challenge is then to synchronize the material flow as it moves through the pulp and paper mills and the energy plant, maximizing the customer demand met (as backlogging is allowed) and minimizing operation costs. Due to the capital-intensive nature of P&P production, the output of the digester must be maximized. As the production bottleneck is not fixed, to tackle this problem we propose a new model that, for the first time, integrates the critical production units associated with the pulp and paper mills and the energy plant. Simple stochastic mixed integer programming based local search heuristics are developed to obtain good feasible solutions for the problem. The benefits of integrating the three stages are discussed. The proposed approaches are tested on real-world data. Our work may help P&P companies to increase their competitiveness and responsiveness in dealing with demand pattern oscillations. © 2012 Elsevier Ltd. All rights reserved.