824 results for parallel scheduling
Abstract:
Data-intensive Grid applications require huge data transfers between grid computing nodes. These computing nodes, where computing jobs are executed, are usually geographically separated. A grid network that employs optical wavelength-division multiplexing (WDM) technology and optical switches to interconnect computing resources with dynamically provisioned multi-gigabit-rate lightpaths is called a Lambda Grid network. A computing task may be executed on any one of several computing nodes that possess the necessary resources. To reflect this reality in job scheduling, the allocation of network resources for data transfers should be taken into account. However, few scheduling methods consider communication contention on Lambda Grids. In this paper, we investigate the joint scheduling problem, considering both optical network and computing resources in a Lambda Grid network. The objective of our work is to maximize the total number of jobs that can be scheduled in a Lambda Grid network. An adaptive routing algorithm is proposed and implemented to accomplish the communication tasks of every job submitted to the network. Four heuristics (FIFO, ESTF, LJF, RS) are implemented for scheduling the computational tasks. Simulation results demonstrate the feasibility and efficiency of the proposed solution.
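The four heuristics are named only by acronym in the abstract; the sketch below shows how such policies typically differ, namely as alternative job orderings fed to a greedy scheduler. The acronym readings (FIFO = arrival order, ESTF = earliest start time first, LJF = largest job first, RS = random selection) and all field names are assumptions for illustration, not the paper's definitions.

```python
import random
from dataclasses import dataclass

# Hypothetical job record; field names are illustrative, not taken from the paper.
@dataclass
class Job:
    job_id: int
    arrival_time: float    # when the job request enters the system
    earliest_start: float  # earliest feasible start, e.g. after its data transfer
    size: float            # computational demand of the job

def order_jobs(jobs, policy):
    """Return the jobs in the order a greedy scheduler would consider them."""
    if policy == "FIFO":   # first-in, first-out by arrival time
        return sorted(jobs, key=lambda j: j.arrival_time)
    if policy == "ESTF":   # earliest start time first (assumed reading of the acronym)
        return sorted(jobs, key=lambda j: j.earliest_start)
    if policy == "LJF":    # largest job first (assumed reading of the acronym)
        return sorted(jobs, key=lambda j: j.size, reverse=True)
    if policy == "RS":     # random selection
        shuffled = list(jobs)
        random.shuffle(shuffled)
        return shuffled
    raise ValueError(f"unknown policy: {policy}")

jobs = [Job(1, 0.0, 2.0, 5.0), Job(2, 1.0, 1.0, 9.0), Job(3, 2.0, 3.0, 1.0)]
for policy in ("FIFO", "ESTF", "LJF", "RS"):
    print(policy, [j.job_id for j in order_jobs(jobs, policy)])
```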
Abstract:
Lightpath scheduling is an important capability in next-generation wavelength-division multiplexing (WDM) optical networks to reserve resources in advance for a specified time period while provisioning end-to-end lightpaths. In this study, we propose an approach to support dynamic lightpath scheduling in such networks. To minimize blocking probability in a network that accommodates dynamic scheduled lightpath demands (DSLDs), resource allocation should be optimized in a dynamic manner. However, for network users who desire deterministic services, resources must be reserved in advance and guaranteed for future use. These two objectives may be mutually incompatible. Therefore, we propose a two-phase dynamic lightpath scheduling approach to tackle this issue. The first phase is the deterministic lightpath scheduling phase: when a lightpath request arrives, the network control plane schedules a path with guaranteed resources so that the user can get a quick response with a deterministic lightpath schedule. The second phase is the lightpath re-optimization phase, in which the network control plane re-provisions some already scheduled lightpaths. Experimental results show that our proposed two-phase dynamic lightpath scheduling approach can greatly reduce WDM network blocking.
Abstract:
Lightpath scheduling is an important capability in next-generation wavelength-division multiplexing (WDM) optical networks to reserve resources in advance for a specified time period while provisioning end-to-end lightpaths. In a dynamic environment, end-user requests for dynamic scheduled lightpath demands (D-SLDs) need to be serviced without knowledge of future requests. Even though the starting time of the request may be hours or days from the current time, the end user still expects a quick response as to whether the request can be satisfied. We propose a two-phase approach to dynamically schedule and provision D-SLDs. In the first phase, termed the deterministic lightpath scheduling phase, upon arrival of a lightpath request the network control plane schedules a path with guaranteed resources so that the user can get a quick response with a deterministic lightpath schedule. In the second phase, termed the lightpath re-optimization phase, we re-provision some already scheduled lightpaths to improve network performance. We study two re-optimization scenarios that reallocate network resources while maintaining the existing lightpath schedules. Experimental results show that our proposed two-phase dynamic lightpath scheduling approach can greatly reduce network blocking.
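To make the two-phase idea concrete, here is a self-contained toy sketch on a three-node network. The first-fit policy, the data structures, and the function names are illustrative assumptions; the paper's routing and re-optimization algorithms may differ.

```python
from itertools import product

LINKS = [("A", "B"), ("B", "C"), ("A", "C")]
ROUTES = {("A", "C"): [[("A", "C")], [("A", "B"), ("B", "C")]]}  # candidate routes
WAVELENGTHS = range(2)

# reservations[(link, wavelength)] = list of (start, end) intervals already booked
reservations = {(l, w): [] for l, w in product(LINKS, WAVELENGTHS)}

def is_free(route, w, start, end):
    return all(end <= s or start >= e
               for link in route for (s, e) in reservations[(link, w)])

def phase1_schedule(src, dst, start, end):
    """Phase 1: commit a route and wavelength immediately (first fit) so the user
    gets a quick deterministic answer; return None if the request is blocked."""
    for route in ROUTES[(src, dst)]:
        for w in WAVELENGTHS:
            if is_free(route, w, start, end):
                for link in route:
                    reservations[(link, w)].append((start, end))
                return route, w
    return None

def phase2_reoptimize(accepted):
    """Phase 2: re-provision accepted lightpaths (possibly a new route or wavelength)
    while keeping each request's reserved time window unchanged."""
    for item in accepted:
        (src, dst, start, end), (route, w) = item["req"], item["assign"]
        for link in route:                  # temporarily release the old placement
            reservations[(link, w)].remove((start, end))
        # Re-placing always succeeds: the released placement is still available.
        item["assign"] = phase1_schedule(src, dst, start, end)

demo = {"req": ("A", "C", 0, 10), "assign": phase1_schedule("A", "C", 0, 10)}
phase2_reoptimize([demo])
print(demo["assign"])
```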
Abstract:
We propose an efficient scheduling scheme that optimizes advance-reserved lightpath services in reconfigurable WDM networks. A re-optimization approach is devised to reallocate network resources for dynamic service demands while keeping the already determined schedules unchanged.
Abstract:
This long-term extension of an 8-week randomized, naturalistic study in patients with panic disorder with or without agoraphobia compared the efficacy and safety of clonazepam (n = 47) and paroxetine (n = 37) over a 3-year total treatment duration. Target doses for all patients were 2 mg/d clonazepam and 40 mg/d paroxetine (both taken at bedtime). This study reports data from the long-term period (34 months), following the initial 8-week treatment phase. Thus, total treatment duration was 36 months. Patients with a good primary outcome during acute treatment continued monotherapy with clonazepam or paroxetine, but patients with partial primary treatment success were switched to the combination therapy. At initiation of the long-term study, the mean doses of clonazepam and paroxetine were 1.9 (SD, 0.30) and 38.4 (SD, 3.74) mg/d, respectively. These doses were maintained until month 36 (clonazepam 1.9 [SD, 0.29] mg/d and paroxetine 38.2 [SD, 3.87] mg/d). Long-term treatment with clonazepam led to a small but significantly better Clinical Global Impression (CGI)-Improvement rating than treatment with paroxetine (mean difference: CGI-Severity scale -3.48 vs -3.24, respectively, P = 0.02; CGI-Improvement scale 1.06 vs 1.11, respectively, P = 0.04). Both treatments similarly reduced the number of panic attacks and the severity of anxiety. Patients treated with clonazepam had significantly fewer adverse events than those treated with paroxetine (28.9% vs 70.6%, P < 0.001). The efficacy of clonazepam and paroxetine for the treatment of panic disorder was maintained over the long-term course. There was a significant advantage of clonazepam over paroxetine with respect to the frequency and nature of adverse events.
Abstract:
This paper proposes three new hybrid mechanisms for the scheduling of grid tasks, which integrate reactive and proactive approaches. They differ in the scheduler used to define the initial schedule of an application and in the scheduler used to reschedule the application. The mechanisms are compared to purely reactive and purely proactive mechanisms. Results show that the hybrid approach produces performance close to that of the reactive mechanisms while requiring fewer migrations.
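As a rough illustration of the reactive/proactive split: one policy builds the initial schedule and another decides when to reschedule. The abstract does not specify the schedulers or migration policies, so everything below, including the min-finish-time rule and the 20% degradation trigger, is an assumption for illustration.

```python
def proactive_initial_schedule(tasks, hosts, expected_speed):
    """Assign each task to the host with the smallest predicted finish time."""
    finish = {h: 0.0 for h in hosts}
    plan = {}
    for task, work in sorted(tasks.items(), key=lambda kv: -kv[1]):  # heaviest first
        h = min(hosts, key=lambda h: finish[h] + work / expected_speed[h])
        finish[h] += work / expected_speed[h]
        plan[task] = h
    return plan

def reactive_reschedule(plan, tasks, hosts, observed_speed, expected_speed, trigger=0.2):
    """Migrate only the tasks sitting on hosts whose observed speed dropped by more
    than `trigger` relative to what the initial (proactive) schedule assumed."""
    degraded = {h for h in hosts if observed_speed[h] < (1 - trigger) * expected_speed[h]}
    kept = {t: h for t, h in plan.items() if h not in degraded}
    to_move = {t: tasks[t] for t, h in plan.items() if h in degraded}
    kept.update(proactive_initial_schedule(to_move, hosts, observed_speed))
    return kept

tasks = {"t1": 100.0, "t2": 60.0, "t3": 30.0}
hosts = ["h1", "h2"]
expected = {"h1": 10.0, "h2": 10.0}
observed = {"h1": 4.0, "h2": 10.0}   # h1 degraded at run time
initial = proactive_initial_schedule(tasks, hosts, expected)
print(initial, reactive_reschedule(initial, tasks, hosts, observed, expected))
```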
Abstract:
In this article, we introduce two new variants of the Assembly Line Worker Assignment and Balancing Problem (ALWABP) that allow parallelization of, and collaboration between, heterogeneous workers. These new approaches add a level of complexity to the Line Design and Assignment process but also provide greater flexibility, which may be particularly useful in practical situations where the aim is to progressively integrate slow or limited workers into conventional assembly lines. We present linear models and heuristic procedures for these two new problems. Computational results show the efficiency of the proposed approaches and the efficacy of the studied layouts in different situations. (C) 2012 Elsevier B.V. All rights reserved.
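For intuition only, here is a toy greedy assignment over heterogeneous worker-task times. It ignores stations, cycle time, parallelization, and collaboration, and is not one of the paper's linear models or heuristics; the time matrix is invented.

```python
# task_time[w][t] = time worker w needs for task t (None = worker cannot do it).
task_time = {
    "w1": {"t1": 4, "t2": 6, "t3": None},
    "w2": {"t1": 9, "t2": 3, "t3": 5},
    "w3": {"t1": 5, "t2": None, "t3": 4},
}

def greedy_assign(task_time):
    """Assign each worker one task, repeatedly picking the feasible
    (worker, task) pair with the smallest time."""
    workers = set(task_time)
    tasks = {t for w in task_time for t in task_time[w]}
    assignment = {}
    while workers and tasks:
        best = min(((task_time[w][t], w, t)
                    for w in workers for t in tasks
                    if task_time[w][t] is not None), default=None)
        if best is None:
            break
        _, w, t = best
        assignment[w] = t
        workers.remove(w)
        tasks.remove(t)
    return assignment

print(greedy_assign(task_time))  # {'w2': 't2', 'w1': 't1', 'w3': 't3'}
```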
Abstract:
Data visualization techniques are powerful tools for handling and analyzing multivariate systems. One such technique, known as parallel coordinates, was used to support the diagnosis of an event, detected by a neural network-based monitoring system, in a boiler at a Brazilian Kraft pulp mill. Its appeal lies in the ability to visualize several variables simultaneously. The diagnostic procedure was carried out step by step, going through exploratory, explanatory, confirmatory, and communicative goals. This tool made it easier to visualize the boiler dynamics than commonly used univariate trend plots. In addition, it facilitated the analysis of other aspects, namely relationships among process variables, distinct modes of operation, and discrepant data. The analysis revealed, first, that the period involving the detected event was associated with a transition between two distinct normal modes of operation and, second, that unusual changes in process variables occurred at that time.
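A minimal example of the technique itself, using pandas' built-in parallel-coordinates plot. The variable names, values, and the two "normal" operating modes are invented for illustration and are not the mill's data.

```python
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

# Toy data: three boiler-like variables under two operating modes (invented values).
df = pd.DataFrame({
    "steam_flow": [310, 305, 340, 150, 148, 152],
    "drum_level": [0.52, 0.50, 0.58, 0.31, 0.30, 0.33],
    "o2_percent": [3.1, 3.0, 2.7, 4.2, 4.1, 4.3],
    "mode": ["normal A"] * 3 + ["normal B"] * 3,
})

# Scale each variable to [0, 1] so axes with different units remain comparable.
for col in ("steam_flow", "drum_level", "o2_percent"):
    df[col] = (df[col] - df[col].min()) / (df[col].max() - df[col].min())

# One polyline per observation across the variable axes, coloured by operating mode,
# so transitions between modes and discrepant observations stand out.
parallel_coordinates(df, class_column="mode", colormap="viridis")
plt.title("Boiler variables, one line per observation")
plt.tight_layout()
plt.show()
```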
Abstract:
This article describes a real-world production planning and scheduling problem occurring at an integrated pulp and paper (P&P) mill which manufactures paper for cardboard from the pulp it produces. During the cooking of wood chips in the digester, two by-products are produced: the pulp itself (virgin fibers) and a waste stream known as black liquor. The former is mixed with recycled fibers and processed in a paper machine; here, because of significant sequence-dependent setups in paper-type changeovers, lot sizing and sequencing have to be decided simultaneously in order to use capacity efficiently. The latter is converted into electrical energy using a set of evaporators, recovery boilers and counter-pressure turbines. The planning challenge is then to synchronize the material flow as it moves through the pulp mill, the paper mill and the energy plant, maximizing the fulfillment of customer demand (backlogging is allowed) and minimizing operating costs. Because P&P production is capital intensive, the output of the digester must be maximized. As the production bottleneck is not fixed, we propose a new model that, for the first time, integrates the critical production units of the pulp mill, the paper mill and the energy plant. Simple stochastic mixed-integer-programming-based local search heuristics are developed to obtain good feasible solutions for the problem. The benefits of integrating the three stages are discussed, and the proposed approaches are tested on real-world data. Our work may help P&P companies increase their competitiveness and responsiveness in dealing with demand-pattern oscillations. (C) 2012 Elsevier Ltd. All rights reserved.
Abstract:
The integrated production scheduling and lot-sizing problem in a flow shop environment consists of establishing production lot sizes and allocating machines to process them within a planning horizon in a production line with machines arranged in series. The problem considers that demands must be met without backlogging, the capacity of the machines must be respected, and machine setups are sequence-dependent and preserved between periods of the planning horizon. The objective is to determine a production schedule to minimise the setup, production and inventory costs. A mathematical model from the literature is presented, as well as procedures for obtaining feasible solutions. However, some of the procedures have difficulty in obtaining feasible solutions for large-sized problem instances. In addition, we address the problem using different versions of the Asynchronous Team (A-Team) approach. The procedures were compared with literature heuristics based on Mixed Integer Programming. The proposed A-Team procedures outperformed the literature heuristics, especially for large instances. The developed methodologies and the results obtained are presented.
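A generic A-Team skeleton, for orientation only: autonomous agents share a memory of candidate solutions, some constructing, some improving, some discarding. The toy objective and agents below are illustrative and unrelated to the paper's lot-sizing and scheduling agents.

```python
import random

def objective(x):              # toy cost to minimise (stand-in for setup/inventory costs)
    return sum((v - 3) ** 2 for v in x)

def construct_agent(_memory):  # proposes a fresh random solution
    return [random.randint(0, 6) for _ in range(4)]

def improve_agent(memory):     # perturbs the best solution currently in memory
    candidate = min(memory, key=objective)[:]
    candidate[random.randrange(len(candidate))] = random.randint(0, 6)
    return candidate

def destroy_agent(memory):     # keeps the shared memory small by dropping the worst
    memory.remove(max(memory, key=objective))

memory = [construct_agent(None) for _ in range(5)]
for _ in range(200):           # real A-Teams run agents asynchronously; here we interleave
    agent = random.choice([construct_agent, improve_agent])
    memory.append(agent(memory))
    if len(memory) > 10:
        destroy_agent(memory)

best = min(memory, key=objective)
print(best, objective(best))
```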
Abstract:
Consider the NP-hard problem of finding, given a simple graph G, a series-parallel subgraph of G with the maximum number of edges. The algorithm that, given a connected graph G, outputs a spanning tree of G is a 1/2-approximation. Indeed, if n is the number of vertices in G, any spanning tree of G has n-1 edges and any series-parallel graph on n vertices has at most 2n-3 edges. We present a 7/12-approximation for this problem and results showing the limits of our approach.
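The 1/2 guarantee quoted above follows directly from the two edge counts given in the abstract; spelled out (a standard argument, not taken verbatim from the paper):

\[
\frac{n-1}{2n-3} \;\ge\; \frac{1}{2}
\quad\Longleftrightarrow\quad 2(n-1) \ge 2n-3
\quad\Longleftrightarrow\quad -2 \ge -3,
\]

which holds for every n >= 2, so a spanning tree always retains at least half as many edges as an optimal series-parallel subgraph.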
Abstract:
As in the case of most small organic molecules, the electro-oxidation of methanol to CO2 is believed to proceed through a so-called dual-pathway mechanism. The direct pathway proceeds via reactive intermediates such as formaldehyde or formic acid, whereas the indirect pathway occurs in parallel and proceeds via the formation of adsorbed carbon monoxide (COad). Despite the extensive literature on the electro-oxidation of methanol, no study to date had distinguished the production of CO2 via the direct pathway from that via the indirect pathway. Working under far-from-equilibrium, oscillatory conditions, we were able to decouple, for the first time, the direct and indirect pathways that lead to CO2 during the oscillatory electro-oxidation of methanol on platinum. The CO2 production was followed by differential electrochemical mass spectrometry, and the individual contributions of the parallel pathways were identified by a combination of experiments and numerical simulations. We believe that our report opens new perspectives, particularly as a methodology for identifying the role played by surface modifiers in the relative weight of the two pathways, a key issue for the effective development of catalysts for low-temperature fuel cells.
Abstract:
Objective: Gastric development depends directly on the proliferation and differentiation of epithelial cells, and these processes are controlled by multiple elements, such as diet, hormones, and growth factors. Protein restriction affects gastrointestinal functions, but its effects on gastric growth are not fully understood. Methods: The present study evaluated cell proliferation in the gastric epithelia of rats subjected to protein restriction since gestation. Because ghrelin is increasingly expressed from the fetal to the weaning stages and might be part of growth regulation, its distribution in the stomach of rats was investigated at 14, 30, and 50 d of age. Results: Although protein restriction at 8% increased food intake relative to body weight, body mass was lower (P < 0.05). The stomach and intestine were also smaller but increased proportionately throughout treatment. Cell proliferation was estimated through DNA synthesis and metaphase indices, and lower rates (P < 0.05) were detected at the different ages. The inhibition was concomitant with a larger number of ghrelin-immunolabeled cells at 30 and 50 d postnatally. Conclusion: Protein restriction impairs cell proliferation in the gastric epithelium, and the ghrelin upsurge under this condition parallels the lower gastric and body growth rates. (C) 2012 Elsevier Inc. All rights reserved.
Abstract:
The single machine scheduling problem with a common due date and non-identical ready times for the jobs is examined in this work. Performance is measured by the minimization of the weighted sum of earliness and tardiness penalties of the jobs. Since this problem is NP-hard, the application of constructive heuristics that exploit specific characteristics of the problem to improve their performance is investigated. The proposed approaches are examined through a computational comparative study on a set of 280 benchmark test problems with up to 1000 jobs.
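The abstract does not spell out the constructive heuristics, so the sketch below is only a generic dispatching rule for this setting: among the released jobs, pick the one whose completion would add the smallest earliness/tardiness penalty with respect to the common due date. All field names and the example data are illustrative.

```python
# Each job: processing time p, ready time r, earliness weight alpha, tardiness weight beta.
def schedule(jobs, due_date):
    t, sequence, total = 0.0, [], 0.0
    pending = list(jobs)
    while pending:
        released = [j for j in pending if j["r"] <= t] or \
                   [min(pending, key=lambda j: j["r"])]    # if idle, jump to next release

        def penalty(job):
            c = max(t, job["r"]) + job["p"]                # completion time if scheduled now
            return (job["alpha"] * max(due_date - c, 0)    # earliness cost
                    + job["beta"] * max(c - due_date, 0))  # tardiness cost

        job = min(released, key=penalty)
        total += penalty(job)
        t = max(t, job["r"]) + job["p"]
        sequence.append(job["id"])
        pending.remove(job)
    return sequence, total

jobs = [{"id": 1, "p": 4, "r": 0, "alpha": 1, "beta": 2},
        {"id": 2, "p": 2, "r": 1, "alpha": 1, "beta": 2},
        {"id": 3, "p": 3, "r": 0, "alpha": 2, "beta": 1}]
print(schedule(jobs, due_date=6))   # ([1, 2, 3], 5.0)
```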
Abstract:
In this paper, we propose three novel mathematical models for the two-stage lot-sizing and scheduling problems present in many process industries. The problem combines a continuous or quasi-continuous production feature upstream with a discrete manufacturing feature downstream, and the two must be synchronized. Different time-scale representations are discussed. The first formulation uses a discrete-time representation, the second is a hybrid continuous-discrete model, and the last is based on a continuous-time representation. Computational tests with a state-of-the-art MIP solver show that the discrete-time representation provides better feasible solutions in short running times. On the other hand, the hybrid model achieves better solutions for longer computational times and was able to prove optimality more often. The continuous-time model is the most flexible of the three for incorporating additional operational requirements, at the cost of the worst computational performance. Journal of the Operational Research Society (2012) 63, 1613-1630. doi:10.1057/jors.2011.159; published online 7 March 2012.
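For orientation, discrete-time formulations of this kind typically build on the standard capacitated lot-sizing core shown below. This is a generic textbook formulation, not the authors' two-stage models, which additionally synchronize the continuous upstream stage with the discrete downstream stage.

\begin{align*}
\min\ & \sum_{i,t}\bigl(s_i\,y_{it} + c_i\,x_{it} + h_i\,I_{it}\bigr) \\
\text{s.t.}\ & I_{i,t-1} + x_{it} - I_{it} = d_{it} && \forall i,t \\
& \sum_i a_i\,x_{it} + \sum_i st_i\,y_{it} \le C_t && \forall t \\
& x_{it} \le M\,y_{it},\quad x_{it},\, I_{it} \ge 0,\quad y_{it} \in \{0,1\} && \forall i,t
\end{align*}

Here x_{it} is the lot size of product i in period t, y_{it} its setup indicator, I_{it} its inventory, d_{it} the demand, C_t the period capacity, and s_i, c_i, h_i, a_i, st_i the setup-cost, production-cost, holding-cost, processing-time and setup-time coefficients.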