975 results for Scheduling Systems
Abstract:
In multi-tasking systems, when it is not possible to guarantee completion of all activities by specified times, the scheduling problem is not straightforward. Examples of this situation in real-time programming include the occurrence of alarm conditions and the buffering of output to peripherals in on-line facilities. The latter case is studied here with the hope of indicating one solution to the general problem.
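The abstract does not spell out a scheduling rule, but a standard baseline when not every activity can meet its time constraint is earliest-deadline-first ordering with explicit reporting of the jobs that will miss. A minimal single-processor sketch (names illustrative, not from the paper):

```python
def edf_schedule(jobs):
    """Single-processor earliest-deadline-first (Jackson's rule).

    jobs: list of (duration, deadline) pairs, all ready at time 0.
    Returns (order, missed), where `missed` lists the indices of jobs
    that finish after their deadline -- the situation the paper targets.
    """
    order = sorted(range(len(jobs)), key=lambda i: jobs[i][1])
    t, missed = 0.0, []
    for i in order:
        duration, deadline = jobs[i]
        t += duration
        if t > deadline:        # completion cannot be guaranteed
            missed.append(i)
    return order, missed
```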
Abstract:
Very large scale scheduling and planning tasks cannot be effectively addressed by fully automated schedule optimisation systems, since many key factors which govern 'fitness' in such cases are unformalisable. This raises the question of an interactive (or collaborative) approach, where fitness is assigned by the expert user. Though well-researched in the domains of interactively evolved art and music, this method is as yet rarely used in logistics. This paper concerns a difficulty shared by all interactive evolutionary systems (IESs), but especially those used for logistics or design problems. The difficulty is that objective evaluation of IESs is severely hampered by the need for expert humans in the loop. This makes it effectively impossible to, for example, determine with statistical confidence any ranking among a decent number of configurations for the parameters and strategy choices. We make headway on this difficulty with an Automated Tester (AT) for such systems. The AT replaces the human in experiments, and has parameters controlling its decision-making accuracy (modelling human error) and a built-in notion of a target solution which may typically be at odds with the solution which is optimal in terms of formalisable fitness. Using the AT, plausible evaluations of alternative designs for the IES can be done, allowing for (and examining the effects of) different levels of user error. We describe such an AT for evaluating an IES for very large scale planning.
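As a rough illustration of the AT idea, the human evaluator can be replaced by a simulated chooser with a hidden target solution and a tunable error rate. The sketch below is an assumption-laden reconstruction, not the paper's implementation (class and parameter names are hypothetical):

```python
import random

class AutomatedTester:
    """Stand-in for the human evaluator in an interactive EA.

    target:  the tester's hidden notion of a desirable solution, which
             may disagree with the formalisable fitness function.
    p_error: probability of making the wrong choice, modelling human
             decision error.
    """
    def __init__(self, target, distance, p_error=0.1, rng=None):
        self.target, self.distance = target, distance
        self.p_error = p_error
        self.rng = rng or random.Random()

    def prefer(self, a, b):
        """Return the preferred of two candidate solutions."""
        da = self.distance(a, self.target)
        db = self.distance(b, self.target)
        best, other = (a, b) if da <= db else (b, a)
        # With probability p_error, the tester picks the worse option.
        return other if self.rng.random() < self.p_error else best
```

Sweeping `p_error` then lets one rank IES configurations under different assumed levels of user reliability, which is the kind of experiment the abstract describes.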
Abstract:
Cross-layer techniques represent efficient means to enhance throughput and increase the transmission reliability of wireless communication systems. In this paper, a cross-layer design of aggressive adaptive modulation and coding (A-AMC), truncated automatic repeat request (T-ARQ), and user scheduling is proposed for multiuser multiple-input-multiple-output (MIMO) maximal ratio combining (MRC) systems, where the impacts of feedback delay (FD) and limited feedback (LF) on channel state information (CSI) are also considered. The A-AMC and T-ARQ mechanism selects the appropriate modulation and coding schemes (MCSs) to achieve higher spectral efficiency while satisfying the service requirement on the packet loss rate (PLR), profiting from the feasibility of using different MCSs to retransmit a packet. Each packet is destined to a scheduled user, selected so as to exploit multiuser diversity and enhance the system's performance in terms of both transmission efficiency and fairness. The system's performance is evaluated in terms of the average PLR, average spectral efficiency (ASE), outage probability, and average packet delay, all of which are derived in closed form for transmissions over Rayleigh-fading channels. Numerical results and comparisons are provided and show that A-AMC combined with T-ARQ yields higher spectral efficiency than the conventional scheme based on adaptive modulation and coding (AMC), while keeping the achieved PLR closer to the system's requirement and reducing delay. Furthermore, the effects of the number of ARQ retransmissions, the numbers of transmit and receive antennas, the normalized FD, and the cardinality of the beamforming weight vector codebook are studied and discussed.
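To make the AMC mechanics concrete, the sketch below shows rate-maximizing MCS selection under a PLR budget; the "aggressive" margin that counts on ARQ retransmissions is a simplified stand-in for the paper's A-AMC design (the table format, estimator interface, and margin factor are all assumptions):

```python
def select_mcs(snr_db, mcs_table, plr_target, aggressive=False):
    """Pick a modulation/coding scheme for the current SNR.

    mcs_table: list of (rate_bps_per_hz, plr_estimator) pairs, where
               plr_estimator(snr_db) -> estimated packet loss rate.
    Conventional AMC keeps the first-transmission PLR below plr_target;
    the aggressive variant tolerates a looser per-transmission budget,
    relying on truncated ARQ retries to meet the requirement overall.
    """
    budget = plr_target * 10 if aggressive else plr_target  # illustrative margin
    feasible = [(rate, est) for rate, est in mcs_table
                if est(snr_db) <= budget]
    if not feasible:
        return None                               # outage: no MCS qualifies
    return max(feasible, key=lambda x: x[0])      # highest spectral efficiency
```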
Abstract:
The evolution of commodity computing led to the possibility of efficiently using interconnected machines to solve computationally intensive tasks, which were previously solvable only by expensive supercomputers. This, however, required new methods for process scheduling and distribution that consider network latency, communication cost, heterogeneous environments, and distributed-computing constraints. An efficient distribution of processes over such environments requires an adequate scheduling strategy, as the cost of inefficient process allocation is unacceptably high. Therefore, knowledge and prediction of application behavior are essential for effective scheduling. In this paper, we overview the evolution of scheduling approaches, focusing on distributed environments. We also evaluate current approaches for process behavior extraction and prediction, aiming at selecting an adequate technique for online prediction of application execution. Based on this evaluation, we propose a novel model for application behavior prediction that considers the chaotic properties of such behavior and the automatic detection of critical execution points. The proposed model is applied and evaluated for process scheduling in cluster and grid computing environments. The results demonstrate that prediction of process behavior is essential for efficient scheduling in large-scale, heterogeneous distributed environments, outperforming conventional scheduling policies by a factor of 10, and even more in some cases. Furthermore, the proposed approach proves to be efficient for online prediction due to its low computational cost and good precision. (C) 2009 Elsevier B.V. All rights reserved.
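The paper's predictor models chaotic behavior with critical-point detection; as a much simpler illustration of prediction-driven scheduling, the sketch below uses a classical exponential average to forecast a process's load and then places it on the least-loaded node (all names hypothetical):

```python
def update_prediction(previous_estimate, observed, alpha=0.5):
    """Exponentially weighted estimate of the next execution phase.

    This is only a stand-in: the paper's model is chaos-aware and
    detects critical execution points, which this average does not.
    """
    return alpha * observed + (1 - alpha) * previous_estimate

def place(predicted_load, nodes):
    """Assign the process to the node with the most spare capacity.

    nodes: list of dicts with a "load" entry (illustrative layout).
    """
    return min(nodes, key=lambda n: n["load"] + predicted_load)
```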
Abstract:
A lot sizing and scheduling problem prevalent in small market-driven foundries is studied. There are two related decision levels: (1) the furnace scheduling of metal alloy production, and (2) moulding machine planning, which specifies the type and size of production lots. A mixed integer programming (MIP) formulation of the problem is proposed, but it is impractical to solve in reasonable computing time for non-small instances. As a result, a faster relax-and-fix (RF) approach is developed that can also be used on a rolling-horizon basis where only immediate-term schedules are implemented. As well as a MIP method to solve the basic RF approach, three variants of a local search method are also developed and tested using instances based on the literature. Finally, foundry-based tests with a real order book resulted in a very substantial reduction of delivery delays and finished inventory, better use of capacity, and much faster schedule definition compared with the foundry's own practice. (c) 2006 Elsevier Ltd. All rights reserved.
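The relax-and-fix control flow itself is simple to state: enforce integrality only inside a moving window of periods, solve, freeze that window's decisions, and roll forward. A minimal sketch, assuming a hypothetical solve_mip callback rather than the paper's actual formulation:

```python
def relax_and_fix(periods, window, solve_mip):
    """Rolling relax-and-fix heuristic for lot sizing/scheduling.

    solve_mip(integer_periods, fixed) is a hypothetical solver call:
    binary variables are enforced only for `integer_periods`, relaxed
    to [0, 1] elsewhere, with earlier decisions held at `fixed`.
    Returns a dict mapping each period to its frozen decisions.
    """
    fixed = {}
    for start in range(0, len(periods), window):
        block = periods[start:start + window]
        solution = solve_mip(integer_periods=block, fixed=fixed)
        for t in block:                      # freeze this block's decisions
            fixed[t] = solution[t]
    return fixed
```

Each subproblem is far smaller than the full MIP, which is why the approach remains tractable on non-small instances and fits the rolling-horizon use the abstract mentions.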
Abstract:
In 2006 the Route load balancing algorithm was proposed and compared with other techniques aiming at optimizing process allocation in grid environments. This algorithm schedules tasks of parallel applications considering computer neighborhoods (where distance is defined by network latency). Route presents good results for large environments, although there are cases where neighbors have neither enough computational capacity nor a communication system capable of serving the application. In those situations Route migrates tasks until they stabilize in a grid area with enough resources. This migration may take a long time, which reduces overall performance. In order to improve this stabilization time, this paper proposes RouteGA (Route with Genetic Algorithm support), which considers historical information on parallel application behavior as well as computer capacities and loads to optimize the scheduling. This information is extracted by monitors and summarized in a knowledge base used to quantify the resource occupation of tasks. Afterwards, this information is used to parameterize a genetic algorithm responsible for optimizing the task allocation. Results confirm that RouteGA outperforms the load balancing carried out by the original Route, which had previously outperformed other scheduling algorithms from the literature.
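As an illustration of the genetic-algorithm layer, a toy GA over task-to-node assignments might look as follows; the cost function standing in for RouteGA's knowledge-base-driven fitness, and all parameter values, are assumptions:

```python
import random

def genetic_allocate(num_tasks, num_nodes, cost,
                     generations=200, pop_size=50, p_mut=0.05, rng=None):
    """Toy GA over allocations: chromosome[i] = node running task i.

    cost(chromosome) should combine load imbalance and communication
    cost, e.g. derived from monitored application behaviour; here it
    is an opaque callable. Assumes num_tasks >= 2 and pop_size >= 4.
    """
    rng = rng or random.Random(0)
    pop = [[rng.randrange(num_nodes) for _ in range(num_tasks)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[:pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, num_tasks)
            child = a[:cut] + b[cut:]            # one-point crossover
            for i in range(num_tasks):           # per-gene mutation
                if rng.random() < p_mut:
                    child[i] = rng.randrange(num_nodes)
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)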
Abstract:
The aim of task scheduling is to minimize the makespan of applications while exploiting shared resources in the best possible way. Applications have requirements which call for customized environments for their execution. One way to provide such environments is to use virtualization on demand. This paper presents two schedulers based on integer linear programming which schedule virtual machines (VMs) in grid resources and tasks on these VMs. The schedulers differ from previous work by the joint scheduling of tasks and VMs and by considering the impact of the available bandwidth on the quality of the schedule. Experiments show the efficacy of the schedulers in scenarios with different network configurations.
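A heavily simplified integer program of this flavor can be written with an off-the-shelf modeller such as PuLP; the sketch below only assigns tasks to VMs to minimize makespan, omitting the bandwidth terms and the joint VM placement of the actual schedulers (the data layout and capacity model are assumptions):

```python
from pulp import LpProblem, LpVariable, lpSum, LpMinimize, LpBinary

def schedule_tasks_on_vms(tasks, vms, runtime, capacity):
    """ILP: assign each task to one VM, minimizing the makespan.

    runtime[t]: execution time of task t; capacity[v]: max tasks on VM v.
    """
    prob = LpProblem("task_vm_schedule", LpMinimize)
    x = {(t, v): LpVariable(f"x_{t}_{v}", cat=LpBinary)
         for t in tasks for v in vms}
    makespan = LpVariable("makespan", lowBound=0)
    prob += makespan                              # objective: minimize makespan
    for t in tasks:                               # each task on exactly one VM
        prob += lpSum(x[t, v] for v in vms) == 1
    for v in vms:
        prob += lpSum(runtime[t] * x[t, v] for t in tasks) <= makespan
        prob += lpSum(x[t, v] for t in tasks) <= capacity[v]
    prob.solve()
    return {t: next(v for v in vms if x[t, v].value() > 0.5) for t in tasks}
```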
Abstract:
In order to achieve high performance, we need an efficient scheduling of a parallel program onto the processors of a multiprocessor system that minimizes the entire execution time. This multiprocessor scheduling problem can be stated as finding a schedule for a general task graph to be executed on a multiprocessor system so that the schedule length is minimized [10]. This scheduling problem is known to be NP-hard. In multiprocessor task scheduling, we have a number of CPUs on which a number of tasks are to be scheduled so that the program's execution time is minimized. According to [10], the task scheduling problem is a key factor for a parallel multiprocessor system to gain better performance. A task can be partitioned into a group of subtasks and represented as a DAG (Directed Acyclic Graph), so the problem can be stated as finding a schedule for a DAG to be executed on a parallel multiprocessor system so that the schedule length is minimized. This helps to reduce processing time and increase processor utilization. The aim of this thesis work is to check and compare the results obtained by the Bee Colony algorithm with the best-known results already published in the multiprocessor task scheduling domain.
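For reference, the baseline problem is easy to state in code: given a task DAG, durations, and identical processors, a greedy list scheduler yields a feasible schedule length. The sketch below ignores communication costs and uses no priority heuristic; it is a baseline, not the Bee Colony algorithm the thesis evaluates:

```python
def list_schedule(dag, durations, num_procs):
    """Greedy list scheduling of a task DAG on identical processors.

    dag: dict task -> collection of predecessor tasks (must be acyclic).
    durations: dict task -> execution time. Returns the makespan.
    """
    indegree = {t: len(preds) for t, preds in dag.items()}
    succ = {t: [] for t in dag}
    for t, preds in dag.items():
        for p in preds:
            succ[p].append(t)
    earliest = {t: 0.0 for t in dag}     # data-ready time per task
    ready = [t for t in dag if indegree[t] == 0]
    procs = [0.0] * num_procs            # next free time per processor
    makespan = 0.0
    while ready:
        task = ready.pop()
        k = min(range(num_procs), key=procs.__getitem__)
        start = max(procs[k], earliest[task])
        end = start + durations[task]
        procs[k] = end
        makespan = max(makespan, end)
        for s in succ[task]:             # release successors
            earliest[s] = max(earliest[s], end)
            indegree[s] -= 1
            if indegree[s] == 0:
                ready.append(s)
    return makespan
```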
Abstract:
This paper addresses the feasibility of implementing Japanese manufacturing systems in the United States. The recent success of Japanese transplant companies suggests that Just-In-Time (JIT) production is possible within America's industrial environment. Once American workers receive proper training, they have little difficulty participating in rapid setup procedures and utilizing the kanban system. Japanese transplants are gradually developing Japanese-style relationships with their American supplier companies by initiating long-term, mutually beneficial agreements. They are also finding ways to cope with America's problem of distance, which is steadily decreasing as an obstacle to JIT delivery. American companies, however, encounter significant problems in trying to convert traditionally organized factories to the JIT system. This paper demonstrates that it is both feasible and beneficial for American manufacturers to implement JIT production techniques. Many of the difficulties manufacturers experience center around a general lack of information about JIT. Once a company realizes its potential for setup-time reduction, a prerequisite for the JIT system, workers and managers can work together to create a new process for handling equipment changeover. Significant results are possible with minimal investment. Also, supervisors often do not realize that the JIT method of ordering goods from suppliers is compatible with current systems. This "kanban system" not only enhances current systems but also reduces the amount of paperwork and scheduling involved. When arranging JIT delivery of supplier goods, American manufacturers tend to overlook important aspects of JIT supplier management. However, by making long-term commitments, initiating the open exchange of information, assisting suppliers in reaching new standards of performance, increasing the level of communication, and relying more on suppliers' engineering capabilities, even American manufacturers can develop Japanese-style supplier relationships that enhance the effectiveness of the system.
Abstract:
Electronic applications are currently developed under the reuse-based paradigm. This design methodology presents several advantages for the reduction of design complexity, but brings new challenges for the test of the final circuit. Access to embedded cores, the integration of several test methods, and the optimization of several cost factors are just a few of the problems that need to be tackled during test planning. Within this context, this thesis proposes two test planning approaches that aim at reducing the test costs of a core-based system by means of hardware reuse and integration of test planning into the design flow. The first approach considers systems whose cores are connected directly or through a functional bus. The test planning method consists of a comprehensive model that includes the definition of a multi-mode access mechanism inside the chip and a search algorithm for the exploration of the design space. The access mechanism model considers the reuse of functional connections as well as partial test buses, core transparency, and other bypass modes. The test schedule is defined in conjunction with the access mechanism so that good trade-offs among the costs of pins, area, and test time can be sought. Furthermore, system power constraints are also considered. This expansion of concerns makes possible an efficient, yet fine-grained, search in the huge design space of a reuse-based environment. Experimental results clearly show the variety of trade-offs that can be explored using the proposed model, and its effectiveness in optimizing the system test plan. Networks-on-chip are likely to become the main communication platform of systems-on-chip. Thus, the second approach presented in this work proposes the reuse of the on-chip network for the test of the cores embedded in systems that use this communication platform. A power-aware test scheduling algorithm aiming at exploiting the network characteristics to minimize the system test time is presented. The reuse strategy is evaluated considering a number of system configurations, such as different positions of the cores in the network, power consumption constraints, and the number of interfaces with the tester. Experimental results show that the parallelization capability of the network can be exploited to reduce the system test time, whereas area and pin overhead are strongly minimized. In this manuscript, the main problems of the test of core-based systems are first identified and the current solutions are discussed. The problems tackled by this thesis are then listed and the test planning approaches are detailed. Both test planning techniques are validated on the recently released ITC’02 SoC Test Benchmarks, and further compared with other test planning methods from the literature. This comparison confirms the efficiency of the proposed methods.
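To illustrate just the power-budget mechanics of power-aware test scheduling (not the thesis's NoC-aware algorithm), a greedy scheduler can start each core test as soon as the concurrently running tests fit under a power cap; everything below is a simplified assumption:

```python
def power_aware_schedule(tests, power_cap):
    """Greedy power-constrained test scheduling (illustrative only).

    tests: list of (name, length, power) tuples. Longest tests are
    placed first; a test starts once the running set fits under
    power_cap. Returns a dict of start times.
    """
    tests = sorted(tests, key=lambda t: -t[1])     # longest first
    running, t, schedule = [], 0.0, {}             # running: (end, power)
    for name, length, power in tests:
        assert power <= power_cap, "single test exceeds the power budget"
        while sum(p for _, p in running) + power > power_cap:
            t = min(end for end, _ in running)     # advance to next finish
            running = [(e, p) for e, p in running if e > t]
        schedule[name] = t
        running.append((t + length, power))
    return schedule
```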
Abstract:
This paper proposes a combined pool/bilateral short-term hydrothermal scheduling model (PDC) for the context of day-ahead energy markets. Some innovative aspects are introduced in the model: i) hydraulic generation is optimized through the proposed opportunity cost function; ii) there is no decoupling between physical and commercial dispatches, as is the case today in Brazil; iii) interrelationships between pool and bilateral markets are represented through a single optimization problem; iv) risk exposures related to future deficits are intrinsically mitigated; v) the model calculates spot prices on an hourly basis, and the results show a coherent correlation between hydrological conditions and calculated prices. The proposed PDC model is solved by a primal-dual interior point method and is evaluated through simulations involving a test system. The results focus on sensitivity analyses involving the parameters of the model, in such a way as to emphasize its main modeling aspects. The results show that the proposed PDC provides a conceptual means for short-term price formation for hydrothermal systems.
Abstract:
In this paper, short-term hydroelectric scheduling is formulated as a network flow optimization model and solved by interior point methods. The primal-dual and predictor-corrector versions of such interior point methods are developed, and the resulting matrix structure is exploited. This structure leads to very fast iterations since it avoids the computation and factorization of impedance matrices. For each time interval, the linear algebra reduces to the solution of two linear systems, whose dimensions correspond either to the number of buses or to the number of independent loops. Either matrix is invariant and can be factored off-line. As a consequence of such matrix manipulations, a linear system which changes at each iteration has to be solved, although its size is reduced to the number of generating units and is not a function of the number of time intervals. These methods were applied to IEEE and Brazilian power systems, and numerical results were obtained using a MATLAB implementation. Both interior point methods proved to be robust and achieved fast convergence for all instances tested. (C) 2004 Elsevier Ltd. All rights reserved.
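For orientation, the Newton step solved at each iteration of a standard primal-dual interior point method (shown here for a generic LP, min c^T x subject to Ax = b, x >= 0, not the paper's network-flow formulation) is:

```latex
\begin{aligned}
A\,\Delta x                 &= b - Ax,\\
A^{\top}\Delta y + \Delta z &= c - A^{\top}y - z,\\
Z\,\Delta x + X\,\Delta z   &= \sigma\mu\mathbf{1} - XZ\mathbf{1},
\end{aligned}
\qquad X=\operatorname{diag}(x),\; Z=\operatorname{diag}(z).
```

The predictor-corrector variant first takes an affine step with sigma = 0 and then re-solves with a corrected right-hand side, reusing the same factorization; this is exactly where invariant, off-line-factorable matrices of the kind the abstract describes pay off.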
Abstract:
This paper proposes a methodology to incorporate voltage/reactive representation into Short Term Generation Scheduling (STGS) models, based on the active/reactive decoupling characteristics of power systems. In this approach, STGS is decoupled into Active (AGS) and Reactive (RGS) Generation Scheduling models. The AGS model establishes an initial active generation scheduling through a traditional dispatch model. The scheduling proposed by the AGS model is then evaluated from the voltage/reactive point of view through the proposed RGS model. RGS is formulated as a sequence of T nonlinear OPF problems, solved separately but taking into account load tracking between consecutive time intervals. This approach considerably reduces the computational effort needed to perform the reactive analysis of the RGS problem as a whole. When necessary, the RGS model is capable of proposing active generation redispatches, so that critical reactive problems (in which all reactive variables have proved insufficient to control the problem) can be overcome. The formulation and solution methodology proposed are evaluated on the IEEE30 system in two case studies. These studies show that the methodology is robust enough to incorporate reactive aspects into the STGS problem.
Abstract:
Within a weekly market horizon, this paper considers a power producer that sells its energy both in the pool and through weekly forward contracts. The paper provides a methodology that allows the producer to derive the self-scheduling of its production units, to select weekly forward contracts, and to obtain the offering strategy for Monday's pool. The proposed technique is based on stochastic programming and allows the producer to maximize its expected profit while controlling the risk of profit variability. A comprehensive case study is used to illustrate the characteristics of the proposed methodology. Appropriate conclusions are finally drawn.
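A common shape for such risk-constrained offering models (not necessarily the paper's exact formulation) is to maximize expected profit over price scenarios omega plus a weighted risk term, e.g. using conditional value-at-risk:

```latex
\max_{x,\,\eta}\;\sum_{\omega}\pi_{\omega}\,P_{\omega}(x)
\;+\;\beta\Big(\eta-\frac{1}{1-\alpha}\sum_{\omega}\pi_{\omega}
\,\big[\eta-P_{\omega}(x)\big]_{+}\Big),
```

where P_omega(x) is the profit of decision x (pool offers plus forward contracts) in scenario omega, pi_omega its probability, and beta trades expected profit against the CVaR of profit at confidence level alpha, which is one standard way to "control the risk of profit variability".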
Abstract:
This paper presents a nonlinear model with individual representation of plants for the centralized long-term hydrothermal scheduling problem over multiple areas. In addition to common aspects of long-term scheduling, this model takes transmission constraints into account. The ability to optimize hydropower exchange among multiple areas is important because it enables further minimization of complementary thermal generation costs. Also, by considering transmission constraints for long-term scheduling, a more precise coupling with shorter horizon schedules can be expected. This is an important characteristic from both operational and economic viewpoints. The proposed model is solved by a sequential quadratic programming approach in the form of a prototype system for different case studies. An analysis of the benefits provided by the model is also presented. ©2009 IEEE.