900 results for Hard combinatorial scheduling


Relevance:

30.00%

Publisher:

Abstract:

A Bayesian optimisation algorithm for a nurse scheduling problem is presented, which involves choosing a suitable scheduling rule from a set for each nurse's assignment. When a human scheduler works, he normally builds a schedule systematically, following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even if the scheduling process is not yet complete, and thus has the ability to finish a schedule using flexible, rather than fixed, rules. In this paper, we design a more human-like scheduling algorithm by using a Bayesian optimisation algorithm to implement explicit learning from past solutions. A nurse scheduling problem from a UK hospital is used for testing.

Unlike our previous work, which used Genetic Algorithms to implement implicit learning [1], the learning in the proposed algorithm is explicit, i.e. we identify and mix building blocks directly. The Bayesian optimisation algorithm implements such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed according to an initial set of promising solutions. Subsequently, each new instance of each variable is generated using the corresponding conditional probabilities, until all variables have been generated, i.e. in our case, a new rule string has been obtained. Sets of rule strings are generated in this way, some of which replace previous strings based on fitness. If the stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again using the current set of promising rule strings.

For clarity, consider the following toy example of scheduling five nurses with two rules (1: random allocation; 2: allocate nurse to low-cost shifts). At the beginning of the search, the probabilities of choosing rule 1 or 2 for each nurse are equal, i.e. 50% each. After a few iterations, due to selection pressure and reinforcement learning, we observe two solution pathways: because pure low-cost or pure random allocation produces low-quality solutions, either rule 1 is used for the first two or three nurses and rule 2 for the remainder, or vice versa. In essence, the Bayesian network learns "use rule 2 after using rule 1 for the first two or three nurses", or vice versa.

It should be noted that for our problem, and most other scheduling problems, the structure of the network model is known and all variables are fully observed. In this case, the goal of learning is to find the rule values that maximise the likelihood of the training data, so learning amounts to "counting" in the case of multinomial distributions. For our problem, we use four rules: Random, Cheapest Cost, Best Cover, and Balance of Cost and Cover.

In more detail, the steps of our Bayesian optimisation algorithm for nurse scheduling are:
1. Set t = 0 and generate an initial population P(0) at random.
2. Use roulette-wheel selection to choose a set of promising rule strings S(t) from P(t).
3. Compute the conditional probabilities of each node according to this set of promising solutions.
4. Assign each nurse using roulette-wheel selection based on the rules' conditional probabilities; a set of new rule strings O(t) is generated in this way.
5. Create a new population P(t+1) by replacing some rule strings in P(t) with O(t), and set t = t+1.
6. If the termination conditions are not met (we use 2000 generations), go to step 2.
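The counting and sampling steps above can be illustrated with a minimal Python sketch. The rule set, the toy cost function and all parameter values below are illustrative placeholders, not the implementation used in the paper:

    import random

    NUM_NURSES = 5
    RULES = [0, 1]               # toy rule set: 0 = random allocation, 1 = lowest-cost shift
    POP_SIZE = 30
    GENERATIONS = 100

    def cost(rule_string):
        # Stand-in for building the schedule with these rules and costing it;
        # here we simply penalise strings that use a single rule throughout.
        return 1 + (len(set(rule_string)) == 1)

    def learn_probabilities(promising):
        # "Counting" for multinomial distributions: the conditional probability of
        # each rule at each nurse position is its frequency among promising strings.
        probs = []
        for pos in range(NUM_NURSES):
            counts = {r: 1 for r in RULES}               # Laplace smoothing
            for s in promising:
                counts[s[pos]] += 1
            total = sum(counts.values())
            probs.append({r: c / total for r, c in counts.items()})
        return probs

    def sample_rule_string(probs):
        # Roulette-wheel selection of a rule for each nurse position.
        string = []
        for pos in range(NUM_NURSES):
            r, acc, chosen = random.random(), 0.0, RULES[-1]
            for rule, p in probs[pos].items():
                acc += p
                if r <= acc:
                    chosen = rule
                    break
            string.append(chosen)
        return string

    population = [[random.choice(RULES) for _ in range(NUM_NURSES)] for _ in range(POP_SIZE)]
    for t in range(GENERATIONS):
        population.sort(key=cost)                        # lower cost is better
        promising = population[:POP_SIZE // 2]           # truncation stands in for roulette-wheel selection here
        probs = learn_probabilities(promising)
        offspring = [sample_rule_string(probs) for _ in range(POP_SIZE - len(promising))]
        population = promising + offspring               # replace the worst strings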
Computational results from 52 real data instances demonstrate the success of this approach. They also suggest that the learning mechanism in the proposed approach might be suitable for other scheduling problems. Another direction for further research is to see whether there is a good constructing sequence for individual data instances, given a fixed nurse scheduling order. If so, the good patterns could be recognised and extracted as new domain knowledge. Using this extracted knowledge, we could assign specific rules to the corresponding nurses beforehand and only schedule the remaining nurses with all available rules, making it possible to reduce the solution space.

Acknowledgements: The work was funded by the UK Government's major funding agency, the Engineering and Physical Sciences Research Council (EPSRC), under grant GR/R92899/01.

References
[1] Aickelin U, "An Indirect Genetic Algorithm for Set Covering Problems", Journal of the Operational Research Society, 53(10): 1118-1126,

Relevance:

30.00%

Publisher:

Abstract:

Catering to society's demand for high-performance computing, billions of transistors are now integrated on IC chips to deliver unprecedented performance. With increasing transistor density, power consumption and power density are growing exponentially. The increasing power consumption directly translates into high chip temperatures, which not only raise the packaging/cooling costs but also degrade the performance, reliability and life span of computing systems. Moreover, high chip temperature also greatly increases the leakage power consumption, which is becoming more and more significant with the continuous scaling of transistor sizes. As the semiconductor industry continues to evolve, power and thermal challenges have become the most critical challenges in the design of new generations of computing systems. In this dissertation, we addressed the power/thermal issues from a system-level perspective. Specifically, we sought to employ real-time scheduling methods to optimize the power/thermal efficiency of real-time computing systems, with the leakage/temperature dependency taken into consideration. In our research, we first explored the fundamental principles of how to employ dynamic voltage scaling (DVS) techniques to reduce the peak operating temperature when running a real-time application on a single-core platform. We further proposed a novel real-time scheduling method, "M-Oscillations", to reduce the peak temperature when scheduling a hard real-time periodic task set. We also developed three checking methods to guarantee the feasibility of a periodic real-time schedule under a peak temperature constraint. We then extended our research from single-core to multi-core platforms: we investigated the energy estimation problem on multi-core platforms and developed a lightweight and accurate method to calculate the energy consumption of a given voltage schedule on a multi-core platform. Finally, we concluded the dissertation with a detailed discussion of future extensions of our research.
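As background for the leakage/temperature interplay discussed above, here is a minimal sketch of the kind of lumped-RC thermal model with linearised, temperature-dependent leakage that is common in this line of work; all coefficients and the example voltage schedules are illustrative assumptions, not values or methods from the dissertation:

    # Forward-Euler simulation of a single-core lumped thermal model:
    #   dT/dt = A * P(v, T) - B * (T - T_AMB)
    # with dynamic power ~ v^3 (frequency scaled with voltage) and a
    # linearised leakage term that grows with temperature (all values illustrative).
    A, B = 0.8, 0.05          # thermal coefficients
    T_AMB = 25.0              # ambient temperature, Celsius
    C0, C1 = 0.1, 0.01        # linearised leakage coefficients
    DT = 0.01                 # time step, seconds

    def power(v, temp):
        dynamic = v ** 3                  # dynamic power under DVS
        leakage = v * (C0 + C1 * temp)    # leakage rises with temperature
        return dynamic + leakage

    def simulate(schedule, t_init=T_AMB):
        """schedule: list of (voltage, duration) pairs; returns the peak temperature."""
        temp, peak = t_init, t_init
        for v, duration in schedule:
            for _ in range(int(duration / DT)):
                temp += DT * (A * power(v, temp) - B * (temp - T_AMB))
                peak = max(peak, temp)
        return peak

    # Compare the peak temperature of an oscillating speed schedule against a
    # constant-speed schedule with the same average speed.
    print(simulate([(1.0, 5.0), (0.6, 5.0)] * 10))
    print(simulate([(0.8, 100.0)]))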

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we address the problem of picking a subset of bids in a general combinatorial auction so as to maximize the overall profit, using the first-price model. This winner determination problem assumes that a single bidding round is held to determine both the winners and the prices to be paid. We introduce six variants of biased random-key genetic algorithms for this problem. Three of them use a novel initialization technique that uses solutions of intermediate linear programming relaxations of an exact mixed integer linear programming model as initial chromosomes of the population. An experimental evaluation compares the effectiveness of the proposed algorithms with the standard mixed integer linear programming formulation, a specialized exact algorithm, and the best-performing heuristics proposed for this problem. The proposed algorithms are competitive and offer strong results, particularly for large-scale auctions.
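Biased random-key genetic algorithms encode a solution as a vector of random keys in [0, 1) and rely on a problem-specific decoder. Below is a minimal sketch of one possible decoder for winner determination, together with the biased crossover that gives the method its name; the bid data, population sizes and greedy decoding rule are illustrative assumptions, not the algorithms evaluated in the paper:

    import random

    # A bid is (set_of_items, price). A chromosome assigns one random key per bid;
    # the decoder visits bids in key order and accepts a bid when none of its
    # items has already been sold, yielding a feasible (conflict-free) selection.
    BIDS = [({"a", "b"}, 10.0), ({"b", "c"}, 8.0), ({"c"}, 5.0), ({"a"}, 4.0)]

    def decode(chromosome, bids=BIDS):
        order = sorted(range(len(bids)), key=lambda i: chromosome[i])
        sold, revenue = set(), 0.0
        for i in order:
            items, price = bids[i]
            if not (items & sold):
                sold |= items
                revenue += price
        return revenue

    def random_chromosome(n):
        return [random.random() for _ in range(n)]

    def biased_crossover(elite, other, rho=0.7):
        # Each gene is inherited from the elite parent with probability rho.
        return [e if random.random() < rho else o for e, o in zip(elite, other)]

    # One generation of a tiny BRKGA loop (population sizes are illustrative).
    pop = [random_chromosome(len(BIDS)) for _ in range(20)]
    pop.sort(key=decode, reverse=True)
    elite, rest = pop[:5], pop[5:]
    offspring = [biased_crossover(random.choice(elite), random.choice(rest)) for _ in range(12)]
    mutants = [random_chromosome(len(BIDS)) for _ in range(3)]
    pop = elite + offspring + mutants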

Relevance:

20.00%

Publisher:

Abstract:

We examine, in the imaginary-time formalism, the high temperature behavior of n-point thermal loops in static Yang-Mills and gravitational fields. We show that in this regime, any hard thermal loop gives the same leading contribution as the one obtained by evaluating the loop integral at zero external energies and momenta.

Relevance:

20.00%

Publisher:

Abstract:

We investigate the performance of a variant of Axelrod's model for the dissemination of culture, the Adaptive Culture Heuristic (ACH), on solving an NP-complete optimization problem, namely, the classification of binary input patterns of size F by a Boolean binary perceptron. In this heuristic, N agents, characterized by binary strings of length F which represent possible solutions to the optimization problem, are fixed at the sites of a square lattice and interact with their nearest neighbors only. The interactions are such that the agents' strings (or cultures) become more similar to the low-cost strings of their neighbors, resulting in the dissemination of these strings across the lattice. Eventually the dynamics freezes into a homogeneous absorbing configuration in which all agents exhibit identical solutions to the optimization problem. We find through extensive simulations that the probability of finding the optimal solution is a function of the reduced variable F/N^(1/4), so that the number of agents must increase with the fourth power of the problem size, N proportional to F^4, to guarantee a fixed probability of success. In this case, we find that the relaxation time to reach an absorbing configuration scales with F^6, which can be interpreted as the overall computational cost of the ACH to find an optimal set of weights for a Boolean binary perceptron, given a fixed probability of success.
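A minimal sketch of the lattice dynamics described above, written for a generic binary cost function; the cost function, lattice size and the exact update rule are illustrative assumptions rather than the precise protocol of the paper:

    import random

    L = 10            # lattice side, N = L * L agents
    F = 20            # length of each agent's binary string
    TARGET = [random.randint(0, 1) for _ in range(F)]

    def cost(s):
        # Illustrative cost: Hamming distance to a hidden target string.
        return sum(a != b for a, b in zip(s, TARGET))

    def neighbours(i, j):
        return [((i - 1) % L, j), ((i + 1) % L, j), (i, (j - 1) % L), (i, (j + 1) % L)]

    lattice = [[[random.randint(0, 1) for _ in range(F)] for _ in range(L)] for _ in range(L)]

    def step():
        # One update: a random agent copies one differing bit from its lowest-cost
        # neighbour, so strings drift toward the low-cost strings in the neighbourhood.
        i, j = random.randrange(L), random.randrange(L)
        agent = lattice[i][j]
        best = min((lattice[x][y] for x, y in neighbours(i, j)), key=cost)
        if cost(best) <= cost(agent) and best != agent:
            k = random.choice([k for k in range(F) if agent[k] != best[k]])
            agent[k] = best[k]

    for _ in range(200000):
        step()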

Relevance:

20.00%

Publisher:

Abstract:

The valence and core levels of In2O3 and Sn-doped In2O3 have been studied by hard x-ray photoemission spectroscopy (hν = 6000 eV) and by conventional Al Kα (hν = 1486.6 eV) x-ray photoemission spectroscopy. The experimental spectra are compared with density-functional theory calculations. It is shown that structure deriving from electronic levels with significant In or Sn 5s character is selectively enhanced under 6000 eV excitation. This allows us to infer that conduction band states in Sn-doped samples and states at the bottom of the valence band both contain a pronounced In 5s contribution. The In 3d core lines measured at hν = 1486.6 eV for both undoped and Sn-doped In2O3 display an asymmetric lineshape and may be fitted with two components associated with screened and unscreened final states. The In 3d core line spectra excited at hν = 6000 eV for the Sn-doped samples display pronounced shoulders and demand a fit with two components. The In 3d core line spectrum for the undoped sample can also be fitted with two components, although the relative intensity of the component associated with the screened final state is low compared to excitation at 1486.6 eV. These results are consistent with a high concentration of carriers confined close to the surface of nominally undoped In2O3. This conclusion is in accord with the fact that a conduction band feature observed for undoped In2O3 in Al Kα x-ray photoemission is much weaker than expected in hard x-ray photoemission.

Relevance:

20.00%

Publisher:

Abstract:

We report the discovery with XMM-Newton of hard thermal (T ~ 130 MK) and variable X-ray emission from the Be star HD 157832, a new member of the puzzling class of gamma Cas-like Be/X-ray systems. Recent optical spectroscopy reveals the presence of a large, dense circumstellar disk seen at intermediate to high inclination. With a B1.5V spectral type, HD 157832 is the coolest gamma Cas analog known. In addition, its non-detection in the ROSAT all-sky survey shows that its average soft X-ray luminosity varied by a factor larger than ~3 over a time interval of 14 yr. These two remarkable features, a "low" effective temperature and likely high X-ray variability, make HD 157832 a promising object for understanding the origin of the unusually high-temperature X-ray emission in these systems.

Relevance:

20.00%

Publisher:

Abstract:

The general flowshop scheduling problem is a production problem where a set of n jobs have to be processed with an identical flow pattern on m machines. In permutation flowshops, the sequence of jobs is the same on all machines. A significant research effort has been devoted to sequencing jobs in a flowshop to minimize the makespan. This paper describes the application of a Constructive Genetic Algorithm (CGA) to makespan minimization in flowshop scheduling. The CGA was proposed recently as an alternative to traditional GA approaches, particularly for evaluating schemata directly. The population, initially formed only by schemata, evolves under recombination into a population of well-adapted structures (schemata instantiation). The CGA implemented here is based on the classic NEH heuristic and a local search heuristic used to define the fitness functions. The parameters of the CGA are calibrated using a Design of Experiments (DOE) approach. The computational results are compared against other successful algorithms from the literature on Taillard's well-known standard benchmark. The computational experience shows that this innovative CGA approach provides competitive results for flowshop scheduling problems. (C) 2007 Elsevier Ltd. All rights reserved.
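The CGA described above builds on the classic NEH heuristic; as a point of reference, here is a minimal sketch of NEH for permutation flowshop makespan minimisation (the processing-time data are toy values):

    # NEH heuristic for permutation flowshop makespan minimisation.
    # p[j][i] = processing time of job j on machine i.
    def makespan(sequence, p, m):
        completion = [0.0] * m
        for j in sequence:
            completion[0] += p[j][0]
            for i in range(1, m):
                completion[i] = max(completion[i], completion[i - 1]) + p[j][i]
        return completion[-1]

    def neh(p, m):
        jobs = sorted(range(len(p)), key=lambda j: -sum(p[j]))   # decreasing total work
        sequence = [jobs[0]]
        for j in jobs[1:]:
            # Insert job j at the position giving the smallest partial makespan.
            candidates = [sequence[:k] + [j] + sequence[k:] for k in range(len(sequence) + 1)]
            sequence = min(candidates, key=lambda s: makespan(s, p, m))
        return sequence

    p = [[5, 9, 8], [9, 3, 10], [9, 4, 5], [4, 8, 8]]   # 4 jobs, 3 machines (toy data)
    seq = neh(p, 3)
    print(seq, makespan(seq, p, 3))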

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we address the problem of scheduling jobs in a no-wait flowshop with the objective of minimising the total completion time. This problem is well known to be NP-hard, and therefore most contributions on the topic focus on developing algorithms able to obtain good approximate solutions in a short CPU time. More specifically, there are various constructive heuristics available for the problem, such as the ones by Rajendran and Chaudhuri (Nav Res Logist 37:695-705, 1990), Bertolissi (J Mater Process Technol 107:459-465, 2000), Aldowaisan and Allahverdi (Omega 32:345-352, 2004) and the Chins heuristic by Fink and Voß (Eur J Oper Res 151:400-414, 2003), as well as a successful local search procedure (Pilot-1-Chins). We propose a new constructive heuristic based on an analogy with the two-machine problem in order to select the candidate to be appended to the partial schedule. The myopic behaviour of the heuristic is tempered by exploring the neighbourhood of the so-obtained partial schedules. The computational results indicate that the proposed heuristic outperforms existing ones in terms of the quality of the solutions obtained and equals the performance of the time-consuming Pilot-1-Chins.
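For reference, a minimal sketch of how job completion times, and hence the total completion time objective, are computed for a given sequence in a no-wait flowshop using the standard pairwise delay recurrence (toy data; this is not the proposed heuristic itself):

    # Total completion time of a job sequence in a no-wait flowshop.
    # p[j][h] = processing time of job j on machine h. Because no waiting is
    # allowed between machines, each job's start time is fixed by the minimum
    # delay with respect to the job scheduled immediately before it.
    def delay(p_prev, p_next):
        m = len(p_prev)
        return max(sum(p_prev[:h + 1]) - sum(p_next[:h]) for h in range(m))

    def total_completion_time(sequence, p):
        start, total = 0.0, 0.0
        for pos, j in enumerate(sequence):
            if pos > 0:
                start += delay(p[sequence[pos - 1]], p[j])
            total += start + sum(p[j])
        return total

    p = [[2, 5, 3], [4, 1, 2], [3, 3, 4]]     # 3 jobs, 3 machines (toy data)
    print(total_completion_time([0, 1, 2], p))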

Relevance:

20.00%

Publisher:

Abstract:

Pipeline systems play a key role in the petroleum business. These operational systems provide connections between ports and/or oil fields and refineries (upstream), as well as between these and consumer markets (downstream). The purpose of this work is to propose a novel MINLP formulation, based on a continuous time representation, for the scheduling of multiproduct pipeline systems that must supply multiple consumer markets. Moreover, it also considers that the pipeline operates intermittently and that the pumping costs depend on the booster stations' yield rates, which in turn may generate different flow rates. The proposed continuous time representation is compared with a previously developed discrete time representation [Rejowski, R., Jr., & Pinto, J. M. (2004). Efficient MILP formulations and valid cuts for multiproduct pipeline scheduling. Computers and Chemical Engineering, 28, 1511] in terms of solution quality and computational performance. The influence of the number of time intervals that represent the transfer operation is studied, and several configurations for the booster stations are tested. Finally, the proposed formulation is applied to a larger case, in which several booster configurations with different numbers of stages are tested. (C) 2007 Elsevier Ltd. All rights reserved.

Relevance:

20.00%

Publisher:

Abstract:

This paper addresses the minimization of the mean absolute deviation from a common due date in a two-machine flowshop scheduling problem. We present heuristics that use an algorithm, based on proposed properties, which obtains an optimal schedule for a given job sequence. A new set of benchmark problems is presented with the purpose of evaluating the heuristics. Computational experiments show that the developed heuristics outperform results found in the literature for problems with up to 500 jobs. (C) 2007 Elsevier Ltd. All rights reserved.
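For reference, the objective minimised here is the mean absolute deviation of the job completion times from the common due date; in standard notation (ours, not necessarily the paper's),

    \mathrm{MAD} = \frac{1}{n} \sum_{j=1}^{n} \lvert C_j - d \rvert,

where C_j is the completion time of job j on the second machine and d is the common due date.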

Relevance:

20.00%

Publisher:

Abstract:

This paper addresses the non-preemptive single machine scheduling problem of minimizing total tardiness. We are interested in the online version of this problem, where orders arrive at the system at random times. Jobs have to be scheduled without knowledge of what jobs will come afterwards. The processing times and the due dates become known when the order is placed. Orders are released only at the beginning of periodic intervals. A customized approximate dynamic programming method is introduced for this problem. We also present numerical experiments that assess the reliability of the new approach and show that it performs better than a myopic policy.
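The objective here is total tardiness which, for completion times C_j and due dates d_j, is defined in the standard way (notation ours):

    \sum_{j=1}^{n} T_j, \qquad T_j = \max\{0,\; C_j - d_j\}.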

Relevance:

20.00%

Publisher:

Abstract:

Cr3C2-NiCr and WC-Ni coatings are widely used for wear applications at high and room temperature, respectively. Due to the high corrosion resistance of the NiCr binder, Cr3C2-NiCr coatings are also used in corrosive environments. The application of WC-Ni coatings in corrosive media is not recommended due to the poor corrosion resistance of the (pure Ni) metallic matrix. It is well known that the addition of Cr to the metallic binder improves the corrosion properties. The erosion-corrosion performance of thermal spray coatings is widely influenced by the ceramic phase composition, the size of the ceramic particles and also the composition of the metallic binder. In the present work, two types of HVOF thermal spray coatings (Cr3C2-NiCr and WC-Ni) obtained with different spray conditions were studied and compared with conventional micro-cracked hard chromium coatings. Both as-sprayed and polished samples were tested under two erosion-corrosion conditions with different erosivity. Tungsten carbide coatings showed better performance under the most erosive condition, while chromium carbide coatings were superior under less erosive conditions. Some of the tungsten carbide coatings and hard chromium showed similar erosion-corrosion behaviour under both more and less erosive conditions. The erosion-corrosion and electrochemical results showed that surface polishing improved the erosion-corrosion properties of the thermally sprayed coatings. The corrosion behaviour of the different coatings was compared using Electrochemical Impedance Spectroscopy (EIS) and polarization curves. The total material loss due to erosion-corrosion was determined by weight loss measurements. An estimate of the corrosion contribution to the total weight loss is also given. (c) 2007 Elsevier B.V. All rights reserved.

Relevance:

20.00%

Publisher:

Abstract:

This paper addresses the single machine scheduling problem with a common due date, aiming to minimize earliness and tardiness penalties. Due to its complexity, most previous studies in the literature deal with this problem using heuristic and metaheuristic approaches. With the intention of contributing to the study of this problem, a branch-and-bound algorithm is proposed. Lower bounds and pruning rules that exploit properties of the problem are introduced. The proposed approach is examined through a comparative computational study with 280 problems involving different due date scenarios. In addition, the values of optimal solutions for small problems from a known benchmark are provided.
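The earliness/tardiness objective around a common due date d takes the standard form used in this literature (weights and notation are generic, not necessarily the exact variant studied in the paper):

    \min \sum_{j=1}^{n} \left( \alpha_j E_j + \beta_j T_j \right), \qquad E_j = \max\{0,\; d - C_j\}, \quad T_j = \max\{0,\; C_j - d\}.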

Relevance:

20.00%

Publisher:

Abstract:

The representation of sustainability concerns in industrial forest management plans, in relation to environmental, social and economic aspects, involves a great amount of detail when analyzing and understanding the interaction among these aspects to reduce possible future impacts. At the tactical and operational planning levels, methods based on generic assumptions usually provide unrealistic solutions, impairing the decision-making process. This study is aimed at improving current operational harvest planning techniques through the development of a mixed integer goal programming model. This allows the evaluation of different scenarios, subject to environmental and supply constraints, increases in operational capacity, and the spatial consequences of dispatching harvest crews over certain distances during the evaluation period. As a result, a set of performance indicators was selected to evaluate the optimal solutions provided for the different possible scenarios and combinations of these scenarios, and to compare these outcomes with the real results observed by the mill in the case study area. Results showed that it is possible to formulate a linear programming model that adequately represents harvesting limitations, production aspects, and environmental and supply constraints. The comparison involving the evaluated scenarios and the real observed results showed the advantage of using more holistic approaches and that it is possible to improve the quality of planning recommendations using linear programming techniques.
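Goal programming handles multiple targets by penalising deviation variables; a generic textbook skeleton of such a formulation (not the specific model developed in this study) is

    \min \sum_{k} \left( w_k^{+} d_k^{+} + w_k^{-} d_k^{-} \right)
    \quad \text{s.t.} \quad f_k(x) + d_k^{-} - d_k^{+} = g_k, \qquad d_k^{+}, d_k^{-} \ge 0, \qquad x \in X,

where g_k is the target level of goal k, f_k(x) is its achieved value, and the weights w_k^{+}, w_k^{-} penalise over- and under-attainment.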