886 results for Lot sizing and scheduling problems
Abstract:
Objective: Clinical studies of asthmatic children have found an association between lung disease and internalizing behavior problems. The causal direction of this association is, however, unclear. This article examines the nature of the relationship between behavior and asthma problems in childhood and adolescence. Methods: Data were analyzed on 5135 children from the Mater University Study of Pregnancy and its outcomes (MUSP), a large birth cohort of mothers and children started in Brisbane, Australia, in 1981. Lung disease was measured from maternal reports of asthma/bronchitis when the children were aged 5 and maternal reports of asthma symptoms when the children were aged 14. Symptoms of internalizing behaviors were obtained by maternal reports (Child Behavior Checklist) at 5 years and by maternal and children's reports at 14 years (Child Behavior Checklist and Youth Self Report). Results: Although there was no association between prevalence of asthma and externalizing symptoms, asthma and internalizing symptoms were significantly associated in cross-sectional analyses at 5 and 14 years. In prospective analyses, after excluding children with asthma at 5 years, internalizing symptoms at age 5 were not associated with the development of asthma symptoms at age 14. After excluding children with internalizing symptoms at 5 years, those who had asthma at 5 years had greater odds of developing internalizing symptoms at age 14. Conclusion: Children who have asthma/bronchitis by the age of 5 are at greater risk of having internalizing behavior problems in adolescence.
Abstract:
Purpose - Large-scale heat transfer problems with temperature-dependent pore-fluid densities are commonly encountered in many scientific and engineering fields; heat transfer from the mantle into the Earth's upper crust is a typical example. The main purpose of this paper is to develop and present a new combined methodology for solving large-scale heat transfer problems with temperature-dependent pore-fluid densities at the lithospheric and crustal scales. Design/methodology/approach - The theoretical approach is used to determine the thickness of the continental crust and the related thermal boundary conditions at the lithospheric scale, so that accurate information can be provided for establishing a numerical model at the crustal scale. The numerical approach is then used to simulate the detailed structures and complicated geometries of the continental crust at the crustal scale. The main advantage of combining the theoretical and numerical approaches is that, if the thermal distribution in the crust is of primary interest, using a reasonable numerical model at the crustal scale can significantly reduce the computational effort. Findings - From the ore body formation and mineralization points of view, the present analytical and numerical solutions demonstrate that the conductive-and-advective lithosphere with variable pore-fluid density is the most favorable lithosphere, because it may result in the thinnest lithosphere, so that temperatures near the surface of the crust can be high enough to generate shallow ore deposits. The upward throughflow (i.e. mantle mass flux) can have a significant effect on the thermal structure within the lithosphere. In addition, the emplacement of hot material from the mantle may further reduce the thickness of the lithosphere. Originality/value - The present analytical solutions can be used to: validate numerical methods for solving large-scale heat transfer problems; provide correct thermal boundary conditions for numerically solving ore body formation and mineralization problems at the crustal scale; and investigate fundamental issues related to thermal distributions within the lithosphere. The proposed finite element analysis can be used effectively to handle the geometrical and material complexities of large-scale heat transfer problems with temperature-dependent fluid densities.
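For illustration, a minimal one-dimensional sketch of the kind of conductive-and-advective model this abstract describes can be written in a few lines: steady conduction plus vertical throughflow with a Boussinesq-type temperature-dependent pore-fluid density, solved by finite differences with Picard iteration. All parameter values, the linear density law, and the Dirichlet boundary conditions are assumptions for demonstration only; the paper's lithospheric- and crustal-scale models are far more detailed.

```python
# Minimal 1D steady conduction-advection sketch with a temperature-dependent
# pore-fluid density (Boussinesq-type: rho = rho0*(1 - beta*(T - T0))).
# Illustrative only; parameters and boundary conditions are assumptions.
import numpy as np

n, depth = 101, 100e3              # grid points, column depth [m]
z = np.linspace(0.0, depth, n)
dz = z[1] - z[0]
k, cp = 3.0, 4200.0                # conductivity [W/m/K], fluid heat capacity [J/kg/K]
v = -1e-10                         # Darcy velocity in +z (depth); negative = upward throughflow
rho0, beta, T0 = 1000.0, 2e-4, 20.0
T_top, T_bot = 20.0, 1300.0        # Dirichlet temperatures at surface and base [deg C]

T = np.linspace(T_top, T_bot, n)   # initial guess
for _ in range(50):                # Picard iteration on the density law
    rho = rho0 * (1.0 - beta * (T - T0))
    A = np.zeros((n, n)); b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0
    b[0], b[-1] = T_top, T_bot
    for i in range(1, n - 1):
        adv = rho[i] * cp * v / (2.0 * dz)   # central difference for advection
        dif = k / dz**2
        A[i, i - 1] = dif + adv
        A[i, i] = -2.0 * dif
        A[i, i + 1] = dif - adv
    T_new = np.linalg.solve(A, b)
    if np.max(np.abs(T_new - T)) < 1e-6:
        T = T_new
        break
    T = T_new

print("near-surface gradient [K/km]:", (T[1] - T[0]) / dz * 1e3)
```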
Abstract:
The thesis presents an account of an attempt to utilize expert systems within the domain of production planning and control. The use of expert systems was proposed because of the problematical nature of a particular function within British Steel Strip Products' Operations Department: the function of Order Allocation, allocating customer orders to a production week and site. Approaches to tackling problems within production planning and control are reviewed, as are the general capabilities of expert systems. The conclusions drawn are that the domain of production planning and control contains both `soft' and `hard' problems, and that while expert systems appear to be a useful technology for this domain, this usefulness has by no means yet been demonstrated. It is also argued that the mainstream methodology for developing expert systems is unsuited to the domain. A problem-driven approach is developed and used to tackle the Order Allocation function. The resulting system, UAAMS, contained two expert components. One of these, the scheduling procedure, was not fully implemented because the available software was inadequate. The second expert component, the product routing procedure, was untroubled by such difficulties, but was unusable on its own; a second system was therefore developed. This system, MICRO-X10, duplicated the function of X10, a complex database query routine used daily by Order Allocation. A prototype version of MICRO-X10 proved too slow to be useful but allowed implementation and maintenance issues to be analysed. In conclusion, the usefulness of the problem-driven approach to expert system development within production planning and control is demonstrated, but restrictions imposed by current expert system software are highlighted: the limited ability of such software to cope with `hard' scheduling constructs and its slow processing speed restrict the current usefulness of expert systems within production planning and control.
Abstract:
We propose an adaptive algorithm for solving a set of similar scheduling problems using learning technology. It is devised to combine the merits of an exact algorithm based on the mixed graph model and of heuristics oriented toward real-world scheduling problems. The former can ensure high solution quality by means of an implicit exhaustive enumeration of the feasible schedules. The latter can be developed for certain types of problems by exploiting their peculiarities. The main idea of the learning technology is to produce effective (in performance measure) and efficient (in computational time) heuristics by adapting local decisions to the scheduling problems under consideration. Adaptation is realized at the learning stage: a set of sample scheduling problems is solved with a branch-and-bound algorithm, and the acquired knowledge is structured using pattern recognition techniques.
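As a rough illustration of the learning idea (not the paper's mixed-graph branch-and-bound or its pattern-recognition apparatus), the sketch below solves small 1||Σw_jC_j sample instances exactly by enumeration, counts how often a simple local decision (scheduling the job with the largest weight-to-processing-time ratio first) appears in optimal schedules, and then reuses that decision as a fast dispatch heuristic. The instance sizes, the sampling scheme, and the chosen decision feature are assumptions; since WSPT is provably optimal for 1||Σw_jC_j, the counting phase essentially rediscovers that rule, whereas in the paper's setting the learned decisions are less obvious.

```python
# Illustrative sketch of learning a dispatch rule from exactly solved samples.
# Brute-force enumeration and a frequency count stand in for the paper's
# branch-and-bound and pattern-recognition machinery.
import itertools, random

def twct(seq, p, w):
    """Total weighted completion time of a sequence."""
    t, total = 0, 0
    for j in seq:
        t += p[j]
        total += w[j] * t
    return total

def exact(p, w):
    """Exact solution of a small instance by full enumeration."""
    return min(itertools.permutations(range(len(p))), key=lambda s: twct(s, p, w))

# Learning phase: count how often the job with the highest w/p ratio
# is sequenced first in an optimal schedule of a sample instance.
random.seed(1)
hits, samples = 0, 200
for _ in range(samples):
    p = [random.randint(1, 10) for _ in range(6)]
    w = [random.randint(1, 10) for _ in range(6)]
    if exact(p, w)[0] == max(range(6), key=lambda j: w[j] / p[j]):
        hits += 1
print(f"highest w/p job scheduled first in {hits}/{samples} optimal schedules")

# Application phase: reuse the learned priority as a fast heuristic.
def learned_heuristic(p, w):
    return sorted(range(len(p)), key=lambda j: -w[j] / p[j])

print("heuristic order:", learned_heuristic([5, 2, 8, 1], [3, 7, 4, 2]))
```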
Abstract:
We consider an uncertain version of the problem of sequencing a set of jobs J on a single machine to minimize the weighted total flow time, where the processing time of a job can take any real value from a given closed interval. A job's processing time is treated as a random variable that is unknown until it is actually realized, and the probability distribution of this variable between the given lower and upper bounds is unknown before scheduling. We develop dominance relations on the set of jobs J. The necessary and sufficient conditions for one job to dominate another can be tested in time polynomial in the number n = |J| of jobs. If no domination holds within some subset of J, a heuristic procedure minimizing the weighted total flow time is used to sequence the jobs of that subset. Computational experiments on randomly generated single-machine scheduling problems with n ≤ 700 show that the developed dominance relations are quite helpful in minimizing the weighted total flow time of n jobs with uncertain processing times.
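A minimal sketch of this kind of pairwise dominance test, under the assumption that the deterministic counterpart is ordered by the WSPT exchange argument: job i then dominates job j for every realization of the interval data exactly when w_i l_j ≥ w_j u_i, where [l_j, u_j] are the processing-time bounds. This condition and the midpoint-WSPT fallback below are illustrative assumptions, not necessarily the paper's exact formulation.

```python
# Sketch of a pairwise dominance test for 1||sum w_j C_j with interval
# processing times p_j in [l_j, u_j]. Job i dominates job j for every
# realization when the WSPT condition w_i*p_j >= w_j*p_i holds in the worst
# case, i.e. w_i*l_j >= w_j*u_i (an illustrative assumption).
def dominates(i, j, w, l, u):
    return w[i] * l[j] >= w[j] * u[i]

def sequence(w, l, u):
    """Place a job next whenever it dominates all remaining jobs; otherwise
    fall back to a midpoint-WSPT heuristic for the undominated subset."""
    remaining = list(range(len(w)))
    order = []
    while remaining:
        head = next((i for i in remaining
                     if all(dominates(i, j, w, l, u) for j in remaining if j != i)),
                    None)
        if head is None:  # no full domination: midpoint-WSPT fallback
            head = max(remaining, key=lambda j: 2 * w[j] / (l[j] + u[j]))
        order.append(head)
        remaining.remove(head)
    return order

print(sequence(w=[3, 1, 2], l=[2, 4, 3], u=[3, 6, 5]))  # -> [0, 2, 1]
```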
Abstract:
AMS subject classification: 65K10, 49M07, 90C25, 90C48.
Abstract:
Report published in the Proceedings of the National Conference on "Education and Research in the Information Society", Plovdiv, May 2015.
Abstract:
A wireless mesh network is a mesh network implemented over a wireless network system such as a wireless LAN. Wireless Mesh Networks (WMNs) are promising for numerous applications such as broadband home networking, enterprise networking, transportation systems, health and medical systems, and security surveillance systems. They have therefore received considerable attention from both industrial and academic researchers. This dissertation explores schemes for resource management and optimization in WMNs by means of network routing and network coding. We propose three optimization schemes. (1) First, a triple-tier optimization scheme is proposed for the load balancing objective. The first tier mechanism achieves long-term routing optimization, and the second tier mechanism, using the optimization results obtained from the first tier, performs short-term adaptation to deal with the impact of dynamic channel conditions. A greedy sub-channel allocation algorithm is developed as the third tier to further reduce the congestion level in the network (a hypothetical greedy rule of this kind is sketched below). We conduct thorough theoretical analysis to show the correctness of our design and establish the properties of our scheme. (2) Then, a Relay-Aided Network Coding scheme called RANC is proposed to improve the performance gain of network coding by exploiting the physical layer multi-rate capability in WMNs. We conduct rigorous analysis to find the design principles and study the trade-off in the performance gain of RANC. Based on the analytical results, we provide a practical solution by decomposing the original design problem into two sub-problems, a flow partition problem and a scheduling problem. (3) Lastly, a joint optimization scheme of routing in the network layer and network coding-aware scheduling in the MAC layer is introduced. We formulate the network optimization problem and exploit its structure via dual decomposition. We find that the original problem decomposes into two sub-problems, a routing problem in the network layer and a scheduling problem in the MAC layer, coupled through the link capacities. We solve the routing problem with two different adaptive routing algorithms and then provide a distributed coding-aware scheduling algorithm. The corresponding experimental results show that the proposed schemes can significantly improve network performance.
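The abstract does not detail the greedy sub-channel allocation, so the following is a purely hypothetical sketch of one plausible greedy rule: visit links in decreasing traffic order and give each the sub-channel that currently carries the least load among its interfering neighbours. The data structures (load, interferes) and the rule itself are assumptions; the dissertation's actual algorithm may differ.

```python
# Hypothetical sketch of a greedy sub-channel allocation for congestion
# reduction in a WMN. Links are visited in decreasing traffic order and each
# is assigned the sub-channel least loaded among its interfering neighbours.
def greedy_subchannel_allocation(links, load, interferes, n_channels):
    """links: iterable of link ids; load[l]: offered traffic of link l;
    interferes[l]: set of links that interfere with link l."""
    channel = {}
    for l in sorted(links, key=lambda x: -load[x]):
        # congestion a channel would see at link l = traffic of interfering
        # links already assigned to that channel
        congestion = [sum(load[m] for m in interferes[l] if channel.get(m) == c)
                      for c in range(n_channels)]
        channel[l] = min(range(n_channels), key=lambda c: congestion[c])
    return channel

# tiny example: three mutually interfering links, two sub-channels
links = [0, 1, 2]
load = {0: 5.0, 1: 3.0, 2: 1.0}
interferes = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(greedy_subchannel_allocation(links, load, interferes, n_channels=2))
```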
Abstract:
We present our approach to real-time service-oriented scheduling problems with the objective of maximizing the total system utility. Unlike traditional utility accrual scheduling problems, in which each task is associated with only a single time utility function (TUF), we associate two different TUFs with each task, a profit TUF and a penalty TUF, to model real-time services that not only reward early completions but also penalize abortions or deadline misses. The scheduling heuristics proposed in this paper judiciously accept, schedule, and abort real-time services when necessary to maximize the accrued utility. Our extensive experimental results show that the proposed algorithms can significantly outperform traditional scheduling algorithms such as Earliest Deadline First (EDF), traditional utility accrual (UA) scheduling algorithms, and an earlier scheduling approach based on a similar model.
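A minimal sketch of the profit/penalty TUF idea under simplifying assumptions: the profit TUF decays linearly to zero at the deadline, the penalty is a flat charge on a deadline miss, and a task is admitted only if appending it does not lower the accrued utility. The TUF shapes, the Task fields, and the admission rule are illustrative; the paper's heuristics also schedule and abort services dynamically.

```python
# Illustrative profit/penalty time utility functions (TUFs) and a greedy
# admission test. Shapes and the acceptance rule are assumptions.
from dataclasses import dataclass

@dataclass
class Task:
    exec_time: float
    deadline: float
    max_profit: float   # profit TUF value for an immediate completion
    penalty: float      # charged if the task misses its deadline (or is aborted)

def profit_tuf(task, finish):
    """Profit decays linearly with completion time, reaching zero at the deadline."""
    return task.max_profit * (1.0 - finish / task.deadline)

def accrued_utility(schedule, now=0.0):
    """Run the schedule back to back from `now`; credit the profit TUF for
    on-time completions, charge the penalty TUF for deadline misses."""
    t, total = now, 0.0
    for tk in schedule:
        t += tk.exec_time
        if t <= tk.deadline:
            total += profit_tuf(tk, t)
        else:
            total -= tk.penalty
    return total

def admit(schedule, task):
    """Greedy admission: accept only if total accrued utility does not drop."""
    return accrued_utility(schedule + [task]) >= accrued_utility(schedule)

accepted = []
for t in [Task(2, 10, 5, 1), Task(4, 6, 3, 2), Task(5, 8, 1, 4)]:
    if admit(accepted, t):
        accepted.append(t)
print(len(accepted), "tasks accepted")
```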
Abstract:
An extensive literature exists on the problems of daily (shift) and weekly (tour) labor scheduling. In representing requirements for employees in these problems, researchers have used formulations based either on the model of Dantzig (1954) or on the model of Keith (1979). We show that both formulations have weaknesses in environments where management knows, or can attempt to identify, how different levels of customer service affect profits. These weaknesses result in lower-than-necessary profits. This paper presents a New Formulation of the daily and weekly Labor Scheduling Problems (NFLSP) designed to overcome the limitations of earlier models. NFLSP incorporates information on how changing the number of employees working in each planning period affects profits. NFLSP uses this information during the development of the schedule to identify the number of employees who, ideally, should be working in each period. In an extensive simulation of 1,152 service environments, NFLSP outperformed the formulations of Dantzig (1954) and Keith (1979) at a significance level of 0.001. Assuming year-round operations and an hourly wage, including benefits, of $6.00, NFLSP's schedules were $96,046 (2.2%) and $24,648 (0.6%) more profitable, on average, than schedules developed using the formulations of Dantzig (1954) and Keith (1979), respectively. Although the average percentage gain over Keith's model was fairly small, it could be much larger in real cases with different parameters. In 73 and 100 percent of the cases we simulated, NFLSP yielded a higher profit than the models of Keith (1979) and Dantzig (1954), respectively.
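For reference, the classic Dantzig-style covering formulation that such labor scheduling models build on can be written as below, where x_j is the number of employees assigned to shift pattern j, c_j its cost, a_{tj} = 1 if pattern j covers planning period t, and r_t the required staffing in period t. NFLSP's contribution, per the abstract, is to replace the fixed requirements r_t with period-level profit information; reproducing that exactly would require the paper's profit data, so only the baseline covering model is shown.

```latex
\min_{x} \; \sum_{j \in J} c_j x_j
\quad \text{subject to} \quad
\sum_{j \in J} a_{tj}\, x_j \;\ge\; r_t \quad \forall t \in T,
\qquad x_j \in \mathbb{Z}_{\ge 0} \quad \forall j \in J.
```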
Abstract:
The quality of a heuristic solution to an NP-hard combinatorial problem is hard to assess. A few studies have advocated and tested statistical bounds as a method for assessment. These studies indicate that statistical bounds are superior to the more widely known and used deterministic bounds. However, the previous studies have been limited to a few metaheuristics and combinatorial problems, and hence the general performance of statistical bounds in combinatorial optimization remains an open question. This work complements the existing literature on statistical bounds by testing them on the metaheuristic Greedy Randomized Adaptive Search Procedures (GRASP) and four combinatorial problems. Our findings confirm previous results that statistical bounds are reliable for the p-median problem, and we note that they also seem reliable for the set covering problem. For the quadratic assignment problem, statistical bounds obtained from a genetic algorithm had previously been found reliable, whereas in this work they were found to be less reliable. Finally, we provide statistical bounds for four 2-path network design problem instances for which the optimum is currently unknown.
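One common recipe for a statistical bound of this kind fits a three-parameter Weibull distribution to independent heuristic minima and reads the fitted location parameter as a point estimate of the unknown optimum. The sketch below only illustrates that idea; the simulated GRASP runs, the assumed true optimum of 100, and the use of scipy's weibull_min fit are assumptions for demonstration, not the paper's exact estimator or confidence procedure.

```python
# Sketch of a statistical bound for a minimization problem: fit a
# three-parameter Weibull to independent heuristic minima and use the fitted
# location (left end point) as a point estimate of the optimum.
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(0)

def grasp_run(rng):
    """Stand-in for one independent GRASP run; returns the best objective
    value found. Simulated here as noise above an assumed optimum of 100."""
    return 100.0 + rng.weibull(2.0) * 5.0

values = np.array([grasp_run(rng) for _ in range(50)])
shape, loc, scale = weibull_min.fit(values)   # three-parameter Weibull fit
print(f"best value found: {values.min():.2f}")
print(f"statistical estimate of the optimum (location parameter): {loc:.2f}")
```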
Abstract:
A Bayesian optimization algorithm for the nurse scheduling problem is presented, which involves choosing a suitable scheduling rule from a set for each nurse's assignment. Unlike our previous work that used GAs to implement implicit learning, the learning in the proposed algorithm is explicit, i.e. eventually, we will be able to identify and mix building blocks directly. The Bayesian optimization algorithm is applied to implement such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed according to an initial set of promising solutions. Subsequently, each new instance for each variable is generated by using the corresponding conditional probabilities, until all variables have been generated, i.e. in our case, a new rule string has been obtained. Another set of rule strings will be generated in this way, some of which will replace previous strings based on fitness selection. If stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again using the current set of promising rule strings. Computational results from 52 real data instances demonstrate the success of this approach. It is also suggested that the learning mechanism in the proposed approach might be suitable for other scheduling problems.
Abstract:
A Bayesian optimisation algorithm for a nurse scheduling problem is presented, which involves choosing a suitable scheduling rule from a set for each nurse's assignment. When a human scheduler works, he normally builds a schedule systematically, following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even if the scheduling process is not yet completed, and thus has the ability to finish a schedule by using flexible, rather than fixed, rules. In this paper, we design a more human-like scheduling algorithm by using a Bayesian optimisation algorithm to implement explicit learning from past solutions. A nurse scheduling problem from a UK hospital is used for testing. Unlike our previous work that used Genetic Algorithms to implement implicit learning [1], the learning in the proposed algorithm is explicit, i.e. we identify and mix building blocks directly. The Bayesian optimisation algorithm implements such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed according to an initial set of promising solutions. Subsequently, each new instance of each variable is generated by using the corresponding conditional probabilities, until all variables have been generated, i.e. in our case, new rule strings have been obtained. Sets of rule strings are generated in this way, some of which will replace previous strings based on fitness. If the stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again using the current set of promising rule strings.

For clarity, consider the following toy example of scheduling five nurses with two rules (1: random allocation, 2: allocate nurse to low-cost shifts). At the beginning of the search, the probabilities of choosing rule 1 or 2 for each nurse are equal, i.e. 50%. After a few iterations, due to selection pressure and reinforcement learning, we observe two solution pathways: because purely low-cost or purely random allocation produces low-quality solutions, either rule 1 is used for the first two or three nurses and rule 2 for the remainder, or vice versa. In essence, the Bayesian network learns "use rule 2 after using rule 1 two or three times", or vice versa. It should be noted that for our problem, and for most other scheduling problems, the structure of the network model is known and all variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data, so learning amounts to 'counting' in the case of multinomial distributions. For our problem, we use four rules: Random, Cheapest Cost, Best Cover, and Balance of Cost and Cover.

In more detail, the steps of our Bayesian optimisation algorithm for nurse scheduling are (a minimal illustrative sketch follows the list):
1. Set t = 0 and generate an initial population P(0) at random.
2. Use roulette-wheel selection to choose a set of promising rule strings S(t) from P(t).
3. Compute the conditional probabilities of each node according to this set of promising solutions.
4. Assign each nurse using roulette-wheel selection based on the rules' conditional probabilities; a set of new rule strings O(t) is generated in this way.
5. Create a new population P(t+1) by replacing some rule strings in P(t) with O(t), and set t = t+1.
6. If the termination conditions are not met (we use 2000 generations), go to step 2.
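A minimal sketch of the loop described above, assuming the chain-structured network suggested by the toy example (the rule for nurse i is conditioned on the rule for nurse i-1), counting-based probability estimation with Laplace smoothing, truncation selection in place of roulette-wheel selection of promising strings, and a placeholder fitness function; the rule semantics and the real schedule evaluation are not reproduced here.

```python
# Minimal sketch of the described Bayesian-network EDA loop, assuming a chain
# model P(rule for nurse i | rule for nurse i-1) learned by counting over
# promising rule strings. The fitness function is a placeholder only.
import random

N_NURSES, N_RULES = 25, 4          # rules: random, cheapest cost, best cover, balance
POP, PROMISING, GENERATIONS = 140, 40, 2000

def fitness(rule_string):
    # Placeholder: the real evaluation builds a schedule from the rules and
    # scores its cost and cover feasibility.
    return -sum(rule_string)

def learn_chain(promising):
    """Counting-based estimation of P(first rule) and P(rule_i | rule_{i-1})."""
    first = [1] * N_RULES                              # Laplace smoothing
    trans = [[1] * N_RULES for _ in range(N_RULES)]
    for s in promising:
        first[s[0]] += 1
        for a, b in zip(s, s[1:]):
            trans[a][b] += 1
    return first, trans

def sample(first, trans):
    """Generate a new rule string nurse by nurse from the learned probabilities."""
    s = [random.choices(range(N_RULES), weights=first)[0]]
    for _ in range(N_NURSES - 1):
        s.append(random.choices(range(N_RULES), weights=trans[s[-1]])[0])
    return s

population = [[random.randrange(N_RULES) for _ in range(N_NURSES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)         # best strings first
    first, trans = learn_chain(population[:PROMISING]) # promising set
    offspring = [sample(first, trans) for _ in range(POP // 2)]
    population = population[:POP // 2] + offspring     # partial replacement
print("best placeholder fitness:", max(map(fitness, population)))
```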
Computational results from 52 real data instances demonstrate the success of this approach. They also suggest that the learning mechanism in the proposed approach might be suitable for other scheduling problems. Another direction for further research is to see whether there is a good constructing sequence for individual data instances, given a fixed nurse scheduling order. If so, the good patterns could be recognized and then extracted as new domain knowledge. By using this extracted knowledge, we could assign specific rules to the corresponding nurses beforehand and only schedule the remaining nurses with all available rules, making it possible to reduce the solution space.

Acknowledgements: The work was funded by the UK Government's major funding agency, the Engineering and Physical Sciences Research Council (EPSRC), under grant GR/R92899/01.

References: [1] Aickelin U, "An Indirect Genetic Algorithm for Set Covering Problems", Journal of the Operational Research Society, 53(10): 1118-1126,
Abstract:
Schedules can be built in a similar way to a human scheduler by using a set of rules that involve domain knowledge. This paper presents an Estimation of Distribution Algorithm (EDA) for the nurse scheduling problem, which involves choosing a suitable scheduling rule from a set for the assignment of each nurse. Unlike previous work that used Genetic Algorithms (GAs) to implement implicit learning, the learning in the proposed algorithm is explicit, i.e. we identify and mix building blocks directly. The EDA is applied to implement such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed according to an initial set of promising solutions. Subsequently, each new instance for each variable is generated by using the corresponding conditional probabilities, until all variables have been generated, i.e. in our case, a new rule string has been obtained. Another set of rule strings will be generated in this way, some of which will replace previous strings based on fitness selection. If stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again using the current set of promising rule strings. Computational results from 52 real data instances demonstrate the success of this approach. It is also suggested that the learning mechanism in the proposed approach might be suitable for other scheduling problems.