975 results for Scheduling Systems
Abstract:
The rapid developments in computer technology have resulted in a widespread use of discrete event dynamic systems (DEDSs). This type of system is complex because it exhibits properties such as concurrency, conflict and non-determinism. It is therefore important to model and analyse such systems before implementation to ensure safe, deadlock-free and optimal operation. This thesis investigates current modelling techniques and describes Petri net theory in more detail. It reviews top-down, bottom-up and hybrid Petri net synthesis techniques that are used to model large systems and introduces an object-oriented methodology to enable modelling of larger and more complex systems. Designs obtained by this methodology are modular, easy to understand and allow re-use of designs. Control is the next logical step in the design process. This thesis reviews recent developments in the control of DEDSs and investigates the use of Petri nets in the design of supervisory controllers. The scheduling of exclusive use of resources is investigated, an efficient Petri net-based scheduling algorithm is designed and a re-configurable controller is proposed. To enable the analysis and control of large and complex DEDSs, an object-oriented C++ software toolkit was developed and used to implement a Petri net analysis tool and Petri net scheduling and control algorithms. Finally, the methodology was applied to two industrial DEDSs: a prototype can-sorting machine developed by Eurotherm Controls Ltd., and a semiconductor testing plant belonging to SGS Thomson Microelectronics Ltd.
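For readers unfamiliar with the formalism, the token-game semantics at the heart of such a toolkit is compact. Below is a minimal C++ sketch of the Petri net firing rule only; the names (PetriNet, isEnabled, fire) are hypothetical and do not reproduce the thesis toolkit's API. A transition is enabled when every input place holds at least as many tokens as the arc weight; firing consumes input tokens and produces output tokens.

    #include <vector>
    #include <iostream>

    // Minimal Petri net sketch: places hold tokens; each transition has
    // weighted input and output arcs. Illustrative names only.
    struct Arc { int place; int weight; };

    struct Transition {
        std::vector<Arc> inputs, outputs;
    };

    struct PetriNet {
        std::vector<int> marking;              // tokens per place
        std::vector<Transition> transitions;

        // Enabled iff every input place carries at least 'weight' tokens.
        bool isEnabled(const Transition& t) const {
            for (const Arc& a : t.inputs)
                if (marking[a.place] < a.weight) return false;
            return true;
        }

        // Firing consumes input tokens and produces output tokens.
        bool fire(int ti) {
            const Transition& t = transitions[ti];
            if (!isEnabled(t)) return false;   // firing rule guard
            for (const Arc& a : t.inputs)  marking[a.place] -= a.weight;
            for (const Arc& a : t.outputs) marking[a.place] += a.weight;
            return true;
        }
    };

    int main() {
        // Two places, one transition moving a token from place 0 to place 1.
        PetriNet net;
        net.marking = {1, 0};
        net.transitions = {{{{0, 1}}, {{1, 1}}}};
        std::cout << "fired: " << net.fire(0) << " marking: "
                  << net.marking[0] << "," << net.marking[1] << "\n";
    }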
Abstract:
The recent explosive growth in advanced manufacturing technology (AMT) and continued development of sophisticated information technologies (IT) is expected to have a profound effect on the way we design and operate manufacturing businesses. Furthermore, the escalating capital requirements associated with these developments have significantly increased the level of risk associated with initial design, ongoing development and operation. This dissertation has examined the integration of two key sub-elements of the Computer Integrated Manufacturing (CIM) system, namely the manufacturing facility and the production control system. This research has concentrated on the interactions between production control (MRP) and an AMT-based production facility. The disappointing performance of such systems has been discussed in the context of a number of potential technological and performance incompatibilities between these two elements. It was argued that the design and selection of operating policies for both is the key to successful integration. Furthermore, policy decisions are shown to play an important role in matching the performance of the total system to the demands of the marketplace. It is demonstrated that a holistic approach to policy design must be adopted if successful integration is to be achieved. It is shown that the complexity of the issues resulting from such an approach required the formulation of a structured design methodology. Such a methodology was subsequently developed and discussed. This combined a first-principles approach to the behaviour of system elements with the specification of a detailed holistic model for use in the policy design environment. The methodology aimed to make full use of the 'low inertia' characteristics of AMT, whilst adopting a JIT configuration of MRP and re-coupling the total system to market demands. This dissertation discussed the application of the methodology to an industrial case study and the subsequent design of operational policies. Consequently, a novel approach to production control resulted, a central feature of which was a move toward reduced manual intervention in the MRP processing and scheduling logic, with increased human involvement and motivation in the management of work-flow on the shopfloor. Experimental results indicated that significant performance advantages would result from the adoption of the recommended policy set.
Abstract:
This thesis reviews the existing manufacturing control techniques and identifies their practical drawbacks when applied in a high-variety, low- and medium-volume environment. It advocates that the significant drawbacks inherent in such systems could impair their application in such a manufacturing environment. The key weaknesses identified were: the capacity-insensitive nature of Material Requirements Planning (MRP); the centralised approach to planning and control applied in Manufacturing Resources Planning (MRP II); the fact that Kanban can only be used in repetitive environments; and Optimised Production Technology's (OPT) inability to deal with transient bottlenecks. On the other hand, cellular systems offer advantages in simplifying the control problems of manufacturing, and the thesis reviews systems designed for cellular manufacturing, including Distributed Manufacturing Resources Planning (DMRP) and Flexible Manufacturing System (FMS) controllers. It advocates that a newly developed cellular manufacturing control methodology, which is fully automatic, capacity sensitive and responsive, has the potential to resolve the core manufacturing control problems discussed above. Its development is envisaged within the framework of a DMRP environment, in which each cell is provided with its own MRP II system and decision-making capability. It is a cellular-based closed-loop control system, which revolves around a single-level Bill-Of-Materials (BOM) structure and hence provides better linkage between shop-level scheduling activities and relevant entries in the Master Production Schedule (MPS). This provides a better prospect of responding rapidly to changes in the status of manufacturing resources and incoming enquiries. Moreover, it also permits automatic evaluation of capacity and due-date constraints and hence facilitates the automation of the MPS within such a system. A prototype cellular manufacturing control model was developed to demonstrate the underlying principles and operational logic of the cellular manufacturing control methodology, based on the above concept. This was shown to offer significant advantages from the perspective of operational planning and control. Results of relevant tests proved that the model is capable of producing reasonable due dates and undertaking automation of the MPS. The overall performance of the model proved satisfactory and acceptable.
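As a rough illustration of the kind of cell-level evaluation such a methodology automates, the following C++ sketch explodes a single-level BOM into workload for one cell and derives the earliest feasible due date from the cell's weekly capacity. All names and figures are illustrative assumptions, not the thesis's design.

    #include <cmath>
    #include <map>
    #include <string>
    #include <iostream>

    // Hypothetical cell-level due-date check against a single-level BOM.
    int main() {
        // Single-level BOM for one end item: component -> hours per unit.
        std::map<std::string, double> bom = {{"body", 0.5}, {"lid", 0.2}};
        double cellCapacityHours = 80.0;   // this cell's capacity per week
        int orderQty = 100;

        double load = 0.0;
        for (const auto& [comp, hrs] : bom) {
            double req = hrs * orderQty;   // workload implied by the order
            load += req;
            std::cout << comp << ": " << req << " h\n";
        }
        // Automatic due-date evaluation: whole weeks of cell capacity
        // needed to absorb the order's load.
        int weeksNeeded = (int)std::ceil(load / cellCapacityHours);
        std::cout << "load " << load << " h -> earliest due date in "
                  << weeksNeeded << " week(s)\n";
    }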
Abstract:
The present study describes a pragmatic approach to the implementation of production planning and scheduling techniques in foundries of all types and looks at the use of 'state-of-the-art' management control and information systems. Following a review of systems for the classification of manufacturing companies, a definitive statement is made which highlights the important differences between foundries (i.e. 'component makers') and other manufacturing companies (i.e. 'component buyers'). An investigation of the manual procedures which are used to plan and control the manufacture of components reveals the inherent problems facing foundry production management staff, which suggests the unsuitability of many manufacturing techniques which have been applied to general engineering companies. From the literature it was discovered that computer-assisted systems are required which are primarily 'information-based' rather than 'decision-based', whilst the availability of low-cost computers and 'packaged software' has enabled foundries to 'get their feet wet' without the financial penalties which characterized many of the early attempts at computer assistance (i.e. pre-1980). Moreover, no evidence of a single methodology for foundry scheduling emerged from the review. A philosophy for the development of a CAPM system is presented, which details the essential information requirements and puts forward proposals for the subsequent interactions between types of information and the sub-system of CAPM which they support. The work developed was oriented specifically at the functions of production planning and scheduling and introduces the concept of 'manual interaction' for effective scheduling. The techniques developed were designed to use the information which is readily available in foundries and were found to be practically successful following the implementation of the techniques in a wide variety of foundries. The limitations of the techniques developed are subsequently discussed within the wider issues which form a CAPM system, prior to a presentation of the conclusions which can be drawn from the study.
Abstract:
The thesis presents an account of an attempt to utilize expert systems within the domain of production planning and control. The use of expert systems was proposed due to the problematical nature of a particular function within British Steel Strip Products' Operations Department: the function of Order Allocation, allocating customer orders to a production week and site. Approaches to tackling problems within production planning and control are reviewed, as are the general capabilities of expert systems. The conclusions drawn are that the domain of production planning and control contains both 'soft' and 'hard' problems, and that while expert systems appear to be a useful technology for this domain, this usefulness has by no means yet been demonstrated. Also, it is argued that the mainstream methodology for developing expert systems is unsuited to the domain. A problem-driven approach is developed and used to tackle the Order Allocation function. The resulting system, UAAMS, contained two expert components. One of these, the scheduling procedure, was not fully implemented due to inadequate software. The second expert component, the product routing procedure, was untroubled by such difficulties, though it was unusable on its own; thus a second system was developed. This system, MICRO-X10, duplicated the function of X10, a complex database query routine used daily by Order Allocation. A prototype version of MICRO-X10 proved too slow to be useful but allowed implementation and maintenance issues to be analysed. In conclusion, the usefulness of the problem-driven approach to expert systems development within production planning and control is demonstrated, but restrictions imposed by current expert system software are highlighted: the limited ability of such software to cope with 'hard' scheduling constructs, together with its slow processing speeds, can restrict the current usefulness of expert systems within production planning and control.
Abstract:
This thesis describes work completed on the application of H∞ controller synthesis to the design of controllers for single-axis, high-speed independent drive design examples. H∞ controller synthesis was used in a single controller format and in a self-tuning regulator, a type of adaptive controller. Three types of industrial design examples were attempted using H∞ controller synthesis, both in simulation and on a Drives Test Facility at Aston University. The results were benchmarked against a Proportional, Integral and Derivative (PID) controller with velocity feedforward (VFF), the industrial standard for this application. An analysis of the differences between an H∞ and a PID with VFF controller was completed. A direct-form H∞ controller was determined for a limited class of weighting functions and plants, which shows the relationship between the weighting function, nominal plant and the controller parameters. The direct-form controller was utilised in two ways. Firstly, it allowed the production of simple guidelines for the industrial design of H∞ controllers. Secondly, it was used as the controller modifier in a self-tuning regulator (STR). The STR had a controller modification time (including nominal model parameter estimation) of 8 ms. A Set-Point Gain Scheduling (SPGS) controller was developed and applied to an industrial design example. The applicability of each control strategy, PID with VFF, H∞, SPGS and STR, was investigated and a set of general guidelines for their use was determined. All controllers developed were implemented using standard industrial equipment.
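For reference, the PID-with-VFF benchmark has a simple discrete-time form. The C++ sketch below is a generic implementation under assumed gains and sample time; it is not the controller used on the Drives Test Facility.

    #include <iostream>

    // Discrete PID position controller with velocity feedforward (VFF).
    // Gains and sample time are illustrative assumptions.
    struct PidVff {
        double kp, ki, kd, kvff, dt;
        double integral = 0.0, prevErr = 0.0;

        double update(double posRef, double pos, double velRef) {
            double err = posRef - pos;
            integral += err * dt;
            double deriv = (err - prevErr) / dt;
            prevErr = err;
            // Feedforward injects the demanded velocity directly, so the
            // feedback terms only have to correct the residual error.
            return kp * err + ki * integral + kd * deriv + kvff * velRef;
        }
    };

    int main() {
        PidVff c{2.0, 0.5, 0.01, 1.0, 0.001};
        double u = c.update(/*posRef*/1.0, /*pos*/0.9, /*velRef*/0.5);
        std::cout << "control output: " << u << "\n";
    }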
Abstract:
Purpose – The purpose of this paper is to investigate the “last mile” delivery link between a hub-and-spoke distribution system and its customers. The proportion of retail customers, as opposed to non-retail (trade) customers, using this type of distribution system has been growing in the UK. The paper shows the applicability of simulation to demonstrate changes in overall delivery policy to these customers. Design/methodology/approach – A case-based research method was chosen with the aim of providing an exemplar of practice and testing the proposition that simulation can be used as a tool to investigate changes in delivery policy. Findings – The results indicate the potential improvement in delivery performance, specifically in meeting timed delivery performance, that could be made by having separate retail and non-retail delivery runs from the spoke terminal to the customer. Research limitations/implications – The simulation study does not attempt to generate a vehicle routing schedule but demonstrates the effects of a change on delivery performance when comparing delivery policies. Practical implications – Scheduling and spreadsheet software are widely used and provide useful assistance in the design of delivery runs and the allocation of staff to those delivery runs. This paper demonstrates to managers the usefulness of investigating the efficacy of current design rules and presents simulation as a suitable tool for this analysis. Originality/value – A simulation model is used in a novel application to test a change in delivery policy in response to a changing delivery profile of increased retail deliveries.
Abstract:
In this paper an evolutionary algorithm is proposed for solving the problem of production scheduling in an assembly system. The aim of the paper is to investigate the possibility of applying evolutionary algorithms in the assembly system of a normally functioning enterprise producing household appliances, in order to generate the graphical production schedule.
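To illustrate the general idea (not the paper's specific algorithm), the following minimal C++ sketch evolves a job permutation with a (1+1)-EA, minimising total tardiness on a single machine; the jobs, durations and due dates are made-up data.

    #include <algorithm>
    #include <numeric>
    #include <random>
    #include <vector>
    #include <iostream>

    // Total tardiness of a job permutation on one machine.
    int tardiness(const std::vector<int>& perm,
                  const std::vector<int>& dur, const std::vector<int>& due) {
        int t = 0, total = 0;
        for (int j : perm) { t += dur[j]; total += std::max(0, t - due[j]); }
        return total;
    }

    int main() {
        std::vector<int> dur = {3, 5, 2, 4}, due = {4, 9, 3, 12};
        std::mt19937 rng(42);
        std::vector<int> best(dur.size());
        std::iota(best.begin(), best.end(), 0);
        int bestCost = tardiness(best, dur, due);

        // (1+1)-EA: mutate by swapping two jobs, keep the child if no worse.
        std::uniform_int_distribution<int> pick(0, (int)dur.size() - 1);
        for (int gen = 0; gen < 1000; ++gen) {
            auto child = best;
            std::swap(child[pick(rng)], child[pick(rng)]);
            int c = tardiness(child, dur, due);
            if (c <= bestCost) { best = child; bestCost = c; }
        }
        std::cout << "best total tardiness: " << bestCost << "\n";
    }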
Abstract:
Catering to society's demand for high-performance computing, billions of transistors are now integrated on IC chips to deliver unprecedented performance. With increasing transistor density, power consumption and power density are growing exponentially. The increasing power consumption directly translates to high chip temperature, which not only raises packaging/cooling costs, but also degrades the performance, reliability and life span of computing systems. Moreover, high chip temperature also greatly increases the leakage power consumption, which is becoming more and more significant with the continuous scaling of transistor size. As the semiconductor industry continues to evolve, power and thermal challenges have become the most critical challenges in the design of new generations of computing systems. In this dissertation, we addressed the power/thermal issues from the system-level perspective. Specifically, we sought to employ real-time scheduling methods to optimize the power/thermal efficiency of real-time computing systems, with the leakage/temperature dependency taken into consideration. In our research, we first explored the fundamental principles of how to employ dynamic voltage scaling (DVS) techniques to reduce the peak operating temperature when running a real-time application on a single-core platform. We further proposed a novel real-time scheduling method, “M-Oscillations”, to reduce the peak temperature when scheduling a hard real-time periodic task set. We also developed three checking methods to guarantee the feasibility of a periodic real-time schedule under a peak temperature constraint. We further extended our research from single-core to multi-core platforms. We investigated the energy estimation problem on multi-core platforms and developed a lightweight and accurate method to calculate the energy consumption for a given voltage schedule on a multi-core platform. Finally, we concluded the dissertation with elaborated discussions of future extensions of our research.
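The leakage/temperature dependency can be illustrated with a simple forward-Euler simulation of die temperature under a two-level speed schedule. The model and constants below are illustrative assumptions, not the dissertation's calibrated thermal model.

    #include <cmath>
    #include <iostream>

    // Forward-Euler sketch of the leakage/temperature interaction that
    // motivates thermal-aware DVS. Constants are illustrative only.
    int main() {
        double T = 45.0;                 // die temperature (C)
        const double Tamb = 25.0, dt = 0.01;
        const double a = 0.8, b = 0.05;  // heating and cooling coefficients

        // Speed schedule: run at s = 1.0 for 5 s, then s = 0.6 for 5 s.
        for (int step = 0; step < 1000; ++step) {
            double s = (step < 500) ? 1.0 : 0.6;
            double pdyn = s * s * s;                  // dynamic power ~ s^3
            double pleak = 0.1 * std::exp(0.02 * T);  // leakage grows with T
            T += dt * (a * (pdyn + pleak) - b * (T - Tamb));
        }
        std::cout << "temperature after schedule: " << T << " C\n";
    }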
Abstract:
For the past several decades, we have experienced tremendous growth, in both scale and scope, of real-time embedded systems, thanks largely to advances in IC technology. However, the traditional approach of getting a performance boost by increasing CPU frequency has become a thing of the past. Researchers from both industry and academia are turning their focus to multi-core architectures for continuous improvement of computing performance. In our research, we seek to develop efficient scheduling algorithms and analysis methods for the design of real-time embedded systems on multi-core platforms. Real-time systems are those in which the response time is as critical as the logical correctness of computational results. In addition, a variety of stringent constraints such as power/energy consumption, peak temperature and reliability are also imposed on these systems. Therefore, real-time scheduling plays a critical role in the design of such computing systems at the system level. We started our research by addressing timing constraints for real-time applications on multi-core platforms, and developed both partitioned and semi-partitioned scheduling algorithms to schedule fixed-priority, periodic, hard real-time tasks on multi-core platforms. Then we extended our research by taking temperature constraints into consideration. We developed a closed-form solution to capture temperature dynamics for a given periodic voltage schedule on multi-core platforms, and also developed three methods to check the feasibility of a periodic real-time schedule under a peak temperature constraint. We further extended our research by incorporating the power/energy constraint with thermal awareness into our research problem. We investigated the energy estimation problem on multi-core platforms, and developed a computationally efficient method to calculate the energy consumption for a given voltage schedule on a multi-core platform. In this dissertation, we present our research in detail and demonstrate the effectiveness and efficiency of our approaches with extensive experimental results.
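As a flavour of partitioned scheduling, the sketch below assigns periodic tasks to cores first-fit, using the classical Liu-Layland rate-monotonic utilization bound as the admission test. This is a textbook baseline, not the dissertation's algorithms or analyses.

    #include <cmath>
    #include <vector>
    #include <iostream>

    // First-fit partitioning of periodic tasks onto cores, admitting a
    // task only if the core's utilization stays under the rate-monotonic
    // bound n * (2^(1/n) - 1). Task parameters are made-up data.
    struct Task { double wcet, period; };

    int main() {
        std::vector<Task> tasks = {{1,4}, {2,5}, {1,10}, {3,9}, {1,3}};
        const int cores = 2;
        std::vector<std::vector<Task>> assign(cores);
        std::vector<double> util(cores, 0.0);

        for (const Task& t : tasks) {
            bool placed = false;
            for (int c = 0; c < cores && !placed; ++c) {
                int n = (int)assign[c].size() + 1;   // tasks incl. this one
                double bound = n * (std::pow(2.0, 1.0 / n) - 1.0);
                if (util[c] + t.wcet / t.period <= bound) {
                    assign[c].push_back(t);
                    util[c] += t.wcet / t.period;
                    placed = true;
                }
            }
            if (!placed) {
                std::cout << "task set not schedulable by first-fit\n";
                return 1;
            }
        }
        for (int c = 0; c < cores; ++c)
            std::cout << "core " << c << ": " << assign[c].size()
                      << " tasks, utilization " << util[c] << "\n";
    }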
Abstract:
“Availability” is the terminology used in asset-intensive industries such as petrochemicals and hydrocarbon processing to describe the readiness of equipment, systems or plants to perform their designed functions. It is a measure of a facility’s capability to meet targeted production in a safe working environment. Availability is also vital because it encompasses reliability and maintainability, allowing engineers to manage and operate facilities by focusing on one performance indicator. These benefits make availability a demanding and highly desirable area of interest and research for both industry and academia. In this dissertation, new models, approaches and algorithms have been explored to estimate and manage the availability of complex hydrocarbon processing systems. The risk of equipment failure and its effect on availability is vital in the hydrocarbon industry, and is also explored in this research. The importance of availability has encouraged companies to invest in this domain by putting effort and resources into developing novel techniques for system availability enhancement. Most of the work in this area has focused on individual equipment rather than facility- or system-level availability assessment and management. This research is focused on developing new systematic methods to estimate system availability. The main focus areas are availability estimation and management through physical asset management, risk-based availability estimation strategies, availability and safety using a failure assessment framework, and availability enhancement using early equipment fault detection and maintenance scheduling optimization.
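For orientation, the textbook steady-state relations behind such availability work are A = MTBF / (MTBF + MTTR), with series elements multiplying availabilities and redundant (parallel) pairs multiplying unavailabilities. The C++ sketch below applies them to made-up equipment figures; it does not reproduce the dissertation's models.

    #include <iostream>

    // Steady-state availability from mean time between failures (MTBF)
    // and mean time to repair (MTTR), both in hours.
    double availability(double mtbf, double mttr) {
        return mtbf / (mtbf + mttr);
    }

    int main() {
        double pumpA  = availability(2000.0, 24.0);  // illustrative figures
        double pumpB  = availability(1500.0, 36.0);
        double column = availability(8000.0, 72.0);

        // Two pumps in parallel (either suffices), in series with a column:
        // parallel pair multiplies unavailabilities, series multiplies
        // availabilities.
        double pumps  = 1.0 - (1.0 - pumpA) * (1.0 - pumpB);
        double system = pumps * column;
        std::cout << "system availability: " << system << "\n";
    }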
Abstract:
The multiuser selection scheduling concept has recently been proposed in the literature in order to increase the multiuser diversity gain and overcome the significant feedback requirements of opportunistic scheduling schemes. The main idea is that reducing the feedback overhead saves per-user power that could potentially be added to the data transmission. In this work, the authors propose to integrate the principle of multiuser selection with the proportional fair scheduling scheme. This is aimed especially at power-limited, multi-device systems in non-identically distributed fading channels. For the performance analysis, they derive closed-form expressions for the outage probabilities and the average system rate of the delay-sensitive and the delay-tolerant systems, respectively, and compare them with the full-feedback multiuser diversity schemes. The discrete rate region is analytically presented, where the maximum average system rate can be obtained by properly choosing the number of partial devices. They jointly optimise the number of partial devices and the per-device power saving in order to maximise the average system rate under the power requirement. Through their results, the authors finally demonstrate that the proposed scheme, leveraging the saved feedback power to boost the data transmission, can outperform full-feedback multiuser diversity under non-identically distributed Rayleigh fading of the devices’ channels.
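The mechanism can be sketched in a few lines: only the K strongest devices feed back each slot, and a proportional fair rule picks among them. The simulation below is an illustrative reconstruction under assumed channel statistics, not the authors' analytical model.

    #include <algorithm>
    #include <cmath>
    #include <functional>
    #include <iostream>
    #include <random>
    #include <vector>

    // Proportional fair (PF) scheduling over a partial feedback set:
    // only the K devices with the strongest instantaneous channels
    // report, and PF selects by instantaneous rate / average served rate.
    int main() {
        const int N = 8, K = 3, slots = 10000;
        std::mt19937 rng(1);
        std::vector<double> meanSnr = {1, 2, 3, 4, 5, 6, 7, 8};  // non-identical
        std::vector<double> avgRate(N, 1e-3);

        for (int t = 0; t < slots; ++t) {
            std::vector<std::pair<double, int>> rates;   // (rate, device)
            for (int i = 0; i < N; ++i) {
                // Exponential SNR = Rayleigh fading power, device-specific mean.
                std::exponential_distribution<double> snr(1.0 / meanSnr[i]);
                rates.push_back({std::log2(1.0 + snr(rng)), i});
            }
            // Partial feedback: only the K strongest devices report.
            std::partial_sort(rates.begin(), rates.begin() + K, rates.end(),
                              std::greater<>());
            // PF choice within the feedback set.
            int best = rates[0].second;
            double bestMetric = 0.0, served = 0.0;
            for (int k = 0; k < K; ++k) {
                double m = rates[k].first / avgRate[rates[k].second];
                if (m > bestMetric) {
                    bestMetric = m; best = rates[k].second; served = rates[k].first;
                }
            }
            for (int i = 0; i < N; ++i)   // exponentially weighted served rate
                avgRate[i] = 0.999 * avgRate[i] + 0.001 * (i == best ? served : 0.0);
        }
        double sum = 0.0;
        for (double r : avgRate) sum += r;
        std::cout << "average system rate: " << sum << " bit/s/Hz\n";
    }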
Abstract:
Scheduling problems are generally NP-hard combinatorial problems, and a lot of research has been done to solve these problems heuristically. However, most of the previous approaches are problem-specific and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary process of the survival of the fittest, Genetic Algorithms (GAs) have attracted much attention in solving difficult scheduling problems in recent years. Some obstacles exist when using GAs: there is no canonical mechanism to deal with constraints, which are commonly met in most real-world scheduling problems, and small changes to a solution are difficult. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs are used by mapping the solution space, and separate decoding routines then build solutions to the original problem. In our previous indirect GAs, learning is implicit and is restricted to the efficient adjustment of weights for a set of rules that are used to construct schedules. The major limitation of those approaches is that they learn in a non-human way: like most existing construction algorithms, once the best weight combination is found, the rules used in the construction process are fixed at each iteration. However, normally a long sequence of moves is needed to construct a schedule, and using fixed rules at each move is thus unreasonable and not coherent with human learning processes. When a human scheduler is working, he normally builds a schedule step by step following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even if the scheduling process is not completed yet, thus having the ability to finish a schedule by using flexible, rather than fixed, rules. In this research we intend to design more human-like scheduling algorithms, by using ideas derived from Bayesian Optimization Algorithms (BOA) and Learning Classifier Systems (LCS) to implement explicit learning from past solutions. BOA can be applied to learn to identify good partial solutions and to complete them by building a Bayesian network of the joint distribution of solutions [3]. A Bayesian network is a directed acyclic graph with each node corresponding to one variable, and each variable corresponding to an individual rule by which a schedule will be constructed step by step. The conditional probabilities are computed according to an initial set of promising solutions. Subsequently, each new instance for each node is generated by using the corresponding conditional probabilities, until values for all nodes have been generated. Another set of rule strings will be generated in this way, some of which will replace previous strings based on fitness selection. If stopping conditions are not met, the Bayesian network is updated again using the current set of good rule strings. The algorithm thereby tries to explicitly identify and mix promising building blocks. It should be noted that for most scheduling problems the structure of the network model is known and all the variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data. Thus learning can amount to 'counting' in the case of multinomial distributions.
In the LCS approach, each rule has a strength showing its current usefulness in the system, and this strength is constantly assessed [4]. To implement sophisticated learning based on previous solutions, an improved LCS-based algorithm is designed, which consists of the following three steps. The initialization step is to assign each rule at each stage a constant initial strength. Then rules are selected by using the roulette wheel strategy. The next step is to reinforce the strengths of the rules used in the previous solution, keeping the strength of unused rules unchanged. The selection step is to select fitter rules for the next generation. It is envisaged that the LCS part of the algorithm will be used as a hill climber for the BOA algorithm. This is exciting and ambitious research, which might provide the stepping-stone for a new class of scheduling algorithms. Data sets from nurse scheduling and mall problems will be used as test-beds. It is envisaged that once the concept has been proven successful, it will be implemented into general scheduling algorithms. It is also hoped that this research will give some preliminary answers about how to include human-like learning into scheduling algorithms and may therefore be of interest to researchers and practitioners in the areas of scheduling and evolutionary computation. References: 1. Aickelin, U. and Dowsland, K. (2003) 'Indirect Genetic Algorithm for a Nurse Scheduling Problem', Computers & Operations Research (in print). 2. Li, J. and Kwan, R.S.K. (2003) 'Fuzzy Genetic Algorithm for Driver Scheduling', European Journal of Operational Research 147(2): 334-344. 3. Pelikan, M., Goldberg, D. and Cantu-Paz, E. (1999) 'BOA: The Bayesian Optimization Algorithm', IlliGAL Report No. 99003, University of Illinois. 4. Wilson, S. (1994) 'ZCS: A Zeroth-level Classifier System', Evolutionary Computation 2(1): 1-18.
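The three-step LCS loop described above maps directly onto code. The following C++ sketch implements roulette-wheel rule selection with strength reinforcement; the fitness function and constants are placeholders, not the nurse or driver scheduling test-beds.

    #include <random>
    #include <vector>
    #include <iostream>

    // LCS-style rule selection: each construction stage has a pool of
    // rules with strengths; a rule is drawn by roulette wheel, and the
    // strengths of rules used in a good solution are reinforced.
    int main() {
        const int stages = 5, rulesPerStage = 4, generations = 100;
        std::mt19937 rng(7);
        // Initialization: constant initial strength for every rule.
        std::vector<std::vector<double>> strength(
            stages, std::vector<double>(rulesPerStage, 1.0));

        for (int g = 0; g < generations; ++g) {
            std::vector<int> ruleString(stages);
            for (int s = 0; s < stages; ++s) {       // roulette wheel draw
                std::discrete_distribution<int> wheel(
                    strength[s].begin(), strength[s].end());
                ruleString[s] = wheel(rng);
            }
            // Placeholder fitness: pretend rule 2 is good at every stage.
            int fitness = 0;
            for (int s = 0; s < stages; ++s) fitness += (ruleString[s] == 2);
            // Reinforcement: strengthen the rules used in a good solution,
            // leaving unused rules unchanged.
            if (fitness >= stages / 2)
                for (int s = 0; s < stages; ++s)
                    strength[s][ruleString[s]] += 0.1;
        }
        std::cout << "strength of rule 2 at stage 0: " << strength[0][2] << "\n";
    }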
Abstract:
New-generation embedded systems demand high performance, efficiency and flexibility. Reconfigurable hardware can provide all these features. However, the costly reconfiguration process and the lack of management support have prevented a broader use of these resources. To solve these issues we have developed a scheduler that deals with task-graphs at run-time, steering their execution on the reconfigurable resources while carrying out both prefetch and replacement techniques that cooperate to hide most of the reconfiguration delays. In our scheduling environment, task-graphs are analyzed at design-time to extract useful information. This information is used at run-time to obtain near-optimal schedules, escaping from local-optimum decisions, while carrying out only simple computations. Moreover, we have developed a hardware implementation of the scheduler that applies all the optimization techniques while introducing a delay of only a few clock cycles. In the experiments our scheduler clearly outperforms conventional run-time schedulers based on As-Soon-As-Possible techniques. In addition, our replacement policy, specially designed for reconfigurable systems, achieves almost optimal results regarding both reuse and performance.
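The benefit of combining reuse, prefetch and replacement can be conveyed with a toy sequential model: a task whose configuration is already loaded pays no reconfiguration penalty, and a prefetched load is hidden behind the previous task's execution. Names and costs below are illustrative assumptions, not the scheduler described above.

    #include <string>
    #include <vector>
    #include <iostream>

    // Toy model of configuration reuse and prefetch on reconfigurable
    // units, run sequentially for simplicity.
    int main() {
        const int units = 2, reconfig = 4, exec = 10;   // time units
        std::vector<std::string> loaded(units);         // bitstream per unit
        std::vector<std::string> tasks = {"A", "B", "A", "C", "A", "B"};

        int makespan = 0;
        for (size_t i = 0; i < tasks.size(); ++i) {
            int unit = -1;
            for (int u = 0; u < units; ++u)      // reuse a loaded bitstream
                if (loaded[u] == tasks[i]) unit = u;
            if (unit < 0) {
                unit = (int)(i % units);         // naive round-robin eviction
                // Prefetch: assume the load was launched during task i-1's
                // execution; it is hidden if that execution was long enough.
                bool hidden = (i > 0) && (exec >= reconfig);
                if (!hidden) makespan += reconfig;
                loaded[unit] = tasks[i];
            }
            makespan += exec;
        }
        std::cout << "makespan with reuse+prefetch: " << makespan << "\n";
    }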