166 results for scalable parallel programming
Abstract:
In practice, parallel-machine job-shop scheduling (PMJSS) is very useful for developing standard modelling approaches and generic solution techniques for many real-world scheduling problems. In this paper, based on an analysis of the structural properties of an extended disjunctive graph model, a hybrid shifting bottleneck procedure (HSBP) algorithm combined with a Tabu Search metaheuristic is developed to deal with the PMJSS problem. The original SBP algorithm for job-shop scheduling (JSS) has been significantly improved to solve the PMJSS problem with four novelties: i) a topological-sequence algorithm is proposed to decompose the PMJSS problem into a set of single-machine scheduling (SMS) and/or parallel-machine scheduling (PMS) subproblems; ii) a modified Carlier algorithm, based on the proposed lemmas and proofs, is developed to solve the SMS subproblem; iii) the Jackson rule is extended to solve the PMS subproblem; iv) a Tabu Search metaheuristic is embedded within the SBP framework to optimise the JSS and PMJSS cases. Computational experiments show that the proposed HSBP is very efficient in solving the JSS and PMJSS problems.
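As an illustrative aside: the abstract does not spell out how the Jackson rule is extended to the parallel-machine case, but a common reading is earliest-due-date (EDD) list scheduling across identical machines. The minimal sketch below works under that assumption; the job data, function name, and machine model are hypothetical and are not taken from the paper.

```python
# Hypothetical sketch: an EDD-style (Jackson-rule) list-scheduling heuristic
# extended to identical parallel machines, minimising maximum lateness Lmax.
# Job data (release r, processing p, due date d) are made up for illustration.

def edd_parallel(jobs, m):
    """jobs: list of (r, p, d); m: number of identical machines."""
    # Sort by due date (EDD), breaking ties by release date.
    pending = sorted(jobs, key=lambda j: (j[2], j[0]))
    machines = [0.0] * m              # next free time of each machine
    lmax = float("-inf")
    schedule = []
    for r, p, d in pending:
        i = min(range(m), key=lambda k: machines[k])   # earliest-free machine
        start = max(machines[i], r)
        finish = start + p
        machines[i] = finish
        lmax = max(lmax, finish - d)
        schedule.append((i, start, finish))
    return schedule, lmax

if __name__ == "__main__":
    jobs = [(0, 4, 8), (1, 3, 6), (2, 5, 12), (0, 2, 5)]
    sched, lmax = edd_parallel(jobs, m=2)
    print(sched, "Lmax =", lmax)
```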
Abstract:
The scheduling of locomotive movements on cane railways has proven to be a very complex task. Various optimisation methods have been used over the years to try to produce an optimised schedule that eliminates or minimises bin supply delays to harvesters and the factory, while minimising the number of locomotives, locomotive shifts and cane bins, as well as cane age. This paper reports on a new attempt to develop an automatic scheduler using a mathematical model, solved with mixed integer programming and constraint programming approaches, built on blocking parallel job shop scheduling fundamentals. The model solution has been explored using conventional constraint programming search techniques and found to produce a reasonable schedule for small-scale problems with up to nine harvesters. While more effort is required to complete the development of the full model with metaheuristic search techniques, the work completed to date gives confidence that metaheuristic techniques will provide near-optimal solutions in reasonable time.
Abstract:
To cover a wide range of pulsed power applications, this paper proposes a modularity concept to improve the performance and flexibility of the pulsed power supply. The proposed scheme exploits parallel and series configurations of flyback modules to obtain high voltage levels with fast rise time (dv/dt). Prototypes were implemented using 600-V insulated-gate bipolar transistor (IGBT) switches to generate up to 4-kV output pulses at a 1-kHz repetition rate for experimentation. To assess the proposed modular approach for a higher number of modules, prototypes were implemented using 1700-V IGBT switches, based on ten series-connected modules, and tested up to 20 kV. The experimental results verified the effectiveness of the proposed method.
Abstract:
Programming is a subject that many beginning students find difficult. This paper describes a knowledge base designed for analyzing programs written in the PHP web development language. The aim is to use this knowledge base in an Intelligent Tutoring System that will provide effective feedback to students. The main focus of this research is that a programming exercise can have many correct solutions. This paper presents an overview of how the proposed knowledge base can be used to accept different solutions to a given exercise.
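As an illustrative aside: the abstract does not describe how the knowledge base represents alternative solutions, so the sketch below only shows one generic way of accepting several syntactically different but equivalent answers, by normalising superficial differences (comments, whitespace, variable names) before comparison. The PHP snippets, function names, and matching strategy are all assumptions, not the paper's design.

```python
# Hypothetical illustration of accepting multiple correct solutions to one
# exercise by normalising superficial differences before comparison.
# This is NOT the paper's knowledge-base design.
import re

def normalise_php(src: str) -> str:
    """Crude canonical form: strip comments/whitespace, rename variables."""
    src = re.sub(r"//[^\n]*|/\*.*?\*/", "", src, flags=re.S)   # drop comments
    names = {}
    def rename(match):
        name = match.group(0)
        names.setdefault(name, f"$v{len(names)}")              # canonical name
        return names[name]
    src = re.sub(r"\$\w+", rename, src)                        # rename variables
    return re.sub(r"\s+", "", src)                             # drop whitespace

REFERENCE_SOLUTIONS = [
    "$sum = $a + $b; echo $sum;",
    "echo $a + $b;",
]

def is_accepted(student_src: str) -> bool:
    canon = normalise_php(student_src)
    return any(canon == normalise_php(ref) for ref in REFERENCE_SOLUTIONS)

print(is_accepted("$total = $x + $y;  echo $total; // print result"))  # True
```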
Abstract:
Recognizing the impact of reconfiguration on the QoS of running systems is especially necessary for choosing an appropriate approach to dealing with the dynamic evolution of mission-critical or non-stop business systems. The rationale is that the impaired QoS caused by inappropriate use of dynamic approaches is unacceptable for such running systems. Predicting this impact in advance poses a two-fold challenge. First, a unified benchmark is necessary to expose the QoS problems of existing dynamic approaches. Second, an abstract representation is necessary to provide a basis for modeling and comparing the QoS of existing and new dynamic reconfiguration approaches. Our previous work [8] successfully evaluated the QoS assurance capabilities of existing dynamic approaches and provided guidance on the appropriate use of particular approaches. This paper reinvestigates our evaluations, extending them into concurrent and parallel environments by abstracting hardware and software conditions to design an evaluation context. We report the new evaluation results and conclude with updated impact analysis and guidance.
Abstract:
Enterprise Systems (ES) have emerged as possibly the most important and challenging development in the corporate use of information technology in the last decade. Organizations have invested heavily in these large, integrated application software suites, expecting improvements in business processes, management of expenditure, customer service and, more generally, competitiveness, as well as improved access to better information/knowledge (i.e., business intelligence and analytics). Forrester survey data consistently shows that investment in ES and enterprise applications in general remains the top IT spending priority, with the ES market estimated at $38 billion and predicted to grow at a steady rate of 6.9%, reaching $50 billion by 2012 (Wang & Hamerman, 2008). Yet organizations have failed to realize all the anticipated benefits. One of the key reasons is the inability of employees to properly utilize the capabilities of enterprise systems to complete their work and extract the information critical to decision making. In response, universities (tertiary institutes) have developed academic programs aimed at addressing the skill gaps. In parallel with the proliferation of ES, there has been growing recognition of the importance of teaching Enterprise Systems at tertiary education institutes. Many academic papers have discussed the important role of Enterprise Systems curricula at tertiary education institutes (Ask, 2008; Hawking, 2004; Stewart, 2001), covering the teaching philosophies, teaching approaches and challenges in Enterprise Systems education. Following global trends, tertiary institutes in the Pacific-Asian region commenced introducing Enterprise Systems curricula in the late 1990s with a range of subjects (a subject represents a single unit, rather than a collection of units, which we refer to as a course) in faculties, schools and departments of Information Technology, Business and, in some cases, Engineering. Many tertiary institutes commenced their initial subject offerings around four salient concepts of Enterprise Systems: (1) Enterprise Systems implementation, (2) introductions to the core modules of Enterprise Systems, (3) application customization using a programming language (e.g. ABAP) and (4) systems administration. While universities have come a long way in developing curricula in the enterprise systems area, many obstacles remain: the high cost of technology, a shortage of qualified faculty to teach, a lack of teaching materials, and so on.
Abstract:
A composite SaaS (Software as a Service) is a software application comprised of several software components and data components. The composite SaaS placement problem is to determine where each of the components should be deployed in a cloud computing environment such that the performance of the composite SaaS is optimal. From a computational point of view, the composite SaaS placement problem is a large-scale combinatorial optimization problem, for which an Iterative Cooperative Co-evolutionary Genetic Algorithm (ICCGA) was previously proposed. The ICCGA can find solutions of reasonable quality, but its computation time is noticeably long. Aiming to improve the computation time, we propose an unsynchronized Parallel Cooperative Co-evolutionary Genetic Algorithm (PCCGA) in this paper. Experimental results show that the PCCGA not only requires less computation time but also generates better-quality solutions than the ICCGA.
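As an illustrative aside: the PCCGA's encoding, operators, and communication scheme are not given in the abstract. The toy sketch below only conveys the idea of unsynchronised cooperative co-evolution: each species evolves one block of variables in its own thread and publishes its best block to a shared vector of representatives without a per-generation barrier. The sphere function stands in for the real SaaS-placement objective, and all parameters are made up.

```python
# Toy sketch of unsynchronised cooperative co-evolution (assumption: each
# species evolves one block of variables in its own thread, publishing its
# best block to a shared "representatives" vector with no generation barrier).
import random
import threading

DIM, SPECIES, BLOCK = 12, 3, 4            # 3 species x 4 variables each
representatives = [random.uniform(-5, 5) for _ in range(DIM)]
lock = threading.Lock()

def fitness(block_idx, block):
    with lock:
        x = list(representatives)         # snapshot of current collaborators
    x[block_idx * BLOCK:(block_idx + 1) * BLOCK] = block
    return sum(v * v for v in x)          # minimise the sphere function

def evolve_species(block_idx, generations=200, pop_size=20):
    pop = [[random.uniform(-5, 5) for _ in range(BLOCK)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda b: fitness(block_idx, b))
        best = scored[0]
        with lock:                        # publish best block immediately
            representatives[block_idx * BLOCK:(block_idx + 1) * BLOCK] = best
        # simple truncation selection + Gaussian mutation, with elitism
        pop = [[g + random.gauss(0, 0.3) for g in random.choice(scored[:5])]
               for _ in range(pop_size - 1)] + [best]

threads = [threading.Thread(target=evolve_species, args=(i,)) for i in range(SPECIES)]
for t in threads: t.start()
for t in threads: t.join()
print("final objective:", sum(v * v for v in representatives))
```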
Abstract:
In Australia, railway systems play a vital role in transporting the sugarcane crop from farms to mills. The sugarcane transport system is very complex and uses daily schedules, consisting of a set of locomotive runs, to satisfy the requirements of the mill and harvesters. The total cost of sugarcane transport operations is very high; over 35% of the total cost of sugarcane production in Australia is incurred in cane transport. Efficient schedules for sugarcane transport can reduce the cost and limit the negative effects that this system can have on the raw sugar production system. Formulating the train scheduling problem as a blocking parallel-machine job shop scheduling (BPMJSS) problem has several benefits: it prevents two trains passing in one section at the same time; it keeps the train activities (operations) in sequence during each run (trip) by applying precedence constraints; it passes the trains through a section in the correct order (priorities of passing trains) by applying disjunctive constraints; and it eases train passing by resolving rail conflicts through blocking constraints and parallel machine scheduling. The sugarcane rail operations are therefore formulated as a BPMJSS problem. Mixed integer programming and constraint programming approaches are used to describe the BPMJSS problem, and the model is solved by integrating constraint programming, mixed integer programming and search techniques. The optimality performance is tested with the Optimization Programming Language (OPL) and CPLEX software on small and large instances based on specific criteria, and a real-life problem is used to verify and validate the approach. Constructive heuristics and new metaheuristics, including simulated annealing and tabu search, are proposed to solve this complex and NP-hard scheduling problem and produce a more efficient scheduling system. Innovative hybrid and hyper metaheuristic techniques are developed and coded in C# to improve solution quality and CPU time. Hybrid techniques integrate heuristic and metaheuristic techniques consecutively, while hyper techniques fully integrate different metaheuristic techniques, heuristic techniques, or both.
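As an illustrative aside: the abstract names simulated annealing among the proposed metaheuristics but, naturally, gives no pseudocode. The generic skeleton below shows a simulated-annealing loop over job permutations with a swap neighbourhood and a toy tardiness objective; it is written in Python for brevity and does not reproduce the paper's BPMJSS encoding, blocking constraints, or C# implementation.

```python
# Generic simulated-annealing skeleton for a job-sequencing problem, shown only
# to make the metaheuristic idea concrete; the BPMJSS encoding and blocking
# constraints of the paper are not reproduced here.
import math
import random

def total_tardiness(sequence, proc_times, due):
    """Toy single-machine objective: sum of job tardiness for a sequence."""
    t, total = 0, 0
    for j in sequence:
        t += proc_times[j]
        total += max(0, t - due[j])
    return total

def simulated_annealing(n, objective, t0=100.0, cooling=0.995, iters=20000):
    current = list(range(n))
    random.shuffle(current)
    cost = objective(current)
    best, best_cost, temp = current[:], cost, t0
    for _ in range(iters):
        i, k = random.sample(range(n), 2)
        cand = current[:]
        cand[i], cand[k] = cand[k], cand[i]      # swap-neighbourhood move
        delta = objective(cand) - cost
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current, cost = cand, cost + delta
            if cost < best_cost:
                best, best_cost = current[:], cost
        temp *= cooling                           # geometric cooling schedule
    return best, best_cost

if __name__ == "__main__":
    random.seed(1)
    p = [random.randint(1, 9) for _ in range(15)]
    d = [random.randint(5, 60) for _ in range(15)]
    seq, cost = simulated_annealing(15, lambda s: total_tardiness(s, p, d))
    print("best sequence:", seq, "total tardiness:", cost)
```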
Abstract:
Vehicular safety applications, such as cooperative collision warning systems, rely on beaconing to provide the situational awareness needed to predict, and therefore avoid, possible collisions. Beaconing is the continual exchange of vehicle motion-state information, such as position, speed, and heading, which enables each vehicle to track its neighboring vehicles in real time. This work presents a context-aware adaptive beaconing scheme that dynamically adapts the beacon repetition rate based on an estimated channel load and the danger severity of the interactions among vehicles. The safety, efficiency, and scalability of the new scheme are evaluated by simulating vehicle collisions caused by inattentive drivers under various road traffic densities. Simulation results show that the new scheme is more efficient and scalable, and improves safety more than the existing non-adaptive and adaptive-rate schemes.
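As an illustrative aside: the adaptation law itself is not given in the abstract. The sketch below only shows the general shape of a context-aware policy that raises the beacon rate when interaction danger is high and backs off when the channel is loaded; the rate bounds, weighting, and time-to-collision danger proxy are assumptions, not the scheme evaluated in the paper.

```python
# Illustrative (not the paper's) beacon-rate adaptation: more danger -> faster
# beaconing, more channel load -> slower beaconing, clamped to protocol bounds.
MIN_RATE_HZ, MAX_RATE_HZ = 1.0, 10.0     # assumed protocol bounds

def danger_severity(distance_m, closing_speed_mps, ttc_critical_s=4.0):
    """Crude proxy: map time-to-collision onto [0, 1]."""
    if closing_speed_mps <= 0:
        return 0.0                       # vehicles are not converging
    ttc = distance_m / closing_speed_mps
    return min(1.0, ttc_critical_s / max(ttc, 1e-6))

def beacon_rate(channel_busy_ratio, danger):
    """channel_busy_ratio and danger are both in [0, 1]."""
    base = MIN_RATE_HZ + danger * (MAX_RATE_HZ - MIN_RATE_HZ)
    rate = base * (1.0 - 0.7 * channel_busy_ratio)   # back off under load
    return max(MIN_RATE_HZ, min(MAX_RATE_HZ, rate))

# Example: near, fast-closing vehicle on a moderately loaded channel.
print(beacon_rate(channel_busy_ratio=0.4, danger=danger_severity(30.0, 15.0)))
```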
Abstract:
This paper presents a maintenance optimisation method for a multi-state series-parallel system considering economic dependence and state-dependent inspection intervals. The objective function considered in the paper is the average revenue per unit time calculated based on the semi-regenerative theory and the universal generating function (UGF). A new algorithm using the stochastic ordering is also developed in this paper to reduce the search space of maintenance strategies and to enhance the efficiency of optimisation algorithms. A numerical simulation is presented in the study to evaluate the efficiency of the proposed maintenance strategy and optimisation algorithms. The simulation result reveals that maintenance strategies with opportunistic maintenance and state-dependent inspection intervals are more cost-effective when the influence of economic dependence and inspection cost is significant. The study further demonstrates that the optimisation algorithm proposed in this paper has higher computational efficiency than the commonly employed heuristic algorithms.
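As an illustrative aside: the sketch below shows standard universal generating function composition for a series-parallel structure (performance distributions as dictionaries, capacities added for parallel blocks, the minimum taken for series blocks) together with a demand-based availability check. The element state data and demand level are invented, and the paper's semi-regenerative revenue model and stochastic-ordering reduction are not reproduced.

```python
# Standard universal generating function (UGF) composition for a multi-state
# series-parallel system; element state data and the demand level are made up.
from itertools import product

def compose(u1, u2, op):
    """Combine two UGFs {performance: probability} with a composition operator."""
    out = {}
    for (g1, p1), (g2, p2) in product(u1.items(), u2.items()):
        g = op(g1, g2)
        out[g] = out.get(g, 0.0) + p1 * p2
    return out

parallel = lambda a, b: compose(a, b, lambda x, y: x + y)   # capacities add
series   = lambda a, b: compose(a, b, min)                  # bottleneck capacity

def availability(u, demand):
    return sum(p for g, p in u.items() if g >= demand)

# Two parallel pumps feeding one valve (hypothetical performance rates in t/h).
pump  = {0: 0.05, 50: 0.20, 100: 0.75}
valve = {0: 0.02, 120: 0.98}
system = series(parallel(pump, pump), valve)
print(system, "A(demand=100) =", round(availability(system, 100), 4))
```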
Abstract:
Mesenchymal stem cells (MSC) are emerging as a leading cellular therapy for a number of diseases. However, for such treatments to become available as a routine therapeutic option, efficient and cost-effective means for industrial manufacture of MSC are required. At present, clinical grade MSC are manufactured through a process of manual cell culture in specialized cGMP facilities. This process is open, extremely labor intensive, costly, and impractical for anything more than a small number of patients. While it has been shown that MSC can be cultivated in stirred bioreactor systems using microcarriers, providing a route to process scale-up, the degree of numerical expansion achieved has generally been limited. Furthermore, little attention has been given to the issue of primary cell isolation from complex tissues such as placenta. In this article we describe the initial development of a closed process for bulk isolation of MSC from human placenta, and subsequent cultivation on microcarriers in scalable single-use bioreactor systems. Based on our initial data, we estimate that a single placenta may be sufficient to produce over 7,000 doses of therapeutic MSC using a large-scale process.
Abstract:
The act of computer programming is generally considered to be temporally removed from a computer program's execution. In this paper we discuss the idea of programming as an activity that takes place within the temporal bounds of a real-time computational process and its interactions with the physical world. We ground these ideas within the context of livecoding -- a live audiovisual performance practice. We then describe how the development of the programming environment "Impromptu" has addressed our ideas of programming with time and the notion of the programmer as an agent in a cyber-physical system.
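As an illustrative aside: Impromptu itself is a Scheme-based environment, so the Python fragment below is not its API; it only illustrates the temporal-recursion idiom associated with livecoding, in which a function performs one step and then schedules its own next invocation at a precise future time, interleaving code execution with wall-clock time.

```python
# Illustration of the "temporal recursion" idiom associated with livecoding
# environments such as Impromptu (plain Python, not Impromptu's API): a
# function performs one step and schedules its own next call in the future.
import threading
import time

START = time.monotonic()

def metro(beat=0, period=0.5, stop_after=8):
    """Print a 'tick' on a steady pulse, then reschedule itself."""
    print(f"tick {beat} at {time.monotonic() - START:.2f}s")
    if beat + 1 < stop_after:
        # Schedule the next invocation relative to the ideal time grid, not
        # to 'now', so timing error does not accumulate across beats.
        next_time = START + (beat + 1) * period
        threading.Timer(next_time - time.monotonic(), metro,
                        args=(beat + 1, period, stop_after)).start()

metro()
time.sleep(5)   # keep the main thread alive while the timers fire
```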
Abstract:
It is acknowledged around the world that many university students struggle with learning to program (McCracken et al., 2001; McGettrick et al., 2005). In this paper, we describe how we have developed a research programme to systematically study and incrementally improve our teaching. We have adopted a research programme with three elements: (1) a theory that provides an organising framework for defining the type of phenomena and data of interest, (2) data on how the class as a whole performs on formative assessment tasks that are framed from within the organising framework, and (3) data from one-on-one think aloud sessions, to establish why students struggle with some of those in-class formative assessment tasks. We teach introductory computer programming, but this three-element structure of our research is applicable to many areas of engineering education research.