85 results for Simplex. CPLEX. Parallel Efficiency. Parallel Scalability. Linear Programming
at Queensland University of Technology - ePrints Archive
Abstract:
We present an algorithm called Optimistic Linear Programming (OLP) for learning to optimize average reward in an irreducible but otherwise unknown Markov decision process (MDP). OLP uses its experience so far to estimate the MDP. It chooses actions by optimistically maximizing estimated future rewards over a set of next-state transition probabilities that are close to the estimates, a computation that corresponds to solving linear programs. We show that the total expected reward obtained by OLP up to time T is within C(P) log T of the reward obtained by the optimal policy, where C(P) is an explicit, MDP-dependent constant. OLP is closely related to an algorithm proposed by Burnetas and Katehakis, with four key differences: OLP is simpler; it does not require knowledge of the supports of transition probabilities; the proof of its regret bound is simpler; but its regret bound is a constant factor larger than that of their algorithm. OLP is also similar in flavor to an algorithm recently proposed by Auer and Ortner, but OLP is simpler and its regret bound has a better dependence on the size of the MDP.
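As an illustration of the optimistic step, the minimal Python sketch below solves the inner maximization such optimistic algorithms face for one state-action pair: maximize expected value over all transition distributions within an L1 ball around the empirical estimate. This inner linear program has a well-known closed-form solution; the probabilities, values and radius are hypothetical, and the sketch is not the paper's exact procedure.

```python
import numpy as np

def optimistic_value(p_hat, v, radius):
    """Maximize p . v over distributions p with ||p - p_hat||_1 <= radius.

    Closed-form solution of the inner LP: shift as much probability mass
    as allowed onto the highest-valued next state, taking mass away from
    the lowest-valued states first.
    """
    p = p_hat.copy()
    best = np.argmax(v)
    p[best] = min(1.0, p[best] + radius / 2.0)
    # Remove the surplus mass from the lowest-valued states.
    for s in np.argsort(v):
        if s == best:
            continue
        excess = p.sum() - 1.0
        if excess <= 0:
            break
        p[s] = max(0.0, p[s] - excess)
    return float(p @ v)

# Example: empirical transition estimate over 3 next states.
p_hat = np.array([0.5, 0.3, 0.2])
v = np.array([1.0, 0.0, 2.0])
print(optimistic_value(p_hat, v, radius=0.2))  # >= the plain estimate p_hat @ v
```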
Abstract:
We consider the problem of controlling a Markov decision process (MDP) with a large state space, so as to minimize average cost. Since it is intractable to compete with the optimal policy for large scale problems, we pursue the more modest goal of competing with a low-dimensional family of policies. We use the dual linear programming formulation of the MDP average cost problem, in which the variable is a stationary distribution over state-action pairs, and we consider a neighborhood of a low-dimensional subset of the set of stationary distributions (defined in terms of state-action features) as the comparison class. We propose a technique based on stochastic convex optimization and give bounds that show that the performance of our algorithm approaches the best achievable by any policy in the comparison class. Most importantly, this result depends on the size of the comparison class, but not on the size of the state space. Preliminary experiments show the effectiveness of the proposed algorithm in a queuing application.
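For concreteness, here is a minimal sketch of the exact dual LP the abstract starts from (before any low-dimensional restriction), solved with scipy. The variable is a stationary distribution mu over state-action pairs; the paper's feature-based comparison class and stochastic convex optimization technique are not reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

def solve_dual_lp(P, c):
    """Solve the dual LP of the average-cost MDP problem.

    P[s, a, s2] are transition probabilities, c[s, a] are costs.
    The variable mu[s, a] is a stationary state-action distribution:
        minimize   sum_{s,a} c[s,a] * mu[s,a]
        subject to sum_a mu[s2,a] = sum_{s,a} P[s,a,s2] * mu[s,a]  for all s2
                   sum_{s,a} mu[s,a] = 1,   mu >= 0
    """
    S, A, _ = P.shape
    n = S * A
    # Flow-conservation constraints, one row per next state s2.
    A_eq = np.zeros((S + 1, n))
    for s2 in range(S):
        for s in range(S):
            for a in range(A):
                A_eq[s2, s * A + a] -= P[s, a, s2]
        for a in range(A):
            A_eq[s2, s2 * A + a] += 1.0
    A_eq[S, :] = 1.0  # normalisation: mu sums to one
    b_eq = np.zeros(S + 1)
    b_eq[S] = 1.0
    res = linprog(c.reshape(n), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.x.reshape(S, A), res.fun  # stationary distribution, average cost

# Two-state, two-action toy MDP.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
c = np.array([[1.0, 2.0], [0.5, 3.0]])
mu, avg_cost = solve_dual_lp(P, c)
print(mu, avg_cost)
```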
Abstract:
This paper studies the flow-shop scheduling problem with parallel machines at each stage (machine center). For each job, a release date, a due date and a processing time for each of its operations are given. The scheduling criterion consists of three parts: the total weighted earliness, the total weighted tardiness and the total weighted waiting time. The criterion takes into account the costs of storing semi-manufactured products in the course of production and ready-made products, as well as penalties for not meeting the deadlines stated in the contract with the customer. To solve the problem, three constructive algorithms and three metaheuristics (based on Tabu Search and Simulated Annealing techniques) are developed and experimentally analyzed. All the proposed algorithms operate on the notion of a so-called operation processing order, i.e. the order of operations on each machine. We show that the problem of constructing a schedule from a given operation processing order can be reduced to a linear programming task. We also propose an approximation algorithm for schedule construction and give conditions under which it is optimal.
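To illustrate the reduction, the sketch below solves a simplified single-machine instance: given a fixed processing order, choosing start times to minimise total weighted earliness, tardiness and waiting is a pure linear program. The data and weights (r, p, d, we, wt, ww) are hypothetical numpy arrays, and the paper's multi-stage, parallel-machine setting is not modelled.

```python
import numpy as np
from scipy.optimize import linprog

def schedule_lp(order, r, p, d, we, wt, ww):
    """Given a fixed processing order on one machine, find start times that
    minimise total weighted earliness + tardiness + waiting (a pure LP).

    Variables per job j: start s_j, earliness E_j, tardiness T_j.
    Waiting time is s_j - r_j, so its weight goes directly on s_j.
    """
    n = len(order)
    # Variable layout: [s_0..s_{n-1}, E_0..E_{n-1}, T_0..T_{n-1}]
    c = np.concatenate([ww, we, wt]).astype(float)
    A_ub, b_ub = [], []
    def row():
        return np.zeros(3 * n)
    for k, j in enumerate(order):
        # E_j >= d_j - (s_j + p_j)   ->  -s_j - E_j <= p_j - d_j
        a = row(); a[j] = -1; a[n + j] = -1
        A_ub.append(a); b_ub.append(p[j] - d[j])
        # T_j >= (s_j + p_j) - d_j   ->   s_j - T_j <= d_j - p_j
        a = row(); a[j] = 1; a[2 * n + j] = -1
        A_ub.append(a); b_ub.append(d[j] - p[j])
        if k > 0:  # one job at a time, in the given processing order
            i = order[k - 1]
            a = row(); a[i] = 1; a[j] = -1
            A_ub.append(a); b_ub.append(-p[i])
    bounds = [(r[j], None) for j in range(n)] + [(0, None)] * (2 * n)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
    return res.x[:n], res.fun - float(ww @ r)  # subtract constant ww . r term

order = [0, 2, 1]
r = np.array([0, 1, 0]); p = np.array([3, 2, 2]); d = np.array([5, 9, 4])
w = np.ones(3)
print(schedule_lp(order, r, p, d, we=w, wt=w, ww=0.5 * w))
```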
Abstract:
Organisations are constantly seeking new ways to improve operational efficiency. This study investigates a novel way to identify potential efficiency gains in business operations by observing how they were carried out in the past and then exploring better ways of executing them, taking into account trade-offs between time, cost and resource utilisation. This paper demonstrates how these trade-offs can be incorporated into the assessment of alternative process execution scenarios by making use of a cost environment. A number of optimisation techniques are proposed to explore and assess alternative execution scenarios. The objective function is represented by a cost structure that captures different process dimensions. An experimental evaluation is conducted to analyse the performance and scalability of the optimisation techniques: integer linear programming (ILP), hill climbing, tabu search, and our earlier proposed hybrid genetic algorithm. The findings demonstrate that the hybrid genetic algorithm is scalable and outperforms the other techniques. Moreover, we argue that ILP is unrealistic in this setting because it cannot handle complex cost functions such as the ones we propose. Finally, we show how cost-related insights can be gained from improved execution scenarios and how these can be used to put forward recommendations for reducing process-related cost and overhead within organisations.
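A minimal sketch of one of the evaluated techniques, hill climbing over execution scenarios, is shown below. The scenario representation, the activity/resource names, the cost figures and the neighbourhood move are all invented for illustration and are far simpler than the cost structures the paper proposes.

```python
def hill_climb(scenario, neighbours, cost, max_iter=1000):
    """Generic hill climbing over process-execution scenarios.

    scenario      -- any representation of an execution scenario
    neighbours(s) -- candidate single-step modifications of s
    cost(s)       -- combined cost (e.g. weighted time + cost + utilisation)
    """
    current, current_cost = scenario, cost(scenario)
    for _ in range(max_iter):
        candidates = neighbours(current)
        if not candidates:
            break
        best = min(candidates, key=cost)
        if cost(best) >= current_cost:
            break  # local optimum reached
        current, current_cost = best, cost(best)
    return current, current_cost

# Toy usage: assign activities to resources; cost mixes duration and pay rate.
durations = {("a", "r1"): 4, ("a", "r2"): 3, ("b", "r1"): 2, ("b", "r2"): 5}
rates = {"r1": 10, "r2": 9}

def cost(assign):
    return sum(durations[(act, res)] * rates[res] for act, res in assign.items())

def neighbours(assign):
    return [{**assign, act: res}
            for act in assign for res in ("r1", "r2") if res != assign[act]]

print(hill_climb({"a": "r1", "b": "r1"}, neighbours, cost))
```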
Abstract:
This study estimates the environmental efficiency of international listed firms in 10 worldwide sectors from 2007 to 2013 by applying an order-m method, a non-parametric approach based on the free disposal hull with subsampling bootstrapping. Using a conventional output of gross profit and two conventional inputs of labor and capital, the study examines order-m environmental efficiency accounting for the presence of each of 10 undesirable inputs/outputs, and measures the shadow price of each undesirable input and output. The results show that there is greater potential for reducing undesirable inputs than undesirable outputs; on average, total energy, electricity, or water usage could be reduced by 50%. The median shadow prices of undesirable inputs, however, are much higher than the surveyed representative market prices. Approximately 10% of the firms in the sample appear to be potential sellers or production reducers in terms of undesirable inputs/outputs, which implies that the price of each item at its current level has little impact on most of the firms. Moreover, the study shows that a firm's environmental, social, and governance activities do not considerably affect its environmental efficiency.
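Under simplifying assumptions, an order-m input-efficiency score can be approximated by Monte Carlo in the spirit of Cazals, Florens and Simar: repeatedly draw m firms producing at least the evaluated firm's outputs and average the best achievable proportional input contraction. The sketch below illustrates only this core idea; the study's treatment of undesirable inputs/outputs and shadow prices is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def order_m_input_eff(x0, y0, X, Y, m=25, B=200):
    """Monte Carlo order-m input efficiency for a firm with inputs x0, outputs y0.

    X: (n, k) inputs and Y: (n, q) outputs of the n observed firms.
    For each of B draws, sample m firms dominating y0 in outputs and take
    the smallest proportional input contraction achievable against them
    (an FDH input-efficiency score); then average over the draws.
    A score below 1 indicates the inputs could be reduced.
    """
    dominating = np.all(Y >= y0, axis=1)
    Xd = X[dominating]
    if len(Xd) == 0:
        return np.nan  # no comparable firms
    scores = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, len(Xd), size=m)
        scores[b] = np.min(np.max(Xd[idx] / x0, axis=1))
    return float(scores.mean())
```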
Abstract:
This article contributes an original integrated model of an open-pit coal mine for supporting energy-efficient decisions. Mixed integer linear programming is used to formulate a general integrated model of the operational energy consumption of four common open-pit coal mining subsystems: excavation and haulage, stockpiles, processing plants and belt conveyors. Mines are represented as connected instances of the four subsystems, in a flow-sheet manner, which are then fitted to data provided by the mine operators. Solving the integrated model ensures that the subsystems' operations are synchronised and whole-of-mine energy efficiency is encouraged. A case study of an open-pit coal mine is investigated to validate the proposed methodology. Opportunities are presented for using the model to aid energy-efficient decision-making at various levels of a mine, and future work to improve the approach is described.
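A toy sketch of the kind of MILP described, written with the PuLP modelling library, appears below: binary on/off variables couple fixed energy overheads to material flows that must jointly meet plant demand. All subsystem names and coefficients are hypothetical, and the model is far smaller than the paper's integrated one.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

# Hypothetical data: tonnes to move and per-subsystem energy coefficients.
demand = 1000            # tonnes of coal that must reach the plant
subsystems = ["truck_small", "truck_large", "conveyor"]
energy_per_tonne = {"truck_small": 7.0, "truck_large": 5.5, "conveyor": 3.0}
fixed_energy = {"truck_small": 50, "truck_large": 120, "conveyor": 400}
capacity = {"truck_small": 400, "truck_large": 800, "conveyor": 900}

prob = LpProblem("mine_energy", LpMinimize)
flow = {s: LpVariable(f"flow_{s}", lowBound=0) for s in subsystems}
on = {s: LpVariable(f"on_{s}", cat=LpBinary) for s in subsystems}

# Objective: variable plus fixed energy of every active subsystem.
prob += lpSum(energy_per_tonne[s] * flow[s] + fixed_energy[s] * on[s]
              for s in subsystems)
# Flows are synchronised: together they must meet plant demand.
prob += lpSum(flow[s] for s in subsystems) == demand
for s in subsystems:
    prob += flow[s] <= capacity[s] * on[s]  # no flow through an idle subsystem

prob.solve()
for s in subsystems:
    print(s, value(flow[s]), value(on[s]))
```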
Abstract:
This thesis investigates factors that impact the energy efficiency of a mining operation. An innovative mathematical framework and solution approach are developed to model, solve and analyse an open-pit coal mine. A case study in South East Queensland is investigated to validate the approach and to explore opportunities for using it to aid long-, medium- and short-term decision makers.
Abstract:
In the past few years, green initiatives related to data centers have received steadily increasing attention. While various energy-aware measures have been developed for data centers, the accompanying requirement of improving the performance efficiency of application assignment has yet to be fulfilled. For instance, many energy-aware measures applied to data centers involve a trade-off between energy consumption and Quality of Service (QoS). To address this problem, this paper presents a novel concept of profiling to facilitate offline optimization of a deterministic assignment of applications to virtual machines. Then, a profile-based model is established for obtaining near-optimal allocations of applications to virtual machines with consideration of three major objectives: energy cost, CPU utilization efficiency and application completion time. From this model, a profile-based and scalable matching algorithm is developed to solve it. The assignment efficiency of our algorithm is then compared with that of the Hungarian algorithm, which yields the optimal solution but does not scale well.
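The comparison in the last sentence can be illustrated as follows: scipy's linear_sum_assignment implements the Hungarian algorithm, and a greedy matcher stands in for a scalable alternative (the paper's actual profile-based algorithm is not specified in the abstract). The cost matrix is random and hypothetical.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
# Hypothetical profile-derived cost of assigning application i to VM j,
# already combining energy cost, CPU efficiency and completion time.
cost = rng.uniform(1, 10, size=(6, 6))

# Hungarian algorithm: optimal, but O(n^3), so it scales poorly.
rows, cols = linear_sum_assignment(cost)
print("optimal cost:", cost[rows, cols].sum())

# Greedy matching: each application takes the cheapest remaining VM --
# usually near-optimal here, and far cheaper to compute.
free = set(range(cost.shape[1]))
greedy = 0.0
for i in range(cost.shape[0]):
    j = min(free, key=lambda v: cost[i, v])
    greedy += cost[i, j]
    free.remove(j)
print("greedy cost: ", greedy)
```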
Abstract:
Circular shortest paths represent a powerful methodology for image segmentation. The circularity condition ensures that the contour found by the algorithm is closed, a natural requirement for regular objects. Several implementations have been proposed in the past that either promise closure with high probability or ensure closure strictly, at a mild cost in computational efficiency. Circularity can be viewed as a priori information that helps recover the correct object contour. Our observation is that circularity is only one among many possible constraints that can be imposed on shortest paths to guide them to a desirable solution. In this contribution, we illustrate this opportunity under a volume constraint, but the concept is generally applicable. We also describe several adornments to the circular shortest path algorithm that have proved useful in applications.
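A brute-force sketch of the closure constraint: on a polar cost image (rows are radii, columns are angles), run one shortest-path dynamic program per candidate start row and require the path to end where it started. Real implementations are substantially faster; this only illustrates how circularity constrains the path.

```python
import numpy as np

def circular_shortest_path(cost, max_jump=1):
    """Brute-force circular shortest path on a polar cost image.

    cost[r, t]: cost of the contour passing through radius r at angle t.
    Circularity: the path must end at the radius where it started, so we
    run one dynamic program per candidate start row and keep the best.
    """
    R, T = cost.shape
    best_total, best_path = np.inf, None
    for start in range(R):
        dp = np.full((R, T), np.inf)
        back = np.zeros((R, T), dtype=int)
        dp[start, 0] = cost[start, 0]
        for t in range(1, T):
            for r in range(R):
                lo, hi = max(0, r - max_jump), min(R, r + max_jump + 1)
                prev = np.argmin(dp[lo:hi, t - 1]) + lo
                dp[r, t] = dp[prev, t - 1] + cost[r, t]
                back[r, t] = prev
        if dp[start, T - 1] < best_total:  # closure: end where we started
            best_total = dp[start, T - 1]
            path = [start]
            for t in range(T - 1, 0, -1):
                path.append(back[path[-1], t])
            best_path = path[::-1]
    return best_total, best_path
```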
Abstract:
Operations management (OM) is concerned with the production of goods and services, ensuring that business operations use resources efficiently and meet customer requirements effectively. It deals with the design and management of products, processes, services and supply chains, and considers the acquisition, development, and effective and efficient utilization of resources. Unlike other engineering subjects, the content of these units can be very broad. It is therefore necessary to cover the content most relevant to contemporary industry, and to understand which engineering management skills are critical for engineers working in contemporary organisations. Most operations management books cover traditional OM techniques. For example, 'inventory management' is an important topic, and all OM books deal with effective methods of inventory management; yet the newer trend in OM is Just-in-Time (JIT) delivery, i.e. minimization of inventory. It is therefore important to decide whether to emphasise keeping inventory (as most books suggest) or minimizing it. Similarly, for OM decisions such as forecasting, optimization and linear programming, most organisations nowadays use software, so we must determine whether such software should be introduced in tutorial/lab classes and, if so, which. It is established in the teaching and learning literature that there must be strong alignment between unit objectives, assessment and learning activities to engage students in learning, and that engaging students is vital for learning. However, engineering units (more specifically, operations management) differ from other majors: alignment between objectives, assessment and learning activities alone cannot guarantee student engagement. Unit content must be practically oriented, and the skills developed should be those demanded by industry. The present active-learning research, using a multi-method research approach, redesigned the operations management content based on the latest developments in the engineering management area and the needs of Australian industries. The redesigned unit has significantly improved student engagement and learning. It was found that students engage in learning when they find the content helps develop skills needed in their professional lives.
Abstract:
Traffic safety on rural highways is a constant source of concern in many countries. Nowadays, transportation professionals widely use Intelligent Transportation Systems (ITS) to address safety issues; however, compared to metropolitan applications, rural highway (non-urban) ITS applications are still not well defined. This paper provides a comprehensive review of existing ITS safety solutions for rural highways. The research focuses mainly on infrastructure-based control and surveillance ITS technologies, such as Crash Prevention and Safety and Road Weather Management, that are directly related to reducing the frequency and severity of accidents. The main outcome of this research is the development of an 'ITS control and surveillance device locating model' that achieves the maximum safety benefit for rural highways. Using ITS cost and benefit databases, an integer linear programming method is employed as the optimization technique to choose the most suitable set of ITS devices. Finally, computational analysis is performed on an existing highway in Iran to validate the effectiveness of the proposed locating model.
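A toy version of such a device-locating ILP, written with PuLP, might look as follows. The sites, device costs, benefits and budget are all hypothetical; the real model would draw these from the ITS cost and benefit databases mentioned above.

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary, value

# Hypothetical candidate sites and ITS device options: (cost, safety benefit).
sites = ["km_12", "km_27", "km_41"]
devices = {"speed_camera": (30, 8.0), "weather_station": (45, 11.5),
           "warning_sign": (10, 3.0)}
budget = 80

prob = LpProblem("its_locating", LpMaximize)
x = {(s, d): LpVariable(f"x_{s}_{d}", cat=LpBinary)
     for s in sites for d in devices}

# Maximise the total expected safety benefit of the installed devices.
prob += lpSum(devices[d][1] * x[s, d] for s in sites for d in devices)
# Stay within the installation budget.
prob += lpSum(devices[d][0] * x[s, d] for s in sites for d in devices) <= budget
for s in sites:
    prob += lpSum(x[s, d] for d in devices) <= 1  # at most one device per site

prob.solve()
print([(s, d) for (s, d) in x if value(x[s, d]) > 0.5])
```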
Abstract:
Vehicular safety applications, such as cooperative collision warning systems, rely on beaconing to provide the situational awareness needed to predict, and therefore avoid, possible collisions. Beaconing is the continual exchange of vehicle motion-state information, such as position, speed, and heading, which enables each vehicle to track its neighboring vehicles in real time. This work presents a context-aware adaptive beaconing scheme that dynamically adapts the beaconing repetition rate based on an estimated channel load and the danger severity of the interactions among vehicles. The safety, efficiency, and scalability of the new scheme are evaluated by simulating vehicle collisions caused by inattentive drivers under various road traffic densities. Simulation results show that the new scheme is more efficient and scalable, and improves safety more than existing non-adaptive and adaptive-rate schemes.
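The abstract does not give the adaptation law, but a plausible minimal sketch of context-aware rate control has this shape: raise the beacon rate with danger severity and back off as the estimated channel load approaches congestion. All thresholds below are hypothetical.

```python
def beacon_rate(danger, channel_load, r_min=1.0, r_max=10.0, load_target=0.6):
    """Context-aware beacon-rate adaptation (illustrative only).

    danger       -- estimated collision-risk severity in [0, 1]
    channel_load -- estimated channel busy ratio in [0, 1]
    Returns a beacon repetition rate in Hz: the rate rises with danger and
    is scaled back when the channel approaches congestion.
    """
    rate = r_min + danger * (r_max - r_min)
    if channel_load > load_target:  # scale back to protect the channel
        rate *= max(0.0, 1.0 - (channel_load - load_target) / (1.0 - load_target))
    return max(r_min, min(r_max, rate))

# A vehicle in a risky interaction on a busy channel:
print(beacon_rate(danger=0.9, channel_load=0.75))
```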
Abstract:
Cloud computing is an emerging computing paradigm in which IT resources are provided over the Internet as a service to users. One such service offered through the Cloud is Software as a Service (SaaS). SaaS can be delivered in a composite form, consisting of a set of application and data components that work together to deliver higher-level functional software. SaaS is receiving substantial attention today from both software providers and users, and analyst firms predict positive future markets for it. This raises new challenges for providers managing SaaS, especially in large-scale data centres such as the Cloud. One of these challenges is managing Cloud resources for SaaS in a way that maintains SaaS performance while optimising resource use. Extensive research on the resource optimisation of Cloud services has not yet addressed the challenges of managing resources for composite SaaS. This research addresses this gap by focusing on three new problems of composite SaaS: placement, clustering and scalability. The overall aim is to develop efficient and scalable mechanisms that facilitate the delivery of high-performance composite SaaS for users while optimising the resources used. All three problems are highly constrained, large-scale and complex combinatorial optimisation problems; therefore, evolutionary algorithms are adopted as the main technique for solving them. The first research problem concerns how a composite SaaS is placed onto Cloud servers to optimise its performance while satisfying the SaaS resource and response time constraints. Existing research on this problem often ignores the dependencies between components and considers the placement of only a homogeneous type of component. A precise formulation of the composite SaaS placement problem is presented. A classical genetic algorithm and two versions of cooperative co-evolutionary algorithms are designed to manage the placement of heterogeneous types of SaaS components together with their dependencies, requirements and constraints. Experimental results demonstrate the efficiency and scalability of these new algorithms. In the second problem, SaaS components are assumed to be already running on Cloud virtual machines (VMs); however, because the Cloud environment changes, the current placement may need to be modified. Existing techniques focus mostly on the infrastructure level rather than the application level. This research addresses the problem at the application level by clustering suitable components onto VMs to optimise the resources used and maintain SaaS performance. Two versions of grouping genetic algorithms (GGAs) are designed to cater for the structural grouping of a composite SaaS: the first GGA uses a repair-based method, while the second uses a penalty-based method to handle the problem constraints. The experimental results confirm that the GGAs always produce a better reconfiguration placement plan than a common heuristic for clustering problems. The third research problem deals with the replication or deletion of SaaS instances to cope with the SaaS workload. Determining a scaling plan that minimises the resources used while maintaining SaaS performance is a critical task; additionally, the problem involves constraints and interdependencies between components, making solutions even more difficult to find.
A hybrid genetic algorithm (HGA) was developed to solve this problem by exploring the problem's search space through its genetic operators and fitness function to determine the SaaS scaling plan. The HGA also uses the problem's domain knowledge to ensure that solutions meet the constraints and achieve the objectives. The experimental results demonstrate that the HGA consistently outperforms a heuristic algorithm, achieving a low-cost scaling and placement plan. This research has identified three significant new problems for composite SaaS in the Cloud, and has developed various types of evolutionary algorithms to address them, contributing to the evolutionary computation field. The algorithms provide efficient resource management of composite SaaS in the Cloud, resulting in a low total cost of ownership for users while guaranteeing SaaS performance.
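As a flavour of the penalty-based evolutionary approach, the toy sketch below evolves an assignment of components to VMs, penalising capacity violations in the fitness function. The demands, capacities and GA parameters are hypothetical and much simpler than the thesis's GGA/HGA designs.

```python
import random

random.seed(42)

# Hypothetical instance: CPU demand per SaaS component, capacity per VM.
demand = [2, 3, 1, 4, 2, 3]
capacity = [6, 6, 6]
n_comp, n_vm = len(demand), len(capacity)

def fitness(chromo):
    """Penalty-based fitness: VMs used, plus a penalty per unit of overload."""
    load = [0] * n_vm
    for comp, vm in enumerate(chromo):
        load[vm] += demand[comp]
    used = sum(1 for l in load if l > 0)
    overload = sum(max(0, l - capacity[vm]) for vm, l in enumerate(load))
    return used + 10 * overload  # lower is better

def evolve(pop_size=30, generations=100):
    pop = [[random.randrange(n_vm) for _ in range(n_comp)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_comp)     # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.3:             # mutation: move one component
                child[random.randrange(n_comp)] = random.randrange(n_vm)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```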