18 results for minimization algorithms
in Greenwich Academic Literature Archive - UK
Abstract:
In this note, we consider the scheduling problem of minimizing the sum of the weighted completion times on a single machine with one non-availability interval, under the non-resumable scenario. Together with a recent 2-approximation algorithm designed by Kacem [I. Kacem, Approximation algorithm for the weighted flow-time minimization on a single machine with a fixed non-availability interval, Computers & Industrial Engineering 54 (2008) 401–410], this paper is the first successful attempt to develop a constant-ratio approximation algorithm for this problem. We present two approaches to designing such an algorithm. Our best algorithm guarantees a worst-case performance ratio of 2+ε. © 2008 Elsevier B.V. All rights reserved.
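The natural starting point for this objective is Smith's weighted-shortest-processing-time (WSPT) rule, which is optimal for the same problem without the non-availability interval; the paper's 2+ε machinery is not reproduced here, so the following is only a minimal sketch of that classical baseline:

```python
# Smith's WSPT rule: sequencing jobs by non-decreasing p_j / w_j minimizes
# the weighted sum of completion times on one machine *without* any
# non-availability interval. It is only the baseline that constant-ratio
# algorithms for the interrupted variant build on, not the paper's method.

def wspt_schedule(jobs):
    """jobs: list of (processing_time, weight) tuples."""
    order = sorted(jobs, key=lambda j: j[0] / j[1])
    t, total = 0, 0
    for p, w in order:
        t += p                # completion time of this job
        total += w * t        # its weighted completion time
    return order, total

# Example: three jobs given as (p, w)
print(wspt_schedule([(3, 1), (2, 4), (5, 2)]))
```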
Abstract:
In many practical situations, batching of similar jobs to avoid setups is performed while constructing a schedule. This paper addresses the problem of non-preemptively scheduling independent jobs in a two-machine flow shop with the objective of minimizing the makespan. Jobs are grouped into batches. A sequence-independent batch setup time on each machine is required before the first job is processed, and when a machine switches from processing a job in some batch to a job of another batch. Besides its practical interest, this problem is a direct generalization of the classical two-machine flow shop problem with no grouping of jobs, which can be solved optimally by Johnson's well-known algorithm. The problem under investigation is known to be NP-hard. We propose two O(n log n) time heuristic algorithms. The first heuristic, which creates a schedule with minimum total setup time by forcing all jobs in the same batch to be sequenced in adjacent positions, has a worst-case performance ratio of 3/2. By allowing each batch to be split into at most two sub-batches, a second heuristic is developed which has an improved worst-case performance ratio of 4/3. © 1998 The Mathematical Programming Society, Inc. Published by Elsevier Science B.V.
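Johnson's rule, which the abstract cites as solving the no-batching case optimally, is simple to state: jobs whose first-machine time does not exceed their second-machine time go first in non-decreasing order of the first time, and the remaining jobs go last in non-increasing order of the second time. A minimal sketch:

```python
# Johnson's rule for the classical two-machine flow shop F2 || Cmax:
# jobs with p1 <= p2 are sequenced first in non-decreasing p1,
# the remaining jobs last in non-increasing p2.

def johnson(jobs):
    """jobs: list of (p1, p2) pairs; returns an optimal sequence."""
    first = sorted((j for j in jobs if j[0] <= j[1]), key=lambda j: j[0])
    last = sorted((j for j in jobs if j[0] > j[1]), key=lambda j: -j[1])
    return first + last

def makespan(seq):
    c1 = c2 = 0
    for p1, p2 in seq:
        c1 += p1                 # machine 1 finishes this job at c1
        c2 = max(c2, c1) + p2    # machine 2 starts once both are ready
    return c2

seq = johnson([(3, 6), (5, 2), (1, 2), (6, 6)])
print(seq, makespan(seq))
```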
Abstract:
The performance of loadsharing algorithms for heterogeneous distributed systems is investigated by simulation. The systems considered are networks of workstations (nodes) which differ in processing power. Two parameters are proposed for characterising system heterogeneity, namely the variance and skew of the distribution of processing power among the network nodes. A variety of networks are investigated, with the same number of nodes and total processing power, but with the processing power distributed differently among the nodes. Two loadsharing algorithms are evaluated, at overall system loadings of 50% and 90%, using job response time as the performance metric. Comparison is made with the ideal situation of ‘perfect sharing’, where it is assumed that the communication delays are zero and that complete knowledge is available about job lengths and the loading at the different nodes, so that an arriving job can be sent to the node where it will be completed in the shortest time. The algorithms studied are based on those already in use for homogeneous networks, but were adapted to take account of system heterogeneity. Both algorithms take into account the differences in the processing powers of the nodes in their location policies, but differ in the extent to which they ‘discriminate’ against the slower nodes. It is seen that the relative performance of the two is strongly influenced by the system utilisation and the distribution of processing power among the nodes.
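The 'perfect sharing' baseline described above is easy to state: with zero communication delay and full knowledge of job lengths and node backlogs, an arriving job is sent to the node that would complete it earliest. A minimal sketch of that idealised location policy (the speed/backlog bookkeeping is an illustrative assumption of the sketch):

```python
# Idealised 'perfect sharing' placement: an arriving job goes to the node
# minimising its completion time, given full knowledge of job lengths and
# node backlogs. Node 'speed' and 'backlog' fields are illustrative only.

def place_job(job_length, nodes):
    """nodes: list of dicts {'speed': float, 'backlog': float (work units)}.
    Returns the index of the node minimising the job's completion time."""
    best, best_t = None, float('inf')
    for i, node in enumerate(nodes):
        t = (node['backlog'] + job_length) / node['speed']
        if t < best_t:
            best, best_t = i, t
    nodes[best]['backlog'] += job_length
    return best

# A fast-but-loaded node versus a slow-but-idle one: the idle node wins.
nodes = [{'speed': 2.0, 'backlog': 4.0}, {'speed': 1.0, 'backlog': 0.0}]
print(place_job(3.0, nodes))
```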
Abstract:
Three parallel optimisation algorithms, for use in the context of multilevel graph partitioning of unstructured meshes, are described. The first, interface optimisation, reduces the computation to a set of independent optimisation problems in interface regions. The next, alternating optimisation, is a restriction of this technique in which mesh entities are only allowed to migrate between subdomains in one direction. The third treats the gain as a potential field and uses the concept of relative gain for selecting appropriate vertices to migrate. The three methods are compared and shown to produce very high quality global partitions very rapidly. The results are also compared with another partitioning tool and are shown to be of higher quality, although taking longer to compute.
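The abstract does not define relative gain precisely, so the sketch below shows the standard edge-cut gain together with one plausible reading of relative gain (a vertex's gain offset by the best opposing gain among its off-partition neighbours, a common way to damp oscillation when vertices migrate in parallel); the paper's exact definition may differ:

```python
# Standard edge-cut gain, plus one plausible 'relative gain' variant:
# gain(v -> q)    = cut edges saved by moving v to subdomain q
# relgain(v -> q) = gain(v -> q) minus the best opposing gain among v's
#                   neighbours already in q (hypothetical damping rule;
#                   the paper's definition is not given in the abstract).

def gain(v, q, part, adj):
    return sum(1 for u in adj[v] if part[u] == q) - \
           sum(1 for u in adj[v] if part[u] == part[v])

def relative_gain(v, q, part, adj):
    opposing = [gain(u, part[v], part, adj) for u in adj[v] if part[u] == q]
    return gain(v, q, part, adj) - max(opposing, default=0)

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
part = {0: 0, 1: 0, 2: 1, 3: 1}
print(gain(2, 0, part, adj), relative_gain(2, 0, part, adj))
```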
Abstract:
We consider a knapsack problem to minimize a symmetric quadratic function. We demonstrate that this symmetric quadratic knapsack problem is relevant to two problems of single machine scheduling: the problem of minimizing the weighted sum of the completion times with a single machine non-availability interval under the non-resumable scenario; and the problem of minimizing the total weighted earliness and tardiness with respect to a common small due date. We develop a polynomial-time approximation algorithm that delivers a constant worst-case performance ratio for a special form of the symmetric quadratic knapsack problem. We adapt that algorithm to our scheduling problems and achieve a better performance ratio. For the problems under consideration, no fixed-ratio approximation algorithms were previously known.
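In generic form, a symmetric quadratic knapsack problem reads as below; the special structure the paper imposes on the cost matrix, which is what makes a constant ratio achievable, is not reproduced here:

```latex
% Generic statement; the special structure of Q assumed in the paper
% (which enables the constant-ratio guarantee) is not reproduced here.
\min_{x \in \{0,1\}^n} \; x^{\mathsf T} Q x + c^{\mathsf T} x
\quad \text{subject to} \quad \sum_{j=1}^{n} a_j x_j \le b,
\qquad Q = Q^{\mathsf T}, \; q_{ij} \ge 0 .
```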
Abstract:
In this paper, we first demonstrate that the classical Purcell vector method, when combined with row pivoting, yields a consistently small growth factor in comparison with the well-known Gauss elimination method, the Gauss–Jordan method and the Gauss–Huard method with partial pivoting. We then present six parallel algorithms of the Purcell method that may be used for the direct solution of linear systems. The algorithms differ in their pivoting and load-balancing strategies. We recommend algorithms V and VI for their reliability, and algorithms III and IV for their good load balance if local pivoting is acceptable. Some numerical results are presented.
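The growth factor being compared is the standard stability measure for elimination methods: the largest entry arising during elimination relative to the largest entry of the original matrix. A minimal sketch of how it is commonly estimated for Gaussian elimination with partial pivoting (Purcell's method itself is not implemented here):

```python
# Estimating the growth factor of Gaussian elimination with partial
# pivoting: rho ~= max|U| / max|A|. A small rho indicates numerical
# stability; the paper reports that Purcell's method with row pivoting
# keeps this quantity consistently small. (Purcell's method not coded.)
import numpy as np
from scipy.linalg import lu

def growth_factor(a):
    _, _, u = lu(a)                      # PA = LU with partial pivoting
    return np.abs(u).max() / np.abs(a).max()

rng = np.random.default_rng(0)
a = rng.standard_normal((200, 200))
print(growth_factor(a))
```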
Abstract:
The paper considers an on-line single machine scheduling problem where the goal is to minimize the makespan. The jobs are partitioned into families and a setup is performed every time the machine starts processing a batch of jobs of the same family. The scheduler is aware of the number of families and knows the setup time of each family, although information about a job only becomes available when that job is released. We give a lower bound on the competitive ratio of any on-line algorithm. Moreover, for the case of two families, we provide an algorithm with a competitive ratio that achieves this lower bound. As the number of families increases, the lower bound approaches 2, and we give a simple algorithm with a competitive ratio of 2.
Abstract:
The intrinsically independent features of the optimal codebook-cube search in fractal video compression systems are examined and exploited. The design of a suitable parallel algorithm reflecting this concept is presented. The Message Passing Interface (MPI) is chosen as the communication tool for the implementation of the parallel algorithm on distributed-memory parallel computers. Experimental results show that the parallel algorithm is able to reduce the compression time and achieve a high speed-up without changing the compression ratio or the quality of the decompressed image. A scalability test was also performed, and the results show that this parallel algorithm is scalable.
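Since each codebook cube can be scored independently, the natural MPI decomposition is to split the cube index range across ranks and reduce to the global best match. A minimal mpi4py sketch along those lines (the match_error cost function is a placeholder, not the paper's implementation):

```python
# Embarrassingly parallel codebook-cube search, sketched with mpi4py.
# Each rank scores its slice of the codebook; a MINLOC reduction picks
# the global best. match_error() stands in for the real fractal
# block-matching cost used in the paper.
from mpi4py import MPI

def match_error(cube_index):          # placeholder cost function
    return abs(cube_index - 1234) + 0.5

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N_CUBES = 10000                        # illustrative codebook size
local = range(rank, N_CUBES, size)     # cyclic distribution of cubes
local_best = min((match_error(i), i) for i in local)

# (error, cube_index) pairs reduce with MINLOC to the global optimum
best_err, best_cube = comm.allreduce(local_best, op=MPI.MINLOC)
if rank == 0:
    print(f"best cube {best_cube} with error {best_err}")
```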
Abstract:
A parallel genetic algorithm (PGA) is proposed for the solution of two-dimensional inverse heat conduction problems involving unknown thermophysical material properties. Experimental results show that the proposed PGA is a feasible and effective optimization tool for inverse heat conduction problems.
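A genetic algorithm for such an inverse problem evolves candidate thermophysical parameters and scores each by the mismatch between simulated and measured temperatures. A minimal serial skeleton, with the forward conduction solver stubbed out and no attempt to reproduce the paper's parallel decomposition:

```python
# GA skeleton for inverse parameter estimation: individuals are candidate
# material parameters, fitness is the misfit between the forward model and
# measurements. forward_model() stands in for the 2-D conduction solver.
import random

def forward_model(params):            # placeholder for the PDE solver
    k, c = params
    return [k * t + c for t in range(5)]

MEASURED = [0.8, 2.9, 5.1, 7.0, 9.2]  # illustrative measurements

def misfit(params):
    return sum((s - m) ** 2 for s, m in zip(forward_model(params), MEASURED))

def evolve(pop_size=40, generations=100):
    pop = [(random.uniform(0, 5), random.uniform(-2, 2)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=misfit)
        parents = pop[:pop_size // 2]             # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = tuple((x + y) / 2 + random.gauss(0, 0.1)
                          for x, y in zip(a, b))  # blend crossover + mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=misfit)

print(evolve())
```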
Abstract:
We consider a variety of preemptive scheduling problems with controllable processing times on a single machine and on identical/uniform parallel machines, where the objective is to minimize the total compression cost. In this paper, we propose fast divide-and-conquer algorithms for these scheduling problems. Our approach is based on the observation that each scheduling problem we discuss can be formulated as a polymatroid optimization problem. We develop a novel divide-and-conquer technique for the polymatroid optimization problem and then apply it to each scheduling problem. We show that each scheduling problem can be solved in $O(T_{\mathrm{feas}}(n) \times \log n)$ time by using our divide-and-conquer technique, where $n$ is the number of jobs and $T_{\mathrm{feas}}(n)$ denotes the time complexity of the corresponding feasible scheduling problem with $n$ jobs. This approach yields faster algorithms for most of the scheduling problems discussed in this paper.
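The log factor in the stated bound is the usual divide-and-conquer accounting: halving the job set gives $\log n$ levels, and if $T_{\mathrm{feas}}$ is superadditive the total work on each level is dominated by one feasibility computation on all $n$ jobs:

```latex
% Standard divide-and-conquer accounting, assuming T_feas is superadditive,
% i.e. 2 T_feas(n/2) <= T_feas(n), so each of the log n levels costs
% O(T_feas(n)) in total.
T(n) = 2\,T(n/2) + O\!\left(T_{\mathrm{feas}}(n)\right)
\;\Longrightarrow\;
T(n) = O\!\left(T_{\mathrm{feas}}(n)\,\log n\right).
```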
Abstract:
In this paper, we critically examine a special class of graph matching algorithms that follow the approach of node-similarity measurement. A high-level algorithmic framework, the node-similarity graph matching (NSGM) framework, is proposed, which subsumes many existing graph matching algorithms, including the eigendecomposition method of Umeyama, the polynomial-transformation method of Almohamad, the hubs-and-authorities method of Kleinberg, and the Kronecker product successive projection methods of van Wyk, among others. In addition, improved algorithms can be developed from the NSGM framework using the corresponding results in graph theory. As an observation, we point out that, in general, any algorithm subsumed by the NSGM framework fails to work well for graphs with non-trivial automorphism structure.
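The eigendecomposition method of Umeyama is the archetypal instance of this family: a node-similarity matrix is built from the absolute eigenvector matrices of the two (undirected) graphs, and the correspondence is read off with an assignment solver. A minimal sketch, using edge weights to avoid the symmetric-graph failure mode the abstract warns about:

```python
# Umeyama-style spectral matching: similarity S = |U_A| |U_B|^T from the
# eigenvectors of the two adjacency matrices, then an optimal assignment
# on S. Reliable only when the graphs have little symmetry, echoing the
# abstract's caveat about non-trivial automorphism structure.
import numpy as np
from scipy.optimize import linear_sum_assignment

def spectral_match(a, b):
    """a, b: symmetric adjacency matrices of equal size."""
    _, ua = np.linalg.eigh(a)
    _, ub = np.linalg.eigh(b)
    s = np.abs(ua) @ np.abs(ub).T          # node-similarity matrix
    rows, cols = linear_sum_assignment(-s) # maximise total similarity
    return dict(zip(rows.tolist(), cols.tolist()))

# Weighted star (distinct weights -> no non-trivial automorphisms),
# relabelled by a known permutation; the match should recover it.
a = np.array([[0, 1, 2], [1, 0, 0], [2, 0, 0]], dtype=float)
perm = np.eye(3)[[2, 0, 1]]
b = perm @ a @ perm.T
print(spectral_match(a, b))
```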
Abstract:
We consider a problem of scheduling jobs on m parallel machines. The machines are dedicated, i.e., for each job the processing machine is known in advance. We mainly concentrate on the model in which at any time there is one unit of an additional resource. Any job may be assigned the resource, and this reduces its processing time. A job that is given the resource uses it at each time of its processing. No two jobs are allowed to use the resource simultaneously. The objective is to minimize the makespan. We prove that the two-machine problem is NP-hard in the ordinary sense, describe a pseudopolynomial dynamic programming algorithm and convert it into an FPTAS. For the problem with an arbitrary number of machines we present an algorithm with a worst-case ratio close to 3/2 (close to 3 if a job can be given several units of the resource). For the problem with a fixed number of machines we give a PTAS. Virtually all algorithms rely on a certain variant of the linear knapsack problem (maximization, minimization, multiple-choice, bicriteria). © 2008 Wiley Periodicals, Inc. Naval Research Logistics, 2008
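The common engine behind these algorithms is, as the abstract notes, a linear knapsack in one of several variants; the classical 0-1 maximization version is the prototype, sketched minimally below (the minimization, multiple-choice and bicriteria variants used in the paper extend it):

```python
# Classical 0-1 knapsack by dynamic programming over capacities, the
# prototype for the variants (min, multiple-choice, bicriteria) that the
# abstract says underpin virtually all of the paper's algorithms.

def knapsack(items, capacity):
    """items: list of (weight, value); returns max value within capacity."""
    best = [0] * (capacity + 1)
    for w, v in items:
        for c in range(capacity, w - 1, -1):   # descending: each item once
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

print(knapsack([(3, 4), (4, 5), (2, 3)], 6))   # -> 8, taking (4,5) and (2,3)
```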
Abstract:
Electrodeposition is a widely used technique for the fabrication of high aspect ratio microstructures. In recent years, much research in this area has aimed to understand the physics behind the filling of high aspect ratio vias and trenches on substrates, and in particular how they can be filled without the formation of voids in the deposited material. This paper reports on fundamental work towards the advancement of numerical algorithms that can predict the electrodeposition process in micron-scale features. Two different numerical approaches have been developed which capture the motion of the deposition interface, and 2-D simulations are presented for both methods under two deposition regimes: one where surface kinetics is governed by Ohm's law and one where it is governed by the Butler–Volmer equation. The last part of the paper examines the modelling of acoustic forces and their subsequent impact, through convection, on the deposition profile.
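For reference, the Butler–Volmer relation mentioned above links the local current density at the deposition interface to the overpotential:

```latex
% Butler-Volmer relation: current density j as a function of overpotential
% eta, with exchange current density j_0, anodic/cathodic transfer
% coefficients alpha_a, alpha_c, Faraday constant F, gas constant R and
% temperature T.
j = j_0 \left[ \exp\!\left( \frac{\alpha_a F \eta}{R T} \right)
             - \exp\!\left( -\frac{\alpha_c F \eta}{R T} \right) \right]
```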