47 results for Tamer, Chris
in Greenwich Academic Literature Archive - UK
Abstract:
This paper considers the problem of sequencing n jobs in a three-machine flow shop with the objective of minimizing the makespan, which is the completion time of the last job. An O(n log n) time heuristic that is based on Johnson's algorithm is presented. It is shown to generate a schedule with length at most 5/3 times that of an optimal schedule, thereby reducing the previous best available worst-case performance ratio of 2. An application to the general flow shop is also discussed.
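As background, Johnson's rule solves the two-machine flow shop optimally in O(n log n) time, and a classical way to reuse it on three machines is to apply it to a surrogate two-machine instance with processing times (a + b, b + c). The sketch below, with illustrative names only, shows that construction; it is not the paper's 5/3-approximation heuristic, whose details and worst-case analysis are in the paper itself.

```python
# A minimal sketch, assuming illustrative names: Johnson's rule for the
# two-machine flow shop, plus the classic surrogate-machine way of
# applying it to three machines. This shows the building block the
# paper's heuristic is based on; it is NOT the paper's 5/3-approximation
# algorithm, whose construction and analysis are in the paper itself.

def johnson_order(jobs):
    """jobs: list of (a, b) processing times on machines 1 and 2.
    Returns a job order minimising the two-machine makespan."""
    first = sorted((i for i, (a, b) in enumerate(jobs) if a < b),
                   key=lambda i: jobs[i][0])                 # increasing a
    last = sorted((i for i, (a, b) in enumerate(jobs) if a >= b),
                  key=lambda i: jobs[i][1], reverse=True)    # decreasing b
    return first + last

def three_machine_heuristic(jobs):
    """jobs: list of (a, b, c) times on machines 1-3. Applies Johnson's
    rule to the surrogate two-machine instance (a + b, b + c)."""
    return johnson_order([(a + b, b + c) for (a, b, c) in jobs])

def makespan3(jobs, order):
    """Completion time of the last job in a permutation schedule."""
    t1 = t2 = t3 = 0
    for i in order:
        a, b, c = jobs[i]
        t1 += a
        t2 = max(t2, t1) + b
        t3 = max(t3, t2) + c
    return t3

jobs = [(3, 2, 4), (1, 5, 2), (4, 1, 3)]
order = three_machine_heuristic(jobs)      # both sorts are O(n log n)
print(order, makespan3(jobs, order))       # -> [0, 1, 2] 15
```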
Abstract:
In many practical situations, batching of similar jobs to avoid setups is performed while constructing a schedule. This paper addresses the problem of non-preemptively scheduling independent jobs in a two-machine flow shop with the objective of minimizing the makespan. Jobs are grouped into batches. A sequence-independent batch setup time on each machine is required before the first job is processed, and when a machine switches from processing a job in some batch to a job of another batch. Besides its practical interest, this problem is a direct generalization of the classical two-machine flow shop problem with no grouping of jobs, which can be solved optimally by Johnson's well-known algorithm. The problem under investigation is known to be NP-hard. We propose two O(n log n) time heuristic algorithms. The first heuristic, which creates a schedule with minimum total setup time by forcing all jobs in the same batch to be sequenced in adjacent positions, has a worst-case performance ratio of 3/2. By allowing each batch to be split into at most two sub-batches, a second heuristic is developed which has an improved worst-case performance ratio of 4/3. © 1998 The Mathematical Programming Society, Inc. Published by Elsevier Science B.V.
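To make the model concrete, here is a minimal sketch, under assumed names and data, of evaluating the makespan of a batched two-machine schedule with sequence-independent setup times. It assumes anticipatory setups (machine 2 may set up for a batch while machine 1 is still busy); whether setups may overlap in this way is a modelling detail fixed in the paper, as are the 3/2 and 4/3 heuristics themselves, including the batch-splitting rule.

```python
# A sketch of the scheduling model only (not the paper's heuristics).
# Assumption: setups are anticipatory, i.e. machine 2 may set up for a
# new batch while machine 1 is still processing.

def makespan_with_setups(sequence, batch_of, p1, p2, s1, s2):
    """sequence: job indices in processing order; batch_of[j]: batch of
    job j; p1/p2: processing times; s1/s2: per-machine setup times."""
    t1 = t2 = 0.0
    prev = None
    for j in sequence:
        if batch_of[j] != prev:   # first job, or switch to another batch
            t1 += s1
            t2 += s2              # setup as soon as machine 2 is free
            prev = batch_of[j]
        t1 += p1[j]                       # machine 1 finishes job j
        t2 = max(t2, t1) + p2[j]          # machine 2 cannot start earlier
    return t2

# Two batches {0, 1} and {2}, kept in adjacent positions as in the
# first (3/2-approximate) heuristic described above.
p1, p2 = [2.0, 3.0, 1.0], [1.0, 2.0, 4.0]
batch_of = [0, 0, 1]
print(makespan_with_setups([0, 1, 2], batch_of, p1, p2, s1=1.0, s2=1.0))
```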
Abstract:
Solder materials are used to provide a connection between electronic components and printed circuit boards (PCBs) using either the reflow or wave soldering process. As a board assembly passes through a reflow furnace the solder (initially in the form of solder paste) melts, reflows, then solidifies, and finally deforms between the chip and board. A number of defects may occur during this process such as flux entrapment, void formation, and cracking of the joint, chip or board. These defects are a serious concern to industry, especially with trends towards increasing component miniaturisation and smaller pitch sizes. This paper presents a modelling methodology for predicting solder joint shape, solidification, and deformation (stress) during the assembly process.
Abstract:
Realizing scalable performance on high performance computing systems is not straightforward for single-phenomenon codes (such as computational fluid dynamics [CFD]). This task is magnified considerably when the target software involves the interactions of a range of phenomena that have distinctive solution procedures involving different discretization methods. The problems of addressing the key issues of retaining data integrity and the ordering of the calculation procedures are significant. A strategy for parallelizing this multiphysics family of codes is described for software exploiting finite-volume discretization methods on unstructured meshes using iterative solution procedures. A mesh partitioning-based SPMD approach is used. However, since different variables use distinct discretization schemes, this means that distinct partitions are required; techniques for addressing this issue are described using the mesh-partitioning tool, JOSTLE. In this contribution, the strategy is tested for a variety of test cases under a wide range of conditions (e.g., problem size, number of processors, asynchronous / synchronous communications, etc.) using a variety of strategies for mapping the mesh partition onto the processor topology.
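A toy illustration of the mesh-partitioning SPMD pattern described above, assuming mpi4py is available: each rank owns one partition of a 1D "mesh", performs its iterative update on owned cells only, and exchanges overlap (halo) values with neighbouring partitions each sweep. Real partitions would come from a tool such as JOSTLE, and the paper's codes couple several phenomena with distinct partitions; everything below is an illustrative assumption.

```python
# A toy SPMD sketch, assuming mpi4py. Run with e.g.:
#   mpirun -np 4 python spmd_sketch.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n = 100                          # cells owned by this rank
u = np.zeros(n + 2)              # plus one halo cell at each end
u[1:-1] = rank                   # arbitrary initial data
left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for sweep in range(50):
    # halo exchange with neighbouring partitions
    comm.Sendrecv(u[1:2], dest=left, recvbuf=u[-1:], source=right)
    comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
    # local iterative update on owned cells only (Jacobi smoothing)
    u[1:-1] = 0.5 * (u[:-2] + u[2:])
```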
Abstract:
The future of many companies will depend to a large extent on their ability to initiate techniques that bring schedules, performance, tests, support, production, life-cycle costs, reliability prediction and quality control into the earliest stages of the product creation process. Important questions for an engineer who is responsible for the quality of electronic parts such as printed circuit boards (PCBs) during design, production, assembly and after-sales support are: What is the impact of temperature? What is the impact of this temperature on the stress produced in the components? What is the electromagnetic compatibility (EMC) associated with such a design? At present, thermal, stress and EMC calculations are undertaken using different software tools that each require model build and meshing. This leads to a large investment in time, and hence cost, to undertake each of these simulations. This paper discusses the progression towards a fully integrated software environment, based on a common data model and user interface, having the capability to predict temperature, stress and EMC fields in a coupled manner. Such a modelling environment, used early within the design stage of an electronic product, will provide engineers with fast solutions to questions regarding thermal, stress and EMC issues. The paper concentrates on recent developments in creating such an integrated modelling environment, with preliminary results from the analyses conducted. Further research into the thermal and stress related aspects of the paper is being conducted under a nationally funded project, while their application in reliability prediction will be addressed in a new European project called PROFIT.
Abstract:
We present a dynamic distributed load balancing algorithm for parallel, adaptive Finite Element simulations in which we use preconditioned Conjugate Gradient solvers based on domain decomposition. The load balancing is designed to maintain good partition aspect ratio, and we show that cut size is not always the appropriate measure in load balancing. Furthermore, we attempt to answer the question of why the aspect ratio of partitions plays an important role for certain solvers. We define and rate different kinds of aspect ratio and present a new center-based partitioning method for calculating the initial distribution, which implicitly optimizes this measure. During the adaptive simulation, the load balancer calculates a balancing flow using different versions of the diffusion algorithm and a variant of breadth-first search. Elements to be migrated are chosen according to a cost function aiming at the optimization of subdomain shapes. Experimental results for Bramble's preconditioner and comparisons to state-of-the-art load balancers show the benefits of the construction.
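As a concrete illustration of one ingredient, the balancing flow of first-order diffusion can be sketched in a few lines: every processor repeatedly exchanges a fixed fraction of its load difference with each neighbour, and the accumulated per-edge transfers form the flow. This is a generic sketch with assumed names, not the paper's load balancer, which combines diffusion variants with a breadth-first-search variant and a shape-aware cost function for selecting the elements to migrate.

```python
# First-order diffusion on a processor graph: a generic sketch, not the
# paper's algorithm. alpha must be small enough for convergence
# (e.g. alpha <= 1 / (max_degree + 1)).

def diffusion_flow(edges, load, alpha=0.25, sweeps=500):
    """edges: list of processor-graph edges (i, j); load: dict of loads.
    Returns (final loads, flow); flow[(i, j)] > 0 means i sends work to j."""
    load = dict(load)
    flow = {e: 0.0 for e in edges}
    for _ in range(sweeps):
        # compute all transfers from the same snapshot (Jacobi-style)
        step = {(i, j): alpha * (load[i] - load[j]) for (i, j) in edges}
        for (i, j), f in step.items():
            load[i] -= f
            load[j] += f
            flow[(i, j)] += f
    return load, flow

edges = [(0, 1), (1, 2), (2, 3)]               # a path of 4 processors
loads = {0: 10.0, 1: 2.0, 2: 2.0, 3: 2.0}      # total 16, average 4
final, flow = diffusion_flow(edges, loads)
print(final)   # all loads close to 4.0
print(flow)    # ~6 units cross (0,1), ~4 cross (1,2), ~2 cross (2,3)
```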
Abstract:
A comprehensive simulation of solidification/melting processes requires the simultaneous representation of free surface fluid flow, heat transfer, phase change, non-linear solid mechanics and, possibly, electromagnetics together with their interactions in what is now referred to as "multi-physics" simulation. A 3D computational procedure and software tool, PHYSICA, embedding the above multi-physics models using finite volume methods on unstructured meshes (FV-UM) has been developed. Multi-physics simulations are extremely compute intensive and a strategy to parallelise such codes has, therefore, been developed. This strategy has been applied to PHYSICA and evaluated on a range of challenging multi-physics problems drawn from actual industrial cases.
Abstract:
We motivate, derive, and implement a multilevel approach to the travelling salesman problem. The resulting algorithm progressively coarsens the problem, initialises a tour, and then employs either the Lin-Kernighan (LK) or the Chained Lin-Kernighan (CLK) algorithm to refine the solution on each of the coarsened problems in reverse order. In experiments on a well-established test suite of 80 problem instances, we found multilevel configurations that either improved the tour quality by over 25% as compared to the standard CLK algorithm using the same amount of execution time, or achieved approximately the same tour quality over seven times more rapidly. Moreover, the multilevel variants appear to perform far better on the more clustered instances, with which the LK and CLK algorithms have the most difficulty.
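The multilevel skeleton can be sketched as follows, with plain 2-opt standing in for the Lin-Kernighan and Chained LK refinement the paper actually uses, and with a simplified coarsening that merges each city with its nearest unmatched neighbour into their midpoint (the paper's coarsening fixes tour edges rather than averaging positions). All names, parameters, and data are illustrative assumptions.

```python
# A skeletal sketch of the multilevel scheme; 2-opt is a stand-in for
# the LK / Chained LK refinement used in the paper.
import math, random

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def coarsen(cities):
    """Greedy nearest-neighbour matching (O(n^2), fine for a sketch).
    Returns the coarse cities and, for each, its list of children."""
    order = list(range(len(cities)))
    random.shuffle(order)
    used, coarse, children = set(), [], []
    for i in order:
        if i in used:
            continue
        used.add(i)
        rest = [j for j in order if j not in used]
        if rest:
            j = min(rest, key=lambda j: dist(cities[i], cities[j]))
            used.add(j)
            coarse.append(((cities[i][0] + cities[j][0]) / 2,
                           (cities[i][1] + cities[j][1]) / 2))
            children.append([i, j])
        else:
            coarse.append(cities[i])
            children.append([i])
    return coarse, children

def two_opt(cities, tour):
    """Very plain 2-opt refinement (the LK/CLK stand-in)."""
    improved = True
    while improved:
        improved = False
        for a in range(len(tour) - 1):
            for b in range(a + 2, len(tour) - (a == 0)):
                i, j = tour[a], tour[a + 1]
                k, l = tour[b], tour[(b + 1) % len(tour)]
                if (dist(cities[i], cities[k]) + dist(cities[j], cities[l])
                        < dist(cities[i], cities[j]) + dist(cities[k], cities[l])):
                    tour[a + 1:b + 1] = reversed(tour[a + 1:b + 1])
                    improved = True
    return tour

def multilevel_tsp(cities, coarsest=8):
    if len(cities) <= coarsest:                 # solve the coarsest level
        return two_opt(cities, list(range(len(cities))))
    coarse, children = coarsen(cities)
    coarse_tour = multilevel_tsp(coarse, coarsest)
    # extend the coarse tour: each coarse city expands to its children,
    # kept in adjacent positions, then the tour is refined again
    tour = [c for ci in coarse_tour for c in children[ci]]
    return two_opt(cities, tour)

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(60)]
tour = multilevel_tsp(pts)
```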
Abstract:
Preface [Special Issue containing a selection of papers presented at the International Symposium on Combinatorial Optimisation (CO2000), held at the University of Greenwich, London, 12-14 July 2000].
Abstract:
Reliability of electronic parts is a major concern for many manufacturers, since early failures in the field can cost an enormous amount to repair, in many cases far more than the original cost of the product. A great deal of effort is expended by manufacturers to determine the failure rates for a process or the fraction of parts that will fail in a period of time. It is widely recognized that the traditional approach to reliability prediction for electronic systems is not suitable for today's products. This approach, based on statistical methods only, does not address the physics governing the failure mechanisms in electronic systems. This paper discusses virtual prototyping technologies which can predict the physics taking place and relate this to appropriate failure mechanisms. Simulation results illustrate the effect of temperature on the assembly process of an electronic package and the lifetime of a flip-chip package.
Abstract:
Traditionally, before flip chips can be assembled, the dies have to be attached with solder bumps. This process involves the deposition of metal layers on the Al pads of the dies, known as the under-bump metallurgy (UBM). In an alternative process, however, copper (Cu) columns can be used to replace solder bumps and the UBM process may be omitted altogether. After the bumping process, the bumped dies can be assembled onto the printed circuit board (PCB) using either solder or conductive adhesives. In this work, the reliability issues of flip chips with Cu column bumped dies have been studied. The flip chip lifetime associated with solder fatigue failure has been modelled for a range of geometric parameters. The relative importance of these parameters is given, and solder volume has been identified as the most important design parameter for long-term reliability. Another important problem studied in this work is the dissolution of protection metals on the pad and Cu column during the reflow process. For small solder joints, the amount of Cu that dissolves into the molten solder after the protection layers have worn out may significantly affect solder joint properties.
Abstract:
FEA and CFD analysis is becoming ever more complex, with an emerging demand for simulation software technologies that can address ranges of problems involving combinations of interactions amongst varying physical phenomena over a variety of time and length scales. Computational modelling of such problems requires software technologies that enable the representation of these complex suites of 'physical' interactions. This functionality requires the structuring of simulation modules for specific physical phenomena so that the coupling can be effectively represented. These 'multi-physics' and 'multi-scale' computations are very compute intensive, and so the simulation software must operate effectively in parallel if it is to be used in this context. Of course, the objective of 'multi-physics' and 'multi-scale' simulation is the optimal design of engineered systems, so optimisation is an important feature of such classes of simulation. In this presentation, a multi-disciplinary approach to simulation-based optimisation is described, with some key examples of application to challenging engineering problems.
Abstract:
We describe a heuristic method for drawing graphs which uses a multilevel framework combined with a force-directed placement algorithm. The multilevel technique matches and coalesces pairs of adjacent vertices to define a new graph and is repeated recursively to create a hierarchy of increasingly coarse graphs, G0, G1, …, GL. The coarsest graph, GL, is then given an initial layout and the layout is refined and extended to all the graphs starting with the coarsest and ending with the original. At each successive change of level, l, the initial layout for Gl is taken from its coarser and smaller child graph, Gl+1, and refined using force-directed placement. In this way the multilevel framework both accelerates the drawing and appears to give it a more global quality. The algorithm can compute both two- and three-dimensional layouts, and we demonstrate it on examples ranging in size from 10 to 225,000 vertices. It is also very fast: it can compute a 2D layout of a sparse graph in around 12 seconds for a 10,000-vertex graph, rising to around 5-7 minutes for the largest graphs. This is an order of magnitude faster than recent implementations of force-directed placement algorithms.
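The refinement step at each level is a standard force-directed placement; the sketch below gives a minimal Fruchterman-Reingold-style pass with assumed names and constants. Coarsening by matching adjacent vertices and the prolongation that copies each vertex's initial position from its parent in Gl+1 are omitted, so this illustrates the single-level building block, not the full multilevel algorithm.

```python
# A minimal Fruchterman-Reingold-style refinement pass: assumed names
# and constants, the O(n^2) pairwise version (real codes approximate
# the repulsion term for large graphs).
import math, random

def force_directed(vertices, edges, pos, iters=100, k=0.1, step=0.02):
    """Repulsion between all vertex pairs, attraction along edges."""
    for _ in range(iters):
        disp = {v: [0.0, 0.0] for v in vertices}
        for i, u in enumerate(vertices):            # repulsion: all pairs
            for v in vertices[i + 1:]:
                dx, dy = pos[u][0] - pos[v][0], pos[u][1] - pos[v][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k * k / d                       # repulsive force k^2/d
                disp[u][0] += f * dx / d; disp[u][1] += f * dy / d
                disp[v][0] -= f * dx / d; disp[v][1] -= f * dy / d
        for u, v in edges:                          # attraction: each edge
            dx, dy = pos[u][0] - pos[v][0], pos[u][1] - pos[v][1]
            d = math.hypot(dx, dy) or 1e-9
            f = d * d / k                           # attractive force d^2/k
            disp[u][0] -= f * dx / d; disp[u][1] -= f * dy / d
            disp[v][0] += f * dx / d; disp[v][1] += f * dy / d
        for v in vertices:                          # apply bounded moves
            dx, dy = disp[v]
            d = math.hypot(dx, dy) or 1e-9
            m = min(d, step)
            pos[v] = (pos[v][0] + dx / d * m, pos[v][1] + dy / d * m)
    return pos

random.seed(0)
verts = list(range(6))
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]   # a 6-cycle
pos = {v: (random.random(), random.random()) for v in verts}
pos = force_directed(verts, edges, pos)    # relaxes towards a hexagon
```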