20 results for "optimising compiler"

in Greenwich Academic Literature Archive - UK


Relevance: 20.00%

Abstract:

Multilevel algorithms are a successful class of optimisation techniques for the mesh partitioning problem. They usually combine a graph contraction algorithm with a local optimisation method which refines the partition at each graph level. To date these algorithms have been used almost exclusively to minimise the cut-edge weight; however, it has been shown that for certain classes of solution algorithm, the convergence of the solver is strongly influenced by the subdomain aspect ratio. In this paper, therefore, we modify the multilevel algorithms to optimise a cost function based on aspect ratio. Several variants of the algorithms are tested and shown to provide excellent results.
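
The cost function itself is not reproduced in the abstract. As a hedged sketch (the constant c_d and the summation below are illustrative assumptions, not necessarily the paper's exact formulation), shape measures of this kind typically normalise subdomain surface against volume, so that a perfectly round subdomain scores 1:

\mathrm{AR}_p = \frac{|\partial\Omega_p|}{c_d\,|\Omega_p|^{(d-1)/d}}, \qquad \Gamma = \sum_{p=1}^{P}\mathrm{AR}_p

where |∂Ω_p| and |Ω_p| are the surface and volume of subdomain p, d is the spatial dimension, and c_d normalises the measure so that AR_p = 1 for a d-dimensional sphere; refinement then minimises Γ instead of the cut-edge weight.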

Relevance: 20.00%

Abstract:

Many Web applications walk the thin line between the need for dynamic data and the need to meet user performance expectations. In environments where funds are not available to constantly upgrade hardware in line with user demand, alternative approaches need to be considered. This paper introduces a ‘Data farming’ model whereby dynamic data, which is ‘grown’ in operational applications, is ‘harvested’ and ‘packaged’ for various consumer markets. Like any well-managed agricultural operation, crops are harvested according to historical and perceived demand, as inferred by a self-optimising process. This approach aims to make enhanced use of available resources through better utilisation of system downtime, thereby improving application performance and increasing the availability of key business data.
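
As an illustrative sketch only (the abstract does not detail the mechanism; the class name, policy and downtime budget below are assumptions, not the paper's actual design), a self-optimising harvest scheduler might rank datasets by observed demand and refresh the most-requested ones during system downtime:

import time

class HarvestScheduler:
    """Illustrative 'data farming' sketch: serve packaged copies of dynamic
    data at peak times and regrow ('harvest') the most-demanded datasets
    during system downtime."""

    def __init__(self, regenerate):
        self.regenerate = regenerate   # callable: dataset_id -> fresh packaged copy
        self.hits = {}                 # dataset_id -> request count (observed demand)
        self.packaged = {}             # dataset_id -> last harvested copy

    def serve(self, dataset_id):
        # Record demand; serve the packaged copy if one exists (may be stale).
        self.hits[dataset_id] = self.hits.get(dataset_id, 0) + 1
        return self.packaged.get(dataset_id)

    def harvest(self, budget_seconds):
        # Called during downtime: refresh the most-demanded datasets first,
        # stopping when the downtime budget is spent.
        deadline = time.monotonic() + budget_seconds
        for dataset_id in sorted(self.hits, key=self.hits.get, reverse=True):
            if time.monotonic() >= deadline:
                break
            self.packaged[dataset_id] = self.regenerate(dataset_id)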

Relevance: 20.00%

Abstract:

Nitrogen is now used in wave soldering machines to reduce the amount of dross formed on the solder bath surface. The paper details the use of computational fluid dynamics to help understand the flow profiles of nitrogen in a wave soldering machine and to predict the concentrations of nitrogen and oxygen around the solder bath.
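
The governing equations are not stated in the abstract; a CFD prediction of this kind typically solves, for each species i (nitrogen, oxygen), an advection-diffusion transport equation alongside the computed flow field u:

\frac{\partial c_i}{\partial t} + \nabla\cdot(\mathbf{u}\,c_i) = \nabla\cdot(D_i\,\nabla c_i)

where c_i is the species concentration and D_i its diffusivity; the predicted oxygen concentration near the bath surface is the quantity of interest for dross formation.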

Relevance: 20.00%

Abstract:

Evaluating ship layout for human factors (HF) issues using simulation software such as maritimeEXODUS can be a long and complex process. The analysis requires the identification of relevant evaluation scenarios, encompassing evacuation and normal operations; the development of appropriate measures which can be used to gauge the performance of crew and vessel; and, finally, the interpretation of considerable simulation data. In this paper we present a systematic and transparent methodology for assessing the HF performance of ship design which is both discriminating and diagnostic.
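
The abstract does not define the performance measures. Purely as a hedged sketch (scenario names, measures and weights below are invented for illustration), a score that is both discriminating and diagnostic can be built as a weighted sum over scenarios, so a poor overall figure can be traced back to the scenario and measure responsible:

# Hypothetical HF score for a ship design: a weighted sum over evaluation
# scenarios. Scenario names, measures and weights are illustrative only.
SCENARIOS = {
    "evacuation":       {"time_to_muster_s": 0.5, "congestion_index": 0.2},
    "normal_operation": {"watch_change_time_s": 0.2, "distance_walked_m": 0.1},
}

def hf_score(simulated):
    """Lower is better. `simulated` maps scenario -> measure -> value,
    normalised against a reference design so measures are comparable."""
    total, breakdown = 0.0, {}
    for scenario, weights in SCENARIOS.items():
        s = sum(w * simulated[scenario][m] for m, w in weights.items())
        breakdown[scenario] = s   # diagnostic: which scenario hurt the score
        total += s                # discriminating: one figure per design
    return total, breakdown

# Two designs can now be ranked by total and debugged via the breakdown.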

Relevance: 20.00%

Abstract:

Multilevel algorithms are a successful class of optimisation techniques for the mesh partitioning problem of mapping meshes onto parallel computers. They usually combine a graph contraction algorithm with a local optimisation method which refines the partition at each graph level. To date these algorithms have been used almost exclusively to minimise the cut-edge weight in the graph, with the aim of minimising the parallel communication overhead. However, it has been shown that for certain classes of problem, the convergence of the underlying solution algorithm is strongly influenced by the shape or aspect ratio of the subdomains. In this paper, therefore, we modify the multilevel algorithms to optimise a cost function based on aspect ratio. Several variants of the algorithms are tested and shown to provide excellent results.
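
As a minimal sketch of the local refinement step only (coarsening and the papers' actual gain calculations are omitted; real multilevel refiners use incremental gain updates rather than re-evaluating the cost from scratch), one greedy pass moves boundary vertices to whichever neighbouring subdomain most reduces a pluggable cost function. That pluggable cost is exactly where a cut-edge weight can be swapped for an aspect-ratio measure:

from collections import Counter

def refine(adj, part, cost, max_size):
    """One greedy refinement pass over boundary vertices: move a vertex to a
    neighbouring subdomain whenever that lowers cost(part) without letting
    any subdomain exceed max_size vertices. Illustrative sketch only."""
    sizes = Counter(part)
    improved = False
    for v, neighbours in enumerate(adj):
        home = part[v]
        candidates = {part[n] for n in neighbours} - {home}
        best_p, best_c = home, cost(part)
        for p in candidates:
            if sizes[p] + 1 > max_size:
                continue                  # would unbalance the partition
            part[v] = p
            if (c := cost(part)) < best_c:
                best_p, best_c = p, c
        part[v] = best_p
        if best_p != home:
            sizes[home] -= 1
            sizes[best_p] += 1
            improved = True
    return improved

def cut_edge_weight(adj):
    # Classical cost: count edges crossing subdomain boundaries (adjacency
    # assumed symmetric; each undirected edge counted once via u < v).
    def cost(part):
        return sum(part[u] != part[v]
                   for u, nbrs in enumerate(adj) for v in nbrs if u < v)
    return cost

# Path graph 0-1-2-3: the interleaved partition [0,1,0,1] (cut = 3)
# refines to a contiguous split such as [1,1,0,0] (cut = 1).
adj = [[1], [0, 2], [1, 3], [2]]
part = [0, 1, 0, 1]
refine(adj, part, cut_edge_weight(adj), max_size=3)
print(part, cut_edge_weight(adj)(part))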

Relevance: 10.00%

Abstract:

Three paradigms for distributed-memory parallel computation that free the application programmer from the details of message passing are compared for an archetypal structured scientific computation -- a nonlinear, structured-grid partial differential equation boundary value problem -- using the same algorithm on the same hardware. All of the paradigms -- parallel languages represented by the Portland Group's HPF, (semi-)automated serial-to-parallel source-to-source translation represented by CAPTools from the University of Greenwich, and parallel libraries represented by Argonne's PETSc -- are found to be easy to use for this problem class, and all are reasonably effective in exploiting concurrency after a short learning curve. The level of involvement required of the application programmer under any paradigm includes specification of the data partitioning, corresponding to a geometrically simple decomposition of the domain of the PDE. Programming in SPMD style for the PETSc library requires writing only the routines that discretize the PDE and its Jacobian, managing subdomain-to-processor mappings (affine global-to-local index mappings), and interfacing to library solver routines. Programming for HPF requires a complete sequential implementation of the same algorithm as a starting point, introduction of concurrency through subdomain blocking (a task similar to the index mapping), and modest experimentation with rewriting loops to elucidate to the compiler the latent concurrency. Programming with CAPTools involves feeding the same sequential implementation to the CAPTools interactive parallelization system and guiding the source-to-source code transformation by responding to various queries about quantities knowable only at runtime. Results representative of "the state of the practice" for a scaled sequence of structured grid problems are given on three of the most important contemporary high-performance platforms: the IBM SP, the SGI Origin 2000, and the Cray T3E.
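
Of the programmer tasks listed above, the affine global-to-local index mapping is the easiest to make concrete. A minimal sketch for a 1-D block decomposition (multi-dimensional structured grids apply this independently along each axis; the function names are ours, not from any of the three toolchains):

def block_bounds(n_global, nprocs, rank):
    """Affine global-to-local index mapping for a 1-D block decomposition:
    splits n_global indices over nprocs, handing earlier ranks the
    remainder, and returns the half-open global range owned by `rank`."""
    base, rem = divmod(n_global, nprocs)
    lo = rank * base + min(rank, rem)
    hi = lo + base + (1 if rank < rem else 0)
    return lo, hi

def to_local(g, lo):
    return g - lo   # the affine map itself: local index = global index - offset

# 10 grid points over 3 processes -> [0,4), [4,7), [7,10)
assert [block_bounds(10, 3, r) for r in range(3)] == [(0, 4), (4, 7), (7, 10)]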

Relevance: 10.00%

Abstract:

This paper discusses load-balancing issues when using heterogeneous cluster computers. There is a growing trend towards the use of commodity microprocessor clusters. Although today's microprocessors have reached theoretical peak performances in the gigaflops range, heterogeneous clusters of commodity processors are amongst the most challenging parallel systems to program efficiently. We outline an approach for optimising the performance of parallel mesh-based applications on heterogeneous cluster computers and present case studies with the GeoFEM code. The focus is on application cost monitoring and load balancing using the DRAMA library.
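
As an illustration of the cost-monitoring idea only (the DRAMA library's actual interface is not shown here; the proportional rule below is a textbook heterogeneous-balancing sketch): if each processor reports the time taken for its current share of mesh elements, new shares can be sized in proportion to the observed speeds:

def rebalance(elements_owned, elapsed_s, n_total):
    """Size each processor's next share of mesh elements in proportion to
    its observed speed (elements per second). A sketch of monitoring-driven
    load balancing for heterogeneous clusters, not the DRAMA interface."""
    speeds = [n / t for n, t in zip(elements_owned, elapsed_s)]
    total_speed = sum(speeds)
    shares = [int(n_total * s / total_speed) for s in speeds]
    shares[0] += n_total - sum(shares)   # hand the rounding remainder to rank 0
    return shares

# A node that proved twice as fast receives roughly twice the elements:
print(rebalance([500, 500], elapsed_s=[2.0, 1.0], n_total=1000))  # [334, 666]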

Relevance: 10.00%

Abstract:

Removing zinc by distillation can leave the lead bullion virtually free of zinc and also produces pure zinc crystals. Batch distillation is considered in a hemispherical kettle with a water-cooled lid, under high vacuum (50 Pa or less). Sufficient zinc concentration at the evaporating surface is achieved by means of a mechanical stirrer. The numerical model is based on the multiphysics simulation package PHYSICA. The fluid flow module of the code is used to simulate the action of the stirring impeller and to determine the temperature and concentration fields throughout the liquid volume, including the evaporating surface. The rate of zinc evaporation and condensation is then modelled using Langmuir's equations. Diffusion of the zinc vapour through the residual air in the vacuum gap is also taken into account. Computed results show that the mixing is sufficient and that the rate-limiting step of the process is the surface evaporation, driven by the difference between the equilibrium vapour pressure and the actual partial pressure of zinc vapour. However, at higher zinc concentrations, heat transfer through the growing zinc crystal crust towards the cold steel lid may become the limiting factor because the crystallisation front may reach the melting point. The computational model can be very useful in optimising the process within its safe limits.
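
The abstract cites Langmuir's equations without reproducing them; the standard Hertz-Knudsen-Langmuir expression for the net evaporative mass flux of zinc from the melt surface is

J = \alpha\,\bigl(p_{\mathrm{eq}}(T) - p_{\mathrm{Zn}}\bigr)\,\sqrt{\frac{M_{\mathrm{Zn}}}{2\pi R T}}

where p_eq(T) is the equilibrium vapour pressure at the surface temperature, p_Zn the actual partial pressure of zinc vapour above the surface, M_Zn the molar mass of zinc, R the gas constant and α an accommodation coefficient. The flux vanishes as p_Zn approaches p_eq, which is precisely the driving difference the abstract identifies as rate-limiting.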

Relevance: 10.00%

Abstract:

The shared-memory programming model can be an effective way to achieve parallelism on shared-memory parallel computers. Historically, however, the lack of a directive-based programming standard and limited scalability have affected its take-up. Recent advances in hardware and software technologies have improved both the performance of directive-based parallel programs and, with the introduction of OpenMP, their portability. In this study, the Computer Aided Parallelisation Toolkit has been extended to automatically generate OpenMP-based parallel programs with nominal user assistance. We categorise the different loop types and show how efficient directives can be placed using the toolkit's in-depth interprocedural analysis. Examples are taken from the NAS parallel benchmarks and a number of real-world application codes. This demonstrates the great potential of using the toolkit to quickly parallelise serial programs, as well as the good performance achievable on up to 300 processors for hybrid message-passing/directive parallelisations.
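
As a toy illustration of the loop-categorisation idea (the toolkit's interprocedural analysis is far deeper; the Loop representation and dependence test below are invented for the sketch), a loop with no loop-carried dependence can safely receive an OpenMP directive:

from dataclasses import dataclass

@dataclass
class Loop:
    """Invented stand-in for the result of dependence analysis on one loop."""
    index: str
    writes: set   # array references written, e.g. {("a", "i")}
    reads: set    # array references read,    e.g. {("a", "i-1")}

def classify(loop):
    # A read of an array that the loop writes, at a subscript other than
    # the loop index, signals a loop-carried dependence.
    written = {name for name, _ in loop.writes}
    carried = any(name in written and sub != loop.index
                  for name, sub in loop.reads)
    return "serial" if carried else "parallel"

def emit_directive(loop):
    if classify(loop) == "parallel":
        return f"!$OMP PARALLEL DO PRIVATE({loop.index})"
    return "! loop-carried dependence: left serial"

recurrence = Loop("i", writes={("a", "i")}, reads={("a", "i-1")})
pointwise  = Loop("i", writes={("a", "i")}, reads={("a", "i"), ("b", "i")})
print(emit_directive(recurrence))   # ! loop-carried dependence: left serial
print(emit_directive(pointwise))    # !$OMP PARALLEL DO PRIVATE(i)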

Relevance: 10.00%

Abstract:

Purpose – To present key challenges associated with the evolution of system-in-package (SiP) technologies and to present technical work in reliability modelling and embedded test that contributes to these challenges. Design/methodology/approach – Key challenges have been identified from the electronics and integrated MEMS industrial sectors. Solutions for optimising the reliability of a typical assembly process and reducing the cost of production test have been studied through simulation and modelling studies based on technology data released by NXP, in collaboration with EDA tool vendors Coventor and Flomerics. Findings – Characterised models that deliver spatial and material-dependent reliability data that can be used to optimise the robustness of SiP assemblies, together with results that indicate the relative contributions of various structural variables; an initial analytical model for solder ball reliability; and a solution for embedding a low-cost test for a capacitive RF-MEMS switch, identified as an SiP component presenting a key test challenge. Research limitations/implications – Results will contribute to the further development of NXP wafer-level system-in-package technology. Limitations are that feedback on the implementation of the recommendations and the physical characterisation of the embedded test solution are not yet available. Originality/value – Both the methodology and the associated studies on the structural reliability of an industrial SiP technology are unique. The analytical model for solder ball life is new, as is the embedded test solution for the RF-MEMS switch.
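
The paper's analytical solder-ball model is not given in the abstract. As a hedged pointer only, analytical life models for solder joints commonly start from a Coffin-Manson low-cycle-fatigue relation between cycles to failure N_f and the cyclic plastic strain range Δε_p:

\frac{\Delta\varepsilon_p}{2} = \varepsilon_f'\,(2N_f)^{c} \quad\Longrightarrow\quad N_f = \frac{1}{2}\left(\frac{\Delta\varepsilon_p}{2\varepsilon_f'}\right)^{1/c}

with fatigue ductility coefficient ε'_f and exponent c < 0 fitted to the solder alloy; the strain range is the quantity that assembly-level thermo-mechanical simulation supplies.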

Relevance: 10.00%

Abstract:

Purpose – This paper aims to present an open-ended microwave curing system for microelectronics components and a numerical analysis framework for virtual testing and prototyping of the system, enabling the design of physical prototypes to be optimised and expediting the development process. Design/methodology/approach – An open-ended microwave oven system able to enhance the cure process for thermosetting polymer materials utilised in microelectronics applications is presented. The system is designed to be mounted on a precision placement machine, enabling curing of individual components on a circuit board. The design of the system allows the heating pattern and heating rate to be carefully controlled, optimising cure rate and cure quality. A multi-physics analysis approach has been adopted to form a numerical model capable of capturing the complex coupling that exists between the physical processes. Electromagnetic analysis has been performed using a Yee finite-difference time-domain scheme, while an unstructured finite volume method has been utilised to perform the thermophysical analysis. The two solvers are coupled using a sampling-based cross-mapping algorithm. Findings – The numerical results obtained demonstrate that the model is able to obtain solutions for the distribution of temperature, rate of cure, degree of cure and thermally induced stresses within an idealised polymer load heated by the proposed microwave system. Research limitations/implications – The work is limited by the absence of experimentally derived material property data and comparative experimental results. However, the model demonstrates that the proposed microwave system would seem to be a feasible method of expediting the cure rate of polymer materials. Originality/value – The findings of this paper will help to provide an understanding of the behaviour of thermosetting polymer materials during microwave cure processing.
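
For reference, the Yee FDTD scheme referred to above staggers the electric and magnetic fields by half a cell in space and half a step in time. In one dimension (shown for illustration; the actual solver is three-dimensional) the leapfrog updates are

H_y^{n+1/2}(k+\tfrac{1}{2}) = H_y^{n-1/2}(k+\tfrac{1}{2}) - \frac{\Delta t}{\mu\,\Delta z}\bigl[E_x^{n}(k+1) - E_x^{n}(k)\bigr]

E_x^{n+1}(k) = E_x^{n}(k) - \frac{\Delta t}{\varepsilon\,\Delta z}\bigl[H_y^{n+1/2}(k+\tfrac{1}{2}) - H_y^{n+1/2}(k-\tfrac{1}{2})\bigr]

The dissipated microwave power density, proportional to the loss part of the polymer's permittivity times |E|^2, is what the sampling-based cross-mapping passes to the finite volume thermal solver.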

Relevance: 10.00%

Abstract:

Nano-imprint forming (NIF) is among the most attractive manufacturing technologies, offering high-yield, low-cost fabrication of three-dimensional fine structures and patterns with a resolution of a few nanometres. Optimising the NIF process is critical for achieving high-quality products and minimising the risk of commonly observed defects. Using finite element analysis, the effect of various process parameters is evaluated and design rules for safe and reliable NIF fabrication are formulated. This work is part of a major UK Grand Challenge project - 3D-Mintegration - for the design, simulation, fabrication, assembly and test of next-generation 3D-miniaturised systems.

Relevance: 10.00%

Abstract:

Nano-imprint forming (NIF) as a manufacturing technology is ideally placed to enable high-resolution, low-cost and high-throughput fabrication of three-dimensional fine structures and the packaging of heterogeneous micro-systems (S.Y. Chou and P.R. Krauss, 1997). This paper details a thermo-mechanical modelling methodology for optimising this process for the different materials used in components such as mini-fluidics and bio-chemical systems, optoelectronics, photonics and health usage monitoring systems (HUMS). This work is part of a major UK Grand Challenge project - 3D-Mintegration - which is aiming to develop modelling and design technologies for the next generation of fabrication, assembly and test processes for 3D-miniaturised systems.

Relevance: 10.00%

Abstract:

The recognition that urban groundwater is a potentially valuable resource for potable and industrial uses, owing to growing pressure on rural groundwater that is perceived to be less polluted, has led to a requirement to assess the groundwater contamination risk in urban areas from industrial contaminants such as chlorinated solvents. The development of a probabilistic, risk-based management tool that predicts groundwater quality at potential new urban boreholes is beneficial in determining the best sites for future resource development. The Borehole Optimisation System (BOS) is a custom Geographic Information System (GIS) application that has been developed with the objective of identifying the optimum locations for new abstraction boreholes. BOS can be applied to any aquifer subject to variable contamination risk. The system is described in more detail by Tait et al. [Tait, N.G., Davison, J.J., Whittaker, J.J., Leharne, S.A., Lerner, D.N., 2004a. Borehole Optimisation System (BOS) - a GIS-based risk analysis tool for optimising the use of urban groundwater. Environmental Modelling and Software 19, 1111-1124]. This paper applies the BOS model to an urban Permo-Triassic Sandstone aquifer in the city centre of Nottingham, UK. The risk of pollution in potential new boreholes from the industrial chlorinated solvent tetrachloroethene (PCE) was assessed for this region. The risk model was validated against contaminant concentrations from six actual field boreholes within the study area. In these studies the model generally underestimated contaminant concentrations. A sensitivity analysis showed that the most responsive model parameters were recharge, effective porosity and contaminant degradation rate. Multiple simulations were undertaken across the study area in order to create surface maps indicating areas of low PCE concentration, and hence the best locations to place new boreholes. Results indicate that the northeastern, eastern and central regions have the lowest potential PCE concentrations in abstracted groundwater and are therefore the best sites for locating new boreholes. These locations coincide with aquifer areas that are confined by low-permeability Mercia Mudstone deposits. Conversely, southern and northwestern areas are unconfined and have a shallower depth to groundwater; these areas have the highest potential PCE concentrations. These studies demonstrate the applicability of BOS as a tool for informing decision makers on the development of urban groundwater resources.
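
As a sketch of the probabilistic element only (BOS couples a GIS to a fuller transport model; the expressions and parameter ranges below are textbook simplifications invented for illustration, not calibrated to the Nottingham aquifer), a Monte Carlo estimate of PCE concentration at a candidate borehole samples uncertain parameters, including the effective porosity and degradation rate the sensitivity analysis flags, and propagates them through an advection-plus-first-order-decay model:

import math
import random

def pce_at_borehole(c_source, distance_m, gradient, n_trials=10000):
    """Monte Carlo sketch: advective travel time from a PCE source to a
    candidate borehole, with first-order degradation en route. All
    parameter ranges are illustrative assumptions."""
    results = []
    for _ in range(n_trials):
        K = random.lognormvariate(math.log(3.0), 0.5)  # hydraulic conductivity, m/day
        n_e = random.uniform(0.10, 0.30)               # effective porosity (sensitive)
        lam = random.uniform(1e-5, 1e-3)               # degradation rate, 1/day (sensitive)
        v = K * gradient / n_e                         # seepage velocity, m/day
        t = distance_m / v                             # travel time, days
        results.append(c_source * math.exp(-lam * t))  # first-order decay
    results.sort()
    return results[int(0.95 * n_trials)]               # 95th-percentile concentration

# Surface maps of the kind described above come from evaluating this over a
# grid of candidate sites, each with its own distance and gradient:
print(pce_at_borehole(c_source=100.0, distance_m=500.0, gradient=0.005))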