329 results for QA75
Abstract:
Electrodeposition is a widely used technique for the fabrication of high-aspect-ratio microstructure components. In recent years much research has focused on this area with the aim of understanding the physics behind the filling of high-aspect-ratio vias and trenches on PCBs, and in particular how they can be made without the formation of voids in the deposited material. This paper describes some of the fundamental work towards the advancement of numerical models that can predict the electrodeposition process and addresses: (i) a novel technique for interface motion based on a variation of a donor-acceptor technique; (ii) a methodology for the investigation of stress profiles in deposits; (iii) the implementation of acoustic forces to generate replenishing electrolytic flow circulation in recessed features.
Abstract:
A numerical modelling method for the analysis of solder joint damage and crack propagation is described in this paper. The method is based on the disturbed state concept. Under cyclic thermal-mechanical loading conditions, the level of damage that occurs in solder joints is assumed to be a simple monotonic scalar function of the accumulated equivalent plastic strain. The increase of damage leads to crack initiation and propagation. By tracking the evolution of the damage level in solder joints, the crack propagation path and rate can be simulated using the finite element analysis method. The discussion focuses on issues in the implementation of the method: techniques for speeding up the simulation and mesh-dependency issues are analysed. As an example of the application of this method, crack propagation in solder joints of power electronics modules under cyclic thermal-mechanical loading conditions has been analysed, and the predicted cracked-area size after 3000 loading cycles is consistent with experimental results.
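The damage-tracking idea above can be sketched in a few lines: damage is a monotonic scalar function of accumulated equivalent plastic strain, and an element is treated as cracked once the damage level saturates. The linear accumulation rule, the critical strain value, and the per-cycle strain increment below are illustrative assumptions, not the paper's calibrated model.

```python
# Sketch of scalar damage accumulation in the spirit of the disturbed state
# concept: damage D grows monotonically with accumulated equivalent plastic
# strain; D = 1 marks a fully cracked element. Constants are illustrative.

def update_damage(damage, delta_eq_plastic_strain, critical_strain=0.5):
    """Advance the scalar damage level after one loading increment."""
    damage += delta_eq_plastic_strain / critical_strain  # monotonic growth
    return min(damage, 1.0)  # cap at 1.0: fully cracked

# Accumulate damage over simulated thermal-mechanical loading cycles
damage = 0.0
for cycle in range(3000):
    damage = update_damage(damage, delta_eq_plastic_strain=1e-4)
    if damage >= 1.0:
        break  # crack spans the element; propagation would be re-meshed here
```

In a finite element setting this update would run per element, and elements reaching `damage >= 1.0` would be released to advance the crack front.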
Abstract:
In this paper, computer modelling techniques are used to analyse the effects of globtops on the reliability of aluminium wirebonds in power electronics modules under cyclic thermal-mechanical loading conditions. The sensitivity of the wirebond reliability to changes in the geometric and material property parameters of the wirebond globtop is evaluated, and the optimal combination of the Young's modulus and the coefficient of thermal expansion has been predicted.
Abstract:
This paper describes a methodology for deploying flexible dynamic configuration into embedded systems whilst preserving the reliability advantages of static systems. The methodology is based on the concept of decision points (DP), which are strategically placed to achieve fine-grained distribution of self-management logic to meet application-specific requirements. DP logic can be changed easily, and independently of the host component, enabling self-management behaviour to be deferred beyond the point of system deployment. A transparent Dynamic Wrapper mechanism (DW) automatically detects and handles problems arising from the evaluation of self-management logic within each DP, so that the dynamic aspects of the system collapse down to statically defined default behaviour, preserving safety and correctness despite failures. Dynamic context management contributes to flexibility and removes the need for design-time binding of context providers and consumers, thus facilitating run-time composition and incremental component upgrade.
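The decision-point/dynamic-wrapper pattern described above can be sketched as follows. All class and method names here are illustrative, not the paper's API: the point is that the dynamic logic is replaceable independently of the host, and any failure in it collapses to a statically defined default.

```python
# Sketch of a decision point (DP) guarded by a dynamic wrapper (DW):
# replaceable self-management logic with a safe static fallback.

class DecisionPoint:
    def __init__(self, static_default):
        self.static_default = static_default  # safe behaviour to collapse to
        self.dynamic_logic = None             # may be injected after deployment

    def set_logic(self, logic):
        """Swap in new self-management logic, independently of the host."""
        self.dynamic_logic = logic

    def evaluate(self, context):
        """Dynamic wrapper: run the DP logic, fall back to the default on failure."""
        if self.dynamic_logic is None:
            return self.static_default(context)
        try:
            return self.dynamic_logic(context)
        except Exception:
            return self.static_default(context)  # collapse to static behaviour

dp = DecisionPoint(static_default=lambda ctx: "conservative-mode")
dp.set_logic(lambda ctx: "adaptive-mode" if ctx["load"] < 0.8 else 1 / 0)
print(dp.evaluate({"load": 0.5}))  # dynamic logic succeeds: adaptive-mode
print(dp.evaluate({"load": 0.9}))  # logic raises, DW returns conservative-mode
```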
Abstract:
This paper describes ways in which emergence engineering principles can be applied to the development of distributed applications. A distributed solution to the graph-colouring problem is used as a vehicle to illustrate some novel techniques. Each node acts autonomously to colour itself based only on its local view of its neighbourhood, following a simple set of carefully tuned rules. Randomness breaks symmetry and thus enhances stability. The algorithm has been developed to enable self-configuration in wireless sensor networks, and to reflect real-world configurations it operates with 3-dimensional topologies (reflecting the propagation of radio waves and the placement of sensors in buildings, bridge structures, etc.). The algorithm's performance is evaluated and results presented. It is shown to be simultaneously highly stable and scalable whilst achieving low convergence times. The use of eavesdropping gives rise to low interaction complexity and high efficiency in terms of the communication overheads.
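The local rule described above, where each node colours itself from its neighbourhood view and randomness breaks symmetry, can be sketched as below. The retry rule (pick randomly among colours not taken by neighbours, or any colour if none is free) is a simplified illustration, not the paper's tuned rule set.

```python
import random

# Sketch of self-colouring from local neighbourhood views with randomized
# symmetry breaking. Each node only inspects its own neighbours' colours.

def local_colouring(adjacency, colours, rounds=100, seed=1):
    rng = random.Random(seed)
    state = {node: rng.choice(colours) for node in adjacency}  # random start
    for _ in range(rounds):
        conflicts = [n for n in adjacency
                     if any(state[m] == state[n] for m in adjacency[n])]
        if not conflicts:
            return state  # stable: no node shares a colour with a neighbour
        for n in conflicts:
            taken = {state[m] for m in adjacency[n]}  # local view only
            free = [c for c in colours if c not in taken]
            # random choice among free colours breaks ties between
            # symmetric neighbours; retry randomly if nothing is free
            state[n] = rng.choice(free or colours)
    return state

# A small 3-D topology: the corner graph of a cube (bipartite, 2-colourable)
cube = {0: [1, 2, 4], 1: [0, 3, 5], 2: [0, 3, 6], 3: [1, 2, 7],
        4: [0, 5, 6], 5: [1, 4, 7], 6: [2, 4, 7], 7: [3, 5, 6]}
result = local_colouring(cube, colours=["red", "blue"])
```

Because the update is randomized, convergence is probabilistic; the eavesdropping optimisation mentioned in the abstract (reusing overheard neighbour state instead of explicit queries) is not modelled here.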
Abstract:
This paper presents an investigation into applying Case-Based Reasoning (CBR) to multiple heterogeneous case bases using agents. The adaptive CBR process and the architecture of the system are presented, and a case study is used to illustrate and evaluate the approach. The process of creating and maintaining the dynamic data structures is discussed. The similarity metrics employed by the system support the optimisation of the collaboration between the agents, which is based on the use of a blackboard architecture. The blackboard architecture is shown to support efficient collaboration between the agents to achieve an efficient overall CBR solution, while case-based reasoning methods allow the system to adapt and "learn" new collaborative strategies for achieving the aims of the overall CBR problem-solving process.
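For readers unfamiliar with the retrieval step of a CBR cycle, the following is a minimal sketch of similarity-based case retrieval. The weighted metric and the feature names are illustrative assumptions, not the metrics used by the system described above.

```python
# Sketch of the retrieve step of a CBR cycle: rank stored cases by a simple
# weighted similarity to the query and return the top k. Illustrative only.

def similarity(query, case, weights):
    """Higher when the weighted feature distances are smaller (max 1.0)."""
    distance = sum(w * abs(query[k] - case[k]) for k, w in weights.items())
    return 1.0 / (1.0 + distance)

def retrieve(query, case_base, weights, k=1):
    """Return the k most similar cases from a (possibly remote) case base."""
    ranked = sorted(case_base, key=lambda c: similarity(query, c, weights),
                    reverse=True)
    return ranked[:k]

# Two toy cases with one numeric feature and an attached solution
case_base = [{"temp": 20.0, "action": "idle"},
             {"temp": 80.0, "action": "cool"}]
best = retrieve({"temp": 75.0}, case_base, weights={"temp": 1.0})[0]
print(best["action"])  # nearest case's solution: cool
```

In the multi-agent setting of the paper, each agent would run retrieval against its own heterogeneous case base and post candidate cases to the blackboard for comparison.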
Abstract:
An aerodynamic sound source extraction from a general flow field is applied to a number of model problems and to a problem of engineering interest. The extraction technique is based on a variable decomposition, which results in an acoustic correction method, of each of the flow variables into a dominant flow component and a perturbation component. The dominant flow component is obtained with a general-purpose Computational Fluid Dynamics (CFD) code, which uses a cell-centred finite volume method to solve the Reynolds-averaged Navier–Stokes equations. The perturbations are calculated from a set of acoustic perturbation equations with source terms extracted from unsteady CFD solutions at each time step via a staggered dispersion-relation-preserving (DRP) finite-difference scheme. Numerical experiments include (1) propagation of a 1-D acoustic pulse without mean flow, (2) propagation of a 2-D acoustic pulse with/without mean flow, (3) reflection of an acoustic pulse from a flat plate with mean flow, and (4) flow-induced noise generated by an unsteady laminar flow past a 2-D cavity. The computational results demonstrate the accuracy of the source extraction technique for model problems and illustrate its feasibility for more complex aeroacoustic problems.
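The variable decomposition described above is commonly written as follows; the notation here is generic, not necessarily the paper's own symbols:

```latex
% Each flow variable q (density, velocity components, pressure) is split into
% a dominant flow component and a perturbation component:
q(\mathbf{x}, t) = \bar{q}(\mathbf{x}, t) + q'(\mathbf{x}, t),
\qquad q \in \{\rho,\; u_i,\; p\},
```

where $\bar{q}$ is the dominant component supplied by the RANS-based CFD solution and $q'$ is the acoustic perturbation, governed by the acoustic perturbation equations whose source terms are extracted from the unsteady CFD solution at each time step.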
Abstract:
Image inpainting refers to restoring a damaged image with missing information. The total variation (TV) inpainting model is one such method that simultaneously fills in the regions with available information from their surroundings and eliminates noise. The method works well with small, narrow inpainting domains. However, there remains an urgent need to develop fast iterative solvers, as the underlying problem sizes are large. In addition, one needs to tackle the imbalance of results between inpainting and denoising: when the inpainting regions are thick and large, the inpainting procedure works quite slowly, usually requires a significant number of iterations, and inevitably leads to oversmoothing outside the inpainting domain. To overcome these difficulties, we propose a solution for the TV inpainting model based on the nonlinear multigrid algorithm.
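For context, the TV inpainting model is usually posed as a minimisation of a TV regulariser plus a fidelity term that is switched off inside the inpainting domain; the formulation below is the standard one (e.g. of Chan–Shen type), with notation chosen here rather than taken from the paper:

```latex
% z: observed image on domain \Omega, D \subset \Omega: inpainting domain,
% \lambda > 0: fidelity weight (active only where data is available)
\min_{u} \; E(u) = \int_{\Omega} |\nabla u| \, d\mathbf{x}
  \;+\; \frac{\lambda}{2} \int_{\Omega \setminus D} (u - z)^2 \, d\mathbf{x}.
```

The TV term propagates edges into $D$ from its surroundings while the fidelity term denoises outside $D$; the imbalance discussed above arises because large, thick $D$ forces many iterations of the TV term, oversmoothing the region where the fidelity term is active.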
Abstract:
This article provides a broad overview of project HEED (High-rise Evacuation Evaluation Database) and the methodologies employed in the collection and storage of first-hand accounts of evacuation experiences derived from face-to-face interviews of evacuees from the World Trade Center (WTC) Twin Towers complex on September 11, 2001. In particular, the article describes the development of the HEED database. This is a flexible research tool which contains qualitative data in the form of coded evacuee experiences along with the full interview transcripts. The data and information captured and stored in the HEED database are not only unique, but provide a means to address current and emerging issues relating to human factors associated with the evacuation of high-rise buildings.
Abstract:
This paper describes an autonomics development tool which serves as both a powerful and flexible policy-expression language and a policy-based framework that supports the integration and dynamic composition of several autonomic computing techniques including signal processing, automated trend analysis and utility functions. Each of these technologies has specific advantages and applicability to different types of dynamic adaptation. The AGILE platform enables seamless interoperability of the different technologies to each perform various aspects of self-management within a single application. Self-management behaviour is specified using the policy language semantics to bind the various technologies together as required. Since the policy semantics support run-time re-configuration, the self-management architecture is dynamically composable. The policy language and implementation library have integrated support for self-stabilising behaviour, enabling oscillation and other forms of instability to be handled at the policy level with very little effort on the part of the application developer. Example applications are presented to illustrate the integration of different autonomics techniques, and the achievement of dynamic composition.
Abstract:
This paper takes forward the discussion on the development of a Framework for e-Learning. It briefly describes how the discussion has progressed from the suggested development of a Framework, and the findings of a study investigating the use of Blended Learning, to the application of PESTE factors from Sociology and the proposal of new PESTE factors for educational software and e-Learning. It asks whether the current use of Computer-Mediated Communication (CMC), through the provision of direct, front-line service applications, is leading to the deskilling of professions, and considers the implications for e-Learning.
Abstract:
This paper briefly describes the methodologies employed in the collection and storage of first-hand accounts of evacuation experiences derived from face-to-face interviews with evacuees from the World Trade Center (WTC) Twin Towers complex on 11 September 2001 and the development of the High-rise Evacuation Evaluation Database (HEED). The main focus of the paper is to present an overview of preliminary analysis of data derived from the evacuation of the North Tower.
Abstract:
This paper considers two-machine flow shop scheduling problems with machine availability constraints. When the processing of a job is interrupted by an unavailability period of a machine, we consider both the resumable scenario in which the processing can be resumed when the machine next becomes available, and the semi-resumable scenario in which some portion of the processing is repeated but the job is otherwise resumable. For the problem with several non-availability intervals on the first machine under the resumable scenario, we present a fast (3/2)-approximation algorithm. For the problem with one non-availability interval under the semi-resumable scenario, a polynomial-time approximation scheme is developed.
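The distinction between the resumable and semi-resumable scenarios can be made concrete with a small helper that computes a job's completion time across one unavailability interval. The repeat fraction `alpha` is an illustrative parameter: `alpha = 0` recovers the resumable scenario, and `0 < alpha <= 1` repeats that fraction of the pre-interruption work, as in the semi-resumable scenario.

```python
# Sketch: completion time of a job of length p started at `start` on a machine
# that is unavailable during [s, t). Illustrative model of the two scenarios.

def completion_time(p, start, s, t, alpha=0.0):
    """alpha = 0: resumable (no work repeated);
    0 < alpha <= 1: semi-resumable (fraction alpha of prior work repeated)."""
    if start + p <= s or start >= t:
        return start + p  # job does not overlap the unavailability interval
    done_before = max(0.0, s - start)            # work finished before s
    remaining = p - done_before + alpha * done_before  # repeated portion added
    return t + remaining                          # resume when machine returns

print(completion_time(p=5, start=0, s=3, t=7))             # resumable: 9.0
print(completion_time(p=5, start=0, s=3, t=7, alpha=0.5))  # semi-resumable: 10.5
```

Under the resumable scenario the interruption only shifts the remaining 2 units of work past the interval; the semi-resumable scenario additionally repeats half of the 3 units already done.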
Abstract:
In this note, we consider the scheduling problem of minimizing the sum of the weighted completion times on a single machine with one non-availability interval under the non-resumable scenario. Together with a recent 2-approximation algorithm designed by Kacem [I. Kacem, Approximation algorithm for the weighted flow-time minimization on a single machine with a fixed non-availability interval, Computers & Industrial Engineering 54 (2008) 401–410], this paper is the first successful attempt to develop a constant-ratio approximation algorithm for this problem. We present two approaches to designing such an algorithm. Our best algorithm guarantees a worst-case performance ratio of 2 + ε.
Abstract:
We consider a problem of scheduling jobs on m parallel machines. The machines are dedicated, i.e., for each job the processing machine is known in advance. We mainly concentrate on the model in which at any time there is one unit of an additional resource. Any job may be assigned the resource, and this reduces its processing time. A job that is given the resource uses it throughout its processing. No two jobs are allowed to use the resource simultaneously. The objective is to minimize the makespan. We prove that the two-machine problem is NP-hard in the ordinary sense, describe a pseudopolynomial dynamic programming algorithm and convert it into an FPTAS. For the problem with an arbitrary number of machines we present an algorithm with a worst-case ratio close to 3/2 (and close to 3 if a job can be given several units of the resource). For the problem with a fixed number of machines we give a PTAS. Virtually all the algorithms rely on a certain variant of the linear knapsack problem (maximization, minimization, multiple-choice, bicriteria). © 2008 Wiley Periodicals, Inc. Naval Research Logistics, 2008.
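Since the abstract notes that virtually all of the algorithms rely on variants of the linear knapsack problem, a reminder of the basic maximization variant may help; the dynamic program below is the textbook 0/1 knapsack, not the specific variant used in the paper.

```python
# Textbook 0/1 knapsack (maximization variant) by dynamic programming over
# capacities: pick a subset of items maximizing value within a weight budget.

def knapsack(values, weights, capacity):
    best = [0] * (capacity + 1)  # best[c] = max value within capacity c
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):  # reverse: each item used once
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

print(knapsack(values=[60, 100, 120], weights=[1, 2, 3], capacity=5))  # 220
```

The scheduling algorithms in the paper would use such subroutines to decide which jobs receive the additional resource; its minimization, multiple-choice and bicriteria variants follow the same dynamic-programming pattern with modified states.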