879 results for Programming languages (Electronic computers)
Abstract:
Despite the apparent simplicity of the OpenMP directive-based shared-memory programming model and the sophisticated dependence analysis and code generation capabilities of the ParaWise/CAPO tools, experience shows that a level of expertise is required to produce efficient parallel code. In a real-world application, the investigation of a single loop in a generated parallel code can soon become an in-depth inspection of numerous dependencies across many routines. An understanding of these dependencies is also needed to interpret the information provided effectively and to supply the required feedback. The ParaWise Expert Assistant has been developed to automate this investigation and to present questions to the user about, and in the context of, their application code. In this paper, we demonstrate that knowledge of dependence information and OpenMP is no longer essential to produce efficient parallel code with the Expert Assistant. It is hoped that this will enable a far wider audience to use the tools and, subsequently, to exploit the benefits of large parallel systems.
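To make the dependence issue concrete, here is a minimal C++ sketch (illustrative only, not taken from the paper or from ParaWise/CAPO output): the first loop has independent iterations and can carry an OpenMP work-sharing directive, while the second has a loop-carried dependence on a[i-1] and cannot be parallelised as written.

```cpp
#include <cstddef>
#include <vector>

// Independent iterations: each element is updated in isolation, so the loop
// can safely carry an OpenMP work-sharing directive.
void scale(std::vector<double>& a, double k) {
    #pragma omp parallel for
    for (long i = 0; i < static_cast<long>(a.size()); ++i)
        a[i] *= k;
}

// Loop-carried dependence: a[i] reads a[i - 1] written by the previous
// iteration, so this loop must not be parallelised as written.
void prefix_sum(std::vector<double>& a) {
    for (std::size_t i = 1; i < a.size(); ++i)
        a[i] += a[i - 1];
}
```

Deciding which of these two cases a given loop falls into, across many routines, is exactly the investigation the Expert Assistant aims to automate.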
Abstract:
Web-service-based systems have recently found their way into many applications such as e-commerce, corporate integration and e-learning. Constructing new services, or introducing new functions into existing services, requires the composition of web services. Current approaches to service composition often require major programming effort; this is time-consuming and demands considerable developer expertise. In this paper, we explore the real and rich scenarios found in e-learning, where education services are offered over the Internet by networked universities to potentially millions of learners worldwide. These services are derived from existing or emerging business operation processes and are commonly offered through a web interface, combined with other services such as email and FTP services, to support partial or full business processes. We identify the requirements for a generic portal framework for the easy integration of the existing expertise and services of individual institutions (enterprises). We examine the existing technologies and standards, and point out the gaps to be filled in designing the architecture of the framework.
Abstract:
A cross-domain workflow application may be constructed using a standard reference model such as the one by the Workflow Management Coalition (WfMC) [7], but the requirements for this type of application are inherently different from one organization to another. The existing models, and the systems built around them, meet some but not all of the requirements of the organizations involved in a collaborative process. Furthermore, the requirements change over time. This makes the applications difficult to develop and distribute. Service Oriented Architecture (SOA) based approaches such as BPEL (Business Process Execution Language) intend to provide a solution but fail to address the problems sufficiently, especially in situations where the expectations and skill levels of the users (e.g. the participants in the processes) in different organizations are likely to differ. In this paper, we discuss a design pattern that provides a novel approach towards a solution. In this solution, business users can design the applications at a high level of abstraction: the use cases and the user interactions. The designs are documented and used, together with the data and events captured later that represent the user interactions with the systems, to feed an intermediate component local to the users, the IFM (InterFace Mapper), which bridges the gaps between the users and the systems. We discuss the main issues faced in the design and prototyping. The approach alleviates the need for re-programming against the APIs of any back-end service, thus easing the development and distribution of the applications.
Abstract:
In this paper, the reliability of the isolation substrate and of the chip mount-down solder interconnect of power modules under thermo-mechanical loading is analysed using a numerical modelling approach. Damage indicators such as the peel stress and the accumulated plastic work density in the solder interconnect are calculated for a range of geometrical design parameters, and the effects of these parameters on reliability are studied using a combination of the finite element analysis (FEA) method and optimisation techniques. The sensitivities of the reliability of the isolation substrate and solder interconnect to changes in the design parameters are obtained, and optimal designs are studied using response surface approximation and a gradient-based optimisation method.
Abstract:
In this paper, thermal cycling reliability along with ANSYS analysis of the residual stress generated in heavy-gauge Al bond wires at different bonding temperatures is reported. 99.999% pure Al wires of 375 µm in diameter were ultrasonically bonded to silicon dies coated with a 5 µm thick Al metallisation at 25 °C (room temperature), 100 °C and 200 °C, respectively (with the same bonding parameters). The wire bonded samples were then subjected to thermal cycling in air from -60 °C to +150 °C. The degradation rate of the wire bonds was assessed by means of bond shear tests and via microstructural characterisation. Prior to thermal cycling, the shear strength of all of the wire bonds was approximately equal to the shear strength of pure aluminum and independent of bonding temperature. During thermal cycling, however, the shear strength of room temperature bonded samples was observed to decrease more rapidly (as compared to bonds formed at 100 °C and 200 °C) as a result of a high crack propagation rate across the bonding area. In addition, modification of the grain structure at the bonding interface was also observed with bonding temperature, leading to changes in the mechanical properties of the wire. The heat and pressure induced by high temperature bonding is believed to promote grain recovery and recrystallisation, softening the wires through removal of dislocations and plastic strain energy. Coarse grains formed at the bonding interface after bonding at elevated temperatures may also contribute to greater resistance to crack propagation, thus lowering the wire bond degradation rate.
Abstract:
The electric car, the all-electric aircraft and requirements for renewable energy are examples of technologies needed to address the global problems of global warming and carbon emissions. Power electronics and packaged modules are fundamental to underpinning these technologies, and with the diverse requirements for electrical configurations and the range of environmental conditions, time to market is paramount for module manufacturers and system designers alike. This paper details some of the results from a major UK project into the reliability of power electronic modules using physics-of-failure techniques. It presents a design methodology, together with results that demonstrate enhanced product design with improved reliability, performance and value within acceptable time scales.
Abstract:
This paper discusses the reliability of an IGBT power electronics module. The work is part of a major UK-funded initiative into the design, packaging and reliability of power electronic modules. The predictive methodology combines numerical modeling techniques with experimentation and accelerated testing to identify failure modes and mechanisms for this type of power electronic module structure. The paper details results for substrate solder joint failure. Finite element method modeling techniques have been used to predict the stress and strain distribution within the module structures. Together with accelerated life testing, these results have provided a failure model for these joints, which has been used to predict the reliability of a rail traction application.
Abstract:
We study the two-machine flow shop problem with an uncapacitated interstage transporter. The jobs have to be split into batches and, upon completion on the first machine, each batch has to be shipped to the second machine by the transporter. The best known heuristic for the problem is a –approximation algorithm that outputs a two-shipment schedule. We design a –approximation algorithm that finds schedules with at most three shipments, and this ratio cannot be improved unless schedules with more shipments are created. This improvement is achieved through a thorough analysis of schedules with two and three shipments by means of linear programming. We formulate the problems of finding an optimal schedule with two or three shipments as integer linear programs and develop strongly polynomial algorithms that find solutions to their continuous relaxations with a small number of fractional variables.
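As a rough illustration of the setting (not the authors' algorithm), the sketch below computes the makespan of a given batched schedule under one plausible model: the transporter carries a whole batch from the first machine to the second in time tau and must travel back (another tau) before the next shipment can depart. The return-trip assumption and the data layout are choices made here for the example.

```cpp
#include <algorithm>
#include <vector>

// One batch: per-job processing times on machine 1 and on machine 2.
struct Batch {
    std::vector<double> p1, p2;
};

// Makespan of a fixed sequence of batches, assuming a single transporter
// that needs tau time units per trip in each direction.
double makespan(const std::vector<Batch>& batches, double tau) {
    double m1_done = 0.0;           // machine 1 finishes the current batch
    double transporter_free = 0.0;  // transporter is back at machine 1
    double m2_done = 0.0;           // machine 2 finishes the current batch
    for (const Batch& b : batches) {
        for (double p : b.p1) m1_done += p;                 // process batch on M1
        double depart = std::max(m1_done, transporter_free);
        double arrive = depart + tau;                       // ship batch to M2
        transporter_free = arrive + tau;                    // transporter returns
        m2_done = std::max(m2_done, arrive);
        for (double p : b.p2) m2_done += p;                 // process batch on M2
    }
    return m2_done;
}
```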
Abstract:
High-current-density-induced damage, such as electromigration in the on-chip interconnection/metallization of Al or Cu, has been the subject of intense study over the last 40 years. Recently, because of the increasing miniaturization of the electronic packaging that encloses the chip, electromigration and other high-current-density-induced damage are becoming a growing concern for off-chip interconnections, where low-melting-point solder joints are commonly used. A large number of publications have since appeared on the electromigration of solder joints; however, the wide spectrum of findings may confuse electronic companies and designers. A review of high-current-induced damage in solder joints is therefore timely. We have selected seven major phenomena to review in this paper: (i) electromigration (mass transfer due to electron bombardment), (ii) thermomigration (mass transfer due to a thermal gradient), (iii) enhanced intermetallic compound growth, (iv) enhanced current crowding, (v) enhanced under-bump metallisation dissolution, (vi) high Joule heating and (vii) solder melting. The damage mechanisms under high current stressing in tiny solder joints reviewed in this article are significant roadblocks to the further miniaturization of electronics. Without a thorough understanding of these failure mechanisms, gained through experiments coupled with mathematical modeling work, further miniaturization in electronics will be jeopardized.
Abstract:
Single machine scheduling problems are considered in which the processing times of the jobs depend on their positions in the schedule, and the due dates are assigned either according to the CON rule (a due date common to all jobs is chosen) or according to the SLK rule (the due dates are computed by increasing the actual processing time of each job by a slack common to all jobs). Polynomial-time dynamic programming algorithms are proposed for the problems with objective functions that include the cost of assigning the due dates, the total cost of discarded jobs (which are not scheduled) and, possibly, the total earliness of the scheduled jobs.
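A small sketch of how the CON objective can be evaluated for a fixed choice of scheduled jobs, discarded jobs and common due date; the linear due-date assignment cost and the earliness term max(0, d - C_j) are assumptions made here for illustration, not necessarily the exact cost structure of the paper.

```cpp
#include <algorithm>
#include <vector>

// Evaluates the CON objective for a fixed decision:
//   p[r]            actual (position-dependent) processing time of the job
//                   scheduled in position r, given in schedule order
//   rejection_cost  costs of the discarded (unscheduled) jobs
//   d               the common (CON) due date
//   alpha           per-unit cost of assigning the due date (assumed linear)
double con_objective(const std::vector<double>& p,
                     const std::vector<double>& rejection_cost,
                     double d, double alpha) {
    double cost = alpha * d;                      // due-date assignment cost
    for (double rc : rejection_cost) cost += rc;  // total cost of discarded jobs
    double completion = 0.0;
    for (double pr : p) {                         // total earliness of scheduled jobs
        completion += pr;
        cost += std::max(0.0, d - completion);
    }
    return cost;
}
```

The dynamic programming algorithms in the paper search over such decisions efficiently; this routine only scores one of them.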
Abstract:
We consider a problem of scheduling jobs on m parallel machines. The machines are dedicated, i.e., for each job the processing machine is known in advance. We mainly concentrate on the model in which at any time there is one unit of an additional resource. Any job may be assigned the resource and this reduces its processing time. A job that is given the resource uses it at each time of its processing. No two jobs are allowed to use the resource simultaneously. The objective is to minimize the makespan. We prove that the two-machine problem is NP-hard in the ordinary sense, describe a pseudopolynomial dynamic programming algorithm and convert it into an FPTAS. For the problem with an arbitrary number of machines we present an algorithm with a worst-case ratio close to 3/2, and close to 3, if a job can be given several units of the resource. For the problem with a fixed number of machines we give a PTAS. Virtually all algorithms rely on a certain variant of the linear knapsack problem (maximization, minimization, multiple-choice, bicriteria). © 2008 Wiley Periodicals, Inc. Naval Research Logistics, 2008
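For context, a minimal sketch of the continuous ("linear") knapsack relaxation that such algorithms typically build on: sort items by value-to-weight ratio and fill the capacity greedily, so at most one item is taken fractionally. This is the textbook maximisation variant, not the specific multiple-choice or bicriteria variants used in the paper.

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

// Continuous (linear) knapsack: maximise total value under a weight capacity,
// allowing fractional items (all weights assumed positive). Greedy selection
// by value-to-weight ratio is optimal and leaves at most one fractional item.
double linear_knapsack(const std::vector<double>& value,
                       const std::vector<double>& weight,
                       double capacity) {
    std::vector<std::size_t> order(value.size());
    std::iota(order.begin(), order.end(), 0);
    std::sort(order.begin(), order.end(), [&](std::size_t a, std::size_t b) {
        return value[a] * weight[b] > value[b] * weight[a];  // compare ratios
    });
    double total = 0.0;
    for (std::size_t i : order) {
        if (capacity <= 0.0) break;
        double take = std::min(weight[i], capacity);  // last item may be fractional
        total += value[i] * (take / weight[i]);
        capacity -= take;
    }
    return total;
}
```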
Abstract:
Doctoral thesis, Informatics (Computer Science), Universidade de Lisboa, Faculdade de Ciências, 2015
Abstract:
Currently, the teaching-learning process in domains such as computer programming is characterized by extensive curricula and high student enrolment. This poses a great workload for faculty and teaching assistants responsible for the creation, delivery, and assessment of student exercises. The main goal of this chapter is to foster practice-based learning in complex domains. This objective is attained with an e-learning framework, called Ensemble, as a conceptual tool to organize and facilitate technical interoperability among services. The Ensemble framework is used in a specific domain: computer programming. Content issues are tackled with a standard format for describing programming exercises as learning objects. Communication is achieved by extending existing specifications for interoperation with several systems typically found in an e-learning environment. In order to evaluate the acceptability of the proposed solution, an Ensemble instance was validated in a classroom experiment, with encouraging results.
Abstract:
This thesis aims to survey the advantages and drawbacks of using the dynamic functional programming language Scheme for video game development. To do so, the method used is first based on a more theoretical approach: a study of the programming requirements of this type of development, together with a detailed description of the Scheme features relevant to video game development, is given to put the subject in context. A practical approach is then taken through the development of two video games of increasing complexity: Space Invaders and Lode Runner. The development of these games led to extending the Scheme language with several domain-specific languages and libraries, notably an object-oriented programming system and a coroutine system. The experience gained through developing these games is finally compared with that of other video game developers in the industry who have used Scheme to create commercial titles. In summary, using this language made it possible to reach a high level of abstraction that favours the modularity of the developed games without affecting their performance.
Abstract:
This thesis presents an implementation of lazy task creation intended for distributed-memory multiprocessor systems. It offers a subset of the functionality of the Message-Passing Interface and makes it possible to parallelize certain problems that are difficult to partition statically, thanks to a dynamic partitioning and load-balancing system. To this end, it builds on the Multilisp language, a dialect of Scheme oriented towards parallel processing, and implements on top of it an MPI-like interface enabling distributed multi-process computation. This system offers a language that is much richer and more expressive than C and considerably reduces the work required from the programmer to develop programs equivalent to their MPI counterparts. Finally, dynamic partitioning makes it possible to design programs that would be very complex to implement with MPI. Tests were run on a local 16-processor system and a 16-processor cluster; the implementation achieves good speed-ups compared with equivalent sequential programs, as well as acceptable performance relative to MPI. This thesis demonstrates that the use of futures as a dynamic partitioning technique is feasible on distributed-memory multiprocessors.
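As a shared-memory analogy in C++ rather than Multilisp (names and cutoffs here are illustrative, and this uses std::async, not the thesis's distributed MPI-like runtime), the sketch below shows the future-based partitioning idea: a recursive computation spawns a future for half of the work, computes the other half itself, and then forces the future, so tasks are created dynamically rather than partitioned statically.

```cpp
#include <cstddef>
#include <future>
#include <numeric>

// Recursive sum: one half of the range is delegated to a future (cf. Multilisp's
// `future`), the other half is computed locally, and the result is then forced
// (cf. `touch`). The depth cutoff bounds how many tasks are created.
double parallel_sum(const double* data, std::size_t n, int depth) {
    if (depth == 0 || n < 1024)
        return std::accumulate(data, data + n, 0.0);
    std::size_t half = n / 2;
    auto right = std::async(std::launch::async,
                            parallel_sum, data + half, n - half, depth - 1);
    double left = parallel_sum(data, half, depth - 1);
    return left + right.get();  // touch the future
}
```

In the thesis's setting the same future/touch structure is what lets the runtime migrate unstarted tasks across distributed-memory nodes for load balancing.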