208 results for Assembling (Electronic computers)
Abstract:
The factors that are driving the development and use of grids and grid computing, such as size, dynamic features, distribution and heterogeneity, are also pushing service quality issues to the forefront. These include performance, reliability and security. Although grid middleware can address some of these issues on a wider scale, it has also become imperative to ensure adequate service provision at the local level. Load sharing in clusters can contribute to the provision of a high-quality service by exploiting both static and dynamic information. This paper presents a load sharing scheme that can satisfy grid computing requirements. It follows a proactive, non-preemptive and distributed approach. Load information is gathered continuously, before it is needed, and a task is allocated to the most appropriate node for execution. Performance and reliability are enhanced by the decentralised nature of the scheme and the symmetric roles of the nodes. In addition, the scheme exhibits transparency characteristics that facilitate integration with the grid.
Abstract:
Existing election algorithms suffer from limited scalability. This limit stems from their communication design, which in turn stems from their fundamentally two-state behaviour. This paper presents a new election algorithm specifically designed to be highly scalable in broadcast networks whilst allowing any processing node to become coordinator with initially equal probability. To achieve this, careful attention has been paid to the communication design, and an additional state has been introduced. The design of the tri-state election algorithm has been motivated by the requirements analysis of a major research project to deliver robust, scalable distributed applications, including load sharing, in hostile computing environments in which it is common for processing nodes to be rebooted frequently without notice. The new election algorithm is based in part on a simple 'emergent' design. The science of emergence is of great relevance to developers of distributed applications because it describes how higher-level self-regulatory behaviour can arise from many participants following a small set of simple rules. The tri-state election algorithm is shown to have very low communication complexity, with the number of messages generated remaining loosely bounded regardless of scale for large systems; to be highly scalable, because nodes in the idle state do not transmit any messages; and, because of its self-organising characteristics, to be very stable.
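The abstract does not give the algorithm's details, but the following minimal C++ sketch illustrates the general shape of a tri-state election over a broadcast medium. The state names (Idle, Candidate, Coordinator), the slotted random backoff and the tie-break rule are all assumptions made for illustration, not the paper's design.

```cpp
// Single-process simulation of a hypothetical tri-state election.
// Idle nodes stay silent, which is what keeps message counts low at scale.
#include <cstdio>
#include <random>
#include <vector>

enum class State { Idle, Candidate, Coordinator };

struct Node {
    int id;
    State state = State::Idle;
    int backoff = 0;  // slots to wait before broadcasting a claim
};

int main() {
    std::mt19937 rng(42);
    std::uniform_int_distribution<int> delay(1, 8);

    std::vector<Node> nodes(16);
    for (int i = 0; i < 16; ++i) nodes[i].id = i;

    // Coordinator failure detected: every node independently becomes a
    // candidate with an equal-probability random backoff, so any node can
    // win with initially equal probability.
    for (auto& n : nodes) {
        n.state = State::Candidate;
        n.backoff = delay(rng);
    }

    bool elected = false;
    for (int slot = 0; !elected; ++slot) {
        for (auto& n : nodes) {
            if (n.state != State::Candidate) continue;
            if (--n.backoff == 0) {
                // First node to time out broadcasts a claim; the other
                // candidates hear it and silently fall back to Idle.
                // (Same-slot ties are broken by iteration order here,
                // a simplification of a real broadcast medium.)
                n.state = State::Coordinator;
                std::printf("node %d wins in slot %d\n", n.id, slot);
                for (auto& m : nodes)
                    if (m.id != n.id) m.state = State::Idle;
                elected = true;
                break;
            }
        }
    }
}
```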
Abstract:
This paper presents a proactive approach to load sharing and describes the architecture of a scheme, Concert, based on this approach. A proactive approach is characterized by a shift of emphasis from reacting to load imbalance to avoiding its occurrence. In contrast, in a reactive load sharing scheme, activity is triggered when a processing node is either overloaded or underloaded. The main drawback of this approach is that a load imbalance is allowed to develop before costly corrective action is taken. Concert is a load sharing scheme for loosely-coupled distributed systems. Under this scheme, load and task behaviour information is collected and cached in advance of when it is needed. Concert uses Linux as a platform for development. Implemented partially in kernel space and partially in user space, it achieves transparency to users and applications whilst keeping the extent of kernel modifications to a minimum. Non-preemptive task transfers are used exclusively, motivated by lower complexity, lower overheads and faster transfers. The goal is to minimize the average response time of tasks. Concert is compared with other schemes by considering the level of transparency it provides with respect to users, tasks and the underlying operating system.
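As a rough illustration of the proactive idea (load information cached ahead of time, consulted cheaply at task-submission time, with a one-off non-preemptive placement decision), consider the sketch below. The load metric and field names are assumptions for illustration, not Concert's actual interfaces.

```cpp
// Placement from a cached load table: the cache is assumed to be refreshed
// in the background, so selection at submission time is a simple lookup.
#include <algorithm>
#include <cstdio>
#include <vector>

struct NodeLoad {
    int node_id;
    double run_queue_len;  // cached periodically, not queried on demand
};

// Non-preemptive placement: the destination is chosen once, before the
// task starts, and the task is never migrated afterwards.
int pick_node(const std::vector<NodeLoad>& cache) {
    auto it = std::min_element(cache.begin(), cache.end(),
        [](const NodeLoad& a, const NodeLoad& b) {
            return a.run_queue_len < b.run_queue_len;
        });
    return it->node_id;
}

int main() {
    std::vector<NodeLoad> cache = {{0, 2.5}, {1, 0.5}, {2, 1.75}};
    std::printf("allocate task to node %d\n", pick_node(cache));
}
```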
Abstract:
The aim of this paper is to develop a mathematical model with the ability to predict particle degradation during dilute-phase pneumatic conveying. A numerical procedure, based on a matrix representation of degradation processes, is presented to determine the particle impact degradation propensity from a small number of particle single impact tests carried out in a newly designed laboratory-scale degradation tester. A complete model of particle degradation during dilute-phase pneumatic conveying is then described, in which the calculation of degradation propensity is coupled with a flow model of the solids and gas phases in the pipeline. Numerical results are presented for degradation of granulated sugar in an industrial-scale pneumatic conveyor.
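A matrix representation of this kind can be pictured as repeated matrix-vector products: a breakage matrix B gives the fraction of mass moving from each size class to each finer class per impact, so successive impacts along the pipeline are successive applications of B. The 3-class matrix and mass fractions below are invented illustrative numbers, not the paper's measured values.

```cpp
// Applying a breakage matrix to a particle size distribution.
#include <cstdio>
#include <vector>

using Vec = std::vector<double>;
using Mat = std::vector<Vec>;

Vec apply_breakage(const Mat& B, const Vec& dist) {
    Vec out(dist.size(), 0.0);
    for (size_t i = 0; i < B.size(); ++i)
        for (size_t j = 0; j < dist.size(); ++j)
            out[i] += B[i][j] * dist[j];  // mass moving from class j to i
    return out;
}

int main() {
    // Columns sum to 1 (mass conserved); breakage only moves mass downward.
    Mat B = {{0.90, 0.00, 0.00},   // coarse stays coarse
             {0.08, 0.95, 0.00},   // coarse/medium -> medium
             {0.02, 0.05, 1.00}};  // everything else -> fines
    Vec dist = {0.7, 0.2, 0.1};    // mass fractions: coarse, medium, fine

    for (int impact = 1; impact <= 3; ++impact) {
        dist = apply_breakage(B, dist);
        std::printf("after impact %d: %.3f %.3f %.3f\n",
                    impact, dist[0], dist[1], dist[2]);
    }
}
```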
Abstract:
The scheduling problem of minimizing the makespan for m parallel dedicated machines under single resource constraints is considered. The complexity status of several variants of the problem is established. Heuristic algorithms employing the so-called group technology approach are presented and their worst-case behavior is examined. Finally, a polynomial-time approximation scheme is presented for the problem with a fixed number of machines.
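To make the setting concrete, the sketch below builds a feasible schedule for dedicated machines sharing a single resource and computes its makespan. The greedy serialisation of resource-using jobs in a fixed order is a simplification for illustration, not the group technology heuristics analysed in the paper.

```cpp
// Each job is preassigned to its dedicated machine; jobs flagged as
// needing the single shared resource may not overlap in time.
#include <algorithm>
#include <cstdio>
#include <vector>

struct Job { int machine; double p; bool needs_resource; };

double makespan(const std::vector<Job>& jobs, int m) {
    std::vector<double> free_at(m, 0.0);  // when each machine is next free
    double resource_free = 0.0;           // when the shared resource is free
    double best = 0.0;
    for (const Job& j : jobs) {
        double start = free_at[j.machine];
        if (j.needs_resource) start = std::max(start, resource_free);
        double end = start + j.p;
        free_at[j.machine] = end;
        if (j.needs_resource) resource_free = end;
        best = std::max(best, end);
    }
    return best;
}

int main() {
    std::vector<Job> jobs = {
        {0, 3.0, true}, {0, 2.0, false},
        {1, 4.0, true}, {1, 1.0, false},
        {2, 2.5, true},
    };
    std::printf("makespan: %.1f\n", makespan(jobs, 3));  // prints 9.5
}
```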
Abstract:
Based upon relevant literature, this study investigated the assessment policy and practices for the BSc (Hons) Computing Science programme at the University of Greenwich (UOG), contextualising these in terms of broad social and educational purposes. It discusses assessment in general, and then proceeds to give a critical evaluation of the assessment policy and practices at the UOG. Although this is a single case study, because many of the features of the programme are generic to other programmes and institutions, it is of wider value and has further implications. The study was concluded in the summer of 2002. It concludes that, overall, the programme's assessment policy and practices are well considered in terms of broad social and educational purposes, although it identifies and outlines several possible improvements, as well as raising some major issues still to be addressed that go beyond assessment practices.
Abstract:
This poster describes a "real world" example of the teaching of Human-Computer Interaction at the final level of a Computer Science degree. It highlights many of the problems of the ever-expanding HCI domain and the consequent issues of what to teach and why. The poster describes the conception and development of a new HCI course, its historical background, the justification for decisions made, lessons learnt from its implementation, and questions arising that are yet to be addressed. For example, should HCI be taught as a course in its own right or as a component of another course? At what level is the teaching of HCI appropriate, and how is teaching influenced by industry? It considers suitable learning pedagogies as well as the demands and the contribution of industry. The experiences presented will no doubt be familiar to many HCI educators. Whilst the poster raises more questions than it answers, the resolution of some questions will hopefully be achieved by the workshop.
Abstract:
Computer equipment, once viewed as leading edge, is quickly condemned as obsolete and banished to basement store rooms or rubbish bins. The magpie instincts of some of the academics and technicians at the University of Greenwich, London, preserved some such relics in cluttered offices and garages, to the dismay of colleagues and partners. When the University moved into its new campus in the historic buildings of the Old Royal Naval College in the centre of Greenwich, corridor space in King William Court provided an opportunity to display some of this equipment so that students could see these objects and gain a more vivid appreciation of their subject's history.
Abstract:
Natural distributed systems are adaptive, scalable and fault-tolerant. Emergence science describes how higher-level self-regulatory behaviour arises in natural systems from many participants following simple rule sets. Emergence advocates simple communication models, autonomy and independence, enhancing robustness and self-stabilization. High-quality distributed applications such as autonomic systems must satisfy the appropriate non-functional requirements, which include scalability, efficiency, robustness, low latency and stability. However, the traditional design of distributed applications, especially in terms of the communication strategies employed, can introduce compromises between these characteristics. This paper discusses ways in which emergence science can be applied to distributed computing, avoiding some of the compromises associated with traditionally designed applications. To demonstrate the effectiveness of this paradigm, an emergent election algorithm is described and its performance evaluated. The design incorporates nondeterministic behaviour. The resulting algorithm has very low communication complexity, and is simultaneously very stable, scalable and robust.
Abstract:
This chapter discusses the code parallelization environment, in which a number of tools that address the main tasks, such as code parallelization, debugging, and optimization, are available. The parallelization tools include ParaWise and CAPO, which enable the near-automatic parallelization of real-world scientific application codes for shared and distributed memory-based parallel systems. The chapter discusses the use of ParaWise and CAPO to transform the original serial code into an equivalent parallel code that contains appropriate OpenMP directives. Additionally, as user involvement can introduce errors, a relative debugging tool (P2d2) is also available and can be used to perform near-automatic relative debugging of an OpenMP program that has been parallelized either using the tools or manually. For these tools to be effective in parallelizing a range of applications, a high-quality, fully interprocedural dependence analysis, together with user interaction, is vital to the generation of efficient parallel code and to the optimization of the backtracking and speculation process used in relative debugging. Results from parallelized NASA codes are discussed and show the benefits of using the environment.
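The kind of output such tools produce can be pictured with a small example: a serial loop nest annotated with an OpenMP work-sharing directive. The stencil loop below is a generic illustration of that transformation, not one of the NASA codes discussed in the chapter.

```cpp
// A loop nest annotated for OpenMP; compile with -fopenmp (or equivalent).
// Without OpenMP the pragma is ignored and the code still runs serially.
#include <cstdio>
#include <vector>

int main() {
    const int n = 1024;
    std::vector<double> a(n * n, 1.0), b(n * n, 0.0);

    // Each i-iteration writes a disjoint row of b, so the loop carries no
    // dependence and its iterations can be distributed across threads --
    // the kind of fact an interprocedural dependence analysis must prove.
    #pragma omp parallel for
    for (int i = 1; i < n - 1; ++i)
        for (int j = 1; j < n - 1; ++j)
            b[i * n + j] = 0.25 * (a[(i - 1) * n + j] + a[(i + 1) * n + j] +
                                   a[i * n + j - 1] + a[i * n + j + 1]);

    std::printf("b[center] = %f\n", b[(n / 2) * n + n / 2]);
}
```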
Abstract:
Hybrid OECBs (Opto-Electrical Circuit Boards) are expected to make a significant impact in the telecom switch arena within the next five years, creating optical backplanes with high-speed point-to-point optical interconnects. The critical aspect in the manufacture of the optical backplane is the successful coupling between the VCSEL (Vertical Cavity Surface Emitting Laser) device and the embedded waveguide in the OECB. Optical performance will be affected by CTE mismatch in the material properties and by manufacturing tolerances. This paper will discuss results from a multidisciplinary research project involving both experimentation and modelling. Key process parameters are being investigated using Design of Experiments and Finite Element Modelling. Simulations have been undertaken that predict the temperature in the VCSEL during normal operation, and the subsequent misalignment that this imposes. The results from the thermomechanical analysis are being used with optimisation software and the experimental DOE (Design of Experiments) to identify packaging parameters that minimise misalignment. These results are also imported into an optical model which solves for optical energy and attenuation from the VCSEL aperture into, and then through, the waveguide. Results from the thermomechanical and optical models will be discussed, as will the experimental results from the DOE.
Abstract:
Cu column bumping is a novel flip chip packaging technique that allows Cu columns to be bonded directly to the dies. It eliminates the under-bump metallurgy (UBM) formation step of the traditional flip chip manufacturing process. This bumping technique has the potential benefits of simplifying the flip chip manufacturing process, increasing productivity and increasing the I/O counts. In this paper, a study of the reliability of Cu column bumped flip chips is presented. Computer modelling methods have been used to predict the shape of solder joints and the response of flip chips to cyclic thermal-mechanical loading. The accumulated plastic strain energy at the corner solder joints has been used as an indicator of solder joint reliability. Models with a wide range of design parameters have been compared for their reliability. The design parameters that have been investigated are the copper column height and radius, PCB pad radius, solder volume and Cu column wetting height. The relative importance ranking of these parameters has been obtained. The lead-free solder material 96.5Sn3.5Ag has been used in this modelling work.