21 results for training load
Abstract:
Computer-based mathematical models describing the aircraft evacuation process have a vital role to play in the design and development of safer aircraft, in the implementation of safer and more rigorous certification criteria, in cabin crew training and in post mortem accident investigation. As the risk of personal injury and the costs involved in performing large-scale evacuation experiments for the next generation 'Ultra High Capacity Aircraft' (UHCA) are expected to be high, the development and use of these evacuation modelling tools may become essential if these aircraft are to prove a viable reality. In this paper the capabilities and limitations of the airEXODUS evacuation model are described. Its successful application to the prediction of a recent certification trial, prior to the actual trial taking place, is described. Also described is a newly defined parameter known as OPS which can be used as a measure of evacuation trial optimality. In addition, sample evacuation simulations in the presence of fire atmospheres are described. Finally, the data requirements of the airEXODUS evacuation model are discussed, along with several projects currently underway at the University of Greenwich designed to obtain this data. Included in this discussion is a description of the AASK (Aircraft Accident Statistics and Knowledge) database, which contains detailed information from aircraft accident survivors.
Abstract:
Parallel computing is now widely used in numerical simulation, particularly for application codes based on finite difference and finite element methods. A popular and successful technique employed to parallelize such codes onto large distributed memory systems is to partition the mesh into sub-domains that are then allocated to processors. The code then executes in parallel, using the SPMD methodology, with message passing for inter-processor interactions. In order to improve the parallel efficiency of an imbalanced structured mesh CFD code, a new dynamic load balancing (DLB) strategy has been developed in which the processor partition range limits of just one of the partitioned dimensions use non-coincidental limits, as opposed to coincidental limits. The ‘local’ partition limit change allows greater flexibility in obtaining a balanced load distribution, as the workload increase, or decrease, on a processor is no longer restricted by the ‘global’ (coincidental) limit change. The automatic implementation of this generic DLB strategy within an existing parallel code is presented in this chapter, along with some preliminary results.
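As a rough sketch of the 'non-coincidental limit' idea summarised above (not the authors' implementation; the function names and the workload array are illustrative assumptions), the fragment below cuts a 2-D workload array into strips with shared row limits and then chooses the column limits independently within each strip, so the column cuts may differ from strip to strip:

```python
import numpy as np

def balanced_cuts(weights, parts):
    """Split a 1-D workload profile into `parts` contiguous ranges of near-equal work."""
    cum = np.cumsum(weights)
    targets = cum[-1] * np.arange(1, parts) / parts
    cuts = np.searchsorted(cum, targets) + 1
    return [0, *cuts.tolist(), len(weights)]

def partition(cell_work, p_rows, p_cols):
    """Row limits are global (coincidental); column limits are local to each strip."""
    row_work = cell_work.sum(axis=1)               # work per mesh row
    row_limits = balanced_cuts(row_work, p_rows)   # coincidental limits (dimension 0)
    col_limits = []
    for r in range(p_rows):                        # non-coincidental limits (dimension 1)
        strip = cell_work[row_limits[r]:row_limits[r + 1]]
        col_limits.append(balanced_cuts(strip.sum(axis=0), p_cols))
    return row_limits, col_limits

work = np.random.rand(64, 64) + np.linspace(0, 2, 64)[:, None]  # imbalanced workload
print(partition(work, p_rows=4, p_cols=4))
```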
Abstract:
We present a dynamic distributed load balancing algorithm for parallel, adaptive Finite Element simulations in which we use preconditioned Conjugate Gradient solvers based on domain-decomposition. The load balancing is designed to maintain good partition aspect ratio and we show that cut size is not always the appropriate measure in load balancing. Furthermore, we attempt to answer the question why the aspect ratio of partitions plays an important role for certain solvers. We define and rate different kinds of aspect ratio and present a new center-based partitioning method of calculating the initial distribution which implicitly optimizes this measure. During the adaptive simulation, the load balancer calculates a balancing flow using different versions of the diffusion algorithm and a variant of breadth first search. Elements to be migrated are chosen according to a cost function aiming at the optimization of subdomain shapes. Experimental results for Bramble's preconditioner and comparisons to state-of-the-art load balancers show the benefits of the construction.
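The balancing-flow step described above can be illustrated with a minimal first-order diffusion sketch on the processor graph (illustrative only; the paper also uses other diffusion variants and a breadth-first-search variant, and chooses the elements to migrate with a shape-aware cost function, which is not modelled here):

```python
def diffusion_flow(adjacency, load, alpha=0.25, iters=200):
    """adjacency: dict node -> list of neighbours; load: dict node -> workload.
    Returns the accumulated flow per directed edge (u, v): positive means u sends to v."""
    load = dict(load)
    flow = {}
    for _ in range(iters):
        delta = {}
        for u, nbrs in adjacency.items():
            for v in nbrs:
                f = alpha * (load[u] - load[v])      # exchange proportional to the difference
                delta[u] = delta.get(u, 0.0) - f
                flow[(u, v)] = flow.get((u, v), 0.0) + f
        for u, d in delta.items():
            load[u] += d
    return flow

graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}       # a chain of four processors
flows = diffusion_flow(graph, {0: 40, 1: 10, 2: 10, 3: 20})
print({e: round(f, 1) for e, f in flows.items() if f > 0})
```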
Abstract:
The DRAMA library, developed within the European Commission funded (ESPRIT) project DRAMA, supports dynamic load-balancing for parallel (message-passing) mesh-based applications. The target applications are those with dynamic and solution-adaptive features. The focus within the DRAMA project was on finite element simulation codes for structural mechanics. An introduction to the DRAMA library will illustrate that the very general cost model and the interface designed specifically for application requirements provide simplified and effective access to a range of parallel partitioners. The main body of the paper will demonstrate the ability to provide dynamic load-balancing for parallel FEM problems that include: adaptive meshing, re-meshing, the need for multi-phase partitioning.
Abstract:
This paper discusses load-balancing issues when using heterogeneous cluster computers. There is a growing trend towards the use of commodity microprocessor clusters. Although today's microprocessors have reached a theoretical peak performance in the range of one GFLOP/s, heterogeneous clusters of commodity processors are amongst the most challenging parallel systems to programme efficiently. We will outline an approach for optimising the performance of parallel mesh-based applications for heterogeneous cluster computers and present case studies with the GeoFEM code. The focus is on application cost monitoring and load balancing using the DRAMA library.
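A minimal illustration of the cost-monitoring idea (not GeoFEM or DRAMA code; the names and numbers are made up): measure how long the last time step took on each node, infer a relative throughput, and size the next partition so that faster nodes receive more mesh elements.

```python
def target_shares(elements_owned, step_seconds):
    """elements_owned[i] / step_seconds[i] estimates node i's throughput; return
    the fraction of the mesh each node should own after rebalancing."""
    speeds = [n / t for n, t in zip(elements_owned, step_seconds)]
    total = sum(speeds)
    return [s / total for s in speeds]

# Three heterogeneous nodes: the third processes its elements roughly twice as fast.
shares = target_shares(elements_owned=[1000, 1000, 1000], step_seconds=[2.0, 2.0, 1.0])
print([round(s, 2) for s in shares])   # -> [0.25, 0.25, 0.5]
```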
Abstract:
The original concept was to create a 'simulation' which would provide trainee teachers specializing in Information and Communications Technology (ICT) with the opportunity to explore a primary school environment. Within the simulation, factors affecting the development and implementation of ICT would be modelled so that trainees would be able to develop the skills, knowledge and understanding necessary to identify appropriate strategies to overcome the limitations. To this end, we have developed Allsorts Primary - the prototype of a simulated interactive environment representing a typical primary school.
Abstract:
In this Chapter we discuss the load-balancing issues arising in parallel mesh based computational mechanics codes for which the processor loading changes during the run. We briefly touch on geometric repartitioning ideas and then focus on different ways of using a graph both to solve the load-balancing problem and the optimisation problem, both locally and globally. We also briefly discuss whether repartitioning is always valid. Sample illustrative results are presented and we conclude that repartitioning is an attractive option if the load changes are not too dramatic and that there is a certain trade-off between partition quality and volume of data that the underlying application needs to migrate.
Abstract:
The factors that are driving the development and use of grids and grid computing, such as size, dynamic features, distribution and heterogeneity, are also pushing to the forefront service quality issues. These include performance, reliability and security. Although grid middleware can address some of these issues on a wider scale, it has also become imperative to ensure adequate service provision at local level. Load sharing in clusters can contribute to the provision of a high quality service, by exploiting both static and dynamic information. This paper presents a load sharing scheme that can satisfy grid computing requirements. It follows a proactive, non-preemptive and distributed approach. Load information is gathered continuously before it is needed, and a task is allocated to the most appropriate node for execution. Performance and reliability are enhanced by the decentralised nature of the scheme and the symmetric roles of the nodes. In addition, the scheme exhibits transparency characteristics that facilitate integration with the grid.
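A small illustrative sketch of the proactive allocation idea, assuming a cached load table that a background monitor refreshes before any task arrives (the class and method names are hypothetical, not the scheme's actual interface):

```python
import threading, time, random

class LoadCache:
    def __init__(self, nodes):
        self.loads = {n: 0.0 for n in nodes}
        self.lock = threading.Lock()

    def refresh(self, node, load):
        with self.lock:                        # updated proactively, before any task arrives
            self.loads[node] = load

    def pick_node(self):
        with self.lock:                        # no extra load queries on the critical path
            return min(self.loads, key=self.loads.get)

cache = LoadCache(["node-a", "node-b", "node-c"])

def monitor():                                 # background collector; one per node in practice
    while True:
        for n in cache.loads:
            cache.refresh(n, random.random())  # stand-in for a real load measurement
        time.sleep(1.0)

threading.Thread(target=monitor, daemon=True).start()
time.sleep(1.5)
print("allocate task to", cache.pick_node())
```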
Abstract:
This paper presents a proactive approach to load sharing and describes the architecture of a scheme, Concert, based on this approach. A proactive approach is characterized by a shift of emphasis from reacting to load imbalance to avoiding its occurrence. In contrast, in a reactive load sharing scheme, activity is triggered when a processing node is either overloaded or underloaded. The main drawback of this approach is that a load imbalance is allowed to develop before costly corrective action is taken. Concert is a load sharing scheme for loosely-coupled distributed systems. Under this scheme, load and task behaviour information is collected and cached in advance of when it is needed. Concert uses Linux as a platform for development. Implemented partially in kernel space and partially in user space, it achieves transparency to users and applications whilst keeping the extent of kernel modifications to a minimum. Non-preemptive task transfers are used exclusively, motivated by lower complexity, lower overheads and faster transfers. The goal is to minimize the average response-time of tasks. Concert is compared with other schemes by considering the level of transparency it provides with respect to users, tasks and the underlying operating system.
Abstract:
Parallel processing techniques have been used in the past to provide high performance computing resources for activities such as Computational Fluid Dynamics. This is normally achieved using specialized hardware and software, the expense of which would be difficult to justify for many fire engineering practices. In this paper, we demonstrate how typical office-based PCs attached to a local area network have the potential to offer the benefits of parallel processing with minimal costs associated with the purchase of additional hardware or software. A dynamic load balancing scheme was devised to allow the effective use of the software on heterogeneous PC networks. This scheme ensured that the impact between the parallel processing task and other computer users on the network was minimized thus allowing practical parallel processing within a conventional office environment.
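As a hedged illustration of the idea of minimising the impact on other office users (not the paper's scheme; the speeds and spare fractions are invented), each PC could be given a slice of the mesh in proportion to the capacity it currently has spare:

```python
def cells_per_pc(total_cells, relative_speed, spare_fraction):
    """Size each PC's slice by speed multiplied by the fraction of CPU not in use."""
    capacity = [s * f for s, f in zip(relative_speed, spare_fraction)]
    total = sum(capacity)
    return [round(total_cells * c / total) for c in capacity]

# Three office PCs: the second is busy with its owner's work, so it gets a small slice.
print(cells_per_pc(100_000,
                   relative_speed=[1.0, 1.2, 0.8],
                   spare_fraction=[0.9, 0.2, 0.7]))
```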
Abstract:
This paper gives a general overview of some of the factors involved in the training and knowledge of learning disabilities staff relating to mental health. Early indications from research have shown that training may be effective as a way of addressing these problems, but further research and clear guidance on best practice in implementing staff training are needed in this important area.
Abstract:
This paper discusses the development of provision for the training of teachers in English further and technical education from 1945 to 1956. While these years saw little growth in this provision, they were formative in that the institutional and curricular patterns of teacher training for the diverse fields of technical and further education were developed at this time. The work of the three national centres in Bolton, London and Huddersfield, during the period of the Emergency Training Scheme (ETS) is summarised with particular reference to the influence of the Ministry of Education's conditions for ETS colleges and courses. With the ending of the ETS in 1951 the three centres were given permanent status as teacher training colleges which in turn brought them into association with their local universities as constituent colleges of their Area Training Organisations. The consequences of this transfer to the universities for the curriculum and assessment of technical teacher training and the 'policy dichotomy' of teacher training for secondary and technical education are examined.