224 results for Parallel processing (Electronic computers) - Research


Relevance: 100.00%

Publisher:

Abstract:

This paper uses a case study approach to consider the effectiveness of the electronic survey as a research tool for measuring the learner voice about experiences of e-learning in a particular institutional case. Two large-scale electronic surveys were carried out for the Student Experience of e-Learning (SEEL) project at the University of Greenwich in 2007 and 2008, funded by the UK Higher Education Academy (HEA). The paper considers this case to argue that, although the electronic web-based survey is a convenient method of quantitative and qualitative data collection that enables higher education institutions to capture swiftly the views of large numbers of students regarding experiences of e-learning, electronic survey research is best combined with other methods of in-depth qualitative data collection for more robust analysis. The advantages and disadvantages of the electronic survey as a research method for capturing student experiences of e-learning are the focus of analysis in this short paper, which reports an overview of large-scale data collection (1,000+ responses) from two electronic surveys administered to students using SurveyMonkey as a web-based survey tool as part of the SEEL research project. Advantages of web-based electronic survey design include flexibility, ease of design, a high degree of designer control, convenience, low cost, data security, ease of access and a guarantee of confidentiality, combined with the researcher's ability to identify users through email addresses. Disadvantages include the self-selecting nature of web-enabled respondent participation, which tends to skew data collection towards students who respond readily to email invitations. The relative inadequacy of electronic surveys for capturing in-depth qualitative views of students is discussed with regard to prior recommendations from the JISC-funded Learners' Experiences of e-Learning (LEX) project, in the light of results from the SEEL in-depth interviews with students. The paper reviews the literature on web-based and email survey design, summing up the relative advantages and disadvantages of electronic surveys as a tool for research into student experiences of e-learning, and concludes with a range of recommendations for designing future electronic surveys to capture the learner voice on e-learning, contributing to evidence-based learning technology research in higher education.

Relevance: 100.00%

Publisher:

Abstract:

A new parallel approach for solving a pentadiagonal linear system is presented. The parallel partition method for this system and the TW parallel partition method on a chain of P processors are introduced and discussed. The result of this algorithm is a reduced pentadiagonal linear system of order P − 2, compared with a system of order 2P − 2 for the parallel partition method. More importantly, the new method requires only half as many communication start-ups as the parallel partition method (and other standard parallel methods) and is therefore a far more efficient parallel algorithm.
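
For orientation, partition methods of this kind split the system into per-processor blocks, eliminate locally, and exchange only the interface unknowns, which then form the reduced system of order P − 2 mentioned above. Below is a minimal sketch of the sequential pentadiagonal elimination each processor might apply to its local block; the array layout, the no-pivoting assumption (e.g. diagonal dominance) and the test values are illustrative, not taken from the paper.

```c
/* Sequential pentadiagonal solve by banded Gaussian elimination.
 * Row i holds e[i] (col i-2), c[i] (col i-1), d[i] (col i),
 * a[i] (col i+1), b[i] (col i+2). No pivoting: assumes the system
 * is diagonally dominant. Requires n >= 3. Solution overwrites f. */
#include <stdio.h>

void penta_solve(int n, double *e, double *c, double *d,
                 double *a, double *b, double *f)
{
    for (int i = 1; i < n; i++) {
        double m = c[i] / d[i-1];            /* eliminate c[i] */
        d[i] -= m * a[i-1];
        if (i < n-1) a[i] -= m * b[i-1];
        f[i] -= m * f[i-1];
        if (i < n-1) {
            double m2 = e[i+1] / d[i-1];     /* eliminate e[i+1] */
            c[i+1] -= m2 * a[i-1];
            d[i+1] -= m2 * b[i-1];
            f[i+1] -= m2 * f[i-1];
        }
    }
    f[n-1] /= d[n-1];                        /* back substitution */
    f[n-2] = (f[n-2] - a[n-2] * f[n-1]) / d[n-2];
    for (int i = n-3; i >= 0; i--)
        f[i] = (f[i] - a[i] * f[i+1] - b[i] * f[i+2]) / d[i];
}

int main(void)
{
    /* 6x6 test: 4 on the diagonal, 1 on all four off-diagonals;
     * the right-hand side is chosen so the exact solution is all ones */
    enum { N = 6 };
    double e[N] = {0, 0, 1, 1, 1, 1}, c[N] = {0, 1, 1, 1, 1, 1},
           d[N] = {4, 4, 4, 4, 4, 4}, a[N] = {1, 1, 1, 1, 1, 0},
           b[N] = {1, 1, 1, 1, 0, 0}, f[N] = {6, 7, 8, 8, 7, 6};
    penta_solve(N, e, c, d, a, b, f);
    for (int i = 0; i < N; i++)
        printf("x[%d] = %f\n", i, f[i]);
    return 0;
}
```

The communication saving claimed above comes from how the interface equations are gathered and solved across the processor chain, not from this local elimination step.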

Relevance: 100.00%

Publisher:

Abstract:

Embedded software systems in vehicles are of rapidly increasing commercial importance for the automotive industry. Current systems employ a static run-time environment, owing to the difficulty and cost involved in developing dynamic systems in a high-integrity embedded control context. A dynamic system, in the sense of a dynamically changeable system configuration, would greatly increase the flexibility of the offered functionality and enable customised software configurations for individual vehicles, adding customer value through plug-and-play capability and increasing quality through the inherent ability to adjust to changes in hardware and software. We envisage an automotive system containing a variety of components, from a multitude of organizations, not necessarily known at development time, that dynamically adapts its configuration to suit the run-time system constraints. This paper presents our vision for future automotive control systems, to be investigated in the EU research project DySCAS (Dynamically Self-Configuring Automotive Systems). We propose a self-configuring vehicular control system architecture with capabilities that include automatic discovery and inclusion of new devices, self-optimisation to make the best use of the available processing, storage and communication resources, self-diagnostics and, ultimately, self-healing. Such an architecture has benefits extending to reduced development and maintenance costs, improved passenger safety and comfort, and flexible owner customisation. Specifically, this paper addresses the following issues: the state of the art of embedded software systems in vehicles, emphasising the current limitations arising from fixed run-time configurations; and the benefits and challenges of dynamic configuration, which gives rise to opportunities for self-healing, self-optimisation, and the automatic inclusion of users' Consumer Electronics (CE) devices. Our proposal for a dynamically reconfigurable automotive software system platform is outlined, and a typical use case is presented to exemplify the benefits of the envisioned dynamic capabilities.

Relevance: 100.00%

Publisher:

Abstract:

The PHYSICA software was developed to enable multiphysics modelling, allowing for interaction between Computational Fluid Dynamics (CFD), Computational Solid Mechanics (CSM) and Computational Aeroacoustics (CAA). PHYSICA uses the finite volume method with 3-D unstructured meshes to enable the modelling of complex geometries. Many engineering applications involve significant computation times, which need to be reduced by means of faster solution methods or parallel and high-performance algorithms. It is well known that multigrid methods serve as a fast iterative scheme for linear and nonlinear diffusion problems. This paper attempts to address two major issues for this iterative solver: the parallelisation of multigrid methods and their application to time-dependent multiscale problems.
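
As background to why multigrid is attractive as the fast solver here, the sketch below shows a textbook V-cycle for a 1-D Poisson model problem with a weighted-Jacobi smoother. It is a generic illustration of the method, not code from PHYSICA; the grid size, smoother and transfer operators are assumptions.

```c
/* Textbook 1-D Poisson multigrid V-cycle: -u'' = f on (0,1), u(0)=u(1)=0.
 * Generic illustration only; n must be a power of two. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void smooth(double *u, const double *f, int n, double h)
{
    /* one weighted-Jacobi sweep (weight 2/3) */
    double *old = malloc((n + 1) * sizeof *old);
    memcpy(old, u, (n + 1) * sizeof *old);
    for (int i = 1; i < n; i++)
        u[i] = old[i] / 3.0 +
               (2.0 / 3.0) * 0.5 * (old[i-1] + old[i+1] + h * h * f[i]);
    free(old);
}

static void vcycle(double *u, const double *f, int n, double h)
{
    if (n == 2) {                      /* coarsest grid: one unknown */
        u[1] = 0.5 * h * h * f[1];
        return;
    }
    smooth(u, f, n, h);                /* pre-smoothing */

    int nc = n / 2;
    double *r  = calloc(n + 1,  sizeof *r);
    double *rc = calloc(nc + 1, sizeof *rc);
    double *ec = calloc(nc + 1, sizeof *ec);

    for (int i = 1; i < n; i++)        /* residual r = f - A u */
        r[i] = f[i] - (2*u[i] - u[i-1] - u[i+1]) / (h * h);
    for (int i = 1; i < nc; i++)       /* full-weighting restriction */
        rc[i] = 0.25 * (r[2*i-1] + 2*r[2*i] + r[2*i+1]);

    vcycle(ec, rc, nc, 2*h);           /* coarse-grid correction */

    for (int i = 1; i < nc; i++)       /* linear-interpolation prolongation */
        u[2*i] += ec[i];
    for (int i = 0; i < nc; i++)
        u[2*i+1] += 0.5 * (ec[i] + ec[i+1]);

    smooth(u, f, n, h);                /* post-smoothing */
    free(r); free(rc); free(ec);
}

int main(void)
{
    enum { N = 128 };
    double h = 1.0 / N, u[N + 1] = {0}, f[N + 1];
    for (int i = 0; i <= N; i++) f[i] = 1.0;   /* constant source term */
    for (int cycle = 0; cycle < 10; cycle++) {
        vcycle(u, f, N, h);
        double rmax = 0.0;
        for (int i = 1; i < N; i++) {
            double r = f[i] - (2*u[i] - u[i-1] - u[i+1]) / (h * h);
            if (r > rmax || -r > rmax) rmax = r > 0 ? r : -r;
        }
        printf("cycle %d: max residual %.3e\n", cycle + 1, rmax);
    }
    return 0;
}
```

Each V-cycle costs O(n) work and reduces the error by a grid-independent factor, which is what makes multigrid a fast iterative scheme for diffusion problems; distributing the loops and the increasingly small coarse grids across processors is the parallelisation issue the paper addresses.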

Relevance: 100.00%

Publisher:

Abstract:

This paper presents a proactive approach to load sharing and describes the architecture of a scheme, Concert, based on this approach. A proactive approach is characterized by a shift of emphasis from reacting to load imbalance to avoiding its occurrence. In contrast, in a reactive load sharing scheme, activity is triggered when a processing node is either overloaded or underloaded; the main drawback of the reactive approach is that a load imbalance is allowed to develop before costly corrective action is taken. Concert is a load sharing scheme for loosely coupled distributed systems. Under this scheme, load and task-behaviour information is collected and cached in advance of when it is needed. Concert is developed on Linux. Implemented partially in kernel space and partially in user space, it achieves transparency to users and applications whilst keeping the extent of kernel modifications to a minimum. Non-preemptive task transfers are used exclusively, motivated by lower complexity, lower overheads and faster transfers. The goal is to minimize the average response time of tasks. Concert is compared with other schemes by considering the level of transparency it provides with respect to users, tasks and the underlying operating system.
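
A minimal sketch of the proactive idea, not Concert's actual implementation: each node periodically broadcasts its load, every node caches its peers' last-reported loads, and a new task is placed using the cache alone, so no state-collection round trip happens on the critical path. The names, the staleness penalty and the node count below are illustrative assumptions.

```c
/* Toy proactive placement: pick a target node from cached load reports
 * instead of querying nodes when a task arrives (illustrative only). */
#include <stdio.h>
#include <time.h>

#define NODES 4
#define STALE_SECS 5          /* reports older than this are distrusted */

struct load_report {
    double load;              /* last reported run-queue length */
    time_t when;              /* time the report was cached */
};

static struct load_report cache[NODES];

/* called whenever a periodic load broadcast arrives from a peer */
void on_load_broadcast(int node, double load)
{
    cache[node].load = load;
    cache[node].when = time(NULL);
}

/* place a task using only cached information: no messages are sent,
 * so placement adds no communication latency to task start-up */
int place_task(void)
{
    int best = 0;
    double best_load = 1e9;
    time_t now = time(NULL);
    for (int n = 0; n < NODES; n++) {
        /* penalise stale entries rather than blocking to refresh them */
        double est = cache[n].load +
                     (now - cache[n].when > STALE_SECS ? 1.0 : 0.0);
        if (est < best_load) { best_load = est; best = n; }
    }
    return best;
}

int main(void)
{
    on_load_broadcast(0, 2.0);
    on_load_broadcast(1, 0.5);
    on_load_broadcast(2, 3.0);
    on_load_broadcast(3, 1.0);
    printf("task placed on node %d\n", place_task());  /* node 1 */
    return 0;
}
```

The design trade-off is that cached information can be stale: a proactive scheme accepts slightly inaccurate load estimates in exchange for keeping state collection off the critical path of task placement.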

Relevance: 100.00%

Publisher:

Abstract:

This chapter discusses a code parallelization environment in which a number of tools addressing the main tasks of code parallelization, debugging, and optimization are available. The parallelization tools include ParaWise and CAPO, which enable the near-automatic parallelization of real-world scientific application codes for shared- and distributed-memory parallel systems. The chapter discusses the use of ParaWise and CAPO to transform an original serial code into an equivalent parallel code containing appropriate OpenMP directives. Additionally, as user involvement can introduce errors, a relative debugging tool (P2d2) is also available that can perform near-automatic relative debugging of an OpenMP program parallelized either with the tools or manually. For these tools to be effective across a range of applications, a high-quality, fully interprocedural dependence analysis, together with user interaction, is vital both to the generation of efficient parallel code and to the optimization of the backtracking and speculation process used in relative debugging. Results for parallelized NASA codes are discussed and show the benefits of using the environment.
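
To illustrate the kind of transformation involved (a generic example, not actual ParaWise/CAPO output): once dependence analysis proves the iterations of a loop independent apart from a reduction, the loop can be annotated with OpenMP directives like those below.

```c
/* Generic illustration of directive-based parallelization (not tool
 * output). Compile with e.g.: cc -fopenmp example.c */
#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void)
{
    static double a[N], b[N];
    double sum = 0.0;
    for (int i = 0; i < N; i++) b[i] = i * 0.001;

    /* each iteration is independent except for the accumulation into
     * sum, which the reduction clause handles; i is private per thread */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++) {
        a[i] = 2.0 * b[i];
        sum += a[i];
    }
    printf("sum = %f (max threads: %d)\n", sum, omp_get_max_threads());
    return 0;
}
```

The hard part the tools automate is proving, interprocedurally, that no other dependences exist; inserting the directive itself is the easy final step.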

Relevance: 100.00%

Publisher:

Abstract:

A major percentage of the heat emitted from electronic packages can be extracted by air cooling, whether by natural or forced convection. The flow of air throughout an electronic system, and the heat it extracts, is highly dependent on the nature of the turbulence present in the flow field. This paper discusses results from an investigation into the accuracy of turbulence models in predicting air cooling for electronic packages and systems.

Relevance: 100.00%

Publisher:

Abstract:

Hybrid OECBs (opto-electrical circuit boards) are expected to make a significant impact in the telecom switch arena within the next five years, creating optical backplanes with high-speed point-to-point optical interconnects. The critical aspect in the manufacture of the optical backplane is the successful coupling between the VCSEL (vertical-cavity surface-emitting laser) device and the embedded waveguide in the OECB. Optical performance will be affected by CTE mismatch between the material properties and by manufacturing tolerances. This paper discusses results from a multidisciplinary research project involving both experimentation and modelling. Key process parameters are being investigated using Design of Experiments (DOE) and finite element modelling. Simulations have been undertaken that predict the temperature in the VCSEL during normal operation and the misalignment that this imposes. The results from the thermomechanical analysis are being used with optimisation software and the experimental DOE to identify packaging parameters that minimise misalignment. These results are also imported into an optical model which solves for the optical energy and attenuation from the VCSEL aperture into, and then through, the waveguide. Results from the thermomechanical and optical models are discussed, as are the experimental results from the DOE.
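
As first-order intuition for why thermally induced misalignment matters, a standard Gaussian overlap-integral result (not the paper's optical model, which propagates energy into and through the waveguide): for two identical Gaussian modes with mode-field radius w and lateral offset d, the coupling efficiency is exp(−d²/w²). The 6 µm radius below is an assumed illustrative value.

```c
/* Coupling loss vs. lateral misalignment for identical Gaussian modes
 * (standard overlap-integral result; illustrative values only). */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double w = 6.0;                      /* assumed mode-field radius, um */
    for (double d = 0.0; d <= 5.0; d += 1.0) {
        double eta = exp(-(d * d) / (w * w)); /* coupling efficiency */
        printf("offset %.0f um: efficiency %.3f, loss %.2f dB\n",
               d, eta, 10.0 * log10(1.0 / eta));
    }
    return 0;
}
```

Even on this simplified model, a few micrometres of offset costs a measurable fraction of a decibel, which is why the packaging parameters that control VCSEL-to-waveguide alignment are worth optimising.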

Relevance: 100.00%

Publisher:

Abstract:

Computational Fluid Dynamics (CFD) is gradually becoming a powerful and almost essential tool for the design, development and optimization of engineering applications. However, the mathematical modelling of erratic turbulent motion remains the key issue when tackling such flow phenomena. The reliability of CFD analysis depends heavily on the turbulence model employed, together with the wall functions implemented. In order to resolve the abrupt changes in turbulence energy and other parameters in near-wall regions, a particularly fine mesh is necessary, which inevitably increases the computer storage and run-time requirements. Turbulence modelling can be considered one of the three key elements in CFD; precise mathematical theories have evolved for the other two, grid generation and algorithm development. The principal objective of turbulence modelling is to provide computationally efficient procedures of sufficient accuracy to reproduce the main structures of three-dimensional fluid flows. The flow within an electronic system can be characterized as being in a transitional state, owing to the low velocities and relatively small dimensions encountered. This paper presents simulated CFD results from an investigation into the predictive capability of turbulence models when considering both fluid flow and heat transfer phenomena. A new two-layer hybrid k-ε/k-l turbulence model for electronics application areas is also presented, which has the advantage of being cheap in terms of the computational mesh required and economical with regard to run-time.
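
For context, two-layer models in the literature (e.g. Wolfshtein-type near-wall treatments) typically replace the ε-equation near the wall with a one-equation k-l model whose length scale is prescribed algebraically, switching to the outer k-ε model on a wall-distance turbulence Reynolds number. The generic form is sketched below; this is the standard literature formulation, not necessarily the exact model proposed in the paper.

```latex
% Generic two-layer near-wall treatment (standard literature form, not
% necessarily this paper's model): inner-region eddy viscosity from a
% one-equation k--l model with an algebraic length scale; the solver
% switches to the outer k--epsilon model where Re_y exceeds a threshold
% (often around Re_y ~ 200 in published formulations).
\[
  \nu_t^{\text{inner}} = C_\mu \sqrt{k}\,\ell_\mu,
  \qquad
  \ell_\mu = c_\ell\, y\left(1 - e^{-\mathrm{Re}_y/A_\mu}\right),
  \qquad
  \mathrm{Re}_y = \frac{\sqrt{k}\,y}{\nu}.
\]
```

The constants c_ℓ and A_μ, and the blending threshold, vary between formulations in the literature.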

Relevance: 100.00%

Publisher:

Abstract:

A simulation program has been developed to calculate the power spectral density of thin avalanche photodiodes (APDs), which are used in optical networks. The program extends the time-domain analysis of the dead-space multiplication model to compute the autocorrelation function of the APD impulse response. However, the computation requires a large amount of memory and is very time-consuming. We describe our experiences in parallelizing the code using both MPI and OpenMP. Several array-partitioning schemes and scheduling policies are implemented and tested. Our results show that the OpenMP code is scalable up to 64 processors on an SGI Origin 2000 machine and has small average errors.
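
A minimal sketch of one ingredient described above, OpenMP loop-level parallelism over the lags of an autocorrelation computation (the MPI variant is omitted); the shrinking amount of work per lag is why the choice of scheduling policy matters. This is a generic illustration, not the authors' code, and the sample count and synthetic data are assumptions.

```c
/* Autocorrelation of a sampled impulse response, parallelized over lags
 * with OpenMP (generic sketch, not the paper's code). Work per lag
 * shrinks with tau, so dynamic scheduling balances the load. */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define N 20000                      /* assumed number of samples */

int main(void)
{
    double *x = malloc(N * sizeof *x);
    double *r = malloc(N * sizeof *r);
    for (int i = 0; i < N; i++)      /* synthetic decaying "response" */
        x[i] = 1.0 / (1.0 + i);

    double t0 = omp_get_wtime();
    #pragma omp parallel for schedule(dynamic, 64)
    for (int tau = 0; tau < N; tau++) {
        double acc = 0.0;
        for (int t = 0; t + tau < N; t++)
            acc += x[t] * x[t + tau];
        r[tau] = acc;                /* R(tau) = sum_t x(t) x(t+tau) */
    }
    printf("R(0)=%g R(1)=%g  (%.3f s)\n", r[0], r[1],
           omp_get_wtime() - t0);
    free(x); free(r);
    return 0;
}
```

With a static schedule, the thread handling the smallest lags would do far more work than the thread handling the largest; chunked dynamic scheduling evens this out.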

Relevance: 100.00%

Publisher:

Abstract:

Despite the apparent simplicity of the OpenMP directive-based shared-memory programming model and the sophisticated dependence analysis and code generation capabilities of the ParaWise/CAPO tools, experience shows that a level of expertise is required to produce efficient parallel code. In a real-world application, the investigation of a single loop in a generated parallel code can soon become an in-depth inspection of numerous dependences across many routines. This deeper understanding of dependences is also needed to interpret the information provided effectively and to supply the required feedback. The ParaWise Expert Assistant has been developed to automate this investigation and to present questions to the user about, and in the context of, their application code. In this paper, we demonstrate that knowledge of dependence information and OpenMP is no longer essential to produce efficient parallel code with the Expert Assistant. It is hoped that this will enable a far wider audience to use the tools and, subsequently, to exploit the benefits of large parallel systems.
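
A generic example of the kind of loop that forces such questions (illustrative, not taken from the paper): static analysis alone cannot prove that the write through an index array is free of loop-carried dependences, so a tool must either serialize the loop or ask the user whether the index array contains duplicates.

```c
/* Illustrative case where dependence analysis needs user knowledge:
 * if idx[] holds no repeated values over the loop range, iterations
 * are independent and the directive is safe; a compiler or tool
 * usually cannot prove that, so it must ask the user. */
void scatter_update(int n, const int *idx, const double *src, double *dst)
{
    /* safe only if the user asserts idx is duplicate-free */
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        dst[idx[i]] += src[i];
}
```

Questions of exactly this shape, posed in the context of the user's own code, are what an expert-assistant layer can automate.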

Relevance: 100.00%

Publisher:

Abstract:

A comprehensive solution of solidification/melting processes requires the simultaneous representation of free-surface fluid flow, heat transfer, phase change, nonlinear solid mechanics and, possibly, electromagnetics, together with their interactions, in what is now known as multiphysics simulation. Such simulations are computationally intensive, and solution strategies for multiphysics calculations must therefore embed effective parallelization. For some years, together with our collaborators, we have been involved in the development of numerical software tools for multiphysics modeling on parallel cluster systems. This research has involved a combination of algorithmic procedures, parallel strategies and tools, and the design of a computational modeling software environment, together with its deployment in a range of real-world applications. One output of this research is the three-dimensional parallel multiphysics code PHYSICA. In this paper we report on an assessment of its parallel scalability on a range of increasingly complex models drawn from actual industrial problems, using three contemporary parallel cluster systems.

Relevance: 100.00%

Publisher:

Abstract:

Parallel processing techniques have been used in the past to provide high-performance computing resources for activities such as fire-field modelling. This has traditionally been achieved using specialized hardware and software, the expense of which would be difficult to justify for many fire engineering practices. In this article we demonstrate how typical office-based PCs attached to a local area network have the potential to offer the benefits of parallel processing at minimal cost in additional hardware or software. It was found that good speedups could be achieved on homogeneous networks of PCs: for example, a problem composed of ~100,000 cells ran 9.3 times faster on a network of twelve 800 MHz PCs than on a single 800 MHz PC, and a network of eight 3.2 GHz Pentium 4 PCs ran 7.04 times faster than a single 3.2 GHz Pentium 4 PC. A dynamic load-balancing scheme was also devised to allow the effective use of the software on heterogeneous PC networks, and to ensure that interference between the parallel processing task and other users of the computers on the network was minimized.
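
A minimal sketch of one way such dynamic rebalancing can work (an illustrative heuristic, not necessarily the authors' scheme): each node reports the wall-clock time of its last batch of iterations, per-node speed is estimated as cells processed per second, and cells are reallocated in proportion to speed, so a PC slowed by an interactive user automatically sheds work.

```c
/* Illustrative dynamic load-balancing heuristic (not the paper's exact
 * scheme): reallocate cells in proportion to each node's measured speed. */
#include <stdio.h>

#define NODES 4

void rebalance(int total_cells, const int *cells, const double *secs,
               int *new_cells)
{
    double speed[NODES], total_speed = 0.0;
    for (int n = 0; n < NODES; n++) {
        speed[n] = cells[n] / secs[n];      /* cells per second last sweep */
        total_speed += speed[n];
    }
    int assigned = 0;
    for (int n = 0; n < NODES; n++) {
        new_cells[n] = (int)(total_cells * speed[n] / total_speed);
        assigned += new_cells[n];
    }
    new_cells[0] += total_cells - assigned; /* rounding remainder */
}

int main(void)
{
    /* node 3 is shared with an interactive user, so its sweep was slow */
    int    cells[NODES] = {25000, 25000, 25000, 25000};
    double secs[NODES]  = {10.0, 10.0, 10.0, 20.0};
    int next[NODES];
    rebalance(100000, cells, secs, next);
    for (int n = 0; n < NODES; n++)
        printf("node %d: %d cells\n", n, next[n]);
    return 0;
}
```

Rebalancing on measured speed rather than nominal clock rate is what lets a scheme like this cope both with heterogeneous hardware and with competing interactive workloads.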

Relevance: 100.00%

Publisher:

Abstract:

This paper presents a generic framework that can be used to describe study plans using metadata. The context of this research and the associated technologies and standards are presented. The approach adopted here has been developed within the mENU project, which aims to provide a model for a European Networked University. The methodology for the design of the generic framework is discussed and the main design requirements are presented. The approach adopted was based on a set of templates containing the metadata required for the description of programs of study, consisting of generic building elements annotated appropriately. The process followed to develop the templates is presented, together with a set of evaluation criteria to test the suitability of the approach. The template structure is presented and example templates are shown. A first evaluation of the approach has shown that the proposed framework can provide a flexible and capable means for the generic description of study plans for the purposes of a networked university.