14 results for Hardware and software

in Greenwich Academic Literature Archive - UK


Relevance:

100.00%

Publisher:

Abstract:

Embedded software systems in vehicles are of rapidly increasing commercial importance for the automotive industry. Current systems employ a static run-time environment, owing to the difficulty and cost involved in developing dynamic systems in a high-integrity embedded control context. A dynamic system, in the sense of a dynamically modifiable system configuration, would greatly increase the flexibility of the offered functionality and enable customised software configuration for individual vehicles, adding customer value through plug-and-play capability, and increased quality through its inherent ability to adjust to changes in hardware and software. We envisage an automotive system containing a variety of components, from a multitude of organisations, not necessarily known at development time. The system dynamically adapts its configuration to suit the run-time system constraints. This paper presents our vision for future automotive control systems, to be investigated in an EU research project referred to as DySCAS (Dynamically Self-Configuring Automotive Systems). We propose a self-configuring vehicular control system architecture with capabilities that include automatic discovery and inclusion of new devices, self-optimisation to make best use of the available processing, storage and communication resources, self-diagnostics and, ultimately, self-healing. Such an architecture has benefits extending to reduced development and maintenance costs, improved passenger safety and comfort, and flexible owner customisation. Specifically, this paper addresses the following issues: the state of the art of embedded software systems in vehicles, emphasising the current limitations arising from fixed run-time configurations; and the benefits and challenges of dynamic configuration, giving rise to opportunities for self-healing, self-optimisation, and the automatic inclusion of users' Consumer Electronics (CE) devices. Our proposal for a dynamically reconfigurable automotive software system platform is outlined, and a typical use case is presented to exemplify the benefits of the envisioned dynamic capabilities.
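
To make the plug-and-play idea concrete, the sketch below shows a toy configuration manager that admits devices at run time, places tasks greedily by spare capacity, and re-plans when a device leaves. All class, device and task names are hypothetical illustrations; this is not the DySCAS middleware.

```python
# Minimal sketch of run-time device discovery and greedy reconfiguration,
# in the spirit of the plug-and-play capability described above. All class,
# device and task names are hypothetical; this is not the DySCAS middleware.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    capacity: float               # spare processing budget, arbitrary units

class ConfigurationManager:
    def __init__(self, tasks):
        self.tasks = tasks        # {task_name: processing cost}
        self.devices = []
        self.placement = {}

    def attach(self, device):     # automatic discovery and inclusion
        self.devices.append(device)
        self._reconfigure()

    def detach(self, name):       # self-healing: a device fails or leaves
        self.devices = [d for d in self.devices if d.name != name]
        self._reconfigure()

    def _reconfigure(self):
        # Toy self-optimisation: costliest task first, each placed on the
        # device with the most remaining capacity (no overload handling).
        spare = {d.name: d.capacity for d in self.devices}
        self.placement = {}
        for task, cost in sorted(self.tasks.items(), key=lambda t: -t[1]):
            if not spare:
                return
            target = max(spare, key=spare.get)
            self.placement[task] = target
            spare[target] -= cost

mgr = ConfigurationManager({"navigation": 3.0, "media": 2.0, "diagnostics": 1.0})
mgr.attach(Device("head_unit", 4.0))
mgr.attach(Device("passenger_phone", 3.0))  # a CE device joins at run time
print(mgr.placement)
mgr.detach("passenger_phone")               # device leaves: system re-plans
print(mgr.placement)
```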

Relevance:

100.00%

Publisher:

Abstract:

The shared-memory programming model can be an effective way to achieve parallelism on shared-memory parallel computers. Historically, however, the lack of a standard for directive-based programming and limited scalability affected its take-up. Recent advances in hardware and software technologies have improved both the performance of parallel programs using compiler directives and, with the introduction of OpenMP, their portability. In this study, the Computer Aided Parallelisation Toolkit has been extended to automatically generate OpenMP-based parallel programs with nominal user assistance. We categorise the different loop types and show how efficient directives can be placed using the toolkit's in-depth interprocedural analysis. Examples are taken from the NAS parallel benchmarks and a number of real-world application codes. These demonstrate the great potential of the toolkit for quickly parallelising serial programs, as well as the good performance achievable on up to 300 processors for hybrid message-passing/directive parallelisations.
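
As a rough illustration of the loop categorisation step, the toy classifier below flags a loop as parallelisable only when no array element written in one iteration is read in a different one; a loop classified as parallel would then receive an OpenMP parallel-for directive. The toolkit's real interprocedural dependence analysis is far more sophisticated, and the access-set encoding here is purely hypothetical.

```python
# Toy version of the loop categorisation that precedes directive placement.
# The real toolkit performs in-depth interprocedural dependence analysis;
# this sketch only checks whether an array element written in one iteration
# could be read in a different iteration. The access-set encoding
# ((array, offset) pairs relative to the loop counter i) is hypothetical.
def classify_loop(writes, reads):
    for array_w, off_w in writes:
        for array_r, off_r in reads:
            if array_w == array_r and off_w != off_r:
                return "serial"    # cross-iteration dependence: no directive
    return "parallel"              # safe to prefix with an OpenMP
                                   # '#pragma omp parallel for' directive

# a[i] = b[i] + b[i+1]: b is only read, so iterations are independent.
print(classify_loop(writes={("a", 0)}, reads={("b", 0), ("b", 1)}))   # parallel
# a[i] = a[i-1] + b[i]: each iteration reads the previous one's write.
print(classify_loop(writes={("a", 0)}, reads={("a", -1), ("b", 0)}))  # serial
```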

Relevance:

100.00%

Publisher:

Abstract:

In a mesh-based computational mechanics simulation, the stages between model geometry creation and model analysis, such as mesh generation, are the most manpower-intensive phase of the process; the model analysis itself, by contrast, is the most computing-intensive phase. Advanced computational hardware and software have significantly reduced the computing time, and, more importantly, the trend is downward. For the kinds of models now envisaged, which are larger, more complex in geometry and modelling, and multiphysics, there is no clear indication that the manpower-intensive phase will shorten significantly; in the present way of operation it is more likely to lengthen with model complexity. In this paper we address this dilemma through collaborating components for models in electronic packaging applications.

Relevance:

100.00%

Publisher:

Abstract:

Parallel processing techniques have been used in the past to provide high performance computing resources for activities such as fire-field modelling. This has traditionally been achieved using specialized hardware and software, the expense of which would be difficult to justify for many fire engineering practices. In this article we demonstrate how typical office-based PCs attached to a Local Area Network have the potential to offer the benefits of parallel processing with minimal costs associated with the purchase of additional hardware or software. It was found that good speedups could be achieved on homogeneous networks of PCs: for example, a problem composed of ~100,000 cells would run 9.3 times faster on a network of twelve 800 MHz PCs than on a single 800 MHz PC. It was also found that a network of eight 3.2 GHz Pentium 4 PCs would run 7.04 times faster than a single 3.2 GHz Pentium computer. A dynamic load balancing scheme was also devised to allow the effective use of the software on heterogeneous PC networks. This scheme also ensured that the impact of the parallel processing task on other computer users on the network was minimized.
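
As a quick check on the reported figures, parallel efficiency is simply speedup divided by processor count; a minimal sketch:

```python
# Parallel efficiency implied by the reported speedups (speedup / processors).
def efficiency(speedup, processors):
    return speedup / processors

print(f"12 x 800 MHz PCs: {efficiency(9.3, 12):.1%}")   # ~77.5%
print(f"8 x 3.2 GHz P4s:  {efficiency(7.04, 8):.1%}")   # ~88.0%
```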

Relevance:

100.00%

Publisher:

Abstract:

Parallel processing techniques have been used in the past to provide high performance computing resources for activities such as Computational Fluid Dynamics. This is normally achieved using specialized hardware and software, the expense of which would be difficult to justify for many fire engineering practices. In this paper, we demonstrate how typical office-based PCs attached to a local area network have the potential to offer the benefits of parallel processing with minimal costs associated with the purchase of additional hardware or software. A dynamic load balancing scheme was devised to allow the effective use of the software on heterogeneous PC networks. This scheme ensured that the impact of the parallel processing task on other computer users on the network was minimized, thus allowing practical parallel processing within a conventional office environment. Copyright © 2006 John Wiley & Sons, Ltd.
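
The abstract does not give the details of the load balancing scheme; as an illustrative sketch only, one common approach on heterogeneous networks is to redistribute mesh cells in proportion to each machine's recently measured throughput, so a PC slowed by an interactive user automatically sheds work. Machine names and rates below are made up.

```python
# Hypothetical load-balancing step for a heterogeneous PC network: hand each
# machine a share of the mesh cells proportional to its recently measured
# throughput. Machine names and rates are made up for illustration.
def rebalance(total_cells, rates):
    """rates: {machine: cells processed per second over the last interval}."""
    total_rate = sum(rates.values())
    return {m: round(total_cells * r / total_rate) for m, r in rates.items()}

# A faster PC, or one whose interactive user has gone idle, takes more cells.
print(rebalance(100_000, {"pc_a": 50.0, "pc_b": 35.0, "pc_c": 15.0}))
# -> {'pc_a': 50000, 'pc_b': 35000, 'pc_c': 15000}
```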

Relevance:

90.00%

Publisher:

Abstract:

Numerical modelling technology and software are now being used to underwrite the design of many microelectronic and microsystems components. The demands for greater capability in these analysis tools are increasing dramatically, as the user community faces the challenge of producing reliable products in ever shorter lead times. This leads to the requirement for analysis tools to represent the interactions amongst distinct phenomena and physics at multiple length scales and timescales. Multi-physics and multi-scale technology is now becoming a reality with many code vendors. This chapter discusses the current status of modelling tools that assess the impact of nanotechnology on the fabrication, packaging and testing of microsystems. The chapter covers modelling technologies, modelling applied to fabrication, modelling applied to assembly/packaging, and modelling applied to test and metrology.

Relevance:

80.00%

Publisher:

Abstract:

The effects of natural-language comments, meaningful variable names, and structure on the comprehensibility of Z specifications are investigated through a designed experiment conducted with a range of undergraduate and postgraduate student subjects. The times taken on three assessment questions are analysed and related to the abilities of the students as indicated by their total score, with the result that stronger students need less time than weaker students to complete the assessment. Individual question scores, and the total score, are then analysed, and the influence of comments, naming, structure and the student's class level is determined. In the whole experimental group, only meaningful naming significantly enhances comprehension. In contrast, for those obtaining the best score of 3/3, the only significant factor is commenting. Finally, the subjects' ratings of the five specifications used in the study in terms of their perceived comprehensibility are analysed. Comments, naming and structure are again found to be important in the group analysed as a whole, but in the sub-group of best-performing subjects only the comments had an effect on perceived comprehensibility.

Relevance:

80.00%

Publisher:

Abstract:

The manufacture of materials products involves the control of a range of interacting physical phenomena. The material to be used is synthesised and then manipulated into some component form. The structure and properties of the final component are influenced by interactions of phenomena at both the continuum scale and the atomistic scale. Moreover, during the processing phase there are some properties that cannot be measured (typically through the liquid-solid phase change). However, there is potential to derive, from atomistic-scale simulations, properties and other features that are of key importance at the continuum scale. Some of the issues that need to be resolved in this context focus upon computational techniques and software tools facilitating: (i) multiphysics modelling at the continuum scale; (ii) interaction, with appropriate degrees of coupling, between the atomistic scale and the continuum scale via the microstructure; and (iii) the exploitation of high-performance parallel computing to deliver simulation results in a practical time period. This paper discusses some of the attempts to address each of the above issues, particularly in the context of materials processing for manufacture.
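
To illustrate issue (ii) in the simplest possible way, the sketch below shows a one-way hand-off: an atomistic-scale stand-in tabulates a property that cannot be measured through the liquid-solid phase change (here, a solid-fraction curve), and the continuum solver interpolates the table. The model form and every number are hypothetical.

```python
# One-way atomistic-to-continuum hand-off: an atomistic-scale stand-in
# tabulates the solid fraction through the phase change, and the continuum
# solver interpolates the table. Model form and all numbers are hypothetical.
import bisect

def atomistic_fraction_solid(temperatures):
    # Stand-in for an expensive molecular-scale simulation run offline.
    t_sol, t_liq = 600.0, 650.0
    return [(t, min(1.0, max(0.0, (t_liq - t) / (t_liq - t_sol))))
            for t in temperatures]

table = atomistic_fraction_solid([580, 600, 610, 620, 630, 640, 650, 670])
temps = [row[0] for row in table]

def fraction_solid(t):
    # Continuum-scale query: linear interpolation in the precomputed table.
    i = bisect.bisect_left(temps, t)
    if i == 0:
        return table[0][1]
    if i == len(table):
        return table[-1][1]
    (t0, f0), (t1, f1) = table[i - 1], table[i]
    return f0 + (f1 - f0) * (t - t0) / (t1 - t0)

print(fraction_solid(625.0))  # 0.5 at the midpoint of the mushy zone
```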

Relevance:

80.00%

Publisher:

Abstract:

The demands of the process of engineering design, particularly for structural integrity, have exploited computational modelling techniques and software tools for decades. Frequently, the shape of structural components or assemblies is determined to optimise the flow distribution or heat transfer characteristics, and to ensure that the structural performance in service is adequate. From the perspective of computational modelling, these activities are typically separated into:

• fluid flow and the associated heat transfer analysis (possibly with chemical reactions), based upon Computational Fluid Dynamics (CFD) technology;
• structural analysis, again possibly with heat transfer, based upon finite element analysis (FEA) techniques.

Relevance:

80.00%

Publisher:

Abstract:

A comprehensive simulation of solidification/melting processes requires the simultaneous representation of free-surface fluid flow, heat transfer, phase change, non-linear solid mechanics and, possibly, electromagnetics, together with their interactions, in what is now referred to as "multi-physics" simulation. A 3D computational procedure and software tool, PHYSICA, embedding the above multi-physics models using finite volume methods on unstructured meshes (FV-UM), has been developed. Multi-physics simulations are extremely compute-intensive, and a strategy to parallelise such codes has therefore been developed. This strategy has been applied to PHYSICA and evaluated on a range of challenging multi-physics problems drawn from actual industrial cases.
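
The parallelisation strategy itself is not detailed in the abstract; the sketch below shows the generic domain-decomposition pattern commonly used for unstructured-mesh finite volume codes, not PHYSICA's actual implementation. It assumes mpi4py, uses a naive block split where a real code would use a graph partitioner, and all field values are made up.

```python
# Generic domain-decomposition sketch for an unstructured-mesh FV code
# (not PHYSICA's actual strategy): partition cells across processes,
# exchange halo values with neighbours each step, and use one collective
# for the global convergence check. Assumes mpi4py; run with e.g.
# `mpiexec -n 4 python this_script.py`.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_cells = 1000
lo = rank * n_cells // size
hi = (rank + 1) * n_cells // size
field = [float(i) for i in range(lo, hi)]      # this rank's owned cells

for _ in range(10):                            # pseudo time-stepping loop
    # Halo exchange with neighbouring partitions (one-cell overlap).
    if rank > 0:
        left_halo = comm.sendrecv(field[0], dest=rank - 1, source=rank - 1)
    if rank < size - 1:
        right_halo = comm.sendrecv(field[-1], dest=rank + 1, source=rank + 1)
    # ... update this rank's cells using the halo values ...
    residual = comm.allreduce(sum(field), op=MPI.SUM)  # global reduction

if rank == 0:
    print(f"final global residual: {residual}")
```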

Relevance:

80.00%

Publisher:

Abstract:

The recognition that urban groundwater is a potentially valuable resource for potable and industrial uses, given growing pressures on perceived less-polluted rural groundwater, has led to a requirement to assess the groundwater contamination risk in urban areas from industrial contaminants such as chlorinated solvents. The development of a probabilistic, risk-based management tool that predicts groundwater quality at potential new urban boreholes is beneficial in determining the best sites for future resource development. The Borehole Optimisation System (BOS) is a custom Geographic Information System (GIS) application that has been developed with the objective of identifying the optimum locations for new abstraction boreholes. BOS can be applied to any aquifer subject to variable contamination risk. The system is described in more detail by Tait et al. [Tait, N.G., Davison, J.J., Whittaker, J.J., Leharne, S.A., Lerner, D.N., 2004a. Borehole Optimisation System (BOS) - a GIS based risk analysis tool for optimising the use of urban groundwater. Environmental Modelling and Software 19, 1111-1124]. This paper applies the BOS model to an urban Permo-Triassic Sandstone aquifer in the city centre of Nottingham, UK. The risk of pollution in potential new boreholes from the industrial chlorinated solvent tetrachloroethene (PCE) was assessed for this region. The risk model was validated against contaminant concentrations from six actual field boreholes within the study area; in these studies the model generally underestimated contaminant concentrations. A sensitivity analysis showed that the most responsive model parameters were recharge, effective porosity and contaminant degradation rate. Multiple simulations were undertaken across the study area in order to create surface maps indicating areas of low PCE concentration, and hence the best locations for new boreholes. Results indicate that the north-eastern, eastern and central regions have the lowest potential PCE concentrations in abstracted groundwater and are therefore the best sites for locating new boreholes. These locations coincide with aquifer areas confined by low-permeability Mercia Mudstone deposits. Conversely, the southern and north-western areas are unconfined and have a shallower depth to groundwater; these areas have the highest potential PCE concentrations. These studies demonstrate the applicability of BOS as a tool for informing decision-makers on the development of urban groundwater resources. © 2007 Elsevier Ltd. All rights reserved.
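
For illustration only, the sketch below shows the general shape of such a probabilistic assessment: sample the three most responsive parameters from assumed ranges, push them through a deliberately crude concentration relation, and summarise the spread for a candidate borehole. Neither the model form nor any parameter range comes from BOS.

```python
# Sketch of the general shape of a probabilistic siting calculation: sample
# the three most responsive parameters from assumed ranges, apply a crude
# decay-and-dilution relation, and summarise the spread at one candidate
# borehole. Model form and all ranges are hypothetical, not BOS's.
import math
import random

random.seed(1)  # reproducible illustration

def sample_concentration(source_conc_ug_l, travel_time_years):
    recharge = random.uniform(0.1, 0.3)    # recharge, m/yr (assumed range)
    porosity = random.uniform(0.15, 0.30)  # effective porosity (assumed)
    decay = random.uniform(0.01, 0.10)     # first-order degradation, 1/yr
    dilution = recharge / porosity         # crude dilution index
    return source_conc_ug_l * math.exp(-decay * travel_time_years) / dilution

runs = sorted(sample_concentration(500.0, 25.0) for _ in range(10_000))
print(f"median: {runs[len(runs) // 2]:.0f} ug/L, "
      f"95th percentile: {runs[int(0.95 * len(runs))]:.0f} ug/L")
```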

Relevance:

80.00%

Publisher:

Abstract:

The Student Experience of e-Learning Laboratory (SEEL) project at the University of Greenwich was designed to explore, and then implement, a number of approaches to investigating learners' experiences of using technology to support their learning. In this paper members of the SEEL team present initial findings from a University-wide survey of nearly 1,000 students. A selection of 90 'cameos', drawn from the survey data, offers further insights into personal perceptions of e-learning and illustrates the diversity of students' experiences. The cameos provide a more coherent picture of individual student experience based on the totality of each person's responses to the questionnaire. Finally, extracts from follow-up case studies, based on interviews with a small number of students, allow us to 'hear' the student voice more clearly. Issues arising from an analysis of the data include student preferences for communication and social networking tools, views on the 'smartness' of their tutors' uses of technology, and perceptions of the value of e-learning. A primary finding, and the focus of this paper, is that students effectively arrive at their own individualised selection, configuration and use of technologies and software to meet their perceived needs. This 'personalisation' does not imply that such configurations are the most efficient, nor does it automatically suggest that effective learning is occurring. SEEL reminds us that learners are individuals, who approach learning both with and without technology in their own distinctive ways. Hearing, understanding and responding to the student voice is fundamental to maximising learning effectiveness. Institutions should consider actively developing the capacity of academic staff to advise students on the usefulness of particular online tools and resources in support of learning, and consider the potential benefits of building on what students already use in their everyday lives. Given the widespread perception that students tend to be 'digital natives' and academic staff 'digital immigrants' (Prensky, 2001), this could represent a considerable cultural challenge.

Relevance:

80.00%

Publisher:

Abstract:

This paper presents novel collaboration methods implemented using a centralised client/server product development integration architecture, and a decentralised peer-to-peer network for smaller and larger companies using open-source solutions. The product development integration architecture has been developed for the integration of disparate technologies and software systems for the benefit of collaborative work teams in design and manufacturing, facilitating the communication of early design and product development within a distributed and collaborative environment. The novelty of this work is the introduction of an 'out-of-box' concept, which provides a standard framework and deploys it using a proprietary state-of-the-art product lifecycle management (PLM) system. The term 'out-of-box' means modifying the product development and business processes to suit the technologies, rather than vice versa. The key business benefits of adopting such an approach are a rapidly reconfigurable network and minimal requirements for software customisation, avoiding system instability.
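
As a minimal contrast between the two topologies, the sketch below routes every update through a single integration server in the centralised case, while peers exchange updates directly in the decentralised case. All class and node names are hypothetical; this is not the architecture's actual implementation.

```python
# Toy contrast between the two collaboration topologies: a centralised
# integration server routes every design update through one hub, while
# peers in a P2P network exchange updates directly. Names are hypothetical.
class IntegrationServer:
    """Centralised hub: every client publishes through the server."""
    def __init__(self):
        self.clients = []

    def register(self, client):
        self.clients.append(client)

    def publish(self, sender, update):
        for client in self.clients:
            if client is not sender:
                client.receive(update)

class Node:
    """A participant; in the P2P case it also talks to neighbours directly."""
    def __init__(self, name):
        self.name, self.neighbours, self.inbox = name, [], []

    def connect(self, other):                  # P2P link, no central hub
        self.neighbours.append(other)
        other.neighbours.append(self)

    def receive(self, update):
        self.inbox.append(update)

    def publish(self, update):                 # direct peer-to-peer exchange
        for peer in self.neighbours:
            peer.receive(update)

# Centralised: design_office and supplier only ever talk to the server.
server, a, b = IntegrationServer(), Node("design_office"), Node("supplier")
server.register(a); server.register(b)
server.publish(a, "rev-B geometry released")
# Decentralised: the same nodes linked directly.
a.connect(b)
a.publish("rev-C geometry released")
print(b.inbox)   # both updates arrived, by different routes
```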