9 results for prosthetic platforms
in Greenwich Academic Literature Archive - UK
Abstract:
Three paradigms for distributed-memory parallel computation that free the application programmer from the details of message passing are compared for an archetypal structured scientific computation -- a nonlinear, structured-grid partial differential equation boundary value problem -- using the same algorithm on the same hardware. All of the paradigms -- parallel languages represented by the Portland Group's HPF, (semi-)automated serial-to-parallel source-to-source translation represented by CAPTools from the University of Greenwich, and parallel libraries represented by Argonne's PETSc -- are found to be easy to use for this problem class, and all are reasonably effective in exploiting concurrency after a short learning curve. The level of involvement required by the application programmer under any paradigm includes specification of the data partitioning, corresponding to a geometrically simple decomposition of the domain of the PDE. Programming in SPMD style for the PETSc library requires writing only the routines that discretize the PDE and its Jacobian, managing subdomain-to-processor mappings (affine global-to-local index mappings), and interfacing to library solver routines. Programming for HPF requires a complete sequential implementation of the same algorithm as a starting point, introduction of concurrency through subdomain blocking (a task similar to the index mapping), and modest experimentation with rewriting loops to elucidate to the compiler the latent concurrency. Programming with CAPTools involves feeding the same sequential implementation to the CAPTools interactive parallelization system, and guiding the source-to-source code transformation by responding to various queries about quantities knowable only at runtime.
Results representative of "the state of the practice" for a scaled sequence of structured grid problems are given on three of the most important contemporary high-performance platforms: the IBM SP, the SGI Origin 2000, and the Cray T3E.
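The "affine global-to-local index mappings" mentioned in this abstract can be illustrated with a minimal sketch. This is not code from the paper: the function names and the 1-D block decomposition are assumptions made purely for illustration.

```python
# Illustrative sketch (not from the paper): affine global-to-local index
# mapping for a 1-D block decomposition of a structured grid over processors.

def block_range(n_global, n_procs, rank):
    """Return (start, end) of the global indices owned by `rank`,
    distributing any remainder one extra point to the lowest ranks."""
    base, rem = divmod(n_global, n_procs)
    start = rank * base + min(rank, rem)
    end = start + base + (1 if rank < rem else 0)
    return start, end

def global_to_local(i_global, start):
    """Affine map from a global grid index to this subdomain's local index."""
    return i_global - start

# Rank 1 of 4 processors on a 100-point grid owns global indices [25, 50):
start, end = block_range(100, 4, 1)
assert (start, end) == (25, 50)
assert global_to_local(30, start) == 5
```

In a real PETSc code these mappings are managed by the library's distributed-array machinery; the sketch only shows the arithmetic the abstract refers to.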
Abstract:
We report on practical experience using the Oxford BSP Library to parallelize a large electromagnetic code, the British Aerospace finite-difference time-domain code EMMA T:FD3D. The Oxford BSP Library is one of the first realizations of the Bulk Synchronous Parallel computational model to be targeted at numerically intensive scientific (typically Fortran) computing. The BAe EMMA code is one of the first large-scale applications to be parallelized using this library, and it is an important demonstration of the cost effectiveness of the BSP approach. We illustrate how BSP cost-modelling techniques can be used to predict and optimize performance for single-source programs across different parallel platforms. We provide predicted and observed performance figures for an industrial-strength, single-source parallel code for a variety of real parallel architectures: shared memory multiprocessors, workstation clusters and massively parallel platforms.
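The BSP cost-modelling technique the abstract refers to charges each superstep its local work plus communication plus a barrier. A minimal sketch of the standard model, with illustrative (not machine-measured) parameter values:

```python
# Sketch of the standard BSP cost model: a superstep with local work w and
# an h-relation (at most h words sent/received per processor) costs
# w + h*g + l, where g (communication gap) and l (barrier latency) are
# machine-dependent constants. The values below are illustrative only.

def superstep_cost(w, h, g, l):
    """Predicted cost of one BSP superstep, in flop-equivalent units."""
    return w + h * g + l

def program_cost(supersteps, g, l):
    """Total predicted cost: a sum over (w, h) pairs, one per superstep."""
    return sum(superstep_cost(w, h, g, l) for w, h in supersteps)

# Two supersteps on a hypothetical machine with g = 4, l = 100:
cost = program_cost([(1000, 50), (500, 20)], g=4, l=100)
assert cost == (1000 + 50 * 4 + 100) + (500 + 20 * 4 + 100)  # 1980
```

Because g and l are measured once per machine, the same single-source program can be costed on each target platform before it is run, which is the basis of the prediction results the paper reports.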
Abstract:
This paper presents an investigation into dynamic self-adjustment of task deployment and other aspects of self-management, through the embedding of multiple policies. Non-dedicated loosely-coupled computing environments, such as clusters and grids, are increasingly popular platforms for parallel processing. These abundant systems are highly dynamic environments in which many sources of variability affect the run-time efficiency of tasks. The dynamism is exacerbated by the incorporation of mobile devices and wireless communication. This paper proposes an adaptive strategy for the flexible run-time deployment of tasks, to continuously maintain efficiency despite the environmental variability. The strategy centres on policy-based scheduling which is informed by contextual and environmental inputs such as variance in the round-trip communication time between a client and its workers and the effective processing performance of each worker. A self-management framework has been implemented for evaluation purposes. The framework integrates several policy-controlled, adaptive services with the application code, enabling the run-time behaviour to be adapted to contextual and environmental conditions. Using this framework, an exemplar self-managing parallel application is implemented and used to investigate the extent of the benefits of the strategy.
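Policy-based scheduling informed by round-trip time and worker performance, as described above, can be sketched as follows. The scoring rule, class names and figures are hypothetical illustrations, not the paper's actual policy:

```python
# Hypothetical sketch of policy-based task deployment: a policy ranks
# workers by measured round-trip time and effective processing rate.

from dataclasses import dataclass

@dataclass
class Worker:
    name: str
    rtt_ms: float   # measured client-worker round-trip time
    rate: float     # effective processing performance (tasks/s)

def score(w):
    # Example policy: prefer fast workers, penalise high communication latency.
    return w.rate / (1.0 + w.rtt_ms / 100.0)

def deploy(task, workers):
    """Assign `task` to the highest-scoring worker under the current policy."""
    return max(workers, key=score).name

workers = [Worker("a", rtt_ms=10.0, rate=5.0),
           Worker("b", rtt_ms=200.0, rate=8.0)]
# Worker "b" computes faster, but its latency penalty makes "a" the better choice.
assert deploy("t1", workers) == "a"
```

The point of making the scoring rule a replaceable policy, rather than hard-coded logic, is that the deployment behaviour can be adapted at run time as the environmental inputs change.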
Abstract:
This paper describes work towards the deployment of flexible self-management into real-time embedded systems. A challenging project which focuses specifically on the development of a dynamic, adaptive automotive middleware is described, and the specific self-management requirements of this project are discussed. These requirements have been identified through the refinement of a wide-ranging set of use cases requiring context-sensitive behaviours. A sample of these use cases is presented to illustrate the extent of the demands for self-management. The strategy that has been adopted to achieve self-management, based on the use of policies, is presented. The embedded and real-time nature of the target system brings the constraints that dynamic adaptation capabilities must not require changes to the run-time code (except during hot update of complete binary modules), adaptation decisions must have low latency, and, because the target platforms are resource-constrained, the self-management mechanisms must have low resource requirements (especially in terms of processing and memory). Policy-based computing is thus an ideal candidate for achieving self-management because the policy itself is loaded at run-time and can be replaced or changed in the future in the same way that a data file is loaded. Policies represent a relatively low complexity and low risk means of achieving self-management, with low run-time costs. Policies can be stored internally in ROM (such as default policies) as well as externally to the system. The architecture of a designed-for-purpose powerful yet lightweight policy library is described. A suitable evaluation platform, supporting the whole life-cycle of feasibility analysis, concept evaluation, development, rigorous testing and behavioural validation, has been devised and is described.
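The key idea above, that a policy is loaded at run time "in the same way that a data file is loaded" so behaviour can change without touching the binary, can be sketched minimally. The rule format and default policy below are invented for illustration and are not the paper's policy language:

```python
# Hypothetical sketch of "policy as loadable data": adaptation rules are
# plain data evaluated at run time, so updating behaviour means replacing
# a file (or a ROM-stored default), not recompiling the system.

import json

# A default policy, as might be held in ROM; the rule format is invented.
DEFAULT_POLICY = """
{"rules": [{"if_load_above": 0.8, "then": "shed_tasks"},
           {"if_load_above": 0.0, "then": "run_normally"}]}
"""

def load_policy(text=DEFAULT_POLICY):
    """Parse a policy exactly as one would load a data file."""
    return json.loads(text)["rules"]

def decide(load, rules):
    """Return the action of the first rule whose condition matches."""
    for rule in rules:
        if load > rule["if_load_above"]:
            return rule["then"]
    return "run_normally"

rules = load_policy()
assert decide(0.9, rules) == "shed_tasks"
assert decide(0.5, rules) == "run_normally"
```

Replacing `DEFAULT_POLICY` with an externally supplied document changes the adaptation behaviour with no change to the evaluation code, which is what keeps latency and resource cost low on a constrained target.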
Abstract:
Kurzel (2004) points out that researchers in e-learning and educational technologists, in a quest to provide improved Learning Environments (LE) for students, are focusing on personalising the experience through a Learning Management System (LMS) that attempts to tailor the LE to the individual (see amongst others Eklund & Brusilovsky, 1998; Kurzel, Slay, & Hagenus, 2003; Martinez, 2000; Sampson, Karagiannidis, & Kinshuk, 2002; Voigt & Swatman, 2003). According to Kurzel (2004) this tailoring can have an impact on content and how it's accessed, the media forms used, the method of instruction employed and the learning styles supported. This project aims to move personalisation forward to the next generation by tackling the issue of Personalised e-Learning platforms as pre-requisites for building and generating individualised learning solutions. The proposed development is to create an e-learning platform with personalisation built-in. This personalisation is proposed to be set at different levels within the system, starting from being guided by the information that the user inputs into the system down to the lower level of being set using information inferred by the system's processing engine. This paper will discuss some of our early work and ideas.
Abstract:
The miniaturization and dissemination of audiovisual media into small, mobile assemblages of cameras, screens and microphones has brought "database cinema" (Manovich) into pockets and handbags. In turn, this micro-portability of video production calls for a reconsideration of database cinema, not as an aesthetic but rather as a media ecology that makes certain experiences and forms of interaction possible. In this context the clip and the fragment become a social currency (showing, trading online, etc.), and the enjoyment of a moment or "occasion" becomes an opportunity for recording, extending, preserving and displaying. If we are now the documentarists of our lives (as so many mobile phone adverts imply), it follows that we are our own archivists as well. From the folksonomies of Flickr and YouTube to the slick "media centres" of Sony, Apple and Microsoft, the audiovisual home archive is a prized territory of struggle among platforms and brands. The database is emerging as the dominant (screen) medium of popular creativity and distribution – but it also brings the categories of "home" and "person" closer to that of the archive.
Abstract:
Broadly argues that the distinction between print, radio and TV has become untenable, and that we need different concepts for database-driven media platforms, their interfaces, their scale, temporality and modes of reception.
Abstract:
The present recession has prompted scholarly and journalistic questioning of the contributions of the cultural industries to the economy. The talent-rich metropolitan clusters of London and New York are well-placed to ride out a thoroughgoing shakeup of the media markets if they manage their infrastructure, space and resources strategically, as Richard Florida has recently argued. This seems to be the assumption behind the recent Digital Britain interim report, and Gordon Brown's remarks that a digital revolution "lies at the heart" of Britain's economic recovery and that broadband and the media industry can play a leading role in pulling the UK out of the recession. Focusing on the Digital Britain report and consultation documents, this presentation seeks to unpack some of the fundamental assumptions behind this link between digital infrastructure, creativity and profitability. In particular the implicit notion of an engaged audience of users, generating "content" as well as shaping new media platforms calls into question long-held theoretical constructions of the mass audience of consumers as spectators; instead, the audience emerges as a potential economic powerhouse, an underused resource for tomorrow's cultural industries.
Abstract:
The present recession has prompted scholarly and journalistic questioning of the contributions of the cultural industries to the economy. The talent-rich metropolitan clusters of London and New York are well-placed to ride out a thoroughgoing shakeup of the media markets if they manage their infrastructure, space and resources strategically, as Richard Florida has recently argued. This seems to be the assumption behind the recent Digital Britain interim report, and Gordon Brown's remarks that a digital revolution "lies at the heart" of Britain's economic recovery and that broadband and the media industry can play a leading role in pulling the UK out of the recession. Focusing on the Digital Britain report and consultation documents, this presentation seeks to unpack some of the fundamental assumptions behind this link between digital infrastructure, creativity and profitability. In particular the implicit notion of an engaged audience of users, generating "content" as well as shaping new media platforms, calls into question long-held theoretical constructions of the mass audience of consumers as spectators. [From the Author]