963 results for Hardware


Relevance:

10.00%

Publisher:

Abstract:

We present a novel system to be used in the rehabilitation of patients with forearm injuries. The system uses surface electromyography (sEMG) recordings from a wireless sleeve to control video games designed to provide engaging biofeedback to the user. An integrated hardware/software system uses a neural net to classify the signals from a user's muscles as they perform one of a number of common forearm physical therapy exercises. These classifications are used as input for a suite of video games that have been custom-designed to hold the patient's attention and decrease the risk of noncompliance with the physical therapy regimen necessary to regain full function in the injured limb. The data is transmitted wirelessly from the on-sleeve board to a laptop computer using a custom-designed signal-processing algorithm that filters and compresses the data prior to transmission. We believe that this system has the potential to significantly improve the patient experience and efficacy of physical therapy using biofeedback that leverages the compelling nature of video games.
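
By way of illustration only, the following is a minimal sketch of the kind of classification step the abstract describes: windowed sEMG samples reduced to per-channel RMS features and passed through a small feed-forward network. The channel count, window size, feature choice, network shape and exercise labels are all assumptions, not details taken from the paper.

```python
import numpy as np

# Illustrative placeholders; the paper does not specify these values.
EXERCISES = ["wrist_flexion", "wrist_extension", "pronation", "supination"]

def rms_features(window):
    """window: (n_samples, n_channels) raw sEMG -> one RMS value per channel."""
    return np.sqrt(np.mean(window ** 2, axis=0))

def classify(window, weights, biases):
    """Single hidden-layer network; in practice the weights come from training."""
    x = rms_features(window)
    h = np.tanh(x @ weights[0] + biases[0])
    logits = h @ weights[1] + biases[1]
    return EXERCISES[int(np.argmax(logits))]

# Toy usage with random (untrained) parameters on synthetic data.
rng = np.random.default_rng(0)
w = [rng.normal(size=(8, 16)), rng.normal(size=(16, len(EXERCISES)))]
b = [np.zeros(16), np.zeros(len(EXERCISES))]
print(classify(rng.normal(size=(200, 8)), w, b))
```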

Relevance:

10.00%

Publisher:

Abstract:

An abstract of this work will be presented at the Compiler, Architecture and Tools Conference (CATC), Intel Development Center, Haifa, Israel, on November 23, 2015.

Relevance:

10.00%

Publisher:

Abstract:

Three paradigms for distributed-memory parallel computation that free the application programmer from the details of message passing are compared for an archetypal structured scientific computation -- a nonlinear, structured-grid partial differential equation boundary value problem -- using the same algorithm on the same hardware. All of the paradigms -- parallel languages represented by the Portland Group's HPF, (semi-)automated serial-to-parallel source-to-source translation represented by CAPTools from the University of Greenwich, and parallel libraries represented by Argonne's PETSc -- are found to be easy to use for this problem class, and all are reasonably effective in exploiting concurrency after a short learning curve. The level of involvement required by the application programmer under any paradigm includes specification of the data partitioning, corresponding to a geometrically simple decomposition of the domain of the PDE. Programming in SPMD style for the PETSc library requires writing only the routines that discretize the PDE and its Jacobian, managing subdomain-to-processor mappings (affine global-to-local index mappings), and interfacing to library solver routines. Programming for HPF requires a complete sequential implementation of the same algorithm as a starting point, introduction of concurrency through subdomain blocking (a task similar to the index mapping), and modest experimentation with rewriting loops to elucidate to the compiler the latent concurrency. Programming with CAPTools involves feeding the same sequential implementation to the CAPTools interactive parallelization system, and guiding the source-to-source code transformation by responding to various queries about quantities knowable only at runtime. Results representative of "the state of the practice" for a scaled sequence of structured grid problems are given on three of the most important contemporary high-performance platforms: the IBM SP, the SGI Origin 2000, and the Cray T3E.
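
To make the "affine global-to-local index mapping" mentioned above concrete, here is a minimal sketch for a 1-D block decomposition of a structured grid. The sizes and rank are illustrative, and PETSc provides its own equivalent machinery; this is not the paper's code.

```python
# Sketch of an affine global-to-local index mapping for a block decomposition.
def block_range(n_global, n_procs, rank):
    """Contiguous block of global indices owned by `rank`."""
    base, rem = divmod(n_global, n_procs)
    start = rank * base + min(rank, rem)
    size = base + (1 if rank < rem else 0)
    return start, size

def global_to_local(i_global, start, size):
    """Affine map; returns None for indices outside the local block."""
    i_local = i_global - start
    return i_local if 0 <= i_local < size else None

start, size = block_range(n_global=1000, n_procs=8, rank=3)
print(start, size, global_to_local(401, start, size))  # 375 125 26
```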

Relevance:

10.00%

Publisher:

Abstract:

The shared-memory programming model can be an effective way to achieve parallelism on shared-memory parallel computers. Historically, however, the lack of a programming standard using directives and the limited scalability have affected its take-up. Recent advances in hardware and software technologies have resulted in improvements to both the performance of parallel programs with compiler directives and the issue of portability with the introduction of OpenMP. In this study, the Computer Aided Parallelisation Toolkit has been extended to automatically generate OpenMP-based parallel programs with nominal user assistance. We categorize the different loop types and show how efficient directives can be placed using the toolkit's in-depth interprocedural analysis. Examples are taken from the NAS parallel benchmarks and a number of real-world application codes. This demonstrates the great potential of using the toolkit to quickly parallelise serial programs, as well as the good performance achievable on up to 300 processors for hybrid message-passing/directive parallelisations.

Relevance:

10.00%

Publisher:

Abstract:

From model geometry creation to model analysis, the stages in between, such as mesh generation, are the most manpower-intensive phases in a mesh-based computational mechanics simulation process, while the model analysis itself is the most computing-intensive phase. Advanced computational hardware and software have significantly reduced the computing time - and, more importantly, the trend is downward. With the kind of models envisaged, which are larger, more complex in geometry and modelling, and multiphysics, there is no clear trend that the manpower-intensive phase will decrease significantly in time - in the present way of operation it is more likely to increase with model complexity. In this paper we address this dilemma through collaborating components for models in electronic packaging applications.

Relevance:

10.00%

Publisher:

Abstract:

Many Web applications walk the thin line between the need for dynamic data and the need to meet user performance expectations. In environments where funds are not available to constantly upgrade hardware in line with user demand, alternative approaches need to be considered. This paper introduces a 'data farming' model whereby dynamic data, which is 'grown' in operational applications, is 'harvested' and 'packaged' for various consumer markets. Like any well-managed agricultural operation, crops are harvested according to historical and perceived demand as inferred by a self-optimising process. This approach aims to make enhanced use of available resources through better utilisation of system downtime - thereby improving application performance and increasing the availability of key business data.
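
The abstract does not give an algorithm for the self-optimising harvesting process, so the sketch below only illustrates the general idea: during an assumed low-load window, pre-compute and cache the dynamic views with the highest historical demand. View names, the downtime window and the demand counts are all hypothetical.

```python
from datetime import datetime

# Hypothetical demand history: how often each dynamic view was requested.
demand_history = {"sales_summary": 120, "stock_levels": 45, "user_stats": 300}
cached = {}

def is_downtime(now):
    return now.hour < 6 or now.hour >= 22   # assumed low-load window

def harvest(produce_view, now=None):
    """During downtime, 'harvest' views in order of historical demand."""
    now = now or datetime.now()
    if not is_downtime(now):
        return
    for view in sorted(demand_history, key=demand_history.get, reverse=True):
        cached[view] = produce_view(view)   # expensive query run off-peak

harvest(lambda view: f"snapshot of {view}", datetime(2024, 1, 1, 23, 0))
print(list(cached))
```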

Relevance:

10.00%

Publisher:

Abstract:

Parallel processing techniques have been used in the past to provide high performance computing resources for activities such as fire-field modelling. This has traditionally been achieved using specialized hardware and software, the expense of which would be difficult to justify for many fire engineering practices. In this article we demonstrate how typical office-based PCs attached to a Local Area Network have the potential to offer the benefits of parallel processing with minimal costs associated with the purchase of additional hardware or software. It was found that good speedups could be achieved on homogeneous networks of PCs; for example, a problem composed of ~100,000 cells would run 9.3 times faster on a network of 12 800MHz PCs than on a single 800MHz PC. It was also found that a network of eight 3.2GHz Pentium 4 PCs would run 7.04 times faster than a single 3.2GHz Pentium computer. A dynamic load balancing scheme was also devised to allow the effective use of the software on heterogeneous PC networks. This scheme also ensured that the impact of the parallel processing task on other computer users on the network was minimized.
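
For context, the speedups reported above correspond to the parallel efficiencies computed below (efficiency = speedup / number of processors); the figures come straight from the abstract.

```python
# Parallel efficiency implied by the reported speedups.
cases = [("12 x 800 MHz PCs", 9.3, 12), ("8 x 3.2 GHz Pentium 4 PCs", 7.04, 8)]
for name, speedup, n in cases:
    print(f"{name}: efficiency = {speedup / n:.2f}")
# 12-PC case: 0.78; 8-PC case: 0.88
```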

Relevance:

10.00%

Publisher:

Abstract:

The scalability of a computer system is its response to growth. It also depends on its hardware, its operating system and the applications it is running. Most distributed systems technology today still depends on bus-based shared memory, which does not scale well, whereas systems based on the grid or hypercube scheme require significantly fewer connections than a full interconnection, which would exhibit a quadratic growth rate. The rapid convergence of mobile communication, digital broadcasting and network infrastructures calls for rich multimedia content that is adaptive and responsive to the needs of individuals, businesses and public organisations. This paper will discuss the emergence of mobile multimedia systems and provide an overview of the issues regarding the design and delivery of multimedia content to mobile devices.
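
The connection-count comparison behind that scalability claim can be made concrete with the standard link formulas: a full interconnection of n nodes needs n(n-1)/2 links, a hypercube (n/2)·log2(n), and a square 2-D grid roughly 2n. The short sketch below just evaluates these formulas; the node counts are arbitrary examples.

```python
import math

# Link counts for common interconnection topologies of n nodes.
def full(n):       return n * (n - 1) // 2                       # quadratic growth
def hypercube(n):  return (n // 2) * int(math.log2(n))           # n a power of two
def grid_2d(n):    s = int(math.isqrt(n)); return 2 * s * (s - 1)  # s x s mesh

for n in (16, 64, 256):
    print(n, full(n), hypercube(n), grid_2d(n))
# 16: 120 vs 32 vs 24; 64: 2016 vs 192 vs 112; 256: 32640 vs 1024 vs 480
```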

Relevance:

10.00%

Publisher:

Abstract:

Parallel processing techniques have been used in the past to provide high performance computing resources for activities such as Computational Fluid Dynamics. This is normally achieved using specialized hardware and software, the expense of which would be difficult to justify for many fire engineering practices. In this paper, we demonstrate how typical office-based PCs attached to a local area network have the potential to offer the benefits of parallel processing with minimal costs associated with the purchase of additional hardware or software. A dynamic load balancing scheme was devised to allow the effective use of the software on heterogeneous PC networks. This scheme ensured that the impact of the parallel processing task on other computer users on the network was minimized, thus allowing practical parallel processing within a conventional office environment. Copyright © 2006 John Wiley & Sons, Ltd.
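
The paper's actual load-balancing scheme is not reproduced here; the following is only one plausible rule for a heterogeneous PC network, namely assigning cells in proportion to each machine's measured throughput. Machine names and rates are hypothetical.

```python
# Sketch: distribute grid cells in proportion to measured per-machine throughput.
def partition(total_cells, cells_per_second):
    total_rate = sum(cells_per_second.values())
    shares = {pc: int(total_cells * r / total_rate)
              for pc, r in cells_per_second.items()}
    # Hand any rounding remainder to the fastest machine.
    fastest = max(cells_per_second, key=cells_per_second.get)
    shares[fastest] += total_cells - sum(shares.values())
    return shares

print(partition(100_000, {"pc_a": 5200, "pc_b": 2600, "pc_c": 1300}))
```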

Relevance:

10.00%

Publisher:

Abstract:

Embedded electronic systems in vehicles are of rapidly increasing commercial importance for the automotive industry. While current vehicular embedded systems are extremely limited and static, a more dynamic, configurable system would greatly simplify the integration work and increase the quality of vehicular systems. This brings in features like separation of concerns, customised software configuration for individual vehicles, seamless connectivity, and plug-and-play capability. Furthermore, such a system can also contribute to increased dependability and resource optimization due to its inherent ability to adjust itself dynamically to changes in software, hardware resources, and environmental conditions. This paper describes the architectural approach to achieving the goals of dynamically self-configuring automotive embedded electronic systems by the EU research project DySCAS. The architecture solution outlined in this paper captures the application and operational contexts, expected features, middleware services, functions and behaviours, as well as the basic mechanisms and technologies. The paper also covers the architecture conceptualization by presenting the rationale concerning the architecture structuring, control principles, and deployment concept. In this paper, we also present the adopted architecture V&V strategy and discuss some open issues with regard to industrial acceptance.

Relevance:

10.00%

Publisher:

Abstract:

The established (digital) leisure game industry is historically one dominated by large international hardware vendors (e.g. Sony, Microsoft and Nintendo) and major publishers, supported by a complex network of development studios, distributors and retailers. New modes of digital distribution and development practice are challenging this business model, and the leisure games industry landscape is one experiencing rapid change. The established (digital) leisure games industry, at least anecdotally, appears reluctant to participate actively in the applied games sector (Stewart et al., 2013). There are a number of potential explanations as to why this may indeed be the case, including a concentration on large-scale consolidation of their (proprietary) platforms, content, entertainment brand and credibility, which arguably could be weakened by association with the conflicting notion of purposefulness (in applied games) in market niches without clear business models or quantifiable returns on investment. In contrast, the applied games industry exhibits the characteristics of an emerging, immature industry, namely: weak interconnectedness, limited knowledge exchange, an absence of harmonising standards, limited specialisations, limited division of labour and, arguably, insufficient evidence of the products' efficacies (Stewart et al., 2013; Garcia Sanchez, 2013), and could arguably be characterised as a dysfunctional market. To test these assertions the Realising an Applied Gaming Ecosystem (RAGE) project will develop a number of self-contained gaming assets to be actively employed in the creation of a number of applied games to be implemented and evaluated as regional pilots across a variety of European educational, training and vocational contexts. RAGE is a European Commission Horizon 2020 project with twenty (pan-European) partners from industry, research and education, with the aim of developing, transforming and enriching advanced technologies from the leisure games industry into self-contained gaming assets (i.e. solutions showing economic value potential) that could support a variety of stakeholders including teachers, students and, significantly, game studios interested in developing applied games. RAGE will provide these assets together with a large quantity of high-quality knowledge resources through a self-sustainable Ecosystem, a social space that connects research, the gaming industries, intermediaries, education providers, policy makers and end-users in order to stimulate the development and application of applied games in educational, training and vocational contexts. The authors identify barriers (real and perceived) and opportunities facing stakeholders in engaging, exploring new emergent business models, and developing, establishing and sustaining an applied gaming ecosystem in Europe.

Relevance:

10.00%

Publisher:

Abstract:

As the UK's national marine data centre, a key responsibility of the British Oceanographic Data Centre (BODC) is to provide data management support for the scientific activities of complex multi-disciplinary long-term research programmes. Since the initial cruise in 1995, the NERC-funded Atlantic Meridional Transect (AMT) project has undertaken 18 north-south transects of the Atlantic Ocean. As the project has evolved there has been a steady growth in the number of participants, the volume of data, its complexity and the demand for data. BODC became involved in AMT in 2002, at the beginning of phase II of the programme, and since then has provided continuous support to the AMT and the wider scientific community through data rescue, quality control, processing and access. The data management is carried out by a team of specialists using a sophisticated infrastructure and hardware to manage, integrate and serve physical, biological and chemical data. Here, we discuss the approach adopted, the techniques applied and some guiding principles for the management of large multi-disciplinary programmes.

Relevance:

10.00%

Publisher:

Abstract:

It has been proposed that the field of appropriate technology (AT) - small-scale, energy-efficient and low-cost solutions - can be of tremendous assistance in many of the sustainable development challenges, such as food and water security, health, shelter, education and work opportunities. Unfortunately, there has not yet been a significant uptake of AT by organizations, researchers, policy makers or the mainstream public working in the many areas of the development sector. Some of the biggest barriers to higher AT engagement include: 1) AT being perceived as inferior or 'poor person's technology', 2) questions of technological robustness, design, fit and transferability, 3) funding, 4) institutional support, as well as 5) general barriers associated with tackling rural poverty. With the rise of information and communication technologies (ICTs) for online networking and knowledge sharing, the possibilities to tap into collaborative open-access and open-source AT are growing, and so is the prospect for collective poverty-reducing strategies, enhancement of entrepreneurship, communications, education and a diffusion of life-changing technologies. In short, the same collaborative philosophy employed in the success of open source software can be applied to the hardware design of technologies to improve sustainable development efforts worldwide. To analyze current barriers to open source appropriate technology (OSAT) and explore opportunities to overcome such obstacles, a series of interviews with researchers and organizations working in the field of AT was conducted. The results of the interviews confirmed the majority of the barriers identified in the literature, but also revealed that the most pressing problem for organizations and researchers currently working in the field of AT is the need for much better communication and collaboration to share knowledge and resources and to work in partnership. In addition, the interviews showed general receptiveness to the principles of collaborative innovation and open source at the ground level. A much greater focus on networking, collaboration, demand-led innovation, community participation, and the inclusion of educational institutions through student involvement can be of significant help in building the necessary knowledge base, networks and critical-mass exposure for the growth of appropriate technology.

Relevance:

10.00%

Publisher:

Abstract:

Recent growth in the shape-from-shading psychophysics literature has been paralleled by an increasing availability of computer graphics hardware and software, to the extent that most psychophysical studies in this area now employ computer lighting algorithms. The most widely used algorithm in shape-from-shading psychophysics is the Phong lighting model. This model, and other shading models of its genre, produce readily interpretable images of three-dimensional scenes. However, such algorithms are only approximations of how light interacts with real objects in the natural environment. Nevertheless, the results from psychophysical experiments using these techniques have been used to infer the processes underlying the perception of shape-from-shading in natural environments. It is important to establish whether this substitution is ever valid. We report a series of experiments investigating whether two recently reported illusions seen in computer-generated, Phong-shaded images occur for solid objects under real illuminants. The two illusions investigated are three-dimensional curvature contrast and the illuminant-position effect on perceived curvature. We show that both effects do occur for solid objects, and that the magnitudes of these effects are equivalent regardless of whether subjects are presented with ray-traced or solid objects.
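
For reference, the Phong lighting model referred to above has the standard form I = k_a·i_a + k_d·max(0, N·L)·i_d + k_s·max(0, R·V)^alpha·i_s. The sketch below evaluates that formula for a single surface point; the material coefficients and vectors are illustrative values, not parameters from the study.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def phong(normal, to_light, to_viewer, ka=0.1, kd=0.7, ks=0.2, alpha=10.0):
    """Standard Phong model: ambient + diffuse + specular terms."""
    n, l, v = map(normalize, (normal, to_light, to_viewer))
    ndotl = float(n @ l)
    r = 2.0 * ndotl * n - l                  # light direction reflected about the normal
    diffuse = max(0.0, ndotl)
    specular = max(0.0, float(r @ v)) ** alpha if ndotl > 0 else 0.0
    return ka + kd * diffuse + ks * specular

print(phong(np.array([0.0, 0.0, 1.0]),
            np.array([0.3, 0.2, 1.0]),
            np.array([0.0, 0.0, 1.0])))
```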