963 results for "hardware deskribapen lengoaiak" (hardware description languages)


Relevance: 10.00%

Abstract:

Many Web applications walk the thin line between the need for dynamic data and the need to meet user performance expectations. In environments where funds are not available to constantly upgrade hardware in line with user demand, alternative approaches need to be considered. This paper introduces a ‘Data farming’ model whereby dynamic data, which is ‘grown’ in operational applications, is ‘harvested’ and ‘packaged’ for various consumer markets. Like any well-managed agricultural operation, crops are harvested according to historical and perceived demand as inferred by a self-optimising process. This approach aims to make enhanced use of available resources through better utilisation of system downtime, thereby improving application performance and increasing the availability of key business data.
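
The harvesting idea can be sketched in a few lines of code. The scheduler below is a hypothetical illustration of demand-driven pre-packaging during quiet periods; the class, method names and demand statistics are our own, not the model described in the paper:

```python
from collections import Counter

class HarvestScheduler:
    """Hypothetical 'data farming' scheduler: pre-packages ('harvests')
    the most frequently requested dynamic datasets during quiet periods."""

    def __init__(self, fetch_live, package_store, top_n=5):
        self.fetch_live = fetch_live          # callable: dataset_id -> fresh data
        self.package_store = package_store    # dict-like cache of packaged data
        self.demand = Counter()               # observed demand per dataset
        self.top_n = top_n

    def serve(self, dataset_id):
        """Serve a packaged copy if one exists, otherwise fall back to live data."""
        self.demand[dataset_id] += 1
        return self.package_store.get(dataset_id) or self.fetch_live(dataset_id)

    def harvest(self):
        """Run during system downtime: refresh packages for the datasets
        with the highest historical demand (the 'crops' worth growing)."""
        for dataset_id, _count in self.demand.most_common(self.top_n):
            self.package_store[dataset_id] = self.fetch_live(dataset_id)
```

In use, `harvest()` would be triggered by whatever downtime detection the deployment provides, so that the expensive work of regenerating dynamic data is paid for outside peak hours.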

Relevance: 10.00%

Abstract:

Parallel processing techniques have been used in the past to provide high-performance computing resources for activities such as fire-field modelling. This has traditionally been achieved using specialized hardware and software, the expense of which would be difficult to justify for many fire engineering practices. In this article we demonstrate how typical office-based PCs attached to a local area network have the potential to offer the benefits of parallel processing with minimal costs associated with the purchase of additional hardware or software. It was found that good speedups could be achieved on homogeneous networks of PCs: for example, a problem composed of ~100,000 cells would run 9.3 times faster on a network of twelve 800 MHz PCs than on a single 800 MHz PC. It was also found that a network of eight 3.2 GHz Pentium 4 PCs would run 7.04 times faster than a single 3.2 GHz Pentium computer. A dynamic load balancing scheme was also devised to allow the effective use of the software on heterogeneous PC networks. This scheme also ensured that the impact of the parallel processing task on other computer users of the network was minimized.
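
Expressed as parallel efficiency (our calculation, not part of the abstract), these figures correspond to

$$E = \frac{S}{p}: \qquad E_{12 \times 800\,\mathrm{MHz}} = \frac{9.3}{12} \approx 0.78, \qquad E_{8 \times 3.2\,\mathrm{GHz}} = \frac{7.04}{8} = 0.88,$$

i.e. roughly 78% and 88% of ideal linear speedup, respectively.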

Relevance: 10.00%

Abstract:

The scalability of a computer system is its ability to respond to growth; it depends on the system's hardware, its operating system and the applications it is running. Most distributed systems technology today still depends on bus-based shared memory, which does not scale well, whereas systems based on grid or hypercube schemes require significantly fewer connections than a full interconnection, whose link count grows quadratically. The rapid convergence of mobile communication, digital broadcasting and network infrastructures calls for rich multimedia content that is adaptive and responsive to the needs of individuals, businesses and public organisations. This paper discusses the emergence of mobile multimedia systems and provides an overview of the issues regarding the design and delivery of multimedia content to mobile devices.
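
As a rough illustration of the connection-count argument (our own arithmetic, not from the abstract): a full interconnection of n nodes needs n(n-1)/2 links, whereas a hypercube of n = 2^d nodes needs only n·d/2 links.

```python
import math

def full_interconnect_links(n: int) -> int:
    """Every node connected to every other node: quadratic growth."""
    return n * (n - 1) // 2

def hypercube_links(n: int) -> int:
    """Hypercube of n = 2^d nodes: each node has d neighbours, n*d/2 links in total."""
    d = int(math.log2(n))
    assert 2 ** d == n, "hypercube size must be a power of two"
    return n * d // 2

for n in (16, 64, 256):
    print(n, full_interconnect_links(n), hypercube_links(n))
# 16 -> 120 vs 32, 64 -> 2016 vs 192, 256 -> 32640 vs 1024
```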

Relevance: 10.00%

Abstract:

Parallel processing techniques have been used in the past to provide high-performance computing resources for activities such as Computational Fluid Dynamics. This is normally achieved using specialized hardware and software, the expense of which would be difficult to justify for many fire engineering practices. In this paper, we demonstrate how typical office-based PCs attached to a local area network have the potential to offer the benefits of parallel processing with minimal costs associated with the purchase of additional hardware or software. A dynamic load balancing scheme was devised to allow the effective use of the software on heterogeneous PC networks. This scheme ensured that the impact of the parallel processing task on other computer users of the network was minimized, thus allowing practical parallel processing within a conventional office environment. Copyright © 2006 John Wiley & Sons, Ltd.
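
The dynamic load-balancing idea can be sketched as follows; the measurement and re-partitioning details here are hypothetical and much simpler than the scheme in the paper:

```python
def rebalance(cell_counts, measured_rates):
    """Redistribute computational cells across heterogeneous PCs in proportion
    to each machine's recently measured processing rate (cells per second),
    so faster or less busy PCs receive more work.

    cell_counts    : dict  pc_name -> current number of cells
    measured_rates : dict  pc_name -> cells processed per second
    returns        : dict  pc_name -> new number of cells
    """
    total_cells = sum(cell_counts.values())
    total_rate = sum(measured_rates.values())
    new_counts = {pc: int(total_cells * rate / total_rate)
                  for pc, rate in measured_rates.items()}
    # Give any cells lost to integer rounding to the fastest machine.
    fastest = max(measured_rates, key=measured_rates.get)
    new_counts[fastest] += total_cells - sum(new_counts.values())
    return new_counts

# Example: a PC that is also running interactive office work slows down and
# is given a smaller share of the domain on the next rebalance.
print(rebalance({"pc1": 50_000, "pc2": 50_000},
                {"pc1": 4_000, "pc2": 1_000}))   # {'pc1': 80000, 'pc2': 20000}
```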

Relevance: 10.00%

Abstract:

Embedded electronic systems in vehicles are of rapidly increasing commercial importance for the automotive industry. While current vehicular embedded systems are extremely limited and static, a more dynamically configurable system would greatly simplify integration work and increase the quality of vehicular systems. This brings in features such as separation of concerns, customised software configuration for individual vehicles, seamless connectivity, and plug-and-play capability. Furthermore, such a system can also contribute to increased dependability and resource optimization owing to its inherent ability to adjust itself dynamically to changes in software, hardware resources, and environmental conditions. This paper describes the architectural approach to achieving the goals of dynamically self-configuring automotive embedded electronic systems pursued by the EU research project DySCAS. The architecture solution outlined in this paper captures the application and operational contexts, expected features, middleware services, functions and behaviours, as well as the basic mechanisms and technologies. The paper also covers the architecture conceptualization by presenting the rationale behind the architecture structuring, control principles, and deployment concept. We also present the adopted architecture V&V strategy and discuss some open issues with regard to industrial acceptance.
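
The general idea of runtime self-configuration can be illustrated with a deliberately simplified sketch; the class, greedy mapping policy and capacity model below are hypothetical and are not the DySCAS middleware:

```python
class ConfigurationManager:
    """Hypothetical illustration of runtime self-configuration: software
    functions are re-mapped whenever the set of available processing nodes
    changes, e.g. when a device is plugged in or fails."""

    def __init__(self):
        self.nodes = {}        # node_id -> spare capacity
        self.functions = {}    # function_name -> required capacity
        self.mapping = {}      # function_name -> node_id

    def node_changed(self, node_id, capacity=None):
        """Called on plug-in (capacity given) or removal (capacity=None)."""
        if capacity is None:
            self.nodes.pop(node_id, None)
        else:
            self.nodes[node_id] = capacity
        self._remap()

    def add_function(self, name, required):
        self.functions[name] = required
        self._remap()

    def _remap(self):
        """Greedy re-mapping: place the largest functions on the nodes
        with the most spare capacity."""
        free = dict(self.nodes)
        self.mapping = {}
        for name, need in sorted(self.functions.items(),
                                 key=lambda kv: kv[1], reverse=True):
            best = max(free, key=free.get, default=None)
            if best is not None and free[best] >= need:
                self.mapping[name] = best
                free[best] -= need
```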

Relevance: 10.00%

Abstract:

The established (digital) leisure game industry is historically one dominated by large international hardware vendors (e.g. Sony, Microsoft and Nintendo) and major publishers, supported by a complex network of development studios, distributors and retailers. New modes of digital distribution and development practice are challenging this business model, and the leisure games industry landscape is one experiencing rapid change. The established (digital) leisure games industry, at least anecdotally, appears reluctant to participate actively in the applied games sector (Stewart et al., 2013). There are a number of potential explanations as to why this may be the case, including a concentration on large-scale consolidation of their (proprietary) platforms, content, entertainment brand and credibility, which arguably could be weakened by association with the conflicting notion of purposefulness (in applied games) in market niches without clear business models or quantifiable returns on investment. In contrast, the applied games industry exhibits the characteristics of an emerging, immature industry, namely: weak interconnectedness, limited knowledge exchange, an absence of harmonising standards, limited specialisations, limited division of labour and arguably insufficient evidence of the products' efficacies (Stewart et al., 2013; Garcia Sanchez, 2013), and could arguably be characterised as a dysfunctional market. To test these assertions, the Realising an Applied Gaming Ecosystem (RAGE) project will develop a number of self-contained gaming assets to be actively employed in the creation of a number of applied games to be implemented and evaluated as regional pilots across a variety of European educational, training and vocational contexts. RAGE is a European Commission Horizon 2020 project with twenty (pan-European) partners from industry, research and education, with the aim of developing, transforming and enriching advanced technologies from the leisure games industry into self-contained gaming assets (i.e. solutions showing economic value potential) that could support a variety of stakeholders including teachers, students and, significantly, game studios interested in developing applied games. RAGE will provide these assets together with a large quantity of high-quality knowledge resources through a self-sustaining ecosystem: a social space that connects research, the gaming industries, intermediaries, education providers, policy makers and end-users in order to stimulate the development and application of applied games in educational, training and vocational contexts. The authors identify barriers (real and perceived) and opportunities facing stakeholders in engaging with and exploring new emergent business models, and in developing, establishing and sustaining an applied gaming ecosystem in Europe.

Relevance: 10.00%

Abstract:

As the UK's national marine data centre, a key responsibility of the British Oceanographic Data Centre (BODC) is to provide data management support for the scientific activities of complex multi-disciplinary long-term research programmes. Since the initial cruise in 1995, the NERC-funded Atlantic Meridional Transect (AMT) project has undertaken 18 north–south transects of the Atlantic Ocean. As the project has evolved there has been a steady growth in the number of participants, the volume of data, its complexity and the demand for data. BODC became involved in AMT in 2002, at the beginning of phase II of the programme, and since then has provided continuous support to the AMT and the wider scientific community through data rescue, quality control, processing and the provision of access to the data. The data management is carried out by a team of specialists using a sophisticated infrastructure and hardware to manage, integrate and serve physical, biological and chemical data. Here, we discuss the approach adopted, the techniques applied and some guiding principles for the management of large multi-disciplinary programmes.

Relevance: 10.00%

Abstract:

It has been proposed that the field of appropriate technology (AT) - small-scale, energy-efficient and low-cost solutions - can be of tremendous assistance with many sustainable development challenges, such as food and water security, health, shelter, education and work opportunities. Unfortunately, there has not yet been a significant uptake of AT by organizations, researchers, policy makers or the mainstream public working in the many areas of the development sector. Some of the biggest barriers to greater AT engagement include: 1) AT being perceived as inferior or ‘poor person's technology’, 2) questions of technological robustness, design, fit and transferability, 3) funding, 4) institutional support, as well as 5) general barriers associated with tackling rural poverty. With the rise of information and communication technologies (ICTs) for online networking and knowledge sharing, the possibilities for tapping into collaborative open-access and open-source AT are growing, and so is the prospect for collective poverty-reducing strategies, enhancement of entrepreneurship, communications, education and a diffusion of life-changing technologies. In short, the same collaborative philosophy employed in the success of open-source software can be applied to the hardware design of technologies to improve sustainable development efforts worldwide. To analyze current barriers to open-source appropriate technology (OSAT) and explore opportunities to overcome such obstacles, a series of interviews with researchers and organizations working in the field of AT was conducted. The results of the interviews confirmed the majority of the barriers identified in the literature, but also revealed that the most pressing problem for organizations and researchers currently working in the field of AT is the need for much better communication and collaboration to share knowledge and resources and to work in partnership. In addition, the interviews showed a general receptiveness to the principles of collaborative innovation and open source at the ground level. A much greater focus on networking, collaboration, demand-led innovation, community participation, and the inclusion of educational institutions through student involvement could significantly help to build the necessary knowledge base, networks and critical-mass exposure for the growth of appropriate technology.

Relevance: 10.00%

Abstract:

Recent growth in the shape-from-shading psychophysics literature has been paralleled by an increasing availability of computer graphics hardware and software, to the extent that most psychophysical studies in this area now employ computer lighting algorithms. The most widely used algorithm in shape-from-shading psychophysics is the Phong lighting model. This model, and other shading models of its genre, produce readily interpretable images of three-dimensional scenes. However, such algorithms are only approximations of how light interacts with real objects in the natural environment. Nevertheless, the results from psychophysical experiments using these techniques have been used to infer the processes underlying the perception of shape-from-shading in natural environments. It is important to establish whether this substitution is ever valid. We report a series of experiments investigating whether two recently reported illusions seen in computer-generated, Phong-shaded images also occur for solid objects under real illuminants. The two illusions investigated are three-dimensional curvature contrast and the illuminant-position effect on perceived curvature. We show that both effects do occur for solid objects, and that the magnitudes of these effects are equivalent regardless of whether subjects are presented with ray-traced or solid objects.
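
For reference, the Phong lighting model referred to above computes the intensity at a surface point as a sum of ambient, diffuse and specular terms (standard textbook notation, not necessarily that used in the experiments):

$$I = k_a\, i_a + k_d\,(\hat{L}\cdot\hat{N})\, i_d + k_s\,(\hat{R}\cdot\hat{V})^{\alpha}\, i_s,$$

where $k_a$, $k_d$ and $k_s$ are the ambient, diffuse and specular reflection coefficients, $\hat{L}$, $\hat{N}$, $\hat{R}$ and $\hat{V}$ are the unit light, surface-normal, reflection and viewing directions, and $\alpha$ is the shininess exponent.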

Relevance: 10.00%

Abstract:

A BSP (Bulk Synchronous Parallelism) computation is characterized by the generation of asynchronous messages in packages during independent execution of a number of processes and their subsequent delivery at synchronization points. Bundling messages together represents a significant departure from the traditional ‘one communication at a time’ approach. In this paper the semantic consequences of communication packaging are explored. In particular, the BSP communication structure is identified with a general form of substitution—predicate substitution. Predicate substitution provides a means of reasoning about the synchronized delivery of asynchronous communications when the immediate programming context does not explicitly refer to the variables that are to be updated (unlike traditional operations, such as the assignment $x := e$, where the names of the updated variables can be extracted from the context). Proofs of implementations of Newton's root finding method and prefix sum are used to illustrate the practical application of the proposed approach.
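
As an informal illustration of the superstep structure (bundled asynchronous sends followed by synchronized delivery), here is a minimal sequential simulation of a BSP prefix sum; it is our own sketch, not the formal development given in the paper:

```python
def bsp_prefix_sum(values):
    """Sequential simulation of a BSP prefix sum.  Within each superstep every
    'process' posts its messages into a bundle; the bundle is delivered only
    at the barrier, when the superstep ends."""
    p = len(values)
    local = list(values)                  # local[i] is process i's partial sum
    d = 1
    while d < p:
        outbox = {}                       # bundled messages: destination -> value
        for i in range(p):                # independent local computation + sends
            if i + d < p:
                outbox[i + d] = local[i]
        for dest, value in outbox.items():    # barrier: synchronized delivery
            local[dest] += value
        d *= 2
    return local

print(bsp_prefix_sum([1, 2, 3, 4, 5]))   # [1, 3, 6, 10, 15]
```

Note that the updates to `local` happen only after the whole bundle has been built, mirroring the delivery of asynchronous communications at the synchronization point.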

Relevance: 10.00%

Abstract:

Architectures and methods for the rapid design of silicon cores for implementing discrete wavelet transforms over a wide range of specifications are described. These architectures are efficient, modular, scalable, and cover orthonormal and biorthogonal wavelet transform families. They offer efficient hardware utilization by exploiting a number of core wavelet filter properties and allow the creation of silicon designs that are highly parameterized, including in terms of wavelet type and wordlengths. Control circuitry is embedded within these systems, allowing them to be cascaded for any desired level of decomposition without any interface glue logic. The time to produce chip designs for a specific wavelet application is typically less than a day, and the resulting designs are comparable in area and performance to handcrafted designs. They are also portable across a wide range of silicon foundries and suitable for field-programmable gate array and programmable logic device implementation. The approach described has also been extended to wavelet packet transforms.
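
To show the filter-bank operation these cores implement, here is a one-level orthonormal (Haar) discrete wavelet transform in software; it is an illustrative sketch only, not the silicon architecture described above:

```python
import math

def haar_dwt_level(signal):
    """One decomposition level of the orthonormal Haar wavelet transform:
    low-pass (approximation) and high-pass (detail) outputs, each downsampled
    by two."""
    assert len(signal) % 2 == 0, "signal length must be even"
    s = 1.0 / math.sqrt(2.0)
    approx = [s * (signal[i] + signal[i + 1]) for i in range(0, len(signal), 2)]
    detail = [s * (signal[i] - signal[i + 1]) for i in range(0, len(signal), 2)]
    return approx, detail

# Cascading the transform on the approximation coefficients gives further
# decomposition levels, mirroring the cascaded cores described above.
a, d = haar_dwt_level([4.0, 6.0, 10.0, 12.0, 14.0, 14.0, 8.0, 6.0])
print(a)   # approximation (low-pass) coefficients
print(d)   # detail (high-pass) coefficients
```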

Relevance: 10.00%

Abstract:

A novel application-specific instruction set processor (ASIP) for use in the construction of modern signal processing systems is presented. This is a flexible device that can be used in the construction of array processor systems for the real-time implementation of functions such as singular-value decomposition (SVD) and QR decomposition (QRD), as well as other important matrix computations. It uses a coordinate rotation digital computer (CORDIC) module to perform arithmetic operations, and several approaches are adopted to achieve high performance, including pipelining of the micro-rotations, the use of parallel instructions and a dual-bus architecture. In addition, a novel method for scale factor correction is presented which only needs to be applied once at the end of the computation; this also reduces computation time and enhances performance. Methods are described which allow this processor to be used in reduced-dimension (i.e., folded) array processor structures that permit tradeoffs between hardware and performance. The net result is a flexible matrix computational processing element (PE) whose functionality can be changed under program control for use in a wider range of scenarios than previous work. Details are presented of the results of a design study which considers the application of this decomposition PE architecture in a combined SVD/QRD system and demonstrates that a combination of high performance and efficient silicon implementation is achievable. © 2005 IEEE.
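
A CORDIC module decomposes a rotation into a fixed sequence of shift-and-add micro-rotations. The software sketch below (our own illustration, not the ASIP's datapath) shows rotation-mode CORDIC with the accumulated gain removed by a single scale-factor correction at the end, echoing the once-at-the-end correction mentioned in the abstract:

```python
import math

def cordic_rotate(x, y, angle, iterations=32):
    """Rotation-mode CORDIC: rotate the vector (x, y) by 'angle' radians using
    micro-rotations that a hardware implementation realises with shifts and adds.
    The accumulated gain is removed by one scale-factor correction at the end."""
    # Pre-computed micro-rotation angles atan(2^-i) and the overall gain K.
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    gain = math.prod(math.sqrt(1.0 + 2.0 ** (-2 * i)) for i in range(iterations))

    z = angle
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x / gain, y / gain     # single scale-factor correction

print(cordic_rotate(1.0, 0.0, math.pi / 4))   # approximately (0.7071, 0.7071)
```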

Relevance: 10.00%

Abstract:

A novel wireless local area network (WLAN) security processor is described in this paper. It is designed to offload security encapsulation processing from the host microprocessor in an IEEE 802.11i-compliant medium access control layer to a programmable hardware accelerator. The unique design, which comprises dedicated cryptographic instructions and hardware coprocessors, is capable of performing wired equivalent privacy (WEP), temporal key integrity protocol (TKIP), counter mode with cipher block chaining message authentication code protocol (CCMP), and wireless robust authentication protocol (WRAP) processing. Existing solutions to wireless security have been implemented on hardware devices and target specific WLAN protocols, whereas the programmable security processor proposed in this paper provides support for all WLAN protocols and thus can offer backwards compatibility as well as future upgradeability as standards evolve. It provides this additional functionality while still achieving throughput rates equivalent to those of existing architectures. © 2006 IEEE.
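
CCMP protects each frame with AES in CCM mode, using a 128-bit temporal key, a 13-byte nonce and an 8-byte MIC. The following sketch uses the Python `cryptography` package purely to illustrate the kind of encapsulation the hardware coprocessors perform; the header values and field layout here are simplified placeholders, not the exact 802.11i frame construction:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

# Simplified CCMP-style encapsulation (illustrative only).
temporal_key = AESCCM.generate_key(bit_length=128)
aesccm = AESCCM(temporal_key, tag_length=8)      # 8-byte MIC, as in CCMP

nonce = os.urandom(13)                            # CCMP uses a 13-byte nonce
plaintext_mpdu = b"example MAC protocol data unit payload"
aad = b"masked MAC header fields"                 # authenticated but not encrypted

ciphertext_and_mic = aesccm.encrypt(nonce, plaintext_mpdu, aad)
recovered = aesccm.decrypt(nonce, ciphertext_and_mic, aad)
assert recovered == plaintext_mpdu
```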