87 results for Pervasive Computing


Relevance: 20.00%

Abstract:

How can a bridge be built between autonomic computing approaches and parallel computing systems? The work reported in this paper aims to bridge this gap by proposing a swarm-array computing approach based on ‘Intelligent Agents’ to achieve autonomy for distributed parallel computing systems. In the proposed approach, a task to be executed on parallel computing cores is carried onto a computing core by carrier agents that can seamlessly transfer between processing cores in the event of a predicted failure. The cognitive capabilities of the carrier agents on a parallel processing core serve to achieve the self-ware objectives of autonomic computing, thereby applying autonomic computing concepts for the benefit of parallel computing systems. The feasibility of the proposed approach is validated by simulation studies using a multi-agent simulator on an FPGA (Field-Programmable Gate Array) and by experimental studies using MPI (Message Passing Interface) on a computer cluster. Preliminary results confirm that applying autonomic computing principles to parallel computing systems is beneficial.
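The abstract does not give the carrier-agent mechanism in detail; the following is a minimal sketch in plain Python of the idea it describes, an agent carrying a task that hops to a healthy core when its current core is predicted to fail. The class name, the set-based failure prediction, and the "move to the first healthy core" policy are all assumptions for illustration, not the paper's implementation.

```python
class CarrierAgent:
    """Hypothetical carrier agent: holds a task and migrates on predicted core failure."""

    def __init__(self, task_id, core):
        self.task_id = task_id
        self.core = core
        self.hops = 0  # how many times this agent has migrated

    def migrate(self, healthy_cores):
        # Assumed policy: move the carried task to the first healthy core.
        self.core = healthy_cores[0]
        self.hops += 1


def run_step(agents, predicted_failures, all_cores):
    """One scheduling step: any agent on a core predicted to fail transfers itself."""
    healthy = [c for c in all_cores if c not in predicted_failures]
    for agent in agents:
        if agent.core in predicted_failures:
            agent.migrate(healthy)
    return agents


cores = [0, 1, 2, 3]
agents = [CarrierAgent(task, task % 4) for task in range(8)]
run_step(agents, predicted_failures={2}, all_cores=cores)
# After the step, no agent remains on the failing core 2.
```

The point of the sketch is only the control flow: the task never restarts, it travels with its agent away from the predicted failure.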

Relevance: 20.00%

Abstract:

Recent research in multi-agent systems incorporates fault tolerance concepts. However, the research does not explore the extension and implementation of such ideas for large-scale parallel computing systems. The work reported in this paper investigates a swarm-array computing approach, namely ‘Intelligent Agents’. In the approach considered, a task to be executed on a parallel computing system is decomposed into sub-tasks and mapped onto agents that traverse an abstracted hardware layer. The agents intercommunicate across processors to share information in the event of a predicted core/processor failure and to complete the task successfully. The agents hence contribute towards fault tolerance and towards building reliable systems. The feasibility of the approach is validated by simulations on an FPGA using a multi-agent simulator and by the implementation of a parallel reduction algorithm on a computer cluster using the Message Passing Interface.
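The parallel reduction mentioned at the end is a standard pattern; a minimal sequential sketch of the pairwise (tree-shaped) combining that an MPI reduction performs across ranks, with each list element standing in for one agent's sub-result. The decomposition (one squared value per simulated rank) is an assumed example, not the paper's workload.

```python
def tree_reduce(values, op):
    """Combine values pairwise, halving the list each round, as a tree reduction does."""
    vals = list(values)
    while len(vals) > 1:
        paired = []
        for i in range(0, len(vals) - 1, 2):
            paired.append(op(vals[i], vals[i + 1]))
        if len(vals) % 2:          # odd element carries over to the next round
            paired.append(vals[-1])
        vals = paired
    return vals[0]


# A task decomposed into sub-results, one per simulated agent/rank, then reduced.
sub_results = [rank * rank for rank in range(8)]
total = tree_reduce(sub_results, lambda a, b: a + b)
```

On a real cluster each round of pairwise combining happens in parallel, so the reduction finishes in a logarithmic number of steps rather than a linear one.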

Relevance: 20.00%

Abstract:

Purpose – Facilities managers have reduced visibility of how buildings are being used owing to flexible working and an unpredictable workforce. The purpose of this paper is to examine the current issues in workspace management and an automatic solution through radio frequency identification (RFID) that could provide real-time information on the volume and capacity of buildings. Design/methodology/approach – The study described in this paper is based on a case study at a facilities management (FM) department. The department is examining a ubiquitous technology in the form of innovative RFID for security and workspace management. Interviews and observations were conducted within the facilities department for the initial phase of the implementation of RFID technology. Findings – Research suggests that work methods are evolving and becoming more flexible. With this in mind, facilities managers face new challenges in creating a suitable environment for an unpredictable workforce. RFID is one solution that could provide facilities managers with an automatic way of examining space in real time and over a wider area than is currently possible. Using RFID solely for space management is financially expensive, but extending the application to other areas makes more business sense. Practical implications – This paper will provide practising FM professionals and academics with the knowledge gained from the application of RFID in this organisation. While the concept of flexible working seems attractive, there is an emerging need to provide various forms of spaces that enable employees’ satisfaction and enhance the productivity of the organisation. Originality/value – The paper introduces new thinking on the subject of “workspace management”. It highlights the current difficulties in workspace management and how an RFID solution will benefit workspace methods.
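The paper does not describe how RFID reads would be turned into real-time occupancy figures; one plausible sketch, with an entirely assumed event schema (tag, zone, enter/exit), shows the kind of aggregation involved:

```python
from collections import defaultdict


def occupancy(reads):
    """Hypothetical sketch: derive per-zone headcounts from (tag, zone, event) RFID reads."""
    location = {}  # tag -> zone the tag was last seen entering
    for tag, zone, event in reads:
        if event == "enter":
            location[tag] = zone
        elif event == "exit" and location.get(tag) == zone:
            del location[tag]
    counts = defaultdict(int)
    for zone in location.values():
        counts[zone] += 1
    return dict(counts)


reads = [
    ("badge1", "floor2", "enter"),
    ("badge2", "floor2", "enter"),
    ("badge1", "floor2", "exit"),
    ("badge3", "floor3", "enter"),
]
# occupancy(reads) -> {"floor2": 1, "floor3": 1}
```

A real deployment would also need to handle missed reads and tag timeouts; this only illustrates the basic enter/exit bookkeeping.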

Relevance: 20.00%

Abstract:

Clusters of computers can be used together to provide a powerful computing resource. Large Monte Carlo simulations, such as those used to model particle growth, are computationally intensive and take considerable time to execute on conventional workstations. By spreading the work of the simulation across a cluster of computers, the elapsed execution time can be greatly reduced. Thus a user can obtain the apparent performance of a supercomputer by using the spare cycles of other workstations.
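The abstract gives no code; a minimal stand-in, using Python's `multiprocessing` pool in place of a workstation cluster, shows why Monte Carlo work splits so cleanly: trials are independent, so each worker runs its own batch with its own seed and only the counts are combined. Estimating π by random sampling is an assumed example, not the particle-growth model the paper refers to.

```python
import random
from multiprocessing import Pool


def trials(args):
    """Run n independent trials with a private RNG; count hits inside the unit circle."""
    n, seed = args
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits


def parallel_pi(total, workers=4):
    """Split `total` trials across workers and combine the counts."""
    per = total // workers
    with Pool(workers) as pool:
        hits = sum(pool.map(trials, [(per, seed) for seed in range(workers)]))
    return 4.0 * hits / (per * workers)


if __name__ == "__main__":
    print(parallel_pi(400_000))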

Relevance: 20.00%

Abstract:

Analogue computers provide actual rather than virtual representations of model systems. They are powerful and engaging computing machines that are cheap and simple to build. This two-part Retronics article helps you build (and understand!) your own analogue computer to simulate the Lorenz butterfly that has become iconic in chaos theory.
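The analogue circuit integrates the Lorenz equations continuously with op-amp integrators; the same system can be sketched digitally. A minimal explicit-Euler integration in Python, with the conventional parameters σ = 10, ρ = 28, β = 8/3 and an assumed step size (the article's circuit values are not reproduced here):

```python
def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One explicit-Euler step of the Lorenz equations."""
    dx = sigma * (y - x)        # dx/dt
    dy = x * (rho - z) - y      # dy/dt
    dz = x * y - beta * z       # dz/dt
    return x + dx * dt, y + dy * dt, z + dz * dt


# Integrate from a point near the attractor: the trajectory stays bounded
# (the "butterfly") but never settles or repeats.
state = (1.0, 1.0, 1.0)
for _ in range(5000):
    state = lorenz_step(*state)
```

Where the digital version advances in discrete steps, the analogue computer solves the same three coupled equations in continuous time, which is exactly what makes it an "actual rather than virtual" representation of the system.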