7 results for interlocked architectures
in Digital Peer Publishing
Abstract:
PDP++ is a freely available, open source software package designed to support the development, simulation, and analysis of research-grade connectionist models of cognitive processes. It supports most popular parallel distributed processing paradigms and artificial neural network architectures, and it also provides an implementation of the LEABRA computational cognitive neuroscience framework. Models are typically constructed and examined using the PDP++ graphical user interface, but the system may also be extended through the incorporation of user-written C++ code. This article briefly reviews the features of PDP++, focusing on its utility for teaching cognitive modeling concepts and skills to university undergraduate and graduate students. An informal evaluation of the software as a pedagogical tool is provided, based on the author’s classroom experiences at three research universities and several conference-hosted tutorials.
Abstract:
As education providers increasingly integrate digital learning media into their educational processes, the need for the systematic management of learning materials and learning arrangements becomes clearer. Digital repositories, often called Learning Object Repositories (LOR), promise to provide an answer to this challenge. This article is composed of two parts. In this first part, we derive technological and pedagogical requirements for LORs from a concretization of information quality criteria for e-learning technology. We review the evolution of learning object repositories and discuss their core features in the context of pedagogical requirements, information quality demands, and e-learning technology standards. We conclude with an outlook on Part 2, which presents concrete technical solutions, in particular networked repository architectures.
Abstract:
In Part 1 of this article we discussed the need for information quality and for the systematic management of learning materials and learning arrangements. Digital repositories, often called Learning Object Repositories (LOR), were introduced as a promising answer to this challenge. We also derived technological and pedagogical requirements for LORs from a concretization of information quality criteria for e-learning technology. This second part presents technical solutions that particularly address the demands of open education movements, which aspire to a global culture of reuse and sharing. From this viewpoint, we develop core requirements for scalable network architectures for educational content management. We then present edu-sharing, an advanced example of a network of homogeneous repositories for learning resources, and discuss related technology. We conclude with an outlook on emerging developments towards open and networked system architectures in e-learning.
Abstract:
The next generation of learners expects more informality in learning. Formal learning systems such as traditional LMSs no longer meet the needs of a generation of learners used to Twitter and Facebook, social networking, and user-generated content. Nevertheless, formal content and learning models remain important and play a major role in educating learners, particularly in the enterprise. The eLite project at DERI addressed this emerging dichotomy of learning styles, reconciling the traditional with the avant-garde by using innovative technology to add informal learning capabilities to formal learning architectures.
Abstract:
Mixed Reality (MR) aims to link virtual entities with the real world and has many applications, for example in the military and medical domains [JBL+00, NFB07]. In many MR systems, and more precisely in augmented scenes, the application needs to render the virtual part accurately and at the right time. To achieve this, such systems acquire data related to the real world from a set of sensors before rendering virtual entities. A suitable system architecture should minimize delays to keep the overall system delay (also called end-to-end latency) within the requirements for real-time performance. In this context, we propose a compositional modeling framework for MR software architectures in order to specify, simulate, and formally validate the time constraints of such systems. Our approach is based first on a functional decomposition of such systems into generic components. The resulting elements, as well as their typical interactions, give rise to generic representations in terms of timed automata. A whole system is then obtained as a composition of these components. To write specifications, a textual language named MIRELA (MIxed REality LAnguage) is proposed, along with the corresponding compilation tools. The generated output contains timed automata in UPPAAL format for simulation and verification of time constraints. These automata may also be used to generate source code skeletons for an implementation on an MR platform. The approach is illustrated first on a small example; a realistic case study, modeled by several timed automata synchronizing through channels and including a large number of time constraints, is also developed. Both systems have been simulated in UPPAAL and checked against the required behavioral properties.
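As a rough illustration of the timing concern this abstract describes, the sketch below sums hypothetical worst-case delays of a sensing, processing, and rendering pipeline against a frame budget. All component names and numbers are invented for illustration; the paper itself verifies such constraints formally, on timed automata in UPPAAL, rather than by simple summation.

```cpp
// Illustrative only: a back-of-the-envelope end-to-end latency check for a
// sensing -> processing -> rendering MR pipeline. Component names and delay
// bounds are hypothetical, not taken from the paper.
#include <cstdio>
#include <vector>

struct Component {
    const char* name;
    double worstCaseMs;  // assumed worst-case delay of this stage
};

int main() {
    // A generic functional decomposition into components (values invented).
    std::vector<Component> pipeline = {
        {"sensor acquisition", 8.0},
        {"tracking/fusion",    5.0},
        {"rendering",         12.0},
    };
    const double budgetMs = 33.3;  // e.g. one frame at ~30 Hz

    double total = 0.0;
    for (const auto& c : pipeline) total += c.worstCaseMs;  // end-to-end latency

    std::printf("worst-case end-to-end latency: %.1f ms (budget %.1f ms)\n",
                total, budgetMs);
    std::printf("%s\n", total <= budgetMs ? "constraint satisfied"
                                          : "constraint violated");
    return 0;
}
```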
Abstract:
The high complexity of cellular intralogistics systems and of their control architecture suggests the use of modern simulation and visualization techniques, so that statements about the performance and future-proofness of a planned system can be made in advance. This paper presents a concept for a simulation system for the VR-based verification of the control of cellular intralogistics systems. We describe the construction of a simulation model for a real, existing facility and give an overview of the components of the simulation, in particular the connection to the controller of the real agent-based system.
Abstract:
In this paper we present several contributions to the optimization of collision detection, centered on hardware performance. We focus on the broad phase, the first step of the collision detection process, and propose three new ways of parallelizing the well-known Sweep and Prune algorithm. We first developed a multi-core model that takes the number of available cores into account. The multi-core architecture enables us to distribute geometric computations across threads. Critical write sections and thread idling have been minimized by introducing new data structures for each thread. Programming with directives, as in OpenMP, appears to be a good compromise for code portability. We then propose a new GPU-based algorithm, also based on Sweep and Prune, that has been adapted to multi-GPU architectures. Our technique uses a spatial subdivision method to distribute computations among GPUs. Results show that a significant speed-up can be obtained by moving from one to four GPUs in a large-scale environment.
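A minimal sketch of the multi-core idea described above, assuming axis-aligned bounding boxes swept along the x axis; the names (Aabb, findOverlaps) are illustrative, not taken from the paper. Per-thread output buffers stand in for the per-thread data structures the abstract mentions, so no critical section is needed while candidate pairs are collected.

```cpp
// Sketch of a multi-threaded "Sweep and Prune" broad phase with OpenMP.
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>
#include <omp.h>

struct Aabb { float min[3], max[3]; int id; };

std::vector<std::pair<int,int>> findOverlaps(std::vector<Aabb> boxes) {
    // Sort once along the x axis (the "sweep" axis).
    std::sort(boxes.begin(), boxes.end(),
              [](const Aabb& a, const Aabb& b) { return a.min[0] < b.min[0]; });

    // One result buffer per thread: no shared critical section while writing.
    std::vector<std::vector<std::pair<int,int>>> local(omp_get_max_threads());

    #pragma omp parallel for schedule(dynamic)
    for (std::ptrdiff_t i = 0; i < (std::ptrdiff_t)boxes.size(); ++i) {
        auto& out = local[omp_get_thread_num()];
        for (std::size_t j = i + 1; j < boxes.size(); ++j) {
            // Prune: once x-intervals separate, no later box can overlap box i.
            if (boxes[j].min[0] > boxes[i].max[0]) break;
            const bool hit =
                boxes[i].min[1] <= boxes[j].max[1] && boxes[j].min[1] <= boxes[i].max[1] &&
                boxes[i].min[2] <= boxes[j].max[2] && boxes[j].min[2] <= boxes[i].max[2];
            if (hit) out.emplace_back(boxes[i].id, boxes[j].id);
        }
    }

    // Merge per-thread buffers into the final candidate-pair list.
    std::vector<std::pair<int,int>> pairs;
    for (auto& l : local) pairs.insert(pairs.end(), l.begin(), l.end());
    return pairs;
}
```

Sorting once along a single axis and breaking the inner scan as soon as the x-intervals separate is the essence of Sweep and Prune; schedule(dynamic) helps balance the uneven scan lengths across cores.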