909 results for supramolecular architectures


Relevance: 10.00%

Abstract:

The ultimate goal of this research plan is to improve the learning experience of students through the combination of pedagogical eLearning services. Service-oriented architectures are already being used in eLearning, but in this work the focus is on services of pedagogical value rather than on generic services adapted from other business systems. This approach to the architecture of eLearning platforms raises challenges addressed by this work, namely: conceptual modeling of the pedagogical eLearning services domain; interoperability and coordination of pedagogical eLearning services; conversion of existing eLearning systems to pedagogical services; and adaptation of eLearning services to individual learners. An improved eLearning platform will incorporate learning tools suited to the domains it covers and will focus on the individual learner who uses it. With this approach we expect to raise the pedagogical value of eLearning platforms.

Relevance: 10.00%

Abstract:

Learning systems are evolving from component-based and centralized architectures towards service-oriented and decentralized architectures. The standardization of e-learning content and interoperability is a powerful force in this evolution. In this chapter we put the evolution of e-learning systems and standards into perspective, and argue that specialized services will play an important role in future learning systems, especially in those targeted at competitive learning.

Relevance: 10.00%

Abstract:

In recent years, several initiatives promoted by educational organizations have emerged to adapt Service-Oriented Architectures (SOA) to e-learning. These initiatives, commonly named eLearning Frameworks, share a common goal: to create flexible learning environments by integrating heterogeneous systems already available in many educational institutions. However, these frameworks were designed for the integration of systems participating in business-like processes rather than in complex pedagogical processes such as those related to automatic evaluation. Consequently, their knowledge bases lack some fundamental components needed to model pedagogical processes. The objective of the research described in this paper is to study the applicability of eLearning frameworks to modelling a network of heterogeneous eLearning systems, using the automatic evaluation of programming exercises as a case study. The paper surveys the existing eLearning frameworks to justify the selection of the e-Framework. This framework is described in detail, and the components missing from its knowledge base are identified, more precisely a service genre, expression and usage model for an evaluation service. The extensibility of the framework is tested with the definition of this service. A concrete model for the evaluation of programming exercises is presented as a validation of the proposed approach.
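
As an illustration of the kind of evaluation service such missing components would describe, the sketch below models a hypothetical request/report contract for grading programming exercises. The names (EvaluationRequest, evaluate, the report fields) are assumptions made for this example only and are not taken from the e-Framework's actual service expression or usage model.

```python
from dataclasses import dataclass, field

# Hypothetical data model for an exercise-evaluation service; the field
# names are illustrative assumptions, not the e-Framework's actual schema.

@dataclass
class EvaluationRequest:
    exercise_id: str          # reference to the programming exercise
    program_source: str       # learner's submitted code
    language: str             # e.g. "java" or "python"

@dataclass
class TestResult:
    test_id: str
    passed: bool
    feedback: str = ""

@dataclass
class EvaluationReport:
    request: EvaluationRequest
    results: list[TestResult] = field(default_factory=list)

    @property
    def grade(self) -> float:
        """Fraction of tests passed (0.0 to 1.0)."""
        if not self.results:
            return 0.0
        return sum(r.passed for r in self.results) / len(self.results)

def evaluate(request: EvaluationRequest, test_suite) -> EvaluationReport:
    """Run the submission against a test suite and build a report.
    `test_suite` is assumed to be a list of (test_id, callable) pairs,
    where the callable returns (passed, feedback)."""
    report = EvaluationReport(request)
    for test_id, run_test in test_suite:
        passed, feedback = run_test(request.program_source)
        report.results.append(TestResult(test_id, passed, feedback))
    return report
```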

Relevance: 10.00%

Abstract:

IEEE International Symposium on Circuits and Systems, pp. 220-223, Seattle, USA

Relevance: 10.00%

Abstract:

Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, to obtain the Master's degree in Electrical and Computer Engineering.

Relevance: 10.00%

Abstract:

Reconfigurable computing has experienced considerable expansion in the last few years, due in part to the fast run-time partial reconfiguration features offered by recent SRAM-based Field Programmable Gate Arrays (FPGAs), which allowed dynamic resource allocation strategies to be implemented in real time, with multiple independent functions from different applications sharing the same logic resources in the spatial and temporal domains. However, when the sequence of reconfigurations to be performed is not predictable, the efficient management of the available logic space becomes the greatest challenge posed to these systems. Resource allocation decisions have to be made concurrently with system operation, taking into account function priorities and optimizing the space currently available. As a consequence of the unpredictability of this allocation procedure, the logic space becomes fragmented, with many small areas of free resources that fail to satisfy most requests and so remain unused. A rearrangement of the currently running functions is therefore necessary to obtain enough contiguous space to implement incoming functions, avoiding the spreading of their components and the resulting degradation of system performance. A novel active relocation procedure for Configurable Logic Blocks (CLBs) is presented herein, able to carry out online rearrangements, defragmenting the available FPGA resources without disturbing the functions currently running.
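
The toy model below illustrates the general idea of online defragmentation on a much simplified, one-dimensional view of the FPGA, assuming functions occupy contiguous CLB columns. It is a sketch under these assumptions, not the relocation procedure proposed in the paper.

```python
# Simplified software model of online defragmentation of FPGA columns.
# Assumes a 1D layout where each running function occupies a contiguous
# block of CLB columns; this is an illustration, not the paper's procedure.

def place(functions, total_columns, new_width):
    """functions: dict name -> (start, width) of currently running functions.
    Returns (start_column, updated_layout) for a new function of `new_width`,
    compacting running functions if no contiguous gap is large enough."""
    # First, try to find an existing contiguous gap.
    cursor = 0
    for start, width in sorted(functions.values()):
        if start - cursor >= new_width:
            return cursor, functions            # gap found, no relocation
        cursor = start + width
    if total_columns - cursor >= new_width:
        return cursor, functions                # gap at the end

    # Otherwise compact: relocate functions towards column 0, preserving
    # their order (models active CLB relocation while functions keep running).
    compacted, cursor = {}, 0
    for name, (start, width) in sorted(functions.items(), key=lambda kv: kv[1][0]):
        compacted[name] = (cursor, width)
        cursor += width
    if total_columns - cursor >= new_width:
        return cursor, compacted
    raise RuntimeError("not enough free resources even after defragmentation")
```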

Relevance: 10.00%

Abstract:

Multi-agent architectures are well suited for complex, inherently distributed problem-solving domains. From the many challenging aspects that arise within this framework, a crucial one emerges: how to incorporate dynamic and conflicting agent beliefs? While the belief revision activity in a single-agent scenario is concentrated on incorporating new information while preserving consistency, in a multi-agent system it also has to deal with possible conflicts between the agents' perspectives. To provide an adequate framework, each agent, built as a combination of an assumption-based belief revision system and a cooperation layer, was enriched with additional features: a distributed search control mechanism allowing dynamic context management, and a set of different distributed consistency methodologies. As a result, a Distributed Belief Revision Testbed (DiBeRT) was developed. This paper is a preliminary report presenting some of DiBeRT's contributions: a concise representation of external beliefs; a simple and innovative methodology to achieve distributed context management; and a reduced inter-agent data exchange format.
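
A minimal sketch of assumption-based belief bookkeeping follows, assuming beliefs are tagged with the assumptions that support them and dropped when a conflicting assumption is retracted; the class and method names are illustrative and do not reflect DiBeRT's actual interface.

```python
# Toy model of assumption-based belief revision: each belief is recorded
# with the assumptions it depends on, and retracting an assumption (e.g.
# after a conflict with another agent's perspective) removes every belief
# that relied on it.  Names are assumptions for this illustration.

class BeliefStore:
    def __init__(self):
        self.beliefs = {}          # belief -> set of supporting assumptions
        self.retracted = set()     # assumptions found to be untenable

    def add(self, belief, assumptions):
        """Record a belief together with the assumptions it depends on."""
        self.beliefs[belief] = set(assumptions)

    def retract_assumption(self, assumption):
        """Mark an assumption as untenable and drop dependent beliefs."""
        self.retracted.add(assumption)
        self.beliefs = {b: a for b, a in self.beliefs.items()
                        if not (a & self.retracted)}

    def holds(self, belief):
        return belief in self.beliefs
```

Under this toy view, distributed context management could amount to agents exchanging only the compact sets of assumptions in conflict rather than full belief bases, which is one way to read the "reduced inter-agent data exchange format" mentioned above.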

Relevance: 10.00%

Abstract:

Environmental management is a complex task. The amount and heterogeneity of the data needed for an environmental decision-making tool is overwhelming without adequate database systems and innovative methodologies. As far as data management, data interaction and data processing are concerned, we propose the use of a Geographical Information System (GIS), whilst for decision making we suggest a Multi-Agent System (MAS) architecture. With the adoption of a GIS we hope to provide a complementary coexistence between heterogeneous data sets, a correct data structure, good storage capacity and a friendly user interface. By choosing a distributed architecture such as a Multi-Agent System, where each agent is a semi-autonomous Expert System with the necessary skills to cooperate with the others in order to solve a given task, we hope to ensure dynamic problem decomposition and to achieve better performance compared with standard monolithic architectures. Finally, in view of the partial, imprecise and ever-changing character of the information available for decision making, Belief Revision capabilities are added to the system. Our aim is to present and discuss an intelligent environmental management system capable of suggesting the most appropriate land-use actions based on the existing spatial and non-spatial constraints.

Relevance: 10.00%

Abstract:

New homoditopic bis-calix[4]arene-carbazole conjugates, armed with hydrophilic carboxylic acid functions at their lower rims, are disclosed. Evidence for their self-association in solution was gathered from solvatochromic and thermochromic studies, as well as from gel-permeation chromatography analysis. Their ability to function as highly sensitive sensors toward polar electron-deficient aromatic compounds is demonstrated.

Relevance: 10.00%

Abstract:

Monitoring systems have traditionally been developed with rigid objectives and functionalities, and tied to specific languages, libraries and run-time environments. There is a need for more flexible monitoring systems which can be easily adapted to distinct requirements. On-line monitoring has been considered increasingly important for the observation and control of a distributed application. In this paper we discuss monitoring interfaces and architectures which support more extensible monitoring and control services. We describe our work on the development of a distributed monitoring infrastructure and illustrate how it eases the implementation of a complex distributed debugging architecture. We also discuss several issues concerning support for tool interoperability and illustrate how cooperation among multiple concurrent tools can ease the task of distributed debugging.
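
A minimal sketch of an extensible, event-based monitoring interface in the spirit described above is shown below, with several concurrent tools subscribing to the same observation events; Monitor, subscribe and emit are assumed names for this illustration, not the infrastructure's actual API.

```python
from collections import defaultdict

class Monitor:
    """Routes observation events from monitored processes to any number of
    concurrently attached tools (debuggers, visualizers, loggers)."""

    def __init__(self):
        self._subscribers = defaultdict(list)   # event type -> callbacks

    def subscribe(self, event_type, callback):
        self._subscribers[event_type].append(callback)

    def emit(self, event_type, **data):
        for callback in self._subscribers[event_type]:
            callback(event_type, data)

# Two tools observing the same event illustrate tool interoperability:
monitor = Monitor()
monitor.subscribe("breakpoint_hit", lambda t, d: print("debugger:", d))
monitor.subscribe("breakpoint_hit", lambda t, d: print("trace log:", d))
monitor.emit("breakpoint_hit", process="p1", line=42)
```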

Relevance: 10.00%

Abstract:

In the last decade, both the scientific community and the automotive industry have enabled communications among vehicles in different kinds of scenarios by proposing different vehicular architectures. Vehicular delay-tolerant networks (VDTNs) were proposed as a solution to overcome some of the issues found in other vehicular architectures, namely in dispersed regions and emergency scenarios. Most of these issues arise from the unique characteristics of vehicular networks. Contrary to delay-tolerant networks (DTNs), VDTNs place the bundle layer under the network layer in order to simplify the layered architecture and enable communications in sparse regions characterized by long propagation delays, high error rates, and short contact durations. Such characteristics, however, make contacts very important, since nodes must exchange as much information as possible at every contact opportunity. One way to accomplish this goal is to enforce cooperation between network nodes. To promote cooperation among nodes, it is important that nodes share their own resources to deliver messages from others. This can be a very difficult task if selfish nodes affect the performance of cooperative nodes. This paper studies the performance of a cooperative reputation system that detects, identifies, and avoids communications with selfish nodes. Two scenarios were considered across all the experiments, using three different routing protocols (First Contact, Spray and Wait, and GeoSpray). For both scenarios, it was shown that reputation mechanisms that aggressively punish selfish nodes contribute to an increase in overall network performance.
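
The following toy sketch shows one possible shape of such a reputation mechanism, assuming a simple additive update that rewards observed forwarding and punishes observed dropping more heavily; the update rule, parameters and threshold are illustrative assumptions, not the scheme evaluated in the paper.

```python
# Toy sketch of a reputation mechanism for detecting and avoiding selfish
# nodes; the numbers below are illustrative assumptions only.

class ReputationTable:
    def __init__(self, reward=1.0, penalty=4.0, threshold=0.0):
        self.scores = {}            # node id -> reputation score
        self.reward = reward        # credit for an observed forwarding
        self.penalty = penalty      # aggressive punishment for dropping
        self.threshold = threshold  # below this, the node is avoided

    def observe(self, node, forwarded: bool):
        delta = self.reward if forwarded else -self.penalty
        self.scores[node] = self.scores.get(node, 0.0) + delta

    def is_cooperative(self, node) -> bool:
        return self.scores.get(node, 0.0) >= self.threshold

# A node would consult the table before handing over bundles at a contact:
table = ReputationTable()
table.observe("v17", forwarded=False)     # v17 dropped a bundle it accepted
if not table.is_cooperative("v17"):
    pass  # skip v17 at the next contact opportunity
```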

Relevance: 10.00%

Abstract:

Single-processor architectures are unable to provide the performance required by high-performance embedded systems. Parallel processing based on general-purpose processors can achieve these performances, but with a considerable increase in the required resources. However, in many cases, simplified optimized parallel cores can be used instead of general-purpose processors, achieving better performance at lower resource utilization. In this paper, we propose a configurable many-core architecture to serve as a co-processor for high-performance embedded computing on Field-Programmable Gate Arrays. The architecture consists of an array of configurable simple cores with support for floating-point operations, interconnected with a configurable interconnection network. For each core it is possible to configure the size of the internal memory, the supported operations and the number of interfacing ports. The architecture was tested in a ZYNQ-7020 FPGA with the execution of several parallel algorithms. The results show that the proposed many-core architecture achieves better performance than that achieved with a parallel general-purpose processor and that up to 32 floating-point cores can be implemented in a ZYNQ-7020 SoC FPGA.
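
A hedged sketch of the kind of configuration parameters the abstract lists (per-core memory size, supported operations, number of ports, plus the interconnection network) is given below; the field names and values are assumptions for illustration, not the architecture's actual configuration interface.

```python
from dataclasses import dataclass

# Illustrative configuration model for a configurable many-core co-processor;
# the parameter names are assumptions, not the paper's actual interface.

@dataclass
class CoreConfig:
    local_mem_words: int          # size of the core's internal memory
    ops: tuple                    # supported operations, e.g. ("fadd", "fmul")
    num_ports: int                # number of interfacing ports

@dataclass
class ManyCoreConfig:
    cores: list                   # one CoreConfig per instantiated core
    interconnect: str             # configurable interconnection network topology

cfg = ManyCoreConfig(
    cores=[CoreConfig(local_mem_words=1024, ops=("fadd", "fmul"), num_ports=2)
           for _ in range(32)],   # up to 32 floating-point cores fit a ZYNQ-7020
    interconnect="ring",          # assumed topology name for this example
)
```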

Relevance: 10.00%

Abstract:

This paper presents a new parallel implementation of a previously proposed hyperspectral coded aperture (HYCA) algorithm for compressive sensing on graphics processing units (GPUs). The HYCA method combines the ideas of spectral unmixing and compressive sensing, exploiting the high spatial correlation that can be observed in the data and the generally low number of endmembers needed to explain the data. The proposed implementation exploits the GPU architecture at low level, thus taking full advantage of the computational power of GPUs by using shared memory and coalesced memory accesses. The proposed algorithm is evaluated not only in terms of reconstruction error but also in terms of computational performance, using two different GPU architectures by NVIDIA: GeForce GTX 590 and GeForce GTX TITAN. Experimental results using real data reveal significant speedups with regard to the serial implementation.
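
The schematic sketch below illustrates, in plain NumPy, the general idea the abstract alludes to: per-pixel compressive measurements that can be reconstructed through a low-dimensional abundance representation when few endmembers explain the data. It is not the HYCA algorithm or its GPU implementation, only a conceptual illustration under stated assumptions (known endmembers, random measurement matrix).

```python
import numpy as np

# Schematic illustration of compressive sensing aided by unmixing: because
# few endmembers explain the data, far fewer measurements than spectral
# bands suffice to recover the spectra.  NOT the HYCA algorithm itself.

bands, endmembers, pixels, measurements = 200, 5, 1000, 40

E = np.random.rand(bands, endmembers)                     # endmember signatures (assumed known)
A = np.random.dirichlet(np.ones(endmembers), pixels).T    # abundances, sum to one per pixel
X = E @ A                                                 # hyperspectral data, bands x pixels

H = np.random.randn(measurements, bands)                  # compressive measurement matrix
Y = H @ X                                                 # measurements << bands per pixel

# Reconstruct abundances by least squares in the compressed domain,
# then re-synthesise the full spectra.
A_hat = np.linalg.lstsq(H @ E, Y, rcond=None)[0]
X_hat = E @ A_hat
print("relative reconstruction error:",
      np.linalg.norm(X - X_hat) / np.linalg.norm(X))
```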

Relevance: 10.00%

Abstract:

Final Master's project presented to obtain the Master's degree in Communication Networks and Multimedia Engineering.

Relevance: 10.00%

Abstract:

Hyperspectral imaging has become one of the main topics in remote sensing applications. Hyperspectral images comprise hundreds of spectral bands at different (almost contiguous) wavelength channels over the same area, generating large data volumes of several GB per flight. This high spectral resolution can be used for object detection and to discriminate between different objects based on their spectral characteristics. One of the main problems involved in hyperspectral analysis is the presence of mixed pixels, which arise when the spatial resolution of the sensor is not able to separate spectrally distinct materials. Spectral unmixing is one of the most important tasks for hyperspectral data exploitation. However, unmixing algorithms can be computationally very expensive, and even highly power consuming, which compromises their use in applications under on-board constraints. In recent years, graphics processing units (GPUs) have evolved into highly parallel and programmable systems. Specifically, several hyperspectral imaging algorithms have been shown to benefit from this hardware, taking advantage of the extremely high floating-point processing performance, compact size, huge memory bandwidth, and relatively low cost of these units, which make them appealing for on-board data processing. In this paper, we propose a parallel implementation of an augmented Lagrangian based method for unsupervised hyperspectral linear unmixing on GPUs using CUDA. The method, called simplex identification via split augmented Lagrangian (SISAL), aims to identify the endmembers of a scene, i.e., it is able to unmix hyperspectral data sets in which the pure-pixel assumption is violated. The efficient implementation of the SISAL method presented in this work exploits the GPU architecture at low level, using shared memory and coalesced memory accesses.
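
As a conceptual aside, the sketch below writes down a minimum-volume style objective of the kind SISAL is associated with: fit the data with a candidate endmember matrix while penalizing the volume spanned by the endmembers. It is only an illustration of the criterion, with an assumed penalty form, and not SISAL's actual split augmented Lagrangian formulation or solver.

```python
import numpy as np

# Conceptual sketch of a minimum-volume criterion for endmember
# identification: among candidate endmember matrices that explain the data,
# prefer the one spanning the smallest volume.  The penalty form and weight
# are assumptions for illustration; this is not SISAL's formulation.

def mixing_objective(M, X, lam=1.0):
    """M: bands x p candidate endmembers, X: bands x n pixels (p <= bands).
    Returns a data-fit term plus a volume-related penalty for the candidate."""
    A, *_ = np.linalg.lstsq(M, X, rcond=None)      # unconstrained abundances
    fit = np.linalg.norm(X - M @ A) ** 2           # how well M explains the data
    # Volume of the parallelotope spanned by the endmember columns,
    # used here as a simple proxy for simplex volume.
    volume = np.sqrt(np.abs(np.linalg.det(M.T @ M)))
    return fit + lam * volume
```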