940 results for Object Oriented Programming (Computing)


Relevance:

20.00%

Publisher:

Abstract:

In recent years, several initiatives promoted by educational organizations have emerged to adapt Service Oriented Architectures (SOA) to e-learning. These initiatives, commonly named eLearning Frameworks, share a common goal: to create flexible learning environments by integrating the heterogeneous systems already available in many educational institutions. However, these frameworks were designed for the integration of systems participating in business-like processes rather than in complex pedagogical processes such as those related to automatic evaluation. Consequently, their knowledge bases lack some fundamental components that are needed to model pedagogical processes. The objective of the research described in this paper is to study the applicability of eLearning frameworks to modelling a network of heterogeneous eLearning systems, using the automatic evaluation of programming exercises as a case study. The paper surveys the existing eLearning frameworks to justify the selection of the e-Framework. This framework is described in detail and the components missing from its knowledge base are identified, more precisely, a service genre, expression and usage model for an evaluation service. The extensibility of the framework is tested with the definition of this service. A concrete model for the evaluation of programming exercises is presented as a validation of the proposed approach.

Relevance:

20.00%

Publisher:

Abstract:

The concept of Learning Object (LO) is crucial for standardization in eLearning. The latest LO standard from the IMS Global Learning Consortium is the IMS Common Cartridge (IMS CC), which organizes and distributes digital learning content. By analyzing this new specification we consider two interoperability levels: content and communication. A common content format is the backbone of interoperability and is the basis for content exchange among eLearning systems. Communication is more than just exchanging content; it also includes access to specialized systems and services and reporting on content usage. This is particularly important when LOs are used for evaluation. In this paper we analyze the Common Cartridge profile based on the two interoperability levels we propose. We detail its data model, which comprises a set of derived schemata referenced in the CC schema, and we explore the use of IMS Learning Tools Interoperability (LTI) to allow remote tools and content to be integrated into a Learning Management System (LMS). In order to test the applicability of IMS CC to automatic evaluation we define a representation of programming exercises using this standard. This representation is intended to be the cornerstone of a network of eLearning systems where students can solve computer programming exercises and obtain feedback automatically. The CC learning object is automatically generated from an XML dialect called PExIL, which aims to consolidate all the data needed to describe resources within the programming exercise life-cycle. Finally, we test the generated cartridge on the IMS CC online validator to verify its conformance with the IMS CC specification.
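
A minimal sketch of how a cartridge-like package for such a programming exercise could be assembled is given below, in Python. It is illustrative only: the manifest is far simpler than the real IMS CC schema (the namespaces, metadata and resource types required by the specification are omitted), and the helper name and file layout are assumptions, not the generator described in the paper.

    # Illustrative only: a simplified cartridge-like package for a programming
    # exercise. Real IMS CC packages require the full manifest schema, namespaces
    # and metadata defined by the specification; this sketch omits them.
    import zipfile
    import xml.etree.ElementTree as ET

    def build_exercise_package(path, title, statement_html, test_cases):
        manifest = ET.Element("manifest", identifier="exercise-001")
        ET.SubElement(manifest, "title").text = title
        resources = ET.SubElement(manifest, "resources")
        files = ["statement.html"] + [name for name, _ in test_cases]
        for idx, name in enumerate(files):
            res = ET.SubElement(resources, "resource",
                                identifier=f"res-{idx}", href=name)
            ET.SubElement(res, "file", href=name)
        with zipfile.ZipFile(path, "w") as pkg:
            pkg.writestr("imsmanifest.xml",
                         ET.tostring(manifest, encoding="unicode"))
            pkg.writestr("statement.html", statement_html)
            for name, content in test_cases:
                pkg.writestr(name, content)  # e.g. input/expected-output pairs

    build_exercise_package(
        "exercise.zip", "Sum two numbers",
        "<h1>Sum two numbers</h1><p>Read two integers and print their sum.</p>",
        [("test1.in", "1 2\n"), ("test1.out", "3\n")])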

Relevance:

20.00%

Publisher:

Abstract:

Learning computer programming requires solving programming exercises. In computer programming courses, teachers need to assess and give feedback on a large number of exercises. These tasks are time-consuming and error-prone, since there are many aspects of good programming that should be considered. In this context, automatic assessment tools can play an important role, helping teachers with grading tasks as well as assisting students with automatic feedback. In spite of their usefulness, these tools lack integration mechanisms with other eLearning systems such as Learning Management Systems, Learning Objects Repositories or Integrated Development Environments. In this paper we provide a survey of programming evaluation systems. The survey gathers information on the interoperability features of these systems, categorizing and comparing them regarding content and communication standardization. This work may prove useful to instructors and computer science educators when they have to choose an assessment system to be integrated into their e-Learning environment.

Relevance:

20.00%

Publisher:

Abstract:

Extracting the semantic relatedness of terms is an important topic in several areas, including data mining, information retrieval and web recommendation. This paper presents an approach to computing the semantic relatedness of terms using the knowledge base of DBpedia, a community effort to extract structured information from Wikipedia. Several approaches to extracting semantic relatedness from Wikipedia using bag-of-words vector models are already available in the literature. The research presented in this paper explores a novel approach using paths on an ontological graph extracted from DBpedia. It is based on an algorithm for finding and weighting a collection of paths connecting concept nodes. This algorithm was implemented in a tool called Shakti that extracts relevant ontological data for a given domain from DBpedia using its SPARQL endpoint. To validate the proposed approach, Shakti was used to recommend web pages on a Portuguese social site related to alternative music, and the results of that experiment are reported in this paper.
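
As a rough illustration of the kind of data such a tool can pull from DBpedia, the sketch below queries the public SPARQL endpoint for resources linked to a seed concept and scores two concepts by the length of the shortest connecting path found. It assumes the third-party SPARQLWrapper package and a naive 1/(1+length) weighting; it does not reproduce the path-finding and weighting algorithm actually implemented in Shakti.

    # Sketch only: query DBpedia's public SPARQL endpoint and score relatedness
    # with a naive shortest-path weighting. Assumes the SPARQLWrapper package;
    # this is not the path-weighting algorithm used by Shakti.
    from SPARQLWrapper import SPARQLWrapper, JSON

    ENDPOINT = "https://dbpedia.org/sparql"

    def neighbours(resource):
        """Return DBpedia resources directly linked to `resource`."""
        sparql = SPARQLWrapper(ENDPOINT)
        sparql.setQuery(f"""
            SELECT DISTINCT ?o WHERE {{
                <{resource}> ?p ?o .
                FILTER(STRSTARTS(STR(?o), "http://dbpedia.org/resource/"))
            }} LIMIT 200
        """)
        sparql.setReturnFormat(JSON)
        rows = sparql.query().convert()["results"]["bindings"]
        return {row["o"]["value"] for row in rows}

    def relatedness(a, b, max_depth=2):
        """Naive score: 1 / (1 + length of shortest path found), else 0."""
        frontier, seen = {a}, {a}
        for depth in range(1, max_depth + 1):
            frontier = {n for r in frontier for n in neighbours(r)} - seen
            if b in frontier:
                return 1.0 / (1 + depth)
            seen |= frontier
        return 0.0

    print(relatedness("http://dbpedia.org/resource/Punk_rock",
                      "http://dbpedia.org/resource/Rock_music"))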

Relevance:

20.00%

Publisher:

Abstract:

Several standards have appeared in recent years to formalize the metadata of learning objects, but they are still insufficient to fully describe a specialized domain. In particular, the programming exercise domain requires interdependent resources (e.g. test cases, solution programs, exercise description) usually processed by different services in the programming exercise lifecycle. Moreover, the manual creation of these resources is time-consuming and error-prone, which is an obstacle to the fast development of programming exercises of good quality. This chapter focuses on the definition of an XML dialect called PExIL (Programming Exercises Interoperability Language). The aim of PExIL is to consolidate all the data required in the programming exercise lifecycle, from when the exercise is created to when it is graded, covering also its resolution, evaluation, and feedback. The authors introduce the XML Schema used to formalize the relevant data of the programming exercise lifecycle. This approach is validated by evaluating the usefulness and expressiveness of the PExIL definition: for the former, the authors present the tools that consume the PExIL definition to automatically generate the specialized resources; for the latter, they use the PExIL definition to capture all the constraints of a set of programming exercises stored in a learning objects repository.
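
The sketch below illustrates the idea of consuming such a description to generate specialized resources, in this case test-case files, automatically. The element names in the sample document are invented for the example and do not follow the actual PExIL schema.

    # Sketch only: consuming a PExIL-like description to generate the specialized
    # resources (test-case files) used later in the exercise lifecycle. The element
    # names are invented for the example, not taken from the actual PExIL schema.
    import xml.etree.ElementTree as ET
    from pathlib import Path

    PEXIL_LIKE = """
    <exercise id="sum-two-numbers">
      <title>Sum two numbers</title>
      <tests>
        <case><input>1 2</input><output>3</output></case>
        <case><input>-4 10</input><output>6</output></case>
      </tests>
    </exercise>
    """

    def generate_test_resources(xml_text, out_dir="generated"):
        root = ET.fromstring(xml_text)
        out = Path(out_dir)
        out.mkdir(exist_ok=True)
        for i, case in enumerate(root.iter("case"), start=1):
            (out / f"test{i}.in").write_text(case.findtext("input") + "\n")
            (out / f"test{i}.out").write_text(case.findtext("output") + "\n")
        return sorted(p.name for p in out.iterdir())

    print(generate_test_resources(PEXIL_LIKE))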

Relevance:

20.00%

Publisher:

Abstract:

TiO2 films have been deposited on ITO substrates by the dc reactive magnetron sputtering technique. It has been found that the sputtering pressure is a very important parameter for the structure of the deposited TiO2 films. When the pressure is lower than 1 Pa, the deposited film has a dense structure and shows a preferred orientation along the [101] direction. However, a nanorod structure is obtained when the sputtering pressure is higher than 1 Pa, and these nanorod-structured TiO2 films show a preferred orientation along the [110] direction. The X-ray diffraction and Raman scattering measurements show that both the dense and the nanostructured TiO2 films have only an anatase phase; no other phase was detected. The SEM results show that these TiO2 nanorods are perpendicular to the ITO substrate, and the TEM measurement shows that the nanorods have a very rough surface. Dye-sensitized solar cells (DSSCs) have been assembled using the TiO2 nanorod films prepared at different sputtering pressures as photoelectrodes, and the effect of the sputtering pressure on the photoelectric conversion properties of the DSSCs has been studied.

Relevance:

20.00%

Publisher:

Abstract:

Recent integrated circuit technologies have opened the possibility of designing parallel architectures with hundreds of cores on a single chip. The design space of these parallel architectures is huge, with many architectural options. Exploring the design space gets even more difficult if, beyond performance and area, we also consider extra metrics like performance and area efficiency, where the designer tries to obtain the best performance per chip area and the best sustainable performance. In this paper we present an algorithm-oriented approach to designing a many-core architecture. Instead of exploring the design space of the many-core architecture based on the experimental execution results of a particular benchmark of algorithms, our approach is to make a formal analysis of the algorithms considering the main architectural aspects and to determine how each particular architectural aspect is related to the performance of the architecture when running an algorithm or set of algorithms. The architectural aspects considered include the number of cores, the local memory available in each core, the communication bandwidth between the many-core architecture and the external memory, and the memory hierarchy. To exemplify the approach we carried out a theoretical analysis of a dense matrix multiplication algorithm and determined an equation that relates the number of execution cycles to the architectural parameters. Based on this equation a many-core architecture has been designed. The results obtained indicate that a 100 mm² integrated circuit design of the proposed architecture, using a 65 nm technology, is able to achieve 464 GFLOPs (double-precision floating-point) for a memory bandwidth of 16 GB/s, which corresponds to a performance efficiency of 71%. Considering a 45 nm technology, a 100 mm² chip attains 833 GFLOPs, which corresponds to 84% of peak performance. These figures are better than those obtained by previous many-core architectures, except for the area efficiency, which is limited by the lower memory bandwidth considered. The results achieved are also better than those of previous state-of-the-art many-core architectures designed specifically to achieve high performance for matrix multiplication.
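
The abstract does not reproduce the cycle-count equation derived in the paper. The sketch below is only a generic roofline-style model of the same idea: it relates compute time and memory-transfer time of a blocked dense matrix multiplication to the core count, per-core local memory and external bandwidth. The parameter names and the communication estimate (the classical blocked-matmul volume of about 2n³/√M words) are assumptions for illustration, not the paper's derived formula.

    # Illustrative roofline-style model, not the equation derived in the paper:
    # cycles for a blocked n x n x n double-precision matrix multiplication on
    # p cores, each with M words of local memory, fed by external bandwidth B.
    import math

    def estimated_cycles(n, p, M_words, B_words_per_cycle, flops_per_core_per_cycle=2):
        flops = 2 * n ** 3                        # multiply-adds of dense matmul
        compute = flops / (p * flops_per_core_per_cycle)
        # Classical blocked-matmul communication volume: roughly 2*n^3/sqrt(M)
        # words cross the external memory interface when each core blocks on M words.
        words_moved = 2 * n ** 3 / math.sqrt(M_words)
        transfer = words_moved / B_words_per_cycle
        return max(compute, transfer)             # assumes compute/transfer overlap

    # Example: 64 cores, 32K-word local memories, 4 words/cycle from external memory.
    print(f"{estimated_cycles(4096, p=64, M_words=32 * 1024, B_words_per_cycle=4):.3e}")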

Relevance:

20.00%

Publisher:

Abstract:

Internship report submitted to the Escola Superior de Teatro e Cinema in fulfilment of the requirements for the degree of Master in Theatre – Specialization in Production.

Relevance:

20.00%

Publisher:

Abstract:

We consider an optimal control problem with a deterministic finite horizon and state variable dynamics given by a Markov-switching jump–diffusion stochastic differential equation. Our main results extend the dynamic programming technique to this larger family of stochastic optimal control problems. More specifically, we provide a detailed proof of Bellman's optimality principle (or dynamic programming principle) and obtain the corresponding Hamilton–Jacobi–Bellman equation, which turns out to be a partial integro-differential equation due to the extra terms arising from the Lévy process and the Markov process. As an application of our results, we study a finite horizon consumption–investment problem for a jump–diffusion financial market consisting of one risk-free asset and one risky asset whose coefficients are assumed to depend on the state of a continuous-time finite-state Markov process. We provide a detailed study of the optimal strategies for this problem for the economically relevant families of power utilities and logarithmic utilities.
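
For orientation only, the display below shows the generic shape such a Hamilton–Jacobi–Bellman equation takes for a one-dimensional controlled Markov-switching jump–diffusion with value function V(t, x, i). The notation (drift b, volatility σ, jump size γ, Lévy measure ν, switching rates q_ij, running reward f, terminal reward h) is assumed here for illustration and need not match the paper's.

    \[
    0 = \partial_t V(t,x,i)
      + \sup_{u \in U} \Big\{ b(t,x,i,u)\,\partial_x V
      + \tfrac{1}{2}\,\sigma^2(t,x,i,u)\,\partial_{xx} V
      + f(t,x,i,u)
      + \int_{\mathbb{R}} \big[ V(t, x + \gamma(t,x,i,u,z), i) - V(t,x,i) \big]\,\nu(\mathrm{d}z) \Big\}
      + \sum_{j \neq i} q_{ij}\,\big[ V(t,x,j) - V(t,x,i) \big],
    \qquad V(T,x,i) = h(x,i).
    \]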

Relevance:

20.00%

Publisher:

Abstract:

Consider scheduling of real-time tasks on a multiprocessor where migration is forbidden. Specifically, consider the problem of determining a task-to-processor assignment for a given collection of implicit-deadline sporadic tasks upon a multiprocessor platform in which there are two distinct types of processors. For this problem, we propose a new algorithm, LPC (task assignment based on solving a Linear Program with Cutting planes). The algorithm offers the following guarantee: for a given task set and platform, if there exists a feasible task-to-processor assignment, then LPC succeeds in finding such a feasible assignment as well, but on a platform in which each processor is 1.5× faster and which has three additional processors. For systems with a large number of processors, LPC has a better approximation ratio than state-of-the-art algorithms. To the best of our knowledge, this is the first work that develops a provably good real-time task assignment algorithm using cutting planes.
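
The abstract does not spell out the linear program, so the sketch below only illustrates the usual LP relaxation behind this kind of task-to-processor assignment on two processor types: fractional assignment variables, one utilization constraint per processor, and per-type utilizations per task. It assumes the PuLP package and invented task data; the cutting planes and rounding steps that characterize LPC are omitted.

    # Sketch of an LP relaxation for assigning implicit-deadline sporadic tasks
    # to processors of two types. Assumes the PuLP package; this is not the LPC
    # algorithm itself (no cutting planes, no rounding of fractional solutions).
    import pulp

    # Invented example data: utilization of each task on each processor type.
    tasks = {"t1": {"A": 0.6, "B": 0.3}, "t2": {"A": 0.4, "B": 0.8},
             "t3": {"A": 0.5, "B": 0.2}}
    processors = {"p1": "A", "p2": "B"}   # processor -> type

    prob = pulp.LpProblem("task_assignment", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", (tasks, processors), lowBound=0, upBound=1)

    prob += 0                                              # pure feasibility problem
    for t in tasks:                                        # each task fully assigned
        prob += pulp.lpSum(x[t][p] for p in processors) == 1
    for p, ptype in processors.items():                    # processor capacity
        prob += pulp.lpSum(tasks[t][ptype] * x[t][p] for t in tasks) <= 1

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    for t in tasks:
        for p in processors:
            print(t, p, x[t][p].value())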

Relevance:

20.00%

Publisher:

Abstract:

Single-processor architectures are unable to provide the performance required by high-performance embedded systems. Parallel processing based on general-purpose processors can achieve this performance, but with a considerable increase in required resources. However, in many cases, simplified optimized parallel cores can be used instead of general-purpose processors, achieving better performance at lower resource utilization. In this paper, we propose a configurable many-core architecture to serve as a co-processor for high-performance embedded computing on Field-Programmable Gate Arrays. The architecture consists of an array of configurable simple cores with support for floating-point operations, interconnected by a configurable interconnection network. For each core it is possible to configure the size of the internal memory, the supported operations and the number of interfacing ports. The architecture was tested on a ZYNQ-7020 FPGA executing several parallel algorithms. The results show that the proposed many-core architecture achieves better performance than a parallel general-purpose processor and that up to 32 floating-point cores can be implemented in a ZYNQ-7020 SoC FPGA.

Relevance:

20.00%

Publisher:

Abstract:

The rapidly increasing computing power, available storage and communication capabilities of mobile devices make it possible to start processing and storing data locally, rather than offloading it to remote servers, allowing scenarios of mobile clouds without infrastructure dependency. We can now aim at connecting neighboring mobile devices, creating a local mobile cloud that provides storage and computing services over locally generated data. In this paper, we describe an early overview of a distributed mobile system that allows accessing and processing data distributed across mobile devices without an external communication infrastructure. Copyright © 2015 ICST.

Relevance:

20.00%

Publisher:

Abstract:

The complexity associated with the fast growth of B2B and the lack of a (complete) suite of open standards make it difficult to maintain the underlying collaborative processes. Aligned with this challenge, this paper aims to contribute to an open architecture for a logistics and transport process management system. A model of an open integrated system is defined, covering the open computational responsibilities of the embedded (on-board) systems, together with a reference implementation (prototype) of a host system to validate the proposed open interfaces. The embedded subsystem can natively be prepared to cooperate with other on-board units and with IT systems in an infrastructure commonly referred to as a central information system or back-office. For the interaction with a central system, the proposal is to adopt an open framework for cooperation in which the embedded unit, or a unit placed elsewhere (land/sea), interacts in response to a set of implemented capabilities.

Relevance:

20.00%

Publisher:

Abstract:

Hyperspectral imaging can be used for object detection and for discriminating between different objects based on their spectral characteristics. One of the main problems of hyperspectral data analysis is the presence of mixed pixels, due to the low spatial resolution of such images, which means that several spectrally pure signatures (endmembers) are combined into the same mixed pixel. Linear spectral unmixing follows an unsupervised approach that aims at inferring pure spectral signatures and their material fractions at each pixel of the scene. The huge data volumes acquired by such sensors put stringent requirements on processing and unmixing methods. This paper proposes an efficient implementation of an unsupervised linear unmixing method on GPUs using CUDA. The method finds the smallest simplex by solving a sequence of nonsmooth convex subproblems, using variable splitting to obtain a constrained formulation and then applying an augmented Lagrangian technique. The parallel implementation of SISAL presented in this work exploits the GPU architecture at a low level, using shared memory and coalesced accesses to memory. The results presented herein indicate that the GPU implementation can significantly accelerate the method's execution over big datasets while maintaining the method's accuracy.
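
For context, the sketch below only illustrates the linear mixing model that unmixing methods such as SISAL work with: each pixel is modeled as a nonnegative combination of endmember signatures, and abundances are recovered here with a plain nonnegative least-squares fit per pixel. The endmember matrix is invented, and this is not an implementation of SISAL or of its GPU/CUDA parallelization.

    # Illustration of the linear mixing model only; not SISAL and not GPU code.
    # Each pixel y is modeled as y = M a + n, with M the endmember signatures and
    # a the nonnegative abundance fractions. Abundances are recovered per pixel
    # with nonnegative least squares and then normalized to sum to one.
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(0)
    bands, endmembers, pixels = 50, 3, 100

    M = np.abs(rng.normal(size=(bands, endmembers)))           # invented signatures
    true_a = rng.dirichlet(np.ones(endmembers), size=pixels).T
    Y = M @ true_a + 0.01 * rng.normal(size=(bands, pixels))   # noisy mixed pixels

    est_a = np.zeros_like(true_a)
    for j in range(pixels):
        a, _ = nnls(M, Y[:, j])           # nonnegativity constraint
        est_a[:, j] = a / a.sum()         # enforce sum-to-one afterwards

    print("mean abundance error:", np.abs(est_a - true_a).mean())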

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a coordination approach to maximize the total profit of wind power systems coordinated with concentrated solar power systems having molten-salt thermal energy storage. Both systems are handled by mixed-integer linear programming in the approach, allowing enhanced operation during non-insolation periods. Transmission grid constraints and technical operating constraints on both systems are modeled to enable true management support for the integration of renewable energy sources in day-ahead electricity markets. A representative case study based on real systems is considered to demonstrate the effectiveness of the proposed approach. © IFIP International Federation for Information Processing 2015.
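
As a toy illustration of the kind of mixed-integer linear program involved (not the paper's formulation), the sketch below schedules a wind farm and a CSP plant with thermal storage over a few hours against day-ahead prices, with a shared transmission limit and a binary on/off commitment for the CSP power block. All numbers, variable names and constraints are invented for the example, and it assumes the PuLP package.

    # Toy MILP sketch, not the paper's model: coordinate wind and CSP with
    # molten-salt storage against day-ahead prices. Assumes the PuLP package.
    import pulp

    T = range(4)
    price = [40, 55, 90, 70]          # day-ahead prices, invented
    wind_avail = [80, 60, 20, 50]     # MW available from wind
    solar_heat = [0, 100, 120, 0]     # MWth collected by the solar field
    line_cap, csp_cap, csp_min = 120, 50, 10
    tank_cap, eta = 300, 0.4          # MWhth storage, heat-to-power efficiency

    m = pulp.LpProblem("wind_csp", pulp.LpMaximize)
    w = pulp.LpVariable.dicts("wind", T, lowBound=0)
    g = pulp.LpVariable.dicts("csp", T, lowBound=0)
    s = pulp.LpVariable.dicts("soc", T, lowBound=0, upBound=tank_cap)
    u = pulp.LpVariable.dicts("on", T, cat="Binary")

    m += pulp.lpSum(price[t] * (w[t] + g[t]) for t in T)           # revenue as profit
    for t in T:
        m += w[t] <= wind_avail[t]
        m += w[t] + g[t] <= line_cap                               # transmission limit
        m += g[t] <= csp_cap * u[t]                                # commitment bounds
        m += g[t] >= csp_min * u[t]
        prev = tank_cap / 2 if t == 0 else s[t - 1]
        m += s[t] == prev + solar_heat[t] - (1 / eta) * g[t]       # storage balance

    m.solve(pulp.PULP_CBC_CMD(msg=False))
    print([(w[t].value(), g[t].value()) for t in T])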