85 results for Evidence Containers, Representation, Provenance, Tool Interoperability
Abstract:
The LMS plays an indisputable role in the majority of eLearning environments. This type of eLearning system is often used for presenting, solving and grading simple exercises. However, exercises from complex domains, such as computer programming, require heterogeneous systems such as evaluation engines, learning object repositories and exercise resolution environments. Coordinating networks of such disparate systems is rather complex. This work presents a standards-based approach for coordinating a network of eLearning systems that supports the resolution of exercises. The proposed approach uses a pivot component embedded in the LMS with two roles: to provide an exercise resolution environment and to coordinate the communication between the LMS and other systems that expose their functions as web services. The integration of the pivot component with the LMS relies on the Learning Tools Interoperability (LTI) specification. This approach is validated through the integration of the component with LMSs from two vendors.
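The abstract leaves the wire format implicit; since the pivot component integrates through LTI, the launch message sent by the LMS looks roughly like the following minimal sketch of an LTI 1.1 basic launch (not taken from the paper; the endpoint URL, key, secret and parameter values are placeholders):

```python
# Minimal sketch of an LTI 1.1 basic launch, the kind of message an LMS
# sends to a pivot component/external tool. All values are placeholders.
import requests
from requests_oauthlib import OAuth1

launch_params = {
    "lti_message_type": "basic-lti-launch-request",
    "lti_version": "LTI-1p0",
    "resource_link_id": "exercise-42",   # the exercise being launched
    "user_id": "student-001",
    "roles": "Learner",
    "context_id": "course-prog-101",
    "launch_presentation_return_url": "https://lms.example.org/return",
}

# LTI 1.1 launches are OAuth 1.0a-signed form POSTs; here the signature
# travels in the request body alongside the launch parameters.
auth = OAuth1("consumer-key", "consumer-secret", signature_type="body")
resp = requests.post("https://tool.example.org/lti/launch",
                     data=launch_params, auth=auth)
print(resp.status_code)
```

Because the launch is a plain signed HTTP POST, the tool can live on any host the LMS can reach, which is what makes the pivot approach LMS-vendor-neutral.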
Abstract:
Several standards have appeared in recent years to formalize the metadata of learning objects, but they are still insufficient to fully describe a specialized domain. In particular, the programming exercise domain requires interdependent resources (e.g. test cases, solution programs, exercise descriptions) usually processed by different services in the programming exercise life-cycle. Moreover, the manual creation of these resources is time-consuming and error-prone, which is an obstacle to the rapid development of programming exercises of good quality. This paper focuses on the definition of an XML dialect called PExIL (Programming Exercises Interoperability Language). The aim of PExIL is to consolidate all the data required in the programming exercise life-cycle, from creation to grading, also covering resolution, evaluation and feedback. We introduce the XML Schema used to formalize the relevant data of the programming exercise life-cycle. This approach is validated through an evaluation of the usefulness and expressiveness of the PExIL definition. For the former, we present the tools that consume the PExIL definition to automatically generate the specialized resources. For the latter, we use the PExIL definition to capture all the constraints of a set of programming exercises stored in a learning objects repository.
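The paper defines the actual schema; purely as an illustration of the mechanics, the sketch below validates a PExIL-like document against an XML Schema with lxml. The element and attribute names here are invented for the example and are not the real PExIL vocabulary:

```python
# Hypothetical sketch: validating a PExIL-like exercise document against an
# XML Schema. Element/attribute names are illustrative, not the real dialect.
from lxml import etree

# A tiny stand-in schema; the real PExIL XSD is far richer.
xsd = etree.XMLSchema(etree.fromstring(b"""
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="exercise">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="description" type="xs:string"/>
        <xs:element name="test" maxOccurs="unbounded">
          <xs:complexType>
            <xs:attribute name="input" type="xs:string"/>
            <xs:attribute name="output" type="xs:string"/>
          </xs:complexType>
        </xs:element>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>"""))

doc = etree.fromstring(b"""
<exercise>
  <description>Sum two integers read from stdin.</description>
  <test input="1 2" output="3"/>
</exercise>""")

print("valid:", xsd.validate(doc))   # xsd.error_log lists any violations
```

Keeping test cases, descriptions and solution metadata in a single validated document is what lets the different life-cycle services consume the same exercise definition.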
Abstract:
The LMS plays a decisive role in most eLearning environments. Although LMSs integrate many useful tools for managing eLearning activities, they must also be effectively integrated with other specialized systems typically found in an educational environment, such as learning object repositories or ePortfolio systems. Both types of systems evolved separately, but in recent years the trend is to combine them, allowing the LMS to benefit from the ePortfolio's assessment features. This paper details the most common strategies for integrating an ePortfolio system into an LMS: the data, API and tool integration strategies. It presents a comparative study of these strategies based on required technical skills, degree of coupling, security features, batch integration, development effort, status and standardization. The study is validated through the integration of two of the most representative systems in each category, respectively Mahara and Moodle.
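To make the API strategy concrete: in it, one system calls the functions the other exposes as web services directly. A minimal sketch against Moodle's standard REST endpoint is shown below (the host and token are placeholders; the function shown is one of Moodle's built-in web service functions):

```python
# Sketch of the "API integration strategy": calling a function that the LMS
# exposes as a web service. Host and token are placeholders.
import requests

MOODLE = "https://moodle.example.org"
resp = requests.get(f"{MOODLE}/webservice/rest/server.php", params={
    "wstoken": "YOUR_TOKEN",                  # issued by the Moodle admin
    "wsfunction": "core_course_get_courses",  # built-in ws function
    "moodlewsrestformat": "json",
})
print(resp.json())
```

The tool integration strategy, by contrast, embeds the remote system in the LMS user interface (e.g. via LTI), trading the fine-grained data access of the API strategy for looser coupling.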
Abstract:
The confluence of education with the evolution of technology has accelerated the paradigm shift from face-to-face learning to distance learning. In this scenario, e-Learning plays an essential role as a facilitator of the teaching/learning process. However, new demands associated with the new Web paradigm require that existing e-Learning environments, characterized mostly by monolithic systems, begin interacting with new specialized services. In this decentralized scenario, defining an interoperability strategy is the cornerstone for ensuring standardized communication among systems. This paper presents the definition of an interoperability strategy for an e-Learning environment at our school (ESEIG), called PEACE (Project for ESEIG Academic Content Environment). This new interoperability model relies on the application of several coordination and integration standards to the services included in the PEACE environment and controlled by teachers and students, such as social networks, repositories, libraries, e-portfolios, intelligent tutors, recommendation systems and virtual classrooms.
Abstract:
Learning computer programming requires solving programming exercises. In computer programming courses, teachers need to assess and give feedback on a large number of exercises. These tasks are time-consuming and error-prone, since there are many aspects related to good programming that should be considered. In this context, automatic assessment tools can play an important role, helping teachers with grading tasks and assisting students with automatic feedback. In spite of their usefulness, these tools lack integration mechanisms with other eLearning systems such as Learning Management Systems, learning object repositories or Integrated Development Environments. In this paper we provide a survey of programming evaluation systems. The survey gathers information on the interoperability features of these systems, categorizing and comparing them regarding content and communication standardization. This work may prove useful to instructors and computer science educators when they have to choose an assessment system to integrate into their e-Learning environment.
Abstract:
This paper presents a tool called Petcha that acts as an automated teaching assistant in computer programming courses. The ultimate objective of Petcha is to increase the number of programming exercises effectively solved by students. Petcha meets this objective by helping both teachers to author programming exercises and students to solve them. It also coordinates a network of heterogeneous systems, integrating automatic program evaluators, learning management systems, learning object repositories and integrated programming environments. This paper presents the concept and design of Petcha and sets the tool in a service-oriented architecture for managing learning processes based on the automatic evaluation of programming exercises. The paper also presents a case study that validates the use of Petcha and of the proposed architecture.
Abstract:
Among the most important measures to prevent wild forest fires is the use of prescribed and controlled burning, which reduces the availability of fuel mass. However, the impact of these activities on soil physical and chemical properties varies according to the type of both soil and vegetation and is not fully understood. Therefore, soil monitoring campaigns are often used to measure these impacts. In this paper we successfully used three statistical treatments (the Kolmogorov-Smirnov test followed by the ANOVA and Kruskal-Wallis tests) to investigate the variability of soil pH, soil moisture, soil organic matter and soil iron for different monitoring times and sampling procedures.
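For readers who want to reproduce this pipeline, the sketch below runs the same three treatments with scipy on synthetic stand-in data (the real campaign data are not reproduced here):

```python
# Sketch of the three-step treatment: normality check, then parametric and
# non-parametric comparisons across sampling groups. Data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(5.2, 0.3, 30)   # stand-in soil pH samples per group
group_b = rng.normal(5.5, 0.3, 30)
group_c = rng.normal(5.1, 0.4, 30)

# 1) Kolmogorov-Smirnov test for normality of each (standardized) group
for name, g in [("A", group_a), ("B", group_b), ("C", group_c)]:
    stat, p = stats.kstest((g - g.mean()) / g.std(ddof=1), "norm")
    print(f"group {name}: KS p = {p:.3f}")

# 2) One-way ANOVA, appropriate when the groups look normal
print("ANOVA:", stats.f_oneway(group_a, group_b, group_c))

# 3) Kruskal-Wallis, the non-parametric alternative
print("Kruskal-Wallis:", stats.kruskal(group_a, group_b, group_c))
```

Running ANOVA and Kruskal-Wallis side by side is a common safeguard: when the normality assumption is doubtful, agreement between the two tests strengthens the conclusion.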
Abstract:
Prepared for presentation at the Portuguese Finance Network International Conference 2014, Vilamoura, Portugal, June 18-20
Abstract:
The teaching-learning process is increasingly focused on combining the paradigms of "learning by viewing" and "learning by doing." In this context, educational resources, whether expository or evaluative, play a pivotal role. Both types of resources are interdependent, and sequencing them creates a richer educational experience for the end user. However, there is a lack of tools that support sequencing, essentially because existing specifications are complex. Seqins is a sequencing tool for digital resources with a fairly simple sequencing model. The tool communicates through the IMS LTI specification with a plethora of e-learning systems, such as learning management systems, repositories, authoring systems and evaluation systems. In order to validate Seqins we integrated it into an instance of the Ensemble e-learning framework for computer programming learning.
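The abstract does not spell out the sequencing model; purely as an illustration of what a "fairly simple" model can look like, the sketch below gates progress on evaluative resources with a grade threshold (the names and the threshold rule are assumptions, not Seqins's actual design):

```python
# Illustrative sequencing rule, NOT Seqins's actual model: an evaluative
# resource unlocks the next unit only once the grade reaches a threshold.
from dataclasses import dataclass

@dataclass
class Unit:
    name: str
    kind: str                # "expository" or "evaluative"
    threshold: float = 0.0   # minimum grade to advance (evaluative only)

@dataclass
class Sequence:
    units: list
    position: int = 0

    def advance(self, grade=None):
        current = self.units[self.position]
        if current.kind == "evaluative" and (grade or 0.0) < current.threshold:
            return current   # grade too low: stay on the exercise
        self.position = min(self.position + 1, len(self.units) - 1)
        return self.units[self.position]

seq = Sequence([Unit("Loops video", "expository"),
                Unit("Loops exercise", "evaluative", threshold=0.5),
                Unit("Arrays video", "expository")])
seq.advance()                        # watched the video -> exercise
print(seq.advance(grade=0.3).name)   # Loops exercise (blocked)
print(seq.advance(grade=0.8).name)   # Arrays video (unlocked)
```

Even a rule this small lets faster students move ahead while weaker students are held on the evaluative steps, which is the kind of per-student pacing the tool aims for.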
Abstract:
This paper presents the creation and development of technological schools directly linked to the business community and to public higher education. Establishing themselves as the key interface between the two sectors, they make a significant contribution by providing a greater competitive edge in the face of increasing competition in traditional markets. The development of new business strategies supported by references of excellence, quality and competitiveness also favours the establishment of partnerships between technological schools and technology-oriented higher education, aiming at the intermediate-level qualification of education boards. We present a case study depicting the success of Escola Tecnológica de Vale de Cambra.
Abstract:
The changes introduced into the European Higher Education Area (EHEA) by the Bologna Process, together with renewed pedagogical and methodological practices, have created a new teaching-learning paradigm: Student-Centred Learning. In addition, the last few years have been characterized by the application of information technologies, especially the Semantic Web, not only to the teaching-learning process but also to administrative processes within learning institutions. The aim of this study was, on the one hand, to present a model for identifying and classifying Competencies and Learning Outcomes and, on the other, to develop the computer applications of the information management model, namely a relational database and an ontology.
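As a toy illustration of the two artifacts mentioned, the sketch below stores a learning outcome in a relational table and expresses the same concept as ontology triples (all table, class and namespace names are invented for the example, not the study's actual model):

```python
# Toy sketch of the two artifacts: a relational table (sqlite3) and an
# ontology fragment (rdflib). All names/namespaces are invented examples.
import sqlite3
from rdflib import Graph, Literal, Namespace, RDF, RDFS

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE learning_outcome (
    id INTEGER PRIMARY KEY, competency TEXT, description TEXT)""")
db.execute("INSERT INTO learning_outcome VALUES "
           "(1, 'Programming', 'Writes a correct iterative algorithm')")
print(db.execute("SELECT * FROM learning_outcome").fetchall())

EX = Namespace("http://example.org/ehea#")
g = Graph()
g.add((EX.LearningOutcome, RDF.type, RDFS.Class))
g.add((EX.lo1, RDF.type, EX.LearningOutcome))
g.add((EX.lo1, RDFS.label, Literal("Writes a correct iterative algorithm")))
print(g.serialize(format="turtle"))
```

The database answers administrative queries efficiently, while the ontology makes the competency/outcome relations explicit and shareable, which is presumably why the study built both.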
Abstract:
Existing adaptive educational hypermedia systems use learning resource sequencing approaches to enrich the learning experience. In this context, educational resources, whether expository or evaluative, play a central role. However, there is a lack of tools that support sequencing, essentially because existing specifications are complex. This paper presents Seqins, a sequencing tool for digital educational resources. Seqins includes a simple and flexible sequencing model that allows heterogeneous students to learn at different rhythms. The tool communicates through the IMS Learning Tools Interoperability specification with a plethora of e-learning systems, such as learning management systems, repositories, authoring systems and automatic evaluation systems. In order to validate Seqins we integrated it into an instance of the Ensemble e-learning framework for the computer programming learning domain.
Abstract:
To meet the increasing demands of complex inter-organizational processes and the demand for continuous innovation and internationalization, it is evident that new forms of organisation are being adopted, fostering more intensive collaboration processes and sharing of resources, in what can be called collaborative networks (Camarinha-Matos, 2006:03). Information and knowledge are crucial resources in collaborative networks, and their management comprises fundamental processes to optimize. Knowledge organisation and collaboration systems are thus important instruments for the success of collaborative networks of organisations, and have been researched over the last decade in the areas of computer science, information science, management sciences, terminology and linguistics. Nevertheless, research in this area has not given much attention to multilingual contexts of collaboration, which pose specific and challenging problems. It is clear that access to and representation of knowledge will happen more and more in multilingual settings, which implies overcoming the difficulties inherent to the presence of multiple languages through processes such as the localization of ontologies. Although localization, like other processes that involve multilingualism, is a rather well-developed practice whose methodologies and tools are fruitfully employed by the language industry in the development and adaptation of multilingual content, it has not yet been sufficiently explored as an element of support to the development of knowledge representations, in particular ontologies, expressed in more than one language. Multilingual knowledge representation is therefore an open research area calling for cross-contributions from knowledge engineering, terminology, ontology engineering, cognitive sciences, computational linguistics, natural language processing, and management sciences.

This workshop brought together researchers interested in multilingual knowledge representation, in a multidisciplinary environment, to debate the possibilities of cross-fertilization between these disciplines in contexts where multilingualism continuously creates new and demanding challenges to current knowledge representation methods and techniques. Six papers dealing with different approaches to multilingual knowledge representation are presented, most of them describing tools, approaches and results obtained in ongoing projects.

In the first paper, Andrés Domínguez Burgos, Koen Kerremans and Rita Temmerman present a software module that is part of a workbench for terminological and ontological mining: Termontospider, a wiki crawler that aims to optimally traverse Wikipedia in search of domain-specific texts for extracting terminological and ontological information. The crawler is part of a tool suite for automatically developing multilingual termontological databases, i.e. ontologically-underpinned multilingual terminological databases. The authors describe the basic principles behind the crawler and summarize the research setting in which the tool is currently being tested.

In the second paper, Fumiko Kano presents a comparison of four feature-based similarity measures derived from the cognitive sciences. The purpose of the comparative analysis is to identify the potentially most effective model for mapping independent ontologies in a culturally influenced domain. To this end, datasets based on standardized pre-defined feature dimensions and values, obtainable from the UNESCO Institute for Statistics (UIS), were used to verify the similarity measures on objectively developed data. According to the author, the results demonstrate that the Bayesian Model of Generalization provides the most effective cognitive model for identifying the most similar corresponding concepts for a targeted socio-cultural community.

In the third paper, Thierry Declerck, Hans-Ulrich Krieger and Dagmar Gromann present ongoing work on the automatic extraction of information from multilingual financial Web resources, to provide candidate terms for building ontology elements or instances of ontology concepts. The authors propose an approach complementary to the direct localization/translation of ontology labels: acquiring terminologies by accessing and harvesting the multilingual Web presences of structured information providers in the field of finance. This leads to the detection of candidate terms in various multilingual sources in the financial domain that can be used not only as labels of ontology classes and properties but also for the possible generation of (multilingual) domain ontologies themselves.

In the fourth paper, Manuel Silva, António Lucas Soares and Rute Costa claim that, despite the availability of tools, resources and techniques aimed at the construction of ontological artifacts, developing a shared conceptualization of a given reality still raises questions about the principles and methods that support the initial phases of conceptualization. These questions become, according to the authors, more complex when the conceptualization occurs in a multilingual setting. To tackle these issues the authors present conceptME, a collaborative platform where terminological and knowledge representation processes support domain experts throughout a conceptualization framework, allowing the inclusion of multilingual data as a way to promote knowledge sharing, enhance conceptualization and support a multilingual ontology specification.

In the fifth paper, Frieda Steurs and Hendrik J. Kockaert present TermWise, a large project dealing with legal terminology and phraseology for the Belgian public services, i.e. the translation office of the ministry of justice. The project aims at developing an advanced tool that embeds expert knowledge in the algorithms that extract specialized language from textual data (legal documents); its outcome is a knowledge database of Dutch/French equivalents for legal concepts, enriched with the phraseology related to the terms under discussion.

Finally, Deborah Grbac, Luca Losito, Andrea Sada and Paolo Sirito report on the preliminary results of a pilot project currently ongoing at the UCSC Central Library, where they propose to adapt, for subject librarians employed in large and multilingual academic institutions, the model used by translators working within European Union institutions. The authors use User Experience (UX) Analysis to provide subject librarians with visual support, by means of "ontology tables" depicting the conceptual linking and connections of words with concepts, presented according to their semantic and linguistic meaning.

The organizers hope that the selection of papers presented here will be of interest to a broad audience and will be a starting point for further discussion and cooperation.
Abstract:
Demand response can play a very relevant role in power systems that make intensive use of distributed energy resources, of which intermittent renewable sources are a significant part. More active consumer participation can help improve system reliability and decrease or defer required investments. Adequate use and management of demand response is even more important in competitive electricity markets. However, experience shows that it is difficult to get demand response adequately used in this context, demonstrating the need for research in this area. The most important difficulties seem to be caused by inadequate business models and by inadequate management of demand response programs. This paper contributes to the development of methodologies and a computational infrastructure able to provide the involved players with adequate decision support on the design and use of demand response programs and contracts. The presented work uses DemSi, a demand response simulator developed by the authors to simulate demand response actions and programs, which includes realistic power system simulation. It includes an optimization module for the application of demand response programs and contracts using deterministic and metaheuristic approaches. The proposed methodology is an important improvement to the simulator, providing adequate tools for the adoption of demand response programs by the involved players. A machine learning method based on clustering and classification techniques is also used, resulting in a rule base concerning the use of DR programs and contracts. A case study concerning the use of demand response in an incident situation is presented.
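The abstract names clustering and classification without fixing the algorithms; the sketch below shows one plausible shape for that step, clustering synthetic load profiles and fitting a shallow decision tree whose printed rules play the role of the rule base (the algorithm choices and feature names are assumptions, not DemSi's documented internals):

```python
# Plausible sketch of the ML step: cluster consumers' load profiles, then
# learn interpretable rules mapping consumer features to a cluster (and thus
# to a suitable DR program). Algorithms and data are assumptions/synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
profiles = rng.random((200, 24))   # synthetic 24-hour load profiles
features = rng.random((200, 3))    # e.g. contracted power, sector, size

clusters = KMeans(n_clusters=3, n_init=10,
                  random_state=0).fit_predict(profiles)

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(features, clusters)

# The printed tree is the human-readable "rule base" for program assignment.
print(export_text(tree, feature_names=["power", "sector", "size"]))
```

A depth-limited tree is chosen here precisely because its branches read as if-then rules, matching the abstract's description of a rule base rather than a black-box classifier.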