5 results for Language evaluation

at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain


Relevance: 40.00%

Abstract:

In recent years, a large number of multimedia materials for language learning have been published, most of them CD-ROMs designed as self-study courses. With these materials, learners can work independently, without the guidance of a teacher, and for this reason it has been claimed that they promote and facilitate autonomous learning. This relationship, however, is not certain, as Phil Benson and Peter Voller (1997:10) have aptly pointed out: "(…) Such claims are often dubious, however, because of the limited range of options and roles offered to the learner. Nevertheless, technologies of education in the broadest sense can be considered to be either more or less supportive of autonomy. The question is what kind of criteria do we apply in evaluating them?" In this article we present a joint study that defines the criteria that can be used to evaluate multimedia materials with respect to how well they support autonomous learning. These criteria are the basis of a questionnaire that has been used to evaluate a selection of CD-ROMs intended for self-directed language learning. The article is structured as follows:
- An introduction to the study
- The criteria used to build the questionnaire
- The overall results of the evaluation
- The conclusions drawn and their relevance for multimedia instructional design
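
The abstract does not reproduce the questionnaire itself; as a purely illustrative sketch of criterion-based scoring (the criterion names and the 1-5 rating scale below are hypothetical, not taken from the study), per-criterion ratings for one CD-ROM could be aggregated like this:

# Minimal sketch (hypothetical criteria and scale, not the authors' instrument):
# aggregate questionnaire ratings per autonomy criterion for one CD-ROM.
from statistics import mean

# Each entry: criterion -> list of 1-5 ratings given by evaluators.
ratings = {
    "learner control over navigation": [4, 5, 3],
    "availability of feedback":        [2, 3, 2],
    "support for self-assessment":     [3, 4, 4],
}

def summarize(ratings):
    """Return the mean rating per criterion and an overall mean."""
    per_criterion = {c: mean(scores) for c, scores in ratings.items()}
    overall = mean(per_criterion.values())
    return per_criterion, overall

per_criterion, overall = summarize(ratings)
for criterion, score in per_criterion.items():
    print(f"{criterion}: {score:.2f}")
print(f"overall autonomy support: {overall:.2f}")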

Relevance: 30.00%

Abstract:

This paper analyzes and evaluates, in the context of ontology learning, some techniques to identify and extract candidate terms for the classes of a taxonomy. In addition, this work points out some inconsistencies that may occur in the preprocessing of the text corpus and proposes techniques to obtain good candidate terms for the classes of a taxonomy.
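
The abstract does not name the specific extraction techniques; the following is a minimal, frequency-based sketch of the general idea (the stopword list, threshold, and toy corpus are assumptions for illustration), with one explicit preprocessing pipeline so the steps stay consistent:

# Illustrative sketch only: frequency-based extraction of candidate class terms
# from a small corpus, with a single, explicit preprocessing function.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "are", "for", "from"}

def preprocess(text):
    """Lowercase, strip punctuation, and drop stopwords (one fixed pipeline)."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return [t for t in tokens if t not in STOPWORDS and len(t) > 2]

def candidate_terms(corpus, min_freq=2):
    """Return tokens frequent enough to be proposed as taxonomy class candidates."""
    counts = Counter()
    for document in corpus:
        counts.update(preprocess(document))
    return [term for term, freq in counts.most_common() if freq >= min_freq]

corpus = [
    "Ontology learning extracts classes from text.",
    "Classes of a taxonomy are learned from a text corpus.",
]
print(candidate_terms(corpus))  # e.g. ['classes', 'text']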

Relevance: 30.00%

Abstract:

Collaborative activities, in which students actively interact with each other, have proved to provide significant learning benefits. In Computer-Supported Collaborative Learning (CSCL), these collaborative activities are assisted by technologies. However, the use of computers does not guarantee collaboration, as free collaboration does not necessarily lead to fruitful learning. Therefore, practitioners need to design CSCL scripts that structure the collaborative settings so that they promote learning. However, not all teachers have the technical and pedagogical background needed to design such scripts. With the aim of assisting teachers in designing effective CSCL scripts, we propose a model to support the selection of reusable good practices (formulated as patterns) so that they can be used as a starting point for their own designs. This model is based on a pattern ontology that computationally represents the knowledge captured in a pattern language for the design of CSCL scripts. A preliminary evaluation of the proposed approach is provided with two examples based on a set of meaningful interrelated patterns computationally represented with the pattern ontology, and a paper prototyping experience carried out with two teachers. The results offer interesting insights towards the implementation of the pattern ontology in software tools.
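
The abstract describes the pattern ontology only at a high level; as a simplified stand-in for the selection idea (the record fields and the matching rule below are assumptions, not the paper's ontology or pattern language), reusable patterns could be looked up by a teacher's design intent and expanded through their interrelations:

# Simplified stand-in: patterns as plain records selected by keyword, then
# expanded via their "related" links (fields and rule are hypothetical).
from dataclasses import dataclass, field

@dataclass
class Pattern:
    name: str
    intent: str
    related: list = field(default_factory=list)  # names of interrelated patterns

PATTERNS = [
    Pattern("Jigsaw", "positive interdependence through complementary expertise",
            related=["Expert Group"]),
    Pattern("Pyramid", "progressive consensus building in growing groups"),
    Pattern("Expert Group", "deepen knowledge of one sub-topic before sharing"),
]

def suggest(keyword):
    """Return patterns whose intent mentions the keyword, plus related patterns."""
    by_name = {p.name: p for p in PATTERNS}
    hits = [p for p in PATTERNS if keyword.lower() in p.intent.lower()]
    for p in list(hits):
        hits.extend(by_name[r] for r in p.related if by_name[r] not in hits)
    return hits

for p in suggest("expertise"):
    print(p.name, "-", p.intent)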

Relevance: 30.00%

Abstract:

A range of different language systems for nursing diagnoses, interventions and outcomes are currently available. Nursing terminologies are intended to support nursing practice, but they have to be evaluated. This study aims to assess the results of an expert survey to establish the face validity of a nursing interface terminology. The study applied a descriptive design with a cross-sectional survey strategy using a written questionnaire administered to expert nurses working in hospitals. Sample size was estimated at 35 participants. The questionnaire included topics related to validity and reliability criteria for nursing controlled vocabularies described in the literature. The main outcome measures were the mean global score and the criteria scoring at least 7. The analysis included descriptive statistics with a confidence level of 95%. The mean global score was 8.1. The mean score was 8.4 for the validity criteria and 7.8 for the reliability and applicability criteria. Two of the criteria for reliability and applicability evaluation did not achieve minimum scores. According to the experts' responses, this terminology meets face validity, but improvements are required in some criteria and further research is needed to fully demonstrate its metric properties.
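
As a sketch of the scoring logic the abstract describes (per-criterion means, a global mean, a threshold of 7, and 95% confidence intervals), the computation could look like the following; the scores below are illustrative placeholders, not the study's data:

# Sketch of the reported outcome measures on placeholder data: per-criterion
# means with approximate 95% CIs, a below-threshold flag, and the global mean.
from statistics import mean, stdev
from math import sqrt

THRESHOLD = 7
# criterion -> scores (0-10) given by surveyed expert nurses (illustrative only)
scores = {
    "content validity": [8, 9, 8, 7, 9],
    "reliability":      [7, 6, 8, 7, 6],
    "applicability":    [6, 7, 6, 5, 7],
}

def ci95(values):
    """Approximate 95% confidence interval for the mean (normal approximation)."""
    m, half = mean(values), 1.96 * stdev(values) / sqrt(len(values))
    return m - half, m + half

for criterion, vals in scores.items():
    m = mean(vals)
    low, high = ci95(vals)
    flag = "" if m >= THRESHOLD else "  <- below threshold"
    print(f"{criterion}: mean={m:.1f} 95% CI=({low:.1f}, {high:.1f}){flag}")

print(f"global mean: {mean(v for vals in scores.values() for v in vals):.1f}")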