825 results for digital learning tools


Relevance:

40.00%

Publisher:

Abstract:

As an introduction to a series of articles focused on the exploration of particular tools and/or methods that bring together digital technology and historical research, the aim of this paper is mainly to highlight and discuss to what extent those methodological approaches can contribute to improving the analytical and interpretative capabilities available to historians. At a moment when the digital world presents us with an ever-increasing variety of tools for the extraction, analysis and visualization of large amounts of text, we thought it relevant to bring the digital closer to the vast historical academic community. More than repeating the idea of a digital revolution in historical research, recurrent in the literature since the 1980s, the aim was to show the validity and usefulness of digital tools and methods as another set of highly relevant instruments that historians should consider. To this end, several case studies were used, combining the exploration of specific themes of historical knowledge with the development or discussion of digital methodologies, in order to highlight some changes and challenges that, in our opinion, are already affecting historians' work, such as a greater emphasis on interdisciplinarity and collaborative work, and the need for the communication of historical knowledge to become more interactive.
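By way of illustration (not taken from the articles in the series), a text-analysis pipeline of the kind discussed here can be as small as a tokenizer plus a frequency counter; the corpus sentence and the stopword list below are invented placeholders:

```python
# Minimal sketch of text extraction and analysis for historical corpora,
# using only the standard library. Corpus and stopwords are hypothetical.
import re
from collections import Counter

def term_frequencies(text, stopwords=frozenset({"the", "of", "and", "a", "to"})):
    """Tokenize a document and count content-word frequencies."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return Counter(t for t in tokens if t not in stopwords)

corpus = ("The archive of the colonial administration records the trade "
          "of goods and the movement of people.")
freq = term_frequencies(corpus)
print(freq.most_common(3))
```

Visualization would follow the same pattern: the `Counter` output feeds directly into any plotting or word-cloud tool.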

Relevance:

40.00%

Publisher:

Abstract:

Unilever Food Solutions' new digital CRM platform: what combination of tools, processes and content will help Unilever Food Solutions grow its business? Unilever Food Solutions (UFS) intends to create a new online platform to communicate with market segments that have previously been too difficult to reach. Specifically targeted at chefs and other food professionals, the aim is to create an interactive website that delivers value to its intended users by providing a variety of relevant content and functions, while simultaneously opening up a potential transactional channel to those same users.

Relevance:

40.00%

Publisher:

Abstract:

This paper presents general problems and approaches for spatial data analysis using machine learning algorithms. Machine learning is a very powerful approach to adaptive data analysis, modelling and visualisation. The key feature of machine learning algorithms is that they learn from empirical data and can be used in cases where the modelled environmental phenomena are hidden, nonlinear, noisy and highly variable in space and time. Most machine learning algorithms are universal and adaptive modelling tools developed to solve the basic problems of learning from data: classification/pattern recognition, regression/mapping and probability density modelling. In the present report some of the widely used machine learning algorithms, namely artificial neural networks (ANN) of different architectures and Support Vector Machines (SVM), are adapted to the analysis and modelling of geo-spatial data. Machine learning algorithms have an important advantage over traditional models of spatial statistics when problems are considered in high-dimensional geo-feature spaces, i.e., when the dimension of the space exceeds 5. Such features are usually generated, for example, from digital elevation models, remote sensing images, etc. An important extension of the models concerns the consideration of real-space constraints such as geomorphology, networks and other natural structures. Recent developments in semi-supervised learning can improve the modelling of environmental phenomena by taking geo-manifolds into account. An important part of the study deals with the analysis of relevant variables and model inputs. This problem is approached using different nonlinear feature selection/feature extraction tools.
To demonstrate the application of machine learning algorithms, several interesting case studies are considered: digital soil mapping using SVM, automatic mapping of soil and water system pollution using ANN, natural hazard risk analysis (avalanches, landslides), and assessments of renewable resources (wind fields) with SVM and ANN models. The dimensionality of the spaces considered varies from 2 to more than 30. Figures 1, 2 and 3 demonstrate some results of the studies and their outputs. Finally, the results of environmental mapping are discussed and compared with traditional models of geostatistics.
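As a hedged sketch of how extra geo-features can be generated from a digital elevation model (this is not the report's code; the tilted-plane DEM is synthetic):

```python
# Deriving a "geo-feature" (slope) from a synthetic digital elevation
# model -- the kind of input that pushes spatial ML beyond coordinates.
import math

def slope_at(z, i, j, cell=1.0):
    """Central-difference slope magnitude on an elevation grid z[i][j]."""
    dzdx = (z[i][j + 1] - z[i][j - 1]) / (2 * cell)
    dzdy = (z[i + 1][j] - z[i - 1][j]) / (2 * cell)
    return math.hypot(dzdx, dzdy)

# Synthetic tilted-plane DEM: z = 2x + 3y on a 10x10 grid
dem = [[2.0 * x + 3.0 * y for x in range(10)] for y in range(10)]
s = slope_at(dem, 5, 5)  # gradient magnitude of the plane: sqrt(2^2 + 3^2)
```

In a real study such derived layers (slope, aspect, curvature, remote-sensing bands) are stacked with the coordinates to form the high-dimensional geo-feature space fed to the SVM or ANN.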

Relevance:

40.00%

Publisher:

Abstract:

This paper analyses the use of open video editing tools to support the creation and production of online collaborative audiovisual projects for higher education. It focuses on the possibilities offered by these tools to promote collective creation in virtual environments.

Relevance:

40.00%

Publisher:

Abstract:

Hypermedia systems based on the Web for open distance education are becoming increasingly popular as tools for user-driven access to learning information. Adaptive hypermedia is a new research direction within the area of user-adaptive systems, aimed at increasing their functionality by making them personalized [Eklu 96]. This paper sketches a general agent architecture to provide navigational adaptability and user-friendly processes that guide and accompany the student during his/her learning on the PLAN-G hypermedia system (New Generation Telematics Platform to Support Open and Distance Learning), with the aid of computer networks and specifically WWW technology [Marz 98-1] [Marz 98-2]. The current PLAN-G prototype is successfully used with some informatics courses (this version has no agents yet). The proposed multi-agent system contains two different types of adaptive autonomous software agents: Personal Digital Agents (Interface), to interact directly with the student when necessary; and Information Agents (Intermediaries), to filter and discover information to learn and to adapt the navigation space to a specific student.
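The two agent roles can be caricatured in a few lines of code; everything here (class names, the prerequisite-based filtering rule, the course graph) is a hypothetical illustration, not the actual PLAN-G implementation:

```python
# Toy sketch of the two proposed agent types.
class InformationAgent:
    """Intermediary: filters the navigation space for one student."""
    def __init__(self, pages):
        self.pages = pages  # page -> set of prerequisite pages

    def adapt(self, completed):
        # Offer a page only once all its prerequisites are completed.
        return [p for p, prereqs in self.pages.items() if prereqs <= completed]

class PersonalDigitalAgent:
    """Interface: presents the filtered space to the student."""
    def __init__(self, information_agent):
        self.info = information_agent

    def suggest(self, completed):
        return sorted(self.info.adapt(set(completed)))

course = {"intro": set(), "syntax": {"intro"}, "recursion": {"intro", "syntax"}}
pda = PersonalDigitalAgent(InformationAgent(course))
reachable = pda.suggest(["intro"])  # pages open to a student who finished "intro"
```

The point of the split is the same as in the paper: the interface agent talks to the student, while the intermediary agent decides what part of the hypermedia space that student should see.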

Relevance:

40.00%

Publisher:

Abstract:

The identification and integration of reusable and customizable CSCL (Computer Supported Collaborative Learning) tools may benefit from the capture of best practices in collaborative learning structuring. The authors have proposed CLFPs (Collaborative Learning Flow Patterns) as a way of collecting these best practices. To facilitate the processing of CLFPs by software systems, the paper proposes to specify these patterns using IMS Learning Design (IMS-LD). Thus, teachers without technical knowledge can particularize and integrate CSCL tools. Nevertheless, IMS-LD's support for describing collaborative learning activities has some deficiencies: the collaborative tools that can be defined in these activities are limited. This paper therefore proposes and discusses an extension to IMS-LD that makes it possible to specify several characteristics of the tools that mediate collaboration. A three-stage process is also proposed for obtaining a Unit of Learning based on a CLFP. A CLFP-based Unit of Learning example is used to illustrate the process and the need for the proposed extension.
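A minimal sketch of the kind of tool-description extension being argued for, with invented element and attribute names (this is not the authors' actual schema, only an illustration of extending an IMS-LD environment with collaboration-tool metadata):

```python
# Hypothetical IMS-LD-style environment fragment built with the stdlib.
import xml.etree.ElementTree as ET

def tool_environment(tool_type, max_users):
    """Build an <environment> carrying an invented collaboration-tool spec."""
    env = ET.Element("environment")
    tool = ET.SubElement(env, "collaboration-tool",
                         {"type": tool_type, "max-users": str(max_users)})
    ET.SubElement(tool, "awareness", {"show-participants": "true"})
    return ET.tostring(env, encoding="unicode")

xml = tool_environment("shared-whiteboard", 4)
```

The extension idea is precisely this: attributes such as group size and awareness support, which plain IMS-LD environments cannot express, become machine-readable so a runtime can instantiate the right tool.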

Relevance:

40.00%

Publisher:

Abstract:

This paper describes a bibliographic analysis of the vision of Marshall McLuhan and the views adopted by diverse current authors regarding the use of new interactive learning technologies. The paper also analyzes the transformation that will have to take place in formal educational settings in order to improve their social function. The main points of view and contributions made by diverse authors are discussed. It is important that all actors involved in the educational process take these contributions into consideration in order to be ready for future changes.

Relevance:

40.00%

Publisher:

Abstract:

Music is a compulsory subject in primary education. We have identified several teachers from different educational areas, among them music, who use e-learning platforms and web tools to support teaching the curriculum set by the Departament d'Educació de la Generalitat de Catalunya. From the body of analysis, an overview of e-learning platforms was drawn up, analysing their typologies and uses. Through the sample of e-learning platforms in music education, four schools with e-learning platforms at an advanced stage were identified. A case study was carried out on one of these platforms to analyse its contents and validate the interview format used; this allowed us to create a model that can be applied in other schools with an e-learning platform for the specific subject of music.

Relevance:

40.00%

Publisher:

Abstract:

ABSTRACT In recent years, geotechnologies such as remote and proximal sensing, together with attributes derived from digital terrain elevation models, have proved very useful for the description of soil variability. However, these information sources are rarely used together. Therefore, a methodology for assessing and spatializing soil classes using information obtained from remote/proximal sensing, GIS and expert knowledge was applied and evaluated. Two study areas in the State of São Paulo, Brazil, totaling approximately 28,000 ha, were used for this work. First, in one area (area 1), conventional pedological mapping was carried out, and from the soil classes found, patterns were obtained with the following information: a) spectral information (shapes of features and absorption intensity of spectral curves in the 350-2,500 nm wavelength range) of soil samples collected at specific points in the area (according to each soil type); b) equations for determining chemical and physical properties of the soil, derived from the relationship between the levels of chemical and physical attributes obtained in the laboratory by the conventional method and the spectral data; c) supervised classification of Landsat 5 TM images, in order to detect changes in the size of soil particles (soil texture); d) the relationship between soil classes and relief attributes. Subsequently, the obtained patterns were applied in area 2 to derive a pedological classification of its soils within a GIS (ArcGIS). Finally, a conventional pedological map was produced for area 2 and compared with the digital map, i.e., the one obtained only with the previously determined patterns. The proposed methodology achieved 79 % accuracy at the first categorical level of the Soil Classification System, 60 % accuracy at the second level, and became less useful at the third level (37 % accuracy).
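The categorical-level accuracies can be illustrated with a toy comparison; the class codes below are invented, assuming a naming scheme in which each additional character of the code encodes a deeper categorical level:

```python
# Toy per-level agreement between a conventional and a digital soil map.
def level_accuracy(conventional, digital, level):
    """Fraction of sites whose class codes agree on the first `level` characters."""
    hits = sum(c[:level] == d[:level] for c, d in zip(conventional, digital))
    return hits / len(conventional)

conv = ["LVd", "LVe", "PVd", "RQo"]  # conventional (reference) map
digi = ["LVd", "LVd", "PAd", "RQo"]  # digital map from the learned patterns
acc1 = level_accuracy(conv, digi, 1)
```

As in the paper, agreement typically degrades with depth: a site can match at the order level while diverging at finer levels.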

Relevance:

40.00%

Publisher:

Abstract:

Automatic environmental monitoring networks, reinforced by wireless communication technologies, nowadays provide large and ever-increasing volumes of data. The use of this information in natural hazard research is an important issue. Particularly useful for risk assessment and decision making are spatial maps of hazard-related parameters produced from point observations and available auxiliary information. The purpose of this article is to present and explore appropriate tools for processing large amounts of available data and producing predictions at fine spatial scales. These are the algorithms of machine learning, which are aimed at non-parametric, robust modelling of non-linear dependencies from empirical data. The computational efficiency of the data-driven methods allows the prediction maps to be produced in real time, which makes them superior to physical models for operational use in risk assessment and mitigation. This situation is encountered, in particular, in the spatial prediction of climatic variables (topo-climatic mapping). In the complex topographies of mountainous regions, meteorological processes are highly influenced by the relief. The article shows how these relations, possibly regionalized and non-linear, can be modelled from data using information from digital elevation models. The particular illustration of the developed methodology concerns the mapping of temperatures (including situations of Föhn and temperature inversion) given measurements taken from the Swiss meteorological monitoring network. The range of methods used in the study includes data-driven feature selection, support vector algorithms and artificial neural networks.
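A minimal sketch (not the study's code) of prediction in a geo-feature space: inverse-distance-weighted k-NN over station coordinates plus DEM elevation, with invented station data. Note that without rescaling, the elevation term dominates the distance, which is exactly why real studies normalise their geo-features:

```python
# Toy topo-climatic prediction: temperature at an unsampled location
# from nearby stations, using (x, y, elevation) as the feature vector.
import math

def knn_predict(stations, query, k=2):
    """stations: list of ((x, y, elev), temperature). IDW-weighted k-NN."""
    ranked = sorted(stations, key=lambda s: math.dist(s[0], query))[:k]
    weights = [1.0 / (math.dist(f, query) + 1e-9) for f, _ in ranked]
    return sum(w * t for w, (_, t) in zip(weights, ranked)) / sum(weights)

stations = [((0.0, 0.0, 400.0), 12.0),   # valley stations: warmer
            ((1.0, 0.0, 800.0), 9.5),
            ((0.0, 1.0, 1500.0), 4.0)]   # mountain station: colder
t = knn_predict(stations, (0.5, 0.0, 600.0), k=2)
```

The two valley stations, closest in the feature space, dominate the estimate; the high-altitude station is excluded, mimicking (very crudely) how elevation-aware models capture lapse-rate structure.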

Relevance:

40.00%

Publisher:

Abstract:

Machine Learning for geospatial data: algorithms, software tools and case studies
Abstract: The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence; it is mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions to classification, regression and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical space and additional relevant spatially referenced features. They are well suited to implementation as predictive engines in decision support systems, for purposes of environmental data mining including pattern recognition, modeling and prediction as well as automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces.
The most important and popular machine learning algorithms and models of interest for the geo- and environmental sciences are presented in detail, from theoretical description of the concepts to software implementation. The main algorithms and models considered are the following: the multilayer perceptron (MLP, a workhorse of machine learning), general regression neural networks (GRNN), probabilistic neural networks (PNN), self-organising (Kohonen) maps (SOM), Gaussian mixture models (GMM), radial basis function networks (RBF), and mixture density networks (MDN). This set of models covers machine learning tasks such as classification, regression and density estimation. Exploratory data analysis (EDA) is the initial and a very important part of data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, namely experimental variography, and machine learning. Experimental variography is a basic tool for the geostatistical analysis of anisotropic spatial correlations which helps to detect the presence of spatial patterns, at least those described by two-point statistics. A machine learning approach to ESDA is presented through the k-nearest neighbours (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a currently hot topic, namely the automatic mapping of geospatial data. The general regression neural network is proposed as an efficient model for this task. The performance of the GRNN model is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where the GRNN significantly outperformed all other approaches, especially under emergency conditions.
The thesis consists of four chapters with the following structure: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office. The Machine Learning Office tools were developed over the last 15 years and have been used both for many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and for carrying out fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals; classification of soil types and hydro-geological units; decision-oriented mapping with uncertainties; and natural hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools useful for exploratory data analysis and visualisation were developed as well. The software is user-friendly and easy to use.
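The GRNN at the heart of the automatic-mapping work is, in its simplest form, a Nadaraya-Watson kernel estimator; here is a one-dimensional sketch with invented training points:

```python
# GRNN core idea: prediction as a Gaussian-kernel-weighted average of
# training targets, with a single smoothing parameter sigma.
import math

def grnn_predict(train, query, sigma=0.5):
    """train: list of (x, y) pairs. Kernel-weighted average of targets."""
    weights = [math.exp(-((x - query) ** 2) / (2 * sigma ** 2)) for x, _ in train]
    return sum(w * y for w, (_, y) in zip(weights, train)) / sum(weights)

train = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]
y = grnn_predict(train, 1.0)  # pulled toward 1.0 but shrunk by the neighbours
```

The single hyperparameter sigma is what makes the GRNN attractive for automatic mapping: it can be tuned by cross-validation with no architecture search.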

Relevance:

40.00%

Publisher:

Abstract:

The traditional model of learning based on knowledge transfer does not promote the acquisition of information-related competencies or the development of autonomous learning. More needs to be done to embrace learner-centred approaches based on constructivism, collaboration and co-operation. This new learning paradigm is aligned with the requirements of the European Higher Education Area (EHEA). In this sense, a learning experience based on faculty-librarian collaboration was seen as the best option for promoting student engagement and also as a way to increase information-related competencies in the Open University of Catalonia (UOC) academic context. This case study outlines the benefits of teacher-librarian collaboration in terms of pedagogical innovation, resource management, the introduction of open educational resources (OER) in virtual classrooms, information literacy (IL) training, and the use of 2.0 tools in teaching. Our faculty-librarian collaboration aims to provide an example of technology-enhanced learning and to demonstrate how working together improves the quality and relevance of educational resources in UOC's virtual classrooms. Under this new approach, while teachers change their role from instructors to facilitators of the learning process and extend their reach to students, libraries acquire an important presence in academic learning communities.

Relevance:

40.00%

Publisher:

Abstract:

Semantic Web technology is able to provide the required computational semantics for interoperability of learning resources across different Learning Management Systems (LMS) and Learning Object Repositories (LOR). The EU research project LUISA (Learning Content Management System Using Innovative Semantic Web Services Architecture) addresses the development of a reference semantic architecture for the major challenges in the search, interchange and delivery of learning objects in a service-oriented context. One of the key issues, highlighted in this paper, is Digital Rights Management (DRM) interoperability. A Semantic Web approach to copyright management has been followed, which places a Copyright Ontology as the key component for interoperability among existing DRM systems and other licensing schemes like Creative Commons. Moreover, Semantic Web tools like reasoners, rule engines and semantic queries facilitate the implementation of an interoperable copyright management component in the LUISA architecture.
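A toy illustration of the interoperability idea (emphatically not the LUISA Copyright Ontology): two licensing schemes mapped onto a shared set of permission concepts, so that a single rule can answer rights questions across schemes. All names here are hypothetical:

```python
# Shared-concept layer between licensing schemes, as a plain mapping.
SHARED_CONCEPTS = {
    # (scheme, license) -> abstract permissions it grants
    ("CreativeCommons", "CC-BY"): {"reproduce", "distribute", "derive"},
    ("CreativeCommons", "CC-BY-ND"): {"reproduce", "distribute"},
    ("VendorDRM", "view-only"): {"reproduce"},
}

def permits(scheme, license_name, action):
    """Answer 'may a user perform `action`?' via the shared concept layer."""
    return action in SHARED_CONCEPTS.get((scheme, license_name), set())

ok = permits("CreativeCommons", "CC-BY-ND", "derive")  # ND forbids derivatives
```

In the actual architecture this role is played by an ontology plus a reasoner, which additionally infers permissions that are not stated explicitly; the dictionary above only mimics the end result.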

Relevance:

40.00%

Publisher:

Abstract:

Many educators and educational institutions have yet to integrate web-based practices into their classrooms and curricula. As a result, it can be difficult to prototype and evaluate approaches to transforming classrooms from static endpoints to dynamic, content-creating nodes in the online information ecosystem. But many scholastic journalism programs have already embraced the capabilities of the Internet for virtual collaboration, dissemination, and reader participation. Because of this, scholastic journalism can act as a test-bed for integrating web-based sharing and collaboration practices into classrooms. Student Journalism 2.0 was a research project to integrate open copyright licenses into two scholastic journalism programs, to document outcomes, and to identify recommendations and remaining challenges for similar integrations. Video and audio recordings of two participating high school journalism programs informed the research. In describing the steps of our integration process, we note some important legal, technical, and social challenges. Legal worries such as uncertainty over copyright ownership could lead districts and administrators to disallow open licensing of student work. Publication platforms among journalism classrooms are far from standardized, making any integration of new technologies and practices difficult to achieve at scale. And teachers and students face challenges re-conceptualizing the role their class work can play online.
