704 results for "Learning support class"


Relevance:

30.00%

Publisher:

Abstract:

Purpose of the study: Basic life support (BLS) and automated external defibrillation (AED) represent important skills to be acquired during pregraduate medical training. Three years ago, our medical school introduced a BLS-AED course (with certification) for all second-year medical students. Few reports about the quality and persistence over time of BLS-AED learning are available to date in the medical literature. A comprehensive evaluation of students' acquired skills was performed at the end of the 2008 academic year, 6 months after certification. Materials and methods: The students (N = 142) were evaluated during a 9-minute «objective structured clinical examination» (OSCE) station. Based on a standardized scenario, they had to recognize a cardiac arrest situation and start a resuscitation process. Their performance was recorded on a PC using an Ambuman(TM) mannequin and the AmbuCPR software kit(TM) during a minimum of 8 cycles (30 compressions:2 ventilations each). BLS parameters were systematically checked. No student-rater interactions were allowed during the whole evaluation. Results: Response of the victim was checked by 99% of the students (N = 140); 96% (N = 136) called for an ambulance and/or an AED. Opening the airway and checking breathing were done by 96% (N = 137); 92% (N = 132) gave 2 rescue breaths. Pulse was checked by 95% (N = 135); 100% (N = 142) began chest compressions, 96% (N = 136) within 1 minute. The chest compression rate was 101 ± 18 per minute (mean ± SD), compression depth 43 ± 8 mm, and 97% (N = 138) respected a compression:ventilation ratio of 30:2. Conclusions: The quality of BLS skills acquisition is maintained during a 6-month period after BLS-AED certification. The main targets of the 2005 AHA guidelines were well respected. This analysis represents one of the largest evaluations of specific BLS teaching efficiency reported. Further follow-up is needed to assess the persistence of these skills over a longer time period, notably at the end of the pregraduate medical curriculum.

Relevance:

30.00%

Publisher:

Abstract:

Due to advances in sensor networks and remote sensing technologies, the acquisition and storage rates of meteorological and climatological data increase every day and call for novel and efficient processing algorithms. A fundamental problem of data analysis and modeling is the spatial prediction of meteorological variables in complex orography, which serves, among other purposes, extended climatological analyses, the assimilation of data into numerical weather prediction models, the preparation of inputs to hydrological models, and the real-time monitoring and short-term forecasting of weather. In this thesis, a new framework for spatial estimation is proposed by taking advantage of a class of algorithms emerging from statistical learning theory. Nonparametric kernel-based methods for nonlinear data classification, regression and target detection, known as support vector machines (SVM), are adapted for the mapping of meteorological variables in complex orography. With the advent of high-resolution digital elevation models, the field of spatial prediction has met new horizons. In fact, by exploiting image processing tools along with physical heuristics, a large number of terrain features accounting for the topographic conditions at multiple spatial scales can be extracted. Such features are highly relevant for the mapping of meteorological variables because they control a considerable part of the spatial variability of meteorological fields in the complex Alpine orography. For instance, patterns of orographic rainfall, wind speed and cold air pools are known to be correlated with particular terrain forms, e.g. convex/concave surfaces and upwind sides of mountain slopes. Kernel-based methods are employed to learn the nonlinear statistical dependence which links the multidimensional space of geographical and topographic explanatory variables to the variable of interest, that is, the wind speed as measured at the weather stations or the occurrence of orographic rainfall patterns as extracted from sequences of radar images. Compared to low-dimensional models integrating only the geographical coordinates, the proposed framework opens a way to regionalize meteorological variables which are multidimensional in nature and rarely show spatial auto-correlation in the original space, making the use of classical geostatistics problematic. The challenges explored during the thesis are manifold. First, the complexity of the models is optimized to impose appropriate smoothness properties and reduce the impact of noisy measurements. Secondly, a multiple kernel extension of SVM is considered to select the multiscale features which explain most of the spatial variability of wind speed. Then, SVM target detection methods are implemented to describe the orographic conditions which cause persistent and stationary rainfall patterns. Finally, the optimal splitting of the data is studied to estimate realistic performances and confidence intervals characterizing the uncertainty of predictions. The resulting maps of average wind speeds find applications within renewable resources assessment and open a route to decreasing the temporal scale of analysis to meet hydrological requirements. Furthermore, the maps depicting the susceptibility to orographic rainfall enhancement can be used to improve current radar-based quantitative precipitation estimation and forecasting systems and to generate stochastic ensembles of precipitation fields conditioned on the orography.
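The kernel regression setting described above can be sketched with off-the-shelf tools. The snippet below is a minimal illustration, not the thesis's actual model: it assumes scikit-learn's SVR as the kernel method, and the station coordinates, terrain features and "wind speed" target are all invented synthetic values.

```python
# Hedged sketch: support vector regression on a geo-feature space
# (coordinates + terrain descriptors), as in the framework described above.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Synthetic "stations": (x, y) coordinates plus two terrain features,
# e.g. elevation and a convexity index (all values invented).
n = 200
X = rng.uniform(0, 1, size=(n, 4))
# Wind speed depends nonlinearly on the terrain features, plus noise.
y = 5 + 8 * X[:, 2] ** 2 + 3 * np.sin(6 * X[:, 3]) + rng.normal(0, 0.5, n)

# Standardize features, then fit an RBF-kernel SVR on 150 "stations".
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.2))
model.fit(X[:150], y[:150])

# Predict at 50 held-out locations, as one would on a prediction grid.
pred = model.predict(X[150:])
```

In practice the feature columns would come from a digital elevation model rather than a random generator, and hyperparameters would be tuned by cross-validation as the thesis discusses.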

Relevance:

30.00%

Publisher:

Abstract:

The Institute has professionals with extensive experience in training, specifically in the field of police and emergency training. Moreover, it also has very talented people. But above all, our institution has public professionals with a desire to serve, who are passionate about security and emergency responders and want to provide them with the best knowledge to make them better professionals every day. In the quest for continuous training improvement, it was during 2009 that e-learning began to have a presence at the Institute. The virtual training methodology became a facilitator for the training of various professionals, avoiding geographical displacement and easing class schedules.

Relevance:

30.00%

Publisher:

Abstract:

This study is based on an analysis of the use of supplementary materials to teach vocabulary by second language teachers in Primary Education. The study consists of two analyses: the first is a quantitative analysis based on 33 questionnaires answered by different second language teachers in Primary Education. The other is a qualitative analysis in which the teachers' subjective opinions on vocabulary learning techniques are presented. The study covers these main aspects: material use, effectiveness, children's motivation, main criteria for teaching vocabulary, and the children's role in their own vocabulary learning.

Relevance:

30.00%

Publisher:

Abstract:

The book presents the state of the art in machine learning algorithms (artificial neural networks of different architectures, support vector machines, etc.) as applied to the classification and mapping of spatially distributed environmental data. Basic geostatistical algorithms are presented as well. New trends in machine learning and their application to spatial data are given, and real case studies based on environmental and pollution data are carried out. The book provides a CD-ROM with the Machine Learning Office software, including sample data sets, that will allow both students and researchers to put the concepts rapidly into practice.

Relevance:

30.00%

Publisher:

Abstract:

The Millennial generation is changing the way of learning, prompting educational institutions to attempt to better adapt to young people's needs by incorporating technologies into education. Based on this premise, we have reviewed the prominent reports on the integration of ICT into education with the aim of showing how education is changing, and will change, to meet the needs of Millennials with ICT support. We conclude that most of the investments have simply resulted in an increase in computers and access to the Internet, with teachers reproducing traditional approaches to education and e-learning being seen as complementary to face-to-face education. While it would seem that the use of ICT is not revolutionizing learning, it is facilitating the personalization, collaboration and ubiquity of learning.

Relevance:

30.00%

Publisher:

Abstract:

The Information Society has provided the context for the development of a new generation, known as the Millennials, who are characterized by their intensive use of technologies in everyday life. These features are changing the way of learning, prompting educational institutions to attempt to better adapt to young people's needs by incorporating technologies into education. Based on this premise, we have reviewed the prominent reports on the integration of ICT into education at different levels with the aim of showing how education is changing, and will change, to meet the needs of Millennials with ICT support. The results show that most of the investments have simply resulted in an increase in computers and access to the Internet, with teachers reproducing traditional approaches to education and e-learning being seen as complementary to face-to-face education. While it would seem that the use of ICT is not revolutionizing learning, it is facilitating the personalization, collaboration and ubiquity of learning.

Relevance:

30.00%

Publisher:

Abstract:

Our work is focused on alleviating the workload of designers of adaptive courses in the complex task of authoring adaptive learning designs adjusted to specific user characteristics and the user context. We propose an adaptation platform that consists of a set of intelligent agents, where each agent carries out an independent adaptation task. The agents apply machine learning techniques to support user modelling for the adaptation process.

Relevance:

30.00%

Publisher:

Abstract:

1. The ecological niche is a fundamental biological concept. Modelling species' niches is central to numerous ecological applications, including predicting species invasions, identifying reservoirs for disease, nature reserve design and forecasting the effects of anthropogenic and natural climate change on species' ranges. 2. A computational analogue of Hutchinson's ecological niche concept (the multidimensional hyperspace of species' environmental requirements) is the support of the distribution of environments in which the species persist. Recently developed machine-learning algorithms can estimate the support of such high-dimensional distributions. We show how support vector machines can be used to map ecological niches using only observations of species presence to train distribution models for 106 species of woody plants and trees in a montane environment using up to nine environmental covariates. 3. We compared the accuracy of three methods that differ in their approaches to reducing model complexity. We tested models with independent observations of both species presence and species absence. We found that the simplest procedure, which uses all available variables and no pre-processing to reduce correlation, was best overall. Ecological niche models based on support vector machines are theoretically superior to models that rely on simulating pseudo-absence data and are comparable in empirical tests. 4. Synthesis and applications. Accurate species distribution models are crucial for effective environmental planning, management and conservation, and for unravelling the role of the environment in human health and welfare. Models based on distribution estimation rather than classification overcome theoretical and practical obstacles that pervade species distribution modelling. 
In particular, ecological niche models based on machine-learning algorithms for estimating the support of a statistical distribution provide a promising new approach to identifying species' potential distributions and to projecting changes in these distributions as a result of climate change, land use and landscape alteration.
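Estimating the "support" of the environmental distribution from presence-only records, as described above, can be sketched with a one-class SVM. This is a hedged illustration, not the paper's exact procedure: it assumes scikit-learn's OneClassSVM, and the presence records and environmental covariates are synthetic.

```python
# Hedged sketch: presence-only niche modelling via support estimation
# with a one-class SVM (all data values invented).
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)

# Presence records: environmental covariates (say, mean temperature in C
# and annual rainfall in mm) clustered around a niche optimum.
presence = rng.normal(loc=[15.0, 900.0], scale=[2.0, 100.0], size=(300, 2))
mu, sd = presence.mean(axis=0), presence.std(axis=0)
presence_scaled = (presence - mu) / sd

# nu bounds the fraction of presence points allowed outside the support.
niche = OneClassSVM(kernel="rbf", nu=0.1, gamma=0.5).fit(presence_scaled)

# Query two sites: one near the niche optimum, one in a very different
# environment. predict() returns +1 inside the support, -1 outside.
inside = (np.array([[15.0, 900.0]]) - mu) / sd
outside = (np.array([[30.0, 100.0]]) - mu) / sd
print(niche.predict(inside)[0], niche.predict(outside)[0])
```

Applied over a grid of environmental covariates, the +1/-1 labels would trace out a map of the estimated niche.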

Relevance:

30.00%

Publisher:

Abstract:

Machine Learning for geospatial data: algorithms, software tools and case studies

Abstract: The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence, mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions to classification, regression and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical space and additional relevant spatially referenced features. They are well suited to being implemented as predictive engines in decision support systems, for the purposes of environmental data mining including pattern recognition, modeling and prediction, as well as automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail, from a theoretical description of the concepts to the software implementation. The main algorithms and models considered are the following: the multi-layer perceptron (MLP, a workhorse of machine learning), general regression neural networks (GRNN), probabilistic neural networks (PNN), self-organising (Kohonen) maps (SOM), Gaussian mixture models (GMM), radial basis function networks (RBF), and mixture density networks (MDN). This set of models covers machine learning tasks such as classification, regression and density estimation. Exploratory data analysis (EDA) is an initial and very important part of any data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, experimental variography, and machine learning. Experimental variography, which studies the relations between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations and helps to reveal the presence of spatial patterns, at least those describable by two-point statistics. A machine learning approach to ESDA is presented by applying the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a topical problem, namely the automatic mapping of geospatial data. General regression neural networks (GRNN) are proposed as an efficient model to solve this task. The performance of the GRNN model is demonstrated on Spatial Interpolation Comparison (SIC) 2004 data, where the GRNN model significantly outperformed all other approaches, especially in the case of emergency conditions. The thesis consists of four chapters with the following structure: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office. The Machine Learning Office tools were developed during the last 15 years and have been used both in many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and in fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, the classification of soil types and hydro-geological units, decision-oriented mapping with uncertainties, and natural hazard (landslides, avalanches) assessment and susceptibility mapping. Complementary tools useful for exploratory data analysis and visualisation were developed as well. The software is user-friendly and easy to use.
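A GRNN of the kind proposed above for automatic mapping is, at its core, Gaussian-kernel-weighted averaging of the training targets (Nadaraya-Watson regression). The following is a minimal NumPy sketch on invented 1-D data, not the Machine Learning Office implementation:

```python
# Minimal GRNN sketch: predictions are Gaussian-kernel weighted
# averages of the training targets.
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma):
    """GRNN prediction: kernel-weighted mean of training targets."""
    # Squared Euclidean distances between query and training points.
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))   # Gaussian kernel weights
    return (w @ y_train) / w.sum(axis=1)   # weighted average per query

# Toy check: recover a smooth function from noisy samples.
rng = np.random.default_rng(2)
X = rng.uniform(0, 1, (100, 1))
y = np.sin(2 * np.pi * X[:, 0]) + rng.normal(0, 0.1, 100)
# Query at x = 0.25, where the clean function is close to 1.
pred = grnn_predict(X, y, np.array([[0.25]]), sigma=0.05)
```

The single bandwidth parameter sigma is what makes the GRNN attractive for automatic mapping: it can be tuned by cross-validation without any architecture search.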

Relevance:

30.00%

Publisher:

Abstract:

The Iowa Department of Education (DE) was appropriated $1.45 million for the development and implementation of a statewide work-based learning intermediary network. This funding was awarded on a competitive basis to 15 regional intermediary networks. Funds received by the regional intermediary networks from the state through this grant are to be used to develop and expand work-based learning opportunities within each region. A match of resources equal to 25 percent was a requirement of the funding. This match could include private donations, in-kind contributions, or public moneys. Funds may be used to support personnel responsible for the implementation of the intermediary network program components.

Relevance:

30.00%

Publisher:

Abstract:

In order to understand the development of non-genetically encoded actions during an animal's lifespan, it is necessary to analyze the dynamics and evolution of the learning rules producing behavior. Owing to the intrinsic stochastic and frequency-dependent nature of learning dynamics, these rules are often studied in evolutionary biology via agent-based computer simulations. In this paper, we show that stochastic approximation theory can help to qualitatively understand learning dynamics and formulate analytical models for the evolution of learning rules. We consider a population of individuals that interact repeatedly during their lifespan, where the stage game faced by the individuals fluctuates according to an environmental stochastic process. Individuals adjust their behavioral actions according to learning rules belonging to the class of experience-weighted attraction learning mechanisms, which includes standard reinforcement and Bayesian learning as special cases. We use stochastic approximation theory to derive differential equations governing action play probabilities, which turn out to have the qualitative features of mutator-selection equations. We then perform agent-based simulations to find the conditions where the deterministic approximation is closest to the original stochastic learning process for standard 2-action, 2-player fluctuating games, where interaction between learning rules and preference reversal may occur. Finally, we analyze a simplified model for the evolution of learning in a producer-scrounger game, which shows that the exploration rate can interact in a non-intuitive way with other features of co-evolving learning rules. Overall, our analyses illustrate the usefulness of applying stochastic approximation theory to the study of animal learning.
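The experience-weighted attraction family mentioned above updates per-action "attractions" and maps them to choice probabilities. Below is a hedged sketch of its standard-reinforcement special case with logit (softmax) choice; the payoffs, decay and intensity parameters are invented for illustration, not taken from the paper:

```python
# Hedged sketch: reinforcement learning as a special case of
# experience-weighted attraction (EWA) learning, with logit choice.
import math
import random

def logit_probs(attractions, lam):
    """Softmax (logit) choice probabilities from attractions."""
    m = max(attractions)  # subtract max for numerical stability
    exps = [math.exp(lam * (a - m)) for a in attractions]
    s = sum(exps)
    return [e / s for e in exps]

def reinforce(attractions, chosen, payoff, phi):
    """Decay all attractions by phi; credit the chosen action's payoff."""
    new = [phi * a for a in attractions]
    new[chosen] += payoff
    return new

random.seed(0)
A = [0.0, 0.0]        # attractions for a 2-action stage game
payoffs = [1.0, 0.2]  # action 0 yields the higher payoff (invented)
for _ in range(1000):
    p = logit_probs(A, lam=0.5)
    act = 0 if random.random() < p[0] else 1
    A = reinforce(A, act, payoffs[act], phi=0.9)

# After many rounds, play should concentrate on the better action.
p_final = logit_probs(A, lam=0.5)
```

The stochastic-approximation view in the paper studies exactly such sample paths: the expected change in the choice probabilities per step yields the deterministic differential equations.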

Relevance:

30.00%

Publisher:

Abstract:

Virulent infections are expected to impair learning ability, either as a direct consequence of stressed physiological state or as an adaptive response that minimizes diversion of energy from immune defense. This prediction has been well supported for mammals and bees. Here, we report an opposite result in Drosophila melanogaster. Using an odor-mechanical shock conditioning paradigm, we found that intestinal infection with bacterial pathogens Pseudomonas entomophila or Erwinia c. carotovora improved flies' learning performance after a 1h retention interval. Infection with P. entomophila (but not E. c. carotovora) also improved learning performance after 5 min retention. No effect on learning performance was detected for intestinal infections with an avirulent GacA mutant of P. entomophila or for virulent systemic (hemocoel) infection with E. c. carotovora. Assays of unconditioned responses to odorants and shock do not support a major role for changes in general responsiveness to stimuli in explaining the changes in learning performance, although differences in their specific salience for learning cannot be excluded. Our results demonstrate that the effects of pathogens on learning performance in insects are less predictable than suggested by previous studies, and support the notion that immune stress can sometimes boost cognitive abilities.

Relevance:

30.00%

Publisher:

Abstract:

Spatial data analysis, mapping and visualization are of great importance in various fields: environment, pollution, natural hazards and risks, epidemiology, spatial econometrics, etc. A basic task of spatial mapping is to make predictions based on some empirical data (measurements). A number of state-of-the-art methods can be used for the task: deterministic interpolations; methods of geostatistics, such as the family of kriging estimators (Deutsch and Journel, 1997); machine learning algorithms such as artificial neural networks (ANN) of different architectures; hybrid ANN-geostatistics models (Kanevski and Maignan, 2004; Kanevski et al., 1996); etc. All the methods mentioned above can be used for solving the problem of spatial data mapping. Environmental empirical data are always contaminated/corrupted by noise, and often by noise of unknown nature. That is one of the reasons why deterministic models can be inconsistent, since they treat the measurements as values of some unknown function that should be interpolated. Kriging estimators treat the measurements as the realization of some spatial random process. To obtain an estimation with kriging, one has to model the spatial structure of the data: the spatial correlation function or (semi-)variogram. This task can be complicated if there is not a sufficient number of measurements, and the variogram is sensitive to outliers and extremes. ANN is a powerful tool, but it also suffers from a number of drawbacks. ANNs of a special type, multilayer perceptrons, are often used as a detrending tool in hybrid (ANN + geostatistics) models (Kanevski and Maignan, 2004). Therefore, the development and adaptation of a method that is nonlinear and robust to noise in the measurements, can deal with small empirical datasets and has a solid mathematical background is of great importance. The present paper deals with such a model, based on Statistical Learning Theory (SLT): Support Vector Regression. SLT is a general mathematical framework devoted to the problem of estimating dependencies from empirical data (Hastie et al., 2004; Vapnik, 1998). SLT models for classification, Support Vector Machines, have shown good results on different machine learning tasks. The results of SVM classification of spatial data are also promising (Kanevski et al., 2002). The properties of SVM for regression, Support Vector Regression (SVR), are less studied. First results of the application of SVR for spatial mapping of physical quantities were obtained by the authors for the mapping of medium porosity (Kanevski et al., 1999) and for the mapping of radioactively contaminated territories (Kanevski and Canu, 2000). The present paper is devoted to further understanding the properties of the SVR model for spatial data analysis and mapping. A detailed description of SVR theory can be found in (Cristianini and Shawe-Taylor, 2000; Smola, 1996), and the basic equations for nonlinear modeling are given in Section 2. Section 3 discusses the application of SVR for spatial data mapping on a real case study: soil pollution by the Cs137 radionuclide. Section 4 discusses the properties of the model applied to noisy data or data with outliers.
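The robustness to noise and outliers discussed above comes from SVR's epsilon-insensitive loss and the bound C on each sample's influence. The snippet below is a small hedged illustration on synthetic 1-D "spatial" data with one corrupted measurement, assuming scikit-learn's SVR rather than the paper's own implementation:

```python
# Hedged sketch: SVR's bounded per-sample influence limits the damage
# a single gross outlier can do to the fitted field (synthetic data).
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(3)
X = np.linspace(0, 1, 60)[:, None]
y = np.sin(2 * np.pi * X[:, 0])            # clean underlying "field"
y_noisy = y + rng.normal(0, 0.05, 60)      # measurement noise
y_noisy[30] += 5.0                         # one gross outlier

# C bounds each sample's dual coefficient, so the outlier's pull is capped;
# epsilon defines the insensitive tube around the fit.
svr = SVR(kernel="rbf", C=0.5, epsilon=0.1).fit(X, y_noisy)
pred = svr.predict(X)

# Worst-case deviation of the fit from the clean field.
err = float(np.max(np.abs(pred - y)))
```

With a squared-loss interpolator the single corrupted point would drag the surface far more strongly; here its residual simply stays large while the fit tracks the clean field.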

Relevance:

30.00%

Publisher:

Abstract:

The core subject "Evaluación Psicológica" (Psychological Assessment) of the Psychology degree and of the undergraduate programme "Desarrollo humano en la sociedad de la información" (Human Development in the Information Society) at the Universidad de Girona comprises 12 credits under the Ley Orgánica de Universidades. Until the 2004-05 academic year, the student's non-contact work consisted of carrying out a psychological assessment that was submitted in writing at the end of the course, for which the student received a grade and, on request, a review. On the path towards the European Higher Education Area, this subject comprises 9 credits, equivalent to a total of 255 hours of contact and non-contact student work. In the 2005-06 and 2006-07 academic years, a work guide was created to manage the non-contact activity, with the aim of achieving learning at the level of application and problem solving/critical thinking (Bloom, 1975), following the recommendations of the Agency for the Quality of the University System of Catalonia (2005). The guide includes: the learning objectives, the assessment criteria, the description of the activities, the weekly schedule of assignments for the whole course, the specification of the scheduled tutorials for reviewing the various steps of the psychological assessment process, and the use of the forum for learning about, analysing and constructively critiquing the assessments carried out by classmates.