1000 results for "Learning environment"
Abstract:
This research considers the problem of spatial data classification using machine learning algorithms: probabilistic neural networks (PNN) and support vector machines (SVM). A simple k-nearest neighbor algorithm is used as a benchmark model. The PNN is a neural-network reformulation of well-known nonparametric principles of probability density modeling, using a kernel density estimator together with Bayesian optimal or maximum a posteriori decision rules. PNNs are well suited to problems where not only predictions but also quantification of accuracy and integration of prior information are necessary. An important property of PNNs is that they can easily be used in decision support systems dealing with automatic classification problems. The support vector machine is an implementation of the principles of statistical learning theory for classification tasks. Recently, SVMs have been successfully applied to a range of environmental topics: classification of soil types and hydro-geological units, optimization of monitoring networks, and susceptibility mapping of natural hazards. In the present paper, both simulated and real-data case studies (low- and high-dimensional) are considered. The main attention is paid to the detection and learning of spatial patterns by the applied algorithms.
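The maximum a posteriori decision rule underlying a PNN can be sketched in a few lines: estimate each class-conditional density with a Gaussian kernel density estimator, weight by the class prior, and pick the class with the highest score. The bandwidth, the synthetic two-cluster data and the default priors below are illustrative assumptions, not values from the study.

```python
# A minimal sketch of the PNN decision rule: per-class Parzen (Gaussian
# kernel) density estimates combined with priors in a MAP decision.
import numpy as np

def pnn_classify(X_train, y_train, x, bandwidth=0.5, priors=None):
    """Classify point x with a Gaussian-kernel probabilistic neural network."""
    classes = np.unique(y_train)
    if priors is None:  # default to empirical class frequencies
        priors = {c: np.mean(y_train == c) for c in classes}
    d = X_train.shape[1]
    scores = {}
    for c in classes:
        Xc = X_train[y_train == c]
        # Parzen-window estimate of p(x | class c)
        sq_dist = np.sum((Xc - x) ** 2, axis=1)
        kernel = np.exp(-sq_dist / (2.0 * bandwidth ** 2))
        density = kernel.mean() / ((2 * np.pi * bandwidth ** 2) ** (d / 2))
        scores[c] = priors[c] * density  # MAP score
    return max(scores, key=scores.get)

# Two well-separated synthetic 2-D clusters
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
print(pnn_classify(X, y, np.array([2.9, 3.1])))  # a point near cluster 1
```

Because the per-class densities are explicit, the same scores also quantify prediction confidence, which is the property the abstract highlights for decision support.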
Abstract:
While the models and notions of performance usually analysed in research are almost always those of management and senior executives, this article focuses instead on the view of employees, in this case Swiss public agents. In light of the reforms undertaken in the Swiss public administration, it presents, on the one hand, public agents' general conception of public service performance and, on the other, their perception of the evolution of organizational and individual performance, as well as of the performance related to the services delivered to beneficiaries. Perceived performance is analysed by highlighting the existence of several "worlds" of reference to which public agents refer when they talk about their work. The results reveal the prominence of the industrial world, ahead of the civic world, which itself relegates the domestic and market worlds to third and fourth place respectively. Particular attention is also paid to the various behaviours that public agents adopt as a result of public service reforms, and to the links that may exist between these behaviours and the different worlds of reference. A marriage between two potentially contradictory worlds seems to be taking shape in the minds of public agents. It could constitute a response to a form of identity crisis, highlighted by many specialists, caused by the contradictory injunctions to which public agents are currently subject.
Abstract:
This paper reports on the purpose, design, methodology and target audience of E-learning courses in forensic interpretation offered by the authors since 2010, including practical experience gained throughout the implementation period of this project. This initiative was motivated by the fact that reporting the results of forensic examinations in a logically correct and scientifically rigorous way is a daily challenge for any forensic practitioner. Indeed, the interpretation of raw data and the communication of findings in both written and oral statements are topics where knowledge and applied skills are needed. Although most forensic scientists hold educational records in traditional sciences, only a few have actually followed full courses focused on interpretation issues. Such courses should include foundational principles and methodology, including elements of forensic statistics, for the evaluation of forensic data in a way that is tailored to meet the needs of the criminal justice system. To help bridge this gap, the authors' initiative seeks to offer educational opportunities that allow practitioners to acquire knowledge and competence in current approaches to the evaluation and interpretation of forensic findings. These cover, among other aspects, probabilistic reasoning (including Bayesian networks and other methods of forensic statistics, tools and software), case pre-assessment, skills in the oral and written communication of uncertainty, and the development of the independence and self-confidence needed to solve practical inference problems. E-learning was chosen as the general format because it helps to form a trans-institutional online community of practitioners from varying forensic disciplines and levels of field experience, such as reporting officers, (chief) scientists and forensic coordinators, but also lawyers, who can all interact directly from their personal workplaces regardless of distances, travel expenses or time schedules.
In the authors' experience, the proposed learning initiative supports participants in developing their expertise and skills in forensic interpretation, but also offers an opportunity for the associated institutions and the forensic community to reinforce the development of a harmonized view with regard to interpretation across forensic disciplines, laboratories and judicial systems.
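The probabilistic reasoning taught in such courses commonly rests on the likelihood-ratio framework: the value of an item of evidence is the ratio of its probability under the two competing propositions, and Bayes' rule in odds form shows how it updates prior odds. The probabilities below are purely illustrative, not taken from any real case.

```python
# A minimal sketch of the likelihood-ratio framework for evaluating
# forensic findings. All numbers are hypothetical teaching values.

def likelihood_ratio(p_e_given_hp, p_e_given_hd):
    """Value of the evidence E: P(E | Hp) / P(E | Hd)."""
    return p_e_given_hp / p_e_given_hd

def posterior_odds(prior_odds, lr):
    """Bayes' rule in odds form: posterior odds = prior odds x LR."""
    return prior_odds * lr

lr = likelihood_ratio(0.9, 0.01)  # E is 90x more probable under Hp than Hd
print(posterior_odds(0.1, lr))    # prior odds of 1:10 become about 9:1
```

Keeping the update in odds form makes explicit that the scientist reports the likelihood ratio, while the prior and posterior odds belong to the court.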
Abstract:
Games are powerful and engaging. On average, one billion people spend at least one hour a day playing computer and video games, and this is even more true of the younger generations. Our students have become the "digital natives", the "gamers", the "virtual generation". Research shows that those who are most at risk of failure in the traditional classroom setting also spend more time than their counterparts playing video games. They might thrive, given a different learning environment. Educators have a responsibility to align their teaching style with the learning styles of these younger generations. However, many academics resist the use of computer-assisted learning that was "created elsewhere". This can be extrapolated to game-based teaching: even if educational games were more widely authored, their adoption would still be limited to the educators who feel a match between the authored games and their own beliefs and practices. Consequently, game-based teaching would be much more widespread if teachers could develop their own games, or at least customize them. Yet the development and customization of teaching games are complex and costly. This research uses a design science methodology, leveraging gamification techniques, active and cooperative learning theories, as well as immersive sandbox 3D virtual worlds, to develop a method that allows management instructors to transform any off-the-shelf case study into an engaging, collaborative, gamified experience. The method is applied to marketing case studies and uses the sandbox virtual world of Second Life.
Abstract:
Introduction: Evidence-based medicine (EBM) improves the quality of health care. Courses on how to teach EBM in practice are available, but knowledge does not automatically translate into its application in teaching. We aimed to identify and compare barriers and facilitators for teaching EBM in clinical practice in various European countries. Methods: A questionnaire was constructed listing potential barriers and facilitators for EBM teaching in clinical practice. Answers were reported on a 7-point Likert scale ranging from "not a barrier at all" to "an insurmountable barrier". Results: The questionnaire was completed by 120 clinical EBM teachers from 11 countries. Lack of time was the strongest barrier for teaching EBM in practice (median 5). Moderate barriers were the lack of requirements for EBM skills and a pyramidal hierarchy in the health care management structure (median 4). In Germany, Hungary and Poland, reading and understanding articles in English was a greater barrier than in the other countries. Conclusion: Incorporating the teaching of EBM into practice faces several implementation barriers. Teaching EBM in clinical settings is most successful where EBM principles are culturally embedded and form part and parcel of everyday clinical decisions and medical practice.
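The summary statistics reported above are simply per-barrier medians of Likert responses, which can be reproduced on toy data. The response lists below are hypothetical 7-point scores, not the study's data.

```python
# Illustrative computation of median barrier ratings on a 7-point Likert
# scale (1 = not a barrier at all, 7 = an insurmountable barrier).
# All responses are invented for the example.
import statistics

responses = {
    "lack of time": [5, 6, 5, 4, 7, 5],
    "no EBM skill requirements": [4, 3, 4, 5, 4, 4],
}
medians = {barrier: statistics.median(scores)
           for barrier, scores in responses.items()}
print(medians)  # the strongest barrier has the highest median
```

Using the median rather than the mean is the usual choice for ordinal Likert data, since the scale steps are not necessarily equidistant.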
Abstract:
The present research deals with an application of artificial neural networks to multitask learning from spatial environmental data. The real case study (sediment contamination of Lake Geneva) comprises 8 pollutants. There are different relationships between these variables, ranging from linear correlations to strong nonlinear dependencies. The main idea is to construct subsets of pollutants that can be efficiently modeled together within the multitask framework. The proposed two-step approach is based on: 1) a criterion of nonlinear predictability of each variable "k", obtained by analyzing all possible models composed of the remaining variables, using a General Regression Neural Network (GRNN) as the model; 2) multitask learning of the best model using a multilayer perceptron, followed by spatial predictions. The results of the study are analyzed using both machine learning and geostatistical tools.
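Step 1 of the approach can be sketched as follows: a GRNN is essentially Nadaraya-Watson kernel regression, and the nonlinear predictability of a variable can be scored with its leave-one-out R² when predicted from the remaining variables. The synthetic data, the bandwidth and the R² criterion below are illustrative stand-ins, not the pollutant data or the exact criterion of the study.

```python
# A minimal sketch of scoring nonlinear predictability of one variable
# from the others with a GRNN (Nadaraya-Watson kernel regression).
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.3):
    """GRNN prediction: Gaussian-kernel-weighted average of training targets."""
    sq_dist = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-sq_dist / (2 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

def loo_predictability(X, y, sigma=0.3):
    """Leave-one-out R^2: how well y is explained by the other variables."""
    preds = np.empty_like(y)
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        preds[i] = grnn_predict(X[mask], y[mask], X[i:i + 1], sigma)[0]
    ss_res = ((y - preds) ** 2).sum()
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1 - ss_res / ss_tot

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (80, 2))          # two "pollutants" as predictors
y = np.sin(3 * X[:, 0]) + 0.1 * X[:, 1]  # a third, nonlinearly related one
print(loo_predictability(X, y))          # close to 1: y is predictable
```

Variables that score well together in this sense are candidates for joint modeling in the multitask step, while a variable with near-zero predictability carries information the others do not.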
Abstract:
Fragile X syndrome (FXS) is characterized by intellectual disability and autistic traits, and results from the silencing of the FMR1 gene coding for a protein implicated in the regulation of protein synthesis at synapses. The lack of functional Fragile X mental retardation protein has been proposed to result in excessive signaling of synaptic metabotropic glutamate receptors, leading to alterations of synapse maturation and plasticity. It remains unclear, however, how mechanisms of activity-dependent spine dynamics are affected in Fmr1 knockout (Fmr1-KO) mice and whether they can be reversed. Here we used a repetitive imaging approach in hippocampal slice cultures to investigate properties of structural plasticity and their modulation by signaling pathways. We found that basal spine turnover was significantly reduced in Fmr1-KO mice, but markedly enhanced by activity. Additionally, activity-mediated spine stabilization was lost in Fmr1-KO mice. Application of the metabotropic glutamate receptor antagonist α-methyl-4-carboxyphenylglycine (MCPG) enhanced basal turnover and improved spine stability, but failed to reinstate activity-mediated spine stabilization. In contrast, enhancing phosphoinositide-3 kinase (PI3K) signaling, a pathway implicated in various aspects of synaptic plasticity, reversed the deficits in both basal turnover and activity-mediated spine stabilization. It also restored defective long-term potentiation mechanisms in slices and improved reversal learning in Fmr1-KO mice. These results suggest that modulation of PI3K signaling could contribute to improving the cognitive deficits associated with FXS.
Abstract:
Advanced kernel methods for remote sensing image classification. Devis Tuia, Institut de Géomatique et d'Analyse du Risque, September 2009.
The technical developments of recent years have brought the quantity and quality of digital information to an unprecedented level, as enormous archives of satellite images are now available to users. However, even if these advances open more and more possibilities for the use of digital imagery, they also raise several problems of storage and processing. The latter is considered in this Thesis: the processing of very high spatial and spectral resolution images is treated with approaches based on data-driven algorithms relying on kernel methods. In particular, the problem of image classification, i.e. the categorization of an image's pixels into a reduced number of classes reflecting spectral and contextual properties, is studied through the different models presented. Emphasis is placed on algorithmic efficiency and on the simplicity of the proposed approaches, to avoid overly complex models that would not be adopted by users.
The major challenge of the Thesis is to remain close to concrete remote sensing problems without losing the methodological interest from the machine learning viewpoint: in this sense, this work aims at building a bridge between the machine learning and remote sensing communities, and all the models proposed have been developed keeping in mind the need for such a synergy. Four models are proposed. First, an adaptive model learning the relevant image features is proposed to solve the problem of high dimensionality and collinearity of the image features; by jointly optimizing a ranking of the variables (the spectral bands) and the base model, it automatically provides an accurate classifier together with a ranking of the relevance of the single features. The scarcity and unreliability of labeled information are the common root of the second and third models: when confronted with such problems, the user can either construct the labeled set iteratively by direct interaction with the machine, or use the unlabeled data to increase the robustness and quality of the data description. Both solutions have been explored, resulting in two methodological contributions based respectively on active learning and semi-supervised learning. Finally, the more theoretical issue of structured outputs, so far never considered in remote sensing, is addressed in the last model, which, by integrating output similarity into the model, opens new challenges and opportunities for remote sensing image processing.
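The active-learning idea behind the second model, building the labeled set by direct interaction between user and machine, can be illustrated with a minimal uncertainty-sampling loop. The nearest-centroid classifier, the synthetic two-cluster "image" and the number of query rounds below are simplifying stand-ins chosen for brevity, not the kernel machinery of the Thesis.

```python
# A minimal sketch of active learning by uncertainty sampling: in each
# round the machine asks the "user" (an oracle here) to label the pixel
# it is least certain about.
import numpy as np

def uncertainty(X, X_labeled, y_labeled):
    """Margin between distances to the two class centroids (small = uncertain)."""
    c0 = X_labeled[y_labeled == 0].mean(axis=0)
    c1 = X_labeled[y_labeled == 1].mean(axis=0)
    d0 = np.linalg.norm(X - c0, axis=1)
    d1 = np.linalg.norm(X - c1, axis=1)
    return np.abs(d0 - d1)

rng = np.random.default_rng(2)
pool = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
true_labels = np.array([0] * 50 + [1] * 50)  # oracle standing in for the user
labeled_idx = [0, 50]                        # one seed label per class

for _ in range(5):                           # five interaction rounds
    idx = np.array(labeled_idx)
    u = uncertainty(pool, pool[idx], true_labels[idx])
    u[idx] = np.inf                          # never re-query a labeled pixel
    query = int(np.argmin(u))                # most ambiguous pixel
    labeled_idx.append(query)                # the user supplies its label

print(len(labeled_idx))  # 7 labeled pixels after 5 queries
```

Because queries concentrate near the decision boundary, a small number of interactions can yield a far more informative training set than the same number of randomly labeled pixels, which is the motivation given in the abstract.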