624 results for Learning Approach
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Conventional teaching practices often struggle to keep students motivated and engaged. Video games, however, are very successful at sustaining high levels of motivation and engagement through a set of tasks for hours without apparent loss of focus. In addition, gamers solve complex problems within a gaming environment without feeling the fatigue or frustration they would typically experience with a comparable learning task. Based on this notion, the academic community is keen to explore methods that can deliver deep learner engagement and has shown increased interest in adopting gamification – the integration of gaming elements, mechanics, and frameworks into non-game situations and scenarios – as a means to increase student engagement and improve information retention. Its effectiveness when applied to education has been debatable, though, as attempts have generally been restricted to one-dimensional approaches such as transposing a trivial reward system onto existing teaching materials and/or assessments. Nevertheless, a gamified, multi-dimensional, problem-based learning approach can yield improved results even when applied to a very complex and traditionally dry task like the teaching of computer programming, as shown in this paper. The quasi-experimental study presented here used a combination of instructor feedback, a real-time sequence of scored quizzes, and live coding to deliver a fully interactive learning experience. More specifically, the “Kahoot!” Classroom Response System (CRS), the classroom version of the TV game show “Who Wants To Be A Millionaire?”, and Codecademy’s interactive platform formed the basis for a learning model applied to an entry-level Python programming course. Students were thus able to experience multiple interlocking methods similar to those commonly found in a top-quality game experience. To assess gamification’s impact on learning, empirical data from the gamified group were compared to those from a control group that was taught through a traditional learning approach, similar to the one used with previous cohorts. Despite this being a relatively small-scale study, the results and findings for a number of key metrics, including attendance, downloading of course material, and final grades, were encouraging and indicate that the gamified approach was motivating and enriching for both students and instructors.
Abstract:
Spam emails impose extremely heavy annual costs, in terms of time, storage space, and money, on private users and companies. To fight the spam problem effectively, it is not enough to stop spam messages from being delivered to the user's inbox. It is necessary either to try to find and prosecute the spammers, who usually hide behind complex networks of infected devices, or to analyze spammer behaviour in order to find appropriate defence strategies. Such a task is difficult, however, because of camouflage techniques, which require a manual analysis of correlated spam to find the spammers. To facilitate such an analysis, which must be performed on large amounts of unclassified emails, we propose a categorical clustering methodology, named CCTree, that divides a large volume of spam into campaigns based on structural similarity. We show the effectiveness and efficiency of our proposed clustering algorithm through several experiments. Next, a self-learning approach is proposed to label spam campaigns according to the spammers' goal, for example phishing. The labelled spam campaigns are used to train a classifier, which can then be applied to classify new spam emails. Moreover, the labelled campaigns, together with a set of four other ranking criteria, are ordered according to investigators' priorities. Finally, a semiring-based structure is proposed as an abstract representation of CCTree. The abstract schema of CCTree, named CCTree term, is applied to formalize the parallelization of CCTree. Through a number of mathematical analyses and experimental results, we show the efficiency and effectiveness of the proposed framework.
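To make the clustering step concrete, here is a minimal Python sketch of CCTree-style divisive categorical clustering: a node is split on its highest-entropy attribute until it is sufficiently pure or too small, and the leaves are the spam campaigns. The purity measure, thresholds and structural features below are illustrative assumptions, not values taken from the thesis.

```python
# A minimal sketch of CCTree-style divisive categorical clustering.
from collections import Counter
from math import log2

def entropy(values):
    """Shannon entropy of a list of categorical values."""
    n = len(values)
    return -sum((c / n) * log2(c / n) for c in Counter(values).values())

def cctree(records, attrs, purity=0.5, min_size=5):
    """Recursively split records; each returned leaf is one spam campaign."""
    # Node purity here: mean attribute entropy (an assumed concrete choice).
    if len(records) <= min_size or not attrs:
        return [records]
    ent = {a: entropy([r[a] for r in records]) for a in attrs}
    if sum(ent.values()) / len(ent) <= purity:
        return [records]
    split_attr = max(ent, key=ent.get)       # most heterogeneous attribute
    groups = {}
    for r in records:
        groups.setdefault(r[split_attr], []).append(r)
    rest = [a for a in attrs if a != split_attr]
    return [leaf for g in groups.values()
            for leaf in cctree(g, rest, purity, min_size)]

# Hypothetical structural features of spam messages:
spams = [{"lang": "en", "has_link": True,  "subject_len": "short"},
         {"lang": "en", "has_link": True,  "subject_len": "short"},
         {"lang": "fr", "has_link": False, "subject_len": "long"}] * 4
campaigns = cctree(spams, ["lang", "has_link", "subject_len"])
print(len(campaigns), "campaigns")
```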
Abstract:
The Hamburg school pilot project EARA ("Erprobung neu strukturierter Ausbildungsformen im Rahmen des Ausbildungskonsenses 2007-2010", i.e. testing newly structured forms of vocational training within the framework of the 2007-2010 training consensus) was evaluated and scientifically supported by a consortium of the Universität Hamburg. The evaluation of the pilot was divided into a summative and a formative part (cf. EARA 2012, pp. 11 f.). Within the formative evaluation, intensive curriculum development work was carried out, characterized by close cooperation between the scientific support team and the project schools. This contribution presents the theoretical foundations of the joint curriculum development in the EARA pilot. These were comprehensively implemented in cooperation with the vocational school for office and personnel management in Bergedorf (berufliche Schule für Büro und Personalmanagement Bergedorf) for the occupational profile of office communication clerks (Kaufleute für Bürokommunikation). The article first presents central challenges of school-based curriculum development in the context of the learning-field approach (Lernfeldansatz) and then, in the second section, describes the solutions applied in the EARA pilot. The specific challenges were (1) a changed curricular development logic, (2) the necessary curricular reconstruction of learning fields, (3) the difficulties of cross-process competence development, and (4) the required linking of process and system perspectives in the learning situations. The proposed solutions to these challenges culminate in the concept of the Hamburg competence matrix (Hamburger Kompetenzmatrix) and are illustrated using results from the EARA project. The article closes with a review of achievements and desiderata.
Abstract:
The overwhelming amount and unprecedented speed of publication in the biomedical domain make it difficult for life science researchers to acquire and maintain a broad view of the field and gather all information relevant to their research. In response to this problem, the BioNLP (Biomedical Natural Language Processing) community of researchers has emerged and strives to assist life science researchers by developing modern natural language processing (NLP), information extraction (IE) and information retrieval (IR) methods that can be applied at large scale to scan the whole publicly available biomedical literature and extract and aggregate the information found within, while automatically normalizing the variability of natural language statements. Among the different tasks, biomedical event extraction has recently received much attention within the BioNLP community. Biomedical event extraction is the identification of biological processes and interactions described in biomedical literature, and their representation as a set of recursive event structures. The 2009-2013 series of BioNLP Shared Tasks on Event Extraction has given rise to a number of event extraction systems, several of which have been applied at a large scale (the full set of PubMed abstracts and PubMed Central Open Access full-text articles), leading to the creation of massive biomedical event databases, each containing millions of events. Since top-ranking event extraction systems are based on a machine-learning approach and are trained on narrow-domain, carefully selected Shared Task training data, their performance drops when faced with the topically highly varied PubMed and PubMed Central documents. Specifically, false-positive predictions by these systems lead to the generation of incorrect biomolecular events, which end-users then spot. This thesis proposes a novel post-processing approach, using a combination of supervised and unsupervised learning techniques, that can automatically identify and filter out a considerable proportion of incorrect events from large-scale event databases, thus increasing the general credibility of those databases. The second part of this thesis is dedicated to a system we developed for hypothesis generation from large-scale event databases, which is able to discover novel biomolecular interactions among genes/gene products. We cast the hypothesis generation problem as supervised network topology prediction, i.e., predicting new edges in the network, as well as types and directions for these edges, using a set of features that can be extracted from large biomedical event networks. Routine machine learning evaluation results, as well as manual evaluation results, suggest that the problem is indeed learnable. This work won the Best Paper Award at the 5th International Symposium on Languages in Biology and Medicine (LBM 2013).
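As a rough illustration of the post-processing idea, the sketch below scores extracted events with a binary classifier and discards likely false positives. The feature set (extractor confidence, trigger frequency, argument count) and the 0.5 threshold are invented for illustration; the thesis's actual combination of supervised and unsupervised techniques is richer.

```python
# A minimal sketch: filter likely false-positive events with a classifier.
from sklearn.linear_model import LogisticRegression

# Hypothetical per-event features: [extractor confidence, trigger word
# frequency, number of recursive arguments]; label 1 = correct event.
X_train = [[0.9, 120, 1], [0.2, 3, 4], [0.8, 80, 2], [0.1, 1, 5]]
y_train = [1, 0, 1, 0]

clf = LogisticRegression().fit(X_train, y_train)

events = [[0.85, 95, 1], [0.15, 2, 6]]
keep = [e for e, p in zip(events, clf.predict_proba(events)[:, 1]) if p >= 0.5]
print(f"kept {len(keep)} of {len(events)} events")
```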
Abstract:
GCSE English resit students at Loughborough College learn difficult vocabulary by testing themselves on their mobile devices with the language-learning app Memrise. Competing with each other to earn badges for each completed test motivates students to tackle less appealing aspects of the curriculum. This innovative assessment-for-learning approach helps staff track individual learners’ progress so they can provide more support to those who need it.
Abstract:
Recent years have seen an astronomical rise in SQL Injection Attacks (SQLIAs) used to compromise the confidentiality, authentication and integrity of organisations’ databases. Intruders are becoming smarter at obfuscating web requests to evade detection, and this, combined with increasing volumes of web traffic from the Internet of Things (IoT), cloud-hosted and on-premise business applications, has made it evident that existing, mostly static signature-based approaches lack the ability to cope with novel signatures. A SQLIA detection and prevention solution can be achieved by exploring an alternative bio-inspired supervised learning approach that takes a labelled dataset of numerical attributes as input for classifying true positives and negatives. We present in this paper a Numerical Encoding to Tame SQLIA (NETSQLIA) that implements a proof of concept for the scalable numerical encoding of features into dataset attributes with a labelled class, obtained from deep analysis of web traffic. For the numerical attribute encoding, the model leverages a proxy to intercept and decrypt web traffic. The intercepted web requests are then assembled for front-end SQL parsing and pattern matching by applying a traditional Non-Deterministic Finite Automaton (NFA). This paper presents a technique for extracting numerical attributes of any size, primed as an input dataset to an Artificial Neural Network (ANN) and statistical Machine Learning (ML) algorithms, implemented using a Two-Class Averaged Perceptron (TCAP) and Two-Class Logistic Regression (TCLR) respectively. This methodology then forms the subject of an empirical evaluation of the model’s suitability for the accurate classification of both legitimate web requests and SQLIA payloads.
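The sketch below illustrates only the final classification stage, assuming web requests have already been encoded as numerical attributes (e.g., counts of quotes, SQL keywords and comment tokens) with a label of 1 for SQLIA and 0 for legitimate traffic. scikit-learn's logistic regression stands in for the TCLR arm; the paper's actual pipeline and features may differ.

```python
# A sketch of the classification stage over numerically encoded requests.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical encoded requests: [n_quotes, n_sql_keywords, n_comment_tokens]
X = [[0, 0, 0], [1, 0, 0], [4, 3, 1], [6, 5, 2], [0, 1, 0], [5, 4, 1]]
y = [0, 0, 1, 1, 0, 1]          # 1 = SQLIA payload, 0 = legitimate request

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.33, random_state=0, stratify=y)
model = LogisticRegression().fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te), zero_division=0))
```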
Abstract:
This version of the thesis has been stripped of the original composition elements, as these elements contain structural information that would make it possible to identify the internship that is the subject of this research. A more complete version is available online to members of the Université de Montréal community and can also be consulted in one of the UdeM libraries.
Abstract:
Improved clinical care for Bipolar Disorder (BD) relies on the identification of diagnostic markers that can reliably detect disease-related signals in clinically heterogeneous populations. At the very least, diagnostic markers should be able to differentiate patients with BD from healthy individuals and from individuals at familial risk for BD who either remain well or develop other psychopathology, most commonly Major Depressive Disorder (MDD). These issues are particularly pertinent to the development of translational applications of neuroimaging, as they represent challenges for which clinical observation alone is insufficient. We therefore applied pattern classification to task-based functional magnetic resonance imaging (fMRI) data from the n-back working memory task to test their predictive value in differentiating patients with BD (n=30) from healthy individuals (n=30) and from patients' relatives who were either diagnosed with MDD (n=30) or were free of any personal lifetime history of psychopathology (n=30). Diagnostic stability in these groups was confirmed with 4-year prospective follow-up. Task-based activation patterns from the fMRI data were analyzed with Gaussian Process Classifiers (GPC), a machine learning approach to detecting multivariate patterns in neuroimaging datasets. Consistent significant classification results were only obtained using data from the 3-back versus 0-back contrast. Using this contrast, patients with BD were correctly classified against unrelated healthy individuals with an accuracy of 83.5%, sensitivity of 84.6% and specificity of 92.3%. Classification accuracy, sensitivity and specificity when comparing patients with BD to their relatives with MDD were, respectively, 73.1%, 53.9% and 94.5%. Classification accuracy, sensitivity and specificity when comparing patients with BD to their healthy relatives were, respectively, 81.8%, 72.7% and 90.9%. We show that significant individual classification can be achieved using whole-brain pattern analysis of task-based working memory fMRI data. The high accuracy and specificity achieved by all three classifiers suggest that multivariate pattern recognition analyses can aid clinicians in the clinical care of BD in situations of true clinical uncertainty regarding diagnosis and prognosis.
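A minimal sketch of the pattern-classification step follows, assuming each subject's 3-back versus 0-back contrast map has been flattened into a feature vector. scikit-learn's Gaussian Process classifier stands in for the GPC implementation used in the study, and the random data are purely illustrative.

```python
# A sketch of GPC-based classification of contrast maps (illustrative data).
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
n_per_group, n_voxels = 30, 200          # 30 BD patients vs 30 controls
X = np.vstack([rng.normal(0.0, 1, (n_per_group, n_voxels)),   # controls
               rng.normal(0.3, 1, (n_per_group, n_voxels))])  # patients
y = np.array([0] * n_per_group + [1] * n_per_group)

# Leave-one-out cross-validation, as is common for small neuroimaging samples.
acc = cross_val_score(GaussianProcessClassifier(), X, y, cv=LeaveOneOut())
print(f"leave-one-out accuracy: {acc.mean():.2f}")
```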
Abstract:
Entrepreneurship education has emerged as a popular research domain in academic fields, given its aim of enhancing and developing certain entrepreneurial qualities of undergraduates that change their behavior, and even their entrepreneurial inclination, and may ultimately result in the formation of new businesses as well as new job opportunities. This study attempts to investigate Colombian students' entrepreneurial qualities and the influence of entrepreneurial education during their studies.
Abstract:
The Centro de Enseñanza Aprendizaje of the Universidad del Rosario (CEA-UR), in line with its commitment to continuous improvement and the pursuit of innovation in teaching-learning processes, makes the Boletín Reflexiones Pedagógicas available to the academic community. This collection will present various alternatives for improving our teaching practices and strengthening our students' learning processes, in a simple and easy-to-understand way. This first issue addresses active learning, the centre of our educational project as set out in the PEI 2014. In it you will find not only what this type of learning refers to, but also a description of some strategies for developing it and references for exploring the topic further.
Abstract:
This paper presents a study in a field little explored for the Portuguese language – modality and its automatic tagging. Our main goal was to find a set of attributes for the creation of automatic taggers with improved performance over the bag-of-words (bow) approach. Performance was measured using precision, recall and F1. Because it is a relatively unexplored field, the study covers the creation of the corpus (composed of eleven verbs), the use of a parser to extract syntactic and semantic information from the sentences, and a machine learning approach to identify modality values. Based on three different sets of attributes – from the trigger itself, from the trigger's path in the parse tree, and from the context – the system creates a tagger for each verb, achieving (for almost every verb) an improvement in F1 when compared to the traditional bow approach.
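As a toy illustration of the comparison, the sketch below trains a bag-of-words baseline and then an enriched variant in which parse-derived attributes are appended as extra pseudo-tokens. The sentences, modality labels and path values are invented for illustration.

```python
# A toy bow baseline versus a tagger enriched with parse-derived attributes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

sents = ["ele pode sair agora", "pode ser que chova", "ela pode nadar bem"]
labels = ["permission", "epistemic", "ability"]   # modality value per sentence

def make_tagger():
    # token_pattern \S+ keeps pseudo-tokens such as "PATH=S>VP>V" intact.
    return make_pipeline(CountVectorizer(token_pattern=r"\S+"),
                         LogisticRegression(max_iter=1000))

bow = make_tagger().fit(sents, labels)            # plain bow baseline

paths = ["VP>V", "S>VP>V", "VP>V"]                # hypothetical parse paths
enriched = make_tagger().fit(
    [f"{s} PATH={p}" for s, p in zip(sents, paths)], labels)
print(enriched.predict(["ele pode falar PATH=VP>V"]))
```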
Abstract:
Magnetic Resonance Imaging (MRI) is the in vivo technique most commonly employed to characterize changes in brain structures. Conventional MRI-derived morphological indices are able to capture only partial aspects of brain structural complexity. Fractal geometry and its most popular index, the fractal dimension (FD), can characterize self-similar structures, including grey matter (GM) and white matter (WM). Previous literature shows the need for a definition of the so-called fractal scaling window, within which each structure manifests self-similarity. This justifies the existence of fractal properties and confirms Mandelbrot’s assertion that "fractals are not a panacea; they are not everywhere". In this work, we propose a new approach to automatically determine the fractal scaling window, computing two new fractal descriptors, i.e., the minimal and maximal fractal scales (mfs and Mfs). Our method was implemented in a software package, validated on phantoms and applied to large datasets of structural MR images. We demonstrated that the FD is a useful marker of the changes in morphological complexity that occur during brain development and aging and, using ultra-high magnetic field (7T) examinations, we showed that the cerebral GM has fractal properties even at spatial scales below 1 mm. We applied our methodology to two neurological diseases. We observed a reduction of brain structural complexity in SCA2 patients and, using a machine learning approach, showed that the cerebral WM FD is a consistent feature in predicting cognitive decline in patients with small vessel disease and mild cognitive impairment. Finally, we showed that the FD of WM skeletons derived from diffusion MRI provides information complementary to that obtained from the FD of the general WM structure in T1-weighted images. In conclusion, fractal descriptors of structural brain complexity are candidate biomarkers for detecting subtle morphological changes during development, aging and neurological disease.
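For intuition, here is a sketch of box-counting FD estimation restricted to a scaling window, in the spirit of the proposed mfs/Mfs descriptors: the slope of log(box count) versus log(1/box size) is fitted only for box sizes inside the window. The window bounds and the random binary image are illustrative assumptions, not values from the thesis.

```python
# Box-counting fractal dimension within a fractal scaling window.
import numpy as np

def box_count(img, size):
    """Number of size x size boxes containing at least one foreground pixel."""
    h, w = (np.array(img.shape) // size) * size
    blocks = img[:h, :w].reshape(h // size, size, w // size, size)
    return np.count_nonzero(blocks.any(axis=(1, 3)))

def fractal_dimension(img, mfs=2, Mfs=32):
    """Slope of log(count) vs log(1/size) inside the window [mfs, Mfs]."""
    sizes = [s for s in (2, 4, 8, 16, 32, 64) if mfs <= s <= Mfs]
    counts = [box_count(img, s) for s in sizes]
    slope, _ = np.polyfit(np.log(1 / np.array(sizes)), np.log(counts), 1)
    return slope

# Toy binary "structure": a random foreground mask standing in for a GM mask.
rng = np.random.default_rng(1)
mask = rng.random((256, 256)) > 0.7
print(f"estimated FD: {fractal_dimension(mask):.2f}")
```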
Abstract:
The demographics of massive open online course (MOOC) analytics show that the great majority of learners are highly qualified professionals, and not, as originally envisaged, the global community of disadvantaged learners who have no access to good higher education. MOOC pedagogy fits well with the combination of instruction and peer community learning found in most professional development. A UNESCO study therefore set out to test the efficacy of an experimental course for teachers who need but do not receive high-quality continuing professional development, as a way of exploiting what MOOCs can do indirectly to serve disadvantaged students. The course was based on case studies around the world of information and communication technology (ICT) in primary education and was carried out to contribute to the UNESCO “Education For All” goal. It used a co-learning approach to engage the primary teaching community in exploring ways of using ICT in primary education. Course analytics, forums and participant surveys demonstrated that it worked well. The paper concludes by arguing that this technology has the power to tackle the large-scale educational problem of developing the primary-level teachers needed to meet the goal of universal education.
Abstract:
Let’s put ourselves in the shoes of an energy company. Our fleet of electricity production plants mainly includes gas, hydroelectric and waste-to-energy plants. We have also sold contracts for the supply of gas and electricity. For each year we have to plan the trading of the volumes needed by the plants and customers: better to fix the price of these volumes in advance with so-called forward contracts, instead of waiting for the delivery months and exposing ourselves to price uncertainty. Here’s the catch: we must keep uncertainty under control in a market that has never shown such extreme scenarios as in recent years: a pandemic, a worsening climate crisis and a war affecting economies around the world have made the energy market more volatile than ever. How can we make decisions in such uncertain contexts? There is an optimization problem: given a year, we need to choose the optimal planning of volume trading times, to meet the needs of our portfolio at the best prices, while respecting the liquidity constraints given by the market and the risk constraints imposed by the company. Algorithms are needed for the generation of market scenarios over a finite time horizon, that is, a probability distribution that gives a view of all the dates between now and the end of the year of interest. Algorithms are needed to solve the optimization problem: we have proposed and compared several – a very simple one, which avoids considering part of the complexity, then a scenario-based approach, and finally a reinforcement learning approach.
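To give a flavour of the scenario approach, the sketch below simulates forward-price scenarios with geometric Brownian motion and greedily schedules purchases on the days with the lowest expected price, subject to a per-day liquidity cap. The volumes, volatility and the omission of risk constraints are all simplifying assumptions.

```python
# A minimal scenario-approach sketch: GBM price scenarios + greedy schedule.
import numpy as np

rng = np.random.default_rng(42)
n_scenarios, n_days = 1000, 250
vol_total, daily_cap = 100.0, 1.0        # MWh to hedge, max MWh per day
sigma, p0 = 0.02, 50.0                   # daily volatility, initial price

# Scenario generation: geometric Brownian motion forward-price paths.
shocks = rng.normal(-0.5 * sigma**2, sigma, (n_scenarios, n_days))
prices = p0 * np.exp(np.cumsum(shocks, axis=1))

# Greedy plan: buy the cap on the days with the lowest expected price.
expected = prices.mean(axis=0)
buy_days = np.argsort(expected)[: int(vol_total / daily_cap)]
plan = np.zeros(n_days)
plan[buy_days] = daily_cap

cost = (prices @ plan).mean()            # expected cost across scenarios
print(f"expected hedging cost: {cost:,.0f}")
```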