967 results for deep learning


Relevance: 70.00%

Abstract:

Objectives: Recent research has shown that machine learning techniques can accurately predict activity classes from accelerometer data in adolescents and adults. The purpose of this study is to develop and test machine learning models for predicting activity type in preschool-aged children.
Design: Participants completed 12 standardised activity trials (TV, reading, tablet game, quiet play, art, treasure hunt, cleaning up, active game, obstacle course, bicycle riding) over two laboratory visits.
Methods: Eleven children aged 3–6 years (mean age = 4.8 ± 0.87 years; 55% girls) completed the activity trials while wearing an ActiGraph GT3X+ accelerometer on the right hip. Activities were categorised into five activity classes: sedentary activities, light activities, moderate-to-vigorous activities, walking, and running. A standard feed-forward Artificial Neural Network and a Deep Learning Ensemble Network were trained on features of the accelerometer data used in previous investigations (the 10th, 25th, 50th, 75th and 90th percentiles and the lag-one autocorrelation).
Results: Overall recognition accuracy for the standard feed-forward Artificial Neural Network was 69.7%. Recognition accuracy for sedentary activities, light activities and games, moderate-to-vigorous activities, walking, and running was 82%, 79%, 64%, 36% and 46%, respectively. In comparison, overall recognition accuracy for the Deep Learning Ensemble Network was 82.6%; for sedentary activities, light activities and games, moderate-to-vigorous activities, walking, and running, recognition accuracy was 84%, 91%, 79%, 73% and 73%, respectively.
Conclusions: Ensemble machine learning approaches such as the Deep Learning Ensemble Network can accurately predict activity type from accelerometer data in preschool children.
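As a rough illustration of the feature set described above (the five percentiles plus the lag-one autocorrelation) feeding a feed-forward network, here is a minimal Python sketch; the window length, the use of a single vector-magnitude signal, and the network configuration are illustrative assumptions, not the models evaluated in the study.

```python
# Hypothetical sketch of the feature extraction described in the abstract:
# 10th/25th/50th/75th/90th percentiles and lag-one autocorrelation per window,
# fed to a small feed-forward neural network. Window length and model size
# are illustrative guesses, not the study's actual settings.
import numpy as np
from sklearn.neural_network import MLPClassifier

def window_features(window: np.ndarray) -> np.ndarray:
    """window: 1-D array of vector-magnitude accelerometer counts."""
    percentiles = np.percentile(window, [10, 25, 50, 75, 90])
    centred = window - window.mean()
    denom = np.dot(centred, centred)
    lag1 = np.dot(centred[:-1], centred[1:]) / denom if denom > 0 else 0.0
    return np.append(percentiles, lag1)

def extract_features(signal: np.ndarray, window_len: int = 150) -> np.ndarray:
    windows = [signal[i:i + window_len]
               for i in range(0, len(signal) - window_len + 1, window_len)]
    return np.vstack([window_features(w) for w in windows])

# X: feature matrix (n_windows x 6), y: one activity class label per window
# clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000).fit(X, y)
```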

Relevance: 70.00%

Abstract:

In the current regulatory climate, there is increasing expectation that law schools will be able to demonstrate students’ acquisition of learning outcomes regarding collaboration skills. We argue that this is best achieved through a stepped and structured whole-of-curriculum approach to small group learning. ‘Group work’ provides deep learning and opportunities to develop professional skills, but these benefits are not always realised for law students. An issue is that what is meant by ‘group work’ is not always clear, resulting in a learning regime that may not support the attainment of desired outcomes. This paper describes different types of ‘group work’, each associated with distinct learning outcomes. It suggests that ‘group work’ as an umbrella term to describe these types is confusing, as it provides little indication to students and teachers of the type of learning that is valued and is expected to take place. ‘Small group learning’ is a preferable general descriptor. Identifying different types of small group learning allows law schools to develop and demonstrate a scaffolded, sequential and incremental approach to fostering law students’ collaboration skills. To support learning and the acquisition of higher-order skills, different types of small group learning are more appropriate at certain stages of the program. This structured approach is consistent with social cognitive theory, which suggests that with the guidance of a supportive teacher, students can develop skills and confidence in one type of activity which then enhances motivation to participate in another.

Relevance: 70.00%

Abstract:

Optical Coherence Tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of bio-medical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system: (1) Obtaining an OCT image is not easy. It either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects simply because there aren't many imaged objects. (2) Interpretation of an OCT image is also hard. This challenge is more profound than it appears. For instance, it would require a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level would usually require a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (millimeter or sub-millimeter in human tissue). While OCT utilizes infrared light for illumination to stay noninvasive, the downside of this is that photons at such long wavelengths can only penetrate a limited depth into the tissue before getting back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach there) and distorted (due to multiple scattering of the contributing photons). This fact alone makes OCT images very hard to interpret.

This thesis addresses the above challenges by successfully developing an advanced Monte Carlo simulation platform which is 10,000 times faster than the state-of-the-art simulator in the literature, bringing down the simulation time from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects with arbitrary structure and shape as we want on a common desktop computer, but it also provides us with the underlying ground truth of the simulated images at the same time, because we dictate it at the beginning of the simulation. This is one of the key contributions of this thesis. What allows us to build such a powerful simulation tool includes a thorough understanding of the signal formation process, clever implementation of the importance sampling/photon splitting procedure, efficient use of a voxel-based mesh system in determining photon-mesh interception, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical tricks, which will be explained in detail later in the thesis.
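To make the simulation idea concrete, below is a heavily simplified Python sketch of a Monte Carlo photon random walk with a crude photon-splitting step; the optical coefficients, isotropic scattering, splitting factor and detection rule are placeholder assumptions and not the thesis implementation, which additionally biases sampled directions, uses a voxel mesh, and parallelises over A-scans.

```python
# Crude sketch (not the thesis code) of a Monte Carlo photon random walk with
# photon splitting: at each scattering event the photon's weight is shared among
# several copies. In a real importance-sampling scheme the copies would be
# biased toward directions that reach the detector; here they are isotropic.
import numpy as np

rng = np.random.default_rng(0)
MU_S, MU_A = 10.0, 0.1        # assumed scattering / absorption coefficients (1/mm)
SPLIT = 2                     # assumed number of copies per scattering event

def random_direction():
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

def trace(pos, direction, weight, depth=0, max_depth=20):
    """Return detected weight for one photon, recursively following split copies."""
    detected = 0.0
    while depth < max_depth and weight > 1e-3:
        step = -np.log(rng.random()) / (MU_S + MU_A)   # free path length
        pos = pos + step * direction
        if pos[2] < 0.0:                               # photon exits the tissue surface
            if direction[2] < -0.9:                    # roughly back toward the detector
                detected += weight
            return detected
        weight *= MU_S / (MU_S + MU_A)                 # deposit the absorbed fraction
        for _ in range(SPLIT - 1):                     # spawn split copies
            detected += trace(pos, random_direction(), weight / SPLIT,
                              depth + 1, max_depth)
        weight /= SPLIT
        direction = random_direction()                 # isotropic scattering (simplified)
        depth += 1
    return detected

signal = sum(trace(np.zeros(3), np.array([0.0, 0.0, 1.0]), 1.0) for _ in range(200))
```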

Next we aim at the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure on a pixel level. By solving this problem we would be able to interpret an OCT image completely and precisely without help from a trained expert. It turns out that we can do much better. For simple structures we are able to reconstruct the ground-truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) we are looking at 93%. We achieved this through extensive use of Machine Learning. The success of the Monte Carlo simulation already puts us in a great position by providing us with a great deal of data (effectively unlimited), in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (committee of experts) and train different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model to determine its structure (e.g., the number and the types of layers present in the image); then the image is handed to a regression model that is trained specifically for that particular structure to predict the length of the different layers and by doing so reconstruct the ground-truth of the image. We also demonstrate that ideas from Deep Learning can be useful to further improve the performance.
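The two-stage "committee of experts" pipeline described above can be sketched roughly as follows; the raw-feature representation and the random-forest models are stand-ins chosen for brevity and are assumptions, not the actual models used in the thesis.

```python
# Hedged sketch of a two-stage pipeline: a gating classifier predicts which
# structure an OCT image contains, then a regressor trained only on images of
# that structure predicts the layer thicknesses. Model choices are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

class CommitteeOfExperts:
    def __init__(self, structures):
        self.gate = RandomForestClassifier(n_estimators=100)
        self.experts = {s: RandomForestRegressor(n_estimators=100) for s in structures}

    def fit(self, X, structure_labels, layer_targets):
        """layer_targets: dict mapping each structure label to its target matrix,
        row-aligned with the rows of X that carry that label."""
        self.gate.fit(X, structure_labels)
        for s, expert in self.experts.items():
            mask = structure_labels == s
            expert.fit(X[mask], layer_targets[s])
        return self

    def predict(self, x):
        x = x.reshape(1, -1)
        structure = self.gate.predict(x)[0]          # stage 1: which structure?
        layers = self.experts[structure].predict(x)[0]  # stage 2: layer geometry
        return structure, layers
```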

It is worth pointing out that solving the inverse problem automatically improves the imaging depth, since the lower half of an OCT image (i.e., greater depth), which previously could hardly be seen, now becomes fully resolved. Interestingly, although the OCT signals constituting the lower half of the image are weak, messy, and uninterpretable to human eyes, they still carry enough information that, when fed into a well-trained machine learning model, yields precisely the true structure of the object being imaged. This is just another case where Artificial Intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only a successful attempt but also the first attempt to reconstruct an OCT image at the pixel level. Even attempting this kind of task would require fully annotated OCT images, and a lot of them (hundreds or even thousands). This is clearly impossible without a powerful simulation tool like the one developed in this thesis.

Relevance: 70.00%

Abstract:

In this paper we discuss collaborative learning strategies based on the use of digital stories in corporate training and lifelong learning. The text starts with a concise review of the theoretical and technical foundations for the use of digital technologies in collaborative strategies for lifelong learning. We also discuss whether corporate training may be improved by the use of individual audio-visual experiences in the learning process. Careful planning, scripting and production of audio-visual digital stories can help construct collaborative learning spaces for adults in the context of lifelong vocational training. Our analysis concludes by emphasizing the need to put the production of digital stories into practice in the context of corporate training, following the reference levels mentioned here, so that in the future we have more theoretical and empirical elements for validating and conceptualizing the use of digital stories in corporate training. Ultimately, we believe that lifelong learning can be improved through strategies that promote the production of personal audio-visual material by those involved in the teaching and learning process in an organizational context.

Relevance: 70.00%

Abstract:

Artificial vision tasks such as object recognition remain unsolved to this day. Learning algorithms such as Artificial Neural Networks (ANNs) represent a promising approach for learning features that are useful for these tasks. This optimization process is nevertheless difficult. Deep networks based on Restricted Boltzmann Machines (RBMs) have recently been proposed to guide the extraction of intermediate representations by means of an unsupervised learning algorithm. This thesis presents, through three articles, contributions to this field of research. The first article deals with the convolutional RBM. The use of local receptive fields, together with the grouping of hidden units into layers sharing the same parameters, considerably reduces the number of parameters to learn and yields local, translation-equivariant feature detectors. This leads to models with better likelihood than RBMs trained on image patches. The second article is motivated by recent findings in neuroscience. It analyzes the impact of quadratic units on visual classification tasks, as well as that of a new activation function. We observe that ANNs with quadratic units using the softsign function give better generalization performance. The last article offers a critical view of popular RBM training algorithms. We show that Contrastive Divergence (CD) and Persistent CD are not robust: both require a relatively flat energy surface for their negative chain to mix. Fast-weight PCD circumvents this problem by slightly perturbing the model, but this generates noisy samples. Using tempered chains in the negative phase is a robust way to address these problems and leads to better generative models.
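For readers unfamiliar with the methods named above, the following minimal Python sketch shows the softsign activation and a single CD-1 update for a binary RBM; the shapes, learning rate and Bernoulli sampling details are generic textbook choices rather than the code used in these articles.

```python
# Illustrative sketch: softsign activation and one Contrastive Divergence (CD-1)
# update for a binary RBM with weight matrix W and visible/hidden biases.
import numpy as np

rng = np.random.default_rng(0)

def softsign(x):
    return x / (1.0 + np.abs(x))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b_vis, b_hid, lr=0.01):
    """One CD-1 update on a batch of visible vectors v0 (shape: batch x n_visible)."""
    h0_prob = sigmoid(v0 @ W + b_hid)                     # positive phase
    h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)
    v1_prob = sigmoid(h0 @ W.T + b_vis)                   # one reconstruction step
    h1_prob = sigmoid(v1_prob @ W + b_hid)                # negative phase
    n = v0.shape[0]
    W += lr * (v0.T @ h0_prob - v1_prob.T @ h1_prob) / n
    b_vis += lr * (v0 - v1_prob).mean(axis=0)
    b_hid += lr * (h0_prob - h1_prob).mean(axis=0)
    return W, b_vis, b_hid
```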

Relevance: 70.00%

Abstract:

This thesis deals with a class of learning algorithms called deep architectures. There are results indicating that shallow, local representations are not sufficient for modelling functions with many factors of variation. We are particularly interested in this kind of data because we hope that an intelligent agent will be able to learn to model it automatically; the hypothesis is that deep architectures are better suited to modelling it. The work of Hinton (2006) was a real breakthrough, because the idea of using an unsupervised learning algorithm, restricted Boltzmann machines, to initialize the weights of a supervised neural network was crucial for training the most popular deep architecture, namely artificial neural networks with fully connected weights. This idea has been taken up and reproduced successfully in several contexts and with a variety of models. In this thesis, we consider deep architectures as inductive biases. These biases are represented not only by the models themselves, but also by the training methods that are often used in conjunction with them. We wish to determine the reasons why this class of functions generalizes well, the situations in which these functions can be applied, and qualitative descriptions of such functions. The goal of this thesis is to gain a better understanding of the success of deep architectures. In the first article, we test the agreement between our intuitions---that deep networks are needed to learn better from data with many factors of variation---and the empirical results. The second article is an in-depth study of the question: why does unsupervised learning help a deep network generalize better? We explore and evaluate several hypotheses that attempt to elucidate how these models work. Finally, the third article seeks to characterize qualitatively the functions modelled by a deep network. These visualizations facilitate the interpretation of the representations and invariances modelled by a deep architecture.
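A minimal sketch of the greedy layer-wise pre-training idea credited above to Hinton (2006) might look like the following; the layer sizes, the simple CD-1 update and the omitted supervised fine-tuning step are illustrative assumptions, not the thesis code.

```python
# Hedged sketch of greedy layer-wise pre-training: each layer is trained as an
# RBM on the activations of the layer below, and the learned weights are then
# used to initialize a supervised feed-forward network (fine-tuning not shown).
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def pretrain_rbm(data, n_hidden, epochs=10, lr=0.05):
    n_visible = data.shape[1]
    W = 0.01 * rng.normal(size=(n_visible, n_hidden))
    b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)
    for _ in range(epochs):
        h_prob = sigmoid(data @ W + b_h)
        h = (rng.random(h_prob.shape) < h_prob).astype(float)
        v1 = sigmoid(h @ W.T + b_v)                      # CD-1 reconstruction
        h1 = sigmoid(v1 @ W + b_h)
        W += lr * (data.T @ h_prob - v1.T @ h1) / len(data)
        b_v += lr * (data - v1).mean(axis=0)
        b_h += lr * (h_prob - h1).mean(axis=0)
    return W, b_h

def greedy_pretrain(X, layer_sizes):
    """Return a list of (W, b) pairs to initialize a deep supervised network."""
    weights, activations = [], X
    for n_hidden in layer_sizes:
        W, b_h = pretrain_rbm(activations, n_hidden)
        weights.append((W, b_h))
        activations = sigmoid(activations @ W + b_h)     # input to the next RBM
    return weights   # supervised fine-tuning (e.g., backprop) would start from here

# init = greedy_pretrain(X_train, layer_sizes=[256, 128])   # hypothetical usage
```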

Relevance: 70.00%

Abstract:

Deep learning is a rapidly growing research area in machine learning that has achieved impressive results on tasks ranging from image classification to speech and language modelling. Recurrent neural networks, a subclass of deep architectures, are particularly promising. Recurrent networks can capture the temporal structure in data. They potentially have the ability to learn correlations between events that are far apart in time and to store information indefinitely in their internal memory. In this work, we first try to understand why depth is useful. Similarly to other work in the literature, our results show that deep models can represent certain families of functions more efficiently than shallow models. Unlike that work, we carry out our theoretical analysis on acyclic (feed-forward) deep networks with piecewise-linear activation functions, since this type of model is currently the state of the art on various classification tasks. The second part of this thesis concerns the learning process. We analyze several recently proposed optimization techniques, such as Hessian-free optimization, natural gradient descent and Krylov subspace descent. We propose the theoretical framework of generalized trust-region methods and show that several of these recently developed algorithms can be viewed from this perspective. We argue that some members of this family of approaches may be better suited than others to non-convex optimization. The last part of this document focuses on recurrent neural networks. We first study the concept of memory and try to answer the following questions: Can recurrent networks exhibit unbounded memory? Can this behaviour be learned? We show that this is possible if cues are provided during learning. We then explore two problems specific to training recurrent networks, namely vanishing and exploding gradients. Our analysis ends with a solution to the exploding-gradient problem that involves bounding the norm of the gradient. We also propose a regularization term designed specifically to reduce the vanishing-gradient problem. On a synthetic dataset, we show empirically that these mechanisms can enable recurrent networks to learn autonomously to memorize information for an indefinite period of time. Finally, we explore the notion of depth in recurrent neural networks. Compared with acyclic networks, the definition of depth in recurrent networks is often ambiguous. We propose different ways of adding depth to recurrent networks and evaluate these proposals empirically.
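The exploding-gradient remedy mentioned above (bounding the gradient norm) can be sketched in a few lines; the clipping threshold and the plain SGD update below are illustrative choices, not the exact procedure from the thesis.

```python
# Minimal sketch of gradient-norm clipping: rescale the gradient whenever its
# global norm exceeds a threshold, then apply an ordinary SGD update.
import numpy as np

def clip_gradient(grads, threshold=1.0):
    """grads: list of numpy arrays; rescaled proportionally if the global norm is too large."""
    norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if norm > threshold:
        grads = [g * (threshold / norm) for g in grads]
    return grads

def sgd_step(params, grads, lr=0.01, threshold=1.0):
    grads = clip_gradient(grads, threshold)
    return [p - lr * g for p, g in zip(params, grads)]
```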

Relevance: 70.00%

Abstract:

E-learning systems are becoming increasingly important in higher education. However, the state of the art in e-learning applications, as well as the state of practice, does not achieve the level of interactivity that current learning theories advocate. In this paper, the possibility of enhancing e-learning systems to achieve deep learning has been studied by replicating an experiment in which students had to learn basic software engineering principles. One group learned these principles using a static approach, while the other group learned the same principles using a system-dynamics-based approach, which provided interactivity and feedback. The results show that, quantitatively, the latter group achieved a better understanding of the principles; furthermore, qualitatively, they enjoyed the learning experience.

Relevance: 70.00%

Abstract:

Purpose – The purpose of this study is to examine the relationship between the cultural background of students and their learning approaches in a first-year undergraduate accounting program.

Design/methodology/approach – While prior research in this area has more generally focused on the investigation of the approaches to learning by accounting students, there appears to have been little investigation into the learning approaches of students from different cultures who are studying accounting together at the same institution. The paper presents the results of a study of 550 students enrolled in an undergraduate accounting program at a multi-campus university in Victoria, Australia, which used Biggs' study process questionnaire (SPQ) to assess the approaches to learning utilised by local and Chinese students.

Findings – The results showed that, while there were no significant differences in the use of surface and deep learning strategies by the Chinese and Australian students, there were significant differences in the learning motives of the two groups. Furthermore, the results contradict prior claims that Asian students rely principally on the memorisation and reproduction of factual information as a means of achieving academic success.

Originality/value – The study provides support for the notion that Chinese students may in fact have a culturally induced bias towards seeking understanding through deeper approaches to study.

Relevance: 70.00%

Abstract:

The increasing diversity and mobility of students have challenged universities the world over to review educational courses and delivery to provide a more satisfying learning environment to students. The continuous improvement of the 'quality' of teaching and learning is one of the key goals of universities endeavouring to fulfil their obligations as learning institutions. Using a revised SPQ2F instrument (Biggs, 2003; Biggs and Leung, 2001), this exploratory study undertakes a comparative analysis of the age and gender differences in the learning orientations of two groups of tertiary students in an Australian university. The results indicate that there are no significant differences in the learning orientations of students, but on average they seem to demonstrate deep learning rather than surface learning, although they may differ in terms of the learning contexts.

Relevance: 70.00%

Abstract:

Classroom video and video-stimulated interviews of small group work in a Grade 5/6 classroom are used to show ways in which group composition can influence learning opportunities. Vygotsky’s (1933/1966; 1978) learning theory, contrasting the spontaneous creation of knowledge with the guidance of an expert other, frames this group analysis. Illustrations from two groups show how opportunities to spontaneously create new knowledge can be limited or enhanced by psychological factors associated with the inclination to explore that have been linked to resilience in the form of optimism (Seligman, 1995; Williams, 2003). This study contributes to our knowledge on forming groups to promote deep learning. It raises questions about other ways in which learning may be influenced by optimistic orientation and about building this personal characteristic to enable deep learning.

Relevance: 70.00%

Abstract:

This is a reflective article on the importance of scaffolding in the EME 150 unit taught in collaboration with Deakin University Australia. As this was the first unit introduced in the second semester of the first academic year, students were given a lot of support to enhance their understanding and learning, since this curriculum was solely developed by Deakin University and introduced for the first time in the teacher education curriculum. The scaffolding tools discussed in this article enabled students to a) establish deep learning of the theory; b) engage in collaborative and engaged learning, which established good ethical relations between students; and c) transfer learning by applying theory to practice.

Relevance: 70.00%

Abstract:

In the context of a broader research study on the intercultural understanding of teachers in Australia, Japan and Thailand, this paper focuses on approaches to learning and the role of assessment in shaping such approaches. Popular contrasts portray Asian learners as compliant and favouring rote memorisation and Western learners as independent and favouring deep, conceptual learning. Yet Asian students frequently outperform their Western counterparts in competitive tests purported to measure higher cognitive skills. Biggs and his associates have challenged the stereotypical view of Asian students as rote learners as a Western misperception. But data from the present cross-cultural study suggest it is more than a Western misperception, being shared by teachers in Japan and Thailand. With this background, this paper then explores the role of assessment through an analysis of examination papers in the three countries at the high stakes, year 12 level. This analysis of the ways in which knowledge and comprehension are assessed identifies different practices across cultures but not ones corresponding to the rhetoric on contrasting approaches to learning. Rather it concludes that assessment tasks classified superficially as comprehension can be approached through memorisation and conversely, those often classified as memorisation can require careful reading, thought and interpretation, while drawing from an extensive knowledge base. A shared understanding of the nature of assessment tasks in different cultures thus has the potential to dissolve the demarcation of culturally embedded learning styles and to enhance deep learning grounded in specialist knowledge for scholars, be they students or teachers, in all cultures.

Relevance: 70.00%

Abstract:

Accounting academics have heeded the call to incorporate team learning activities into the curricula, yet little is known of students' perceptions of teamwork and whether they view it as beneficial to them. This study addresses the gap by utilising qualitative techniques to examine student perceptions of the benefits of teamwork and which aspects of teamwork will contribute to their future professional work. Results indicate that students perceive that teamwork enables the use of deep learning and, further, that teamwork at the undergraduate level contributes to their future abilities in the profession. The paper ends by presenting implications for accounting educators.

Relevance: 70.00%

Abstract:

The learning experiences of first-year engineering students in a newly implemented engineering problem-based learning (PBL) curriculum are reported here, with an emphasis on student approaches to learning. Ethnographic approaches were used for data collection and analysis. This study found that student learning in a PBL team in this setting was mainly influenced by the attitudes, behaviour and learning approaches of the student members of that team. Three different learning cultures that emerged from the analysis of eight PBL teams are reported here: the finishing culture, the performing culture and the collaborative learning culture. It was found that the team that used a collaborative approach to learning benefited the most in this PBL setting; students in this team approached learning at a deep level. The findings of this study imply that students in a problem-based, or project-based, learning setting may not automatically adopt a collaborative learning culture. Hence, it is important for institutions and teachers to identify and consider the factors that influence student learning in their particular setting, and to provide students with the necessary tools and ongoing coaching to nurture deep learning approaches in PBL teams.