813 results for Learning Tool


Relevance:

30.00%

Publisher:

Abstract:

Hopwood Hall is adopting touch screen technology, gradually replacing its interactive whiteboards to improve access to interactive learning. Touch screen monitors connected to LCD TVs provide a cheaper classroom build with technology that is more user-friendly and better suited to classroom delivery. Until now, interactive boards had been the mainstay of classrooms, but they created teaching barriers for staff, including additional software to learn and master. They are also expensive and often suffer usability issues, with 'pens' not working or lagging when used as the mouse tool.

Relevance:

30.00%

Publisher:

Abstract:

A benchmarking tool developed by Jisc in collaboration with the National Union of Students (NUS) and the Student Engagement Partnership (TSEP). The tool is a starting point for discussions between staff and students about what is working in the digital learning environment and what they can work on together to improve it.

Relevance:

30.00%

Publisher:

Abstract:

In the first part of the thesis we explore three fundamental questions that arise naturally when we conceive a machine learning scenario in which the training and test distributions can differ. Contrary to conventional wisdom, we show that mismatched training and test distributions can in fact yield better out-of-sample performance. This optimal performance is obtained by training with the dual distribution, an optimal training distribution that depends on the test distribution set by the problem but not on the target function we want to learn. We show how to obtain this distribution in both discrete and continuous input spaces, as well as how to approximate it in a practical scenario. The benefits of using this distribution are exemplified on both synthetic and real data sets.

In order to apply the dual distribution in the supervised learning scenario where the training data set is fixed, it is necessary to use weights to make the sample appear as if it came from the dual distribution. We explore the negative effect that weighting a sample can have. The theoretical decomposition of the effect of weights on the out-of-sample error is easy to understand but not actionable in practice, as the quantities involved cannot be computed. Hence, we propose the Targeted Weighting algorithm, which determines, for a given set of weights, whether the out-of-sample performance will improve in a practical setting. This is necessary because the setting assumes there are no labeled points distributed according to the test distribution, only unlabeled samples.
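To make the weighting step concrete, here is a minimal sketch of importance weighting under covariate shift: a fixed training sample drawn from one density is reweighted so it behaves as if drawn from another (such as the dual or test distribution). The Gaussian densities and the logistic model are illustrative assumptions, not the thesis's setup, and the Targeted Weighting algorithm itself is not reproduced.

```python
# Minimal importance-weighting sketch (assumed densities, not the thesis's).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Fixed training sample drawn from p(x); labels from a fixed target function.
X_train = rng.normal(loc=0.0, scale=1.0, size=(1000, 1))
y_train = (X_train[:, 0] > 0.5).astype(int)

# Hypothetical densities: training p = N(0, 1), desired q = N(1, 1).
def p(x): return np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)
def q(x): return np.exp(-0.5 * (x - 1.0)**2) / np.sqrt(2 * np.pi)

# Weights w(x) = q(x)/p(x) make the sample appear drawn from q.
w = q(X_train[:, 0]) / p(X_train[:, 0])

clf = LogisticRegression().fit(X_train, y_train, sample_weight=w)
```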

Finally, we propose a new class of matching algorithms that can be used to match the training set to a desired distribution, such as the dual distribution (or the test distribution). These algorithms scale to very large datasets, and we show how they lead to improved performance on a large real dataset such as the Netflix dataset. Their favorable computational complexity is their main advantage over previous algorithms proposed in the covariate shift literature.
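As a complement to weighting, a matching algorithm resamples rather than reweights. The sketch below shows the simplest member of that family, rejection-based subsampling toward a desired density; the thesis's algorithms are more scalable, and the densities here are again illustrative assumptions.

```python
# Minimal matching-by-rejection sketch: keep training points with
# probability proportional to q(x)/p(x) so the retained subsample
# approximately follows the desired density q.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=5000)   # training sample from p = N(0, 1)

def p(x): return np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)
def q(x): return np.exp(-0.5 * (x - 1.0)**2) / np.sqrt(2 * np.pi)

ratio = q(X) / p(X)
accept = rng.uniform(size=X.shape) < ratio / ratio.max()
X_matched = X[accept]                 # approximately distributed as q
```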

In the second part of the thesis we apply Machine Learning to the problem of behavior recognition. We develop a specific behavior classifier to study fly aggression, and we develop a system, which we call CUBA (Caltech Unsupervised Behavior Analysis), that allows behavior in videos of animals to be analyzed with minimal supervision. CUBA detects movemes, actions, and stories from time series describing the position of animals in videos. The method both summarizes the data and provides biologists with a mathematical tool to test new hypotheses. Other benefits of CUBA include finding classifiers for specific behaviors without the need for annotation, as well as providing means to discriminate groups of animals, for example according to their genetic line.
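Although the abstract does not spell out CUBA's internals, the flavor of unsupervised moveme discovery can be sketched as clustering short windows of trajectory features so that recurring motion motifs emerge without annotation. Everything below (window length, number of clusters, the toy trajectory) is an illustrative assumption, not CUBA's actual pipeline.

```python
# Toy unsupervised motif ("moveme") discovery from an animal trajectory.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
track = rng.normal(size=(5000, 2)).cumsum(axis=0)  # toy (x, y) positions

velocity = np.diff(track, axis=0)                  # frame-to-frame motion
win = 20                                           # frames per window
windows = np.lib.stride_tricks.sliding_window_view(velocity, (win, 2))
windows = windows.reshape(-1, win * 2)             # one row per window

# Each cluster is a candidate recurring motion motif.
labels = KMeans(n_clusters=8, n_init=10).fit_predict(windows)
```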

Relevance:

30.00%

Publisher:

Abstract:

Optical Coherence Tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of biomedical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system: (1) Obtaining an OCT image is not easy. It either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects, simply because there are not many imaged objects. (2) Interpreting an OCT image is also hard. This challenge is more profound than it appears. For instance, it would take a trained expert to tell from an OCT image of human skin whether a lesion is present. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (a millimeter or less in human tissue). While OCT uses infrared light for illumination to stay noninvasive, the downside is that photons at such long wavelengths can only penetrate a limited depth into the tissue before being back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach there) and distorted (due to multiple scattering of the contributing photons). This fact alone makes OCT images very hard to interpret.

This thesis addresses the above challenges by developing an advanced Monte Carlo simulation platform that is 10,000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects with arbitrary structure and shape as we want on a common desktop computer; it also provides the underlying ground truth of the simulated images, because we dictate the structure at the beginning of the simulation. This is one of the key contributions of this thesis. Building such a powerful simulation tool required a thorough understanding of the signal formation process, a careful implementation of the importance sampling/photon splitting procedure, efficient use of a voxel-based mesh system for determining photon-mesh interception, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical techniques, all explained in detail later in the thesis.
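The core of any such simulator is a photon random walk. The sketch below shows only the baseline mechanism (exponential free paths and scattering until the photon exits a slab), without the importance sampling, photon splitting, or voxel mesh that give the thesis's platform its speed; the scattering coefficient and slab depth are illustrative values.

```python
# Minimal Monte Carlo photon-transport sketch for a homogeneous slab.
import numpy as np

rng = np.random.default_rng(0)
mu_s = 10.0    # scattering coefficient (1/mm), illustrative
depth = 1.0    # slab thickness (mm), illustrative

def trace_photon():
    z, uz = 0.0, 1.0                          # start at surface, heading in
    for _ in range(1000):                     # cap scattering events
        step = rng.exponential(1.0 / mu_s)    # exponential free path length
        z += uz * step
        if z < 0.0:
            return "backscattered"            # exits toward the detector
        if z > depth:
            return "transmitted"
        uz = rng.uniform(-1.0, 1.0)           # isotropic scatter (direction cosine)
    return "lost"

counts = {}
for _ in range(10_000):
    fate = trace_photon()
    counts[fate] = counts.get(fate, 0) + 1
```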

Next we aim at the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure at the pixel level. By solving this problem we can interpret an OCT image completely and precisely without the help of a trained expert. It turns out that we can do remarkably well: for simple structures we reconstruct the ground truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) we reach 93%. We achieve this through extensive use of Machine Learning. The success of the Monte Carlo simulation already puts us in a strong position by providing a great deal of data (effectively unlimited) in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model that determines its structure (e.g., the number and types of layers present in the image); the image is then handed to a regression model, trained specifically for that particular structure, which predicts the length of the different layers and thereby reconstructs the ground truth of the image. We also demonstrate that ideas from Deep Learning can further improve the performance.
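The committee-of-experts idea can be made concrete with a two-stage scikit-learn sketch: a classifier routes each image to a structure class, and a per-structure regressor predicts the layer parameters. The models and toy data below are illustrative assumptions, not the thesis's actual architecture.

```python
# Classify-then-regress hierarchy on toy stand-ins for OCT data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)
n, d = 2000, 64
X = rng.normal(size=(n, d))                        # stand-in for OCT images
structure = rng.integers(0, 3, size=n)             # structure class labels
layers = rng.uniform(0.1, 1.0, size=(n, 4))        # ground-truth layer lengths

# Stage 1: one classifier decides which structure an image has.
clf = RandomForestClassifier(random_state=0).fit(X, structure)

# Stage 2: one regressor per structure, trained only on its own class.
regs = {s: RandomForestRegressor(random_state=0)
           .fit(X[structure == s], layers[structure == s])
        for s in np.unique(structure)}

def reconstruct(x):
    s = clf.predict(x.reshape(1, -1))[0]           # route to the expert
    return s, regs[s].predict(x.reshape(1, -1))[0] # predict layer lengths
```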

It is worth pointing out that solving the inverse problem automatically improves the imaging depth: previously the lower half of an OCT image (i.e., greater depth) could hardly be seen, but now it becomes fully resolved. Interestingly, although the OCT signals constituting the lower half of the image are weak, messy, and uninterpretable to human eyes, they still carry enough information that a well-trained machine learning model can recover precisely the true structure of the object being imaged. This is another case in which Artificial Intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only a successful reconstruction of OCT images at the pixel level but also the first attempt at one. Even trying this kind of task would ordinarily require fully annotated OCT images, and a lot of them (hundreds or even thousands), which is clearly impossible without a powerful simulation tool like the one developed in this thesis.

Relevance:

30.00%

Publisher:

Abstract:

Skillful tool use requires knowledge of the dynamic properties of tools in order to specify the mapping between applied force and tool motion. Importantly, this mapping depends on the orientation of the tool in the hand. Here we investigate the representation of dynamics during skillful manipulation of a tool that can be grasped at different orientations. We ask whether the motor system uses a single general representation of dynamics for all grasp contexts or whether it uses multiple grasp-specific representations. Using a novel robotic interface, subjects rotated a virtual tool whose orientation relative to the hand could be varied. Subjects could immediately anticipate the force direction for each orientation of the tool based on its visual geometry, and, with experience, they learned to parameterize the force magnitude. Surprisingly, this parameterization of force magnitude showed limited generalization when the orientation of the tool changed. Had subjects parameterized a single general representation, full generalization would be expected. Thus, our results suggest that object dynamics are captured by multiple representations, each of which encodes the mapping associated with a specific grasp context. We suggest that the concept of grasp-specific representations may provide a unifying framework for interpreting previous results related to dynamics learning.

Relevance:

30.00%

Publisher:

Abstract:

Innovation policies play an important role throughout the development process of emerging industries. However, existing policy studies view this process as a black box and fail to capture the policy-industry interactions that unfold along it. This paper develops an integrated technology roadmapping tool to facilitate a better understanding of policy heterogeneity at the different stages of new energy industries in China. Through a case study of the Chinese wind energy equipment manufacturing industry, the paper elaborates the dynamics between policy and the growth process of the industry, and generalizes some China-specific patterns of policy-industry interaction. As a practical output, the study proposes a policy-technology roadmapping framework that maps policy-market-product-technology interactions, in response to the need to analyze and plan the development of new industries in emerging economies (e.g., China). This paper will be of interest to policy makers, strategists, investors, and industrial experts. © 2011 IEEE.

Relevance:

30.00%

Publisher:

Abstract:

Innovation policies play an important role throughout the development process of emerging industries in China. Existing policy and industry studies view the emergence process as a black box and fail to capture how the impact of policy varies along that process. This paper develops a multi-dimensional roadmapping tool to better analyse the dynamics between policy and industrial growth for new industries in China. By reviewing the emergence of the Chinese wind turbine industry, the paper elaborates how policy and other factors have influenced the industry along this path, and generalises some China-specific features of policy-industry dynamics. As a practical output, the study proposes a roadmapping framework that generalises patterns of policy-industry interaction for the emergence of new industries in China. This paper will be of interest to policy makers, strategists, investors and industrial experts. Copyright © 2013 Inderscience Enterprises Ltd.

Relevance:

30.00%

Publisher:

Abstract:

Today's fast-paced, dynamic environments mean that, for organizations to keep "ahead of the game", engineering managers need to maximize current opportunities and avoid repeating past mistakes. This article describes the development study of a collaborative strategic management tool, the Experience Scan, designed to capture past experience and apply the learning from it to present and future situations. Experience Scan workshops were held in a number of different technology organizations, and the tool was developed and refined until its format stabilized. From participants' feedback, the workshop-based tool was judged to be a useful and efficient mechanism for communication and knowledge management, contributing to organizational learning.

Relevance:

30.00%

Publisher:

Abstract:

This paper asks how people can be assisted in learning from practice, as a basis for informing future action, when configuring information technology (IT) in organizations. It discusses the use of Alexanderian Patterns as a means of aiding such learning. Three patterns are presented that have been derived from a longitudinal empirical study focused on practices surrounding IT configuration. The paper goes on to argue that Alexanderian Patterns offer a valuable means of learning from past experience. It is argued that learning from experience is an important dimension of deciding "what needs to be done" when configuring IT within an organizational context. The three patterns outlined are described in some detail, and the implications of each are discussed. Although it is argued that patterns, per se, provide a valuable tool for learning from experience, some potential dangers in seeking to codify experience with a patterns approach are also discussed.

Relevance:

30.00%

Publisher:

Abstract:

R. Daly, Q. Shen and S. Aitken. Using ant colony optimisation in learning Bayesian network equivalence classes. Proceedings of the 2006 UK Workshop on Computational Intelligence, pages 111-118.

Relevance:

30.00%

Publisher:

Abstract:

Q. Meng and M. H. Lee, 'Construction of Robot Intra-modal and Inter-modal Coordination Skills by Developmental Learning', Journal of Intelligent and Robotic Systems, 48(1), pp 97-114, 2007.

Relevance:

30.00%

Publisher:

Abstract:

A neural network is introduced which provides a solution to the classical motor equivalence problem, whereby many different joint configurations of a redundant manipulator can all be used to realize a desired trajectory in 3-D space. To do this, the network self-organizes a mapping from motion directions in 3-D space to velocity commands in joint space. Computer simulations demonstrate that, without any additional learning, the network can generate accurate movement commands that compensate for variable tool lengths, clamping of joints, distortion of visual input by a prism, and unexpected limb perturbations. Blind reaches have also been simulated.
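For context, the classical non-learned solution to the same direction-to-joint-velocity mapping uses the Jacobian pseudoinverse, which resolves the redundancy with the minimum-norm joint velocity. The sketch below uses a planar 4-joint arm as an illustrative stand-in; the network in the paper learns this mapping by self-organization rather than computing it in closed form.

```python
# Jacobian-pseudoinverse direction-to-joint-velocity mapping for a
# redundant planar arm (illustrative stand-in for the learned mapping).
import numpy as np

link_lengths = np.array([1.0, 0.8, 0.6, 0.4])  # 4 joints moving in 2-D

def jacobian(theta):
    """Position Jacobian of a planar chain (rows: x, y)."""
    J = np.zeros((2, len(theta)))
    cum = np.cumsum(theta)                      # absolute link angles
    for i in range(len(theta)):
        J[0, i] = -np.sum(link_lengths[i:] * np.sin(cum[i:]))
        J[1, i] = np.sum(link_lengths[i:] * np.cos(cum[i:]))
    return J

theta = np.array([0.3, -0.2, 0.5, 0.1])        # current joint angles
direction = np.array([0.0, 1.0])               # desired hand direction

# Minimum-norm joint velocities realizing the desired spatial direction;
# a "clamped" joint can be modeled by zeroing its Jacobian column.
theta_dot = np.linalg.pinv(jacobian(theta)) @ direction
```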

Relevance:

30.00%

Publisher:

Abstract:

This paper describes a self-organizing neural model for eye-hand coordination. Called the DIRECT model, it embodies a solution of the classical motor equivalence problem. Motor equivalence computations allow humans and other animals to flexibly employ an arm with more degrees of freedom than the space in which it moves to carry out spatially defined tasks under conditions that may require novel joint configurations. During a motor babbling phase, the model endogenously generates movement commands that activate the correlated visual, spatial, and motor information that is used to learn its internal coordinate transformations. After learning occurs, the model is capable of controlling reaching movements of the arm to prescribed spatial targets using many different combinations of joints. When given visual feedback, the model can automatically perform, without additional learning, reaches with tools of variable lengths, with clamped joints, with distortions of visual input by a prism, and with unexpected perturbations. These compensatory computations occur within a single accurate reaching movement; no corrective movements are needed. Blind reaches using internal feedback have also been simulated. The model achieves its competence by transforming visual information about target position and end-effector position in 3-D space into a body-centered spatial representation of the direction in 3-D space that the end effector must move to contact the target. The spatial direction vector is adaptively transformed into a motor direction vector, which represents the joint rotations that move the end effector in the desired spatial direction from the present arm configuration. Properties of the model are compared with psychophysical data on human reaching movements, neurophysiological data on the tuning curves of neurons in the monkey motor cortex, and alternative models of movement control.

Relevance:

30.00%

Publisher:

Abstract:

The study is a cross-linguistic, cross-sectional investigation of the impact of learning contexts on the acquisition of sociopragmatic variation patterns and the subsequent enactment of compound identities. The informants are 20 non-native speaker teachers of English from 10 European countries. They are all primarily mono-contextual foreign language learners/users of English; however, they differ with respect to the length of time accumulated in a target-language environment. This allows three groups to be established: those who have accumulated 60 days or less; those with between 90 days and one year; and the final group, all of whom have accumulated in excess of one year. In order to foster the dismantling of the monolith of learning context, both learning contexts under consideration – the foreign language context and the submersion context – are broken down into micro-contexts which I refer to as loci of learning. For the purpose of this study, two loci are considered: the institutional and the conversational locus. In order to correlate the impact of learning contexts and loci of learning with the acquisition of sociopragmatic variation patterns, a two-fold study is conducted. The first stage is the completion of a highly detailed language contact profile (LCP) questionnaire. This provides extensive biographical information regarding language learning history and is a powerful tool for illuminating the intensity of contact with the L2 that learners experience in both contexts, as well as shedding light on the loci of learning to which learners are exposed in both contexts. Following the completion of the LCP, the informants take part in two role plays which require the enactment of differential identities in a speech event of asking for advice. The enactment of identities then undergoes a strategic and linguistic analysis to investigate whether, and how, differences in the enactment of compound identities are indexed in language. Results indicate that learning context has a considerable impact not only on how identity is indexed in language, but also on the nature of the identities enacted. Informants with very low levels of cross-contextuality index identity through strategic means, i.e. levels of directness and conventionality; greater degrees of cross-contextuality give rise to the indexing of differential identities linguistically, by means of speaker/hearer orientation and (non-)solidary moves. As for the nature of the identity enacted, it seems that more time spent in intense contact with native speakers across a range of loci of learning allows learners to enact their core identity, whereas low levels of contact with over-exposure to the institutional locus of learning foster the enactment of generic identities.

Relevance:

30.00%

Publisher:

Abstract:

Gemstone Team ILL (Interactive Language Learning)