790 results for Learning techniques
Abstract:
We consider the problem of prediction with expert advice in the setting where a forecaster is presented with several online prediction tasks. Instead of competing against the best expert separately on each task, we assume the tasks are related, and thus we expect that a few experts will perform well on the entire set of tasks. That is, our forecaster would like, on each task, to compete against the best expert chosen from a small set of experts. While we describe the "ideal" algorithm and its performance bound, we show that the computation required for this algorithm is as hard as computation of a matrix permanent. We present an efficient algorithm based on mixing priors, and prove a bound that is nearly as good for the sequential task presentation case. We also consider a harder case where the task may change arbitrarily from round to round, and we develop an efficient approximate randomized algorithm based on Markov chain Monte Carlo techniques.
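The single-task baseline in this setting is classically achieved by the exponentially weighted average forecaster, whose weights concentrate on the best-performing expert. A minimal dependency-free sketch (the learning rate `eta` and the loss sequence are illustrative, not taken from the paper):

```python
import math

def hedge(expert_losses, eta=0.5):
    """Exponentially weighted average forecaster: keep one weight per
    expert and multiplicatively penalise each expert by its loss."""
    n = len(expert_losses[0])              # number of experts
    w = [1.0] * n
    for losses in expert_losses:           # one loss vector per round
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, losses)]
        total = sum(w)
        w = [wi / total for wi in w]       # renormalise to a distribution
    return w

# Expert 0 incurs no loss, expert 1 always loses: after 10 rounds the
# forecaster's weight concentrates on expert 0.
rounds = [[0.0, 1.0]] * 10
weights = hedge(rounds)
```

The paper's contribution lies in coupling such per-task forecasters through a shared prior over a small expert subset, which is where the permanent-hardness and mixing-prior results enter.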
Abstract:
Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive definite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space -- classical model selection problems in machine learning. In this paper we show how the kernel matrix can be learned from data via semi-definite programming (SDP) techniques. When applied to a kernel matrix associated with both training and test data this gives a powerful transductive algorithm -- using the labelled part of the data one can learn an embedding also for the unlabelled part. The similarity between test points is inferred from training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Furthermore, this approach leads directly to a convex method to learn the 2-norm soft margin parameter in support vector machines, solving another important open problem. Finally, the novel approach presented in the paper is supported by positive empirical results.
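The kernel matrix the abstract refers to can be made concrete in a few lines; a dependency-free sketch with a linear kernel (the data points and the probe vector are illustrative):

```python
def gram_matrix(X, kernel):
    """Kernel matrix K[i][j] = k(x_i, x_j): the implicit embedding is
    specified entirely by these pairwise inner products."""
    n = len(X)
    return [[kernel(X[i], X[j]) for j in range(n)] for i in range(n)]

def linear_kernel(x, y):
    return sum(a * b for a, b in zip(x, y))

X = [[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]]
K = gram_matrix(X, linear_kernel)

# Any valid kernel matrix is symmetric and positive semi-definite:
# the quadratic form v^T K v is >= 0 for every v, e.g. v = (1, -2, 1).
v = [1.0, -2.0, 1.0]
quad = sum(v[i] * K[i][j] * v[j] for i in range(3) for j in range(3))
```

The SDP formulation in the paper optimises over exactly this object: the positive semi-definiteness checked above is the constraint that makes the search space convex.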
Abstract:
Assessment for Learning is a pedagogical practice with anticipated gains of increased student motivation, mastery and autonomy as learners develop their capacity to monitor and plan their own learning progress. Assessment for Learning (AfL) differs from Assessment of Learning in its timing, occurring within the regular flow of learning rather than at its end point; in its purpose, improving student learning rather than summative grading; and in the ownership of the learning, where the student voice is heard in judging quality. Since Black and Wiliam (1998) highlighted the achievement gains that AfL practices seem to bring to all learners in classrooms, AfL has become part of current educational policy discourse in Australia, yet teacher adoption of the practices is not a straightforward implementation of techniques within an existing classroom repertoire. As can be seen from the following meta-analysis, recent research highlights a more complex interrelationship between teacher and student beliefs about learning and assessment, and the social and cultural interactions in and contexts of the classroom. More research is needed from a sociocultural perspective that allows meaning to emerge from practice. Before another policy push, we need to understand better the many factors within the assessment relationship. We need to hear from teachers and students through long-term AfL case studies, both to inform AfL theory and to shed light on the complexities of pedagogical change for enhancing learner autonomy.
Abstract:
Learning a digital tool is often a hidden process. We tend to learn new tools in a bewildering range of ways. Formal, informal, structured, random, conscious, unconscious, individual, and group strategies may all play a part, but are often lost to us in the complex and demanding processes of learning. Yet when we reflect carefully on the experience, some patterns and surprising techniques emerge. This monograph presents the thinking of four students in MDN642, Digital Pedagogies, who deliberately reflected on the mental processes at work as they learnt a digital technology of their choice.
Abstract:
This paper presents a method for automatic terrain classification, using a cheap monocular camera in conjunction with a robot’s stall sensor. A first step is to have the robot generate a training set of labelled images. Several techniques are then evaluated for preprocessing the images, reducing their dimensionality, and building a classifier. Finally, the classifier is implemented and used online by an indoor robot. Results are presented, demonstrating an increased level of autonomy.
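The abstract does not name the classifier that was evaluated; as a hypothetical stand-in, a nearest-centroid classifier over reduced image features shows the self-supervised shape of the pipeline (the feature vectors and the "floor"/"obstacle" labels, as a stall sensor might produce, are invented for illustration):

```python
def nearest_centroid_fit(features, labels):
    """Average the feature vectors per class, where the class labels
    come from the robot's own stall sensor rather than a human."""
    by_label = {}
    for f, y in zip(features, labels):
        by_label.setdefault(y, []).append(f)
    return {y: [sum(col) / len(fs) for col in zip(*fs)]
            for y, fs in by_label.items()}

def nearest_centroid_predict(centroids, f):
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda y: dist2(centroids[y], f))

# Toy 2-D colour features: traversable floor is dark, obstacles bright.
feats = [[0.1, 0.2], [0.15, 0.25], [0.8, 0.9], [0.85, 0.95]]
labels = ["floor", "floor", "obstacle", "obstacle"]
cents = nearest_centroid_fit(feats, labels)
```

In the paper's setting, the dimensionality-reduction step would run before this, and prediction would happen online on the robot.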
Abstract:
In the university education arena, it is becoming apparent that traditional methods of conducting classes are not the most effective ways to achieve desired learning outcomes. The traditional method involves the instructor verbalizing information for passive, note-taking students who are assumed to be empty receptacles waiting to be filled with knowledge. This method is limited in its effectiveness, as the flow of information is usually only in one direction. Furthermore, "it has been demonstrated that students in many cases can recite and apply formulas in numerical problems, but the actual meaning and understanding of the concept behind the formula is not acquired" (Crouch & Mazur). It is apparent that memorization is the main technique present in this approach. A more effective method of teaching involves increasing the students' level of activity during, and hence their involvement in, the learning process. This technique stimulates self-learning and assists in keeping students' levels of concentration more uniform. In this work, I am therefore interested in studying the influence of a particular teaching and learning activity (TLA) on students' learning outcomes. I want to foster high-level understanding and critical thinking skills using active learning (Silberman, 1996) techniques. The TLA in question aims to promote self-study by students and to expose them to a situation where their learning outcomes can be tested. The motivation behind this activity is based on studies which suggest that some sensory modalities are more effective than others. Using various instruments for data collection, and by means of a thorough analysis, I present evidence of the effectiveness of this action research project, which aims to improve my own teaching practices with the ultimate goal of enhancing students' learning.
Abstract:
This project has blended two streams of enquiry: temporary and transportable construction technology, and flexible blended-learning environments. It seeks to develop prototypes for a series of environments suited to the activities of learning (future-proofed schools) as practised in the twenty-first century. The research utilises techniques of historic survey, case study, first-hand observation, and architectural design (as research). The design comprises three major components. The determinate landscape: an in-situ concrete 'plate' that is permanent. The indeterminate landscape: a kit of pre-fabricated 2-D panels assembled in a unique manner at each site to suit the client and context, manufactured to the principles of design-for-disassembly. The stations: pre-fabricated packages of highly-serviced space connected through the determinate landscape. This project was submitted to the 'Future Proofing Schools' competition (professional category) in October 2011. The competition was part of a research project supported under the Australian Research Council's Linkage Grant funding scheme (project LP0991146).
Abstract:
Detailed representations of complex flow datasets are often difficult to generate using traditional vector visualisation techniques such as arrow plots and streamlines. This is particularly true when the flow regime changes in time. Texture-based techniques, which advect dense textures through the flow, are a novel way of visualising such datasets. We review two popular texture-based techniques and their application to flow datasets sourced from active research projects. The techniques investigated were line integral convolution (LIC) [1] and image-based flow visualisation (IBFV) [18]. We evaluated these and report on their effectiveness from a visualisation perspective. We also report on their ease of implementation and computational overheads.
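The core idea of LIC is compact enough to sketch: each output pixel averages an input noise texture along the streamline passing through it, so the result is smooth along the flow direction and noisy across it. A minimal version for a uniform flow field, with nearest-neighbour sampling (the grid size, kernel length, and flow vector are illustrative):

```python
import random

def lic_uniform(noise, vx, vy, length=3):
    """Line integral convolution for a uniform flow (vx, vy): each
    output pixel averages the noise texture along its streamline."""
    h, w = len(noise), len(noise[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, cnt = 0.0, 0
            for s in range(-length, length + 1):
                xi = int(round(x + s * vx))    # step along the streamline
                yi = int(round(y + s * vy))
                if 0 <= xi < w and 0 <= yi < h:
                    acc += noise[yi][xi]
                    cnt += 1
            out[y][x] = acc / cnt
    return out

random.seed(0)
noise = [[random.random() for _ in range(16)] for _ in range(16)]
tex = lic_uniform(noise, 1.0, 0.0)   # horizontal flow -> horizontal streaks
```

A production LIC traces curved streamlines through a non-uniform field with proper integration, which is where most of the computational overhead discussed in the paper arises.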
Abstract:
This paper describes an approach to investigating the adoption of Web 2.0 in the classroom using a mixed methods study. By combining qualitative and quantitative data collection and analysis techniques, we attempt to synergize the results and provide a more valid understanding of Web 2.0 adoption for learning by both teachers and students. This approach is expected to yield a more holistic view of the adoption issues associated with the e-learning 2.0 concept in current higher education than the single-method studies done previously. This paper also presents some early findings on e-learning 2.0 adoption using this research method.
Abstract:
Organisations are engaging in e-learning as a mechanism for delivering flexible learning to meet the needs of individuals and organisations. In light of the increasing use of and organisational investment in e-learning, the need for methods to evaluate the success of its design and implementation seems more important than ever. To date, developing a standard for the evaluation of e-learning appears to have eluded both academics and practitioners. The currently accepted evaluation methods for e-learning are traditional learning and development models, such as Kirkpatrick's model (1976). Given the technical nature of e-learning, it is important to broaden the scope and consider other evaluation models or techniques, such as the DeLone and McLean Information Systems (IS) Success Model, that may be applicable to the e-learning domain. Research into the use of e-learning courses has largely avoided considering the applicability of information systems research. Given this observation, it is reasonable to conclude that e-learning implementation decisions and practice could be overlooking useful or additional viewpoints. This research investigated how existing evaluation models apply in the context of organisational e-learning, and resulted in an Organisational E-learning Success Framework, which identifies the critical elements for success in an e-learning environment. In particular, this thesis highlights the critical importance of three e-learning system creation elements: system quality, information quality, and support quality. These elements were explored in depth and the nature of each element is described in detail. In addition, two further elements were identified as integral to the success of an e-learning system: learner preferences and change management. Overall, this research has demonstrated the need for a holistic approach to e-learning evaluation.
Furthermore, it has shown that the application of both traditional training evaluation approaches and the D&M IS Success Model are appropriate to the organisational e-learning context, and when combined can provide this holistic approach. Practically, this thesis has reported the need for organisations to consider evaluation at all stages of e-learning from design through to implementation.
Abstract:
This study aims to redefine spaces of learning as places of learning through the direct engagement of local communities, as a way to examine and learn from real-world issues in the city. This paper exemplifies Smart City Learning, where the key goal is to promote the generation and exchange of urban design ideas for the future development of South Bank in Brisbane, Australia, informing the creation of new design policies responding to the needs of local citizens. Specific to this project was the implementation of urban informatics techniques and approaches to promote innovative engagement strategies. Architecture and Urban Design students were encouraged to review and appropriate the real-time, ubiquitous technology, social media, and mobile devices used by urban residents to augment and mediate the physical and digital layers of urban infrastructures. Our experience in this study found that urban informatics provide an innovative opportunity to enrich students' places of learning within the city.
Abstract:
This thesis investigates the possibility of using an adaptive tutoring system for beginning programming students. The work involved designing, developing and evaluating such a system and showing that it was effective in increasing the students' test scores. Artificial Intelligence techniques were used to analyse PHP programs written by students and to provide feedback based on the specific errors they made. Methods were also included to present students with the next best exercise to suit their particular level of knowledge.
Abstract:
Over the last decade, the majority of existing search techniques have been either keyword-based or category-based, resulting in unsatisfactory effectiveness. Meanwhile, studies have illustrated that more than 80% of users prefer personalized search results. As a result, many studies have devoted a great deal of effort (referred to as collaborative filtering) to investigating personalized notions for enhancing retrieval performance. One of the fundamental yet most challenging steps is to capture precise user information needs. Most Web users are inexperienced or lack the capability to express their needs properly, whereas the existing retrieval systems are highly sensitive to vocabulary. Researchers have increasingly proposed the utilization of ontology-based techniques to improve current mining approaches. The related techniques are not only able to refine search intentions within specific generic domains, but also to access new knowledge by tracking semantic relations. In recent years, some researchers have attempted to build ontological user profiles according to discovered user background knowledge. The knowledge is drawn from both global and local analyses, which aim to produce tailored ontologies from a group of concepts. However, a key problem that has not been addressed is how to accurately match diverse local information to universal global knowledge. This research conducts a theoretical study on the use of personalized ontologies to enhance text mining performance. The objective is to understand user information needs through a "bag of concepts" rather than a "bag of words". The concepts are gathered from a general world knowledge base, the Library of Congress Subject Headings. To return desirable search results, a novel ontology-based mining approach is introduced to discover accurate search intentions and learn personalized ontologies as user profiles.
The approach can not only pinpoint users' individual intentions in a rough hierarchical structure, but can also interpret their needs through a set of acknowledged concepts. Along with the global and local analyses, a solid concept-matching approach is carried out to address the mismatch between local information and world knowledge. Relevance features produced by the Relevance Feature Discovery model are used as representatives of local information. These features have been proven the best alternative to user queries for avoiding ambiguity, and consistently outperform the features extracted by other filtering models. The two proposed approaches are both evaluated in a scientific evaluation with the standard Reuters Corpus Volume 1 testing set. A comprehensive comparison is made with a number of state-of-the-art baseline models, including TF-IDF, Rocchio, Okapi BM25, the deployed Pattern Taxonomy Model, and an ontology-based model. The gathered results indicate that top precision can be improved remarkably with the proposed ontology mining approach, and that the matching approach is successful, achieving significant improvements in most information filtering measurements. This research contributes to the fields of ontological filtering, user profiling, and knowledge representation. The related outputs are critical when systems are expected to return proper mining results and provide personalized services. The scientific findings have the potential to facilitate the design of advanced preference mining models, with impact on people's daily lives.
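Of the baseline models listed, TF-IDF is the simplest to make concrete; a dependency-free sketch of the standard term-frequency times inverse-document-frequency weighting (the toy documents are illustrative, not from the corpus used in the thesis):

```python
import math
from collections import Counter

def tfidf(docs):
    """Weight each term by its frequency in the document, discounted by
    how many documents it appears in (log N / df)."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))            # document frequency per term
    idf = {t: math.log(n / df[t]) for t in df}
    out = []
    for doc in docs:
        tf = Counter(doc)
        out.append({t: (tf[t] / len(doc)) * idf[t] for t in tf})
    return out

docs = [["ontology", "mining", "mining"],
        ["ontology", "profile"],
        ["profile", "filtering"]]
weights = tfidf(docs)
```

A term frequent in one document but rare across the collection ("mining" above) outweighs a common term ("ontology"), which is exactly the word-level signal the "bag-of-concepts" approach aims to improve on.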
Abstract:
The huge amount of CCTV footage available makes it very burdensome to process these videos manually through human operators, making automated processing of video footage through computer vision technologies necessary. During the past several years, there has been a large effort to detect abnormal activities through computer vision techniques. Typically, the problem is formulated as a novelty detection task, where the system is trained on normal data and is required to detect events which do not fit the learned 'normal' model. There is no precise and exact definition of an abnormal activity; it is dependent on the context of the scene. Hence there is a requirement for different feature sets to detect different kinds of abnormal activities. In this work we evaluate the performance of different state-of-the-art features for detecting the presence of abnormal objects in the scene. These include optical flow vectors to detect motion-related anomalies, and textures of optical flow and image textures to detect the presence of abnormal objects. These extracted features, in different combinations, are modeled using state-of-the-art models such as the Gaussian mixture model (GMM) and the semi-2D hidden Markov model (HMM) to analyse their performance. Further, we apply perspective normalization to the extracted features to compensate for perspective distortion due to the distance between the camera and the objects of consideration. The proposed approach is evaluated using the publicly available UCSD datasets, and we demonstrate improved performance compared to other state-of-the-art methods.
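The novelty-detection formulation described above can be sketched with a single Gaussian as a one-component stand-in for the GMM: fit the model to normal training features, then flag test samples whose log-likelihood falls below the worst training score (the optical-flow magnitudes and the threshold rule are illustrative assumptions, not the paper's):

```python
import math

def fit_gaussian(samples):
    """Fit a diagonal Gaussian to features extracted from 'normal' video."""
    n, d = len(samples), len(samples[0])
    mean = [sum(s[i] for s in samples) / n for i in range(d)]
    var = [sum((s[i] - mean[i]) ** 2 for s in samples) / n + 1e-6
           for i in range(d)]
    return mean, var

def log_likelihood(x, mean, var):
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, mean, var))

# 'Normal' optical-flow magnitudes cluster near walking speed.
normal = [[1.0], [0.9], [1.1], [1.05], [0.95]]
mean, var = fit_gaussian(normal)
threshold = min(log_likelihood(s, mean, var) for s in normal)

def is_abnormal(x):
    """A sample far from the normal model (e.g. a fast vehicle in a
    pedestrian walkway) scores well below the threshold."""
    return log_likelihood(x, mean, var) < threshold
```

A full GMM adds multiple weighted components, and the paper's semi-2D HMM additionally models spatial structure, but the train-on-normal, threshold-the-likelihood pattern is the same.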
Abstract:
It is well recognized that many scientifically interesting sites on Mars are located in rough terrains. Therefore, to enable safe autonomous operation of a planetary rover during exploration, the ability to accurately estimate terrain traversability is critical. In particular, this estimate needs to account for terrain deformation, which significantly affects the vehicle attitude and configuration. This paper presents an approach to estimate vehicle configuration, as a measure of traversability, in deformable terrain by learning the correlation between exteroceptive and proprioceptive information in experiments. We first perform traversability estimation with rigid terrain assumptions, then correlate the output with experienced vehicle configuration and terrain deformation using a multi-task Gaussian Process (GP) framework. Experimental validation of the proposed approach was performed on a prototype planetary rover and the vehicle attitude and configuration estimate was compared with state-of-the-art techniques. We demonstrate the ability of the approach to accurately estimate traversability with uncertainty in deformable terrain.
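The multi-task GP machinery is beyond a short sketch, but single-output GP regression, its building block, fits in a few lines; here the 2x2 training system is inverted in closed form to keep the sketch dependency-free (the inputs, outputs, and RBF length-scale are illustrative, not from the experiments):

```python
import math

def rbf(a, b, ell=1.0):
    """Squared-exponential kernel: correlation decays with distance."""
    return math.exp(-0.5 * (a - b) ** 2 / ell ** 2)

def gp_predict(xs, ys, xq, noise=1e-6):
    """GP posterior mean and variance at xq for a 2-point training set."""
    k11 = rbf(xs[0], xs[0]) + noise
    k12 = rbf(xs[0], xs[1])
    k22 = rbf(xs[1], xs[1]) + noise
    det = k11 * k22 - k12 * k12
    inv = [[k22 / det, -k12 / det], [-k12 / det, k11 / det]]
    kq = [rbf(xq, xs[0]), rbf(xq, xs[1])]
    alpha = [sum(inv[i][j] * ys[j] for j in range(2)) for i in range(2)]
    mean = sum(kq[i] * alpha[i] for i in range(2))
    var = rbf(xq, xq) - sum(kq[i] * sum(inv[i][j] * kq[j] for j in range(2))
                            for i in range(2))
    return mean, var

# Hypothetical example: pitch observed at two terrain inputs, queried
# in between; the variance quantifies the traversability uncertainty.
mean, var = gp_predict([0.0, 1.0], [0.0, 0.5], 0.5)
```

The multi-task extension in the paper couples several such outputs (e.g. attitude and configuration channels) through a shared covariance, so experience in one channel informs predictions in the others.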