924 results for Learning objects repositories
Abstract:
To date, automatic recognition of semantic information such as salient objects and mid-level concepts from images remains a challenging task. Since real-world objects tend to exist in a context within their environment, computer vision researchers have increasingly incorporated contextual information to improve object recognition. In this paper, we present a method to build a visual contextual ontology from salient object descriptions for image annotation. The ontology includes not only partOf/kindOf relations, but also spatial and co-occurrence relations. A two-step image annotation algorithm is also proposed based on ontology relations and probabilistic inference. Unlike most existing work, we specifically explore how to combine ontology representation, contextual knowledge and probabilistic inference. Experiments show that image annotation results are improved on the LabelMe dataset.
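The two-step combination of ontology relations and probabilistic inference described in this abstract can be sketched roughly as follows; the relations, probabilities and threshold below are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch: step 1 expands salient-object labels along
# kindOf relations in the ontology; step 2 infers co-occurring
# concepts whose conditional probability exceeds a threshold.
# All relation data here is illustrative, not from the paper.

co_occurrence = {
    ("car", "road"): 0.85,
    ("car", "building"): 0.60,
    ("tree", "grass"): 0.70,
}
kind_of = {"car": "vehicle", "tree": "plant"}

def annotate(salient_objects, threshold=0.65):
    labels = set(salient_objects)
    # Step 1: expand along kindOf relations in the ontology.
    for obj in salient_objects:
        if obj in kind_of:
            labels.add(kind_of[obj])
    # Step 2: probabilistic inference over co-occurrence relations.
    for (a, b), p in co_occurrence.items():
        if a in labels and p >= threshold:
            labels.add(b)
    return labels

print(sorted(annotate(["car", "tree"])))  # expanded annotation set
```

A real system would learn these conditional probabilities from co-occurrence counts in the training corpus rather than hard-coding them.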
Abstract:
Focusing on the role-playing simulation game SCAPE (Sustainability, Community and Planning Education), this paper proposes that potential disparities between game design practice and the meaning-making process of the players need to be addressed in a wider ecology of learning. The cultural setting of the gameplay experience, and also the different levels of engagement of the players, can be seen to pose vital questions, which are in and of themselves objects of inquiry. This paper argues that ethnographic participant-observation, which is a recognized approach in game studies, allows taking the wider ecology of learning into account to explore the various relations that shape the gameplay.
Abstract:
My research investigates why nouns are learned disproportionately more frequently than other kinds of words during early language acquisition (Gentner, 1982; Gleitman et al., 2004). This question must be considered in the context of cognitive development in general. Infants have two major streams of environmental information to make meaningful: perceptual and linguistic. Perceptual information flows in from the senses and is processed into symbolic representations by the primitive language of thought (Fodor, 1975). These symbolic representations are then linked to linguistic input to enable language comprehension and ultimately production. Yet how exactly does perceptual information become conceptualized? Although this question is difficult, there has been progress. One way that children might have an easier job is if they have structures that simplify the data. Thus, if particular sorts of perceptual information could be separated from the mass of input, it would be easier for children to refer to those specific things when learning words (Spelke, 1990; Pylyshyn, 2003). It would be easier still if linguistic input were segmented in predictable ways (Gentner, 1982; Gleitman et al., 2004). Unfortunately, the frequency of patterns in lexical or grammatical input cannot explain the cross-cultural and cross-linguistic tendency to favor nouns over verbs and predicates. There are three examples of this failure: 1) a wide variety of nouns are uttered less frequently than a smaller number of verbs and yet are learnt far more easily (Gentner, 1982); 2) word order and morphological transparency offer no insight when the sentence structures and word inflections of different languages are contrasted (Slobin, 1973); and 3) particular language-teaching behaviors (e.g. pointing at objects and repeating names for them) have little impact on children's tendency to prefer concrete nouns in their first fifty words (Newport et al., 1977).
Although the linguistic solution appears problematic, there is increasing evidence that the early visual system does indeed segment perceptual information in specific ways before the conscious mind begins to intervene (Pylyshyn, 2003). I argue that nouns are easier to learn because their referents connect directly with innate features of the perceptual faculty. This hypothesis stems from work on visual indexes by Zenon Pylyshyn (2001, 2003). Pylyshyn argues that the early visual system (the architecture of the "vision module") segments perceptual data into pre-conceptual proto-objects called FINSTs. FINSTs typically correspond to physical things such as Spelke objects (Spelke, 1990). Hence, before conceptualization, visual objects are picked out by the perceptual system demonstratively, like a pointing finger indicating ‘this’ or ‘that’. I suggest that this primitive system of demonstration elaborates on Gareth Evans's (1982) theory of nonconceptual content. Nouns are learnt first because their referents attract demonstrative visual indexes. This theory also explains why infants less often name stationary objects such as ‘plate’ or ‘table’, but do name things that attract the focal attention of the early visual system, i.e., small objects that move, such as ‘dog’ or ‘ball’. This view leaves open the question of how blind children learn words for visible objects, and why children learn category nouns (e.g. 'dog') rather than proper nouns (e.g. 'Fido') or higher taxonomic distinctions (e.g. 'animal').
Abstract:
In this paper, we present the application of a non-linear dimensionality reduction technique to the learning and probabilistic classification of hyperspectral images. Hyperspectral imaging spectroscopy is an emerging technique for geological investigations from airborne or orbital sensors. It gives much greater information content per pixel than a normal colour image, which should greatly help with the autonomous identification of natural and man-made objects in unfamiliar terrains for robotic vehicles. However, the large information content of such data makes interpretation of hyperspectral images time-consuming and user-intensive. We propose the use of Isomap, a non-linear manifold learning technique, combined with Expectation Maximisation in graphical probabilistic models for learning and classification. Isomap is used to find the underlying manifold of the training data. This low-dimensional representation of the hyperspectral data facilitates the learning of a Gaussian Mixture Model representation, whose joint probability distributions can be calculated offline. The learnt model is then applied to the hyperspectral image at runtime, and data classification can be performed.
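A minimal sketch of the described pipeline, using scikit-learn implementations of Isomap and an EM-fitted Gaussian Mixture Model as stand-ins; the synthetic "hyperspectral" spectra, dimensions and parameters below are assumptions for illustration only.

```python
# Sketch: embed high-dimensional spectra with Isomap, fit a GMM
# offline via EM, then classify pixels by the learnt mixture.
import numpy as np
from sklearn.manifold import Isomap
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Fake spectra: 200 pixels x 50 bands, two synthetic "materials".
a = rng.normal(0.0, 0.1, (100, 50)) + np.linspace(0, 1, 50)
b = rng.normal(0.0, 0.1, (100, 50)) + np.linspace(1, 0, 50)
X = np.vstack([a, b])

# Offline: learn the low-dimensional manifold, then fit a GMM via EM.
embedding = Isomap(n_neighbors=10, n_components=3).fit_transform(X)
gmm = GaussianMixture(n_components=2, random_state=0).fit(embedding)

# Runtime: classify embedded pixels by maximum posterior probability.
labels = gmm.predict(embedding)
print(labels.shape)  # one class label per pixel
```

In the paper's setting, new imagery would be projected into the learnt manifold at runtime before querying the precomputed mixture densities.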
Abstract:
This paper uses cultural-historical activity theory to examine the nature of learning assistance provided in university settings to help learners cope with academic study. Alternative rival constructions of learning assistance, either as part of an overall university activity system or as an activity system overlapping with other separate activity systems (e.g. library and faculty) within the university, are outlined. Data from a research study at one university are used to describe the object, tools and tensions in learning assistance; together with the history of the service, these are used to suggest that, in the current situation, the objects of the Learning Assistance Unit, the faculty and the library have separated from the object of a university.
Abstract:
In early 2011, the Australian Learning and Teaching Council Ltd (ALTC) commissioned a series of Good Practice Reports on completed ALTC projects and fellowships. This report will:
• provide a summative evaluation of the good practices and key outcomes for teaching and learning from completed ALTC projects and fellowships relating to blended learning
• include a literature review of the good practices and key outcomes for teaching and learning from national and international research
• identify areas in which further work or development is appropriate.
The literature abounds with definitions; the various definitions incorporate different perspectives, but there is no single, collectively accepted definition. Blended learning courses in higher education can be placed somewhere on a continuum between fully online and fully face-to-face courses. Consideration must therefore be given to the different definitions for blended learning presented in the literature and by users and stakeholders. The application of the term in these various projects and fellowships depends on the particular focus of each team and the conditions and situations under investigation. One of the key challenges for projects wishing to develop good practice in blended learning is the lack of a universally accepted definition. The findings from these projects and fellowships reveal the potential of blended learning programs to improve both student outcomes and levels of satisfaction. It is clear that this environment can engage students more effectively and allow greater participation than traditional models. Just as there are many definitions, there are many models and frameworks that can be successfully applied to the design and implementation of such courses. Each academic discipline has different learning objectives, so there cannot be only one correct approach.
This is illustrated by the diversity of definitions and applications in the ALTC-funded projects and fellowships. A review of the literature found no universally accepted guidelines for good practice in higher education. To inform this evaluation and literature review, the Seven Principles for Good Practice in Undergraduate Education, as outlined by Chickering and Gamson (1987), were adopted:
1. encourages contact between students and faculty
2. develops reciprocity and cooperation among students
3. uses active learning techniques
4. gives prompt feedback
5. emphasises time on task
6. communicates high expectations
7. respects diverse talents and ways of learning.
These blended learning projects have produced a wide range of resources that can be used in many and varied settings, including books, DVDs, online repositories, pedagogical frameworks and teaching modules. In addition, the published research data and literature reviews contain valuable information that informs good practice and can assist in the development of courses that enrich and improve teaching and learning.
Abstract:
The works by graduate teachers in this volume represent the intentional design of learning experiences using technology for Early Childhood settings. The teachers were given a two-part design task: a sequence of lessons organised around a themed project, and a collection of resources to support those activities. Each project had to be constructive in nature, with the children building objects and representations that were meaningful to them. The excellent works presented here offer a range of approaches suitable for a variety of contexts. Because they are carefully reasoned, these projects offer flexibility in implementation along with confidence that they would be effective.
Abstract:
Discussion of Attention-Deficit/Hyperactivity Disorder (ADHD) in the media, and thus in much popular discourse, typically revolves around the possible causes of disruptive behaviour and the “behaviourally disordered” child. The usual suspects (too much television and video games, food additives, bad parenting, lack of discipline and single mothers) feature prominently as potential contributors to the spiralling rate of ADHD diagnosis in Western industrialised nations, especially the United States and Australia. Conspicuously absent from the field of investigation, however, is the scene of schooling and the influence that the discourses and practices of schooling might bring to bear upon the constitution of “disorderly behaviour” and the subsequent recognition of particular children as a particular kind of “disorderly”. This paper reviews a sample of the literature surrounding ADHD in order to question the function of this absence and, ultimately, to argue for an interrogation of the school as a site for the production of disorderly objects.
Abstract:
Laboratories and hands-on technical learning have always been a part of Engineering- and Science-based university courses. They provide the interface where theory meets practice, and students may develop professional skills through interacting with real objects in an environment that models appropriate standards and systems. Laboratories in many countries are facing challenges to their sustainable operation and effectiveness. In some countries, such as Australia, significantly reduced funding and staff reductions are eroding a once strong base of technical infrastructure. Other countries, such as Thailand, are seeking to develop their laboratory infrastructure and need to develop staff skills, management and staffing structures in technical areas. In this paper the authors address the need for technical development with reference to work undertaken in Thailand and Australia. The authors identify the roads their respective university sectors are on and point out problems and opportunities. It is hoped that the crossroads where we meet will result in better directions for both.
Abstract:
This paper presents an investigation into event detection in crowded scenes, where the event of interest co-occurs with other activities and only binary labels at the clip level are available. The proposed approach incorporates a fast feature descriptor from the MPEG domain, and a novel multiple instance learning (MIL) algorithm using sparse approximation and random sensing. MPEG motion vectors are used to build particle trajectories that represent the motion of objects in uniform video clips, and the MPEG DCT coefficients are used to compute a foreground map to remove background particles. Trajectories are transformed into the Fourier domain, and the Fourier representations are quantized into visual words using the K-Means algorithm. The proposed MIL algorithm models the scene as a linear combination of independent events, where each event is a distribution of visual words. Experimental results show that the proposed approaches achieve promising results for event detection compared to the state-of-the-art.
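The trajectory-to-visual-word step this abstract describes (Fourier representations of particle trajectories quantized with K-Means) can be illustrated with toy data; the trajectory values, trajectory length and vocabulary size below are assumptions, not the paper's settings.

```python
# Sketch: map short particle trajectories to the Fourier domain,
# quantize them into visual words with K-Means, and summarise a
# clip as a histogram over those words. Toy data only.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# 60 toy trajectories, each a length-16 signal along one axis.
trajectories = rng.normal(size=(60, 16))

# Fourier representation: magnitude of the DFT of each trajectory.
fourier = np.abs(np.fft.rfft(trajectories, axis=1))

# Quantize into a small vocabulary of visual words.
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(fourier)
words = kmeans.predict(fourier)

# A clip is then represented as a histogram over visual words.
histogram = np.bincount(words, minlength=8)
print(histogram.sum())  # equals the number of trajectories
```

The MIL stage would then model each clip's histogram as a mixture of independent event distributions over this vocabulary.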
Abstract:
The huge amount of CCTV footage available makes manual processing of these videos by human operators very burdensome. This has made automated processing of video footage through computer vision technologies necessary. During the past several years, there has been a large effort to detect abnormal activities through computer vision techniques. Typically, the problem is formulated as a novelty detection task, where the system is trained on normal data and is required to detect events that do not fit the learned ‘normal’ model. There is no precise and exact definition of an abnormal activity; it depends on the context of the scene. Hence, different feature sets are required to detect different kinds of abnormal activities. In this work we evaluate the performance of different state-of-the-art features for detecting the presence of abnormal objects in the scene. These include optical flow vectors to detect motion-related anomalies, and textures of optical flow and image textures to detect the presence of abnormal objects. These extracted features, in different combinations, are modeled using different state-of-the-art models, such as the Gaussian mixture model (GMM) and the semi-2D hidden Markov model (HMM), to analyse their performance. Further, we apply perspective normalization to the extracted features to compensate for perspective distortion due to the distance between the camera and the objects under consideration. The proposed approach is evaluated using the publicly available UCSD datasets, and we demonstrate improved performance compared to other state-of-the-art methods.
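The novelty-detection formulation described here (train on normal data, flag low-likelihood events) can be sketched with a GMM; the random stand-in features, component count and percentile threshold are illustrative assumptions rather than the paper's configuration.

```python
# Sketch: fit a GMM to features from "normal" clips, then flag
# test features whose log-likelihood falls below a threshold
# derived from the training likelihoods. Random stand-ins replace
# real optical-flow/texture descriptors.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
normal_train = rng.normal(0.0, 1.0, (500, 8))

gmm = GaussianMixture(n_components=3, random_state=0).fit(normal_train)

# Threshold from training likelihoods (here, the 1st percentile).
threshold = np.percentile(gmm.score_samples(normal_train), 1)

test = np.vstack([rng.normal(0.0, 1.0, (5, 8)),   # normal-like
                  rng.normal(8.0, 1.0, (5, 8))])  # abnormal-like
anomalous = gmm.score_samples(test) < threshold
print(anomalous)
```

The same train-then-threshold pattern applies whichever feature set or density model (GMM or HMM) is plugged in.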
Abstract:
Process models are used to convey semantics about business operations that are to be supported by an information system. A wide variety of professionals, including people who have little modeling or domain expertise, are expected to use such models. We identify important user characteristics that influence the comprehension of process models. Through a free simulation experiment, we provide evidence that selected cognitive abilities, learning style and learning strategy influence the development of process model comprehension. These insights draw attention to the importance of research that views process model comprehension as an emergent learning process rather than as an attribute of the models as objects. Based on our findings, we identify a set of organizational intervention strategies that can lead to more successful process modeling workshops.
Abstract:
This paper presents a method for the continuous segmentation of dynamic objects using only a vehicle-mounted monocular camera, without any prior knowledge of the objects' appearance. Prior work in online static/dynamic segmentation is extended to identify multiple instances of dynamic objects by introducing an unsupervised motion clustering step. These clusters are then used to update a multi-class classifier within a self-supervised framework. In contrast to many tracking-by-detection methods, our system is able to detect dynamic objects without any prior knowledge of their visual appearance, shape or location. Furthermore, the classifier is used to propagate labels of the same object across previous frames, which facilitates the continuous tracking of individual objects based on motion. The proposed system is evaluated using recall and false alarm metrics, in addition to a new multi-instance labelled dataset, to evaluate the performance of segmenting multiple instances of objects.
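The unsupervised motion-clustering step can be illustrated as follows; the abstract does not name a specific clustering algorithm, so DBSCAN is an assumed stand-in, and the toy motion vectors and parameters are invented for the example.

```python
# Sketch: group point tracks with similar motion vectors so that
# each cluster can seed one dynamic-object instance, with static
# background points forming their own (near-zero-motion) group.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(3)
# Two toy objects moving in different directions, plus static points.
obj1 = rng.normal([5.0, 0.0], 0.2, (30, 2))    # moving right
obj2 = rng.normal([0.0, -4.0], 0.2, (30, 2))   # moving down
static = rng.normal([0.0, 0.0], 0.1, (40, 2))  # background
motion = np.vstack([obj1, obj2, static])

# Density-based clustering needs no preset number of objects.
labels = DBSCAN(eps=1.0, min_samples=5).fit_predict(motion)
print(len(set(labels)))  # distinct motion groups found
```

In the full system, each recovered cluster would supply self-supervised training labels for the multi-class classifier that tracks the instances over time.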