817 results for flexible learning space
Abstract:
This thesis introduces a flexible visual data exploration framework which combines advanced projection algorithms from the machine learning domain with visual representation techniques developed in the information visualisation domain to help a user effectively explore and understand large multi-dimensional datasets. The advantage of such a framework over other techniques currently available to domain experts is that the user is directly involved in the data mining process and advanced machine learning algorithms are employed for better projection. A hierarchical visualisation model guided by a domain expert allows the expert to obtain an informed segmentation of the input space. Two other components of this thesis exploit properties of these principled probabilistic projection algorithms to develop a guided mixture of local experts algorithm which provides robust prediction, and a model to estimate feature saliency simultaneously with the training of a projection algorithm. Local models are useful since a single global model cannot capture the full variability of a heterogeneous data space such as the chemical space. Probabilistic hierarchical visualisation techniques provide an effective soft segmentation of an input space by a visualisation hierarchy whose leaf nodes represent different regions of the input space. We use this soft segmentation to develop a guided mixture of local experts (GME) algorithm which is appropriate for the heterogeneous datasets found in chemoinformatics problems. Moreover, in this approach the domain experts are more involved in the model development process, which is suitable for an intuition- and domain-knowledge-driven task such as drug discovery. We also derive a generative topographic mapping (GTM) based data visualisation approach which estimates feature saliency simultaneously with the training of a visualisation model.
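A minimal sketch of the guided-mixture idea, assuming the thesis's soft segmentation can be stood in for by a Gaussian mixture (the actual work uses a hierarchical GTM); the Ridge experts, function names and synthetic data below are illustrative choices, not the thesis's implementation:

```python
# Sketch: responsibilities from a probabilistic soft segmentation gate local experts.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import Ridge

def fit_guided_mixture(X, y, n_experts=4):
    gate = GaussianMixture(n_components=n_experts, random_state=0).fit(X)
    resp = gate.predict_proba(X)                 # soft segmentation of the input space
    experts = []
    for k in range(n_experts):
        w = resp[:, k] + 1e-8                    # responsibility-weighted local fit
        experts.append(Ridge(alpha=1.0).fit(X, y, sample_weight=w))
    return gate, experts

def predict_guided_mixture(gate, experts, X):
    resp = gate.predict_proba(X)
    preds = np.column_stack([e.predict(X) for e in experts])
    return (resp * preds).sum(axis=1)            # responsibility-weighted combination

# Illustrative usage on synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
gate, experts = fit_guided_mixture(X, y)
y_hat = predict_guided_mixture(gate, experts, X)
```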
Abstract:
Text classification is essential for narrowing down the number of documents relevant to a particular topic for further perusal, especially when searching through large biomedical databases. Protein-protein interactions are an example of such a topic, with databases devoted specifically to them. This paper proposes a semi-supervised learning algorithm via local learning with class priors (LL-CP) for biomedical text classification, in which unlabeled data points are classified in a vector space based on their proximity to labeled nodes. The algorithm has been evaluated on a corpus of biomedical documents to identify abstracts containing information about protein-protein interactions, with promising results. Experimental results show that LL-CP outperforms traditional semi-supervised learning algorithms such as SVM, and it also performs better than local learning without incorporating class priors.
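The LL-CP algorithm itself is not reproduced in the abstract; the simplified sketch below only illustrates the general idea of scoring unlabeled points from nearby labeled neighbours and reweighting the votes by assumed class priors. The kNN-based scoring, function name and parameters are illustrative assumptions, not the paper's method:

```python
# Sketch: prior-weighted neighbour voting for unlabeled points.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def classify_with_priors(X_labeled, y_labeled, X_unlabeled, priors, k=5):
    y_labeled = np.asarray(y_labeled)
    nn = NearestNeighbors(n_neighbors=k).fit(X_labeled)
    dist, idx = nn.kneighbors(X_unlabeled)
    classes = np.unique(y_labeled)               # priors must follow this class order
    scores = np.zeros((len(X_unlabeled), len(classes)))
    for c_i, c in enumerate(classes):
        # similarity-weighted votes from labeled neighbours of class c
        votes = (y_labeled[idx] == c) * np.exp(-dist)
        scores[:, c_i] = priors[c_i] * votes.sum(axis=1)
    return classes[scores.argmax(axis=1)]
```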
Abstract:
The World Wide Web provides plentiful content for Web-based learning, but its hyperlink-based architecture connects Web resources for free browsing rather than for effective learning. To support effective learning, an e-learning system should be able to discover and make use of the semantic communities and the emerging semantic relations in a dynamic complex network of learning resources. Previous graph-based community discovery approaches are limited in their ability to discover semantic communities. This paper first suggests the Semantic Link Network (SLN), a loosely coupled semantic data model that can semantically link resources and derive implicit semantic links according to a set of relational reasoning rules. By studying the intrinsic relationship between semantic communities and the semantic space of the SLN, approaches to discovering reasoning-constraint, rule-constraint, and classification-constraint semantic communities are proposed. Further, the approaches, principles, and strategies for discovering emerging semantics in dynamic SLNs are studied. The basic laws of semantic link network motion are revealed for the first time. An e-learning environment incorporating the proposed approaches, principles, and strategies to support effective discovery and learning is suggested.
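As a rough illustration of the kind of relational reasoning an SLN performs, the sketch below derives implicit links from explicit (source, relation, target) triples by applying composition rules until a fixed point is reached; the relation names and the single rule are invented for the example and are not taken from the paper:

```python
# Sketch: fixed-point derivation of implicit semantic links from explicit triples.
links = {("page_a", "sequential", "page_b"), ("page_b", "sequential", "page_c")}

# rule: relation r1 followed by r2 implies relation r3
rules = [("sequential", "sequential", "sequential")]

def derive_implicit_links(links, rules):
    closed = set(links)
    changed = True
    while changed:                               # iterate until no new link is added
        changed = False
        for (a, r1, b) in list(closed):
            for (b2, r2, c) in list(closed):
                if b != b2:
                    continue
                for (p1, p2, p3) in rules:
                    if r1 == p1 and r2 == p2 and (a, p3, c) not in closed:
                        closed.add((a, p3, c))
                        changed = True
    return closed

print(derive_implicit_links(links, rules))       # includes the derived (page_a, sequential, page_c)
```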
Abstract:
The complexity and multifaceted nature of sustainable lifelong learning can be effectively addressed by a broad network of providers working co-operatively and collaboratively. Such a network, involving third-, public- and private-sector bodies, must realise the full potential of accredited flexible and blended formal learning, the contextual opportunities offered by enablers of informal and non-formal learning, and the affordances derived from the various loose and open spaces that can make social learning effective. Such a conception informs the new Lifelong Learning Network Consortium on Sustainable Communities, Urban Regeneration and Environmental Technologies established and led by the Lifelong Learning Centre at Aston University. This paper offers a radical, reflective and political evaluation of its first year in development, arguing that networked learning of this type could prefigure a new model for lifelong learning and sustainable education that renders the city itself a creative medium for transformative learning and sustainability.
Abstract:
The Library of Birmingham (LoB) is a £193 million project designed to provide a new space for lifelong learning and knowledge growth, a physical and virtual portal for Birmingham's citizens to the wider world. In cooperation with a range of private, public, and third-sector bodies, as well as individual citizens, the library, due to open in June 2013, will articulate a continuing process of organic growth and emergence. Key delivery themes focus on: arts and creativity, citizenship and community, enterprise and innovation, learning and skills, and the new media ecology. A landmark design in the heart of the cultural district of the city, the LoB aims to stimulate sustainable economic growth, urban regeneration and social inclusion by offering a wide range of new digital learning services, real and virtual community spaces, and new opportunities for interpreting and exploiting internationally significant collections of documentary archives, photography, moving image, and rare printed books. Additionally, the LoB will offer physical space for creative, cultural, enterprise, and knowledge development. This paper outlines the cultural and educational thinking that informs the project and the challenges experienced in developing innovative service redesign.
Abstract:
Ontology construction for any domain is a labour-intensive and complex process. Any methodology that can reduce the cost and increase efficiency has the potential to make a major impact in the life sciences. This paper describes an experiment in ontology construction from text for the Animal Behaviour domain. Our objective was to see how much could be done in a simple and rapid manner using a corpus of journal papers. We used a sequence of text processing steps, and describe the different choices made to clean the input, to derive a set of terms, and to structure those terms in a hierarchy. We were able, in a very short space of time, to construct a 17,000-term ontology with a high percentage of suitable terms. We describe some of the challenges, especially that of focusing the ontology appropriately given the starting point of a heterogeneous corpus.
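One common structuring step in such term-to-hierarchy pipelines is to nest multi-word terms under the term formed by their head words; the toy sketch below illustrates that idea only and is not the authors' text processing pipeline (the example terms are invented):

```python
# Sketch: nest multi-word terms under the longest suffix that is itself a term.
from collections import defaultdict

terms = ["behaviour", "foraging behaviour", "social behaviour",
         "nocturnal foraging behaviour", "song", "courtship song"]

def build_hierarchy(terms):
    term_set = set(terms)
    children = defaultdict(list)
    for t in terms:
        words = t.split()
        # the parent is the longest proper suffix of the term that is itself a term
        for i in range(1, len(words)):
            parent = " ".join(words[i:])
            if parent in term_set:
                children[parent].append(t)
                break
    return children

print(dict(build_hierarchy(terms)))
```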
Abstract:
Networked Learning, e-Learning and Technology Enhanced Learning have each been defined in different ways, as people's understanding about technology in education has developed. Yet each could also be considered as a terminology competing for a contested conceptual space. Theoretically this can be a ‘fertile trans-disciplinary ground for represented disciplines to affect and potentially be re-orientated by others’ (Parchoma and Keefer, 2012), as differing perspectives on terminology and subject disciplines yield new understandings. Yet when used in government policy texts to describe connections between humans, learning and technology, terms tend to become fixed in less fertile positions linguistically. A deceptively spacious policy discourse that suggests people are free to make choices conceals an economically-based assumption that implementing new technologies, in themselves, determines learning. Yet it actually narrows the choices open to people, as one route is repeatedly in the foreground and humans are not visibly involved in it. An impression that the effective use of technology for endless improvement is inevitable cuts off critical social interactions and new knowledge for multiple understandings of technology in people's lives. This paper explores some findings from a corpus-based Critical Discourse Analysis of UK policy for educational technology during the last 15 years, to help to illuminate the choices made. This is important when, through political economy, hierarchical or dominant neoliberal logic promotes a single ‘universal model’ of technology in education, without reference to a wider social context (Rustin, 2013). Discourse matters, because it can ‘mould identities’ (Massey, 2013) in narrow, objective, economically-based terms which ‘colonise discourses of democracy and student-centredness’ (Greener and Perriton, 2005: 67). This undermines subjective social, political, material and relational (Jones, 2012: 3) contexts for those learning when humans are omitted. Critically confronting these structures is not considered a negative activity. Whilst deterministic discourse for educational technology may leave people unconsciously restricted, I argue that, through a close analysis, it offers a deceptively spacious theoretical tool for debate about the wider social and economic context of educational technology. Methodologically it provides insights about the ways technology, language and learning intersect across disciplinary borders (Giroux, 1992), as powerful, mutually constitutive elements, ever-present in networked learning situations. In sharing a replicable approach for linguistic analysis of policy discourse I hope to contribute to visions others have for a broader theoretical underpinning for educational technology, as a developing field of networked knowledge and research (Conole and Oliver, 2002; Andrews, 2011).
Abstract:
To solve multi-objective problems, multiple reward signals are often scalarized into a single value and further processed using established single-objective problem solving techniques. While the field of multi-objective optimization has made many advances in applying scalarization techniques to obtain good solution trade-offs, the utility of applying these techniques in the multi-objective multi-agent learning domain has not yet been thoroughly investigated. Agents learn the value of their decisions by linearly scalarizing their reward signals at the local level, while acceptable system-wide behaviour results. However, the non-linear relationship between the weighting parameters of the scalarization function and the learned policy makes the discovery of system-wide trade-offs time consuming. Our first contribution is a thorough analysis of well-known scalarization schemes within the multi-objective multi-agent reinforcement learning setup. The analysed approaches intelligently explore the weight space in order to find a wider range of system trade-offs. In our second contribution, we propose a novel adaptive weight algorithm which interacts with the underlying local multi-objective solvers and allows for a better coverage of the Pareto front. Our third contribution is the experimental validation of our approach by learning bi-objective policies in self-organising smart camera networks. We note that our algorithm (i) explores the objective space faster on many problem instances, (ii) obtains solutions that exhibit a larger hypervolume, and (iii) acquires a greater spread in the objective space.
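As a minimal illustration of the scalarization step the abstract builds on, the sketch below collapses a vector reward with a weight vector and naively sweeps the weight space to expose different trade-offs; the policies and reward values are made up, and the paper's adaptive weight algorithm is not reproduced here:

```python
# Sketch: linear scalarization of a vector reward and a naive weight-space sweep.
import numpy as np

def scalarize(reward_vector, weights):
    return float(np.dot(reward_vector, weights))

# candidate policies, each with an illustrative (tracking quality, energy saved) reward
policy_rewards = {"policy_a": np.array([0.9, 0.2]),
                  "policy_b": np.array([0.5, 0.7]),
                  "policy_c": np.array([0.2, 0.9])}

for w1 in np.linspace(0.0, 1.0, 5):              # sweep the weight on the first objective
    w = np.array([w1, 1.0 - w1])
    best = max(policy_rewards, key=lambda p: scalarize(policy_rewards[p], w))
    print(f"weights {w} -> preferred {best}")
```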
Abstract:
This paper describes a flexible approach to designing an LMS with a QTI Ready component based on the e-learning standards AICC and IMS QTI. The system and component permit a dynamic learning and assessment process, and the QTI Ready component can provide these facilities to other real-world virtual learning management systems.
Abstract:
In recent years, learning word vector representations has attracted much interest in Natural Language Processing. Word representations or embeddings learned using unsupervised methods help address the problem of traditional bag-of-words approaches, which fail to capture contextual semantics. In this paper we go beyond vector representations at the word level and propose a novel framework that learns higher-level feature representations of n-grams, phrases and sentences using a deep neural network built from stacked Convolutional Restricted Boltzmann Machines (CRBMs). These representations have been shown to map syntactically and semantically related n-grams to nearby locations in the hidden feature space. We have also experimented with incorporating these higher-level features into supervised classifier training for two sentiment analysis tasks: subjectivity classification and sentiment classification. Our results demonstrate the success of the proposed framework, with a 4% improvement in accuracy for subjectivity classification and improved results for sentiment classification over models trained without our higher-level features.
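A small sketch of the final supervised step described in the abstract, assuming the higher-level CRBM features are already available; `crbm_features` is a random placeholder for that output, the baseline features and labels are synthetic, and the classifier choice is illustrative rather than the paper's setup:

```python
# Sketch: concatenate baseline and higher-level features, then train a supervised classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_docs = 200
bag_of_words = rng.random((n_docs, 50))          # baseline document features (placeholder)
crbm_features = rng.random((n_docs, 20))         # stand-in for learned n-gram/sentence features
labels = rng.integers(0, 2, size=n_docs)         # e.g. subjective vs objective

X = np.hstack([bag_of_words, crbm_features])     # combine baseline and higher-level features
clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, labels, cv=5).mean())
```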
Abstract:
Advances in building learning technology now have to emphasize individual learning in addition to the popular focus on the technology per se. Unlike common research, in which a great deal of effort has gone into finding ways to build, manage, classify, categorize and search knowledge on the server, our work looks at knowledge development in the individual's learning. We build the technology that resides behind the knowledge sharing platform where the learning and sharing activities of an individual take place. The system that we built, KFTGA (Knowledge Flow Tracer and Growth Analyzer), demonstrates the capability of identifying the topics and subjects that an individual is engaged with during a knowledge sharing session and measuring the knowledge growth of the individual's learning on a specific subject over a given time span.
Abstract:
Organizations are seeking new, integrated systems that enable rapid changes through early identification of opportunities and problems, tracking of progress against plans, flexible allocation of resources to achieve goals, and consistent operations. Total Quality Management (TQM) is an overall business strategy: all activities of the company are focused on satisfying all of the company's stakeholders. TQM can be realised by using the EFQM model, a tool that organizations may use as a framework for self-evaluation, enabling an organization to identify its strengths and areas for improvement and the extent to which its operations and results are in line with the characteristics of an excellent organization. We focus on a training organisation, or the learning department of an organization, and so limit the EFQM model to training/learning activities. The EFQM model can be applied perfectly well at the level of a single activity (business line) of a company. We selected the main criteria for which the learner can play the role of assessor, leaving only three: the enabling resources, the enabling processes, and the (learning) results for the learner. We limited the last of these to "learning results" based on the Kirkpatrick model.
Abstract:
Education in the Information Society is based on asynchronism in time and space, interactivity, and virtual restructuring of the educational space. One way to implement such a model of training is web-based: using the WWW as a virtual environment to access educational materials or to organize the learning process. This work presents a virtual learning environment (VLE) developed for students and made up of modules of dynamically changing content implemented by authorized users. The aim is, through advanced technology for e-learning, testing and self-testing, to stimulate students' activity and focus their potential on the acquisition of the necessary knowledge, skills and competences. The VLE was developed under the Human Resources Development Operational Programme.
Abstract:
The paper presents a different vision for personalizing the user's stay in a cultural heritage digital library, modelling services for personalized content marking, commenting and analysis that do not require a strict user profile but instead aim to adjust to the user's individual needs. The solution is borrowed from real-world work and study with traditional written content sources (incl. books and manuals), where the user mainly performs activities such as underlining important parts of the content, writing notes and inferences, and selecting and marking zones of interest in pictures. Special attention is paid to the ability to perform learning analysis, allowing different ways for the user to experience the digital library content with more creative settings.
Abstract:
In this paper, we investigate the use of manifold learning techniques to enhance the separation properties of standard graph kernels. The idea stems from the observation that when we perform multidimensional scaling on the distance matrices extracted from the kernels, the resulting data tend to be clustered along a curve that wraps around the embedding space, a behavior that suggests that long-range distances are not estimated accurately, resulting in an increased curvature of the embedding space. Hence, we propose to use a number of manifold learning techniques to compute a low-dimensional embedding of the graphs in an attempt to unfold the embedding manifold and increase the class separation. We perform an extensive experimental evaluation on a number of standard graph datasets using the shortest-path (Borgwardt and Kriegel, 2005), graphlet (Shervashidze et al., 2009), random walk (Kashima et al., 2003) and Weisfeiler-Lehman (Shervashidze et al., 2011) kernels. We observe the most significant improvement in the case of the graphlet kernel, which fits with the observation that neglecting the locational information of the substructures leads to a stronger curvature of the embedding manifold. On the other hand, the Weisfeiler-Lehman kernel partially mitigates the locality problem by using the node label information, and thus does not clearly benefit from manifold learning. Interestingly, our experiments also show that the unfolding of the space seems to reduce the performance gap between the examined kernels.
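A sketch of the general pipeline the abstract describes, assuming a precomputed graph kernel matrix K is available: kernel values are converted to distances, embedded with a manifold learning method (Isomap here, as one of several possible choices), and the embedding is used for classification; the parameter values and the kNN classifier are illustrative assumptions:

```python
# Sketch: unfold a graph-kernel embedding with manifold learning before classification.
import numpy as np
from sklearn.manifold import Isomap
from sklearn.neighbors import KNeighborsClassifier

def kernel_to_distance(K):
    # standard conversion: d(i, j)^2 = k(i, i) + k(j, j) - 2 k(i, j)
    diag = np.diag(K)
    D2 = diag[:, None] + diag[None, :] - 2.0 * K
    return np.sqrt(np.maximum(D2, 0.0))

def unfold_and_classify(K, labels, n_components=5, n_neighbors=7):
    D = kernel_to_distance(K)
    embedding = Isomap(n_neighbors=n_neighbors, n_components=n_components,
                       metric="precomputed").fit_transform(D)
    clf = KNeighborsClassifier(n_neighbors=3).fit(embedding, labels)
    return embedding, clf
```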