968 results for Discovery learning
Abstract:
The biological bases of learning and memory are being revealed today with a wide array of molecular approaches, most of which entail the analysis of dysfunction produced by gene disruptions. This perspective derives both from early “genetic dissections” of learning in mutant Drosophila by Seymour Benzer and colleagues and from earlier behavior-genetic analyses of learning in Diptera by Jerry Hirsch and coworkers. Three quantitative-genetic insights derived from these latter studies serve as guiding principles for the former. First, interacting polygenes underlie complex traits. Consequently, learning/memory defects associated with single-gene mutants can be quantified accurately only in equilibrated, heterogeneous genetic backgrounds. Second, complex behavioral responses will be composed of genetically distinct functional components. Thus, genetic dissection of complex traits into specific biobehavioral properties is likely. Finally, disruptions of genes involved with learning/memory are likely to have pleiotropic effects. As a result, task-relevant sensorimotor responses required for normal learning must be assessed carefully to interpret performance in learning/memory experiments. In addition, more specific conclusions will be obtained from reverse-genetic experiments, in which gene disruptions are restricted in time and/or space.
Abstract:
The World Wide Web provides plentiful content for Web-based learning, but its hyperlink-based architecture connects Web resources for free browsing rather than for effective learning. To support effective learning, an e-learning system should be able to discover and make use of the semantic communities and the emerging semantic relations in a dynamic complex network of learning resources. Previous graph-based community discovery approaches are limited in their ability to discover semantic communities. This paper first suggests the Semantic Link Network (SLN), a loosely coupled semantic data model that can semantically link resources and derive implicit semantic links according to a set of relational reasoning rules. By studying the intrinsic relationship between semantic communities and the semantic space of SLN, approaches to discovering reasoning-constraint, rule-constraint, and classification-constraint semantic communities are proposed. Further, the approaches, principles, and strategies for discovering emerging semantics in dynamic SLNs are studied. The basic laws of semantic link network motion are revealed for the first time. An e-learning environment incorporating the proposed approaches, principles, and strategies to support effective discovery and learning is suggested.
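The derivation of implicit links from relational reasoning rules can be illustrated with a minimal sketch. The relation name and the single transitivity rule below are illustrative assumptions, not the SLN paper's actual rule set:

```python
# Minimal sketch of deriving implicit semantic links by applying
# relational reasoning rules until a fixed point is reached.
# The "partOf" relation and its transitivity rule are illustrative
# assumptions, not the actual SLN rule set.

def derive_links(links, rules):
    """links: set of (source, relation, target); rules: {(r1, r2): r3}."""
    links = set(links)
    changed = True
    while changed:
        changed = False
        for (a, r1, b) in list(links):
            for (c, r2, d) in list(links):
                if b == c and (r1, r2) in rules:
                    new = (a, rules[(r1, r2)], d)
                    if new not in links:
                        links.add(new)
                        changed = True
    return links

# Example: partOf is transitive, so one implicit link is derived.
rules = {("partOf", "partOf"): "partOf"}
base = {("lesson", "partOf", "course"), ("course", "partOf", "curriculum")}
derived = derive_links(base, rules)
# derived now also contains ("lesson", "partOf", "curriculum")
```

A production reasoner would use a semi-naive fixpoint rather than this quadratic scan, but the rule-application idea is the same.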
Abstract:
As a way to gain greater insights into the operation of online communities, this dissertation applies automated text mining techniques to text-based communication to identify, describe and evaluate underlying social networks among online community members. The main thrust of the study is to automate the discovery of social ties that form between community members, using only the digital footprints left behind in their online forum postings. Currently, one of the most common but time-consuming methods for discovering social ties between people is to ask questions about their perceived social ties. However, such a survey is difficult to collect due to the high investment in time associated with data collection and the sensitive nature of the types of questions that may be asked. To overcome these limitations, the dissertation presents a new, content-based method for automated discovery of social networks from threaded discussions, referred to as the ‘name network’. As a case study, the proposed automated method is evaluated in the context of online learning communities. The results suggest that the proposed ‘name network’ method for collecting social network data is a viable alternative to costly and time-consuming collection of users’ data using surveys. The study also demonstrates how social networks produced by the ‘name network’ method can be used to study online classes and to look for evidence of collaborative learning in online learning communities. For example, educators can use name networks as a real-time diagnostic tool to identify students who might need additional help or students who may provide such help to others. Future research will evaluate the usefulness of the ‘name network’ method in other types of online communities.
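The core of the ‘name network’ idea can be sketched as follows. The exact-substring mention heuristic here is a simplifying assumption, not the dissertation's actual extraction method: scan each post for mentions of other members and add a directed tie from the poster to the mentioned member.

```python
# Hedged sketch of building a "name network" from forum posts.
# The mention heuristic (exact name substring match) is an illustrative
# simplification of the dissertation's content-based extraction.

def build_name_network(posts, members):
    """posts: list of (author, text); returns {author: set of mentioned members}."""
    ties = {}
    for author, text in posts:
        for member in members:
            if member != author and member in text:
                ties.setdefault(author, set()).add(member)
    return ties

posts = [
    ("alice", "I agree with bob's point about recursion."),
    ("bob", "Thanks alice, and carol's example helped too."),
]
network = build_name_network(posts, {"alice", "bob", "carol"})
# network == {"alice": {"bob"}, "bob": {"alice", "carol"}}
```

The resulting adjacency structure can then be analysed with standard social-network measures (degree, centrality) to flag isolated or highly connected students.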
Abstract:
Within an action research framework, this paper describes the conceptual basis for developing a cross-disciplinary pedagogical model of higher education/industry engagement for the built environment design disciplines, including architecture, interior design, industrial design and landscape architecture. Aiming to holistically acknowledge and capitalize on the work environment as a place of authentic learning, problems arising in practice are understood as the impetus, focus and ‘space’ for a process of inquiry and discovery that, in the spirit of Boyer’s ‘Scholarship of Integration’, provides for generic as well as discipline-specific learning.
Abstract:
With the advent of Service Oriented Architecture, Web Services have gained tremendous popularity. Due to the availability of a large number of Web services, finding an appropriate Web service according to the requirement of the user is a challenge. This warrants the need to establish an effective and reliable process of Web service discovery. A considerable body of research has emerged to develop methods to improve the accuracy of Web service discovery to match the best service. The process of Web service discovery results in suggesting many individual services that partially fulfil the user’s interest. Considering the semantic relationships of words used in describing the services, as well as the use of input and output parameters, can lead to accurate Web service discovery. Appropriate linking of the individually matched services should fully satisfy the user’s requirements. This research proposes to integrate a semantic model and a data mining technique to enhance the accuracy of Web service discovery. A novel three-phase Web service discovery methodology has been proposed. The first phase performs match-making to find semantically similar Web services for a user query. In order to perform semantic analysis on the content present in the Web service description language document, the support-based latent semantic kernel is constructed using an innovative concept of binning and merging on a large quantity of text documents covering diverse domains of knowledge. The use of a generic latent semantic kernel constructed with a large number of terms helps to find the hidden meaning of the query terms which otherwise could not be found. Sometimes a single Web service is unable to fully satisfy the requirement of the user. In such cases, a composition of multiple inter-related Web services is presented to the user. The task of checking the possibility of linking multiple Web services is done in the second phase.
Once the feasibility of linking Web services is checked, the objective is to provide the user with the best composition of Web services. In the link analysis phase, the Web services are modelled as nodes of a graph and an all-pairs shortest-path algorithm is applied to find the optimum path at the minimum cost for traversal. The third phase, system integration, integrates the results from the preceding two phases by using an original fusion algorithm in the fusion engine. Finally, the recommendation engine, which is an integral part of the system integration phase, makes the final recommendations, including individual and composite Web services, to the user. In order to evaluate the performance of the proposed method, extensive experimentation has been performed. Results of the proposed support-based semantic kernel method of Web service discovery are compared with the results of the standard keyword-based information-retrieval method and a clustering-based machine-learning method of Web service discovery. The proposed method outperforms both the information-retrieval and machine-learning based methods. Experimental results and statistical analysis also show that the best Web service compositions are obtained by considering 10 to 15 Web services that are found in phase-I for linking. Empirical results also ascertain that the fusion engine boosts the accuracy of Web service discovery by combining the inputs from both the semantic analysis (phase-I) and the link analysis (phase-II) in a systematic fashion. Overall, the accuracy of Web service discovery with the proposed method shows a significant improvement over traditional discovery methods.
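The all-pairs shortest-path step in the link-analysis phase can be sketched with the standard Floyd–Warshall algorithm. The graph and edge costs below are hypothetical placeholders; the thesis's actual cost model is not specified in the abstract:

```python
# Floyd-Warshall all-pairs shortest paths over a service-composition
# graph. Nodes are services, edge costs are illustrative placeholders
# (e.g. invocation latency or parameter-mismatch penalty).
INF = float("inf")

def floyd_warshall(n, edges):
    """n: node count; edges: {(u, v): cost}. Returns the distance matrix."""
    dist = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for (u, v), c in edges.items():
        dist[u][v] = min(dist[u][v], c)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

# Four hypothetical services 0..3 with candidate links.
edges = {(0, 1): 2, (1, 2): 2, (0, 2): 10, (2, 3): 1}
dist = floyd_warshall(4, edges)
# dist[0][3] == 5, via the chain 0 -> 1 -> 2 -> 3
```

The minimum-cost path then corresponds to the cheapest chain of compatible services between the query's input service and its output service.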
Abstract:
Intelligent software agents are promising in improving the effectiveness of e-marketplaces for e-commerce. Although a large amount of research has been conducted to develop negotiation protocols and mechanisms for e-marketplaces, existing negotiation mechanisms are weak in dealing with complex and dynamic negotiation spaces often found in e-commerce. This paper illustrates a novel knowledge discovery method and a probabilistic negotiation decision making mechanism to improve the performance of negotiation agents. Our preliminary experiments show that the probabilistic negotiation agents empowered by knowledge discovery mechanisms are more effective and efficient than the Pareto optimal negotiation agents in simulated e-marketplaces.
Abstract:
The integration of computer technologies into everyday classroom life continues to provide pedagogical challenges for school systems, teachers and administrators. Data from an exploratory case study of one teacher and a multiage class of children in the first years of schooling in Australia show that when young children are using computers for set tasks in small groups, they require ongoing support from teachers and opportunities to engage in peer interactions that are meaningful and productive. Classroom organization and the nature of teacher-child talk are key factors in engaging children in set tasks and producing desirable learning and teaching outcomes.
Abstract:
Recent research on multiple kernel learning has led to a number of approaches for combining kernels in regularized risk minimization. The proposed approaches include different formulations of objectives and varying regularization strategies. In this paper we present a unifying optimization criterion for multiple kernel learning and show how existing formulations are subsumed as special cases. We also derive the criterion’s dual representation, which is suitable for general smooth optimization algorithms. Finally, we evaluate multiple kernel learning in this framework analytically, using a Rademacher complexity bound on the generalization error, and empirically in a set of experiments.
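The central object in such formulations is a combined kernel, typically a convex combination K = Σₘ βₘ Kₘ with βₘ ≥ 0 and Σₘ βₘ = 1. A minimal numerical sketch, with hand-fixed weights rather than the paper's learned criterion:

```python
import math

# Sketch: a convex combination of base kernels, the central object in
# multiple kernel learning. The weights are fixed by hand here; an MKL
# solver would learn them inside regularized risk minimization.

def k_linear(x, y):
    return sum(a * b for a, b in zip(x, y))

def k_rbf(x, y, gamma=0.5):
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq)

def k_combined(x, y, betas=(0.3, 0.7)):
    """Convex combination: sum_m beta_m * K_m(x, y), with betas summing to 1."""
    return betas[0] * k_linear(x, y) + betas[1] * k_rbf(x, y)

x, y = (1.0, 0.0), (0.0, 1.0)
val = k_combined(x, y)
# k_linear = 0 and k_rbf = exp(-1), so val = 0.7 * exp(-1)
```

Because each βₘ ≥ 0 and each base kernel is positive semidefinite, the combination remains a valid kernel, which is what lets the usual dual machinery go through.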
Abstract:
We consider the problem of choosing, sequentially, a map which assigns elements of a set A to a few elements of a set B. On each round, the algorithm suffers some cost associated with the chosen assignment, and the goal is to minimize the cumulative loss of these choices relative to the best map on the entire sequence. Even though the offline problem of finding the best map is provably hard, we show that there is an equivalent online approximation algorithm, Randomized Map Prediction (RMP), that is efficient and performs nearly as well. While drawing upon results from the "Online Prediction with Expert Advice" setting, we show how RMP can be utilized as an online approach to several standard batch problems. We apply RMP to online clustering as well as online feature selection and, surprisingly, RMP often outperforms the standard batch algorithms on these problems.
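The "Online Prediction with Expert Advice" machinery that RMP draws upon can be sketched with a Hedge-style multiplicative-weights update. This is the underlying setting, not the RMP algorithm itself, and the losses below are illustrative:

```python
import math

# Sketch of the expert-advice setting RMP builds on: multiplicative
# (Hedge-style) weight updates. Each round, every expert's weight is
# scaled down exponentially in its observed loss, and the normalized
# weights give a sampling distribution over experts.

def hedge(losses_per_round, eta=0.5):
    """losses_per_round: list of per-expert loss vectors in [0, 1]."""
    n = len(losses_per_round[0])
    w = [1.0] * n
    for losses in losses_per_round:
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, losses)]
    total = sum(w)
    return [wi / total for wi in w]  # final sampling distribution

# Expert 1 suffers less cumulative loss, so it ends with most of the mass.
rounds = [[1.0, 0.0], [1.0, 0.0], [0.0, 0.0]]
p = hedge(rounds)
# p[1] > p[0]
```

The standard regret bound for this scheme is O(√(T log n)) against the best fixed expert, which is the kind of guarantee that lets an online algorithm track the best fixed map.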
Abstract:
The primary genetic risk factor in multiple sclerosis (MS) is the HLA-DRB1*1501 allele; however, much of the remaining genetic contribution to MS has yet to be elucidated. Several lines of evidence support a role for neuroendocrine system involvement in autoimmunity which may, in part, be genetically determined. Here, we comprehensively investigated variation within eight candidate hypothalamic-pituitary-adrenal (HPA) axis genes and susceptibility to MS. A total of 326 SNPs were investigated in a discovery dataset of 1343 MS cases and 1379 healthy controls of European ancestry using a multi-analytical strategy. Random Forests, a supervised machine-learning algorithm, identified eight intronic SNPs within the corticotrophin-releasing hormone receptor 1 or CRHR1 locus on 17q21.31 as important predictors of MS. On the basis of univariate analyses, six CRHR1 variants were associated with decreased risk for disease following a conservative correction for multiple tests. Independent replication was observed for CRHR1 in a large meta-analysis comprising 2624 MS cases and 7220 healthy controls of European ancestry. Results from a combined meta-analysis of all 3967 MS cases and 8599 controls provide strong evidence for the involvement of CRHR1 in MS. The strongest association was observed for rs242936 (OR = 0.82, 95% CI = 0.74-0.90, P = 9.7 × 10⁻⁵). Replicated CRHR1 variants appear to exist on a single associated haplotype. Further investigation of mechanisms involved in HPA axis regulation and response to stress in MS pathogenesis is warranted. © The Author 2010. Published by Oxford University Press. All rights reserved.
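The reported effect size (OR = 0.82, 95% CI 0.74-0.90) is a standard odds ratio with a Wald confidence interval. As a hedged illustration of the computation, with hypothetical 2×2 counts, since the study's genotype counts are not given in the abstract:

```python
import math

# Odds ratio with a Wald 95% CI from a 2x2 allele-by-status table.
# The counts below are hypothetical, chosen only to show the formula;
# they are NOT the study's data.

def odds_ratio_ci(a, b, c, d):
    """a, b: minor/major allele counts in cases; c, d: in controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(400, 600, 500, 500)
# An OR below 1 (with a CI excluding 1) indicates association with
# decreased risk, as reported for the CRHR1 variants.
```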
Abstract:
Electronic services are a leitmotif in ‘hot’ topics like Software as a Service, Service Oriented Architecture (SOA), Service oriented Computing, Cloud Computing, application markets and smart devices. We propose to consider these in what has been termed the Service Ecosystem (SES). The SES encompasses all levels of electronic services and their interaction, with human consumption and initiation on its periphery, in much the same way the ‘Web’ describes a plethora of technologies that eventuate to connect information and expose it to humans. Presently, the SES is heterogeneous, fragmented and confined to semi-closed systems. A key issue hampering the emergence of an integrated SES is Service Discovery (SD). A SES will be dynamic, with areas of structured and unstructured information within which service providers and ‘lay’ human consumers interact; until now the two are disjointed, e.g., SOA-enabled organisations, industries and domains are choreographed by domain experts or ‘hard-wired’ to smart device application markets and web applications. In a SES, services are accessible, comparable and exchangeable to human consumers, closing the gap to the providers. This requires a new SD with which humans can discover services transparently and effectively without special knowledge or training. We propose two modes of discovery: directed search following an agenda, and explorative search, which speculatively expands knowledge of an area of interest by means of categories. Inspired by conceptual space theory from cognitive science, we propose to implement the modes of discovery using concepts to map a lay consumer’s service need to terminologically sophisticated descriptions of services. To this end, we reframe SD as an information retrieval task on the information attached to services, such as descriptions, reviews, documentation and web sites - the Service Information Shadow.
The Semantic Space model transforms the shadow's unstructured semantic information into a geometric, concept-like representation. We introduce an improved and extended Semantic Space including categorization, calling it the Semantic Service Discovery model. We evaluate our model with a highly relevant, service-related corpus simulating a Service Information Shadow, including manually constructed complex service agendas as well as manual groupings of services. We compare our model against state-of-the-art information retrieval systems and clustering algorithms. By means of an extensive series of empirical evaluations, we establish optimal parameter settings for the semantic space model. The evaluations demonstrate the model’s effectiveness for SD in terms of retrieval precision over state-of-the-art information retrieval models (directed search) and the meaningful, automatic categorization of service-related information, which shows potential to form the basis of a useful, cognitively motivated map of the SES for exploratory search.
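At its core, retrieval in a semantic-space model ranks services by the similarity between a query vector and each service's vector, commonly cosine similarity. A minimal sketch over toy term-count vectors; the thesis's model works in a dimension-reduced concept space rather than on raw counts:

```python
import math

# Minimal cosine-similarity ranking over term-count vectors, the basic
# operation behind semantic-space retrieval. The toy vocabulary,
# service names and vectors are illustrative assumptions.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Counts over the toy terms (forecast, payment, city).
services = {
    "weather-api": [2, 0, 1],
    "billing-api": [0, 3, 0],
}
query = [1, 0, 1]  # a consumer's need expressed in the same space
ranked = sorted(services, key=lambda s: cosine(query, services[s]),
                reverse=True)
# "weather-api" ranks first for this query
```

Projecting both queries and services into a shared concept space is what lets a lay consumer's wording match terminologically sophisticated service descriptions.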
Abstract:
The discovery of protein variation is an important strategy in disease diagnosis within the biological sciences. The current benchmark for elucidating information from multiple biological variables is the so-called “omics” disciplines of the biological sciences. Such variability is uncovered by implementation of multivariable data mining techniques, which come under two primary categories: machine-learning strategies and statistics-based approaches. Typically, proteomic studies can produce hundreds or thousands of variables, p, per observation, n, depending on the analytical platform or method employed to generate the data. Many classification methods are limited by an n≪p constraint, and as such, require pre-treatment to reduce the dimensionality prior to classification. Recently, machine-learning techniques have gained popularity in the field for their ability to successfully classify unknown samples. One limitation of such methods is the lack of a functional model allowing meaningful interpretation of results in terms of the features used for classification. This is a problem that might be solved using a statistical model-based approach, where not only is the importance of each individual protein explicit, but the proteins are combined into a readily interpretable classification rule without relying on a black-box approach. Here we incorporate the statistical dimension-reduction techniques Partial Least Squares (PLS) and Principal Components Analysis (PCA), followed by both statistical and machine-learning classification methods, and compare them to a popular machine-learning technique, Support Vector Machines (SVM). Both PLS and SVM demonstrate strong utility for proteomic classification problems.
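The dimension-reduction step can be sketched with PCA computed by power iteration on the sample covariance matrix. This is an illustrative toy (real proteomics pipelines use library implementations, and PLS additionally uses the class labels to guide the projection):

```python
import math

# Sketch: first principal component via power iteration on the sample
# covariance matrix, then projection of the centered data onto it.
# Illustrative only; the toy data below are not proteomic measurements.

def first_pc(X, iters=200):
    """X: list of rows. Returns (unit leading eigenvector, centered X)."""
    n, p = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(p)]
    Xc = [[row[j] - means[j] for j in range(p)] for row in X]
    cov = [[sum(Xc[i][a] * Xc[i][b] for i in range(n)) / (n - 1)
            for b in range(p)] for a in range(p)]
    v = [1.0] * p
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(p)) for a in range(p)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v, Xc

X = [[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2], [3.1, 3.0]]
v, Xc = first_pc(X)
scores = [sum(a * b for a, b in zip(row, v)) for row in Xc]  # 1-D projection
```

Reducing p correlated variables to a few such components is what makes downstream classifiers usable when n≪p.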
Abstract:
Twenty-first-century learners operate in organic, immersive environments. A pedagogy of student-centred learning is not a recipe for rooms. A contemporary learning environment is like a landscape that grows, morphs, and responds to the pressures of the context and micro-culture. There is no single adaptable solution, nor a suite of off-the-shelf answers; propositions must be customisable and infinitely variable. They must be indeterminate and changeable, based on the creation of learning places, not restrictive or constraining spaces. A sustainable solution will be un-fixed, responsive to the life cycle of the components and materials, and able to be manipulated by the users; it will create and construct its own history. Learning occurs as formal education with situational knowledge structures, but also as informal learning, active learning, blended learning, social learning, incidental learning, and unintended learning. These are not spatial concepts but socio-cultural patterns of discovery. Individual learning requirements must run free and need to be accommodated as the learner sees fit. The spatial solution must accommodate and enable a full array of learning situations. It is a system, not an object. Three major components:
1. The determinate landscape: an in-situ concrete 'plate' that is permanent. It predates the other components of the system and remains as a remnant/imprint/fossil after the other components have been relocated. It is a functional learning landscape in its own right, enabling a variety of experiences and activities.
2. The indeterminate landscape: a kit of prefabricated 2-D panels assembled in a unique manner at each site to suit the client and context. Manufactured to the principles of design-for-disassembly. A symbiotic, barnacle-like system that attaches itself to the existing infrastructure through the determinate landscape, which acts as a fast-growth rhizome. A carapace of protective panels, infinitely variable to create enclosed, semi-enclosed, and open learning places.
3. The stations: prefabricated packages of highly serviced space connected through the determinate landscape. Four main types of stations: wet-room learning centres, dry-room learning centres, ablutions, and low-impact building services. Entirely customised at the factory and delivered to site. The stations can be retro-fitted to suit a new context during relocation.
Principles of design for disassembly:
Material principles
• use recycled and recyclable materials
• minimise the number of types of materials
• no toxic materials
• use lightweight materials
• avoid secondary finishes
• provide identification of material types
Component principles
• minimise/standardise the number of types of components
• use mechanical, not chemical, connections
• design for use of common tools and equipment
• provide easy access to all components
• make component sizes suit the means of handling
• provide built-in means of handling
• design to realistic tolerances
• use a minimum number of connectors and a minimum number of types
System principles
• design for durability and repeated use
• use prefabrication and mass production
• provide spare components on site
• sustain all assembly and material information