870 results for collaborative knowledge construction
Abstract:
This paper has two objectives: first, to provide a brief review of developments in the sociology of scientific knowledge (SSK); second, to apply an aspect of SSK theorising concerned with the construction of scientific knowledge. The paper reviews the streams of thought that can be identified within SSK and then illustrates the theoretic constructs introduced in the earlier discussion by analysing a particular contribution to the literature on research methodology in accounting and organisation studies. The paper chosen for analysis is titled “Middle Range Thinking”. The objective is not to argue that the approach used in that paper is invalid, but to expose the rhetorical nature of the argumentation used by its author.
Abstract:
The present scarcity of operational knowledge-based systems (KBS) has been attributed, in part, to inadequate consideration of user interface design during development. From a human factors perspective, the problem has stemmed from an overall lack of user-centred design principles. Consequently, the integration of human factors principles and techniques is seen as a necessary and important precursor to ensuring the implementation of KBS which are useful to, and usable by, the end-users for whom they are intended. Focussing upon KBS work taking place within commercial and industrial environments, this research set out to assess both the extent to which human factors support was presently being utilised within development and the future path for human factors integration. The assessment consisted of interviews conducted with a number of commercial and industrial organisations involved in KBS development, and a set of three detailed case studies of individual KBS projects. Two of the studies were carried out within a collaborative Alvey project involving the Interdisciplinary Higher Degrees Scheme (IHD) at the University of Aston in Birmingham, BIS Applied Systems Ltd (BIS), and the British Steel Corporation. This project, which had provided the initial basis and funding for the research, was concerned with the application of KBS to the design of commercial data processing (DP) systems. The third study stemmed from involvement in a KBS project being carried out by the Technology Division of the Trustee Savings Bank Group plc. The preliminary research highlighted poor human factors integration. In particular, there was a lack of early consideration of end-user requirements definition and user-centred evaluation. Instead, attention was concentrated on the construction of the knowledge base and on prototype evaluation with the expert(s). In response to this identified problem, a set of methods was developed that aimed to encourage developers to consider user interface requirements early in a project. These methods were then applied in the two further projects, and their uptake within the overall development process was monitored. Experience from the two studies demonstrated that early consideration of user interface requirements was both feasible and instructive for guiding future development work. In particular, it was shown that a user interface prototype could be used as a basis for capturing requirements at the functional (task) level and at the interface dialogue level. Extrapolating from this experience, a KBS life-cycle model is proposed which incorporates user interface design (and within that, user evaluation) as a largely parallel, rather than subsequent, activity to knowledge base construction. Further to this, there is a discussion of several key elements which can be seen as inhibiting the integration of human factors within KBS development. These elements stem from characteristics of present KBS development practice, from constraints within the commercial and industrial development environments, and from the state of existing human factors support.
Abstract:
Despite considerable and growing interest in academic researchers and practising managers jointly generating knowledge (which we term ‘co-production’), our searches of the management literature revealed few articles based on primary data or multiple cases. Given the increasing commitment to co-production by academics, managers and those funding research, it seems important to strengthen the evidence base about practice and performance in co-production. Literature on collaborative research was reviewed to develop a framework to structure the analysis of the data and to relate findings to the limited body of prior research on collaborative research practice and performance. This paper presents empirical data from four completed, large-scale co-production projects. Despite major differences between the cases, we find that the key success factors and the indicators of performance are remarkably similar. We demonstrate many complex influences between factors, between outcomes, and between factors and outcomes, and discuss the features that are distinctive to co-production. Our empirical findings are broadly consonant with prior literature, but go further in trying to understand the consequences of success factors for performance. A second contribution of this paper is the development of a conceptually and methodologically rigorous process for investigating collaborative research, linking process and performance. The paper closes with a discussion of the study’s limitations and opportunities for further research.
Abstract:
Intranet technologies accessible through a web-based platform are used to share and build knowledge bases in many industries. Previous research suggests that intranets provide a useful means of sharing, collaborating on and transacting information within an organization. To compete and survive successfully, business organisations must effectively manage the various risks affecting their businesses. In the construction industry, too, this is becoming an increasingly important element of business planning. The ability of businesses, especially SMEs, which represent a significant portion of most economies, to manage various risks is often hindered by knowledge fragmented across a large number of businesses. As a solution, this paper argues that intranet technologies can be used as an effective means of building and sharing knowledge and of building up effective knowledge bases for risk management in SMEs, specifically considering the risks of extreme weather events. The paper discusses and evaluates the relevant literature in this regard and identifies the potential for further research to explore this concept.
Abstract:
The Resource Space Model is a data model that can effectively and flexibly manage digital resources in cyber-physical systems from multidimensional and hierarchical perspectives. This paper focuses on constructing a resource space automatically. We propose a framework that organizes a set of digital resources according to different semantic dimensions, combining human background knowledge from WordNet and Wikipedia. The construction process comprises four steps: extracting candidate keywords, building semantic graphs, detecting semantic communities and generating the resource space. An unsupervised statistical topic model (Latent Dirichlet Allocation) is applied to extract candidate keywords for the facets. To better interpret the meanings of the facets found by LDA, we map the keywords to Wikipedia concepts, calculate word relatedness using WordNet's noun synsets and construct the corresponding semantic graphs. Semantic communities are then identified with the Girvan-Newman (GN) algorithm. After extracting candidate axes based on the Wikipedia concept hierarchy, the final axes of the resource space are ranked and selected through three different ranking strategies. The experimental results demonstrate that the proposed framework can organize resources automatically and effectively.
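To make the pipeline concrete, here is a minimal sketch of the first three steps, assuming gensim for LDA, NLTK's WordNet interface for relatedness and networkx for community detection; the toy corpus and the relatedness threshold are invented for illustration, and the Wikipedia-based axis ranking of step four is omitted.

```python
# Sketch of: LDA keywords -> WordNet relatedness graph -> GN communities.
# Library choices and all numbers are illustrative assumptions,
# not the authors' implementation.
from gensim import corpora, models
from nltk.corpus import wordnet as wn
import networkx as nx
from networkx.algorithms.community import girvan_newman

docs = [["resource", "space", "model", "dimension"],
        ["knowledge", "graph", "entity", "relation"],
        ["semantic", "community", "graph", "topic"]]

dictionary = corpora.Dictionary(docs)
bow = [dictionary.doc2bow(d) for d in docs]
lda = models.LdaModel(bow, num_topics=2, id2word=dictionary)

# Step 1: candidate keywords = top terms of each LDA topic
keywords = {w for t in range(lda.num_topics)
            for w, _ in lda.show_topic(t, topn=5)}

# Step 2: semantic graph weighted by WordNet noun-synset relatedness
G = nx.Graph()
for u in keywords:
    for v in keywords:
        if u < v:
            su, sv = wn.synsets(u, wn.NOUN), wn.synsets(v, wn.NOUN)
            if su and sv:
                sim = max(s1.path_similarity(s2) or 0.0
                          for s1 in su for s2 in sv)
                if sim > 0.1:  # assumed relatedness cut-off
                    G.add_edge(u, v, weight=sim)

# Step 3: semantic communities via the Girvan-Newman algorithm
if G.number_of_edges() > 0:
    print(next(girvan_newman(G)))
```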
Abstract:
An approach is developed for extracting knowledge from the information arriving at the knowledge base input, and for distributing the new knowledge over the knowledge subsets already present in the knowledge base. The extracted knowledge must also be transformed into parameters (data) of the model for subsequent decision-making on the given subset. The decision-making is assumed to be realized with the apparatus of fuzzy sets.
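The abstract leaves the fuzzy-set step unspecified; a generic sketch of how such a decision step might look follows, with all decision classes, membership functions and numbers invented for illustration rather than taken from the paper.

```python
# Generic fuzzy-set decision-making sketch (not the paper's method):
# knowledge reduced to a numeric model parameter is matched against
# fuzzy membership functions, and the best-fitting class is chosen.
def tri(x, a, b, c):
    """Triangular membership: rises from a, peaks at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# hypothetical decision classes over a normalized parameter in [0, 1]
classes = {
    "accept":   lambda x: tri(x, -0.5, 0.0, 0.5),
    "review":   lambda x: tri(x, 0.2, 0.5, 0.8),
    "escalate": lambda x: tri(x, 0.5, 1.0, 1.5),
}

param = 0.62  # parameter derived from the newly distributed knowledge
memberships = {name: mu(param) for name, mu in classes.items()}
print(memberships, "->", max(memberships, key=memberships.get))
```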
Abstract:
This study is guided by the hypothesis that the same educational objective, implemented as either cooperative or collaborative learning in university teaching, does not affect students’ perceptions of the learning model. It analyses the reflections of two groups of engineering students that shared the same educational goals, implemented through two different active learning strategies: Simulation as a cooperative learning strategy and Problem-Based Learning as a collaborative one. Neither the different number of participants per group (eighty-five and sixty-five, respectively) nor the choice between the two active learning strategies, collaborative or cooperative, produced differences in the results from a qualitative perspective.
Abstract:
In the past decade, systems that extract information from millions of Internet documents have become commonplace. Knowledge graphs -- structured knowledge bases that describe entities, their attributes and the relationships between them -- are a powerful tool for understanding and organizing this vast amount of information. However, a significant obstacle to knowledge graph construction is the unreliability of the extracted information, due to noise and ambiguity in the underlying data or errors made by the extraction system, and the complexity of reasoning about the dependencies between these noisy extractions. My dissertation addresses these challenges by exploiting the interdependencies between facts to improve the quality of the knowledge graph in a scalable framework. I introduce a new approach called knowledge graph identification (KGI), which resolves the entities, attributes and relationships in the knowledge graph by incorporating uncertain extractions from multiple sources, entity co-references, and ontological constraints. I define a probability distribution over possible knowledge graphs and infer the most probable knowledge graph using a combination of probabilistic and logical reasoning. Such probabilistic models are frequently dismissed due to scalability concerns, but my implementation of KGI maintains tractable performance on large problems through the use of hinge-loss Markov random fields, which have a convex inference objective. This allows the inference of large knowledge graphs with 4M facts and 20M ground constraints in 2 hours. To further scale the solution, I develop a distributed approach to the KGI problem which runs in parallel across multiple machines, reducing inference time by 90%. Finally, I extend my model to the streaming setting, where a knowledge graph is continuously updated by incorporating newly extracted facts. I devise a general approach for approximately updating inference in convex probabilistic models, and quantify the approximation error by defining and bounding inference regret for online models. Together, my work retains the attractive features of probabilistic models while providing the scalability necessary for large-scale knowledge graph construction. These models have been applied on a number of real-world knowledge graph projects, including the NELL project at Carnegie Mellon and the Google Knowledge Graph.
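The convex inference at the core of KGI can be illustrated with a toy example: soft truth values in [0, 1], squared hinge-loss potentials for extraction evidence and an ontological (symmetry) constraint, plus a weak negative prior. The rules, weights and use of scipy below are assumptions for illustration; the dissertation's actual implementation builds on hinge-loss MRFs via probabilistic soft logic, not this sketch.

```python
# Toy hinge-loss MRF MAP inference for two candidate facts,
# Spouse(A,B) and Spouse(B,A), with invented weights.
import numpy as np
from scipy.optimize import minimize

obs = np.array([0.9, 0.3])  # noisy extractor confidences

def energy(y):
    # rule: CandidateSpouse(X,Y) -> Spouse(X,Y), weight 5
    fit = 5.0 * sum(max(0.0, o - yi) ** 2 for yi, o in zip(y, obs))
    # ontological rule: Spouse(X,Y) <-> Spouse(Y,X), weight 10
    sym = 10.0 * (max(0.0, y[0] - y[1]) ** 2 + max(0.0, y[1] - y[0]) ** 2)
    prior = 1.0 * sum(yi ** 2 for yi in y)  # weak prior toward false
    return fit + sym + prior

res = minimize(energy, x0=[0.5, 0.5], bounds=[(0.0, 1.0)] * 2)
print(res.x)  # MAP soft truth values, pulled toward mutual agreement
```

Because every potential is a squared hinge of a linear function, the objective is convex, which is what keeps inference tractable at the 4M-fact scale the abstract reports.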
Abstract:
Part 19: Knowledge Management in Networks
Abstract:
Part 13: Virtual Reality and Simulation
Abstract:
This study highlights the importance of cognition-affect interaction pathways in the construction of mathematical knowledge. The research literature calls for further work on the conceptual structure underlying such interaction, aimed at coping with the high complexity of its interpretation. The paper discusses the effectiveness of using a dynamic model such as that outlined in the Mathematical Working Spaces (MWS) framework to describe the interplay between cognition and affect in the transitions from instrumental to discursive geneses in geometrical reasoning. The results, based on empirical data from a teaching experiment at a middle school, show that the use of dynamic geometry software favours students’ attitudinal and volitional dimensions and helps them to maintain productive affective pathways, affording greater intellectual independence in mathematical work and in interaction with the context, which impacts learning opportunities in geometric proofs. The reflective and heuristic dimensions of teacher mediation in students’ learning are crucial in the transition from instrumental to discursive genesis and in working stability in the Instrumental-Discursive plane of MWS.
Abstract:
This is an analysis of the theoretical and practical construction of the methodology of Matrix Support, an inter-professional approach to joint care, by means of studies on Paideia Support (Institutional and Matrix Support) in the recent literature and in official documents of the Unified Health System (SUS). An attempt was made to describe its methodological concepts and strategies. A comparative analysis of Institutional Support and Matrix Support was also conducted using the epistemological framework of Field and Core Knowledge and Practices.
Abstract:
We describe two ways of optimizing score functions for protein sequence-to-structure threading. The first method adjusts parameters to improve sequence-to-structure alignment. The second adjusts parameters so as to improve a score function's ability to rank alignments calculated with the first score function. Unlike those functions known as knowledge-based force fields, the resulting parameter sets do not rely on Boltzmann statistics, have no claim to representing free energies and are purely constructions for recognizing protein folds. The methods give a small improvement, but suggest that functions can be profitably optimized for very specific aspects of protein fold recognition.
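The second optimization can be pictured as a ranking problem: tune the weights of a linear score function so that correct (native-like) alignments outrank decoys. The two-feature toy data and the hinge ranking loss below are assumptions for illustration, not the authors' method.

```python
# Rank-based tuning of a linear alignment score function (schematic).
import numpy as np
from scipy.optimize import minimize

natives = np.array([[1.0, 0.2], [0.8, 0.1]])  # features of correct alignments
decoys  = np.array([[0.4, 0.9], [0.3, 0.7]])  # features of incorrect alignments

def rank_loss(w):
    s_nat, s_dec = natives @ w, decoys @ w
    # penalize any decoy scoring within a unit margin of a native alignment
    return sum(max(0.0, 1.0 + sd - sn) for sn in s_nat for sd in s_dec)

res = minimize(rank_loss, x0=np.zeros(2), method="Nelder-Mead")
print(res.x)  # weights constructed purely for fold recognition
```

As the abstract stresses, weights found this way carry no free-energy interpretation; they are optimized only for the recognition task itself.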
Abstract:
Corporate portals, enabled by Information and Communication Technology tools, provide the integration of heterogeneous data from internal information systems, made available for access and sharing by the interested community. They can be considered an important instrument for the evaluation of explicit knowledge in the organization, since they allow faster and safer information exchanges, enabling a healthy collaborative environment. In the specific case of major Brazilian universities, corporate portals assume a fundamental role, since they offer an enormous variety and amount of information and knowledge, due to the multiplicity of the universities' activities. This study aims to point out important aspects of the explicit knowledge expressed by the universities studied, through analysis of the content offered in their corporate portals. This is an exploratory study conducted through direct observation of the existing contents in the corporate portals of two public universities and three private ones. A comparative analysis of the existing contents in these portals was carried out; it can be useful for evaluating their use as a factor in optimizing the explicit knowledge generated in the university. As results, important differences could be verified in the composition and content of the corporate portals of the public universities compared to the private institutions. The main differences concern the kind of services and the destination of the information, which are focused on different target publics. It could also be concluded that the private universities studied focus on processes related to serving students, supporting courses and spreading information to the public interested in joining the institution, whereas the public universities prioritize more specific information, directed to the dissemination of research developed internally or with institutional objectives.
Abstract:
Background: Microarray transcript profiling has the potential to illuminate the molecular processes that are involved in the responses of cattle to disease challenges. This knowledge may allow the development of strategies that exploit these genes to enhance resistance to disease in an individual or animal population. Results: The Bovine Innate Immune Microarray developed in this study consists of 1480 characterised genes identified by literature searches, 31 positive and negative control elements and 5376 cDNAs derived from subtracted and normalised libraries. The cDNA libraries were produced from 'challenged' bovine epithelial and leukocyte cells. The microarray was found to have a limit of detection of 1 pg/µg of total RNA and a mean slide-to-slide correlation coefficient of 0.88. The profiles of differentially expressed genes from Concanavalin A (ConA) stimulated bovine peripheral blood lymphocytes were determined. Three distinct profiles highlighted 19 genes that were rapidly up-regulated within 30 minutes and returned to basal levels by 24 h; 76 genes that were up-regulated between 2 and 8 hours and sustained high levels of expression until 24 h; and 10 genes that were down-regulated. Quantitative real-time RT-PCR on selected genes was used to confirm the results from the microarray analysis. The results indicate that there is a dynamic process involving gene activation and regulatory mechanisms re-establishing homeostasis in the ConA-activated lymphocytes. The Bovine Innate Immune Microarray was also used to determine the cross-species hybridisation capabilities of an ovine PBL sample. Conclusion: The Bovine Innate Immune Microarray has been developed, containing a set of well-characterised genes and anonymous cDNAs from a number of different bovine cell types. The microarray can be used to determine the gene expression profiles underlying innate immune responses in cattle and sheep.