865 results for Collaborative knowledge building
Abstract:
Research capacity can be built by collaboration between industry and universities, and Knowledge Transfer Partnerships (KTPs) are an ideal way to do this. While good collaboration and teamwork have been recognised as crucial for success, projects tend to be evaluated on outcomes rather than on collaboration effectiveness. This paper discusses best practice for how a KTP project team might work together effectively.
Abstract:
Despite years of effort in building organisational taxonomies, the potential of ontologies to support knowledge management in complex technical domains is under-exploited. The authors of this chapter present an approach to using rich domain ontologies to support sense-making tasks associated with resolving mechanical issues. Using Semantic Web technologies, the authors have built a framework and a suite of tools that support the whole semantic knowledge lifecycle. These are presented by describing the process of issue resolution for a simulated investigation concerning failure of bicycle brakes. The work has focused on ensuring that semantic tasks fit into users’ everyday tasks, to achieve user acceptability, and on supporting the flexibility required by communities of practice with differing local sub-domains, tasks, and terminology.
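The abstract does not spell out the framework's internals, but the general pattern of querying a domain ontology during issue resolution can be sketched. The snippet below is a minimal illustration only, using rdflib and hypothetical class and property names (ex:Issue, ex:hasSymptom, ex:affectsComponent, ex:BrakeAssembly), not the authors' actual ontology or tools:

```python
# Minimal sketch of ontology-supported issue resolution; every term in the
# ex: namespace is hypothetical and stands in for a rich domain ontology.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/issues#")
g = Graph()
g.bind("ex", EX)

# Describe a simulated investigation: a reported failure of bicycle brakes.
g.add((EX.issue42, RDF.type, EX.Issue))
g.add((EX.issue42, EX.hasSymptom, Literal("brake lever travels to the handlebar")))
g.add((EX.issue42, EX.affectsComponent, EX.FrontBrakeCaliper))
g.add((EX.FrontBrakeCaliper, RDFS.subClassOf, EX.BrakeAssembly))

# Sense-making query: which issues touch any part of the brake assembly?
results = g.query("""
    SELECT ?issue ?symptom WHERE {
        ?issue a ex:Issue ;
               ex:hasSymptom ?symptom ;
               ex:affectsComponent ?part .
        ?part rdfs:subClassOf* ex:BrakeAssembly .
    }""", initNs={"ex": EX, "rdfs": RDFS})

for issue, symptom in results:
    print(issue, symptom)
```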
Abstract:
Kraljic’s (1983) purchasing portfolio approach holds that different types of purchases need different sourcing strategies, underpinned by distinct sets of resources and practices. The approach is widely deployed in business and extensively researched, and yet little research has been conducted on how knowledge and skills vary across a portfolio of purchases. This study extends the body of knowledge on purchasing portfolio management, on its application in the strategic development of purchasing in an organization, and on human resource management in the purchasing function. A novel approach to profiling purchasing skills is proposed, which is well suited to dynamic environments that require flexibility. In a survey, experienced purchasing personnel described a specific purchase and profiled the skills required for effective performance in purchasing that item. Purchases were categorized according to their importance to the organization (internally-oriented evaluation of cost and production factors) and to the supply market (externally-oriented evaluation of commercial risk and uncertainty). Through cluster analysis, three key types of purchase situations were identified. The skills required for effective purchasing vary significantly across the three clusters (for 22 skills, p<0.01). Prior research shows that global organizations use the purchasing portfolio approach to develop sourcing strategies, but also aggregate analyses to inform the design of purchasing arrangements (local vs global) and to develop their improvement plans. Such organizations would also benefit from profiling skills by purchase type. We demonstrate how the survey can be adapted to provide a management tool for global firms seeking to improve procurement capability, flexibility and performance.
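As a purely illustrative sketch (synthetic ratings rather than the survey data, and scikit-learn/SciPy as assumed tooling), clustering purchases on the two portfolio dimensions and then testing whether a skill differs across clusters could look like this:

```python
# Illustrative sketch only (synthetic data, not the survey results): cluster
# purchases on the two portfolio dimensions, then test whether a skill's
# required level differs across the resulting clusters.
import numpy as np
from sklearn.cluster import KMeans
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
# Columns: importance to the organization, supply-market risk (1-7 ratings).
portfolio = rng.uniform(1, 7, size=(90, 2))
# One example skill rating per purchase (e.g. "negotiation"), also on 1-7.
skill_negotiation = rng.uniform(1, 7, size=90)

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(portfolio)

groups = [skill_negotiation[clusters == c] for c in range(3)]
stat, p = f_oneway(*groups)
print(f"ANOVA across 3 purchase clusters: F={stat:.2f}, p={p:.3f}")
```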
Abstract:
The Resource Space Model is a data model that can effectively and flexibly manage digital resources in cyber-physical systems from multidimensional and hierarchical perspectives. This paper focuses on constructing resource spaces automatically. We propose a framework that organizes a set of digital resources according to different semantic dimensions, combining human background knowledge from WordNet and Wikipedia. The construction process includes four steps: extracting candidate keywords, building semantic graphs, detecting semantic communities and generating the resource space. An unsupervised statistical topic model (Latent Dirichlet Allocation, LDA) is applied to extract candidate keywords for the facets. To better interpret the meanings of the facets found by LDA, we map the keywords to Wikipedia concepts, calculate word relatedness using WordNet's noun synsets and construct the corresponding semantic graphs. Semantic communities are then identified with the Girvan-Newman (GN) algorithm. After extracting candidate axes based on the Wikipedia concept hierarchy, the final axes of the resource space are ranked and selected through three different ranking strategies. The experimental results demonstrate that the proposed framework can organize resources automatically and effectively.
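A rough sketch of the pipeline's shape follows; the corpus, thresholds and parameters are illustrative, and the libraries (gensim, NLTK with the WordNet corpus, networkx) are assumptions rather than necessarily the authors' implementation:

```python
# Sketch of the four-step flavor: candidate keywords via LDA, a WordNet
# relatedness graph, and communities via Girvan-Newman as candidate facets.
# Requires: gensim, networkx, nltk (with the 'wordnet' corpus downloaded).
import networkx as nx
from gensim import corpora, models
from networkx.algorithms.community import girvan_newman
from nltk.corpus import wordnet as wn

docs = [["bicycle", "brake", "wheel", "repair"],
        ["server", "network", "protocol", "router"],
        ["brake", "caliper", "wheel", "spoke"]]

# 1) Candidate keywords from an LDA topic model.
dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, random_state=0)
keywords = {w for t in range(2) for w, _ in lda.show_topic(t, topn=4)}

# 2) Semantic graph: edges weighted by WordNet noun-synset relatedness.
G = nx.Graph()
for a in keywords:
    for b in keywords:
        if a >= b:
            continue
        syn_a, syn_b = wn.synsets(a, pos=wn.NOUN), wn.synsets(b, pos=wn.NOUN)
        if syn_a and syn_b:
            sim = syn_a[0].wup_similarity(syn_b[0]) or 0.0
            if sim > 0.5:                      # arbitrary threshold
                G.add_edge(a, b, weight=sim)

# 3) Semantic communities via the Girvan-Newman algorithm; each community
#    is a candidate dimension (facet) of the resource space.
communities = next(girvan_newman(G)) if G.number_of_edges() else []
print([sorted(c) for c in communities])
```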
Abstract:
The paper gives an overview of the ongoing FP6-IST INFRAWEBS project and describes the main layers and software components embedded in an application-oriented realisation framework. An important part of INFRAWEBS is the Semantic Web Unit (SWU), a collaboration platform and interoperable middleware for ontology-based handling and maintenance of Semantic Web Services (SWS). The framework provides knowledge about a specific domain and relies on ontologies to structure and exchange this knowledge with the semantic service development modules. INFRAWEBS Designer and Composer are sub-modules of the SWU responsible for creating Semantic Web Services using a Case-Based Reasoning approach. The Service Access Middleware (SAM) is responsible for building up the communication channels between users and the various other modules, and serves as a generic middleware for the deployment of Semantic Web Services. This software toolset provides a development framework supporting the full life-cycle of Semantic Web Services, with specific application support.
Abstract:
Advances in learning technology now need to emphasize individual learning alongside the popular focus on the technology itself. Unlike most research, which concentrates on ways to build, manage, classify, categorize and search knowledge on the server, our work looks at knowledge development in the individual's learning. We build the technology behind a knowledge sharing platform where an individual's learning and sharing activities take place. The system we built, KFTGA (Knowledge Flow Tracer and Growth Analyzer), demonstrates the capability of identifying the topics and subjects an individual engages with during a knowledge sharing session and of measuring that individual's knowledge growth on a specific subject over a given time span.
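KFTGA's internal metrics are not described in the abstract; the following is only a hypothetical sketch of how topic engagement per sharing session might be tallied, with made-up activity data and topic vocabularies:

```python
# Hypothetical sketch in the spirit of a knowledge-flow tracer: tag each
# sharing activity with topic terms and report per-topic engagement over time.
from collections import Counter, defaultdict

# (session timestamp, text contributed by the learner) - illustrative only.
activities = [
    (1, "notes on sparql queries and rdf graphs"),
    (2, "more rdf modelling and a first owl ontology draft"),
    (3, "python script to load the ontology"),
]
topics = {"semantic web": {"sparql", "rdf", "owl", "ontology"},
          "programming": {"python", "script"}}

growth = defaultdict(Counter)
for ts, text in activities:
    words = set(text.split())
    for topic, vocab in topics.items():
        growth[topic][ts] = len(words & vocab)

for topic, per_session in growth.items():
    sessions = sum(1 for v in per_session.values() if v)
    print(f"{topic}: engaged in {sessions} session(s), "
          f"cumulative term hits = {sum(per_session.values())}")
```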
Abstract:
The paper explores the functionality of eight start pages and considers their usefulness as mashable platforms for deploying personal learning environments (PLEs) for self-organized learners. The Web 2.0 effects and eLearning 2.0 strategies are examined from the point of view of how they influence the methods of gathering and capturing data, information and knowledge, and the learning process. Mashup technology is studied in order to see what kinds of components can be used in realizing a PLE. A model of a PLE for self-organized learners is developed and used to prototype a personal learning and research environment in the start pages Netvibes, Pageflakes and iGoogle.
Abstract:
One of the ultimate aims of Natural Language Processing is to automate the analysis of the meaning of text. A fundamental step in that direction consists in enabling effective ways to automatically link textual references to their referents, that is, real world objects. The work presented in this paper addresses the problem of attributing a sense to proper names in a given text, i.e., automatically associating words representing Named Entities with their referents. The method for Named Entity Disambiguation proposed here is based on the concept of semantic relatedness, which in this work is obtained via a graph-based model over Wikipedia. We show that, without building the traditional bag of words representation of the text, but instead only considering named entities within the text, the proposed method achieves results competitive with the state-of-the-art on two different datasets.
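A toy illustration of the underlying idea (not the paper's actual model or its Wikipedia-derived graph) is shown below: candidate referents for an ambiguous mention are scored by graph-based relatedness to the other entities in the text:

```python
# Toy named-entity disambiguation by graph-based relatedness; the link graph
# and the relatedness measure (inverse path length) are illustrative only.
import networkx as nx

# Tiny hand-made link graph between Wikipedia-style article titles.
wiki = nx.Graph()
wiki.add_edges_from([
    ("Paris", "France"), ("France", "Seine"), ("Paris", "Seine"),
    ("Paris, Texas", "Texas"), ("Texas", "United States"),
])

def relatedness(a, b):
    """Relatedness as inverse shortest-path distance in the link graph."""
    try:
        return 1.0 / (1 + nx.shortest_path_length(wiki, a, b))
    except nx.NetworkXNoPath:
        return 0.0

def disambiguate(candidates, context_entities):
    """Pick the candidate referent most related to the surrounding entities."""
    return max(candidates,
               key=lambda c: sum(relatedness(c, e) for e in context_entities))

# "Paris" in a text that also mentions the Seine and France: the capital wins.
print(disambiguate(["Paris", "Paris, Texas"], ["Seine", "France"]))
```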
Abstract:
This research takes a dynamic view on the knowledge coordination process, aiming to explain how the process is affected by changes in the operating environment, from normal situations to emergencies in traditional and fast-response organizations, and why these changes occur. We first conceptualize the knowledge coordination process by distinguishing between four dimensions - what, when, how and who - that together capture the full scope of the knowledge coordination process. We use these dimensions to analyze knowledge coordination practices and the activities constituting these practices, in the IT functions of traditional and fast-response (military) organizations where we distinguish between "normal" and "emergency" operating conditions. Our findings indicate that (i) inter-relationships between knowledge coordination practices change under different operating conditions, and (ii) the patterns of change are different in traditional and fast-response organizations.
Abstract:
This paper presents research on the linguistic structure of Bulgarian bell knowledge. The idea of building a semantic structure for Bulgarian bells emerged during the “Multimedia fund - BellKnow” project, which collected a large amount of data about bells: their structure, history, technical data, etc. This is the first attempt to describe bell knowledge in computational-linguistic terms and to deliver a semantic representation of that knowledge. Based on this research, linguistic components that perform different types of analysis of text objects are implemented in term dictionaries. Thus, we lay the foundation for linguistic analysis services in these digital dictionaries, aiding research into the kinds, number and frequency of the lexical units that constitute various bell objects.
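As a deliberately small, hypothetical sketch of the kind of frequency analysis such term dictionaries could support (the terms and descriptions below are invented, not taken from the BellKnow data):

```python
# Count occurrences of lexical units from a term dictionary in bell
# descriptions; both the lexicon and the texts are illustrative only.
from collections import Counter

descriptions = [
    "bronze bell with crown, shoulder and sound bow",
    "small bell, cast bronze, inscription on the waist",
    "bell with decorated crown and wide sound bow",
]
lexicon = {"crown", "shoulder", "waist", "sound bow", "inscription", "bronze"}

freq = Counter()
for text in descriptions:
    for unit in lexicon:
        freq[unit] += text.count(unit)

for unit, n in freq.most_common():
    print(f"{unit}: {n}")
```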
Abstract:
We describe an ontological representation of data in an archive containing detailed descriptions of church bells. As an object of cultural heritage, a bell has general properties such as its geometric dimensions, weight, sound, and the pitch of its tone, as well as acoustical diagrams obtained using contemporary equipment. We use the Protégé platform to define the basic ontological objects and the relations between them.
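The abstract does not list the concrete classes and properties; a minimal sketch of the kind of OWL structure such an ontology could contain, written with rdflib and entirely hypothetical names (bell:Bell, bell:weightKg, bell:pitch), might look like this:

```python
# Minimal sketch of an OWL/RDF bell ontology; class, property and instance
# names are hypothetical, not the authors' Protégé model.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS, XSD

BELL = Namespace("http://example.org/bells#")
g = Graph()
g.bind("bell", BELL)

# A Bell class with datatype properties for physical and acoustic features.
g.add((BELL.Bell, RDF.type, OWL.Class))
for prop in (BELL.weightKg, BELL.pitch, BELL.diameterCm):
    g.add((prop, RDF.type, OWL.DatatypeProperty))
    g.add((prop, RDFS.domain, BELL.Bell))

# One described instance from an imaginary archive entry.
g.add((BELL.bell_017, RDF.type, BELL.Bell))
g.add((BELL.bell_017, BELL.weightKg, Literal(312.5, datatype=XSD.decimal)))
g.add((BELL.bell_017, BELL.pitch, Literal("A#3")))

print(g.serialize(format="turtle"))
```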
Abstract:
GitHub is the most popular repository for open source code (Finley 2011). It has more than 3.5 million users, as the company declared in April 2013, and more than 10 million repositories as of December 2013. It has a publicly accessible API and, since March 2012, it also publishes a stream of all the events occurring on public projects. Interactions among GitHub users are of a complex nature and take place in different forms. Developers create and fork repositories, push code, approve code pushed by others, bookmark their favorite projects and follow other developers to keep track of their activities. In this paper we present a characterization of GitHub, as both a social network and a collaborative platform. To the best of our knowledge, this is the first quantitative study of the interactions happening on GitHub. We analyze the logs from the service over 18 months (between March 11, 2012 and September 11, 2013), describing 183.54 million events, and we obtain information about 2.19 million users and 5.68 million repositories, both growing linearly in time. We show that the distributions of the number of contributors per project, watchers per project and followers per user follow a power-law-like shape. We analyze social ties and repository-mediated collaboration patterns, and we observe a remarkably low level of reciprocity in the social connections. We also measure the activity of each user in terms of authored events, and we observe that very active users do not necessarily have a large number of followers. Finally, we provide a geographic characterization of the centers of activity and we investigate how distance influences collaboration.
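Two of the measurements described, reciprocity of follow relations and the follower distribution per user, can be illustrated on a toy graph (made-up edges, not the GitHub event data), for example with networkx:

```python
# Toy illustration of reciprocity and follower counts on a directed
# "follows" graph; an edge u -> v means user u follows user v.
from collections import Counter
import networkx as nx

follows = nx.DiGraph([
    ("alice", "bob"), ("carol", "bob"), ("dave", "bob"),
    ("bob", "alice"), ("erin", "alice"), ("dave", "carol"),
])

# Fraction of follow links that are reciprocated.
print("reciprocity:", nx.reciprocity(follows))

# Follower counts (in-degree) and how many users have each count,
# i.e. the empirical distribution whose tail the paper finds power-law-like.
followers = Counter(dict(follows.in_degree()).values())
for k in sorted(followers):
    print(f"{followers[k]} user(s) with {k} follower(s)")
```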
Abstract:
Topic classification (TC) of short text messages offers an effective and fast way to reveal events happening around the world, ranging from those related to Disaster (e.g. Hurricane Sandy) to those related to Violence (e.g. the Egypt revolution). Previous approaches to TC have mostly focused on exploiting individual knowledge sources (KSs) (e.g. DBpedia or Freebase) without considering the graph structures that surround concepts present in KSs when detecting the topics of Tweets. In this paper we introduce a novel approach for harnessing such graph structures from multiple linked KSs by: (i) building a conceptual representation of the KSs, (ii) leveraging contextual information about concepts by exploiting semantic concept graphs, and (iii) providing a principled way to combine KSs. Experiments evaluating our TC classifier in the context of Violence detection (VD) and Emergency Responses (ER) show promising results that significantly outperform various baseline models, including an approach using a single KS without linked data and an approach using only Tweets.
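A schematic sketch of the general idea follows, with hypothetical concept lookups standing in for the linked knowledge sources and scikit-learn as an assumed classifier backend; it is not the paper's actual pipeline:

```python
# Enrich each short message with concepts from more than one (stand-in)
# knowledge source and feed the combined features to a simple classifier.
from sklearn.feature_extraction import DictVectorizer
from sklearn.naive_bayes import MultinomialNB

def concepts_ks1(text):      # stand-in for a DBpedia-style concept lookup
    return {"ks1:" + w for w in text.lower().split()
            if w in {"hurricane", "flood", "protest"}}

def concepts_ks2(text):      # stand-in for a Freebase-style concept lookup
    return {"ks2:" + w for w in text.lower().split()
            if w in {"storm", "riot", "evacuation"}}

def featurize(text):
    return {f: 1 for f in concepts_ks1(text) | concepts_ks2(text)}

tweets = ["Hurricane approaching, evacuation ordered",
          "Protest turned into a riot downtown",
          "Flood warnings after the storm"]
labels = ["disaster", "violence", "disaster"]

vec = DictVectorizer()
X = vec.fit_transform([featurize(t) for t in tweets])
clf = MultinomialNB().fit(X, labels)

print(clf.predict(vec.transform([featurize("storm and flood in the city")])))
```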
Abstract:
This research aims to contribute to understanding the implementation of knowledge management systems (KMS) in the field of health through a case study, leading to theory building and theory extension. We use the concept of the business process approach to knowledge management as a theoretical lens to analyse and explore how a large teaching hospital developed, executed and practically implemented a KMS. A qualitative study was conducted over a 2.5-year period, with data collected from semi-structured interviews with eight members of the strategic management team, 12 clinical users and 20 patients, in addition to non-participant observation of meetings and documents. The theoretical propositions strategy was used as the overarching approach for data analysis. Our case study provides evidence that truly patient-centred approaches to supporting care delivery with a KMS benefit from process thinking at both the planning and implementation stages, and from an emphasis on the knowledge demands resulting from the activities along the care pathways, the points where cross-overs in care occur, and knowledge sharing for the integration of care. The findings also suggest that despite theoretical awareness of KMS implementation methodologies, the actual execution of such systems requires practice and learning. Flexible, fluid approaches through rehearsal are important, and communication strategies should focus heavily on transparency, incorporating both structured and unstructured communication methods.