87 results for 080701 Aboriginal and Torres Strait Islander Knowledge Management
Abstract:
The work reported in this paper is part of a project simulating maintenance operations in an automotive engine production facility. The decisions made by the people in charge of these operations form a crucial element of this simulation. Eliciting this knowledge is problematic. One approach is to use the simulation model as part of the knowledge elicitation process. This paper reports on the experience so far with using a simulation model to support knowledge management in this way. Issues are discussed regarding the data available, the use of the model, and the elicitation process itself. © 2004 Elsevier B.V. All rights reserved.
Abstract:
Purpose - The idea that knowledge needs to be codified is central to many claims that knowledge can be managed. However, there appear to be no empirical studies in the knowledge management context that examine the process of knowledge codification. This paper therefore seeks to explore codification as a knowledge management process. Design/methodology/approach - The paper draws on findings from research conducted around a knowledge management project in a section of the UK Post Office, using a methodology of participant-observation. Data were collected through observations of project meetings, correspondence between project participants, and individual interviews. Findings - The principal findings about the nature of knowledge codification are first, that the process of knowledge codification also involves the process of defining the codes needed to codify knowledge, and second, that people who participate in the construction of these codes are able to interpret and use the codes more similarly. From this it can be seen that the ability of people to decodify codes similarly places restrictions on the transferability of knowledge between them. Research limitations/implications - The paper therefore argues that a new conceptual approach is needed for the role of knowledge codification in knowledge management that emphasizes the importance of knowledge decodification. Such an approach would start with one's ability to decodify rather than codify knowledge as a prerequisite for knowledge management. Originality/value - The paper provides a conceptual basis for explaining limitations to the management and transferability of knowledge. © Emerald Group Publishing Limited.
Abstract:
Knowledge has been a subject of interest and inquiry for thousands of years, since at least the time of the ancient Greeks, and no doubt even before that. “What is knowledge” continues to be an important topic of discussion in philosophy. More recently, interest in managing knowledge has grown in step with the perception that increasingly we live in a knowledge-based economy. Drucker (1969) is usually credited as being the first to popularize the knowledge-based economy concept, linking the importance of knowledge with rapid technological change. Karl Wiig coined the term knowledge management (hereafter KM) for a NATO seminar in 1986, and its popularity took off following the publication of Nonaka and Takeuchi’s book “The Knowledge Creating Company” (Nonaka & Takeuchi, 1995). Knowledge creation is in fact just one of many activities involved in KM. Others include sharing, retaining, refining, and using knowledge. There are many such lists of activities (Holsapple & Joshi, 2000; Probst, Raub, & Romhardt, 1999; Skyrme, 1999; Wiig, De Hoog, & Van der Spek, 1997). Both academic and practical interest in KM has continued to increase throughout the last decade. In this article, first the different types of knowledge are outlined; then comes a discussion of various routes by which knowledge management can be implemented, advocating a process-based route. An explanation follows of how people, processes, and technology need to fit together for effective KM, and some examples of this route in use are given. Finally, there is a look towards the future.
Abstract:
Enterprise Risk Management (ERM) and Knowledge Management (KM) both encompass top-down and bottom-up approaches developing and embedding risk knowledge concepts and processes in strategy, policies, risk appetite definition, the decision-making process and business processes. The capacity to transfer risk knowledge affects all stakeholders, and understanding of the risk knowledge about the enterprise's value is a key requirement in order to identify protection strategies for business sustainability. There are various factors that affect this capacity for transferring and understanding. Previous work has established that there is a difference between the influence of KM variables on Risk Control and on the perceived value of ERM. Communication among groups appears as a significant variable in improving Risk Control but only as a weak factor in improving the perceived value of ERM. However, the ERM mandate requires for its implementation a clear understanding of risk management (RM) policies, actions and results, and the use of the integral view of RM as a governance and compliance program to support the value-driven management of the organization. Furthermore, ERM implementation demands better capabilities for unifying the criteria of risk analysis and aligning policies and protection guidelines across the organization. These capabilities can be affected by risk knowledge sharing between the RM group and the Board of Directors and other executives in the organization. This research presents an exploratory analysis of risk knowledge transfer variables used in risk management practice. A survey of risk management executives from 65 firms in various industries was undertaken and 108 answers were analyzed. Potential relationships among the variables are investigated using descriptive statistics and multivariate statistical models.
The level of understanding of risk management policies and reports by the board is related to the quality of the flow of communication in the firm and to the perceived level of integration of the risk policy in the business processes.
Abstract:
While much of a company's knowledge can be found in text repositories, current content management systems have limited capabilities for structuring and interpreting documents. In the emerging Semantic Web, search, interpretation and aggregation can be addressed by ontology-based semantic mark-up. In this paper, we examine semantic annotation, identify a number of requirements, and review the current generation of semantic annotation systems. This analysis shows that, while there is still some way to go before semantic annotation tools are able to fully address all knowledge management needs, research in the area is active and making good progress.
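The core idea of ontology-based semantic mark-up can be illustrated with a toy annotator. The namespace prefix, the lexicon and the document below are invented for this sketch; real systems use NLP and established vocabularies rather than plain string matching.

```python
# Minimal sketch of ontology-based semantic annotation (illustrative only).
# The "ex:" namespace and the surface-form lexicon are hypothetical.

ONTOLOGY = {
    # surface form -> ontology concept URI (invented for the example)
    "knowledge management": "ex:KnowledgeManagement",
    "ontology": "ex:Ontology",
    "semantic web": "ex:SemanticWeb",
}

def annotate(text):
    """Return (surface form, start offset, concept URI) for each known mention."""
    lowered = text.lower()
    annotations = []
    for form, concept in ONTOLOGY.items():
        start = lowered.find(form)
        while start != -1:
            annotations.append((text[start:start + len(form)], start, concept))
            start = lowered.find(form, start + 1)
    return sorted(annotations, key=lambda a: a[1])

doc = "The Semantic Web relies on ontology-based mark-up for knowledge management."
for surface, offset, concept in annotate(doc):
    print(f"{offset:3d}  {surface!r:25s} -> {concept}")
```

The offsets make the annotations machine-consumable: a downstream system can aggregate documents by concept URI rather than by keyword, which is the search-and-aggregation benefit the abstract describes.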
Abstract:
This article takes the perspective that risk knowledge and the activities related to RM practice can benefit from the implementation of KM processes and systems, producing a better enterprise-wide implementation of risk management. Both in the information systems discipline and elsewhere, there has been a trend towards greater integration and consolidation in the management of organizations. Some examples of this are Enterprise Resource Planning (Stevens, 2003), Enterprise Architecture (Zachman, 1996) and Enterprise Content Management (Smith & McKeen, 2003). Similarly, risk management is evolving into Enterprise Risk Management, and KM, with its role in breaking down silos within an organization, can help it do so.
Abstract:
This paper starts from the viewpoint that enterprise risk management is a specific application of knowledge in order to control deviations from strategic objectives, shareholders’ values and stakeholders’ relationships. This study seeks insights into how the application of knowledge management processes can improve the implementation of enterprise risk management. This article presents the preliminary results of a survey on this topic carried out in the financial services sector, extending a previous pilot study that covered retail banking only. Five hypotheses about the relationship of knowledge management variables to the perceived value of ERM implementation were considered. The survey results show that the two people-related variables, perceived quality of communication among groups and perceived quality of knowledge sharing, were positively associated with the perceived value of ERM implementation. However, the results did not support a positive association for the three variables more related to technology, namely network capacity for connecting people (which was marginally significant), risk management information system functionality and perceived integration of the information systems. Perceived quality of communication among groups appeared to be clearly the most significant of these five factors in affecting the perceived value of ERM implementation.
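The kind of association test behind findings like these can be sketched as follows. The Likert-scale responses are fabricated for illustration and the correlation computed is not from the study; a real analysis would also report significance.

```python
# Hedged sketch: testing whether a KM variable is associated with the
# perceived value of ERM, in the spirit of the survey analysis above.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical 1-5 Likert responses from ten respondents (invented data).
communication_quality = [4, 5, 3, 4, 2, 5, 4, 3, 5, 2]
perceived_erm_value   = [4, 5, 3, 4, 2, 4, 5, 3, 5, 1]

r = pearson(communication_quality, perceived_erm_value)
print(f"r = {r:.2f}")  # a strongly positive r supports the hypothesis
```

A survey study would run this (or a multivariate model) for each of the five hypothesised variables and compare effect sizes, which is how one variable emerges as "clearly the most significant".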
Abstract:
Reliability modelling and verification is indispensable in modern manufacturing, especially for product development risk reduction. Based on the discussion of the deficiencies of traditional reliability modelling methods for process reliability, a novel modelling method is presented herein that draws upon a knowledge network of process scenarios based on the analytic network process (ANP). An integration framework of manufacturing process reliability and product quality is presented together with a product development and reliability verification process. According to the roles of key characteristics (KCs) in manufacturing processes, KCs are organised into four clusters, that is, product KCs, material KCs, operation KCs and equipment KCs, which represent the process knowledge network of manufacturing processes. A mathematical model and algorithm are developed for calculating the reliability requirements of KCs with respect to different manufacturing process scenarios. A case study on valve-sleeve component manufacturing is provided as an application example of the new reliability modelling and verification procedure, in which the methodology is applied to manage and deploy production resources.
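The ANP computation at the heart of such a method can be sketched with a toy weighted supermatrix over the four KC clusters named above. The matrix values are invented; in a real ANP application they come from pairwise-comparison judgements.

```python
# Illustrative ANP-style sketch: propagate influence among KC clusters until
# the priorities stabilise. W is a 4x4 weighted supermatrix (each column sums
# to 1); the numbers are invented for the example.

KCS = ["product", "material", "operation", "equipment"]

# W[i][j]: influence of KC cluster i on KC cluster j.
W = [
    [0.40, 0.30, 0.25, 0.20],
    [0.25, 0.30, 0.25, 0.30],
    [0.20, 0.25, 0.30, 0.25],
    [0.15, 0.15, 0.20, 0.25],
]

def limit_priorities(W, iterations=100):
    """Raise the supermatrix to a high power by repeated multiplication;
    every column converges to the same limiting priority vector."""
    n = len(W)
    M = [row[:] for row in W]
    for _ in range(iterations):
        M = [[sum(M[i][k] * W[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
    return [M[i][0] for i in range(n)]  # read the vector from column 0

for kc, p in zip(KCS, limit_priorities(W)):
    print(f"{kc:9s} {p:.3f}")
```

The limiting vector gives global priorities for the clusters; reliability requirements can then be allocated to KCs in proportion to these priorities for each process scenario.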
Abstract:
The world is in a period of reflection about social and economic models. In particular there is a review of the capacities that countries have for improving their competitiveness. The experiences in a society are part of the process of learning and knowledge development in that society, especially in the development of communities. Risks appear continually in the process of the search for, analysis and implementation of solutions to problems. This paper discusses the issues related to the improvement of productivity and knowledge in a society, the risk that poor or even declining productivity brings to the communities, and the need to develop people who support the decision-making process in communities. The approach to improving the communities' development is through the design of a research programme in knowledge management based on distance learning. The research programme implementation is designed to provide value added to the decisions in communities in order to use collective intelligence, solve collective problems and achieve goals that support local solutions. This programme is organized and focused on four intelligence areas: artificial, collective, sentient and strategic. These areas are productivity related and seek to reduce the risk of lack of competitiveness through formal and integrated problem analysis. In a country such as Colombia, where different regions face varying problems to solve and there is a low level of infrastructure, the factors of production such as knowledge, skilled labour and "soft" infrastructure can be a way to develop the society. This entails using the local physical resources adequately for creating value with the support of people in the region to lead the analysis and search for solutions in the communities. The paper will describe the framework and programme and suggest how it could be applied in Colombia.
Abstract:
At the moment, the phrases "big data" and "analytics" are often being used as if they were magic incantations that will solve all an organization's problems at a stroke. The reality is that data on its own, even with the application of analytics, will not solve any problems. The resources that analytics and big data can consume represent a significant strategic risk if applied ineffectively. Any analysis of data needs to be guided, and to lead to action. So while analytics may lead to knowledge and intelligence (in the military sense of that term), it also needs the input of knowledge and intelligence (in the human sense of that term). And somebody then has to do something new or different as a result of the new insights, or the exercise will have served no purpose. Using an analytics example concerning accounts payable in the public sector in Canada, this paper reviews thinking from the domains of analytics, risk management and knowledge management, to show some of the pitfalls, and to present a holistic picture of how knowledge management might help tackle the challenges of big data and analytics.
Abstract:
Ontologies have become widely accepted as the main method for representing knowledge in Knowledge Management (KM) applications. Given the continuous and rapid change and dynamic nature of knowledge in all fields, automated methods for constructing ontologies are of great importance. All ontologies or taxonomies currently in use have been hand built and require considerable manpower to keep up to date. Taxonomies are less logically rigorous than ontologies, and in this paper we consider the requirements for a system which automatically constructed taxonomies. There are a number of potentially useful methods for constructing hierarchically organised concepts from a collection of texts and there are a number of automatic methods which permit one to associate one word with another. The important issue for the successful development of this research area is to identify techniques for labelling the relation between two candidate terms, if one exists. We consider a number of possible approaches and argue that the majority are unsuitable for our requirements.
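One well-known family of techniques for labelling the relation between two candidate terms is lexico-syntactic patterns in the style of Hearst (1992). A minimal sketch, assuming a single "such as" pattern; real systems use many patterns and parsed text rather than one regex.

```python
# Hedged sketch: label an ISA (hypernym) relation between candidate terms
# using a single Hearst-style "such as" surface pattern.
import re

PATTERN = re.compile(
    r"(?P<hyper>\w+(?: \w+)?) such as (?P<hypo>\w+)"
)

def isa_pairs(text):
    """Return (hyponym, hypernym) pairs found via the 'such as' pattern."""
    return [(m.group("hypo"), m.group("hyper")) for m in PATTERN.finditer(text)]

text = "Formal representations such as ontologies support knowledge management."
print(isa_pairs(text))  # [('ontologies', 'Formal representations')]
```

The pattern both finds a candidate term pair and labels the relation (ISA) in one step, which is exactly the labelling problem the abstract identifies as central.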
Abstract:
Automatic ontology building is a vital issue in many fields where ontologies are currently built manually. This paper presents a user-centred methodology for ontology construction based on the use of Machine Learning and Natural Language Processing. In our approach, the user selects a corpus of texts and sketches a preliminary ontology (or selects an existing one) for a domain, with a preliminary vocabulary associated with the elements in the ontology (lexicalisations). Examples of sentences involving such lexicalisations (e.g. of the ISA relation) in the corpus are automatically retrieved by the system. Retrieved examples are validated by the user and used by an adaptive Information Extraction system to generate patterns that discover other lexicalisations of the same objects in the ontology, possibly identifying new concepts or relations. New instances are added to the existing ontology or used to tune it. This process is repeated until a satisfactory ontology is obtained. The methodology largely automates the ontology construction process, and the output is an ontology with an associated trained learner to be used for further ontology modifications.
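The bootstrapping loop described (seed examples, induced patterns, discovery of new lexicalisations) can be sketched minimally. The corpus, the seed pair and the naive string-based pattern generalisation are invented stand-ins for the adaptive Information Extraction system; the user-validation step is omitted.

```python
# Minimal sketch of pattern-induction bootstrapping for the ISA relation.
# Corpus and seeds are invented; a real system induces far richer patterns
# and has the user validate retrieved examples before learning from them.
import re

corpus = [
    "A valve is a kind of component.",
    "A taxonomy is a kind of artefact.",
    "A sensor is a kind of component.",
]
seeds = {("valve", "component")}  # known (hyponym, hypernym) pair

def induce_pattern(sentence, hypo, hyper):
    """Generalise a seed sentence into a regex by slotting out the two terms."""
    escaped = re.escape(sentence)
    escaped = escaped.replace(re.escape(hypo), r"(\w+)", 1)
    escaped = escaped.replace(re.escape(hyper), r"(\w+)", 1)
    return re.compile(escaped)

# Induce patterns from corpus sentences that contain a seed pair.
patterns = [induce_pattern(s, h1, h2)
            for s in corpus for (h1, h2) in seeds if h1 in s and h2 in s]

# Apply the induced patterns back to the corpus to find new pairs.
discovered = {m.groups() for pat in patterns for s in corpus
              if (m := pat.fullmatch(s))}
new = discovered - seeds

print(sorted(new))  # [('sensor', 'component'), ('taxonomy', 'artefact')]
```

Iterating (feeding `new` back into `seeds` and re-inducing) is the loop the methodology repeats until the ontology is judged satisfactory.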