9 results for Knowledge Identification

in Aston University Research Archive


Relevance:

70.00%

Abstract:

Purpose – The main purpose of this paper is to analyze knowledge management in service networks. It analyzes the knowledge management process and identifies related challenges. The authors take a strategic management approach instead of a more technology-oriented approach, since it is believed that managerial problems still remain after technological problems are solved. Design/methodology/approach – The paper explores the literature on the topic of knowledge management as well as the resource (or knowledge) based view of the firm. It offers conceptual insights and provides possible solutions for knowledge management problems. Findings – The paper discusses several possible solutions for managing knowledge processes in knowledge-intensive service networks. Solutions for knowledge identification/generation, knowledge application, knowledge combination/transfer and supporting the evolution of tacit network knowledge include personal and technological aspects, as well as organizational and cultural elements. Practical implications – In a complex environment, knowledge management and network management become crucial for business success. It is the task of network management to establish routines, and to build and regularly refresh meta-knowledge about the competencies and abilities that exist within the network. It is suggested that each network partner should be rated according to its contribution to the network knowledge base. Based on this rating, a particular network partner is a member of a certain knowledge club, meaning that the partner has access to a particular level of network knowledge. Such an established routine provides strong incentives to add knowledge to the network's knowledge base. Originality/value – This paper is a first attempt to outline the problems of knowledge management in knowledge-intensive service networks and, by so doing, to introduce strategic management reasoning to the discussion.
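The "knowledge club" routine suggested in the abstract can be sketched in a few lines. This is a hypothetical illustration only: the rating scale, tier names and thresholds are invented, not taken from the paper.

```python
# Hypothetical sketch of the "knowledge club" routine: each partner is rated
# by its contribution to the network knowledge base, and the rating determines
# which level of network knowledge the partner may access.
# All tier names and thresholds below are illustrative assumptions.

def knowledge_club(contribution_score):
    """Map a partner's contribution rating (0-100, assumed) to an access tier."""
    if contribution_score >= 80:
        return "gold"      # full access to network knowledge
    if contribution_score >= 50:
        return "silver"    # access to shared operational knowledge
    return "bronze"        # access to public network knowledge only

partners = {"A": 85, "B": 55, "C": 30}  # invented contribution ratings
clubs = {name: knowledge_club(score) for name, score in partners.items()}
print(clubs)  # {'A': 'gold', 'B': 'silver', 'C': 'bronze'}
```

Tying access levels to contribution in this way is what gives partners the incentive, noted in the abstract, to keep adding knowledge to the network's base.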

Relevance:

40.00%

Abstract:

Motivation: In molecular biology, molecular events describe observable alterations of biomolecules, such as binding of proteins or RNA production. These events might be responsible for drug reactions or the development of certain diseases. As such, biomedical event extraction, the process of automatically detecting descriptions of molecular interactions in research articles, has attracted substantial research interest recently. Event trigger identification, detecting the words describing the event types, is a crucial and prerequisite step in the pipeline process of biomedical event extraction. Taking the event types as classes, event trigger identification can be viewed as a classification task: for each word in a sentence, a trained classifier predicts whether the word corresponds to an event type and, if so, which one, based on the context features. Therefore, a well-designed feature set with a good level of discrimination and generalization is crucial for the performance of event trigger identification. Results: In this article, we propose a novel framework for event trigger identification. In particular, we learn biomedical domain knowledge from a large text corpus built from Medline and embed it into word features using neural language modeling. The embedded features are then combined with the syntactic and semantic context features using the multiple kernel learning method. The combined feature set is used for training the event trigger classifier. Experimental results on the gold-standard corpus show that a >2.5% improvement in F-score is achieved by the proposed framework when compared with the state-of-the-art approach, demonstrating its effectiveness. © 2014 The Author. The source code for the proposed framework is freely available and can be downloaded at http://cse.seu.edu.cn/people/zhoudeyu/ETI_Sourcecode.zip.

Relevance:

30.00%

Abstract:

The present global economic crisis creates doubts about the good use of accumulated experience and knowledge in managing risk in financial services. Typically, risk management practice does not use knowledge management (KM) to improve and to develop new answers to the threats. A key reason is that it is not clear how to break down the “organizational silos” view of risk management (RM) that is commonly taken. As a result, there has been relatively little work on finding the relationships between RM and KM. We have been researching the identification of relationships between these two disciplines for the last couple of years. At ECKM 2007 we presented a general review of the literature(s) and some hypotheses for starting research on KM and its relationship to the perceived value of enterprise risk management. This article presents findings based on our preliminary analyses, concentrating on those factors affecting the perceived quality of risk knowledge sharing. These come from a questionnaire survey of RM employees in organisations in the financial services sector, which yielded 121 responses. We included five explanatory variables for the perceived quality of risk knowledge sharing: two relating to people (organizational capacity for work coordination and perceived quality of communication among groups), one relating to process (perceived quality of risk control) and two relating to technology (web channel functionality and RM information system functionality). Our findings so far are that four of these five variables have a significant positive association with the perceived quality of risk knowledge sharing; contrary to expectations, web channel functionality did not have a significant association. Indeed, in some of our exploratory regression studies its coefficient (although not significant) was negative.
In stepwise regression, the variable organizational capacity for work coordination accounted for by far the largest part of the variation in the dependent variable perceived quality of risk knowledge sharing. The “people” variables thus appear to have the greatest influence on the perceived quality of risk knowledge sharing, even in a sector that relies heavily on technology and on quantitative approaches to decision making. We have also found similar results with the dependent variable perceived value of Enterprise Risk Management (ERM) implementation.
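The kind of association analysis described above can be sketched with a minimal least-squares fit of sharing quality on work coordination. The data points below are invented for illustration; the paper's 121 survey responses are not reproduced here, and a single-predictor fit stands in for the study's multivariate and stepwise regressions.

```python
# Illustrative ordinary-least-squares fit: perceived quality of risk
# knowledge sharing regressed on organizational capacity for work
# coordination. All scores below are invented Likert-style values.

def ols_slope_intercept(xs, ys):
    """Least-squares slope and intercept for a single predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sxy / sxx
    return b, my - b * mx

coordination = [1, 2, 3, 4, 5]     # hypothetical coordination scores
sharing_quality = [2, 2, 3, 4, 4]  # hypothetical sharing-quality scores
slope, intercept = ols_slope_intercept(coordination, sharing_quality)
print(round(slope, 2))  # a positive slope indicates a positive association
```

A significant positive slope is the single-variable analogue of the positive associations the survey reports for the "people" variables.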

Relevance:

30.00%

Abstract:

The International Cooperation Agency (identified in this article as IDEA) working in Colombia is one of the most important in Colombian society, with programs that support gender rights, human rights, justice and peace, scholarships, aboriginal populations, youth, Afro-descendant populations, economic development in communities, and environmental development. The problem identified stems from this diversified offer of services, collaboration and social intervention, which requires diverse groups of people with multiple agendas, ways to support their mandates, disciplines, and professional competences. Knowledge creation, and the growth and sustainability of the organization, can be endangered by a silo culture and the resulting reduced leverage of the separate groups' capabilities. Organizational memory is generally formed by the tacit knowledge of the organization's members, given the value of the accumulated experience that this kind of social work implies. Its loss is therefore a strategic and operational risk when most problem interventions rely on direct work in the socio-economic field and living real experiences with communities. The knowledge management solution presented in this article starts, first, with the identification of the people and groups concerned and the creation of a knowledge map as a means to strengthen the ties between organizational members; second, with the introduction of a content management system designed to support the documentation and knowledge sharing processes; and third, with a methodology for the adaptation of a Balanced Scorecard based on the knowledge management processes. These three main steps lead to a knowledge management “solution” that has been implemented in the organization, comprising three components: a knowledge management system, training support and promotion of cultural change.
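The knowledge-map step described above can be sketched as a simple expertise index: a record of members and competencies that lets anyone locate know-how across silos. The names and competencies below are invented for illustration; the article's actual map is not reproduced.

```python
# Minimal sketch of a "knowledge map": an index from organization members
# to competencies, queried to locate expertise across silo boundaries.
# All members and competencies are invented examples.

knowledge_map = {
    "Ana":   {"gender rights", "justice and peace"},
    "Luis":  {"economic development", "environmental development"},
    "Marta": {"youth programs", "justice and peace"},
}

def who_knows(topic):
    """Return, in sorted order, the members whose competencies include a topic."""
    return sorted(m for m, skills in knowledge_map.items() if topic in skills)

print(who_knows("justice and peace"))  # ['Ana', 'Marta']
```

Even this trivial structure shows the point of the map: it makes overlapping competencies visible, which is exactly what a silo culture hides.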

Relevance:

30.00%

Abstract:

Initially this thesis examines the various mechanisms by which technology is acquired within anodizing plants. In so doing, the history of the evolution of anodizing technology is recorded, with particular reference to the growth of major markets and to the contribution of the marketing efforts of the aluminium industry. The business economics of various types of anodizing plants are analyzed. Consideration is also given to the impact of developments in anodizing technology on production economics and market growth. The economic costs associated with work rejected for process defects are considered. Recent changes in the industry have created conditions whereby information technology has a potentially important role to play in retaining existing knowledge. One such contribution is exemplified by the expert system which has been developed for the identification of anodizing process defects. Instead of using a “rule-based” expert system, a commercial neural networks program has been adapted for the task. The advantage of neural networks over “rule-based” systems is that they are better suited to production problems, since the actual conditions prevailing when the defect was produced are often not known with certainty. In using the expert system, the user first identifies the process stage at which the defect probably occurred and is then directed to a file enabling the actual defects to be identified. After making this identification, the user can consult a database which gives a more detailed description of the defect, advises on remedial action and provides a bibliography of papers relating to the defect. The database uses a proprietary hypertext program, which also provides rapid cross-referencing to similar types of defect. Additionally, a graphics file can be accessed which (where appropriate) will display a graphic of the defect on screen.
A total of 117 defects are included, together with 221 literature references, supplemented by 48 cross-reference hyperlinks. The main text of the thesis contains 179 literature references. (DX186565)
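The consultation flow described above (pick a process stage, narrow to a defect, read its record with remedial advice and cross-references) can be sketched with a small lookup structure. The entries below are invented; the thesis's database holds 117 defects with 221 literature references and hypertext cross-links.

```python
# Sketch of the defect-identification flow: stage -> candidate defects ->
# detailed record with remedy and hypertext-style cross-references.
# Defect names, descriptions and remedies are invented examples.

DEFECTS = {
    "pitting": {
        "stage": "pretreatment",
        "description": "Small pits in the anodic film, e.g. from chloride attack.",
        "remedy": "Check rinse-water chloride levels; review etch conditions.",
        "see_also": [],
    },
    "burning": {
        "stage": "anodizing",
        "description": "Powdery film from local overheating at high current density.",
        "remedy": "Improve agitation and cooling; reduce current density.",
        "see_also": ["pitting"],
    },
}

def defects_at_stage(stage):
    """First step of the consultation: list candidate defects for a stage."""
    return sorted(name for name, rec in DEFECTS.items() if rec["stage"] == stage)

print(defects_at_stage("anodizing"))  # ['burning']
```

The `see_also` lists play the role of the hypertext cross-references to similar defect types.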

Relevance:

30.00%

Abstract:

The airway epithelium is the first point of contact in the lung for inhaled material, including infectious pathogens and particulate matter, and protects against toxicity from these substances by trapping and clearance via the mucociliary escalator, the presence of a protective barrier with tight junctions, and the initiation of a local inflammatory response. The inflammatory response involves recruitment of phagocytic cells to neutralise and remove invading materials and is often modelled using rodents. However, the development of valid in vitro airway epithelial models is of great importance due to the restrictions on animal studies for cosmetic compound testing implicit in the 7th amendment to the European Union Cosmetics Directive. Further, rodent innate immune responses differ fundamentally from those of humans. Pulmonary endothelial cells and leukocytes are also involved in the innate response initiated during pulmonary inflammation. Co-culture models of the airways, in particular where epithelial cells are cultured at air-liquid interface with the presence of tight junctions and differentiated mucociliary cells, offer a solution to this problem. Ideally, validated models will allow for detection of early biomarkers of response to exposure and investigation into the inflammatory response during exposure. This thesis describes the approaches taken towards developing an in vitro epithelial/endothelial cell model of the human airways and the identification of biomarkers of response to exposure to xenobiotics. The model comprised normal human primary microvascular endothelial cells and the bronchial epithelial cell line BEAS-2B or normal human bronchial epithelial cells. BEAS-2B were chosen because, although their characterisation at air-liquid interface is limited, they are robust in culture and were therefore predicted to provide a more reliable test system. Proteomics analysis was undertaken on challenged cells to investigate biomarkers of exposure.
BEAS-2B morphology at air-liquid interface was characterised and compared with that of normal human bronchial epithelial cells. The results indicate that BEAS-2B cells at an air-liquid interface form tight junctions, as shown by expression of the tight junction protein zonula occludens-1. To this author's knowledge, this is the first time this result has been reported. The inflammatory response of BEAS-2B air-liquid interface mono-cultures (measured as secretion of the inflammatory mediators interleukin-8 and -6) to Escherichia coli lipopolysaccharide or particulate matter (fine and ultrafine titanium dioxide) was comparable to published data for epithelial cells. Cells were also exposed to polymers of “commercial interest” which were in the nanoparticle range (referred to as particles hereafter). BEAS-2B mono-cultures showed an increased secretion of inflammatory mediators after challenge. Inclusion of microvascular endothelial cells resulted in protection against LPS- and particle-induced epithelial toxicity, measured as cell viability and inflammatory response, indicating the importance of co-cultures for investigations into toxicity. Two-dimensional proteomic analysis of lysates from particle-challenged cells failed to identify biomarkers of toxicity due to assay interference and experimental variability. Separately, decreased plasma concentrations of serine protease inhibitors and the negative acute-phase proteins transthyretin, histidine-rich glycoprotein and alpha2-HS glycoprotein were identified as potential biomarkers of methyl methacrylate/ethyl methacrylate/butyl acrylate treatment in rats.

Relevance:

30.00%

Abstract:

The programme of research examines knowledge workers, their relationships with organisations, and perceptions of management practices through the development of a theoretical model and knowledge worker archetypes. Knowledge worker and non-knowledge worker archetypes were established through an analysis of the extant literature. After an exploratory study of knowledge workers in a small software development company, the archetypes were refined to include occupational classification data and the findings from Study 1. The Knowledge Worker Characteristics Model (KWCM) was developed as a theoretical framework in order to analyse differences between the two archetypes within the IT sector. The KWCM comprises the variables within the job characteristics model, creativity, goal orientation, identification and commitment. In Study 2, a global web-based survey was conducted. There were insufficient non-knowledge worker responses and therefore a cluster analysis was conducted to interrogate the archetypes further. This demonstrated, unexpectedly, that there were marked differences within the knowledge worker archetype, suggesting the need to granulate it further. The theoretical framework and the archetypes were revised (as programmers and web developers) and the research study was refocused to examine occupational differences within knowledge work. Findings from Study 2 identified that there were significant differences between the archetypes in relation to the KWCM. In Study 3, 19 semi-structured interviews were conducted in order to deepen the analysis using qualitative data and to examine perceptions of people management practices. The findings from both studies demonstrate that there were significant differences between the two groups, but also that job challenge, problem solving, intrinsic reward and team identification were of importance to both groups of knowledge workers.
This thesis presents an examination of knowledge workers’ perceptions of work, organisations and people management practices in the granulation and differentiation of occupational archetypes.
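The cluster-analysis step used in Study 2 to interrogate the archetypes can be illustrated with a single nearest-centroid assignment pass, the core operation of k-means. The scores and centroids below are invented; the survey data itself is not reproduced here.

```python
# Toy sketch of the clustering idea: respondents' scale scores are assigned
# to the nearest centroid, which can reveal distinct sub-groups within what
# was assumed to be one archetype. Scores and centroids are invented.

def assign_clusters(scores, centroids):
    """Assign each score to the index of its nearest centroid (one k-means step)."""
    return [min(range(len(centroids)), key=lambda i: abs(s - centroids[i]))
            for s in scores]

goal_orientation = [1.2, 1.4, 4.6, 4.9, 1.1]  # hypothetical scale scores
labels = assign_clusters(goal_orientation, centroids=[1.0, 5.0])
print(labels)  # [0, 0, 1, 1, 0] -> two distinct groups within one sample
```

A split like this within a single archetype is what motivated refining the knowledge worker archetype into programmers and web developers.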

Relevance:

30.00%

Abstract:

Background: During the last decade, the use of ECG recordings in biometric recognition studies has increased. ECG characteristics make it suitable for subject identification: it is unique, present in all living individuals, and hard to forge. However, in spite of the great number of approaches found in the literature, no agreement exists on the most appropriate methodology. This study aimed at providing a survey of the techniques used so far in ECG-based human identification. Specifically, a pattern recognition perspective is here proposed, providing a unifying framework to appreciate previous studies and, hopefully, guide future research. Methods: We searched for papers on the subject from the earliest available date using relevant electronic databases (Medline, IEEEXplore, Scopus, and Web of Knowledge). The following terms were used in different combinations: electrocardiogram, ECG, human identification, biometric, authentication and individual variability. The electronic sources were last searched on 1st March 2015. In our selection we included published research in peer-reviewed journals, book chapters and conference proceedings. The search was performed for English-language documents. Results: 100 pertinent papers were found. The number of subjects involved in the journal studies ranges from 10 to 502 and ages from 16 to 86; male and female subjects are generally present. The number of analysed leads varies, as do the recording conditions. Identification performance differs widely, as does verification rate. Many studies refer to publicly available databases (the Physionet ECG databases repository) while others rely on proprietary recordings, making them difficult to compare. As a measure of overall accuracy we computed a weighted average of the identification rate and equal error rate in authentication scenarios. The identification rate was 94.95% while the equal error rate was 0.92%. Conclusions: Biometric recognition is a mature field of research.
Nevertheless, the use of physiological signal features, such as ECG traits, needs further improvement. ECG features have the potential to be used in daily activities such as access control and patient handling, as well as in wearable electronics applications. However, some barriers still limit their growth. Further analysis should address the use of single-lead recordings and the study of features which are not dependent on the recording sites (e.g. fingers, hand palms). Moreover, it is expected that new techniques will be developed using fiducial- and non-fiducial-based features in order to catch the best of both approaches. ECG recognition in pathological subjects is also worthy of additional investigation.
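The survey's summary statistic, a weighted average of per-study identification rates, is simple arithmetic and can be shown in a few lines. The per-study figures below are invented for illustration (the survey's overall weighted identification rate is 94.95%), and weighting by subject count is an assumption about how such an average might be taken.

```python
# Worked sketch of a weighted-average identification rate across studies,
# with weights taken here to be cohort sizes. All figures are invented.

def weighted_average(rates, weights):
    """Weighted mean of identification rates (weights, e.g., subject counts)."""
    return sum(r * w for r, w in zip(rates, weights)) / sum(weights)

rates = [96.0, 92.0, 98.0]  # hypothetical per-study identification rates (%)
subjects = [100, 50, 25]    # hypothetical cohort sizes
print(round(weighted_average(rates, subjects), 2))  # 95.14
```

Weighting by cohort size keeps a small, unusually accurate (or inaccurate) study from dominating the overall figure.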

Relevance:

30.00%

Abstract:

Decision-making in product quality is an indispensable stage in product development, in order to reduce product development risk. Based on the identification of the deficiencies of quality function deployment (QFD) and failure modes and effects analysis (FMEA), a novel decision-making method is presented that draws upon a knowledge network of failure scenarios. An ontological expression of failure scenarios is presented, together with a framework of a failure knowledge network (FKN). According to the roles of quality characteristics (QCs) in failure processing, QCs are set into three categories, namely perceptible QCs, restrictive QCs, and controllable QCs, which represent the monitoring targets, control targets and improvement targets, respectively, for quality management. A mathematical model and algorithms based on the analytic network process (ANP) are introduced for calculating the priority of QCs with respect to different development scenarios. A case study is provided following the proposed decision-making procedure based on the FKN. This methodology is applied in the propeller design process to solve the problem of prioritising QCs. This paper provides a practical approach for decision-making in product quality. Copyright © 2011 Inderscience Enterprises Ltd.
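The priority calculation behind an ANP step can be sketched with power iteration: the limit priorities of a column-stochastic supermatrix are its principal eigenvector. The 3x3 matrix below, linking three quality characteristics, is invented for illustration and is not the paper's model.

```python
# Sketch of ANP-style priority calculation: power-iterate a column-stochastic
# influence matrix until the priority vector stabilises. The matrix entries
# (influence of each QC on the others) are invented for illustration.

def anp_priorities(matrix, iterations=200):
    """Power-iterate a column-stochastic matrix to its limit priority vector."""
    n = len(matrix)
    v = [1.0 / n] * n
    for _ in range(iterations):
        v = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        v = [x / s for x in v]  # renormalise so priorities sum to 1
    return v

# Columns give each QC's influence distribution; every column sums to 1.
W = [[0.0, 0.5, 0.3],
     [0.6, 0.0, 0.7],
     [0.4, 0.5, 0.0]]
priorities = anp_priorities(W)
print([round(p, 3) for p in priorities])  # priority ranking of the three QCs
```

The resulting vector ranks the QCs, which is the input a quality manager needs to decide which characteristics to monitor, control or improve first.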