990 results for Knowledge Database


Relevance:

30.00%

Publisher:

Abstract:

In a knowledge-intensive economy, effective knowledge transfer is part of a firm's strategy for achieving competitive advantage in the market. Knowledge transfer relies on a variety of mechanisms, depending on the nature of the knowledge and its context. The topic has, however, received little empirical study, and there is a research gap in the scientific literature. This study examined and analyzed external knowledge transfer mechanisms in service business, especially in the context of acquisitions. The aim was to find out what kinds of mechanisms were used when the buyer began to transfer knowledge, e.g. its own agendas and practices, to the purchased units. Another major research goal was to identify the critical factors that contributed to knowledge transfer through the different mechanisms. The study was conducted as a multiple-case study in a consultative service business company, in four of its business units acquired through acquisition in various parts of the country. The empirical part of the study was carried out through focus group interviews in each unit, and the data were analyzed using qualitative methods. The main findings were, firstly, nine different knowledge transfer mechanisms in service business acquisition: the acquisition management team as an initiator, the unit manager as a translator, formal training, self-directed learning, rooming-in, IT systems implementation, customer relationship management, a codified database and e-communication. The mechanisms used brought up several aspects, such as giving a face to the change, assurance that the right knowledge was received and correctly interpreted, a we-ness atmosphere, and an orientation towards a more consultative touch with customers. The study pointed out seven critical factors contributing to the different mechanisms: absorption, motivation, organizational learning, social interaction, trust, interpretation and time resources. The last two were new findings compared to previous studies. Each of the mechanisms and the related critical factors contributed in different ways to the activity in the different units after the acquisition. The role of knowledge management strategy was the most significant managerial contribution of the study. The phenomenon is not sufficiently recognized, although it is strongly present in knowledge-based companies. Recognizing it would help develop a better understanding of business growth through acquisitions, especially in situations where two different knowledge strategies combine in a new common company.

Relevance:

30.00%

Publisher:

Abstract:

With the growth of new technologies, using online tools has become part of everyday life. This has had a strong impact on researchers, as the data obtained from various experiments needs to be analyzed, and programming knowledge has become mandatory even for pure biologists. Hence, VTT developed a new tool, R Executables (REX), a web application designed to provide a graphical interface for biological data functions such as image analysis, gene expression data analysis, plotting, and disease and control studies, which employs R functions to produce the results. REX provides an interactive application in which biologists can directly enter values and run the required analysis with a single click. The program processes the given data in the background and returns results rapidly. Owing to the growth of data and the load on the server, the interface had developed problems concerning time consumption, a poor GUI, data storage, security, a minimally interactive user experience, and crashes with large amounts of data. This thesis describes the methods by which these problems were resolved, making REX a better application for the future. The old REX was developed using Python Django; the new version has been implemented with Vaadin, a Java framework for developing web applications whose programming model is essentially Java enriched with new components. Vaadin provides better security, better speed, and a good, interactive interface. In this thesis, a subset of REX functionality, comprising IST bulk plotting and image segmentation, was selected and reimplemented using Vaadin. I wrote 662 lines of code, with Vaadin as the front-end handler while the R language was used for back-end data retrieval, computation and plotting. The application is structured so that further functionality can be migrated with ease from the old REX. Future development will focus on including high-throughput screening functions along with gene expression database handling.
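
As a rough illustration of the front-end/back-end split described above, here is a minimal Vaadin 8-style sketch in which a UI control hands user input to an R script and displays the result. The class, field and script names (RexSketchUI, plot_bulk.R) are hypothetical, not taken from REX, and an Rscript executable is assumed to be on the server's path.

    import com.vaadin.server.VaadinRequest;
    import com.vaadin.ui.Button;
    import com.vaadin.ui.Label;
    import com.vaadin.ui.TextField;
    import com.vaadin.ui.UI;
    import com.vaadin.ui.VerticalLayout;

    // Hypothetical sketch: a Vaadin UI that forwards user input to an R
    // script and shows the output, mirroring REX's front-end/back-end split.
    public class RexSketchUI extends UI {
        @Override
        protected void init(VaadinRequest request) {
            TextField values = new TextField("Input values (comma-separated)");
            Label output = new Label();
            Button run = new Button("Run analysis", click -> {
                try {
                    // Delegate the computation to R; plot_bulk.R is invented.
                    Process p = new ProcessBuilder("Rscript", "plot_bulk.R", values.getValue())
                            .redirectErrorStream(true).start();
                    String result = new String(p.getInputStream().readAllBytes());
                    p.waitFor();
                    output.setValue(result);
                } catch (Exception e) {
                    output.setValue("Analysis failed: " + e.getMessage());
                }
            });
            setContent(new VerticalLayout(values, run, output));
        }
    }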

Relevance:

30.00%

Publisher:

Abstract:

A GIS has been designed with limited functionality but with a novel approach in its design. The spatial data model adopted in the design of KBGIS is the unlinked vector model. Each map entity is encoded separately in vector form, without referencing any of its neighbouring entities. Spatial relations, in other words, are not encoded. This approach is adequate for routine analysis of geographic data represented on a planar map, and for their display (pages 105-106). Even though spatial relations are not encoded explicitly, they can be extracted through specially designed queries. This work was undertaken as an experiment to study the feasibility of developing a GIS using a knowledge base in place of a relational database. The source of input spatial data was accurate sheet maps that were manually digitised. Each identifiable geographic primitive was represented as a distinct object, with its spatial properties and attributes defined. Composite spatial objects, made up of primitive objects, were formulated based on production rules defining such compositions. The facts and rules were then organised into a production system using OPS5.
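
A minimal sketch of the unlinked vector model described above: each entity stores only its own geometry and attributes, and spatial relations are derived on demand by a query. The class and method names are illustrative; KBGIS itself expressed its facts and rules in OPS5, not Java.

    import java.util.List;

    // Each map entity is encoded separately, with no references to its
    // neighbours (unlinked vector model).
    record MapEntity(String id, String type, List<double[]> vertices) {

        // Spatial relations are not stored; they are extracted by queries.
        // Here, a crude vertex-proximity test stands in for such a query.
        boolean touches(MapEntity other, double tolerance) {
            for (double[] v : vertices)
                for (double[] w : other.vertices())
                    if (Math.hypot(v[0] - w[0], v[1] - w[1]) <= tolerance)
                        return true;
            return false;
        }
    }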

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we discuss Conceptual Knowledge Discovery in Databases (CKDD) in its connection with Data Analysis. Our approach is based on Formal Concept Analysis, a mathematical theory which has been developed and proven useful during the last 20 years. Formal Concept Analysis has led to a theory of conceptual information systems, which has been applied in a wide range of domains using the management system TOSCANA. In this paper, we use such an application in database marketing to demonstrate how methods and procedures of CKDD can be applied in Data Analysis. In particular, we show the interplay and integration of data mining and data analysis techniques based on Formal Concept Analysis. The main concern of this paper is to explain how the transition from data to knowledge can be supported by a TOSCANA system. To clarify the transition steps, we discuss their correspondence to the five levels of knowledge representation established by R. Brachman and to the steps of empirically grounded theory building proposed by A. Strauss and J. Corbin.
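
For readers unfamiliar with Formal Concept Analysis, the sketch below shows its central construction: enumerating the formal concepts (closed extent/intent pairs) of a small binary context. The toy context is invented for illustration and is not the database-marketing application discussed in the paper.

    import java.util.*;

    // A formal concept is a pair (extent, intent) closed under the two
    // derivation operators: extent = objects sharing the intent, and
    // intent = attributes shared by the extent. We enumerate attribute
    // subsets of a toy context and keep the closed pairs that arise.
    public class FcaSketch {
        public static void main(String[] args) {
            Map<String, Set<String>> context = Map.of(       // invented context
                "customer1", Set.of("urban", "high-income"),
                "customer2", Set.of("urban", "low-income"),
                "customer3", Set.of("rural", "high-income"));
            List<String> attrs = List.of("urban", "rural", "high-income", "low-income");

            Set<Map.Entry<Set<String>, Set<String>>> concepts = new HashSet<>();
            for (int mask = 0; mask < (1 << attrs.size()); mask++) {
                Set<String> candidate = new TreeSet<>();
                for (int i = 0; i < attrs.size(); i++)
                    if ((mask & (1 << i)) != 0) candidate.add(attrs.get(i));
                // Extent: all objects having every attribute of the candidate.
                Set<String> extent = new TreeSet<>();
                for (var e : context.entrySet())
                    if (e.getValue().containsAll(candidate)) extent.add(e.getKey());
                // Intent: the closure, i.e. attributes shared by that extent.
                Set<String> intent = new TreeSet<>(attrs);
                for (String obj : extent) intent.retainAll(context.get(obj));
                concepts.add(Map.entry(extent, intent));
            }
            concepts.forEach(c -> System.out.println(c.getKey() + " <-> " + c.getValue()));
        }
    }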

Relevance:

30.00%

Publisher:

Abstract:

The objective of this article is to present the structure of the relational database that includes all the syntactic information contained in the Diccionario Crítico Etimológico Castellano e Hispánico by J. Corominas and J. A. Pascual. Although this dictionary contains a wide range of historical information for each of its entries, that information is not presented in a structured form, so it was necessary to study and classify all the elements related to syntactic aspects. On the basis of this preliminary study, the different fields of the database were developed, grouped into five thematic blocks: lemmatic information; grammatical information; syntactic information; other related aspects; and relevant observations or comments made by the researcher. This database does not merely reproduce the contents of the dictionary but also includes various interpretative fields. For this reason, Syntax.dbf represents a fundamental working tool for all researchers interested in the diachronic syntax of Spanish.
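
As a rough sketch of how the five thematic blocks might map onto a relational table, the snippet below creates one column per block using SQLite via JDBC (the sqlite-jdbc driver is assumed). All column names are hypothetical; the actual fields are those defined in the article.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    // Hypothetical schema sketch for a Syntax.dbf-like table, with one
    // column per thematic block described in the abstract.
    public class SyntaxDbSketch {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection("jdbc:sqlite:syntax.db");
                 Statement st = con.createStatement()) {
                st.execute("""
                    CREATE TABLE IF NOT EXISTS syntax (
                      lemma          TEXT,  -- lemmatic information
                      pos            TEXT,  -- grammatical information
                      construction   TEXT,  -- syntactic information
                      related_notes  TEXT,  -- other related aspects
                      researcher_obs TEXT   -- researcher's observations
                    )""");
            }
        }
    }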

Relevance:

30.00%

Publisher:

Abstract:

Dysregulation of lipid and glucose metabolism in the postprandial state is recognised as an important risk factor for the development of cardiovascular disease and type 2 diabetes. Our objective was to create a comprehensive, standardised database of postprandial studies to provide insights into the physiological factors that influence postprandial lipid and glucose responses. Data were collated from subjects (n = 467) taking part in single and sequential meal postprandial studies conducted by researchers at the University of Reading to form the DISRUPT (DIetary Studies: Reading Unilever Postprandial Trials) database. Subject attributes including age, gender, genotype, menopausal status, body mass index, blood pressure and a fasting biochemical profile, together with postprandial measurements of triacylglycerol (TAG), non-esterified fatty acids, glucose, insulin and TAG-rich lipoprotein composition, are recorded. A particular strength of the studies is the frequency of blood sampling, with on average 10-13 blood samples taken during each postprandial assessment, and the fact that identical test meal protocols were used in a number of studies, allowing pooling of data to increase statistical power. The DISRUPT database is the most comprehensive postprandial metabolism database that exists worldwide, and preliminary analysis of the pooled sequential meal postprandial dataset has revealed both confirmatory and novel observations with respect to the impact of gender and age on the postprandial TAG response. Further analysis of the dataset using conventional statistical techniques, along with integrated mathematical models and clustering analysis, will provide a unique opportunity to greatly expand current knowledge of the aetiology of inter-individual variability in postprandial lipid and glucose responses.
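
The frequent blood sampling makes summary statistics over the full response curve possible. As one illustration, the sketch below computes the incremental area under the curve (iAUC) by the trapezoid rule; note that this particular summary measure, and the sample values, are assumptions for the example, not methods or data stated in the abstract.

    // Incremental area under the curve (iAUC) for a postprandial response
    // sampled at times t (minutes) with concentrations c (e.g. TAG, mmol/L),
    // computed by the trapezoid rule above the fasting baseline c[0].
    public class PostprandialSketch {
        static double incrementalAuc(double[] t, double[] c) {
            double baseline = c[0], auc = 0.0;
            for (int i = 1; i < t.length; i++) {
                double a = c[i - 1] - baseline, b = c[i] - baseline;
                auc += (a + b) / 2.0 * (t[i] - t[i - 1]); // one trapezoid
            }
            return auc;
        }

        public static void main(String[] args) {
            double[] times = {0, 30, 60, 120, 180, 240, 300, 360, 420, 480};
            double[] tag = {1.1, 1.3, 1.8, 2.4, 2.6, 2.3, 1.9, 1.6, 1.3, 1.2}; // invented
            System.out.printf("iAUC = %.1f mmol/L x min%n", incrementalAuc(times, tag));
        }
    }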

Relevance:

30.00%

Publisher:

Abstract:

This review of recent developments starts with the publication of Harold van der Heijden's Study Database Edition IV, John Nunn's second trilogy on the endgame, and a range of endgame tables (EGTs) to the DTC, DTZ and DTZ50 metrics. It then summarises data-mining work by Eiko Bleicher and Guy Haworth in 2010. This used CQL and pgn2fen to find some 3,000 EGT-faulted studies in the database above, and the Type A (value-critical) and Type B-DTM (DTM-depth-critical) zugzwangs in the mainlines of those studies. The same technique was used to mine Chessbase's BIG DATABASE 2010 to identify Type A/B zugzwangs, and to identify the pattern of value-concession and DTM-depth concession in sub-7-man play.
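
To make the zugzwang terminology concrete, the sketch below tests the Type A (value-critical) condition: having the move strictly worsens the result for the side to move. The egtValue probe and the FEN handling are placeholders for illustration; the actual work mined endgame tables via CQL and pgn2fen, not through this interface.

    // Hypothetical Type A (value-critical) zugzwang test: the side to move
    // would obtain a strictly better result if the obligation to move
    // passed to the opponent.
    public class ZugzwangSketch {
        enum Value { LOSS, DRAW, WIN }   // from the side to move's perspective

        // Placeholder for an endgame-table probe.
        static Value egtValue(String fen) {
            throw new UnsupportedOperationException("EGT probe not included");
        }

        static Value invert(Value v) {
            return v == Value.WIN ? Value.LOSS : v == Value.LOSS ? Value.WIN : Value.DRAW;
        }

        // Naive flip of the side-to-move field of a FEN string.
        static String withOtherSideToMove(String fen) {
            String[] f = fen.split(" ");
            f[1] = f[1].equals("w") ? "b" : "w";
            return String.join(" ", f);
        }

        static boolean isTypeAZugzwang(String fen) {
            Value mustMove = egtValue(fen);
            // The probe reports values for the side to move, so invert the
            // flipped position's result to compare like with like.
            Value ifOpponentMoves = invert(egtValue(withOtherSideToMove(fen)));
            return mustMove.ordinal() < ifOpponentMoves.ordinal();
        }
    }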

Relevance:

30.00%

Publisher:

Abstract:

This paper sets out progress during the first eighteen months of doctoral research into the City of London office market. The overall aim of the research is to explore relationships between office rents and the economy in the UK over the last 150 years. To do this, a database of lettings has been created from which a long run index of City office rents can be constructed. With this index, it should then be possible to analyse trends in rents and relationships with their long run determinants. The focus of this paper is on the creation of the rent database. First, it considers the existing secondary sources of long run rental data for the UK. This highlights a lack of information for years prior to 1970 and the need for primary data collection if earlier periods are to be studied. The paper then discusses the selection of the City of London and of the time period chosen for research. After this, it describes how a dataset covering the period 1860-1960 has been assembled using the records of property companies active in the City office market. It is hoped that, if successful, this research will contribute to existing knowledge on the long run characteristics of commercial real estate. In particular, it should add a price dimension (rents) to the existing long run information on stock/supply and investment. Hence, it should enable a more complete picture of the development and performance of commercial real estate through time to be gained.
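
As a sketch of the kind of index construction such a lettings database enables, the snippet below builds a naive annual index of average rent, rebased to 100 in a chosen base year. The record fields, the averaging method and the figures are all invented for illustration; the paper does not specify its index methodology at this stage of the research.

    import java.util.List;
    import java.util.Map;
    import java.util.TreeMap;
    import java.util.stream.Collectors;

    // Naive long run rent index: mean rent per letting per year, rebased so
    // that the base year equals 100. Illustrative only.
    public class RentIndexSketch {
        record Letting(int year, double rentPerSqFt) {}

        static Map<Integer, Double> index(List<Letting> lettings, int baseYear) {
            Map<Integer, Double> yearlyMean = lettings.stream().collect(
                Collectors.groupingBy(Letting::year, TreeMap::new,
                    Collectors.averagingDouble(Letting::rentPerSqFt)));
            double base = yearlyMean.get(baseYear);
            yearlyMean.replaceAll((year, mean) -> 100.0 * mean / base);
            return yearlyMean;
        }

        public static void main(String[] args) {
            var sample = List.of(new Letting(1860, 0.25), new Letting(1860, 0.30),
                                 new Letting(1900, 0.45), new Letting(1960, 1.10));
            System.out.println(index(sample, 1860)); // invented figures
        }
    }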

Relevance:

30.00%

Publisher:

Abstract:

This work identifies and analyzes the literature on knowledge organization (KO) published in scientific journals of information science (IS). It reports an exploratory study of the Base de Dados Referencial de Artigos de Periódicos em Ciência da Informação (BRAPCI, Reference Database of Journal Articles on Information Science) between the years 2000 and 2010. Descriptors relating to "knowledge organization" were used to retrieve and analyze the corresponding articles and to identify the descriptors and concepts that make up the semantic universe of KO. Through content analysis based on metric studies, the article gathers and interprets data relating to documents and authors, demonstrating the development of this field and its research fronts according to the observed characteristics, as well as noting indications of transformation in the production of knowledge. The work describes the influence of Spanish researchers on the Brazilian literature in the fields of knowledge and information organization. As a result, it presents the most cited and most productive authors, the theoretical currents that support them, and the most significant relationships in the Spanish-Brazilian author network. Based on the analysis of recurring keywords in the cited articles, the coexistence of the French conceptual current and an incipient Spanish influence in Brazil is observed. In this way, the work contributes to the comprehension of the thematic range of KO, stimulating criticism and self-criticism, debate and knowledge creation, based on studies developed and institutionalized in academic contexts in Spain and Brazil.

Relevance:

30.00%

Publisher:

Abstract:

The analysis of large amounts of data is performed better by humans when the data are represented in a graphical format. A new research area called visual data mining has therefore developed, endeavoring to use the number-crunching power of computers to prepare data for visualization, allied with the human ability to interpret data presented graphically. This work presents the results of applying a visual data mining tool, called FastMapDB, to detect the behavioral patterns exhibited by a dataset of clinical information about the hemoglobinopathies known as thalassemia. FastMapDB is a visual data mining tool that takes tabular data stored in a relational database, such as dates, numbers and texts, and, by considering them as points in a multidimensional space, maps them to a three-dimensional space. The intuitive three-dimensional representation of objects enables a data analyst to see the behavior of the characteristics of abnormal forms of hemoglobin, highlighting the differences when compared to data from a group without alteration.
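
The tool's name suggests a projection in the spirit of the classic FastMap algorithm, sketched below for Euclidean input: pick two distant pivot objects, derive one coordinate per object from the cosine law, then recurse on the residual distances until k coordinates are obtained. This is an illustration of that family of algorithms under those assumptions, not FastMapDB's actual code.

    import java.util.Arrays;

    // FastMap-style projection of n points to k dimensions.
    public class FastMapSketch {
        static double[][] fastmap(double[][] points, int k) {
            int n = points.length;
            double[][] out = new double[n][k];
            double[][] d2 = new double[n][n];            // squared distances
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    d2[i][j] = sqDist(points[i], points[j]);
            for (int axis = 0; axis < k; axis++) {
                int a = 0, b = farthestFrom(d2, a);      // pivot heuristic
                a = farthestFrom(d2, b);
                double dab2 = d2[a][b];
                if (dab2 == 0) break;                    // points coincide
                for (int i = 0; i < n; i++)              // cosine-law coordinate
                    out[i][axis] = (d2[a][i] + dab2 - d2[b][i]) / (2 * Math.sqrt(dab2));
                for (int i = 0; i < n; i++)              // residual distances
                    for (int j = 0; j < n; j++)
                        d2[i][j] -= (out[i][axis] - out[j][axis]) * (out[i][axis] - out[j][axis]);
            }
            return out;
        }

        static int farthestFrom(double[][] d2, int p) {
            int best = 0;
            for (int i = 0; i < d2.length; i++) if (d2[p][i] > d2[p][best]) best = i;
            return best;
        }

        static double sqDist(double[] u, double[] v) {
            double s = 0;
            for (int i = 0; i < u.length; i++) s += (u[i] - v[i]) * (u[i] - v[i]);
            return s;
        }

        public static void main(String[] args) {
            double[][] data = {{1, 0, 0, 5}, {0, 2, 1, 4}, {3, 1, 0, 0}, {2, 2, 2, 2}};
            System.out.println(Arrays.deepToString(fastmap(data, 3))); // 3-D map
        }
    }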

Relevance:

30.00%

Publisher:

Abstract:

This paper describes a data mining environment for knowledge discovery in bioinformatics applications. The system has a generic kernel that implements the mining functions to be applied to input primary databases of biomedical information organized in a warehouse architecture. Both supervised and unsupervised classification can be implemented within the kernel and applied to data extracted from the primary database, with the results being suitably stored in a complex object database for knowledge discovery. The kernel also includes a specific high-performance library that allows the mining functions to be designed and applied on parallel machines. The experimental results obtained by applying the kernel functions are reported. © 2003 Elsevier Ltd. All rights reserved.
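
A rough sketch of the kernel abstraction the abstract describes: supervised and unsupervised methods share a single mining-function interface, data are extracted from the warehouse, and results are persisted in a separate store. All names here are hypothetical, not taken from the system.

    import java.util.List;
    import java.util.Map;

    // Hypothetical kernel sketch: one interface covers supervised and
    // unsupervised mining functions; results go to a separate store.
    public class MiningKernelSketch {
        interface MiningFunction {
            Map<String, Object> apply(List<double[]> records);
        }

        interface Warehouse {
            List<double[]> extract(String query);    // primary-database extract
        }

        interface ResultStore {
            void save(String runId, Map<String, Object> result);
        }

        static void run(Warehouse warehouse, ResultStore store,
                        String runId, String query, MiningFunction fn) {
            List<double[]> data = warehouse.extract(query);
            store.save(runId, fn.apply(data));       // persist discovered knowledge
        }
    }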

Relevance:

30.00%

Publisher:

Abstract:

A significant amount of information stored in different databases around the world can be shared through peer-to-peer databases. This yields a large knowledge base without the need for large investments, because existing databases, as well as the infrastructure already in place, are used. However, the structural characteristics of peer-to-peer networks make the process of finding such information complex. In addition, these databases are often heterogeneous in their schemas but semantically similar in their content. A good peer-to-peer database system should allow users to access information from databases scattered across the network and receive only the information genuinely related to their topic of interest. This paper proposes using ontologies in peer-to-peer database queries to represent the semantics inherent in the data. The main contributions of this work are enabling integration between heterogeneous databases, improving the performance of such queries, and using the Ant Colony optimization algorithm to solve the problem of locating information on peer-to-peer networks, which yields an 18% improvement in the results. © 2011 IEEE.
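
A minimal sketch of ant-colony-style query routing in a peer-to-peer overlay: each peer keeps a pheromone level per neighbour, forwarding is pheromone-weighted, successful lookups reinforce the traversed path, and pheromone evaporates each round so stale routes fade. The parameters and network model are invented for illustration, not taken from the paper.

    import java.util.List;
    import java.util.Map;
    import java.util.Random;

    // Ant Colony routing sketch for peer-to-peer query forwarding.
    public class AcoRoutingSketch {
        static final double EVAPORATION = 0.1, REWARD = 1.0;  // invented values
        static final Random RNG = new Random(42);

        // Pick the next hop with probability proportional to pheromone.
        static int chooseNextHop(Map<Integer, Double> neighbourPheromone) {
            double total = neighbourPheromone.values().stream()
                    .mapToDouble(Double::doubleValue).sum();
            double pick = RNG.nextDouble() * total;
            for (var e : neighbourPheromone.entrySet()) {
                pick -= e.getValue();
                if (pick <= 0) return e.getKey();
            }
            return neighbourPheromone.keySet().iterator().next();
        }

        // Reinforce the edges of a path that led to the requested data...
        static void reinforce(Map<Integer, Map<Integer, Double>> pheromone,
                              List<Integer> path) {
            for (int i = 0; i + 1 < path.size(); i++)
                pheromone.get(path.get(i)).merge(path.get(i + 1), REWARD, Double::sum);
        }

        // ...and evaporate all pheromone so unused routes decay over time.
        static void evaporate(Map<Integer, Map<Integer, Double>> pheromone) {
            for (var edges : pheromone.values())
                edges.replaceAll((neighbour, level) -> level * (1 - EVAPORATION));
        }
    }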

Relevance:

30.00%

Publisher:

Abstract:

The development of new technologies that use peer-to-peer networks grows every day, with the aim of meeting the need to share information, resources and database services around the world. Among them are peer-to-peer databases, which take advantage of peer-to-peer networks to manage distributed knowledge bases, allowing the sharing of information that is semantically related but syntactically heterogeneous. However, it is a challenge to ensure efficient search for information without compromising the autonomy of each node and the flexibility of the network, given the structural characteristics of these networks. On the other hand, some studies propose the use of ontology semantics, assigning a standardized categorization to the information. The main original contribution of this work is its approach to this problem: a proposal for query optimization supported by the Ant Colony algorithm and classification through ontologies. The results show that this strategy enables semantic support for searches in peer-to-peer databases, expanding the results without compromising network performance. © 2011 IEEE.
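
Complementing the routing sketch above, here is a minimal sketch of the ontology-classification side: a query concept is matched against a peer's advertised category by walking the subclass chain, so that semantically related but syntactically different labels still match. The tiny taxonomy is invented for illustration.

    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    // Ontology subsumption sketch: a match succeeds when one concept is an
    // ancestor of the other in the (toy) taxonomy.
    public class OntologyMatchSketch {
        static final Map<String, String> SUBCLASS_OF = Map.of( // child -> parent
            "thesis", "document",
            "article", "document",
            "document", "resource");

        static Set<String> ancestorsAndSelf(String concept) {
            Set<String> result = new HashSet<>();
            for (String c = concept; c != null; c = SUBCLASS_OF.get(c))
                result.add(c);
            return result;
        }

        static boolean matches(String queryConcept, String peerCategory) {
            return ancestorsAndSelf(queryConcept).contains(peerCategory)
                || ancestorsAndSelf(peerCategory).contains(queryConcept);
        }

        public static void main(String[] args) {
            System.out.println(matches("article", "document")); // true
        }
    }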

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)