8 results for Knowledge Discovery Database

in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland


Relevance:

100.00%

Publisher:

Abstract:

Presentation by Kristiina Hormia-Poutanen at the 25th Anniversary Conference of the National Repository Library of Finland, Kuopio, 22nd of May 2015.

Relevance:

80.00%

Publisher:

Abstract:

This master's thesis covers the concepts of knowledge discovery, data mining and technology forecasting in telecommunications. It addresses the various aspects of knowledge discovery in databases and discusses in detail the data mining and technology forecasting methods used in telecommunications. The main concern of the thesis is to highlight the methods used in technology forecasting and data mining for telecommunications. It also attempts to answer, to some extent, the question of whether forecasts create a future, and it describes a few difficulties that arise in technology forecasting. This thesis was done as part of my master's studies at Lappeenranta University of Technology.
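
The abstract names technology forecasting methods without detailing any; as a minimal illustration (not taken from the thesis), the sketch below discretizes the classic Bass diffusion model, one widely used technology-forecasting technique, to project cumulative adoption of a telecom service. The coefficients p and q and the market size m are assumed values for demonstration only.

```java
// Hypothetical illustration of a Bass diffusion forecast (not thesis code).
public class BassForecast {
    public static void main(String[] args) {
        double p = 0.03;      // coefficient of innovation (assumed)
        double q = 0.38;      // coefficient of imitation (assumed)
        double m = 1_000_000; // potential market size (assumed)

        double cumulative = 0.0;
        for (int year = 1; year <= 10; year++) {
            // New adopters this period: (p + q * adopted share) * remaining market
            double newAdopters = (p + q * cumulative / m) * (m - cumulative);
            cumulative += newAdopters;
            System.out.printf("Year %2d: new = %8.0f, cumulative = %9.0f%n",
                    year, newAdopters, cumulative);
        }
    }
}
```

Running the sketch prints the familiar S-shaped cumulative adoption curve year by year for the assumed parameters.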

Relevance:

80.00%

Publisher:

Abstract:

The first objective of this study was to find reliable laboratory methods to predict the effect of enzymes on the specific energy consumption (SEC) and fiber properties of TMP pulp. The second was to use interactive "Knowledge discovery in databases" software to identify enzymes or other additives that could help reduce the energy consumption of TMP pulping. The literature part of the work presents the chemical composition of wood and the enzymes that are active on the main wood components, highlights the results of previous research on energy reduction in the TMP process with enzymes, and outlines the main principles of knowledge discovery. The experimental part describes the methods in which standard-size chips, crushed chips and fiberized spruce chips (fiberized pulp) were used. Different types of enzymatic treatment with different dosages and treatment times were tested. Pectinase, endoglucanase and a mixture of enzymes were used to evaluate the reliability of the methods, and the fines content and fiber length of the pulp were measured as evidence of the enzymes' effect. The refining method with the "Bauer" laboratory disc refiner was judged not highly reliable: it could not provide good repeatability of results because of uncontrolled feeding capacity and refining consistency. The refining method with the Valley refiner had fewer variables and gave stable, repeatable results in energy saving. The experiments showed that efficient enzyme impregnation is probably the main requirement when enzymes are applied for energy saving. Fiberized pulp showed high accessibility to enzymatic treatment and liquid penetration without special impregnating equipment, because its larger wood surface area also gives a larger contact area between the enzymatic solution and the wood. Treatment of standard-size and crushed chips without a special impregnator for the enzymatic solution was not efficient and did not give visible, repeatable decreases in energy consumption. It was therefore concluded that fiberized pulp refined in the Valley refiner is more suitable for measuring the effectiveness of enzymes in decreasing SEC than normal-size or crushed chips refined in the "Bauer" refiner. Endoglucanase at a dosage of 5 kg/t gave an energy consumption decrease of about 20%, and the enzyme mixture at 1.5 kg/t gave a decrease of about 15% during refining, whereas pectinase at different dosages and treatment times showed no significant effect on energy consumption. The knowledge discovery in databases results indicated a blend of xylanase, cellulase and pectinase as the most promising for energy reduction in the TMP process, and surfactants were identified as effective additives for energy saving with enzymes.

Relevance:

30.00%

Publisher:

Abstract:

In a knowledge-intensive economy, effective knowledge transfer is part of a firm's strategy for achieving competitive advantage in the market. Knowledge transfer relies on a variety of mechanisms, depending on the nature of the knowledge and the context. The topic has, however, received very little empirical study, and there is a research gap in the scientific literature. This study examined and analyzed external knowledge transfer mechanisms in service business, especially in the context of acquisitions. The aim was to find out what kinds of mechanisms were used when the buyer began to transfer knowledge, e.g. its own agendas and practices, to the purchased units. Another major research goal was to identify the critical factors that contributed to knowledge transfer through the different mechanisms. The study was conducted as a multiple-case study in a consultative service business company, in four of its business units acquired through acquisition in various parts of the country. The empirical part of the study was carried out as focus group interviews in each unit, and the data were analyzed using qualitative methods. The main findings were, first, nine different knowledge transfer mechanisms in a service business acquisition: the acquisition management team as an initiator, the unit manager as a translator, formal training, self-directed learning, rooming-in, IT systems implementation, customer relationship management, a codified database and e-communication. The mechanisms used brought up several aspects, such as giving a face to the change, ensuring that the right knowledge is received and correctly interpreted, a sense of we-ness, and an orientation toward a more consultative touch with customers. The study pointed out seven critical factors that contributed to the different mechanisms: absorption, motivation, organizational learning, social interaction, trust, interpretation and time resources. The last two were new findings compared to previous studies. Each mechanism and its related critical factors contributed in different ways to the activity of the different units after the acquisition. The role of knowledge management strategy was the most significant managerial contribution of the study: the phenomenon is not sufficiently recognized, although it is strongly present in knowledge-based companies. Recognizing it would help to develop a better understanding of the business through acquisitions, especially in situations where two different knowledge strategies are combined in a new common company.

Relevance:

30.00%

Publisher:

Abstract:

Context: Web services have been gaining popularity due to the success of service-oriented architecture and cloud computing. Web services offer a tremendous opportunity for service developers to publish their services and applications beyond the boundaries of their organization or company. To fully exploit these opportunities, however, an efficient discovery mechanism is needed, and Web service discovery has therefore attracted considerable attention in Semantic Web research. Yet there have been no literature surveys that systematically map the existing research, so the overall impact of these research efforts and the level of maturity of their results remain unclear. This thesis aims to provide an overview of the current state of research on Web service discovery mechanisms using a systematic mapping. The work is based on papers published from 2004 to 2013 and elaborates various aspects of the analyzed literature, including classifying the papers in terms of the architectures, frameworks and methods used for Web service discovery. Objective: The objective of this work is to summarize the current knowledge available on Web service discovery mechanisms and to systematically identify and analyze the published research in order to identify the different approaches presented. Method: A systematic mapping study was employed to assess the various Web service discovery approaches presented in the literature. Systematic mapping studies are useful for categorizing and summarizing the level of maturity of a research area. Results: The results indicate that there are numerous approaches that are consistently being researched and published in this field. In terms of where this research is published, conferences are the major publishing arena: 48% of the selected papers were published at conferences, which illustrates the level of maturity of the research topic. Additionally, the 52 selected papers are categorized into two broad segments, functional and non-functional approaches, taking into consideration architectural aspects and information retrieval approaches, semantic matching, syntactic matching, behavior-based matching, as well as QoS and other constraints.
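
As a small illustration of the "syntactic matching" category mentioned in the abstract (the sketch is mine, not drawn from any of the surveyed papers), the following ranks service descriptions against a keyword query by Jaccard overlap of their token sets; the service names and descriptions are invented.

```java
import java.util.*;

// Hypothetical keyword-based (syntactic) service matcher, for illustration only.
public class SyntacticMatcher {

    static Set<String> tokens(String text) {
        return new HashSet<>(Arrays.asList(text.toLowerCase().split("\\W+")));
    }

    static double jaccard(Set<String> a, Set<String> b) {
        Set<String> inter = new HashSet<>(a);
        inter.retainAll(b);
        Set<String> union = new HashSet<>(a);
        union.addAll(b);
        return union.isEmpty() ? 0.0 : (double) inter.size() / union.size();
    }

    public static void main(String[] args) {
        Map<String, String> services = Map.of(
                "WeatherService", "returns the current weather forecast for a given city",
                "CurrencyService", "converts an amount between two currencies");
        Set<String> query = tokens("weather forecast by city");
        services.forEach((name, description) ->
                System.out.printf("%s -> %.2f%n", name, jaccard(query, tokens(description))));
    }
}
```

Semantic approaches surveyed in the mapping compare ontology concepts instead of raw tokens, which is why purely token-level matching like this is treated as a separate category.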

Relevance:

30.00%

Publisher:

Abstract:

With the growth of new technologies, using online tools has become part of everyday life. This has a strong impact on researchers, as the data obtained from various experiments need to be analyzed and programming knowledge has become practically mandatory even for pure biologists. Hence, VTT developed a new tool, R Executables (REX), a web application designed to provide a graphical interface to biological data functions such as image analysis, gene expression data analysis, plotting, and disease and control studies, using R functions to produce the results. REX offers biologists an interactive application in which they can directly enter values and run the required analysis with a single click; the program processes the given data in the background and returns results quickly. Due to the growth of data and the load on the server, the interface developed problems concerning time consumption, a poor GUI, data storage, security, a minimally interactive user experience, and crashes with large amounts of data. This thesis describes the methods by which these problems were resolved to make REX a better application for the future. The old REX was developed using Python Django; the new one is implemented with Vaadin, a Java framework for developing web applications whose rich components keep the programming very close to plain Java. Vaadin provides better security, better speed, and a good, interactive interface. In this thesis, a subset of REX functionality, comprising IST bulk plotting and image segmentation, was selected and implemented using Vaadin. I programmed 662 lines of code, with Vaadin as the front-end handler and the R language used for back-end data retrieval, computation and plotting. The application is structured so that further functionality can easily be migrated from the old REX. Future development will focus on adding high-throughput screening functions and gene expression database handling.
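
As a rough sketch of the architecture described above (a Vaadin front end delegating computation and plotting to R), the hypothetical view below hands a user-supplied value to an R script via Rscript and reports the outcome; the route name, input field and script name are assumptions for illustration, not the actual REX code.

```java
package rex.sketch;

import com.vaadin.flow.component.button.Button;
import com.vaadin.flow.component.notification.Notification;
import com.vaadin.flow.component.orderedlayout.VerticalLayout;
import com.vaadin.flow.component.textfield.TextField;
import com.vaadin.flow.router.Route;

// Hypothetical Vaadin Flow view, not the thesis implementation.
@Route("bulk-plot")
public class BulkPlotView extends VerticalLayout {

    public BulkPlotView() {
        TextField geneId = new TextField("Gene identifier");
        Button run = new Button("Run analysis", click -> {
            try {
                // Delegate the computation and plotting to R on the server;
                // the script name is an assumption for illustration.
                Process r = new ProcessBuilder("Rscript", "ist_bulk_plot.R", geneId.getValue())
                        .redirectErrorStream(true)
                        .start();
                int exit = r.waitFor();
                Notification.show(exit == 0 ? "Plot generated" : "R script failed");
            } catch (Exception e) {
                Notification.show("Could not run R: " + e.getMessage());
            }
        });
        add(geneId, run);
    }
}
```

A production version would likely run long analyses asynchronously; the sketch only shows the division of labour the abstract describes, with Java handling the UI and R handling the computation.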

Relevance:

30.00%

Publisher:

Abstract:

The traditional business models and the traditionally successful development methods that were distinctive of the industrial era do not satisfy the needs of modern IT companies. Due to the fast pace of IT markets, the uncertainty over whether new innovations will succeed, and the overwhelming competition from established companies, startups need to make quick decisions and eliminate wasted resources more effectively than ever before. There is a need for an empirical basis on which to build business models and to evaluate presumptions regarding value and profit. Less than ten years ago, the Lean software development principles and practices became widely known in academic circles. These practices help startup entrepreneurs validate their learning, test their assumptions and become ever more dynamic and flexible. What is special about today's software startups is that they are increasingly individual. Quantitative research on the details of Lean startups is available; broad research covering hundreds of companies presented in a few charts is informative, but a detailed study of fewer examples gives an insight into how software entrepreneurs see the Lean startup philosophy and how they describe it in their own words. This thesis focuses on the early phases of Lean software startups, namely Customer Discovery (discovering a valuable solution to a real problem) and Customer Validation (being in a good market with a product that satisfies that market). The thesis first offers a compact introduction to the Lean software startup concept for a reader who is not previously familiar with the term. The Lean startup philosophy is then put to a real-life test, based on interviews with four Finnish Lean software startup entrepreneurs. The interviews reveal 1) whether the Lean startup philosophy is actually valuable for them, 2) how the theory can be practically implemented in real life and 3) whether theoretical Lean startup knowledge compensates for a lack of entrepreneurship experience. The reader becomes familiar with the key elements and tools of Lean startups, as well as their mutual connections. The thesis explains why Lean startups waste less time and money than many other startups. The thesis, especially its research sections, aims to provide data and analysis simultaneously.
