956 results for Production engineering Data processing


Relevance:

100.00%

Publisher:

Abstract:

Data mining means summarizing information from large amounts of raw data. It is one of the key technologies in many areas of the economy, science, administration and the internet. In this report we introduce an approach that uses evolutionary algorithms to breed fuzzy classifier systems. The approach was applied as part of a structured procedure by the students Achler, Göb and Voigtmann as a contribution to the 2006 Data-Mining-Cup contest, yielding encouragingly positive results.
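The report's system is not reproduced here; as a rough illustration of the general idea, the following minimal sketch evolves the parameters of a single fuzzy rule against a toy data set. The data, membership function, and fitness measure are hypothetical and are not those of the Data-Mining-Cup entry.

```python
import random

# Hypothetical toy data: (feature value, class label); not the DMC 2006 data.
DATA = [(0.2, 0), (0.3, 0), (0.4, 0), (0.6, 1), (0.7, 1), (0.9, 1)]

def membership(x, center, width):
    """Triangular fuzzy membership degree of x in the set around `center`."""
    return max(0.0, 1.0 - abs(x - center) / width)

def fitness(individual):
    """Accuracy of one fuzzy rule: predict class 1 if membership exceeds 0.5."""
    center, width = individual
    correct = sum(
        (membership(x, center, width) > 0.5) == bool(label) for x, label in DATA
    )
    return correct / len(DATA)

def evolve(generations=50, pop_size=20):
    # Individuals are (center, width) pairs of the fuzzy membership function.
    population = [(random.random(), random.uniform(0.1, 1.0)) for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by fitness, keep the better half, refill with mutated copies.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [
            (c + random.gauss(0, 0.05), max(0.05, w + random.gauss(0, 0.05)))
            for c, w in survivors
        ]
    return max(population, key=fitness)

print(evolve())
```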

Relevance:

100.00%

Publisher:

Abstract:

Talks / presentations from the Automation Symposium 2008

Relevance:

100.00%

Publisher:

Abstract:

A conceptual information system consists of a database together with conceptual hierarchies. The management system TOSCANA visualizes arbitrary combinations of conceptual hierarchies by nested line diagrams and allows on-line interaction with a database to analyze data conceptually. The paper describes the design of conceptual information systems and discusses the use of their visualization techniques for on-line analytical processing (OLAP).
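As a minimal sketch of the underlying step, a conceptual hierarchy can be read as a scale that classifies database rows into attribute sets; the toy table and age scale below are hypothetical and only illustrate this derivation, not TOSCANA itself.

```python
# Hypothetical customer table and a conceptual scale for the "age" column.
RECORDS = [
    {"id": 1, "age": 17},
    {"id": 2, "age": 25},
    {"id": 3, "age": 42},
    {"id": 4, "age": 70},
]

# Each scale attribute is a predicate on the column value.
AGE_SCALE = {
    "minor": lambda a: a < 18,
    "adult": lambda a: a >= 18,
    "senior": lambda a: a >= 65,
}

def derive_context(records, scale, column):
    """Return the derived formal context: which scale attributes apply to which row."""
    return {
        r["id"]: {name for name, pred in scale.items() if pred(r[column])}
        for r in records
    }

print(derive_context(RECORDS, AGE_SCALE, "age"))
# -> {1: {'minor'}, 2: {'adult'}, 3: {'adult'}, 4: {'adult', 'senior'}} (set order may vary)
```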

Relevance:

100.00%

Publisher:

Abstract:

While most data analysis and decision support tools use numerical aspects of the data, Conceptual Information Systems focus on their conceptual structure. This paper discusses how both approaches can be combined.

Relevance:

100.00%

Publisher:

Abstract:

We present a new algorithm called TITANIC for computing concept lattices. It is based on data mining techniques for computing frequent itemsets. The algorithm is experimentally evaluated and compared with B. Ganter's Next-Closure algorithm.
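TITANIC itself is a level-wise algorithm that prunes candidates via support counts; the brute-force sketch below only illustrates the connection it exploits, namely that concept intents are exactly the closed attribute sets of a context. The tiny context is made up for the example.

```python
from itertools import combinations

# Tiny hypothetical formal context: objects mapped to their attribute sets.
CONTEXT = {
    "o1": {"a", "b"},
    "o2": {"a", "c"},
    "o3": {"a", "b", "c"},
}
ATTRIBUTES = {"a", "b", "c"}

def closure(attrs):
    """Double-prime operator: attributes common to all objects having `attrs`."""
    extent = [obj for obj, has in CONTEXT.items() if attrs <= has]
    if not extent:
        return set(ATTRIBUTES)  # empty extent: the intent is all attributes
    return set.intersection(*(CONTEXT[obj] for obj in extent))

# Brute force: an attribute set is a concept intent iff it equals its own closure.
intents = set()
for r in range(len(ATTRIBUTES) + 1):
    for combo in combinations(sorted(ATTRIBUTES), r):
        intents.add(frozenset(closure(set(combo))))

# The concept intents (closed sets) of the toy context.
print(sorted(map(sorted, intents)))
```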

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we discuss Conceptual Knowledge Discovery in Databases (CKDD) in its connection with Data Analysis. Our approach is based on Formal Concept Analysis, a mathematical theory which has been developed and proven useful over the last 20 years. Formal Concept Analysis has led to a theory of conceptual information systems which has been applied, using the management system TOSCANA, in a wide range of domains. In this paper, we use such an application in database marketing to demonstrate how methods and procedures of CKDD can be applied in Data Analysis. In particular, we show the interplay and integration of data mining and data analysis techniques based on Formal Concept Analysis. The main concern of this paper is to explain how the transition from data to knowledge can be supported by a TOSCANA system. To clarify the transition steps, we discuss their correspondence to the five levels of knowledge representation established by R. Brachman and to the steps of empirically grounded theory building proposed by A. Strauss and J. Corbin.

Relevance:

100.00%

Publisher:

Abstract:

Formal Concept Analysis is an unsupervised learning technique for conceptual clustering. We introduce the notion of iceberg concept lattices and show their use in Knowledge Discovery in Databases (KDD). Iceberg lattices are designed for analyzing very large databases. In particular they serve as a condensed representation of frequent patterns as known from association rule mining. In order to show the interplay between Formal Concept Analysis and association rule mining, we discuss the algorithm TITANIC. We show that iceberg concept lattices are a starting point for computing condensed sets of association rules without loss of information, and are a visualization method for the resulting rules.
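As a hedged illustration of the idea (not the TITANIC implementation), the sketch below computes an iceberg lattice for a hypothetical transaction table: it keeps only the concept intents whose support reaches a minimum threshold, i.e. the frequent closed itemsets, from which exact association rules can be read off without information loss.

```python
from itertools import combinations

# Hypothetical transaction data: transaction id -> purchased items.
TRANSACTIONS = {
    "t1": {"bread", "butter"},
    "t2": {"bread", "butter", "milk"},
    "t3": {"bread", "milk"},
    "t4": {"butter"},
}
ITEMS = {"bread", "butter", "milk"}
MIN_SUPPORT = 0.5  # a concept stays in the iceberg lattice iff support >= 0.5

def support(itemset):
    return sum(itemset <= t for t in TRANSACTIONS.values()) / len(TRANSACTIONS)

def closure(itemset):
    """Items common to all transactions containing `itemset` (the concept intent)."""
    covering = [t for t in TRANSACTIONS.values() if itemset <= t]
    return set.intersection(*covering) if covering else set(ITEMS)

# Iceberg lattice: the frequent closed itemsets.
iceberg = set()
for r in range(len(ITEMS) + 1):
    for combo in combinations(sorted(ITEMS), r):
        s = set(combo)
        if support(s) >= MIN_SUPPORT:
            iceberg.add(frozenset(closure(s)))

for intent in sorted(iceberg, key=len):
    print(sorted(intent), "support:", support(set(intent)))
# Exact rules come for free: any itemset implies its closure with confidence 1,
# e.g. {'milk'} -> {'bread'} because closure({'milk'}) = {'bread', 'milk'}.
```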

Relevance:

100.00%

Publisher:

Abstract:

In recent years, the main orientation of Formal Concept Analysis (FCA) has turned from mathematics towards computer science. This article provides a review of this new orientation and analyzes why and how FCA and computer science attracted each other. It discusses FCA as a knowledge representation formalism using five knowledge representation principles provided by Davis, Shrobe, and Szolovits [DSS93]. It then studies how and why mathematics-based researchers were attracted to computer science. We argue for continuing this trend by integrating the two research areas FCA and Ontology Engineering. The second part of the article discusses three lines of research which reflect the new orientation of Formal Concept Analysis: FCA as a conceptual clustering technique and its application for supporting the merging of ontologies; the efficient computation of association rules and the structuring of the results; and the visualization and management of conceptual hierarchies and ontologies, including their application in an email management system.

Relevance:

100.00%

Publisher:

Abstract:

About ten years ago, triadic contexts were presented by Lehmann and Wille as an extension of Formal Concept Analysis. However, they have rarely been used up to now, which may be due to the rather complex structure of the resulting diagrams. In this paper, we go one step back and discuss how traditional line diagrams of standard (dyadic) concept lattices can be used for exploring and navigating triadic data. Our approach is inspired by the slice & dice paradigm of On-Line-Analytical Processing (OLAP). We recall the basic ideas of OLAP, and show how they may be transferred to triadic contexts. For modeling the navigation patterns a user might follow, we use the formalisms of finite state machines. In order to present the benefits of our model, we show how it can be used for navigating the IT Baseline Protection Manual of the German Federal Office for Information Security.
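A minimal sketch of the OLAP-style "slice" step on a triadic context is shown below; the (object, attribute, condition) triples are hypothetical and only loosely inspired by the IT Baseline Protection Manual.

```python
# A triadic context as (object, attribute, condition) triples; made-up data
# in the spirit of (component, threat, safeguard), not the real manual.
TRIPLES = {
    ("server", "power failure", "UPS"),
    ("server", "data loss", "backup"),
    ("laptop", "theft", "encryption"),
    ("laptop", "data loss", "backup"),
}

def slice_by_condition(triples, condition):
    """OLAP-style slice: fix one condition, obtain a dyadic (object, attribute) context."""
    return {(o, a) for (o, a, c) in triples if c == condition}

# Navigation then means choosing a condition, reading the resulting dyadic
# context (drawable as an ordinary line diagram), and moving to the next slice.
print(slice_by_condition(TRIPLES, "backup"))
# -> {('server', 'data loss'), ('laptop', 'data loss')} (set order may vary)
```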

Relevance:

100.00%

Publisher:

Abstract:

A key argument for modeling knowledge in ontologies is the easy re-use and re-engineering of the knowledge. However, besides consistency checking, current ontology engineering tools provide only basic functionalities for analyzing ontologies. Since ontologies can be considered as (labeled, directed) graphs, graph analysis techniques are a suitable answer to this need. Graph analysis has been performed by sociologists for over 60 years and has resulted in the vibrant research area of Social Network Analysis (SNA). While social network structures in general currently receive high attention in the Semantic Web community, there are only very few SNA applications up to now, and virtually none for analyzing the structure of ontologies. We illustrate in this paper the benefits of applying SNA to ontologies and the Semantic Web, and discuss which research topics arise on the edge between the two areas. In particular, we discuss how different notions of centrality describe the core content and structure of an ontology. From the rather simple notion of degree centrality, via betweenness centrality, to the more complex eigenvector centrality based on Hermitian matrices, we illustrate the insights these measures provide on two ontologies, which are different in purpose, scope, and size.
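As a small illustration (not the paper's setup), the sketch below treats a hypothetical mini-ontology as a directed graph and computes the three kinds of centrality with networkx; the standard eigenvector centrality is used here in place of the Hermitian-matrix variant mentioned in the abstract.

```python
import networkx as nx

# A hypothetical mini-ontology as a directed graph: subclass-of and
# domain/range links between concepts and relations.
G = nx.DiGraph()
G.add_edge("Student", "Person")      # Student subclass-of Person
G.add_edge("Professor", "Person")    # Professor subclass-of Person
G.add_edge("Lecture", "Event")       # Lecture subclass-of Event
G.add_edge("teaches", "Professor")   # domain of 'teaches'
G.add_edge("teaches", "Lecture")     # range of 'teaches'
G.add_edge("attends", "Student")     # domain of 'attends'
G.add_edge("attends", "Lecture")     # range of 'attends'

# Degree centrality: how many links a concept participates in.
print(nx.degree_centrality(G))
# Betweenness centrality: how often a concept lies on shortest paths.
print(nx.betweenness_centrality(G))
# Eigenvector centrality on the undirected skeleton.
print(nx.eigenvector_centrality(G.to_undirected(), max_iter=1000))
```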

Relevance:

100.00%

Publisher:

Abstract:

The 21st century has brought new challenges for forest management at a time when globalization in world trade is increasing and global climate change is becoming increasingly apparent. In addition to various goods and services like food, feed, timber or biofuels being provided to humans, forest ecosystems are a large store of terrestrial carbon and account for a major part of the carbon exchange between the atmosphere and the land surface. Depending on the stage of the ecosystems and/or management regimes, forests can be either sinks or sources of carbon. At the global scale, rapid economic development and a growing world population have raised much concern over the use of natural resources, especially forest resources. The challenging question is how the global demands for forest commodities can be satisfied in an increasingly globalised economy, and where they could potentially be produced. For this purpose, wood demand estimates need to be integrated in a framework which is able to adequately handle the competition for land between major land-use options such as residential land or agricultural land. This thesis is organised in accordance with the requirements to integrate the simulation of forest changes based on wood extraction in an existing framework for global land-use modelling called LandSHIFT. Accordingly, the following key points for research have been identified: (1) a review of existing global-scale economic forest sector models, (2) simulation of global wood production under selected scenarios, (3) simulation of global vegetation carbon yields, and (4) the implementation of a land-use allocation procedure to simulate the impact of wood extraction on forest land cover. Modelling the spatial dynamics of forests on the global scale requires two important inputs: (1) simulated long-term wood demand data to determine future roundwood harvests in each country, and (2) the changes in the spatial distribution of woody biomass stocks to determine how much of the resource is available to satisfy the simulated wood demands. First, three global timber market models are reviewed and compared in order to select a suitable economic model to generate wood demand scenario data for the forest sector in LandSHIFT. The comparison indicates that the ‘Global Forest Products Model’ (GFPM) is most suitable for obtaining projections of future roundwood harvests for further study with the LandSHIFT forest sector. Accordingly, the GFPM is adapted and applied to simulate wood demands for the global forestry sector, conditional on selected scenarios from the Millennium Ecosystem Assessment and the Global Environmental Outlook, until 2050. Secondly, the Lund-Potsdam-Jena (LPJ) dynamic global vegetation model is utilized to simulate the change in potential vegetation carbon stocks for the forested locations in LandSHIFT. The LPJ data is used in combination with spatially explicit forest inventory data on aboveground biomass to allocate the demands for raw forest products and identify locations of deforestation. Using the previous results as an input, a methodology to simulate the spatial dynamics of forests based on wood extraction is developed within the LandSHIFT framework. The land-use allocation procedure specified in the module translates the country-level demands for forest products into woody biomass requirements for forest areas, and allocates these on a five arc-minute grid.
In a first version, the model assumes only current conditions throughout the entire study period and does not explicitly address forest age structure. Although the module is at a very preliminary stage of development, it already captures the effects of important drivers of land-use change such as cropland and urban expansion. As a first plausibility test, the module's performance is tested under three forest management scenarios. The module succeeds in responding to changing inputs in an expected and consistent manner. The entire methodology is applied in an exemplary scenario analysis for India. A couple of future research priorities remain to be addressed, particularly the incorporation of plantation establishment, the issue of age-structure dynamics, and the implementation of a new technology-change factor in the GFPM to allow the substitution of raw wood products (especially fuelwood) by other non-wood products.
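The allocation step can be pictured with a schematic sketch like the one below, which greedily distributes a country's roundwood demand over the grid cells with the most harvestable biomass; all names and numbers are hypothetical, and this is a stand-in for, not a reproduction of, LandSHIFT's actual allocation procedure.

```python
# Schematic sketch of the allocation idea (not LandSHIFT code): distribute a
# country's roundwood demand over grid cells, preferring cells with the most
# available woody biomass.
def allocate_wood_demand(demand, cells):
    """cells: list of dicts with 'id' and 'biomass' (harvestable wood per cell).
    Returns how much is harvested in each cell."""
    harvest = {}
    remaining = demand
    # Greedy ranking by available biomass as a stand-in for a suitability-based
    # allocation over the five arc-minute grid.
    for cell in sorted(cells, key=lambda c: c["biomass"], reverse=True):
        if remaining <= 0:
            break
        taken = min(cell["biomass"], remaining)
        harvest[cell["id"]] = taken
        remaining -= taken
    if remaining > 0:
        raise ValueError("demand exceeds available biomass")
    return harvest

cells = [
    {"id": (10, 20), "biomass": 120.0},  # e.g. tonnes of harvestable wood
    {"id": (10, 21), "biomass": 80.0},
    {"id": (11, 20), "biomass": 30.0},
]
print(allocate_wood_demand(150.0, cells))  # {(10, 20): 120.0, (10, 21): 30.0}
```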

Relevance:

100.00%

Publisher:

Abstract:

Fujaba is an Open Source UML CASE tool project started at the software engineering group of Paderborn University in 1997. In 2002, Fujaba was redesigned and became the Fujaba Tool Suite with a plug-in architecture allowing developers to add functionality easily while retaining full control over their contributions.

Multiple Application Domains: Fujaba has followed the model-driven development philosophy right from its beginning in 1997. In the early days, Fujaba had a special focus on code generation from UML diagrams, resulting in a visual programming language with a special emphasis on object-structure manipulating rules. Today, at least six rather independent tool versions are under development in Paderborn, Kassel, and Darmstadt for supporting (1) reengineering, (2) embedded real-time systems, (3) education, (4) specification of distributed control systems, (5) integration with the ECLIPSE platform, and (6) MOF-based integration of system (re-)engineering tools.

International Community: To our knowledge, quite a number of research groups have also chosen Fujaba as a platform for UML- and MDA-related research activities. In addition, quite a number of Fujaba users send requests for more functionality and extensions. Therefore, the 8th International Fujaba Days aimed at bringing together Fujaba developers and Fujaba users from all over the world to present their ideas and projects and to discuss them with each other and with the Fujaba core development team.