9 results for complex knowledge structures
at Universitätsbibliothek Kassel, Universität Kassel, Germany
Abstract:
Formal Concept Analysis allows conceptual hierarchies to be derived from data tables. It is applied in various domains, e.g., data analysis, information retrieval, and knowledge discovery in databases. In order to deal with increasing sizes of the data tables (and to allow more complex data structures than just binary attributes), conceptual scales have been developed. They are considered as metadata which structure the data conceptually. But in large applications, the number of conceptual scales increases as well. Techniques are needed which support the user's navigation also on this meta-level of conceptual scales. In this paper, we attack this problem by extending the set of scales with hierarchically ordered higher-level scales and by introducing a visualization technique called nested scaling. We extend the two-level architecture of Formal Concept Analysis (the data table plus one level of conceptual scales) to a many-level architecture with a cascading system of conceptual scales. The approach also allows the representation techniques of Formal Concept Analysis to be used for the visualization of thesauri and ontologies.
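The core operation of Formal Concept Analysis, deriving all formal concepts from a binary data table (a "formal context"), can be sketched in a few lines. The following naive enumeration is illustrative only: the paper's nested scaling is not shown, the toy context is invented, and real FCA tools use faster algorithms such as NextClosure.

```python
from itertools import combinations

# A formal context: objects x attributes (invented toy data).
objects = ["frog", "dog", "reed"]
attributes = ["needs_water", "lives_in_water", "can_move"]
incidence = {
    ("frog", "needs_water"), ("frog", "lives_in_water"), ("frog", "can_move"),
    ("dog", "needs_water"), ("dog", "can_move"),
    ("reed", "needs_water"), ("reed", "lives_in_water"),
}

def common_attributes(objs):
    """Derivation A': all attributes shared by every object in objs."""
    return {a for a in attributes if all((o, a) in incidence for o in objs)}

def common_objects(attrs):
    """Derivation B': all objects having every attribute in attrs."""
    return {o for o in objects if all((o, a) in incidence for a in attrs)}

# A formal concept is a pair (A, B) with A' = B and B' = A.
# Naive enumeration over all object subsets (fine for tiny contexts).
concepts = set()
for r in range(len(objects) + 1):
    for subset in combinations(objects, r):
        intent = common_attributes(set(subset))
        extent = common_objects(intent)  # closure of the subset
        concepts.add((frozenset(extent), frozenset(intent)))

# Ordered by extent inclusion, these concepts form the concept lattice.
for extent, intent in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(extent), sorted(intent))
```

Conceptual scales, as used in the paper, play the role of prepared contexts like this one, through which a larger many-valued data table is viewed.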
Abstract:
The relativistic multiconfiguration Dirac-Fock (MCDF) method is currently one of the most widely used approaches for calculating the electronic structure and properties of free atoms. In this method, the wave functions of selected atomic states are constructed as linear combinations of so-called configuration state functions (CSF), which span a (many-particle) basis in a subspace of the N-electron Hilbert space. The concrete construction of this basis ultimately determines the quality of the wave functions, which are usually obtained by varying the expectation value of the no-pair Dirac-Coulomb Hamiltonian. With MCDF wave functions, the dominant relativistic and correlation effects in free atoms can generally be captured and understood quite well. Besides the instantaneous Coulomb repulsion between all pairs of electrons, this also accounts for the relativistic corrections to the electron-electron interaction, i.e. the magnetic and retardation contributions to the mutual interaction of the electrons, the coupling of the electrons to the radiation field, and the influence of an extended nuclear model. Compared with earlier MCDF calculations, the case studies discussed in this work employ wave function expansions that are 1-2 orders of magnitude more demanding and therefore now also permit systematic investigations of atoms with open d and f shells. Spontaneous emission or absorption of photons by free atoms is most easily described theoretically by means of transition probabilities. Such data are needed today in many fields of research: besides the traditional areas of fusion research and astrophysics, new research directions (e.g. nanostructure research and X-ray lithography) are increasingly coming into focus. To increase the reliability of our theoretical predictions, this work pays particular attention to the relaxation of the bound electron density, which requires considerably greater computational effort. Taking these relaxation effects into account often also leads to significantly better agreement with experimental values, in particular for Δn = 1 transitions as well as for weak and intercombination lines occurring within a principal shell (Δn = 0). Our calculations of wave functions and transition probabilities, improved over the past years, clearly demonstrate the progress made in the treatment of complex atoms. At the same time, this new approach can in the future also be transferred to (i) more complicated shell structures, (ii) the investigation of two-electron-one-photon (TEOP) transitions, and (iii) a number of further atomic properties that are known to depend sensitively on the relaxation of the electron density. Examples are Auger decays, atomic photoionization, and radiative and dielectronic recombination processes, which until now have rarely been treated theoretically even in the Dirac-Fock approximation.
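The CSF expansion described above can be stated compactly. As a sketch in commonly used MCDF notation (following standard presentations of the method, not necessarily the notation of this thesis), an atomic state function with parity P and total angular momentum quantum numbers J, M is written as

```latex
\Psi_\alpha(PJM) \;=\; \sum_{i=1}^{n_c} c_i(\alpha)\, \Phi(\gamma_i\, P J M)
```

where the Φ(γ_i PJM) are the configuration state functions, the γ_i collect all further quantum numbers needed to specify each CSF uniquely, and the mixing coefficients c_i(α) are determined variationally from the expectation value of the no-pair Dirac-Coulomb Hamiltonian.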
Abstract:
Designing is a heterogeneous, fuzzily defined, floating field of various activities and chunks of ideas and knowledge. Available theories about the foundations of designing, as presented in "the basic PARADOX" (Jonas and Meyer-Veden 2004), have evoked the impression of Babylonian confusion. We located the reasons for this "mess" in the "non-fit", that is, the problematic relation between theories and their subject field. There seems to be an interface problem in theory-building comparable to the one in designing itself. "Complexity" sounds promising but turns out to be a problematic and not really helpful concept. I will argue instead for a more precise application of systemic and evolutionary concepts, which - in my view - are able to model the underlying generative structures and processes that produce the visible phenomenon of complexity. It does not make sense to introduce a new fashionable meta-concept and to hope for a panacea before having clarified the more basic and still equally problematic older meta-concepts. This paper takes one step away from "theories of what" towards practice and doing, and instead takes a closer look at existing process models, or "theories of how" to design. Doing this from a systemic perspective leads to an evolutionary view of the process, which finally allows the "knowledge gaps" inherent in the design process to be specified more clearly. This aspect has to be taken into account as constitutive of any attempt at theory-building in design, which can be characterized as a "practice of not-knowing". I conclude that comprehensive "unified" theories, methods, or process models run aground on the identified knowledge gaps, which allow neither reliable models of the present nor reliable projections into the future. Consolation may be found in shifting from the effort of adaptation towards strategies of exaptation, i.e. the development of stocks of alternatives for coping with unpredictable situations in the future.
Abstract:
Land use is a crucial link between human activities and the natural environment and one of the main driving forces of global environmental change. Large parts of the terrestrial land surface are used for agriculture, forestry, settlements and infrastructure. Given the importance of land use, it is essential to understand the multitude of influential factors and the resulting land-use patterns. An essential methodology for studying and quantifying such interactions is provided by land-use models. By applying land-use models, it is possible to analyze the complex structure of linkages and feedbacks and also to determine the relevance of driving forces. Modeling land use and land-use changes has a long tradition. In particular on the regional scale, a variety of models for different regions and research questions has been created. Modeling capabilities grow with steady advances in computer technology, driven on the one hand by increasing computing power and on the other hand by new methods in software development, e.g. object- and component-oriented architectures. In this thesis, SITE (Simulation of Terrestrial Environments), a novel framework for integrated regional land-use modeling, is introduced and discussed. Particular features of SITE are its notably extended capability to integrate models and its strict separation of application and implementation. These features enable the efficient development, testing and use of integrated land-use models. On the system side, SITE provides generic data structures (grid, grid cells, attributes etc.) and takes over responsibility for their administration. By means of a scripting language (Python), extended by language features specific to land-use modeling, these data structures can be accessed and manipulated by modeling applications. The scripting-language interpreter is embedded in SITE. Sub-models can be integrated via the scripting language or through a generic interface provided by SITE. Furthermore, functionalities important for land-use modeling, such as model calibration, model tests and analysis support for simulation results, have been integrated into the generic framework. During the implementation of SITE, specific emphasis was placed on expandability, maintainability and usability. Along with the modeling framework, a land-use model for analyzing the stability of tropical rainforest margins was developed in the context of the collaborative research project STORMA (SFB 552). In a research area in Central Sulawesi, Indonesia, socio-environmental impacts of land-use changes were examined. SITE was used to simulate land-use dynamics for the historical period from 1981 to 2002. In addition, a scenario that did not consider migration in the population dynamics was analyzed. For the calculation of crop yields and trace gas emissions, the DAYCENT agro-ecosystem model was integrated. In this case study, it could be shown that land-use changes in the Indonesian research area were mainly characterized by the expansion of agricultural areas at the expense of natural forest. For this reason, the situation had to be interpreted as unsustainable even though increased agricultural use implied economic improvements and higher farmers' incomes. Due to the importance of model calibration, it was explicitly addressed in the SITE architecture through the introduction of a specific component.
The calibration functionality can be used by all SITE applications and enables largely automated model calibration. Calibration in SITE is understood as a process that finds an optimal, or at least adequate, solution for a set of arbitrarily selectable model parameters with respect to an objective function. In SITE, an objective function is typically a map-comparison algorithm capable of comparing a simulation result to a reference map. Several map optimization and map comparison methodologies are available and can be combined. The STORMA land-use model was calibrated using a genetic algorithm for optimization and the figure-of-merit map comparison measure as objective function. The calibration period ranged from 1981 to 2002, and corresponding reference land-use maps were compiled for it. It could be shown that efficient automated model calibration with SITE is possible. Nevertheless, the selection of the calibration parameters required detailed knowledge of the underlying land-use model and cannot be automated. In another case study, decreases in crop yields and resulting losses in income from coffee cultivation were analyzed and quantified under the assumption of four different deforestation scenarios. For this task, an empirical model describing the dependence of bee pollination, and the resulting coffee fruit set, on the distance to the closest natural forest was integrated. Land-use simulations showed that, depending on the magnitude and location of ongoing forest conversion, pollination services are expected to decline continuously. This results in a reduction of coffee yields of up to 18% and a loss of net revenues per hectare of up to 14%. However, the study also showed that ecological and economic values can be preserved if patches of natural vegetation are conserved in the agricultural landscape.
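The figure-of-merit objective mentioned above is a map-comparison measure; in the land-change literature (e.g., Pontius et al.) it is defined as the ratio of correctly simulated change to the union of observed and simulated change. A minimal sketch, assuming integer-coded land-use grids and not reproducing SITE's actual calibration component:

```python
import numpy as np

def figure_of_merit(initial, observed, simulated):
    """Figure of merit = B / (A + B + C + D), where
    A = observed change simulated as persistence (misses),
    B = observed change simulated correctly (hits),
    C = observed change simulated as change to the wrong category,
    D = observed persistence simulated as change (false alarms)."""
    obs_change = observed != initial
    sim_change = simulated != initial

    hits = np.sum(obs_change & (simulated == observed))
    misses = np.sum(obs_change & ~sim_change)
    wrong = np.sum(obs_change & sim_change & (simulated != observed))
    false_alarms = np.sum(~obs_change & sim_change)

    denom = hits + misses + wrong + false_alarms
    return hits / denom if denom else 0.0

# Toy 3x3 maps with land-use codes (1 = forest, 2 = agriculture).
initial   = np.array([[1, 1, 1], [1, 1, 1], [1, 1, 2]])
observed  = np.array([[1, 1, 2], [1, 2, 2], [1, 1, 2]])
simulated = np.array([[1, 2, 2], [1, 1, 2], [1, 1, 2]])
print(figure_of_merit(initial, observed, simulated))  # 0.5
```

In a calibration run, an optimizer such as the genetic algorithm mentioned above would propose parameter sets, run the land-use simulation, and score each result with a function of this kind against the reference map.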
Abstract:
Conceptual Graphs and Formal Concept Analysis share basic concerns: the focus on conceptual structures, the use of diagrams for supporting communication, the orientation toward Peirce's Pragmatism, and the aim of representing and processing knowledge. These concerns open up rich possibilities of interplay and integration. We discuss the philosophical foundations of both disciplines and analyze their specific qualities. Based on this analysis, we discuss some possible approaches of interplay and integration.
Abstract:
A key argument for modeling knowledge in ontologies is the easy re-use and re-engineering of that knowledge. However, besides consistency checking, current ontology engineering tools provide only basic functionalities for analyzing ontologies. Since ontologies can be considered as (labeled, directed) graphs, graph analysis techniques are a suitable answer to this need. Graph analysis has been performed by sociologists for over 60 years and has resulted in the vibrant research area of Social Network Analysis (SNA). While social network structures in general currently receive high attention in the Semantic Web community, there are only very few SNA applications so far, and virtually none for analyzing the structure of ontologies. We illustrate in this paper the benefits of applying SNA to ontologies and the Semantic Web, and discuss which research topics arise at the border between the two areas. In particular, we discuss how different notions of centrality describe the core content and structure of an ontology. From the rather simple notion of degree centrality via betweenness centrality to the more complex eigenvector centrality based on Hermitian matrices, we illustrate the insights these measures provide on two ontologies which differ in purpose, scope, and size.
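To illustrate the three centrality measures named above, here is a minimal sketch using networkx on a toy graph standing in for an ontology's concept network. The example graph is invented, and networkx's standard eigenvector centrality on the adjacency matrix is used rather than the paper's Hermitian-matrix variant:

```python
import networkx as nx

# Toy stand-in for an ontology graph: nodes are concepts,
# edges are (here undirected) relations between them.
G = nx.Graph()
G.add_edges_from([
    ("Thing", "Agent"), ("Thing", "Event"),
    ("Agent", "Person"), ("Agent", "Organization"),
    ("Person", "Author"), ("Event", "Conference"),
    ("Author", "Conference"),
])

# Degree centrality: how many direct neighbors a concept has.
print(nx.degree_centrality(G))

# Betweenness centrality: how often a concept lies on shortest paths.
print(nx.betweenness_centrality(G))

# Eigenvector centrality: a concept is central if its neighbors are.
print(nx.eigenvector_centrality(G))
```

Run on a real ontology graph, rankings like these single out the concepts that carry the core content and structure, which is the analysis the paper performs.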
Abstract:
The scope of this work is the fundamental growth, tailoring and characterization of self-organized indium arsenide quantum dots (QDs) and their exploitation as the active region of diode lasers emitting in the 1.55 µm range. This wavelength regime is especially interesting for long-haul telecommunications, as optical fibers made from silica glass exhibit their lowest optical absorption there. Molecular Beam Epitaxy (MBE) is utilized as the fabrication technique for the quantum dots and laser structures. The results presented in this thesis represent the first experimental work for which this reactor was used at the University of Kassel. Most research in the field of self-organized quantum dots has been conducted in the InAs/GaAs material system. It can be seen as the model system of self-organized quantum dots, but it is not suitable for the targeted emission wavelength: light emission from this system at 1.55 µm is hard to accomplish. To stay as close as possible to existing processing technology, the In(AlGa)As/InP (100) material system is employed. Depending on the epitaxial growth technique and growth parameters, this system has the drawback of producing a wide range of nano-species besides quantum dots, the best known being the elongated quantum dashes (QDashes). Such structures are preferentially formed if InAs is deposited on InP. This is related to the low lattice mismatch of 3.2%, which is less than half of the value in the InAs/GaAs system. The task of creating round-shaped and uniform QDs is rendered more complex by exchange effects between arsenic and phosphorus as well as by anisotropic surface effects that do not need to be dealt with in the InAs/GaAs case. While QDash structures have been studied both fundamentally and in laser structures, they do not represent the theoretical ideal case of a zero-dimensional material. Creating round-shaped quantum dots on the InP(100) substrate remains a challenging task; details of the self-organization process are still unknown, and the formation of the QDs is not yet fully understood. In the course of the experimental work, a novel growth concept that eases the fabrication of QDs was discovered and analyzed. It is based on different crystal growth and ad-atom diffusion processes under the supply of different modifications of the arsenic atmosphere in the MBE reactor. The reactor is equipped with special valved cracking effusion cells for arsenic and phosphorus, an all-solid-source configuration that does not rely on a toxic gas supply. The cracking effusion cells are able to create different species of arsenic and phosphorus, which constitutes the basis of the growth concept. With this method, round-shaped QD ensembles with superior optical properties and record-low photoluminescence linewidth were achieved. By systematically varying the growth parameters and carrying out a detailed analysis of the experimental data, a range of parameter values for which the formation of QDs is favored was identified. A qualitative explanation of the formation characteristics, based on the surface migration of In ad-atoms, is developed. Such tailored QDs are finally implemented as the active region in a self-designed diode laser structure. A basic characterization of the static and temperature-dependent properties was carried out. The QD lasers outperform a reference quantum well laser in terms of inversion conditions and temperature-dependent characteristics. Pulsed output powers of several hundred milliwatts were measured at room temperature.
In particular, the lasers feature a high modal gain that even allowed cw emission at room temperature from a processed ridge-waveguide device as short as 340 µm, with output powers of 17 mW. Modulation experiments performed at the Israel Institute of Technology (Technion) showed a complex behavior of the QDs in the laser cavity. Despite the fact that the laser structure is not fully optimized for high-speed operation, data transmission at 15 Gb/s combined with low noise was achieved. To the best of the author's knowledge, this renders the lasers the fastest QD devices operating at 1.55 µm. The thesis starts with an introductory chapter that outlines the advantages of optical fiber communication in general. Chapter 2 introduces the fundamental knowledge necessary to understand the importance of the active region's dimensions for the performance of a diode laser. The novel growth concept and its experimental analysis are presented in chapter 3. Chapter 4 finally contains the work on diode lasers.
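The 3.2% figure quoted in this abstract follows from the usual definition of lattice mismatch. As a quick check with textbook lattice constants (a_InAs ≈ 6.058 Å, a_InP ≈ 5.869 Å, a_GaAs ≈ 5.653 Å; these values are assumed here, not taken from the thesis):

```latex
f = \frac{a_\mathrm{layer} - a_\mathrm{substrate}}{a_\mathrm{substrate}}, \quad
f_\mathrm{InAs/InP} = \frac{6.058 - 5.869}{5.869} \approx 3.2\,\%, \quad
f_\mathrm{InAs/GaAs} = \frac{6.058 - 5.653}{5.653} \approx 7.2\,\%
```

which is consistent with the statement that the mismatch on InP is less than half of the InAs/GaAs value.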
Abstract:
From "chaos group" to learning organization. Case studies on inducing and spreading innovation in small rural organizations in Buruli (Central Uganda). The frequently lacking sustainability of agricultural projects in Africa in general, and in Buruli (Central Uganda) in particular, prompted the research underlying this dissertation. A common reason for the failure of projects is that the local population regards agricultural innovation as a risk to the family's food security. The present work is therefore a contribution to the search for a path to sustainability that takes this fact into account. As research methods, group discussion and observation in the two variants "participating observer" and "observing participant" according to Lamnek (1995b) were applied. It turned out that the target population's rejection of agricultural innovation can hardly be overcome by financial incentives, seminars, or the persuasiveness of development organizations' staff, but only by involving the people in a risk-management process steered by themselves. Schein's (2000) process consultation and Rogers' (2010) non-directive counseling proved very useful in our study for motivating the population toward a risk-aware development initiative, as well as for describing this process in the present study. Through this innovative approach to development consulting, the groups under study were enabled to analyze, assess, and minimize the risk of innovation, to take their future into their own hands and shape it within their social, economic, and physical environment, and to respond appropriately to changes in the course of implementation. Acquiring this capability required the transformation of simple farmers' groups without recognizable structures into structured and organized groups corresponding to a learning organization in a rural setting. This transformation first of all requires access to information and goal-oriented communication. The transformation of the working group into a learning farmers' organization fostered the sustainability of the vegetable-growing project and its risk management, and thus became a concrete example, perceived by its environment, of the suitability of the research approach described above. The emergence of a learning organization is not a means to an end but is itself the goal to be achieved. Observing, accompanying, and analyzing this transformation process requires a multidisciplinary approach. In this case, agricultural, sociological, linguistic, and anthropological perspectives flowed into the partnership-oriented research. From development policy, this approach demands a new path based on partnership with those affected and on a de-emotionalization of the development project, and it presupposes mutual appreciation between the actors. In this process, the "learning" farmers' organization over time also develops into a "teaching" organization and thereby becomes a source of inspiration for society as a whole. The sustainability of rural development projects is thus substantially improved.
Abstract:
Mesh generation is an important step in many numerical methods. We present the "Hierarchical Graph Meshing" (HGM) method as a novel approach to mesh generation, based on algebraic graph theory. The HGM method can be used to systematically construct configurations exhibiting multiple hierarchies and complex symmetry characteristics. The hierarchical description of structures provided by the HGM method can be exploited to increase the efficiency of multiscale and multigrid methods. In this paper, the HGM method is employed for the systematic construction of super carbon nanotubes of arbitrary order, which present a pertinent example of structurally and geometrically complex, yet highly regular, structures. The HGM algorithm is computationally efficient and exhibits good scaling characteristics. In particular, it scales linearly for super carbon nanotube structures and works much faster than geometry-based methods employing neighborhood search algorithms. Its modular character makes it conducive to automation. For the generation of a mesh, the information about the geometry of the structure in a given configuration is added in a way that relates geometric symmetries to structural symmetries. The intrinsically hierarchic description of the resulting mesh greatly reduces the effort of determining mesh hierarchies for multigrid and multiscale applications and helps to exploit symmetry-related methods in the mechanical analysis of complex structures.
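The paper's own construction is not reproduced here, but the core idea of building a large regular mesh graph hierarchically can be sketched with a generic two-level construction: take a coarse "blueprint" graph, replace every node by a copy of a motif graph, and wire neighboring copies together. A minimal sketch with networkx, in which the motif, blueprint, and wiring rule are invented for illustration:

```python
import networkx as nx

def hierarchical_graph(blueprint, motif, port):
    """Two-level hierarchical construction: each blueprint node becomes
    a copy of `motif`; each blueprint edge connects the `port` nodes
    of the two corresponding copies."""
    G = nx.Graph()
    # Level 1: one labeled copy of the motif per blueprint node.
    for v in blueprint.nodes:
        for a, b in motif.edges:
            G.add_edge((v, a), (v, b))
    # Level 2: wire copies along the blueprint edges via the port node.
    for u, v in blueprint.edges:
        G.add_edge((u, port), (v, port))
    return G

# Invented example: a 6-ring motif replicated along a 4-cycle blueprint.
motif = nx.cycle_graph(6)      # hexagon, reminiscent of a carbon ring
blueprint = nx.cycle_graph(4)  # coarse level of the hierarchy
G = hierarchical_graph(blueprint, motif, port=0)

print(G.number_of_nodes(), G.number_of_edges())  # 24 nodes, 28 edges
# The node labels (blueprint_node, motif_node) retain the hierarchy,
# which is what multigrid/multiscale methods can exploit.
```

The label structure is the point of the exercise: because every fine-level node carries its coarse-level ancestor in its label, coarsening for a multigrid scheme is a lookup rather than a geometric neighborhood search.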