881 results for Harry-Dym hierarchies
Abstract:
In recent years, the main orientation of Formal Concept Analysis (FCA) has turned from mathematics towards computer science. This article provides a review of this new orientation and analyzes why and how FCA and computer science attracted each other. It discusses FCA as a knowledge representation formalism using the five knowledge representation principles provided by Davis, Shrobe, and Szolovits [DSS93]. It then studies how and why mathematics-based researchers became attracted to computer science. We argue for continuing this trend by integrating the two research areas of FCA and Ontology Engineering. The second part of the article discusses three lines of research which witness the new orientation of Formal Concept Analysis: FCA as a conceptual clustering technique and its application to supporting the merging of ontologies; the efficient computation of association rules and the structuring of the results; and the visualization and management of conceptual hierarchies and ontologies, including their application in an email management system.
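As a hedged illustration of the basic FCA construction that the article builds on (the toy context and code below are not taken from the article), the following sketch enumerates the formal concepts of a small object-attribute context:

```python
# Hedged illustration of the basic FCA construction (not code or data from
# the article): enumerate the formal concepts of a tiny object-attribute
# context by closing every subset of attributes.
from itertools import chain, combinations

# Hypothetical formal context: objects and the attributes they have.
context = {
    "duck":  {"flies", "swims"},
    "eagle": {"flies", "hunts"},
    "shark": {"swims", "hunts"},
}
attributes = set().union(*context.values())

def extent(attrs):
    """Objects possessing all of the given attributes."""
    return {o for o, a in context.items() if attrs <= a}

def intent(objs):
    """Attributes shared by all of the given objects."""
    return set.intersection(*(context[o] for o in objs)) if objs else set(attributes)

# A formal concept is a pair (extent, intent) closed under the Galois connection.
concepts = set()
for attrs in chain.from_iterable(combinations(attributes, r)
                                 for r in range(len(attributes) + 1)):
    objs = extent(set(attrs))
    concepts.add((frozenset(objs), frozenset(intent(objs))))

for ext, itt in sorted(concepts, key=lambda c: -len(c[0])):
    print(sorted(ext), sorted(itt))
```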
Abstract:
It is known that cooperating distributed systems (CD-systems) of stateless deterministic restarting automata with window size 1 accept a class of semi-linear languages that properly includes all rational trace languages. Although the component automata of such a CD-system are all deterministic, in general the CD-system itself is not, as in each of its computations, the initial component and the successor components are still chosen nondeterministically. Here we study CD-systems of stateless deterministic restarting automata with window size 1 that are themselves completely deterministic. In fact, we consider two such types of CD-systems, the strictly deterministic systems and the globally deterministic systems.
Abstract:
At least since the formulation of modern portfolio theory by Harry Markowitz (1952), active portfolio management strategies have received particular attention in both academia and investment practice. This thesis is situated at the interface between neoclassical capital market theory and technical analysis. It examines the extent to which a passive buy-and-hold strategy, the only strategy consistent with Fama's (1970) efficient market hypothesis, can be beaten by active strategies. The author presents a wavelet-based approach to the analysis of financial time series. The wavelet transform is used as a mathematical data preprocessing tool: it provides a multiscale representation of a data series by splitting it into an approximation series and a detail series without any loss of information. The thesis restricts itself to the use of Daubechies wavelets. The multiscale representation serves as the basis for the development of two technical indicators. The Wavelet Stochastic Indicator builds on the idea of the well-known stochastic oscillator but uses the approximation series rather than the price series as its input. An investment strategy based on this indicator is subjected to an extensive sensitivity analysis, which shows that a buy-and-hold strategy can indeed be outperformed. The idea of the momentum indicator is taken up by the Wavelet Momentum Indicator, which uses the detail series as input. In the sensitivity analysis of a Wavelet Momentum strategy, however, the buy-and-hold strategy is not always beaten. A wavelet-based forecasting model, like the technical indicators, makes use of the multiscale representation. The approximation series are extrapolated with a second-degree polynomial and the detail series by means of sine regression. The subsequent aggregation of the extrapolated series yields forecast security prices. Combined trading strategies show how the Wavelet Stochastic Indicator, the Wavelet Momentum Indicator, and the wavelet-based forecasting model can be linked. By combining individual strategies, the buy-and-hold strategy can be beaten. The final part of the thesis deals with the modelling of trading-system portfolios. The aim is simultaneous diversification across assets and strategies, subject to continuous optimization. This procedure is understood as a systematic investment process bound to specific optimization criteria, with which a passive buy-and-hold strategy can be outperformed. The thesis establishes a systematic link between the discrete wavelet transform and technical, quantitative investment strategies. It also highlights the problem areas of the quite promising use of the wavelet transform in technical analysis.
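A minimal sketch of the multiscale split described above, assuming the PyWavelets library and a hypothetical price series; the thesis's exact decomposition depth and the indicator definitions themselves are not reproduced here:

```python
# Minimal sketch of the multiscale split described above, assuming the
# PyWavelets library and a hypothetical price series; the thesis's exact
# decomposition depth and indicator definitions are not reproduced here.
import numpy as np
import pywt

rng = np.random.default_rng(1)
prices = 100.0 + np.cumsum(rng.standard_normal(256))  # hypothetical price series

# Single-level discrete wavelet transform with a Daubechies wavelet (db4).
cA, cD = pywt.dwt(prices, 'db4')        # approximation and detail coefficients

# Reconstruct the approximation series (the input of the Wavelet Stochastic
# Indicator) and the detail series (the input of the Wavelet Momentum
# Indicator) at the original sampling rate.
approx = pywt.idwt(cA, None, 'db4')     # smooth trend component
detail = pywt.idwt(None, cD, 'db4')     # high-frequency component

# The split is lossless: the two components add back up to the original series.
assert np.allclose(prices, (approx + detail)[:len(prices)])
```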
Abstract:
The ongoing growth of the World Wide Web, catalyzed by the increasing possibility of ubiquitous access via a variety of devices, continues to strengthen its role as our prevalent information and communication medium. However, although tools like search engines facilitate retrieval, the task of finally making sense of Web content is still often left to human interpretation. The vision of supporting both humans and machines in such knowledge-based activities led to the development of different systems which allow Web resources to be structured by metadata annotations. Interestingly, two major approaches which gained a considerable amount of attention address the problem from nearly opposite directions: on the one hand, the idea of the Semantic Web suggests formalizing the knowledge within a particular domain by means of the "top-down" approach of defining ontologies. On the other hand, Social Annotation Systems, as part of the so-called Web 2.0 movement, implement a "bottom-up" style of categorization using arbitrary keywords. Experience as well as research into the characteristics of both systems has shown that their strengths and weaknesses are largely inverse: while Social Annotation suffers from problems like ambiguity or lack of precision, ontologies were especially designed to eliminate those. The latter, however, suffer from a knowledge acquisition bottleneck, which is successfully overcome by the large user populations of Social Annotation Systems. Instead of being regarded as competing paradigms, the obvious potential synergies from a combination of both motivated approaches to "bridge the gap" between them. These were fostered by the evidence of emergent semantics, i.e., the self-organized evolution of implicit conceptual structures, within Social Annotation data. While several techniques to exploit the emergent patterns have been proposed, a systematic analysis, especially regarding paradigms from the field of ontology learning, is still largely missing. This also includes a deeper understanding of the circumstances which affect the evolution processes. This work aims to address this gap by providing an in-depth study of methods and influencing factors to capture emergent semantics from Social Annotation Systems. We focus on the acquisition of lexical semantics from the underlying networks of keywords, users and resources. Structured along different ontology learning tasks, we use a methodology of semantic grounding to characterize and evaluate the semantic relations captured by different methods. In all cases, our studies are based on datasets from several Social Annotation Systems. Specifically, we first analyze semantic relatedness among keywords and identify measures which detect different notions of relatedness. These constitute the input of concept learning algorithms, which then focus on the discovery of synonymous and ambiguous keywords. Here, we assess the usefulness of various clustering techniques. As a prerequisite to inducing hierarchical relationships, our next step is to study measures which quantify the level of generality of a particular keyword. We find that comparatively simple measures can approximate the generality information encoded in reference taxonomies. These insights are used to inform the final task, namely the creation of concept hierarchies. For this purpose, generality-based algorithms exhibit advantages compared to clustering approaches.
In order to complement the identification of suitable methods to capture semantic structures, we next analyze several factors which influence their emergence. Empirical evidence is provided that the amount of available data plays a crucial role in determining keyword meanings. From a different perspective, we examine pragmatic aspects by considering different annotation patterns among users. Based on a broad distinction between "categorizers" and "describers", we find that the latter produce more accurate results. This suggests a causal link between pragmatic and semantic aspects of keyword annotation. As a special kind of usage pattern, we then have a look at system abuse and spam. While observing a mixed picture, we suggest that an individual decision should be taken instead of disregarding spammers as a matter of principle. Finally, we discuss a set of applications which operationalize the results of our studies for enhancing both Social Annotation and semantic systems. These comprise on the one hand tools which foster the emergence of semantics, and on the other hand applications which exploit the socially induced relations to improve, e.g., searching, browsing, or user profiling facilities. In summary, the contributions of this work highlight viable methods and crucial aspects for designing enhanced knowledge-based services of a Social Semantic Web.
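As a hedged illustration of two of the ingredients studied above, the sketch below computes a co-occurrence-based cosine measure of keyword relatedness and a simple frequency-based generality score on invented toy data; it is not the thesis's own code or its full set of measures:

```python
# Hedged illustration of two ingredients discussed above, using invented toy
# data: a co-occurrence-based cosine measure of keyword relatedness and a
# simple frequency-based generality score. This is not the thesis's code,
# nor its complete set of measures.
from collections import Counter
from itertools import combinations
import math

# Hypothetical folksonomy: each post is the set of keywords one user
# assigned to one resource.
posts = [
    {"python", "programming", "tutorial"},
    {"python", "snake", "animal"},
    {"programming", "software", "tutorial"},
    {"snake", "animal", "reptile"},
]

freq = Counter(tag for post in posts for tag in post)
cooc = Counter()
for post in posts:
    for a, b in combinations(sorted(post), 2):
        cooc[(a, b)] += 1
        cooc[(b, a)] += 1

def relatedness(t1, t2):
    """Cosine similarity of the two keywords' co-occurrence vectors."""
    v1 = {t: c for (s, t), c in cooc.items() if s == t1}
    v2 = {t: c for (s, t), c in cooc.items() if s == t2}
    dot = sum(v1[t] * v2[t] for t in v1.keys() & v2.keys())
    norm = math.sqrt(sum(c * c for c in v1.values())) * \
           math.sqrt(sum(c * c for c in v2.values()))
    return dot / norm if norm else 0.0

def generality(tag):
    """Comparatively simple generality proxy: overall usage frequency."""
    return freq[tag]

print(relatedness("python", "programming"), generality("animal"))
```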
Abstract:
Mesh generation is an important step in many numerical methods. We present the "Hierarchical Graph Meshing" (HGM) method as a novel approach to mesh generation, based on algebraic graph theory. The HGM method can be used to systematically construct configurations exhibiting multiple hierarchies and complex symmetry characteristics. The hierarchical description of structures provided by the HGM method can be exploited to increase the efficiency of multiscale and multigrid methods. In this paper, the HGM method is employed for the systematic construction of super carbon nanotubes of arbitrary order, which present a pertinent example of structurally and geometrically complex, yet highly regular, structures. The HGM algorithm is computationally efficient and exhibits good scaling characteristics. In particular, it scales linearly for super carbon nanotube structures and works much faster than geometry-based methods employing neighborhood search algorithms. Its modular character makes it conducive to automation. For the generation of a mesh, the information about the geometry of the structure in a given configuration is added in a way that relates geometric symmetries to structural symmetries. The intrinsically hierarchical description of the resulting mesh greatly reduces the effort of determining mesh hierarchies for multigrid and multiscale applications and helps to exploit symmetry-related methods in the mechanical analysis of complex structures.
Abstract:
This thesis describes the development of a model-based vision system that exploits hierarchies of both object structure and object scale. The focus of the research is to use these hierarchies to achieve robust recognition based on effective organization and indexing schemes for model libraries. The goal of the system is to recognize parameterized instances of non-rigid model objects contained in a large knowledge base despite the presence of noise and occlusion. Robustness is achieved by developing a system that can recognize viewed objects that are scaled or mirror-image instances of the known models, or that contain component sub-parts with different relative scaling, rotation, or translation than in the models. The approach taken in this thesis is to develop an object shape representation that incorporates a component sub-part hierarchy, to allow for efficient and correct indexing into an automatically generated model library as well as for relative parameterization among sub-parts, and a scale hierarchy, to allow for a general-to-specific recognition procedure. After an analysis of the issues and inherent tradeoffs in the recognition process, a system is implemented using a representation based on significant contour curvature changes and a recognition engine based on geometric constraints on feature properties. Examples of the system's performance are given, followed by an analysis of the results. In conclusion, the system's benefits and limitations are presented.
Abstract:
Texture provides one cue for identifying the physical cause of an intensity edge, such as occlusion, shadow, surface orientation or reflectance change. Marr, Julesz, and others have proposed that texture is represented by small lines or blobs, called 'textons' by Julesz [1981a], together with their attributes, such as orientation, elongation, and intensity. Psychophysical studies suggest that texture boundaries are perceived where distributions of attributes over neighborhoods of textons differ significantly. However, these studies, which deal with synthetic images, neglect to consider two important questions: How can these textons be extracted from images of natural scenes? And how, exactly, are texture boundaries then found? This thesis proposes answers to these questions by presenting an algorithm for computing blobs from natural images and a statistic for measuring the difference between two sample distributions of blob attributes. As part of the blob detection algorithm, methods for estimating image noise are presented, which are applicable to edge detection as well.
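The specific statistic developed in the thesis is not reproduced here; as a hedged stand-in, the sketch below applies a standard two-sample Kolmogorov-Smirnov test (SciPy assumed, data hypothetical) to one blob attribute sampled on either side of a candidate texture boundary:

```python
# Hedged illustration only: the thesis's own statistic for comparing blob
# attribute distributions is not reproduced here. Instead, a standard
# two-sample Kolmogorov-Smirnov test is applied to one attribute
# (orientation) sampled from the two neighborhoods of a candidate boundary.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Hypothetical blob orientations (radians) from the two neighborhoods.
orientations_left = rng.normal(loc=0.2, scale=0.3, size=80)
orientations_right = rng.normal(loc=1.1, scale=0.3, size=75)

stat, p_value = ks_2samp(orientations_left, orientations_right)
if p_value < 0.01:
    print(f"texture boundary likely (KS statistic {stat:.2f}, p {p_value:.1g})")
else:
    print("no significant difference between the blob attribute distributions")
```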
Abstract:
This thesis describes a methodology, a representation, and an implemented program for troubleshooting digital circuit boards at roughly the level of expertise one might expect in a human novice. Existing methods for model-based troubleshooting have not scaled up to deal with complex circuits, in part because traditional circuit models do not explicitly represent aspects of the device that troubleshooters would consider important. For complex devices the model of the target device should be constructed with the goal of troubleshooting explicitly in mind. Given that methodology, the principal contributions of the thesis are ways of representing complex circuits to help make troubleshooting feasible. Temporally coarse behavior descriptions are a particularly powerful simplification. Instantiating this idea for the circuit domain produces a vocabulary for describing digital signals. The vocabulary has a level of temporal detail sufficient to make useful predictions about the response of the circuit while it remains coarse enough to make those predictions computationally tractable. Other contributions are principles for using these representations. Although not embodied in a program, these principles are sufficiently concrete that models can be constructed manually from existing circuit descriptions such as schematics, part specifications, and state diagrams. One such principle is that if there are components with particularly likely failure modes or failure modes in which their behavior is drastically simplified, this knowledge should be incorporated into the model. Further contributions include the solution of technical problems resulting from the use of explicit temporal representations and design descriptions with tangled hierarchies.
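As a purely hypothetical sketch of the "temporally coarse" idea, the snippet below summarizes a sampled digital signal with a small behavioral label; the vocabulary is invented for illustration and is not the one defined in the thesis:

```python
# Purely hypothetical sketch of a "temporally coarse" signal description:
# a sampled digital signal is summarized by a small behavioral label rather
# than reasoned about sample by sample. The vocabulary below is invented for
# illustration and is not the vocabulary defined in the thesis.
def coarse_description(samples, period_hint=None):
    """Map a sequence of 0/1 samples to a coarse behavioral label."""
    if all(s == 0 for s in samples):
        return "constant-low"
    if all(s == 1 for s in samples):
        return "constant-high"
    transitions = sum(a != b for a, b in zip(samples, samples[1:]))
    if period_hint and transitions >= len(samples) / period_hint:
        return "clock-like"
    return "changing"

print(coarse_description([0, 1, 0, 1, 0, 1, 0, 1], period_hint=2))  # clock-like
print(coarse_description([0, 0, 0, 0]))                             # constant-low
```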
Abstract:
This monographic research fundamentally seeks to evaluate whether the creation of a financial institution such as Banco del Sur represents an alternative source of financial support for strengthening the integration process of the Union of South American Nations (UNASUR), through the financing of transnational infrastructure projects that enable the physical integration of South America. From this, the specific aims are: to identify the objectives and functions that Banco del Sur will have in its operation, in accordance with the objectives defined by UNASUR; to identify the different types of asymmetries in the South American region and analyze how these constitute a problem for UNASUR's integration process; and finally to evaluate how Banco del Sur can contribute to strengthening UNASUR's integration process through the financing of physical infrastructure projects for building the region's physical interconnection. This is an applied, prescriptive investigation which identifies the problem based on the main difficulties that may hinder the consolidation of UNASUR's integration process, taking the creation of Banco del Sur as a possible alternative applicable to the solution of the diagnosed problem.
Abstract:
This Team Dargles poster is about the team's resource on the Copyright, Designs and Patents Act 1988.
Abstract:
This document lists the references used by Team Dargles for both the poster and the resource for Info2009.
Abstract:
Our country is heading for a crisis. IT and computing are growing larger every day. With this comes an increasing skills shortage, so we need you to be the future of IT. Ahead of you is an educational video, designed to teach you about the future of computing and why you should be a part of it.