983 results for Computer tools


Relevance: 20.00%

Abstract:

Social bookmark tools are rapidly emerging on the Web. In such systems, users set up lightweight conceptual structures called folksonomies. Currently, these systems provide relatively little structure. In this paper we discuss how association rule mining can be adopted to analyze and structure folksonomies, and how the results can be used for ontology learning and for supporting emergent semantics. We demonstrate our approach on a large-scale dataset stemming from an online system.
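As an illustration of the kind of analysis described above, the following sketch mines simple tag-to-tag association rules from a toy set of tag assignments; the data, thresholds and rule form are assumptions for illustration, not the paper's actual setup.

```python
# Minimal sketch: each "transaction" is the set of tags one user attached to
# one resource; rules t1 -> t2 with high support/confidence suggest candidate
# relatedness or subsumption links for ontology learning.
from collections import Counter
from itertools import combinations

# Hypothetical tag assignments: (user, resource, tags)
posts = [
    ("u1", "r1", {"python", "programming", "tutorial"}),
    ("u2", "r1", {"python", "programming"}),
    ("u1", "r2", {"semanticweb", "rdf", "programming"}),
    ("u3", "r3", {"python", "tutorial"}),
]

min_support, min_confidence = 0.4, 0.6

tag_count, pair_count = Counter(), Counter()
for _, _, tags in posts:
    for t in tags:
        tag_count[t] += 1
    for a, b in combinations(sorted(tags), 2):
        pair_count[(a, b)] += 1

n = len(posts)
for (a, b), c in pair_count.items():
    if c / n < min_support:
        continue
    # Emit a rule in each direction if it is confident enough.
    for ante, cons in ((a, b), (b, a)):
        confidence = c / tag_count[ante]
        if confidence >= min_confidence:
            print(f"{ante} -> {cons}  support={c / n:.2f}  confidence={confidence:.2f}")
```

Rules that are confident in only one direction (a specific tag implying a more general one, but not vice versa) are the typical candidates for subsumption links.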

Relevance: 20.00%

Abstract:

Social bookmark tools are rapidly emerging on the Web. In such systems, users set up lightweight conceptual structures called folksonomies. The reason for their immediate success is that no specific skills are needed to participate. In this paper we specify a formal model for folksonomies and briefly describe our own system, BibSonomy, which allows for sharing both bookmarks and publication references in a kind of personal library.
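For reference, one widely used formalization of a folksonomy, reconstructed here from the general folksonomy literature (the paper's model may add further components, e.g. a user-specific subtag/supertag relation):

```latex
\[
  \mathbb{F} := (U, T, R, Y), \qquad Y \subseteq U \times T \times R,
\]
% where $U$, $T$, $R$ are finite sets of users, tags, and resources, and the
% ternary relation $Y$ collects the tag assignments: $(u,t,r) \in Y$ means
% that user $u$ has assigned tag $t$ to resource $r$.
```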

Relevance: 20.00%

Abstract:

As the number of resources on the web exceeds by far the number of documents one can track, it becomes increasingly difficult to remain up to date on one's own areas of interest. The problem becomes more severe with the increasing fraction of multimedia data, from which it is difficult to extract a conceptual description of the contents. One way to overcome this problem is offered by social bookmark tools, which are rapidly emerging on the web. In such systems, users set up lightweight conceptual structures called folksonomies and thus overcome the knowledge acquisition bottleneck. As more and more people participate in the effort, the use of a common vocabulary becomes increasingly stable. We present an approach for discovering topic-specific trends within folksonomies. It is based on a differential adaptation of the PageRank algorithm to the triadic hypergraph structure of a folksonomy. The approach allows for any kind of data, as it does not rely on the internal structure of the documents; in particular, this makes it possible to consider different data types in the same analysis step. We run experiments on a large-scale real-world snapshot of a social bookmarking system.
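The following is a minimal sketch of the general idea of a topic-biased, differential PageRank on the folded folksonomy graph; the graph, edge weights and damping factor are made-up illustrations, not the paper's implementation.

```python
# Fold the triadic tag assignments into an undirected graph over users, tags
# and resources, compute a rank vector once with a uniform preference and once
# with a topic-biased preference, and take the difference as the topic-specific
# ranking.
import numpy as np

nodes = ["user:u1", "tag:python", "tag:web", "res:r1", "res:r2"]
idx = {n: i for i, n in enumerate(nodes)}

# Symmetric co-occurrence weights derived from tag assignments (hypothetical).
A = np.zeros((len(nodes), len(nodes)))
for a, b, w in [("user:u1", "tag:python", 2), ("user:u1", "res:r1", 2),
                ("tag:python", "res:r1", 2), ("user:u1", "tag:web", 1),
                ("tag:web", "res:r2", 1), ("user:u1", "res:r2", 1)]:
    A[idx[a], idx[b]] = A[idx[b], idx[a]] = w
A = A / A.sum(axis=0)                     # column-stochastic spreading matrix

def rank(pref, d=0.7, iters=100):
    w = np.full(len(nodes), 1 / len(nodes))
    for _ in range(iters):
        w = d * A @ w + (1 - d) * pref    # damped weight spreading
    return w / w.sum()

uniform = np.full(len(nodes), 1 / len(nodes))
topic = np.zeros(len(nodes)); topic[idx["tag:python"]] = 1.0

differential = rank(topic) - rank(uniform)
for n, s in sorted(zip(nodes, differential), key=lambda x: -x[1]):
    print(f"{n:12s} {s:+.3f}")
```

Tracking such differential rankings across time-ordered snapshots of the system is what turns them into topic-specific trends.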

Relevance: 20.00%

Abstract:

Recently, research projects such as PADLR and SWAP have developed tools like Edutella or Bibster, which are targeted at establishing peer-to-peer knowledge management (P2PKM) systems. In such a system, it is necessary to provide brief semantic descriptions of peers, so that routing algorithms or matchmaking processes can make decisions about which communities peers should belong to, or to which peers a given query should be forwarded. This paper proposes the use of graph clustering techniques on knowledge bases for that purpose. Using this clustering, we can show that our strategy requires up to 58% fewer queries than the baselines to yield full recall in a bibliographic P2PKM scenario.
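A hedged sketch of the general approach, with hypothetical data and library choices (not the paper's algorithm): a peer's knowledge base is viewed as a term co-occurrence graph, clustered into communities, and the clusters serve as the peer's compact semantic description against which queries can be matched for routing.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Term co-occurrence edges extracted from one peer's bibliographic entries.
G = nx.Graph()
G.add_weighted_edges_from([
    ("ontology", "semantic web", 5), ("ontology", "rdf", 4),
    ("semantic web", "rdf", 3), ("peer-to-peer", "routing", 4),
    ("peer-to-peer", "gnutella", 2), ("routing", "gnutella", 2),
])

# Each detected community becomes one topic in the peer's advertised profile.
peer_description = [sorted(c) for c in greedy_modularity_communities(G, weight="weight")]
print(peer_description)

def route(query_terms, peer_descriptions):
    """Forward a query to the peer whose clusters share the most terms with it."""
    overlap = lambda desc: len(set(query_terms) & {t for cluster in desc for t in cluster})
    return max(peer_descriptions, key=overlap)
```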

Relevance: 20.00%

Abstract:

A key argument for modeling knowledge in ontologies is the easy reuse and re-engineering of the knowledge. However, besides consistency checking, current ontology engineering tools provide only basic functionalities for analyzing ontologies. Since ontologies can be considered as (labeled, directed) graphs, graph analysis techniques are a suitable answer to this need. Graph analysis has been performed by sociologists for over 60 years and has resulted in the lively research area of Social Network Analysis (SNA). While social network structures in general currently receive much attention in the Semantic Web community, there are only very few SNA applications so far, and virtually none for analyzing the structure of ontologies. We illustrate in this paper the benefits of applying SNA to ontologies and the Semantic Web, and discuss which research topics arise on the boundary between the two areas. In particular, we discuss how different notions of centrality describe the core content and structure of an ontology. From the rather simple notion of degree centrality, via betweenness centrality, to the more complex eigenvector centrality based on Hermitian matrices, we illustrate the insights these measures provide on two ontologies which differ in purpose, scope, and size.
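As a simple illustration of centrality analysis on an ontology graph (standard networkx measures on a made-up subclass/relation graph, not the Hermitian-matrix variant used in the paper):

```python
import networkx as nx

# Read the ontology as a labeled directed graph; edges here are hypothetical
# subClassOf and property (domain -> range) links.
G = nx.DiGraph()
G.add_edges_from([
    ("Student", "Person"), ("Professor", "Person"),   # subClassOf
    ("Person", "Agent"), ("Organization", "Agent"),
    ("Publication", "Thing"), ("Agent", "Thing"),
    ("Professor", "Publication"),                      # authorOf
])

degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)
eigenvector = nx.eigenvector_centrality(G.to_undirected(), max_iter=1000)

for concept in G.nodes:
    print(f"{concept:14s} deg={degree[concept]:.2f} "
          f"btw={betweenness[concept]:.2f} eig={eigenvector[concept]:.2f}")
```

Concepts that score high across several measures are the candidates for the "core" of the ontology that the paper's analysis aims to expose.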

Relevance: 20.00%

Abstract:

We report on an elementary course in ordinary differential equations (ODEs) for students in the engineering sciences. The course is also intended to become a self-study package for ODEs and is based on several interactive computer lessons using REDUCE and MATHEMATICA. The aim of the course is not to do Computer Algebra (CA) by example or to use it merely for working classroom examples; the aim is to teach and to learn mathematics by using CA systems.
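The course itself was built on REDUCE and MATHEMATICA lessons; as an analogous illustration of what such an interactive lesson step looks like, here is the same idea in SymPy (an assumption for illustration, not the course's software): state an ODE symbolically, let the CA system solve it, and verify the solution.

```python
import sympy as sp
from sympy.solvers.ode import checkodesol

x = sp.symbols("x")
y = sp.Function("y")

ode = sp.Eq(y(x).diff(x, 2) + 4 * y(x), 0)   # y'' + 4y = 0
solution = sp.dsolve(ode, y(x))               # y(x) = C1*sin(2x) + C2*cos(2x)
print(solution)

# The verification step students would perform interactively:
print(checkodesol(ode, solution))             # (True, 0)
```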

Relevance: 20.00%

Abstract:

The core of this thesis is the investigation of methods, techniques, and tools for debugging in model-based software development processes. To this end, a novel model-based software development process that I co-developed, the so-called Fujaba Process, is presented first. This process is driven by use case scenarios, which are formalized by means of special collaboration diagrams. The further artifacts of the process, up to the finished application, are also modeled with UML diagram types; no source code programming is necessary. Tool support for the presented process is provided by the Fujaba CASE tool. Large parts of the tool support for the Fujaba Process, including the support for testing and debugging, were developed as part of this thesis. The first part of the thesis explains the Fujaba Process in detail and reports our experiences with applying the process in industrial projects and in teaching. The second part describes the test generation developed in this thesis, which has become an important part of the Fujaba Process: executable test cases are generated from the formalized use case scenarios. The underlying concept, the concrete technical realization, and the practical experiences with the developed test generation are presented. The last part deals with debugging in the Fujaba Process. Various concepts and techniques developed in this thesis are presented that simplify fault localization during application development. Care was taken that debugging, like all other steps in the Fujaba Process, happens exclusively at the model level. Among others, techniques for the stepwise execution of models, an object browser, and a debugger that allows the backward execution of programs (back-in-time debugging) are presented. All of the described concepts were implemented and evaluated in this thesis as plugins for the Eclipse version of Fujaba, Fujaba4Eclipse. When implementing the plugins, attention was paid to a tight integration with Fujaba on the one hand and with Eclipse on the other. In summary, the thesis presents a development process, the possibility of identifying faults in it with automated tests, and the means to then localize and finally fix these faults in the program using special debugging techniques; the entire process takes place at the model level. For the testing and debugging techniques, plugins for Fujaba4Eclipse were developed in this thesis that support the developer as well as possible in the corresponding activities.

Relevance: 20.00%

Abstract:

Fujaba is an open source UML CASE tool project started at the software engineering group of Paderborn University in 1997. In 2002 Fujaba was redesigned and became the Fujaba Tool Suite with a plug-in architecture that allows developers to add functionality easily while retaining full control over their contributions. Multiple application domains: Fujaba has followed the model-driven development philosophy right from its beginning in 1997. In the early days, Fujaba had a special focus on code generation from UML diagrams, resulting in a visual programming language with a special emphasis on object-structure-manipulating rules. Today, at least six rather independent tool versions are under development in Paderborn, Kassel, and Darmstadt, supporting (1) reengineering, (2) embedded real-time systems, (3) education, (4) specification of distributed control systems, (5) integration with the ECLIPSE platform, and (6) MOF-based integration of system (re-)engineering tools. International community: to our knowledge, quite a number of research groups have also chosen Fujaba as a platform for UML- and MDA-related research activities. In addition, quite a number of Fujaba users send requests for more functionality and extensions. Therefore, the 8th International Fujaba Days aimed at bringing together Fujaba developers and Fujaba users from all over the world to present their ideas and projects and to discuss them with each other and with the Fujaba core development team.

Relevance: 20.00%

Abstract:

The ongoing growth of the World Wide Web, catalyzed by the increasing possibility of ubiquitous access via a variety of devices, continues to strengthen its role as our prevalent information and communication medium. However, although tools like search engines facilitate retrieval, the task of finally making sense of Web content is still often left to human interpretation. The vision of supporting both humans and machines in such knowledge-based activities led to the development of different systems which allow Web resources to be structured by metadata annotations. Interestingly, two major approaches which gained a considerable amount of attention address the problem from nearly opposite directions: on the one hand, the idea of the Semantic Web suggests formalizing the knowledge within a particular domain by means of the "top-down" approach of defining ontologies. On the other hand, Social Annotation Systems, as part of the so-called Web 2.0 movement, implement a "bottom-up" style of categorization using arbitrary keywords. Experience as well as research into the characteristics of both kinds of systems has shown that their strengths and weaknesses are largely inverse: while Social Annotation suffers from problems such as ambiguity or lack of precision, ontologies were especially designed to eliminate those. The latter, in turn, suffer from a knowledge acquisition bottleneck, which is successfully overcome by the large user populations of Social Annotation Systems. Instead of being regarded as competing paradigms, the obvious potential synergies from a combination of both motivated approaches to "bridge the gap" between them. These were fostered by the evidence of emergent semantics, i.e., the self-organized evolution of implicit conceptual structures, within Social Annotation data. While several techniques to exploit the emergent patterns have been proposed, a systematic analysis, especially regarding paradigms from the field of ontology learning, is still largely missing. This also includes a deeper understanding of the circumstances which affect the evolution processes. This work aims to address this gap by providing an in-depth study of methods and influencing factors for capturing emergent semantics from Social Annotation Systems. We focus on the acquisition of lexical semantics from the underlying networks of keywords, users, and resources. Structured along different ontology learning tasks, we use a methodology of semantic grounding to characterize and evaluate the semantic relations captured by different methods. In all cases, our studies are based on datasets from several Social Annotation Systems. Specifically, we first analyze semantic relatedness among keywords and identify measures which detect different notions of relatedness. These constitute the input of concept learning algorithms, which then focus on the discovery of synonymous and ambiguous keywords. Here we assess the usefulness of various clustering techniques. As a prerequisite to inducing hierarchical relationships, our next step is to study measures which quantify the level of generality of a particular keyword. We find that comparatively simple measures can approximate the generality information encoded in reference taxonomies. These insights are used to inform the final task, namely the creation of concept hierarchies. For this purpose, generality-based algorithms exhibit advantages compared to clustering approaches.
To complement the identification of suitable methods for capturing semantic structures, we next analyze several factors which influence their emergence. Empirical evidence is provided that the amount of available data plays a crucial role in determining keyword meanings. From a different perspective, we examine pragmatic aspects by considering different annotation patterns among users. Based on a broad distinction between "categorizers" and "describers", we find that the latter produce more accurate results. This suggests a causal link between pragmatic and semantic aspects of keyword annotation. As a special kind of usage pattern, we then look at system abuse and spam. While observing a mixed picture, we suggest that a case-by-case decision should be taken instead of disregarding spammers as a matter of principle. Finally, we discuss a set of applications which operationalize the results of our studies for enhancing both Social Annotation and semantic systems. These comprise, on the one hand, tools which foster the emergence of semantics and, on the other hand, applications which exploit the socially induced relations to improve, e.g., searching, browsing, or user profiling facilities. In summary, the contributions of this work highlight viable methods and crucial aspects for designing enhanced knowledge-based services for a Social Semantic Web.
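As an illustration of one family of relatedness measures studied in this line of work (data and details differ from the thesis), the sketch below represents each keyword by its co-occurrence vector over resources and compares keywords by cosine similarity; highly similar keywords are candidate synonyms for the concept-learning step.

```python
from collections import defaultdict
from math import sqrt

# Hypothetical posts: resource -> keywords assigned to it (aggregated over users).
posts = {
    "r1": ["web", "www", "internet"],
    "r2": ["web", "www", "design"],
    "r3": ["internet", "network"],
    "r4": ["design", "css", "web"],
}

# Keyword -> sparse co-occurrence vector over resources.
vectors = defaultdict(lambda: defaultdict(int))
for resource, tags in posts.items():
    for t in tags:
        vectors[t][resource] += 1

def cosine(a, b):
    dot = sum(vectors[a][r] * vectors[b][r] for r in vectors[a])
    norm = lambda t: sqrt(sum(v * v for v in vectors[t].values()))
    return dot / (norm(a) * norm(b))

print(cosine("web", "www"))        # high: likely synonyms
print(cosine("web", "network"))    # low: merely related, if at all
```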

Relevance: 20.00%

Abstract:

In this work, we have mainly achieved the following: 1. we provide a review of the main methods used for the computation of the connection and linearization coefficients between orthogonal polynomials of a continuous variable; moreover, using a new approach, the duplication problem of these polynomial families is solved; 2. we review the main methods used for the computation of the connection and linearization coefficients of orthogonal polynomials of a discrete variable, and we solve the duplication and linearization problems for all orthogonal polynomials of a discrete variable; 3. we propose a method to generate the connection, linearization and duplication coefficients for q-orthogonal polynomials; 4. we propose a unified method to obtain these coefficients in a generic way for orthogonal polynomials on quadratic and q-quadratic lattices. Our algorithmic approach to computing linearization, connection and duplication coefficients is based on the one used by Koepf and Schmersau and on the NaViMa algorithm. Our main technique is to use explicit formulas for structural identities of classical orthogonal polynomial systems. We obtain our results by an application of computer algebra. The major algorithmic tools for our development are Zeilberger's algorithm, the q-Zeilberger algorithm, the Petkovšek-van-Hoeij algorithm, the q-Petkovšek-van-Hoeij algorithm, and Algorithm 2.2, p. 20, of Koepf's book "Hypergeometric Summation" and its q-analogue.
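For readers unfamiliar with the terminology, the three coefficient problems are, in their standard formulation (generic notation, not necessarily that of the thesis):

```latex
% For polynomial families $\{p_n\}_{n\ge 0}$ and $\{q_n\}_{n\ge 0}$ with
% $\deg p_n = \deg q_n = n$:
\begin{align*}
  p_n(x)          &= \sum_{m=0}^{n}   C_m(n)\, q_m(x)   && \text{(connection)}\\
  p_n(x)\, p_m(x) &= \sum_{k=0}^{n+m} L_k(n,m)\, p_k(x) && \text{(linearization)}\\
  p_n(a x)        &= \sum_{m=0}^{n}   D_m(n,a)\, p_m(x) && \text{(duplication)}
\end{align*}
% The algorithmic task is to produce $C_m(n)$, $L_k(n,m)$ and $D_m(n,a)$ in
% closed or recurrent form, e.g.\ via Zeilberger-type algorithms.
```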

Relevance: 20.00%

Abstract:

The objective of this study was to develop an internet-based seminar framework applicable to landscape architecture education. This process was guided by several aims. The basic expectation was to keep the main characteristics of landscape architecture education in the online format as well. On top of that, four further objectives were anticipated: (1) training of competences for virtual teamwork, (2) fostering intercultural competence, (3) creation of equal opportunities for education through internet-based open access and (4) synergy effects and learning processes across institutional boundaries. This work started with the hypothesis that these four expected advantages would compensate for the additional organisational effort caused by the online delivery of the seminars and thus lead to a sustainable integration of this new learning mode into landscape architecture curricula. This rationale was followed by a presentation of four areas of knowledge to which the seminar development was directly related: (1) landscape architecture as a subject and its pedagogy, (2) general learning theories, (3) developments in the ICT sector and (4) wider societal driving forces such as global citizenship and the increase of open educational resources. The research design took the shape of a pedagogical action research cycle. This approach was constructive: the author herself teaches international landscape architecture students, so that the model could be applied directly in practice. Seven online seminars were implemented in the period from 2008 to 2013, and this experience represents the core of this study. The seminars were conducted with varying themes, while their pedagogy, organisation and technological tools remained largely identical. The research design is further based on three levels of observation: (1) the seminar design on the basis of theory and methods from the learning sciences, in particular educational constructivism, (2) the seminar evaluation and (3) the evaluation of the seminars' long-term impact. The seminar model itself consists of four elements: (1) the taxonomy of learning objectives, (2) ICT tools and their application and pedagogy, (3) process models and (4) the case study framework. The presentation of the seminar framework was followed by the evaluation findings. The major findings of this study can be summed up as follows: implementing online seminars across educational and national boundaries was possible both in terms of organisation and technology. In particular, a high level of cultural diversity among the seminar participants was definitely achieved. However, there were also obvious obstacles. These were primarily competing study commitments and incompatible schedules among the students attending from different academic programmes, partly even in different time zones. Both factors had a negative impact on individual and working-group performance. With respect to the technical framework, it can be concluded that the majority of the participants were able to use the tools either directly without any problem or after overcoming some smaller problems. The seminar wiki was also used intensively for completing the seminar assignments. However, too little truly collaborative text production was observed, which could be improved by changing the requirements for the collaborative task. Two different process models were applied for guiding the collaboration of the small groups, and both were in general successful.
However, it needs to be said that even though the students were able to follow the collaborative task and to co-construct and compare case studies, most of them were not able to synthesize the knowledge they had compiled. This means that the area of consideration often remained at the level of the individual case, and further reflection, generalisation and critique were largely missing. This shows that the seminar model needs better ways of triggering knowledge building and critical reflection. It was also suggested to use a more differentiated group-building strategy in future seminars. A comparison of pre- and post-seminar concept maps showed that an increase of factual and conceptual knowledge at the individual level was recognizable to a large extent. The evaluation of the case studies (the major seminar output) also revealed that the students had developed in both the factual and the conceptual knowledge domains. Their self-assessment with respect to individual learning development likewise showed that the highest consensus was achieved in the field of subject-specific knowledge. The participants were much more doubtful with regard to their progress in generic competences such as analysis, communication and organisation. However, 50% of the participants confirmed that they perceived individual development in all competence areas the survey had asked about. Have the four additional objectives been met? Concerning the competences for working in a virtual team, it can be concluded that the vast majority were able to use the internet-based tools and to work with them in a goal-oriented way. However, there were obvious differences regarding the intensity and activity of participation, due to both external and personal factors. A very positive aspect is the achievement of a high level of cultural diversity, which supported the participants' intercultural competence. Learning from group members was obviously a success factor for the working groups. Regarding the possibilities for better accessibility of educational opportunities, it became clear that a significant number of participants were not able to go abroad during their studies for financial or personal reasons. They confirmed that the online seminar was to some extent a compensation for not having studied abroad. Inter-institutional learning and synergy were achieved insofar as many teachers from different countries contributed individual lectures. However, those teachers hardly ever followed more than one session, so the learning effect remained largely within the seminar learning group. Looking back at the research design, it can be said that the pedagogical action research cycle was an appropriate and valuable approach allowing for strong interaction between theory and practice. However, some more external evaluation by peers, in particular regarding the participants' products, would have been valuable.

Relevance: 20.00%

Abstract:

For most medium-sized and large companies, enterprise resource planning (ERP) systems form an essential part of their IT landscape for managing business data and business processes. In ERP systems, business data are represented as business objects. A business object can contain several attributes and, via associations to other business objects, span a business object graph. Existing interfaces allow business objects to be queried, in particular with respect to their attributes. Queries that refer to their position within the business object graph, however, are often very difficult to realize via these interfaces. Semantic technologies, such as RDF and the graph-based query language SPARQL, can be used to simplify such queries. SPARQL allows queries against business object graphs to be formulated much more compactly and intuitively than is possible with the existing interfaces. The motivation for this work is to simplify certain queries against the SAP ERP system considered in this thesis by using SPARQL. ERP systems typically use relational databases to store business objects. Providing SPARQL endpoints on top of relational databases is a field that has been studied for some time, and various approaches and tools exist that allow querying via SPARQL. Due to the complexity, size, and frequency of change of the ERP database schema, however, approaches that build directly on the database schema cannot be used. A more practicable approach is to realize the SPARQL endpoint on top of existing interfaces. These are less complex than the database schema, since they allow business objects to be queried directly, which considerably simplifies the definition of the mapping. The ERP system offers several interfaces that differ in structure, purpose, and the technology used. Among others, an OData-based interface is provided. OData is a REST-based protocol for querying and manipulating data. Of the interfaces provided, the OData interface offers several advantages over the others for realizing a SPARQL endpoint: it defines a query language and a link addressing mechanism with which the service calls needed to answer a query and the amount of data to be transferred can be reduced considerably. The goal of this thesis is to develop a method for realizing a SPARQL endpoint on top of OData services. First, an architecture is presented that can serve as the basis for implementing such a system. Starting from this architecture, the areas not yet covered by the current state of research are identified. To the best of our knowledge, this is the first work to investigate querying OData interfaces with SPARQL. As part of this work, a novel concept for the semantic description of OData services is presented, which allows mappings from the data provided by the services onto RDF graphs to be defined.
Building on the concepts for semantic description, an evaluation semantics is developed that defines the resolution of SPARQL algebra expressions against semantically annotated OData services. In this process, the data of all OData services required to fully process a query are determined. To retrieve the relevant data, concepts for generating the corresponding OData URIs were developed. The presented method was implemented as a prototype and evaluated on the basis of two use cases for the service sets relevant in the considered scenario. With the presented concepts, it is not only possible to realize a SPARQL endpoint for an ERP system; rather, any data source that offers an OData interface can be queried with SPARQL. This makes large amounts of data that were previously inaccessible to processing with semantic technologies available for integration with the Semantic Web. In particular, data sources whose integration with one another was previously impossible or difficult can now be integrated via federated query systems.
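As a purely illustrative sketch of the rewriting idea (hypothetical service, mapping and URIs, not the thesis's actual evaluation semantics), a single SPARQL basic graph pattern over a business object type can be translated into one OData request using the standard $filter and $select query options:

```python
from urllib.parse import quote

# Hypothetical semantic description: RDF terms -> OData entity set / properties.
mapping = {
    "class": {"erp:SalesOrder": "SalesOrderSet"},
    "property": {"erp:customerName": "CustomerName", "erp:grossAmount": "GrossAmount"},
}

def bgp_to_odata(rdf_class, filters, selects,
                 base="https://host/sap/opu/odata/sap/DEMO_SRV"):
    """Rewrite { ?o a <class> ; <p> "v" } with projected properties into an OData URI."""
    entity_set = mapping["class"][rdf_class]
    filter_expr = " and ".join(
        f"{mapping['property'][p]} eq '{v}'" for p, v in filters.items()
    )
    select_expr = ",".join(mapping["property"][p] for p in selects)
    return f"{base}/{entity_set}?$filter={quote(filter_expr)}&$select={quote(select_expr)}"

# SELECT ?amount WHERE { ?o a erp:SalesOrder ; erp:customerName "ACME" ; erp:grossAmount ?amount }
print(bgp_to_odata("erp:SalesOrder", {"erp:customerName": "ACME"}, ["erp:grossAmount"]))
```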

Relevance: 20.00%

Abstract:

Almost everyone sketches. People use sketches day in and day out in many different and heterogeneous fields, for example to share their thoughts and clarify ambiguous interpretations. The media used to sketch varies from analog tools like flipcharts to digital tools like smartboards. Whereas analog tools usually suffer from insufficient editing capabilities like cut/copy/paste, digital tools support these scenarios well. Digital tools can be grouped into informal and formal tools. Informal tools can be understood as simple drawing environments, whereas formal tools offer sophisticated support to create, optimize and validate diagrams of a certain application domain. Most digital formal tools force users to stick to a concrete syntax and editing workflow, limiting the user's creativity. For that reason, a lot of people first sketch their ideas using the flexibility of analog or digital informal tools. Subsequently, the sketch is "portrayed" in an appropriate digital formal tool. This work presents Scribble, a highly configurable and extensible sketching framework which allows sketching features to be dynamically injected into existing graphical diagram editors based on Eclipse GEF. This makes it possible to combine the flexibility of informal tools with the power of formal tools without any effort: no additional code is required to augment a GEF editor with sophisticated sketching features. Scribble recognizes drawn elements as well as handwritten text and automatically generates the corresponding domain elements. A local training data library is created dynamically by incrementally learning shapes drawn by the user. Training data can be shared with others using the WebScribble web application, which has been created as part of this work.