165 results for File processing (Computer science)


Relevance: 100.00%

Abstract:

Data mining is the process of summarizing and extracting useful information from large amounts of raw data. It is one of the key technologies in many areas of economy, science, administration and the internet. In this report we introduce an approach that uses evolutionary algorithms to breed fuzzy classifier systems. The approach was applied, as part of a structured procedure, by the students Achler, Göb and Voigtmann as a contribution to the 2006 Data-Mining-Cup contest, yielding encouragingly positive results.
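
To illustrate the general idea of breeding fuzzy classifiers with an evolutionary algorithm, here is a minimal Python sketch; the rule encoding, fitness function, toy data and parameter values are assumptions made for illustration and do not reproduce the system used in the contest.

```python
import random

# Toy data: (feature value, class label) pairs; labels are 0 or 1 (assumed).
DATA = [(0.10, 0), (0.20, 0), (0.35, 0), (0.60, 1), (0.75, 1), (0.90, 1)]

def membership(x, centre, width):
    """Degree of membership of x in a triangular fuzzy set."""
    return max(0.0, 1.0 - abs(x - centre) / width)

def classify(rule, x):
    """A single fuzzy rule: IF x is 'high' THEN class 1, ELSE class 0."""
    centre, width, threshold = rule
    return 1 if membership(x, centre, width) >= threshold else 0

def fitness(rule):
    """Fraction of correctly classified samples."""
    return sum(classify(rule, x) == y for x, y in DATA) / len(DATA)

def mutate(rule, sigma=0.05):
    """Gaussian mutation, keeping all genes positive."""
    return tuple(max(0.01, g + random.gauss(0.0, sigma)) for g in rule)

def evolve(pop_size=30, generations=50):
    """Breed fuzzy rules by tournament selection and mutation."""
    population = [(random.random(), random.random() + 0.1, random.random())
                  for _ in range(pop_size)]
    for _ in range(generations):
        parents = [max(random.sample(population, 3), key=fitness)
                   for _ in range(pop_size)]
        population = [mutate(p) for p in parents]
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("best rule:", best, "accuracy:", fitness(best))
```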

Relevance: 100.00%

Abstract:

Object-oriented modeling (OOM) in the classroom is still a widely discussed topic: accepted and desired in didactics, but often perceived in practice as unnecessary overhead or simply as too complex. In this thesis I show how a teaching concept can be structured that exploits the learning-theoretical advantages of OOM while largely avoiding the reported difficulties. Starting from the OOM concepts documented in the literature and the criticisms raised against them, I have developed a teaching concept that derives teaching methods from findings in learning psychology, general didactics, subject didactics and also software engineering, in order to counter the reported difficulties such as "learning in advance" ("Lernen auf Vorrat"). My concept follows four guiding ideas: models first, strictly objects first, traceability and executability. The strict implementation of these ideas led to a teaching concept that takes the goal of modeling into account from the very beginning and often starts from the dynamic view of the problem. Since it largely remains at the level of graphical modeling, many problems of a programming course are avoided, and yet executable programs result from the modeling.

Relevance: 100.00%

Abstract:

Information display technology is a rapidly growing research and development field. Using state-of-the-art technology, optical resolution can be increased dramatically with organic light-emitting diodes (OLEDs), since the light-emitting layer is very thin (under 100 nm). The main question is what pixel size is technologically achievable. The next generation of displays will address three-dimensional image display. In 2D, one considers vertical and horizontal resolution; in 3D or holographic images there is another dimension: depth. The major requirement is high resolution in the horizontal dimension in order to sustain the third dimension using special lenticular glass or barrier masks that present separate views to each eye. A high-resolution 3D display can offer hundreds of different views of objects or landscapes. OLEDs have the potential to be a key technology for information displays in the future. The display technology presented in this work promises to bring bright-colour 3D flat panel displays into use in a unique way. Unlike the conventional TFT matrix, OLED displays have constant brightness and colour, independent of the viewing angle, i.e. of the observer's position in front of the screen. A sandwich of organic thin films (just 0.1 micron thick) between two conductors forms an OLED device. These special materials are called electroluminescent organic semiconductors (or organic photoconductors, OPCs). When an electrical current is applied, the resulting organic light-emitting diode emits a bright light (electrophosphorescence). Usually an ITO layer is used as the transparent electrode of an OLED. Displays of this type were the first to reach volume manufacture, and only a few products are available on the market at present. The key challenges that OLED technology faces in these application areas are producing high-quality white light, achieving low manufacturing costs, and increasing efficiency and lifetime at high brightness. Looking towards the future, combining OLEDs with specially constructed surface lenses and suitable image management software will make it possible to achieve 3D images.
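
As a rough, back-of-the-envelope illustration of why multi-view 3D puts such pressure on horizontal resolution, the following Python sketch divides a panel's horizontal pixel count among the views presented by a lenticular lens or barrier mask; the panel width and view counts are invented example figures, not numbers from this work.

```python
def per_view_pixels(horizontal_pixels: int, views: int) -> int:
    """Horizontal pixels available to each view of a multi-view 3D panel,
    assuming the views simply share the panel's horizontal resolution."""
    return horizontal_pixels // views

if __name__ == "__main__":
    panel_width = 7680                     # assumed example panel, 8K wide
    for views in (2, 8, 64):
        print(f"{views:3d} views -> {per_view_pixels(panel_width, views)} px per view")
```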

Relevance: 100.00%

Abstract:

We study several extensions of the notion of alternation from context-free grammars to context-sensitive and arbitrary phrase-structure grammars. Thereby new grammatical characterizations are obtained for the class of languages that are accepted by alternating pushdown automata.

Relevance: 100.00%

Abstract:

This report gives a detailed discussion of the system, algorithms, and techniques that we have applied in order to solve the Web Service Challenges (WSC) of the years 2006 and 2007. These international contests focus on semantic web service composition. In each challenge of the contests, a repository of web services is given. The input and output parameters of the services in the repository are annotated with semantic concepts. A query to a semantic composition engine contains a set of available input concepts and a set of wanted output concepts. In order to employ an offered service for a requested role, the concepts of the input parameters of the offered operations must be more general than requested (contravariance). In contrast, the concepts of the output parameters of the offered service must be more specific than requested (covariance). The engine should respond to a query by providing a valid composition as fast as possible. We discuss three different methods for web service composition: an uninformed search in the form of an iterative-deepening depth-first search (IDDFS), a greedy informed search based on heuristic functions, and a multi-objective genetic algorithm.
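
As a minimal illustration of the matching rule described above (a sketch under assumptions, not the contest engine itself), the following Python example models the concept taxonomy as a simple parent map and checks whether an offered service fits a request; the taxonomy and parameter concepts are invented.

```python
# Minimal sketch of covariant/contravariant service matching.
PARENT = {                      # child -> parent in a toy concept taxonomy
    "CreditCard": "Payment",
    "Payment": "Thing",
    "Invoice": "Document",
    "Document": "Thing",
}

def subsumes(general: str, specific: str) -> bool:
    """True if `general` equals `specific` or is one of its ancestors."""
    while specific is not None:
        if specific == general:
            return True
        specific = PARENT.get(specific)
    return False

def matches(offered_inputs, offered_outputs, wanted_inputs, wanted_outputs):
    """Offered inputs must be more general than what the requester supplies
    (contravariance); offered outputs must be more specific than what the
    requester wants (covariance)."""
    inputs_ok = all(any(subsumes(oi, wi) for wi in wanted_inputs)
                    for oi in offered_inputs)
    outputs_ok = all(any(subsumes(wo, oo) for oo in offered_outputs)
                     for wo in wanted_outputs)
    return inputs_ok and outputs_ok

if __name__ == "__main__":
    # Request: we can provide a CreditCard and want a Document back.
    print(matches(offered_inputs=["Payment"], offered_outputs=["Invoice"],
                  wanted_inputs=["CreditCard"], wanted_outputs=["Document"]))
```

A composition engine would apply such a check repeatedly while searching, for example inside the IDDFS or the heuristic search mentioned above.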

Relevance: 100.00%

Abstract:

Restarting automata can be seen as analytical variants of classical automata as well as of regulated rewriting systems. We study a measure for the degree of nondeterminism of (context-free) languages in terms of deterministic restarting automata that are (strongly) lexicalized. This measure is based on the number of auxiliary symbols (categories) used for recognizing a language as the projection of its characteristic language onto its input alphabet. This type of recognition is typical for analysis by reduction, a method used in linguistics for the creation and verification of formal descriptions of natural languages. Our main results establish a hierarchy of classes of context-free languages and two hierarchies of classes of non-context-free languages that are based on the expansion factor of a language.

Relevance: 100.00%

Abstract:

Many examples of emergent behavior can be observed in self-organizing physical and biological systems, which prove to be robust, stable, and adaptable. Such behaviors are often based on very simple mechanisms and rules, but creating them artificially is a challenging task that traditional software engineering does not address well. In this article, we propose a hybrid approach that combines strategies from Genetic Programming and agent software engineering, and we demonstrate that this approach effectively yields an emergent design for given problems.

Relevance: 100.00%

Abstract:

This thesis presents the design and implementation of the persistence, distribution and versioning library CoObRA 2. First, the requirements for such a framework are collected and existing technologies for this application area are presented. The approach used in the new library employs change logs, or change lists, to define the persistence data for documents and versions. This concept is supported by a mapping onto constructs from graph theory in order to define the semantics of the model, of changes, and of their application. Regarding the implementation, the design of the library and the decisions that led to the chosen software architecture are explained in detail. This is a central aspect of the thesis, since the flexibility of the framework is an important requirement. Finally, possible applications are illustrated with concrete example applications, and experience already gained from its use in CASE tools, research applications and real-time simulation environments is presented.
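
To give a rough idea of change-log based persistence (a generic Python sketch under assumptions; the class and method names are invented and are not CoObRA 2's API), the example below records each model change as a log entry and rebuilds the document state by replaying the log. Replaying only a prefix of the log reproduces an earlier version, which is the basic idea behind versioning on top of such logs.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class Change:
    """One log entry: set `attribute` of the object `obj_id` to `value`."""
    obj_id: str
    attribute: str
    value: Any

@dataclass
class ChangeLog:
    entries: list = field(default_factory=list)

    def record(self, obj_id: str, attribute: str, value: Any) -> None:
        """Append a change instead of overwriting stored state."""
        self.entries.append(Change(obj_id, attribute, value))

    def replay(self) -> dict:
        """Rebuild the document state by applying all changes in order."""
        state: dict = {}
        for change in self.entries:
            state.setdefault(change.obj_id, {})[change.attribute] = change.value
        return state

if __name__ == "__main__":
    log = ChangeLog()
    log.record("class1", "name", "Customer")
    log.record("class1", "abstract", False)
    log.record("class1", "name", "Client")      # later change wins on replay
    print(log.replay())   # {'class1': {'name': 'Client', 'abstract': False}}
```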

Relevance: 100.00%

Abstract:

The process of developing software that takes advantage of multiple processors is commonly referred to as parallel programming. For various reasons, this process is much harder than the sequential case. For decades, parallel programming has been a problem for a small niche only: engineers working on parallelizing mostly numerical applications in High Performance Computing. This has changed with the advent of multi-core processors in mainstream computer architectures. Nowadays, parallel programming is becoming a problem for a much larger group of developers. The main objective of this thesis was to find ways to make parallel programming easier for them. Different aims were identified in order to reach the objective: research the state of the art of parallel programming today, improve the education of software developers on the topic, and provide programmers with powerful abstractions to make their work easier. To reach these aims, several key steps were taken. To start with, a survey was conducted among parallel programmers to find out about the state of the art. More than 250 people participated, yielding results about the parallel programming systems and languages in use, as well as about common problems with these systems. Furthermore, a study was conducted in university classes on parallel programming. It resulted in a list of frequently made mistakes that were analyzed and used to create a programmers' checklist to avoid them in the future. For programmers' education, an online resource called the Parawiki was set up to collect experiences and knowledge in the field of parallel programming. Another key step in this direction was the creation of the Thinking Parallel weblog, where more than 50,000 readers to date have read essays on the topic. For the third aim (powerful abstractions), it was decided to concentrate on one parallel programming system: OpenMP. Its ease of use and high level of abstraction were the most important reasons for this decision. Two different research directions were pursued. The first one resulted in a parallel library called AthenaMP. It contains so-called generic components, derived from design patterns for parallel programming. These include functionality to enhance the locks provided by OpenMP, to perform operations on large amounts of data (data-parallel programming), and to enable the implementation of irregular algorithms using task pools. AthenaMP itself serves a triple role: the components are well documented and can be used directly in programs, developers can study the source code and learn from it, and compiler writers can use it as a testing ground for their OpenMP compilers. The second research direction was targeted at changing the OpenMP specification to make the system more powerful. The main contributions here were a proposal to enable thread cancellation and a proposal to avoid busy waiting. Both were implemented in a research compiler, shown to be useful in example applications, and proposed to the OpenMP Language Committee.
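
AthenaMP and the proposals above target OpenMP in C/C++; purely as a language-neutral sketch of the task-pool idea mentioned for irregular algorithms (an illustrative analogue, not AthenaMP code), here is a small Python example in which a pool of workers pulls units of work of varying size.

```python
import concurrent.futures

def work(item: int) -> int:
    """Stand-in for an irregular unit of work whose cost varies per item."""
    return sum(range(item))

if __name__ == "__main__":
    items = [10_000, 500, 200_000, 3_000, 75_000]
    # Workers pull tasks from the pool as they become free, which balances
    # irregular work better than a static division of the item list.
    with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(work, items))
    print(results)
```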

Relevance: 100.00%

Abstract:

Land use is a crucial link between human activities and the natural environment and one of the main driving forces of global environmental change. Large parts of the terrestrial land surface are used for agriculture, forestry, settlements and infrastructure. Given the importance of land use, it is essential to understand the multitude of influential factors and the resulting land use patterns. An essential methodology to study and quantify such interactions is provided by land-use models. With land-use models it is possible to analyze the complex structure of linkages and feedbacks and also to determine the relevance of driving forces. Modeling land use and land use changes has a long tradition. On the regional scale in particular, a variety of models has been created for different regions and research questions. Modeling capabilities grow with steady advances in computer technology, which are driven on the one hand by increasing computing power and on the other hand by new methods in software development, e.g. object- and component-oriented architectures. In this thesis, SITE (Simulation of Terrestrial Environments), a novel framework for integrated regional land-use modeling, is introduced and discussed. Particular features of SITE are the notably extended capability to integrate models and the strict separation of application and implementation. These features enable efficient development, testing and use of integrated land-use models. On its system side, SITE provides generic data structures (grid, grid cells, attributes etc.) and takes over responsibility for their administration. By means of a scripting language (Python) that has been extended by language features specific to land-use modeling, these data structures can be utilized and manipulated by modeling applications. The scripting language interpreter is embedded in SITE. The integration of sub-models can be achieved via the scripting language or by using a generic interface provided by SITE. Furthermore, functionalities important for land-use modeling, such as model calibration, model tests and analysis support for simulation results, have been integrated into the generic framework. During the implementation of SITE, specific emphasis was laid on expandability, maintainability and usability. Along with the modeling framework, a land-use model for analyzing the stability of tropical rainforest margins was developed in the context of the collaborative research project STORMA (SFB 552). In a research area in Central Sulawesi, Indonesia, socio-environmental impacts of land-use changes were examined. SITE was used to simulate land-use dynamics in the historical period of 1981 to 2002. In addition, a scenario that did not consider migration in the population dynamics was analyzed. For the calculation of crop yields and trace gas emissions, the DAYCENT agro-ecosystem model was integrated. In this case study, it could be shown that land-use changes in the Indonesian research area were mainly characterized by the expansion of agricultural areas at the expense of natural forest. For this reason, the situation had to be interpreted as unsustainable even though increased agricultural use implied economic improvements and higher farmers' incomes. Due to the importance of model calibration, it was explicitly addressed in the SITE architecture through the introduction of a specific component.
The calibration functionality can be used by all SITE applications and enables largely automated model calibration. Calibration in SITE is understood as a process that finds an optimal or at least adequate solution for a set of arbitrarily selectable model parameters with respect to an objective function. In SITE, an objective function is typically a map comparison algorithm capable of comparing a simulation result to a reference map. Several map optimization and map comparison methodologies are available and can be combined. The STORMA land-use model was calibrated using a genetic algorithm for optimization and the figure-of-merit map comparison measure as the objective function. The time period for the calibration ranged from 1981 to 2002. For this period, the corresponding reference land-use maps were compiled. It could be shown that an efficient automated model calibration with SITE is possible. Nevertheless, the selection of the calibration parameters requires detailed knowledge of the underlying land-use model and cannot be automated. In another case study, decreases in crop yields and the resulting losses in income from coffee cultivation were analyzed and quantified under the assumption of four different deforestation scenarios. For this task, an empirical model describing the dependence of bee pollination and the resulting coffee fruit set on the distance to the closest natural forest was integrated. Land-use simulations showed that, depending on the magnitude and location of ongoing forest conversion, pollination services are expected to decline continuously. This results in a reduction of coffee yields of up to 18% and a loss of net revenues per hectare of up to 14%. However, the study also showed that ecological and economic values can be preserved if patches of natural vegetation are conserved in the agricultural landscape.
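
As a small illustration of the kind of objective function mentioned above, the sketch below computes a simplified figure-of-merit agreement between a simulated and a reference land-use map on toy grids; the exact definition and the map data used for the STORMA model may differ.

```python
def figure_of_merit(initial, reference, simulated):
    """Simplified figure of merit: agreement restricted to changed cells.

    hits         = cells that changed in reality and were simulated as
                   changing to the correct class
    misses       = cells that changed in reality but not in the simulation
    wrong_class  = cells that changed in both, but to different classes
    false_alarms = cells that changed only in the simulation
    """
    hits = misses = wrong_class = false_alarms = 0
    for row_i, row_r, row_s in zip(initial, reference, simulated):
        for init, ref, sim in zip(row_i, row_r, row_s):
            real_change, sim_change = ref != init, sim != init
            if real_change and sim_change and ref == sim:
                hits += 1
            elif real_change and not sim_change:
                misses += 1
            elif real_change and sim_change:
                wrong_class += 1
            elif sim_change:
                false_alarms += 1
    denom = hits + misses + wrong_class + false_alarms
    return hits / denom if denom else 1.0

if __name__ == "__main__":
    # Toy 2x3 land-use grids: F = forest, A = agriculture.
    initial   = [["F", "F", "F"], ["F", "F", "A"]]
    reference = [["F", "A", "F"], ["A", "F", "A"]]
    simulated = [["F", "A", "A"], ["F", "F", "A"]]
    print(figure_of_merit(initial, reference, simulated))  # 1/3
```

In the calibration loop described above, an optimizer such as a genetic algorithm would then tune the model parameters so that a score of this kind is maximized.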

Relevance: 100.00%

Abstract:

Talks / presentations from the Automation Symposium 2008

Relevance: 100.00%

Abstract:

With this document, we provide a compilation of in-depth discussions of some of the most current security issues in distributed systems. The six contributions have been collected and presented at the 1st Kassel Student Workshop on Security in Distributed Systems (KaSWoSDS’08). We are pleased to present a collection of papers that not only shed light on the theoretical aspects of their topics, but are also accompanied by elaborate practical examples. In Chapter 1, Stephan Opfer discusses viruses, one of the oldest threats to system security. For years there has been an arms race between virus producers and anti-virus software providers, with no end in sight. In Chapter 2, Stefan Triller demonstrates how malicious code can be injected into a target process using a buffer overflow. Websites usually store their data and user information in databases. Like buffer overflows, SQL injection attacks targeting such databases are made possible by unwary programmers. Stephan Scheuermann gives us a deeper insight into the mechanisms behind such attacks in Chapter 3. Cross-site scripting (XSS) is a method for inserting malicious code into websites viewed by other users. Michael Blumenstein explains this issue in Chapter 4. While code can be injected into other websites via XSS attacks in order to spy on the data of internet users, spoofing subsumes all methods that directly involve taking on a false identity. In Chapter 5, Till Amma shows us different ways in which this can be done and how it is prevented. Last but not least, cryptographic methods are used to encode confidential data in such a way that even if it got into the wrong hands, the culprits cannot decode it. Over the centuries, many different ciphers have been developed, applied, and finally broken. Ilhan Glogic sketches this history in Chapter 6.
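
As a small practical illustration of the SQL injection issue discussed in Chapter 3 (a generic sketch, not taken from the workshop papers), the example below contrasts string concatenation with a parameterized query using Python's built-in sqlite3 module; the table and payload are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "x' OR '1'='1"   # a classic injection payload

# Vulnerable: the payload becomes part of the SQL text and matches every row.
vulnerable = conn.execute(
    "SELECT secret FROM users WHERE name = '" + user_input + "'").fetchall()

# Safe: the driver passes the value separately, so it is treated as data only.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)).fetchall()

print("vulnerable query returned:", vulnerable)   # leaks alice's secret
print("parameterized query returned:", safe)      # returns nothing
```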

Relevance: 100.00%

Abstract:

Image-based authentication and encryption: Identity-based cryptography (often called identity-based encryption, IBE) is a variation of asymmetric key schemes in which the user's public key may be an arbitrarily chosen string that can obviously be associated with its owner. Adi Shamir first presented such a signature scheme in 1984. In the literature, the public key is usually an email address or a social security number. The price for the free choice of key is the involvement of a trusted third party, called the Private Key Generator, which uses its private master key to generate the applicant's private key. With the work of Boneh and Franklin in 2001 on the use of the Weil pairing over elliptic curves, IBE was placed on a secure and practical foundation. In this thesis, after a general overview of problems and possible solutions for authentication tasks, the second part proposes as a new idea the use of an image of the user as the public key. To this end, the process of key issuance, the ordering of a service, e.g. the issuing of a personal, non-transferable ticket, and its inspection are described. The inspection can take place offline on the inspector's device, with the ticket and the image held ready on the customer's mobile phone. Overall, this opens up the possibility of authentication without any further disclosure of an identity, assuming that, in view of ubiquitous cameras, a person's picture is public anyway. Practicability is demonstrated with an implementation based on the IBE-JCA provider of the National University of Ireland in Maynooth, which also gives insight into the runtime behaviour to be expected in practice.

Relevance: 100.00%

Abstract:

This thesis deals with restarting automata and extensions of restarting automata. Restarting automata are a tool for recognizing formal languages. They are motivated by the linguistic method of analysis by reduction and were introduced in 1995 by Jancar, Mráz, Plátek and Vogel. A restarting automaton consists of a finite control, a read/write window of fixed size and a flexible tape. Initially the tape contains the input together with tape boundary markers. The computation of a restarting automaton proceeds in so-called cycles. Each cycle starts at the left end of the tape in the initial state, performs a local rewrite on the tape, and ends with a restart, in which the read/write window is moved back to the left end and the initial state is re-entered. This thesis is mainly concerned with two extensions of restarting automata: CD-systems of restarting automata and non-forgetting restarting automata. Non-forgetting restarting automata may finish a cycle in an arbitrary state, and CD-systems of restarting automata consist of a set of restarting automata that process the input together, their cooperation being governed by a mode of operation similar to that of CD grammar systems. For both extensions it turns out that the deterministic models are more powerful than deterministic standard restarting automata. It is shown that CD-systems of restarting automata can in many cases be simulated by non-forgetting restarting automata, and, conversely, non-forgetting restarting automata can also be simulated by CD-systems of restarting automata. Furthermore, restarting automata and non-forgetting restarting automata are investigated that are nondeterministic but make no errors. It turns out that these automata can be simulated by deterministic (non-forgetting) restarting automata if they start a new cycle immediately after the rewrite, or if they can move their window to the left and to the right. Moreover, all (non-forgetting) restarting automata that may make errors but detect them after finitely many cycles can be simulated by (non-forgetting) restarting automata that make no errors. Another important result states that deterministic monotone non-forgetting restarting automata with auxiliary symbols that end their cycle immediately after the rewrite step recognize exactly the deterministic context-free languages, whereas deterministic monotone non-forgetting restarting automata with auxiliary symbols without this restriction recognize strictly more, namely the left-right regular languages. This separates, for the first time, restarting automata with auxiliary symbols that end their cycle immediately after the rewrite step from restarting automata of the same type without this restriction. It is particularly noteworthy that both types of automata characterize well-known language classes.
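
To make the cycle structure described above concrete, here is a rough Python sketch in the spirit of a restarting automaton that recognizes the language { a^n b^n : n >= 0 } by deleting one 'ab' factor per cycle; it is an illustrative simplification, not a faithful formal construction of any of the automaton types studied in the thesis.

```python
def accepts(word: str) -> bool:
    """Recognize { a^n b^n : n >= 0 } by restart cycles, each performing one
    length-reducing local rewrite (deleting an 'ab' factor)."""
    tape = list(word)
    while tape:                                   # one cycle per iteration
        # Scan from the left: reject if the tape is no longer of shape a*b*.
        if any(x == "b" and y == "a" for x, y in zip(tape, tape[1:])):
            return False
        if "a" not in tape or "b" not in tape:
            return False                          # no 'ab' factor left: reject
        pos = tape.index("b") - 1                 # the unique 'ab' boundary
        del tape[pos:pos + 2]                     # local rewrite: delete 'ab'
        # Restart: the next iteration begins again at the left end.
    return True                                   # empty tape: accept

if __name__ == "__main__":
    for w in ("", "ab", "aaabbb", "aab", "abb", "abab"):
        print(repr(w), accepts(w))
```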