157 results for markup
Abstract:
Geography Markup Language (GML). Keyhole Markup Language (KML). Google Earth. Google Earth client.
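The entry above is only a keyword outline, but the shape of a KML document is easy to illustrate. Below is a minimal sketch using only Python's standard library; the placemark name and coordinates are invented for illustration, and the code simply emits the kind of single-placemark file a Google Earth client can load.

    import xml.etree.ElementTree as ET

    KML_NS = "http://www.opengis.net/kml/2.2"
    ET.register_namespace("", KML_NS)  # serialize KML as the default namespace

    # Build a minimal KML document: one placemark with a point geometry.
    kml = ET.Element(f"{{{KML_NS}}}kml")
    placemark = ET.SubElement(kml, f"{{{KML_NS}}}Placemark")
    ET.SubElement(placemark, f"{{{KML_NS}}}name").text = "Example site"  # hypothetical
    point = ET.SubElement(placemark, f"{{{KML_NS}}}Point")
    # KML coordinates are longitude,latitude[,altitude]
    ET.SubElement(point, f"{{{KML_NS}}}coordinates").text = "-122.08,37.42,0"

    print(ET.tostring(kml, encoding="unicode"))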
Abstract:
XML and its use in bioinformatics applications. The use of XML in the JPD module of SMS. Discussion and future work.
Abstract:
This report describes our attempt to add animation as another data type to be used on the World Wide Web. Our current network infrastructure, the Internet, is incapable of carrying the video and audio streams that would be needed to use them for presentation purposes on the web. In contrast, object-oriented animation proves to be efficient in terms of network resource requirements. We defined an animation model to support drawing-based and frame-based animation. We also extended the HyperText Markup Language in order to include this animation model. BU-NCSA Mosanim, a modified version of NCSA Mosaic for X (v2.5), is available to demonstrate the concept and potential of animation in presentations and interactive game playing over the web.
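The report's animation model is not reproduced in the abstract, but the distinction it draws can be sketched. The following is a purely illustrative Python sketch, not the authors' actual design (all class and field names are invented): frame-based animation ships complete frames, while drawing-based animation ships a compact list of drawing commands to be replayed by the client, which is why it is cheap in network terms.

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class Frame:
        """Frame-based animation: each frame is a full bitmap (heavy on the wire)."""
        pixels: bytes
        duration_ms: int

    @dataclass
    class DrawCommand:
        """Drawing-based animation: a compact instruction replayed by the client."""
        op: str                  # e.g. "line", "circle", "move" (invented vocabulary)
        args: Tuple[float, ...]  # operation parameters
        at_ms: int               # when to execute, relative to clip start

    @dataclass
    class Animation:
        frames: List[Frame] = field(default_factory=list)
        commands: List[DrawCommand] = field(default_factory=list)

    # A drawing-based clip: two small commands instead of two full bitmaps.
    clip = Animation(commands=[
        DrawCommand("move", (10.0, 10.0), at_ms=0),
        DrawCommand("line", (10.0, 10.0, 90.0, 40.0), at_ms=100),
    ])
    print(len(clip.commands), "commands")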
Abstract:
In research areas involving mathematical rigor, there are numerous benefits to adopting a formal representation of models and arguments: reusability, automatic evaluation of examples, and verification of consistency and correctness. However, broad accessibility has not been a priority in the design of formal verification tools that can provide these benefits. We propose a few design criteria to address these issues: a simple, familiar, and conventional concrete syntax that is independent of any environment, application, or verification strategy, and the possibility of reducing workload and entry costs by employing features selectively. We demonstrate the feasibility of satisfying such criteria by presenting our own formal representation and verification system. Our system’s concrete syntax overlaps with English, LaTeX, and MediaWiki markup wherever possible, and its verifier relies on heuristic search techniques that make the formal authoring process more manageable and consistent with prevailing practices. We employ techniques and algorithms that ensure a simple, uniform, and flexible definition and design for the system, so that it is easy to augment, extend, and improve.
Abstract:
This research investigates some of the reasons for the reported difficulties experienced by writers when using editing software designed for structured documents. The overall objective was to determine whether there are aspects of the software interfaces which militate against optimal document construction by writers who are not computer experts, and to suggest possible remedies. Studies were undertaken to explore the nature and extent of the difficulties and to identify which components of the software interfaces are involved. A model of a revised user interface was tested, and some possible adaptations to the interface are proposed which may help overcome the difficulties. The methodology comprised:
1. identification and description of the nature of a ‘structured document’ and what distinguishes it from other types of document used on computers;
2. isolation of the requirements of users of such documents, and the construction of a set of personas which describe them;
3. evaluation of other work on the interaction between humans and computers, specifically in software for creating and editing structured documents;
4. estimation of the levels of adoption of the available software for editing structured documents and the reactions of existing users to it, with specific reference to difficulties encountered in using it;
5. examination of the software and identification of any mismatches between the expectations of users and the facilities provided by the software;
6. assessment of any physical or psychological factors in the reported difficulties, and of what (if any) changes to the software might affect these.
The conclusions are that seven of the twelve modifications tested could contribute to an improvement in usability, effectiveness, and efficiency when writing structured text (new document selection; adding new sections and new lists; identifying key information typographically; the creation of cross-references and bibliographic references; and the inclusion of parts of other documents). The remaining five were seen as more applicable to editing existing material than to authoring new text (adding new elements; splitting and joining elements [before and after]; and moving block text).
Abstract:
Rule testing in transport scheduling is a complex and potentially costly business problem. This paper proposes an automated method for the rule-based testing of business rules, using the eXtensible Markup Language (XML) for rule representation and transport. A compiled approach to rule execution is also proposed for performance-critical scheduling systems.
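To make the proposal concrete, here is a minimal sketch under assumptions of our own: the rule vocabulary, the element names, and Python itself are stand-ins, not the paper's actual representation. The point illustrated is the compiled approach: the XML rule is parsed once into an executable predicate instead of being re-interpreted at every evaluation.

    import xml.etree.ElementTree as ET

    # A hypothetical scheduling rule carried as XML.
    RULE_XML = """
    <rule name="max-driving-hours">
      <condition field="driving_hours" op="le" value="9"/>
    </rule>
    """

    OPS = {"le": lambda a, b: a <= b,
           "ge": lambda a, b: a >= b,
           "eq": lambda a, b: a == b}

    def compile_rule(xml_text):
        """Parse the XML once and return a fast, reusable predicate."""
        cond = ET.fromstring(xml_text).find("condition")
        fld, op, val = cond.get("field"), OPS[cond.get("op")], float(cond.get("value"))
        return lambda record: op(record[fld], val)

    check = compile_rule(RULE_XML)         # compile once ...
    print(check({"driving_hours": 8.5}))   # ... evaluate many times: True
    print(check({"driving_hours": 10.0}))  # False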
Abstract:
The objective of this paper is to describe and evaluate the application of the Text Encoding Initiative (TEI) Guidelines to a corpus of oral French, this being the first corpus of oral French where the TEI has been used. The paper explains the purpose of the corpus, both in creating a specialist corpus of néo-contage that will broaden the range of oral corpora available, and, more importantly, in creating a dataset to explore a variety of oral French that has a particularly interesting status in terms of factors such as conception orale/écrite, réalisation médiale and comportement communicatif (Koch and Oesterreicher 2001). The linguistic phenomena to be encoded are both stylistic (speech and thought presentation) and syntactic (negation, detachment, inversion), and all represent areas where previous research has highlighted the significance of factors such as medium, register and discourse type, as well as a host of linguistic factors (syntactic, phonetic, lexical). After a discussion of how a tagset can be designed and applied within the TEI to encode speech and thought presentation, negation, detachment and inversion, the final section of the paper evaluates the benefits and possible drawbacks of the methodology offered by the TEI when applied to a syntactic and stylistic markup of an oral corpus.
Abstract:
Using matched employer-employee data from the German LIAB for 2001, the authors found that German works councils are in general associated with higher earnings, even after accounting for establishment and worker heterogeneity. Works council wage premia exceed those of collective bargaining and are higher, in fact, where both institutions are present in the workplace. The authors also found evidence indicating that works councils benefit women relative to men and appear to favor foreign, east German, and service-sector workers as well. Separate evidence from quantile regressions suggests that the conjunction of works council presence and collective bargaining is important to the narrowing process. In smaller plants, even the presence of a works council markup depends on the coexistence of the works council entity with the machinery of collective bargaining.
Abstract:
Web 2.0 software in general, and wikis in particular, have been receiving growing attention as they constitute new and powerful tools capable of supporting information sharing, the creation of knowledge, and a wide range of collaborative processes and learning activities. This paper briefly introduces some of the new opportunities made possible by Web 2.0, or the social Internet, focusing on those offered by the use of wikis as learning spaces. A wiki allows documents to be created, edited, and shared on a group basis; it has a very easy and efficient markup language and is used through a simple Web browser. One of the most important characteristics of wiki technology is the ease with which pages are created and edited. The facility for wiki content to be edited by its users means that its pages and structure form a dynamic entity, in permanent evolution, where users can insert new ideas, supplement previously existing information, and correct errors and typos in a document at any time, up to the agreed final version. This paper explores wikis as a collaborative learning and knowledge-building space and their potential for supporting Virtual Communities of Practice (VCoPs). In the academic years 2007/8 and 2008/9, students of the Business Intelligence module of the Master's programme in Knowledge Management and Business Intelligence at the Instituto Superior de Estatística e Gestão de Informação of the Universidade Nova de Lisboa, Portugal, were actively involved in the creation of BIWiki, a wiki for Business Intelligence in the Portuguese language. Based on usage patterns and feedback from the students participating in this experience, some conclusions are drawn regarding the potential of this technology to support the emergence of VCoPs, and some provisional suggestions are made regarding the use of wikis to support information sharing, knowledge creation and transfer, and collaborative learning in Higher Education.
Abstract:
Using a rich and highly accurate dataset for Portugal spanning 1986 to 2013, this paper analyzes the determinants of downward nominal wage rigidity, focusing mainly on macroeconomic factors. The data support the hypothesis that recessionary periods, together with low inflation, contribute to a higher degree of wage rigidity, as measured by the incidence of nominal wage freezes. It is further highlighted how this lack of wage adjustment contributed to an increase in labor costs which culminated in a wage markup of 6-7%. This paper thus seems to corroborate the argument that low inflation exacerbated the downward inflexibility of (real) wages after the Great Recession.
Abstract:
Sharing information with those who need it has always been an idealistic goal of networked environments. With the proliferation of computer networks, information is so widely distributed among systems that well-organized schemes for retrieval, and also for discovery, are imperative. This thesis investigates the problems associated with such schemes and suggests a software architecture aimed at achieving meaningful discovery. The use of information elements as a modelling base for efficient information discovery in distributed systems is demonstrated with the aid of a novel conceptual entity called the infotron.

The investigation focuses on distributed systems and their associated problems. The study was directed towards identifying a suitable software architecture and incorporating it in an environment where information growth is phenomenal and a proper mechanism for information discovery becomes feasible. An empirical study undertaken with the aid of an election database of geographically distributed constituencies provided the required insights. This is manifested in the Election Counting and Reporting Software (ECRS) System, an essentially distributed software system designed to prepare reports for district administrators about the election counting process and to generate other miscellaneous statutory reports.

Most distributed systems of the nature of ECRS normally possess a "fragile architecture" that makes them liable to collapse when minor faults occur. This is resolved with the help of the proposed penta-tier architecture, which places five different technologies at the different tiers of the architecture. The results of the experiments conducted, and their analysis, show that such an architecture helps to keep the different components of the software insulated from internal and external faults.

The architecture thus evolved needed a mechanism to support information processing and discovery, which necessitated the introduction of the novel concept of the infotron. When a computing machine has to perform any meaningful extraction of information, it is guided by what is termed an infotron dictionary. A further empirical study examined which of the two prominent markup languages, HTML and XML, is better suited for the incorporation of infotrons; a comparative study of 200 documents in HTML and XML came out in favor of XML.

The concepts of the infotron and the infotron dictionary were applied to implement an Information Discovery System (IDS). IDS is essentially a system that starts with the infotron(s) supplied as clue(s) and distills the information required to satisfy the discoverer's need from the documents at its disposal (the information space). The various components of the system and their interactions follow the penta-tier architectural model and can therefore be considered fault-tolerant. IDS is generic in nature, and its characteristics and specifications were drawn up accordingly. Many subsystems interact with multiple infotron dictionaries maintained in the system.

To demonstrate IDS in action, and to discover information without modifying a typical Library Information System (LIS), an Information Discovery in Library Information System (IDLIS) application was developed. IDLIS is essentially a wrapper for the LIS, which maintains all the databases of the library. The purpose was to demonstrate that the functionality of a legacy system can be enhanced by augmenting it with IDS, providing an information discovery service. IDLIS proves that any legacy system can be effectively augmented with IDS to provide the additional functionality of an information discovery service. Possible applications of IDS and the scope for further research in the field are also covered.
Abstract:
This diploma thesis deals with the presentation of TEI documents in the content management system Drupal. To this end, a module is developed that makes it easy to publish documents in this XML-based format in Drupal. The module provides an interface for uploading such documents and additionally offers options for influencing how they are displayed: a dedicated menu makes it possible to set colors as well as the font size and typeface. The documents are converted by means of an XSL transformation, building on the result of a preceding project. The rendered view is enriched with dynamic elements, such as the author's annotations or the ability to switch between different versions of the text, e.g. an original and a corrected version. These functions are accessible through a toolbar displayed at the bottom of the page, which also makes it possible to search for page numbers recognized as such in the document, or to jump to them directly.
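The module itself lives inside Drupal (PHP), but its central step, applying an XSL transformation to a TEI document, can be sketched independently. Below is a minimal illustration using the third-party Python library lxml; the file names are hypothetical, with tei2html.xsl standing in for the stylesheet inherited from the earlier project.

    from lxml import etree

    # Load the TEI source and the stylesheet (file names are hypothetical).
    tei_doc = etree.parse("document.tei.xml")
    stylesheet = etree.parse("tei2html.xsl")

    # Compile the stylesheet once, then transform; str() serializes the HTML result.
    transform = etree.XSLT(stylesheet)
    html = transform(tei_doc)
    print(str(html)[:200])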
Abstract:
The markup language XML is used to annotate documents and has established itself as the standard format for data exchange. This creates a need not only to store and transfer XML documents as plain text files, but also to persist them in a better structured form, for instance in dedicated XML databases or in relational databases. Until now, relational databases have relied on two fundamentally different techniques for this: XML documents are either stored unchanged as binary or character large objects, or they are split apart so that they can be stored in normalized form in conventional relational tables (so-called "flattening" or "shredding" of the hierarchical structure). This dissertation pursues a new approach that takes a middle path between the existing solutions and draws on the capabilities of the evolved SQL standard. SQL:2003 defines complex structured and collection types (tuples, arrays, lists, sets, multisets) that make it possible to map XML documents onto relational structures in such a way that their hierarchical composition is preserved. This offers two advantages: on the one hand, proven technologies from the world of relational databases remain fully available; on the other hand, the SQL:2003 types preserve the inherent tree structure of the XML documents, so that there is no need to reassemble them on demand through expensive joins over tuples that are usually normalized and spread across several tables. The thesis first settles fundamental questions about suitable, efficient ways of mapping XML documents onto SQL:2003-conformant data types. Building on this, a suitable, reversible conversion procedure is developed, implemented in a prototype application, and analyzed. In designing the mapping procedure, particular weight is given to its use in combination with an existing, mature relational database management system (DBMS). Since commercial DBMSs so far support SQL:2003 only incompletely, it has to be examined to what extent the individual systems are suitable for the mapping procedure to be implemented; among the products considered, the DBMS IBM Informix turns out to offer the best support for complex structured and collection types. To allow a better assessment of the procedure's performance, the thesis measures the implementation's runtime as well as its memory and database storage requirements and evaluates the results.
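The dissertation's concrete mapping is not reproduced in the abstract, but the underlying idea, keeping the tree rather than shredding it into flat rows, can be sketched. In this illustrative Python fragment (the document structure is invented), nested tuples and lists stand in for SQL:2003 ROW and ARRAY/MULTISET types:

    import xml.etree.ElementTree as ET

    def to_nested(elem):
        """Map an element to (tag, attributes, text, children), preserving the
        hierarchy -- the analogue of a SQL:2003 ROW whose last field is a
        collection (ARRAY/MULTISET) of child ROWs."""
        return (
            elem.tag,
            dict(elem.attrib),
            (elem.text or "").strip(),
            [to_nested(child) for child in elem],
        )

    doc = ET.fromstring("<book id='1'><title>XML</title><author>Doe</author></book>")
    print(to_nested(doc))
    # ('book', {'id': '1'}, '', [('title', {}, 'XML', []), ('author', {}, 'Doe', [])])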
Abstract:
These freely available lecture notes, together with a collection of exams and model solutions from the years 2006 to 2015, stem from the lecture of the same name in the Bachelor's degree programme in Computer Science at the Universität Kassel, which was taught by Prof. Dr. Wegner and, from 2012 on, by Dr. Schweinsberg. They cover the fundamentals of the eXtensible Markup Language, which has established itself as a language for data exchange. In contrast to HTML, it allows documents to be enriched semantically. The lecture covers the development of XML-based languages as well as the transformation of XML documents by means of stylesheets (eXtensible Stylesheet Language, XSL). The DOM interface (Document Object Model) and SAX (Simple API for XML) are also introduced.
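As a small taste of the SAX interface the notes introduce: a minimal sketch using only Python's standard library, in which an event-driven handler counts element names while the parser streams through the input (the sample document is invented).

    import xml.sax

    class ElementCounter(xml.sax.ContentHandler):
        """SAX is event-driven: the parser invokes callbacks as it streams."""
        def __init__(self):
            super().__init__()
            self.counts = {}

        def startElement(self, name, attrs):
            self.counts[name] = self.counts.get(name, 0) + 1

    handler = ElementCounter()
    xml.sax.parseString(b"<library><book/><book/><cd/></library>", handler)
    print(handler.counts)  # {'library': 1, 'book': 2, 'cd': 1}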
Abstract:
Abstract based on that of the publication.