978 results for hypertext markup language
Abstract:
The Arnamagnæan Institute, principally in the form of the present writer, has been involved in a number of projects to do with the digitisation, electronic description and text-encoding of medieval manuscripts. Several of these projects were dealt with in a previous article 'The view from the North: Some Scandinavian digitisation projects', NCD review, 4 (2004), pp. 22-30. This paper looks in some depth at two others, MASTER and CHLT. The Arnamagnæan Institute is a teaching and research institute within the Faculty of Humanities at the University of Copenhagen. It is named after the Icelandic scholar and antiquarian Árni Magnússon (1663-1730), secretary of the Royal Danish Archives and Professor of Danish Antiquities at the University of Copenhagen, who in the course of his lifetime built up what is arguably the single most important collection of early Scandinavian manuscripts in the world, some 2,500 manuscript items, the earliest dating from the 12th century. The majority of these are from Iceland, but the collection also contains important Norwegian, Danish and Swedish manuscripts, along with approximately 100 manuscripts of continental provenance. In addition to the manuscripts proper, there are collections of original charters and apographa: 776 Norwegian (including Faroese, Shetlandic and Orcadian) charters and 2895 copies, 1571 Danish charters and 1372 copies, and 1345 Icelandic charters and 5942 copies. When he died in 1730, Árni Magnússon bequeathed his collection to the University of Copenhagen. The original collection has subsequently been augmented through individual purchases and gifts and the acquisition of a number of smaller collections, bringing the total to nearly 3000 manuscript items, which, with the charters and apographa, comprise over half a million pages.
Abstract:
Thesis digitized by the Direction des bibliothèques of the Université de Montréal.
Abstract:
Thesis digitized by the Direction des bibliothèques of the Université de Montréal.
Abstract:
In Model-Driven Engineering (MDE), the developer creates a model in a language such as the Unified Modeling Language (UML) or UML for Real-Time (UML-RT), and tools such as Papyrus or Papyrus-RT generate code from that model. Tracing gives developers insight into their running application, such as which events occur and when. We add monitoring capabilities, using the Linux Trace Toolkit: next generation (LTTng), to models created in UML-RT with Papyrus-RT. The implementation requires changing the code generator so that tracing statements for the events the user wants to monitor are inserted into the generated code. We also change the makefile to automate the build process, and we create an Extensible Markup Language (XML) file that lets developers view their traces visually in Trace Compass, an Eclipse-based trace viewer. Finally, we validate our results by creating and tracing three models.
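The abstract does not show the generated instrumentation, but the kind of code such a modified generator could emit can be sketched with LTTng-UST's standard tracepoint mechanism. The provider name, event name and fields below (model_trace, msg_received, capsule/port/signal) are invented for illustration and are not Papyrus-RT's actual output.

```cpp
// model_trace_tp.h -- hypothetical tracepoint provider that a modified code
// generator might emit next to the generated capsule sources.
#undef TRACEPOINT_PROVIDER
#define TRACEPOINT_PROVIDER model_trace

#undef TRACEPOINT_INCLUDE
#define TRACEPOINT_INCLUDE "./model_trace_tp.h"

#if !defined(MODEL_TRACE_TP_H) || defined(TRACEPOINT_HEADER_MULTI_READ)
#define MODEL_TRACE_TP_H

#include <lttng/tracepoint.h>

// One event per monitored model occurrence: here, a message arriving on a port.
TRACEPOINT_EVENT(
    model_trace,
    msg_received,
    TP_ARGS(const char *, capsule, const char *, port, const char *, signal),
    TP_FIELDS(
        ctf_string(capsule_name, capsule)
        ctf_string(port_name, port)
        ctf_string(signal_name, signal)
    )
)

#endif // MODEL_TRACE_TP_H

#include <lttng/tracepoint-event.h>

// In exactly one translation unit the generator would also emit:
//   #define TRACEPOINT_CREATE_PROBES
//   #define TRACEPOINT_DEFINE
//   #include "model_trace_tp.h"
//
// and at each monitored point in the generated behaviour code, a call such as:
//   tracepoint(model_trace, msg_received, "Pinger", "pingPort", "pong");
//
// The makefile change then amounts to compiling the provider and linking the
// binary with -llttng-ust.
```

Events recorded this way could then be described to Trace Compass through the XML data-driven analysis file the abstract mentions.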
Abstract:
The advantages of a COG (Component Object Graphic) approach to the composition of PDF pages have been set out in a previous paper [1]. However, if pages are to be composed in this way then the individual graphic objects must have known bounding boxes and must be correctly placed on the page in a process that resembles the link editing of a multi-module computer program. Ideally the linker should be able to utilize all declared resource information attached to each COG. We have investigated the use of an XML application called Personalized Print Markup Language (PPML) to control the link editing process for PDF COGs. Our experiments, though successful, have shown up the shortcomings of PPML's resource handling capabilities, which are currently active at the document and page levels but which cannot be elegantly applied to individual graphic objects at a sub-page level. Proposals are put forward for modifications to PPML that would make any COG-based approach to page composition easier.
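The link-editing analogy can be made concrete with a small sketch: each COG carries a declared bounding box in its own coordinate space, and composing a page amounts to resolving each reference to a COG and emitting a placement (a translation) for it. The types below are invented for illustration and are not the COG or PPML data model.

```cpp
#include <string>
#include <utility>
#include <vector>

// Axis-aligned bounding box in PDF-style points.
struct BBox {
    double x0, y0, x1, y1;
};

// A reusable graphic object with a declared bounding box in its own coordinate space.
struct Cog {
    std::string id;
    BBox bounds;
};

// A resolved placement: which COG is drawn, and where, on the page.
struct Placement {
    std::string cogId;
    double dx, dy;  // translation applied when the COG is drawn on the page
};

// "Link editing" for one page: resolve each requested COG and anchor its lower-left
// corner at the requested page position, much as a linker relocates object modules.
std::vector<Placement> composePage(
    const std::vector<Cog>& library,
    const std::vector<std::pair<std::string, std::pair<double, double>>>& requests) {
    std::vector<Placement> placements;
    for (const auto& [cogId, pos] : requests) {
        for (const auto& cog : library) {
            if (cog.id == cogId) {
                placements.push_back({cogId, pos.first - cog.bounds.x0,
                                      pos.second - cog.bounds.y0});
                break;
            }
        }
    }
    return placements;
}
```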
Abstract:
Public agencies are increasingly required to collaborate with each other in order to provide high-quality e-government services. This collaboration is usually based on the service-oriented approach and supported by interoperability platforms. Such platforms are specialized middleware-based infrastructures enabling the provision, discovery and invocation of interoperable software services. In turn, given that personal data handled by governments are often very sensitive, most governments have developed some sort of legislation focusing on data protection. This paper proposes solutions for monitoring and enforcing data protection laws within an E-government Interoperability Platform. In particular, the proposal addresses requirements posed by the Uruguayan Data Protection Law and the Uruguayan E-government Platform, although it can also be applied in similar scenarios. The solutions are based on well-known integration mechanisms (e.g. Enterprise Service Bus) as well as recognized security standards (e.g. eXtensible Access Control Markup Language) and were completely prototyped leveraging the SwitchYard ESB product.
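The abstract does not give implementation details, but the enforcement pattern it references, a policy enforcement point querying a policy decision point as in XACML, can be sketched as follows. The request attributes, rules and the deny-by-default logic are invented for illustration and are not the Uruguayan platform's actual policies or code.

```cpp
#include <iostream>
#include <string>
#include <utility>
#include <vector>

// Simplified XACML-style request: who wants to do what to which resource, and why.
struct AccessRequest {
    std::string subjectRole;  // e.g. "tax-office-clerk"
    std::string resource;     // e.g. "citizen/health-record"
    std::string action;       // e.g. "read"
    std::string purpose;      // e.g. "tax-assessment"
};

enum class Decision { Permit, Deny };

// A rule permits one (role, resource, action, purpose) combination.
struct Rule {
    std::string subjectRole, resource, action, purpose;
};

// Minimal policy decision point: deny-by-default evaluation over a rule set,
// mirroring how a PDP answers requests forwarded by the ESB-side enforcement point.
class PolicyDecisionPoint {
public:
    explicit PolicyDecisionPoint(std::vector<Rule> rules) : rules_(std::move(rules)) {}

    Decision evaluate(const AccessRequest& r) const {
        for (const auto& rule : rules_) {
            if (rule.subjectRole == r.subjectRole && rule.resource == r.resource &&
                rule.action == r.action && rule.purpose == r.purpose) {
                return Decision::Permit;
            }
        }
        return Decision::Deny;  // personal data stays protected unless explicitly permitted
    }

private:
    std::vector<Rule> rules_;
};

int main() {
    PolicyDecisionPoint pdp(
        {{"health-agency-doctor", "citizen/health-record", "read", "treatment"}});

    AccessRequest req{"tax-office-clerk", "citizen/health-record", "read", "tax-assessment"};
    std::cout << (pdp.evaluate(req) == Decision::Permit ? "Permit" : "Deny") << "\n";  // Deny
}
```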
Abstract:
Repeat photography is an efficient and effective method for identifying trends of change in landscapes, and it has long been used to illustrate long-term landscape change. In the Northeast of Portugal, landscape change is currently driven mostly by agricultural abandonment and by agricultural and energy policy, and there is a need to monitor changes in the region with a multitemporal and multiscale approach. This project aimed to establish an online repository of oblique digital photography from the region, to be used to register the condition of the landscape as recorded in historical and contemporary photography over time, and to support qualitative and quantitative assessment of landscape change using repeat photography techniques and methods. It involved the development of a relational database and a series of web-based services written in the PHP: Hypertext Preprocessor language, and the development of an interface, built with Joomla, through which users upload and download pictures. The repository makes it possible to upload, store, search (by location, theme, or date), display, and download pictures for Northeastern Portugal. The web service is intended to help researchers quickly obtain, through the search engine developed for it, the photographs needed to apply repeat photography. It can be accessed at: http://esa.ipb.pt/digitalandscape/.
Abstract:
Variable Data Printing (VDP) has brought new flexibility and dynamism to the printed page. Each printed instance of a specific class of document can now have different degrees of customized content within the document template. This flexibility comes at a cost. If every printed page is potentially different from all others it must be rasterized separately, which is a time-consuming process. Technologies such as PPML (Personalized Print Markup Language) attempt to address this problem by dividing the bitmapped page into components that can be cached at the raster level, thereby speeding up the generation of page instances. A large number of documents are stored in Page Description Languages at a higher level of abstraction than the bitmapped page. Much of this content could be reused within a VDP environment provided that separable document components can be identified and extracted. These components then need to be individually rasterisable so that each high-level component can be related to its low-level (bitmap) equivalent. Unfortunately, the unstructured nature of most Page Description Languages makes it difficult to extract content easily. This paper outlines the problems encountered in extracting component-based content from existing page description formats, such as PostScript, PDF and SVG, and how the differences between the formats affect the ease with which content can be extracted. The techniques are illustrated with reference to a tool called COG Extractor, which extracts content from PDF and SVG and prepares it for reuse.
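The caching idea behind this kind of raster-level reuse can be illustrated with a sketch: each reusable component is rasterized once and the resulting bitmap is reused across page instances, so only the variable parts are rasterized per page. The component and raster types below are placeholders, not part of PPML or of any particular RIP.

```cpp
#include <functional>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

// Placeholder for a rasterized component (e.g. a bitmap produced by a RIP).
struct Raster {
    std::vector<unsigned char> pixels;
};

// Caches rasters of reusable document components keyed by a stable component ID,
// so a component shared by many page instances is rasterized only once.
class ComponentRasterCache {
public:
    using Rasterizer = std::function<Raster(const std::string& componentId)>;

    explicit ComponentRasterCache(Rasterizer rip) : rip_(std::move(rip)) {}

    const Raster& get(const std::string& componentId) {
        auto it = cache_.find(componentId);
        if (it == cache_.end()) {
            it = cache_.emplace(componentId, rip_(componentId)).first;  // rasterize once
        }
        return it->second;  // reuse on every subsequent page instance
    }

private:
    Rasterizer rip_;
    std::unordered_map<std::string, Raster> cache_;
};
```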
Abstract:
Effective and efficient implementation of intelligent, networked manufacturing systems requires enterprise-level integration. Networked manufacturing offers several advantages in the current competitive climate: it shortens the manufacturing cycle time and maintains production flexibility, thereby yielding several feasible process plans. The first step in this direction is to integrate manufacturing functions such as process planning and scheduling for multiple jobs in a network-based manufacturing system; it is difficult to determine a single plan that meets conflicting objectives simultaneously. This paper describes a mobile-agent-based negotiation approach for integrating manufacturing functions in a distributed manner, and presents its fundamental framework and functions. In addition, an ontology has been constructed using the Protégé software, which can export knowledge as Extensible Markup Language (XML) schemas of Web Ontology Language (OWL) documents. The generated XML schemas are used to transfer information throughout the manufacturing network for the intelligent, interoperable integration of product data models and manufacturing resources. To validate the feasibility of the proposed approach, an illustrative example covering varied production environments, including fluctuations in production demand, is presented, and the performance and effectiveness of the proposed approach are compared with an evolutionary-algorithm-based Hybrid Dynamic-DNA (HD-DNA) algorithm. The results show that the proposed scheme is effective and acceptable for the integration of manufacturing functions.
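The abstract leaves the negotiation protocol unspecified; the sketch below shows only the generic contract-net pattern that agent-based approaches of this kind typically build on (an operation is announced, machine agents bid a completion time, the best bid wins). All names and numbers are invented for illustration and are not the paper's protocol.

```cpp
#include <iostream>
#include <limits>
#include <string>
#include <vector>

// A bid returned by a machine agent for an announced operation.
struct Bid {
    std::string machine;
    double completionTime;  // hypothetical time units
};

// A machine agent bids its currently queued work plus the operation's processing time.
struct MachineAgent {
    std::string name;
    double queuedTime;
    Bid bid(double processingTime) const { return {name, queuedTime + processingTime}; }
};

// Contract-net style award: announce the operation, collect bids, award the earliest finish.
Bid awardOperation(std::vector<MachineAgent>& machines, double processingTime) {
    Bid best{"", std::numeric_limits<double>::max()};
    for (const auto& m : machines) {
        Bid b = m.bid(processingTime);
        if (b.completionTime < best.completionTime) best = b;
    }
    for (auto& m : machines) {
        if (m.name == best.machine) m.queuedTime = best.completionTime;  // book the work
    }
    return best;
}

int main() {
    std::vector<MachineAgent> machines{{"M1", 3.0}, {"M2", 5.0}, {"M3", 0.5}};
    Bid winner = awardOperation(machines, 2.0);
    std::cout << winner.machine << " finishes at t=" << winner.completionTime << "\n";  // M3 at 2.5
}
```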
Abstract:
Human interaction with computers involves a combination of programming and use, and the difference between the two tasks is not always explicit. Is entering commands in a computer-aided design program use, or programming in an interpreted language? Is modifying a spreadsheet with macros use or programming? Is using an Integrated Development Environment (IDE) to insert data into a file use (of the IDE) or programming? Is writing a text in LaTeX or HTML use, or programming in a markup language? Is resorting to a symbolic computation program use or programming? Is using a word processor use or visual programming? The user is not required to have complete knowledge of every command, every menu, every symbol of the software they use. Nor is memorizing the syntax and every operational detail of a program a necessary, or even useful, attribute for the user; acquiring that knowledge does not ensure more efficient use. When starting out, only a few elementary instructions are received, sometimes from a colleague or a teacher, or found by searching the Internet. With familiarity, users demand more of the software they use and of themselves: a manual becomes a very useful resource. The confidence gained periodically creates the need for self-examination and for broadening one's knowledge. In this way, anyone who uses computers is eventually confronted with a task that can, in effect, be considered programming, or that requires it. An immediate question then arises (if no one has decided for you): which programming language to choose. The multi-paradigm approach and long track record of C++ make it attractive for applications where efficiency is combined with the availability of data structures and algorithms adopted by industry (what is colloquially called the STL, Standard Template Library, cf. [#breymann, #josuttis], and more generally the Standard Library). Moreover, popular languages such as Java, C# and PHP have syntaxes inspired by, and in many parts coinciding with, those of C and C++. For example, a "for" loop in Java partially coincides with that of C99, which is in turn a subset of the C++ "for". It is the details, the efficiency and the capabilities of C++ that make professional software possible. All the classic operating systems (Unix, Microsoft Windows, Linux) provide compilers, IDEs and libraries, and are themselves largely built with C and C++. Compared with other languages, the amount of tooling available and the knowledge accumulated over decades are hard to ignore. That same accumulation makes the syntax of C++ look much larger than is strictly necessary and drives away potential newcomers. The long evolution of C++ has also introduced very marked differences of style: code from the 1980s and 1990s is often less readable than what is produced today. Many tutorials available online make the language look less rigorous (and more complex) than it really is, since the general case of the syntax is rarely presented, and many authors still use the C headers when they are no longer necessary. Scott Meyers states that C++ is a federation of languages [#scottmeyers] and, for that reason, it requires approaches different from those suited to other languages.
Without some systematization it is difficult to appreciate its compactness and coherence. Yet the harmonious way in which the syntactic components fit together is one of C++'s great strengths, apparent only through experimentation and careful reading. This monograph is aimed at those who intend to use C++ as a professional software tool. In terms of academic prerequisites, a first-cycle (undergraduate) degree in science or engineering will heighten interest in some of the more technical aspects of the language, but anyone with a taste for experimentation will profit from the content. This text does not seek encyclopaedic exhaustiveness in its coverage of the subject. In it I provide, in a direct way, an introduction to C++ that makes it possible to start producing code without the cost of gathering information scattered across sources and notations. I thereby anticipate its use in Portuguese-speaking countries, since the texts I have found are either more demanding or less complete, and frequently both.
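As a small illustration of the syntactic overlap and of the more modern style the text refers to, consider the two loops below: the loop syntax of the first is shared almost verbatim with C99, Java and C#, while the second uses the range-based for and a Standard Library container introduced by modern C++.

```cpp
#include <iostream>
#include <vector>

int main() {
    // C-style counting loop: this loop syntax is essentially common to C99, Java and C#.
    for (int i = 0; i < 5; ++i) {
        std::cout << i << ' ';
    }
    std::cout << '\n';

    // Modern C++: range-based for over a Standard Library container.
    std::vector<int> values{1, 2, 3, 4, 5};
    for (int v : values) {
        std::cout << v << ' ';
    }
    std::cout << '\n';
}
```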
Abstract:
Nowadays people heavily rely on the Internet for information and knowledge. Wikipedia is an online multilingual encyclopaedia that contains a very large number of detailed articles covering most written languages. It is often considered to be a treasury of human knowledge. It includes extensive hypertext links between documents of the same language for easy navigation. However, the pages in different languages are rarely cross-linked except for direct equivalent pages on the same subject in different languages. This could pose serious difficulties to users seeking information or knowledge from sources in different languages, or where there is no equivalent page in one language or another. In this thesis, a new information retrieval task, cross-lingual link discovery (CLLD), is proposed to tackle the problem of the lack of cross-lingual anchored links in a knowledge base such as Wikipedia. In contrast to traditional information retrieval tasks, cross-lingual link discovery algorithms actively recommend a set of meaningful anchors in a source document and establish links to documents in an alternative language. In other words, cross-lingual link discovery is a way of automatically finding hypertext links between documents in different languages, which is particularly helpful for knowledge discovery in different language domains. This study is specifically focused on Chinese / English link discovery (C/ELD). Chinese / English link discovery is a special case of the cross-lingual link discovery task. It involves natural language processing (NLP), cross-lingual information retrieval (CLIR) and cross-lingual link discovery. To justify the effectiveness of CLLD, a standard evaluation framework is also proposed. The evaluation framework includes topics, document collections, a gold standard dataset, evaluation metrics, and toolkits for run pooling, link assessment and system evaluation. With the evaluation framework, the performance of CLLD approaches and systems can be quantified. This thesis contributes to the research on natural language processing and cross-lingual information retrieval in CLLD: 1) a new simple but effective Chinese segmentation method, n-gram mutual information, is presented for determining the boundaries of Chinese text; 2) a voting mechanism for named entity translation is demonstrated, achieving high precision in English / Chinese machine translation; 3) a link mining approach that mines the existing link structure for anchor probabilities achieves encouraging results in suggesting cross-lingual Chinese / English links in Wikipedia. This approach was examined in the experiments on automatic generation of cross-lingual links carried out as part of the study. The overall major contribution of this thesis is the provision of a standard evaluation framework for cross-lingual link discovery research. It is important in CLLD evaluation to have this framework, which helps in benchmarking the performance of various CLLD systems and in identifying good CLLD realisation approaches. The evaluation methods and the evaluation framework described in this thesis have been utilised to quantify the system performance in the NTCIR-9 Crosslink task, the first information retrieval track of its kind.
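The thesis abstract does not spell out the exact formulation, but the general idea of n-gram mutual-information segmentation can be sketched as follows: estimate unigram and adjacent-pair probabilities from a corpus, score each adjacent character pair by pointwise mutual information, and place a segment boundary wherever the score falls below a threshold. Characters are represented as pre-decoded strings to keep the sketch independent of UTF-8 handling; the probability estimates are crude relative frequencies, and the threshold and function names are illustrative.

```cpp
#include <cmath>
#include <map>
#include <string>
#include <utility>
#include <vector>

using Chars = std::vector<std::string>;  // one entry per (already decoded) character

// Unigram and adjacent-pair (bigram) counts over a corpus of character sequences.
struct NgramCounts {
    std::map<std::string, double> uni;
    std::map<std::pair<std::string, std::string>, double> bi;
    double total = 0.0;
};

NgramCounts countNgrams(const std::vector<Chars>& corpus) {
    NgramCounts c;
    for (const auto& sent : corpus) {
        for (size_t i = 0; i < sent.size(); ++i) {
            c.uni[sent[i]] += 1.0;
            c.total += 1.0;
            if (i + 1 < sent.size()) c.bi[{sent[i], sent[i + 1]}] += 1.0;
        }
    }
    return c;
}

// Pointwise mutual information of an adjacent character pair; low values suggest
// the pair straddles a word boundary. Frequencies are crude, sketch-level estimates.
double pmi(const NgramCounts& c, const std::string& a, const std::string& b) {
    auto it = c.bi.find({a, b});
    if (it == c.bi.end()) return -1e9;  // unseen pair: strong boundary signal
    double pa = c.uni.at(a) / c.total;
    double pb = c.uni.at(b) / c.total;
    double pab = it->second / c.total;
    return std::log2(pab / (pa * pb));
}

// Insert a boundary between adjacent characters whose PMI falls below the threshold.
std::vector<Chars> segment(const NgramCounts& c, const Chars& text, double threshold) {
    std::vector<Chars> words;
    Chars current;
    for (size_t i = 0; i < text.size(); ++i) {
        current.push_back(text[i]);
        if (i + 1 == text.size() || pmi(c, text[i], text[i + 1]) < threshold) {
            words.push_back(current);
            current.clear();
        }
    }
    return words;
}
```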
Abstract:
Hypertexts are digital texts characterized by interactive hyperlinking and a fragmented textual organization. Increasingly prominent since the early 1990s, hypertexts have become a common text type both on the Internet and in a variety of other digital contexts. Although hypertext has been studied widely in disciplines like hypertext theory and media studies, formal linguistic approaches to it remain relatively rare. This study examines coherence negotiation in hypertext with particular reference to hypertext fiction. Coherence, or the quality of making sense, is a fundamental property of textness. Proceeding from the premise that coherence is a subjectively evaluated property rather than an objective quality arising directly from textual cues, the study focuses on the processes through which readers interact with hyperlinks and negotiate continuity between hypertextual fragments. The study begins with a typological discussion of textuality and an overview of the historical and technological precedents of modern hypertexts. Then, making use of text linguistic, discourse analytical, pragmatic, and narratological approaches to textual coherence, the study takes established models developed for analyzing and describing conventional texts, and examines their applicability to hypertext. Primary data derived from a collection of hyperfictions is used throughout to illustrate the mechanisms in practice. Hypertextual coherence negotiation is shown to require the ability to cognitively operate between local and global coherence by means of processing lexical cohesion, discourse topical continuities, inferences and implications, and shifting cognitive frames. The main conclusion of the study is that the style of reading required by hypertextuality fosters a new paradigm of coherence. Defined as fuzzy coherence, this new approach to textual sensemaking is predicated on an acceptance of the coherence challenges readers experience when the act of reading comes to involve repeated encounters with referentially imprecise hyperlinks and discourse topical shifts. A practical application of fuzzy coherence is shown to be in effect in the way coherence is actively manipulated in hypertext narratives.
Abstract:
Metaphor is a multi-stage programming language extension to an imperative, object-oriented language in the style of C# or Java. This paper discusses some issues we faced when applying multi-stage language design concepts to an imperative base language and run-time environment. The issues range from dealing with pervasive references and open code to garbage collection and implementing cross-stage persistence.
Abstract:
Language is a unique aspect of human communication because it can be used to discuss itself in its own terms. For this reason, human societies potentially have capacities for co-ordination, reflexive self-correction, and innovation superior to those of other animal, physical or cybernetic systems. However, this analysis also reveals that language is interconnected with the economically and technologically mediated social sphere and hence is vulnerable to abstraction, objectification, reification, and therefore ideology – all of which are antithetical to its reflexive function, whilst paradoxically being a fundamental part of it. In particular, in capitalism, language is increasingly commodified within the social domains created and affected by ubiquitous communication technologies. The advent of the so-called 'knowledge economy' implicates exchangeable forms of thought (language) as the fundamental commodities of this emerging system. The historical point at which a 'knowledge economy' emerges, then, is the critical point at which thought itself becomes a commodified 'thing', and language becomes its "objective" means of exchange. However, the processes by which such commodification and objectification occur obscure the unique social relations within which these language commodities are produced. The latest economic phase of capitalism – the knowledge economy – and the obfuscating trajectory which accompanies it are, we argue, destroying the reflexive capacity of language, particularly through the process of commodification. This can be seen in that the language practices that have emerged in conjunction with digital technologies are increasingly non-reflexive and therefore less capable of self-critical, conscious change.