37 results for natiivi XML -tietokanta


Relevance:

10.00%

Publisher:

Abstract:

This dissertation presents a proposal for a system capable of bridging the gap between legislative documents in PDF format and legislative documents in open format. The main goal is to map the knowledge present in these documents so as to represent the collection as linked information. The system is composed of several components responsible for carrying out three proposed phases: data extraction, knowledge organization, and information access. The first phase proposes an approach to the extraction of structure, text and entities from PDF documents in order to obtain the desired information, according to the user's parameterization. This approach uses two different extraction methods, corresponding to the two phases of document processing: document analysis and document understanding. The criterion used to group text objects is the font used in each text object, as defined in the PDF source code (Content Stream). The approach is divided into three parts: document analysis, document understanding and conjunction. The first part deals with the extraction of text segments, following a geometric approach; the result is a list of the lines of the document's text. The second part groups the text objects according to the stipulated criterion, producing an XML document with the result of that extraction. The third and last part joins the results of the two previous parts and applies structural and logical rules in order to obtain the final XML document. The second phase proposes an ontology in the legal domain capable of organizing the information extracted by the extraction process of the first phase; it is also responsible for indexing the text of the documents. The proposed ontology has three characteristics: it is small, interoperable and shareable. The first characteristic relates to the fact that the ontology is not focused on a detailed description of the concepts involved, proposing instead a more abstract description of the entities present; the second stems from the need for interoperability with other ontologies in the legal domain, but also with the standard ontologies in general use; the third is defined so that knowledge expressed according to the proposed ontology is independent of factors such as country, language or jurisdiction. The third phase addresses the question of access to and reuse of the knowledge by users external to the system, through the development of a Web Service. This component provides access to the information by exposing a set of resources to external actors that wish to access it. The Web Service follows the REST architecture. An Android mobile application was also developed to provide visualizations of the information requests. The final result is a system capable of transforming collections of documents in PDF format into collections in open format, allowing access and reuse by other users. This system responds directly to the concerns of the open data community and of Governments, which hold many collections of this kind over which there is currently no capacity to reason about the information they contain and to turn it into data that citizens and professionals can visualize and use.
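
As a rough illustration of the grouping criterion described above, the following Python sketch groups pre-extracted text objects by font and emits XML. The input tuples and element names are invented for illustration; the dissertation's actual pipeline works directly on the PDF Content Stream and applies further structural and logical rules.

```python
# Minimal sketch: grouping pre-extracted PDF text objects by font into XML.
# The tuples stand in for data a PDF parser would extract from the Content
# Stream; element names are illustrative, not the dissertation's schema.
import xml.etree.ElementTree as ET

# (font, text) pairs in reading order, as a document-analysis step might yield them
text_objects = [
    ("F1-Bold-14", "Article 1"),
    ("F2-Regular-10", "This article establishes..."),
    ("F2-Regular-10", "the scope of the present law."),
    ("F1-Bold-14", "Article 2"),
    ("F2-Regular-10", "For the purposes of this law..."),
]

root = ET.Element("document")
current = None
for font, text in text_objects:
    if font.startswith("F1-Bold"):        # heading font starts a new section
        current = ET.SubElement(root, "section", title=text)
    elif current is not None:             # body font appends a line to the section
        ET.SubElement(current, "line").text = text

print(ET.tostring(root, encoding="unicode"))
```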

Relevance:

10.00%

Publisher:

Abstract:

The latest medical diagnosis devices enable the performance of e-diagnosis, making access to these services easier, faster and available in remote areas. However, this imposes new communication and data interchange challenges. In this paper a new XML-based format for storing cardiac signals and related information is presented. The proposed structure encompasses data acquisition devices, patient information, data description, pathological diagnosis and waveform annotation. When compared with formats of similar purpose, several advantages arise. Besides the fully integrated data model, noteworthy features include the geographical references available for e-diagnosis, the multi-stream data description, the ability to handle several simultaneous devices, the possibility of independent waveform annotation and an HL7-compliant structure for common contents. These features provide enhanced integration with existing systems and improved flexibility for cardiac data representation.
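
The sketch below shows the kind of structure such a record could take (acquisition device, patient with geographical reference, data stream, independent annotation). All element and attribute names are assumptions made for illustration; the paper's actual schema and its HL7-compliant parts are not reproduced here.

```python
# Illustrative cardiac-record XML combining device, patient and annotation data.
# Element names are invented, not the format proposed in the paper.
import xml.etree.ElementTree as ET

record = ET.Element("cardiacRecord")
ET.SubElement(record, "acquisitionDevice", model="ECG-12", channels="12")
patient = ET.SubElement(record, "patient", id="P-0042")
ET.SubElement(patient, "location", latitude="41.15", longitude="-8.61")  # e-diagnosis reference
stream = ET.SubElement(record, "stream", lead="II", samplingRate="500")
ET.SubElement(stream, "samples").text = "0.12 0.15 0.21 0.18"
annotation = ET.SubElement(record, "annotation", stream="II", start="0.4", end="0.6")
annotation.text = "Possible ST elevation"

print(ET.tostring(record, encoding="unicode"))
```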

Relevance:

10.00%

Publisher:

Abstract:

Current Learning Management Systems focus on the management of students, keeping track of their progress across all types of training activities. This type of system lacks integration with other e-Learning systems. For instance, learning objects stored in a centralized repository are unavailable throughout an organization for potential reuse. In this paper we present the interoperability features of crimsonHex - a service-oriented repository of learning objects - highlighting the use of XML languages. Its interoperability features are compliant with the existing standards, and we propose extensions to the IMS interoperability recommendation, adding new functions, formalizing an XML message interchange and also providing a REST interface. To validate the proposed extensions and their implementation in crimsonHex, we designed two repository plugins for Moodle 2.0, the first of which is already implemented and is expected to be included in the next release of this popular learning management system.
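
As a minimal sketch of how a client could use a REST repository interface of this kind, the snippet below fetches a learning object by identifier. The host URL and resource path are hypothetical; only the request pattern is illustrated, not crimsonHex's actual API.

```python
# Sketch of a client retrieving a learning object over a REST interface.
# BASE_URL and the resource path are hypothetical placeholders.
import urllib.request

BASE_URL = "http://repository.example.org/crimsonhex"   # hypothetical host

def get_learning_object(lo_id: str) -> bytes:
    """Retrieve a learning object package by its identifier."""
    with urllib.request.urlopen(f"{BASE_URL}/learning-objects/{lo_id}") as response:
        return response.read()

if __name__ == "__main__":
    package = get_learning_object("lo-123")
    print(f"received {len(package)} bytes")
```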

Relevance:

10.00%

Publisher:

Abstract:

Several social and economic factors favour the application of home automation technologies in buildings. In the particular case of residential buildings, users tend to install systems that control security, the environment, irrigation mechanisms and alarms. Thus, following the marketing premise that identifies as good practice the design of products/services that satisfy the needs identified by their users, this work is based on the creation of a home automation system, remotely controlled through an Android application, whose first goal is the control of the lamps of a dwelling. The KNX.TP protocol is used for communication among the home automation devices available at ISEP, which constitute the domotic environment of this work. In order to implement the remote control of these devices over the Internet, this work focuses on the development of an IP-KNX interface, using as control hardware an Arduino Mega 2560, an Ethernet interface board for Arduino, the KNX integration board, and a web server running PHP. For demonstration purposes, an application for the Android OS was created to control the lamps of the KNX network. Several programming languages were used in this work: C++ in the Arduino firmware, PHP on the web server and Java + XML in the Android application.
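
To make the chain of components concrete, the sketch below shows the kind of HTTP call a client could make to a PHP relay script that forwards switch commands to the Arduino-based IP-KNX interface. The endpoint, parameter names and group address are all assumptions (the project itself uses Java on Android and PHP on the server); this is a Python sketch of the request pattern only.

```python
# Hypothetical client call to a PHP script relaying commands to the IP-KNX bridge.
# SERVER, the parameter names and the group address are invented for illustration.
import urllib.parse
import urllib.request

SERVER = "http://home.example.org/knx.php"   # hypothetical PHP relay script

def switch_lamp(group_address: str, on: bool) -> str:
    """Send a switch command for a KNX group address through the web relay."""
    query = urllib.parse.urlencode({"ga": group_address, "value": "1" if on else "0"})
    with urllib.request.urlopen(f"{SERVER}?{query}") as response:
        return response.read().decode()

if __name__ == "__main__":
    print(switch_lamp("1/0/1", on=True))
```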

Relevance:

10.00%

Publisher:

Abstract:

Vishnu is a tool for XSLT visual programming in Eclipse - a popular and extensible integrated development environment. Rather than writing the XSLT transformations, the programmer loads or edits two document instances, a source document and its corresponding target document, and pairs texts between them by drawing lines over the documents. This form of XSLT programming is intended for simple transformations between related document types, such as HTML formatting or conversion among similar formats. Complex XSLT programs involving, for instance, recursive templates or second-order transformations are out of the scope of Vishnu. We present the architecture of Vishnu, composed of a graphical editor and a programming engine. The editor is an Eclipse plug-in where the programmer loads and edits document examples and pairs their content using graphical primitives. The programming engine receives the data collected by the editor and produces an XSLT program. The design of the engine and the process of creating an XSLT program from examples are also detailed. It starts with the generation of an initial transformation that maps the source document to the target document. This transformation is fed to a rewrite process where each step produces a refined version of the transformation. Finally, the transformation is simplified before being presented to the programmer for further editing.
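
The toy sketch below illustrates only the first step described above: turning text pairings collected by an editor into an initial stylesheet. The pairing format and the generated templates are simplifications invented for illustration, not Vishnu's actual algorithm or output.

```python
# Toy illustration: generate an initial XSLT transformation from text pairings.
# Each pairing links a source element name to the target element wrapping it.
pairings = [("title", "h1"), ("paragraph", "p")]

def initial_xslt(pairings):
    templates = "\n".join(
        f'  <xsl:template match="{src}">\n'
        f'    <{dst}><xsl:apply-templates/></{dst}>\n'
        f'  </xsl:template>'
        for src, dst in pairings
    )
    return (
        '<xsl:stylesheet version="1.0" '
        'xmlns:xsl="http://www.w3.org/1999/XSL/Transform">\n'
        f"{templates}\n"
        "</xsl:stylesheet>"
    )

print(initial_xslt(pairings))
```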

Relevance:

10.00%

Publisher:

Abstract:

The content of a Learning Object is frequently characterized by metadata from several standards, such as LOM, SCORM and QTI. Specialized domains require new application profiles that further complicate the task of editing the metadata of a learning object, since their data models are not supported by existing authoring tools. To cope with this problem we designed a metadata editor supporting multiple metadata languages, each with its own data model. It is assumed that the supported languages have an XML binding, and we use RDF to create a common metadata representation, independent from the syntax of each metadata language. The combined data model supported by the editor is defined as an ontology. Thus, the process of extending the editor to support a new metadata language is twofold: firstly, the conversion from the XML binding of the metadata language to RDF and vice-versa; secondly, the extension of the ontology to cover the new metadata model. In this paper we describe the general architecture of the editor, explain how a typical metadata language for learning objects is represented as an ontology, and show how this formalization captures all the data required to generate the graphical user interface of the editor.
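
A minimal sketch of the XML-to-RDF direction is shown below: a LOM-like title bound in XML is lifted into a triple of a common representation. The subject and property URIs are placeholders, not the editor's actual ontology.

```python
# Lift an XML-bound metadata fragment (a LOM-like title) into an RDF-style triple.
# SUBJECT and TITLE_PROPERTY are placeholder URIs invented for illustration.
import xml.etree.ElementTree as ET

xml_fragment = "<lom><general><title>Binary search exercise</title></general></lom>"
root = ET.fromstring(xml_fragment)

SUBJECT = "http://example.org/lo/42"                 # hypothetical learning-object URI
TITLE_PROPERTY = "http://example.org/meta#title"     # placeholder property URI

triples = []
title = root.findtext("./general/title")
if title is not None:
    triples.append((SUBJECT, TITLE_PROPERTY, title))

for s, p, o in triples:
    print(f'<{s}> <{p}> "{o}" .')                    # N-Triples-style output
```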

Relevance:

10.00%

Publisher:

Abstract:

Recent studies of mobile Web trends show a continuous explosion of mobile-friendly content. However, the increasing number and heterogeneity of mobile devices pose several challenges for Web programmers who want to automatically obtain the delivery context and adapt the content to mobile devices. In this process, the device detection phase assumes an important role, since an inaccurate detection could result in a poor mobile experience for the end-user. In this paper we compare the most promising approaches for mobile device detection. Based on this study, we present an architecture for a system to detect and deliver uniform m-Learning content to students in a Higher School. We focus mainly on the device capabilities repository, manageable and accessible through an API. We detail the structure of the capabilities XML Schema that formalizes the data within the device capabilities XML repository and the REST Web Service API for selecting the corresponding device capabilities data according to a specific request. Finally, we validate our approach by presenting access and usage statistics of the mobile web interface of the proposed system, such as hits and new visitors, mobile platforms, average time on site and rejection rate.
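
The sketch below illustrates the kind of lookup the REST API performs: selecting device capabilities from an XML repository by matching the user agent of a request. The repository layout, attribute names and matching rule are assumptions, not the schema described in the paper.

```python
# Select device capabilities from an in-memory XML repository by user agent.
# The repository content and attribute names are invented for illustration.
import xml.etree.ElementTree as ET

REPOSITORY = ET.fromstring("""
<devices>
  <device userAgentPattern="Android" screenWidth="360" markup="html5"/>
  <device userAgentPattern="MIDP" screenWidth="176" markup="xhtml-mp"/>
</devices>
""")

def capabilities_for(user_agent: str) -> dict:
    """Return the capabilities of the first device whose pattern matches."""
    for device in REPOSITORY.iter("device"):
        if device.get("userAgentPattern") in user_agent:
            return dict(device.attrib)
    return {}   # unknown device: caller falls back to a default profile

print(capabilities_for("Mozilla/5.0 (Linux; Android 13; Pixel 7)"))
```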

Relevance:

10.00%

Publisher:

Abstract:

XSLT is a powerful and widely used language for transforming XML documents. However, its power and complexity can be overwhelming for novice or infrequent users, many of whom simply give up on using this language. On the other hand, many XSLT programs of practical use are simple enough to be automatically inferred from examples of source and target documents. An inferred XSLT program is seldom adequate for production usage but can be used as a skeleton of the final program, or at least as scaffolding in the process of coding it. It should be noted that the authors do not claim that XSLT programs, in general, can be inferred from examples. The aim of Vishnu - the XSLT generator engine described in this paper - is to produce XSLT programs for processing documents similar to the given examples and with enough readability to be easily understood by a programmer not familiar with the language. The architecture of Vishnu is composed of a graphical editor and a programming engine. In this paper we focus on the editor as a GWT web application where the programmer loads and edits document examples and pairs their content using graphical primitives. The programming engine receives the data collected by the editor and produces an XSLT program.
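
As a small illustration of the inference workflow's end result, the sketch below runs a stylesheet against a source example to check that it reproduces the intended target, using lxml (a third-party library). The stylesheet and documents are toy examples, not Vishnu's generated output.

```python
# Apply a stylesheet to a source example and print the result, using lxml.
# The documents and the single template rule are toy examples.
from lxml import etree

source = etree.fromstring("<article><title>Hello</title></article>")
stylesheet = etree.fromstring("""
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="title"><h1><xsl:apply-templates/></h1></xsl:template>
</xsl:stylesheet>
""")

transform = etree.XSLT(stylesheet)
result = transform(source)
print(etree.tostring(result, pretty_print=True).decode())
```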

Relevance:

10.00%

Publisher:

Abstract:

Several standards have appeared in recent years to formalize the metadata of learning objects, but they are still insufficient to fully describe a specialized domain. In particular, the programming exercise domain requires interdependent resources (e.g. test cases, solution programs, exercise description) usually processed by different services in the programming exercise life-cycle. Moreover, the manual creation of these resources is time-consuming and error-prone, which is an obstacle to the fast development of programming exercises of good quality. This paper focuses on the definition of an XML dialect called PExIL (Programming Exercises Interoperability Language). The aim of PExIL is to consolidate all the data required in the programming exercise life-cycle, from when the exercise is created to when it is graded, covering also the resolution, the evaluation and the feedback. We introduce the XML Schema used to formalize the relevant data of the programming exercise life-cycle. The validation of this approach is made through the evaluation of the usefulness and expressiveness of the PExIL definition. For the former, we present the tools that consume the PExIL definition to automatically generate the specialized resources. For the latter, we use the PExIL definition to capture all the constraints of a set of programming exercises stored in a learning objects repository.
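
To illustrate the idea of generating specialized resources from a single definition, the sketch below derives a plain-text exercise description from a PExIL-like fragment. The element names are invented for illustration; the real PExIL schema is defined in the paper, not here.

```python
# Generate a human-readable description from a hypothetical PExIL-like fragment.
# Element and attribute names are invented, not the published PExIL schema.
import xml.etree.ElementTree as ET

pexil_like = ET.fromstring("""
<exercise id="sum-two">
  <title>Sum of two integers</title>
  <statement>Read two integers and print their sum.</statement>
  <test input="1 2" output="3"/>
  <test input="-4 9" output="5"/>
</exercise>
""")

first_test = pexil_like.find("test")
description = (
    f"# {pexil_like.findtext('title')}\n\n"
    f"{pexil_like.findtext('statement')}\n\n"
    f"Example: input '{first_test.get('input')}' produces output '{first_test.get('output')}'."
)
print(description)
```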

Relevance:

10.00%

Publisher:

Abstract:

The concept of Learning Object (LO) is crucial for standardization in eLearning. The latest LO standard from the IMS Global Learning Consortium is the IMS Common Cartridge (IMS CC), which organizes and distributes digital learning content. By analyzing this new specification we considered two interoperability levels: content and communication. A common content format is the backbone of interoperability and is the basis for content exchange among eLearning systems. Communication is more than just exchanging content; it also includes access to specialized systems and services and reporting on content usage. This is particularly important when LOs are used for evaluation. In this paper we analyze the Common Cartridge profile based on the two interoperability levels we proposed. We detail its data model, which comprises a set of derived schemata referenced in the CC schema, and we explore the use of IMS Learning Tools Interoperability (LTI) to allow remote tools and content to be integrated into a Learning Management System (LMS). In order to test the applicability of IMS CC for automatic evaluation, we define a representation of programming exercises using this standard. This representation is intended to be the cornerstone of a network of eLearning systems where students can solve computer programming exercises and obtain feedback automatically. The CC learning object is automatically generated based on an XML dialect called PExIL, which aims to consolidate all the data needed to describe resources within the programming exercise life-cycle. Finally, we test the generated cartridge on the IMS CC online validator to verify its conformance with the IMS CC specification.
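
The sketch below only illustrates the packaging step: a zip archive holding a manifest and one resource. The manifest is a heavily simplified stand-in, not a conformant IMS CC manifest, and would not pass the IMS CC online validator as-is.

```python
# Package a cartridge-like zip archive with a (simplified, non-conformant)
# manifest and one resource file. File names and content are illustrative.
import zipfile

manifest = """<?xml version="1.0" encoding="UTF-8"?>
<manifest identifier="sum-two-cartridge">
  <resources>
    <resource identifier="exercise" href="exercise.xml"/>
  </resources>
</manifest>
"""

exercise = "<exercise><title>Sum of two integers</title></exercise>"

with zipfile.ZipFile("cartridge.zip", "w") as package:
    package.writestr("imsmanifest.xml", manifest)
    package.writestr("exercise.xml", exercise)

print("wrote cartridge.zip")
```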

Relevance:

10.00%

Publisher:

Abstract:

Recent studies of mobile Web trends show the continued explosion of mobile-friendly content. However, the large number and heterogeneity of mobile devices pose several challenges for Web programmers who want to automatically obtain the delivery context and adapt the content to mobile devices. Hence, the device detection phase assumes an important role in this process. In this chapter, the authors compare the most used approaches for mobile device detection. Based on this study, they present an architecture for detecting and delivering uniform m-Learning content to students in a Higher School. The authors focus mainly on the XML device capabilities repository and on the REST Web Service API for dealing with device data. For the former, the authors detail the respective capabilities schema and present a new caching approach. For the latter, they present an extension of the current API for dealing with device data. Finally, the authors validate their approach by presenting the overall data and statistics collected through the Google Analytics service, in order to better understand the adherence to the mobile Web interface, its evolution over time, and its main weaknesses.
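
In the spirit of the caching approach mentioned above (the chapter's actual mechanism is not reproduced here), the sketch below puts a memoizing cache in front of the capabilities lookup so that repeated requests from the same user agent skip the repository query.

```python
# Cache device-capability lookups by user agent; the repository query is a stand-in.
from functools import lru_cache

def query_repository(user_agent: str) -> dict:
    """Stand-in for the (comparatively expensive) XML repository lookup."""
    print(f"repository hit for: {user_agent}")
    return {"screenWidth": 360, "markup": "html5"}

@lru_cache(maxsize=1024)
def capabilities_for(user_agent: str) -> tuple:
    # lru_cache needs hashable values, so the dict is frozen into sorted items
    return tuple(sorted(query_repository(user_agent).items()))

ua = "Mozilla/5.0 (Linux; Android 13)"
capabilities_for(ua)   # queries the repository
capabilities_for(ua)   # served from the cache: no second "repository hit" line
```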

Relevance:

10.00%

Publisher:

Abstract:

Several standards have appeared in recent years to formalize the metadata of learning objects, but they are still insufficient to fully describe a specialized domain. In particular, the programming exercise domain requires interdependent resources (e.g. test cases, solution programs, exercise description) usually processed by different services in the programming exercise lifecycle. Moreover, the manual creation of these resources is time-consuming and error-prone, which is an obstacle to the fast development of programming exercises of good quality. This chapter focuses on the definition of an XML dialect called PExIL (Programming Exercises Interoperability Language). The aim of PExIL is to consolidate all the data required in the programming exercise lifecycle, from when the exercise is created to when it is graded, covering also the resolution, the evaluation, and the feedback. The authors introduce the XML Schema used to formalize the relevant data of the programming exercise lifecycle. The validation of this approach is made through the evaluation of the usefulness and expressiveness of the PExIL definition. For the former, the authors present the tools that consume the PExIL definition to automatically generate the specialized resources. For the latter, they use the PExIL definition to capture all the constraints of a set of programming exercises stored in a learning objects repository.
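
Since PExIL is backed by an XML Schema, instances can be checked mechanically. The sketch below validates a toy instance against a toy schema with lxml (third-party); both the schema and the instance are invented minimal examples, not the published PExIL XSD.

```python
# Validate an XML instance against an XML Schema with lxml.
# The schema and instance are invented, not the PExIL definition.
from lxml import etree

xsd = etree.XMLSchema(etree.fromstring("""
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="exercise">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="title" type="xs:string"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
"""))

instance = etree.fromstring("<exercise><title>Sum of two integers</title></exercise>")
print("valid" if xsd.validate(instance) else xsd.error_log)
```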

Relevance:

10.00%

Publisher:

Abstract:

XSLT is a powerful and widely used language for transforming XML documents. However, its power and complexity can be overwhelming for novice or infrequent users, many of whom simply give up on using this language. On the other hand, many XSLT programs of practical use are simple enough to be automatically inferred from examples of source and target documents. An inferred XSLT program is seldom adequate for production usage but can be used as a skeleton of the final program, or at least as scaffolding in the process of coding it. It should be noted that the authors do not claim that XSLT programs, in general, can be inferred from examples. The aim of Vishnu - the XSLT generator engine described in this chapter - is to produce XSLT programs for processing documents similar to the given examples and with enough readability to be easily understood by a programmer not familiar with the language. The architecture of Vishnu is composed of a graphical editor and a programming engine. In this chapter, the authors focus on the editor as a GWT Web application where the programmer loads and edits document examples and pairs their content using graphical primitives. The programming engine receives the data collected by the editor and produces an XSLT program.
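
The sketch below shows one plausible shape for the data the editor hands to the engine: the two example documents plus a list of paired locations. The field names and the XPath-based pairing representation are assumptions; Vishnu's actual interchange format is not described in the abstract.

```python
# Hypothetical editor-to-engine payload: example documents plus text pairings.
from dataclasses import dataclass, field

@dataclass
class PairingRequest:
    source_document: str                          # XML text of the source example
    target_document: str                          # XML text of the target example
    pairings: list = field(default_factory=list)  # (source XPath, target XPath) pairs

request = PairingRequest(
    source_document="<article><title>Hello</title></article>",
    target_document="<html><h1>Hello</h1></html>",
    pairings=[("/article/title", "/html/h1")],
)
print(request)
```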

Relevance:

10.00%

Publisher:

Abstract:

This paper reports on a first step towards the implementation of a framework for remote experimentation with electric machines - the RemoteLabs platform. This project focused on the development of two main modules: the Web-based user interface and the electric machines interface. The Web application provides the user with a front-end and interacts with the back-end - the user and experiment persistent data. The electric machines interface is implemented as a distributed client-server application where the clients, launched by the Web application, interact with the server modules located in platforms physically connected to the electric machine drives. Users can register and authenticate, schedule, specify and run experiments, and obtain results in the form of CSV, XML and PDF files. These functionalities were successfully tested with real data, but still without including the electric machines. This inclusion is part of another project scheduled to start soon.
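
As a small illustration of the result-export step, the sketch below writes the same sample series in two of the formats mentioned (CSV and XML). The column names and values are invented sample data; PDF export would require an extra library and is omitted.

```python
# Export sample experiment results as CSV and XML; data values are invented.
import csv
import xml.etree.ElementTree as ET

results = [
    {"time_s": "0.0", "speed_rpm": "0", "torque_Nm": "0.0"},
    {"time_s": "0.5", "speed_rpm": "1480", "torque_Nm": "2.1"},
]

with open("experiment.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["time_s", "speed_rpm", "torque_Nm"])
    writer.writeheader()
    writer.writerows(results)

root = ET.Element("experiment")
for row in results:
    ET.SubElement(root, "sample", **row)
ET.ElementTree(root).write("experiment.xml", encoding="utf-8")

print("wrote experiment.csv and experiment.xml")
```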

Relevance:

10.00%

Publisher:

Abstract:

The goal of this work is the creation of a model of the small-scale distributed generation energy market based on Web services, mobile agents and auctions. In this scenario, the market, supervised by the auctioneer, consists essentially of two types of actors: the sellers - holding a given portfolio of small energy producers, equipped with several types of generators - and the buyers - entities that distribute and sell energy, as well as large consumers. The adopted architecture, composed of static agents and mobile agents, is presented, as well as the chosen integrated development methodology. This methodology specifies an approach, supported by XML technology, that makes it possible, starting from the information about the participants, to create a common ontology for representing the domain knowledge, to automatically generate the agents that model the participants and, finally, to transform them into Web services. The buyer and seller agents participate in the market through mobile agents, to which they delegate their representation during the auction. The work, which is in progress, is currently in the phase of developing the agents/Web services.
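
The toy sketch below runs one auction round between seller and buyer agents in the spirit of the market model described. The matching rule (cheapest asks matched with highest bids, clearing at the midpoint) and the sample figures are simplifications invented for illustration; the real system relies on mobile agents and Web services rather than an in-process loop.

```python
# Toy single-round energy auction: match cheapest asks with highest bids.
sellers = [("producer-A", 50, 0.11), ("producer-B", 30, 0.09)]   # (name, kWh, ask EUR/kWh)
buyers = [("retailer-X", 60, 0.12), ("consumer-Y", 15, 0.10)]    # (name, kWh, bid EUR/kWh)

def run_auction(sellers, buyers):
    """Match supply to demand while the bid covers the ask; price at the midpoint."""
    asks = sorted(sellers, key=lambda s: s[2])                # cheapest energy first
    bids = sorted(buyers, key=lambda b: b[2], reverse=True)   # highest bid first
    trades = []
    for b_name, b_qty, b_price in bids:
        for i, (s_name, s_qty, s_price) in enumerate(asks):
            if b_qty == 0 or s_price > b_price:
                continue
            qty = min(b_qty, s_qty)
            if qty > 0:
                trades.append((s_name, b_name, qty, (s_price + b_price) / 2))
                b_qty -= qty
                asks[i] = (s_name, s_qty - qty, s_price)
    return trades

for seller, buyer, qty, price in run_auction(sellers, buyers):
    print(f"{seller} -> {buyer}: {qty} kWh at {price:.3f} EUR/kWh")
```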