12 results for spatio-temporal data model
at Instituto Politécnico do Porto, Portugal
Abstract:
The attached document is the post-print version (the version corrected by the publisher).
Abstract:
Recent years have seen the introduction of new air pollution measurement devices based on low-cost sensors. The relative simplicity of these systems makes it possible to obtain data with high temporal and spatial resolution, opening new opportunities for different air pollution monitoring methodologies. Although their analytical capabilities fall short of the reference methods, the use of these sensors has been suggested and encouraged by the European Union in the context of the indicative measurements provided for in Directive 2008/50/EC, with a maximum expanded uncertainty of 25%. The work carried out within the Project course consisted of selecting, characterizing and using in real measurements an air quality sensor, integrated into a prototype device developed for that purpose, with the aim of estimating the measurement uncertainty associated with this device by applying the methodology for demonstrating the equivalence of air quality measurement methods defined by the European Union. The literature review showed that carbon monoxide is currently the air quality parameter that can be measured most accurately with sensors, namely with the Alphasense model CO-B4 electrochemical sensor, widely used in development projects in this environmental monitoring context. The sensor was integrated into a measurement system designed to operate with an autonomous power supply and internal data acquisition, while being as small and low-cost as possible.
The system was based on an Arduino Uno board with data logging to an SD memory card, batteries and a solar panel, recording not only the sensor's electrical voltages but also temperature, relative humidity and atmospheric pressure, at a total cost of around 300 euros. In a first phase, a set of laboratory tests was carried out to determine several performance characteristics of two identical sensors: response time, the sensor model equation, repeatability, short- and long-term drift, temperature interference and hysteresis. The results showed very linear sensor behaviour, with a response time below one minute and a sensor model equation dependent on temperature variation. The estimated laboratory expanded uncertainty was below 10% for both sensors. After two field campaigns in which the measured CO concentrations were very low, a fifteen-day campaign was conducted in an underground car park, which yielded sufficiently high concentrations and allowed the sensor results to be compared with the reference method over the whole measurement range (0 to 12 μmol·mol⁻¹). The concentrations obtained by the two sensors showed an excellent correlation with the reference method (r² ≥ 0.998), and the estimated field expanded uncertainty was lower than the laboratory uncertainty, meeting the data quality objective defined for indicative measurements of a maximum expanded uncertainty of 25%. The results observed during this work confirm the good performance that this type of sensor can achieve in air pollution measurements of a more indicative character.
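The calibration workflow described above — fitting a sensor model equation with a temperature term against a reference method and checking the correlation — can be sketched in a few lines. The snippet below is only an illustration with made-up numbers; it is not the thesis's data, and the full uncertainty estimation follows the EU equivalence-demonstration methodology, not this simple fit.

```python
# Minimal sketch (not from the thesis): fitting a linear sensor model with a
# temperature term, c = a0 + a1*v + a2*T, by ordinary least squares, and
# computing r^2 against reference-method concentrations. All numbers are
# made up for illustration.
import statistics

def fit_least_squares(X, y):
    """Solve the normal equations (X^T X) b = X^T y by Gaussian elimination."""
    n, p = len(X), len(X[0])
    xtx = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(p)] for i in range(p)]
    xty = [sum(X[k][i] * y[k] for k in range(n)) for i in range(p)]
    for col in range(p):                      # elimination with partial pivoting
        pivot = max(range(col, p), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[pivot] = xtx[pivot], xtx[col]
        xty[col], xty[pivot] = xty[pivot], xty[col]
        for r in range(col + 1, p):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, p):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    b = [0.0] * p
    for r in range(p - 1, -1, -1):            # back substitution
        b[r] = (xty[r] - sum(xtx[r][c] * b[c] for c in range(r + 1, p))) / xtx[r][r]
    return b

def r_squared(y, y_hat):
    mean = statistics.fmean(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, y_hat))
    ss_tot = sum((a - mean) ** 2 for a in y)
    return 1 - ss_res / ss_tot

# Hypothetical co-located data: sensor voltage v (mV), temperature T (°C),
# reference CO concentration c (arbitrary units).
v = [120, 180, 240, 300, 360, 420]
T = [10, 12, 15, 18, 20, 22]
c = [1.0, 2.1, 3.0, 4.2, 5.1, 6.0]
X = [[1.0, vi, Ti] for vi, Ti in zip(v, T)]
a0, a1, a2 = fit_least_squares(X, c)
c_hat = [a0 + a1 * vi + a2 * Ti for vi, Ti in zip(v, T)]
print(round(r_squared(c, c_hat), 4))
```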
Abstract:
POSTDATA is a five-year European Research Council (ERC) Starting Grant project that started in May 2016 and is hosted by the Universidad Nacional de Educación a Distancia (UNED), Madrid, Spain. The context of the project is the corpora of European Poetry (EP), with a special focus on poetic materials from different languages and literary traditions. POSTDATA aims to offer a standardized model in the philological field and a metadata application profile (MAP) for EP in order to build a common classification of all these poetic materials. The information of the Spanish, Italian and French repertoires will be published in the Linked Open Data (LOD) ecosystem. Later we expect to extend the model to include additional corpora. There are a number of Web Based Information Systems (WIS) in Europe with repertoires of poems available for human consumption, but not in an appropriate condition to be accessed and reused by the Semantic Web. These systems are not interoperable; they are in fact locked in their databases and proprietary software, and cannot be linked in the Semantic Web. A way to make this data interoperable is to develop a MAP so that the data can be published in the LOD ecosystem, along with new data that will be created and modeled based on this MAP. Creating a common data model for EP is not simple, since the existing data models are based on conceptualizations and terminology belonging to their own poetical traditions, and each tradition has developed an idiosyncratic analytical terminology in a different and independent way over the years. The result of this uncoordinated evolution is a set of varied terminologies that explain analogous metrical phenomena across the different poetic systems, whose correspondences have hardly been studied – see examples in González-Blanco & Rodríguez (2014a and b). This work has to be done by domain experts before the modeling actually starts.
On the other hand, the development of a MAP is a complex task, and it is imperative to follow a method for this development. In recent years, Curado Malta & Baptista (2012, 2013a, 2013b) have been studying the development of MAPs in a Design Science Research (DSR) methodological process in order to define a method for the development of MAPs (see Curado Malta (2014)). The output of this DSR process was a first version of a method for the development of Metadata Application Profiles (Me4MAP) (paper to be published). The DSR process is now in the validation phase of the Relevance Cycle to validate Me4MAP. The development of this MAP for poetry will follow the guidelines of Me4MAP, and this development will be used to validate Me4MAP. The final goal of the POSTDATA project is: i) to publish all the data locked in the WIS as LOD, where any interested agent will be able to build applications over the data in order to serve final users; ii) to build a Web platform where: a) researchers, students and other final users interested in EP will be able to access the poems (and their analyses) of all the databases; b) researchers, students and other final users will be able to upload poems and digitized images of manuscripts, and fill in the information concerning the analysis of the poem, collaboratively contributing to a LOD dataset of poetry.
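As an illustration of what a metadata application profile buys, the sketch below maps records from two hypothetical repertoires, each using its own metrical terminology, onto one shared vocabulary and serializes them as RDF triples. The namespace, property names and term mappings are all invented for illustration; they are not the POSTDATA model.

```python
# Illustrative sketch only: the namespace, properties and term mappings are
# hypothetical, not the actual POSTDATA MAP. Records from repertoires with
# idiosyncratic terminologies are mapped to one shared vocabulary and emitted
# as RDF triples (N-Triples serialized by hand, standard library only).

# Each tradition names analogous metrical phenomena differently.
COMMON_TERMS = {
    ("es", "arte mayor"): "long-line-verse",
    ("it", "endecasillabo"): "eleven-syllable-line",
    ("fr", "alexandrin"): "twelve-syllable-line",
}

def to_ntriples(record):
    """Map one repertoire record to triples in a hypothetical common namespace."""
    base = "http://example.org/poetry/"        # placeholder namespace
    s = f"<{base}poem/{record['id']}>"
    common = COMMON_TERMS[(record["lang"], record["metre"])]
    return [
        f'{s} <{base}vocab#title> "{record["title"]}" .',
        f'{s} <{base}vocab#metre> <{base}metre/{common}> .',
    ]

spanish = {"id": "es-001", "lang": "es", "metre": "arte mayor", "title": "Laberinto de Fortuna"}
italian = {"id": "it-001", "lang": "it", "metre": "endecasillabo", "title": "Commedia"}

triples = to_ntriples(spanish) + to_ntriples(italian)
for t in triples:
    print(t)
```

Once the analogous terms are mapped to a shared vocabulary, poems from different traditions become queryable together, which is the interoperability the abstract describes.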
Abstract:
Master's dissertation presented to the Instituto de Contabilidade e Administração do Porto to obtain the degree of Master in Entrepreneurship and Internationalization, under the supervision of Maria Clara Dias Pinto Ribeiro.
Abstract:
The latest medical diagnosis devices enable e-diagnosis, making access to these services easier, faster and available in remote areas. However, this imposes new communication and data interchange challenges. In this paper, a new XML-based format for storing cardiac signals and related information is presented. The proposed structure encompasses data acquisition devices, patient information, data description, pathological diagnosis and waveform annotation. Compared with formats of similar purpose, several advantages arise. Besides the fully integrated data model, notable features include geographical references for e-diagnosis, multi-stream data description, the ability to handle several simultaneous devices, the possibility of independent waveform annotation, and an HL7-compliant structure for common contents. These features provide enhanced integration with existing systems and improved flexibility for cardiac data representation.
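A record in such a format might look as follows. This sketch is illustrative only — the element names are invented, not the schema proposed in the paper — but it touches the listed features: device description, a geographical reference for e-diagnosis, one of several possible streams, and an independent waveform annotation.

```python
# Sketch of a hypothetical cardiac record; element and attribute names are
# invented for illustration, not the format proposed in the paper.
import xml.etree.ElementTree as ET

record = ET.Element("cardiacRecord")
ET.SubElement(record, "acquisitionDevice", model="ECG-12", sampleRateHz="500")
patient = ET.SubElement(record, "patient", id="P-0042")
ET.SubElement(patient, "location", latitude="41.15", longitude="-8.61")  # geo reference for e-diagnosis
streams = ET.SubElement(record, "streams")
lead = ET.SubElement(streams, "stream", lead="II", unit="mV")
lead.text = "0.12 0.15 0.90 0.40"  # a few samples of one of several simultaneous streams
annotations = ET.SubElement(record, "annotations")
ET.SubElement(annotations, "annotation", sample="2", label="R-peak")  # independent waveform annotation

xml_text = ET.tostring(record, encoding="unicode")
print(xml_text)
```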
Abstract:
The content of a Learning Object is frequently characterized by metadata from several standards, such as LOM, SCORM and QTI. Specialized domains require new application profiles that further complicate the task of editing the metadata of a learning object, since their data models are not supported by existing authoring tools. To cope with this problem, we designed a metadata editor supporting multiple metadata languages, each with its own data model. It is assumed that the supported languages have an XML binding, and we use RDF to create a common metadata representation, independent of the syntax of each metadata language. The combined data model supported by the editor is defined as an ontology. Thus, the process of extending the editor to support a new metadata language is twofold: firstly, the conversion from the XML binding of the metadata language to RDF and vice versa; secondly, the extension of the ontology to cover the new metadata model. In this paper we describe the general architecture of the editor, explain how a typical metadata language for learning objects is represented as an ontology, and show how this formalization captures all the data required to generate the graphical user interface of the editor.
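The first half of that extension process — converting an XML binding into RDF — can be sketched as below. The element-to-property mapping is invented for illustration; in the editor described above it would be driven by the ontology.

```python
# Minimal sketch: converting an XML metadata binding into RDF-style triples.
# The element-to-property mapping below is hypothetical; a real editor would
# derive it from the ontology that defines the combined data model.
import xml.etree.ElementTree as ET

XML_TO_RDF = {  # hypothetical mapping: XML element name -> RDF property IRI
    "title": "http://example.org/lom#title",
    "language": "http://example.org/lom#language",
}

def xml_to_triples(xml_text, subject):
    """Return (subject, property, literal) triples for mapped child elements."""
    root = ET.fromstring(xml_text)
    triples = []
    for child in root:
        prop = XML_TO_RDF.get(child.tag)
        if prop is not None:
            triples.append((subject, prop, child.text))
    return triples

lom_fragment = "<general><title>Sorting algorithms</title><language>en</language></general>"
triples = xml_to_triples(lom_fragment, "http://example.org/lo/42")
print(triples)
```

The reverse direction (RDF back to the XML binding) would invert the same table, which is why the common representation can stay independent of each language's syntax.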
Abstract:
The concept of Learning Object (LO) is crucial for standardization in eLearning. The latest LO standard from the IMS Global Learning Consortium is the IMS Common Cartridge (IMS CC), which organizes and distributes digital learning content. By analyzing this new specification we identified two interoperability levels: content and communication. A common content format is the backbone of interoperability and the basis for content exchange among eLearning systems. Communication is more than just exchanging content; it also includes access to specialized systems and services and reporting on content usage. This is particularly important when LOs are used for evaluation. In this paper we analyze the Common Cartridge profile based on the two interoperability levels we propose. We detail its data model, which comprises a set of derived schemata referenced by the CC schema, and we explore the use of the IMS Learning Tools Interoperability (LTI) specification to allow remote tools and content to be integrated into a Learning Management System (LMS). In order to test the applicability of IMS CC to automatic evaluation, we define a representation of programming exercises using this standard. This representation is intended to be the cornerstone of a network of eLearning systems where students can solve computer programming exercises and obtain feedback automatically. The CC learning object is automatically generated from an XML dialect called PExIL, which aims to consolidate all the data needed to describe resources within the programming exercise life cycle. Finally, we test the generated cartridge with the IMS CC online validator to verify its conformance with the IMS CC specification.
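The packaging step can be sketched as follows. The manifest elements below only gesture at the IMS CC manifest structure (the real specification defines the schemata precisely), and the PExIL-style fields are invented; the sketch just shows how an exercise's resources end up listed in an imsmanifest.xml inside a cartridge archive.

```python
# Much-simplified sketch of packaging a programming exercise as a cartridge-
# like zip. Element names and the exercise fields are illustrative only; they
# are not the IMS CC schema nor the actual PExIL dialect.
import io
import zipfile
import xml.etree.ElementTree as ET

exercise = {  # hypothetical PExIL-style description of one exercise
    "id": "ex-sort-01",
    "statement": "statement.html",
    "tests": "tests.xml",
}

manifest = ET.Element("manifest", identifier=exercise["id"])
resources = ET.SubElement(manifest, "resources")
for name in (exercise["statement"], exercise["tests"]):
    res = ET.SubElement(resources, "resource", type="webcontent")
    ET.SubElement(res, "file", href=name)

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as cc:
    cc.writestr("imsmanifest.xml", ET.tostring(manifest, encoding="unicode"))
    cc.writestr(exercise["statement"], "<p>Sort a list of integers.</p>")
    cc.writestr(exercise["tests"], "<tests/>")

names = zipfile.ZipFile(io.BytesIO(buf.getvalue())).namelist()
print(names)
```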
Abstract:
This master's thesis addresses the maintenance of pre-computed structures, which store a frequent or expensive query, for the nested bag data type in the high-level workflow language Pig Latin. The thesis defines a model suitable for accommodating incremental expressions over nested bags in Pig Latin. Afterwards, the partitioned normal form for sets is extended with further restrictions in order to accommodate the nested bag model, to allow the Pig Latin nest and unnest operators to revert each other, and to create a suitable environment for incremental computation. Subsequently, the extended operators – extended union and extended difference – are defined for the nested bag data model under the partitioned normal form for bags (PNF Bag) restriction, and semantics for the extended operators are given. Finally, incremental data propagation expressions are proposed for the nest and unnest operators on the proposed data model with the PNF Bag restriction, and a proof of correctness is given.
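A drastically simplified reading of the extended operators (mine, not the thesis's formal definitions): model a PNF nested bag as a mapping from a flat key to a bag of nested values, so each key occurs at most once at the outer level; extended union then adds the inner bags per key, and extended difference subtracts them, which is exactly what a delta-based incremental update of a pre-computed view needs.

```python
# Toy illustration, not the thesis's semantics: a "PNF" nested bag is a dict
# from a flat key to a Counter (bag) of nested values. Extended union merges
# inner bags per key; extended difference subtracts them, dropping keys whose
# inner bag becomes empty. Applying a delta and then subtracting it again
# restores the original view, which is the incremental-maintenance property.
from collections import Counter

def extended_union(r, s):
    out = {k: Counter(v) for k, v in r.items()}
    for k, bag in s.items():
        out.setdefault(k, Counter()).update(bag)
    return out

def extended_difference(r, s):
    out = {k: Counter(v) for k, v in r.items()}
    for k, bag in s.items():
        if k in out:
            out[k].subtract(bag)
            out[k] = +out[k]          # drop non-positive multiplicities
            if not out[k]:
                del out[k]
    return out

view = {"pt": Counter({"a": 2}), "es": Counter({"b": 1})}
delta = {"pt": Counter({"a": 1, "c": 1})}
updated = extended_union(view, delta)
reverted = extended_difference(updated, delta)
print(updated, reverted)
```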
Abstract:
A novel control technique is investigated for the adaptive control of a typical paradigm, an approximately and partially modeled cart plus double pendulum system. In contrast to traditional approaches that try to build "complete" and "permanent" system models, it develops "temporal" and "partial" ones that are valid only in the actual dynamic environment of the system, that is, only within some "spatio-temporal vicinity" of the actual observations. This technique was investigated for various physical systems via "preliminary" simulations, integrating with the simplest first-order finite element approach in the time domain. In 2004, INRIA released SCILAB 3.0 and its improved numerical simulation tool Scicos, making it possible to generate "professional", "convenient" and accurate simulations. The basic principles of the adaptive control, the typical tools available in Scicos and others developed by the authors, as well as the improved simulation results and conclusions, are presented in this contribution.
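The first-order time stepping used in those preliminary simulations amounts to explicit Euler integration. The sketch below (an illustration of the integration scheme only, not the authors' Scicos models and not the adaptive controller) applies it to a single pendulum, since the cart-plus-double-pendulum dynamics are lengthy.

```python
# Explicit (first-order) Euler integration of a single pendulum, as a stand-in
# for the "simplest 1st order" time-domain stepping mentioned above. The cart
# plus double pendulum dynamics are omitted; this only shows the scheme.
import math

g, L, dt = 9.81, 1.0, 0.001      # gravity (m/s^2), length (m), time step (s)
theta, omega = 0.2, 0.0          # initial angle (rad) and angular velocity (rad/s)
for _ in range(2000):            # 2 s of simulated time
    alpha = -(g / L) * math.sin(theta)   # angular acceleration from gravity
    theta += dt * omega          # first-order update of the state
    omega += dt * alpha
print(round(theta, 3), round(omega, 3))
```

Explicit Euler slowly injects energy into an undamped oscillator, which is one reason the abstract's move to a dedicated simulation tool like Scicos mattered for accuracy.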
Abstract:
This dissertation describes the work carried out on the iCOPE project, a platform dedicated to supporting the psychotherapeutic process for people with psychotic disorders. Its conception was motivated by the need to provide a psychotherapeutic medium based on the portability of mobile devices. Development was achieved through a multidisciplinary collaboration, guided by occupational therapy specialists and by software engineering. iCOPE is a centralized system in which a patient's progress is recorded and monitored, through another application, by a designated therapist. This philosophy led to the creation of a REST-based API capable of communicating with a database. The API was built in PHP together with the Slim micro-framework. The goal of this API is not only to provide an accessible system, but also to build a scalable and extensible platform, in case new features need to be implemented in the future (future-proof). The author of this dissertation was responsible for the requirements analysis, the development of the mobile application, the collaborative development of the data model and database, and the interface of the communication API. At the end of development, a functional assessment was made by the target users, who evaluated the use and integration of the application in their treatment. Based on the results obtained, conclusions were drawn about the future development of the application and about other aspects that could be integrated in order to effectively reach more patients.
Abstract:
Post-MAPS is a web platform that collects gastroenterological exam data from several European hospital centers to be used in future clinical studies, and was developed in partnership with experts from the gastroenterological area and information technology (IT) technicians. However, although functional, this platform has some issues that are crucial to its functioning and can render user interaction unpleasant and exhausting. Accordingly, we proposed the development of a new web platform aiming at improved usability, data unification and interoperability. It was therefore necessary to identify and study different ways of acquiring clinical data and to review some of the existing clinical databases, in order to understand how they work and what type of data they store, as well as their impact and contribution to clinical knowledge. Closely linked to the data model is the ability to share data with other systems, so we also studied the concept of interoperability and analyzed some of the most widely used international standards, such as DICOM, HL7 and openEHR. As one of the primary objectives of this project was to achieve a better level of usability, practices related to Human-Computer Interaction, such as requirements analysis, creation of conceptual models, prototyping and evaluation, were also studied. Before development began, we conducted an analysis of the previous platform from a functional point of view, which allowed us to gather not only a list of architectural and interface issues but also a list of improvement opportunities. A small preliminary study was also performed to evaluate the platform's usability, which showed that perceived usability differs between users and, in some aspects, varies according to their location, age and years of experience.
Based on the information gathered during the platform's analysis and on the conclusions of the preliminary study, a new platform was developed, prepared for all potential users, from the inexperienced to the most comfortable with technology. It presents major improvements in terms of usability and provides several new features that simplify the users' work, improving their interaction with the system and making their experience more enjoyable.
Abstract:
Applications are subject to a continuous evolution process with a profound impact on their underlying data model, hence requiring frequent updates both in the applications' class structure and in the database structure. This twofold problem, schema evolution and instance adaptation, usually known as database evolution, is addressed in this thesis. Additionally, we address concurrency and error recovery problems with a novel meta-model and its aspect-oriented implementation. Modern object-oriented databases provide features that help programmers deal with object persistence, as well as related problems such as database evolution, concurrency and error handling. In most systems there are transparent mechanisms to address these problems; nonetheless, the database evolution problem still requires some human intervention, which consumes much of programmers' and database administrators' work effort. Earlier research has demonstrated that aspect-oriented programming (AOP) techniques enable the development of flexible and pluggable systems. In these earlier works, the schema evolution and instance adaptation problems were addressed as database management concerns. However, none of this research focused on orthogonal persistent systems. We argue that AOP techniques are well suited to address these problems in orthogonal persistent systems. Regarding concurrency and error recovery, earlier research showed that only syntactic obliviousness between the base program and aspects is possible. Our meta-model and framework follow an aspect-oriented approach focused on the object-oriented orthogonal persistent context. The proposed meta-model is characterized by its simplicity, in order to achieve efficient and transparent database evolution mechanisms. It supports multiple versions of a class structure by applying a class versioning strategy, thus enabling bidirectional application compatibility among versions of each class structure.
That is to say, the database structure can be updated while earlier applications continue to work, as do later applications that know only the updated class structure. The specific characteristics of orthogonal persistent systems, as well as a metadata enrichment strategy within the application's source code, complete the inception of the meta-model and motivated our research work. To test the feasibility of the approach, a prototype was developed. Our prototype is a framework that mediates the interaction between applications and the database, providing them with orthogonal persistence mechanisms. These mechanisms are introduced into applications as an aspect, in the aspect-oriented sense. Objects do not require the extension of any superclass, the implementation of an interface, or a particular annotation. Parametric type classes are also correctly handled by our framework. However, classes that belong to the programming environment must not be handled as versionable, due to restrictions imposed by the Java Virtual Machine. Regarding concurrency support, the framework provides applications with a multithreaded environment that supports database transactions and error recovery. The framework keeps applications oblivious to the database evolution problem, as well as to persistence. Programmers can update the applications' class structure, and the framework will produce a new version of it at the database metadata layer. Using our XML-based pointcut/advice constructs, the framework's instance adaptation mechanism can be extended, keeping the framework oblivious to this problem as well. The potential development gains provided by the prototype were benchmarked. In our case study, the results confirm that the mechanisms' transparency has positive repercussions on programmer productivity, simplifying the entire evolution process at both the application and database levels.
The meta-model itself was also benchmarked in terms of complexity and agility. Compared with other meta-models, it requires fewer meta-object modifications in each schema evolution step. Other types of tests were carried out in order to validate the robustness of the prototype and meta-model. For these tests we used a small OO7 database, chosen for its data model complexity. Since the developed prototype offers some features that were not observed in other known systems, performance benchmarks were not possible. However, the developed benchmark is now available for future performance comparisons with equivalent systems. In order to test our approach in a real-world scenario, we developed a proof-of-concept application. This application was developed without any persistence mechanisms; using our framework and minor changes to the application's source code, we added them. Furthermore, we tested the application in a schema evolution scenario. This real-world experience with our framework showed that applications remain oblivious to persistence and database evolution. In this case study, our framework proved to be a useful tool for programmers and database administrators. Performance issues and the single-Java-Virtual-Machine concurrency model are the major limitations found in the framework.
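The two core ideas of the meta-model — a metadata layer that keeps every version of a class structure, and an instance adaptation step between versions — can be caricatured in a few lines. The toy sketch below has none of the framework's aspect-oriented machinery; the class, fields and defaults are invented for illustration.

```python
# Toy sketch of class versioning and instance adaptation. The metadata layer
# records each version of a class structure; adapt() converts a stored object
# (a dict of fields) to the structure a given application version expects, so
# old and new applications both keep working against the same database.

# Version registry: class name -> list of field tuples, one entry per version.
SCHEMA_VERSIONS = {"Person": [("name",), ("name", "email")]}

def adapt(instance, cls, to_version, defaults=None):
    """Convert a stored instance to the field layout of the target version."""
    defaults = defaults or {}
    adapted = {}
    for field in SCHEMA_VERSIONS[cls][to_version]:
        if field in instance:
            adapted[field] = instance[field]      # field survives the evolution
        else:
            adapted[field] = defaults.get(field)  # new field: fill with a default
    return adapted                                # dropped fields simply vanish

old = {"name": "Ana"}                     # written by a version-0 application
new = adapt(old, "Person", 1, defaults={"email": "unknown"})
back = adapt(new, "Person", 0)            # a version-0 application reads it back
print(new, back)
```

In the thesis this adaptation logic is pluggable (via the XML pointcut/advice constructs) rather than a fixed default-filling rule as here.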