908 results for Semantic Web Services


Relevance:

80.00%

Publisher:

Abstract:

Every year, numerous off-road touring (Todo-Terreno Turístico, TTT) events take place across the country, during which Global Positioning System (GPS) coordinates are automatically recorded by mobile device applications. This kind of information can be used both for tourism promotion and by other entities that need to travel along these rural tracks, typically in mountainous areas. Among other attributes, the vehicle's position, speed and altitude are recorded, which makes it possible to derive relevant information, such as whether a route is passable or what speed is recommended. For example, during firefighting operations, firefighters and civil protection services could learn whether these routes can be used in operational planning, with a reduced likelihood of vehicle-access complications, thus improving response times. This document discusses how an open-source web mapping application could be designed to enable the sharing, use and valorisation of data on the off-road routes of TTT practitioners. It also describes how the application developed within the scope of the master's dissertation allows possible routes, including TTT tracks, to be selected and ranked, presenting the terrain characteristics in order to support decision-making by members of fire brigades. The current user interface of the application, which includes a dynamic map and a waypoint manager, is also presented.
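
The abstract describes selecting and ranking candidate routes from recorded GPS tracks; below is a minimal sketch of that idea, assuming hypothetical track records with position, speed and altitude fields and a toy scoring heuristic (none of these names or values come from the dissertation).

```python
from dataclasses import dataclass

@dataclass
class TrackPoint:
    lat: float
    lon: float
    altitude_m: float
    speed_kmh: float

@dataclass
class Track:
    name: str
    points: list

def passable_score(track: Track, min_speed_kmh: float = 10.0) -> float:
    """Toy heuristic: tracks where vehicles kept a reasonable speed are
    assumed easier to transit; a lower average speed lowers the score."""
    avg_speed = sum(p.speed_kmh for p in track.points) / len(track.points)
    return min(avg_speed / min_speed_kmh, 1.0)

def rank_tracks(tracks: list) -> list:
    """Order candidate routes from most to least likely passable."""
    return sorted(tracks, key=passable_score, reverse=True)

if __name__ == "__main__":
    # Placeholder demo data, not real recorded tracks.
    demo = [
        Track("ridge trail", [TrackPoint(41.1, -7.8, 900, 22.0),
                              TrackPoint(41.2, -7.9, 950, 18.0)]),
        Track("washed-out path", [TrackPoint(41.0, -7.7, 600, 4.0),
                                  TrackPoint(41.0, -7.6, 610, 6.0)]),
    ]
    for t in rank_tracks(demo):
        print(t.name, round(passable_score(t), 2))
```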

Relevance:

80.00%

Publisher:

Abstract:

The digital revolution of the 21st century helped give rise to the Internet of Things (IoT). Trillions of embedded devices using the Internet Protocol (IP), also called smart objects, will become an integral part of the Internet. To support such an extremely large address space, a new Internet Protocol, Internet Protocol version 6 (IPv6), is being adopted. IPv6 over Low-Power Wireless Personal Area Networks (6LoWPAN) has accelerated the integration of Wireless Sensor Networks (WSNs) into the Internet. At the same time, the Constrained Application Protocol (CoAP) has made it possible to provide resource-constrained devices with RESTful Web service functionality. This work builds upon previous experience with street lighting networks, for which a proprietary protocol, devised by the Lighting Living Lab, was implemented and used for several years. The proprietary protocol runs on a broad range of lighting control boards. To support heterogeneous applications with more demanding communication requirements and to improve the application development process, it was decided to port the Contiki OS to the four-channel LED driver (4LD) board from Globaltronic. This thesis describes the work done to adapt the Contiki OS to the Microchip PIC24FJ128GA308 microcontroller and presents an IP-based solution for integrating sensors and actuators in smart lighting applications. Besides detailing the system's architecture and implementation, the thesis presents multiple results showing that the performance of CoAP-based resource retrieval on constrained nodes is adequate for supporting networking services in street lighting networks.
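
For illustration, a CoAP resource on such a constrained node could be retrieved from a client as sketched below, assuming the aiocoap Python library and a purely hypothetical node address and /sensors/lux resource (the thesis does not specify this endpoint).

```python
import asyncio
from aiocoap import Context, Message, GET

async def main():
    # Create a CoAP client context and issue a GET to a hypothetical node.
    context = await Context.create_client_context()
    request = Message(code=GET, uri="coap://[fd00::1]/sensors/lux")
    response = await context.request(request).response
    print(response.code, response.payload.decode())

if __name__ == "__main__":
    asyncio.run(main())
```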

Relevance:

80.00%

Publisher:

Abstract:

This article describes a model for implementing dynamic business platforms based exclusively on XML technologies. It presents a real-world case that implements its various layers, presentation, logic and data, in XML. Communication between layers is ensured by Web services (WS), making the architecture service-oriented (SOA). The proposed model bases the entire programming of the business process on a high-level language, WS-BPEL, thereby providing the conditions to adapt to the dynamism demanded by the organisation's business and to the heterogeneity of its systems. The real-world case is an electronic registrar's office that arises in the context of university Web portals. The developed system aims to offer a set of services for accessing information and for triggering computational and/or human actions.
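
As a minimal sketch of how a client might invoke one of the orchestrated Web services, the snippet below uses the zeep SOAP library; the WSDL URL, operation name and parameters are invented placeholders, not the article's actual interfaces.

```python
from zeep import Client

# Hypothetical WSDL of one of the registrar's Web services (placeholder URL).
WSDL = "https://example-university.pt/secretaria/EnrollmentService?wsdl"

def request_certificate(student_id: str, course: str) -> str:
    """Call a (hypothetical) BPEL-orchestrated operation over SOAP."""
    client = Client(WSDL)
    # Operation name and parameters are illustrative assumptions only.
    return client.service.RequestCertificate(studentId=student_id, course=course)

if __name__ == "__main__":
    print(request_certificate("a12345", "Informatics"))
```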

Relevance:

80.00%

Publisher:

Abstract:

Like many other higher education schools, ISCAP's student population grew by almost 100% at the end of the twentieth century. Its administrative structures were reinforced, but not in the same proportion. Faced with the inability to resolve the problem, the administration decided to implement a computer-based system available on the Internet. In a first stage, in 1997, the system was implemented as a support for services. The next stage, in 1999, aimed to increase the quality of student services. A project was then started with the goal of making student services available on the Internet.

Relevance:

80.00%

Publisher:

Abstract:

This article discusses the application of Information and Communication Technologies and best-practice strategies to capture and maintain students' attention. It is based on a ten-year case study of a complete information system. This system, besides being considered an ERP supporting academic management activities, also has a strong SRM component that supports academic and administrative activities. The article describes the extent to which the presented system facilitates interaction and communication between members of the academic community over the Internet, with services available on the Web complemented by email, SMS and CTI. Through a perception backed by empirical analysis and the results of investigations, it demonstrates how this type of practice may raise the community's level of satisfaction. In particular, it becomes possible to combat academic failure, to prevent students from abandoning their course before completion, and to encourage them to recommend the course to prospective students. In addition, such a strategy also allows strong economies in the management of the institution, increasing its value. As future work, we present the new phase of the project, towards implementing Business Intelligence to optimise the management process and make it proactive. The technological vision that guides new developments towards a construction based on Web services and procedural languages is also presented.

Relevance:

80.00%

Publisher:

Abstract:

Edge-labeled graphs have proliferated rapidly over the last decade due to the increased popularity of social networks and the Semantic Web. In social networks, relationships between people are represented by edges, and each edge is labeled with a semantic annotation. Hence, a single huge graph can express many different relationships between entities. The Semantic Web represents each single fragment of knowledge as a triple (subject, predicate, object), which is conceptually identical to an edge from subject to object labeled with the predicate. A set of triples constitutes an edge-labeled graph on which knowledge inference is performed. Subgraph matching has been extensively used as a query language for patterns in the context of edge-labeled graphs. For example, in social networks, users can specify a subgraph matching query to find all people that have certain neighborhood relationships. Heavily used fragments of the SPARQL query language for the Semantic Web, as well as graph queries in other graph DBMSs, can also be viewed as subgraph matching over large graphs. Though subgraph matching has been extensively studied as a query paradigm in the Semantic Web and in social networks, a user can get a large number of answers in response to a query. These answers can be shown to the user in accordance with an importance ranking. In this thesis proposal, we present four different scoring models along with scalable algorithms to find the top-k answers via a suite of intelligent pruning techniques. The suggested models cover a practically important subset of the SPARQL query language augmented with some additional useful features. The first model, called Substitution Importance Query (SIQ), identifies the top-k answers whose scores are calculated from matched vertices' properties in each answer in accordance with a user-specified notion of importance. The second model, called Vertex Importance Query (VIQ), identifies important vertices in accordance with a user-defined scoring method that builds on top of various subgraphs articulated by the user. Approximate Importance Query (AIQ), our third model, allows partial and inexact matchings and returns the top-k of them under user-specified approximation terms and scoring functions. In the fourth model, called Probabilistic Importance Query (PIQ), a query consists of several sub-blocks: one mandatory block that must be mapped and other blocks that can be opportunistically mapped. The probability is calculated from various aspects of the answers, such as the number of mapped blocks and the vertices' properties in each block, and the top-k most probable answers are returned. An important distinguishing feature of our work is that we allow the user a huge amount of freedom in specifying: (i) what pattern and approximation he considers important, (ii) how to score answers, irrespective of whether they are vertices or substitutions, and (iii) how to combine and aggregate scores generated by multiple patterns and/or multiple substitutions. Because so much power is given to the user, indexing is more challenging than in situations where additional restrictions are imposed on the queries the user can ask. The proposed algorithms for the first model can also be used to answer SPARQL queries with ORDER BY and LIMIT, and the method for the second model also works for SPARQL queries with GROUP BY, ORDER BY and LIMIT. We test our algorithms on multiple real-world graph databases, showing that they are far more efficient than popular triple stores.
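
Since the proposal notes that the SIQ algorithms also answer SPARQL queries with ORDER BY and LIMIT, here is a minimal sketch of such a top-k query using rdflib; the graph file, namespace and predicates are invented for illustration and are not the thesis's benchmark data.

```python
from rdflib import Graph

# Load a hypothetical edge-labeled graph serialized as Turtle.
g = Graph()
g.parse("social.ttl", format="turtle")

# Top-5 people ranked by a (hypothetical) importance score, in the spirit
# of a top-k subgraph matching query expressed with ORDER BY and LIMIT.
query = """
PREFIX ex: <http://example.org/>
SELECT ?person ?score WHERE {
    ?person ex:knows ?friend ;
            ex:importance ?score .
}
ORDER BY DESC(?score)
LIMIT 5
"""

for person, score in g.query(query):
    print(person, score)
```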

Relevance:

80.00%

Publisher:

Abstract:

The continuous flow of technological developments in the communications and electronics industries has led to the growing expansion of the Internet of Things (IoT). By leveraging the capabilities of smart networked devices and integrating them into existing industrial, leisure and communication applications, the IoT is expected to positively impact both the economy and society, reducing the gap between the physical and digital worlds. Therefore, several efforts have been dedicated to the development of networking solutions addressing the diversity of challenges associated with such a vision. In this context, the integration of Information Centric Networking (ICN) concepts into the core of the IoT is a research area gaining momentum and involving both research and industry actors. The massive number of heterogeneous devices, as well as the data they produce, is a significant challenge for the wide-scale adoption of the IoT. In this paper we propose a service discovery mechanism, based on Named Data Networking (NDN), that leverages a semantic matching mechanism to achieve a flexible discovery process. The development of appropriate service discovery mechanisms enriched with semantic capabilities for understanding and processing context information is a key feature for turning raw data into useful knowledge and ensuring interoperability among different devices and applications. We assessed the performance of our solution through the implementation and deployment of a proof-of-concept prototype. The obtained results illustrate the potential of integrating semantic and ICN mechanisms to enable flexible service discovery in IoT scenarios.
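
The discovery process relies on semantic matching of service descriptions; the sketch below illustrates the general idea with a toy concept hierarchy and a similarity threshold. The hierarchy, names and scoring are assumptions for illustration, not the paper's actual matching mechanism.

```python
# Toy concept hierarchy: child concept -> parent concept.
HIERARCHY = {
    "temperature-sensor": "sensor",
    "humidity-sensor": "sensor",
    "sensor": "device",
}

def ancestors(concept):
    """Return the concept followed by all of its ancestors."""
    chain = [concept]
    while concept in HIERARCHY:
        concept = HIERARCHY[concept]
        chain.append(concept)
    return chain

def semantic_match(requested, offered):
    """Score 1.0 for an exact match, decreasing with hierarchical distance."""
    if requested == offered:
        return 1.0
    anc = ancestors(offered)
    if requested in anc:
        return 1.0 / anc.index(requested)  # closer ancestor -> higher score
    return 0.0

def discover(requested, services, threshold=0.5):
    """Return registered service names whose concept matches well enough."""
    return [name for name, concept in services.items()
            if semantic_match(requested, concept) >= threshold]

registry = {"/home/livingroom/temp": "temperature-sensor",
            "/home/kitchen/hum": "humidity-sensor"}
print(discover("sensor", registry))
```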

Relevance:

80.00%

Publisher:

Abstract:

This thesis presents an interoperable architecture designed for obtaining, manipulating, processing and analysing geographical information. The three-dimensional (3D) interface, implemented as part of the architecture, besides allowing the visualization and manipulation of spatial data within a 3D environment, offers methods for discovering, accessing and using geo-processes available through Web services. Furthermore, user interaction follows an approach that breaks the typical complexity of most Geographic Information Systems. This simplicity is generally achieved through a visual programming approach that allows operators to take advantage of location and to use processes through abstract representations. Processing units are represented on the terrain as 3D components, which can be directly manipulated and linked to create complex process chains. New processes can also be created visually and deployed online.
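
The architecture chains geo-processes exposed as Web services; below is a minimal sketch of invoking and chaining two such processes over HTTP. The endpoints, process names and parameters are entirely hypothetical and are not the interfaces defined in the thesis.

```python
import requests

BASE = "https://example.org/geoprocesses"  # placeholder service root

def run_process(name: str, **params) -> dict:
    """Invoke a (hypothetical) geo-process exposed as a Web service."""
    response = requests.post(f"{BASE}/{name}", json=params, timeout=30)
    response.raise_for_status()
    return response.json()

# Chain two processes: derive a slope raster, then clip it to an area of
# interest, mirroring the visual chaining described in the abstract.
slope = run_process("slope", dem="srtm_tile_42.tif")
clipped = run_process("clip", raster=slope["result"],
                      bbox=[-8.0, 38.5, -7.5, 39.0])
print(clipped["result"])
```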

Relevance:

80.00%

Publisher:

Abstract:

Background: Understanding transcriptional regulation by genome-wide microarray studies can contribute to unravel complex relationships between genes. Attempts to standardize the annotation of microarray data include the Minimum Information About a Microarray Experiment (MIAME) recommendations, the MAGE-ML format for data interchange, and the use of controlled vocabularies or ontologies. The existing software systems for microarray data analysis implement the mentioned standards only partially and are often hard to use and extend. Integration of genomic annotation data and other sources of external knowledge using open standards is therefore a key requirement for future integrated analysis systems. Results: The EMMA 2 software has been designed to resolve shortcomings with respect to full MAGE-ML and ontology support and makes use of modern data integration techniques. We present a software system that features comprehensive data analysis functions for spotted arrays, and for the most common synthesized oligo arrays such as Agilent, Affymetrix and NimbleGen. The system is based on the full MAGE object model. Analysis functionality is based on R and Bioconductor packages and can make use of a compute cluster for distributed services. Conclusion: Our model-driven approach for automatically implementing a full MAGE object model provides high flexibility and compatibility. Data integration via SOAP-based web-services is advantageous in a distributed client-server environment as the collaborative analysis of microarray data is gaining more and more relevance in international research consortia. The adequacy of the EMMA 2 software design and implementation has been proven by its application in many distributed functional genomics projects. Its scalability makes the current architecture suited for extensions towards future transcriptomics methods based on high-throughput sequencing approaches which have much higher computational requirements than microarrays.
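
EMMA 2 exchanges experiment descriptions in the XML-based MAGE-ML format; the sketch below parses a deliberately simplified, invented XML fragment just to illustrate extracting annotations from such an interchange document. It does not reflect the actual MAGE-ML schema.

```python
import xml.etree.ElementTree as ET

# Simplified, invented fragment; the real MAGE-ML schema is far richer.
DOC = """
<Experiment name="heat-shock-timecourse">
  <BioAssay name="t0_replicate1" array="Agilent-014850"/>
  <BioAssay name="t30_replicate1" array="Agilent-014850"/>
</Experiment>
"""

root = ET.fromstring(DOC)
print("Experiment:", root.get("name"))
for assay in root.findall("BioAssay"):
    # Collect the per-assay annotations needed by downstream analysis.
    print(" assay:", assay.get("name"), "array:", assay.get("array"))
```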

Relevance:

80.00%

Publisher:

Abstract:

The new generation of the Web, the Semantic Web, offers potential opportunities for giving meaning to Web content. Ontologies are one of the main tools for explicitly specifying the concepts of a particular domain, their properties and their relationships, so that information is published in formats that can be automatically understood by machine agents, which can then locate and manage the information precisely. This paper presents a framework for an ontology network that represents concepts, attributes, operations and constraints related to the curricular items used in the national categorization processes for Ecuadorian university lecturers. The first part presents the domain context and related work; the process followed and the abstraction of the ontological model are then described, and finally an ontology is presented. It is a domain ontology, since it provides the meaning of the concepts and their relationships within the domain of curricular items produced by university lecturers, which are requirements of the university teaching categorization process in Ecuador.
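
As an illustration of how such domain concepts can be specified explicitly, the sketch below declares two classes and a property with rdflib; the class names and namespace are invented for the example and are not the ontology described in the paper.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS, OWL

EX = Namespace("http://example.org/curriculum#")  # placeholder namespace
g = Graph()
g.bind("ex", EX)

# Two invented domain classes and a relation between them.
g.add((EX.CurricularItem, RDF.type, OWL.Class))
g.add((EX.Lecturer, RDF.type, OWL.Class))
g.add((EX.producedBy, RDF.type, OWL.ObjectProperty))
g.add((EX.producedBy, RDFS.domain, EX.CurricularItem))
g.add((EX.producedBy, RDFS.range, EX.Lecturer))
g.add((EX.CurricularItem, RDFS.label, Literal("Curricular item", lang="en")))

print(g.serialize(format="turtle"))
```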

Relevance:

80.00%

Publisher:

Abstract:

This paper proposes a framework based on Semantic Web technologies for detecting potential collaboration networks through the semantic enrichment of scientific articles produced by researchers who publish with Ecuadorian affiliations. The framework is described through a linked data publication cycle. The scope was limited to publications with at least one author holding an Ecuadorian affiliation. The detected collaboration networks are an important input for strengthening the efforts of the Ecuadorian government and the country's university authorities, for prioritizing the effort and resources invested in research, and for determining the relevance or coherence of research programmes.
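
A minimal sketch of the collaboration-detection idea: given article records already enriched with author affiliations, extract weighted co-authorship pairs involving at least one Ecuadorian affiliation. The record format and the sample entries are invented placeholders, not the framework's data model.

```python
from itertools import combinations
from collections import Counter

# Invented, already-enriched article records: (author, affiliation country).
articles = [
    [("A. Perez", "EC"), ("L. Silva", "EC"), ("M. Rossi", "IT")],
    [("A. Perez", "EC"), ("M. Rossi", "IT")],
    [("J. Smith", "US"), ("K. Lee", "KR")],
]

pairs = Counter()
for authors in articles:
    if not any(country == "EC" for _, country in authors):
        continue  # scope: at least one Ecuadorian affiliation
    for (a, _), (b, _) in combinations(sorted(authors), 2):
        pairs[(a, b)] += 1  # edge weight = number of joint papers

for (a, b), weight in pairs.most_common():
    print(f"{a} -- {b}: {weight}")
```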

Relevance:

80.00%

Publisher:

Abstract:

POSTDATA is a five-year European Research Council (ERC) Starting Grant project that started in May 2016 and is hosted by the Universidad Nacional de Educación a Distancia (UNED), Madrid, Spain. The context of the project is the corpora of European Poetry (EP), with a special focus on poetic materials from different languages and literary traditions. POSTDATA aims to offer a standardized model in the philological field and a metadata application profile (MAP) for EP in order to build a common classification of all these poetic materials. The information of the Spanish, Italian and French repertoires will be published in the Linked Open Data (LOD) ecosystem. Later we expect to extend the model to include additional corpora. There are a number of Web-based Information Systems (WIS) in Europe with repertoires of poems available for human consumption but not in an appropriate condition to be accessible and reusable by the Semantic Web. These systems are not interoperable; they are in fact locked in their databases and proprietary software, not suitable to be linked in the Semantic Web. A way to make this data interoperable is to develop a MAP so that this data can be published in the LOD ecosystem, and also to publish new data that will be created and modeled based on this MAP. Creating a common data model for EP is not simple, since the existing data models are based on conceptualizations and terminology belonging to their own poetical traditions, and each tradition has developed an idiosyncratic analytical terminology in a different and independent way over the years. The result of this uncoordinated evolution is a set of varied terminologies to explain analogous metrical phenomena across the different poetic systems, whose correspondences have hardly been studied; see examples in González-Blanco & Rodríguez (2014a and b). This work has to be done by domain experts before the modeling actually starts. On the other hand, the development of a MAP is a complex task, and it is imperative to follow a method for this development. In recent years, Curado Malta & Baptista (2012, 2013a, 2013b) have been studying the development of MAPs in a Design Science Research (DSR) methodological process in order to define a method for the development of MAPs (see Curado Malta (2014)). The output of this DSR process was a first version of a method for the development of Metadata Application Profiles (Me4MAP) (paper to be published). The DSR process is now in the validation phase of the Relevance Cycle to validate Me4MAP. The development of this MAP for poetry will follow the guidelines of Me4MAP, and this development will be used to validate Me4MAP. The final goal of the POSTDATA project is: i) to be able to publish all the data locked in the WIS as LOD, where any interested agent will be able to build applications over the data in order to serve final users; ii) to build a Web platform where: a) researchers, students and other final users interested in EP will be able to access the poems (and their analyses) of all the databases; b) researchers, students and other final users will be able to upload poems and the digitized images of manuscripts, and fill in the information concerning the analysis of the poem, collaboratively contributing to a LOD dataset of poetry.

Relevance:

80.00%

Publisher:

Abstract:

Today, biodiversity is endangered by the intensive farming methods currently imposed on food producers by intermediate actors (e.g., retailers). The lack of a direct communication technology between the food producer and the consumer creates dependency on the intermediate actors for both producers and consumers. A tool allowing producers to market produce that meets customer demands directly and efficiently could greatly reduce the dependency enforced by intermediate actors. To this end, in this thesis we propose, develop, implement and validate a Real Time Context Sharing (RCOS) system. RCOS takes advantage of the widely used publish/subscribe paradigm to exchange messages directly between producers and consumers, according to their interest and context. Current systems follow a topic-based model or a content-based model. With RCOS, we introduce context awareness into the matching process of the publish/subscribe paradigm. Finally, as a proof of concept, we extend the Apache ActiveMQ Artemis software and create a client prototype. We evaluate our proof of concept for larger-scale deployment.
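
The sketch below illustrates, in plain Python, the general idea of adding a context check to publish/subscribe matching; the predicate format, field names and sample messages are invented for illustration and do not reflect RCOS's actual matching rules or the Artemis extension.

```python
def matches(subscription: dict, message: dict) -> bool:
    """Deliver a message only if the topic matches AND the subscriber's
    context predicate holds for the message's context attributes."""
    if subscription["topic"] != message["topic"]:
        return False
    return all(message["context"].get(key) == value
               for key, value in subscription["context"].items())

# Invented example: a consumer interested in organic tomatoes from Centro.
subscription = {"topic": "produce/tomatoes",
                "context": {"organic": True, "region": "Centro"}}

messages = [
    {"topic": "produce/tomatoes",
     "context": {"organic": True, "region": "Centro"},
     "body": "20 kg available this week"},
    {"topic": "produce/tomatoes",
     "context": {"organic": False, "region": "Norte"},
     "body": "bulk lot"},
]

for msg in messages:
    if matches(subscription, msg):
        print("deliver:", msg["body"])
```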

Relevance:

80.00%

Publisher:

Abstract:

The use of cloud computing is currently increasing, and many providers offer services that make use of this technology. One of them is Amazon Web Services, which, through its Amazon EC2 service, offers different instance types that can be used according to our needs. The AWS business model is pay-per-use, that is, we only pay for the time during which the instances are used. In this work, an application is deployed on Amazon EC2 whose goal is to extract, from different information sources, the sales data of Spanish publishers and bookshops. These data are processed, loaded into a database and used to generate statistical reports that help clients make better decisions. Because the application processes a large amount of data, we propose the development and validation of a model for obtaining an optimal execution on Amazon EC2. This model takes into account the execution time, the cost per use and a cost/performance metric. Additionally, Docker container technology is used to carry out a specific case of the application's deployment.
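
A minimal sketch of the kind of cost/performance comparison such a model can perform is shown below; the instance names, hourly prices, run times and the particular cost-times-time metric are placeholder assumptions, not the model or measurements from the thesis.

```python
# Placeholder measurements: (hourly price in USD, measured run time in hours).
runs = {
    "type-a": (0.10, 4.0),
    "type-b": (0.20, 2.5),
    "type-c": (0.40, 1.2),
}

def cost(price_per_hour: float, hours: float) -> float:
    """Pay-per-use cost of one execution."""
    return price_per_hour * hours

def cost_performance(price_per_hour: float, hours: float) -> float:
    """One possible metric: cost multiplied by run time (lower is better)."""
    return cost(price_per_hour, hours) * hours

best = min(runs, key=lambda name: cost_performance(*runs[name]))
for name, (price, hours) in runs.items():
    print(name, "cost:", round(cost(price, hours), 2),
          "cost/perf:", round(cost_performance(price, hours), 2))
print("best cost/performance:", best)
```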

Relevance:

80.00%

Publisher:

Abstract:

Part 21: Mobility and Logistics