483 results for grafana,SEPA,Plugin,RDF,SPARQL


Relevance: 10.00%

Abstract:

Monograph entitled 'Didácticas específicas'

Relevance: 10.00%

Abstract:

Abstract based on that of the publication

Relevance: 10.00%

Abstract:

Abstract based on that of the publication

Relevance: 10.00%

Abstract:

Abstract based on that of the publication

Relevance: 10.00%

Abstract:

The use of smartphones is becoming widespread. We can see and enjoy more and more of these devices, and the Android operating system is growing so fast that it looks like an ideal platform for developing mobile GIS applications. These new devices have very interesting hardware: they come equipped with sensors such as GPS, accelerometers, a camera and a compass, which makes them well suited both as augmented-reality browsers and for map visualization. There are two very interesting open-source projects implemented on Android: libregeosocial, for working with augmented reality, and gvSIG mini, a map viewer capable of consuming OGC services. Using these two libraries, a plugin for gvSIG mini was developed that consumes WFS services (served from a Geoserver instance) as a data source for augmented-reality services. The aim of the project is to use generic, open tools and OGC standards to carry out advanced tasks.
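
As a rough illustration of the workflow this abstract describes, the sketch below queries a GeoServer WFS endpoint for GeoJSON features and turns them into simple placemarks for an augmented-reality overlay; the endpoint URL, layer name and attribute names are assumptions, not the project's actual configuration.

```python
# Minimal sketch: fetch features from a (hypothetical) GeoServer WFS endpoint
# and turn them into simple placemarks for an AR overlay.
import requests

WFS_URL = "http://example.org/geoserver/wfs"  # assumed GeoServer instance

def fetch_placemarks(layer="demo:points_of_interest", max_features=50):
    params = {
        "service": "WFS",
        "version": "1.1.0",
        "request": "GetFeature",
        "typeName": layer,
        "outputFormat": "application/json",  # ask for GeoJSON
        "maxFeatures": max_features,
    }
    response = requests.get(WFS_URL, params=params, timeout=30)
    response.raise_for_status()
    placemarks = []
    for feature in response.json().get("features", []):
        lon, lat = feature["geometry"]["coordinates"][:2]  # assumes point geometries
        name = feature.get("properties", {}).get("name", "unnamed")
        placemarks.append({"name": name, "lat": lat, "lon": lon})
    return placemarks

if __name__ == "__main__":
    for placemark in fetch_placemarks():
        print(placemark)
```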

Relevance: 10.00%

Abstract:

An ozonesonde profile over the Network for Detection of Stratospheric Change (NDSC) site at Lauder (45.0° S, 169.7° E), New Zealand, for 24 December 1998 showed atypically low ozone centered around 24 km altitude (600 K potential temperature). The origin of the anomaly is explained using reverse domain filling (RDF) calculations combined with a PV/O3 fitting technique applied to ozone measurements from the Polar Ozone and Aerosol Measurement (POAM) III instrument. The RDF calculations for two isentropic surfaces, 550 and 600 K, show that ozone-poor air from the Antarctic polar vortex reached New Zealand on 24–26 December 1998. The vortex air on the 550 K isentrope originated in the ozone hole region, unlike the air on 600 K where low ozone values were caused by dynamical effects. High-resolution ozone maps were generated, and their examination shows that a vortex remnant situated above New Zealand was the cause of the altered ozone profile on 24 December. The maps also illustrate mixing of the vortex filaments into southern midlatitudes, whereby the overall mid-latitude ozone levels were decreased.

Relevance: 10.00%

Abstract:

Direct marketing is becoming more and more important in today's society. Some of it is done through e-mail, which companies see as an easy way to advertise themselves. I did this thesis work at WebDoc Systems. They have a product that creates web documents directly in the browser, also called a CMS. The CMS has a module for sending mass e-mail, but this module does not function properly and WebDoc Systems' customers are dissatisfied with that part of the product. The problems with the module were that it sometimes failed to send the e-mail, and that it was not possible to obtain any follow-up information about the e-mail. The goal of this work was to develop a web service that can easily send e-mail to many recipients and just as easily show statistics on how a mailing has gone. The first step was a literature review to get a good picture of the available programming platforms, and also to be able to create a good application infrastructure. The next step was to implement this design and improve it over time using an iterative development methodology. The result was an application infrastructure that consists of three main parts and a plugin interface. The parts that were implemented were a web service application, a web application, and a Windows service application. The three parts cooperate with each other and share a database and plugins.
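
The thesis does not name its implementation platform, so the Python sketch below only illustrates the kind of plugin interface and follow-up statistics the three applications could share; all class and field names are hypothetical.

```python
# Illustrative sketch only: a hypothetical plugin contract for the mass-mailing
# service, plus the follow-up report each plugin returns.
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from typing import List

@dataclass
class MailingReport:
    """Follow-up information for one mass mailing (hypothetical fields)."""
    sent: int = 0
    failed: int = 0
    errors: List[str] = field(default_factory=list)

class MailingPlugin(ABC):
    """Contract that each sending plugin would fulfil."""

    @abstractmethod
    def send(self, recipients: List[str], subject: str, body: str) -> MailingReport:
        """Send the message to all recipients and report what happened."""

class LoggingPlugin(MailingPlugin):
    """Toy plugin that only records what it would send."""

    def send(self, recipients, subject, body):
        report = MailingReport()
        for address in recipients:
            print(f"would send '{subject}' to {address}")
            report.sent += 1
        return report
```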

Relevance: 10.00%

Abstract:

The Semantic Web is a concept concerned with making data available in a way that allows computers to search, interpret and put data into context. Since much data storage today takes place in relational databases, new ways of transforming and storing data are needed to make it available to the Semantic Web. Research has shown that transforming data from relational databases to RDF, the format that makes data searchable on the Semantic Web, is possible, but there is currently no standard for how this should be done. For transformed data to acquire the correct meaning in RDF, ontologies are needed that describe the relations between concepts. The Swedish national road database, NVDB (Nationella vägdatabasen), is a relational database that manages geospatial data used in various geographic information systems (GIS). For the partner company Triona it was of interest to describe how this type of data can be transformed to suit the Semantic Web. The purpose was to analyse how geospatial data is transferred from a relational database to the Semantic Web. The goal of the study was to create a model for how geospatial data in a relational database is transferred to an RDF store, and how to create an ontology suited to NVDB's data and data structure. A case study was carried out with document studies based on an initial literature review. An ontology was created for the specific case, and from this a model was created for how geospatial data is transferred from NVDB to RDF via the TripleGeo software. The analysis was done by examining the transformed data against existing theory about RDF and its structure, and then comparing to verify that the data acquires the correct meaning. The result was also validated using the W3C service for validating RDF. The result shows how data from a relational database with geospatial data is transformed to RDF, and how an ontology for this was created. The result also presents a model describing how this is carried out, which can be seen as an attempt to generalize and standardize a method for transferring geospatial data to RDF.
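
A minimal rdflib sketch of the relational-to-RDF step described above (it is not the TripleGeo configuration used in the study); the namespace, ontology terms and the example road-link row are assumptions.

```python
# Minimal sketch: turn one (hypothetical) NVDB road-link row into RDF triples
# with a GeoSPARQL WKT geometry.
from rdflib import Graph, Literal, Namespace, RDF, URIRef
from rdflib.namespace import RDFS

EX = Namespace("http://example.org/nvdb/")                 # assumed ontology namespace
GEO = Namespace("http://www.opengis.net/ont/geosparql#")   # GeoSPARQL vocabulary

def road_link_to_rdf(link_id, name, wkt):
    """Build a small graph for one road link."""
    g = Graph()
    g.bind("ex", EX)
    g.bind("geo", GEO)

    link = URIRef(EX[f"roadlink/{link_id}"])
    geom = URIRef(EX[f"roadlink/{link_id}/geometry"])

    g.add((link, RDF.type, EX.RoadLink))
    g.add((link, RDFS.label, Literal(name, lang="sv")))
    g.add((link, GEO.hasGeometry, geom))
    g.add((geom, RDF.type, GEO.Geometry))
    g.add((geom, GEO.asWKT, Literal(wkt, datatype=GEO.wktLiteral)))
    return g

if __name__ == "__main__":
    graph = road_link_to_rdf(42, "Exempelgatan", "LINESTRING(14.15 57.78, 14.16 57.79)")
    print(graph.serialize(format="turtle"))
```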

Relevance: 10.00%

Abstract:

The authors take a broad view that ultimately Grid- or Web-services must be located via personalised, semantic-rich discovery processes. They argue that such processes must rely on the storage of arbitrary metadata about services that originates from both service providers and service users. Examples of such metadata are reliability metrics, quality of service data, or semantic service description markup. This paper presents UDDI-MT, an extension to the standard UDDI service directory approach that supports the storage of such metadata via a tunnelling technique that ties the metadata store to the original UDDI directory. They also discuss the use of a rich, graph-based RDF query language for syntactic queries on this data. Finally, they analyse the performance of each of these contributions in their implementation.
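
The paper refers to a graph-based RDF query language for the metadata store; in the sketch below SPARQL (via rdflib) stands in for it, and the metadata vocabulary and the example service entry are assumptions.

```python
# Illustrative sketch: query user-supplied service metadata stored as RDF.
from rdflib import Graph, Literal, Namespace, RDF, URIRef
from rdflib.namespace import XSD

MT = Namespace("http://example.org/uddi-mt/")   # assumed metadata vocabulary

g = Graph()
svc = URIRef("http://example.org/services/weather")
g.add((svc, RDF.type, MT.Service))
g.add((svc, MT.reliability, Literal(0.97, datatype=XSD.decimal)))
g.add((svc, MT.ratedBy, Literal("user42")))

# Find services whose user-supplied reliability metric exceeds a threshold.
query = """
PREFIX mt: <http://example.org/uddi-mt/>
SELECT ?service ?score
WHERE {
  ?service a mt:Service ;
           mt:reliability ?score .
  FILTER (?score > 0.9)
}
"""
for row in g.query(query):
    print(row.service, row.score)
```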

Relevance: 10.00%

Abstract:

Interoperability of water quality data depends on the use of common models, schemas and vocabularies. However, terms are usually collected during different activities and projects in isolation of one another, resulting in vocabularies that have the same scope being represented with different terms, using different formats and formalisms, and published in various access methods. Significantly, most water quality vocabularies conflate multiple concepts in a single term, e.g. quantity kind, units of measure, substance or taxon, medium and procedure. This bundles information associated with separate elements from the OGC Observations and Measurements (O&M) model into a single slot. We have developed a water quality vocabulary, formalized using RDF, and published as Linked Data. The terms were extracted from existing water quality vocabularies. The observable property model is inspired by O&M but aligned with existing ontologies. The core is an OWL ontology that extends the QUDT ontology for Unit and QuantityKind definitions. We add classes to generalize the QuantityKind model, and properties for explicit description of the conflated concepts. The key elements are defined to be sub-classes or sub-properties of SKOS elements, which enables a SKOS view to be published through standard vocabulary APIs, alongside the full view. QUDT terms are re-used where possible, supplemented with additional Unit and QuantityKind entries required for water quality. Along with items from separate vocabularies developed for objects, media, and procedures, these are linked into definitions in the actual observable property vocabulary. Definitions of objects related to chemical substances are linked to items from the Chemical Entities of Biological Interest (ChEBI) ontology. Mappings to other vocabularies, such as DBPedia, are in separately maintained files. By formalizing the model for observable properties, and clearly labelling the separate concerns, water quality observations from different sources may be more easily merged and also transformed to O&M for cross-domain applications.
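
A minimal rdflib sketch of the "separate concerns" idea described above: one observable-property entry that points to its quantity kind, unit, substance and medium individually rather than conflating them in a single term; the op: namespace, property names and the ChEBI identifier are assumptions, not the published vocabulary itself.

```python
# Sketch: an observable property for "mass concentration of nitrate in water",
# with the quantity kind, unit, substance and medium kept as separate links.
from rdflib import Graph, Literal, Namespace, RDF, URIRef
from rdflib.namespace import SKOS

OP    = Namespace("http://example.org/def/op/")              # assumed observable-property vocabulary
QK    = Namespace("http://qudt.org/vocab/quantitykind/")     # QUDT quantity kinds
UNIT  = Namespace("http://qudt.org/vocab/unit/")             # QUDT units
CHEBI = Namespace("http://purl.obolibrary.org/obo/")         # ChEBI ontology

g = Graph()
prop = URIRef(OP["nitrate-mass-concentration-in-water"])

g.add((prop, RDF.type, OP.ObservableProperty))
g.add((prop, SKOS.prefLabel, Literal("Mass concentration of nitrate in water", lang="en")))
g.add((prop, OP.quantityKind, QK.MassConcentration))
g.add((prop, OP.unit, UNIT["MilliGM-PER-L"]))
g.add((prop, OP.objectOfInterest, CHEBI["CHEBI_17632"]))     # assumed ChEBI identifier for nitrate
g.add((prop, OP.medium, OP.Water))

print(g.serialize(format="turtle"))
```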

Relevance: 10.00%

Abstract:

This work presents a metadata model to describe and retrieve medical images on the Web. The model's classes enable the description of images from several medical specialties, including their properties, their components, and the relations among them. One of the properties incorporated by the model is the International Classification of Diseases, version 10 (ICD-10). The proposed metadata model, being class-based, favours specialization and its implementation on the RDF metadata architecture. The model served as the basis for the implementation of a prototype named MedISeek (Medical Image Seek), which allows authorized users to describe, store and retrieve images on the Web. In addition, a suitable persistent database structure is suggested for storing and retrieving the proposed metadata.
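
An illustrative sketch, not the MedISeek schema itself, of how one medical image might be described in RDF along the lines of the abstract; the namespace, property names and the ICD-10 code are assumptions.

```python
# Sketch: describe one medical image with RDF metadata, including an ICD-10 code.
from rdflib import Graph, Literal, Namespace, RDF, URIRef
from rdflib.namespace import DCTERMS

MED = Namespace("http://example.org/mediseek/")   # assumed namespace

g = Graph()
image = URIRef(MED["image/0001"])

g.add((image, RDF.type, MED.MedicalImage))
g.add((image, DCTERMS.title, Literal("Chest X-ray, frontal view")))
g.add((image, MED.specialty, Literal("Radiology")))
g.add((image, MED.icd10Code, Literal("J18.9")))   # assumed ICD-10 code (pneumonia, unspecified)
g.add((image, MED.modality, Literal("X-ray")))

print(g.serialize(format="turtle"))
```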

Relevance: 10.00%

Abstract:

Nowadays, the popularity of the Web encourages the development of hypermedia systems dedicated to e-learning. Nevertheless, most of the available Web teaching systems apply traditional paper-based learning resources presented as HTML pages, making no use of the new capabilities provided by the Web. There is a challenge to develop educational systems that adapt the educational content to the learning style, context and background of each student. Another research issue is the capacity to interoperate on the Web by reusing learning objects. This work presents an approach that addresses these two issues by using Semantic Web technologies. The approach models the knowledge of the educational content and the learner's profile with ontologies whose vocabularies refine those defined in standards published on the Web as reference points to provide semantics. Ontologies enable the representation of metadata concerning simple learning objects and the rules that define how they can feasibly be assembled into more complex ones. These complex learning objects can be created dynamically according to the learner's profile by intelligent agents that use the ontologies as the source of their beliefs. Interoperability issues were addressed by using an application profile of the IEEE LOM (Learning Object Metadata) standard.
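
A toy sketch of the kind of selection rule the intelligent agents described above might apply, matching learning-object metadata against a learner profile; the fields shown are assumptions and do not reproduce the IEEE LOM application profile.

```python
# Illustrative sketch: assemble a course from learning objects whose prerequisites
# the learner already masters, at the learner's preferred difficulty.
from dataclasses import dataclass
from typing import List

@dataclass
class LearningObject:
    identifier: str
    topic: str
    difficulty: str          # e.g. "easy", "medium", "hard"
    prerequisites: List[str]

@dataclass
class LearnerProfile:
    mastered_topics: List[str]
    preferred_difficulty: str

def assemble_course(objects: List[LearningObject], learner: LearnerProfile) -> List[LearningObject]:
    """Keep objects whose prerequisites are mastered and whose difficulty matches."""
    return [
        lo for lo in objects
        if lo.difficulty == learner.preferred_difficulty
        and all(p in learner.mastered_topics for p in lo.prerequisites)
    ]

if __name__ == "__main__":
    catalogue = [
        LearningObject("lo1", "RDF basics", "easy", []),
        LearningObject("lo2", "SPARQL queries", "easy", ["RDF basics"]),
    ]
    learner = LearnerProfile(mastered_topics=["RDF basics"], preferred_difficulty="easy")
    print([lo.identifier for lo in assemble_course(catalogue, learner)])
```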