979 results for Web mapping


Relevance: 30.00%

Abstract:

This paper charts the current evidence on the effectiveness of different anti-corruption reforms and identifies significant evidence gaps. Despite a substantial amount of literature on corruption, this review found very few studies focusing on anti-corruption reforms, and even fewer that credibly assess issues of effectiveness and impact. The evidence was strong for only two types of interventions: public financial management (PFM) reforms and supreme audit institutions (SAIs). For PFM, the evidence in general showed positive results, whereas the effectiveness was mixed for SAIs. No strong evidence indicates that any of the interventions pursued have been ineffective, but there is fair evidence that anti-corruption authorities, civil service reforms and the use of corruption conditionality in aid allocation decisions have in general not been effective. The paper advocates more operationally relevant research and rigorous evaluations to build up the missing evidence base, particularly in conflict-afflicted states, with regard to the private sector, and on the interactions and interdependencies between different anti-corruption interventions.

Relevance: 30.00%

Abstract:

We describe ncWMS, an implementation of the Open Geospatial Consortium’s Web Map Service (WMS) specification for multidimensional gridded environmental data. ncWMS can read data in a large number of common scientific data formats – notably the NetCDF format with the Climate and Forecast conventions – then efficiently generate map imagery in thousands of different coordinate reference systems. It is designed to require minimal configuration from the system administrator and, when used in conjunction with a suitable client tool, provides end users with an interactive means for visualizing data without the need to download large files or interpret complex metadata. It is also used as a “bridging” tool providing interoperability between the environmental science community and users of geographic information systems. ncWMS implements a number of extensions to the WMS standard in order to fulfil some common scientific requirements, including the ability to generate plots representing time series and vertical sections. We discuss these extensions and their impact upon present and future interoperability. We discuss the conceptual mapping between the WMS data model and the data models used by gridded data formats, highlighting areas in which the mapping is incomplete or ambiguous. We discuss the architecture of the system and particular technical innovations of note, including the algorithms used for fast data reading and image generation. ncWMS has been widely adopted within the environmental data community and we discuss some of the ways in which the software is integrated within data infrastructures and portals.
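
As a concrete illustration of the WMS interface such a server exposes, the following minimal Python sketch builds a WMS 1.3.0 GetMap request. The endpoint URL, layer name and TIME value are hypothetical placeholders, since each ncWMS deployment defines its own layers and dimensions.

```python
# Minimal sketch of a WMS 1.3.0 GetMap request of the kind an ncWMS server
# answers. Server URL, layer name and TIME value are hypothetical placeholders.
from urllib.parse import urlencode
from urllib.request import urlopen

WMS_URL = "https://example.org/ncWMS/wms"  # hypothetical endpoint

params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetMap",
    "LAYERS": "ocean/sea_water_temperature",  # hypothetical layer name
    "STYLES": "",                             # server default style
    "CRS": "EPSG:4326",                       # one of many supported CRSs
    "BBOX": "-90,-180,90,180",                # lat/lon axis order for EPSG:4326 in WMS 1.3.0
    "WIDTH": 512,
    "HEIGHT": 256,
    "FORMAT": "image/png",
    "TIME": "2010-01-01T00:00:00Z",           # optional dimension for time-varying data
}

with urlopen(f"{WMS_URL}?{urlencode(params)}") as resp:
    png_bytes = resp.read()                   # map image ready to display or cache

with open("sea_temperature.png", "wb") as f:
    f.write(png_bytes)
```

A web map client builds the same kind of URL for every tile or image it displays, which is what makes the service usable without downloading the underlying NetCDF files.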

Relevance: 30.00%

Abstract:

Background: qtl.outbred is an extendible interface in the statistical environment R for combining quantitative trait loci (QTL) mapping tools. It is built as an umbrella package that enables outbred genotype probabilities to be calculated and/or imported into the software package R/qtl. Findings: Using qtl.outbred, the genotype probabilities from outbred line cross data can be calculated by interfacing with a new and efficient algorithm developed for analysing arbitrarily large datasets (included in the package), or imported from other sources such as the web-based tool GridQTL. Conclusion: qtl.outbred will improve the speed of calculating genotype probabilities and the ability to analyse large future datasets. The package enables the user to analyse outbred line cross data accurately, with effort similar to that required for inbred line cross data.

Relevance: 30.00%

Abstract:

Research objectives: Poker and responsible gambling both entail the use of the executive functions (EF), which are higher-level cognitive abilities. The main objective of this work was to assess whether online poker players of different ability levels show different EF performance and, if so, which functions are the most discriminating. The secondary objective was to assess whether EF performance can predict the quality of gambling, according to the Gambling Related Cognition Scale (GRCS), the South Oaks Gambling Screen (SOGS) and the Problem Gambling Severity Index (PGSI).

Sample and methods: The study design consisted of two stages: 46 Italian active players (41 m, 5 f; age 32 ± 7.1 years; education 14.8 ± 3 years) completed the PGSI in a secure IT web system and uploaded their own hand history files, which were anonymized and then evaluated by two poker experts. Of these, 36 players (31 m, 5 f; age 33 ± 7.3 years; education 15 ± 3 years) agreed to take part in the second stage: the administration of an extensive neuropsychological test battery by a blinded trained professional. To answer the main research question we collected all final and intermediate scores of the EF tests for each player, together with the playing-ability score. To answer the secondary research question, we referred to the GRCS, PGSI and SOGS scores. We determined which variables are good predictors of the playing-ability score using statistical techniques able to deal with many regressors and few observations (LASSO, best-subset algorithms and CART). In this context, information criteria and cross-validation errors play a key role in selecting the relevant regressors, whereas significance testing and goodness-of-fit measures can lead to wrong conclusions.

Preliminary findings: We found significant predictors of the poker ability score in various tests. In particular, good predictors appear 1) in some Wisconsin Card Sorting Test items that measure flexibility in choosing a problem-solving strategy, strategic planning, modulation of impulsive responding, goal setting and self-monitoring; 2) in those Cognitive Estimates Test variables related to deductive reasoning, problem solving, development of an appropriate strategy and self-monitoring; and 3) in the Emotional Quotient Inventory Short (EQ-i:S) Stress Management score, composed of the Stress Tolerance and Impulse Control scores, and in the Interpersonal score (Empathy, Social Responsibility, Interpersonal Relationship). As for the quality of gambling, some EQ-i:S scale scores provide the best predictors: General Mood for the PGSI; Intrapersonal (Self-Regard, Emotional Self-Awareness, Assertiveness, Independence, Self-Actualization) and Adaptability (Reality Testing, Flexibility, Problem Solving) for the SOGS; and Adaptability for the GRCS.

Implications for the field: Through PokerMapper we gathered knowledge and evaluated the feasibility of constructing short tasks/card games in online poker environments for profiling users' executive functions. These card games will be part of an IT system able to dynamically profile EF and provide players with feedback on their expected performance and ability to gamble responsibly at that particular moment. Implementing such a system in existing gambling platforms could lead to an effective proactive tool for supporting responsible gambling.
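
As an illustration of the kind of variable selection described above (many regressors, few observations), the following minimal Python sketch runs cross-validated LASSO on synthetic data. It is not the study's actual pipeline; the feature matrix merely stands in for the EF test scores.

```python
# Illustrative sketch only: selecting predictors of a playing-ability score from
# many test scores with few observations, using cross-validated LASSO.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_players, n_scores = 36, 60                     # few observations, many regressors
X = rng.normal(size=(n_players, n_scores))       # synthetic test scores
y = 0.8 * X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.5, size=n_players)  # synthetic ability score

X_std = StandardScaler().fit_transform(X)

# Cross-validation picks the penalty; coefficients shrunk exactly to zero are dropped.
lasso = LassoCV(cv=5, random_state=0).fit(X_std, y)
selected = np.flatnonzero(lasso.coef_)
print("selected predictor indices:", selected)
print("chosen penalty alpha:", lasso.alpha_)
```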

Relevance: 30.00%

Abstract:

Coastal flooding poses serious threats to coastal areas around the world, causes billions of dollars in damage to property and infrastructure, and threatens the lives of millions of people. Disaster management and risk assessment therefore aim at detecting vulnerability and capacities in order to reduce coastal flood disaster risk. In particular, non-specialized researchers, emergency management personnel and land use planners require an accurate, inexpensive method to determine and map the risk associated with storm surge events and the long-term sea level rise associated with climate change. This study contributes to the spatial evaluation and mapping of social, economic and environmental vulnerability and risk at sub-national scale through the development of appropriate tools and methods successfully embedded in a Web-GIS decision support system. A new set of raster-based models was developed so that it could be easily implemented in the Web-GIS framework, with the purpose of quickly assessing and mapping flood hazard characteristics, damage and vulnerability in a multi-criteria approach. The Web-GIS DSS is built with open-source software and programming languages, and its main strength is that it can be used by coastal managers and land use planners without requiring an advanced background in hydraulic engineering. The effectiveness of the system for coastal risk assessment is evaluated through its application to a real case study.
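
The multi-criteria raster combination can be sketched with a short, hypothetical Python example: normalised hazard, exposure and vulnerability rasters are combined with illustrative weights into a relative risk map. The layers and weights are synthetic and do not reproduce the paper's models.

```python
# Generic sketch of a raster-based multi-criteria overlay: weighted combination
# of normalised layers into a relative risk map. All layers and weights are synthetic.
import numpy as np

rows, cols = 200, 300
rng = np.random.default_rng(1)

flood_depth = rng.gamma(2.0, 0.5, size=(rows, cols))               # hazard layer (m)
population  = rng.poisson(50, size=(rows, cols)).astype(float)     # social exposure layer
land_value  = rng.uniform(0, 1, size=(rows, cols))                 # economic vulnerability proxy

def normalise(layer):
    """Rescale a raster to [0, 1] so layers with different units can be combined."""
    return (layer - layer.min()) / (layer.max() - layer.min())

weights = {"hazard": 0.5, "social": 0.3, "economic": 0.2}           # illustrative weights

risk = (weights["hazard"]   * normalise(flood_depth)
        + weights["social"]   * normalise(population)
        + weights["economic"] * normalise(land_value))

# Classify into qualitative classes (0 = low ... 3 = very high) for web-map display.
classes = np.digitize(risk, bins=[0.25, 0.5, 0.75])
print("cells per risk class:", np.bincount(classes.ravel(), minlength=4))
```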

Relevance: 30.00%

Abstract:

This article presents an overview of the Web 3.0 stage and its fundamental open questions. It analyses the current characteristics of bibliographic systems structured on the entity-relationship model. The conceptual, logical and physical levels of information systems are defined; on that basis, the characteristics of FRBR are presented and the relationships between work and document in the FRBR conceptual model are examined. FRBRoo is described as an object-oriented interpretation of the same functional requirements. Finally, future trends are outlined, such as the move from entity-relationship modelling to object modelling, explicit and consistent semantic annotation, the mapping of existing bibliographic databases, and the development of ontologies so that documentary systems can be integrated into the Semantic Web.

Relevance: 30.00%

Abstract:

In the context of the Semantic Web, natural language descriptions associated with ontologies have proven to be of major importance, not only to support ontology developers and adopters, but also to assist in tasks such as ontology mapping, information extraction, or natural language generation. In the state of the art we find some attempts to provide guidelines for URI local names in English, and also some disagreement on the use of URIs for describing ontology elements. When trying to extrapolate these ideas to a multilingual scenario, some of these approaches fail to provide a valid solution. On the basis of some real experiences in the translation of ontologies from English into Spanish, we provide a preliminary set of guidelines for naming and labeling ontologies in a multilingual scenario.
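
One practice such guidelines point towards can be sketched as follows: keep URI local names language-neutral (or English) and attach human-readable names as multilingual rdfs:label annotations. The namespace and class in this hypothetical rdflib example are invented for illustration.

```python
# Hypothetical sketch: a language-neutral URI local name with multilingual labels.
from rdflib import Graph, Literal, Namespace, RDF, RDFS, OWL

EX = Namespace("http://example.org/ontology#")   # invented namespace
g = Graph()
g.bind("ex", EX)

g.add((EX.River, RDF.type, OWL.Class))                         # local name stays language-neutral
g.add((EX.River, RDFS.label, Literal("river", lang="en")))     # labels carry the natural language
g.add((EX.River, RDFS.label, Literal("río", lang="es")))

print(g.serialize(format="turtle"))
```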

Relevance: 30.00%

Abstract:

In spite of the increasing presence of Semantic Web facilities, only a limited number of the resources available on the Internet provide semantic access. Recent initiatives such as the emerging Linked Data Web are providing semantic access to available data by porting existing resources to the Semantic Web using different technologies, such as database-semantic mapping and scraping. Nevertheless, existing scraping solutions are based on ad hoc approaches complemented with graphical interfaces for speeding up scraper development. This article proposes a generic framework for web scraping based on semantic technologies. This framework is structured in three levels: scraping services, the semantic scraping model, and syntactic scraping. The first level provides an interface through which generic applications or intelligent agents can gather information from the web at a high level. The second level defines a semantic RDF model of the scraping process, in order to provide a declarative approach to the scraping task. Finally, the third level provides an implementation of the RDF scraping model for specific technologies. The work has been validated in a scenario that illustrates its application to mashup technologies.
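
The layered idea can be sketched in a toy example: a syntactic layer extracts fields from HTML and a semantic layer publishes them as RDF triples. The HTML snippet, CSS selectors and vocabulary below are invented and do not reproduce the framework's actual model.

```python
# Toy sketch of semantic scraping: syntactic extraction feeding an RDF model.
# The HTML, CSS classes and vocabulary are invented for illustration.
from bs4 import BeautifulSoup
from rdflib import Graph, Literal, Namespace, RDF, URIRef

html = """
<div class="post">
  <h2 class="title">Web scraping with semantics</h2>
  <span class="author">Alice</span>
</div>
"""

EX = Namespace("http://example.org/vocab#")
g = Graph()

soup = BeautifulSoup(html, "html.parser")
for i, post in enumerate(soup.select("div.post")):
    subject = URIRef(f"http://example.org/post/{i}")
    g.add((subject, RDF.type, EX.Post))
    g.add((subject, EX.title, Literal(post.select_one("h2.title").get_text(strip=True))))
    g.add((subject, EX.author, Literal(post.select_one("span.author").get_text(strip=True))))

print(g.serialize(format="turtle"))
```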

Relevance: 30.00%

Abstract:

The Web has witnessed an enormous growth in the amount of semantic information published in recent years. This growth has been stimulated to a large extent by the emergence of Linked Data. Although this brings us a big step closer to the vision of a Semantic Web, it also raises new issues such as the need for dealing with information expressed in different natural languages. Indeed, although the Web of Data can contain any kind of information in any language, it still lacks explicit mechanisms to automatically reconcile such information when it is expressed in different languages. This leads to situations in which data expressed in a certain language is not easily accessible to speakers of other languages. The Web of Data shows the potential for being extended to a truly multilingual web as vocabularies and data can be published in a language-independent fashion, while associated language-dependent (linguistic) information supporting the access across languages can be stored separately. In this sense, the multilingual Web of Data can be realized in our view as a layer of services and resources on top of the existing Linked Data infrastructure adding i) linguistic information for data and vocabularies in different languages, ii) mappings between data with labels in different languages, and iii) services to dynamically access and traverse Linked Data across different languages. In this article we present this vision of a multilingual Web of Data. We discuss challenges that need to be addressed to make this vision come true and discuss the role that techniques such as ontology localization, ontology mapping, and cross-lingual ontology-based information access and presentation will play in achieving this. Further, we propose an initial architecture and describe a roadmap that can provide a basis for the implementation of this vision.
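
One ingredient of such cross-lingual access can be sketched with a SPARQL query that retrieves the labels of a single resource in several languages. DBpedia is used here purely as a familiar public endpoint; any Linked Data source could be substituted.

```python
# Sketch of cross-lingual label access over Linked Data: one query, several languages.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setReturnFormat(JSON)
sparql.setQuery("""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?lang ?label WHERE {
        <http://dbpedia.org/resource/Madrid> rdfs:label ?label .
        BIND (lang(?label) AS ?lang)
        FILTER (?lang IN ("en", "es", "de"))
    }
""")

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["lang"]["value"], "->", row["label"]["value"])
```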

Relevance: 30.00%

Abstract:

Sensor networks are increasingly being deployed in the environment for many different purposes. The observations that they produce are made available with heterogeneous schemas, vocabularies and data formats, making it difficult to share and reuse these data for purposes other than those for which the networks were originally set up. The authors propose an ontology-based approach for providing data access and query capabilities to streaming data sources, allowing users to express their needs at a conceptual level, independent of implementation and language-specific details. In this article, the authors describe the theoretical foundations and technologies that enable exposing semantically enriched sensor metadata and querying sensor observations through SPARQL extensions, using query rewriting and data translation techniques according to mapping languages, and managing both pull and push delivery modes.
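
The query-rewriting idea can be sketched conceptually: a declarative mapping relates ontology terms to the fields of a concrete sensor stream, and a query posed in ontology terms is rewritten into source-level terms before execution. The ontology properties, stream and field names in this Python sketch are invented and do not reproduce the authors' mapping languages or SPARQL extensions.

```python
# Conceptual, hypothetical sketch of ontology-to-stream query rewriting.
MAPPING = {
    # ontology term        -> (stream name, field name)
    "ex:airTemperature":     ("weather_stream", "temp_c"),
    "ex:observationTime":    ("weather_stream", "ts"),
    "ex:stationId":          ("weather_stream", "station_id"),
}

def rewrite(conceptual_query: dict) -> dict:
    """Translate a query over ontology terms into one over stream fields."""
    stream_fields = [MAPPING[term] for term in conceptual_query["select"]]
    streams = {stream for stream, _ in stream_fields}
    assert len(streams) == 1, "joins across streams are not handled in this sketch"
    return {
        "stream": streams.pop(),
        "fields": [field for _, field in stream_fields],
        "window": conceptual_query.get("window", "10 minutes"),
    }

query = {"select": ["ex:airTemperature", "ex:observationTime"], "window": "5 minutes"}
print(rewrite(query))   # {'stream': 'weather_stream', 'fields': ['temp_c', 'ts'], 'window': '5 minutes'}
```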

Relevance: 30.00%

Abstract:

We live in the age of information and the Internet, and the need to obtain and share the information that exists keeps growing. This need arises in every domain, but perhaps nowhere more strongly than in medicine, which is why research of many kinds is carried out in this field. This has generated an unimaginable amount of highly heterogeneous information, making it ever more difficult to unify it and extract knowledge or added value. Different lines of research have addressed this problem; perhaps the most important and fastest growing is querying based on ontology models, through systems able to consult them. This Master's thesis focuses on the generation of the queries needed to access information that is distributed across different sites in heterogeneous form, by means of an API that generates the required SPARQL code. The API used was created by the biomedical informatics group. We also sought an efficient way to publish this API for its future use in the p-medicine project, so a RESTful service was created to allow the desired queries to be generated from any platform, making the API more accessible and universal. A web interface was also added to the API so that it can be used in a more user-friendly way.
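
The publishing pattern described (a RESTful layer over a query-generation API) might look roughly like the following Flask sketch. Here generate_sparql() is a placeholder for the group's mapping API, which is not reproduced; the endpoint and parameter names are invented.

```python
# Hypothetical sketch: exposing a SPARQL-generating function as a small RESTful service
# so that any platform can request a query over HTTP.
from flask import Flask, jsonify, request

app = Flask(__name__)

def generate_sparql(concept: str) -> str:
    """Placeholder standing in for the real query-generation API."""
    return f"SELECT ?s ?p ?o WHERE {{ ?s ?p ?o . FILTER regex(str(?s), '{concept}', 'i') }}"

@app.route("/query", methods=["GET"])
def query():
    concept = request.args.get("concept", "")
    if not concept:
        return jsonify(error="missing 'concept' parameter"), 400
    return jsonify(sparql=generate_sparql(concept))

if __name__ == "__main__":
    app.run(port=5000)
```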

Relevance: 30.00%

Abstract:

"June 1998"--P. [4] of cover.

Relevance: 30.00%

Abstract:

The INTAMAP FP6 project has developed an interoperable framework for real-time automatic mapping of critical environmental variables by extending spatial statistical methods and employing open, web-based data exchange protocols and visualisation tools. This paper gives an overview of the underlying problem and of the project, and discusses which problems it has solved and which open problems seem most relevant to address next. The interpolation problem that INTAMAP solves is the generic problem of spatial interpolation of environmental variables without user interaction, based on measurements of e.g. PM10, rainfall or gamma dose rate, at arbitrary locations or over a regular grid covering the area of interest. It deals with problems of varying spatial resolution of measurements, the interpolation of averages over larger areas, and providing information on the interpolation error to the end user. In addition, monitoring network optimisation is addressed in a non-automatic context.
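
The generic interpolation problem can be illustrated with a short sketch that interpolates scattered measurements onto a regular grid using simple inverse-distance weighting. INTAMAP itself applies proper geostatistical methods that also quantify interpolation error, so this is only a toy stand-in with synthetic data.

```python
# Toy illustration of the generic problem: interpolating scattered point
# measurements (e.g. gamma dose rate) onto a regular grid. Inverse-distance
# weighting is used here for brevity; it is not INTAMAP's method.
import numpy as np

rng = np.random.default_rng(42)
obs_xy = rng.uniform(0, 100, size=(30, 2))                     # measurement locations
obs_val = 50 + 0.3 * obs_xy[:, 0] + rng.normal(0, 2, 30)       # synthetic measurements

gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
grid_xy = np.column_stack([gx.ravel(), gy.ravel()])

# Inverse-distance weights; a small epsilon avoids division by zero at data points.
d = np.linalg.norm(grid_xy[:, None, :] - obs_xy[None, :, :], axis=2)
w = 1.0 / (d + 1e-6) ** 2
interpolated = (w @ obs_val) / w.sum(axis=1)
grid_values = interpolated.reshape(gx.shape)                   # map ready for visualisation
print(grid_values.min(), grid_values.max())
```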