125 results for Corcho


Relevance:

10.00%

Publisher:

Abstract:

The city of Bath is a World Heritage site, and its thermal waters, the Roman Baths and the new spa development rely on the undisturbed flow of the springs (45 °C). The current investigations provide an improved understanding of the residence times and flow regime as a basis for source protection. Trace gas indicators including the noble gases (helium, neon, argon, krypton and xenon) and chlorofluorocarbons (CFCs), together with a more comprehensive examination of chemical and stable isotope tracers, are used to characterise the sources of the thermal water and any modern components. It is shown conclusively by the use of 39Ar that the bulk of the thermal water has been in circulation within the Carboniferous Limestone for at least 1000 years. Other stable isotope and noble gas measurements confirm previous findings and strongly suggest recharge within the Holocene time period (i.e. the last 12 kyr). Measurements of dissolved 85Kr and chlorofluorocarbons constrain previous indications from tritium that a small proportion (<5%) of the thermal water originates from modern leakage into the spring pipe passing through the Mesozoic valley fill underlying Bath. This introduces small amounts of O2 into the system, resulting in the Fe precipitation seen in the King’s Spring. Silica geothermometry indicates that the water is likely to have reached a maximum temperature of between 69 and 99 °C, indicating a most probable maximum circulation depth of ∼3 km, in line with recent geological models. The rise of the water to the surface is sufficiently indirect that a temperature loss of >20 °C is incurred. There is overwhelming evidence that the water has evolved within the Carboniferous Limestone formation, although the chemistry alone cannot pinpoint the geometry of the recharge area or the circulation route.
For a likely residence time of 1–12 kyr, volumetric calculations imply a large storage volume and circulation pathway if typical porosities of the limestone at depth are used, indicating that much of the Bath–Bristol basin must be involved in the water storage.
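The silica geothermometry mentioned above rests on empirical quartz solubility equations. As a minimal sketch, the widely used Fournier quartz calibration (no steam loss) is applied below to hypothetical dissolved-silica values chosen only to span roughly the reported 69–99 °C range; the abstract does not give the measured concentrations.

```python
import math

def quartz_temperature_c(silica_mg_per_kg: float) -> float:
    """Fournier quartz geothermometer (no steam loss):
    T(degC) = 1309 / (5.19 - log10(SiO2 in mg/kg)) - 273.15"""
    return 1309.0 / (5.19 - math.log10(silica_mg_per_kg)) - 273.15

# Hypothetical dissolved-silica values, for illustration only.
for sio2 in (25.0, 47.0):
    print(f"SiO2 = {sio2} mg/kg -> T = {quartz_temperature_c(sio2):.0f} degC")
```

Inverting the estimated reservoir temperature against a regional geothermal gradient is what yields the ∼3 km circulation depth quoted above.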

Relevance:

10.00%

Publisher:

Abstract:

The northern section of the Bohemian Cretaceous Basin has been the site of intensive U exploitation with harmful impacts on groundwater quality. The understanding of groundwater flow and age distribution is crucial for the prediction of the future dispersion and impact of the contamination. State-of-the-art tracer methods (3H, 3He, 4He, 85Kr, 39Ar and 14C) were, therefore, used to obtain insights into the ageing and mixing processes of groundwater along a north–south flow line in the centre of the two most important aquifers of Cenomanian and middle Turonian age. Dating of groundwater is particularly complex in this area as: (i) groundwater in the Cenomanian aquifer is locally affected by fluxes of geogenic and biogenic gases (e.g. CO2, CH4, He) and by fossil brines in basement rocks rich in Cl and SO4; (ii) a thick unsaturated zone overlies the Turonian aquifer; (iii) a periglacial climate and permafrost conditions prevailed during the Last Glacial Maximum (LGM); and (iv) the wells are mostly screened over large depth intervals. Large disagreements in 85Kr and 3H/3He ages indicate that processes other than ageing have affected the tracer data in the Turonian aquifer. Mixing with older waters (>50 a) was confirmed by 39Ar activities. An inverse modelling approach, which included time lags for tracer transport through the unsaturated zone and degassing of 3He, was used to estimate the age of groundwater. Best fits between model and field results were obtained for mean residence times varying from modern up to a few hundred years. The presence of modern water in this aquifer is correlated with the occurrence of elevated pollution (e.g. nitrates). An increase of reactive geochemical indicators (e.g. Na) and radiogenic 4He, and a decrease in 14C along the flow direction confirmed groundwater ageing in the deeper confined Cenomanian aquifer. Radiocarbon ages varied from a few hundred years to more than 20 ka.
Initial 14C activity for radiocarbon dating was calibrated by means of 39Ar measurements. The 14C age of a sample recharged during the LGM was further confirmed by depleted stable isotope signatures and a near-freezing-point noble gas temperature. Radiogenic 4He accumulated in groundwater with concentrations increasing linearly with 14C ages. This enabled the use of 4He to validate the dating range of 14C and extend it to other parts of this aquifer. In the proximity of faults, 39Ar in excess of modern concentrations, 14C-dead CO2 sources, elevated 3He/4He ratios and volcanic activity from the Oligocene to the Quaternary demonstrate the influence of gas of deeper origin, which impeded the application of 4He, 39Ar and 14C for groundwater dating.
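The 3H/3He method referred to above pairs tritium with its decay product to date young groundwater. A minimal sketch of the standard apparent-age formula follows; the concentrations are hypothetical, and, as the abstract notes, real samples require corrections for 3He degassing and non-tritiogenic 3He before such an age is meaningful.

```python
import math

TRITIUM_HALF_LIFE_YR = 12.32  # half-life of 3H in years

def he3_h3_age(tritiogenic_he3: float, tritium: float) -> float:
    """Apparent 3H/3He age in years.
    Both concentrations expressed in tritium units (TU)."""
    return (TRITIUM_HALF_LIFE_YR / math.log(2)) * math.log(1.0 + tritiogenic_he3 / tritium)

# Hypothetical sample: 10 TU of tritiogenic 3He against 5 TU of residual 3H.
print(f"apparent age = {he3_h3_age(10.0, 5.0):.1f} a")
```

Divergence between this apparent age and an independent 85Kr age is exactly the kind of disagreement the abstract uses to flag degassing and mixing.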

Relevance:

10.00%

Publisher:

Abstract:

The contamination of wine with “fungal” or “mouldy” aromas and/or flavours results from the presence of certain compounds known as haloanisoles, which are produced by filamentous fungi. The aim of the present work is to review the available literature on the filamentous fungi that produce haloanisoles: their persistence, their growth, and the biochemistry of the haloanisole synthesis reactions, in order to establish the corresponding preventive sanitary measures.

Relevance:

10.00%

Publisher:

Abstract:

We present the data structures and algorithms used in the approach for building domain ontologies from folksonomies and linked data. In this approach we extract domain terms from folksonomies and enrich them with semantic information from the Linked Open Data cloud. As a result, we obtain a domain ontology that combines the emergent knowledge of social tagging systems with formal knowledge from ontologies.
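The two-step extract-and-enrich idea can be sketched as follows. The tag assignments and the lookup table are mocked for illustration; a real implementation would query DBpedia or another Linked Open Data endpoint rather than a local dictionary.

```python
from collections import Counter

# Hypothetical tag assignments from a social tagging system.
tagged_resources = [
    ["semantic-web", "ontology", "rdf"],
    ["ontology", "owl", "rdf"],
    ["rdf", "sparql", "ontology"],
]

# Step 1: extract candidate domain terms by tag frequency.
counts = Counter(tag for tags in tagged_resources for tag in tags)
terms = [t for t, n in counts.items() if n >= 2]

# Step 2 (mocked): enrich the terms with resources from the
# Linked Open Data cloud; here a hard-coded stand-in for a lookup service.
lod_index = {
    "ontology": "http://dbpedia.org/resource/Ontology",
    "rdf": "http://dbpedia.org/resource/Resource_Description_Framework",
}
ontology = {t: lod_index.get(t) for t in terms}
print(ontology)
```

The frequency threshold stands in for the term-selection heuristics of the approach; the resulting term-to-resource map is the seed of the domain ontology.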

Relevance:

10.00%

Publisher:

Abstract:

This paper describes a method for identifying topics in text published in social media by applying topic recognition techniques that exploit DBpedia. We evaluate this method on social media in Spanish and report the results of the evaluation.
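A minimal sketch of DBpedia-based topic spotting: a hand-made gazetteer of surface forms stands in for the DBpedia lexicalisations a real system would use, and the matching is plain token lookup rather than the paper's full recognition pipeline.

```python
# Illustrative surface-form -> DBpedia resource gazetteer (mocked; real
# systems derive these mappings from DBpedia itself).
gazetteer = {
    "madrid": "http://dbpedia.org/resource/Madrid",
    "futbol": "http://dbpedia.org/resource/Association_football",
}

def identify_topics(post: str) -> list[str]:
    """Return the DBpedia resources spotted in a social media post."""
    tokens = post.lower().split()
    return [gazetteer[t] for t in tokens if t in gazetteer]

print(identify_topics("Gran partido de futbol en Madrid"))
```

Working at the level of DBpedia resources, rather than raw keywords, is what lets the identified topics carry semantics usable downstream.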

Relevance:

10.00%

Publisher:

Abstract:

Web 2.0 applications enabled users to classify information resources using their own vocabularies. The bottom-up nature of these user-generated classification systems has turned them into interesting knowledge sources, since they provide a rich terminology generated by potentially large user communities. Previous research has shown that it is possible to elicit some emergent semantics from the aggregation of individual classifications in these systems. However, the generation of ontologies from them is still an open research problem. In this thesis we address the problem of how to tap into user-generated classification systems for building domain ontologies. Our objective is to design a method to develop domain ontologies from user-generated classification systems. To do so, we rely on ontologies in the Web of Data to formalize the semantics of the knowledge collected from the classification system. Current ontology development methodologies have recognized the importance of reusing knowledge from existing resources. Thus, our work is framed within the NeOn methodology scenario for building ontologies by reusing and reengineering non-ontological resources. The main contributions of this work are: (i) an integrated method to develop ontologies from user-generated classification systems, with which we extract a domain terminology from the classification system and then formalize the semantics of this terminology by reusing ontologies in the Web of Data; (ii) the identification and adaptation of existing techniques for implementing the activities in the method so that they can fulfil the requirements of each activity; and (iii) a novel study about emergent semantics in user-generated lists.

Relevance:

10.00%

Publisher:

Abstract:

Publishing Linked Data is a process that involves several design decisions and technologies. Although some initial guidelines have already been provided by Linked Data publishers, these are still far from covering all the steps that are necessary (from data source selection to publication) or giving enough details about all these steps, technologies, intermediate products, etc. Furthermore, given the variety of data sources from which Linked Data can be generated, we believe that it is possible to have a single and unified method for publishing Linked Data, but we should rely on different techniques, technologies and tools for particular datasets of a given domain. In this paper we present a general method for publishing Linked Data and the application of the method to cover different sources from different domains.
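One concrete step of any such method, generating RDF triples from a non-RDF source, can be sketched as follows. The records, base URI and vocabulary below are illustrative, not taken from the paper; production pipelines would use an RDF library and declarative mappings rather than string assembly.

```python
# Mocked tabular source records to be published as Linked Data.
records = [{"id": "madrid", "label": "Madrid", "population": 3200000}]

BASE = "http://example.org/resource/"  # illustrative namespace

# Map each record to (subject, predicate, object) triples.
triples = []
for r in records:
    s = f"<{BASE}{r['id']}>"
    triples.append((s, "<http://www.w3.org/2000/01/rdf-schema#label>", f'"{r["label"]}"'))
    triples.append((s, "<http://example.org/ontology/population>", f'"{r["population"]}"'))

# Serialise in N-Triples style, one statement per line.
turtle = "\n".join(f"{s} {p} {o} ." for s, p, o in triples)
print(turtle)
```

Selecting which vocabularies to reuse (here, rdfs:label against a local ontology term) is precisely one of the design decisions the method is meant to guide.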

Relevance:

10.00%

Publisher:

Abstract:

Recently we have seen a large increase in the amount of geospatial data that is being published using RDF and Linked Data principles. Efforts such as the W3C Geo XG and, most recently, the GeoSPARQL initiative are providing the necessary vocabularies to publish this kind of information on the Web of Data. In this context it is necessary to develop applications that consume and take advantage of these geospatial datasets. In this paper we present map4rdf, a faceted browsing tool for exploring and visualizing RDF datasets enhanced with geospatial information.
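A consumer like map4rdf must first decide which resources are mappable at all. A minimal sketch over mocked triples, using the real W3C WGS84 vocabulary URIs, collects coordinates per subject and keeps only fully geolocated resources:

```python
# W3C Basic Geo (WGS84) vocabulary terms.
WGS84_LAT = "http://www.w3.org/2003/01/geo/wgs84_pos#lat"
WGS84_LONG = "http://www.w3.org/2003/01/geo/wgs84_pos#long"

# Mocked triples; subjects and values are illustrative.
triples = [
    ("ex:Bath", WGS84_LAT, 51.38),
    ("ex:Bath", WGS84_LONG, -2.36),
    ("ex:NoGeo", "rdfs:label", "no coordinates"),
]

# Collect lat/long per subject; only subjects with both can be drawn on a map.
coords = {}
for s, p, o in triples:
    if p in (WGS84_LAT, WGS84_LONG):
        coords.setdefault(s, {})["lat" if p == WGS84_LAT else "long"] = o

mappable = {s: c for s, c in coords.items() if {"lat", "long"} <= c.keys()}
print(mappable)
```

In practice this selection would be a SPARQL query against the dataset; the faceted browsing then filters the mappable resources by their other properties.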

Relevance:

10.00%

Publisher:

Abstract:

Provenance plays a major role when understanding and reusing the methods applied in a scientific experiment, as it provides a record of inputs, the processes carried out and the use and generation of intermediate and final results. In the specific case of in-silico scientific experiments, a large variety of scientific workflow systems (e.g., Wings, Taverna, Galaxy, Vistrails) have been created to support scientists. All of these systems produce some sort of provenance about the executions of the workflows that encode scientific experiments. However, provenance is normally recorded at a very low level of detail, which complicates the understanding of what happened during execution. In this paper we propose an approach to automatically obtain abstractions from low-level provenance data by finding common workflow fragments on workflow execution provenance and relating them to templates. We have tested our approach with a dataset of workflows published by the Wings workflow system. Our results show that by using these kinds of abstractions we can highlight the most common abstract methods used in the executions of a repository, relating different runs and workflow templates with each other.
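The fragment-finding idea can be sketched by counting contiguous step pairs across run traces. The step names are invented, and real execution provenance would be mined for larger, possibly non-contiguous fragments, but the principle of promoting frequent fragments to abstractions is the same:

```python
from collections import Counter

# Hypothetical step sequences recorded in the provenance of several runs.
runs = [
    ["fetch", "clean", "align", "plot"],
    ["fetch", "clean", "align", "report"],
    ["fetch", "clean", "plot"],
]

# Count contiguous 2-step fragments across all runs.
fragments = Counter()
for steps in runs:
    for a, b in zip(steps, steps[1:]):
        fragments[(a, b)] += 1

# Fragments appearing in at least two runs are candidate abstract methods.
common = [frag for frag, n in fragments.most_common() if n >= 2]
print(common)
```

Relating such recurring fragments back to workflow templates is what links concrete runs to the abstract methods they instantiate.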

Relevance:

10.00%

Publisher:

Abstract:

Sensor networks are increasingly becoming one of the main sources of Big Data on the Web. However, the observations that they produce are made available with heterogeneous schemas, vocabularies and data formats, making it difficult to share and reuse these data for other purposes than those for which they were originally set up. In this thesis we address these challenges, considering how we can transform streaming raw data to rich ontology-based information that is accessible through continuous queries for streaming data. Our main contribution is an ontology-based approach for providing data access and query capabilities to streaming data sources, allowing users to express their needs at a conceptual level, independent of implementation and language-specific details. We introduce novel query rewriting and data translation techniques that rely on mapping definitions relating streaming data models to ontological concepts. Specific contributions include:

• The syntax and semantics of the SPARQLStream query language for ontology-based data access, and a query rewriting approach for transforming SPARQLStream queries into streaming algebra expressions.

• The design of an ontology-based streaming data access engine that can internally reuse an existing data stream engine, complex event processor or sensor middleware, using R2RML mappings for defining relationships between streaming data models and ontology concepts.

Concerning the sensor metadata of such streaming data sources, we have investigated how we can use raw measurements to characterize streaming data, producing enriched data descriptions in terms of ontological models. Our specific contributions are:

• A representation of sensor data time series that captures gradient information that is useful to characterize types of sensor data.

• A method for classifying sensor data time series and determining the type of data, using data mining techniques, and a method for extracting semantic sensor metadata features from the time series.
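The gradient-based characterization of time series can be sketched as follows. The features, thresholds and class labels are illustrative only; the thesis applies data mining techniques over a richer gradient representation rather than a fixed cutoff.

```python
# Sketch of gradient features over a raw sensor time series.
def gradient_features(series: list[float]) -> dict:
    """Summarise the first differences (gradients) of a series."""
    grads = [b - a for a, b in zip(series, series[1:])]
    return {
        "mean_grad": sum(grads) / len(grads),
        "max_abs_grad": max(abs(g) for g in grads),
    }

def classify(series: list[float]) -> str:
    """Toy classifier: abrupt gradients suggest event-like data
    (e.g. rainfall); small gradients suggest smooth data (e.g. temperature)."""
    f = gradient_features(series)
    if f["max_abs_grad"] > 5.0:  # illustrative threshold
        return "event-like"
    return "smooth"

print(classify([10.0, 10.2, 10.1, 10.4]))
print(classify([0.0, 0.0, 12.0, 1.0]))
```

The extracted class then becomes semantic sensor metadata, annotating the stream with the type of quantity it most plausibly measures.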

Relevance:

10.00%

Publisher:

Abstract:

RDB2RDF systems generate RDF from relational databases, operating in two different manners: materializing the database content into RDF or acting as virtual RDF datastores that transform SPARQL queries into SQL. In the former, inferences on the RDF data (taking into account the ontologies that they are related to) are normally done by the RDF triple store where the RDF data is materialised and hence the results of the query answering process depend on the store. In the latter, existing RDB2RDF systems do not normally perform such inferences at query time. This paper shows how the algorithm used in the REQUIEM system, focused on handling run-time inferences for query answering, can be adapted to handle such inferences for query answering in combination with RDB2RDF systems.
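The virtual-datastore mode can be sketched at its simplest: translating one SPARQL triple pattern into SQL through an R2RML-like mapping. The table and column names are illustrative, and real systems handle joins, filters and full query algebra rather than a single pattern:

```python
# R2RML-like mapping (mocked): predicate -> (table, subject column, object column).
mapping = {
    "ex:name": ("person", "id", "name"),
}

def triple_pattern_to_sql(predicate: str) -> str:
    """Rewrite the triple pattern  ?s <predicate> ?o  into SQL."""
    table, subj_col, obj_col = mapping[predicate]
    return f"SELECT {subj_col}, {obj_col} FROM {table}"

print(triple_pattern_to_sql("ex:name"))
```

The REQUIEM-style adaptation discussed in the paper works upstream of this step, expanding the query under the ontology before the rewriting to SQL, so that the inferences need not be performed by a triple store.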

Relevance:

10.00%

Publisher:

Abstract:

In this paper the authors present an approach for the semantic annotation of RESTful services in the geospatial domain. Their approach automates some stages of the annotation process by using a combination of resources and services: a cross-domain knowledge base like DBpedia, two domain ontologies like GeoNames and the WGS84 vocabulary, and suggestion and synonym services. The authors’ approach has been successfully evaluated with a set of geospatial RESTful services obtained from ProgrammableWeb.com, where geospatial services account for a third of the total number of services available in this registry.
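The parameter-annotation step can be sketched with a mocked synonym service and the real WGS84 vocabulary URIs; GeoNames and the suggestion services mentioned above are omitted for brevity, and the synonym table is invented for illustration:

```python
# Ontology labels drawn from the W3C WGS84 vocabulary.
ontology_labels = {
    "lat": "http://www.w3.org/2003/01/geo/wgs84_pos#lat",
    "long": "http://www.w3.org/2003/01/geo/wgs84_pos#long",
}

# Mocked synonym service: maps service parameter names to ontology labels.
synonyms = {"latitude": "lat", "lng": "long", "longitude": "long"}

def annotate(param: str):
    """Annotate a RESTful service parameter with an ontology term, if any."""
    key = synonyms.get(param.lower(), param.lower())
    return ontology_labels.get(key)

print(annotate("latitude"))
print(annotate("radius"))
```

Parameters left unannotated (like "radius" here) are the cases where the approach falls back to human validation.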

Relevance:

10.00%

Publisher:

Abstract:

Semantic Sensor Web infrastructures use ontology-based models to represent the data that they manage; however, up to now, these ontological models do not allow representing all the characteristics of distributed, heterogeneous, and web-accessible sensor data. This paper describes a core ontological model for Semantic Sensor Web infrastructures that covers these characteristics and that has been built with a focus on reusability. This ontological model is composed of different modules that deal, on the one hand, with infrastructure data and, on the other hand, with data from a specific domain, that is, the coastal flood emergency planning domain. The paper also presents a set of guidelines, followed during the ontological model development, to satisfy a common set of requirements related to modelling domain-specific features of interest and properties. In addition, the paper includes the results obtained after an exhaustive evaluation of the developed ontologies along different aspects (i.e., vocabulary, syntax, structure, semantics, representation, and context).

Relevance:

10.00%

Publisher:

Abstract:

Sensor networks are increasingly being deployed in the environment for many different purposes. The observations that they produce are made available with heterogeneous schemas, vocabularies and data formats, making it difficult to share and reuse these data for purposes other than those for which they were originally set up. The authors propose an ontology-based approach for providing data access and query capabilities to streaming data sources, allowing users to express their needs at a conceptual level, independent of implementation and language-specific details. In this article, the authors describe the theoretical foundations and technologies that enable exposing semantically enriched sensor metadata, and querying sensor observations through SPARQL extensions, using query rewriting and data translation techniques according to mapping languages, and managing both pull and push delivery modes.