950 results for ozone and storage
Abstract:
Sediments from Holes 994C, 995A, 997A, and 997B have been investigated for "combined" gases (adsorbed gas plus the portion of free gas that has not escaped from the pore volume during core recovery, sample collection, and storage), solvent-extractable organic compounds, and microscopically identifiable organic matter. The soluble materials consist mainly of polar compounds. The saturated hydrocarbons are dominated by n-alkanes with a pronounced odd-over-even predominance pattern derived from higher plant remains. Unsaturated triterpenoids and 17β,21β-pentacyclic triterpenoids are characteristic of a low maturity stage of the organic matter. The low maturity is confirmed by vitrinite reflectance values of 0.3%. The proportion of terrestrial remains (vitrinite) increases with sub-bottom depth. Within the liptinite fraction, marine algae play a major role in the sections below 180 mbsf, whereas above this depth sporinites and conifer pollen are dominant. These facies changes are confirmed by the downhole variations of isoprenoid and triterpenoid ratios in the soluble organic matter. The combined gases contain methane, ethane, and propane, representing a mixture of microbial methane and thermal hydrocarbon gases. The variations in the gas ratio C1/(C2+C3) reflect the depth range of the hydrate stability zone. The carbon isotopic compositions of ethane and propane indicate an origin from marine organic matter at the maturity stage of the oil window.
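The molecular ratio mentioned above is commonly expressed as the Bernard parameter, C1/(C2+C3). A minimal sketch of how it can be computed and screened is given below; the classification thresholds (roughly >1000 for microbial gas, <100 for thermogenic gas) are commonly cited rules of thumb, not values from this study.

```python
def bernard_parameter(c1, c2, c3):
    """Bernard parameter C1/(C2+C3) from molar gas concentrations."""
    return c1 / (c2 + c3)

def classify_gas(ratio):
    # Rule-of-thumb thresholds (assumed here, not from the study):
    # microbial gas typically >1000, thermogenic gas typically <100.
    if ratio > 1000:
        return "predominantly microbial"
    if ratio < 100:
        return "predominantly thermogenic"
    return "mixed microbial/thermogenic"

ratio = bernard_parameter(c1=9800.0, c2=6.5, c3=1.2)  # illustrative values
print(f"C1/(C2+C3) = {ratio:.0f}: {classify_gas(ratio)}")
```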
Abstract:
The state of preservation of natural gas hydrate samples, recovered from 6 sites drilled during ODP Leg 204 at the southern summit of Hydrate Ridge, Oregon Margin, has been investigated by X-ray diffraction (XRD) and cryo-scanning-electron-microscopy (cryo-SEM) techniques. A detailed characterization of the state of decomposition of gas hydrates is necessary because no pressurized autoclave tools were used for sampling, and partial dissociation must have occurred during recovery prior to quenching and storage in liquid nitrogen. Samples from 16 distinct horizons have been investigated by synchrotron X-ray diffraction measurements at HASYLAB/Hamburg. A full-profile fitting analysis ("Rietveld method") of the synchrotron XRD data provides quantitative phase determinations of the major sample constituents: gas hydrate structure I (sI), hexagonal ice (Ih), and quartz. The ice (Ih) content of each sample represents frozen water from both pre-existing pore water and decomposed hydrate. Hydrate contents measured by diffraction vary between 0 and 68 wt.% in the samples we measured. Samples with low hydrate content usually show microstructural features in cryo-SEM ascribed to extensive decomposition. Comparing the appearance of hydrates at different scales, the degree of preservation seems to be correlated primarily with the contiguous volume of the originally existing hydrate; the dissociation front appears to be marked by micrometer-sized pores in a dense ice matrix.
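Rietveld refinement returns weight fractions; for comparison with pore-scale imagery they are often converted to volume fractions using phase densities. A minimal sketch of that conversion follows, using commonly tabulated densities; the sample composition is an assumed illustration, not a measurement from the paper.

```python
# Convert Rietveld weight fractions to volume fractions (illustrative).
DENSITY = {"sI_hydrate": 0.91, "ice_Ih": 0.92, "quartz": 2.65}  # g/cm3

def volume_fractions(weight_pct):
    vol = {phase: w / DENSITY[phase] for phase, w in weight_pct.items()}
    total = sum(vol.values())
    return {phase: v / total for phase, v in vol.items()}

sample = {"sI_hydrate": 40.0, "ice_Ih": 45.0, "quartz": 15.0}  # wt.%, assumed
for phase, frac in volume_fractions(sample).items():
    print(f"{phase}: {100 * frac:.1f} vol.%")
```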
Abstract:
The organic geochemistry of Sites 1108 and 1109 of the Woodlark Basin, offshore Papua New Guinea, was studied to determine whether thermally mature hydrocarbons were present in the penetrated section and, if present, whether they are genetically related to the "coaly" interval encountered. Both the organic carbon and pyrolysis data indicate that there is no significant hydrocarbon source-rock potential at Site 1108. The hydrocarbons encountered during drilling appear to be indigenous rather than migrated products or contaminants. In contrast, the coaly interval at Site 1109 contains zones with significant hydrocarbon-generation potential. Several independent lines of evidence indicate that the coaly sequence encountered at Site 1109 is thermally immature. The Site 1108 methane stable-carbon isotope composition does not display a clear trend with depth, as would be expected if it solely reflected a maturation profile. The measured isotopic composition of methane has most probably been altered by fractionation during sample handling and storage. This fractionation would result in isotopically heavier values than would be obtained on free gas. The organic geochemical data gathered indicate that Site 1108 can safely be revisited and that the organic-rich sediments encountered at Site 1109 were not the source of the gas encountered at Site 1108.
Abstract:
The deployment of CCS (carbon capture and storage) at industrial scale requires the development of effective monitoring tools. Noble gases are the tracers usually proposed to track CO2. This methodology, combined with the geochemistry of carbon isotopes, has been tested on available analogues. First, gases from natural analogues were sampled in the Colorado Plateau and in the French carbogaseous provinces, at both well-confined and leaking sites. Second, we performed a two-year tracing experiment on an underground natural gas storage facility, sampling gas each month during injection and withdrawal periods. In the natural analogues, the geochemical fingerprints depend on the containment quality and on the geological context, providing tools to detect leakage of deep CO2 toward the surface. This study also provides information on the origin of the CO2, the residence time of fluids within the crust, and clues to the physico-chemical processes that occurred during the geological history. The study of the industrial analogue demonstrates the feasibility of using noble gases as tracers of CO2. Withdrawn gases follow geochemical trends consistent with mixing between the injected gas end-members. The physico-chemical processes revealed by the tracing occur at transient state. These two complementary studies demonstrate the value of geochemical monitoring for tracking CO2 behaviour and provide guidance on its use.
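The mixing trends mentioned above are usually quantified with a conservative two-end-member mixing model. A minimal sketch is shown below; the tracer choice and end-member values are illustrative assumptions, not measurements from this study.

```python
def mixing_fraction(c_sample, c_injected, c_native):
    """Fraction of injected gas in a sample, from a conservative tracer.

    Solves c_sample = f * c_injected + (1 - f) * c_native for f.
    """
    return (c_sample - c_native) / (c_injected - c_native)

# Illustrative 4He concentrations (ppm by volume), assumed values:
f = mixing_fraction(c_sample=12.0, c_injected=2.0, c_native=45.0)
print(f"injected-gas fraction: {f:.2f}")
```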
Abstract:
Traditionally, the application of stable isotopes in carbon capture and storage (CCS) projects has focused on δ13C values of CO2 to trace the migration of injected CO2 in the subsurface. More recently, the use of δ18O values of both CO2 and reservoir fluids has been proposed as a method for quantifying in situ CO2 reservoir saturations, because O isotope exchange between CO2 and H2O changes δ18O values of H2O in the presence of high concentrations of CO2. To verify that O isotope exchange between CO2 and H2O reaches equilibrium within days, and that δ18O values of H2O indeed change predictably in the presence of CO2, a laboratory study was conducted in which the isotope compositions of H2O, CO2, and dissolved inorganic C (DIC) were determined at representative reservoir conditions (50°C and up to 19 MPa) and varying CO2 pressures. Conditions typical for the Pembina Cardium CO2 Monitoring Pilot in Alberta (Canada) were chosen for the experiments. The results showed that δ18O values of CO2 were on average 36.4±2.2‰ (1σ, n=15) higher than those of water at all pressures up to and including reservoir pressure (19 MPa), in excellent agreement with the theoretically predicted isotope enrichment factor of 35.5‰ for the experimental temperature of 50°C. By using 18O-enriched water for the experiments, it was demonstrated that changes in the δ18O values of water were predictably related to the fraction of O in the system sourced from CO2, in excellent agreement with theoretical predictions. Since the fraction of O sourced from CO2 is related to the volumetric saturations of CO2 and water as fractions of the total volume of the system, it is concluded that changes in δ18O values of reservoir fluids can be used to calculate reservoir saturations of CO2 in CCS settings, provided that the δ18O values of CO2 and water are sufficiently distinct.
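The underlying mass balance can be written as δ18O_water(final) = X·(δ18O_CO2 − ε) + (1 − X)·δ18O_water(initial), where X is the fraction of oxygen in the system sourced from CO2 and ε is the CO2–H2O enrichment factor. A minimal sketch inverting this for X is given below; the input values are illustrative assumptions, not data from the study.

```python
EPSILON_50C = 35.5  # CO2-H2O 18O enrichment factor at 50 degC (per mil)

def fraction_o_from_co2(d18o_water_final, d18o_water_initial, d18o_co2,
                        epsilon=EPSILON_50C):
    """Fraction of oxygen sourced from CO2, from the d18O mass balance.

    Solves: final = x * (d18o_co2 - epsilon) + (1 - x) * initial.
    Water in isotopic equilibrium with the CO2 has d18O = d18o_co2 - epsilon.
    """
    equil_water = d18o_co2 - epsilon
    return ((d18o_water_final - d18o_water_initial)
            / (equil_water - d18o_water_initial))

# Illustrative values (per mil, VSMOW), assumed for the example:
x = fraction_o_from_co2(d18o_water_final=-14.0,
                        d18o_water_initial=-17.0,
                        d18o_co2=24.0)
print(f"fraction of O sourced from CO2: {x:.2f}")
```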
Abstract:
The carbon (C) sink strength of arctic tundra is under pressure from increasing populations of arctic breeding geese. In this study, we examined how CO2 and CH4 fluxes, plant biomass, and soil C responded to the removal of vertebrate herbivores in a high arctic wet moss meadow that had been intensively used by barnacle geese (Branta leucopsis) for ca. 20 years. We used 4- and 9-year-old grazing exclosures to investigate the potential for recovery of ecosystem function during the growing season (July 2007). The results show greater above- and below-ground vascular plant biomass within the grazing exclosures, with graminoid biomass being most responsive to the removal of herbivory, whilst moss biomass remained unchanged. The changes in biomass switched the system from net emission to net uptake of CO2 (0.47 and -0.77 µmol/m²/s in grazed and exclosure plots, respectively) during the growing season and doubled the C storage in live biomass. In contrast, the treatment had no impact on CH4 fluxes, the total litter C pool, or the soil C concentration. The rapid recovery of the above-ground biomass and CO2 fluxes demonstrates the plasticity of this high arctic ecosystem in response to changing herbivore pressure.
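For scale, instantaneous fluxes like those above are often integrated into seasonal carbon totals (1 µmol CO2 contains 12 µg C). A sketch of that conversion follows; the growing-season length is an assumed illustrative value, not one reported in the abstract.

```python
def seasonal_c_flux(flux_umol_m2_s, days):
    """Integrate a mean CO2 flux (umol/m2/s) to g C/m2 over `days`."""
    seconds = days * 86400
    return flux_umol_m2_s * seconds * 12e-6  # 12 ug C per umol CO2

# Fluxes from the abstract; a 50-day season is assumed for illustration.
for name, flux in [("grazed", 0.47), ("exclosure", -0.77)]:
    print(f"{name}: {seasonal_c_flux(flux, days=50):+.1f} g C/m2")
```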
Abstract:
Significant warming and acidification of the oceans is projected to occur by the end of the century. CO2 vents, areas of upwelling and downwelling, and potential leaks from carbon capture and storage facilities may also cause localised environmental changes, enhancing or depressing the effect of global climate change. Cold-water coral ecosystems are threatened by future changes in carbonate chemistry, yet our knowledge of the response of these corals to high temperature and high CO2 conditions is limited. Dimethylsulphoniopropionate (DMSP), and its breakdown product dimethylsulphide (DMS), are putative antioxidants that may be accumulated by invertebrates via their food or symbionts, although recent research suggests that some invertebrates may also be able to synthesise DMSP. This study provides the first information on the impact of high temperature (12 °C) and high CO2 (817 ppm) on intracellular DMSP in the cold-water coral Lophelia pertusa from the Mingulay Reef Complex, Scotland (56°49' N, 07°23' W), where in situ environmental conditions are mediated by tidally induced downwellings. An increase in intracellular DMSP under high CO2 conditions was observed, whilst water column particulate DMS + DMSP was reduced. In both high temperature treatments, intracellular DMSP was similar to the control treatment, whilst dissolved DMSP + DMS did not differ significantly between any of the treatments. These results suggest that L. pertusa accumulates DMSP from the surrounding water column; uptake may be up-regulated under high CO2 conditions but mediated by high temperature. These results provide new insight into the biotic control of deep-sea biogeochemistry and have implications for our understanding of the global sulphur cycle and the survival of cold-water corals under projected global change.
Abstract:
The approach developed by Fuhrer in 1995 to estimate wheat yield losses induced by ozone and modulated by soil water content (SWC) was applied to data on Catalonian wheat yields. The aim of our work was to apply this approach and adjust it to Mediterranean environmental conditions by means of the necessary corrections. The main objective was to demonstrate the importance of soil water availability, as a factor that modifies the effects of tropospheric ozone on wheat, in the estimation of relative wheat yield losses, and to develop the algorithms required to estimate relative yield losses under Mediterranean environmental conditions. The results show that this is a straightforward way to estimate relative yield losses using only meteorological data, without resorting to ozone fluxes, which are much more difficult to calculate. Soil water availability is very important as a modulating factor of the effects of ozone on wheat: when soil water availability decreases, almost twice the accumulated ozone exposure is required to induce the same percentage of yield loss as in years when soil water availability is high.
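Fuhrer-type yield-loss estimates are typically based on AOT40, the hourly ozone exposure accumulated over a 40 ppb threshold during daylight hours. The sketch below uses the commonly cited linear wheat dose-response RY = 0.99 − 0.0161·AOT40 (AOT40 in ppm·h); the soil-water modulation factor is a hypothetical placeholder standing in for the corrections described in the abstract, and the ozone series is a toy example.

```python
def aot40(hourly_o3_ppb, daylight_mask):
    """AOT40 (ppm·h): hourly ozone exposure accumulated over 40 ppb,
    summed over daylight hours of the growing season."""
    exceed_ppb = sum(max(0.0, o3 - 40.0)
                     for o3, day in zip(hourly_o3_ppb, daylight_mask) if day)
    return exceed_ppb / 1000.0  # ppb·h -> ppm·h

def relative_yield(aot40_ppm_h, swc_factor=1.0):
    """Commonly cited linear wheat dose-response (after Fuhrer, 1995).

    swc_factor is a hypothetical soil-water modulation: values > 1 mimic
    drier soils that dilute the effective ozone dose (illustrative only).
    """
    effective_dose = aot40_ppm_h / swc_factor
    return 0.99 - 0.0161 * effective_dose

o3 = [55, 62, 48, 35, 70, 41] * 100   # toy hourly series (ppb)
daylight = [True] * len(o3)
exposure = aot40(o3, daylight)
print(f"AOT40 = {exposure:.1f} ppm·h, RY = {relative_yield(exposure):.2f}")
```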
Abstract:
In the present uncertain global context of achieving social stability and a steadily thriving economy, power demand is expected to grow, and global electricity generation could nearly double from 2005 to 2030. Fossil fuels will remain a significant contributor to this energy mix up to 2050, with an expected share of around 70% of global and ca. 60% of European electricity generation. Coal will remain a key player. Hence, under a business-as-usual scenario for CO2 emissions, concentrations of up to 1,200 ppm, three times present values, are forecast by the end of this century. The Kyoto protocol was the first approach to taking global responsibility for CO2 emission monitoring and cap targets by 2012 with reference to 1990. Although some of the principal CO2 emitters did not ratify the reduction targets, the USA and China are taking their own actions and parallel reduction measures. More efficient, less fuel-consuming combustion processes, though a significant contribution from the electricity generation sector to reducing CO2 concentration levels, might not be sufficient. Carbon capture and storage (CCS) technologies have gained importance since the beginning of the decade, with research and funding emerging to bring them into use. After the first research projects and initial scale testing, three principal capture processes are available today, with first figures showing up to 90% CO2 removal in standard applications at coal-fired power stations. Regarding the last part of the CO2 reduction chain, two options can be considered: reuse (EOR & EGR) and storage. The study evaluates the state of CO2 capture technology development and the availability and investment cost of the different technologies, with the few operating cost analyses possible at the time. The main findings and the abatement potential for coal applications are presented. DOE, NETL, MIT, European universities and research institutions, key technology enterprises, utilities, and technology suppliers are the main sources of this study. A vision of the technology deployment is presented.
Abstract:
The goal of the RAP-WAM AND-parallel Prolog abstract architecture is to provide inference speeds significantly beyond those of sequential systems, while supporting Prolog semantics and preserving sequential performance and storage efficiency. This paper presents simulation results supporting these claims, with special emphasis on memory performance on a two-level shared-memory multiprocessor organization. Several solutions to the cache coherency problem are analyzed. It is shown that RAP-WAM offers good locality and storage efficiency and that it can effectively take advantage of broadcast caches. It is argued that speeds in excess of 2 MLIPS on real applications exhibiting medium parallelism can be attained with current technology.
Abstract:
A backtracking algorithm for AND-parallelism and its implementation at the abstract machine level are presented: first, a class of AND-parallelism models based on goal independence is defined, and a generalized version of Restricted AND-Parallelism (RAP) is introduced as characteristic of this class. A simple and efficient backtracking algorithm for RAP is then discussed. An implementation scheme is presented for this algorithm which offers minimum overhead, while retaining the performance and storage economy of sequential implementations and taking advantage of goal independence to avoid unnecessary backtracking ("restricted intelligent backtracking"). Finally, the implementation of backtracking in sequential and AND-parallel systems is explained through a number of examples.
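The key observation behind restricted intelligent backtracking is that when the goals in a parallel conjunction are independent (they share no variables), a goal that fails outright cannot be fixed by retrying a sibling, so backtracking can jump past all siblings to a choice point outside the conjunction. Below is a toy sketch of that control rule only, under the stated independence assumption, not the paper's WAM-level mechanism.

```python
from itertools import product

def solve_parallel_conjunction(goals):
    """Solve a conjunction of mutually independent goals.

    Each goal is a callable returning a list of its solutions. Because
    the goals share no variables, a goal with no solutions can never be
    rescued by retrying a sibling, so we fail past all siblings at once
    ("restricted intelligent backtracking") instead of stepping back
    goal by goal as naive sequential backtracking would.
    """
    per_goal = []
    for goal in goals:
        solutions = goal()
        if not solutions:          # outright failure:
            return iter(())        # backtrack outside the conjunction
        per_goal.append(solutions)
    # Independence also means every combination of sibling solutions
    # is a solution of the whole conjunction.
    return product(*per_goal)

# Toy usage: p(X), q(Y) with independent X and Y.
answers = solve_parallel_conjunction([lambda: [1, 2], lambda: ["a", "b"]])
print(list(answers))  # [(1, 'a'), (1, 'b'), (2, 'a'), (2, 'b')]
```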
Abstract:
Although the sequential execution speed of logic programs has been greatly improved by the concepts introduced in the Warren Abstract Machine (WAM), parallel execution represents the only way to increase this speed beyond the natural limits of sequential systems. However, most proposed parallel logic programming execution models lack the performance optimizations and storage efficiency of sequential systems. This paper presents a parallel abstract machine which is an extension of the WAM and is thus capable of supporting AND-parallelism without giving up the optimizations present in sequential implementations. A suitable instruction set, which can be used as a target by a variety of logic programming languages, is also included. Special instructions are provided to support a generalized version of Restricted AND-Parallelism (RAP), a technique which reduces the overhead traditionally associated with the run-time management of variable binding conflicts to a series of simple run-time checks, which select one out of a series of compiled execution graphs.
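The run-time checks mentioned above test conditions such as groundness or independence of goal arguments and then dispatch to one of several pre-compiled execution graphs. The following is a hedged sketch of that dispatch idea; the term representation, the checks, and the graph callables are illustrative assumptions, not the paper's instruction set.

```python
def variables(term):
    """Set of variable names in a term; variables are uppercase strings
    (an illustrative convention, not the paper's representation)."""
    if isinstance(term, str) and term[:1].isupper():
        return {term}
    if isinstance(term, tuple):  # compound term: (functor, arg1, arg2, ...)
        return set().union(*(variables(a) for a in term[1:]))
    return set()

def ground(term):
    return not variables(term)

def independent(t1, t2):
    return variables(t1).isdisjoint(variables(t2))

def execute_cge(g1_arg, g2_arg, parallel_graph, sequential_graph):
    # Conditional execution: run the parallel graph only when the cheap
    # run-time check succeeds, otherwise fall back to the sequential one.
    if ground(g1_arg) or independent(g1_arg, g2_arg):
        return parallel_graph()
    return sequential_graph()

# Toy usage: p(X), q(Y) share no variables, so the parallel graph is chosen.
execute_cge("X", "Y",
            parallel_graph=lambda: print("run p and q in parallel"),
            sequential_graph=lambda: print("run p then q"))
```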
Abstract:
Machine learning techniques are used for extracting valuable knowledge from data. Nowadays, these techniques are becoming even more important due to the evolution in data acquisition and storage, which is leading to data with different characteristics that must be exploited. Therefore, advances in data collection must be accompanied by advances in machine learning techniques to solve the new challenges that arise, in both academic and real applications. There are several machine learning techniques depending on both data characteristics and purpose. Unsupervised classification, or clustering, is one of the best-known techniques when data lack supervision (unlabeled data) and the aim is to discover data groups (clusters) according to their similarity. On the other hand, supervised classification needs data with supervision (labeled data), and its aim is to make predictions about the labels of new data. The presence of data labels is a very important characteristic that guides not only the learning task but also other related tasks such as validation. When only some of the available data are labeled whereas the others remain unlabeled (partially labeled data), neither clustering nor supervised classification can be used. This scenario, which is becoming common nowadays because of the cost or impracticality of the labeling process, is tackled with semi-supervised learning techniques. This thesis focuses on the branch of semi-supervised learning closest to clustering, i.e., discovering clusters using the available labels as support to guide and improve the clustering process.

Another important data characteristic, different from the presence of data labels, is the relevance or otherwise of data features. Data are characterized by features, but it is possible that not all of them are relevant, or equally relevant, for the learning process. A recent clustering trend, related to data relevance and called subspace clustering, holds that different clusters might be described by different feature subsets. This differs from traditional solutions to the data relevance problem, where a single feature subset (usually the complete set of original features) is found and used to perform the clustering process.

The proximity of this work to clustering leads to the first goal of this thesis. As noted above, clustering validation is a difficult task due to the absence of data labels. Although there are many indices that can be used to assess the quality of clustering solutions, these validations depend on the clustering algorithms and data characteristics. Hence, in the first goal, three well-known clustering algorithms are used to cluster data with outliers and noise, in order to study critically how some of the best-known validation indices behave. The main goal of this work, however, is to combine semi-supervised clustering with subspace clustering to obtain clustering solutions that can be correctly validated using either known indices or expert opinions. Two algorithms are proposed, from different points of view, to discover clusters characterized by different subspaces. The first algorithm uses the available data labels to search for subspaces before searching for clusters. It assigns each instance to only one cluster (hard clustering) and is based on mapping the known labels to subspaces using supervised classification techniques; the subspaces are then used to find clusters using traditional clustering techniques.
The second algorithm uses the available data labels to search for subspaces and clusters at the same time in an iterative process. It assigns each instance to each cluster with a membership probability (soft clustering) and is based on integrating the known labels and the search for subspaces into a model-based clustering approach. The different proposals are tested using real and synthetic databases, and comparisons with other methods are included where appropriate. Finally, as an example of a real and current application, different machine learning techniques, including one of the proposals of this work (the most sophisticated one), are applied to one of the most challenging biological problems nowadays: modeling the human brain. Specifically, expert neuroscientists do not agree on a neuron classification for the cerebral cortex, which prevents not only any modeling attempt but also day-to-day work, for want of a common way to name neurons. Machine learning techniques may therefore help to reach an accepted solution to this problem, which could be an important milestone for future research in neuroscience.
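A minimal sketch of the first algorithm's idea, as described above, follows: the known labels train a supervised classifier, whose feature importances define a subspace, and a traditional clustering algorithm is then run in that subspace. The specific classifier, the importance-based selection rule, and the clusterer are illustrative assumptions; the sketch also simplifies to a single shared subspace, whereas the thesis discovers different subspaces for different clusters.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans

def label_guided_subspace_clustering(X, y_partial, n_clusters, top_k=2):
    """Hard clustering in a label-derived subspace (illustrative sketch).

    X: (n_samples, n_features) data; y_partial: labels with -1 marking
    unlabeled instances; top_k: number of features kept for the subspace.
    """
    labeled = y_partial != -1
    # 1) Map the known labels to a subspace via a supervised classifier.
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[labeled], y_partial[labeled])
    subspace = np.argsort(clf.feature_importances_)[-top_k:]
    # 2) Cluster all instances (labeled and unlabeled) in that subspace.
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    return km.fit_predict(X[:, subspace]), subspace

# Toy data: the cluster structure lives in feature 0; a few labels known.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))
X[:30, 0] += 4.0
y = np.full(60, -1)
y[:5] = 0
y[30:35] = 1
labels, subspace = label_guided_subspace_clustering(X, y, n_clusters=2)
print("subspace features:", subspace)
```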
Abstract:
This article presents a solution to the problem of strong authentication that is portable and extensible, using a combination of Java technology and the storage of X.509 digital certificates on Java Cards to access services offered by an institution, in this case the Technological University of Panama, ensuring authenticity, confidentiality, integrity, and non-repudiation.
Abstract:
CO2 capture and storage (CCS) projects are presently being developed to reduce the emission of anthropogenic CO2 into the atmosphere. CCS technologies are expected to account for 20% of the CO2 reduction by 2050. One of the main concerns of CCS is whether CO2 will remain confined within the geological formation into which it is injected, since post-injection CO2 migration on the time scale of years, decades, and centuries is not well understood. Theoretically, CO2 can be retained at depth i) as a supercritical fluid (physical trapping), ii) as a fluid slowly migrating in an aquifer along a long flow path (hydrodynamic trapping), iii) dissolved in ground waters (solubility trapping), and iv) precipitated as secondary carbonates (mineral trapping). Carbon dioxide will be injected in the near future (2012) at Hontomín (Burgos, Spain) within the framework of the Compostilla EEPR project, led by the Fundación Ciudad de la Energía (CIUDEN). In order to detect leakage during the operational stage, a pre-injection geochemical baseline is presently being developed. In this work, a geochemical monitoring design is presented to provide information about the feasibility of CO2 storage at depth.