940 results for Packing, transportation and storage
Abstract:
The deployment of carbon capture and storage (CCS) at industrial scale requires the development of effective monitoring tools. Noble gases are tracers commonly proposed to track CO2. This methodology, combined with the geochemistry of carbon isotopes, was tested on available analogues. First, gases from natural analogues were sampled in the Colorado Plateau and in the French carbogaseous provinces, at both well-confined and leaking sites. Second, we performed a two-year tracing experiment on an underground natural gas storage facility, sampling gas each month during injection and withdrawal periods. In natural analogues, the geochemical fingerprints depend on the containment criterion and on the geological context, providing tools to detect leakage of deep CO2 toward the surface. This study also provides information on the origin of the CO2, on the residence time of fluids within the crust, and clues on the physico-chemical processes occurring over the geological history. The study of the industrial analogue demonstrates the feasibility of using noble gases as tracers of CO2. Withdrawn gases follow geochemical trends consistent with mixing between the injected gas end-members. Physico-chemical processes revealed by the tracing occur in a transient state. Together, these two complementary studies demonstrate the value of geochemical monitoring for surveying CO2 behaviour and provide guidance on its use.
Abstract:
Traditionally, the application of stable isotopes in carbon capture and storage (CCS) projects has focused on δ13C values of CO2 to trace the migration of injected CO2 in the subsurface. More recently, the use of δ18O values of both CO2 and reservoir fluids has been proposed as a method for quantifying in situ CO2 reservoir saturations, exploiting O isotope exchange between CO2 and H2O and the resulting changes in δ18O(H2O) values in the presence of high concentrations of CO2. To verify that O isotope exchange between CO2 and H2O reaches equilibrium within days, and that δ18O(H2O) values indeed change predictably due to the presence of CO2, a laboratory study was conducted in which the isotope composition of H2O, CO2, and dissolved inorganic C (DIC) was determined at representative reservoir conditions (50 °C and up to 19 MPa) and varying CO2 pressures. Conditions typical of the Pembina Cardium CO2 Monitoring Pilot in Alberta (Canada) were chosen for the experiments. The results showed that δ18O values of CO2 were on average 36.4 ± 2.2 per mil (1 sigma, n = 15) higher than those of water at all pressures up to and including reservoir pressure (19 MPa), in excellent agreement with the theoretically predicted isotope enrichment factor of 35.5 per mil for the experimental temperature of 50 °C. By using 18O-enriched water it was demonstrated that changes in the δ18O values of water were predictably related to the fraction of O in the system sourced from CO2, again in excellent agreement with theoretical predictions. Since the fraction of O sourced from CO2 is related to the volumetric saturations of CO2 and water as fractions of the total volume of the system, it is concluded that changes in δ18O values of reservoir fluids can be used to calculate reservoir saturations of CO2 in CCS settings, provided that the δ18O values of CO2 and water are sufficiently distinct.
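The saturation estimate described above rests on a simple two-component oxygen mass balance: the final δ18O of the water is a mixture of its baseline value and the oxygen contributed by CO2, offset by the enrichment factor ε (about 35.5 per mil at 50 °C). A minimal sketch of that inversion; all numerical values are purely illustrative, not data from the study:

```python
def o_fraction_from_co2(d18o_water_final, d18o_water_baseline, d18o_co2, eps=35.5):
    """Fraction X of oxygen in the system sourced from CO2, inverted from a
    two-component mass balance (illustrative sketch, not the authors' code):
        d_final = X * (d_CO2 - eps) + (1 - X) * d_baseline
    eps is the CO2-H2O enrichment factor (~35.5 per mil at 50 C)."""
    return (d18o_water_final - d18o_water_baseline) / (
        (d18o_co2 - eps) - d18o_water_baseline
    )

# Hypothetical values: baseline water -10 per mil, CO2 at -30 per mil,
# post-injection water measured at -15 per mil.
x_o = o_fraction_from_co2(-15.0, -10.0, -30.0)
print(round(x_o, 3))  # fraction of O in the system sourced from CO2
```

Converting this fraction into a volumetric CO2 saturation additionally requires the molar oxygen densities of the CO2 and water phases at reservoir conditions.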
Abstract:
The carbon (C) sink strength of arctic tundra is under pressure from increasing populations of arctic-breeding geese. In this study we examined how CO2 and CH4 fluxes, plant biomass and soil C responded to the removal of vertebrate herbivores in a high-arctic wet moss meadow that had been intensively used by barnacle geese (Branta leucopsis) for ca. 20 years. We used 4- and 9-year-old grazing exclosures to investigate the potential for recovery of ecosystem function during the growing season (July 2007). The results show greater above- and below-ground vascular plant biomass within the grazing exclosures, with graminoid biomass being most responsive to the removal of herbivory whilst moss biomass remained unchanged. The changes in biomass switched the system from net emission to net uptake of CO2 (0.47 and -0.77 µmol m⁻² s⁻¹ in grazed and exclosure plots, respectively) during the growing season and doubled the C storage in live biomass. In contrast, the treatment had no impact on the CH4 fluxes, the total litter C pool or the soil C concentration. The rapid recovery of the above-ground biomass and CO2 fluxes demonstrates the plasticity of this high-arctic ecosystem in response to changing herbivore pressure.
Abstract:
Significant warming and acidification of the oceans is projected to occur by the end of the century. CO2 vents, areas of upwelling and downwelling, and potential leaks from carbon capture and storage facilities may also cause localised environmental changes, enhancing or depressing the effects of global climate change. Cold-water coral ecosystems are threatened by future changes in carbonate chemistry, yet our knowledge of the response of these corals to high-temperature and high-CO2 conditions is limited. Dimethylsulphoniopropionate (DMSP), and its breakdown product dimethylsulphide (DMS), are putative antioxidants that may be accumulated by invertebrates via their food or symbionts, although recent research suggests that some invertebrates may also be able to synthesise DMSP. This study provides the first information on the impact of high temperature (12 °C) and high CO2 (817 ppm) on intracellular DMSP in the cold-water coral Lophelia pertusa from the Mingulay Reef Complex, Scotland (56°49' N, 07°23' W), where in situ environmental conditions are mediated by tidally induced downwellings. An increase in intracellular DMSP under high-CO2 conditions was observed, whilst water column particulate DMS + DMSP was reduced. In both high-temperature treatments, intracellular DMSP was similar to the control treatment, whilst dissolved DMSP + DMS was not significantly different between any of the treatments. These results suggest that L. pertusa accumulates DMSP from the surrounding water column; uptake may be up-regulated under high-CO2 conditions, but mediated by high temperature. These results provide new insight into the biotic control of deep-sea biogeochemistry and may impact our understanding of the global sulphur cycle, and the survival of cold-water corals under projected global change.
Abstract:
China has been the fastest-growing country in the world over the last few decades, and one of the defining features of China's growth has been investment-led growth. China's sustained high economic growth and increased competitiveness in manufacturing have been underpinned by a massive development of physical infrastructure. In this context, we investigate the role of infrastructure in promoting economic growth in China for the period 1975 to 2007. Overall, the results reveal that infrastructure stock, the labour force, and public and private investment have played an important role in economic growth in China. More importantly, we find that infrastructure development in China has made a more significant positive contribution to growth than either private or public investment. Further, there is unidirectional causality from infrastructure development to output growth, justifying China's high spending on infrastructure development since the early nineties. The experience of China suggests that it is necessary to design economic policy that improves physical infrastructure as well as human capital formation for sustainable economic growth in developing countries.
Abstract:
This paper studies the energy consumption and the resulting CO2 emissions of highway road transportation under three toll systems in Spain for four categories of vehicles: cars, vans, buses and articulated trucks. The influence of the toll systems is tested on a section of the AP-41 highway between Toledo and Madrid. One system is free-flow, another is the traditional stop-and-go barrier, and the third operates with electronic toll collection (ETC) technology. Energy consumption and CO2 emissions were found to be closely related to vehicle mass, wind exposure, engine efficiency and acceleration rate. These parameters affect, directly or indirectly, the external forces that determine the energy consumption. Reducing the magnitude of these forces through appropriate toll management is an important way of improving the energy performance of vehicles. The type of toll system used can have a major influence on the energy efficiency of highway transportation, and free-flow tolling therefore merits consideration.
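The external forces mentioned above can be sketched with a standard longitudinal vehicle dynamics model: inertia, aerodynamic drag and rolling resistance. The parameter values below are illustrative stand-ins, not figures from the study:

```python
RHO_AIR = 1.2   # air density, kg/m^3
G = 9.81        # gravitational acceleration, m/s^2

def traction_power(v, a, mass, cd_area, c_rr):
    """Instantaneous traction power (W) needed to overcome inertia,
    aerodynamic drag and rolling resistance.
    v: speed (m/s); a: acceleration (m/s^2); mass: vehicle mass (kg);
    cd_area: drag coefficient * frontal area (m^2); c_rr: rolling coefficient."""
    f_inertia = mass * a
    f_aero = 0.5 * RHO_AIR * cd_area * v * v
    f_roll = c_rr * mass * G
    return (f_inertia + f_aero + f_roll) * v

# Illustrative comparison for a 40 t articulated truck: accelerating away
# from a stop-and-go toll barrier versus cruising through a free-flow gantry.
p_accel = traction_power(v=10.0, a=0.5, mass=40000, cd_area=6.0, c_rr=0.007)
p_cruise = traction_power(v=25.0, a=0.0, mass=40000, cd_area=6.0, c_rr=0.007)
```

With these hypothetical numbers the inertial term dominates during the barrier restart, which is the mechanism by which stop-and-go tolling penalises energy consumption relative to free-flow.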
Abstract:
In the present uncertain global context of achieving social stability and a steadily thriving economy, power demand is expected to grow, and global electricity generation could nearly double from 2005 to 2030. Fossil fuels will remain a significant contributor to this energy mix up to 2050, with an expected share of around 70% of global and ca. 60% of European electricity generation; coal will remain a key player. Hence, under a business-as-usual scenario, CO2 concentrations are forecast to reach roughly three times present values, up to 1,200 ppm, by the end of this century. The Kyoto Protocol was the first attempt to take global responsibility for monitoring and capping CO2 emissions by 2012 with reference to 1990 levels, although some of the principal CO2 emitters did not ratify the reduction targets; the USA and China are nonetheless taking their own actions and parallel reduction measures. More efficient combustion processes that consume less fuel, although a significant contribution of the electricity generation sector to dwindling CO2 concentration levels, may not be sufficient. Carbon capture and storage (CCS) technologies have gained importance since the beginning of the decade, with research and funding emerging to bring them into practical use. After initial research projects and first-scale testing, three principal capture processes are available today, with first figures showing up to 90% CO2 removal in standard applications at coal-fired power stations. Regarding the last part of the CO2 reduction chain, two options can be considered worthwhile: reuse (EOR and EGR) and storage. This study evaluates the state of CO2 capture technology development, and the availability and investment cost of the different technologies, with the few operating-cost analyses possible at the time. The main findings and the abatement potential for coal applications are presented.
DOE, NETL, MIT, European universities and research institutions, key technology enterprises and utilities, and key technology suppliers are the main sources of this study. A vision of the technology deployment is presented.
Abstract:
The goal of the RAP-WAM AND-parallel Prolog abstract architecture is to provide inference speeds significantly beyond those of sequential systems, while supporting Prolog semantics and preserving sequential performance and storage efficiency. This paper presents simulation results supporting these claims, with special emphasis on memory performance on a two-level shared-memory multiprocessor organization. Several solutions to the cache coherency problem are analyzed. It is shown that RAP-WAM offers good locality and storage efficiency and that it can effectively take advantage of broadcast caches. It is argued that speeds in excess of 2 MLIPS on real applications exhibiting medium parallelism can be attained with current technology.
Abstract:
A backtracking algorithm for AND-parallelism and its implementation at the abstract machine level are presented: first, a class of AND-parallelism models based on goal independence is defined, and a generalized version of Restricted AND-Parallelism (RAP) is introduced as characteristic of this class. A simple and efficient backtracking algorithm for RAP is then discussed. An implementation scheme is presented for this algorithm which offers minimum overhead, while retaining the performance and storage economy of sequential implementations and taking advantage of goal independence to avoid unnecessary backtracking ("restricted intelligent backtracking"). Finally, the implementation of backtracking in sequential and AND-parallel systems is explained through a number of examples.
Abstract:
Although the sequential execution speed of logic programs has been greatly improved by the concepts introduced in the Warren Abstract Machine (WAM), parallel execution represents the only way to increase this speed beyond the natural limits of sequential systems. However, most proposed parallel logic programming execution models lack the performance optimizations and storage efficiency of sequential systems. This paper presents a parallel abstract machine which is an extension of the WAM and is thus capable of supporting AND-parallelism without giving up the optimizations present in sequential implementations. A suitable instruction set, which can be used as a target by a variety of logic programming languages, is also included. Special instructions are provided to support a generalized version of "Restricted AND-Parallelism" (RAP), a technique which reduces the overhead traditionally associated with the run-time management of variable binding conflicts to a series of simple run-time checks, which select one out of a series of compiled execution graphs.
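The run-time checks at the heart of RAP are, in essence, independence tests: if two goals share no unbound variables, a parallel execution graph can be selected; otherwise a sequential one is used. The sketch below is a deliberately simplified, language-neutral illustration in Python; the real mechanism is compiled into WAM-level instructions and conditional graph expressions, not interpreted dictionaries:

```python
def independent(vars_a, vars_b, bindings):
    """Two goals are treated as independent if every variable they share is
    already bound to a ground term (simplified stand-in for RAP's compiled
    groundness/independence checks)."""
    shared = set(vars_a) & set(vars_b)
    return all(v in bindings for v in shared)

def select_graph(goal_a, goal_b, bindings):
    """Pick the compiled execution graph at run time: parallel when the
    goals are independent, sequential otherwise."""
    if independent(goal_a["vars"], goal_b["vars"], bindings):
        return "parallel"
    return "sequential"

# Hypothetical goals p(X, Z) and q(Y, Z) sharing variable Z.
g1 = {"name": "p", "vars": ["X", "Z"]}
g2 = {"name": "q", "vars": ["Y", "Z"]}
print(select_graph(g1, g2, bindings={"Z": "a"}))  # Z bound -> "parallel"
print(select_graph(g1, g2, bindings={}))          # Z unbound -> "sequential"
```

The point of compiling these checks, rather than analysing bindings dynamically in full generality, is that each check reduces to a few cheap tests at run time while the expensive dependency analysis happens at compile time.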
Abstract:
Machine learning techniques are used for extracting valuable knowledge from data. Nowadays, these techniques are becoming even more important due to the evolution of data acquisition and storage, which is leading to data with different characteristics that must be exploited. Advances in data collection must therefore be accompanied by advances in machine learning techniques to solve the new challenges that arise, in both academic and real applications. There are several machine learning techniques depending on both the data characteristics and the purpose. Unsupervised classification, or clustering, is one of the best-known techniques when data lack supervision (unlabeled data) and the aim is to discover data groups (clusters) according to their similarity. On the other hand, supervised classification needs data with supervision (labeled data), and its aim is to make predictions about the labels of new data. The presence of data labels is a very important characteristic that guides not only the learning task but also other related tasks such as validation. When only some of the available data are labeled whereas the others remain unlabeled (partially labeled data), neither clustering nor supervised classification can be used. This scenario, which is becoming common nowadays because of the cost or neglect of the labeling process, is tackled with semi-supervised learning techniques. This thesis focuses on the branch of semi-supervised learning closest to clustering, i.e., discovering clusters using the available labels as support to guide and improve the clustering process. Another important data characteristic, different from the presence of data labels, is the relevance or not of data features. Data are characterized by features, but it is possible that not all of them are relevant, or equally relevant, for the learning process.
A recent clustering trend, related to data relevance and called subspace clustering, claims that different clusters may be described by different feature subsets. This differs from traditional solutions to the data relevance problem, in which a single feature subset (usually the complete set of original features) is found and used to perform the clustering process. The proximity of this work to clustering leads to the first goal of this thesis. As noted above, clustering validation is a difficult task due to the absence of data labels. Although there are many indices that can be used to assess the quality of clustering solutions, these validations depend on the clustering algorithms and data characteristics. Hence, in the first goal, three well-known clustering algorithms are used to cluster data with outliers and noise, in order to study critically how some of the best-known validation indices behave. The main goal of this work, however, is to combine semi-supervised clustering with subspace clustering to obtain clustering solutions that can be correctly validated using either known indices or expert opinions. Two algorithms are proposed, from different points of view, to discover clusters characterized by different subspaces. The first algorithm uses the available data labels to search for subspaces first, before searching for clusters. It assigns each instance to only one cluster (hard clustering) and is based on mapping the known labels to subspaces using supervised classification techniques; the subspaces are then used to find clusters with traditional clustering techniques. The second algorithm uses the available data labels to search for subspaces and clusters at the same time in an iterative process. It assigns each instance to each cluster with a membership probability (soft clustering) and is based on integrating the known labels and the subspace search into a model-based clustering approach.
The different proposals are tested using various real and synthetic databases, and comparisons to other methods are included where appropriate. Finally, as an example of a real and current application, different machine learning techniques, including one of the proposals of this work (the most sophisticated one), are applied to one of the most challenging biological problems today: human brain modeling. Specifically, expert neuroscientists do not agree on a neuron classification for the cerebral cortex, which makes impossible not only any modeling attempt but also day-to-day work without a common way to name neurons. Machine learning techniques may therefore help reach an accepted solution to this problem, which could be an important milestone for future research in neuroscience.
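The first algorithm described above (labels, then subspaces, then clusters) can be illustrated with a toy sketch: rank each feature by a between/within-class variance ratio computed from the labeled instances, keep the top-scoring subset, and hand that subspace to any standard clustering routine. Everything below is an illustrative stand-in, not the thesis implementation:

```python
import numpy as np

def subspace_from_labels(X, y, k):
    """Score each feature by a between/within class variance ratio using
    the labeled rows, and return the indices of the k best features."""
    scores = []
    for j in range(X.shape[1]):
        col = X[:, j]
        overall = col.mean()
        between = sum(
            (y == c).sum() * (col[y == c].mean() - overall) ** 2
            for c in np.unique(y)
        )
        within = sum(
            ((col[y == c] - col[y == c].mean()) ** 2).sum() for c in np.unique(y)
        )
        scores.append(between / (within + 1e-12))
    return np.argsort(scores)[::-1][:k]

rng = np.random.default_rng(0)
# Synthetic data: feature 0 separates the two classes; feature 1 is noise.
X = np.vstack([
    np.column_stack([rng.normal(0, 0.3, 50), rng.normal(0, 1, 50)]),
    np.column_stack([rng.normal(5, 0.3, 50), rng.normal(0, 1, 50)]),
])
y = np.array([0] * 50 + [1] * 50)
subspace = subspace_from_labels(X, y, k=1)
# A clustering routine (e.g. k-means) would then run on X[:, subspace] only.
```

In the partially-labeled setting the scoring step would use only the labeled rows, while the subsequent clustering uses all rows; the thesis's second algorithm instead interleaves subspace search and soft cluster assignment inside a model-based loop.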
Abstract:
This project consists of the dimensioning of the liquefaction process of an offshore plant producing liquefied natural gas (LNG), using only N2 as refrigerant in the cooling cycles, thereby avoiding the potential hazards of mixed hydrocarbon refrigerants. The process was designed to accommodate 35.23 kg/s (roughly 1 MTPA) of dry natural gas feed, without separation of liquefied petroleum gases (LPG), and fits within all parameters required in the process specifications. The liquefaction process was dimensioned with the simulation tool Aspen Plus. The floating production, storage and offloading system for liquefied natural gas (LNG-FPSO) is a new conceptual unit and an effective, realistic way to exploit, recover, store, transport and deliver the gas from marginal gas fields and offshore associated-gas resources.
The report details the process, the necessary equipment and estimated costs, the approximate power requirements, and a brief economic analysis.
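A rough feel for why an all-N2 cycle is power-hungry can be had from the ideal-gas isentropic compression work of nitrogen. The formula is standard thermodynamics; the stage conditions below are hypothetical and are not taken from the Aspen Plus model:

```python
def isentropic_compression_work(t_in_k, pressure_ratio, cp=1040.0, gamma=1.4):
    """Specific work (J/kg) to compress an ideal diatomic gas (N2)
    isentropically: w = cp * T_in * (r^((gamma-1)/gamma) - 1).
    cp (J/kg/K) and gamma are textbook values for N2 near ambient."""
    t_out = t_in_k * pressure_ratio ** ((gamma - 1.0) / gamma)
    return cp * (t_out - t_in_k)

# Hypothetical compressor stage: N2 at 300 K compressed 3:1,
# giving roughly 115 kJ per kg of refrigerant.
w = isentropic_compression_work(300.0, 3.0)
```

Real cycles recover part of this work in the expander and depart from ideal-gas behaviour at cryogenic conditions, which is why the process was sized in Aspen Plus rather than by hand calculations like this one.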
Abstract:
This article presents a portable and extensible solution to the problem of strong authentication, using a combination of Java technology and the storage of X.509 digital certificates on Java Cards to access services offered by an institution (in this case, the Technological University of Panama), ensuring authenticity, confidentiality, integrity and non-repudiation.
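A certificate-on-card scheme of this kind typically works as a challenge–response protocol: the server sends a fresh nonce, the card signs it with a private key that never leaves the card, and the server verifies the response against the public key in the card's X.509 certificate. The sketch below substitutes an HMAC for the card's asymmetric signature purely for illustration (standard library only; a real deployment would use RSA or ECDSA keys bound to the X.509 certificate):

```python
import hashlib
import hmac
import os

# Stand-in for the private key stored on the Java Card (never exported).
CARD_KEY = b"secret-key-on-card"

def card_sign(challenge: bytes, key: bytes = CARD_KEY) -> bytes:
    """What the card applet would do with its private key (HMAC stand-in)."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def server_verify(challenge: bytes, response: bytes, key: bytes = CARD_KEY) -> bool:
    """Server-side check; with X.509 this would be a public-key verification."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

nonce = os.urandom(16)  # a fresh random challenge defeats replay attacks
assert server_verify(nonce, card_sign(nonce))
assert not server_verify(nonce, card_sign(nonce, b"wrong-key"))
```

The fresh nonce provides the anti-replay property, the on-card key provides non-repudiation, and constant-time comparison avoids timing leaks on the server side.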
Abstract:
CO2 capture and storage (CCS) projects are presently being developed to reduce the emission of anthropogenic CO2 into the atmosphere. CCS technologies are expected to account for 20% of the CO2 reduction by 2050. One of the main concerns about CCS is whether CO2 will remain confined within the geological formation into which it is injected, since post-injection CO2 migration on the time scale of years, decades and centuries is not well understood. Theoretically, CO2 can be retained at depth i) as a supercritical fluid (physical trapping), ii) as a fluid migrating slowly in an aquifer along a long flow path (hydrodynamic trapping), iii) dissolved into ground waters (solubility trapping), and iv) precipitated as secondary carbonates (mineral trapping). Carbon dioxide will be injected in the near future (2012) at Hontomín (Burgos, Spain) within the framework of the Compostilla EEPR project, led by the Fundación Ciudad de la Energía (CIUDEN). In order to detect leakage during the operational stage, a pre-injection geochemical baseline is presently being developed. In this work, a geochemical monitoring design is presented to provide information about the feasibility of CO2 storage at depth.
Abstract:
One of the main problems in urban areas is the steady growth in car ownership and traffic levels. The challenge of sustainability is therefore focused on shifting the demand for mobility from cars to collective means of transport. To this end, buses are a key element of public transport systems. In this respect, Real Time Passenger Information (RTPI) systems help citizens change their travel behaviour towards more sustainable transport modes. This paper provides an assessment methodology which evaluates how RTPI systems improve the quality of bus services in two European cities, Madrid and Bremerhaven. In the case of Madrid, bus punctuality has increased by 3%. Regarding travellers' perception, Madrid raised its quality of service by 6% while Bremerhaven increased it by 13%. In addition, users' perception of the Public Transport (PT) image increased by 14%.