Abstract:
The paradoxically low infant mortality rates for Mexican Americans in Texas have been attributed to inaccuracies in vital registration and idiosyncrasies in Mexican migration in rural areas along the U.S.-Mexico border. This study examined infant (IMR), neonatal (NMR), and postneonatal (PNMR) mortality rates of Mexican Americans in an urban, non-border setting, using linked birth and death records of the 1974-75 single live birth cohort (N = 68,584) in Harris County, Texas, which includes the city of Houston and is reported to have nearly complete birth and death registration. The use of parental nativity with the traditional Spanish surname criterion made it possible to distinguish infants of Mexican-born immigrants from those of Blacks, Anglos, other Hispanics, and later-generation, more Anglicized Mexican Americans. Mortality rates were analyzed by ethnicity, parental nativity, and cause of death, with respect to birth weight, birth order, maternal age, legitimacy status, and time of first prenatal care. While overall IMRs showed Spanish surname rates slightly higher than Anglo rates, infants of Mexican-born immigrants had much lower NMRs than did Anglos, even for moderately low birth weight infants. However, among infants under 1500 grams, presumably unable to be discharged home in the neonatal period, Mexican Americans had the highest NMR. The inconsistency suggested unreported deaths for Mexican American low birth weight infants after hospital discharge. The PNMR of infants of Mexican immigrants was also lower than for Anglos, and the usual mortality differentials were reversed: high-risk categories of high birth order, high maternal age, and late/no prenatal care had the lowest PNMRs. Since these groups' characteristics are congruent with those of low-income migrants, the data suggested the possibility of migration losses.
Cause of death analysis suggested that prematurity and birth injuries are greater problems than heretofore recognized among Mexican Americans, and that home births and "shoebox burials" may be unrecorded even in an urban setting. Caution is advised in the interpretation of infant mortality rates for a Spanish surname population of Mexican origin, even in an urban, non-border area with reportedly excellent birth and death registration.
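The three rates compared in the abstract are simple cohort ratios per 1,000 live births. A minimal sketch of how they relate (the death counts below are invented for illustration; only the cohort size comes from the abstract):

```python
def rate_per_1000(deaths, live_births):
    """Deaths per 1,000 live births."""
    return 1000.0 * deaths / live_births

def mortality_rates(live_births, neonatal_deaths, postneonatal_deaths):
    """Return (IMR, NMR, PNMR). Infant deaths are the sum of neonatal
    deaths (<28 days) and postneonatal deaths (28 days to 1 year)."""
    imr = rate_per_1000(neonatal_deaths + postneonatal_deaths, live_births)
    nmr = rate_per_1000(neonatal_deaths, live_births)
    pnmr = rate_per_1000(postneonatal_deaths, live_births)
    return imr, nmr, pnmr

# Cohort size from the abstract; death counts are hypothetical.
imr, nmr, pnmr = mortality_rates(live_births=68_584,
                                 neonatal_deaths=700,
                                 postneonatal_deaths=300)
```

By construction the IMR decomposes exactly into NMR plus PNMR, which is why underreported postneonatal deaths depress both the PNMR and the overall IMR.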
Abstract:
The Atlantic coast of Buenos Aires Province currently faces numerous environmental problems, deriving mainly from a process of territorial organization that unfolded in parallel with the development and growing massification of tourism in Argentina. The absence of planning for the expansion of this activity, and for the impact it might have in the future, overridden by the logic of capital accumulation, has triggered various conflicts, among them coastal erosion. This paper addresses the problem in the town of Mar del Tuyú, administrative seat of the Partido de la Costa, focusing on how the process of organizing the coastal space was carried out, in order to untangle the territorial legacies that have induced the problem.
Abstract:
For years, various indices of seasonal West African precipitation have served as useful predictors of the overall tropical cyclone activity in the Atlantic Ocean. Since the mid-1990s, the correlation unexpectedly deteriorated. In the present study, statistical techniques are developed to describe the nonstationary nature of the correlations between annual measures of Atlantic tropical cyclone activity and three selected West African precipitation indices (namely, western Sahelian precipitation in June-September, central Sahelian precipitation in June-September, and Guinean coastal precipitation in the preceding year's August-November period). The correlations between these parameters are found to vary over the period from 1921 to 2007 on a range of time scales. Additionally, considerable year-to-year variability in the strength of these correlations is documented by selecting subsamples of years with respect to various meteorological factors. Broadly, in years when the environment in the main development region is generally favorable for enhanced tropical cyclogenesis (e.g., when sea surface temperatures are high, when there is relatively little wind shear through the depth of the troposphere, or when the relative vorticity in the midtroposphere is anomalously high), the correlations between indices of West African monsoon precipitation and Atlantic tropical cyclone activity are considerably weaker than in years when the overall conditions in the region are less conducive. Other more remote climate parameters, such as the phase of the Southern Oscillation, are less effective at modulating the nature of these interactions.
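The nonstationary correlation analysis described above amounts to recomputing a correlation coefficient over sliding subperiods of the record. A synthetic-data sketch of that idea (not the authors' code or data; the decaying coupling mimics a relationship that weakens late in the record):

```python
import math
import random

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def rolling_correlation(x, y, window):
    """Correlation recomputed over each `window`-year span, so changes in
    the strength of the relationship over time become visible."""
    return [pearson(x[i:i + window], y[i:i + window])
            for i in range(len(x) - window + 1)]

# Synthetic illustration: a precipitation-like index and an activity-like
# index whose coupling decays over an 87-year record (e.g. 1921-2007).
rng = random.Random(0)
n = 87
precip = [rng.gauss(0, 1) for _ in range(n)]
coupling = [1.0 - 0.8 * i / (n - 1) for i in range(n)]
activity = [c * p + 0.5 * rng.gauss(0, 1) for c, p in zip(coupling, precip)]
r = rolling_correlation(precip, activity, window=21)
```

With a 21-year window the result is a series of 67 correlation values; a real analysis would additionally subsample years by environmental conditions, as the study does.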
Abstract:
The predictable in situ production of 230Th from the decay of uranium in seawater, and its subsequent removal by scavenging onto falling particles, provides a valuable tool for normalizing fluxes to the seafloor. We describe a new application: determination of the 232Th that dissolves in the water column and is removed to the seafloor. 232Th is supplied to the ocean in continental minerals, dissolution of which leads to a measurable standing stock in the water column. Sedimentary adsorbed 232Th/230Th ratios have the potential to provide a proxy for estimating the amount of dissolved material that enters the ocean, both today and in the past. Ten core top samples were treated with up to eight different leaching techniques in order to determine the best method for separating adsorbed from lattice-bound thorium. In addition, separate components of the sediments were analyzed to test whether clay dissolution was an important contribution to the final measurement. There was no systematic correlation between the strength of acid used in the leach and the measured 232Th/230Th ratios. In all cases clean foraminifera produced the same ratio as leaches on bulk sediment. In three out of five samples, leaches performed on non-carbonate detritus in the <63 µm size fraction were also identical. Without additional water column data it is not yet clear whether there is a simple one-to-one correlation between the expected deep-water 232Th/230Th and that produced by leaching, especially in carbonate-rich sediments. However, higher ratios, and associated high adsorbed 232Th fluxes, were observed in areas with high expected detrital inputs. The adsorbed fraction was ~35-50% of the total 232Th in seven out of ten samples. Our 230Th-normalized 232Th fluxes are reasonable in comparison with global estimates of detrital inputs to the ocean.
In nine cases out of ten, the total 230Th-normalized 232Th flux is greater than predicted from the annual dust fall at each specific location, but lower than the average global detrital input from all sources.
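230Th normalization rests on the assumption that the flux of scavenged 230Th to the seafloor equals its known production in the overlying water column. A minimal sketch of the resulting flux formula, using the canonical production constant but otherwise illustrative inputs (not values from this study):

```python
# Canonical production rate of 230Th from uranium decay in seawater,
# per unit volume of the water column.
BETA = 0.0267  # dpm m^-3 yr^-1

def th230_normalized_flux(component_conc, xs_th230_activity, water_depth_m,
                          beta=BETA):
    """230Th-normalized flux of a sediment component (e.g. adsorbed 232Th):
    F = conc * beta * z / xs230Th.
    component_conc:     concentration per unit sediment mass (e.g. ng/g)
    xs_th230_activity:  excess 230Th activity of the sediment (dpm/g)
    water_depth_m:      water depth z, which sets the integrated production
    Returns flux in (concentration units) m^-2 yr^-1."""
    return component_conc * beta * water_depth_m / xs_th230_activity

# Hypothetical core-top values for illustration only:
flux = th230_normalized_flux(component_conc=0.5,
                             xs_th230_activity=5.0,
                             water_depth_m=4000)
```

The formula makes explicit why the normalized flux scales linearly with water depth: deeper sites accumulate proportionally more 230Th from production above them.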
Abstract:
Estuarine organisms are exposed to periodic strong fluctuations in seawater pH driven by biological carbon dioxide (CO2) production, which may in the future be further exacerbated by the ocean acidification associated with the global rise in CO2. Calcium carbonate-producing marine species such as mollusks are expected to be vulnerable to acidification of estuarine waters, since elevated CO2 concentration and lower pH lead to a decrease in the degree of saturation of water with respect to calcium carbonate, potentially affecting biomineralization. Our study demonstrates that the increase in CO2 partial pressure (pCO2) in seawater and associated decrease in pH within the environmentally relevant range for estuaries have negative effects on physiology, rates of shell deposition and mechanical properties of the shells of eastern oysters Crassostrea virginica (Gmelin). High CO2 levels (pH ~7.5, pCO2 ~3500 µatm) caused significant increases in juvenile mortality rates and inhibited both shell and soft-body growth compared to the control conditions (pH ~8.2, pCO2 ~380 µatm). Furthermore, elevated CO2 concentrations resulted in higher standard metabolic rates in oyster juveniles, likely due to the higher energy cost of homeostasis. The high CO2 conditions also led to changes in the ultrastructure and mechanical properties of shells, including increased thickness of the calcite laths within the hypostracum and reduced hardness and fracture toughness of the shells, indicating that elevated CO2 levels have negative effects on the biomineralization process. These data strongly suggest that the rise in CO2 can impact physiology and biomineralization in marine calcifiers such as eastern oysters, threatening their survival and potentially leading to profound ecological and economic impacts in estuarine ecosystems.
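The saturation-state argument in the abstract can be made concrete: Ω = [Ca2+][CO3^2−]/K'sp, with dissolution thermodynamically favored when Ω < 1. The sketch below uses rough, order-of-magnitude seawater values (assumptions for illustration, not measurements from the study):

```python
def saturation_state(ca_mol_per_kg, co3_mol_per_kg, ksp_star):
    """Omega = [Ca2+][CO3^2-] / K'sp. Omega > 1: supersaturated
    (precipitation favored); Omega < 1: undersaturated (dissolution)."""
    return ca_mol_per_kg * co3_mol_per_kg / ksp_star

# Illustrative values (assumptions):
KSP_CALCITE = 4.3e-7   # mol^2 kg^-2, approximate stoichiometric K'sp for calcite
CA = 0.0103            # mol/kg; Ca2+ is nearly conservative in seawater

# Elevated pCO2 shifts the carbonate system toward HCO3-, lowering [CO3^2-]:
omega_ambient = saturation_state(CA, 2.0e-4, KSP_CALCITE)   # pH ~8.2 case
omega_high_co2 = saturation_state(CA, 1.5e-5, KSP_CALCITE)  # low-pH estuarine case
```

Because [Ca2+] is effectively constant, the CO2-driven drop in [CO3^2−] alone carries the water from supersaturation toward undersaturation, the condition the abstract links to impaired biomineralization.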
Abstract:
Anthropogenic elevation of atmospheric carbon dioxide (pCO2) is making the oceans more acidic, thereby reducing their degree of saturation with respect to calcium carbonate (CaCO3). There is mounting concern over the impact that future CO2-induced reductions in the CaCO3 saturation state of seawater will have on marine organisms that construct their shells and skeletons from this mineral. Here, we present the results of 60 d laboratory experiments in which we investigated the effects of CO2-induced ocean acidification on calcification in 18 benthic marine organisms. Species were selected to span a broad taxonomic range (crustacea, cnidaria, echinoidea, rhodophyta, chlorophyta, gastropoda, bivalvia, annelida) and included organisms producing aragonite, low-Mg calcite, and high-Mg calcite forms of CaCO3. We show that 10 of the 18 species studied exhibited reduced rates of net calcification and, in some cases, net dissolution under elevated pCO2. However, in seven species, net calcification increased under the intermediate and/or highest levels of pCO2, and one species showed no response at all. These varied responses may reflect differences amongst organisms in their ability to regulate pH at the site of calcification, in the extent to which their outer shell layer is protected by an organic covering, in the solubility of their shell or skeletal mineral, and whether they utilize photosynthesis. Whatever the specific mechanism(s) involved, our results suggest that the impact of elevated atmospheric pCO2 on marine calcification is more varied than previously thought.
Abstract:
Surface wave tomography, using the fundamental Rayleigh wave velocities and those of higher modes between 1 and 4 and periods between 50 and 160 s, is used to image structures with a horizontal resolution of ~250 km and a vertical resolution of ~50 km to depths of ~300 km in the mantle. A new model, PM_v2_2012, obtained from 3×10**6 seismograms, agrees well with earlier lower resolution models. It is combined with temperature estimates from oceanic plate models and with pressure and temperature estimates from the mineral compositions of garnet peridotite nodules to generate a number of estimates of SV(P,T) based on geophysical and petrological observations alone. These are then used to estimate the unrelaxed shear modulus and its derivatives with respect to pressure and temperature, which agree reasonably with values from laboratory experiments. At high temperatures relaxation occurs, causing the shear wave velocity to depend on frequency. This behaviour is parameterised using a viscosity to obtain a Maxwell relaxation time. The relaxation behaviour is described using a dimensionless frequency, which depends on an activation energy E and volume Va. The values of E and Va obtained from the geophysical models agree with those from laboratory experiments on high temperature creep. The resulting expressions are then used to determine the lithospheric thickness from the shear wave velocity variations. The resolution is improved by about a factor of two with respect to earlier models, and clearly resolves the thick lithosphere beneath active intracontinental belts that are now being shortened. The same expressions allow the three dimensional variations of the shear wave attenuation and viscosity to be estimated.
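The relaxation parameterisation described above combines an Arrhenius-type viscosity with a Maxwell relaxation time τ = η/μ. A numerical sketch (the values of A, E, and Va below are placeholders of the order reported for olivine creep, not the paper's fitted parameters):

```python
import math

R_GAS = 8.314  # J mol^-1 K^-1

def arrhenius_viscosity(T_kelvin, P_pascal, A=1.0e9, E=400e3, Va=1.0e-5):
    """eta = A * exp((E + P*Va) / (R*T)).
    E is the activation energy (J/mol), Va the activation volume (m^3/mol);
    A, E, Va here are illustrative, olivine-like magnitudes."""
    return A * math.exp((E + P_pascal * Va) / (R_GAS * T_kelvin))

def maxwell_time(eta, mu):
    """Maxwell relaxation time tau = eta / mu (shear modulus mu in Pa).
    Seismic waves with periods near tau sample relaxed, frequency-dependent
    shear velocities; much shorter periods see the unrelaxed modulus."""
    return eta / mu

eta_1600 = arrhenius_viscosity(T_kelvin=1600.0, P_pascal=5.0e9)
tau = maxwell_time(eta_1600, mu=70.0e9)  # ~70 GPa unrelaxed shear modulus
```

The dimensionless frequency used in the paper is essentially the seismic angular frequency scaled by this τ, which is why lateral variations in inferred viscosity map directly into variations in attenuation.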
Abstract:
Five sites were drilled along a transect of the Walvis Ridge. The basement rocks range in age from 69 to 71 m.y., and the deeper sites are slightly younger, in agreement with the sea-floor-spreading magnetic lineations. Geophysical and petrological evidence indicates that the Walvis Ridge was formed at a mid-ocean ridge at anomalously shallow elevations. The basement complex, associated with the relatively smooth acoustic basement in the area, consists of pillowed basalt and massive flows alternating with nannofossil chalk and limestone that contain a significant volcanogenic component. Basalts are quartz tholeiites at the ridge crest and olivine tholeiites downslope. The sediment sections are dominated by carbonate oozes and chalks with volcanogenic material common in the lower parts of the sediment columns. The volcanogenic sediments probably were derived from sources on the Walvis Ridge. Paleodepth estimates based on the benthic fauna are consistent with a normal crustal-cooling rate of subsidence of the Walvis Ridge. The shoalest site in the transect sank below sea level in the late Paleocene, and benthic fauna suggest a rapid sea-level lowering in the mid-Oligocene. Average accumulation rates during the Cenozoic indicate three peaks in the rate of supply of carbonate to the sea floor, that is, early Pliocene, late middle Miocene, and late Paleocene to early Eocene. Carbonate accumulation rates for the rest of the Cenozoic averaged 1 g/cm**2/kyr. Dissolution had a marked effect on sediment accumulation in the deeper sites, particularly during the late Miocene, Oligocene, and middle to late Eocene. Changes in the rates of accumulation as a function of depth demonstrate that the upper part of the water column had a greater degree of undersaturation with respect to carbonate during times of high productivity. Even when the calcium carbonate compensation depth (CCD) was below 4400 m, a significant amount of carbonate was dissolved at the shallower sites. 
The flora and fauna of the Walvis Ridge are temperate in nature. Warmer-water faunas are found in the uppermost Maastrichtian and lower Eocene sediments, with cooler-water faunas present in the lower Paleocene, Oligocene, and middle Miocene. The boreal elements of the lower Pliocene are replaced by more temperate forms in the middle Pliocene. The Cretaceous-Tertiary boundary was recovered in four sites drilled, with the sediments containing well-preserved nannofossils but poorly preserved foraminifera.
Abstract:
Critical infrastructures support everyday activities in modern societies, facilitating the exchange of services and quantities of various kinds. Their functioning is the result of the integration of diverse technologies, systems and organizations into a complex network of interconnections. Benefits from networking are accompanied by new threats and risks. In particular, because of the increased interdependency, disturbances and failures may propagate and destabilize the whole infrastructure network. This paper presents a methodology for the resilience analysis of networked systems of systems. Resilience generalizes the concept of the stability of a system around a state of equilibrium with respect to a disturbance, encompassing the system's ability to prevent, resist, and recover from it. The methodology provides a tool for the analysis of off-equilibrium conditions that may occur in a single system and propagate through the network of dependencies. The analysis is conducted in two stages. The first stage is qualitative: it identifies the resilience scenarios, i.e. the sequences of events, triggered by an initial disturbance, which include failures and the system response. The second stage is quantitative: the most critical scenarios can be simulated, for the desired parameter settings, in order to check whether they are successfully handled, i.e. recovered to nominal conditions, or whether they end in network-wide failure. The proposed methodology aims at providing effective support to resilience-informed design.
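The qualitative first stage, identifying which systems an initial disturbance can reach, can be sketched as a worst-case reachability computation over the dependency network (the system names below are invented for illustration; a real scenario analysis would also model the response and recovery at each step):

```python
from collections import deque

def affected_systems(dependents, disturbed):
    """dependents maps each system to the systems that depend on it.
    Worst-case assumption: a failure propagates to every dependent,
    transitively. Returns the set of systems a disturbance can reach."""
    reached, queue = {disturbed}, deque([disturbed])
    while queue:
        node = queue.popleft()
        for d in dependents.get(node, ()):
            if d not in reached:
                reached.add(d)
                queue.append(d)
    return reached

# Hypothetical dependency network: water and telecom depend on power,
# banking depends on telecom.
grid = {"power": ["water", "telecom"], "telecom": ["banking"], "water": []}
scenario = affected_systems(grid, "power")
```

This enumerates the scenario's scope; the quantitative second stage would then simulate each reached system's dynamics to decide whether it recovers to nominal conditions.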
Abstract:
The Universidad Politécnica de Madrid (UPM) and the Università degli Studi di Firenze (UniFi), under the technical coordination of AMPHOS21, have participated since 2009 in the research project "Strategies for Monitoring CO2 and Other Gases in the Study of Natural Analogues", funded by the Fundación Ciudad de la Energía (CIUDEN) within the framework of the Compostilla Project OXYCFB300 (http://www.compostillaproject.eu) of the European Energy Program for Recovery (EEPR). The main objective of the project was to develop and fine-tune surface monitoring methodologies for the surveillance and control of sites where geological storage of CO2 is carried out, analyzing techniques able to detect and quantify possible CO2 leaks to the atmosphere. The work was carried out both in natural analogues (Spanish and Italian) and at the Hontomín CO2 Storage Technology Development Plant (TDP). The techniques analyzed focus on measurements of gases and of surface waters (runoff and springs). For the gases, the CO2 flux emanating from the soil to the atmosphere was analyzed, together with the applicability of natural tracers (such as radon) for detecting and identifying CO2 leaks. For the water chemistry, geochemical and isotopic data and the gases dissolved in the waters around the Hontomín TDP were analyzed in order to determine which parameters are most appropriate for detecting a possible migration of the injected CO2, or of the brine, to the surface environment. The CO2 flux measurements were made with the accumulation chamber technique. Although this technique has been developed and applied in different scientific fields, it proved necessary to adapt a measurement and data-analysis protocol to the specific characteristics of CO2 capture and storage (CCS) projects, where the expected CO2 fluxes are low and, should a leak occur, small variations in flux must be detected against a high "noise" level in the signal caused by biological activity in the soil. The accumulation chamber measurement can be performed either without cleaning the surface where the chamber is placed, or after cleaning it and waiting for the flux to re-equilibrate following the disturbance to the system. The results obtained after cleaning and waiting show less dispersion, however, indicating that this procedure is the better one for monitoring geological CO2 storage complexes. The resulting measurement protocol, used to obtain the CO2 flux baseline at Hontomín, follows these steps: a) the measurement point is prepared with a spatula, cleaning away the vegetation cover or the first compact layer of soil; b) a waiting period allows the gas flux to re-equilibrate after the disturbance of the soil; and c) the CO2 flux is measured. Once the CO2 flux has been measured and any anomalous zones identified, the amount of CO2 escaping to the atmosphere (total output) must be estimated in order to quantify the possible leak. A wide range of methodologies exists for this estimation, and it is necessary to understand which are the most appropriate for obtaining the most representative value for the system. This thesis compares six statistical techniques: the arithmetic mean, the unbiased estimator of the mean (applying Sichel's function), resampling with replacement (bootstrap), separation into different populations by graphical methods and by maximum-likelihood criteria, and sequential Gaussian simulation.
For this analysis, eight sampling campaigns were carried out, both at the Hontomín TDP and in natural analogues (Italian and Spanish). The results show that sequential Gaussian simulation is usually the most accurate method, although there are cases in which other methods are more appropriate. A decision procedure was therefore developed to select the method providing the best estimator. The procedure begins with a variographic analysis: if there is autocorrelation among the data, modeled by the variogram, the best technique for calculating the total output and its confidence interval is sequential Gaussian simulation (sGs). If the data are independent, the sample distribution must be checked, applying the arithmetic mean or the unbiased estimator of the mean (Sichel) for normal or lognormal data, respectively. When the data are neither normal nor lognormal, or correspond to a mixture of populations, the best estimation technique is resampling with replacement (bootstrap). Following this procedure, the maximum width of the confidence interval was on the order of ±20-25%, with most values between ±3.5% and ±8%. Identifying the different sample populations in the CO2 flux data can help in interpreting the results, since the distribution is affected by the presence of several geochemical processes, for example a geological versus a biological source of the CO2. This analysis can thus be a useful tool in the monitoring program, whose main objective is to demonstrate that there are no leaks from the reservoir to the atmosphere and, should they occur, to detect and quantify them. The results show that populations are best separated using maximum-likelihood criteria; graphical procedures, although guidelines exist for applying them, involve a degree of subjectivity in their interpretation, so their results are less reproducible.
During the thesis the relationship between CO2 and the radon isotopes (222Rn and 220Rn) was analyzed in natural analogues; in all CO2 emission zones a positive relationship was detected between the 222Rn concentration in soil air and the CO2 flux. The relationship between the 220Rn concentration and the CO2 flux is less clear: in some cases it increases and in others a decrease is detected, which appears to be related to the depth of origin of the radon. These results support the possible application of the radon isotopes as tracers of the origin of the gases and their use in leak detection. Regarding the determination of the CO2 flux baseline at the Hontomín TDP, accumulation chamber measurements were made in the vicinity of the oil exploration boreholes drilled in the 1980s and designated H-1, H-2, H-3, and H-4, in the zone where the injection well (H-I) and the monitoring well (H-A) will be installed, and near the southern fault. From November 2009 to April 2011, seven sampling campaigns were carried out, acquiring more than 4,000 CO2 flux records from which the baseline and its seasonal variation were determined. The values obtained were low (mean values between 5 and 13 g·m-2·d-1), with few anomalous values, mainly near borehole H-2. These values could not, however, be associated with a deep source of CO2 and were most likely related to biological processes such as soil respiration. No anomalous values were detected near the fracture system (Ubierna fault), where flux values are as low as at the other sampling points.
The CO2 flux values thus appear to be controlled by biological activity, corroborated by the lowest values occurring during the autumn-winter months and increasing in the warm periods. Two sets of reference values were calculated. The first (UCL50) is 5 g·m-2·d-1 for non-ploughed zones in the autumn-winter months, and 3.5 and 12 g·m-2·d-1 in spring-summer for ploughed and non-ploughed zones, respectively. The second (UCL99) corresponds to 26 g·m-2·d-1 during the autumn-winter months in non-ploughed zones, and 34 and 42 g·m-2·d-1 for the spring-summer months in ploughed and non-ploughed zones, respectively. Fluxes above these reference values could indicate a possible leak during and after injection. The first geochemical and isotopic data for the surface waters (runoff and springs) of the Hontomín-Huermeces area were also analyzed. The data suggest that the waters studied are meteoric waters with a shallow hydrogeological circuit, characterized by relatively low TDS values (below 800 mg/L) and a Ca2+(Mg2+)-HCO3− hydrogeochemical facies. Some spring waters show elevated NO3− concentrations (up to 123 mg/L), suggesting anthropogenic contamination. Anomalous concentrations of Cl−, SO42−, As, B, and Ba were obtained in two springs near the oil boreholes and in the Ubierna river; these components are probably indicators of possible mixing between the deep and shallow aquifers. The study of the gases dissolved in the waters also evidences their shallow circuit, generally dominated by the atmospheric components (N2, O2, and Ar). In some cases, however, the predominant gas was CO2 (with concentrations reaching 63% v/v), although the carbon isotope values (<−17.7‰) show that it is most probably of biological origin. The geochemical and isotopic data for the surface waters of the Hontomín area can be taken as the background against which to compare during the operational, closure, and post-closure phases. In this regard, the major- and trace-element composition, the carbon isotope composition of the dissolved CO2 and of the TDIC (total dissolved inorganic carbon), and certain trace elements can be considered suitable parameters for detecting the migration of CO2 to the surface environment.
ABSTRACT
Since 2009, a group made up of the Universidad Politécnica de Madrid (UPM; Spain) and the Università degli Studi di Firenze (UniFi; Italy) has been taking part in a joint project called "Strategies for Monitoring CO2 and other Gases in Natural Analogues". The group was coordinated by AMPHOS XXI, a private company established in Barcelona. The project was financially supported by Fundación Ciudad de la Energía (CIUDEN; Spain) as part of the EC-funded OXYCFB300 project (European Energy Program for Recovery, EEPR; www.compostillaproject.eu). The main objective of the project was to develop and optimize analytical methodologies to be applied at the surface to monitor and verify geologically stored carbon dioxide. These techniques are oriented to detect and quantify possible CO2 leakages to the atmosphere. Investigations were made in natural analogues in Spain and Italy and at the Technological Development Plant for CO2 injection at Hontomín (Burgos, Spain). The techniques studied focused mainly on measurements of diffuse soil gases and of surface and shallow waters.
The soil-gas measurements included the determination of the CO2 flux and the application of natural tracer gases (e.g. radon) that may help to detect any CO2 leakage. As far as the water chemistry is concerned, geochemical and isotopic data for surface and spring waters and dissolved gases in the area of the Hontomín TDP were analyzed to determine the most suitable parameters to trace the migration of the injected CO2 into the near-surface environment. The accumulation chamber method was used to measure the diffuse emission of CO2 at the soil-atmosphere interface. Although this technique has been widely applied in different scientific areas, it was considered essential to adapt the methodology for measuring the CO2 soil flux, and for estimating the total CO2 output, to the specific features of the site where CO2 is to be stored. During the pre-injection phase CO2 fluxes are expected to be relatively low, whereas in the injection and post-injection phases, should leakage occur, small variations in CO2 flux must be detected against the "noise" produced by the biological activity of the soil (soil respiration). CO2 flux measurements by the accumulation chamber method can be performed with or without prior vegetation clearance. The results obtained after clearance show less dispersion, suggesting that this procedure is more suitable for monitoring CO2 storage sites. The measurement protocol applied for the determination of the CO2 flux baseline at Hontomín included the following steps: a) cleaning and removal of both the vegetal cover and the top 2 cm of soil, b) waiting to reduce the flux perturbation due to the soil removal, and c) measuring the CO2 flux. Once the CO2 flux measurements were completed and any anomalous zones identified, the total CO2 output was estimated to quantify the amount of CO2 released to the atmosphere in each of the studied areas.
There is a wide range of methodologies for estimating the CO2 output, and it is necessary to understand which gives the most representative value. In this study six statistical methods are presented: the arithmetic mean, the minimum-variance unbiased estimator, bootstrap resampling, partitioning of the data into different populations using graphical and maximum-likelihood procedures, and sequential Gaussian simulation. Eight campaigns were carried out at the Hontomín CO2 Storage Technology Development Plant and in natural CO2 analogues. The results show that sequential Gaussian simulation is generally the most accurate method for estimating the total CO2 output and its confidence interval, although in some cases other methods are more appropriate. As a consequence, a procedure for selecting the most suitable method was developed. The first step in estimating the total emanation rate is a variogram analysis. If the autocorrelation among the data can be modeled with the variogram, the best technique to calculate the total CO2 output and its confidence interval is the sequential Gaussian simulation method (sGs). If the data are independent, their distribution must be analyzed: for normal and log-normal distributions the proper methods are the arithmetic mean and the minimum-variance unbiased estimator, respectively. If the data are neither normal nor log-normal, or are a mixture of different populations, the best approach is bootstrap resampling. Following these steps, the maximum confidence interval was about ±20-25%, with most values between ±3.5% and ±8%. Partitioning the CO2 flux data into different populations may help to interpret the data, since their distribution can be affected by different geochemical processes, e.g. geological or biological sources of CO2.
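One of the six estimators compared here, bootstrap resampling with replacement, is straightforward to sketch. The flux values below are invented for illustration, and the percentile method shown is one common variant, not necessarily the exact procedure used in the thesis:

```python
import random
import statistics

def bootstrap_mean_ci(data, n_boot=2000, alpha=0.05, seed=42):
    """Percentile bootstrap confidence interval for the mean.
    Resamples the data with replacement n_boot times, recomputes the
    mean of each resample, and takes the alpha/2 and 1-alpha/2 quantiles."""
    rng = random.Random(seed)
    means = sorted(
        statistics.fmean(rng.choices(data, k=len(data))) for _ in range(n_boot)
    )
    lo = means[int(alpha / 2 * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return statistics.fmean(data), (lo, hi)

# Hypothetical CO2 fluxes in g m^-2 d^-1, with one outlier to mimic a
# mixed population (not survey data):
fluxes = [4.2, 5.1, 6.3, 3.9, 5.8, 7.4, 4.6, 5.2, 12.8, 4.9]
mean, (lo, hi) = bootstrap_mean_ci(fluxes)
```

Because it makes no distributional assumption, the bootstrap remains usable when the data are neither normal nor log-normal, which is exactly the case the selection procedure reserves it for.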
Consequently, it may be an important tool in a CCS monitoring program, where the main goal is to demonstrate that there is no leakage from the reservoir to the atmosphere and, if leakage occurs, to detect and quantify it. Results show that the partitioning of populations is better performed by maximum likelihood criteria, since graphical procedures involve a degree of subjectivity in the interpretation and their results may not be reproducible. The relationship between CO2 flux and radon isotopes (222Rn and 220Rn) was studied in natural analogues. In all emission zones, a positive relation between 222Rn and CO2 was observed. The relationship between 220Rn activity and CO2 flux, however, is not clear: in some cases the 220Rn activity increased with the CO2 flux, while in other measurements a decrease was recognized. We can speculate that this effect is related to the route (deep or shallow) of the radon source. These results may confirm the possible use of the radon isotopes as tracers of the gas origin and their application in the detection of leakages. With respect to the CO2 flux baseline at the TDP of Hontomín, soil flux measurements were performed with an accumulation chamber in the vicinity of the oil boreholes drilled in the 1980s (named H-1 to H-4) and of the injection and monitoring wells. Seven surveys were carried out from November 2009 to summer 2011, and more than 4,000 measurements were used to determine the baseline flux of CO2 and its seasonal variations. The measured values were relatively low (from 5 to 13 g·m-2·day-1) and few outliers were identified, mainly located close to the H-2 oil well. Nevertheless, these values cannot be associated with a deep source of CO2 and are more likely related to biological processes, i.e. soil respiration. No anomalies were recognized close to the deep fault system (Ubierna Fault) detected by geophysical investigations; there, the CO2 flux is as low as at the other measurement stations.
CO2 fluxes appear to be controlled by the biological activity, since the lowest values were recorded during the autumn-winter seasons and they tend to increase in warm periods. Two sets of reference CO2 flux values were calculated: UCL50 values of 5 g·m-2·d-1 for non-ploughed areas in autumn-winter, and 3.5 and 12 g·m-2·d-1 for ploughed and non-ploughed areas, respectively, in spring-summer; and UCL99 values of 26 g·m-2·d-1 for non-ploughed areas in autumn-winter, and 34 and 42 g·m-2·d-1 for ploughed and non-ploughed areas, respectively, in spring-summer. Fluxes higher than these reference values could be indicative of possible leakage during the operational and post-closure stages of the storage project. The first geochemical and isotopic data on surface and spring waters and dissolved gases in the area of Hontomín–Huermeces (Burgos, Spain) are presented and discussed. The chemical features of the spring waters suggest that they are related to a shallow hydrogeological system, as the concentration of Total Dissolved Solids approaches 800 mg/L with a Ca2+(Mg2+)-HCO3− composition, similar to that of the surface waters. Some spring waters are characterized by relatively high concentrations of NO3− (up to 123 mg/L), unequivocally suggesting an anthropogenic source. Anomalous concentrations of Cl−, SO42−, As, B and Ba were measured in two springs discharging a few hundred meters from the oil wells, and in the Rio Ubierna; these contents are possibly indicative of mixing between deep and shallow aquifers. The chemistry of the dissolved gases also evidences the shallow circuits of the Hontomín–Huermeces waters, mainly characterized by an atmospheric source, as highlighted by the contents of N2, O2 and Ar and their relative ratios. Nevertheless, significant concentrations (up to 63% by vol.) of isotopically negative CO2 (<−17.7‰ V-PDB) were found in some water samples, likely related to a biogenic source.
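A seasonal threshold of the UCL kind can be sketched from a log-normal fit to the baseline fluxes: the reference value is the back-transformed mean of the log data plus a chosen number of standard deviations. This is a schematic of the idea only; the baseline sample below is synthetic and the z-values are the standard normal quantiles, not the study's exact statistics.

```python
import numpy as np

# Upper reference values (sketch): if baseline fluxes are roughly log-normal,
# a seasonal threshold such as UCL50 or UCL99 can be taken from the fitted
# distribution; measured fluxes above it flag a possible leak.
# Synthetic baseline data; illustrative only.

def ucl(fluxes, percentile):
    """Upper limit from a log-normal fit to baseline flux data (g m^-2 d^-1)."""
    log_f = np.log(fluxes)
    z = {50: 0.0, 95: 1.645, 99: 2.326}[percentile]   # standard normal quantiles
    return float(np.exp(log_f.mean() + z * log_f.std(ddof=1)))

rng = np.random.default_rng(0)
baseline = rng.lognormal(mean=np.log(8.0), sigma=0.5, size=500)
print(ucl(baseline, 50), ucl(baseline, 99))           # median and 99% threshold
```

During operation, any survey value exceeding the seasonal UCL99 would be singled out for re-measurement and isotopic checks before being attributed to leakage.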
The geochemical and isotopic data on the surface and spring waters in the surroundings of Hontomín can be considered as background values for the intra- and post-injection monitoring programs. In this respect, the main and minor solutes, the carbon isotopic composition of dissolved CO2 and TDIC (Total Dissolved Inorganic Carbon), and selected trace elements can be considered useful parameters to trace the migration of the injected CO2 into near-surface environments.
Abstract:
An analytical expression is derived for the electron thermionic current from heated metals by using a non-equilibrium, modified Kappa energy distribution for electrons. This isotropic distribution characterizes the long high-energy tails in the electron energy spectrum for low values of the index κ and also accounts for the Fermi energy of the metal electrons. The limit for large κ recovers the classical equilibrium Fermi-Dirac distribution. The predicted electron thermionic current for low κ increases by four to five orders of magnitude with respect to the predictions of the equilibrium Richardson-Dushman current. The observed departures from this classical expression, also recovered for large κ, would correspond to moderate values of this index. The strong increments predicted for the thermionic emission currents suggest that, under appropriate conditions, materials with non-equilibrium electron populations would become more efficient electron emitters at low temperatures.
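The classical benchmark recovered in the large-κ limit can be evaluated directly. The sketch below computes the equilibrium Richardson-Dushman current density; the tungsten work function and temperature are standard textbook figures, not values from this paper, and the paper's modified-Kappa expression itself is not reproduced here.

```python
import math

# Richardson-Dushman thermionic emission (sketch): the equilibrium current
# density recovered in the large-kappa limit,
#   J = A * T^2 * exp(-W / (k_B * T))
# with A the Richardson constant and W the work function.

A_RICH = 1.20173e6     # Richardson constant, A m^-2 K^-2
KB_EV  = 8.617333e-5   # Boltzmann constant, eV/K

def richardson_dushman(T_kelvin, work_function_eV):
    """Equilibrium thermionic current density in A/m^2."""
    return A_RICH * T_kelvin**2 * math.exp(-work_function_eV / (KB_EV * T_kelvin))

# Tungsten cathode (W ~ 4.5 eV) at 2500 K
print(f"{richardson_dushman(2500.0, 4.5):.3e} A/m^2")
```

A low-κ distribution, by enhancing the high-energy tail above the work-function barrier, would multiply this equilibrium value by the four-to-five orders of magnitude quoted in the abstract.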
Abstract:
The depletion, absence or, simply, the uncertainty about the size of fossil fuel reserves, added to price volatility and the growing instability of the supply chain, create strong incentives for the development of alternative energy sources and carriers. The attractiveness of hydrogen as an energy carrier is very high in a context that also includes strong public concern about pollution and greenhouse gas emissions. Given its excellent environmental impact, public acceptance of the new energy carrier would depend, a priori, on the control of the risks associated with its handling and storage. Among these, an undeniable risk of explosion appears as the main drawback of this alternative fuel. This thesis investigates the numerical modeling of explosions in large volumes, focusing on the simulation of turbulent combustion in large computational domains where the achievable resolution is strongly limited. The introduction gives a general description of explosion processes and concludes that the restrictions on the resolution of the calculations make it necessary to model both turbulence and combustion. A critical review of the available methodologies for both turbulence and combustion follows, pointing out the strengths, deficiencies and suitability of each. This review concludes that, given the existing limitations, the only viable strategy for combustion modeling is the use of an expression describing the turbulent burning velocity as a function of various parameters.
Models of this type are known as turbulent flame speed models and allow closing a balance equation for the combustion progress variable. A further conclusion is that the most suitable approach to turbulence is to use different methodologies, LES or RANS, depending on the geometry and the resolution restrictions of each particular problem. Based on these findings, a combustion model is created within the framework of turbulent flame speed models. The proposed methodology is able to overcome the deficiencies of the available models in problems that must be computed at moderate or low resolution. In particular, the model uses a heuristic algorithm to prevent the growth of the flame thickness, a deficiency that burdened the well-known Zimont model. Under this approach, the emphasis of the analysis lies on the determination of the burning velocity, both laminar and turbulent. The laminar burning velocity is determined through a new formulation able to account for the simultaneous influence of the equivalence ratio, temperature, pressure and steam dilution on the laminar burning velocity. The formulation obtained is valid over a wider range of temperature, pressure and steam dilution than any previously available formulation. The turbulent burning velocity, in turn, can be obtained from correlations that express it as a function of various parameters. To select the most suitable formulation, the results obtained with several expressions were compared against experimental data.
The comparison shows that the equation due to Schmidt is the most suitable for the conditions of this study. The importance of flame instabilities in the propagation of combustion fronts is analyzed next. Their relevance is significant for fuel-lean mixtures in which the turbulence intensity remains moderate, conditions that are common in accidents at nuclear power plants. A model is therefore developed to estimate the effect of instabilities, and specifically of the acoustic-parametric instability, on the flame propagation speed. The modeling includes the mathematical derivation of the heuristic formulation of Bauwebs et al. for the calculation of the burning velocity enhancement due to flame instabilities, as well as an analysis of flame stability with respect to a cyclic perturbation. Finally, these results are combined to complete the model of the acoustic-parametric instability. After this phase, the research focused on applying the developed model to several problems of importance for industrial safety, analyzing the results and comparing them with the corresponding experimental data. Specifically, explosions in tunnels and in containers were simulated, with and without concentration gradients and venting. As a general result, the model is validated, confirming its suitability for these problems. As a final task, an in-depth analysis of the Fukushima-Daiichi catastrophe was carried out. The objective of the analysis is to determine the amount of hydrogen that exploded in reactor number one, in contrast with the other studies on the subject, which have focused on determining the amount of hydrogen generated during the accident.
As a result of the investigation, it was determined that the most probable amount of hydrogen consumed during the explosion was 130 kg. It is remarkable that the combustion of such a relatively small amount of hydrogen can cause such significant damage, which shows the importance of this type of research. The branches of industry for which the developed model will be of interest span the whole future hydrogen economy (fuel cells, vehicles, energy storage, etc.), with a particular impact on the transport and nuclear energy sectors, for both fission and fusion technologies. ABSTRACT The exhaustion, absolute absence or simply the uncertainty about the amount of fossil fuel reserves, added to the variability of their prices and the increasing instability and difficulties in the supply chain, are strong incentives for the development of alternative energy sources and carriers. The attractiveness of hydrogen in a context that additionally comprehends concerns about pollution and emissions is very high. Due to its excellent environmental impact, the public acceptance of the new energy carrier will depend on the risks associated with its handling and storage. Among these, the danger of a severe explosion appears as the major drawback of this alternative fuel. This thesis investigates the numerical modeling of large-scale explosions, focusing on the simulation of turbulent combustion in large domains where the achievable resolution is forcefully limited. In the introduction, a general description of the explosion process is undertaken. It is concluded that the restrictions on resolution make the modeling of the turbulence and combustion processes necessary. Subsequently, a critical review of the available methodologies for both turbulence and combustion is carried out, pointing out their strengths and deficiencies.
As a conclusion of this investigation, it appears clear that the only viable methodology for combustion modeling is the utilization of an expression for the turbulent burning velocity to close a balance equation for the combustion progress variable, a model of the turbulent flame speed kind. Likewise, depending on the geometry and the particular resolution restrictions of each problem, the use of different simulation methodologies, LES or RANS, is the most adequate solution for modeling the turbulence. Based on these findings, the candidate undertakes the creation of a combustion model in the framework of the turbulent flame speed methodology which is able to overcome the deficiencies of the available models for low-resolution problems. In particular, the model uses a heuristic algorithm to keep the thickness of the flame brush under control, a serious deficiency of the Zimont model. Under this approach, the emphasis of the analysis lies in the accurate determination of the burning velocity, both laminar and turbulent. On the one hand, the laminar burning velocity is determined through a newly developed correlation able to describe the simultaneous influence of the equivalence ratio, temperature, steam dilution and pressure on the laminar burning velocity. The formulation obtained is valid over a larger domain of temperature, steam dilution and pressure than any of the previously available formulations. On the other hand, a number of turbulent burning velocity correlations are available in the literature. To select the most suitable, they were compared with experiments and ranked, with the outcome that the formulation due to Schmidt was the most adequate for the conditions studied. Subsequently, the role of flame instabilities in the development of explosions is assessed. Their significance appears to be important for lean mixtures in which the turbulence intensity remains moderate.
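The structure of such a laminar burning velocity correlation can be sketched with a generic power-law form in which temperature, pressure and steam dilution enter as separate factors. The coefficients below are placeholders in the style of classical Metghalchi-Keck fits, not the thesis's own correlation or values.

```python
import math  # kept for symmetry with other sketches; the form below is algebraic

# Generic power-law laminar burning velocity correlation (sketch):
#   S_L = S_L0 * (T/T0)^alpha * (p/p0)^beta * (1 - gamma * x_H2O)
# S_L0 is the reference speed at (T0, p0); alpha, beta, gamma are fit
# coefficients. All numbers here are illustrative assumptions.

def laminar_speed(s_l0, T, p, x_h2o,
                  T0=298.0, p0=1.0e5,
                  alpha=1.75, beta=-0.2, gamma=2.0):
    """Laminar burning velocity in m/s under the assumed power-law form."""
    return s_l0 * (T / T0)**alpha * (p / p0)**beta * (1.0 - gamma * x_h2o)

# Reference mixture at ambient conditions, no steam dilution:
print(round(laminar_speed(2.0, 298.0, 1.0e5, 0.0), 2))  # -> 2.0 m/s
```

The point of the thesis's wider-range formulation is precisely that a single set of such coefficients holds over larger spans of temperature, pressure and dilution than earlier fits; this sketch only shows how the parameters combine.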
These conditions are typical of accidents at nuclear power plants. Therefore, the creation of a model to account for the instabilities, and concretely the acoustic-parametric instability, is undertaken. This includes the mathematical derivation of the heuristic formulation of Bauwebs et al. for the calculation of the burning velocity enhancement due to flame instabilities, as well as the analysis of the stability of flames with respect to a cyclic velocity perturbation. The results are combined to build a model of the acoustic-parametric instability. The following task in this research was to apply the developed model to several problems significant for industrial safety and to analyze the results and compare them with the corresponding experimental data. As part of this task, simulations of explosions in a tunnel and in large containers, with and without concentration gradients and venting, were carried out. As a general outcome, the validation of the model is achieved, confirming its suitability for the problems addressed. As a final undertaking, a thorough study of the Fukushima-Daiichi catastrophe was carried out. The analysis aims at determining the amount of hydrogen participating in the explosion that happened in reactor one, in contrast with other analyses centered on the amount of hydrogen generated during the accident. As an outcome of the research, it was determined that the most probable amount of hydrogen exploding during the catastrophe was 130 kg. It is remarkable that the combustion of such a small quantity of material can cause tremendous damage, an indication of the importance of these types of investigations. The industrial branches that can benefit from the applications of the model developed in this thesis include the whole future hydrogen economy, as well as nuclear safety in both fusion and fission technology.
Abstract:
Catalytic antibodies have shown great promise for catalyzing a tremendously diverse set of natural and unnatural chemical transformations. However, few catalytic antibodies have efficiencies that approach those of natural enzymes. In principle, random mutagenesis procedures such as phage display could be used to improve the catalytic activities of existing antibodies; however, such studies have been hampered by difficulties in the recombinant expression of antibodies. Here, we have grafted the antigen binding loops from a murine-derived catalytic antibody, 17E8, onto a human antibody framework in an effort to overcome the difficulties associated with recombinant expression and phage display of this antibody. “Humanized” 17E8 retained catalytic and hapten binding properties similar to those of the murine antibody, while levels of functional Fab displayed on phage were 200-fold higher than for a murine variable region/human constant region chimeric Fab. This construct was used to prepare combinatorial libraries, and affinity panning of these libraries resulted in the selection of variants with 2- to 8-fold improvements in binding affinity for a phosphonate transition-state analog. Surprisingly, none of the affinity-matured variants was more catalytically active than the parent antibody, and some were significantly less active. By contrast, a weaker-binding variant was identified with 2-fold greater catalytic activity, and incorporation of a single substitution (Tyr-100aH → Asn) from this variant into the parent antibody led to a 5-fold increase in catalytic efficiency. Thus, phage display methods can readily be used to optimize binding of catalytic antibodies to transition-state analogs and, when used in conjunction with limited screening for catalysis, can identify variants with higher catalytic efficiencies.