934 results for Capture probability
Abstract:
Prediction at ungauged sites is essential for water resources planning and management. Ungauged sites have no flood observations, but some site and basin characteristics are known. Regression models relate physiographic and climatic basin characteristics to flood quantiles, which can be estimated from observed data at gauged sites. However, these models assume linear relationships between variables, and prediction intervals are estimated from the variance of the residuals of the fitted model. Furthermore, the effect of uncertainties in the explanatory variables on the dependent variable cannot be assessed. This paper presents a methodology to propagate the uncertainties that arise in the process of predicting flood quantiles at ungauged basins with a regression model. In addition, Bayesian networks were explored as a feasible tool for predicting flood quantiles at ungauged sites. Owing to their probabilistic nature, Bayesian networks naturally account for uncertainty; they can capture non-linear relationships between variables and yield a probability distribution of discharges as a result. The methodology was applied to a case study in the Tagus basin in Spain.
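A minimal sketch of the kind of uncertainty propagation the abstract describes: a Monte Carlo pass through a log-linear regional regression, sampling both the uncertain basin descriptors and the residual error. The coefficients, descriptors and uncertainties below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fitted log-linear regional model: log Q100 = b0 + b1*log(A) + b2*log(P)
# (coefficients and residual variance are illustrative, not the paper's)
b0, b1, b2 = -1.2, 0.75, 1.4
sigma_res = 0.15                        # std. dev. of regression residuals (log space)

# Uncertain basin descriptors at the ungauged site (mean, std) -- assumed values
A = rng.normal(350.0, 20.0, 100_000)    # drainage area, km^2
P = rng.normal(650.0, 40.0, 100_000)    # mean annual precipitation, mm

# Propagate: sample predictors AND residual error, not residual variance alone
logQ = b0 + b1 * np.log(A) + b2 * np.log(P) + rng.normal(0.0, sigma_res, A.size)
Q = np.exp(logQ)                        # distribution of the flood quantile

q05, q50, q95 = np.percentile(Q, [5, 50, 95])
print(f"Q100 median {q50:.0f} m3/s, 90% interval [{q05:.0f}, {q95:.0f}] m3/s")
```

The result is the full predictive distribution of the discharge, which is what the Bayesian-network alternative also delivers directly.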
Abstract:
The aim of this research is to obtain the absorption rate of CO2 into an aqueous solution of N,N-dimethylethanolamine and into an aqueous solution of triethylenediamine, and to illustrate the present importance of CO2 absorption by discussing global warming and the greenhouse effect. The current situation of China is also discussed, focusing on the latest steps this country has recently taken. In the experimental part of this work, the two tertiary amine solutions absorb CO2 in a Lewis-type cell, and the pressure change is measured while the reactions take place. The temperature ranges between 35 and 70 degrees Celsius. The results for both solutions, at concentrations of 0.5 and 1.0 mol per liter, are discussed, and a single value of the rate constant is given for the first time along with some other parameters.
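In a Lewis cell of this kind, an observed rate constant can be recovered from the pressure decay. The sketch below fits a pseudo-first-order exponential decay to synthetic pressure data; the decay model, k value and pressures are assumptions for illustration, not the paper's measurements.

```python
import numpy as np

# Under pseudo-first-order conditions, CO2 uptake in a Lewis cell gives an
# exponential pressure decay toward equilibrium:
#   P(t) = P_eq + (P0 - P_eq) * exp(-k_obs * t)
# Synthetic illustration (all values are assumptions).
k_true, P0, P_eq = 0.02, 120.0, 40.0          # 1/s, kPa, kPa
t = np.linspace(0.0, 200.0, 101)              # time, s
P = P_eq + (P0 - P_eq) * np.exp(-k_true * t)  # "measured" pressures

# Linearize: ln[(P - P_eq)/(P0 - P_eq)] = -k_obs * t, then least-squares slope
y = np.log((P - P_eq) / (P0 - P_eq))
k_obs = -np.polyfit(t, y, 1)[0]
print(f"recovered k_obs = {k_obs:.4f} 1/s")
```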
Abstract:
Motivated by these difficulties, Castillo et al. (2012) made some suggestions on how to build consistent stochastic models, avoiding the selection of easy-to-use mathematical functions, which were replaced by functions resulting from a set of properties to be satisfied by the model.
Abstract:
Innovation in Software intensive Systems is becoming relevant for several reasons: software is embedded in many sectors such as automotive, robotics, mobile phones or health care. Firms need knowledge about the factors affecting innovation to increase the probability of success in their product development, and the assessment of innovation in software products is a powerful mechanism to capture this knowledge. Therefore, companies need to assess products from an innovation perspective to reduce the gap between their developed products and the market. This is even more relevant in the case of Software intensive Systems (SiSs), where real time, timeliness, complexity, interoperability, reactivity, and resource sharing are critical features of a new system. Many authors have analysed product innovation assessment and some schemas have been developed, but they are not specific to SiSs; in addition, there is no consensus about the factors or the procedures for performing an assessment. Therefore, it makes sense to work on the definition of a customized software product innovation evaluation framework. This thesis identifies the elements needed to build a framework to assess software products from the innovation perspective.
Two components have been identified as parts of the framework to assess Software intensive Systems from the innovation perspective: a reference model and an adaptive, customizable tool to perform the assessment and to position product innovation. The reference model is composed of four main elements characterizing product innovation assessment: concepts, innovation models, assessment questionnaires and product assessment. The reference model provides the umbrella to define instances of product innovation assessment models that can be assessed and positioned through questionnaires in the proposed tool, which also automates the assessment and positioning of innovation. The reference model has been rigorously built by applying conceptual modelling and view integration, together with qualitative research methods. The tool has been used to assess products such as Skype through models instantiated from the reference model.
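One way to picture an instance of such a reference model is as a questionnaire-driven scoring structure. The sketch below is purely illustrative: the element names, questions, weights and the weighted-average positioning rule are assumptions, not the thesis' actual model.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of one instantiated innovation-assessment model
# (names, weights and the scoring rule are illustrative assumptions).
@dataclass
class Question:
    text: str
    weight: float
    score: float = 0.0        # filled in during assessment, in [0, 1]

@dataclass
class InnovationModel:
    name: str
    questionnaire: list[Question] = field(default_factory=list)

    def position(self) -> float:
        """Weighted innovation score for one assessed product."""
        total = sum(q.weight for q in self.questionnaire)
        return sum(q.weight * q.score for q in self.questionnaire) / total

model = InnovationModel("SiS product innovation", [
    Question("Does the product open a new market segment?", 2.0, 0.8),
    Question("Does it improve interoperability over rivals?", 1.0, 0.5),
])
print(f"innovation position: {model.position():.2f}")   # 0.70
```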
Abstract:
In this paper we want to point out, by means of a case study, the importance of incorporating some knowledge engineering techniques into the processes of software engineering. Specifically, we are referring to knowledge eduction techniques. We know the difficulty of requirements acquisition and its importance for minimising the risks of a software project, both in the development phase and in the maintenance phase. Use cases are generally used to capture the functional requirements. However, as we will show in this paper, this technique is insufficient when the problem domain knowledge exists only in the experts' minds. In this situation, the combination of use cases with eduction techniques, in every development phase, will let us discover the correct requirements.
Abstract:
This paper proposes and analyzes the use of a nonrotating tethered system for a direct capture in Jovian orbit using the electrodynamic force generated along the cable. A detailed dynamical model is developed showing a strong gravitational and electrodynamic coupling between the center of mass and the attitude motions. This paper shows the feasibility of a direct capture in Jovian orbit of a rigid tethered system preventing the tether from rotating. Additional mechanical–thermal requirements are explored, and preliminary operational limits are defined to complete the maneuver. In particular, to ensure that the system remains nonrotating, a nominal attitude profile for a self-balanced electrodynamic tether is proposed, as well as a simple feedback control.
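The driving effect in such a maneuver is the Lorentz force on the current-carrying cable. The sketch below evaluates F = I (L × B) for an order-of-magnitude feel; the current, tether length and field strength are illustrative assumptions, not the paper's mission parameters.

```python
import numpy as np

# Order-of-magnitude sketch of the electrodynamic force on a tether,
# F = I * (L x B).  All numbers below are illustrative assumptions.
I = 4.0                                   # averaged tether current, A
L = np.array([0.0, 0.0, 50_000.0])        # tether length vector, m (50 km)
B = np.array([4e-7, 0.0, 0.0])            # local Jovian magnetic field, T

F = I * np.cross(L, B)                    # Lorentz force on the tether, N
print(f"|F| = {np.linalg.norm(F):.3f} N")
```

Even a sub-newton force, applied continuously along a capture arc, can remove the orbital energy needed to close the hyperbolic approach, which is the basic feasibility argument of tethered capture.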
Abstract:
In previous papers, the type-I intermittent phenomenon with continuous reinjection probability density (RPD) has been extensively studied. In this paper, however, type-I intermittency with a discontinuous RPD function in one-dimensional maps is analyzed. To carry out the present study, the analytic approximation presented by del Río and Elaskar (Int. J. Bifurc. Chaos 20:1185-1191, 2010) and Elaskar et al. (Physica A 390:2759-2768, 2011) is extended to consider discontinuous RPD functions. The results of this analysis show that the characteristic relation only depends on the position of the lower bound of reinjection (LBR); therefore, for an LBR below the tangent point the relation ⟨l⟩ ∝ ε^(−1/2), where ε is the control parameter, remains robust regardless of the form of the RPD, although the average laminar length ⟨l⟩ can change. Finally, the study of discontinuous RPDs for the type-I intermittency which occurs in a three-wave truncation model for the derivative nonlinear Schrödinger equation is presented. In all tests the theoretical results agree well with the numerical data.
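The ε^(−1/2) characteristic relation can be checked numerically with the standard local map for type-I intermittency, x → ε + x + x², counting the steps needed to traverse the laminar channel. This is a generic textbook illustration with an assumed channel half-width, not the paper's three-wave model.

```python
import numpy as np

# Minimal sketch of type-I intermittency: iterate the local map
# x -> eps + x + x^2 and count the steps spent in the laminar channel |x| < c.
# The classic characteristic relation predicts <l> ~ eps^(-1/2).
def laminar_length(eps, c=0.1):
    x = -c                              # reinjection at the lower bound (LBR)
    steps = 0
    while x < c:                        # eps + x^2 > 0, so x always increases
        x = eps + x + x * x
        steps += 1
    return steps

eps_values = np.array([1e-4, 3e-5, 1e-5, 3e-6, 1e-6])
lengths = np.array([laminar_length(e) for e in eps_values], dtype=float)

# Fit the scaling exponent of <l> vs eps; expect roughly -1/2
slope = np.polyfit(np.log(eps_values), np.log(lengths), 1)[0]
print(f"measured exponent ~ {slope:.2f}")
```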
Abstract:
Computers and multimedia equipment have improved considerably in recent years: they have become cheaper and smaller while their capabilities have increased. These improvements allowed us to design and implement a portable recording system that also integrates the teacher's tablet PC to capture what he/she writes on the slides and everything that happens on it. This paper explains the system in detail, together with the validation of the recordings we performed after using it to record all the lectures of the “Communications Software” course at our university. The results show that students used the recordings for different purposes and considered them useful for a variety of tasks, especially after missing a lecture.
Abstract:
The purpose of the present thesis is to study the dynamics of the logarithmic layer of wall-bounded turbulent flows. Specifically, we propose a new structural model based on four different coherent structures: sweeps, ejections, clusters of vortices and velocity streaks. The tool used is the direct numerical simulation of time-resolved turbulent channels. Since the first work by Theodorsen (1952), coherent structures have played an important role in the understanding of turbulence organization and its dynamics. Nowadays, data from individual snapshots of direct numerical simulations make it possible to study the three-dimensional statistical properties of those objects, but their dynamics can only be fully understood by tracking them in time. Although the temporal evolution has already been studied for small structures at moderate Reynolds numbers, e.g., Robinson (1991), a temporal analysis of three-dimensional structures spanning from the smallest to the largest scales across the logarithmic layer has yet to be performed and is the goal of the present thesis.
The most interesting problems lie in the logarithmic region, which is the seat of cascades of vorticity, energy, and momentum. Different models involving coherent structures have been proposed to represent the organization of wall-bounded turbulent flows in the logarithmic layer. One of the most widespread ones was conceived by Adrian et al. (2000) and built on packets of hairpins that grow from the wall and work cooperatively to generate low-momentum ramps. A different view was presented by del Álamo & Jiménez (2006), who extracted coherent vortical structures from DNSs and proposed a less organized scenario. Although the two models are kinematically fairly similar, they have important dynamical differences, mostly regarding the relevance of the wall. Another open question is whether such a model can be used to explain the cascade process proposed by Kolmogorov (1941b) in terms of coherent structures. The challenge would be to identify coherent structures undergoing a turbulent cascade that can be quantified. To gain a better insight into the previous questions, we have developed a novel method to track coherent structures in time, and used it to characterize the temporal evolutions of eddies in turbulent channels with Reynolds numbers high enough to include a non-trivial range of length scales, and computational domains sufficiently long and wide to reproduce correctly the dynamics of the logarithmic layer. Our efforts have followed four steps. First, we have conducted a campaign of direct numerical simulations of turbulent channels at different Reynolds numbers and box sizes, and assessed the effect of the computational domain on the one-point statistics and spectra.
From the results, we have concluded that computational domains with streamwise and spanwise sizes 2π and π times the half-height of the channel, respectively, are large enough to accurately capture the dynamical interactions between structures in the logarithmic layer and the rest of the scales. These simulations are used in the subsequent chapters. Second, the three-dimensional structures of intense tangential Reynolds stress in plane turbulent channels (Qs) have been studied by extending the classical quadrant analysis to three dimensions, with emphasis on the logarithmic and outer layers. The eddies are identified as connected regions of intense tangential Reynolds stress. Qs are then classified according to their streamwise and wall-normal fluctuating velocities as inward interactions, outward interactions, sweeps and ejections. It is found that wall-detached Qs are isotropically oriented background stress fluctuations, common to most turbulent flows, and do not contribute to the mean stress. Most of the stress is carried by a self-similar family of larger wall-attached Qs, increasingly complex away from the wall, with fractal dimensions ≈ 2. They have shapes similar to ‘sponges of flakes’, while vortex clusters resemble ‘sponges of strings’. Although their number decays away from the wall, the fraction of the stress that they carry is independent of their heights, and a substantial part resides in a few objects extending beyond the centerline, reminiscent of the very large scale motions of several authors. The predominant logarithmic-layer structures are side-by-side pairs of sweeps and ejections, with an associated vortex cluster, and dimensions and stresses similar to Townsend’s conjectured wall-attached eddies. Third, the temporal evolution of Qs and vortex clusters is studied using time-resolved DNS data up to Reτ = 4200 (friction Reynolds number). The eddies are identified following the procedure presented above, and then tracked in time.
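The identification step (thresholding the tangential stress and extracting connected regions) can be sketched in a few lines. The velocity fields, threshold value and connectivity below are stand-in assumptions for illustration, not the thesis' DNS data or exact criterion.

```python
import numpy as np
from scipy import ndimage

# Sketch of 3-D quadrant analysis: mark points of intense tangential Reynolds
# stress and extract connected regions (Qs).  Random fields stand in for DNS
# fluctuations; the threshold H is an assumed hyperbolic-hole parameter.
rng = np.random.default_rng(1)
u = rng.standard_normal((32, 32, 32))      # streamwise fluctuation
v = rng.standard_normal((32, 32, 32))      # wall-normal fluctuation

H = 3.0
mask = np.abs(u * v) > H * np.abs(u * v).mean()

# Connected components = individual Q structures; classify points by quadrant
labels, n = ndimage.label(mask)
ejections = np.sum((u < 0) & (v > 0) & mask)   # Q2 points
sweeps = np.sum((u > 0) & (v < 0) & mask)      # Q4 points
print(f"{n} structures; Q2 points {ejections}, Q4 points {sweeps}")
```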
From the geometric intersection of structures in consecutive fields, we have built temporal connection graphs of all the objects, and defined main and secondary branches such that each branch represents the temporal evolution of one coherent structure. Once these evolutions are properly organized, they provide the necessary information to characterize eddies from birth to death. The results show that the eddies are born at all distances from the wall, although with higher probability near it, where the shear is strongest. Most of them stay small and do not last for long times. However, there is a family of eddies that become large enough to attach to the wall while they reach into the logarithmic layer, and become the wall-attached structures previously observed in instantaneous flow fields. They are geometrically self-similar, with sizes and lifetimes proportional to their distance from the wall. Most of them achieve lengths well above the Corrsin scale, and hence, their dynamics are controlled by the mean shear. Eddies associated with ejections move away from the wall with an average velocity uτ (friction velocity), and their base attaches very fast at the beginning of their lives. Conversely, sweeps move towards the wall at −uτ, and attach later. In both cases, they remain attached for 2/3 of their lives. In the streamwise direction, eddies are advected and deformed by the local mean velocity. Finally, we interpret the turbulent cascade not only as a way to conceptualize the flow, but as an actual physical process in which coherent structures merge and split. The volume of an eddy can change either smoothly, when they are not merging or splitting, or through sudden changes. The processes of merging and splitting can be thought of as a direct (when splitting) or an inverse (when merging) cascade, following the ideas envisioned by Richardson (1920) and Obukhov (1941).
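The tracking idea (linking structures in consecutive snapshots through their geometric overlap) can be sketched as follows; the two toy 2-D frames stand in for consecutive DNS fields, and the data structure is an illustrative simplification of the connection graphs described above.

```python
import numpy as np
from scipy import ndimage

# Sketch of the tracking step: structures in consecutive snapshots are linked
# when their voxel sets overlap geometrically.
def connections(mask_a, mask_b):
    """Return {label_in_a: set(labels_in_b)} for overlapping structures."""
    la, _ = ndimage.label(mask_a)
    lb, _ = ndimage.label(mask_b)
    graph = {}
    overlap = (la > 0) & (lb > 0)
    for a, b in zip(la[overlap], lb[overlap]):
        graph.setdefault(int(a), set()).add(int(b))
    return graph

# Two toy frames: one blob advected one cell to the right, still overlapping
f0 = np.zeros((8, 8), bool); f0[2:5, 2:5] = True
f1 = np.zeros((8, 8), bool); f1[2:5, 3:6] = True
print(connections(f0, f1))    # {1: {1}} -> same eddy in both frames
```

A split would show up as one label in frame 0 mapping to several labels in frame 1, and a merger as several labels mapping to one, which is how the direct and inverse cascade events are counted.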
It is observed that there is a minimum length of 30η (Kolmogorov units) above which mergers and splits begin to be important. Moreover, all eddies above 100η split and merge at least once in their lives. In those cases, the total volume gained and lost is a substantial fraction of the average volume of the structure involved, with slightly more splits (direct cascade) than mergers. Most branch interactions are found to be the shedding or absorption of Kolmogorov-scale fragments by larger structures, but more balanced splits or mergers spanning a wide range of scales are also found to be important. The results show that splits are more probable at the end of the life of the eddy, while mergers take place at the beginning of the life. Although the results for the direct and the inverse cascades are not identical, they are found to be very symmetric, which suggests a high degree of reversibility of the cascade process.
Abstract:
Awareness of the crisis of modernity, which began at the end of the 19th century, has grown with the knowledge of the limits of economic development since, as it seemed reasonable to think, natural resources are also finite. In 1972, the Club of Rome analyzed the different options available to harmonize sustainable development with environmental constraints.
In 1987, the UN World Commission on Environment and Development defined the concept of Sustainable Development for the first time. This definition was fully incorporated into all UN programs and served as an axis, for example, for the Earth Summit (Cumbre de la Tierra) held in Rio de Janeiro in 1992. Satisfying energy demand, especially since the Industrial Revolution of the 19th century, has meant an increasing use of fossil fuels, and therefore greenhouse-gas emissions and a rise in the global average temperature, which has increased by 0.74 °C over the last 100 years. With at least 90% probability, most of this increase is due to the observed growth in greenhouse gases of human origin, of which CO2 is the most important because of its abundance. In the face of the continued use of fossil fuels, CCS (Carbon Capture and Storage) projects appear as a contribution to sustainable development, being a technology for mitigating climate change. To evaluate whether CCS technology is sustainable, it is necessary to establish whether the capacity is available to store CO2 in a quantity greater than that produced, and for the time necessary to keep the atmospheric CO2 concentration below 450 ppmv (the concentration proposed by the IPCC). The development of full CCS projects depends on the selection of good CO2 storage sites that are able to withstand the effects of the injection pressures, and on assuring the capacity of those sites and the containment of the CO2. The geological characterization of an aquifer that could be a potential CO2 store should determine the properties of that store so as to assure an adequate storage volume, CO2 injectivity at an adequate rate, and long-term containment of the CO2.
The present work studies the parameters that influence the calculation of storage capacity, and for that purpose the appropriate technology was developed to carry out the research by means of laboratory tests. Thus a patent was developed, "ATAP, equipo para ensayos petrofísicos (P201231913)", which was used for the experimental part of this work. Once the technology was in place, the different parameters that influence storage capacity were studied through tests in ATAP. These tests define the storage volume, which is related to the extent of the different CO2 trapping mechanisms, physical or chemical, within the store. Further tests define the capacity of the store to "accept" or "reject" the injected CO2, i.e. the injectivity, and additional tests were aimed at determining possible leakage through injection wells. In this way the long-term containment in the aquifer and its influence on the storage-capacity estimate are characterized. Alongside the purpose of estimating storage capacity lies that of assuring the long-term containment of such stores and anticipating the evolution of the CO2 plume inside them. To fulfill this purpose, a dynamic model was developed with ECLIPSE 300 to establish a methodology for calculating the estimated storage capacity and the evolution of the CO2 plume, starting from the tests carried out in ATAP. This work thus establishes the methodological basis for studying the influence of different petrophysical parameters on the calculation of storage capacity, together with the technological development of ATAP and its use to determine those parameters for each specific aquifer.
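The storage-capacity calculation discussed above is commonly approached, at screening level, with the standard volumetric formula M = A · h · φ · ρ_CO2 · E. The sketch below illustrates that formula only; it is not the ATAP-based methodology of the thesis, and every numeric value (area, thickness, porosity, density, efficiency factor) is an illustrative assumption.

```python
# Standard volumetric screening estimate of CO2 storage capacity
# (DOE/CSLF-style): M = A * h * phi * rho_CO2 * E.
# All numbers below are illustrative assumptions, not values from the thesis.

def storage_capacity_kg(area_m2, thickness_m, porosity, rho_co2, efficiency):
    """Mass of CO2 (kg) storable in a saline aquifer."""
    return area_m2 * thickness_m * porosity * rho_co2 * efficiency

A = 100e6        # aquifer area: 100 km^2 in m^2 (assumed)
h = 50.0         # net reservoir thickness, m (assumed)
phi = 0.20       # porosity (assumed)
rho = 700.0      # supercritical CO2 density, kg/m^3 (typical at depth)
E = 0.02         # storage-efficiency factor (typical screening range 1-4%)

mass_kg = storage_capacity_kg(A, h, phi, rho, E)
print(f"Estimated capacity: {mass_kg / 1e9:.1f} Mt CO2")
```

The efficiency factor E is the term most affected by the petrophysical parameters and trapping mechanisms the thesis characterizes experimentally.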
Resumo:
A space tether is a thin conductive wire, multiple kilometers long, joining a satellite to an opposite end mass and kept vertical in orbit by the gravity gradient. The ambient plasma, being highly conductive, is equipotential in its own co-moving frame. In the tether frame, which moves relative to the plasma, there is however a motional electric field of order 100 V/km, the product of the (near-)orbital velocity and the geomagnetic field. The electromotive force established over the tether length allows plasma-contactor devices to collect electrons at one positively polarized (anodic) end and eject electrons at the opposite end, setting up a current along a standard, fully insulated tether. The Lorentz force exerted on the current by the geomagnetic field itself is always drag; this rests on plain thermodynamics, like air drag. The bare-tether concept, introduced in 1992 at the Universidad Politécnica de Madrid (UPM), removes the insulation and has electrons collected over the tether segment that comes out polarized positive; the concept rests on 2D (Langmuir-probe) current collection in plasmas being far more efficient than 3D collection. A plasma contactor ejects electrons at the cathodic end. A bare tether with a thin-tape cross-section has a much greater perimeter and de-orbits much faster than a (corresponding) round bare tether of equal length and mass. Further, tethers, being long and thin, are prone to cuts by abundant small space debris, but BETs has shown that the tape's probability of being cut per unit time is smaller by more than one order of magnitude than that of the corresponding round tether (debris comparable to its width are much less abundant than debris comparable to the radius of the corresponding round tether). The tape also collects much more current, and de-orbits much faster, than a corresponding multi-line "tape" made of thin round wires cross-connected to survive debris cuts.
Tethers use a dissipative mechanism quite different from air drag and can de-orbit in just a few months; tape tethers are also much lighter than round tethers of equal length and perimeter, which capture equal current. The three disparate tape dimensions allow an easily scalable design. Switching the cathodic contactor off and on allows maneuvering to avoid catastrophic collisions with big tracked debris. Lorentz braking is as reliable as air drag. Tethers remain reasonably effective at high inclinations, where the motional field is small, because the geomagnetic field is not just a dipole along the Earth's polar axis. BETs is the EC FP7/Space Project 262972, funded with about 1.8 million euros from 1 November 2010 to 31 January 2014 and carrying out RTD work on de-orbiting space debris. Coordinated by UPM, it has as partners Università di Padova, ONERA-Toulouse, Colorado State University, SME Emxys, DLR-Bremen, and Fundación Tecnalia. BETs work involves 1) designing, building, and ground-testing the basic hardware subsystems: cathodic plasma contactor, tether deployment mechanism, power control module, and tape with crosswise and lengthwise structure; 2) testing current collection and verifying tether dynamical stability; 3) preliminary design of tape dimensions for a generic mission, conducive to a low system-to-satellite mass ratio and a low probability of cut by small debris, with an ohmic-effects regime of tether current for fast de-orbiting. Having reached TRL 4-5, BETs appears ready for in-orbit demonstration.
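The "order of 100 V/km" motional field quoted in the abstract can be checked with a back-of-envelope product of orbital velocity and geomagnetic field, and the same field B gives the Lorentz drag on the tether current. The velocity, field, tether length, and current values below are rough low-Earth-orbit assumptions, not BETs design data.

```python
# Back-of-envelope check of the motional field E_m = v * B quoted above,
# and of the Lorentz drag F = I * L * B on the tether current.
# All values are rough LEO assumptions, not BETs project data.

v = 7.6e3        # orbital velocity in LEO, m/s (assumed)
B = 3.0e-5       # geomagnetic field magnitude, T (assumed)

E_m = v * B                      # motional electric field, V/m
print(f"Motional field ~ {E_m * 1e3:.0f} V/km")   # order 100 V/km

L = 5.0e3        # tether length, m (assumed)
I_avg = 1.0      # average tether current, A (assumed)
F_drag = I_avg * L * B           # Lorentz drag force, N
print(f"Lorentz drag ~ {F_drag:.2f} N")
```

A steady force of a few tenths of a newton, acting continuously, is what lets a tether de-orbit a satellite in months rather than the decades air drag alone would take at these altitudes.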
Resumo:
1st ed.
Resumo:
Acknowledgements The authors thank L. Wicks and B. de Francisco for helping with coral sampling and coral care in the aquaria facilities at SAMS. Thanks to C. Campbell and the CCAP for their kind support and help. The scientific party and crew on board the RVs Calanus and Seol Mara, as well as on board the RRS James Cook during the Changing Oceans cruise (JC_073), are gratefully acknowledged. Thanks to colleagues at SAMS for their support during our stay. We are indebted to A. Olariaga for his help modifying the cylindrical experimental chambers used in the experiments, and to C.C. Suckling for assistance with the flume experiment. Many thanks go to G. Kazadinis for preparing the POM used in the feeding experiments. We also thank two anonymous reviewers and the editor for their constructive comments, which helped to improve the manuscript. This work was supported by the European Commission through two ASSEMBLE projects (grant agreement no. 227799) conducted in 2010 and 2011 at SAMS, as well as by the UK Ocean Acidification Research Programme's Benthic Consortium project (awards NE/H01747X/1 and NE/H017305/1) funded by NERC. [SS]
Resumo:
The application of immunoprotein-based targeting strategies to the boron neutron-capture therapy of cancer poses an exceptional challenge, because viable boron neutron-capture therapy by this method will require the efficient delivery of 10³ boron-10 atoms by each antigen-binding protein. Our recent investigations in this area have been focused on the development of efficient methods for the assembly of homogeneous immunoprotein conjugates containing the requisite boron load. In this regard, engineered immunoproteins fitted with unique, exposed cysteine residues provide attractive vehicles for site-specific modification. Additionally, homogeneous oligomeric boron-rich phosphodiesters (oligophosphates) have been identified as promising conjugation reagents. The coupling of two such boron-rich oligophosphates to sulfhydryls introduced to the CH2 domain of a chimeric IgG3 has been demonstrated. The resulting boron-rich immunoconjugates are formed efficiently, are readily purified, and have promising in vitro and in vivo characteristics. Encouragingly, these studies showed subtle differences in the properties of the conjugates derived from the two oligophosphate molecules studied, providing a basis for the application of rational design to future work. Such subtle details would not have been as readily discernible in heterogeneous conjugates, thus validating the rigorous experimental design employed here.
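The 10³-atoms-per-protein requirement implies a simple stoichiometric budget for the oligophosphate reagents. The abstract states that two oligophosphates are coupled per IgG; the boron content per monomer below is purely an illustrative assumption, not a figure from the study.

```python
# Rough boron-load arithmetic behind the "10^3 boron-10 atoms per protein"
# requirement. The boron content per monomer is an illustrative assumption;
# the two conjugation sites per IgG follow the coupling described above.

target_boron = 1000      # boron-10 atoms needed per antigen-binding protein
boron_per_monomer = 10   # boron atoms per oligophosphate monomer (assumed)
sites_per_igg = 2        # oligophosphates coupled per IgG (from the abstract)

monomers_per_oligomer = target_boron / (sites_per_igg * boron_per_monomer)
print(f"~{monomers_per_oligomer:.0f} boron-rich monomers per oligophosphate")
```

Under these assumed numbers each oligophosphate must carry on the order of fifty boron-rich monomers, which is why homogeneous, well-defined oligomer synthesis matters for this strategy.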