Abstract:
This paper presents the Expectation Maximization (EM) algorithm applied to operational modal analysis of structures. The EM algorithm is a general-purpose method for maximum likelihood estimation (MLE) that in this work is used to estimate state space models. As is well known, the MLE enjoys optimal properties from a statistical point of view, which make it very attractive in practice. However, the EM algorithm has two main drawbacks: its slow convergence and the dependence of the solution on the initial values used. This paper proposes two different strategies for choosing initial values for the EM algorithm when used for operational modal analysis: starting from the parameters estimated by the Stochastic Subspace Identification (SSI) method, and starting from random points. The effectiveness of the proposed identification method has been evaluated through numerical simulation and measured vibration data in the context of a benchmark problem. Modal parameters (natural frequencies, damping ratios and mode shapes) of the benchmark structure have been estimated using SSI and the EM algorithm. On the whole, the results show that applying the EM algorithm starting from the solution given by SSI is very useful for identifying the vibration modes of a structure, discarding the spurious modes that appear in high order models and discovering other hidden modes. Similar results are obtained using random starting values, although this strategy allows the solutions from several starting points to be analyzed, which mitigates the dependence on the initial values used.
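As an illustration of the final step shared by SSI and EM, the sketch below (not the paper's code) extracts natural frequencies, damping ratios and mode shapes from an identified discrete-time state space model (A, C); the example system and sampling interval are invented assumptions.

```python
# Illustrative sketch: modal parameters from an identified discrete-time
# state space model (A, C) with sampling step dt. Not the paper's code.
import numpy as np
from scipy.linalg import expm

def modal_parameters(A, C, dt):
    mu, phi = np.linalg.eig(A)              # discrete-time eigenstructure
    lam = np.log(mu) / dt                   # continuous-time eigenvalues
    freqs = np.abs(lam) / (2 * np.pi)       # natural frequencies [Hz]
    zetas = -lam.real / np.abs(lam)         # damping ratios
    shapes = C @ phi                        # mode shapes at sensor locations
    return freqs, zetas, shapes

# Assumed example: a single 1 Hz mode with 2% damping sampled at 100 Hz.
wn, z, dt = 2 * np.pi * 1.0, 0.02, 0.01
Ac = np.array([[0.0, 1.0], [-wn**2, -2 * z * wn]])
A, C = expm(Ac * dt), np.array([[1.0, 0.0]])
freqs, zetas, _ = modal_parameters(A, C, dt)
print(freqs, zetas)                         # ~[1.0, 1.0] Hz, ~[0.02, 0.02]
```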
Abstract:
The estimation of modal parameters of a structure from ambient measurements has attracted the attention of many researchers in recent years. The procedure is now well established, and the use of state space models, stochastic system identification methods and stabilization diagrams makes it possible to identify the modes of the structure. In this paper the contribution of each identified mode to the measured vibration is discussed. This modal contribution is computed using the Kalman filter and is an indicator of the importance of the modes. The variation of the modal contribution with the order of the model is also studied. This analysis suggests selecting the order of the state space model as the order that includes the modes with the highest contribution. The order obtained using this method is compared to those obtained using other well-known methods, such as the Akaike criteria for time series or the singular values of the weighted projection matrix in the Stochastic Subspace Identification method. Both simulated and measured vibration data are used to show the practicability of the derived technique. Finally, it is important to remark that the method can be used with any identification method that works with state space models.
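A simplified sketch of the underlying idea — not the paper's exact Kalman-filter formulation — is to run a steady-state filter on the identified model, transform the filter states to modal coordinates, and take the RMS share of each mode in the predicted output. A, C, the gain K and the record y are placeholders.

```python
# Simplified sketch (placeholder matrices, not the paper's exact method):
# per-mode contribution as the RMS share of each mode in the Kalman-filter
# one-step-ahead output prediction, after a modal transformation.
import numpy as np

def modal_contributions(A, C, K, y):
    V = np.linalg.eig(A)[1]                  # modal transformation matrix
    Vinv = np.linalg.inv(V)
    x = np.zeros(A.shape[0], dtype=complex)
    shares = []
    for yk in y:                             # steady-state Kalman filter
        z = Vinv @ x                         # filter state in modal coordinates
        shares.append((C @ V) * z)           # column j = mode j's output share
        x = A @ x + K @ (yk - (C @ x).real)  # innovation update
    rms = np.sqrt(np.mean(np.abs(shares) ** 2, axis=0))
    return rms.sum(axis=0)                   # contribution per mode (combine
                                             # complex-conjugate column pairs)
```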
Abstract:
The growth of wind power as an electric energy source is beneficial from an environmental point of view and improves the energy independence of countries with few fossil fuel resources. However, the randomness of the wind resource poses a great challenge for the management of electric grids. This study raises the possibility of using hydrogen as a means to damp the variability of the wind resource. It is thus proposed that all the energy produced by a typical wind farm be used for hydrogen generation; the hydrogen is in turn used later for suitable generation of electric energy according to the operating rules of a liberalized electricity market.
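A back-of-the-envelope sketch of the buffering idea follows; every figure is an invented assumption, not a value from the study.

```python
# Wind-to-hydrogen-to-electricity arithmetic. All inputs are assumptions.
E_wind_MWh = 50_000          # assumed annual wind farm production
eta_ez = 0.65                # assumed electrolyser efficiency (HHV basis)
eta_gen = 0.50               # assumed hydrogen-to-electricity efficiency
HHV_H2 = 39.4                # MWh per tonne of H2 (higher heating value)

h2_t = E_wind_MWh * eta_ez / HHV_H2
E_firm_MWh = E_wind_MWh * eta_ez * eta_gen   # dispatchable energy for the market
print(f"{h2_t:.0f} t H2/year -> {E_firm_MWh:.0f} MWh "
      f"({E_firm_MWh / E_wind_MWh:.0%} round trip)")
```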
Abstract:
Intraoral devices for bite-force sensing have several applications in odontology and maxillofacial surgery, as bite-force measurements provide additional information to help understand the characteristics of bruxism disorders and can also help in the evaluation of post-surgical evolution and in the comparison of alternative treatments. A new system for measuring human bite forces is proposed in this work. This system has future applications in the monitoring of bruxism events and as a complement to its conventional diagnosis. Bruxism is a pathology consisting of grinding or tight clenching of the upper and lower teeth, which leads to several problems such as lesions to the teeth, headaches, orofacial pain and important disorders of the temporomandibular joint. The prototype uses a magnetic field communication scheme similar to low-frequency radio frequency identification (RFID) and near-field communication (NFC) technology. The reader generates a low-frequency magnetic field that is used as the information carrier and powers the sensor. The system is notable because it uses an intra-mouth passive sensor and an external interrogator, which remotely records and processes information regarding a patient's dental activity. This permits a quantitative assessment of bite force without requiring intra-mouth batteries, and can provide supplementary information to polysomnographic recordings, currently the most adequate early diagnostic method, so that corrective actions can be initiated before irreversible dental wear appears. In addition to describing the system's operational principles and the manufacture of personalized prototypes, this report also demonstrates the feasibility of the system and the results of the first in vitro and in vivo trials.
Abstract:
CO2 capture and storage (CCS) projects are presently being developed to reduce the emission of anthropogenic CO2 into the atmosphere. CCS technologies are expected to account for 20% of the CO2 reduction by 2050. One of the main concerns about CCS is whether CO2 will remain confined within the geological formation into which it is injected, since post-injection CO2 migration on the time scale of years, decades and centuries is not well understood. Theoretically, CO2 can be retained at depth i) as a supercritical fluid (physical trapping), ii) as a fluid slowly migrating in an aquifer along a long flow path (hydrodynamic trapping), iii) dissolved in ground waters (solubility trapping) and iv) precipitated as secondary carbonates (mineral trapping). Carbon dioxide will be injected in the near future (2012) at Hontomín (Burgos, Spain) within the framework of the Compostilla EEPR project, led by the Fundación Ciudad de la Energía (CIUDEN). In order to detect leakage during the operational stage, a pre-injection geochemical baseline is presently being developed. In this work a geochemical monitoring design is presented to provide information about the feasibility of CO2 storage at depth.
Abstract:
In pressurized irrigation-water distribution networks, pressure regulating devices are needed to control the flow rate discharged by irrigation units, owing to the variability of the flow rate. In addition, the applied water volume is controlled by operating the valve during a calculated time interval, assuming a constant flow rate. In general, a pressure regulating valve (PRV) is the pressure regulating device commonly used in a hydrant, and it also performs the open/close function. A hydrant feeds several irrigation units, requiring a wide range of flow rates. In addition, some flow meters are also available: one as a component of the hydrant, and the rest placed downstream. Every landowner has one flow meter for each group of field plots downstream of the hydrant. Its reading could be used to refine the water balance, but its accuracy must be taken into account. An ideal PRV would maintain a constant downstream pressure. However, the true performance depends on both the upstream pressure and the discharged flow rate. The objective of this work is to assess the influence of PRV performance on the volume applied over all the irrigation events in a year. The results of the study have been obtained by introducing the flow rate into a PRV model. Variations in flow rate are simulated by taking into account the consequences of variations in climate conditions as well as decisions in irrigation operation, such as application duration and frequency. The model comprises the continuity, dynamic and energy equations of the components of the PRV.
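The sketch below caricatures the study's setup with a toy PRV characteristic and an orifice-type emitter law; the deviation coefficients and pressures are invented, whereas the real model solves the continuity, dynamic and energy equations of the valve.

```python
# Toy sketch of the applied-volume error caused by a non-ideal PRV.
import numpy as np

def downstream_pressure(p_set, p_up, q, k_up=0.02, k_q=2e-4):
    # Invented deviation coefficients; an ideal PRV would return p_set.
    return p_set + k_up * (p_up - p_set) - k_q * q ** 2

def emitter_flow(p, K=0.8, x=0.5):
    # Orifice-type emitter law q = K * p**x for the irrigation unit.
    return K * p ** x

rng = np.random.default_rng(0)
p_set, t_irrig = 30.0, 2.0                    # m of head, hours per event
p_up = rng.uniform(40.0, 60.0, 100)           # upstream head over the season

q_nom = emitter_flow(p_set)                   # flow assumed by the scheduler
vol_nominal = q_nom * t_irrig * p_up.size

# One-shot evaluation of the real behaviour (a full model would solve the
# valve and emitter equations simultaneously at every time step).
q_real = emitter_flow(downstream_pressure(p_set, p_up, q_nom))
vol_real = (q_real * t_irrig).sum()
print(f"applied-volume error over the year: {vol_real / vol_nominal - 1:+.1%}")
```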
Abstract:
A piece of research is presented that was conducted on the Guayanes Farmhouse Telita Cheese Producers Network, located in the Piar and Padre Chien rural municipalities of Bolivar state in Venezuela. Guayanes telita cheese is a regional dairy product. The producers are found in a rural area with high potential for marketing the label in the Southern Common Market (MERCOSUR). This market is the focal point of the strategic importance of this study for the region and the country. The research is descriptive in scope and was conducted in the field. A questionnaire based on good food production practice was used as the data gathering technique. The final sample comprised 30 production units. Statistical processing was performed with version 15.2 of the STATGRAPHICS Centurion computational tool. The results would appear to confirm previous studies that point to the existence of factors that prevent these micro-SMEs from guaranteeing the food safety of the product. The results indicate that new lines of research need to be opened up, oriented towards formulating strategies for the continuous improvement of these micro-SMEs, including quality control indicators.
Abstract:
In recent years significant efforts have been devoted to the development of advanced data analysis tools, both to predict the occurrence of disruptions and to investigate the operational spaces of devices, with the long-term goal of advancing the understanding of the physics of these events and preparing for ITER. On JET, the latest generation of the disruption predictor, called APODIS, has been deployed in the real-time network during the last campaigns with the new metallic wall. Even though it was trained only on discharges with the carbon wall, it has reached very good performance, with both missed alarms and false alarms of the order of a few percent (and strategies to improve the performance have already been identified). Since predicting the type of disruption is also considered very important for the optimisation of the mitigation measures, a new clustering method, based on the geodesic distance on a probabilistic manifold, has been developed. This technique allows automatic classification of an incoming disruption with a success rate better than 85%. Various other manifold learning tools, particularly Principal Component Analysis and Self-Organising Maps, are also producing very interesting results in the comparative analysis of the JET and ASDEX Upgrade (AUG) operational spaces, on the route to developing predictors capable of extrapolating from one device to another.
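Among the tools mentioned, the PCA part of the comparative analysis is easy to sketch: fit components on the discharges of one device and project both devices onto them to compare the occupied regions. The feature matrices below are random placeholders; in the real analysis each row would be a discharge described by plasma parameters.

```python
# Sketch of a PCA-based comparison of two operational spaces (placeholders).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_jet = rng.normal(0.0, 1.0, (500, 8))    # placeholder JET discharges
X_aug = rng.normal(0.3, 1.2, (400, 8))    # placeholder AUG discharges

scaler = StandardScaler().fit(np.vstack([X_jet, X_aug]))
pca = PCA(n_components=2).fit(scaler.transform(X_jet))

# Projecting both devices onto the same components shows how much of the
# AUG operational space falls inside the region spanned by JET.
Z_jet = pca.transform(scaler.transform(X_jet))
Z_aug = pca.transform(scaler.transform(X_aug))
print(pca.explained_variance_ratio_, Z_jet.mean(0), Z_aug.mean(0))
```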
Abstract:
This article presents software for determining the statistical behavior of qualitative survey data previously transformed into quantitative data with a Likert scale. The main intention is to offer users a useful tool for obtaining statistical characteristics and forecasts of financial risks in a fast and simple way. Additionally, this paper presents the definition of operational risk. The article also explains different techniques for conducting surveys with a Likert scale (Avila, 2008) to capture expert opinion by transforming qualitative data into quantitative data. Furthermore, this paper shows that while it is easy to interpret a single expert's opinion about risk, when users have many surveys and matrices it becomes very difficult to obtain results, because the common data must be compared. A representative statistical value must then be extracted from the common data to obtain the weight of each risk. Finally, this article describes the development of the Qualitative Operational Risk Software (QORS), which has been designed to determine the root of risks in organizations and their operational value at risk, OpVaR (Jorion, 2008; Chernobai et al., 2008), when the input data come from expert opinions and their associated matrices.
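A minimal sketch of the described workflow — aggregating Likert-scale expert matrices into per-risk weights and reading OpVaR off a high percentile of a simulated loss distribution — might look as follows. The scale mapping and the loss model are illustrative assumptions, not QORS internals.

```python
# Sketch: Likert-scale expert surveys -> risk weights -> percentile OpVaR.
import numpy as np

rng = np.random.default_rng(1)
# Assumed input: 20 experts rate 4 operational risks on a 1-5 Likert scale.
surveys = rng.integers(1, 6, size=(20, 4))

weights = surveys.mean(axis=0) / surveys.mean(axis=0).sum()  # per-risk weight

# Toy loss model: frequency x severity, scaled by the expert-derived weights.
losses = (rng.poisson(5, size=(10_000, 4)) *
          rng.lognormal(10, 1, size=(10_000, 4)) * weights).sum(axis=1)

op_var_99 = np.percentile(losses, 99)   # 99% operational value at risk
print(f"weights={np.round(weights, 2)}, OpVaR(99%)={op_var_99:,.0f}")
```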
Abstract:
Although numerous modelling efforts have integrated food and water considerations at the farm or river basin level, very few agro-economic models are able to jointly assess water and food policies at the global level. The present report explores the feasibility of integrating water considerations into the CAPRI model. First, a literature review of modelling approaches integrating food and water issues was conducted. Three agro-economic models, IMPACT, WATERSIM and GLOBIOM, were analysed in detail. In addition, biophysical and hydrological models estimating agricultural water use were also studied, in particular the global hydrological model WATERGAP and the LISFLOOD model. Thanks to the programming approach of its supply module, CAPRI shows high potential for integrating environmental indicators and for introducing new resource constraints (land potentially irrigated, irrigation water) and input-output relationships. At least in theory, the activity-based approach of the regional programming model in CAPRI allows differentiating between rainfed and irrigated activities. The suggested approach to include water in the CAPRI model involves creating an irrigation module and a water use module. The development of the CAPRI water module will make it possible to provide a scientific assessment of agricultural water use within the EU and to analyze agricultural pressures on water resources. The feasibility of the approach has been tested in a pilot case study including two NUTS 2 regions (Andalucia in Spain and Midi-Pyrenees in France). Preliminary results are presented, highlighting the interrelations between water and agricultural developments in Europe. As a next step, the CAPRI water module is to be further developed to account for competition between agricultural and non-agricultural water use. This will imply building a water use sub-module to compute water use balances.
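A toy version of the modelling idea — the same activity split into rainfed and irrigated variants competing for land plus an irrigation-water endowment — can be written as a small linear program. All coefficients below are invented, and CAPRI's supply module is far richer.

```python
# Tiny regional programming sketch: rainfed vs irrigated activities under
# land and irrigation-water constraints. Coefficients are assumptions.
from scipy.optimize import linprog

# Activities: [wheat_rainfed, wheat_irrigated, maize_irrigated]
margin = [300.0, 550.0, 700.0]          # EUR/ha gross margin (assumed)
water = [0.0, 4500.0, 6500.0]           # m3/ha irrigation need (assumed)
land_total = 1000.0                     # ha of land in the region
water_total = 3.0e6                     # m3 regional water allocation

res = linprog(
    c=[-m for m in margin],             # maximize margin = minimize -margin
    A_ub=[[1, 1, 1], water],            # land and water constraints
    b_ub=[land_total, water_total],
    bounds=[(0, None)] * 3,
)
print(res.x, -res.fun)                  # ha per activity, total gross margin
```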
Abstract:
The modal analysis of a structural system consists of computing its vibration modes. The experimental way to estimate these modes requires exciting the system with a measured or known input and then measuring the system output at different points using sensors. Finally, system inputs and outputs are used to compute the modes of vibration. When the system is a large structure like a building or a bridge, the tests have to be performed in situ, so it is not possible to measure system inputs such as wind or traffic. Even if a known input is applied, the procedure is usually difficult and expensive, and there are still uncontrolled disturbances acting at the time of the test. These facts led to the idea of computing the modes of vibration using only the measured vibrations, regardless of the inputs that originated them, whether ambient vibrations (wind, earthquakes, ...) or operational loads (traffic, human loading, ...). This procedure is usually called Operational Modal Analysis (OMA), and in general it consists of fitting a mathematical model to the measured data under the assumption that the unobserved excitations are realizations of a stationary stochastic process (usually white noise). The modes of vibration are then computed from the estimated model.

The first issue investigated in this thesis is the performance of the Expectation-Maximization (EM) algorithm for the maximum likelihood estimation of state space models in the field of OMA. The algorithm is described in detail, and its application to vibration data is analysed. It is then compared to another well-known method, the Stochastic Subspace Identification algorithm. The maximum likelihood estimate enjoys optimal properties from a statistical point of view, which makes it very attractive in practice, but the most remarkable property of the EM algorithm is that it can be used to address a wide range of situations in OMA. In this work, three additional state space models are proposed and estimated using the EM algorithm:

• The first model is proposed to estimate the modes of vibration when several tests are performed on the same structural system. Instead of analysing each record separately and then averaging, the EM algorithm is extended for the joint estimation of the proposed state space model using all the available data (a sketch of the joint likelihood follows this abstract).

• The second state space model is used to estimate the modes of vibration when the number of available sensors is lower than the number of points to be tested. In these cases it is usual to perform several tests, changing the position of the sensors from one test to the next (multiple setups of sensors). Here, the proposed state space model and the EM algorithm are used to estimate the modal parameters taking into account the data of all setups.

• Lastly, a state space model is proposed to estimate the modes of vibration in the presence of unmeasured inputs that cannot be modelled as white noise processes. In these cases, the frequency components of the inputs cannot be separated from the eigenfrequencies of the system, and spurious modes are obtained in the identification process. The idea is to measure the response of the structure to different inputs; it is then assumed that the parameters common to all the data correspond to the structure (modes of vibration), while the parameters found in a specific test correspond to the input of that test. The problem is solved using the proposed state space model and the EM algorithm.
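As a sketch of the joint-estimation idea in the first proposed model (referenced in the abstract above): with the system matrices shared across records, the quantity that EM maximizes is the sum of the per-record Kalman-filter innovation log-likelihoods. The implementation below is a generic textbook filter with placeholder arguments, not the thesis code.

```python
# Joint log-likelihood of several records under common (A, C, Q, R).
import numpy as np

def kalman_loglik(y, A, C, Q, R, x0, P0):
    """Innovation-form log-likelihood of one record."""
    x, P, ll = x0, P0, 0.0
    for yk in y:
        x, P = A @ x, A @ P @ A.T + Q                  # time update
        S = C @ P @ C.T + R                            # innovation covariance
        e = yk - C @ x                                 # innovation
        ll -= 0.5 * (np.log(np.linalg.det(2 * np.pi * S))
                     + e @ np.linalg.solve(S, e))
        K = P @ C.T @ np.linalg.inv(S)                 # measurement update
        x, P = x + K @ e, P - K @ C @ P
    return ll

def joint_loglik(records, A, C, Q, R, x0, P0):
    # One parameter set, all records enter the likelihood additively.
    return sum(kalman_loglik(y, A, C, Q, R, x0, P0) for y in records)
```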
Abstract:
Digital services and communications in vehicular scenarios provide the essential assets to improve road transport in several ways, such as reducing accidents, improving traffic efficiency and optimizing the transport of goods and people. Vehicular communications typically rely on VANETs (Vehicular Ad hoc Networks). In these networks vehicles communicate with each other without the need for infrastructure. VANETs are mainly oriented to disseminating information to the vehicles in a certain geographic area for time-critical services like safety warnings, but they present very challenging requirements that have not yet been successfully fulfilled. Some of these challenges are: channel saturation due to the simultaneous radio access of many vehicles, routing protocols in topologies that vary rapidly, minimum quality of service assurance, and security mechanisms to efficiently detect and neutralize malicious attacks. Vehicular services can be classified into four important groups: Safety, Efficiency, Sustainability and Infotainment. The benefits of these services for the transport sector are clear, but many technological and business challenges need to be faced before a real mass-market deployment. Service delivery platforms are not prepared to fulfil the needs of this complex environment with restrictive requirements due to the criticality of some services. To overcome this situation, we propose a solution called VISIONS, "Vehicular communication Improvement: Solution based on IMS Operational Nodes and Services". VISIONS leverages the IMS subsystem and NGN enablers, and follows the CALM reference architecture standardized by ISO. It also avoids the use of Road Side Units (RSUs), reducing complexity and the high costs of deployment and maintenance. We demonstrate the benefits in the following areas:

1. VANET network efficiency. VISIONS provides a mechanism for the vehicles to access valuable information from IMS and its capabilities through a cellular channel. This efficiency improvement occurs in two relevant areas:

a. Routing mechanisms. These protocols are responsible for carrying information from one vehicle to another (or to a group of vehicles) using multihop mechanisms. We do not propose a new algorithm but the use of VANET topology information provided through our solution to enrich the performance of these protocols.

b. Security. Many aspects of security (privacy, key management, authentication, access control, revocation mechanisms, etc.) are not resolved in vehicular communications. Our solution efficiently disseminates revocation information to neutralize malicious nodes in the VANET (see the sketch after this abstract).

2. Service delivery platform. It is based on extended enablers, reference architectures, standard protocols and open APIs. By following this approach, we reduce the costs and resources needed for service development, deployment and maintenance.

To quantify these benefits in VANET networks, we provide an analytical model of the system and simulate our solution in realistic scenarios. The simulation results demonstrate how VISIONS improves the performance of relevant routing protocols and neutralizes security attacks more efficiently than the widely proposed RSU-based solutions. Finally, we design an innovative Social Network service based on our platform, explaining how VISIONS facilitates the deployment and usage of complex capabilities.
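A toy illustration of the revocation benefit (not the VISIONS protocol itself): once a revocation list has been received over the cellular/IMS channel, a vehicle can discard multihop messages that passed through revoked nodes before forwarding them. All identifiers are invented.

```python
# Toy revocation check for multihop VANET forwarding (illustrative only).
revoked = {"veh-042", "veh-117"}        # CRL received via the cellular/IMS
                                        # channel (assumed identifiers)

def should_forward(message):
    """Drop messages originated or relayed by revoked nodes."""
    return not (set(message["path"]) & revoked)

msg = {"payload": "hazard ahead", "path": ["veh-007", "veh-042"]}
print(should_forward(msg))              # False: a revoked node is in the path
```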
Abstract:
Since 2009, a group made up of the Universidad Politécnica de Madrid (UPM; Spain) and the Università degli Studi di Firenze (UniFi; Italy) has been taking part in a joint project called "Strategies for Monitoring CO2 and other Gases in Natural Analogues". The group was coordinated by AMPHOS XXI, a private company established in Barcelona. The project was financially supported by the Fundación Ciudad de la Energía (CIUDEN; Spain) as a part of the EC-funded OXYCFB300 project (European Energy Program for Recovery, EEPR; www.compostillaproject.eu). The main objectives of the project were to develop and optimize analytical methodologies to be applied at the surface to monitor and verify the feasibility of geologically stored carbon dioxide. These techniques were oriented to detect and quantify possible CO2 leakages to the atmosphere. Several investigations were made in natural analogues from Spain and Italy and in the Technological Development Plant (TDP) for CO2 injection at Hontomín (Burgos, Spain). The techniques studied were mainly focused on the measurement of diffuse soil gases and of surface and shallow waters. The soil-gas measurements included the determination of the CO2 flux and the application of natural tracer gases (e.g. radon) that may help to detect any CO2 leakage. As far as the water chemistry is concerned, geochemical and isotopic data on surface and spring waters and on dissolved gases in the area of the TDP of Hontomín were analyzed to determine the most suitable parameters to trace the migration of the injected CO2 into the near-surface environments. The accumulation chamber method was used to measure the diffuse emission of CO2 at the soil-atmosphere interface. Although this technique has been widely applied in different scientific areas, it was considered of the utmost importance to adapt the measurement methodology and the estimation of the total CO2 output to the specific features of the site where CO2 is to be stored shortly. During the pre-injection phase CO2 fluxes are expected to be relatively low, whereas in the intra- and post-injection phases, if leakages occur, small variations in CO2 flux must be detected against the high background "noise" due to the biological activity of the soil (soil respiration). CO2 flux measurements by the accumulation chamber method can be performed with or without vegetation clearance. However, the results obtained after clearance show less dispersion, which suggests that this procedure is more suitable for monitoring CO2 storage sites. The measurement protocol applied for the determination of the CO2 flux baseline at Hontomín included the following steps: a) cleaning and removal of both the vegetal cover and the top 2 cm of soil, b) waiting to reduce the flux perturbation due to the soil removal, and c) measuring the CO2 flux. Once the CO2 flux measurements were completed and any anomalous zones detected, the total CO2 output was estimated to quantify the amount of CO2 released to the atmosphere in each of the studied areas.
There is a wide range of methodologies for estimating the CO2 output, which were applied in order to understand which one was the most representative. In this study six statistical methods are presented: the arithmetic mean, the minimum variance unbiased estimator, bootstrap resampling, partitioning of the data into different populations using graphical and maximum-likelihood procedures, and sequential Gaussian simulation. Eight campaigns were carried out in the Hontomín CO2 Storage Technology Development Plant and in natural CO2 analogues. The results show that sequential Gaussian simulation is usually the most accurate method to estimate the total CO2 output and its confidence interval; nevertheless, there are cases where other methods are more appropriate. As a consequence, an application procedure for selecting the most realistic method was developed. The first step to estimate the total emanation rate is a variogram analysis. If the correlation among the data can be explained with the variogram, the best technique to calculate the total CO2 output and its confidence interval is the sequential Gaussian simulation method (sGs). If the data are independent, their distribution is analyzed: for normal and log-normal distributions the proper methods are the arithmetic mean and the minimum variance unbiased estimator (Sichel's estimator), respectively. If the data are not normal (or log-normal), or are a mixture of different populations, the best approach is bootstrap resampling. Following these steps, the maximum confidence interval was of the order of ±20-25%, with most values between ±3.5% and ±8%. Partitioning the CO2 flux data into different populations may help to interpret the data, since their distribution can be affected by different geochemical processes, e.g. geological or biological sources of CO2. Consequently, it may be an important tool in a CCS monitoring program, where the main goal is to demonstrate that there are no leakages from the reservoir to the atmosphere and, if they occur, to be able to detect and quantify them. The results show that the partitioning of populations is better performed by maximum-likelihood criteria, since graphical procedures involve a degree of subjectivity in the interpretation, and their results may not be reproducible. The relationship between the CO2 flux and the radon isotopes (222Rn and 220Rn) was studied in natural analogues. In all emission zones, a positive relation between 222Rn and the CO2 flux was observed. However, the relationship between the 220Rn activity and the CO2 flux is not as clear: in some cases the 220Rn activity increased with the CO2 flux, while in other measurements a decrease was recognized, an effect possibly related to the depth (deep or shallow) of the radon source. These results may confirm the possible use of the radon isotopes as tracers of the gas origin and their application in the detection of leakages. With respect to the CO2 flux baseline at the TDP of Hontomín, soil flux measurements were performed with the accumulation chamber in the vicinity of the oil boreholes drilled in the eighties and named H-1 to H-4, in the area where the injection (H-I) and monitoring (H-A) wells will be installed, and near the southern fault. Seven surveys were carried out from November 2009 to summer 2011. More than 4,000 measurements were used to determine the baseline flux of CO2 and its seasonal variations. The measured values were relatively low (mean values from 5 to 13 g·m-2·d-1), and few outliers were identified, mainly located close to the H-2 oil well.
Nevertheless, these values could not be associated with a deep source of CO2, being more likely related to biological processes, i.e. soil respiration. No anomalies were recognized close to the deep fault system (Ubierna Fault) detected by geophysical investigations; there, the CO2 flux is as low as at the other measurement stations. CO2 fluxes appear to be controlled by the biological activity, since the lowest values were recorded during the autumn-winter seasons and they tend to increase in warm periods. Two groups of reference CO2 flux values were calculated: UCL50 values of 5 g·m-2·d-1 for non-ploughed areas in autumn-winter and of 3.5 and 12 g·m-2·d-1 in spring-summer for ploughed and non-ploughed areas, respectively; and UCL99 values of 26 g·m-2·d-1 for non-ploughed areas in autumn-winter and of 34 and 42 g·m-2·d-1 in spring-summer for ploughed and non-ploughed areas, respectively. Fluxes higher than these reference values could be indicative of a possible leakage during the operational and post-closure stages of the storage project. The first geochemical and isotopic data on surface and spring waters and dissolved gases in the area of Hontomín-Huermeces (Burgos, Spain) are presented and discussed. The chemical features of the spring waters suggest that they are related to a shallow hydrogeological system, as the concentration of Total Dissolved Solids remains below 800 mg/L with a Ca2+(Mg2+)-HCO3− composition, similar to that of the surface waters. Some spring waters are characterized by relatively high concentrations of NO3− (up to 123 mg/L), unequivocally suggesting an anthropogenic source. Anomalous concentrations of Cl−, SO42−, As, B and Ba were measured in two springs, discharging a few hundred meters from the oil wells, and in the Rio Ubierna. These contents are possibly indicative of mixing processes between deep and shallow aquifers. The chemistry of the dissolved gases also evidences the shallow circuit of the Hontomín-Huermeces waters, mainly characterized by an atmospheric source, as highlighted by the contents of N2, O2 and Ar and their relative ratios. Nevertheless, significant concentrations (up to 63% by vol.) of isotopically negative CO2 (<−17.7‰ V-PDB) were found in some water samples, likely related to a biogenic source. The geochemical and isotopic data of the surface and spring waters in the surroundings of Hontomín can be considered as background values against which to compare when the intra- and post-injection monitoring programs are carried out. In this respect, the main and minor solutes, the isotopic carbon composition of the dissolved CO2 and of the TDIC (Total Dissolved Inorganic Carbon), and selected trace elements can be considered useful parameters to trace the migration of the injected CO2 into near-surface environments.
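As an illustration of one branch of the estimation procedure described above — independent data that are neither normal nor log-normal, handled by bootstrap resampling — here is a sketch with synthetic fluxes and an assumed survey area:

```python
# Bootstrap estimate of the total CO2 output and its confidence interval.
# Fluxes are synthetic; `area_m2` is an invented assumption.
import numpy as np

rng = np.random.default_rng(42)
flux = rng.lognormal(mean=1.8, sigma=0.6, size=300)   # g m-2 d-1, synthetic
area_m2 = 1.0e5

boot_means = np.array([
    rng.choice(flux, size=flux.size, replace=True).mean()
    for _ in range(5000)
])
total = flux.mean() * area_m2 / 1000.0                # kg CO2 per day
lo, hi = np.percentile(boot_means, [2.5, 97.5]) * area_m2 / 1000.0
print(f"total output: {total:,.0f} kg/d (95% CI {lo:,.0f}-{hi:,.0f})")
```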
Abstract:
Direct Steam Generation (DSG) in Linear Fresnel (LF) solar collectors is being consolidated as a feasible technology for Concentrating Solar Power (CSP) plants. The competitiveness of this technology relies on the following main features: water as the heat transfer fluid (HTF) in the Solar Field (SF), high superheated steam temperatures and pressures at the turbine inlet (500 ºC and 90 bar), no heat tracing required to avoid HTF freezing, no HTF degradation, no environmental impacts, no heat exchanger between the SF and the Balance Of Plant (BOP), and low installation and maintenance costs. LF solar collectors were recently developed as an alternative to Parabolic Trough Collector (PTC) technology. The main advantages of LF are: reduced collector manufacturing cost and maintenance, linear mirror shapes versus parabolic mirrors, fixed receiver pipes (no ball joints, reducing leakage at high pressures), lower susceptibility to wind damage, and light supporting structures allowing smaller driving devices. Companies such as Novatec, Areva and Solar Euromed are investing in LF DSG technology and constructing pilot plants to demonstrate the benefits and feasibility of this solution for defined locations and conditions (Puerto Errado 1 and 2 in Murcia, Spain; Liddell in Newcastle, Australia; Kogan Creek in South West Queensland, Australia; Kimberlina in Bakersfield, California, USA; Llo Solar in the Pyrénées, France; Dhursar in India; etc.). Several critical decisions must be taken in order to reach a compromise and an optimization between plant performance, cost and durability. Some of these decisions concern the SF design: proper thermodynamic operational parameters, receiver material selection for high pressures, the number and location of phase separators and recirculation pumps, and the pipe distribution to reduce the number of tubes (reducing possible leak points, transient time, etc.). In view of these aspects, the correct selection of design parameters and their correct assessment are the main target when designing DSG LF power plants. For this purpose, some commercial software tools have been developed in recent years to simulate solar thermal power plants; those most focused on LF DSG design are Thermoflex and System Advisor Model (SAM). Once the simulation tool is selected, the proposed SF configuration, which constitutes the main innovation of this work, is studied and compared with one of the most typical state-of-the-art configurations. The transient analysis must be simulated at a high level of detail: the BOP behaviour during start-up, shutdown, standby and partial loads is crucial to obtain the annual plant performance. An innovative SF configuration was proposed and analyzed to improve plant performance. Finally, it was demonstrated that thermal inertia and the BOP regulation mode are critical to plant behaviour on days of low solar irradiation, impacting annual performance depending on the power plant location.
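As a quick check of the quoted turbine-inlet conditions (500 ºC, 90 bar), the steam properties can be queried with CoolProp; the thesis itself used Thermoflex/SAM, so this is only an independent illustration (requires `pip install CoolProp`).

```python
# Steam properties at the quoted turbine-inlet conditions via CoolProp.
from CoolProp.CoolProp import PropsSI

T_in = 500.0 + 273.15     # K
p_in = 90.0e5             # Pa

h = PropsSI("H", "T", T_in, "P", p_in, "Water") / 1e3   # kJ/kg
s = PropsSI("S", "T", T_in, "P", p_in, "Water") / 1e3   # kJ/(kg K)
T_sat = PropsSI("T", "P", p_in, "Q", 1, "Water") - 273.15  # saturation, degC
print(f"h={h:.0f} kJ/kg, s={s:.3f} kJ/kgK, superheat={500 - T_sat:.0f} K")
```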
Abstract:
The mineral price assigned in mining project design is critical to determining the economic feasibility of a project. Nevertheless, although it is not difficult to find literature on market metal prices, it is much more complicated to find a specific methodology for calculating the value to use, or guidance on which justifications are appropriate to include. This study presents an analysis of various methods for selecting metal prices and investigates the mechanisms and motives underlying price selections. The results describe various attitudes adopted by the designers of mining investment projects, and show how the price can be determined not just by means of forecasting but also by consideration of other relevant parameters.
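A minimal sketch of why the assigned price dominates feasibility: the NPV of an invented project evaluated at several candidate metal prices. Every figure (production, costs, discount rate, prices) is an assumption for illustration only.

```python
# NPV sensitivity to the assigned metal price for an invented project.
def npv(price_usd_t, production_t=20_000, opex_usd_t=5_500,
        capex_usd=120e6, years=10, rate=0.08):
    cash = production_t * (price_usd_t - opex_usd_t)   # annual cash flow
    return -capex_usd + sum(cash / (1 + rate) ** y for y in range(1, years + 1))

for price in (6_000, 7_000, 8_000):     # candidate prices, USD/t (assumed)
    print(price, f"NPV = {npv(price) / 1e6:,.1f} M$")
# The sign of the NPV flips within this price range, which is the point:
# the selected price alone can decide whether the project looks feasible.
```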