20 results for In-network storage

at Universidad Politécnica de Madrid


Relevance: 100.00%

Abstract:

In just a few years, cloud computing has become a very popular paradigm and a business success story, with storage being one of its key features. To achieve high data availability, cloud storage services rely on replication. In this context, one major challenge is data consistency. In contrast to traditional approaches, which are mostly based on strong consistency, many cloud storage services opt for weaker consistency models in order to achieve better availability and performance. This comes at the cost of a high probability of stale data being read, as the replicas involved in a read may not always hold the most recent write. In this paper, we propose a novel approach, named Harmony, which adaptively tunes the consistency level at run time according to the application requirements. The key idea behind Harmony is an intelligent estimation model of stale reads, allowing it to elastically scale up or down the number of replicas involved in read operations so as to maintain a low (possibly zero) tolerable fraction of stale reads. As a result, Harmony can meet the desired consistency of the applications while achieving good performance. We have implemented Harmony and performed extensive evaluations with the Cassandra cloud storage system on the Grid'5000 testbed and on Amazon EC2. The results show that Harmony can achieve good performance without exceeding the tolerated number of stale reads. For instance, in contrast to the static eventual consistency used in Cassandra, Harmony reduces stale reads by almost 80% while adding only minimal latency. Meanwhile, it improves the throughput of the system by 45% compared to the strong consistency model in Cassandra, while maintaining the desired consistency requirements of the applications.
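As a rough illustration of the idea (a minimal sketch, not the paper's actual estimator, which models network state and access patterns), consider picking the smallest read quorum whose estimated stale-read probability stays within the application's tolerance:

```python
# Toy sketch of Harmony's core idea: pick the smallest number of read
# replicas whose estimated stale-read probability stays below the
# application's tolerated fraction of stale reads.

def stale_read_probability(p_replica_stale: float, replicas_read: int) -> float:
    """A read returns stale data only if every contacted replica is stale,
    since the coordinator keeps the newest timestamp among the responses."""
    return p_replica_stale ** replicas_read

def choose_read_consistency(p_replica_stale: float,
                            n_replicas: int,
                            tolerated_stale_fraction: float) -> int:
    """Smallest read quorum meeting the tolerated stale-read fraction."""
    for k in range(1, n_replicas + 1):
        if stale_read_probability(p_replica_stale, k) <= tolerated_stale_fraction:
            return k
    return n_replicas  # fall back to reading all replicas (strongest)

# Example: with a 20% chance that any one replica lags the latest write,
# tolerating at most 1% stale reads requires contacting all 3 replicas.
print(choose_read_consistency(0.20, 3, 0.01))
```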

Relevance: 90.00%

Abstract:

The application of rheology to the study of biological systems is a new and very broad field, in which melon remains completely unstudied. The goal of this work is to determine some physical characteristics of this fruit, immediately after harvest and during its conservation in cold storage. Portugal and Spain, as important melon producers, are the countries most interested in these studies. The varieties Branco da Leziria and Piel de Sapo were chosen because they are the most popular in both countries. The fruit were studied on the day they were harvested and then kept in cold storage at the "Instituto del Frio" in Madrid, where they were periodically tested again. Thus, over seven days, the same fruits, together with new ones, were taken and tested: 20 fruits were studied on the first day of testing, and 80 fruits had been used by the end of the testing period. The results from the non-destructive impact test were very significant and may contribute to standardising methods for measuring fruit maturity; they were confirmed by those obtained from compression tests. The results obtained during the impact tests with melon were similar to those obtained previously with other fruits, and there is a close relationship between the results of the impact and compression tests. Tests like impact and compression can be adapted to the melon varieties 'Piel de Sapo' and 'Branco da Leziria', allowing us to continue further work with this species. The large amount of data obtained during the tests allowed us to continue this work and to contribute to standardising methods for measuring and expressing the characteristics of a new biological product. During the "Impact damage in fruits and vegetables" workshop, held in Zaragoza in 1990, these matters were included in the priority list.

Relevance: 90.00%

Abstract:

In this paper, an innovative approach to performing distributed Bayesian inference using a multi-agent architecture is presented. The final goal is to deal with uncertainty in network diagnosis, but the solution can be applied in other fields. The validation testbed has been a P2P streaming video service. An assessment of the work is presented in order to show its advantages over traditional manual processes and previous systems.
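As a minimal illustration of the kind of inference involved (a single-node sketch, not the paper's multi-agent architecture), here is a Bayes update of the belief in a network fault from symptom reports raised by monitoring agents; the probabilities are hypothetical:

```python
# Minimal Bayesian diagnosis sketch: update the belief that a network fault
# is present from noisy symptom reports (e.g. packet-loss alarms raised by
# agents monitoring a P2P video stream).

def posterior_fault(prior: float, p_symptom_given_fault: float,
                    p_symptom_given_ok: float, symptom_observed: bool) -> float:
    """One Bayes update: P(fault | symptom report) from the two likelihoods."""
    if symptom_observed:
        num = p_symptom_given_fault * prior
        den = num + p_symptom_given_ok * (1.0 - prior)
    else:
        num = (1.0 - p_symptom_given_fault) * prior
        den = num + (1.0 - p_symptom_given_ok) * (1.0 - prior)
    return num / den

# Three agents report the symptom in sequence; belief in the fault rises.
belief = 0.05  # prior probability of a link fault
for report in (True, True, True):
    belief = posterior_fault(belief, 0.9, 0.1, report)
print(round(belief, 3))  # ~0.975 after three positive reports
```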

Relevance: 90.00%

Abstract:

There are about 10^14 neuronal synapses in the human brain. This huge number of connections provides the substrate for neuronal ensembles to become transiently synchronized, producing the emergence of cognitive functions such as perception, learning or thinking. Understanding the organization of this complex brain network on the basis of neurophysiological data represents one of the most important and exciting challenges for systems neuroscience. Several measures have been proposed recently to evaluate how the different parts of the brain communicate at various scales (single cells, cortical columns, or brain areas). We can classify them, according to their symmetry, into two groups: symmetric measures, such as correlation, coherence or phase synchronization indexes, evaluate functional connectivity (FC); asymmetric measures, such as Granger causality or transfer entropy, are able to detect the direction of the interaction, which we call effective connectivity (EC). In modern neuroscience, interest in functional brain networks has increased strongly with the advent of new algorithms to study the interdependence between time series, of modern complex network theory, and of powerful techniques to record neurophysiological data, such as magnetoencephalography (MEG). However, this young field still presents several unresolved methodological questions, some of which are addressed in this thesis. First, the growing number of approaches for assessing FC/EC between two or more time series, together with the mathematical complexity of the analysis tools, makes it desirable to organize them all into a single, intuitive and easy-to-use software package. I developed HERMES (http://hermes.ctb.upm.es), a Matlab® toolbox encompassing several of the most common indexes for the assessment of FC and EC, precisely for this purpose. I believe this toolbox will be very helpful to all researchers working in the emerging field of brain connectivity analysis and of great value for the scientific community. The second practical issue tackled in this thesis is the sensitivity to deep brain sources of two types of MEG sensors, planar gradiometers and magnetometers, combined with a methodological comparison of two phase synchronization indexes: the phase locking value (PLV) and the phase lag index (PLI), the latter being less sensitive to volume conduction effects. Comparing their performance in the study of brain networks, magnetometers and PLV yielded more densely connected networks than planar gradiometers and PLI, respectively, owing to the spurious connectivity values created by volume conduction. However, when characterizing epileptic networks, PLV gave better results, as the networks obtained with PLI were very sparse. Complex network analysis has provided new concepts that improve the characterization of interacting dynamical systems. A network is considered to be composed of nodes, symbolizing systems, whose interactions are represented by edges, and its behaviour and topology can be characterized by a large number of measures. There is theoretical and empirical evidence that many of these measures are strongly correlated with each other; in this thesis I therefore reduced them to a small set that characterizes these networks efficiently and condenses the redundant information. Within this framework, selecting an appropriate threshold to decide whether a given connectivity value of the FC matrix is significant, and should be included in further analysis, becomes a crucial step. I obtained more accurate results by using a data-driven surrogate test to evaluate each edge individually than by setting a fixed connection density a priori. Finally, all these methodologies were applied to the study of epilepsy, analysing resting-state MEG functional networks in two groups of epileptic patients (idiopathic generalized and frontal focal epilepsy) compared with matched healthy control subjects. Epilepsy is one of the most common neurological disorders, affecting more than 55 million people worldwide, and is characterized by a predisposition to generate epileptic seizures of abnormal, excessive or synchronous neuronal activity; it is therefore a perfect scenario for this kind of analysis, of great interest from both the clinical and the research perspective. The results reveal specific disruptions in connectivity and changes in network topology in epileptic brains, supporting the shift of emphasis from the 'focus' to the 'network' that is gaining importance in modern epilepsy research.
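A minimal sketch, in Python rather than the Matlab® of HERMES, of the two phase synchronization indexes the thesis compares, assuming phases extracted with the Hilbert transform:

```python
# PLV and PLI for a pair of signals (HERMES itself offers many more indexes).
import numpy as np
from scipy.signal import hilbert

def plv_pli(x: np.ndarray, y: np.ndarray) -> tuple[float, float]:
    """Phase locking value and phase lag index from the Hilbert phases."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    plv = np.abs(np.mean(np.exp(1j * dphi)))      # sensitive to zero-lag coupling
    pli = np.abs(np.mean(np.sign(np.sin(dphi))))  # discards zero-lag (volume conduction)
    return plv, pli

# Two noisy 10 Hz oscillations with a fixed phase lag stay synchronized:
t = np.arange(0, 5, 1e-3)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
y = np.sin(2 * np.pi * 10 * t + 0.7) + 0.5 * np.random.randn(t.size)
print(plv_pli(x, y))  # both indexes close to 1 for this lagged pair
```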

Relevance: 90.00%

Abstract:

Remote powering systems have become very important in different fields, including telecommunications. Some examples: in the switched telephone network, a 48 V supply is transmitted along the whole transmission line to the terminals, together with the information and ringing signals. In some electric railways, the electrical energy produced when a train descends a slope and its motor acts as a generator is returned to the catenary by superposition, to be recovered elsewhere and used, for example, by another train that needs it. Another railway use of remote powering is the so-called "magnetic transponder technology", in which the train transmits to the track-side beacons a 27 MHz signal, in addition to its own information signals, which is converted into useful energy for those beacons. Home TV amplifiers located in places without a mains outlet (storage rooms, remote locations, etc.) can also be remotely powered: information and power signals share the same physical medium, for instance a coaxial cable, and the AC power signal is transformed into DC at the far end to feed the amplifier. In medicine, photovoltaic converters and optical fibres can be used to feed devices implanted in patients. In this project we implement a small example of a remote powering system working at 5 MHz (RF). This system transforms a DC signal into an AC power signal that could be, for instance, transmitted along a transmission line or radiated by means of an antenna. At the receiving end, this RF signal is transformed back into DC. The objective is to achieve the best power conversion performance, DC to AC and AC to DC. The system is divided into two parts: the inverter, which is the DC-AC conversion chain, and the rectifier, which is the AC-DC conversion chain. Each part is calculated, simulated, physically implemented and measured separately. The complete remote powering system is then measured by interconnecting the two parts by means of an adapter or a transmission line. Finally, the results obtained are presented.
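A small sketch of the efficiency bookkeeping for the DC-AC-DC chain; the stage powers below are hypothetical figures, not the project's measurements:

```python
# End-to-end efficiency of the remote powering chain from stage powers.

def chain_efficiency(p_dc_in: float, p_rf: float, p_dc_out: float) -> tuple[float, float, float]:
    """Inverter, rectifier and overall efficiencies from stage powers (watts)."""
    eta_inverter = p_rf / p_dc_in    # DC -> 5 MHz RF
    eta_rectifier = p_dc_out / p_rf  # RF -> DC at the receiving end
    return eta_inverter, eta_rectifier, eta_inverter * eta_rectifier

# E.g. 10 W drawn from the DC supply, 8.5 W of RF delivered to the line,
# 6.8 W recovered as DC: 85% x 80% = 68% overall.
print(chain_efficiency(10.0, 8.5, 6.8))
```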

Relevance: 90.00%

Abstract:

Due to its excellent mechanical, thermal, optical and electrical properties, graphene has recently attracted increasing attention. It provides a huge surface area (2630 m² g⁻¹) and high electrical conductivity, making it an attractive material for applications in energy-storage systems.

Relevance: 90.00%

Abstract:

New cloud-based technologies, the Internet of Things and "as a service" trends are based on data storage and processing in remote servers. To guarantee the secure communication and handling of those data, cryptographic schemes are used. Traditionally, these cryptographic schemes focus on guaranteeing the security of data while storing and transferring them, not while operating on them. Therefore, once the server has to operate on the encrypted data, it first decrypts them, exposing unencrypted data to any intruder in the server. Moreover, the whole traditional scheme relies on the assumption that the server is trustworthy, giving it enough credentials to decipher the data in order to process them. Fully homomorphic encryption (FHE) schemes are a possible solution to these issues. A fully homomorphic scheme does not require decryption to operate on the data: it operates directly over the ciphertext ring, maintaining a homomorphism between the ciphertext ring and the plaintext ring. As a result, an intruder could only obtain encrypted data, making it impossible to retrieve the actual sensitive data without the associated cipher keys. However, homomorphic encryption (HE) schemes are currently drastically slower than classical encryption schemes: one operation in the plaintext ring can entail numerous operations in the ciphertext ring. For this reason, different approaches address the problem of speeding up these schemes to make them practical. One of these approaches is the use of High-Performance Computing (HPC) with FPGAs (Field Programmable Gate Arrays). An FPGA is a semiconductor device containing logic blocks whose interconnection and functionality can be reprogrammed. Compiling for an FPGA generates a hardware circuit specific to the given algorithm, instead of a set of instructions for a universal machine, which is a great advantage over CPUs. FPGAs thus have clear differences with respect to CPUs: a pipelined architecture, which allows successive outputs to be obtained in constant time, and the possibility of having multiple pipes for concurrent/parallel computation. In this project we: present different implementations of FHE schemes on FPGA-based systems; analyse and study the advantages and drawbacks of the implemented schemes; and compare the implementations with related work.
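To make "operating on ciphertexts" concrete, here is a toy additively homomorphic Paillier scheme. Note that this is far simpler than the FHE schemes the project implements on FPGAs, and the tiny hard-coded primes make it completely insecure; it is illustration only:

```python
# Toy Paillier cryptosystem (additively homomorphic): the server can add
# plaintexts by multiplying ciphertexts, without ever decrypting.
import math, random

p, q = 17, 19                  # insecure toy primes
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)           # valid simplification because g = n + 1

def encrypt(m: int) -> int:
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    # m = L(c^lambda mod n^2) * mu mod n, with L(x) = (x - 1) // n
    return ((pow(c, lam, n2) - 1) // n * mu) % n

c = (encrypt(42) * encrypt(100)) % n2  # homomorphic addition of plaintexts
print(decrypt(c))                      # -> 142
```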

Relevance: 80.00%

Abstract:

To date, big data applications have focused on the 'store-then-process' paradigm. In this paper we describe an initiative to deal with big data applications for continuous streams of events. In many emerging applications, the volume of data being streamed is so large that the traditional 'store-then-process' paradigm is either not suitable or too inefficient. Moreover, soft real-time requirements might severely limit the engineering solutions. Many scenarios fit this description. In network security for cloud data centres, for instance, very high volumes of IP packets and events from sensors at firewalls, network switches, routers and servers need to be analyzed, and attacks should be detected in minimal time in order to limit the effect of the malicious activity on the IT infrastructure. Similarly, in the fraud department of a credit card company, payment requests must be processed online, as quickly as possible, in order to provide meaningful results in real time. An ideal system would detect fraud during the authorization process, which lasts hundreds of milliseconds, and deny the payment authorization, minimizing the damage to the user and the credit card company.
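A minimal sketch (our illustration, not the system described in the paper) of the process-as-it-streams pattern applied to the credit card example: each payment is scored against a short sliding window of per-card activity inside the event handler, with nothing stored first:

```python
# Score payments against a sliding window of recent per-card activity.
from collections import defaultdict, deque

WINDOW_MS = 60_000        # look-back window per card
MAX_EVENTS_IN_WINDOW = 3  # toy fraud rule: too many attempts per minute

recent: dict[str, deque] = defaultdict(deque)

def on_payment(card_id: str, ts_ms: int) -> bool:
    """Return True to authorize, False to flag, within the event handler."""
    window = recent[card_id]
    while window and ts_ms - window[0] > WINDOW_MS:
        window.popleft()                    # evict events outside the window
    window.append(ts_ms)
    return len(window) <= MAX_EVENTS_IN_WINDOW

for t in (0, 10_000, 20_000, 30_000):
    print(on_payment("card-1", t))          # the fourth burst attempt is flagged
```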

Relevance: 80.00%

Abstract:

Pre-slaughter handling in fish is important because it affects both physiological reactions and post mortem biochemical processes, and thus welfare and product quality. Pre-slaughter fasting is regularly carried out in aquaculture, as it empties the viscera of food and faeces, reducing the intestinal bacterial load and the spread of gut enzymes and potential pathogens to the flesh. However, it is unclear how long rainbow trout can be fasted before suffering unnecessary stress. In addition, very little is known about the best time of day to slaughter fish, which may in turn be dictated by diurnal rhythms in physiological stress parameters. Water temperature is also known to play a very important role in stress physiology in fish, but its combined effect with fasting is unclear. Current recommendations on the optimal duration of pre-slaughter fasting do not normally consider water temperature and are based only on days, not degree days (ºC d). The effects of short-term fasting prior to slaughter (1, 2 and 3 days, between 11.1 and 68.0 ºC d) and hour of slaughter (08h00, 14h00 and 20h00) were determined in commercial-sized rainbow trout (Oncorhynchus mykiss) over four trials at different water temperatures (Trial 1, 11.8 ºC; Trial 2, 19.2 ºC; Trial 3, 11.1 ºC; and Trial 4, 22.7 ºC). We measured biometric, haematological, metabolic and product quality indicators. In each trial, the values of fasted fish (n=90) were compared with 90 control fish kept under similar conditions but not fasted. Results show that fasting affected biometric indicators. The coefficient of condition in fasted trout was lower than in controls 2 days after food deprivation. Gut emptying occurred within the first 24 h after the cessation of feeding, with small traces of digesta remaining after 48 h; emptying was faster at higher water temperatures. Liver weight decreased in food-deprived fish, and differences between fasted and fed trout were more evident where gut clearance was faster. The overall effect of fasting for up to three days on haematological indicators was small. Plasma cortisol levels were high in both fasted and fed fish in all trials. The plasma glucose response to fasting varied among trials, but glucose tended to be lower in fasted fish as the days of fasting increased. In any case, water temperature seems to have played a more important role, with higher concentrations at lower temperatures on days 2 and 3 after the cessation of feeding. Plasma lactate levels, which indicate moments of high muscular activity, were also high, but no variation related to fasting could be found. Haematocrit showed no significant effect of fasting, but leucocytes tended to be higher when trout were less stressed and their body condition was better. Finally, the loss of liver weight was not accompanied by a decrease in liver glycogen (measured only in Trial 3), suggesting that a different strategy was used to maintain plasma glucose levels. Regarding the hour of slaughter, lower cortisol levels were found at 20h00, suggesting that trout were less stressed later in the day and that pre-slaughter handling may be less stressful at night. Haematocrit levels were also lower at 20h00, but only at the lower temperatures, indicating that higher temperatures increase metabolism. Neither fasting nor the hour of slaughter had a significant effect on the evolution of meat quality during 3 days of storage. In contrast, storage time had a more important effect on meat quality parameters. The lowest pH was reached 24-48 h post mortem, with higher variability among fasting durations (1, 2 and 3 days) in fish slaughtered at 20h00, although no clear pattern could be discerned. Maximum stiffening from rigor mortis occurred 24 h after slaughter. Water-holding capacity was very stable throughout storage and seemed independent of pH changes. Meat lightness (L*) increased slightly during storage, while a* and b* values were relatively stable. In conclusion, based on the haematological results, slaughtering at night may have a less negative effect on welfare than at other times of the day. Overall, our results suggest that rainbow trout can cope well with fasting for up to three days or 68 ºC d prior to slaughter, so their welfare is not seriously compromised. At lower water temperatures, trout could probably be fasted for longer periods without negative effects on welfare, but more research is needed to determine the relationship between water temperature and days of fasting in terms of loss of live weight and decreases in plasma glucose and other metabolic indicators.
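Since the summary argues for expressing fasting duration in degree days rather than days, a one-line helper makes the bookkeeping explicit (assuming, as is conventional, that ºC d accumulate as the sum of daily mean water temperatures):

```python
# Degree-day (ºC d) accounting for a fasting period.

def degree_days(daily_mean_temps_c: list[float]) -> float:
    """Accumulated ºC d over the period (one mean water temperature per day)."""
    return sum(daily_mean_temps_c)

# Three days of fasting at the warmest trial temperature (~22.7 ºC) accumulate
# about 68 ºC d, the upper bound the study reports trout coping with:
print(degree_days([22.7, 22.7, 22.7]))  # 68.1
```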

Relevance: 80.00%

Abstract:

Results of impact and compression tests on the Chojuro, Twentieth Century, Tsu Li and Ya Li varieties of Asian pears indicate that Chojuro pears are the firmest and most resistant to mechanical damage. At the time of harvest, Tsu Li and Ya Li pears resist mechanical damage nearly as well as Chojuro pears, but they become more susceptible to bruising in cold storage. Twentieth Century pears are the most sensitive to impact and compression bruising. Increased time in the ripening room produces more softening and a greater increase in bruise resistance in Chojuro and Twentieth Century pears than in Tsu Li and Ya Li pears.

Relevance: 80.00%

Abstract:

Novel carbon fiber (CF)-reinforced poly(phenylene sulphide) (PPS) laminates incorporating inorganic fullerene-like tungsten disulfide (IF-WS2) nanoparticles were prepared via melt-blending and hot-press processing. The influence of the IF-WS2 on the morphology and the thermal, mechanical and tribological properties of PPS/CF composites was investigated. Efficient nanoparticle dispersion within the matrix was attained without using surfactants. A progressive rise in thermal stability was found with increasing IF-WS2 loading, as revealed by thermogravimetric analysis. The addition of low nanoparticle contents retarded the crystallization of the matrix, whereas concentrations equal to or higher than 1.0 wt% increased both the crystallization temperature and the degree of crystallinity compared to those of PPS/CF. Mechanical tests indicated that with only 1.0 wt% IF-WS2 the flexural modulus and strength of PPS/CF improved by 17% and 14%, respectively, without loss of toughness, an improvement ascribed to a synergistic effect between the two fillers. A significant enhancement in the storage modulus and glass transition temperature was also observed. Moreover, the wear rate and coefficient of friction decreased strongly, attributed to the lubricant role of the IF-WS2 combined with their reinforcing effect. These inorganic nanoparticles show great potential for improving the mechanical and tribological properties of conventional thermoplastic/CF composites for structural applications.

Relevance: 80.00%

Abstract:

We analyze the properties of networks obtained from the trajectories of unimodal maps at the transition to chaos via the horizontal visibility (HV) algorithm. We find that the network degrees fluctuate at all scales, with an amplitude that increases as the size of the network grows, and can be described by a spectrum of graph-theoretical generalized Lyapunov exponents. We further define an entropy growth rate that describes the amount of information created along paths in network space, and find that this entropy growth rate coincides with the spectrum of generalized graph-theoretical exponents, constituting a set of Pesin-like identities for the network.
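A short sketch of the HV mapping the paper builds on: time-series points i and j become linked nodes when every intermediate value lies strictly below both. The paper studies trajectories at the transition to chaos; the example below uses r = 4 (fully chaotic) merely to generate a series:

```python
# Horizontal visibility graph: link (i, j) iff x[k] < min(x[i], x[j])
# for all i < k < j. Consecutive points always see each other (empty range).

def horizontal_visibility_edges(x: list[float]) -> list[tuple[int, int]]:
    """All HV edges of the series, by direct O(n^2) checking."""
    edges = []
    for i in range(len(x) - 1):
        for j in range(i + 1, len(x)):
            if all(x[k] < min(x[i], x[j]) for k in range(i + 1, j)):
                edges.append((i, j))
    return edges

# HV edges of a short logistic-map trajectory x -> r x (1 - x):
r, x, series = 4.0, 0.4, []
for _ in range(8):
    series.append(x)
    x = r * x * (1 - x)
print(horizontal_visibility_edges(series))
```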

Relevance: 80.00%

Abstract:

Fresh-cut or minimally processed fruits and vegetables have been physically modified from their original form (by peeling, trimming, washing and cutting) to obtain a 100% edible product that is subsequently packaged (usually under modified atmosphere packaging, MAP) and kept in refrigerated storage. In fresh-cut products, physiological activity and microbiological spoilage determine their deterioration and shelf life. The major preservation techniques applied to delay spoilage are chilled storage and MAP, combined with chemical treatments (antimicrobial solutions, antibrowning agents, acidulants, antioxidants, etc.). The industry is looking for safer alternatives. Consequently, the sector is asking for innovative, fast, cheap and objective techniques to evaluate the overall quality and safety of fresh-cut products, in order to obtain decision tools for implementing new packaging materials and procedures. In recent years, hyperspectral imaging has come to be regarded as a tool for quality evaluation of food products in research, control and industry. A hyperspectral imaging system integrates spectroscopic and imaging techniques to enable the direct identification of different components or quality characteristics, and of their spatial distribution, in the tested sample. The objective of this work is to develop hyperspectral image processing methods for supervising, through plastic films, the changes related to quality deterioration in packed ready-to-use leafy vegetables during shelf life. The evolution of ready-to-use spinach and watercress samples covered with three different common transparent plastic films was studied. Samples were stored at 4 ºC during the monitoring period (up to 21 days). More than 60 hyperspectral images (from 400 to 1000 nm) per species were analyzed using ad hoc routines and commercial toolboxes of MatLab®. Besides common spectral treatments for removing additive and multiplicative effects, an additional correction was applied to the leaf images, prior to any other, in order to avoid the modification of their spectra due to the presence of the transparent plastic film. Findings from this study suggest that the developed image analysis system is able to deal with the effects caused in the images by the presence of plastic films when supervising the shelf life of leafy vegetables, in which different stages of quality have been identified.
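The abstract mentions "common spectral treatments for removing additive and multiplicative effects"; one widespread treatment of this kind (our illustrative choice, as the abstract does not name one) is the standard normal variate, applied per pixel spectrum:

```python
# Standard normal variate (SNV): removes per-spectrum offset (additive) and
# gain (multiplicative) effects before further hyperspectral analysis.
import numpy as np

def snv(spectra: np.ndarray) -> np.ndarray:
    """Center and scale each spectrum (one per row) to zero mean, unit variance."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

# 3 pixel spectra x 5 wavelengths, differing only by offset and gain:
base = np.array([0.2, 0.4, 0.9, 0.6, 0.3])
spectra = np.stack([base, 1.5 * base + 0.1, 0.7 * base - 0.05])
print(np.round(snv(spectra), 3))  # all three rows become identical
```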

Relevance: 80.00%

Abstract:

Natural analogues offer a valuable opportunity to investigate the long-term impacts associated with potential leakage from the geological storage of CO2. Degassing of CO2 and radon isotopes (222Rn-220Rn) from soil, gas vents and thermal water discharges was investigated in the natural analogue of the Campo de Calatrava Volcanic Field (CCVF; Central Spain) to determine the CO2-Rn relationships and to assess the role of CO2 as a carrier gas for radon. Furthermore, radon measurements to discriminate between shallow and deep gas sources were evaluated from the perspective of their applicability in monitoring programmes for carbon storage projects. CO2 fluxes as high as 5000 g m⁻² d⁻¹ and 222Rn activities up to 430 kBq m⁻³ were measured; 220Rn activities were one order of magnitude lower than those of 222Rn. The 222Rn/220Rn ratios were used to constrain the source of the Campo de Calatrava soil gases, since a positive correlation between radon isotopic ratios and CO2 fluxes was observed. Thus, in agreement with previous studies, our results indicate a deep, mantle-related origin of the CO2 for both free and soil gases, suggesting that carbon dioxide is an efficient carrier for Rn. Furthermore, it was ascertained that the increase of 222Rn in the soil gases was likely produced by two main processes: (i) direct transport by a carrier gas, i.e. CO2, and (ii) generation at shallow level due to the presence of relatively high concentrations of dissolved U and Ra in the thermal aquifer of Campo de Calatrava. The diffuse CO2 soil flux and radon isotopic surveys carried out in the Campo de Calatrava Volcanic Field can also be applied to geochemical monitoring programmes in CCS (Carbon Capture and Storage) areas, as these parameters are useful to: (i) constrain CO2 leakages once detected, and (ii) monitor both the evolution of the leakages and the effectiveness of subsequent remediation activities. These measurements can also conveniently be used to detect diffuse leakages.

Relevance: 80.00%

Abstract:

The accumulation of anthropogenic greenhouse gases in the atmosphere is the main cause of: i) the increase in the average temperature of the Earth by almost 1 ºC over the past century; ii) the rise in mean sea level; iii) the decrease in the volume of terrestrial ice and snow; iv) the strong climate variability and extreme weather events of recent decades; and v) the spread of epidemics and infectious diseases. All of these events are just some of the evidence of current climate change. The problems and growing concern related to these phenomena prompted the adoption of the so-called "Kyoto Protocol" (Japan) in 1997, in which the signatory countries established different measures to control and reduce greenhouse gas emissions. These measures include the CCS technologies, focused on the capture, transport and storage of CO2. Within this context, the Strategic Singular Project "Tecnologías avanzadas de generación, captura y almacenamiento de CO2" (PSE-120000-2008-6), supported by the Ministry of Science and Innovation and FEDER funds, was approved in October 2008. Through its subproject "Geological Storage of CO2" (PSS-120000-2008-31), this project focused on the detailed study of the Natural Analogue of CO2 Storage and Leakage located in the Ganuelas-Mazarrón Tertiary basin (Murcia), among other Spanish natural analogues. The present work was performed in the framework of that subproject, its final objective being to predict the behaviour and evaluate the safety, in the short, medium and long term, of a CO2 Deep Geological Storage (CO2-DGS) by means of a comprehensive study of the abovementioned natural analogue. This study comprises: i) the geological and hydrogeological context of the basin and its geophysical investigation; ii) the sampling of waters from selected aquifers to establish their hydrogeochemical and isotopic features; iii) the mineralogical, petrographic, geochemical and isotopic characterization of the travertines formed from the upwelling groundwater of several hydrogeological and geothermal wells; and iv) the measurement of the free and dissolved gases detected in the basin, as well as their chemical and isotopic characterization, mainly regarding CO2 and 222Rn. This information, presented in separate chapters, has made it possible to build a conceptual model of the natural system of the Ganuelas-Mazarrón basin and to establish the analogies between this system and a CO2-DGS with possible natural and/or anthropogenic escapes. All this information has served, firstly, to predict the behaviour and evaluate the safety of a CO2-DGS in the short, medium and long term and, secondly, to propose a general methodology for studying suitable CO2-DGS sites, taking into account the lessons learned from this natural CO2 reservoir. The main results indicate that the Ganuelas-Mazarrón basin is a graben bounded by normal faults with significant vertical displacements, which downthrow the metamorphic substrate (Nevado-Filabride Complex), and is partly filled with acid volcanic-subvolcanic materials. The basin is additionally filled with two less resistive sedimentary formations: i) the Miocene marls, predominant and almost exclusive in the basin; and ii) the Plio-Quaternary conglomerates and gravels. A deep saline CO2-rich aquifer, the main object of this study, was evidenced in the basin by the geothermal exploration wells drilled during the 1980s. It lies just at the top of the Nevado-Filabride Complex, at a depth that could exceed 800 m according to the borehole and geophysical investigations, so the possibility that the CO2 in this aquifer is in a supercritical state cannot be discarded. Consequently, the basin gathers the main characteristics of a natural, deep geological storage of CO2, i.e. a natural analogue of a CO2-DGS in a deep saline aquifer. The overexploitation of the shallower aquifers of the basin for agricultural purposes lowered the groundwater levels and hydrostatic pressures and, as a result, caused the ascent of the deep, saline, CO2-rich groundwater, which is responsible for the contamination of the shallow, fresh aquifers. The hydrogeochemical study of the investigated aquifers shows a great variety of hydrofacies, even among aquifers of similar lithology. The high salinity of these waters prevents both human consumption and agricultural use. In addition, the slightly acidic character of most of these waters gives them a great capacity to dissolve and transport heavy and/or toxic elements towards the surface, notably U, an element abundant in the acid volcanic rocks of the basin, with contents of up to 14 ppm, mainly as sub-microscopic uraninite crystals. The isotopic study, particularly the isotopic signature of the C in the dissolved inorganic carbon (δ13C-DIC), suggests that the dissolved C is a mixture from two main sources: the thermal decomposition of the limestones and marbles of the substrate, and an edaphic origin; a minor contribution of C from mantle degassing cannot be discarded. The study of the travertines forming where the waters of several wells discharge, as a result of rapid CO2 degassing and the consequent rise in pH, highlights this phenomenon, by analogy, as a warning sign of CO2 leakage from a CO2-DGS. The analysis of the dissolved and free gases, with special attention to CO2 and the associated 222Rn, indicates that the C of the CO2, both dissolved and in the free phase, has an origin similar to that of the DIC; the R/Ra ratio of the He in these gases confirms the minor contribution of mantle-derived CO2. The 222Rn is generated by the radioactive decay of U, particularly abundant in the volcanic rocks of the basin, and/or by 226Ra derived from that U or present in the Messinian gypsums of the basin. Moreover, CO2 acts as a carrier of the 222Rn, a fact evidenced by the positive anomalies of both gases at ~1 m depth, mainly related to natural (faults and contacts) and anthropogenic (wells) perturbations. The isotopic signature of the C from the DIC, the carbonates (travertines), and the dissolved and free CO2 suggests that this signal can be used as an excellent tracer of CO2 escapes from a CO2-DGS, into which CO2 generally derived from the combustion of fossil fuels, with δ13C(V-PDB) of ~ -30 ‰, would be injected. These results have allowed a conceptual model of the natural system of the Ganuelas-Mazarrón basin to be built as a natural analogue of a CO2-DGS, and the relationships between the natural and artificial systems to be established. The most important analogies, regarding the elements of the system, are the presence of: i) a deep saline CO2-rich aquifer, analogous to the storage formation of a CO2-DGS; ii) a marly sedimentary formation more than 500 m thick, corresponding to the seal formation of a CO2-DGS; and iii) shallower aquifers with fresh water suitable for human consumption, U-rich volcanic rocks, and faults sealed by gypsums and/or marls, elements that could also occur at a CO2-DGS site. The most important analogous processes identified are: i) the upward injection of CO2, analogous to the downward injection of anthropogenic CO2, the latter with a δ13C(V-PDB) signature of ~ -30 ‰; ii) the dissolution of CO2 and 222Rn in the groundwater of the deep aquifer, analogous to the dissolution of these gases in the storage formation of a CO2-DGS; iii) the contamination of the shallower aquifers by the rise of CO2-oversaturated groundwater, analogous to the contamination that would occur in aquifers above a CO2-DGS if it were perturbed naturally (reactivation of faults) or artificially (wells); iv) the degassing (CO2 and associated gases, notably 222Rn) of the deep saline aquifer through wells, analogous to what could occur in a perturbed CO2-DGS; and v) the rapid formation of travertines, indicating that the CO2-DGS has lost its sealing capacity. The identification of the most important analogies has also allowed the behaviour and safety of a CO2-DGS located in a geological context similar to the studied natural system to be analysed and evaluated, approximately, in the short, medium and long term. To this end, the methodology based on the analysis and identification of FEPs (Features, Events and Processes) was followed, combining the FEPs to generate and analyse different scenarios of system evolution (scenario analysis). The evolution scenarios identified in the perturbed natural system, related to boreholes, the overexploitation of aquifers, the rapid precipitation of travertines, etc., are analogous to those that might occur in a CO2-DGS perturbed anthropogenically, so it is absolutely necessary to avoid the artificial perturbation of the seal formation of a CO2-DGS. Finally, a methodology for the study of possible CO2-DGS sites is proposed, based on the information obtained from this investigation and the lessons learned from this natural CO2 reservoir. It comprises several phases: the geological and structural characterization of the site and its components (water, rock and gases), the identification of the analogies between a natural CO2 storage system and a conceptual model of a CO2-DGS, and the establishment of the implications for the behaviour and safety of a CO2-DGS.
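The use of the δ13C signature as a leakage tracer rests on a standard two-endmember isotope mass balance; a sketch with illustrative numbers (the background and sample values below are hypothetical, not the thesis's measurements; only the ~ -30 ‰ injected signature comes from the abstract):

```latex
% Two-endmember mixing: fraction f of injected CO2 in a sampled gas.
\[
  \delta^{13}\mathrm{C}_{\mathrm{sample}}
    = f\,\delta^{13}\mathrm{C}_{\mathrm{injected}}
    + (1-f)\,\delta^{13}\mathrm{C}_{\mathrm{background}}
  \quad\Longrightarrow\quad
  f = \frac{\delta^{13}\mathrm{C}_{\mathrm{sample}}
            - \delta^{13}\mathrm{C}_{\mathrm{background}}}
           {\delta^{13}\mathrm{C}_{\mathrm{injected}}
            - \delta^{13}\mathrm{C}_{\mathrm{background}}}
\]
% Example: background -6 permil, injected -30 permil, measured -12 permil
% gives f = (-12 + 6)/(-30 + 6) = 0.25, i.e. a quarter of the sampled CO2
% would be attributed to leakage from the storage.
```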