927 results for Multiple Change-point Analysis


Relevance:

40.00%

Publisher:

Abstract:

Oceanic zircon trace element and Hf-isotope geochemistry offers a means to assess the magmatic evolution of a dying spreading ridge and provides an independent evaluation of the reliability of oceanic zircon as an indicator of mantle melting conditions. The Macquarie Island ophiolite in the Southern Ocean provides a unique testing ground for this approach due to its formation within a mid-ocean ridge that gradually changed into a transform plate boundary. Detrital zircon recovered from the island records this change through a progressive enrichment in incompatible trace elements. Oligocene age (33-27 Ma) paleo-detrital zircons in ophiolitic sandstones and breccias interbedded with pillow basalt have trace element compositions akin to a MORB crustal source, whereas Late Miocene age (8.5 Ma) modern-detrital zircons collected from gabbroic colluvium on the island have highly enriched compositions unlike typical oceanic zircon. This compositional disparity between age populations is not complemented by analytically equivalent εHf data, which primarily range from 14 to 13 for the sandstone and modern-detrital populations. A wider compositional range for the sandstone population reflects a multiple-pluton source provenance and is augmented by a single cobble clast with εHf equivalent to the maximum observed composition in the sandstone (~17). Similar sandstone and colluvium Hf-isotope signatures indicate inheritance from a similar mantle reservoir that was enriched relative to the depleted MORB mantle average. The continuity in Hf-isotope signature, relative to the trace element enrichment, in Macquarie Island zircon populations suggests the latter formed by reduced partial melting linked to spreading-segment shortening and transform lengthening along the dying spreading ridge.
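For reference, the εHf values quoted above conventionally express a sample's measured ¹⁷⁶Hf/¹⁷⁷Hf ratio as a parts-per-ten-thousand deviation from the chondritic reservoir (CHUR). A minimal sketch of that conversion, assuming the commonly used Bouvier et al. (2008) CHUR value; the input ratio is illustrative, not a value reported in this abstract:

```python
# Hedged sketch: present-day epsilon-Hf from a measured 176Hf/177Hf ratio.
# CHUR value (0.282785) follows Bouvier et al. (2008); treat it as an
# assumption, not data from the study above.
CHUR_176HF_177HF = 0.282785

def epsilon_hf(ratio_176hf_177hf: float) -> float:
    """Parts-per-ten-thousand deviation of a sample from CHUR."""
    return (ratio_176hf_177hf / CHUR_176HF_177HF - 1.0) * 1.0e4

# A ratio slightly above CHUR yields a positive, depleted-mantle-like value.
sample_epsilon = epsilon_hf(0.283181)  # roughly +14
```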


In September 1999 two short-term moorings with cylindrical sediment traps were deployed to collect sinking particles in bottom waters off the Ob and Yenisei river mouths. Samples were studied for their bulk composition, pigments, phytoplankton, microzooplankton, fecal material, amino acids, hexosamines, fatty acids and sterols, and compared to suspended matter and surface sediments in order to gather information about the nature and cycling of particulate matter in the water column. All measured components of the sinking particles point to a seasonal progression in the pelagic system, from blooming diatoms in the first phase to a more retention-dominated system in the second half of the trap deployment. Owing to a phytoplankton bloom observed north of the Ob estuary, flux rates were generally higher in the trap deployed off the Ob than in the one off the Yenisei. The Ob trap collected fresh, surface-derived particulate matter, whereas particles from the Yenisei trap were more degraded and resembled deep-water suspension; this material may partly have been derived from resuspended sediments.
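Flux rates from cylindrical traps of this kind are typically obtained by normalizing the collected mass by the trap aperture area and deployment length. A minimal sketch of that bookkeeping; the sample mass, aperture diameter and deployment length are illustrative assumptions, not values reported in the study:

```python
# Hedged sketch: converting a sediment-trap sample mass to a vertical
# particle flux (mg per m^2 per day). All numeric inputs are illustrative.
import math

def particle_flux(mass_mg: float, aperture_diameter_cm: float,
                  deployment_days: float) -> float:
    """Flux = collected mass / (aperture area * deployment time)."""
    radius_m = aperture_diameter_cm / 100.0 / 2.0
    area_m2 = math.pi * radius_m ** 2
    return mass_mg / (area_m2 * deployment_days)

# e.g. 50 mg collected over 10 days in a 10-cm-diameter cylinder
flux = particle_flux(50.0, 10.0, 10.0)  # a few hundred mg m^-2 d^-1
```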


This paper describes seagrass species and percentage-cover point-based field data sets derived from georeferenced photo transects. Data sets were collected annually or biannually over a ten-year period (2004-2015) using 30-50 transects, 500-800 m in length, distributed across a 142 km² shallow, clear-water seagrass habitat: the Eastern Banks, Moreton Bay, Australia. Each of the eight data sets includes seagrass property information derived from approximately 3000 georeferenced, downward-looking photographs captured at 2-4 m intervals along the transects. Photographs were manually interpreted to estimate seagrass species composition and percentage cover (Coral Point Count with Excel extensions; CPCe). Understanding seagrass biology, ecology and dynamics for scientific and management purposes requires point-based data on species composition and cover. This data set, and the methods used to derive it, are a globally unique example for seagrass ecological applications. It provides the basis for multiple further studies at this site, for regional to global comparative studies, and for the design of similar monitoring programs elsewhere.
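The point-based interpretation described above boils down to a point-intercept estimate: the percentage cover of each category is its share of the labelled points on a photograph. A minimal sketch of that calculation; the category labels are illustrative, not taxa from the data set:

```python
# Hedged sketch of a CPCe-style point-count estimate: percentage cover per
# category from manually labelled points on one photo quadrat.
from collections import Counter

def percent_cover(point_labels: list) -> dict:
    """Share of labelled points per category, as a percentage."""
    counts = Counter(point_labels)
    total = len(point_labels)
    return {label: 100.0 * n / total for label, n in counts.items()}

# 24 labelled points on one photograph (labels are illustrative)
labels = ["Zostera"] * 12 + ["Halophila"] * 6 + ["sand"] * 6
cover = percent_cover(labels)  # Zostera 50%, Halophila 25%, sand 25%
```

Averaging these per-photo estimates along a transect then gives a transect-level cover estimate.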


Contaminated soil reuse was investigated most intensively during the early 1990s, coinciding with the 1991 Gulf War, when efforts to remediate large crude oil releases stimulated the geotechnical assessment of contaminated soils. Isolated works on geotechnical testing of hydrocarbon-contaminated ground are described in the state of the art, extended with references to other types of soil contamination. The bearing-capacity reduction of soils contaminated by light non-aqueous phase liquids (LNAPLs) has previously been investigated from a forensic point of view. To date, all published research has assumed constant contaminant saturation throughout the entire soil mass. In contrast, actual LNAPL distribution plumes exhibit complex flow patterns which are subject to physical and chemical changes with time and with distance travelled from the release source. This aspect has been considered throughout the present text. A typical Madrid arkosic soil formation is commonly known as Miga sand. Geotechnical tests have been carried out on Miga sand specimens prepared with incremental series of LNAPL concentrations, in order to observe how the engineering properties of the soil vary as contamination increases. Results are discussed in relation to previous studies; soil mechanics parameters do change in the presence of LNAPL, showing different tendencies according to each test, the LNAPL content, and the specimen's target relative density (dense or loose). Practical geotechnical implications are also commented on and analyzed. Variation in geotechnical properties should occur only within the external contour of the contaminant distribution plume. This observation motivated the author to develop a physical model based on transparent soil technology. The model aims to reproduce the distribution of LNAPL in the ground following an accidental release from a storage facility.
Preliminary results indicate that the model is a potentially complementary tool for hydrogeological applications, site characterization and remediation treatment testing within the framework of soil pollution events. A description of the test setup of an innovative three-dimensional physical model for the flow of two or more phases in porous media is presented herein, along with a summary of the advantages, limitations and future applications of modeling with transparent material.
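The dense/loose classification of the sand specimens rests on the standard relative density relation, which places a specimen's void ratio between its loosest and densest laboratory states. A minimal sketch; the e_max/e_min values are illustrative, not measured Miga sand data:

```python
# Hedged sketch: relative density of a sand specimen from its void ratio,
# the quantity used to classify specimens as "dense" or "loose".
# The void ratios below are illustrative assumptions.
def relative_density(e: float, e_max: float, e_min: float) -> float:
    """D_r = (e_max - e) / (e_max - e_min), as a fraction of 1."""
    return (e_max - e) / (e_max - e_min)

dr = relative_density(e=0.60, e_max=0.90, e_min=0.50)  # 0.75, i.e. dense
```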


There is as yet no unanimous consensus on the propagation mechanism before the break point inside tunnels. Some hold that the propagation mechanism follows the free-space model; others argue that it should be described by the multi-mode waveguide model. This paper first analyzes the propagation loss under the two mechanisms. Then, by combining propagation theory with three-dimensional solid geometry, a generic analytical model for the boundary between the free-space mechanism and the multi-mode waveguide mechanism inside tunnels is presented. Three measurement campaigns validate the model in different tunnels at different frequencies. Furthermore, the conditions under which the free-space model is valid in tunnel environments are discussed for some specific situations. Finally, through mathematical derivation, the seemingly conflicting viewpoints on the free-space mechanism and the multi-mode waveguide mechanism are unified, in some specific situations, by the presented generic model. The results in this paper can help to provide deeper insight into, and a better understanding of, the propagation mechanism inside tunnels.
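The free-space side of the comparison is the standard Friis free-space path loss, which grows by 6 dB per doubling of distance; beyond the break point, measured tunnel loss departs from this slope. A minimal sketch of the free-space term only (the paper's boundary model itself is not reproduced here; frequency and distances are illustrative):

```python
# Hedged sketch: standard free-space path loss in dB,
# FSPL = 20*log10(4*pi*d*f/c). Inputs below are illustrative.
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss between isotropic antennas, in dB."""
    c = 299_792_458.0  # speed of light, m/s
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / c)

# In the free-space regime, loss grows 6 dB per doubling of distance.
loss_100m = fspl_db(100.0, 2.4e9)  # ~80 dB at 2.4 GHz
loss_200m = fspl_db(200.0, 2.4e9)  # ~6 dB more
```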


Global analyzers traditionally read and analyze the entire program at once, in a non-incremental way. However, there are many situations which are not well suited to this simple model and which instead require reanalysis of certain parts of a program which has already been analyzed. In these cases, it appears inefficient to perform the analysis of the program again from scratch, as needs to be done with current systems. We describe how the fixed-point algorithms used in current generic analysis engines for (constraint) logic programming languages can be extended to support incremental analysis. The possible changes to a program are classified into three types: addition, deletion, and arbitrary change. For each of these, we provide one or more algorithms for identifying the parts of the analysis that must be recomputed and for performing the actual recomputation. The potential benefits and drawbacks of these algorithms are discussed. Finally, we present some experimental results obtained with an implementation of the algorithms in the PLAI generic abstract interpretation framework. The results show significant benefits when using the proposed incremental analysis algorithms.
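The core idea can be illustrated with a generic worklist fixed point that is seeded only with the changed nodes, so results for unaffected parts of the dependency graph are reused. This is a hedged sketch under assumed interfaces, not the PLAI implementation; all names are illustrative:

```python
# Hedged sketch (not PLAI): an incremental worklist fixed point over a
# dependency graph. Only nodes reachable from the edited ones are revisited;
# previously computed results for everything else are reused as-is.
def incremental_fixpoint(deps, transfer, results, changed):
    """deps: node -> set of nodes that depend on it;
    transfer: (node, results) -> new abstract value for node;
    results: previously computed values (updated in place);
    changed: nodes whose definitions were edited."""
    worklist = set(changed)
    while worklist:
        node = worklist.pop()
        new_value = transfer(node, results)
        if results.get(node) != new_value:
            results[node] = new_value
            worklist |= deps.get(node, set())  # propagate to dependents
    return results

# Toy usage: values flow a -> b -> c; editing `a` re-propagates downstream.
deps = {"a": {"b"}, "b": {"c"}}
def transfer(node, res):
    return {"a": 5, "b": res.get("a", 0) + 1, "c": res.get("b", 0) + 1}[node]

updated = incremental_fixpoint(deps, transfer, {"a": 1, "b": 2, "c": 3}, {"a"})
```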


Resource Analysis (a.k.a. Cost Analysis) tries to approximate the cost of executing a program as a function of its input data sizes, without actually having to execute the program. While a powerful resource analysis framework for object-oriented programs existed before this thesis, advanced aspects that improve the efficiency, the accuracy and the reliability of the results of the analysis still need to be further investigated. This thesis tackles this need from the following four different perspectives. (1) Shared mutable data structures are the bane of formal reasoning and static analysis. Analyses which keep track of heap-allocated data are referred to as heap-sensitive. Recent work proposes locality conditions for soundly tracking field accesses by means of ghost non-heap-allocated variables. In this thesis we present two extensions to this approach: the first is to consider array accesses (in addition to object fields), while the second focuses on handling cases for which the locality conditions cannot be proven unconditionally, by finding aliasing preconditions under which tracking such heap locations is feasible. (2) The aim of incremental analysis is, given a program, its analysis results and a series of changes to the program, to obtain the new analysis results as efficiently as possible and, ideally, without having to (re-)analyze fragments of code that are not affected by the changes. During software development, programs are permanently modified, but most analyzers still read and analyze the entire program at once in a non-incremental way.
This thesis presents an incremental resource usage analysis which, after a change in the program is made, is able to reconstruct the upper bounds of all affected methods in an incremental way. To this purpose, we propose (i) a multi-domain incremental fixed-point algorithm which can be used by all the global analyses required to infer the cost, and (ii) a novel form of cost summaries that allows us to incrementally reconstruct only those components of the cost functions affected by the change. (3) Resource guarantees that are automatically inferred by static analysis tools are generally not considered completely trustworthy unless the tool implementation or the results are formally verified. Performing full-blown verification of such tools is a daunting task, since they are large and complex. In this thesis we focus on the development of a formal framework for the verification of the resource guarantees obtained by the analyzers, instead of verifying the tools. We have implemented this idea using COSTA, a state-of-the-art cost analyzer for Java programs, and KeY, a state-of-the-art verification tool for Java source code. COSTA is able to derive upper bounds for Java programs, while KeY proves the validity of these bounds and provides a certificate. The main contribution of our work is to show that the proposed tool cooperation can be used to automatically produce verified resource guarantees. (4) Distribution and concurrency are today mainstream. Concurrent objects form a well-established model for distributed concurrent systems. In this model, objects are the concurrency units that communicate via asynchronous method calls. Distribution suggests that analysis must infer the cost of the diverse distributed components separately. In this thesis we propose a novel object-sensitive cost analysis which, by using the results gathered by a points-to analysis, can keep the cost of the diverse distributed components separate.
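The "cost summaries" idea can be illustrated with a toy model in which each method's bound is its local cost plus symbolic references to its callees, so editing one method only forces re-evaluation along its call chains. This is a hedged sketch under assumed names, not COSTA's actual representation:

```python
# Hedged toy model (not COSTA's representation) of compositional cost
# summaries: a method's upper bound = its local cost + the bounds of the
# methods it calls, so a change to one method only affects its callers.
def total_cost(method, local, calls, memo=None):
    """Upper-bound cost of `method`, memoized per analysis run."""
    if memo is None:
        memo = {}
    if method not in memo:
        memo[method] = local[method] + sum(
            total_cost(callee, local, calls, memo)
            for callee in calls.get(method, []))
    return memo[method]

# Toy call graph: main calls f once; f calls g twice. Names are illustrative.
local = {"main": 2, "f": 3, "g": 1}
calls = {"main": ["f"], "f": ["g", "g"]}
cost_before = total_cost("main", local, calls)  # 2 + 3 + 1 + 1 = 7
local["g"] = 4                                  # edit only g's local cost
cost_after = total_cost("main", local, calls)   # 2 + 3 + 4 + 4 = 13
```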


The modal analysis of a structural system consists of computing its vibration modes. The experimental way to estimate these modes requires exciting the system with a measured or known input and then measuring the system output at different points using sensors. Finally, the system inputs and outputs are used to compute the modes of vibration. When the system is a large structure such as a building or a bridge, the tests have to be performed in situ, so it is not possible to measure system inputs such as wind or traffic. Even if a known input is applied, the procedure is usually difficult and expensive, and there are still uncontrolled disturbances acting at the time of the test. These facts led to the idea of computing the modes of vibration using only the measured vibrations, regardless of the inputs that originated them, whether ambient vibrations (wind, earthquakes, ...) or operational loads (traffic, human loading, ...). This procedure is usually called Operational Modal Analysis (OMA) and in general consists of fitting a mathematical model to the measured data under the assumption that the unobserved excitations are realizations of a stationary stochastic process (usually white noise). The modes of vibration are then computed from the estimated model. The first issue investigated in this thesis is the performance of the Expectation-Maximization (EM) algorithm for the maximum likelihood estimation of the state space model in the field of OMA. The algorithm is described in detail, and it is analysed how to apply it to vibration data. It is then compared to another well-known method, the Stochastic Subspace Identification algorithm. The maximum likelihood estimate enjoys some optimal properties from a statistical point of view, which makes it very attractive in practice, but the most remarkable property of the EM algorithm is that it can be used to address a wide range of situations in OMA.
In this work, three additional state space models are proposed and estimated using the EM algorithm:
• The first model is proposed to estimate the modes of vibration when several tests are performed on the same structural system. Instead of analysing record by record and then computing averages, the EM algorithm is extended for the joint estimation of the proposed state space model using all the available data.
• The second state space model is used to estimate the modes of vibration when the number of available sensors is lower than the number of points to be tested. In these cases it is usual to perform several tests, changing the position of the sensors from one test to the next (multiple setups of sensors). Here, the proposed state space model and the EM algorithm are used to estimate the modal parameters taking into account the data of all setups.
• Last, a state space model is proposed to estimate the modes of vibration in the presence of unmeasured inputs that cannot be modelled as white noise processes. In these cases, the frequency components of the inputs cannot be separated from the eigenfrequencies of the system, and spurious modes are obtained in the identification process. The idea is to measure the response of the structure corresponding to different inputs; it is then assumed that the parameters common to all the data correspond to the structure (modes of vibration), and the parameters found in a specific test correspond to the input in that test. The problem is solved using the proposed state space model and the EM algorithm.
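Once a discrete-time state space model has been estimated (e.g. by the EM algorithm), natural frequencies and damping ratios follow from the eigenvalues of the state matrix via the continuous-time poles. A minimal sketch of that last step, with an illustrative eigenvalue constructed from an assumed 1 Hz mode with 2% damping (not data from this thesis):

```python
# Hedged sketch: modal parameters from a discrete-time eigenvalue lambda.
# Continuous pole mu = ln(lambda)/dt; frequency f = |mu|/(2*pi),
# damping ratio zeta = -Re(mu)/|mu|.
import cmath
import math

def modal_parameters(lam: complex, dt: float):
    """Return (natural frequency in Hz, damping ratio) from one eigenvalue."""
    mu = cmath.log(lam) / dt
    omega = abs(mu)  # natural angular frequency, rad/s
    return omega / (2.0 * math.pi), -mu.real / omega

# Illustrative check: build the eigenvalue of a 1 Hz mode with 2% damping
# sampled at 100 Hz, then recover its parameters.
dt = 0.01
omega_n, zeta = 2.0 * math.pi * 1.0, 0.02
mu = complex(-zeta * omega_n, omega_n * math.sqrt(1.0 - zeta ** 2))
lam = cmath.exp(mu * dt)
f_hz, damping = modal_parameters(lam, dt)  # recovers ~1.0 Hz, ~0.02
```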


Water scarcity is becoming a major concern in many parts of the world. Population growth, increasing needs for food production, socio-economic development and climate change represent pressures on water resources that many countries around the world will have to deal with in the coming years. The Mediterranean region is one of the most water-scarce regions of the world and is considered a climate change hotspot. Most projections of climate change envisage an increase in temperatures, a decrease in precipitation, and a resulting increase in water scarcity as a consequence of both reduced water availability and increased irrigation demands. Current policy development processes require the integration of climate change concerns into sectoral policies. However, sector-oriented studies often fail to address all the dimensions of climate change implications.
Climate change research in recent years has evidenced the need for more integrated studies and methodologies that are capable of addressing the multi-scale and multi-dimensional nature of climate change. This research attempts to provide a comprehensive view of water scarcity and climate change impacts, vulnerability and adaptation in Mediterranean contexts. It presents an integrated modelling framework that is progressively enlarged in a sequential multi-scale process in which a new dimension of climate change and water resources is addressed at every stage. It comprises four stages, each one presented in a separate chapter. The first stage explores farm-level economic vulnerability in the Spanish Guadiana basin using a mathematical programming model in combination with an econometric model. Then, in a second stage, the use of a hydro-economic modelling framework that includes a crop growth model allows for the analysis of crop, farm and basin level processes taking into account different geographical and decision-making scales. This integrated tool is used for the analysis of climate change scenarios and for the assessment of potential adaptation options. The third stage includes the analysis of barriers to the effective implementation of adaptation processes based on socio-institutional network analysis. Finally, a regional and country level perspective on water scarcity and climate change is provided, focusing on different possible socio-economic development pathways and the effect of policies on future water scarcity. For this analysis, a panel-data econometric model and a hydro-economic model are applied to the Mediterranean region and to country-level case studies in Spain and Jordan. The overall results of the study demonstrate the value of considering multiple scales and multiple dimensions in water management and climate change adaptation in the Mediterranean water-scarce contexts analysed.
Results show that climate change impacts in the Guadiana basin and in Spain may compromise the sustainability of irrigation systems and ecosystems. The analysis at the basin level highlights the prominent role of interactions between different water users and irrigation districts and the need to strengthen institutional capacity and common understanding in the basin to enhance the implementation of adaptation processes. The results of this research also illustrate the relevance of water policies in achieving sustainable development and climate change adaptation in water-scarce areas such as the Mediterranean region. Specifically, the EU Water Framework Directive emerges as a powerful trigger for climate change adaptation. However, in Jordan, more ambitious sustainable development strategies are required in addition to climate change adaptation to reduce the future risk of water scarcity.

Relevância:

40.00% 40.00%

Publicador:

Resumo:

In Operational Modal Analysis (OMA) of a structure, the data acquisition process may be repeated many times. In these cases, the analyst has several similar records for the modal analysis of the structure that have been obtained at different time instants (multiple records). The solution obtained varies from one record to another, sometimes considerably. The differences are due to several reasons: statistical estimation errors, changes in the external (unmeasured) forces that modify the output spectra, the appearance of spurious modes, etc. Combining the results of the different individual analyses is not straightforward. To solve the problem, we propose a joint estimation of the parameters using all the records. This can be done in a very simple way using state space models and computing the estimates by maximum likelihood. The method provides a single result for the modal parameters that optimally combines all the records.
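The idea of joint estimation over multiple records can be sketched as follows. This is a deliberately simplified illustration, not the authors' state-space implementation: each record is modelled directly as a noisy damped sinusoid, the per-record amplitude and phase are treated as nuisance parameters, and the shared frequency and damping ratio are found by maximizing the *summed* log-likelihood over all records instead of averaging per-record estimates. All numerical values are hypothetical.

```python
# Joint maximum-likelihood estimation of shared modal parameters
# (frequency f, damping ratio z) from several records. Hypothetical
# simplification of the state-space approach described in the text.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
fs, T = 50.0, 10.0                      # sampling rate (Hz), duration (s)
t = np.arange(0.0, T, 1.0 / fs)
f_true, z_true = 2.0, 0.02              # true frequency (Hz) and damping ratio

def response(f, z, amp, phase):
    """Free-decay response of a single mode (damped sinusoid)."""
    wd = 2 * np.pi * f * np.sqrt(1 - z**2)
    return amp * np.exp(-z * 2 * np.pi * f * t) * np.cos(wd * t + phase)

# Three records of the same structure: different (unmeasured) excitation
# levels and phases, plus measurement noise.
records = [response(f_true, z_true, a, p) + 0.05 * rng.standard_normal(t.size)
           for a, p in [(1.0, 0.0), (0.7, 0.5), (1.3, -0.3)]]

def neg_log_lik(theta):
    """Concentrated negative log-likelihood, summed over all records."""
    f, z = theta
    wd = 2 * np.pi * f * np.sqrt(1 - z**2)
    env = np.exp(-z * 2 * np.pi * f * t)
    # Per-record amplitude/phase eliminated by linear least squares;
    # Gaussian errors give a log-variance criterion per record.
    X = np.column_stack([env * np.cos(wd * t), env * np.sin(wd * t)])
    total = 0.0
    for y in records:
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = y - X @ beta
        total += 0.5 * t.size * np.log(np.mean(r**2))
    return total

# Coarse frequency grid to land in the main likelihood lobe, then refine.
f_grid = np.linspace(1.5, 2.5, 101)
f0 = f_grid[np.argmin([neg_log_lik([f, 0.02]) for f in f_grid])]
res = minimize(neg_log_lik, x0=[f0, 0.03], method="Nelder-Mead")
f_hat, z_hat = res.x
print(f_hat, z_hat)
```

Because all records enter one likelihood, the estimate weights each record by how informative it is, rather than averaging three separate, possibly inconsistent, estimates.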

Relevância:

40.00% 40.00%

Publicador:

Resumo:

This paper presents the theoretical analysis of a storage integrated solar thermophotovoltaic (SISTPV) system operating in steady state. These systems combine thermophotovoltaic (TPV) technology and high-temperature phase-change materials (PCM) for thermal storage in the same unit, offering great potential in terms of efficiency, cost reduction and storage energy density. The main attraction of the proposed system is its simplicity and modularity compared to conventional Concentrated Solar Power (CSP) technologies, mainly due to the absence of moving parts. In this paper we analyse the use of silicon as the phase-change material. Silicon is an excellent candidate because of its high melting point (1680 K) and its very high latent heat of fusion of 1800 kJ/kg, about ten times greater than that of conventional PCMs such as molten salts. For a simple system configuration, we have demonstrated that overall conversion efficiencies of up to ~35% are approachable. Higher efficiencies are expected by incorporating more advanced devices such as multijunction TPV cells or narrow-band selective emitters, by adopting near-field TPV configurations, and by enhancing the convective/conductive heat transfer within the PCM. We also discuss the optimum system configurations and provide general guidelines for designing these systems. Preliminary estimates of night-time operation indicate that it is possible to achieve over 10 h of operation with a relatively small quantity of silicon.
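The night-time storage claim can be checked with a back-of-envelope calculation using the latent heat of fusion quoted in the abstract (1800 kJ/kg) and the ~35% efficiency figure. The PCM mass and electrical output power below are hypothetical values chosen for illustration, not figures from the paper.

```python
# Back-of-envelope estimate of night-time runtime from silicon's latent heat.
L_FUSION = 1.8e6          # J/kg, latent heat of fusion of silicon (from text)
EFFICIENCY = 0.35         # overall conversion efficiency (upper bound from text)
m_pcm = 10.0              # kg of silicon (assumed)
p_elec = 150.0            # W of electrical output (assumed)

e_stored = m_pcm * L_FUSION        # thermal energy released on solidification, J
e_elec = e_stored * EFFICIENCY     # electrical energy at ~35% conversion, J
hours = e_elec / p_elec / 3600.0   # runtime at constant output, h

print(f"{hours:.1f} h of night-time operation")   # ≈ 11.7 h
```

With only 10 kg of silicon the latent heat alone sustains well over 10 h of modest electrical output, consistent with the "relatively small quantity" claim; sensible heat above and below the melting point would extend this further.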

Relevância:

40.00% 40.00%

Publicador:

Resumo:

In this paper a novel bidirectional multiple-port dc/dc transformer topology is presented. The novel dc/dc transformer concept is based on the Series Resonant Converter (SRC) topology operated at its resonant frequency point. This allows a higher switching frequency to be adopted and enables high-efficiency/high-power-density operation. The feasibility of the proposed concept is verified on a 300 W, 700 kHz three-port prototype with 390 V input voltage and 48 V and 12 V output voltages. A peak overall efficiency of 93% is measured at full load. A very good load and cross-regulation characteristic of the converter is observed over the whole load range, from full load to open circuit. A sensitivity analysis of the resonant capacitance is also performed, showing only a very slight deterioration in converter performance when the resonant capacitance is changed by ±30% of its nominal value.
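The ±30% capacitance sensitivity can be put in perspective with the series resonant tank relation f_r = 1/(2π√(LC)). The inductance and capacitance values below are hypothetical, chosen so that the nominal resonance sits at the 700 kHz switching frequency reported for the prototype.

```python
# Resonant-frequency sensitivity of a series resonant tank to the
# ±30% capacitance variation considered in the paper's sensitivity study.
import math

L_r = 5.2e-6                                    # resonant inductance, H (assumed)
C_r = 1 / ((2 * math.pi * 700e3) ** 2 * L_r)    # C chosen so f_r = 700 kHz

def f_res(L, C):
    """Series resonant frequency f_r = 1 / (2*pi*sqrt(L*C))."""
    return 1 / (2 * math.pi * math.sqrt(L * C))

nominal = f_res(L_r, C_r)
low = f_res(L_r, 0.7 * C_r)    # capacitance 30% below nominal -> f_r rises
high = f_res(L_r, 1.3 * C_r)   # capacitance 30% above nominal -> f_r falls

# Since f_r scales as 1/sqrt(C), -30%/+30% in C shifts f_r by roughly
# +20%/-12%; the converter then switches slightly off resonance, which is
# consistent with the mild performance change reported.
print(nominal, low, high)
```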

Relevância:

40.00% 40.00%

Publicador:

Resumo:

To better understand the destruction mechanisms of wake vortices behind aircraft, the inviscid point-vortex stability method used by Crow is compared here with viscous modal global stability analysis of the linearized Navier-Stokes equations acting on a two-dimensional basic flow, i.e. BiGlobal stability analysis. Because the BiGlobal method is viscous and uses a finite-area vortex model, it gives results somewhat different from those of the point-vortex model. It adds more parameters to the problem, but is more realistic.
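As background to the point-vortex picture: in Crow's model the counter-rotating trailing pair descends under mutual induction at V = Γ/(2πb), and the instability grows on top of this base state. The circulation and spacing below are illustrative values of the order of a large transport aircraft, not data from the text.

```python
# Mutual-induction descent velocity of a counter-rotating vortex pair,
# the base state of Crow's point-vortex stability analysis.
import math

gamma = 400.0     # circulation of each trailing vortex, m^2/s (assumed)
b = 40.0          # lateral spacing of the vortex pair, m (assumed)

# Each vortex is advected by the velocity the other induces at its core:
v_descent = gamma / (2 * math.pi * b)   # m/s
print(f"{v_descent:.2f} m/s")
```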

Relevância:

40.00% 40.00%

Publicador:

Resumo:

Computing the modal parameters of large structures in Operational Modal Analysis often requires processing data from multiple non-simultaneously recorded setups of sensors. These setups share some sensors in common, the so-called reference sensors, which are fixed for all the measurements, while the other sensors are moved from one setup to the next. One possibility is to process the setups separately, which results in different modal parameter estimates for each setup. The reference sensors are then used to merge or glue the different parts of the mode shapes to obtain global modes, while the natural frequencies and damping ratios are usually averaged. In this paper we present a state space model that can be used to process all setups at once, so that the global mode shapes are obtained automatically and a single value of the natural frequency and damping ratio is computed for each mode. We also show how this model can be estimated using maximum likelihood and the Expectation Maximization algorithm. We apply this technique to real data measured at a footbridge.
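The conventional "glue" step that the paper's one-shot approach replaces can be sketched as a least-squares rescaling of partial mode shapes on the shared reference sensors. The sensor layout and mode-shape values below are hypothetical.

```python
# Merging partial mode shapes from two setups via their reference sensors.
import numpy as np

# Setup 1: reference sensors r1, r2 plus roving sensors a, b
phi1 = {"r1": 1.00, "r2": 0.80, "a": 0.55, "b": 0.30}
# Setup 2: same references (arbitrary different scaling) plus sensors c, d
phi2 = {"r1": 2.10, "r2": 1.65, "c": 0.95, "d": 0.40}

refs = ["r1", "r2"]
x = np.array([phi2[r] for r in refs])
y = np.array([phi1[r] for r in refs])
# Least-squares scale factor mapping setup-2 values onto the setup-1 scale:
# minimizes ||y - alpha * x||^2 over the shared reference sensors.
alpha = float(x @ y / (x @ x))

merged = dict(phi1)
for k, v in phi2.items():
    if k not in refs:
        merged[k] = alpha * v      # rescale the roving sensors of setup 2

print(alpha, merged)
```

With more than two setups this pairwise gluing must be chained, and small scaling errors accumulate; processing all setups in one state-space likelihood, as the paper proposes, avoids that step entirely.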