166 results for Distribution static compensator (DSTATCOM)
Abstract:
In the last two decades, interest in species distribution models (SDMs) of plants and animals has grown dramatically. Recent advances in SDMs allow us to potentially forecast anthropogenic effects on patterns of biodiversity at different spatial scales. However, some limitations still preclude the use of SDMs in many theoretical and practical applications. Here, we provide an overview of recent advances in this field, discuss the ecological principles and assumptions underpinning SDMs, and highlight critical limitations and decisions inherent in the construction and evaluation of SDMs. Particular emphasis is given to the use of SDMs for the assessment of climate change impacts and conservation management issues. We suggest new avenues for incorporating species migration, population dynamics, biotic interactions and community ecology into SDMs at multiple spatial scales. Addressing all these issues requires a better integration of SDMs with ecological theory.
Abstract:
The existence of a causal relationship between the spatial distribution of living organisms and their environment, in particular climate, has long been recognized and is the central principle of biogeography. In turn, this recognition has led scientists to the idea of using the climatic, topographic, edaphic and biotic characteristics of the environment to predict its potential suitability for a given species or biological community. In this thesis, my objective is to contribute to the development of methodological improvements in the field of species distribution modeling. More precisely, the objectives are to propose solutions to overcome limitations of species distribution models when applied to conservation biology issues, or when used as an assessment tool of the potential impacts of global change. The first objective of my thesis is to demonstrate the potential of species distribution models for conservation-related applications. I present a methodology to generate pseudo-absences in order to overcome the frequent lack of reliable absence data. I also demonstrate, both theoretically (simulation-based) and practically (field-based), how species distribution models can be successfully used to model and sample rare species. Overall, the results of this first part of the thesis demonstrate the strong potential of species distribution models as a tool for practical applications in conservation biology. The second objective of this thesis is to contribute to improving projections of potential climate change impacts on species distributions, in particular for mountain flora. I develop a dynamic model, MigClim, that allows the implementation of dispersal limitations into classic species distribution models and present an application of this model to two virtual species.
Given that accounting for dispersal limitations requires information on seed dispersal distances, a general methodology to classify species into broad dispersal types is also developed. Finally, the MigClim model is applied to a large number of species in a study area of the western Swiss Alps. Overall, the results indicate that while dispersal limitations can have an important impact on the outcome of future projections of species distributions under climate change scenarios, estimating species threat levels (e.g. species extinction rates) for mountainous areas of limited size (i.e. regional scale) can also be successfully achieved when considering dispersal as unlimited (i.e. ignoring dispersal limitations, which is easier from a practical point of view). Finally, I present the largest fine-scale assessment of potential climate change impacts on mountain vegetation carried out to date. This assessment involves vegetation from 12 study areas distributed across all major western and central European mountain ranges. The results highlight that some mountain ranges (the Pyrenees and the Austrian Alps) are expected to be more affected by climate change than others (Norway and the Scottish Highlands). The results I obtain in this study also indicate that the threat levels projected by fine-scale models are less severe than those derived from coarse-scale models. This result suggests that some species could persist in small refugia that are not detected by coarse-scale models.
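The pseudo-absence idea above lends itself to a short sketch. The snippet below is purely illustrative, not the thesis methodology: it draws random background points over the study extent and rejects any candidate that falls within a buffer distance of a known presence. The function name, the rejection rule, and the planar-coordinate assumption are all mine.

```python
import random

def generate_pseudo_absences(presences, n_points, extent, min_dist, seed=0):
    """Draw candidate background points uniformly over the study extent,
    keeping only those farther than `min_dist` from every known presence.
    `extent` is (xmin, xmax, ymin, ymax); coordinates are planar."""
    rng = random.Random(seed)
    xmin, xmax, ymin, ymax = extent
    absences = []
    while len(absences) < n_points:
        x = rng.uniform(xmin, xmax)
        y = rng.uniform(ymin, ymax)
        # reject candidates inside the exclusion buffer around any presence
        if all((x - px) ** 2 + (y - py) ** 2 >= min_dist ** 2
               for px, py in presences):
            absences.append((x, y))
    return absences
```

The generated points can then be fed to any presence/absence SDM in place of true absences.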
Abstract:
CONTEXT: Fatigue-induced alterations in foot mechanics may lead to structural overload and injury. OBJECTIVES: To investigate how a high-intensity running exercise to exhaustion modifies ankle plantar-flexor and dorsiflexor strength and fatigability, as well as plantar-pressure distribution in adolescent runners. DESIGN: Controlled laboratory study. SETTING: Academy research laboratory. PATIENTS OR OTHER PARTICIPANTS: Eleven male adolescent distance runners (age = 16.9 ± 2.0 years, height = 170.6 ± 10.9 cm, mass = 54.6 ± 8.6 kg) were tested. INTERVENTION(S): All participants performed an exhausting run on a treadmill. An isokinetic plantar-flexor and dorsiflexor maximal-strength test and a fatigue test were performed before and after the exhausting run. Plantar-pressure distribution was assessed at the beginning and end of the exhausting run. MAIN OUTCOME MEASURE(S): We recorded plantar-flexor and dorsiflexor peak torques and calculated the fatigue index. Plantar-pressure measurements were recorded 1 minute after the start of the run and before exhaustion. Plantar variables (ie, mean area, contact time, mean pressure, relative load) were determined for 9 selected regions. RESULTS: Isokinetic peak torques were similar before and after the run in both muscle groups, whereas the fatigue index increased in plantar flexion (28.1%; P = .01) but not in dorsiflexion. For the whole foot, mean pressure decreased from 1 minute to the end (-3.4%; P = .003); however, mean area (9.5%; P = .005) and relative load (7.2%; P = .009) increased under the medial midfoot, and contact time increased under the central forefoot (8.3%; P = .01) and the lesser toes (8.9%; P = .008). CONCLUSIONS: Fatigue resistance in the plantar flexors declined after a high-intensity running bout performed by adolescent male distance runners. This phenomenon was associated with increased loading under the medial arch in the fatigued state but without any excessive pronation.
Abstract:
In this thesis we study several questions related to transaction data measured at an individual level. The questions are addressed in three essays. In the first essay we use tick-by-tick data to estimate non-parametrically the jump process of 37 large stocks traded on the Paris Stock Exchange, and of the CAC 40 index. We separate total daily returns into three components (continuous trading, trading jumps, and overnight returns), and we characterize each of them. We estimate, at the individual and index levels, the contribution of each return component to total daily variability. For the index, the contribution of jumps is smaller and is compensated by the larger contribution of overnight returns. We test formally that individual stocks jump more frequently than the index, and that they do not respond independently to the arrival of news. Finally, we find that daily jumps are larger when their arrival rates are larger. At the contemporaneous level there is a strong negative correlation between the jump frequency and the trading-activity measures. The second essay studies the general properties of the trade- and volume-duration processes for two stocks traded on the Paris Stock Exchange: a very illiquid stock and a relatively liquid one. We estimate an autoregressive gamma process, introduced by Gouriéroux and Jasiak, whose conditional distribution belongs to the family of non-central gamma distributions (up to a scale factor). We also evaluate the ability of the process to fit the data, using the Diebold, Gunther and Tay (1998) test and the capacity of the model to reproduce the moments, the empirical serial correlation, and the partial serial correlation functions of the observed data. We establish that the model describes correctly the trade-duration process of illiquid stocks, but has difficulty fitting the trade-duration process of liquid stocks, which exhibits long-memory characteristics. When the model is adjusted to volume durations, it successfully fits the data. In the third essay we study the economic relevance of optimal liquidation strategies by calibrating a recent and realistic microstructure model with data from the Paris Stock Exchange. We distinguish the case of parameters that are constant through the day from time-varying ones. An optimization problem incorporating this realistic microstructure model is presented and solved. Our model endogenizes the number of trades required before the position is liquidated. A comparative-statics exercise demonstrates the realism of our model. We find that a sell decision taken in the morning will be liquidated by the early afternoon. If price impacts increase over the day, the liquidation will take place more rapidly.
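The autoregressive gamma (ARG) process of Gouriéroux and Jasiak admits a Poisson-Gamma mixture representation that makes simulation straightforward. The sketch below is illustrative only; the parameter names (`delta`, `beta`, `c`) and the starting value are my own choices, not the essay's calibration.

```python
import numpy as np

def simulate_arg(n, delta, beta, c, y0=1.0, seed=0):
    """Simulate an ARG(1) path via its Poisson-Gamma mixture:
    Z_t ~ Poisson(beta * Y_{t-1}),  Y_t = c * Gamma(delta + Z_t).
    The conditional mean is c*delta + c*beta*Y_{t-1}, so the process
    is stationary (mean-reverting) when c*beta < 1."""
    rng = np.random.default_rng(seed)
    y = np.empty(n)
    y[0] = y0
    for t in range(1, n):
        z = rng.poisson(beta * y[t - 1])   # mixing variable
        y[t] = c * rng.gamma(delta + z)    # conditionally (scaled) gamma
    return y
```

A fitted version would estimate `delta`, `beta` and `c` from observed durations, e.g. by maximum likelihood over the non-central gamma transition density.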
Abstract:
MAP5, a microtubule-associated protein characteristic of differentiating neurons, was studied in the developing visual cortex and corpus callosum of the cat. In juvenile cortical tissue, during the first month after birth, MAP5 is present as a protein doublet of molecular weights of 320 and 300 kDa, defined as MAP5a and MAP5b, respectively. MAP5a is the phosphorylated form. MAP5a decreases two weeks after birth and is no longer detectable at the beginning of the second postnatal month; MAP5b also decreases after the second postnatal week but more slowly and it is still present in the adult. In the corpus callosum only MAP5a is present between birth and the end of the first postnatal month. Afterwards only MAP5b is present but decreases in concentration more than 3-fold towards adulthood. Our immunocytochemical studies show MAP5 in somata, dendrites and axonal processes of cortical neurons. In adult tissue it is very prominent in pyramidal cells of layer V. In the corpus callosum MAP5 is present in axons at all ages. There is strong evidence that MAP5a is located in axons while MAP5b seems restricted to somata and dendrites until P28, but is found in callosal axons from P39 onwards. Biochemical experiments indicate that the state of phosphorylation of MAP5 influences its association with structural components. After high speed centrifugation of early postnatal brain tissue, MAP5a remains with pellet fractions while most MAP5b is soluble. In conclusion, phosphorylation of MAP5 may regulate (1) its intracellular distribution within axons and dendrites, and (2) its ability to interact with other subcellular components.
Abstract:
There is no doubt about the necessity of protecting digital communication: citizens entrust their most confidential and sensitive data to digital processing and communication, and so do governments, corporations, and armed forces. Digital communication networks are also an integral component of many critical infrastructures we seriously depend on in our daily lives. Transportation services, financial services, energy grids, and food production and distribution networks are only a few examples of such infrastructures. Protecting digital communication means protecting confidentiality and integrity by encrypting and authenticating its contents. But most digital communication is not secure today. Nevertheless, some of the most pressing problems could be solved with a more stringent use of current cryptographic technologies. Quite surprisingly, a new cryptographic primitive emerges from the application of quantum mechanics to information and communication theory: Quantum Key Distribution (QKD). QKD is difficult to understand; it is complex, technically challenging, and costly. Yet it enables two parties to share a secret key for use in any subsequent cryptographic task, with unprecedented long-term security. It is disputed whether technically and economically feasible applications can be found. Our vision is that, despite technical difficulty and inherent limitations, Quantum Key Distribution has great potential and fits well with other cryptographic primitives, enabling the development of highly secure new applications and services. In this thesis we take a structured approach to analyzing the practical applicability of QKD and present several use cases of different complexity for which it can be a technology of choice, either because of its unique forward-security features or because of its practicability.
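To make the primitive concrete, the sifting stage of the BB84 protocol can be sketched classically. This is an idealized toy model with no eavesdropper, channel noise, or error correction, and it is not drawn from the thesis or any real QKD implementation:

```python
import random

def bb84_sift(n_bits, seed=0):
    """Toy BB84 sifting: Alice sends random bits encoded in random
    bases; Bob measures in his own random bases; the sifted key keeps
    only the positions where the two bases coincide (about half)."""
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [rng.randint(0, 1) for _ in range(n_bits)]  # 0=rectilinear, 1=diagonal
    bob_bases   = [rng.randint(0, 1) for _ in range(n_bits)]
    # when bases match, Bob reads Alice's bit exactly; mismatched
    # positions give a random outcome and are discarded during sifting
    sifted = [bit for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases)
              if ab == bb]
    return sifted
```

In a real run, a sample of the sifted key would additionally be compared in public to estimate the error rate before privacy amplification.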
Abstract:
DNA condensation observed in vitro with the addition of polyvalent counterions is due to intermolecular attractive forces. We introduce a quantitative model of these forces in a Brownian dynamics simulation in addition to a standard mean-field Poisson-Boltzmann repulsion. The comparison of a theoretical value of the effective diameter calculated from the second virial coefficient in cylindrical geometry with some experimental results allows a quantitative evaluation of the one-parameter attractive potential. We show afterward that with a sufficient concentration of divalent salt (typically approximately 20 mM MgCl₂), supercoiled DNA adopts a collapsed form where opposing segments of interwound regions present zones of lateral contact. However, under the same conditions the same plasmid without torsional stress does not collapse. The condensed molecules present coexisting open and collapsed plectonemic regions. Furthermore, simulations show that circular DNA in 50% methanol solutions with 20 mM MgCl₂ aggregates without the requirement of torsional energy. This confirms known experimental results. Finally, a simulated DNA molecule confined in a box of variable size also presents some local collapsed zones in 20 mM MgCl₂ above a critical concentration of the DNA. Conformational entropy reduction obtained either by supercoiling or by confinement seems thus to play a crucial role in all forms of condensation of DNA.
Abstract:
Aim: The imperfect detection of species may lead to erroneous conclusions about species-environment relationships. Accuracy in species detection usually requires temporal replication at sampling sites, a time-consuming and costly monitoring scheme. Here, we applied a lower-cost alternative based on a double-sampling approach to incorporate the reliability of species detection into regression-based species distribution modelling. Location: Doñana National Park (south-western Spain). Methods: Using species-specific monthly detection probabilities, we estimated the detection reliability as the probability of having detected the species given the species-specific survey time. Such reliability estimates were used to account explicitly for data uncertainty by weighting each absence. We illustrated how this novel framework can be used to evaluate four competing hypotheses as to what constitutes primary environmental control of amphibian distribution: breeding habitat, aestivating habitat, spatial distribution of surrounding habitats and/or major ecosystem zonation. The study was conducted on six pond-breeding amphibian species during a 4-year period. Results: Non-detections should not be considered equivalent to real absences, as their reliability varied considerably. The occurrence of Hyla meridionalis and Triturus pygmaeus was related to a particular major ecosystem of the study area, where suitable habitat for these species seemed to be widely available. Characteristics of the breeding habitat (area and hydroperiod) were of high importance for the occurrence of Pelobates cultripes and Pleurodeles waltl. Terrestrial characteristics were the most important predictors of the occurrence of Discoglossus galganoi and Lissotriton boscai, along with the spatial distribution of breeding habitats for the last species. Main conclusions: We did not find a single best-supported hypothesis valid for all species, which stresses the importance of multiscale and multifactor approaches. More importantly, this study shows that estimating the reliability of non-detection records, an exercise previously seen as a naïve goal in species distribution modelling, is feasible and could be promoted in future studies, at least in comparable systems.
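The reliability weighting described above can be sketched with the standard complement rule: if a present species is detected on any one visit with probability p, then after t independent visits a non-detection record has reliability 1 - (1 - p)^t. The code below is an illustrative sketch, not the authors' implementation; the record structure is hypothetical.

```python
def nondetection_reliability(p_detect, n_surveys):
    """Probability that at least one of n_surveys independent visits
    would have detected the species had it been present; used here as
    the weight given to an 'absence' record in a weighted regression."""
    return 1.0 - (1.0 - p_detect) ** n_surveys

# weight each absence by its reliability; presences keep weight 1
records = [("presence", None, None), ("absence", 0.3, 4), ("absence", 0.05, 2)]
weights = [1.0 if kind == "presence" else nondetection_reliability(p, n)
           for kind, p, n in records]
```

These weights can be passed to most regression fitters (e.g. a `sample_weight`-style argument), so a poorly surveyed "absence" counts far less than a well-surveyed one.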
Abstract:
Many traits and/or strategies expressed by organisms are quantitative phenotypes. Because populations are of finite size and genomes are subject to mutations, these continuously varying phenotypes are under the joint pressure of mutation, natural selection and random genetic drift. This article derives the stationary distribution for such a phenotype under a mutation-selection-drift balance in a class-structured population allowing for demographically varying class sizes and/or changing environmental conditions. The salient feature of the stationary distribution is that it can be entirely characterized in terms of the average size of the gene pool and Hamilton's inclusive fitness effect. The exploration of the phenotypic space varies exponentially with the cumulative inclusive fitness effect over state space, which determines an adaptive landscape. The peaks of the landscape are those phenotypes that are candidate evolutionarily stable strategies and can be determined by standard phenotypic selection gradient methods (e.g. evolutionary game theory, kin selection theory, adaptive dynamics). The curvature of the stationary distribution provides a measure of the convergence stability of candidate evolutionarily stable strategies, and it is evaluated explicitly for two biological scenarios: first, a coordination game, which illustrates that, for a multipeaked adaptive landscape, stochastically stable strategies can be singled out by letting the size of the gene pool grow large; second, a sex-allocation game for diploids and haplo-diploids, which suggests that the equilibrium sex ratio follows a Beta distribution with parameters depending on the features of the genetic system.
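The salient result can be written compactly. The notation below is mine, a hedged paraphrase of the abstract rather than the article's exact formula:

```latex
% Sketch of the stationary distribution (notation illustrative):
%   z       trait value,   \bar{N}  average size of the gene pool,
%   S(y)    Hamilton's inclusive fitness effect at trait value y.
% The distribution is exponential in the cumulative inclusive
% fitness effect, which plays the role of an adaptive landscape:
\pi(z) \;\propto\; \exp\!\Big( \bar{N} \int^{z} S(y)\,\mathrm{d}y \Big)
% Peaks of \pi (maxima of the exponent) satisfy S(z^{*}) = 0, the usual
% first-order condition for candidate evolutionarily stable strategies
% obtained from phenotypic selection gradient methods.
```

As the abstract notes, letting the gene-pool size grow large sharpens the distribution around the highest peak, which is how stochastically stable strategies are singled out.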
Abstract:
Many studies have forecasted the possible impact of climate change on plant distribution using models based on ecological niche theory. In their basic implementation, niche-based models do not constrain predictions by dispersal limitations. Hence, most niche-based modelling studies published so far have assumed dispersal to be either unlimited or null. However, depending on the rate of climatic change, the landscape fragmentation and the dispersal capabilities of individual species, these assumptions are likely to prove inaccurate, leading to under- or overestimation of future species distributions and yielding large uncertainty between these two extremes. As a result, the concepts of "potentially suitable" and "potentially colonisable" habitat are expected to differ significantly. To quantify to what extent these two concepts can differ, we developed MigClim, a model simulating plant dispersal under climate change and landscape fragmentation scenarios. MigClim implements various parameters, such as dispersal distance, increase in reproductive potential over time, barriers to dispersal or long-distance dispersal. Several simulations were run for two virtual species in a study area of the western Swiss Alps, by varying dispersal distance and other parameters. Each simulation covered the hundred-year period 2001-2100 and three different IPCC-based temperature warming scenarios were considered. Our results indicate that: (i) using realistic parameter values, the future potential distributions generated using MigClim can differ significantly (up to more than 95% decrease in colonized surface) from those that ignore dispersal; (ii) this divergence increases both with increasing climate warming and over longer time periods; (iii) the uncertainty associated with the warming scenario can be nearly as large as the one related to dispersal parameters; (iv) accounting for dispersal, even roughly, can substantially reduce uncertainty in projections.
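A minimal caricature of the dispersal-limited colonization step that a model like MigClim iterates each generation might look as follows. This is an illustrative sketch with a crude Chebyshev-distance kernel, not the actual MigClim algorithm, which also handles reproductive potential, barriers, and long-distance dispersal:

```python
import numpy as np

def dispersal_step(occupied, suitable, max_dist):
    """One generation of spread on a grid: an unoccupied but climatically
    suitable cell becomes colonized if any occupied cell lies within
    `max_dist` cells (Chebyshev distance), a crude short-distance kernel.
    `occupied` and `suitable` are boolean arrays of equal shape."""
    rows, cols = occupied.shape
    new = occupied.copy()
    for r in range(rows):
        for c in range(cols):
            if suitable[r, c] and not occupied[r, c]:
                r0, r1 = max(0, r - max_dist), min(rows, r + max_dist + 1)
                c0, c1 = max(0, c - max_dist), min(cols, c + max_dist + 1)
                if occupied[r0:r1, c0:c1].any():
                    new[r, c] = True
    return new
```

Running this step with a suitability map that shifts each decade, versus simply intersecting occupancy with suitability ("unlimited dispersal"), illustrates how the two assumptions diverge.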
Abstract:
The efficient use of geothermal systems, the sequestration of CO2 to mitigate climate change, and the prevention of seawater intrusion in coastal aquifers are only some examples that demonstrate the need for novel technologies to monitor subsurface processes from the surface. A main challenge is to assure optimal performance of such technologies at different temporal and spatial scales. Plane-wave electromagnetic (EM) methods are sensitive to subsurface electrical conductivity and consequently to fluid conductivity, fracture connectivity, temperature, and rock mineralogy. These methods have governing equations that are the same over a large range of frequencies, thus allowing processes to be studied in an analogous manner on scales ranging from a few meters close to the surface down to several hundreds of kilometers in depth. Unfortunately, they suffer from a significant resolution loss with depth due to the diffusive nature of the electromagnetic fields. Therefore, estimations of subsurface models that use these methods should incorporate a priori information to better constrain the models, and provide appropriate measures of model uncertainty. During my thesis, I have developed approaches to improve the static and dynamic characterization of the subsurface with plane-wave EM methods.
In the first part of this thesis, I present a two-dimensional deterministic approach to perform time-lapse inversion of plane-wave EM data. The strategy is based on the incorporation of prior information into the inversion algorithm regarding the expected temporal changes in electrical conductivity. This is done by incorporating a flexible stochastic regularization and constraints regarding the expected ranges of the changes by using Lagrange multipliers. I use non-l2 norms to penalize the model update in order to obtain sharp transitions between regions that experience temporal changes and regions that do not. I also incorporate a time-lapse differencing strategy to remove systematic errors in the time-lapse inversion. This work presents improvements in the characterization of temporal changes with respect to the classical approach of performing separate inversions and computing differences between the models. In the second part of this thesis, I adopt a Bayesian framework and use Markov chain Monte Carlo (MCMC) simulations to quantify model parameter uncertainty in plane-wave EM inversion. For this purpose, I present a two-dimensional pixel-based probabilistic inversion strategy for separate and joint inversions of plane-wave EM and electrical resistivity tomography (ERT) data. I compare the uncertainties of the model parameters when considering different types of prior information on the model structure and different likelihood functions to describe the data errors. The results indicate that model regularization is necessary when dealing with a large number of model parameters because it helps to accelerate the convergence of the chains and leads to more realistic models. These constraints also lead to smaller uncertainty estimates, which imply posterior distributions that do not include the true underlying model in regions where the method has limited sensitivity. 
This situation can be improved by combining plane-wave EM methods with complementary geophysical methods such as ERT. In addition, I show that an appropriate regularization weight and the standard deviation of the data errors can be retrieved by the MCMC inversion. Finally, I evaluate the possibility of characterizing the three-dimensional distribution of an injected water plume by performing three-dimensional time-lapse MCMC inversion of plane-wave EM data. Since MCMC inversion involves a significant computational burden in high-dimensional parameter spaces, I propose a model reduction strategy in which the coefficients of a Legendre moment decomposition of the injected water plume, together with its location, are estimated. This requires a base resistivity model, which is obtained prior to the time-lapse experiment. A synthetic test shows that the methodology works well when the base resistivity model is correctly characterized. The methodology is also applied to an injection experiment performed in a geothermal system in Australia and compared to a three-dimensional time-lapse inversion performed within a deterministic framework. The MCMC inversion better constrains the water plumes owing to the larger amount of prior information included in the algorithm. However, the conductivity changes needed to explain the time-lapse data are much larger than what is physically possible based on present-day understanding. This issue may be related to the base resistivity model used, indicating that more effort should be devoted to obtaining high-quality base models prior to dynamic experiments. The studies described herein give clear evidence that plane-wave EM methods are useful for characterizing and monitoring the subsurface at a wide range of scales. The presented approaches contribute to an improved appraisal of the obtained models, both in terms of the incorporation of prior information in the algorithms and the posterior uncertainty quantification.
In addition, the developed strategies can be applied to other geophysical methods, and offer great flexibility to incorporate additional information when available.
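The Legendre-moment model reduction can be sketched in one dimension: instead of estimating a conductivity anomaly cell by cell, one estimates a handful of Legendre coefficients. The anomaly shape, grid, and truncation order below are illustrative only (the thesis applies the decomposition in three dimensions):

```python
import numpy as np
from numpy.polynomial import legendre

# 1-D illustration of model reduction: represent a smooth anomaly by a
# few Legendre coefficients instead of hundreds of per-cell values.
x = np.linspace(-1, 1, 201)
plume = np.exp(-((x - 0.2) / 0.4) ** 2)        # "true" anomaly on a fine grid

order = 10                                      # low-dimensional model space
coeffs = legendre.legfit(x, plume, order)       # least-squares Legendre fit
approx = legendre.legval(x, coeffs)

rel_err = np.linalg.norm(plume - approx) / np.linalg.norm(plume)
print(f"{order + 1} coefficients, relative error {rel_err:.3f}")
```

An MCMC sampler then only needs to explore the `order + 1` coefficients (plus the plume location), which is what makes posterior sampling tractable in three dimensions.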
Resumo:
Time-lapse geophysical measurements are widely used to monitor the movement of water and solutes through the subsurface. Yet commonly used deterministic least-squares inversions typically suffer from relatively poor mass recovery, overestimation of spread, and a limited ability to appropriately estimate nonlinear model uncertainty. We describe herein a novel inversion methodology designed to reconstruct the three-dimensional distribution of a tracer anomaly from geophysical data and to provide consistent uncertainty estimates using Markov chain Monte Carlo simulation. Posterior sampling is made tractable by using a lower-dimensional model space related both to the Legendre moments of the plume and to predefined morphological constraints. Benchmark results using cross-hole ground-penetrating radar travel-time measurements during two synthetic water tracer application experiments involving increasingly complex plume geometries show that the proposed method not only conserves mass but also provides better estimates of plume morphology and posterior model uncertainty than deterministic inversion results.
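The cross-hole travel-time data can be related to the plume through a forward model. A minimal straight-ray sketch is shown below; ray bending and finite-frequency effects are ignored, and the geometry and velocities are made up for illustration (they are not the values used in the benchmark):

```python
import numpy as np

# Straight-ray travel time = path length x mean slowness (1/velocity)
# sampled along the line from source to receiver. Illustrative only.
def travel_time(src, rec, slowness_fn, n=200):
    pts = src + np.linspace(0, 1, n)[:, None] * (rec - src)
    length = np.linalg.norm(rec - src)
    return length * np.mean(slowness_fn(pts))

# Background velocity 0.1 m/ns with a slow (water-rich) anomaly near (2, 3).
def slowness(pts):
    s = np.full(len(pts), 1 / 0.1)              # background slowness (ns/m)
    inside = np.linalg.norm(pts - np.array([2.0, 3.0]), axis=1) < 0.5
    s[inside] = 1 / 0.06                        # tracer slows the wave
    return s

t = travel_time(np.array([0.0, 3.0]), np.array([4.0, 3.0]), slowness)
print(f"travel time: {t:.1f} ns")
```

Rays crossing the plume arrive later than the 40 ns background time, and it is these delays that the MCMC scheme inverts for the low-dimensional plume parameters.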