932 results for Markov chains. Convergence. Evolutionary Strategy. Large Deviations
Abstract:
We investigate the theoretical and numerical computation of rare transitions in simple geophysical turbulent models. We consider the barotropic quasi-geostrophic and two-dimensional Navier–Stokes equations in regimes where bistability between two coexisting large-scale attractors exists. By means of large deviations and instanton theory, using an Onsager–Machlup path integral formalism for the transition probability, we show how one can directly compute the most probable transition path between two coexisting attractors analytically in an equilibrium (Langevin) framework and numerically otherwise. We adapt a class of numerical optimization algorithms known as minimum action methods to simple geophysical turbulent models. We show that by numerically minimizing an appropriate action functional in a large deviation limit, one can predict the most likely transition path for a rare transition between two states. By considering examples where theoretical predictions can be made, we show that the minimum action method successfully predicts the most likely transition path. Finally, we discuss the application and extension of such numerical optimization schemes to the computation of rare transitions observed in direct numerical simulations and experiments and to other, more complex, turbulent systems.
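To make the numerical side concrete, here is a minimal sketch of a minimum action method, assuming a one-dimensional double-well gradient (Langevin) system in place of the turbulent models above; the potential, discretization, and optimizer are illustrative, not the paper's setup.

```python
import numpy as np

# Minimal sketch of a minimum action method, assuming a 1-D double-well
# gradient system dx = -V'(x) dt + sqrt(eps) dW with V(x) = (x**2 - 1)**2 / 4
# (the abstract's turbulent models are far richer). The Freidlin-Wentzell
# action S[x] = 0.5 * int_0^T |x' + V'(x)|**2 dt is discretized with forward
# differences and minimized by gradient descent, endpoints pinned at the
# attractors x = -1 and x = +1.

N, T = 200, 10.0
dt = T / (N - 1)

def dV(x):                     # V'(x)
    return x * (x**2 - 1.0)

def d2V(x):                    # V''(x)
    return 3.0 * x**2 - 1.0

def action_and_grad(x):
    r = (x[1:] - x[:-1]) / dt + dV(x[:-1])    # residual on each interval
    S = 0.5 * dt * np.sum(r**2)
    g = np.zeros_like(x)
    g[:-1] += dt * r * (d2V(x[:-1]) - 1.0 / dt)
    g[1:] += r                                # = dt * r * (1 / dt)
    return S, g

x = np.linspace(-1.0, 1.0, N)                 # straight-line initial path
for _ in range(5000):
    S, g = action_and_grad(x)
    g[0] = g[-1] = 0.0                        # keep the endpoints fixed
    x -= 1e-3 * g

print("minimized action:", S)                 # ~ 2 * (V(0) - V(-1)) = 0.5
```

For this equilibrium example the minimized action should approach twice the potential barrier, consistent with the analytical Langevin prediction the abstract refers to.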
Abstract:
Based on the empirical evidence accumulated up to 2011, can one speak of a single crisis trajectory that is broadly characteristic of the developed industrial countries as a whole and can also be identified in the major economies? Can universal changes in output, labor markets, consumption and investment be established that fit well with earlier experience, no less than with the predictions of the known macro models? The answer, at least at the time of writing, is no: neither in the characteristic course of the crisis and the pace at which macroeconomic performance deteriorated, nor in the depth and duration of the downturn are there clearly identifiable common features that fit well into the existing theoretical frameworks. The study reviews the works of the empirical literature on crises and macroeconomic shocks considered relevant in light of interpretations of financial globalization. It then assesses the performance of the US economy during recession periods over a span of 60 years, with the aim of making the judgment of the severity of the recent crisis sufficiently objective, at least as regards the order of magnitude of the movements of the major macro variables. / === / Based on the empirical evidence accumulated until 2011, using official statistics from the OECD data bank and the US Commerce Department, the article addresses the question whether one can, or cannot, speak about generally observable recession/crisis patterns that would be universally recognizable in all major industrial countries (the G7). The answer to this question is a firm no. Changes and volatility in most major macroeconomic indicators, such as the output gap, labor market distortions and large deviations from trend in consumption and investment, all exhibited wide differences in depth and duration across the G7 countries. The large deviations in output gaps, and especially the strong distortions in labor market inputs and hours worked per capita over the crisis months, can hardly be explained by the existing DSGE and real business cycle model classes. Especially troubling are the difficulties in fitting the data into any established model, whether of the business cycle or some other type, in which financial distress reduces economic activity. It is argued that standard business cycle models with financial market imperfections have no mechanism for generating deviations from standard theory, so they do not shed light on the key factors underlying the 2007–2009 recession. That does not imply that the financial crisis is unimportant in understanding the recession, but it does indicate that we do not fully understand the channels through which financial distress reduced labor input. Long historical trends in the privately held portion of the federal debt in the US economy indicate that the standard macro proposition, that public debt crowds out private investment and thus inhibits growth, can be strongly challenged, in so far as this ratio is a direct indicator neither of slowing growth nor of recession.
Abstract:
Type systems for secure information flow aim to prevent a program from leaking information from H (high) to L (low) variables. Traditionally, bisimulation has been the prevalent technique for proving the soundness of such systems. This work introduces a new proof technique based on stripping and fast simulation, and shows that it can be applied in a number of cases where bisimulation fails. We present a progressive development of this technique over a representative sample of languages, including a simple imperative language (core theory), a multiprocessing nondeterministic language, a probabilistic language, and a language with cryptographic primitives. In the core theory we illustrate the key concepts of the technique in a basic setting. A fast low simulation in the context of transition systems is a binary relation in which simulating states can match the moves of simulated states while maintaining the equivalence of low variables; stripping is a function that removes high commands from programs. We show that we can prove secure information flow by arguing that the stripping relation is a fast low simulation. We then extend the core theory to an abstract distributed language under a nondeterministic scheduler. Next, we extend it to a probabilistic language with a random assignment command; we generalize fast simulation to the setting of discrete-time Markov chains, and prove approximate probabilistic noninterference. Finally, we introduce cryptographic primitives into the probabilistic language and prove computational noninterference, provided that the underlying encryption scheme is secure.
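As a toy illustration of stripping (the AST encoding and the convention that high variables start with 'h' are assumptions for exposition, not the paper's formalism):

```python
# Toy illustration of stripping on a minimal imperative AST. Commands are
# tuples: ('skip',), ('assign', var, expr), ('seq', c1, c2),
# ('if', guard, c1, c2), ('while', guard, c); variables starting with 'h'
# are treated as high. This encoding is assumed, not taken from the paper.

def high(var):
    return var.startswith('h')

def strip(c):
    """Replace high commands with skip, keeping the low structure intact."""
    tag = c[0]
    if tag == 'skip':
        return c
    if tag == 'assign':
        return ('skip',) if high(c[1]) else c
    if tag == 'seq':
        return ('seq', strip(c[1]), strip(c[2]))
    if tag == 'if':
        return ('if', c[1], strip(c[2]), strip(c[3]))
    if tag == 'while':
        return ('while', c[1], strip(c[2]))
    raise ValueError('unknown command: %r' % (tag,))

prog = ('seq', ('assign', 'h_secret', '0'), ('assign', 'l_out', '1'))
print(strip(prog))   # ('seq', ('skip',), ('assign', 'l_out', '1'))
```

The full theory must also treat conditionals and loops whose guards mention high variables; this sketch strips only high assignments.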
Abstract:
The use of behavioural indicators of suffering and welfare in captive animals has produced ambiguous results. In comparisons between groups, those in worse condition tend to exhibit an increased overall rate of Behaviours Potentially Indicative of Stress (BPIS), but when comparing within groups, individuals differ in their stress coping strategies. This dissertation presents analyses to unravel the behavioural profile of a sample of 26 captive capuchin monkeys of three different species (Sapajus libidinosus, S. flavius and S. xanthosternos), kept in different enclosure types. In total, 147.17 hours of data were collected. We explored four types of analysis: activity budgets, diversity indexes, Markov chains and sequence analyses, and social network analyses, resulting in nine indexes of behavioural occurrence and organization. In Chapter One we explore group differences. Results support predictions of minor sex and species differences and major differences in behavioural profile due to enclosure type: i. individuals in less enriched enclosures exhibited a more diverse BPIS repertoire and a decreased probability of sequences of six Genus Normative Behaviours; ii. the number of most probable behavioural transitions including at least one BPIS was higher in less enriched enclosures; iii. prominence indexes indicate that BPIS function as dead ends of behavioural sequences, and the prominence of three BPIS (pacing, self-directed, active I) was higher in less enriched enclosures. Overall, these data do not support BPIS as a repetitive pattern with a mantra-like calming effect. Rather, the picture that emerges is more supportive of BPIS as activities that disrupt the organization of behaviour, introducing "noise" that compromises an optimal activity budget. In Chapter Two we explored individual differences in stress coping strategies. We classified individuals along six axes of exploratory behaviour. These were only weakly correlated, indicating low correlation among behavioural indicators of syndromes. Nevertheless, the results are suggestive of two broad stress coping strategies, similar to the bold/proactive and shy/reactive pattern: more exploratory capuchin monkeys exhibited increased prominence values for pacing, aberrant sexual display and active I BPIS, while less active animals exhibited increased probability of significant sequences involving at least one BPIS, and increased prominence of their own stereotypy. Capuchin monkeys are known for their cognitive capacities and behavioural flexibility; therefore, the search for a consistent set of behavioural indicators of welfare and individual differences requires further studies and larger data sets. With this work we aim to contribute to the design of scientifically grounded and statistically sound protocols for the collection of behavioural data that permit comparability of results and meta-analyses, from whatever theoretical perspective the interpretation may come.
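For the Markov-chain component of such an analysis, a first-order transition matrix can be estimated directly from an observed behaviour sequence; the sketch below uses made-up behaviour categories and is not the dissertation's actual pipeline.

```python
import numpy as np

# Sketch of a first-order Markov chain over behaviour categories, estimated
# from a single observed sequence; category names are illustrative.
seq = ['forage', 'pacing', 'rest', 'forage', 'pacing', 'pacing', 'rest']
states = sorted(set(seq))
idx = {s: i for i, s in enumerate(states)}

counts = np.zeros((len(states), len(states)))
for a, b in zip(seq, seq[1:]):                # count observed transitions
    counts[idx[a], idx[b]] += 1

# row-normalize to transition probabilities; rows with no exits stay zero
row = counts.sum(axis=1, keepdims=True)
P = np.divide(counts, row, out=np.zeros_like(counts), where=row > 0)
print(states)
print(P.round(2))
```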
Abstract:
The fatty acid (FA) composition of representatives of 18 polychaete families from the Southern Ocean shelf and deep sea (600 to 5337 m) was analysed in order to identify trophic biomarkers and elucidate possible feeding preferences. Total FA content was relatively low with few exceptions and ranged from 1.0 to 11.6% of total body dry weight. The most prominent FA found were 20:5(n-3), 16:0, 22:6(n-3), 18:1(n-7), 20:4(n-6), 18:0, 20:1(n-11) and 18:1(n-9). For some polychaete families and species, FA profiles indicated selective feeding on certain dietary components, such as freshly deposited diatom remains (e.g., Spionidae, Fauveliopsidae and Flabelligeridae) or foraminiferans (e.g., Euphrosinidae, Nephtyidae and Syllidae). Feeding patterns were relatively consistent within families at the deep stations, while the FA composition differed between the deep and the shelf stations within the same family. Fatty alcohols, indicative of wax ester storage, were found in almost all families (in proportions of 0.0 to 29.3% of total FA and fatty alcohols). The development of this long-term storage mechanism for energy reserves possibly reflects an evolutionary strategy.
Abstract:
Authentication plays an important role in how we interact with computers, mobile devices, the web, etc. The idea of authentication is to uniquely identify a user before granting access to system privileges. For example, in recent years more corporate information and applications have become accessible via the Internet and intranets. Many employees work from remote locations and need access to secure corporate files. During this time, it is possible for malicious or unauthorized users to gain access to the system. For this reason, it is logical to have some mechanism in place to detect whether the logged-in user is the same user in control of the user's session. Therefore, highly secure authentication methods must be used. We posit that each of us is unique in our use of computer systems. It is this uniqueness that is leveraged to "continuously authenticate users" while they use web software. To monitor user behavior, n-gram models are used to capture user interactions with web-based software. This statistical language model captures the sequences and sub-sequences of user actions, their orderings, and the temporal relationships that make them unique, providing a model of how each user typically behaves. Users are then continuously monitored during software operation. Large deviations from "normal behavior" can indicate malicious or unintended behavior. This approach is implemented in a system called Intruder Detector (ID) that models user actions as embodied in web logs generated in response to a user's actions. User identification through web logs is cost-effective and non-intrusive. We perform experiments on a large fielded system with web logs of approximately 4000 users. For these experiments, we use two classification techniques: binary and multi-class classification. We evaluate model-specific differences in user behavior based on coarse-grained (i.e., role) and fine-grained (i.e., individual) analysis. A specific set of metrics is used to provide valuable insight into how each model performs. Intruder Detector achieves accurate results when identifying legitimate users and user types. The tool is also able to detect outliers in role-based user behavior with optimal performance. In addition to web applications, this continuous monitoring technique can be used with other user-based systems such as mobile devices and the analysis of network traffic.
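A hedged sketch of the n-gram idea (action names, n, and the add-alpha smoothing are illustrative assumptions, not Intruder Detector's actual implementation): train on a user's action history, then flag sessions whose average negative log-likelihood is unusually high.

```python
import math
from collections import Counter

# n-gram model over user-action sequences (as might be parsed from web
# logs), scored by average negative log-likelihood per n-gram.

def ngrams(actions, n):
    return [tuple(actions[i:i + n]) for i in range(len(actions) - n + 1)]

def train(actions, n=3):
    grams = ngrams(actions, n)
    return Counter(grams), Counter(g[:-1] for g in grams)

def score(model, actions, n=3, alpha=1.0, vocab=100):
    counts, prefixes = model
    grams = ngrams(actions, n)
    nll = 0.0
    for g in grams:
        # add-alpha smoothing so unseen n-grams get nonzero probability
        p = (counts[g] + alpha) / (prefixes[g[:-1]] + alpha * vocab)
        nll -= math.log(p)
    return nll / max(len(grams), 1)

normal = ['login', 'search', 'view', 'search', 'view', 'logout'] * 50
model = train(normal)
print(score(model, ['login', 'search', 'view', 'search']))   # low: familiar
print(score(model, ['login', 'export', 'export', 'export'])) # high: deviates
```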
Abstract:
There is considerable interest in the use of genetic algorithms to solve problems arising in the areas of scheduling and timetabling. However, the classical genetic algorithm paradigm is not well equipped to handle the conflict between objectives and constraints that typically occurs in such problems. In order to overcome this, successful implementations frequently make use of problem-specific knowledge. This paper is concerned with the development of a GA for a nurse rostering problem at a major UK hospital. The structure of the constraints is used as the basis for a co-evolutionary strategy using co-operating sub-populations. Problem-specific knowledge is also used to define a system of incentives and disincentives, and a complementary mutation operator. Empirical results based on 52 weeks of live data show how these features are able to improve an unsuccessful canonical GA to the point where it is able to provide a practical solution to the problem.
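As a hedged sketch of the co-operating sub-population idea (the binary encoding, stand-in objective, and operators below are toy assumptions, not the paper's rostering GA): two sub-populations each evolve part of a candidate solution and are evaluated jointly with the best partner from the other sub-population.

```python
import random

# Toy co-operative co-evolution: sub-populations popA and popB each evolve
# one half of a binary "roster"; an individual is scored together with the
# best current member of the other sub-population.
random.seed(0)
L = 10
TARGET = [1] * (2 * L)          # stand-in for "all constraints satisfied"

def fitness(left, right):
    return sum(a == t for a, t in zip(left + right, TARGET))

def mutate(ind):
    child = ind[:]
    child[random.randrange(L)] ^= 1      # flip one gene
    return child

popA = [[random.randint(0, 1) for _ in range(L)] for _ in range(20)]
popB = [[random.randint(0, 1) for _ in range(L)] for _ in range(20)]

for _ in range(100):
    bestA = max(popA, key=lambda a: fitness(a, popB[0]))
    bestB = max(popB, key=lambda b: fitness(bestA, b))
    # truncation selection within each sub-population, each scored with the
    # best partner from the other one, then refill by mutation
    popA = sorted(popA, key=lambda a: -fitness(a, bestB))[:10]
    popA += [mutate(random.choice(popA)) for _ in range(10)]
    popB = sorted(popB, key=lambda b: -fitness(bestA, b))[:10]
    popB += [mutate(random.choice(popB)) for _ in range(10)]

best = max((fitness(a, b), a, b) for a in popA for b in popB)
print("best joint fitness:", best[0], "of", 2 * L)
```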
Abstract:
In this work we study Zipf's law from both an applied and a theoretical point of view. This empirical law states that the rank-frequency (RF) distribution of the words in a text follows a power law with exponent -1. On the theoretical side, we treat two classes of models capable of reproducing power laws in their probability distributions. In particular, we consider generalizations of Polya urns and SSR (Sample Space Reducing) processes; for the latter we give a formalization in terms of Markov chains. Finally, we propose a population-dynamics model capable of unifying and reproducing the results of the three SSR processes found in the literature. We then turn to the quantitative analysis of the RF behaviour of the words of a corpus of texts. In this case, the RF does not follow a pure power law but exhibits a twofold behaviour, which can be represented by a power law with a changing exponent. We investigated whether the analysis of the RF behaviour can be linked to the topological properties of a graph. In particular, starting from a corpus of texts we built an adjacency network in which each word is linked to the following word. A topological analysis of the structure of this graph yielded results that seem to confirm the hypothesis that its structure is related to the change of slope of the RF. This result may lead to developments in the study of language and the human mind. Moreover, since the structure of the graph appears to contain components that group words by meaning, a deeper study could lead to developments in automatic text understanding (text mining).
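A minimal simulation of the basic SSR process makes the link to Zipf's law concrete (parameters are illustrative): a walker starts uniformly in {1,...,N}, repeatedly jumps to a uniformly chosen strictly smaller state until it reaches 1, then restarts; the visit frequencies of the states are known to follow p(k) ∝ 1/k.

```python
import random
from collections import Counter

# Sample-space-reducing (SSR) process: visit statistics follow Zipf's law.
random.seed(0)
N, RUNS = 1000, 20000
visits = Counter()

for _ in range(RUNS):
    state = random.randint(1, N)
    while state > 1:
        visits[state] += 1
        state = random.randint(1, state - 1)   # sample space shrinks
    visits[1] += 1

# compare empirical frequencies with the normalized 1/k prediction
total = sum(visits.values())
H = sum(1.0 / j for j in range(1, N + 1))      # harmonic normalization
for k in (1, 2, 4, 8, 16):
    print(k, round(visits[k] / total, 4), "~", round((1.0 / k) / H, 4))
```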
Abstract:
We address the problem of automotive cybersecurity from the point of view of Threat Analysis and Risk Assessment (TARA). The central question motivating the thesis concerns the acceptability of risk, which is vital when deciding whether to implement cybersecurity solutions. For this purpose, we develop a quantitative framework that takes as input the results of risk assessment and defines measures of various facets of a possible risk response; we then exploit the natural presence of trade-offs (cost versus effectiveness) to formulate the problem as a multi-objective optimization. Finally, we develop a stochastic model of the future evolution of the risk factors by means of Markov chains, and we adapt the formulations of the optimization problems to this non-deterministic context. The thesis is the result of a collaboration with the Vehicle Electrification division of Marelli, in particular with the Cybersecurity team based in Bologna; this allowed us to consider a particular instance of the problem, derived from a real TARA, in order to test both the deterministic and the stochastic framework in a real-world application. The collaboration also explains why, throughout the work, we often assume the point of view of a tier-1 supplier; however, the analyses performed can be adapted to any other level of the supply chain.
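To make the cost-versus-effectiveness trade-off concrete, here is a hedged sketch of a Pareto (non-dominated) filter over candidate risk responses; the names and numbers are made up, and a real TARA-driven formulation would carry more objectives and constraints.

```python
# Pareto filter over candidate risk responses, as (name, cost,
# residual_risk) tuples with both objectives to be minimized.
candidates = [
    ("do nothing",        0.0, 9.0),
    ("secure boot",       3.0, 4.0),
    ("HSM + secure boot", 7.0, 1.5),
    ("firewall only",     4.0, 5.0),   # dominated by "secure boot"
]

def dominated(a, b):
    """True if b is at least as good as a in both objectives, better in one."""
    return (b[1] <= a[1] and b[2] <= a[2]) and (b[1] < a[1] or b[2] < a[2])

pareto = [a for a in candidates if not any(dominated(a, b) for b in candidates)]
print([name for name, _, _ in pareto])
# ['do nothing', 'secure boot', 'HSM + secure boot']
```

A decision maker then picks from the Pareto set according to the acceptable level of residual risk, which is exactly where the acceptability question enters.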
Abstract:
In old, phosphorus (P)-impoverished habitats, root specializations such as cluster roots efficiently mobilize and acquire P by releasing large amounts of carboxylates into the rhizosphere. These specialized roots are rarely mycorrhizal. We investigated whether Discocactus placentiformis (Cactaceae), a common species in nutrient-poor campos rupestres over white sands, operates in the same way as other root specializations. Discocactus placentiformis showed no mycorrhizal colonization, but exhibited a sand-binding root specialization with rhizosheath formation. We first provide circumstantial evidence for carboxylate exudation in field material, based on its very high shoot manganese (Mn) concentrations, and then firm evidence, based on exudate analysis. We identified predominantly oxalic acid, but also malic, citric, lactic, succinic, fumaric, and malonic acids. When plants were grown in nutrient solution at P concentrations ranging from 0 to 100 μM, total carboxylate exudation increased with decreasing P supply, showing that P deficiency stimulated carboxylate release. Additionally, we tested P solubilization by citric, malic and oxalic acids, and found that they solubilized P from the strongly P-sorbing soil of the species' native habitat when added in combination and at relatively low concentrations. We conclude that the sand-binding root specialization of this nonmycorrhizal cactus functions similarly to cluster roots, which efficiently enhance P acquisition in other habitats with very low P availability.
Abstract:
Rubisco is responsible for the fixation of CO2 into organic compounds through photosynthesis and thus has great agronomic importance. It is well established that this enzyme suffers from slow catalysis, and its low specificity results in photorespiration, which is considered an energy waste for the plant. However, natural variation exists, and some Rubisco lineages, such as those in C4 plants, exhibit higher catalytic efficiencies coupled with lower specificities. These C4 kinetics could have evolved as an adaptation to the higher CO2 concentration present in C4 photosynthetic cells. In this study, using phylogenetic analyses on a large data set of C3 and C4 monocots, we showed that the rbcL gene, which encodes the large subunit of Rubisco, evolved under positive selection in independent C4 lineages. This confirms that the selective pressures on Rubisco have shifted in C4 plants under the high-CO2 environment prevailing in their photosynthetic cells. Eight rbcL codons evolving under positive selection in C4 clades were involved in parallel changes among the 23 independent monocot C4 lineages included in this study. These amino acids are potentially responsible for the C4 kinetics, and their identification opens new avenues for human-directed Rubisco engineering. The introgression of C4-like high-efficiency Rubisco would strongly enhance C3 crop yields in the future CO2-enriched atmosphere.
Abstract:
Many traits and/or strategies expressed by organisms are quantitative phenotypes. Because populations are of finite size and genomes are subject to mutations, these continuously varying phenotypes are under the joint pressure of mutation, natural selection and random genetic drift. This article derives the stationary distribution for such a phenotype under a mutation-selection-drift balance in a class-structured population allowing for demographically varying class sizes and/or changing environmental conditions. The salient feature of the stationary distribution is that it can be entirely characterized in terms of the average size of the gene pool and Hamilton's inclusive fitness effect. The exploration of the phenotypic space varies exponentially with the cumulative inclusive fitness effect over state space, which determines an adaptive landscape. The peaks of the landscape are the phenotypes that are candidate evolutionarily stable strategies and can be determined by standard phenotypic selection gradient methods (e.g. evolutionary game theory, kin selection theory, adaptive dynamics). The curvature of the stationary distribution provides a measure of the convergence stability of candidate evolutionarily stable strategies, and it is evaluated explicitly for two biological scenarios: first, a coordination game, which illustrates that, for a multipeaked adaptive landscape, stochastically stable strategies can be singled out by letting the size of the gene pool grow large; second, a sex-allocation game for diploids and haplo-diploids, which suggests that the equilibrium sex ratio follows a Beta distribution with parameters depending on the features of the genetic system.
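As a hedged illustration of the exponential form (the notation below is assumed for exposition, not taken from the article), results of this kind typically yield a stationary density

$$\pi(z)\;\propto\;\exp\big(\Lambda(z)\big),\qquad \Lambda(z)\;=\;c\,N\!\int_{z_0}^{z} S(x)\,\mathrm{d}x,$$

where $S(x)$ is the inclusive fitness effect (selection gradient) at phenotype $x$, $N$ is the average size of the gene pool, and $c$ is a constant set by the genetic system. Peaks of $\Lambda$, i.e. zeros of $S$ with $S' < 0$, are the candidate evolutionarily stable strategies; the curvature $\Lambda''(z^{*}) = c\,N\,S'(z^{*})$ quantifies convergence stability; and letting $N$ grow large concentrates $\pi$ on the highest peak, which is how stochastically stable strategies are singled out.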
Abstract:
The efficient use of geothermal systems, the sequestration of CO2 to mitigate climate change, and the prevention of saltwater intrusion into coastal aquifers are only a few examples that demonstrate our need for new technologies to monitor the evolution of subsurface processes from the surface. A major challenge is to ensure the characterization and the optimization of the performance of these technologies at different spatial and temporal scales. Plane-wave electromagnetic (EM) methods are sensitive to the electrical conductivity of the subsurface and, consequently, to the electrical conductivity of the fluids saturating the rock, to the presence of connected fractures, to temperature, and to the geological materials. These methods are governed by equations that are valid over wide frequency ranges, making it possible to study in an analogous manner processes ranging from a few meters below the surface down to several kilometers depth. Nevertheless, these methods suffer from a loss of resolution with depth because of the diffusive nature of the electromagnetic field. For this reason, the estimation of subsurface models with these methods must take a priori information into account in order to constrain the models as much as possible and to allow the uncertainties of these models to be quantified appropriately. In this thesis, I develop approaches for the static and dynamic characterization of the subsurface using plane-wave EM methods. In the first part, I present a deterministic approach for performing time-lapse inversions of two-dimensional plane-wave EM data. This strategy is based on incorporating a priori information into the algorithm according to the expected changes of the electrical conductivity model. This is achieved by integrating a stochastic regularization and flexible constraints on the range of expected changes using Lagrange multipliers. I use norms other than the l2 norm to constrain the model structure and to obtain sharp transitions between the regions of the model that undergo changes over time and those that do not. I also incorporate a strategy to remove systematic errors from the time-lapse data. This work demonstrated an improved characterization of temporal changes compared with classical approaches that perform independent inversions at each time step and compare the models. In the second part of this thesis, I adopt a Bayesian formalism and test the possibility of quantifying the uncertainties on the model parameters in plane-wave EM inversion. To this end, I present a two-dimensional pixel-based probabilistic inversion strategy for separate and joint inversions of plane-wave EM and electrical resistivity tomography (ERT) data. I compare the uncertainties of the model parameters when considering different types of a priori information on the model structure and different likelihood functions to describe the data errors. The results indicate that model regularization is necessary when dealing with a large number of parameters, because it accelerates the convergence of the chains and yields more realistic models.
However, these constraints lead to smaller uncertainty estimates, which implies posterior distributions that do not contain the true model in regions where the method has limited sensitivity. This situation can be improved by combining plane-wave EM methods with other complementary methods such as ERT. Moreover, I show that the regularization weight of the parameters and the standard deviation of the data errors can be recovered by a probabilistic inversion. Finally, I evaluate the possibility of characterizing the three-dimensional distribution of a saline tracer plume injected into the subsurface by performing a three-dimensional time-lapse probabilistic inversion of plane-wave EM data. Given that probabilistic inversions are computationally very expensive when the parameter space is high-dimensional, I propose a model-reduction strategy in which the coefficients of a Legendre moment decomposition of the injected tracer plume, as well as its position, are estimated. For this purpose, a base resistivity model is needed; it can be obtained before the time-lapse experiment. A synthetic test shows that the methodology works well when the base resistivity model is correctly characterized. This methodology is also applied to a tracer test involving the injection of a saline and acid solution carried out in a geothermal system in Australia, and then compared with a three-dimensional time-lapse inversion performed with a deterministic approach. The probabilistic inversion better constrains the saline tracer plume thanks to the large amount of a priori information included in the algorithm. Nevertheless, the conductivity changes needed to explain the changes observed in the data are larger than what our current knowledge of the physical phenomena can explain. This problem may be related to the limited quality of the base resistivity model used, indicating that greater effort should be devoted in the future to obtaining good-quality base models before carrying out dynamic experiments. The studies described in this thesis show that plane-wave EM methods are very useful for characterizing and monitoring temporal variations of the subsurface over wide scales. The present approaches improve the appraisal of the obtained models, both in terms of the incorporation of a priori information and in terms of posterior uncertainty quantification. Moreover, the strategies developed can be applied to other geophysical methods and offer great flexibility to incorporate additional information when available. -- The efficient use of geothermal systems, the sequestration of CO2 to mitigate climate change, and the prevention of seawater intrusion in coastal aquifers are only some examples that demonstrate the need for novel technologies to monitor subsurface processes from the surface. A main challenge is to assure optimal performance of such technologies at different temporal and spatial scales. Plane-wave electromagnetic (EM) methods are sensitive to subsurface electrical conductivity and consequently to fluid conductivity, fracture connectivity, temperature, and rock mineralogy.
These methods have governing equations that are the same over a large range of frequencies, thus allowing one to study, in an analogous manner, processes on scales ranging from a few meters below the surface down to several hundreds of kilometers depth. Unfortunately, they suffer from a significant resolution loss with depth due to the diffusive nature of the electromagnetic fields. Therefore, estimations of subsurface models that use these methods should incorporate a priori information to better constrain the models, and provide appropriate measures of model uncertainty. During my thesis, I have developed approaches to improve the static and dynamic characterization of the subsurface with plane-wave EM methods. In the first part of this thesis, I present a two-dimensional deterministic approach to perform time-lapse inversion of plane-wave EM data. The strategy is based on the incorporation of prior information into the inversion algorithm regarding the expected temporal changes in electrical conductivity. This is done by incorporating a flexible stochastic regularization and constraints regarding the expected ranges of the changes through Lagrange multipliers. I use non-l2 norms to penalize the model update in order to obtain sharp transitions between regions that experience temporal changes and regions that do not. I also incorporate a time-lapse differencing strategy to remove systematic errors in the time-lapse inversion. This work presents improvements in the characterization of temporal changes with respect to the classical approach of performing separate inversions and computing differences between the models. In the second part of this thesis, I adopt a Bayesian framework and use Markov chain Monte Carlo (MCMC) simulations to quantify model parameter uncertainty in plane-wave EM inversion. For this purpose, I present a two-dimensional pixel-based probabilistic inversion strategy for separate and joint inversions of plane-wave EM and electrical resistivity tomography (ERT) data. I compare the uncertainties of the model parameters when considering different types of prior information on the model structure and different likelihood functions to describe the data errors. The results indicate that model regularization is necessary when dealing with a large number of model parameters because it helps to accelerate the convergence of the chains and leads to more realistic models. These constraints also lead to smaller uncertainty estimates, which imply posterior distributions that do not include the true underlying model in regions where the method has limited sensitivity. This situation can be improved by combining plane-wave EM methods with complementary geophysical methods such as ERT. In addition, I show that an appropriate regularization weight and the standard deviation of the data errors can be retrieved by the MCMC inversion. Finally, I evaluate the possibility of characterizing the three-dimensional distribution of an injected water plume by performing three-dimensional time-lapse MCMC inversion of plane-wave EM data. Since MCMC inversion involves a significant computational burden in high parameter dimensions, I propose a model reduction strategy where the coefficients of a Legendre moment decomposition of the injected water plume and its location are estimated. For this purpose, a base resistivity model is needed, which is obtained prior to the time-lapse experiment. A synthetic test shows that the methodology works well when the base resistivity model is correctly characterized.
The methodology is also applied to an injection experiment performed in a geothermal system in Australia, and compared to a three-dimensional time-lapse inversion performed within a deterministic framework. The MCMC inversion better constrains the water plume due to the larger amount of prior information included in the algorithm. However, the conductivity changes needed to explain the time-lapse data are much larger than what is physically plausible based on present-day understanding. This issue may be related to the base resistivity model used, indicating that more effort should be devoted to obtaining high-quality base models prior to dynamic experiments. The studies described herein give clear evidence that plane-wave EM methods are useful for characterizing and monitoring the subsurface at a wide range of scales. The presented approaches contribute to an improved appraisal of the obtained models, both in terms of the incorporation of prior information in the algorithms and of the posterior uncertainty quantification. In addition, the developed strategies can be applied to other geophysical methods, and offer great flexibility to incorporate additional information when available.
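Since the probabilistic part of this work hinges on MCMC sampling, here is a minimal Metropolis sketch on a toy one-parameter problem (the forward model, noise level, prior bounds, and step size are all made up; a real plane-wave EM inversion involves expensive forward solvers and thousands of parameters).

```python
import math
import random

# Metropolis sampling of a 1-D posterior: infer m from noisy data
# d = g(m) + noise, with toy forward model g(m) = m**2.
random.seed(1)
g = lambda m: m * m
m_true, sigma = 1.5, 0.1
data = [g(m_true) + random.gauss(0, sigma) for _ in range(20)]

def log_post(m):
    # flat prior on [0, 10]; Gaussian likelihood
    if not 0.0 <= m <= 10.0:
        return -math.inf
    return -sum((d - g(m)) ** 2 for d in data) / (2 * sigma ** 2)

m, samples = 5.0, []
for _ in range(20000):
    prop = m + random.gauss(0, 0.1)          # random-walk proposal
    if math.log(random.random()) < log_post(prop) - log_post(m):
        m = prop                             # accept
    samples.append(m)

post = samples[5000:]                        # discard burn-in
print(sum(post) / len(post))                 # posterior mean, near m_true
```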