88 results for FLEXIBILITY
Abstract:
Retrospective research is conducted on already available data and/or biologic material. Whether such research requires that patients specifically consent to the use of "their" data continues to stir controversy. From a legal and ethical point of view, it depends on several factors. The main criteria to be considered are whether the data or the sample is anonymous, whether the researcher is the one who collected it and whether the patient was told of the possible research use. In Switzerland, several laws delineate the procedure to be followed. The definition of "anonymous" is open to some interpretation. In addition, it is debatable whether consent waivers that are legally admissible for data extend to research involving human biological samples. In a few years, a new Swiss federal law on human research could clarify the regulatory landscape. Meanwhile, hospital-internal guidelines may impose stricter conditions than required by federal or cantonal law. Conversely, Swiss and European ethical texts may suggest greater flexibility and call for a looser interpretation of existing laws. The present article provides an overview of the issues for physicians, scientists, ethics committee members and policy makers involved in retrospective research in Switzerland. It aims at provoking more open discussions of the regulatory problems and possible future legal and ethical solutions.
Abstract:
Psychosis is a debilitating disease that harms both the individual and society. Since early detection of the disease is associated with a more benign course, factors that enable the early detection of psychosis are needed. The present thesis focuses on two potential risk factors, namely schizotypy and drug use. The schizotypy concept, originally developed by Meehl (1962), states that schizophrenia symptoms exist on a spectrum, ranging from the most severe in patients with schizophrenia to the least affected individuals in the general population. Along the schizophrenia spectrum, cognitive impairments are commonly found, for instance reduced hemispheric asymmetry or reduced frontal lobe functions. The second risk factor, drug use, affects cognitive functions similar to those attenuated along the schizophrenia spectrum, and drug use is elevated both in schizophrenia and in people scoring high on schizotypy. We therefore set out to investigate whether cognitive attenuations previously attributed to schizotypal symptoms could have been influenced by elevated substance use in this population. To test this idea, we assessed the use of various drugs (nicotine, cannabis, mephedrone, general substance dependence) and schizotypy symptoms (O-LIFE), and measured either hemispheric asymmetry of function (left-hemisphere dominance for language, right-hemisphere dominance for face processing) or functions relying largely on the frontal lobes (such as cognitive flexibility, working memory, verbal short-term memory, verbal learning and verbal fluency). The results of all studies suggest that it is mostly drug use, and not schizotypy in general, that predicts cognitive functioning. Cognitive attenuations ascribed to schizotypy dimensions are therefore likely to have been affected by elevated drug use. Future studies should extend the list of potential risk factors (e.g. depression and IQ) to acquire a comprehensive overview of the most reliable predictors of disadvantageous cognitive profiles.
Abstract:
BACKGROUND: The goals of our study are to determine the most appropriate model for alcohol consumption as an exposure for burden of disease, to analyze the effect of the chosen alcohol consumption distribution on the estimation of alcohol Population-Attributable Fractions (PAFs), and to characterize the chosen alcohol consumption distribution by exploring whether there is a global relationship within the distribution. METHODS: To identify the best model, the Log-Normal, Gamma, and Weibull prevalence distributions were examined using data from 41 surveys from Gender, Alcohol and Culture: An International Study (GENACIS) and from the European Comparative Alcohol Study. To assess the effect of these distributions on the estimated alcohol PAFs, we calculated the alcohol PAF for diabetes, breast cancer, and pancreatitis using the three above-named distributions and using the more traditional approach based on categories. The relationship between the mean and the standard deviation of the Gamma distribution was estimated using data from 851 datasets for 66 countries from GENACIS and from the STEPwise approach to Surveillance of the World Health Organization. RESULTS: The Log-Normal distribution provided a poor fit for the survey data, with the Gamma and Weibull distributions providing better fits. Additionally, our analyses showed no marked differences between the alcohol PAF estimates based on the Gamma or Weibull distributions and the PAFs based on categorical alcohol consumption estimates. The standard deviation of the alcohol distribution was highly dependent on the mean, with a unit increase in mean consumption associated with an increase in the standard deviation of 1.258 (95% CI: 1.223 to 1.293) (R² = 0.9207) for women and 1.171 (95% CI: 1.144 to 1.197) (R² = 0.9474) for men. CONCLUSIONS: Although the Gamma and Weibull distributions provided similar results, the Gamma distribution is recommended for modeling alcohol consumption from population surveys because of its fit, flexibility, and the ease with which it can be modified. The results showed that a large proportion of the variance of the standard deviation of the alcohol consumption Gamma distribution was explained by the mean alcohol consumption, allowing alcohol consumption to be modeled through a Gamma distribution using only average consumption.
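As a rough illustration of the modeling strategy described in this abstract, the sketch below (Python, using NumPy/SciPy) builds a Gamma distribution from a mean consumption value using the reported mean-to-standard-deviation coefficients (1.258 for women, 1.171 for men) and evaluates a continuous population-attributable fraction; the relative-risk curve and all numerical inputs are hypothetical placeholders, not values from the study.

```python
# Minimal sketch, assuming a hypothetical relative-risk curve: model population
# alcohol consumption with a Gamma distribution whose standard deviation is
# predicted from the mean (coefficients taken from the abstract), then compute
# a population-attributable fraction (PAF).
import numpy as np
from scipy import stats
from scipy.integrate import quad

def gamma_from_mean(mean_g_per_day, sex="female"):
    """Build a Gamma distribution from mean daily consumption (grams/day).

    SD is approximated as a linear function of the mean, as reported in the
    abstract (1.258 * mean for women, 1.171 * mean for men)."""
    sd = (1.258 if sex == "female" else 1.171) * mean_g_per_day
    shape = (mean_g_per_day / sd) ** 2          # k = mean^2 / variance
    scale = sd ** 2 / mean_g_per_day            # theta = variance / mean
    return stats.gamma(a=shape, scale=scale)

def relative_risk(x):
    # Hypothetical monotone dose-response curve, for illustration only.
    return np.exp(0.01 * x)

def paf(mean_g_per_day, p_drinkers, sex="female", upper=150.0):
    """Continuous PAF among current drinkers, scaled by drinker prevalence."""
    dist = gamma_from_mean(mean_g_per_day, sex)
    excess, _ = quad(lambda x: dist.pdf(x) * (relative_risk(x) - 1.0), 0.0, upper)
    excess *= p_drinkers
    return excess / (excess + 1.0)

if __name__ == "__main__":
    print(f"Illustrative PAF: {paf(mean_g_per_day=20.0, p_drinkers=0.7):.3f}")
```

The only study-specific ingredient here is the mean-to-SD relationship, which is what allows the whole exposure distribution to be reconstructed from average consumption alone.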
Abstract:
Instead of standard rigid thoracoscopes, we used a modified gastroscope for video assistance during 12 minimally invasive left internal mammary artery harvesting procedures. The flexibility and remote control of its last few centimeters give the gastroscope complete freedom of movement and accurate positioning in every direction. The scope is equipped with a cold light source, a suction channel and an irrigation channel, which allow in situ washing without the need to remove it from the thoracic cavity. Thanks to these advantages, vision and lighting were consistently excellent.
Abstract:
This review describes the advances in malaria antigen discovery and vaccine development using the long synthetic peptide platforms that have been made available during the past 5 years. The most recent technical developments regarding peptide synthesis with the optimized production of large synthetic fragments are discussed. Clinical trials of long synthetic peptides are also reviewed. These trials demonstrated that long synthetic peptides are safe and immunogenic when formulated with various adjuvants. In addition, long synthetic peptides can elicit an antibody response in humans and have demonstrated inhibitory activity against parasite growth in vitro. Finally, new approaches to exploit the abundance of genomic data and the flexibility and speed of peptide synthesis are proposed.
Abstract:
The measurement of rigidity and perseveration is gaining importance in clinical psychodiagnostics. We recently developed a computer-assisted technique that provides information about inadequate persistence in psychic processes and behaviour within a very short time and differentiates between psychopathological groups. We investigated 257 patients of both sexes who were referred to the department of clinical psychodiagnostics for clarification of their disorders. The most significant differences between the groups were found in redundancy of the second degree (the patient has to press 10 buttons in any order to the beat of a metronome -- standard condition) and in personal speed (the patient has to press 10 buttons as fast as possible -- speed condition). Furthermore, when the psychopathological groups were ranked on the individual rigidity variables according to their mean values and average ranks, the schizophrenics and affective psychoses were characterized by a high tendency toward perseveration, whereas the neurotics, patients with organic brain syndrome, and alcohol- and drug-dependent patients showed more flexibility.
Abstract:
"Sitting between your past and your future doesn't mean you are in the present." (Dakota Skye) Complex systems science is an interdisciplinary field that groups under the same umbrella dynamical phenomena from the social, natural and mathematical sciences. The emergence of a higher-order organization or behavior, transcending that expected from the linear addition of the parts, is a key feature shared by all these systems. Most complex systems can be modeled as networks that represent the interactions among the system's components. In addition to the actual nature of these interactions, the intrinsic topological structure of the underlying network is believed to play a crucial role in the remarkable emergent behaviors exhibited by such systems. Moreover, the topology is also a key factor in explaining their extraordinary flexibility and resilience to perturbations in transmission and diffusion phenomena. In this work, we study the effect of different network structures on the performance and fault tolerance of systems in two different contexts. In the first part, we study cellular automata, a simple paradigm for distributed computation. Cellular automata are made of basic Boolean computational units, the cells, which rely on simple rules and information from the surrounding cells to perform a global task. The limited visibility of the cells can be modeled as a network, where interactions among cells are governed by an underlying structure, usually a regular one. In order to increase the performance of cellular automata, we chose to change their topology. We applied computational principles inspired by Darwinian evolution, called evolutionary algorithms, to alter the system's topological structure starting from either a regular or a random one. The outcome is remarkable: the resulting topologies share properties of both regular and random networks, and display similarities with the Watts-Strogatz small-world networks found in social systems. Moreover, the performance and tolerance to probabilistic faults of our small-world-like cellular automata surpass those of regular ones. In the second part, we use the context of biological genetic regulatory networks and, in particular, Kauffman's random Boolean network model. In some ways, this model is close to cellular automata, although it is not expected to perform any task. Instead, it simulates the time evolution of genetic regulation within living organisms under strict conditions. The original model, though very attractive in its simplicity, suffered from important shortcomings unveiled by recent advances in genetics and biology. We propose to use these new discoveries to improve the original model. Firstly, we have used artificial topologies believed to be closer to those of gene regulatory networks. We have also studied actual biological organisms and used parts of their genetic regulatory networks in our models. Secondly, we have addressed the improbable full synchronicity of the events taking place on Boolean networks and proposed a more biologically plausible cascading update scheme. Finally, we tackled the actual Boolean functions of the model, i.e. the specifics of how genes activate according to the activity of upstream genes, and presented a new update function that takes into account the actual promoting and repressing effects of one gene on another. Our improved models demonstrate the expected, biologically sound behavior of previous GRN models, yet with superior resistance to perturbations.
We believe they are one step closer to the biological reality.
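To make the random Boolean network model mentioned above concrete, here is a minimal Python sketch of a classic Kauffman-style network with synchronous updates and attractor detection; it does not include the evolved topologies, cascading update scheme, or biologically derived rules developed in the thesis, and all parameters are illustrative.

```python
# Minimal sketch of a Kauffman-style random Boolean network (RBN) with a
# synchronous update rule; N and K are illustrative parameters.
import random

def make_rbn(n_nodes=12, k_inputs=2, seed=0):
    rng = random.Random(seed)
    # Each node reads K randomly chosen nodes and applies a random Boolean function.
    inputs = [rng.sample(range(n_nodes), k_inputs) for _ in range(n_nodes)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k_inputs)] for _ in range(n_nodes)]
    return inputs, tables

def step(state, inputs, tables):
    new_state = []
    for ins, table in zip(inputs, tables):
        index = 0
        for bit in (state[i] for i in ins):
            index = (index << 1) | bit       # encode the input pattern as a table index
        new_state.append(table[index])
    return tuple(new_state)

def find_attractor(inputs, tables, init_state, max_steps=1000):
    """Iterate synchronously until a previously seen state recurs; return the cycle length."""
    seen = {}
    state = init_state
    for t in range(max_steps):
        if state in seen:
            return t - seen[state]           # length of the limit cycle (attractor)
        seen[state] = t
        state = step(state, inputs, tables)
    return None

if __name__ == "__main__":
    n = 12
    inputs, tables = make_rbn(n_nodes=n)
    init = tuple(random.Random(1).randint(0, 1) for _ in range(n))
    print("Attractor (limit cycle) length:", find_attractor(inputs, tables, init))
```

Replacing the random wiring in make_rbn with a measured or evolved topology is the kind of modification the thesis explores.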
Abstract:
The prevalence of obesity is rising progressively, even among older age groups. By 2030-2035, over 20% of the adult US population and over 25% of Europeans will be aged 65 years and older. The predicted prevalence of obesity in Americans aged 60 years and older was 37% in 2010. The predicted prevalence of obesity in Europe in 2015 varies between 20 and 30%, depending on the model used. This corresponds to 20.9 million obese people aged 60+ in the USA in 2010 and 32 million obese elders in the EU in 2015. Although cut-off values of BMI, waist circumference and percentage of fat mass have not been defined for the elderly (nor for the elderly of different ethnicities), several meta-analyses make clear that the mortality and morbidity associated with overweight and obesity only increase at a BMI above 30 kg/m². Thus, treatment should only be offered to patients who are obese rather than overweight and who also have functional impairments, metabolic complications or obesity-related diseases that can benefit from weight loss. Weight loss therapy should aim to minimize muscle and bone loss; vigilance regarding the development of sarcopenic obesity - a combination of an unhealthy excess of body fat with a detrimental loss of muscle and fat-free mass, including bone - is also important in the elderly, who are vulnerable to this outcome. Lifestyle intervention should be the first step and consists of a diet with a 500 kcal (2.1 MJ) energy deficit and an adequate intake of protein of high biological quality, together with calcium and vitamin D, behavioural therapy and multi-component exercise. Multi-component exercise includes flexibility training, balance training, aerobic exercise and resistance training. The adherence rate in most studies is around 75%. Knowledge of the constraints and modulators of physical inactivity should help engage the elderly in physical activity. The role of pharmacotherapy and bariatric surgery in the elderly is largely unknown, as most studies have excluded people aged 65 years and older.
Abstract:
Ligands and receptors of the TNF superfamily are therapeutically relevant targets in a wide range of human diseases. This chapter describes assays based on ELISA, immunoprecipitation, FACS, and reporter cell lines to monitor interactions of tagged receptors and ligands in both soluble and membrane-bound forms using unified detection techniques. A reporter cell assay that is sensitive to ligand oligomerization can identify ligands with high probability of being active on endogenous receptors. Several assays are also suitable to measure the activity of agonist or antagonist antibodies, or to detect interactions with proteoglycans. Finally, self-interaction of membrane-bound receptors can be evidenced using a FRET-based assay. This panel of methods provides a large degree of flexibility to address questions related to the specificity, activation, or inhibition of TNF-TNF receptor interactions in independent assay systems, but does not substitute for further tests in physiologically relevant conditions.
Abstract:
In this paper we study the relevance of multiple kernel learning (MKL) for the automatic selection of time series inputs. Recently, MKL has gained great attention in the machine learning community due to its flexibility in modelling complex patterns and performing feature selection. In general, MKL constructs the kernel as a weighted linear combination of basis kernels, exploiting different sources of information. An efficient algorithm wrapping a Support Vector Regression model for optimizing the MKL weights, named SimpleMKL, is used for the analysis. In this way, MKL performs feature selection by discarding inputs/kernels with low or null weights. The proposed approach is tested with simulated linear and nonlinear time series (autoregressive, Hénon and Lorenz series).
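The core MKL idea described here, a kernel built as a weighted sum of basis kernels where near-zero weights discard inputs, can be illustrated with the following Python sketch using scikit-learn's SVR with a precomputed kernel; the crude grid search over weights is only a stand-in for SimpleMKL's gradient-based optimization, and the toy series and parameters are assumptions for illustration.

```python
# Minimal sketch of weighted kernel combination for time series input selection.
# Each candidate lag gets its own RBF basis kernel; the combined kernel is a
# weighted sum, and lags whose weights end up at zero are discarded.
import itertools
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVR

def lagged_matrix(series, n_lags):
    # Build an input matrix of lagged values and the one-step-ahead target.
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    y = series[n_lags:]
    return X, y

def combined_kernel(X, weights, gamma=1.0):
    # One basis kernel per input (lag): K = sum_m d_m * K_m with d_m >= 0.
    K = np.zeros((X.shape[0], X.shape[0]))
    for m, d in enumerate(weights):
        if d > 0:
            K += d * rbf_kernel(X[:, [m]], gamma=gamma)
    return K

def fit_score(X, y, weights):
    K = combined_kernel(X, weights)
    model = SVR(kernel="precomputed", C=10.0).fit(K, y)
    return model.score(K, y)   # in-sample R^2; a proper study would cross-validate

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(300)
    series = np.sin(0.3 * t) + 0.1 * rng.standard_normal(t.size)  # toy series
    X, y = lagged_matrix(series, n_lags=3)
    best = max(
        (w for w in itertools.product([0.0, 0.5, 1.0], repeat=3) if any(w)),
        key=lambda w: fit_score(X, y, w),
    )
    print("Selected kernel weights (zero = lag discarded):", best)
```

In SimpleMKL the weight vector is optimized by gradient descent on the SVM objective under a simplex constraint rather than by the exhaustive search shown here, but the interpretation of zero weights as discarded inputs is the same.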
Abstract:
By the end of the 1970s, contaminated sites had emerged as one of the most complex and urgent environmental issues affecting industrialized countries. The authors show that small and prosperous Switzerland is no exception to the pervasive problem of site contamination, the legacy of past waste management practices having left some 38,000 contaminated sites throughout the country. This book outlines the problem, offering evidence that open and polycentric environmental decision-making that includes civil society actors is valuable. The authors propose an understanding of the environmental management of contaminated sites as a political process in which institutions frame interactions between strategic actors pursuing sometimes conflicting interests. In the opening chapter, the authors describe the influence of politics and the power relationships between actors involved in decision-making on contaminated sites management, which they term a "wicked problem." Chapter Two offers a theoretical framework for understanding institutions and the environmental management of contaminated sites. The next five chapters present a detailed case study on environmental management and contaminated sites in Switzerland, focused on the Bonfol Chemical Landfill. The study and analysis cover the establishment of the landfill under the first generation of environmental regulations, its closure and early remediation efforts, and the gambling on the remediation objectives, methods and funding in the first decade of the 21st century. The concluding chapter discusses whether the strength of environmental regulations and the type of interactions between public, private, and civil society actors can explain the environmental choices in contaminated sites management. Drawing lessons from the research, the authors debate the value of institutional flexibility for dealing with environmental issues such as contaminated sites.
Abstract:
One of the key emphases of these three essays is to provide practical managerial insight. However, good practical insight can only be created by grounding it firmly in theoretical and empirical research. Practical, experience-based understanding without theoretical grounding remains tacit and cannot be easily disseminated. Theoretical understanding without links to real life remains sterile. My studies aim to increase the understanding of how radical innovation can be generated at large established firms and how it can affect business performance, as most businesses pursue innovation with one prime objective: value creation. My studies focus on large established firms with sales revenue exceeding USD 1 billion. Large established firms usually cannot rely on informal ways of management, as these firms tend to be multinational businesses operating with subsidiaries, offices, or production facilities in more than one country. I. Internal and External Determinants of Corporate Venture Capital Investment The goal of this chapter is to focus on corporate venture capital (CVC) as one of the mechanisms available to established firms for sourcing new ideas that can be exploited. We explore the internal and external determinants under which established firms engage in CVC to source new knowledge through investment in startups. We attempt to make scholars and managers aware of the forces that influence CVC activity by providing findings and insights to facilitate the strategic management of CVC. There are research opportunities to further understand the CVC phenomenon. Why do companies engage in CVC? What motivates them to continue "playing the game" and keep their CVC investment status active? The study examines CVC investment activity and the importance of understanding the influential factors that lead a firm to engage in CVC. The main question is: How do established firms' CVC programs adapt to changing internal conditions and external environments? Adaptation typically involves learning from exploratory endeavors, which enable companies to transform the ways they compete (Guth & Ginsberg, 1990). Our study extends the current stream of research on CVC. It aims to contribute to the literature by providing an extensive comparison of the internal and external determinants leading to CVC investment activity. To our knowledge, this is the first study to examine the influence of internal and external determinants on CVC activity throughout specific expansion and contraction periods determined by structural breaks occurring between 1985 and 2008. Our econometric analysis indicates a strong and significant positive association between CVC activity and R&D, cash flow availability and external financial market conditions, as well as a significant negative association between sales growth and the decision to engage in CVC. The analysis reveals that CVC investment is highly volatile, as demonstrated by dramatic fluctuations in CVC investment activity over the past decades. When analyzing the overall cyclical CVC period from 1985 to 2008, the results of our study suggest that CVC activity follows a pattern influenced by financial factors such as the level of R&D, free cash flow and lack of sales growth, and by external economic conditions, with the NASDAQ price index as the most significant variable influencing CVC during this period. II.
Contribution of CVC and its Interaction with R&D to Value Creation The second essay takes into account the demands of corporate executives and shareholders regarding business performance and value creation justifications for investments in innovation. Billions of dollars are invested in CVC and R&D. However, there is little evidence that CVC and its interaction with R&D create value. Firms operating in dynamic business sectors seek to innovate to create the value demanded by changing market conditions, consumer preferences, and competitive offerings. Consequently, firms operating in such business sectors put a premium on finding new, sustainable and competitive value propositions. CVC and R&D can help them in this challenge. Dushnitsky and Lenox (2006) presented evidence that CVC investment is associated with value creation. However, studies have shown that the most innovative firms do not necessarily benefit from innovation. For instance, Oyon (2007) indicated that between 1995 and 2005 the most innovative automotive companies did not obtain adequate rewards for shareholders. The interaction between CVC and R&D has generated much debate in the CVC literature. Some researchers see them as substitutes, suggesting that firms have to choose between CVC and R&D (Hellmann, 2002), while others expect them to be complementary (Chesbrough & Tucci, 2004). This study explores the effect that the interaction of CVC and R&D has on value creation. This essay examines the impact of CVC and R&D on value creation over sixteen years across six business sectors and different geographical regions. Our findings suggest that the effect of CVC and its interaction with R&D on value creation is positive and significant. In dynamic business sectors technologies rapidly become obsolete; consequently, firms operating in such sectors need to continuously develop new sources of value creation (Eisenhardt & Martin, 2000; Qualls, Olshavsky, & Michaels, 1981). We conclude that in order to affect value creation, firms operating in business sectors such as Engineering & Business Services and Information & Communication Technology ought to consider CVC a vital element of their innovation strategy. Moreover, regarding the interaction effect of CVC and R&D, our findings suggest that R&D and CVC are complementary with respect to value creation; hence, firms in certain business sectors may be better off supporting both R&D and CVC simultaneously to increase the probability of creating value. III. MCS and Organizational Structures for Radical Innovation Incremental innovation is necessary for continuous improvement, but it does not provide a sustainable, permanent source of competitiveness (Cooper, 2003). On the other hand, radical innovation pursuing new technologies and new market frontiers can generate new platforms for growth, providing firms with competitive advantages and high economic margin rents (Duchesneau et al., 1979; Markides & Geroski, 2005; O'Connor & DeMartino, 2006; Utterback, 1994). Interestingly, not all companies distinguish between incremental and radical innovation and, more importantly, firms that manage innovation through a one-size-fits-all process can almost guarantee a sub-optimization of certain systems and resources (Davila et al., 2006). Moreover, we conducted research on the utilization of management control systems (MCS) along with radical innovation and flexible organizational structures, as these have been associated with firm growth (Cooper, 2003; Davila & Foster, 2005, 2007; Markides & Geroski, 2005; O'Connor & DeMartino, 2006).
Davila et al. (2009) identified research opportunities in innovation management and provided a list of pending issues: How do companies manage the process of radical and incremental innovation? What performance measures do companies use to manage radical ideas, and how do they select them? The fundamental objective of this paper is to address the following research question: What are the processes, MCS, and organizational structures for generating radical innovation? Moreover, in recent years, research on innovation management has been conducted mainly either at the firm level (Birkinshaw, Hamel, & Mol, 2008a) or at the project level, examining appropriate management techniques associated with high levels of uncertainty (Burgelman & Sayles, 1988; Dougherty & Heller, 1994; Jelinek & Schoonhoven, 1993; Kanter, North, Bernstein, & Williamson, 1990; Leifer et al., 2000). Therefore, we embarked on a novel process-related research framework to observe the process stages, MCS, and organizational structures that can generate radical innovation. This article is based on a case study at Alcan Engineered Products, a division of a multinational provider of lightweight material solutions. Our observations suggest that incremental and radical innovation should be managed through different processes, MCS and organizational structures, which ought to be activated and adapted contingent on the type of innovation being pursued (i.e. incremental or radical innovation). More importantly, we conclude that radical innovation can be generated in a systematic way through enablers such as processes, MCS, and organizational structures. This is in line with the findings of Jelinek and Schoonhoven (1993) and Davila et al. (2006; 2007), who show that innovative firms have institutionalized mechanisms, arguing that radical innovation cannot occur in an organic environment where flexibility and consensus are the main managerial mechanisms. Rather, they argue that radical innovation requires a clear organizational structure and formal MCS.
Abstract:
Coagulation factor V (FV) deficiency is characterised by variable bleeding phenotypes and heterogeneous mutations. To add new insights into the FV genotype-phenotype relationship, we characterised the R1698W change in the A3 domain, at the poorly investigated interface with the A2 domain. The FV R1698W mutation was responsible for a markedly reduced expression level (10% of FV-WT) and specific activity in thrombin generation (0.39). Interestingly, FVa R1698W showed rapid activity decay upon activation, owing to an increased dissociation rate between the heavy and light chains. The importance of the size and charge of the residue at position 1698 was investigated with three additional recombinant mutants, FV R1698A, FV R1698Q, and FV R1698E. FV R1698A and FV R1698Q expression (30 and 45% of FV-WT), specific activity (both 0.57) and stability were all reduced. Notably, FV R1698E showed normal activity and stability despite poor expression (10% of FV-WT). These data indicate the essential role of R1698 in the normal biosynthetic process and support local flexibility, allowing positively or negatively charged residues to produce stable and functional A3-A2 domain interactions. Experimental alteration of this residue produces a gradient of FV defects, which helps to interpret the wide spectrum of phenotypes in FV-deficient patients.
Abstract:
The co-occurrence of PTSD and substance use disorder (SD) is known to be very high. However, the question of whether and how to treat such patients remains largely unanswered in the EMDR community. We report on two cases of EMDR-based treatment of heavily affected SD patients in whom psychotraumatic antecedents were identified. EMDR sessions focused on trauma-related material and not on the expression of cue-induced drug craving. The treatment proved to be a difficult and challenging endeavour. However, some beneficial effects on general comfort and on drug consumption could be observed. A long stabilization phase was mandatory, and the standard EMDR protocol needed to be applied with much flexibility. Interestingly, there was no provocation of a prolonged psychological crisis or of relapse. The experience of emotional stress could be limited to the sessions, and dissociation could be contained with specific, well-known techniques without a lasting increase in drug craving. These observations are discussed in relation to previously published concepts of using EMDR in the field of trauma and substance abuse.
Abstract:
The efficient use of geothermal systems, the sequestration of CO2 to mitigate climate change, and the prevention of seawater intrusion in coastal aquifers are only some examples that demonstrate the need for novel technologies to monitor subsurface processes from the surface. A main challenge is to assure optimal performance of such technologies at different temporal and spatial scales. Plane-wave electromagnetic (EM) methods are sensitive to subsurface electrical conductivity and consequently to fluid conductivity, fracture connectivity, temperature, and rock mineralogy.
These methods have governing equations that are the same over a large range of frequencies, thus allowing processes to be studied in an analogous manner on scales ranging from a few meters below the surface down to several hundred kilometers in depth. Unfortunately, they suffer from a significant resolution loss with depth due to the diffusive nature of the electromagnetic fields. Therefore, estimations of subsurface models that use these methods should incorporate a priori information to better constrain the models, and provide appropriate measures of model uncertainty. During my thesis, I have developed approaches to improve the static and dynamic characterization of the subsurface with plane-wave EM methods. In the first part of this thesis, I present a two-dimensional deterministic approach to perform time-lapse inversion of plane-wave EM data. The strategy is based on the incorporation of prior information into the inversion algorithm regarding the expected temporal changes in electrical conductivity. This is done by incorporating a flexible stochastic regularization and constraints on the expected ranges of the changes by using Lagrange multipliers. I use non-l2 norms to penalize the model update in order to obtain sharp transitions between regions that experience temporal changes and regions that do not. I also incorporate a time-lapse differencing strategy to remove systematic errors in the time-lapse inversion. This work presents improvements in the characterization of temporal changes with respect to the classical approach of performing separate inversions and computing differences between the models. In the second part of this thesis, I adopt a Bayesian framework and use Markov chain Monte Carlo (MCMC) simulations to quantify model parameter uncertainty in plane-wave EM inversion. For this purpose, I present a two-dimensional pixel-based probabilistic inversion strategy for separate and joint inversions of plane-wave EM and electrical resistivity tomography (ERT) data. I compare the uncertainties of the model parameters when considering different types of prior information on the model structure and different likelihood functions to describe the data errors. The results indicate that model regularization is necessary when dealing with a large number of model parameters because it helps to accelerate the convergence of the chains and leads to more realistic models. However, these constraints also lead to smaller uncertainty estimates, which imply posterior distributions that do not include the true underlying model in regions where the method has limited sensitivity. This situation can be improved by combining plane-wave EM methods with complementary geophysical methods such as ERT. In addition, I show that an appropriate regularization weight and the standard deviation of the data errors can be retrieved by the MCMC inversion. Finally, I evaluate the possibility of characterizing the three-dimensional distribution of an injected water plume by performing three-dimensional time-lapse MCMC inversion of plane-wave EM data. Since MCMC inversion involves a significant computational burden in high parameter dimensions, I propose a model reduction strategy in which the coefficients of a Legendre moment decomposition of the injected water plume and its location are estimated. For this purpose, a base resistivity model is needed, which is obtained prior to the time-lapse experiment. A synthetic test shows that the methodology works well when the base resistivity model is correctly characterized.
The methodology is also applied to an injection experiment performed in a geothermal system in Australia, and compared to a three-dimensional time-lapse inversion performed within a deterministic framework. The MCMC inversion better constrains the water plume thanks to the larger amount of prior information included in the algorithm. However, the conductivity changes needed to explain the time-lapse data are much larger than what is physically plausible based on present-day understanding. This issue may be related to the limited quality of the base resistivity model used, indicating that more effort should be devoted to obtaining high-quality base models prior to dynamic experiments. The studies described herein give clear evidence that plane-wave EM methods are useful for characterizing and monitoring the subsurface at a wide range of scales. The presented approaches contribute to an improved appraisal of the obtained models, both in terms of the incorporation of prior information in the algorithms and in terms of posterior uncertainty quantification. In addition, the developed strategies can be applied to other geophysical methods and offer great flexibility to incorporate additional information when available.
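As a schematic illustration of the pixel-based MCMC inversion strategy summarized above, the Python sketch below runs a simple Metropolis sampler on a one-dimensional model with a Gaussian likelihood and a smoothness prior; the linear forward operator is a toy stand-in for the plane-wave EM forward solver, and none of the numerical choices come from the thesis.

```python
# Minimal sketch of a pixel-based Metropolis (MCMC) inversion for a generic
# 1-D "resistivity" profile. A toy linear operator replaces the actual
# plane-wave EM forward solver, and the smoothness prior plays the role of
# the model regularization discussed in the abstract.
import numpy as np

rng = np.random.default_rng(0)

n_cells, n_data = 30, 20
G = rng.random((n_data, n_cells)) / n_cells          # toy forward operator
m_true = np.ones(n_cells); m_true[10:20] = 3.0       # log-resistivity-like model
sigma_d = 0.01
d_obs = G @ m_true + sigma_d * rng.standard_normal(n_data)

def log_posterior(m, reg_weight=50.0):
    # Gaussian likelihood on the data misfit plus a smoothness prior on roughness.
    misfit = d_obs - G @ m
    roughness = np.diff(m)
    return -0.5 * np.sum((misfit / sigma_d) ** 2) - 0.5 * reg_weight * np.sum(roughness ** 2)

def metropolis(n_iter=20000, step=0.05):
    m = np.ones(n_cells)
    logp = log_posterior(m)
    samples = []
    for it in range(n_iter):
        proposal = m.copy()
        j = rng.integers(n_cells)                    # perturb one pixel at a time
        proposal[j] += step * rng.standard_normal()
        logp_new = log_posterior(proposal)
        if np.log(rng.random()) < logp_new - logp:   # Metropolis acceptance rule
            m, logp = proposal, logp_new
        if it > n_iter // 2:                         # keep post-burn-in samples
            samples.append(m.copy())
    return np.array(samples)

samples = metropolis()
print("Posterior mean of first 5 cells:", samples[:, :5].mean(axis=0).round(2))
```

The spread of the retained samples in each cell gives the kind of posterior uncertainty estimate discussed above, and it shrinks as the regularization weight grows, which is the trade-off the thesis highlights in regions of limited sensitivity.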