28 results for Epg Data Reduction
at Université de Lausanne, Switzerland
Abstract:
Imaging mass spectrometry (IMS) represents an innovative tool in the cancer research pipeline, which is increasingly being used in clinical and pharmaceutical applications. The unique properties of the technique, especially the amount of data generated, make the handling of data from multiple IMS acquisitions challenging. This work presents a histology-driven IMS approach that aims to identify discriminant lipid signatures from the simultaneous mining of IMS data sets from multiple samples. The feasibility of the developed workflow is evaluated on a set of three human colorectal cancer liver metastasis (CRCLM) tissue sections. Lipid IMS on tissue sections was performed using MALDI-TOF/TOF MS in both negative and positive ionization modes after 1,5-diaminonaphthalene matrix deposition by sublimation. Positive- and negative-mode acquisition results were combined during data mining to simplify the process and interrogate a larger lipidome in a single analysis. To reduce the complexity of the IMS data sets, a reduced data set was generated by randomly selecting a fixed number of spectra from a histologically defined region of interest, yielding a 10-fold data reduction. Principal component analysis confirmed that the molecular selectivity of the regions of interest is maintained after data reduction. Partial least-squares and heat map analyses demonstrated a selective signature of the CRCLM, revealing lipids that are significantly up- and down-regulated in the tumor region. This comprehensive approach is thus of interest for defining disease signatures directly from IMS data sets by combinatory data mining, opening novel routes of investigation for addressing the demands of the clinical setting.
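As a sketch of the data-reduction step described above (random subsampling of region-of-interest spectra followed by PCA), the following Python snippet uses synthetic data in place of real IMS spectra; the array sizes, the 10-fold reduction factor, and the SVD-based PCA are illustrative assumptions, not the published pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for one histologically defined region of interest:
# 5000 spectra (pixels) x 200 m/z bins of synthetic intensities.
roi_spectra = rng.normal(size=(5000, 200))

def reduce_roi(spectra, n_keep, seed=0):
    """Randomly keep a fixed number of spectra from the ROI (here a 10x reduction)."""
    idx = np.random.default_rng(seed).choice(spectra.shape[0], size=n_keep, replace=False)
    return spectra[idx]

def pca_scores(X, n_components=2):
    """Project mean-centered spectra onto the top principal components via SVD."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

reduced = reduce_roi(roi_spectra, n_keep=500)   # 10-fold reduction
scores = pca_scores(reduced)
print(reduced.shape, scores.shape)
```

Checking that the PCA scores of the reduced set still separate the regions of interest would mirror the paper's validation step.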
Abstract:
Background: Microarray data are frequently used to characterize the expression profile of a whole genome and to compare the characteristics of that genome under several conditions. Geneset analysis methods have been described previously to analyze, at the same time, the expression values of several genes related by known biological criteria (metabolic pathway, pathology signature, co-regulation by a common factor, etc.); pooling several values in this way helps discover the underlying biological mechanisms. Results: As several methods assume different null hypotheses, we propose to reformulate the main question that biologists seek to answer. To determine which genesets are associated with expression values that differ between two experiments, we focused on three ad hoc criteria: expression levels, the direction of individual gene expression changes (up- or down-regulation), and correlations between genes. We introduce the FAERI methodology, tailored from a two-way ANOVA to examine these criteria. The significance of the results was evaluated according to the self-contained null hypothesis, using label sampling or by inferring the null distribution from normally distributed random data. Evaluations performed on simulated data revealed that FAERI outperforms currently available methods for each type of set tested. We then applied the FAERI method to analyze three real-world datasets on hypoxia response. FAERI was able to detect more genesets than other methodologies, and the genesets selected were coherent with current knowledge of the cellular response to hypoxia. Moreover, the genesets selected by FAERI were confirmed when the analysis was repeated on two additional related datasets. Conclusions: The expression values of genesets are associated with several biological effects. The underlying mathematical structure of the genesets allows for analysis of data from several genes at the same time.
Focusing on expression levels, the direction of the expression changes, and correlations, we showed that two-step data reduction allowed us to significantly improve the performance of geneset analysis using a modified two-way ANOVA procedure, and to detect genesets that current methods fail to detect.
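The label-sampling evaluation mentioned above can be illustrated with a minimal permutation test. The set statistic below (mean absolute group difference across the genes of a set) is a simplified stand-in, not the FAERI two-way ANOVA statistic, and the expression data are simulated:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical expression matrix: 30 genes x 10 samples (5 per condition).
expr = rng.normal(size=(30, 10))
expr[:10, 5:] += 1.5               # genes 0-9 form an up-regulated geneset
labels = np.array([0] * 5 + [1] * 5)
geneset = np.arange(10)

def set_statistic(expr, labels, geneset):
    """Mean per-gene group difference over the set (a simple stand-in score)."""
    diff = (expr[geneset][:, labels == 1].mean(axis=1)
            - expr[geneset][:, labels == 0].mean(axis=1))
    return np.abs(diff).mean()

def label_sampling_pvalue(expr, labels, geneset, n_perm=999, seed=0):
    """Self-contained null: permute sample labels and recompute the statistic."""
    rng = np.random.default_rng(seed)
    observed = set_statistic(expr, labels, geneset)
    hits = sum(
        set_statistic(expr, rng.permutation(labels), geneset) >= observed
        for _ in range(n_perm)
    )
    return (hits + 1) / (n_perm + 1)

p = label_sampling_pvalue(expr, labels, geneset)
print(round(p, 3))
```

With a genuine shift in the set, the label-sampling p-value comes out small, which is the behavior the self-contained null hypothesis is designed to detect.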
Abstract:
BACKGROUND: Physical activity and sedentary behaviour in youth have been reported to vary by sex, age, weight status and country. However, supporting data are often self-reported and/or do not encompass a wide range of ages or geographical locations. This study aimed to describe objectively-measured physical activity and sedentary time patterns in youth. METHODS: The International Children's Accelerometry Database (ICAD) consists of ActiGraph accelerometer data from 20 studies in ten countries, processed using common data reduction procedures. Analyses were conducted on 27,637 participants (2.8-18.4 years) who provided at least three days of valid accelerometer data. Linear regression was used to examine associations between age, sex, weight status, country and physical activity outcomes. RESULTS: Boys were less sedentary and more active than girls at all ages. After 5 years of age there was an average cross-sectional decrease of 4.2 % in total physical activity with each additional year of age, due mainly to lower levels of light-intensity physical activity and greater time spent sedentary. Physical activity did not differ by weight status in the youngest children, but from age seven onwards, overweight/obese participants were less active than their normal weight counterparts. Physical activity varied between samples from different countries, with a 15-20 % difference between the highest and lowest countries at age 9-10 and a 26-28 % difference at age 12-13. CONCLUSIONS: Physical activity differed between samples from different countries, but the associations between demographic characteristics and physical activity were consistently observed. Further research is needed to explore environmental and sociocultural explanations for these differences.
Abstract:
BACKGROUND: Most available pharmacotherapies for alcohol-dependent patients target abstinence; however, reduced alcohol consumption may be a more realistic goal. Using randomized clinical trial (RCT) data, a previous microsimulation model evaluated the clinical relevance of reduced consumption in terms of avoided alcohol-attributable events. Using real-life observational data, the current analysis aimed to adapt the model and confirm previous findings about the clinical relevance of reduced alcohol consumption. METHODS: Based on the prospective observational CONTROL study, evaluating daily alcohol consumption among alcohol-dependent patients, the model predicted the probability of drinking any alcohol during a given day. Predicted daily alcohol consumption was simulated in a hypothetical sample of 200,000 patients observed over a year. Individual total alcohol consumption (TAC) and number of heavy drinking days (HDD) were derived. Using published risk equations, probabilities of alcohol-attributable adverse health events (e.g., hospitalizations or death) corresponding to simulated consumptions were computed, and aggregated for categories of patients defined by HDDs and TAC (expressed per 100,000 patient-years). Sensitivity analyses tested model robustness. RESULTS: Shifting from >220 HDDs per year to 120-140 HDDs and shifting from 36,000-39,000 g TAC per year (120-130 g/day) to 15,000-18,000 g TAC per year (50-60 g/day) impacted substantially on the incidence of events (14,588 and 6148 events avoided per 100,000 patient-years, respectively). Results were robust to sensitivity analyses. CONCLUSIONS: This study corroborates the previous microsimulation modeling approach and, using real-life data, confirms RCT-based findings that reduced alcohol consumption is a relevant objective for consideration in alcohol dependence management to improve public health.
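A toy version of the microsimulation idea described above (daily drinking probabilities aggregated into yearly total alcohol consumption and heavy-drinking-day counts) might look as follows; the drinking probability, intake distribution, thresholds, and patient count are hypothetical stand-ins, not the published model's parameters:

```python
import random

random.seed(42)

# Scaled down from the paper's 200,000 simulated patients for a quick sketch.
N_PATIENTS = 2000
DAYS = 365

def simulate_patient(p_drink, mean_grams):
    """Return (total alcohol consumption in g, number of heavy drinking days)."""
    tac, hdd = 0.0, 0
    for _ in range(DAYS):
        if random.random() < p_drink:          # does the patient drink today?
            grams = max(random.gauss(mean_grams, 20.0), 0.0)
            tac += grams
            if grams >= 60.0:                  # illustrative HDD threshold
                hdd += 1
    return tac, hdd

results = [simulate_patient(p_drink=0.6, mean_grams=80.0) for _ in range(N_PATIENTS)]
mean_tac = sum(t for t, _ in results) / N_PATIENTS
mean_hdd = sum(h for _, h in results) / N_PATIENTS
print(round(mean_tac), round(mean_hdd))
```

In the published model, the simulated TAC/HDD categories are then mapped to event probabilities via risk equations; that mapping is omitted here.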
Abstract:
BACKGROUND: Pathogen reduction of platelets (PRT-PLTs) using riboflavin and ultraviolet light treatment has undergone Phase 1 and 2 studies examining efficacy and safety. This randomized controlled clinical trial (RCT) assessed the efficacy and safety of PRT-PLTs using the 1-hour corrected count increment (CCI(1hour)) as the primary outcome. STUDY DESIGN AND METHODS: A noninferiority RCT was performed in which patients with chemotherapy-induced thrombocytopenia (six centers) were randomly allocated to receive PRT-PLTs (Mirasol PRT, CaridianBCT Biotechnologies) or reference platelet (PLT) products. The treatment period was 28 days, followed by a 28-day follow-up (safety) period. The primary outcome was the CCI(1hour) determined using up to the first eight on-protocol PLT transfusions given during the treatment period. RESULTS: A total of 118 patients were randomly assigned (60 to PRT-PLTs; 58 to reference). Four patients per group did not require PLT transfusions, leaving 110 patients in the analysis (56 PRT-PLTs; 54 reference). A total of 541 on-protocol PLT transfusions were given (303 PRT-PLTs; 238 reference). The least-squares mean CCI was 11,725 (standard error [SE], 1.140) for PRT-PLTs and 16,939 (SE, 1.149) for the reference group (difference, -5214; 95% confidence interval, -7542 to -2887; p<0.0001 for a test of the null hypothesis of no difference between the two groups). CONCLUSION: The study failed to show noninferiority of PRT-PLTs based on predefined CCI criteria. PLT and red blood cell utilization in the two groups was not significantly different, suggesting that the slightly lower CCIs (PRT-PLTs) did not increase blood product utilization. Safety data showed similar findings in the two groups. Further studies are required to determine whether the lower CCI observed with PRT-PLTs translates into an increased risk of bleeding.
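The corrected count increment used as the primary outcome is conventionally computed as the platelet increment scaled by body surface area and platelet dose; a minimal sketch with illustrative numbers (not trial data):

```python
def corrected_count_increment(pre_count, post_count, bsa_m2, platelets_e11):
    """CCI = platelet increment (per uL) x body surface area (m^2)
    / number of platelets transfused (in units of 10^11)."""
    return (post_count - pre_count) * bsa_m2 / platelets_e11

# A hypothetical 1-hour CCI: counts in platelets/uL, dose of 3 x 10^11 platelets.
cci = corrected_count_increment(pre_count=10_000, post_count=35_000,
                                bsa_m2=1.8, platelets_e11=3.0)
print(round(cci))  # 15000
```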
Abstract:
BACKGROUND: We analysed 5-year treatment with agalsidase alfa enzyme replacement therapy in patients with Fabry's disease who were enrolled in the Fabry Outcome Survey observational database (FOS). METHODS: Baseline and 5-year data were available for up to 181 adults (126 men) in FOS. Serial data for cardiac mass and function, renal function, pain, and quality of life were assessed. Safety and sensitivity analyses were done in patients with baseline and at least one relevant follow-up measurement during the 5 years (n=555 and n=475, respectively). FINDINGS: In patients with baseline cardiac hypertrophy, treatment resulted in a sustained reduction in left ventricular mass (LVM) index after 5 years (from 71.4 [SD 22.5] g/m(2.7) to 64.1 [18.7] g/m(2.7), p=0.0111) and a significant increase in midwall fractional shortening (MFS) from 14.3% (2.3) to 16.0% (3.8) after 3 years (p=0.02). In patients without baseline hypertrophy, LVM index and MFS remained stable. Mean yearly fall in estimated glomerular filtration rate versus baseline after 5 years of enzyme replacement therapy was -3.17 mL/min per 1.73 m(2) for men and -0.89 mL/min per 1.73 m(2) for women. Average pain, measured by Brief Pain Inventory score, improved significantly, from 3.7 (2.3) at baseline to 2.5 (2.4) after 5 years (p=0.0023). Quality of life, measured by deviation scores from normal EuroQol values, improved significantly, from -0.24 (0.3) at baseline to -0.17 (0.3) after 5 years (p=0.0483). Findings were confirmed by sensitivity analysis. No unexpected safety concerns were identified. INTERPRETATION: By comparison with historical natural history data for patients with Fabry's disease who were not treated with enzyme replacement therapy, long-term treatment with agalsidase alfa leads to substantial and sustained clinical benefits. FUNDING: Shire Human Genetic Therapies AB.
Abstract:
BACKGROUND: Malaria is almost invariably ranked as the leading cause of morbidity and mortality in Africa. There is growing evidence of a decline in malaria transmission, morbidity and mortality over the last decades, especially in East Africa. However, there is still doubt whether this decline is reflected in a reduction of the proportion of malaria among fevers. The objective of this systematic review was to estimate the change in the Proportion of Fevers associated with Plasmodium falciparum parasitaemia (PFPf) over the past 20 years in sub-Saharan Africa. METHODS: Search strategy: In December 2009, publications from the National Library of Medicine database were searched using a combination of 16 MeSH terms. Selection criteria: studies 1) conducted in sub-Saharan Africa, 2) patients presenting with a syndrome of 'presumptive malaria', 3) numerators (number of parasitologically confirmed cases) and denominators (total number of presumptive malaria cases) available, 4) good-quality microscopy. Data collection and analysis: the following variables were extracted: parasite presence/absence, total number of patients, age group, year, season, country and setting, and clinical inclusion criteria. To assess the dynamics of PFPf over time, the median PFPf was compared between studies published in the years ≤2000 and >2000. RESULTS: 39 studies conducted between 1986 and 2007 in 16 different African countries were included in the final analysis. When comparing data up to the year 2000 (24 studies) with those afterwards (15 studies), there was a clear reduction in the median PFPf from 44% (IQR 31-58%; range 7-81%) to 22% (IQR 13-33%; range 2-77%). This dramatic decline is likely to reflect a true change, since stratified analyses including explanatory variables were performed and median PFPfs were always lower after 2000 than before. CONCLUSIONS: There was a considerable reduction in the proportion of malaria among fevers over time in Africa.
This decline provides evidence supporting the policy change from presumptive anti-malarial treatment of all children with fever to laboratory diagnosis and treatment upon result. This should ensure appropriate care of non-malaria fevers and rational use of anti-malarials.
Abstract:
OBJECTIVES: To determine whether nalmefene combined with psychosocial support is cost-effective compared with psychosocial support alone for reducing alcohol consumption in alcohol-dependent patients with high/very high drinking risk levels (DRLs) as defined by the WHO, and to evaluate the public health benefit of reducing harmful alcohol-attributable diseases, injuries and deaths. DESIGN: Decision modelling using Markov chains compared costs and effects over 5 years. SETTING: The analysis was from the perspective of the National Health Service (NHS) in England and Wales. PARTICIPANTS: The model considered the licensed population for nalmefene, specifically adults with both alcohol dependence and high/very high DRLs, who do not require immediate detoxification and who continue to have high/very high DRLs after initial assessment. DATA SOURCES: We modelled treatment effect using data from three clinical trials for nalmefene (ESENSE 1 (NCT00811720), ESENSE 2 (NCT00812461) and SENSE (NCT00811941)). Baseline characteristics of the model population, treatment resource utilisation and utilities were from these trials. We estimated the number of alcohol-attributable events occurring at different levels of alcohol consumption based on published epidemiological risk-relation studies. Health-related costs were from UK sources. MAIN OUTCOME MEASURES: We measured incremental cost per quality-adjusted life year (QALY) gained and number of alcohol-attributable harmful events avoided. RESULTS: Nalmefene in combination with psychosocial support had an incremental cost-effectiveness ratio (ICER) of £5204 per QALY gained, and was therefore cost-effective at the £20,000 per QALY gained decision threshold. Sensitivity analyses showed that the conclusion was robust. Nalmefene plus psychosocial support led to the avoidance of 7179 alcohol-attributable diseases/injuries and 309 deaths per 100,000 patients compared to psychosocial support alone over the course of 5 years. 
CONCLUSIONS: Nalmefene can be seen as a cost-effective treatment for alcohol dependence, with substantial public health benefits. TRIAL REGISTRATION NUMBERS: This cost-effectiveness analysis was developed based on data from three randomised clinical trials: ESENSE 1 (NCT00811720), ESENSE 2 (NCT00812461) and SENSE (NCT00811941).
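The incremental cost-effectiveness ratio reported above divides the incremental cost by the incremental QALYs gained; the numbers below are illustrative stand-ins chosen to land near the reported £5204 per QALY, not the actual model outputs:

```python
def icer(cost_new, cost_ref, qaly_new, qaly_ref):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)

# Hypothetical 5-year per-patient costs (GBP) and QALYs for the two strategies.
ratio = icer(cost_new=1500.0, cost_ref=720.0, qaly_new=3.95, qaly_ref=3.80)
print(round(ratio))  # 5200

# Cost-effective at the usual NICE-style threshold of GBP 20,000 per QALY.
print(ratio < 20_000)  # True
```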
Abstract:
BACKGROUND: Numerous trials of the efficacy of brief alcohol intervention have been conducted in various settings among individuals with a wide range of alcohol disorders. Nevertheless, the efficacy of the intervention is likely to be influenced by the context. We evaluated the evidence of efficacy of brief alcohol interventions aimed at reducing long-term alcohol use and related harm in individuals attending primary care facilities but not seeking help for alcohol-related problems. METHODS: We selected randomized trials reporting at least 1 outcome related to alcohol consumption conducted in outpatients who were actively attending primary care centers or seeing providers. Data sources were the Cochrane Central Register of Controlled Trials, MEDLINE, PsycINFO, ISI Web of Science, ETOH database, and bibliographies of retrieved references and previous reviews. Study selection and data abstraction were performed independently and in duplicate. We assessed the validity of the studies and performed a meta-analysis of studies reporting alcohol consumption at 6 or 12 months of follow-up. RESULTS: We examined 19 trials that included 5639 individuals. Seventeen trials reported a measure of alcohol consumption, of which 8 reported a significant effect of intervention. The adjusted intention-to-treat analysis showed a mean pooled difference of -38 g of ethanol (approximately 4 drinks) per week (95% confidence interval, -51 to -24 g/wk) in favor of the brief alcohol intervention group. Evidence of other outcome measures was inconclusive. CONCLUSION: Focusing on patients in primary care, our systematic review and meta-analysis indicated that brief alcohol intervention is effective in reducing alcohol consumption at 6 and 12 months.
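Pooled mean differences like the -38 g/week above are typically obtained by inverse-variance weighting of per-study effects; a minimal fixed-effect sketch with hypothetical study values (not the review's actual data):

```python
import math

def pooled_mean_difference(diffs, ses):
    """Fixed-effect inverse-variance pooling of per-study mean differences.
    Returns the pooled estimate and an approximate 95% confidence interval."""
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * d for w, d in zip(weights, diffs)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical per-study differences in g of ethanol per week and their SEs.
diffs = [-30.0, -45.0, -38.0, -52.0]
ses = [8.0, 10.0, 6.0, 12.0]
pooled, ci = pooled_mean_difference(diffs, ses)
print(round(pooled, 1), [round(x, 1) for x in ci])
```

More precise studies (smaller standard errors) dominate the pooled estimate, which is the point of the weighting.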
Abstract:
The 2008 Data Fusion Contest organized by the IEEE Geoscience and Remote Sensing Data Fusion Technical Committee deals with the classification of high-resolution hyperspectral data from an urban area. Unlike in the previous issues of the contest, the goal was not only to identify the best algorithm but also to provide a collaborative effort: The decision fusion of the best individual algorithms was aiming at further improving the classification performances, and the best algorithms were ranked according to their relative contribution to the decision fusion. This paper presents the five awarded algorithms and the conclusions of the contest, stressing the importance of decision fusion, dimension reduction, and supervised classification methods, such as neural networks and support vector machines.
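Decision fusion of individual classifiers can be as simple as a per-pixel majority vote over their predicted labels; a small sketch with three hypothetical classifiers (the contest's actual fusion scheme may have been more elaborate, e.g. weighted by classifier accuracy):

```python
import numpy as np

def majority_vote(predictions):
    """Fuse per-pixel class labels from several classifiers by majority vote."""
    predictions = np.asarray(predictions)            # (n_classifiers, n_pixels)
    n_classes = predictions.max() + 1
    # For each class, count how many classifiers chose it at each pixel.
    counts = np.stack([(predictions == c).sum(axis=0) for c in range(n_classes)])
    return counts.argmax(axis=0)                     # winning class per pixel

# Three hypothetical classifiers labeling 6 pixels with classes {0, 1, 2}.
preds = [
    [0, 1, 2, 1, 0, 2],
    [0, 1, 1, 1, 0, 2],
    [1, 1, 2, 0, 0, 2],
]
fused = majority_vote(preds)
print(fused.tolist())  # [0, 1, 2, 1, 0, 2]
```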
Abstract:
Ultrasound scans in the mid-trimester of pregnancy are now a routine part of antenatal care in most European countries. Using data from registries of congenital anomalies, a study was undertaken in Europe to evaluate the prenatal detection of limb reduction deficiencies (LRD) by routine ultrasonographic examination of the fetus. All LRDs suspected prenatally and all LRDs (including chromosome anomalies) confirmed at birth were identified from 20 Congenital Malformation Registers in the following 12 European countries: Austria, Croatia, Denmark, France, Germany, Italy, Lithuania, Spain, Switzerland, The Netherlands, the UK and Ukraine. These registries follow the same methodology. During the study period (1996-98) there were 709,030 births and 7,758 cases with congenital malformations including LRDs. If more than one LRD was present, the case was coded as complex LRD. A total of 250 cases of LRDs, with 63 (25.2%) terminations of pregnancy, were identified, including 138 cases with isolated LRD, 112 with associated malformations, 16 with chromosomal anomalies and 38 with non-chromosomal recognized syndromes. The prenatal detection rate of isolated LRD was 24.6% (34 out of 138 cases) compared with 49.1% for associated malformations (55 out of 112; p<0.01). The prenatal detection rate of isolated terminal transverse LRD was 22.7% (22 out of 97), 50% (3 out of 6) for proximal intercalary LRD, 8.3% (1 out of 12) for longitudinal LRD and 0 for split hand/foot; for multiply malformed children with LRD, those percentages were 46.1% (30 out of 65), 66.6% (6 out of 9), 57.1% (8 out of 14) and 0 (0 out of 2), respectively. The prenatal detection rate of LRDs varied in relation to ultrasound screening policies, from 20.0% to 64.0% in countries with at least one routine fetal scan.
Abstract:
OBJECTIVES: To prospectively evaluate, in a preliminary fashion, the accuracy and reliability of a specific ad hoc reduction-compression forceps in the intraoral open reduction of transverse and displaced mandibular angle fractures. STUDY DESIGN: We analyzed the clinical and radiologic data of 7 patients with 7 single transverse and displaced angle fractures. An intraoral approach was used in all of the patients, without perioperative intermaxillary fixation. A single Arbeitsgemeinschaft Osteosynthese (AO) unilock reconstruction plate was fixed to each stable fragment with 3 locking screws (2.0 mm in 5 patients and 2.4 mm in 2 patients) at the basilar border of the mandible, according to AO/Association for the Study of Internal Fixation (ASIF) principles. Follow-up was at 1, 3, 6, and 12 months, and we noted the status of healing and any complications. RESULTS: All of the patients had satisfactory fracture reduction as well as a successful treatment outcome without complications. CONCLUSION: This preliminary study demonstrated that intraoral reduction of transverse and displaced angle fractures using a specific ad hoc reduction-compression forceps results in a high rate of success.
Abstract:
Connexin36 (Cx36), a transmembrane protein that forms gap junctions between insulin-secreting beta-cells in the islets of Langerhans, contributes to the proper control of insulin secretion and beta-cell survival. Hypercholesterolemia and pro-atherogenic low-density lipoproteins (LDL) contribute to beta-cell dysfunction and apoptosis in the context of Type 2 diabetes. We investigated the impact of LDL-cholesterol on Cx36 levels in beta-cells. As compared to WT mice, the Cx36 content was reduced in islets from hypercholesterolemic ApoE-/- mice. Prolonged exposure to human native (nLDL) or oxidized LDL (oxLDL) particles decreased the expression of Cx36 in insulin-secreting cell lines and isolated rodent islets. Cx36 down-regulation was associated with overexpression of the inducible cAMP early repressor (ICER-1), and selective disruption of ICER-1 prevented the effects of oxLDL on Cx36 expression. Oil red O staining and Plin1 expression levels suggested that oxLDL was stored as neutral lipid droplets to a lesser extent than nLDL in INS-1E cells. The lipid beta-oxidation inhibitor etomoxir enhanced oxLDL-induced apoptosis, whereas the ceramide synthesis inhibitor myriocin partially protected INS-1E cells, suggesting that oxLDL toxicity was due to impaired metabolism of the lipids. ICER-1 and Cx36 expression were closely correlated with oxLDL toxicity. Cx36 knock-down in INS-1E cells or knock-out in primary islets sensitized beta-cells to oxLDL-induced apoptosis. In contrast, overexpression of Cx36 partially protected INS-1E cells against apoptosis. These data demonstrate that the reduction of Cx36 content in beta-cells by oxLDL particles is mediated by ICER-1 and contributes to oxLDL-induced beta-cell apoptosis.
Abstract:
L'utilisation efficace des systèmes géothermaux, la séquestration du CO2 pour limiter le changement climatique et la prévention de l'intrusion d'eau salée dans les aquifères côtiers ne sont que quelques exemples qui démontrent notre besoin en technologies nouvelles pour suivre l'évolution des processus souterrains à partir de la surface. Un défi majeur est d'assurer la caractérisation et l'optimisation des performances de ces technologies à différentes échelles spatiales et temporelles. Les méthodes électromagnétiques (EM) d'ondes planes sont sensibles à la conductivité électrique du sous-sol et, par conséquent, à la conductivité électrique des fluides saturant la roche, à la présence de fractures connectées, à la température et aux matériaux géologiques. Ces méthodes sont régies par des équations valides sur de larges gammes de fréquences, permettant d'étudier de manière analogue des processus allant de quelques mètres sous la surface jusqu'à plusieurs kilomètres de profondeur. Néanmoins, ces méthodes sont soumises à une perte de résolution avec la profondeur à cause des propriétés diffusives du champ électromagnétique. Pour cette raison, l'estimation des modèles du sous-sol par ces méthodes doit prendre en compte des informations a priori afin de contraindre les modèles autant que possible et de permettre la quantification des incertitudes de ces modèles de façon appropriée. Dans la présente thèse, je développe des approches permettant la caractérisation statique et dynamique du sous-sol à l'aide d'ondes EM planes. Dans une première partie, je présente une approche déterministe permettant de réaliser des inversions répétées dans le temps (time-lapse) de données d'ondes EM planes en deux dimensions. Cette stratégie est basée sur l'incorporation dans l'algorithme d'informations a priori en fonction des changements du modèle de conductivité électrique attendus.
Ceci est réalisé en intégrant une régularisation stochastique et des contraintes flexibles par rapport à la gamme des changements attendus en utilisant les multiplicateurs de Lagrange. J'utilise des normes différentes de la norme l2 pour contraindre la structure du modèle et obtenir des transitions abruptes entre les régions du modèle qui subissent des changements dans le temps et celles qui n'en subissent pas. Aussi, j'incorpore une stratégie afin d'éliminer les erreurs systématiques des données time-lapse. Ce travail a mis en évidence l'amélioration de la caractérisation des changements temporels par rapport aux approches classiques qui réalisent des inversions indépendantes à chaque pas de temps et comparent les modèles. Dans la seconde partie de cette thèse, j'adopte un formalisme bayésien et je teste la possibilité de quantifier les incertitudes sur les paramètres du modèle dans l'inversion d'ondes EM planes. Pour ce faire, je présente une stratégie d'inversion probabiliste basée sur des pixels à deux dimensions pour des inversions de données d'ondes EM planes et de tomographies de résistivité électrique (ERT) séparées et jointes. Je compare les incertitudes des paramètres du modèle en considérant différents types d'information a priori sur la structure du modèle et différentes fonctions de vraisemblance pour décrire les erreurs sur les données. Les résultats indiquent que la régularisation du modèle est nécessaire lorsqu'on a affaire à un large nombre de paramètres car cela permet d'accélérer la convergence des chaînes et d'obtenir des modèles plus réalistes. Cependant, ces contraintes mènent à des incertitudes d'estimation plus faibles, ce qui implique des distributions a posteriori qui ne contiennent pas le vrai modèle dans les régions où la méthode présente une sensibilité limitée. Cette situation peut être améliorée en combinant des méthodes d'ondes EM planes avec d'autres méthodes complémentaires telles que l'ERT.
De plus, je montre que le poids de régularisation des paramètres et l'écart-type des erreurs sur les données peuvent être retrouvés par une inversion probabiliste. Finalement, j'évalue la possibilité de caractériser une distribution tridimensionnelle d'un panache de traceur salin injecté dans le sous-sol en réalisant une inversion probabiliste time-lapse tridimensionnelle d'ondes EM planes. Étant donné que les inversions probabilistes sont très coûteuses en temps de calcul lorsque l'espace des paramètres présente une grande dimension, je propose une stratégie de réduction du modèle où les coefficients de décomposition des moments de Legendre du panache de traceur injecté ainsi que sa position sont estimés. Pour ce faire, un modèle de résistivité de base est nécessaire. Il peut être obtenu avant l'expérience time-lapse. Un test synthétique montre que la méthodologie marche bien quand le modèle de résistivité de base est caractérisé correctement. Cette méthodologie est aussi appliquée à un test de traçage par injection d'une solution saline et d'acides réalisé dans un système géothermal en Australie, puis comparée à une inversion time-lapse tridimensionnelle réalisée selon une approche déterministe. L'inversion probabiliste permet de mieux contraindre le panache du traceur salin grâce à la grande quantité d'informations a priori incluse dans l'algorithme. Néanmoins, les changements de conductivité nécessaires pour expliquer les changements observés dans les données sont plus grands que ce que notre connaissance actuelle des phénomènes physiques peut expliquer. Ce problème peut être lié à la qualité limitée du modèle de résistivité de base utilisé, indiquant ainsi que des efforts plus grands devront être fournis dans le futur pour obtenir des modèles de base de bonne qualité avant de réaliser des expériences dynamiques.
Les études décrites dans cette thèse montrent que les méthodes d'ondes EM planes sont très utiles pour caractériser et suivre les variations temporelles du sous-sol sur de larges échelles. Les présentes approches améliorent l'évaluation des modèles obtenus, autant en termes d'incorporation d'informations a priori qu'en termes de quantification d'incertitudes a posteriori. De plus, les stratégies développées peuvent être appliquées à d'autres méthodes géophysiques, et offrent une grande flexibilité pour l'incorporation d'informations additionnelles lorsqu'elles sont disponibles. -- The efficient use of geothermal systems, the sequestration of CO2 to mitigate climate change, and the prevention of seawater intrusion in coastal aquifers are only some examples that demonstrate the need for novel technologies to monitor subsurface processes from the surface. A main challenge is to ensure optimal performance of such technologies at different temporal and spatial scales. Plane-wave electromagnetic (EM) methods are sensitive to subsurface electrical conductivity and consequently to fluid conductivity, fracture connectivity, temperature, and rock mineralogy. These methods are governed by equations that remain the same over a large range of frequencies, thus allowing processes to be studied in an analogous manner on scales ranging from a few meters below the surface down to depths of several hundred kilometers. Unfortunately, they suffer from a significant loss of resolution with depth due to the diffusive nature of electromagnetic fields. Therefore, estimations of subsurface models that use these methods should incorporate a priori information to better constrain the models, and provide appropriate measures of model uncertainty. During my thesis, I have developed approaches to improve the static and dynamic characterization of the subsurface with plane-wave EM methods.
In the first part of this thesis, I present a two-dimensional deterministic approach to perform time-lapse inversion of plane-wave EM data. The strategy is based on the incorporation of prior information into the inversion algorithm regarding the expected temporal changes in electrical conductivity. This is done by incorporating a flexible stochastic regularization and constraints regarding the expected ranges of the changes by using Lagrange multipliers. I use non-l2 norms to penalize the model update in order to obtain sharp transitions between regions that experience temporal changes and regions that do not. I also incorporate a time-lapse differencing strategy to remove systematic errors in the time-lapse inversion. This work presents improvements in the characterization of temporal changes with respect to the classical approach of performing separate inversions and computing differences between the models. In the second part of this thesis, I adopt a Bayesian framework and use Markov chain Monte Carlo (MCMC) simulations to quantify model parameter uncertainty in plane-wave EM inversion. For this purpose, I present a two-dimensional pixel-based probabilistic inversion strategy for separate and joint inversions of plane-wave EM and electrical resistivity tomography (ERT) data. I compare the uncertainties of the model parameters when considering different types of prior information on the model structure and different likelihood functions to describe the data errors. The results indicate that model regularization is necessary when dealing with a large number of model parameters because it helps to accelerate the convergence of the chains and leads to more realistic models. These constraints also lead to smaller uncertainty estimates, which imply posterior distributions that do not include the true underlying model in regions where the method has limited sensitivity. 
This situation can be improved by combining plane-wave EM methods with complementary geophysical methods such as ERT. In addition, I show that an appropriate regularization weight and the standard deviation of the data errors can be retrieved by the MCMC inversion. Finally, I evaluate the possibility of characterizing the three-dimensional distribution of an injected water plume by performing three-dimensional time-lapse MCMC inversion of plane-wave EM data. Since MCMC inversion involves a significant computational burden in high parameter dimensions, I propose a model reduction strategy in which the coefficients of a Legendre moment decomposition of the injected water plume and its location are estimated. For this purpose, a base resistivity model is needed, which is obtained prior to the time-lapse experiment. A synthetic test shows that the methodology works well when the base resistivity model is correctly characterized. The methodology is also applied to an injection experiment performed in a geothermal system in Australia, and compared to a three-dimensional time-lapse inversion performed within a deterministic framework. The MCMC inversion better constrains the water plume thanks to the larger amount of prior information included in the algorithm. However, the conductivity changes needed to explain the time-lapse data are much larger than what is physically plausible based on present-day understanding. This issue may be related to the limited quality of the base resistivity model used, indicating that more effort should be devoted to obtaining high-quality base models prior to dynamic experiments. The studies described herein give clear evidence that plane-wave EM methods are useful for characterizing and monitoring the subsurface at a wide range of scales. The presented approaches contribute to an improved appraisal of the obtained models, both in terms of the incorporation of prior information in the algorithms and the posterior uncertainty quantification.
In addition, the developed strategies can be applied to other geophysical methods, and offer great flexibility to incorporate additional information when available.
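The Legendre moment decomposition used for model reduction in the thesis can be illustrated in one dimension: a plume-like profile on [-1, 1] is summarized by a few Legendre coefficients and then reconstructed from them. The profile shape, truncation order, and simple quadrature below are illustrative assumptions, not the thesis implementation:

```python
import numpy as np

# Plume-like profile on [-1, 1] (a stand-in anomaly, not real resistivity data).
x = np.linspace(-1.0, 1.0, 401)
dx = x[1] - x[0]
plume = np.exp(-((x - 0.2) / 0.5) ** 2)

def legendre_coefficients(profile, x, dx, max_order):
    """Project the profile onto Legendre polynomials P_0..P_max_order.
    Orthogonality on [-1, 1] gives c_n = (2n + 1)/2 * integral(profile * P_n)."""
    coeffs = []
    for n in range(max_order + 1):
        Pn = np.polynomial.legendre.Legendre.basis(n)(x)
        coeffs.append((2 * n + 1) / 2.0 * np.sum(profile * Pn) * dx)
    return np.array(coeffs)

coeffs = legendre_coefficients(plume, x, dx, max_order=8)
recon = np.polynomial.legendre.Legendre(coeffs)(x)   # rebuild from 9 numbers
err = float(np.max(np.abs(recon - plume)))
print(len(coeffs), round(err, 3))
```

Estimating a handful of such coefficients instead of a full pixel grid is what makes the probabilistic time-lapse inversion computationally tractable.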