27 results for Sampling Time Deviation
at Université de Lausanne, Switzerland
Abstract:
Most of the novel targeted anticancer agents share classical characteristics that define drugs as candidates for blood concentration monitoring: long-term therapy; high interindividual but restricted intraindividual variability; significant drug-drug and drug-food interactions; correlations between concentration and efficacy/toxicity with a rather narrow therapeutic index; reversibility of effects; and absence of early markers of response. Surprisingly though, therapeutic concentration monitoring has received little attention for these drugs despite reiterated suggestions from clinical pharmacologists. Several issues explain the lack of clinical research and development in this field: a global tradition of empiricism regarding treatment monitoring, lack of a formal conceptual framework, ethical difficulties in the elaboration of controlled clinical trials, disregard from both drug manufacturers and public funders, limited encouragement from regulatory authorities, and practical hurdles making dosage adjustment based on concentration monitoring a difficult task for prescribers. However, new technologies are soon to help us overcome these obstacles, with the advent of miniaturized measurement devices able to quantify circulating drug concentrations at the point-of-care, to evaluate their plausibility given actual dosage and sampling time, to determine their appropriateness with reference to therapeutic targets, and to advise on suitable dosage adjustment. Such evolutions could bring conceptual changes into the clinical development of drugs such as anticancer agents, while increasing the therapeutic impact of population PK-PD studies and systematic reviews. Research efforts in that direction from the clinical pharmacology community will be essential for patients to receive the greatest benefits and the least harm from new anticancer treatments. The example of imatinib, the first commercialized tyrosine kinase inhibitor, will be outlined to illustrate a potential research agenda for the rational development of therapeutic concentration monitoring.
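As an illustration of the plausibility check described above, the following sketch compares a measured concentration with the value predicted by a one-compartment oral absorption model for the reported dose and sampling time. The parameter values, the measured value, and the acceptance window are hypothetical and are not taken from the abstract.

```python
import numpy as np

def predicted_conc(dose_mg, t_after_dose_h, ka=0.6, ke=0.04, vd_l=250.0, f=0.98):
    """One-compartment oral model: C(t) = F*D*ka / (V*(ka-ke)) * (exp(-ke*t) - exp(-ka*t))."""
    return f * dose_mg * ka / (vd_l * (ka - ke)) * (np.exp(-ke * t_after_dose_h) - np.exp(-ka * t_after_dose_h))

# Plausibility check of a measured concentration given the actual dose and sampling time
measured_mg_per_l = 1.1                                   # hypothetical measured value
expected = predicted_conc(dose_mg=400, t_after_dose_h=24)  # single-dose prediction, no accumulation
ratio = measured_mg_per_l / expected
print(f"expected {expected:.2f} mg/L, measured/expected ratio {ratio:.2f}")
if not 0.5 <= ratio <= 2.0:                               # arbitrary plausibility window
    print("measurement implausible for the reported dose and sampling time")
```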
Abstract:
In Switzerland, a two-tier system based on impairment by any psychoactive substance that affects the capacity to drive safely, combined with zero tolerance for certain illicit drugs, came into force on 1 January 2005. According to the new legislation, the offender is sanctioned if Δ9-tetrahydrocannabinol (THC) is ≥1.5 ng/ml, or if amphetamine, methamphetamine, 3,4-methylenedioxymethamphetamine (MDMA), 3,4-methylenedioxyethylamphetamine (MDEA), cocaine, or free morphine is ≥15 ng/ml in whole blood (confidence interval ±30%). For all other psychoactive substances, impairment must be proven by applying the so-called "three pillars" expertise. At the same time, the legal blood alcohol concentration (BAC) limit for driving was lowered from 0.80 to 0.50 g/kg. The purpose of this study was to analyze the prevalence of drugs, in the first year after the introduction of the revised Swiss Traffic Law, in the population of drivers suspected of driving under the influence of drugs (DUID). A database was developed to collect the data from all DUID cases submitted by the police or the judicial authorities to the eight Swiss authorized laboratories between January and December 2005. Data collected were anonymous and included the age, gender, date and time of the event, the type of vehicle, the circumstances, the sampling time, and the results of all performed toxicological analyses. The focus was explicitly on DUID; cases of drivers suspected of being under the influence of ethanol only were not considered. The final study population included 4794 DUID offenders (4243 males, 543 females). The mean age of all drivers was 31 ± 12 years (range 14-92 years). One or more psychoactive drugs were detected in 89% of all analyzed blood samples. In 11% (N = 530) of the samples, neither alcohol nor drugs were present. The most frequently encountered drugs in whole blood were cannabinoids (48% of the total number of cases), ethanol (35%), cocaine (25%), opiates (10%), amphetamines (7%), benzodiazepines (6%), and methadone (5%). Other medicinal drugs such as antidepressants and benzodiazepine-like drugs were detected less frequently. Poly-drug use was prevalent, but it may be underestimated because the laboratories do not always analyze all drugs in a blood sample. This first Swiss study points out that DUID is a serious problem on the roads in Switzerland. Further investigations will show whether this situation has changed in the following years.
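A minimal sketch of the zero-tolerance decision rule summarized above, using the statutory whole-blood cutoffs quoted in the abstract; how the ±30% confidence interval is applied to a measured value here is an assumption made only for illustration.

```python
# Statutory per-se cutoffs in whole blood (ng/ml), as listed in the abstract;
# applying the +/-30% interval by checking the lower bound is an assumption.
CUTOFFS_NG_ML = {"THC": 1.5, "amphetamine": 15, "methamphetamine": 15,
                 "MDMA": 15, "MDEA": 15, "cocaine": 15, "free morphine": 15}

def exceeds_cutoff(substance: str, conc_ng_ml: float, rel_uncertainty: float = 0.30) -> bool:
    """Flag a result only if the lower bound of the confidence interval reaches the cutoff."""
    lower_bound = conc_ng_ml * (1.0 - rel_uncertainty)
    return lower_bound >= CUTOFFS_NG_ML[substance]

print(exceeds_cutoff("THC", 2.5))        # True: 1.75 >= 1.5
print(exceeds_cutoff("cocaine", 18.0))   # False: 12.6 < 15
```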
Abstract:
BACKGROUND: Empirical antibacterial therapy in hospitals is usually guided by local epidemiologic features reflected by institutional cumulative antibiograms. We investigated the additional information inferred by aggregating cumulative antibiograms by type of unit or according to the place of acquisition (i.e. community vs. hospital) of the bacteria. MATERIALS AND METHODS: Antimicrobial susceptibility rates of selected pathogens were collected over a 4-year period in a university-affiliated hospital. Hospital-wide antibiograms were compared with those stratified by type of unit and sampling time (<48 or >48 h after hospital admission). RESULTS: Strains isolated >48 h after admission were less susceptible than those presumably arising from the community (<48 h). The comparison of units revealed significant differences among strains isolated >48 h after admission. When compared to hospital-wide antibiograms, susceptibility rates were lower in the ICU and surgical units for Escherichia coli to amoxicillin-clavulanate, enterococci to penicillin, and Pseudomonas aeruginosa to anti-pseudomonal beta-lactams, and in medical units for Staphylococcus aureus to oxacillin. In contrast, few differences were observed among strains isolated within 48 h of admission. CONCLUSIONS: Hospital-wide antibiograms reflect the susceptibility pattern for a specific unit with respect to community-acquired, but not hospital-acquired, strains. Antibiograms adjusted for these parameters may be useful in guiding the choice of empirical antibacterial therapy.
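One possible way to aggregate unit- and acquisition-specific antibiograms from isolate-level data is sketched below with pandas; the column names and the example isolates are hypothetical, not the study data.

```python
import pandas as pd

# Hypothetical isolate-level data: one row per non-duplicate isolate
isolates = pd.DataFrame({
    "organism":   ["E. coli", "E. coli", "S. aureus", "S. aureus"],
    "antibiotic": ["amoxicillin-clavulanate"] * 2 + ["oxacillin"] * 2,
    "unit":       ["ICU", "medical", "medical", "ICU"],
    "hours_after_admission": [12, 96, 60, 30],
    "susceptible": [1, 0, 0, 1],
})

# Stratify by presumed place of acquisition (<48 h vs >=48 h) and by unit
isolates["acquisition"] = isolates["hours_after_admission"].apply(
    lambda h: "community (<48 h)" if h < 48 else "hospital (>=48 h)")
antibiogram = (isolates
               .groupby(["organism", "antibiotic", "unit", "acquisition"])["susceptible"]
               .agg(n="size", pct_susceptible="mean"))
antibiogram["pct_susceptible"] *= 100
print(antibiogram)
```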
Abstract:
Captan and folpet are two fungicides widely used in agriculture, but biomonitoring data are mostly limited to measurements of captan metabolite concentrations in spot urine samples of workers, which complicates the interpretation of results in terms of internal dose estimation, daily variations according to the tasks performed, and the most plausible routes of exposure. This study aimed at performing repeated biological measurements of exposure to captan and folpet in field workers (i) to better assess internal dose along with the main routes-of-entry according to tasks and (ii) to establish the most appropriate sampling and analysis strategies. The detailed urinary excretion time courses of specific and non-specific biomarkers of exposure to captan and folpet were established in tree farmers (n = 2) and grape growers (n = 3) over a typical workweek (seven consecutive days), including spraying and harvest activities. The impact of the way urinary measurements are expressed [excretion rate values adjusted or not for creatinine, or cumulative amounts over given time periods (8, 12, and 24 h)] was evaluated. Absorbed doses and main routes-of-entry were then estimated from the 24-h cumulative urinary amounts through the use of a kinetic model. The time courses showed that exposure levels were higher during spraying than during harvest activities. Model simulations also suggest a limited absorption in the studied workers and an exposure mostly through the dermal route. They further pointed out the advantage of expressing biomarker values in terms of body weight-adjusted amounts in repeated 24-h urine collections, as compared to concentrations or excretion rates in spot samples, without the necessity for creatinine corrections.
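The contrast between spot-sample metrics and body-weight-adjusted 24-h cumulative amounts can be shown with a short calculation; the urine volumes, concentrations, creatinine values, and body weight below are invented for the example.

```python
import numpy as np

# Hypothetical spot-urine results for one worker over a 24-h collection
void_volumes_l     = np.array([0.35, 0.42, 0.30, 0.55])   # urine volume per void (L)
conc_ug_per_l      = np.array([12.0, 9.5, 14.2, 7.8])     # metabolite concentration per void (ug/L)
creatinine_g_per_l = np.array([1.4, 1.1, 1.6, 0.9])
body_weight_kg = 75.0

# Spot-sample expression: concentration corrected for creatinine (ug/g creatinine)
conc_per_g_creatinine = conc_ug_per_l / creatinine_g_per_l

# 24-h cumulative amount, adjusted for body weight (ug/kg body weight per day)
cumulative_ug = np.sum(conc_ug_per_l * void_volumes_l)
cumulative_ug_per_kg = cumulative_ug / body_weight_kg
print(np.round(conc_per_g_creatinine, 2), round(cumulative_ug_per_kg, 3))
```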
Abstract:
Introduction/objectives: Multipatient use of a single-patient CBSD occurred in an outpatient clinic during 4 to 16 months before its notification. We looked for transmission of blood-borne pathogens among exposed patients. Methods: Exposed patients underwent serology testing for HBV, HCV and HIV. Patients with isolated anti-HBc received one dose of hepatitis B vaccine to look for a memory immune response. Possible transmissions were investigated by mapping visits and sequencing of the viral genome if needed. Results: Of 280 exposed patients, 9 had died without suspicion of blood-borne infection, 3 could not be tested, and 5 declined investigations. Among the 263 (93%) tested patients, 218 (83%) had negative results. We confirmed a known history of HCV infection in 6 patients (1 co-infected with HIV), and also identified resolved HBV infection in 37 patients, of whom 18 were already known. 2 patients were found to have a previously unknown HCV infection. According to the time elapsed from the closest previous visit of an HCV-infected potential source patient, we could rule out nosocomial transmission in one case (14 weeks) but not in the other (1 day). In the latter, however, transmission was deemed very unlikely by 2 reference centers based on the sequences of the E1 and HVR1 regions of the virus. Conclusion: We did not identify any transmission of blood-borne pathogens in 263 patients exposed to a single-patient CBSD, despite the presence of potential source cases. Change of needle and disinfection of the device between patients may have contributed to this outcome. Although we cannot exclude transmission of HBV, previous acquisition in endemic countries is a more likely explanation in this multi-national population.
Abstract:
The purpose of this study was to prospectively compare free-breathing navigator-gated cardiac-triggered three-dimensional steady-state free precession (SSFP) spin-labeling coronary magnetic resonance (MR) angiography performed by using Cartesian k-space sampling with that performed by using radial k-space sampling. A new dedicated placement of the two-dimensional selective labeling pulse and an individually adjusted labeling delay time were used; the study was approved by the institutional review board. In 14 volunteers (eight men, six women; mean age, 28.8 years) who gave informed consent, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), vessel sharpness, vessel length, and subjective image quality were investigated. Differences between groups were analyzed with nonparametric tests (Wilcoxon, Pearson χ2). Radial imaging, as compared with Cartesian imaging, resulted in a significant reduction in the severity of motion artifacts, as well as an increase in SNR (26.9 vs 12.0, P < .05) in the coronary arteries and CNR (23.1 vs 8.8, P < .05) between the coronary arteries and the myocardium. A tendency toward improved vessel sharpness and vessel length was also found with radial imaging. Radial SSFP imaging is a promising technique for spin-labeling coronary MR angiography.
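A sketch of how the SNR metric and the paired Wilcoxon comparison used above can be computed; the region-of-interest statistics and per-volunteer values are simulated, not the study data.

```python
import numpy as np
from scipy.stats import wilcoxon

def snr(signal_roi, noise_roi):
    """Signal-to-noise ratio: mean signal in a vessel ROI divided by the noise standard deviation.
    CNR is analogous, using the difference between vessel and myocardial ROI means."""
    return np.mean(signal_roi) / np.std(noise_roi)

rng = np.random.default_rng(0)
# Hypothetical per-volunteer ROI data for the two trajectories (14 volunteers, paired design)
snr_radial    = np.array([snr(rng.normal(270, 15, 50), rng.normal(0, 10, 50)) for _ in range(14)])
snr_cartesian = np.array([snr(rng.normal(120, 15, 50), rng.normal(0, 10, 50)) for _ in range(14)])

stat, p = wilcoxon(snr_radial, snr_cartesian)   # paired nonparametric comparison
print(f"median SNR: radial {np.median(snr_radial):.1f}, Cartesian {np.median(snr_cartesian):.1f}, p = {p:.4f}")
```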
Abstract:
The aim of this study is to investigate the influence of unusual writing positions on a person's signature, in comparison to a standard writing position. Ten writers were asked to sign their signature six times in each of four different writing positions, including the standard one. In order to take into consideration the effect of day-to-day variation, this same process was repeated over 12 sessions, giving a total of 288 signatures per subject. The signatures were collected simultaneously in off-line and on-line acquisition modes, using an interactive tablet and a ballpoint pen. Unidimensional variables (height-to-width ratio; time with or without in-air displacement) and time-dependent variables (pressure; X and Y coordinates; altitude and azimuth angles) were extracted from each signature. For the unidimensional variables, the position effect was assessed through ANOVA and Dunnett contrast tests. Concerning the time-dependent variables, the signatures were compared by using dynamic time warping, and the position effect was evaluated through classification by linear discriminant analysis. Both types of variables provided similar results: no general tendency regarding the position factor could be highlighted. The influence of the position factor varies according to the subject as well as the variable studied. The impact of the session factor was shown to outweigh the impact that could be ascribed to the writing position factor. Indeed, the day-to-day variation has a greater effect than the position factor on the studied signature variables. The results of this study suggest guidelines for best practice in the area of signature comparisons and demonstrate the importance of a signature collection procedure covering an adequate number of sampling sessions, with a sufficient number of samples per session.
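The time-dependent variables were compared with dynamic time warping; the sketch below implements the classic DTW recurrence on two synthetic pen-pressure profiles, purely for illustration.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences (e.g. pen-pressure profiles)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

# Synthetic pressure profiles from two signatures of the same writer (different durations)
sig_standard = np.sin(np.linspace(0, 3 * np.pi, 120)) + 1.0
sig_awkward  = np.sin(np.linspace(0, 3 * np.pi, 140)) * 0.9 + 1.0
print(round(dtw_distance(sig_standard, sig_awkward), 2))
```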
Abstract:
Background/Introduction: In Switzerland, most trends in overweight and obesity levels have been assessed using reported data, a methodology which is prone to reporting bias. In this study, we aimed at assessing trends in overweight and obesity levels using objectively measured data. Methods: We used independent cross-sectional data collected between 2005 and 2011 by the Bus Santé study on representative samples of the Geneva population. Trends were assessed overall and according to different characteristics of the participants. Overweight and obesity were defined as a body mass index (BMI) between 25 and 29.9 kg/m2 and ≥30 kg/m2, respectively. Results: Data from 4093 participants (2012 men) were assessed. Mean BMI was 25.2 ± 4.3 kg/m2 (mean ± standard deviation) in 2005 and 25.4 ± 4.3 in 2011 (p for trend using linear regression = 0.98). For men, mean BMI was 26.3 ± 3.8 kg/m2 in 2005 and 26.1 ± 3.7 in 2011 (p for trend = 0.37); for women, the corresponding values were 24.3 ± 4.6 and 24.7 ± 4.7 kg/m2 (p for trend = 0.42). Overall prevalence of overweight and obesity was 32.2% and 13.3%, respectively, in 2005 and 33.6% and 13.7% in 2011 (p for trend using polytomous logistic regression adjusting for gender, age and smoking = 0.49 and 0.94 for overweight and obesity, respectively). For men, prevalence of overweight and obesity was 45.9% and 12.2% in 2005 and 42.1% and 14.6% in 2011 (p for trend = 0.03 for overweight and 0.81 for obesity); for women, the corresponding values were 20.4% and 14.2% in 2005 and 25.4% and 12.9% in 2011 (p for trend = 0.13 for overweight and 0.99 for obesity). Conclusion: Overweight and obesity levels appear to have levelled off in Geneva, with a possible decrease in overweight levels in men. These favorable findings should be replicated in other geographical locations.
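The BMI categories and the linear-regression test for trend mentioned above can be sketched as follows; the participant data are simulated, not the Bus Santé measurements.

```python
import numpy as np
from scipy.stats import linregress

def bmi_category(bmi):
    """BMI classes used above: overweight 25-29.9 kg/m2, obesity >= 30 kg/m2."""
    if bmi >= 30:
        return "obese"
    if bmi >= 25:
        return "overweight"
    return "normal or underweight"

# Simulated cross-sectional samples: one BMI value per participant with the survey year
rng = np.random.default_rng(0)
years = np.repeat(np.arange(2005, 2012), 100)
bmi = rng.normal(25.3, 4.3, size=years.size)

categories = np.array([bmi_category(b) for b in bmi])
print(f"overweight {np.mean(categories == 'overweight') * 100:.1f}%, "
      f"obese {np.mean(categories == 'obese') * 100:.1f}%")

# p for trend of mean BMI across survey years, using linear regression as in the abstract
trend = linregress(years, bmi)
print(f"slope {trend.slope:.3f} kg/m2 per year, p for trend = {trend.pvalue:.2f}")
```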
Abstract:
BACKGROUND: Mediastinal lymph-node dissection was compared to systematic mediastinal lymph-node sampling in patients undergoing complete resection for non-small cell lung cancer with respect to morbidity, duration of chest tube drainage and hospitalization, survival, disease-free survival, and site of recurrence. METHODS: A consecutive series of one hundred patients with non-small cell lung cancer, clinical stage T1-3 N0-1 after standardized staging, was divided into two groups of 50 patients each, according to the technique of intraoperative mediastinal lymph-node assessment (dissection versus sampling). Mediastinal lymph-node dissection consisted of removal of all lymphatic tissues within defined anatomic landmarks of stations 2-4 and 7-9 on the right side, and stations 4-9 on the left side, according to the classification of the American Thoracic Society. Systematic mediastinal lymph-node sampling consisted of harvesting of one or more representative lymph nodes from stations 2-4 and 7-9 on the right side, and stations 4-9 on the left side. RESULTS: All patients had complete resection. A mean follow-up time of 89 months was achieved in 92 patients. The two groups of patients were comparable with respect to age, gender, performance status, tumor stage, histology, extent of lung resection, and follow-up time. No significant difference was found between the two groups regarding the duration of chest tube drainage, hospitalization, and morbidity. However, dissection required a longer operation time than sampling (179 ± 38 min versus 149 ± 37 min, p < 0.001). There was no significant difference in overall survival between the two groups; however, patients with stage I disease had a significantly longer disease-free survival after dissection than after sampling (60.2 ± 7 versus 44.8 ± 8 months, p < 0.03). Local recurrence was significantly higher after sampling than after dissection in patients with stage I tumors (12.5% versus 45%, p = 0.02) and in patients with a tumor-negative mediastinum (N0/N1 disease) (46% versus 13%, p = 0.004). CONCLUSION: Our results suggest that mediastinal lymph-node dissection may provide a longer disease-free survival in stage I non-small cell lung cancer and, most importantly, better local tumor control than mediastinal lymph-node sampling after complete resection for N0/N1 disease, without leading to increased morbidity.
Abstract:
Because of the various matrices available for forensic investigations, the development of versatile analytical approaches allowing the simultaneous determination of drugs is challenging. The aim of this work was to assess a liquid chromatography-tandem mass spectrometry (LC-MS/MS) platform allowing the rapid quantification of colchicine in body fluids and tissues collected in the context of a fatal overdose. For this purpose, filter paper was used as a sampling support and was associated with an automated 96-well plate extraction performed by the LC autosampler itself. The developed method features a 7-min total run time including automated filter paper extraction (2 min) and chromatographic separation (5 min). The sample preparation was reduced to a minimum regardless of the matrix analyzed. This platform was fully validated for dried blood spots (DBS) in the toxic concentration range of colchicine. The DBS calibration curve was applied successfully to quantification in all other matrices (body fluids and tissues) except for bile, where an excessive matrix effect was found. The distribution of colchicine for a fatal overdose case was reported as follows: peripheral blood, 29 ng/ml; urine, 94 ng/ml; vitreous humour and cerebrospinal fluid, < 5 ng/ml; pericardial fluid, 14 ng/ml; brain, < 5 pg/mg; heart, 121 pg/mg; kidney, 245 pg/mg; and liver, 143 pg/mg. Although filter paper is usually employed for DBS, we report here the extension of this alternative sampling support to the analysis of other body fluids and tissues. The developed platform represents a rapid and versatile approach for drug determination in multiple forensic media.
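Quantification against a dried blood spot calibration curve, as applied above to the other matrices, might look like the following sketch; the calibration points, the unweighted fit, and the LLOQ handling are illustrative assumptions.

```python
import numpy as np

# Hypothetical DBS calibration data for colchicine: peak-area ratio vs concentration (ng/ml)
cal_conc  = np.array([5, 10, 25, 50, 100, 250])
cal_ratio = np.array([0.021, 0.043, 0.108, 0.212, 0.430, 1.060])

# Weighted least squares (e.g. 1/x) is common in bioanalysis; an unweighted fit keeps the sketch simple
slope, intercept = np.polyfit(cal_conc, cal_ratio, 1)

def quantify(peak_area_ratio, lloq=5.0):
    """Back-calculate a concentration from the DBS calibration curve; report < LLOQ when applicable."""
    conc = (peak_area_ratio - intercept) / slope
    return conc if conc >= lloq else f"< {lloq} ng/ml"

print(quantify(0.125))   # e.g. a peripheral-blood spot
print(quantify(0.015))   # below the lower limit of quantification
```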
Abstract:
A simple wipe sampling procedure was developed for the determination of surface contamination by ten cytotoxic drugs: cytarabine, gemcitabine, methotrexate, etoposide phosphate, cyclophosphamide, ifosfamide, irinotecan, doxorubicin, epirubicin and vincristine. Wiping was performed using Whatman filter paper on different surfaces such as stainless steel, polypropylene, polystyrene, glass, latex gloves, a computer mouse and coated paperboard. Wiping and desorption procedures were investigated; the same solution, containing 20% acetonitrile and 0.1% formic acid in water, gave the best results for both. After ultrasonic desorption and centrifugation, samples were analysed by a validated liquid chromatography-tandem mass spectrometry (LC-MS/MS) method in selected reaction monitoring mode. The whole analytical strategy, from wipe sampling to LC-MS/MS analysis, was evaluated to determine its quantitative performance. A lower limit of quantification of 10 ng per wipe sample (i.e. 0.1 ng/cm2) was determined for the ten investigated cytotoxic drugs. The relative standard deviation for intermediate precision was always below 20%. As recovery depended on the tested surface for each drug, a correction factor was determined and applied to real samples. The method was then successfully applied at the cytotoxic production unit of the Geneva University Hospitals pharmacy.
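The surface-dependent correction factor and the conversion from nanograms per wipe to ng/cm2 can be illustrated as below; the recovery values and the wiped area are hypothetical, not the validated figures.

```python
# Surface loading from a wipe sample: analyte mass corrected for surface-specific recovery,
# divided by the wiped area. The recovery values below are invented for the example.
RECOVERY = {"stainless steel": 0.85, "polypropylene": 0.70, "coated paperboard": 0.55}

def surface_loading_ng_per_cm2(measured_ng, wiped_area_cm2, surface):
    corrected_ng = measured_ng / RECOVERY[surface]   # correction factor for incomplete recovery
    return corrected_ng / wiped_area_cm2

# 10 ng found on a 100 cm2 stainless-steel area -> about 0.12 ng/cm2 after correction
print(round(surface_loading_ng_per_cm2(10.0, 100.0, "stainless steel"), 3))
```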
Abstract:
Traditional culture-dependent methods to quantify and identify airborne microorganisms are limited by factors such as short-duration sampling times and the inability to count nonculturable or non-viable bacteria. Consequently, the quantitative assessment of bioaerosols is often underestimated. Use of real-time quantitative polymerase chain reaction (Q-PCR) to quantify bacteria in environmental samples presents an alternative method which should overcome this problem. The aim of this study was to evaluate the performance of a real-time Q-PCR assay as a simple and reliable way to quantify the airborne bacterial load within poultry houses and sewage treatment plants, in comparison with epifluorescence microscopy and culture-dependent methods. The estimates of bacterial load that we obtained from real-time PCR and epifluorescence methods are comparable; however, our analysis of sewage treatment plants indicates that these methods give values 270- to 290-fold greater than those obtained by the "impaction on nutrient agar" method. The culture-dependent method of air impaction on nutrient agar was also inadequate in poultry houses, as was the impinger-culture method, which gave a bacterial load estimate 32-fold lower than that obtained by Q-PCR. Real-time quantitative PCR thus proves to be a reliable, discerning, and simple method that could be used to estimate airborne bacterial load in a broad variety of other environments expected to carry high numbers of airborne bacteria.
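The fold differences reported above between Q-PCR (total cells) and culture-based (culturable CFU) estimates amount to a simple ratio across paired samples, as in this sketch with invented values.

```python
import numpy as np

# Hypothetical paired airborne bacterial loads per cubic meter of air
qpcr_cells_m3  = np.array([2.1e6, 3.4e6, 1.8e6, 2.9e6])   # total cells by Q-PCR
culture_cfu_m3 = np.array([7.5e3, 1.2e4, 6.1e3, 1.1e4])   # culturable CFU by impaction

# Fold difference between total-cell and culturable estimates
fold = qpcr_cells_m3 / culture_cfu_m3
print(f"median fold difference: {np.median(fold):.0f}x")
```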
Abstract:
The efficient use of geothermal systems, the sequestration of CO2 to mitigate climate change, and the prevention of seawater intrusion in coastal aquifers are only some examples that demonstrate the need for novel technologies to monitor subsurface processes from the surface. A main challenge is to assure optimal performance of such technologies at different temporal and spatial scales. Plane-wave electromagnetic (EM) methods are sensitive to subsurface electrical conductivity and consequently to fluid conductivity, fracture connectivity, temperature, and rock mineralogy.
These methods have governing equations that are the same over a large range of frequencies, thus allowing processes to be studied in an analogous manner on scales ranging from a few meters below the surface down to depths of several hundreds of kilometers. Unfortunately, they suffer from a significant resolution loss with depth due to the diffusive nature of the electromagnetic fields. Therefore, estimations of subsurface models that use these methods should incorporate a priori information to better constrain the models, and provide appropriate measures of model uncertainty. During my thesis, I have developed approaches to improve the static and dynamic characterization of the subsurface with plane-wave EM methods. In the first part of this thesis, I present a two-dimensional deterministic approach to perform time-lapse inversion of plane-wave EM data. The strategy is based on the incorporation of prior information into the inversion algorithm regarding the expected temporal changes in electrical conductivity. This is done by incorporating a flexible stochastic regularization and constraints regarding the expected ranges of the changes by using Lagrange multipliers. I use non-l2 norms to penalize the model update in order to obtain sharp transitions between regions that experience temporal changes and regions that do not. I also incorporate a time-lapse differencing strategy to remove systematic errors in the time-lapse inversion. This work presents improvements in the characterization of temporal changes with respect to the classical approach of performing separate inversions and computing differences between the models. In the second part of this thesis, I adopt a Bayesian framework and use Markov chain Monte Carlo (MCMC) simulations to quantify model parameter uncertainty in plane-wave EM inversion. For this purpose, I present a two-dimensional pixel-based probabilistic inversion strategy for separate and joint inversions of plane-wave EM and electrical resistivity tomography (ERT) data. I compare the uncertainties of the model parameters when considering different types of prior information on the model structure and different likelihood functions to describe the data errors. The results indicate that model regularization is necessary when dealing with a large number of model parameters because it helps to accelerate the convergence of the chains and leads to more realistic models. However, these constraints also lead to smaller uncertainty estimates, which imply posterior distributions that do not include the true underlying model in regions where the method has limited sensitivity. This situation can be improved by combining plane-wave EM methods with complementary geophysical methods such as ERT. In addition, I show that an appropriate regularization weight and the standard deviation of the data errors can be retrieved by the MCMC inversion. Finally, I evaluate the possibility of characterizing the three-dimensional distribution of an injected water plume by performing three-dimensional time-lapse MCMC inversion of plane-wave EM data. Since MCMC inversion involves a significant computational burden in high parameter dimensions, I propose a model reduction strategy in which the coefficients of a Legendre moment decomposition of the injected water plume and its location are estimated. For this purpose, a base resistivity model, obtained prior to the time-lapse experiment, is needed. A synthetic test shows that the methodology works well when the base resistivity model is correctly characterized.
The methodology is also applied to an injection experiment performed in a geothermal system in Australia, and compared to a three-dimensional time-lapse inversion performed within a deterministic framework. The MCMC inversion better constrains the water plume thanks to the larger amount of prior information included in the algorithm. However, the conductivity changes needed to explain the time-lapse data are much larger than what is physically plausible based on present-day understanding. This issue may be related to the limited quality of the base resistivity model used, indicating that more effort should be devoted to obtaining high-quality base models prior to dynamic experiments. The studies described herein give clear evidence that plane-wave EM methods are useful to characterize and monitor the subsurface at a wide range of scales. The presented approaches contribute to an improved appraisal of the obtained models, both in terms of the incorporation of prior information in the algorithms and in terms of the posterior uncertainty quantification. In addition, the developed strategies can be applied to other geophysical methods, and offer great flexibility to incorporate additional information when available.
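The Legendre-moment model reduction used in the final part of the thesis can be illustrated in one dimension: a plume-shaped profile is represented by a handful of expansion coefficients rather than one value per cell. The profile shape and the expansion order below are arbitrary choices made only for the sketch.

```python
import numpy as np
from numpy.polynomial import legendre

# Model-reduction idea: describe a plume profile by a few Legendre coefficients
# instead of one conductivity-change value per cell (1-D illustration, synthetic data).
x = np.linspace(-1.0, 1.0, 200)                       # normalized coordinate across the plume
true_change = np.exp(-((x - 0.2) / 0.3) ** 2)         # synthetic conductivity-change profile

order = 6                                             # low-order expansion = reduced parameter space
coeffs = legendre.legfit(x, true_change, deg=order)   # the parameters a sampler would estimate
approx = legendre.legval(x, coeffs)

rms_misfit = np.sqrt(np.mean((approx - true_change) ** 2))
print(f"{order + 1} parameters instead of {x.size}; RMS misfit {rms_misfit:.3f}")
```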
Abstract:
Time-lapse geophysical measurements are widely used to monitor the movement of water and solutes through the subsurface. Yet commonly used deterministic least squares inversions typically suffer from relatively poor mass recovery, spread overestimation, and limited ability to appropriately estimate nonlinear model uncertainty. We describe herein a novel inversion methodology designed to reconstruct the three-dimensional distribution of a tracer anomaly from geophysical data and provide consistent uncertainty estimates using Markov chain Monte Carlo simulation. Posterior sampling is made tractable by using a lower-dimensional model space related both to the Legendre moments of the plume and to predefined morphological constraints. Benchmark results using cross-hole ground-penetrating radar travel time measurements during two synthetic water tracer application experiments involving increasingly complex plume geometries show that the proposed method not only conserves mass but also provides better estimates of plume morphology and posterior model uncertainty than deterministic inversion results.
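Posterior sampling in a lower-dimensional model space can be sketched with a toy Metropolis sampler: a two-parameter "plume" forward model, Gaussian noise, and uniform priors, none of which reflect the actual GPR travel-time physics or parameterization used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy forward model: a 1-D "plume" parameterized by center and width produces data d = g(m) + noise
x = np.linspace(0.0, 10.0, 40)
def forward(center, width):
    return np.exp(-0.5 * ((x - center) / width) ** 2)

true_m = (4.0, 1.2)
sigma = 0.05
d_obs = forward(*true_m) + rng.normal(0.0, sigma, x.size)

def log_likelihood(m):
    residual = d_obs - forward(*m)
    return -0.5 * np.sum((residual / sigma) ** 2)

# Metropolis sampling over the low-dimensional model space (uniform priors assumed)
m = np.array([5.0, 2.0])
samples = []
for _ in range(5000):
    proposal = m + rng.normal(0.0, 0.1, size=2)
    # Accept if the proposal is valid (positive width) and passes the Metropolis test
    if proposal[1] > 0 and np.log(rng.random()) < log_likelihood(proposal) - log_likelihood(m):
        m = proposal
    samples.append(m.copy())
samples = np.array(samples)
print("posterior mean (after burn-in):", samples[-2000:].mean(axis=0))   # compare with true_m
```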