992 results for Método Monte Carlo
Abstract:
The efficient use of geothermal systems, the sequestration of CO2 to mitigate climate change, and the prevention of seawater intrusion in coastal aquifers are only some examples that demonstrate the need for novel technologies to monitor subsurface processes from the surface. A main challenge is to assure optimal performance of such technologies at different temporal and spatial scales. Plane-wave electromagnetic (EM) methods are sensitive to subsurface electrical conductivity and consequently to fluid conductivity, fracture connectivity, temperature, and rock mineralogy.
These methods have governing equations that are the same over a large range of frequencies, thus allowing processes to be studied in an analogous manner on scales ranging from a few meters below the surface down to several hundred kilometers in depth. Unfortunately, they suffer from a significant resolution loss with depth due to the diffusive nature of the electromagnetic fields. Therefore, estimations of subsurface models that use these methods should incorporate a priori information to better constrain the models, and provide appropriate measures of model uncertainty. During my thesis, I have developed approaches to improve the static and dynamic characterization of the subsurface with plane-wave EM methods. In the first part of this thesis, I present a two-dimensional deterministic approach to perform time-lapse inversion of plane-wave EM data. The strategy is based on the incorporation of prior information into the inversion algorithm regarding the expected temporal changes in electrical conductivity. This is done by incorporating a flexible stochastic regularization and constraints regarding the expected ranges of the changes by using Lagrange multipliers. I use non-l2 norms to penalize the model update in order to obtain sharp transitions between regions that experience temporal changes and regions that do not. I also incorporate a time-lapse differencing strategy to remove systematic errors in the time-lapse inversion. This work presents improvements in the characterization of temporal changes with respect to the classical approach of performing separate inversions and computing differences between the models. In the second part of this thesis, I adopt a Bayesian framework and use Markov chain Monte Carlo (MCMC) simulations to quantify model parameter uncertainty in plane-wave EM inversion. For this purpose, I present a two-dimensional pixel-based probabilistic inversion strategy for separate and joint inversions of plane-wave EM and electrical resistivity tomography (ERT) data. I compare the uncertainties of the model parameters when considering different types of prior information on the model structure and different likelihood functions to describe the data errors. The results indicate that model regularization is necessary when dealing with a large number of model parameters because it helps to accelerate the convergence of the chains and leads to more realistic models. These constraints also lead to smaller uncertainty estimates, which imply posterior distributions that do not include the true underlying model in regions where the method has limited sensitivity. This situation can be improved by combining plane-wave EM methods with complementary geophysical methods such as ERT. In addition, I show that an appropriate regularization weight and the standard deviation of the data errors can be retrieved by the MCMC inversion. Finally, I evaluate the possibility of characterizing the three-dimensional distribution of an injected water plume by performing three-dimensional time-lapse MCMC inversion of plane-wave EM data. Since MCMC inversion involves a significant computational burden in high parameter dimensions, I propose a model reduction strategy in which the coefficients of a Legendre moment decomposition of the injected water plume and its location are estimated. For this purpose, a base resistivity model, obtained prior to the time-lapse experiment, is needed. A synthetic test shows that the methodology works well when the base resistivity model is correctly characterized.
The methodology is also applied to a saline and acid injection experiment performed in a geothermal system in Australia, and compared to a three-dimensional time-lapse inversion performed within a deterministic framework. The MCMC inversion better constrains the water plume due to the larger amount of prior information included in the algorithm. However, the conductivity changes needed to explain the time-lapse data are much larger than what is physically possible based on present-day understanding. This issue may be related to the limited quality of the base resistivity model used, indicating that more effort should be devoted to obtaining high-quality base models prior to dynamic experiments. The studies described herein give clear evidence that plane-wave EM methods are useful for characterizing and monitoring the subsurface at a wide range of scales. The presented approaches contribute to an improved appraisal of the obtained models, both in terms of the incorporation of prior information in the algorithms and of the posterior uncertainty quantification. In addition, the developed strategies can be applied to other geophysical methods, and offer great flexibility to incorporate additional information when available.
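To make the probabilistic ingredients above concrete, the following minimal Python sketch runs a Metropolis MCMC loop for a pixel-based inversion with a Gaussian likelihood and a smoothness regularization prior. The linear forward operator, noise level, and tuning settings are hypothetical stand-ins for illustration, not the thesis implementation or an EM/ERT solver.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear forward operator and synthetic data (stand-ins for a
# plane-wave EM / ERT forward solver and field measurements).
n_param, n_data = 50, 30
G = rng.normal(size=(n_data, n_param))
m_true = np.sin(np.linspace(0, 3 * np.pi, n_param))
sigma_d = 0.1
d_obs = G @ m_true + rng.normal(scale=sigma_d, size=n_data)

def log_likelihood(m):
    """Gaussian likelihood describing the data errors."""
    r = d_obs - G @ m
    return -0.5 * np.sum((r / sigma_d) ** 2)

def log_prior(m, weight=10.0):
    """Smoothness (first-difference) regularization acting as the prior."""
    return -0.5 * weight * np.sum(np.diff(m) ** 2)

# Metropolis sampler: perturb one randomly chosen pixel per iteration.
m = np.zeros(n_param)
log_post = log_likelihood(m) + log_prior(m)
samples = []
for it in range(20000):
    prop = m.copy()
    k = rng.integers(n_param)
    prop[k] += rng.normal(scale=0.1)
    log_post_prop = log_likelihood(prop) + log_prior(prop)
    if np.log(rng.random()) < log_post_prop - log_post:
        m, log_post = prop, log_post_prop
    if it > 10000 and it % 10 == 0:   # keep thinned post-burn-in samples
        samples.append(m.copy())

samples = np.array(samples)
print("posterior mean/std of first pixel:", samples[:, 0].mean(), samples[:, 0].std())
```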
Abstract:
The activity of radiopharmaceuticals in nuclear medicine is measured before patient injection with radionuclide calibrators. In Switzerland, the general requirements for quality controls are defined in a federal ordinance and a directive of the Federal Office of Metrology (METAS), which require each instrument to be verified. A set of three gamma sources (Co-57, Cs-137 and Co-60) is used to verify the response of radionuclide calibrators over the gamma energy range of their use. A beta source, a mixture of (90)Sr and (90)Y in secular equilibrium, is used as well. Manufacturers are responsible for the calibration factors. The main goal of the study was to monitor the validity of the calibration factors by using two sources: a (90)Sr/(90)Y source and a (18)F source. The three types of commercial radionuclide calibrators tested do not have a calibration factor for the mixture but only for (90)Y. Activity measurements of a (90)Sr/(90)Y source with the (90)Y calibration factor are performed in order to correct for the extra contribution of (90)Sr. The value of the correction factor was found to be 1.113, whereas Monte Carlo simulations of the radionuclide calibrators estimate the correction factor to be 1.117. Measurements with (18)F sources in a specific geometry are also performed. Since this radionuclide is widely used in Swiss hospitals equipped with PET and PET-CT, the metrology of (18)F is very important. The (18)F response normalized to the (137)Cs response shows that the difference from a reference value does not exceed 3% for the three types of radionuclide calibrators.
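As a worked check of the quoted numbers (assuming, purely for illustration, that the correction factor divides the activity indicated with the (90)Y calibration factor), the measured and simulated factors agree to within about 0.4%:

\[
A_{\mathrm{corrected}} = \frac{A_{\mathrm{indicated}}}{k}, \qquad
\frac{k_{\mathrm{MC}} - k_{\mathrm{meas}}}{k_{\mathrm{meas}}} = \frac{1.117 - 1.113}{1.113} \approx 0.4\%.
\]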
Abstract:
In this work we analyze how patchy distributions of CO2 and brine within sand reservoirs may lead to significant attenuation and velocity dispersion effects, which in turn may have a profound impact on surface seismic data. The ultimate goal of this paper is to contribute to the understanding of these processes within the framework of the seismic monitoring of CO2 sequestration, a key strategy to mitigate global warming. We first carry out a Monte Carlo analysis to study the statistical behavior of attenuation and velocity dispersion of compressional waves traveling through rocks with properties similar to those at the Utsira Sand, Sleipner field, containing quasi-fractal patchy distributions of CO2 and brine. These results show that the mean patch size and CO2 saturation play key roles in the observed wave-induced fluid flow effects. The latter can be remarkably important when CO2 concentrations are low and mean patch sizes are relatively large. To analyze these effects on the corresponding surface seismic data, we perform numerical simulations of wave propagation considering reservoir models and CO2 accumulation patterns similar to those at the CO2 injection site in the Sleipner field. These numerical experiments suggest that wave-induced fluid flow effects may produce changes in the reservoir's seismic response, significantly modifying the main seismic attributes usually employed in the characterization of these environments. Consequently, the determination of the nature of the fluid distributions as well as the proper modeling of the seismic data constitute important aspects that should not be ignored in the seismic monitoring of CO2 sequestration.
Abstract:
The paper addresses the concept of multicointegration in a panel data framework. The proposal builds upon the panel data cointegration procedures developed in Pedroni (2004), for which we compute the moments of the parametric statistics. When individuals are either cross-section independent or cross-section dependence can be removed by cross-section demeaning, our approach can be applied to the wider framework of the analysis of mixed I(2) and I(1) stochastic processes. The paper also deals with the issue of cross-section dependence using approximate common factor models. Finite sample performance is investigated through Monte Carlo simulations. Finally, we illustrate the use of the procedure by investigating the relationship between inventories, sales and production for a panel of US industries.
Abstract:
Empirical studies have shown little evidence to support the presence of all the unit roots implied by the $\Delta_4$ filter in quarterly seasonal time series. This paper analyses the performance of the Hylleberg, Engle, Granger and Yoo (1990) (HEGY) procedure when the roots under the null are not all present. We exploit the Vector of Quarters representation and the cointegration relationships between the quarters when the factors $(1-L)$, $(1+L)$, $(1+L^2)$, $(1-L^2)$ and $(1+L+L^2+L^3)$ are a source of nonstationarity in a process, in order to obtain the distribution of the HEGY tests when the underlying process has a root at the zero frequency, at the Nyquist frequency, a pair of complex conjugate roots at frequency $\pi/2$, and two combinations of the previous cases. We show both theoretically and through a Monte Carlo analysis that the t-ratios $t_{\hat\pi_1}$ and $t_{\hat\pi_2}$ and the F-type tests used in the HEGY procedure have the same distribution as under the null of a seasonal random walk when the corresponding root(s) are present, although this is not the case for the t-ratio tests associated with unit roots at frequency $\pi/2$.
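For readers who want to experiment, a hedged Python sketch of the HEGY auxiliary regression and a Monte Carlo run under a quarterly seasonal random walk is given below. It follows one common presentation of the HEGY (1990) regressors and omits deterministic terms and lag augmentation, both of which affect the null distributions; it is an illustration, not the paper's simulation design.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

def hegy_stats(y):
    """HEGY auxiliary regression for quarterly data (one common sign
    convention; deterministic terms and lag augmentation omitted)."""
    y = np.asarray(y, dtype=float)
    t = np.arange(4, len(y))
    d4 = y[t] - y[t - 4]                               # (1 - L^4) y_t
    y1 = y[t - 1] + y[t - 2] + y[t - 3] + y[t - 4]      # (1+L+L^2+L^3) y_{t-1}
    y2 = -(y[t - 1] - y[t - 2] + y[t - 3] - y[t - 4])   # -(1-L+L^2-L^3) y_{t-1}
    y3_1 = -(y[t - 1] - y[t - 3])                       # -(1-L^2) y_{t-1}
    y3_2 = -(y[t - 2] - y[t - 4])                       # -(1-L^2) y_{t-2}
    X = np.column_stack([y1, y2, y3_1, y3_2])
    res = sm.OLS(d4, X).fit()
    t1, t2 = res.tvalues[0], res.tvalues[1]
    f34 = float(np.squeeze(res.f_test(np.eye(4)[2:]).fvalue))  # pi3 = pi4 = 0
    return t1, t2, f34

# Monte Carlo under a quarterly seasonal random walk, y_t = y_{t-4} + e_t.
stats = np.array([hegy_stats(np.cumsum(rng.normal(size=(25, 4)), axis=0).ravel())
                  for _ in range(500)])
print("5% empirical quantiles of t(pi1), t(pi2); 95% quantile of F(pi3, pi4):")
print(np.quantile(stats[:, 0], 0.05), np.quantile(stats[:, 1], 0.05),
      np.quantile(stats[:, 2], 0.95))
```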
Abstract:
We work out a semiclassical theory of shot noise in ballistic n+-i-n+ semiconductor structures, aiming to study two fundamental physical correlations arising from the Pauli exclusion principle and the long-range Coulomb interaction. The theory provides a unifying scheme which, in addition to the current-voltage characteristics, describes the suppression of shot noise due to Pauli and Coulomb correlations over the whole range of system parameters and applied bias. The whole scenario is summarized by a phase diagram in the plane of two dimensionless variables related to the sample length and the contact chemical potential. Here, different regions of physical interest can be identified where only Coulomb or only Pauli correlations are active, or where both are present with different relevance. The predictions of the theory are shown to be fully corroborated by Monte Carlo simulations.
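Shot-noise suppression is commonly quantified by the Fano factor F = S_I(0)/(2qI), which for counting statistics reduces to Var(N)/<N>. The toy Python sketch below (not the authors' simulator) contrasts uncorrelated Poissonian injection with a binomial-partitioning stand-in for correlated transmission; all numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def fano(counts):
    """Fano factor of electron counts per time window: Var(N)/<N>.
    In the low-frequency limit this equals S_I(0) / (2 q <I>)."""
    counts = np.asarray(counts, dtype=float)
    return counts.var() / counts.mean()

n_windows = 100_000
mean_n = 50.0

# Uncorrelated (Poissonian) injection: full shot noise, F -> 1.
poisson_counts = rng.poisson(mean_n, n_windows)

# Correlated transmission modeled as binomial partitioning of a fixed number
# of injection attempts (a toy stand-in for Pauli/Coulomb correlations):
# F -> 1 - p, i.e. suppressed shot noise.
n_attempts, p = 100, 0.5
binomial_counts = rng.binomial(n_attempts, p, n_windows)

print("Fano factor, Poissonian :", round(fano(poisson_counts), 3))   # ~1.0
print("Fano factor, correlated :", round(fano(binomial_counts), 3))  # ~0.5
```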
Abstract:
We present a theoretical investigation of shot-noise properties in nondegenerate elastic diffusive conductors. Both Monte Carlo simulations and analytical approaches are used. Two interesting phenomena are found: (i) enhanced shot noise for certain energy dependences of the scattering time, and (ii) the recovery of full shot noise at asymptotically high applied bias. The first phenomenon is associated with the onset of negative differential conductivity in energy space, which drives the system towards a dynamical electrical instability, in excellent agreement with analytical predictions. The enhancement is found to be strongly amplified when the dimensionality in momentum space is lowered from three to two dimensions. The second phenomenon is due to the suppression of the effects of long-range Coulomb correlations that takes place when the transit time becomes the shortest time scale in the system, and is common to both elastic and inelastic nondegenerate diffusive conductors. These phenomena shed new light on the anomalous behavior of shot noise in mesoscopic conductors, which is a signature of correlations among different current pulses.
Abstract:
Coulomb suppression of shot noise in a ballistic diode connected to degenerate ideal contacts is analyzed in terms of the correlations that take place between the current fluctuations due to carriers injected with different energies. By using Monte Carlo simulations, we show that at low frequencies the origin of Coulomb suppression can be traced back to the negative correlations between electrons injected with an energy close to that of the potential barrier present in the diode active region and all other carriers injected with higher energies. Correlations between electrons with energies above the potential barrier and the rest of the electrons are found to significantly influence the spectra at high frequency, in the cutoff region.
Abstract:
We propose an iterative procedure to minimize the sum-of-squares function which avoids the nonlinear nature of estimating the first-order moving average parameter and provides a closed-form estimator. The asymptotic properties of the method are discussed, and the consistency of the linear least squares estimator is proved for the invertible case. We perform various Monte Carlo experiments in order to compare the sample properties of the linear least squares estimator with those of its nonlinear counterpart for the conditional and unconditional cases. Some examples are also discussed.
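As a hedged illustration of such a Monte Carlo comparison, the Python sketch below pits a Hannan-Rissanen-style two-step linear least-squares estimator of the MA(1) parameter (used here as a stand-in for the paper's closed-form estimator) against the maximum-likelihood estimator from statsmodels.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)

def simulate_ma1(theta, n, burn=50):
    """Simulate y_t = e_t + theta * e_{t-1}, zero mean."""
    e = rng.normal(size=n + burn)
    y = e[1:] + theta * e[:-1]
    return y[burn - 1:]

def linear_ls_theta(y, p=10):
    """Two-step linear LS (Hannan-Rissanen style): fit a long AR by OLS to
    recover innovations, then regress y_t on the lagged residual."""
    Y = y[p:]
    X = np.column_stack([y[p - k:-k] for k in range(1, p + 1)])
    phi, *_ = np.linalg.lstsq(X, Y, rcond=None)
    e_hat = Y - X @ phi
    return np.dot(e_hat[:-1], Y[1:]) / np.dot(e_hat[:-1], e_hat[:-1])

def mle_theta(y):
    """Nonlinear (maximum likelihood) benchmark via statsmodels ARIMA(0,0,1)."""
    return ARIMA(y, order=(0, 0, 1), trend="n").fit().params[0]

theta_true, n_obs, n_rep = 0.5, 200, 100
est = np.array([(linear_ls_theta(y), mle_theta(y))
                for y in (simulate_ma1(theta_true, n_obs) for _ in range(n_rep))])
print("            [linear LS, MLE]")
print("bias:", est.mean(axis=0) - theta_true)
print("MSE :", ((est - theta_true) ** 2).mean(axis=0))
```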
Abstract:
In groundwater applications, Monte Carlo methods are employed to model the uncertainty in geological parameters. However, their brute-force application becomes computationally prohibitive for highly detailed geological descriptions, complex physical processes, and a large number of realizations. The Distance Kernel Method (DKM) overcomes this issue by clustering the realizations in a multidimensional space based on the flow responses obtained by means of an approximate (computationally cheaper) model; the uncertainty is then estimated from the exact responses, which are computed only for one representative realization per cluster (the medoid). Usually, DKM is employed to decrease the size of the sample of realizations that are considered to estimate the uncertainty. We propose to also use the information from the approximate responses for uncertainty quantification. The subset of exact solutions provided by DKM is then employed to construct an error model and correct the potential bias of the approximate model. Two error models are devised that both employ the difference between the approximate and exact medoid solutions, but differ in the way medoid errors are interpolated to correct the whole set of realizations. The Local Error Model rests upon the clustering defined by DKM and can be seen as a natural way to account for intra-cluster variability; the Global Error Model employs a linear interpolation of all medoid errors regardless of the cluster to which a given realization belongs. These error models are evaluated for an idealized pollution problem in which the uncertainty of the breakthrough curve needs to be estimated. For this numerical test case, we demonstrate that the error models improve the uncertainty quantification provided by the DKM algorithm and are effective in correcting the bias of the estimate computed solely from the multiscale finite-volume (MsFV) results. The framework presented here is not specific to the methods considered and can be applied to other combinations of approximate models and techniques to select a subset of realizations.
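A minimal Python sketch of this workflow is given below: realizations are clustered on their approximate responses, one medoid per cluster is evaluated exactly, and the medoid errors are spread either locally (each realization inherits its own cluster's medoid error) or globally (here via inverse-distance weighting, standing in for the paper's linear interpolation). The cheap/exact response pair is a synthetic stand-in, not the MsFV model or the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)

# Hypothetical ensemble: each realization has a cheap (approximate) response
# curve and, when requested, an "exact" one with a parameter-dependent bias.
n_real, n_t = 200, 40
t = np.linspace(0, 1, n_t)
params = rng.uniform(0.5, 2.0, size=n_real)                    # stand-in parameter
approx = np.exp(-np.outer(params, t) * 3.0)                    # cheap proxy responses
exact = approx + 0.05 * np.sin(6 * np.pi * t) * params[:, None]

# 1) Cluster realizations in the space of approximate responses.
n_clusters = 8
km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(approx)

# 2) Pick one medoid per cluster (realization closest to the centroid) and
#    "pay" for the exact model only at the medoids.
medoids = []
for c in range(n_clusters):
    idx = np.where(km.labels_ == c)[0]
    d = np.linalg.norm(approx[idx] - km.cluster_centers_[c], axis=1)
    medoids.append(idx[np.argmin(d)])
medoids = np.array(medoids)
medoid_err = exact[medoids] - approx[medoids]

# 3a) Local error model: every realization inherits its cluster-medoid error.
corrected_local = approx + medoid_err[km.labels_]

# 3b) Global error model: inverse-distance interpolation of all medoid errors.
d = np.linalg.norm(approx[:, None, :] - approx[medoids][None, :, :], axis=2)
w = 1.0 / (d + 1e-12)
w /= w.sum(axis=1, keepdims=True)
corrected_global = approx + w @ medoid_err

for name, ens in [("approx", approx), ("local", corrected_local), ("global", corrected_global)]:
    print(f"{name:7s} mean abs error: {np.abs(ens - exact).mean():.4f}")
```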
Abstract:
In this paper we analyse, using Monte Carlo simulation, the possible consequences of incorrect assumptions about the true structure of the random-effects covariance matrix and the true correlation pattern of the residuals on the performance of an estimation method for nonlinear mixed models. The procedure under study is the well-known linearization method due to Lindstrom and Bates (1990), implemented in the nlme library of S-Plus and R. Its performance is studied in terms of bias, mean square error (MSE), and true coverage of the associated asymptotic confidence intervals. Ignoring other criteria, such as the convenience of avoiding overparameterised models, it appears worse to erroneously assume some structure than to assume no structure when the latter would be adequate.
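As a simplified, hedged sketch of this Monte Carlo design, the Python code below uses a linear mixed model fitted with statsmodels as a stand-in for the nonlinear/nlme setting (an assumption of this sketch): data are simulated with correlated random intercepts and slopes, fitted with a deliberately misspecified random-intercept-only model, and bias, MSE, and confidence-interval coverage of the fixed slope are recorded.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)

def simulate(n_groups=30, n_per=10, beta=(1.0, 2.0)):
    """Random intercept + slope with correlated random effects."""
    cov = np.array([[1.0, 0.6], [0.6, 0.5]])
    b = rng.multivariate_normal([0.0, 0.0], cov, size=n_groups)
    rows = []
    for g in range(n_groups):
        x = rng.uniform(0, 1, n_per)
        y = (beta[0] + b[g, 0]) + (beta[1] + b[g, 1]) * x + rng.normal(0, 0.5, n_per)
        rows.append(pd.DataFrame({"y": y, "x": x, "g": g}))
    return pd.concat(rows, ignore_index=True)

n_rep, beta1 = 100, 2.0
est, se = [], []
for _ in range(n_rep):
    df = simulate()
    # Misspecified fit: random intercept only, although the truth has a slope too.
    res = smf.mixedlm("y ~ x", df, groups=df["g"]).fit(reml=True)
    est.append(res.params["x"])
    se.append(res.bse["x"])

est, se = np.array(est), np.array(se)
print("bias           :", est.mean() - beta1)
print("MSE            :", ((est - beta1) ** 2).mean())
# Coverage is often below the nominal 0.95 when the random slope is ignored.
print("95% CI coverage:", np.mean(np.abs(est - beta1) <= 1.96 * se))
```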
Abstract:
Domain growth in a system with nonconserved order parameter is studied. We simulate the usual Ising model for binary alloys with concentration 0.5 on a two-dimensional square lattice by Monte Carlo techniques. Measurements of the energy, jump-acceptance ratio, and order parameters are performed. Dynamics based on the diffusion of a single vacancy in the system gives a growth law faster than the usual Allen-Cahn law. Allowing vacancy jumps to next-nearest-neighbor sites is essential to prevent vacancy trapping in the ordered regions. By measuring local order parameters we show that the vacancy prefers to be in the disordered regions (domain boundaries). This naturally concentrates the atomic jumps in the domain boundaries, accelerating the growth compared with the usual exchange mechanism that causes jumps to be homogeneously distributed on the lattice.
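In the spirit of the vacancy-mediated dynamics described above, here is a minimal Python sketch: a single vacancy on a two-dimensional lattice of ±1 "atoms" performs Metropolis-accepted jumps to nearest- and next-nearest-neighbor sites under a nearest-neighbor Ising energy. Lattice size, temperature, and run length are illustrative assumptions, not the paper's settings, and the short run only begins to coarsen the domains.

```python
import numpy as np

rng = np.random.default_rng(6)

L = 32                     # lattice size
J = -1.0                   # J < 0 favors a checkerboard (ordered alloy) ground state
beta = 2.0                 # inverse temperature
steps = 200_000            # short illustrative run

# Random A/B alloy at concentration 0.5, encoded as +1/-1, with one vacancy (0).
spins = rng.choice([-1, 1], size=(L, L))
vx, vy = L // 2, L // 2
spins[vx, vy] = 0

# Nearest- and next-nearest-neighbor jump vectors (NNN jumps help prevent the
# vacancy from being trapped inside ordered domains).
jumps = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (1, -1), (-1, 1), (-1, -1)]
nn = [(1, 0), (-1, 0), (0, 1), (0, -1)]   # bonds entering the energy

def local_energy(s, x, y):
    """Ising bond energy of site (x, y) with its nearest neighbors."""
    return -J * s[x, y] * sum(s[(x + dx) % L, (y + dy) % L] for dx, dy in nn)

for _ in range(steps):
    dx, dy = jumps[rng.integers(len(jumps))]
    ax, ay = (vx + dx) % L, (vy + dy) % L              # atom to swap with the vacancy
    e_old = local_energy(spins, ax, ay)
    spins[vx, vy], spins[ax, ay] = spins[ax, ay], 0    # propose the vacancy jump
    e_new = local_energy(spins, vx, vy)
    if rng.random() < np.exp(-beta * (e_new - e_old)):  # Metropolis acceptance
        vx, vy = ax, ay
    else:
        spins[ax, ay], spins[vx, vy] = spins[vx, vy], 0  # undo the swap

# Staggered (checkerboard) order parameter as a simple measure of ordering.
mask = (-1) ** np.add.outer(np.arange(L), np.arange(L))
print("staggered order parameter:", abs((spins * mask).sum()) / (L * L - 1))
```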