992 results for Monte Carlo.
Abstract:
In this paper we consider a stochastic process that may experience random reset events which suddenly bring the system back to its starting value, and we analyze the relevant statistical quantities. We focus our attention on monotonic continuous-time random walks with a constant drift: the process increases between reset events, either through the random jumps or through the action of the deterministic drift. As a result of these combined factors, interesting properties emerge, such as the existence (for any drift strength) of a stationary transition probability density function, and the ability of the model to reproduce power-law-like behavior. General formulas for two extreme statistics, the survival probability and the mean exit time, are also derived. To corroborate the results of the paper independently, Monte Carlo methods were used; the numerical estimates are in full agreement with the analytical predictions.
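The abstract does not specify the waiting-time or jump-size distributions, so the following Python sketch is only a hypothetical illustration of the mechanism it describes: a constant drift plus positive random jumps, interrupted by Poissonian resets to the origin. Exponential waiting times and jump sizes, the reset rate r, the threshold L and the horizon T are all illustrative assumptions, used to estimate a survival probability and a mean exit time in the same Monte Carlo spirit as the paper's numerical check.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative parameters (not from the paper): drift v, jump rate lam,
    # mean jump size mu, reset rate r, exit threshold L, time horizon T.
    v, lam, mu, r, L, T = 0.5, 1.0, 1.0, 0.2, 10.0, 50.0

    def exit_time(max_events=100_000):
        """Simulate one trajectory; return its first-passage time over L (np.inf if none before T)."""
        t, x = 0.0, 0.0
        for _ in range(max_events):
            # Next event: either a jump (rate lam) or a reset (rate r).
            dt = rng.exponential(1.0 / (lam + r))
            # The drift alone may cross the threshold before the next event.
            if x + v * dt >= L:
                return t + (L - x) / v
            t += dt
            if t > T:
                return np.inf
            x += v * dt
            if rng.random() < r / (lam + r):
                x = 0.0                      # reset to the starting value
            else:
                x += rng.exponential(mu)     # monotonic random jump
                if x >= L:
                    return t
        return np.inf

    times = np.array([exit_time() for _ in range(20_000)])
    hit = np.isfinite(times)
    print("survival probability at T:", 1.0 - hit.mean())
    print("mean exit time (trajectories exiting before T):", times[hit].mean())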
Abstract:
This work presents new, efficient Markov chain Monte Carlo (MCMC) simulation methods for statistical analysis in various modelling applications. When using MCMC methods, the model is simulated repeatedly to explore the probability distribution describing the uncertainties in model parameters and predictions. In adaptive MCMC methods based on the Metropolis-Hastings algorithm, the proposal distribution needed by the algorithm learns from the target distribution as the simulation proceeds. Adaptive MCMC methods have been the subject of intensive research lately, as they open the way for an essentially easier use of the methodology; the lack of user-friendly computer programs has been a main obstacle to wider acceptance of the methods. This work provides two new adaptive MCMC methods: DRAM and AARJ. The DRAM method has been built especially to work in high-dimensional and non-linear problems. The AARJ method is an extension of DRAM to model selection problems, where the mathematical formulation of the model is uncertain and we want to fit several different models to the same observations simultaneously. The methods were developed with the needs of modelling applications typical of the environmental sciences in mind, and the development work was pursued while working on several application projects. The applications presented in this work are: a wintertime oxygen concentration model for Lake Tuusulanjärvi and adaptive control of the aerator; a nutrition model for Lake Pyhäjärvi and lake management planning; validation of the algorithms of the GOMOS ozone remote sensing instrument on board the Envisat satellite of the European Space Agency, and a study of the effects of aerosol model selection on the GOMOS algorithm.
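DRAM combines delayed rejection with the adaptive Metropolis idea of learning the proposal covariance from the chain history. As a rough, hypothetical sketch of the adaptive ingredient only (delayed rejection and reversible-jump steps are omitted, and the Gaussian target is a toy stand-in for a real posterior), an adaptive Metropolis sampler might look as follows; it is not the authors' DRAM implementation.

    import numpy as np

    rng = np.random.default_rng(1)

    def log_target(x):
        # Toy target: a correlated 2-D Gaussian (stand-in for a real posterior).
        cov = np.array([[1.0, 0.9], [0.9, 1.0]])
        return -0.5 * x @ np.linalg.solve(cov, x)

    def adaptive_metropolis(n_iter=20_000, d=2, adapt_start=1_000, eps=1e-6):
        sd = 2.4 ** 2 / d                      # classical adaptive-Metropolis scaling
        chain = np.zeros((n_iter, d))
        x, logp = np.zeros(d), log_target(np.zeros(d))
        prop_cov = np.eye(d) * 0.1             # initial (non-adaptive) proposal covariance
        for i in range(1, n_iter):
            if i > adapt_start:                # learn the proposal from the chain history
                prop_cov = sd * (np.cov(chain[:i].T) + eps * np.eye(d))
            y = rng.multivariate_normal(x, prop_cov)
            logq = log_target(y)
            if np.log(rng.random()) < logq - logp:   # Metropolis accept/reject
                x, logp = y, logq
            chain[i] = x
        return chain

    chain = adaptive_metropolis()
    print("posterior mean estimate:", chain[5_000:].mean(axis=0))

Recomputing the covariance from the full history at every step is deliberately naive; a practical implementation would update it recursively.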
Abstract:
Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays we face many problems, ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of realizations. The main limitation of this approach is the computational cost associated with performing complex flow simulations for each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization. Due to computational constraints, state-of-the-art methods make use of approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run only for this subset, based on which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine-learning approach. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses, predicting the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational cost and leads to an increase in accuracy and robustness of the uncertainty propagation. The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between proxy and exact curves.
In the second part of this thesis, the strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty. The individual correction of each proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications. The concept of a functional error model is useful not only in the context of uncertainty propagation but also, and maybe even more so, for performing Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from a low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, where the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy is coupled to an error model to provide an approximate response for the two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor of three with respect to one-stage MCMC results. An open question remains: how to choose the size of the learning set and how to identify the realizations that optimize the construction of the error model. This requires devising an iterative strategy to construct the error model, such that, as new flow simulations are performed, the error model is iteratively improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which we apply this methodology to a problem of saline intrusion in a coastal aquifer.
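The error-model idea of parts one and two can be sketched compactly. The snippet below is a hypothetical illustration rather than the thesis code: functional PCA is approximated by ordinary PCA on discretized response curves, a linear map is learned from proxy scores to exact scores on a small training subset, and the map is then used to correct the remaining proxy curves. The synthetic "proxy" and "exact" breakthrough curves exist only to make the example runnable.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(2)

    # Synthetic stand-ins: 200 realizations, responses discretized on 100 time steps.
    n, nt = 200, 100
    t = np.linspace(0, 1, nt)
    amp = rng.lognormal(0.0, 0.3, n)
    lag = rng.uniform(0.1, 0.4, n)
    exact = amp[:, None] / (1.0 + np.exp(-(t - lag[:, None]) / 0.05))              # "exact" curves
    proxy = 0.9 * amp[:, None] / (1.0 + np.exp(-(t - 1.1 * lag[:, None]) / 0.07))  # biased, smoothed proxy

    train = rng.choice(n, size=30, replace=False)          # subset with both responses known
    rest = np.setdiff1d(np.arange(n), train)

    # Reduce both families of curves to a few principal-component scores.
    pca_p, pca_e = PCA(n_components=5), PCA(n_components=5)
    sp_train = pca_p.fit_transform(proxy[train])
    se_train = pca_e.fit_transform(exact[train])

    # Error model: linear regression from proxy scores to exact scores.
    reg = LinearRegression().fit(sp_train, se_train)

    # Correct the remaining proxy curves and reconstruct predicted exact curves.
    se_pred = reg.predict(pca_p.transform(proxy[rest]))
    exact_pred = pca_e.inverse_transform(se_pred)

    rmse_proxy = np.sqrt(np.mean((proxy[rest] - exact[rest]) ** 2))
    rmse_corrected = np.sqrt(np.mean((exact_pred - exact[rest]) ** 2))
    print(f"RMSE of raw proxy curves:          {rmse_proxy:.4f}")
    print(f"RMSE after error-model correction: {rmse_corrected:.4f}")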
Abstract:
Intravascular brachytherapy with beta sources has become a useful technique to prevent restenosis after cardiovascular intervention. In particular, the Beta-Cath high-dose-rate system, manufactured by Novoste Corporation, is a commercially available 90Sr/90Y source for intravascular brachytherapy that is achieving widespread use. Its dosimetric characterization has attracted considerable attention in recent years. Unfortunately, the short ranges of the emitted beta particles and the associated large dose gradients make experimental measurements particularly difficult. This circumstance has motivated the appearance of a number of papers addressing the characterization of this source by means of Monte Carlo simulation techniques.
Abstract:
Molecular dynamics simulations were performed to study the ion and water distribution around a spherical charged nanoparticle. A soft nanoparticle model was designed using a set of hydrophobic interaction sites distributed in six concentric spherical layers. In order to simulate the effect of charged functionalized groups on the nanoparticle surface, a set of charged sites was distributed in the outer layer. Four charged nanoparticle models, with surface charge values from −0.035 C m⁻² to −0.28 C m⁻², were studied in NaCl and CaCl2 salt solutions at 1 M and 0.1 M concentrations to evaluate the effect of the surface charge, counterion valence, and concentration of added salt. We find that Na⁺ and Ca²⁺ ions enter the soft nanoparticle. Monovalent ions accumulate preferentially inside the nanoparticle surface, whereas divalent ions accumulate mainly in the plane of the nanoparticle surface sites. Increasing the salt concentration has little effect on the internalization of counterions, but significantly reduces the number of water molecules that enter the nanoparticle. The manner of distributing the surface charge in the nanoparticle (uniformly over all surface sites or discretely over a limited set of randomly selected sites) considerably affects the distribution of counterions in the proximity of the nanoparticle surface.
Abstract:
In the present work we focus on two indices that quantify directionality and skew-symmetrical patterns in social interactions as measures of social reciprocity: the directional consistency (DC) and skew-symmetry indices. Although both indices enable researchers to describe social groups, most studies require statistical inferential tests. The main aims of the present study are, firstly, to propose an overall statistical technique for testing null hypotheses regarding social reciprocity in behavioral studies, using the DC and skew-symmetry (Φ) statistics at the group level; and secondly, to compare both statistics in order to allow researchers to choose the optimal measure depending on the conditions. To allow researchers to make statistical decisions, statistical significance for both statistics has been estimated by means of a Monte Carlo simulation. Furthermore, this study will enable researchers to choose the optimal observational conditions for carrying out their research, as the power of the statistical tests has also been estimated.
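As an illustration of how such a Monte Carlo significance test can be set up (the DC definition used here, based on the larger-direction versus smaller-direction totals over dyads, and the null model of a binomial 50/50 split of each dyad's interactions are common conventions and not necessarily the exact choices of the paper):

    import numpy as np

    rng = np.random.default_rng(3)

    def dc_index(m):
        """Directional consistency of a round-robin interaction matrix m (one common definition)."""
        i, j = np.triu_indices_from(m, k=1)
        high = np.maximum(m[i, j], m[j, i]).sum()
        low = np.minimum(m[i, j], m[j, i]).sum()
        return (high - low) / (high + low)

    def mc_pvalue(m, n_sim=10_000):
        """Monte Carlo p-value under a null of no directionality within each dyad."""
        obs = dc_index(m)
        i, j = np.triu_indices_from(m, k=1)
        totals = m[i, j] + m[j, i]
        count = 0
        for _ in range(n_sim):
            x = rng.binomial(totals, 0.5)          # split each dyad's total at random
            sim = np.zeros_like(m)
            sim[i, j], sim[j, i] = x, totals - x
            if dc_index(sim) >= obs:
                count += 1
        return obs, (count + 1) / (n_sim + 1)

    # Hypothetical observed interaction matrix for 4 individuals (row acts on column).
    m = np.array([[0, 9, 3, 7],
                  [2, 0, 5, 1],
                  [4, 1, 0, 6],
                  [1, 0, 2, 0]])
    obs, p = mc_pvalue(m)
    print(f"observed DC = {obs:.3f}, Monte Carlo p-value = {p:.4f}")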
Abstract:
This study examined the independent effect of skewness and kurtosis on the robustness of the linear mixed model (LMM), with the Kenward-Roger (KR) procedure, when group distributions are different, sample sizes are small, and sphericity cannot be assumed. Methods: A Monte Carlo simulation study considering a split-plot design involving three groups and four repeated measures was performed. Results: The results showed that when group distributions are different, the effect of skewness on KR robustness is greater than that of kurtosis for the corresponding values. Furthermore, the pairings of skewness and kurtosis with group size were found to be relevant variables when applying this procedure. Conclusions: With sample sizes of 45 and 60, KR is a suitable option for analyzing data when the distributions are: (a) mesokurtic and not highly or extremely skewed, and (b) symmetric with different degrees of kurtosis. With a total sample size of 30, it is adequate when group sizes are equal and the distributions are: (a) mesokurtic and slightly or moderately skewed, and sphericity is assumed; and (b) symmetric with a moderate or high/extreme violation of kurtosis. Alternative analyses should be considered when the distributions are highly or extremely skewed and sample sizes are small.
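The logic of such a robustness study is to simulate many data sets under the null hypothesis from skewed, correlated distributions and record how often the test rejects. The sketch below is a simplified, hypothetical stand-in: it generates correlated gamma-margin repeated measures via a Gaussian copula and uses an ordinary one-way ANOVA on subject means instead of the KR-adjusted linear mixed model, which would require dedicated mixed-model software.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)

    def skewed_correlated_sample(n_subj, n_times=4, rho=0.6, shape=2.0):
        """Gaussian copula: correlated repeated measures with standardized gamma (skewed) margins."""
        cov = rho * np.ones((n_times, n_times)) + (1 - rho) * np.eye(n_times)
        z = rng.multivariate_normal(np.zeros(n_times), cov, size=n_subj)
        u = stats.norm.cdf(z)
        g = stats.gamma.ppf(u, a=shape)
        return (g - shape) / np.sqrt(shape)          # mean 0, variance 1 margins

    def empirical_type_I_error(group_sizes=(10, 10, 10), n_rep=2_000, alpha=0.05):
        rejections = 0
        for _ in range(n_rep):
            # All groups share the same distribution, so the null of no group effect is true.
            means = [skewed_correlated_sample(n).mean(axis=1) for n in group_sizes]
            _, p = stats.f_oneway(*means)
            rejections += p < alpha
        return rejections / n_rep

    print("empirical Type I error:", empirical_type_I_error())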
Abstract:
OBJECTIVE: To analyze, using a computational model of the ocular region, the characteristics of the dose distribution obtained with plaques containing iodine-125 and ruthenium/rhodium-106. MATERIALS AND METHODS: A voxel-based computational model of the ocular region, including its various tissues, was used, with the plaque positioned on the sclera. A Monte Carlo code was used to simulate the irradiation. The dose distribution is presented as isodose curves. RESULTS: The computational simulations show the dose distribution inside the eyeball and in the external structures. The results allow a comparison of the spatial dose distributions produced by beta particles and by photons. The simulations show that the application of iodine-125 seeds delivers a high dose to the lens, whereas ruthenium/rhodium-106 produces a high dose at the surface of the sclera. CONCLUSION: The dose to the lens depends on the tumor thickness, on the position and diameter of the plaque, and on the radionuclide used. In the present study, the ruthenium/rhodium-106 source is recommended for tumors of small dimensions. Irradiation with iodine-125 delivers higher doses to the lens than irradiation with ruthenium/rhodium-106. The maximum lens dose corresponds to 12.75% of the maximum dose for iodine-125 and only 0.005% for ruthenium/rhodium-106.
Abstract:
A thorough literature review of the current situation regarding the implementation of eye lens monitoring has been performed in order to provide recommendations on dosemeter types, calibration procedures, and practical aspects of eye lens monitoring for interventional radiology personnel. The most relevant data and recommendations from about 100 papers have been analysed and classified into the following topics: present challenges in eye lens monitoring; conversion coefficients, phantoms and calibration procedures for eye lens dose evaluation; correction factors and dosemeters for eye lens dose measurements; dosemeter position and influence of protective devices. The major findings of the review can be summarised as follows: the recommended operational quantity for eye lens monitoring is Hp(3). At present, several dosemeters are available for eye lens monitoring and calibration procedures are being developed. In practice, however, alternative methods are very often used to assess the dose to the eye lens. A summary of correction factors found in the literature for the assessment of the eye lens dose is provided. These factors can give an estimate of the eye lens dose when alternative methods, such as the use of a whole-body dosemeter, are employed. A wide range of values is found, indicating the large uncertainty associated with these simplified methods. Reduction factors for the most common protective devices, obtained experimentally and using Monte Carlo calculations, are presented. The paper concludes that a dosemeter placed at collar level outside the lead apron can provide a useful first estimate of the eye lens exposure. However, for workplaces where the estimated annual equivalent dose to the eye lens is close to the dose limit, specific eye lens monitoring should be performed. Finally, training of the involved medical staff on the risks of ionising radiation for the eye lens and on the correct use of protective systems is strongly recommended.
Abstract:
OBJECTIVE: To use the PENELOPE code and develop geometries containing heterogeneities in order to simulate the behavior of a photon beam under these conditions. MATERIALS AND METHODS: Simulations of the behavior of ionizing radiation were performed for the homogeneous case (water only) and for heterogeneous cases with different materials. Cubic geometries were considered for the phantoms and parallelepiped geometries for the heterogeneities, with the following compositions: bone- and lung-equivalent tissue, following the recommendations of the International Commission on Radiological Protection, as well as titanium, aluminum, and silver. The input parameters were defined as: source particle type and energy, 6 MV photons; source-to-surface distance, 100 cm; and a 10 × 10 cm² radiation field. RESULTS: Percentage depth dose curves were obtained for all cases. It was observed that in materials with high electron density, such as silver, the absorbed dose is higher than the dose absorbed in the homogeneous phantom, whereas in the lung-equivalent tissue the dose is lower. CONCLUSION: The results demonstrate the importance of taking heterogeneities into account in the algorithms of the treatment planning systems used to calculate dose distributions in patients, thereby avoiding under- or overdosage of the tissues close to the heterogeneities.
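For reference, a percentage depth dose curve is just the depth dose profile normalized to its maximum. The sketch below is a toy analytic illustration (simple build-up times exponential attenuation, with made-up coefficients), not PENELOPE output, showing how a denser slab changes the normalized profile:

    import numpy as np

    # Toy depth dose model (not a PENELOPE simulation): exponential attenuation
    # with a simple build-up term; all coefficients are illustrative only.
    z = np.linspace(0, 20, 401)                 # depth in cm
    mu_water, mu_silver = 0.049, 0.40           # illustrative attenuation coefficients (1/cm)

    def depth_dose(mu_profile, dz):
        buildup = 1.0 - np.exp(-z / 1.5)        # crude build-up region near the surface
        transmission = np.exp(-np.cumsum(mu_profile) * dz)
        return buildup * transmission

    dz = z[1] - z[0]
    mu_hom = np.full_like(z, mu_water)          # homogeneous water phantom
    mu_het = mu_hom.copy()
    mu_het[(z >= 5.0) & (z <= 6.0)] = mu_silver # 1 cm "silver" slab at 5 cm depth

    for label, mu in [("homogeneous", mu_hom), ("with slab", mu_het)]:
        d = depth_dose(mu, dz)
        pdd = 100.0 * d / d.max()               # percentage depth dose
        print(f"{label:12s} PDD at 10 cm depth: {pdd[np.searchsorted(z, 10.0)]:.1f}%")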
Abstract:
Pressurized re-entrant (or 4π) ionization chambers (ICs) connected to current-measuring electronics are used for activity measurements of photon-emitting radionuclides and some beta emitters in the fields of metrology and nuclear medicine. As a secondary method, these instruments need to be calibrated with appropriate activity standards from primary or direct standardization. The use of these instruments over 50 years has been well described in numerous publications, such as Monographie BIPM-4 and the special issue of Metrologia on radionuclide metrology (Ratel 2007 Metrologia 44 S7-16; Schrader 1997 Activity Measurements With Ionization Chambers (Monographie BIPM-4); Schrader 2007 Metrologia 44 S53-66; Cox et al 2007 Measurement Modelling of the International Reference System (SIR) for Gamma-Emitting Radionuclides (Monographie BIPM-7)). The present work describes, in its first part, the principles of activity measurements, calibrations, and impurity corrections using pressurized ionization chambers and, in its second part, the uncertainty analysis, illustrated with example uncertainty budgets from a routine source calibration as well as from an international reference system (SIR) measurement.
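As a minimal illustration of how such an uncertainty budget is combined (the listed components and their values are purely hypothetical, not taken from the paper), the relative standard uncertainties are added in quadrature and the activity follows from the measured current and the calibration factor:

    import math

    # Hypothetical ionization-chamber measurement: activity from current and
    # a calibration factor (efficiency), A = I / F.  All numbers are illustrative.
    current_pA = 85.2          # net ionization current
    calib_pA_per_MBq = 1.42    # calibration factor for this radionuclide
    activity_MBq = current_pA / calib_pA_per_MBq

    # Illustrative relative standard uncertainties (in %) of the main components.
    components = {
        "current measurement": 0.05,
        "calibration factor": 0.20,
        "background subtraction": 0.03,
        "source positioning": 0.10,
        "decay correction": 0.02,
        "impurity correction": 0.15,
    }

    u_rel = math.sqrt(sum(u ** 2 for u in components.values()))   # combined, in %
    print(f"activity: {activity_MBq:.2f} MBq")
    print(f"combined relative standard uncertainty: {u_rel:.2f} %")
    print(f"expanded uncertainty (k=2): {2 * u_rel:.2f} %")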
Abstract:
OBJECTIVE: To evaluate depth dose profiles and spatial dose distributions for ocular proton therapy protocols, based on computational simulations with a nuclear code and an eye model discretized into voxels. MATERIALS AND METHODS: The computational tools employed were the Geant4 (GEometry ANd Tracking) toolkit and SISCODES (Computational System for Dosimetry in Radiotherapy). Geant4 is a free software package used to simulate the passage of electrically charged nuclear particles through matter by the Monte Carlo method. Computational simulations reproducing proton radiotherapy at pre-existing facilities were carried out. RESULTS: The simulation data were integrated into the eye model through the SISCODES code in order to generate the spatial dose distributions. Depth dose profiles reproducing the pristine and modulated Bragg peaks are presented. Important aspects of proton radiotherapy planning are addressed, such as the absorber material, modulation, collimator dimensions, incident proton energy, and isodose production. CONCLUSION: We conclude that proton therapy, when properly modulated and directed, can reproduce ideal dose deposition conditions in ocular neoplasms.
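The modulated profile mentioned above, the spread-out Bragg peak (SOBP), is produced by superposing range-shifted pristine Bragg peaks with suitable weights. The sketch below is a hypothetical illustration with a crude analytic pristine-peak shape and weights obtained by a least-squares fit to flatten the plateau; it is not the Geant4/SISCODES calculation.

    import numpy as np

    z = np.linspace(0, 35, 701)                     # depth in mm (illustrative)

    def pristine_peak(z, r):
        """Toy pristine Bragg curve with range r: slowly rising entrance dose plus a sharp peak."""
        entrance = 0.35 * (1.0 + 0.01 * z) * (z < r)
        peak = np.exp(-0.5 * ((z - r) / 1.2) ** 2)
        return entrance + peak

    ranges = np.linspace(20.0, 30.0, 11)            # pullbacks produced by a range modulator
    peaks = np.array([pristine_peak(z, r) for r in ranges])

    # Least-squares weights chosen so the summed dose is flat over the SOBP plateau.
    plateau = (z >= 20.0) & (z <= 29.5)
    w, *_ = np.linalg.lstsq(peaks[:, plateau].T, np.ones(plateau.sum()), rcond=None)
    sobp = w @ peaks

    flatness = sobp[plateau].std() / sobp[plateau].mean()
    print("modulator weights:", np.round(w, 3))
    print(f"plateau flatness (std/mean): {flatness:.3%}")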
Abstract:
OBJECTIVE: To compare dosimetry and photon fluence data among different breast models, discussing their application in constancy tests and dosimetric studies in mammography. MATERIALS AND METHODS: Different homogeneous models and an anthropomorphic voxel breast model were simulated, and the following quantities were scored: the total dose absorbed in the model, the dose absorbed by the glandular tissue/equivalent material, and the absorbed dose and photon fluence at different depths in the models. A simulated ionization chamber collected the entrance skin kerma. The target-filter combinations studied were Mo-30Mo and Mo-25Rh, for accelerating potentials from 26 kVp to 34 kVp. RESULTS: Compared with the voxel model, the normalized glandular dose showed differences from -15% to -21% for the RMI model, -10% for PhantomMama, and 10% for the Barts and Keithley models. The variation in half-value layer between models was generally below 10% for all sensitive volumes. CONCLUSION: To evaluate the normalized glandular dose and the glandular dose for average breasts, the Dance model is recommended. The homogeneous models should be used for constancy tests in dosimetry, but they are not suitable for estimating doses in real patients.
Abstract:
OBJECTIVE: This article presents a procedure for converting computed tomography or magnetic resonance images into a three-dimensional voxel model for dosimetry purposes. This model is a personalized representation of the patient that can be used to simulate, with the MCNP (Monte Carlo N-Particle) code, the transport of nuclear particles, reproducing the stochastic process of the interaction of nuclear particles with human tissues. MATERIALS AND METHODS: The computational system developed, named SISCODES, is a tool for three-dimensional computational planning of radiotherapy treatments or radiological procedures. Starting from tomographic images of the patient, the treatment plan is modeled and simulated. The absorbed doses are then displayed as isodose curves superimposed on the model. SISCODES couples the three-dimensional model to the MCNP5 code, which simulates the ionizing-radiation exposure protocol. RESULTS: SISCODES has been used by the NRI/CNPq research group to create anthropomorphic and anthropometric voxel models that are coupled to the MCNP code to model brachytherapy and teletherapy applied to tumors in the lungs, pelvis, spine, head, neck, and other sites. The modules currently implemented in SISCODES are presented together with example cases of radiotherapy planning. CONCLUSION: SISCODES provides a fast way to create personalized voxel models of any patient, which can be used in simulations with stochastic codes such as MCNP. Combining MCNP simulation with a personalized patient model brings major improvements to the dosimetry of radiotherapy treatments.
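A central step in building such a voxel model is mapping each CT voxel's Hounsfield number to a material label and mass density for the transport code. The sketch below is a generic, hypothetical illustration (the HU thresholds and densities are made up, and the output is a plain array rather than an actual MCNP input deck):

    import numpy as np

    rng = np.random.default_rng(4)

    # Stand-in for a CT volume in Hounsfield units (a real case would load DICOM slices).
    hu = rng.normal(0, 400, size=(64, 64, 32)).clip(-1000, 2000)

    # Illustrative HU ranges -> (material id, mass density g/cm3); not clinical values.
    materials = [
        (-1000, -850, 1, 0.001),   # air
        (-850,  -200, 2, 0.30),    # lung
        (-200,   100, 3, 1.00),    # soft tissue
        ( 100,   400, 4, 1.20),    # dense tissue / cartilage
        ( 400,  2001, 5, 1.85),    # bone
    ]

    material_id = np.zeros(hu.shape, dtype=np.uint8)
    density = np.zeros(hu.shape)
    for lo, hi, mat, rho in materials:
        mask = (hu >= lo) & (hu < hi)
        material_id[mask] = mat
        density[mask] = rho

    # Each voxel now carries a material label and a density, ready to be written out
    # in whatever format the transport code (e.g., MCNP lattice cards) expects.
    ids, counts = np.unique(material_id, return_counts=True)
    print(dict(zip(ids.tolist(), counts.tolist())))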
Abstract:
Understanding and quantifying seismic energy dissipation, which manifests itself in terms of velocity dispersion and attenuation, in fluid-saturated porous rocks is of considerable interest, since it offers the perspective of extracting information with regard to the elastic and hydraulic rock properties. There is increasing evidence to suggest that wave-induced fluid flow, or simply WIFF, is the dominant underlying physical mechanism governing these phenomena throughout the seismic, sonic, and ultrasonic frequency ranges. This mechanism, which can prevail at the microscopic, mesoscopic, and macroscopic scale ranges, operates through viscous energy dissipation in response to fluid pressure gradients and inertial effects induced by the passing wavefield. In the first part of this thesis, we present an analysis of broad-band multi-frequency sonic log data from a borehole penetrating water-saturated unconsolidated glacio-fluvial sediments. An inherent complication arising in the interpretation of the observed P-wave attenuation and velocity dispersion is, however, that the relative importance of WIFF at the various scales is unknown and difficult to unravel. An important generic result of our work is that the levels of attenuation and velocity dispersion due to the presence of mesoscopic heterogeneities in water-saturated unconsolidated clastic sediments are expected to be largely negligible. Conversely, WIFF at the macroscopic scale explains most of the considered data, while refinements provided by including WIFF at the microscopic scale in the analysis are locally meaningful. Using a Monte-Carlo-type inversion approach, we compare the models describing WIFF at the macroscopic and microscopic scales with regard to their ability to constrain the dry frame elastic moduli and the permeability as well as their local probability distributions. In the second part of this thesis, we explore the issue of determining the size of a representative elementary volume (REV) arising in numerical upscaling procedures for the effective seismic velocity dispersion and attenuation of heterogeneous media. To this end, we focus on a set of idealized synthetic rock samples characterized by the presence of layers, fractures, or patchy saturation in the mesoscopic scale range. These scenarios are highly pertinent because they tend to be associated with very high levels of velocity dispersion and attenuation caused by WIFF in the mesoscopic scale range. The problem of determining the REV size for generic heterogeneous rocks is extremely complex and entirely unexplored in the given context. In this pilot study, we have therefore focused on periodic media, which ensures the inherent self-similarity of the considered samples regardless of their size and thus simplifies the problem to a systematic analysis of the dependence of the REV size on the boundary conditions applied in the numerical simulations. Our results demonstrate that boundary condition effects are absent for layered media and negligible in the presence of patchy saturation, thus resulting in minimum REV sizes. Conversely, strong boundary condition effects arise in the presence of a periodic distribution of finite-length fractures, thus leading to large REV sizes.
In the third part of the thesis, we propose a novel effective poroelastic model for periodic media characterized by mesoscopic layering, which accounts for WIFF at both the macroscopic and mesoscopic scales as well as for the anisotropy associated with the layering. Correspondingly, this model correctly predicts the existence of the fast and slow P-waves as well as quasi and pure S-waves for any direction of wave propagation, as long as the corresponding wavelengths are much larger than the layer thicknesses. The primary motivation for this work is that, for formations of intermediate to high permeability, such as, for example, unconsolidated sediments, clean sandstones, or fractured rocks, these two WIFF mechanisms may prevail at similar frequencies. This scenario, which can be expected to be rather common, cannot be accounted for by existing models for layered porous media. Comparisons of analytical solutions for the P- and S-wave phase velocities and inverse quality factors for wave propagation perpendicular to the layering with those obtained from numerical simulations based on a 1D finite-element solution of the poroelastic equations of motion show very good agreement as long as the assumption of long wavelengths remains valid. A limitation of the proposed model is its inability to account for inertial effects in mesoscopic WIFF when both WIFF mechanisms prevail at similar frequencies. Our results do, however, also indicate that the associated error is likely to be relatively small, as, even at frequencies at which both inertial and scattering effects are expected to be at play, the proposed model provides a solution that is remarkably close to its numerical benchmark.