48 results for Direction of Arrival Estimator
Abstract:
It has been suggested that Ménière's disease is part of a polyganglionitis in which symptoms result from the reactivation of a neurotropic virus within the internal auditory canal, and that intratympanic application of an antiviral agent might be an effective therapy. In 2002, we performed a pilot study that yielded encouraging results: control of vertigo was achieved in 80% of the 17 patients included. We present here a prospective, double-blind study, with a 2-year follow-up, in 29 patients referred by ENT practitioners for surgical treatment after failure of medical therapy. Participation in the study was offered to patients prior to surgery. A solution of ganciclovir 50 mg/ml or of NaCl 0.9% was delivered for 10 consecutive days via a microwick inserted into the tympanic membrane in the direction of the round window or through a ventilation tube. One patient was withdrawn from the study immediately after the end of the injections: he could not complete the follow-up period because of persisting vertigo. As he had received the placebo, he was then treated with the ganciclovir solution. Symptoms persisted and he underwent a vestibular neurectomy. Among the remaining 28 patients, surgery could be postponed in 22 (81%). Surgery remained necessary to control vertigo in 3 patients from the group that received the antiviral agent and in 3 from the control group. Using an analogue scale, patients in both groups indicated a similar improvement of their health immediately after the intratympanic injections. The scores obtained with the 36-item Short Form Health Survey (SF-36) quality-of-life questionnaire and the Dizziness Handicap Inventory were also similar for both groups. In conclusion, most patients improved after the intratympanic injections, but there was no obvious difference between the treated and control groups. The benefit might be due to the middle-ear ventilation or reflect an improvement in the patients' emotional state.
Abstract:
Preface: The starting point for this work, and eventually the subject of the whole thesis, was the question of how to estimate the parameters of affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for characteristic functions, has made them the models of choice for many theoretical constructions and practical applications. At the same time, estimation of the parameters of stochastic volatility jump-diffusion models is not a straightforward task. The problem stems from the variance process, which is not observable. There are several estimation methodologies that deal with the estimation of latent variables. One appeared to be particularly interesting: it proposes an estimator that, in contrast to the other methods, requires neither discretization nor simulation of the process: the Continuous Empirical Characteristic Function (ECF) estimator based on the unconditional characteristic function. However, the procedure was derived only for stochastic volatility models without jumps. Thus, it became the subject of my research. This thesis consists of three parts, each written as an independent, self-contained article. At the same time, the questions answered by the second and third parts of this work arise naturally from the issues investigated and the results obtained in the first one. The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for stochastic volatility models with jumps in both the asset price and variance processes. The estimation procedure is based on the joint unconditional characteristic function of the stochastic process. The major analytical result of this part, as well as of the whole thesis, is the closed-form expression for the joint unconditional characteristic function of the stochastic volatility jump-diffusion models. The empirical part of the chapter suggests that, besides stochastic volatility, jumps in both the mean and the volatility equations are relevant for modelling returns of the S&P 500 index, which was chosen as a general representative of the stock asset class. Hence, the next question is: what jump process should be used to model returns of the S&P 500? The decision about the jump process in the framework of affine jump-diffusion models boils down to defining the intensity of the compound Poisson process, a constant or some function of the state variables, and to choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, there are at least three distributions of the jump size currently used for asset log-prices: normal, exponential and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if the S&P 500 index is to be modelled by a stochastic volatility jump-diffusion model. This is a surprising result: the exponential distribution has fatter tails, and for this reason either the exponential or the double exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data. The idea of testing the efficiency of the Continuous ECF estimator on simulated data had already appeared when the first estimation results of the first chapter were obtained. In the absence of a benchmark or any ground for comparison, there is no way to be sure that our parameter estimates and the true parameters of the models coincide.
The conclusion of the second chapter provides one more reason to do that kind of test. Thus, the third part of this thesis concentrates on the estimation of the parameters of stochastic volatility jump-diffusion models on the basis of asset price time series simulated from various "true" parameter sets. The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of finding the true parameters, and the third chapter proves that our estimator indeed has the ability to do so. Once it is clear that the Continuous ECF estimator based on the unconditional characteristic function works, the next question immediately arises: can the computational effort be reduced without affecting the efficiency of the estimator, or can the efficiency of the estimator be improved without dramatically increasing the computational burden? The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function used for its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure is. In practice, however, this relationship is not so straightforward, owing to increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, bi-dimensional, unconditional characteristic function. As a result, the preference for one or the other depends on the model to be estimated; thus, the computational effort can be reduced in some cases without affecting the efficiency of the estimator. Improving the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter of this thesis, in addition to what was discussed above, compares the performance of the estimators with bi- and three-dimensional unconditional characteristic functions on simulated data. It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, owing to the limitations of the computing power and optimization toolboxes available to the general public. Thus, the Continuous ECF estimator based on the joint, bi-dimensional, unconditional characteristic function has every reason to exist and to be used for the estimation of the parameters of stochastic volatility jump-diffusion models.
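As an illustration of the estimation idea described above, here is a minimal, hypothetical sketch of characteristic-function matching: parameters are chosen to minimize a weighted distance between the model characteristic function and the empirical characteristic function of the data. A Gaussian model stands in for the affine stochastic volatility jump-diffusion models, whose joint characteristic function is considerably more involved; the weight function, grid and data below are arbitrary assumptions, not the thesis's setup.

```python
import numpy as np
from scipy.optimize import minimize

def ecf(u, x):
    """Empirical characteristic function of sample x at points u."""
    return np.exp(1j * np.outer(u, x)).mean(axis=1)

def model_cf(u, mu, sigma):
    """Characteristic function of N(mu, sigma^2)."""
    return np.exp(1j * u * mu - 0.5 * sigma**2 * u**2)

def objective(theta, u, phi_hat, w):
    # Weighted integrated squared distance between model and empirical CF.
    mu, log_sigma = theta
    diff = phi_hat - model_cf(u, mu, np.exp(log_sigma))
    return np.sum(w * np.abs(diff) ** 2)

rng = np.random.default_rng(0)
x = rng.normal(0.05, 0.2, size=2000)   # simulated "log-returns"
u = np.linspace(-10, 10, 201)          # grid approximating the integral
w = np.exp(-u**2)                      # weight damping the noisy CF tails
phi_hat = ecf(u, x)

res = minimize(objective, x0=[0.0, np.log(0.1)], args=(u, phi_hat, w))
print("mu_hat =", round(res.x[0], 3), " sigma_hat =", round(np.exp(res.x[1]), 3))
```

For the affine models of the thesis, the same recipe applies with the closed-form joint unconditional characteristic function in place of `model_cf`.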
Abstract:
The objective was to analyze the situation in Switzerland regarding the prevalence of overweight and obesity in children, adolescents and adults. The data were compared with France, an adjacent, much larger country. The results showed that there is a definite lack of objective information in Switzerland on the prevalence of obesity at different ages. As in other European studies, the fact that many national surveys are classically based on subject interviews (self-reported weights and heights rather than measured values) implies that the overweight/obesity prevalence is largely underestimated in adulthood. For example, in a recent Swiss epidemiological study, the prevalence of obesity (BMI > 30 kg/m²) averaged 6-7% in young men and women (25-34 years), the prevalence being underestimated by a factor of two to three when body weight was self-reported rather than measured. This phenomenon has already been observed in previous European studies. It is concluded that national surveys based on telephone interviews generally produce biased obesity prevalence results, although the direction of the changes in the prevalence of obesity and its evolution with repeated surveys using a strictly standardized methodology may be evaluated correctly. Therefore, these surveys should be complemented by large-scale epidemiological studies (based on measured rather than self-reported anthropometric variables) covering the different linguistic areas of Switzerland. An epidemiological body-weight (BMI) monitoring and surveillance system, using a methodology harmonized among European countries, would help to accurately assess differences in obesity prevalence across Europe without methodological bias. It would also permit monitoring of the dynamic evolution of obesity prevalence as well as the development of appropriate strategies (taking into account the specificity of each country) for obesity prevention and treatment.
Abstract:
How phenomena like helping, dispersal, or the sex ratio evolve depends critically on demographic and life-history factors. One phenotype that is of particular interest to biologists is genomic imprinting, which results in parent-of-origin-specific gene expression and thus deviates from the predictions of Mendel's rules. The most prominent explanation for the evolution of genomic imprinting, the kinship theory, originally specified that multiple paternity can cause the evolution of imprinting when offspring affect maternal resource provisioning. Most models of the kinship theory do not detail how population subdivision, demography, and life history affect the evolution of imprinting. In this work, we embed the classic kinship theory within an island model of population structure and allow for diverse demographic and life-history features to affect the direction of selection on imprinting. We find that population structure does not change how multiple paternity affects the evolution of imprinting under the classic kinship theory. However, if the degree of multiple paternity is not too large, we find that sex-specific migration and survival and generation overlap are the primary factors determining which allele is silenced. This indicates that imprinting can evolve purely as a result of sex-related asymmetries in the demographic structure or life history of a species.
Abstract:
Natural selection favors alleles that increase the number of offspring produced by their carriers. But in a world that is inherently uncertain within generations, selection also favors alleles that reduce the variance in the number of offspring produced. While previous studies have established this principle, they have largely ignored fundamental aspects of sexual reproduction, and therefore how selection on sex-specific reproductive variance operates. To study the evolution and consequences of sex-specific reproductive variance, we present a population-genetic model of phenotypic evolution in a dioecious population that incorporates previously neglected components of reproductive variance. First, we derive the probability of fixation for mutations that affect male and/or female reproductive phenotypes under sex-specific selection. We find that even in the simplest scenarios, the direction of selection is altered when reproductive variance is taken into account. In particular, previously unaccounted-for covariances between the reproductive outputs of different individuals are expected to play a significant role in determining the direction of selection. Then, the probability of fixation is used to develop a stochastic model of joint male and female phenotypic evolution. We find that sex-specific reproductive variance can be responsible for changes in the course of long-term evolution. Finally, the model is applied to an example of parental-care evolution. Overall, our model allows for the evolutionary analysis of social traits in finite, dioecious populations, where interactions can occur within and between sexes under a realistic scenario of reproduction.
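For intuition on why offspring-number variance can alter the direction of selection, a toy calculation in this spirit (not the paper's model, which is sex-specific and includes covariances) combines Kimura's classic diffusion approximation for the fixation probability with a Gillespie-style penalty in which within-generation variance sigma² reduces the effective selection coefficient by roughly sigma²/N; the numbers below are illustrative assumptions.

```python
import numpy as np

def fixation_prob(s_eff, N, p0):
    """Kimura's diffusion approximation for a haploid population of size N."""
    if abs(s_eff) < 1e-12:
        return p0  # neutral limit: fixation probability equals initial frequency
    return (1 - np.exp(-2 * N * s_eff * p0)) / (1 - np.exp(-2 * N * s_eff))

N = 1000
p0 = 1.0 / N        # a single new mutant copy
s_mean = 0.002      # advantage in mean offspring number

for sigma2 in (0.0, 1.0, 3.0):
    s_eff = s_mean - sigma2 / N  # Gillespie-style variance penalty
    print(f"sigma2={sigma2:.1f}  s_eff={s_eff:+.4f}  "
          f"u={fixation_prob(s_eff, N, p0):.5f}")
```

With sigma² = 3 the effective selection coefficient turns negative and the fixation probability drops below the neutral value 1/N, illustrating how variance alone can flip the direction of selection.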
Abstract:
Background: Glutathione (GSH) dysregulation at the gene, protein and functional levels observed in schizophrenia patients, and schizophrenia-like anomalies in GSH-deficit experimental models, suggest that genetic impairments of glutathione synthesis represent one major risk factor for the disease (Do et al., 2009). In a randomized, double-blind, placebo-controlled, add-on clinical trial of 140 patients, the GSH precursor N-acetyl-cysteine (NAC, 2 g/day, 6 months) significantly improved the negative symptoms and reduced side effects due to antipsychotics (Berk et al., 2008). In a subset of patients (n=7), NAC (2 g/day, 2 months, cross-over design) also improved auditory evoked potentials, namely the NMDA-dependent mismatch negativity (Lavoie et al., 2008). Methods: To determine whether increased GSH levels would modulate the topography of functional brain connectivity, we applied a multivariate phase synchronization (MPS) estimator (Knyazeva et al., 2008) to dense-array EEGs recorded during rest with eyes closed at the protocol onset, at the point of crossover, and at its end. Phase synchronization phenomena are appealing because they can be associated with synchronized phases while the amplitudes remain uncorrelated. MPS measures the degree of interaction among the recorded neuronal oscillators by quantifying to what extent they behave like a macro-oscillator (i.e. the oscillators are phase synchronous). To assess the whole-head synchronization topography, we computed the MPS sensor-wise over the cluster of locations defined by the sensor itself and the surrounding ones belonging to its second-order neighborhood (Carmeli et al., 2005). Such a cluster spans about 12 cm on average. Results: The whole-head imaging revealed a specific synchronization landscape in the NAC compared to the placebo condition. In particular, NAC increased MPS over frontal and left temporal regions in a frequency-specific manner. Importantly, the topography and direction of the MPS changes were similar and robust in all 7 patients. Moreover, these changes correlated with the changes in Liddle's disorganization score (Liddle, 1987), thus linking EEG synchronization to the improvement of the clinical picture. Discussion: The data suggest an important pathway towards new therapeutic strategies that target GSH dysregulation in schizophrenia. They also show the utility of MPS mapping as a marker of treatment efficacy.
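A simplified stand-in for the phase-synchronization idea (not the actual MPS estimator of Knyazeva et al. and Carmeli et al., which is multivariate and more elaborate): extract instantaneous phases with the Hilbert transform and summarize how coherently a cluster of channels oscillates with the Kuramoto order parameter R, which approaches 1 when the cluster behaves like a single macro-oscillator. All signals and parameters below are synthetic.

```python
import numpy as np
from scipy.signal import hilbert

def phase_sync(signals):
    """signals: (channels, samples) narrow-band array -> time-averaged R."""
    phases = np.angle(hilbert(signals, axis=1))       # instantaneous phases
    order = np.abs(np.exp(1j * phases).mean(axis=0))  # R(t) across channels
    return order.mean()

fs = 250
t = np.arange(0, 4, 1 / fs)                # 4 s of toy "EEG" at 250 Hz
rng = np.random.default_rng(1)
base = 2 * np.pi * 10 * t                  # shared 10 Hz rhythm

# Five channels sharing one rhythm (small phase noise) vs. five
# channels with independent frequencies and phases.
coupled = np.sin(base + 0.3 * rng.standard_normal((5, t.size)))
uncoupled = np.sin(2 * np.pi * rng.uniform(8, 12, 5)[:, None] * t
                   + rng.uniform(0, 2 * np.pi, 5)[:, None])

print(f"coupled R ~ {phase_sync(coupled):.2f}, "
      f"uncoupled R ~ {phase_sync(uncoupled):.2f}")
```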
Abstract:
Introduction: Use of paracetamol has been associated with an increased risk of asthma in several epidemiological studies. In contrast, it has been suggested that non-steroidal anti-inflammatory drugs (NSAIDs) might be protective (Kanabar, Clin Ther 2007), but data relating to these drugs are scarce. Methods: Prevalence of asthma and intake of analgesics in the past 2 years were assessed by questionnaire in 2008 in young adults (≥16 years) diagnosed with cancer between 1976 and 2003 (Swiss Childhood Cancer Survivor Study). In a multivariate logistic regression we analysed the association between asthma and intake of paracetamol only, NSAIDs only or their combination, adjusting for age, sex, cancer diagnosis, cancer therapy and time since diagnosis. Results: Of the 1293 participants (response rate 68%), 83 (6%) reported asthma and 845 (65%) reported intake of analgesics in the past 2 years. Of these, 257 (29%) took paracetamol only, 224 (25%) NSAIDs only, 312 (35%) a combination of both, and 52 (6%) other analgesics. Adjusted odds ratios for asthma were 2.2 (95% CI 1.0-4.7; p = 0.04), 1.9 (0.9-4.3; p = 0.12) and 2.9 (1.4-6.1; p < 0.01) in those using paracetamol only, NSAIDs only, or their combination, respectively. Conclusion: These cross-sectional data in a selected population do not support a protective effect of NSAIDs against asthma, whether taken alone or in combination with paracetamol. All analgesics were positively associated with reported asthma episodes in the past two years. This can be explained by reverse causation, with intake of analgesics being a result rather than a cause of asthma events. Randomised controlled trials in unselected populations are needed to clarify the direction of causation.
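For readers unfamiliar with how such adjusted odds ratios are obtained, here is a hedged sketch of the general recipe on simulated data (the variable names, covariates and data are invented, not the study's): fit a logistic regression and exponentiate the coefficients and their confidence limits.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 1293
# Simulated stand-in data; no real association is built in.
df = pd.DataFrame({
    "asthma": rng.binomial(1, 0.06, n),
    "group": rng.choice(["none", "paracetamol", "nsaid", "both"], n),
    "age": rng.uniform(16, 50, n),
    "sex": rng.choice(["m", "f"], n),
})

# Logistic regression of asthma on analgesic group, adjusted for
# age and sex, with "none" as the reference category.
model = smf.logit("asthma ~ C(group, Treatment('none')) + age + sex",
                  data=df).fit(disp=0)

# Adjusted odds ratios with 95% confidence intervals.
or_table = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI 2.5%": np.exp(model.conf_int()[0]),
    "CI 97.5%": np.exp(model.conf_int()[1]),
})
print(or_table.round(2))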
Abstract:
Phototropism enables plants to orient growth towards the direction of light and thereby maximizes photosynthesis in low-light environments. In angiosperms, blue-light photoreceptors called phototropins are primarily involved in sensing the direction of light. Phytochromes and cryptochromes (sensing red/far-red and blue light, respectively) also modulate asymmetric hypocotyl growth, leading to phototropism. Interactions between different light-signaling pathways regulating phototropism occur in cryptogams and angiosperms. In this review, we focus on the molecular mechanisms underlying the co-action between photosensory systems in the regulation of hypocotyl phototropism in Arabidopsis thaliana. Recent studies have shown that phytochromes and cryptochromes enhance phototropism by controlling the expression of important regulators of phototropin signaling. In addition, phytochromes may also regulate growth towards light via direct interaction with the phototropins.
Abstract:
The motivation for this research originated in the abrupt rise and fall of minicomputers, which were initially used both for industrial automation and for business applications due to their significantly lower cost than their predecessors, the mainframes. Later, industrial automation developed its own vertically integrated hardware and software to address the application needs of uninterrupted operations, real-time control and resilience to harsh environmental conditions. This led to the creation of an independent industry, namely industrial automation as used in PLC, DCS, SCADA and robot control systems. This industry today employs over 200,000 people in a profitable, slow-clockspeed context, in contrast to the two mainstream computing industries: information technology (IT), focused on business applications, and telecommunications, focused on communications networks and hand-held devices. Already in the 1990s it was foreseen that IT and communications would merge into one information and communication technology (ICT) industry. The fundamental question of the thesis is: could industrial automation leverage a common technology platform with the newly formed ICT industry? Computer systems dominated by complex instruction set computers (CISC) were challenged during the 1990s by higher-performance reduced instruction set computers (RISC). RISC started to evolve in parallel with the constant advancement of Moore's law. These developments created the high-performance, low-energy-consumption System-on-Chip (SoC) architecture. Unlike with CISC processors, RISC processor architecture is a separate industry from RISC chip manufacturing. It also has several hardware-independent software platforms, each consisting of an integrated operating system, development environment, user interface and application market, which gives customers more choices thanks to hardware-independent, real-time-capable software applications. An architecture disruption emerged, and the smartphone and tablet markets were formed with new rules and new key players in the ICT industry. Today there are more RISC computer systems running Linux (or other Unix variants) than any other computer system. The astonishing rise of SoC-based technologies and related software platforms in smartphones created, in unit terms, the largest installed base ever seen in the history of computers, and it is now being further extended by tablets. An additional underlying element of this transition is the increasing role of open-source technologies in both software and hardware. This has driven the microprocessor-based personal computer industry, with its few dominant closed operating-system platforms, into a steep decline. A significant factor in this process has been the separation of processor architecture from processor chip production, together with the merger of operating systems and application-development platforms into integrated software platforms with proprietary application markets. Furthermore, pay-by-click marketing has changed the way application development is compensated: freeware, ad-based or licensed, all at a lower price and used by a wider customer base than ever before. Moreover, the concept of a software maintenance contract is very remote in the app world. However, as a slow-clockspeed industry, industrial automation has remained intact during the disruptions based on SoC and related software platforms in the ICT industries.
Industrial automation incumbents continue to supply systems based on vertically integrated architectures consisting of proprietary software and proprietary, mainly microprocessor-based, hardware. They enjoy admirable profitability levels on a very narrow customer base due to strong technology-enabled customer lock-in and customers' high risk exposure, as their production depends on fault-free operation of the industrial automation systems. When will this balance of power be disrupted? The thesis suggests how industrial automation could join the mainstream ICT industry and create an information, communication and automation (ICAT) industry. Lately, the Internet of Things (IoT) and weightless networks, a new standard leveraging frequency channels earlier occupied by TV broadcasting, have gradually started to change the rigid world of machine-to-machine (M2M) interaction. It is foreseeable that enough momentum will be created that the industrial automation market will in due course face an architecture disruption empowered by these new trends. This thesis examines the current state of industrial automation, subject to the competition between the incumbents, first through research on cost-competitiveness efforts in captive outsourcing of engineering, research and development, and second through research on process re-engineering in the case of complex-system global software support. Third, we investigate the views of the industry actors, namely customers, incumbents and newcomers, on the future direction of industrial automation, and we conclude with our assessment of the possible routes industrial automation could take, considering the looming rise of the Internet of Things (IoT) and weightless networks. Industrial automation is an industry dominated by a handful of global players, each of them focusing on maintaining their own proprietary solutions. The rise of de facto standards like the IBM PC, Unix, Linux and SoC, leveraged by IBM, Compaq, Dell, HP, ARM, Apple, Google, Samsung and others, has created the new markets of personal computers, smartphones and tablets, and will eventually also impact industrial automation through game-changing commoditization and related control-point and business-model changes. This trend will inevitably continue, but the transition to a commoditized industrial automation will not happen in the near future.
Abstract:
BACKGROUND: Examination of patterns and intensity of physical activity (PA) across cultures where obesity prevalence varies widely provides insight into one aspect of the ongoing epidemiologic transition. The primary hypothesis being addressed is whether low levels of PA are associated with excess weight and adiposity. METHODS: We recruited young adults from five countries (500 per country, 2500 total, ages 25-45 years), spanning the range of obesity prevalence. Men and women were recruited from a suburb of Chicago, Illinois, USA; urban Jamaica; rural Ghana; peri-urban South Africa; and the Seychelles. PA was measured using accelerometry and expressed as minutes per day of moderate-to-vigorous activity or sedentary behavior. RESULTS: Obesity (BMI ≥ 30) prevalence ranged from 1.4% (Ghanaian men) to 63.8% (US women). South African men were the most active, followed by Ghanaian men. Relatively small differences were observed across sites among women; however, women in Ghana accumulated the most activity. Within site-gender subgroups, the correlation of activity with BMI and other measures of adiposity was inconsistent; the combined correlation across sites was -0.17 for men and -0.11 for women. In the ecological analysis, time spent in moderate-to-vigorous activity was inversely associated with BMI (r = -0.71). CONCLUSION: These analyses suggest that persons with greater adiposity tend to engage in less PA, although the associations are weak and the direction of causality cannot be inferred because the measurements are cross-sectional. Longitudinal data will be required to elucidate the direction of association.
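The gap between the weak individual-level correlations (-0.17, -0.11) and the strong ecological correlation (r = -0.71) reported above is a general statistical phenomenon. A small simulation (site means, sample sizes and effect sizes invented for illustration) shows how noisy within-site associations can coexist with a strong correlation between site means:

```python
import numpy as np

rng = np.random.default_rng(3)
site_mvpa_mean = np.array([60, 50, 40, 30, 20])  # minutes/day per site
site_bmi_mean = np.array([23, 25, 26, 28, 30])

within_r, all_mvpa, all_bmi = [], [], []
for m, b in zip(site_mvpa_mean, site_bmi_mean):
    mvpa = m + rng.normal(0, 15, 500)                       # individual MVPA
    bmi = b - 0.02 * (mvpa - m) + rng.normal(0, 4, 500)     # weak within-site slope
    within_r.append(np.corrcoef(mvpa, bmi)[0, 1])
    all_mvpa.append(mvpa.mean())
    all_bmi.append(bmi.mean())

print("within-site r:", np.round(within_r, 2))              # weak, near zero
print("ecological r :", round(np.corrcoef(all_mvpa, all_bmi)[0, 1], 2))  # strong
```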
Abstract:
This article examines institutional change in a case that was expected to be particularly resilient but showed considerable structural transformation: the institutionalization of the regulatory state in Switzerland. This process is illustrated through the establishment of independent regulatory agencies (IRAs) in four areas: banking and finance; telecommunications; electricity; and competition. The theoretical framework developed by Streeck, Thelen and Mahoney is used to explore hypotheses about the modes of institutional change, using a diachronic within-case study methodology. The results only partially confirm the expectations, pointing to layering and displacement as the prevalent modes of change. The concluding part discusses the type and the direction of change as additional explanatory factors.
Abstract:
Background: The desire to improve the quality of health care for an aging population with multiple chronic diseases is fostering rapid growth in inter-professional team care, supported by health professionals, governments, businesses and public institutions. However, the weight of evidence measuring the impact of team care on patient and health system outcomes has not, heretofore, been clear. To address this deficiency, we evaluated published evidence for the clinical effectiveness of team care within a chronic disease management context in a systematic overview. Methods: A search strategy was built for Medline using medical subject headings and other relevant keywords. After testing for performance, the search strategy was adapted to other databases (CINAHL, Cochrane, Embase, PsycINFO) using their specific descriptors. The searches were limited to reviews published between 1996 and 2011 in English and French. The results were analyzed by the number of studies favouring team intervention, based on the direction of effect and statistical significance for all reported outcomes. Results: Sixteen systematic and seven narrative reviews were included. The diseases most frequently targeted were depression, followed by heart failure, diabetes and mental disorders. The effectiveness outcome measures most commonly used were clinical endpoints, resource utilization (e.g., emergency room visits, hospital admissions), costs, quality of life and medication adherence. Briefly, while improved clinical and resource utilization endpoints were commonly reported as positive outcomes, mixed directional results were often found for costs, medication adherence, mortality and patient satisfaction outcomes. Conclusions: We conclude that, although suggestive of some specific benefits, the overall weight of evidence for team care efficacy remains equivocal. Further studies that examine the causal interactions between multidisciplinary team care and the clinical and economic outcomes of disease management are needed to more accurately assess its net program efficacy and population effectiveness.
Abstract:
The eccentric contraction mode has been proposed as the primary stimulus for a shift of the optimum angle (the angle at which peak torque occurs). However, the training range of motion (or muscle excursion range) could be an equally important stimulus. The aim of this study was to assess the influence of the training range of motion on the hamstring optimum length. It was hypothesised that performing a single set of concentric contractions beyond the optimal length (seated at 80° of hip flexion) would lead to an immediate shift of the optimum angle towards longer muscle length, while performing it below the optimal length (supine at 0° of hip flexion) would not produce any shift. Eleven male participants were assessed on an isokinetic dynamometer. In both positions, the test consisted of 30 consecutive knee flexions at 4.19 rad · s⁻¹. The optimum angle was significantly shifted by ∼15° in the direction of longer muscle length after the contractions at 80° of hip flexion, while a non-significant shift of 3° was found at 0°. Hamstring fatigability was not influenced by the hip position. It was concluded that the training range of motion seems to be a relevant stimulus for shifting the optimum angle towards longer muscle length. Moreover, fatigue appears to be a mechanism partly responsible for the observed shift.
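As a practical aside, the optimum angle in such protocols is read from the torque-angle curve recorded by the dynamometer. A minimal sketch (synthetic data; the window size and curve shape are arbitrary assumptions, not this study's processing) is to fit a parabola around the raw torque maximum and take its vertex:

```python
import numpy as np

def optimum_angle(angle_deg, torque_nm, half_window=5):
    """Angle of peak torque from a local quadratic fit around the maximum."""
    i = np.argmax(torque_nm)
    lo, hi = max(0, i - half_window), min(len(angle_deg), i + half_window + 1)
    a, b, _ = np.polyfit(angle_deg[lo:hi], torque_nm[lo:hi], 2)
    return -b / (2 * a)  # vertex of the fitted parabola

angles = np.linspace(10, 90, 81)  # knee angle, degrees (synthetic sweep)
rng = np.random.default_rng(4)
true_opt = 35.0
torque = 120 - 0.05 * (angles - true_opt) ** 2 + rng.normal(0, 1.5, angles.size)

print(f"estimated optimum angle: {optimum_angle(angles, torque):.1f} deg")
```

A shift of the optimum angle is then simply the difference between the vertex estimates obtained before and after the intervention.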
Abstract:
The linking of North and South America by the Isthmus of Panama had major impacts on global climate, oceanic and atmospheric currents, and biodiversity, yet the timing of this critical event remains contentious. The Isthmus is traditionally understood to have fully closed by ca. 3.5 million years ago (Ma), and this date has been used as a benchmark for oceanographic, climatic, and evolutionary research, but recent evidence suggests a more complex geological formation. Here, we analyze both molecular and fossil data to evaluate the tempo of biotic exchange across the Americas in light of geological evidence. We demonstrate significant waves of dispersal of terrestrial organisms at ca. 20 and 6 Ma and corresponding events separating marine organisms in the Atlantic and Pacific oceans at ca. 23 and 7 Ma. The directions and rates of dispersal were symmetrical until the last ca. 6 Ma, when northern migration of South American lineages increased significantly. Variability among taxa in their timing of dispersal or vicariance across the Isthmus is not explained by the ecological factors tested in these analyses, including biome type, dispersal ability, and elevation preference. Migration was therefore not generally regulated by intrinsic traits but more likely reflects the presence of emergent terrain several million years earlier than commonly assumed. These results indicate that the dramatic biotic turnover associated with the Great American Biotic Interchange was a long and complex process that began as early as the Oligocene-Miocene transition.
Abstract:
Understanding and quantifying seismic energy dissipation in fluid-saturated porous rocks, which manifests itself in terms of velocity dispersion and attenuation, is of considerable interest, since it offers the perspective of extracting information with regard to the elastic and hydraulic rock properties. There is increasing evidence to suggest that wave-induced fluid flow, or simply WIFF, is the dominant underlying physical mechanism governing these phenomena throughout the seismic, sonic, and ultrasonic frequency ranges. This mechanism, which can prevail at the microscopic, mesoscopic, and macroscopic scale ranges, operates through viscous energy dissipation in response to fluid pressure gradients and inertial effects induced by the passing wavefield. In the first part of this thesis, we present an analysis of broadband multi-frequency sonic log data from a borehole penetrating water-saturated unconsolidated glacio-fluvial sediments. An inherent complication arising in the interpretation of the observed P-wave attenuation and velocity dispersion is, however, that the relative importance of WIFF at the various scales is unknown and difficult to unravel. An important generic result of our work is that the levels of attenuation and velocity dispersion due to the presence of mesoscopic heterogeneities in water-saturated unconsolidated clastic sediments are expected to be largely negligible. Conversely, WIFF at the macroscopic scale explains most of the considered data, while the refinements provided by including WIFF at the microscopic scale in the analysis are locally meaningful. Using a Monte-Carlo-type inversion approach, we compare the ability of the different models describing WIFF at the macroscopic and microscopic scales to constrain the dry-frame elastic moduli and the permeability, as well as their local probability distributions. In the second part of this thesis, we explore the issue of determining the size of a representative elementary volume (REV) arising in numerical upscaling procedures for the effective seismic velocity dispersion and attenuation of heterogeneous media. To this end, we focus on a set of idealized synthetic rock samples characterized by the presence of layers, fractures, or patchy saturation in the mesoscopic scale range. These scenarios are highly pertinent because they tend to be associated with very high levels of velocity dispersion and attenuation caused by WIFF in the mesoscopic scale range. The problem of determining the REV size for generic heterogeneous rocks is extremely complex and entirely unexplored in the given context. In this pilot study, we have therefore focused on periodic media, which ensures the inherent self-similarity of the considered samples regardless of their size and thus simplifies the problem to a systematic analysis of the dependence of the REV size on the boundary conditions applied in the numerical simulations. Our results demonstrate that boundary condition effects are absent for layered media and negligible in the presence of patchy saturation, thus resulting in minimum REV sizes. Conversely, strong boundary condition effects arise in the presence of a periodic distribution of finite-length fractures, thus leading to large REV sizes.
In the third part of the thesis, we propose a novel effective poroelastic model for periodic media characterized by mesoscopic layering, which accounts for WIFF at both the macroscopic and mesoscopic scales as well as for the anisotropy associated with the layering. Correspondingly, this model correctly predicts the existence of the fast and slow P-waves as well as quasi and pure S-waves for any direction of wave propagation, as long as the corresponding wavelengths are much larger than the layer thicknesses. The primary motivation for this work is that, for formations of intermediate to high permeability, such as, for example, unconsolidated sediments, clean sandstones, or fractured rocks, these two WIFF mechanisms may prevail at similar frequencies. This scenario, which can be expected to be rather common, cannot be accounted for by existing models for layered porous media. Comparisons of analytical solutions for the P- and S-wave phase velocities and inverse quality factors for wave propagation perpendicular to the layering with those obtained from numerical simulations based on a 1D finite-element solution of the poroelastic equations of motion show very good agreement as long as the assumption of long wavelengths remains valid. A limitation of the proposed model is its inability to account for inertial effects in mesoscopic WIFF when both WIFF mechanisms prevail at similar frequencies. Our results do, however, also indicate that the associated error is likely to be relatively small, as, even at frequencies at which both inertial and scattering effects are expected to be at play, the proposed model provides a solution that is remarkably close to its numerical benchmark.
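To make the reported quantities concrete, here is a generic sketch (not the thesis's layered poroelastic model) of how a frequency-dependent complex modulus M(ω) is converted into the phase velocity and inverse quality factor that dispersion and attenuation studies report; a Zener (standard linear solid) element with made-up moduli, density and relaxation time stands in for M(ω):

```python
import numpy as np

rho = 2200.0                      # bulk density, kg/m^3 (assumed)
M0, Minf, tau = 9e9, 12e9, 1e-3   # relaxed/unrelaxed moduli (Pa), relaxation time (s)

freqs = np.logspace(0, 6, 7)      # 1 Hz .. 1 MHz
w = 2 * np.pi * freqs

# Zener complex modulus: tends to M0 at low and Minf at high frequency.
M = M0 * (1 + 1j * w * tau * Minf / M0) / (1 + 1j * w * tau)

v_complex = np.sqrt(M / rho)
v_phase = 1.0 / np.real(1.0 / v_complex)  # phase velocity, m/s
inv_Q = np.imag(M) / np.real(M)           # inverse quality factor, 1/Q

for f, v, q in zip(freqs, v_phase, inv_Q):
    print(f"{f:9.0f} Hz  v = {v:7.1f} m/s  1/Q = {q:.4f}")
```

The printed table reproduces the generic signature discussed above: velocity increases with frequency (dispersion) while 1/Q peaks near the relaxation frequency 1/(2πτ).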