66 results for GALAXIES: DISTANCES AND REDSHIFTS

at Université de Lausanne, Switzerland


Relevance: 100.00%

Abstract:

Aim: High intra-specific genetic diversity is necessary for species to adapt to novel environments under climate change, but species tracking suitable conditions lose alleles through successive founder events during range shifts. Here, we investigated the relationship between range shifts since the Last Glacial Maximum (LGM) and extant population genetic diversity across multiple plant species to understand variability in species responses. Location: The circumpolar Arctic and northern temperate alpine ranges. Methods: We estimated the climatic niches of 30 cold-adapted plant species using range maps coupled with species distribution models, and hindcasted the species' suitable areas onto reconstructions of mid-Holocene and LGM climates. We computed species-specific migration distances from each species' glacial refugia to its current distribution and correlated these distances with extant genetic diversity in 1,295 populations. Differential responses among species were related to life-history traits. Results: We found a negative association between inferred migration distance from refugia and genetic diversity in 25 species, but only 11 had statistically significant negative slopes. The relationship between inferred distance and population genetic diversity was steeper for insect-pollinated than for wind-pollinated species, although the difference among pollination systems was only marginally independent of phylogenetic autocorrelation. Main conclusions: The relationships between inferred migration distances and genetic diversities in 11 species, independent of current isolation, indicate that past range shifts were associated with a genetic bottleneck effect, with an average loss of 21% of genetic diversity per 1,000 km. In contrast, the absence of a relationship in many species indicates that the response is species-specific and may be modulated by plant pollination strategies or result from more complex historical contingencies than those modelled here.
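
Purely as an illustration of the core analysis step described above (regressing population genetic diversity on inferred migration distance, species by species), here is a minimal Python sketch; the data frame and its column names are hypothetical stand-ins, not the study's actual data.

```python
# Sketch of the per-species correlation step: regress population genetic
# diversity on inferred post-LGM migration distance. Data are hypothetical.
import pandas as pd
from scipy import stats

pops = pd.DataFrame({
    "species":   ["A", "A", "A", "B", "B", "B"],
    "dist_km":   [250, 900, 1600, 300, 1100, 2000],   # distance from refugium
    "diversity": [0.42, 0.35, 0.28, 0.40, 0.39, 0.41] # e.g. expected heterozygosity
})

for sp, grp in pops.groupby("species"):
    res = stats.linregress(grp["dist_km"], grp["diversity"])
    print(f"{sp}: slope per 1000 km = {res.slope * 1000:+.3f}, p = {res.pvalue:.3f}")
```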

Relevance: 100.00%

Abstract:

BACKGROUND: Transgressive segregation describes the occurrence of novel phenotypes in hybrids, with extreme trait values not observed in either parental species. A previously experimentally untested prediction is that the amount of transgression increases with the genetic distance between hybridizing species. This follows from QTL studies suggesting that transgression is most commonly due to complementary gene action or epistasis, which become more frequent at larger genetic distances, because the number of QTLs fixed for alleles with opposing signs in different species should increase with time since speciation, provided that speciation is not driven by disruptive selection. We measured the amount of transgression occurring in hybrids of cichlid fish bred from species pairs with gradually increasing genetic distances and varying phenotypic similarity. Transgression in multi-trait shape phenotypes was quantified using landmark-based geometric morphometric methods. RESULTS: We found that genetic distance explained 52% and 78% of the variation in transgression frequency in F1 and F2 hybrids, respectively. Confirming theoretical predictions, transgression, when measured in F2 hybrids, increased linearly with the genetic distance between hybridizing species. Phenotypic similarity of the species, on the other hand, was not related to the amount of transgression. CONCLUSION: The commonness and ease with which novel phenotypes are produced in cichlid hybrids between unrelated species has important implications for the interaction of hybridization with adaptation and speciation. Hybridization may generate new genotypes with adaptive potential that did not reside as standing genetic variation in either parental population, potentially enhancing a population's responsiveness to selection. Our results make it conceivable that hybridization contributed to the rapid rates of phenotypic evolution in the large and rapid adaptive radiations of haplochromine cichlids.
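
A hedged sketch of one simple way to quantify transgression frequency, namely the fraction of hybrids falling outside the range spanned by both parents along a single shape axis; the study itself used multi-trait landmark-based morphometrics, and all numbers below are toy values.

```python
# Toy illustration: transgression frequency as the share of hybrids whose
# score on a shape axis (e.g. a principal component of Procrustes
# coordinates) lies outside the combined parental range.
import numpy as np

rng = np.random.default_rng(0)
parent1 = rng.normal(0.0, 1.0, 50)   # hypothetical parental scores
parent2 = rng.normal(3.0, 1.0, 50)
hybrids = rng.normal(1.5, 2.0, 50)   # hypothetical hybrid scores

lo = min(parent1.min(), parent2.min())
hi = max(parent1.max(), parent2.max())
transgression_freq = np.mean((hybrids < lo) | (hybrids > hi))
print(f"Transgression frequency: {transgression_freq:.2f}")
```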

Relevance: 100.00%

Abstract:

Cortical folding (gyrification) is determined during the first months of life, so adverse events occurring during this period leave traces that remain identifiable at any age. As recently reviewed by Mangin and colleagues [2], several methods exist to quantify different characteristics of gyrification. For instance, sulcal morphometry can be used to measure shape descriptors such as depth, length, or indices of inter-hemispheric asymmetry [3]. These geometrical properties have the advantage of being easy to interpret. However, sulcal morphometry relies tightly on the accurate identification of a given set of sulci and hence provides a fragmented description of gyrification. A more fine-grained quantification of gyrification can be achieved with curvature-based measurements, where the smoothed absolute mean curvature is typically computed at thousands of points over the cortical surface [4]. Curvature, however, is not straightforward to interpret, as it remains unclear whether there is any direct relationship between curvedness and a biologically meaningful correlate such as cortical volume or surface area. To address the diverse issues raised by the measurement of cortical folding, we previously developed an algorithm to quantify local gyrification with fine spatial resolution and a simple interpretation. Our method is inspired by the Gyrification Index [5], a method originally used in comparative neuroanatomy to evaluate cortical folding differences across species. In our implementation, which we name the local Gyrification Index (lGI [1]), we measure the amount of cortex buried within the sulcal folds compared with the amount of visible cortex in circular regions of interest. Given that the cortex grows primarily through radial expansion [6], our method was specifically designed to identify early defects of cortical development. In this article, we detail the computation of the local Gyrification Index, which is now freely distributed as part of the FreeSurfer software (http://surfer.nmr.mgh.harvard.edu/, Martinos Center for Biomedical Imaging, Massachusetts General Hospital). FreeSurfer provides a set of automated tools for reconstructing the brain's cortical surface from structural MRI data. The cortical surface, extracted in the native space of the images with sub-millimeter accuracy, is then used to create an outer surface that serves as the basis for the lGI calculation. A circular region of interest is delineated on the outer surface, and its corresponding region of interest on the cortical surface is identified using a matching algorithm, as described in our validation study [1]. This process is iterated with largely overlapping regions of interest, resulting in cortical maps of gyrification for subsequent statistical comparisons (Fig. 1). Of note, another measurement of local gyrification with a similar inspiration was proposed by Toro and colleagues [7], in which the folding index at each point is computed as the ratio of the cortical area contained in a sphere to the area of a disc with the same radius. The two implementations differ in that Toro et al.'s is based on Euclidean distances and thus considers discontinuous patches of cortical area, whereas ours uses a strict geodesic algorithm and includes only the continuous patch of cortical area opening at the brain surface within a circular region of interest.
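
The lGI definition above reduces to a simple ratio, sketched below; the area values are hypothetical, and FreeSurfer's actual implementation computes these areas on triangulated surface meshes.

```python
# Minimal sketch of the lGI ratio: buried + visible cortical surface area
# inside a region of interest, divided by the area of the matching patch on
# the outer (hull) surface. Inputs are hypothetical stand-in values.
def local_gyrification_index(cortical_patch_area_mm2, outer_patch_area_mm2):
    """lGI = area of cortical surface in ROI / area of outer-surface ROI."""
    return cortical_patch_area_mm2 / outer_patch_area_mm2

# A heavily folded region buries much of its surface within sulci:
print(local_gyrification_index(cortical_patch_area_mm2=7500.0,
                               outer_patch_area_mm2=2500.0))  # -> 3.0
```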

Relevance: 100.00%

Abstract:

One major methodological problem in the analysis of sequence data is the determination of the costs from which distances between sequences are derived. Although this problem is currently not optimally dealt with in the social sciences, it has some similarity to problems that have been addressed in bioinformatics for three decades. In this article, the authors propose an optimization of substitution and deletion/insertion costs based on computational methods. The authors provide an empirical way of determining costs for cases, frequent in the social sciences, in which theory does not clearly favor one cost scheme over another. Using three distinct data sets, the authors tested the distances and cluster solutions produced by the new cost scheme against solutions based on cost schemes associated with other research strategies. The proposed method performs well compared with other cost-setting strategies, while alleviating the problem of justifying cost schemes.
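
As a generic illustration (not the authors' optimization procedure), the following sketch shows how substitution and insertion/deletion costs turn two categorical sequences into a distance via the standard optimal-matching dynamic program; the states and cost values are hypothetical.

```python
# Optimal-matching distance between two state sequences, given a substitution
# cost table and a single indel cost. Classic dynamic program.
def om_distance(seq_a, seq_b, sub_cost, indel_cost):
    n, m = len(seq_a), len(seq_b)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i * indel_cost
    for j in range(1, m + 1):
        d[0][j] = j * indel_cost
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0.0 if seq_a[i-1] == seq_b[j-1] else sub_cost[(seq_a[i-1], seq_b[j-1])]
            d[i][j] = min(d[i-1][j-1] + sub,       # substitution / match
                          d[i-1][j] + indel_cost,  # deletion
                          d[i][j-1] + indel_cost)  # insertion
    return d[n][m]

# Example: employment-state sequences with a flat symmetric substitution cost.
costs = {(a, b): 2.0 for a in "EUS" for b in "EUS" if a != b}
print(om_distance("EEUUS", "EEESS", costs, indel_cost=1.0))
```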

Relevance: 100.00%

Abstract:

Whether from an urbanistic, a social, or a governance point of view, the evolution of cities is a major challenge for our contemporary societies. By making it possible to analyse existing spatial and social configurations and to simulate future ones, geographic information systems have become indispensable in urban management and planning. In five years the population of the city of Lausanne grew from 134,700 to 140,570 inhabitants, while enrolment in the public schools rose from 12,200 to 13,500 students. This demographic growth, together with a broad harmonisation of compulsory schooling in Switzerland, led the Service des écoles to set up and develop, in collaboration with the University of Lausanne, GIS solutions capable of addressing various spatial problems. Established in 1989, the school district boundaries (catchment areas) had to be redrawn to fit the realities of a rapidly changing urban and political landscape. In a context of mobility and sustainability, a system for granting public transport subsidies based on home-to-school distance and student age was designed. Carrying out these projects required the construction of geographic databases as well as the development of the new analysis methods presented in this work; the thesis thus proceeded through a constant dialogue between theoretical research and practical needs. The first part of this work focuses on the analysis of the city's pedestrian network. The morphology of the network is investigated through multi-scale approaches to the concept of centrality. The first conception, straightness centrality, holds that being central means being connected to others along straight lines. The second, undoubtedly more intuitive, is called closeness centrality and expresses the fact that being central means being close to others (Fig. 1, II). The methods developed aim to evaluate the connectivity and walkability of the network while suggesting possible improvements (such as the creation of pedestrian shortcuts). The third and final theoretical section presents and develops a regularised optimal transport algorithm. By minimising home-to-school distances while respecting school capacities, the algorithm produces student allocation scenarios. The implementation of Lagrange multipliers offers a visualisation of the "spatial cost" of the school infrastructure and of the students' places of residence. The second part of this thesis recounts the principal aspects of three projects carried out in the context of school management: the design of a public transport subsidy scheme, the redefinition of the school district map, and the simulation of the flows of students walking to school.
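
A minimal sketch of a regularised optimal transport allocation in the spirit described above, using a generic entropy-regularised (Sinkhorn) iteration rather than the thesis's actual algorithm; all distances, student counts, and capacities are toy values.

```python
# Students from 3 home zones are allocated to 2 schools by minimising
# home-to-school distance subject to school capacities, via Sinkhorn
# iterations on an entropy-regularised transport problem.
import numpy as np

dist = np.array([[1.0, 4.0],          # distance (km), zone x school
                 [3.0, 2.0],
                 [5.0, 1.0]])
students = np.array([40., 30., 30.])  # students per home zone
capacity = np.array([60., 40.])       # seats per school
eps = 0.5                             # regularisation strength

K = np.exp(-dist / eps)
u, v = np.ones(3), np.ones(2)
for _ in range(200):                  # alternate marginal-matching updates
    u = students / (K @ v)
    v = capacity / (K.T @ u)
plan = u[:, None] * K * v[None, :]    # allocation matrix (zone x school)
print(plan.round(1))
# eps*log(u) and eps*log(v) play the role of the Lagrange multipliers
# ("spatial cost") attached to home zones and schools.
```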

Relevance: 40.00%

Abstract:

Animal dispersal in a fragmented landscape depends on the complex interaction between landscape structure and animal behavior. To better understand how individuals disperse, it is important to explicitly represent the properties of organisms and of the landscape in which they move. A common approach to modelling dispersal is to represent the landscape as a grid of equal-sized cells and then simulate individual movement as a correlated random walk. This approach imposes an a priori scale of resolution, which limits how landscape features are represented and how different dispersal abilities can be modelled. We develop a vector-based landscape model coupled with an object-oriented model of animal dispersal. In this spatially explicit dispersal model, landscape features are defined by their geographic and thematic properties, and dispersal is modelled by considering an organism's behavior, movement rules, and searching strategies (such as visual cues). We present the model's underlying concepts and its ability to adequately represent landscape features and to simulate dispersal under different dispersal abilities. We demonstrate the potential of the model by simulating two virtual species in a real Swiss landscape, illustrating the model's ability to simulate complex dispersal processes and to provide information about dispersal such as colonization probability and the spatial distribution of the organism's path.
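
For contrast with the grid-based baseline mentioned above, here is a minimal sketch of a correlated random walk, the movement model commonly used in that approach; all parameters are illustrative.

```python
# Correlated random walk: each step's heading equals the previous heading
# plus a random turning angle, producing directionally persistent paths.
import numpy as np

rng = np.random.default_rng(42)
n_steps, step_len, turn_sd = 200, 10.0, 0.4   # steps, m per step, rad
heading = rng.uniform(0, 2 * np.pi)
x, y = [0.0], [0.0]
for _ in range(n_steps):
    heading += rng.normal(0.0, turn_sd)       # correlated turning
    x.append(x[-1] + step_len * np.cos(heading))
    y.append(y[-1] + step_len * np.sin(heading))
print(f"Net displacement: {np.hypot(x[-1], y[-1]):.1f} m")
```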

Relevance: 30.00%

Abstract:

A stringent branch-site codon model was used to detect positive selection in vertebrate evolution. We show that the test is robust to the large evolutionary distances involved. Positive selection was detected in 77% of the 884 genes studied. Most positive selection concerns a few sites on a single branch of the phylogenetic tree: between 0.9% and 4.7% of sites are affected by positive selection, depending on the branch. No functional category was overrepresented among genes under positive selection. Surprisingly, whole-genome duplication had no effect on the prevalence of positive selection, whether the fish-specific genome duplication or the two rounds at the origin of vertebrates. Thus, positive selection has not been limited to a few gene classes or to specific evolutionary events such as duplication, but has been pervasive throughout vertebrate evolution.
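
The branch-site test rests on a likelihood-ratio comparison, sketched below with hypothetical log-likelihood values; the degrees of freedom follow the standard one-parameter version of the test.

```python
# Likelihood-ratio test behind branch-site codon models: the null fixes
# omega = 1 on the foreground branch, the alternative allows omega > 1.
# Twice the log-likelihood difference is compared to chi-square (df = 1).
from scipy.stats import chi2

lnL_null, lnL_alt = -5423.7, -5418.2          # hypothetical fitted values
lr_stat = 2.0 * (lnL_alt - lnL_null)
p_value = chi2.sf(lr_stat, df=1)
print(f"2*dlnL = {lr_stat:.2f}, p = {p_value:.4f}")
```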

Relevance: 30.00%

Abstract:

There are some striking similarities and some differences between the seismic reflection sections recorded across the fold and thrust belts of the southeast Canadian Cordillera, the Quebec-Maine Appalachians and the Swiss Alps. In the fold and thrust belts of all three mountain ranges, seismic reflection surveys have yielded high-quality images of: (1) nappes (thin thrust sheets) stacked on top of ancient continental margins; (2) ramp anticlines in the hanging walls of faults that have ramp-flat or listric geometries; (3) back thrusts and back folds that developed during the terminal phases of orogeny; and (4) tectonic wedges and regional decollements. A principal result of the Cordilleran and Appalachian deep crustal studies has been the recognition of master decollements along which continental margin strata have been transported long distances, whereas a principal result of the Swiss Alpine deep crustal program has been the identification of the Adriatic indenter, a crustal-scale wedge that caused delamination of the European lithosphere. Significant crustal roots are observed beneath the fold and thrust belts of the Alps, the southeast Canadian Cordillera and parts of the southern Appalachians, but such structures beneath the northern Appalachians have probably been removed by post-orogenic collapse and/or crustal attenuation associated with the Mesozoic opening of the Atlantic Ocean.

Relevance: 30.00%

Abstract:

The detection and discrimination of visuospatial input involve, at a minimum, extracting, selecting and encoding relevant information, and decision-making processes that allow a response to be selected. These two operations are altered, respectively, by attentional mechanisms that change discrimination capacities, and by beliefs concerning the likelihood of uncertain events. Information processing is tuned by the attentional level, which acts like a filter on perception, while decision-making processes are weighted by the subjective probability of risk. In addition, it has been shown that anxiety can affect the detection of unexpected events through modification of the level of arousal. Consequently, this study investigates whether and how decision-making and brain dynamics are affected by anxiety. To investigate these questions, the performance of women with either a high (n = 12) or a low (n = 12) STAI-T score (State-Trait Anxiety Inventory, Spielberger, 1983) was examined in a visuospatial decision-making task in which subjects had to distinguish a target visual pattern from non-target patterns. The target pattern was a schematic image of furniture arranged to give the impression of a living room. Non-target patterns were created by either compressing or dilating the distances between objects. Target and non-target patterns were always presented in the same configuration. Preliminary behavioral results show no group difference in reaction time. Visuospatial abilities were further analyzed through signal detection theory, which quantifies perceptual decisions in the presence of uncertainty (Green and Swets, 1966) by treating stimulus detection as a decision-making process determined by the nature of the stimulus and by cognitive factors. Surprisingly, no difference was observed in the d' index (the distance between the means of the signal and noise distributions) or the c index (the response criterion, corresponding to the likelihood ratio). Comparison of event-related potentials (ERPs) reveals that brain dynamics differ according to anxiety: component latencies differ, with a delay in anxious subjects over posterior electrode sites. These differences are, however, compensated in later components by shorter latencies in anxious subjects compared with non-anxious ones. These inverted effects suggest that the absence of a reaction-time difference relies on a compensation of attentional level that tunes cortical activation in anxious subjects, who must work harder to maintain performance.
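
A minimal sketch of the two signal detection theory indices mentioned above, computed from hypothetical hit and false-alarm counts.

```python
# d' measures sensitivity (separation of the signal and noise distributions);
# c measures response bias. Both follow from hit and false-alarm rates.
from scipy.stats import norm

hits, misses = 42, 8     # responses to target patterns (hypothetical)
fas, crs = 10, 40        # false alarms / correct rejections on non-targets

hit_rate = hits / (hits + misses)
fa_rate = fas / (fas + crs)
d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")
```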

Relevance: 30.00%

Abstract:

Concentration gradients formed by the lipid-modified morphogens of the Wnt family are known for their pivotal roles during embryogenesis and adult tissue homeostasis. Wnt morphogens are also implicated in a variety of human diseases, especially cancer. The signaling cascades triggered by Wnts have therefore received considerable attention in recent decades. However, how Wnts are secreted and how concentration gradients are formed remains poorly understood. The use of model organisms such as Drosophila melanogaster has provided important advances in this area. For instance, we have previously shown that the lipid raft-associated reggie/flotillin proteins influence Wnt secretion and spreading in Drosophila. Our work supports the notion that producing cells secrete Wnt molecules in at least two pools: a poorly diffusible one and a reggie/flotillin-dependent, highly diffusible pool that allows the morphogen to spread over long distances from its source of production. Here we review current views of Wnt secretion and spreading and propose two models for the role of the reggie/flotillin proteins in these processes: (i) reggies/flotillins regulate the basolateral endocytosis of the poorly diffusible, membrane-bound Wnt pool, which is then sorted and secreted to apical compartments for long-range diffusion; and (ii) lipid rafts organized by reggies/flotillins serve as "dating points" where extracellular Wnt transiently interacts with lipoprotein receptors, allowing its capture and further spreading via lipoprotein particles. We further discuss these processes in the context of human breast cancer. A better understanding of these phenomena may be relevant to the identification of novel drug targets and therapeutic strategies.

Relevance: 30.00%

Abstract:

PURPOSE: To study the clinical outcome of hippocampal deep brain stimulation (DBS) for the treatment of patients with refractory mesial temporal lobe epilepsy (MTLE) according to electrode location. METHODS: Eight MTLE patients implanted in the hippocampus and stimulated with high-frequency DBS were included in this study. Five underwent invasive recordings with depth electrodes to localize the ictal onset zone prior to chronic DBS. The position of the active contacts of the electrode was calculated on postoperative imaging. The distances to the ictal onset zone were measured, and the atlas-based hippocampal structures impacted by stimulation were identified. Both were correlated with the reduction in seizure frequency. RESULTS: The distances between the active electrode location and the estimated ictal onset zone were 11±4.3 mm and 9.1±2.3 mm for patients with a >50% or a <50% reduction in seizure frequency, respectively. Of the patients (N=6) showing a >50% seizure frequency reduction, 100% had the active contacts located <3 mm from the subiculum (p<0.05). The two non-responding patients were stimulated on contacts located >3 mm from the subiculum. CONCLUSION: The decrease of epileptogenic activity induced by hippocampal DBS in refractory MTLE (1) does not seem to be directly associated with the proximity of the active electrode to the ictal focus determined by invasive recordings, and (2) might be obtained through neuromodulation of the subiculum.

Relevance: 30.00%

Abstract:

Introduction: Intraoperative EMG-based neurophysiological monitoring is increasingly used to assist pedicle screw insertion. We carried out a study comparing the final screw position in the pedicle, measured on CT images, with the corresponding intraoperative compound muscle action potential (CMAP) values. Material and methods: A total of 189 screws were inserted in the thoracolumbar spines of 31 patients during instrumented fusion under EMG control. An observer blinded to the CMAP values assessed the horizontal and vertical 'screw edge to pedicle edge' distances, perpendicular to the longitudinal axis of the screw, on reformatted CT reconstructions using the OsiriX software. These distances were analysed against their corresponding CMAP values. Data from 62 thoracic and 127 lumbar screws were processed separately. The interobserver reliability of the distance measurements was assessed. Results: No patient suffered neurological injury secondary to screw insertion. Distance measurements were reliable (paired t-test, P = 0.13/0.98 horizontal/vertical). Two screws had their position altered due to low CMAP values suggesting close proximity of nerve tissue. Seventy-five percent of screws had CMAP results above 10 mA, with an average distance from the pedicle edge of 0.35 cm (SD 0.23) horizontally and 0.46 cm (SD 0.26) vertically. A further 12% had a distance from the pedicle edge of less than 0 mm, indicating a cortical breach, yet had CMAP values above 10 mA. A poor correlation between CMAP values and screw position was found. Discussion: In this study, CMAP values above 10 mA indicated correct screw position in the majority of cases. The 10-20 mA CMAP zone carries the highest risk of a misplaced screw despite a high CMAP value (17% of screws in this CMAP range). Further research, including improved probing techniques, is warranted to improve the predictive value of EMG.

Relevance: 30.00%

Abstract:

Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted at larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive but poorly resolved measurements of a pertinent geophysical parameter, together with locally highly resolved but spatially sparse measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of the method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields. The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and of the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive, low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography. The corresponding results indicate that the newly developed data integration approach is capable of adequately capturing both the small-scale heterogeneity and the larger-scale trend of the prevailing hydraulic conductivity field. The results also indicate that this novel approach is remarkably flexible and robust, and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges. In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems, in order to allow for a more accurate and realistic quantification of the uncertainty associated with the inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigated two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution. The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared with sequential resampling, this newly proposed approach proved highly effective in decreasing the number of iterations required to draw independent samples from the Bayesian posterior distribution.
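
A hedged sketch of the gradual-deformation idea in its generic form (not necessarily the exact scheme developed in the thesis): two independent Gaussian fields are combined through a single angle parameter, so the proposal remains a valid realization of the same model while the perturbation strength is tuned continuously.

```python
# Gradual deformation: m(theta) = m1*cos(theta) + m2*sin(theta) keeps the
# N(0,1) marginal for independent standard-normal fields, for any theta.
import numpy as np

rng = np.random.default_rng(0)
m_current = rng.standard_normal(1000)   # current model realization
m_proposal = rng.standard_normal(1000)  # independent new realization

def gradual_deformation(m1, m2, theta):
    """Combine two N(0,1) fields; the result is again N(0,1)."""
    return m1 * np.cos(theta) + m2 * np.sin(theta)

m_new = gradual_deformation(m_current, m_proposal, theta=0.2)  # small step
print(m_new.std())   # ~1: the marginal distribution is preserved
```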

Relevance: 30.00%

Abstract:

Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches to the scale of a field site represents a major, and as-of-yet largely unresolved, challenge. To address this problem, we have developed a downscaling procedure based on a non-linear Bayesian sequential simulation approach. The main objective of this algorithm is to estimate the value of the sparsely sampled hydraulic conductivity at non-sampled locations based on its relation to the electrical conductivity, which is logged at collocated wells and measured through surface resistivity surveys available throughout the studied site. The in situ relationship between the hydraulic and electrical conductivities is described through a non-parametric multivariate kernel density function. A stochastic integration of low-resolution, large-scale electrical resistivity tomography (ERT) data in combination with high-resolution, local-scale downhole measurements of the hydraulic and electrical conductivities is then applied. The overall viability of this downscaling approach is tested and validated by comparing flow and transport simulations through the original and the upscaled hydraulic conductivity fields. Our results indicate that the proposed procedure allows us to obtain remarkably faithful estimates of the regional-scale hydraulic conductivity structure and correspondingly reliable predictions of the transport characteristics over relatively long distances.
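
Purely illustratively, the statistical kernel of the approach described above can be sketched as a multivariate kernel density estimate of the joint conductivity relation, from which hydraulic conductivity is drawn conditional on a collocated electrical conductivity; the data, bandwidths, and conditioning window below are toy assumptions, not the study's actual values.

```python
# Toy sketch: fit a 2-D kernel density to collocated (log electrical, log
# hydraulic) conductivity pairs, then sample log K given an observed sigma
# by reweighting joint KDE samples around the observation.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
log_sigma = rng.normal(-1.5, 0.3, 200)                 # log electrical cond.
log_K = 1.2 * log_sigma + rng.normal(0.0, 0.2, 200)    # log hydraulic cond.
kde = gaussian_kde(np.vstack([log_sigma, log_K]))

def sample_K_given_sigma(sigma_obs, n=2000):
    """Draw log K approximately from p(K | sigma) via weighted resampling."""
    s, k = kde.resample(n)
    w = np.exp(-0.5 * ((s - sigma_obs) / 0.05) ** 2)   # narrow window on sigma
    return rng.choice(k, size=500, p=w / w.sum())

draws = sample_K_given_sigma(-1.4)
print(f"log K | sigma=-1.4: mean {draws.mean():.2f}, sd {draws.std():.2f}")
```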