872 results for heterogeneous data sources


Relevance: 30.00%

Abstract:

This study aims to provide a passive sampling approach that can be routinely used to investigate polychlorinated biphenyl (PCB) sources in rivers. The approach consists of deploying low-density polyethylene (LDPE) strips downstream and upstream of potential PCB sources, as well as in their water discharges. Concentrations of indicator PCBs (iPCBs) absorbed in the samplers (Cs) from upstream and downstream sites are compared with each other to reveal increases in PCB levels. Cs measured in water discharges are used to determine whether the released amounts of PCBs are compatible with the increases revealed in the river. As water velocity can vary greatly along a river stretch and influences the uptake at each site in a different way, differences in velocity have to be taken into account to interpret Cs correctly. LDPE strips were exposed to velocities between 1.6 and 37 cm s⁻¹ using a channel system built in the field. Relationships between velocity and Cs were established for each iPCB to determine the expected change in Cs due to velocity variations. For PCBs 28 and 52, this change does not exceed a factor of 2 for velocity variations in the range from 1.6 to 100 cm s⁻¹ (extrapolated data above 37 cm s⁻¹). For PCBs 101, 138, 153 and 180, this change only exceeds a factor of 2 in the case of large velocity variations. The approach was applied in the Swiss river Venoge, first to conduct a preliminary screening of potential PCB sources and then to investigate two suspected sources thoroughly.
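The velocity correction described above can be sketched as follows, assuming the fitted velocity-Cs relationships take a power-law form; the exponent and concentration values below are hypothetical illustrations, not values from the study:

```python
import math

def velocity_factor(v_site, v_ref, b):
    """Expected multiplicative change in sampler concentration Cs when
    water velocity changes from v_ref to v_site, assuming a fitted
    power-law uptake model Cs ∝ v**b (b is congener-specific)."""
    return (v_site / v_ref) ** b

def corrected_ratio(cs_down, cs_up, v_down, v_up, b):
    """Downstream/upstream Cs ratio after removing the velocity effect,
    so a ratio well above 1 points to a PCB input between the sites."""
    return (cs_down / cs_up) / velocity_factor(v_down, v_up, b)

# Hypothetical exponent for PCB 52; the study reports that velocity changes
# between 1.6 and 100 cm/s alter Cs by less than a factor of 2.
b_pcb52 = 0.15
raw = 2.4 / 1.0  # raw downstream/upstream Cs ratio
corr = corrected_ratio(2.4, 1.0, v_down=30.0, v_up=5.0, b=b_pcb52)
```

With these made-up numbers, part of the apparent downstream increase is explained by the faster flow at the downstream site, so the corrected ratio is smaller than the raw one.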

Relevance: 30.00%

Abstract:

Relationships between porosity and hydraulic conductivity tend to be strongly scale- and site-dependent and are thus very difficult to establish. As a result, hydraulic conductivity distributions inferred from geophysically derived porosity models must be calibrated using some measurement of aquifer response. This type of calibration is potentially very valuable as it may allow for transport predictions within the considered hydrological unit at locations where only geophysical measurements are available, thus reducing the number of well tests required and thereby the costs of management and remediation. Here, we explore this concept through a series of numerical experiments. Considering the case of porosity characterization in saturated heterogeneous aquifers using crosshole ground-penetrating radar and borehole porosity log data, we use tracer test measurements to calibrate a relationship between porosity and hydraulic conductivity that allows the best prediction of the observed hydrological behavior. To examine the validity and effectiveness of the obtained relationship, we evaluate its performance at alternative locations not used in the calibration procedure. Our results indicate that this methodology allows us to obtain remarkably reliable hydrological predictions throughout the considered hydrological unit based on the geophysical data only. This was also found to be the case when significant uncertainty was considered in the underlying relationship between porosity and hydraulic conductivity.
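A toy version of this calibration idea can be sketched as follows, assuming a linear log10(K)-porosity relationship and a purely advective tracer travel time; both assumptions, and all numbers, are illustrative simplifications rather than the study's actual setup:

```python
import numpy as np

# Hypothetical porosities from geophysics at calibration wells, plus the
# "observed" tracer arrival times generated from an unknown true relation.
phi_cal = np.array([0.25, 0.30, 0.35, 0.28, 0.32])
L, grad = 5.0, 0.01                    # travel distance (m), hydraulic gradient
a_true, b_true = -6.0, 8.0             # hidden truth: log10 K = a + b*phi
K_true = 10 ** (a_true + b_true * phi_cal)
t_obs = L * phi_cal / (K_true * grad)  # advective arrival time (s)

def arrival_time(phi, a, b):
    K = 10 ** (a + b * phi)
    return L * phi / (K * grad)

# Grid-search calibration of the petrophysical relationship against the
# observed hydrological response (here, tracer arrival times).
best = None
for a in np.linspace(-7, -5, 41):
    for b in np.linspace(4, 12, 81):
        misfit = np.sum((np.log(arrival_time(phi_cal, a, b)) - np.log(t_obs)) ** 2)
        if best is None or misfit < best[0]:
            best = (misfit, a, b)
_, a_hat, b_hat = best

# Predict K at a location where only geophysical porosity is available.
K_pred = 10 ** (a_hat + b_hat * 0.27)
```

The point of the sketch is the workflow: calibrate the porosity-K relation where aquifer-response data exist, then carry it to geophysics-only locations.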

Relevance: 30.00%

Abstract:

A considerable fraction of the γ-ray sources discovered with the Energetic Gamma-Ray Experiment Telescope (EGRET) remain unidentified. The EGRET sources that have been properly identified are either pulsars or variable sources at both radio and γ-ray wavelengths. Most of the variable sources are strong radio blazars. However, some low galactic-latitude EGRET sources, with highly variable γ-ray emission, lack any evident counterpart in the radio data available until now. Aims. The primary goal of this paper is to identify and characterise the potential radio counterparts of four highly variable γ-ray sources in the galactic plane by mapping the radio surroundings of the EGRET confidence contours and determining the variable radio sources in the field whenever possible. Methods. We have carried out a radio exploration of the fields of the selected EGRET sources using the Giant Metrewave Radio Telescope (GMRT) interferometer at 21 cm wavelength, with pointings separated by months. Results. We detected a total of 151 radio sources. Among them, we identified a few radio sources whose flux density has apparently changed on timescales of months. Despite the limitations of our search, their possible variability makes these objects top-priority targets for multiwavelength studies of the potential counterparts of highly variable, unidentified γ-ray sources.
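One common way to flag epoch-to-epoch flux-density changes like those described is to compare the difference between two measurements with their combined uncertainty. The catalogue entries and the threshold below are invented for illustration, not the paper's actual sources or criterion:

```python
import math

def variability_significance(s1, sigma1, s2, sigma2):
    """Significance of a flux-density change between two epochs,
    in units of the combined measurement uncertainty."""
    return abs(s1 - s2) / math.sqrt(sigma1 ** 2 + sigma2 ** 2)

# Hypothetical two-epoch 21 cm flux densities (mJy) for three field sources.
catalogue = [
    ("J0001", 12.0, 0.5, 11.8, 0.5),  # steady
    ("J0002", 20.0, 0.8, 14.5, 0.7),  # candidate variable
    ("J0003",  5.0, 0.4,  5.9, 0.4),  # marginal change
]
variables = [name for name, s1, e1, s2, e2 in catalogue
             if variability_significance(s1, e1, s2, e2) > 4.0]
```

Only changes several times larger than the combined error survive the cut, which is why month-scale repeat pointings are needed to single out candidate variables.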

Relevance: 30.00%

Abstract:

A novel two-component system, CbrA-CbrB, was discovered in Pseudomonas aeruginosa; cbrA and cbrB mutants of strain PAO were found to be unable to use several amino acids (such as arginine, histidine and proline), polyamines and agmatine as sole carbon and nitrogen sources. These mutants were also unable to use, or used poorly, many other carbon sources, including mannitol, glucose, pyruvate and citrate. A 7 kb EcoRI fragment carrying the cbrA and cbrB genes was cloned and sequenced. The cbrA and cbrB genes encode a sensor/histidine kinase (Mr 108 379, 983 residues) and a cognate response regulator (Mr 52 254, 478 residues) respectively. The amino-terminal half (490 residues) of CbrA appears to be a sensor membrane domain, as predicted by 12 possible transmembrane helices, whereas the carboxy-terminal part shares homology with the histidine kinases of the NtrB family. The CbrB response regulator shows similarity to the NtrC family members. Complementation and primer extension experiments indicated that cbrA and cbrB are transcribed from separate promoters. In cbrA or cbrB mutants, as well as in the allelic argR9901 and argR9902 mutants, the aot-argR operon was not induced by arginine, indicating an essential role for this two-component system in the expression of the ArgR-dependent catabolic pathways, including the aruCFGDB operon specifying the major aerobic arginine catabolic pathway. The histidine catabolic enzyme histidase was not expressed in cbrAB mutants, even in the presence of histidine. In contrast, proline dehydrogenase, responsible for proline utilization (Pru), was expressed in a cbrB mutant at a level comparable with that of the wild-type strain. When succinate or other C4-dicarboxylates were added to proline medium at 1 mM, the cbrB mutant was restored to a Pru+ phenotype. Such a succinate-dependent Pru+ property was almost abolished by 20 mM ammonia. 
In conclusion, the CbrA-CbrB system controls the expression of several catabolic pathways and, perhaps together with the NtrB-NtrC system, appears to ensure the intracellular carbon:nitrogen balance in P. aeruginosa.

Relevance: 30.00%

Abstract:

A major constraint to agricultural production in acid soils of tropical regions is low soil P availability, due to the high adsorption capacity, the low P level in the source material, and the low efficiency of P uptake and use by most modern commercially grown varieties. This study was carried out to evaluate biomass production and P use by forage grasses on two soils fertilized with two P sources of different solubility. Two experiments were carried out, one for each soil (Cambisol and Latosol), using pots filled with 4 dm³ of soil in a completely randomized design and a 4 x 2 factorial scheme. The treatments consisted of a combination of four forage plants (Brachiaria decumbens, Brachiaria brizantha, Pennisetum glaucum and Sorghum bicolor) with two P sources (Triple Superphosphate - TSP and Arad Reactive Phosphate - ARP), with four replications. The forage grasses were harvested at pre-flowering, when dry matter weight and P concentrations were measured. From the P concentration and dry matter production, the total P accumulation was calculated. With these data, the following indices were calculated: P uptake efficiency of roots, P use efficiency, use efficiency of available P, use efficiency of applied P, and agronomic efficiency. The source with higher solubility (TSP) generally resulted in higher total dry matter and total P accumulation in the forage grasses, in both soils. For the less reactive source (ARP), the means for P use efficiency and use efficiency of available P were always higher when the grasses were grown in the Latosol, indicating conditions favorable to the solubilization of ARP. The total dry matter of Brachiaria brizantha was generally higher, with low P uptake, accumulation and translocation, indicating good P use efficiency for both P sources and soils. The forage plants differed in P use potential, depending on the P source applied and the soil used. 
Less than 10% of the applied P was immobilized in the forage dry matter. The highest values were observed for TSP, but this was not reflected in a higher use efficiency of P from this source.
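Efficiency indices of the kind listed above can be illustrated with common textbook definitions; the formulas below are generic stand-ins (the study's exact definitions may differ) and the numbers are invented, not the experiment's data:

```python
def p_use_efficiency(dry_matter_g, p_uptake_mg):
    """Dry matter produced per unit of P taken up (g DM per mg P)."""
    return dry_matter_g / p_uptake_mg

def agronomic_efficiency(dm_fertilized_g, dm_control_g, p_applied_mg):
    """Extra dry matter produced per unit of P applied."""
    return (dm_fertilized_g - dm_control_g) / p_applied_mg

def p_recovery_fraction(uptake_fert_mg, uptake_control_mg, p_applied_mg):
    """Share of the applied P recovered in plant dry matter."""
    return (uptake_fert_mg - uptake_control_mg) / p_applied_mg

# Illustrative comparison of a soluble (TSP-like) vs a reactive (ARP-like)
# source; consistent with the abstract, both recover less than 10% of applied P.
tsp = p_recovery_fraction(uptake_fert_mg=42.0, uptake_control_mg=6.0, p_applied_mg=400.0)
arp = p_recovery_fraction(uptake_fert_mg=30.0, uptake_control_mg=6.0, p_applied_mg=400.0)
```

Note how a higher recovery fraction (TSP here) need not coincide with a higher dry-matter-per-unit-P-uptake efficiency, which is the distinction the abstract draws.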

Relevance: 30.00%

Abstract:

The current research project is both a process and impact evaluation of community policing in Switzerland's five major urban areas - Basel, Bern, Geneva, Lausanne, and Zurich. Community policing is both a philosophy and an organizational strategy that promotes a renewed partnership between the police and the community to solve problems of crime and disorder. The process evaluation data on police internal reforms were obtained through semi-structured interviews with key administrators from the five police departments as well as from police internal documents and additional public sources. 
The impact evaluation uses official crime records and census statistics as contextual variables as well as Swiss Crime Survey (SCS) data on fear of crime, perceptions of disorder, and public attitudes towards the police as outcome measures. The SCS is a standing survey instrument that has polled residents of the five urban areas repeatedly since the mid-1980s. The process evaluation produced a "Calendar of Action" to create panel data to measure community policing implementation progress over six evaluative dimensions in intervals of five years between 1990 and 2010. The impact evaluation, carried out ex post facto, uses an observational design that analyzes the impact of the different community policing models between matched comparison areas across the five cities. Using ZIP code districts as proxies for urban neighborhoods, geospatial data mining algorithms serve to develop a neighborhood typology in order to match the comparison areas. To this end, both unsupervised and supervised algorithms are used to analyze high-dimensional data on crime, the socio-economic and demographic structure, and the built environment in order to classify urban neighborhoods into clusters of similar type. In a first step, self-organizing maps serve as tools to develop a clustering algorithm that reduces the within-cluster variance in the contextual variables and simultaneously maximizes the between-cluster variance in survey responses. The random forests algorithm then serves to assess the appropriateness of the resulting neighborhood typology and to select the key contextual variables in order to build a parsimonious model that makes a minimum of classification errors. 
Finally, for the impact analysis, propensity score matching methods are used to match the survey respondents of the pretest and posttest samples on age, gender, and their level of education for each neighborhood type identified within each city, before conducting a statistical test of the observed difference in the outcome measures. Moreover, all significant results were subjected to a sensitivity analysis to assess the robustness of these findings in the face of potential bias due to some unobserved covariates. The study finds that over the last fifteen years, all five police departments have undertaken major reforms of their internal organization and operating strategies and forged strategic partnerships in order to implement community policing. The resulting neighborhood typology reduced the within-cluster variance of the contextual variables and accounted for a significant share of the between-cluster variance in the outcome measures prior to treatment, suggesting that geocomputational methods help to balance the observed covariates and hence to reduce threats to the internal validity of an observational design. Finally, the impact analysis revealed that fear of crime dropped significantly over the 2000-2005 period in the neighborhoods in and around the urban centers of Bern and Zurich. These improvements are fairly robust in the face of bias due to some unobserved covariate and covary temporally and spatially with the implementation of community policing. The alternative hypothesis that the observed reductions in fear of crime were at least in part a result of community policing interventions thus appears at least as plausible as the null hypothesis of absolutely no effect, even if the observational design cannot completely rule out selection and regression to the mean as alternative explanations.
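A minimal sketch of the propensity-score matching step described above, using synthetic respondent data (age, gender, education) and a hand-rolled logistic fit; the actual analysis used the survey's covariates and a statistical package, so everything below is an illustrative assumption:

```python
import numpy as np

# Synthetic stand-in for the survey samples: age, gender, education level.
rng = np.random.default_rng(42)
n = 400
X = np.column_stack([
    rng.integers(18, 80, n),   # age
    rng.integers(0, 2, n),     # gender (0/1)
    rng.integers(1, 4, n),     # education level (1-3)
]).astype(float)
Z = (X - X.mean(axis=0)) / X.std(axis=0)

# Make wave membership depend on age so the samples start out unbalanced.
treated = (rng.random(n) < 1.0 / (1.0 + np.exp(-1.5 * Z[:, 0]))).astype(int)

# Fit P(posttest | covariates) by gradient ascent on the log-likelihood.
Z1 = np.column_stack([np.ones(n), Z])
w = np.zeros(Z1.shape[1])
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-Z1 @ w))
    w += 0.5 * Z1.T @ (treated - p) / n
scores = 1.0 / (1.0 + np.exp(-Z1 @ w))

# Match each posttest respondent to the pretest respondent with the
# closest propensity score (nearest neighbour, with replacement).
t_idx = np.where(treated == 1)[0]
c_idx = np.where(treated == 0)[0]
matches = np.array([c_idx[np.argmin(np.abs(scores[c_idx] - scores[i]))]
                    for i in t_idx])

# Balance check: matching shrinks the gap in mean propensity scores.
gap_before = abs(scores[t_idx].mean() - scores[c_idx].mean())
gap_after = abs(scores[t_idx].mean() - scores[matches].mean())
```

Only after such balancing would the outcome indicators (fear of crime, perceived disorder) be compared between waves, as the abstract describes.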

Relevance: 30.00%

Abstract:

The emergence of powerful new technologies, the existence of large quantities of data, and increasing demands for the extraction of added value from these technologies and data have created a number of significant challenges for those charged with both corporate and information technology management. The possibilities are great, the expectations high, and the risks significant. Organisations seeking to employ cloud technologies and exploit the value of the data to which they have access, be this in the form of "Big Data" available from different external sources or data held within the organisation, in structured or unstructured formats, need to understand the risks involved in such activities. Data owners have responsibilities towards the subjects of the data and must also, frequently, demonstrate that they are in compliance with current standards, laws and regulations. This thesis sets out to explore the nature of the technologies that organisations might utilise, identify the most pertinent constraints and risks, and propose a framework for the management of data from discovery to external hosting that will allow the most significant risks to be managed through the definition, implementation, and performance of appropriate internal control activities.

Relevance: 30.00%

Abstract:

Natural processes that determine soil and plant litter properties are controlled by multiple factors. However, little attention has been given to distinguishing the effects of environmental factors from the effects of spatial structure of the area on the distribution of soil and litter properties in tropical ecosystems covering heterogeneous topographies. The aim of this study was to assess patterns of soil and litter variation in a tropical area that intercepts different levels of solar radiation throughout the year since its topography has slopes predominantly facing opposing geographic directions. Soil data (pH, C, N, P, H+Al, Ca, Mg, K, Al, Na, sand, and silt) and plant litter data (N, K, Ca, P, and Mg) were gathered together with the geographic coordinates (to model the spatial structure) of 40 sampling units established at two sites composed of slopes predominantly facing northwest and southeast (20 units each). Soil and litter chemical properties varied more among slopes within similar geographic orientations than between the slopes facing opposing directions. Both the incident solar radiation and the spatial structure of the area were relevant in explaining the patterns detected in variation of soil and plant litter. Individual contributions of incident solar radiation to explain the variation in the properties evaluated suggested that this and other environmental factors may play a particularly relevant role in determining soil and plant litter distribution in tropical areas with heterogeneous topography. Furthermore, this study corroborates that the spatial structure of the area also plays an important role in the distribution of soil and litter within this type of landscape, which appears to be consistent with the action of water movement mechanisms in such areas.

Relevance: 30.00%

Abstract:

The human body is composed of a huge number of cells acting together in a concerted manner. The current understanding is that proteins perform most of the necessary activities in keeping a cell alive. The DNA, on the other hand, stores in the genome the information on how to produce the different proteins. Regulating gene transcription is the first important step that can thus affect the life of a cell, modify its functions and its responses to the environment. Regulation is a complex operation that involves specialized proteins, the transcription factors. Transcription factors (TFs) can bind to DNA and activate the processes leading to the expression of genes into new proteins. Errors in this process may lead to diseases. In particular, some transcription factors have been associated with a lethal pathological state, commonly known as cancer, associated with uncontrolled cellular proliferation, invasiveness of healthy tissues and abnormal responses to stimuli. Understanding cancer-related regulatory programs is a difficult task, often involving several TFs interacting together and influencing each other's activity. This Thesis presents new computational methodologies to study gene regulation. In addition we present applications of our methods to the understanding of cancer-related regulatory programs. The understanding of transcriptional regulation is a major challenge. We address this difficult question by combining computational approaches with large collections of heterogeneous experimental data. In detail, we design signal processing tools to recover transcription factor binding sites on the DNA from genome-wide surveys like chromatin immunoprecipitation assays on tiling arrays (ChIP-chip). We then use the TF binding locations to explain the expression levels of regulated genes. In this way we identify a regulatory synergy between two TFs, the oncogene C-MYC and SP1. 
C-MYC and SP1 bind preferentially at promoters, and when SP1 binds next to C-MYC on the DNA, the nearby gene is strongly expressed. The association between the two TFs at promoters is reflected by the conservation of the binding sites across mammals and by the permissive underlying chromatin states; it represents an important control mechanism involved in cellular proliferation, and thereby in cancer. Secondly, we identify the characteristics of the target genes of the TF estrogen receptor alpha (hERα) and we study the influence of hERα in regulating transcription. hERα, upon hormone estrogen signaling, binds to DNA to regulate transcription of its targets in concert with its co-factors. To overcome the scarce experimental data about the binding sites of other TFs that may interact with hERα, we conduct in silico analysis of the sequences underlying the ChIP sites using the collection of position weight matrices (PWMs) of hERα partners, the TFs FOXA1 and SP1. We combine ChIP-chip and ChIP-paired-end-diTag (ChIP-PET) data about hERα binding on DNA with the sequence information to explain gene expression levels in a large collection of cancer tissue samples and also in studies of the response of cells to estrogen. We confirm that hERα binding sites are distributed throughout the genome. However, we distinguish between binding sites near promoters and binding sites along the transcripts. The first group shows weak binding of hERα and a high occurrence of SP1 motifs, in particular near estrogen-responsive genes. The second group shows strong binding of hERα and a significant correlation between the number of binding sites along a gene and the strength of gene induction in the presence of estrogen. Some binding sites of the second group also show the presence of FOXA1, but the role of this TF still needs to be investigated. Different mechanisms have been proposed to explain hERα-mediated induction of gene expression. 
Our work supports the model of hERα activating gene expression from distal binding sites by interacting with promoter-bound TFs, like SP1. hERα has been associated with survival rates of breast cancer patients, though explanatory models are still incomplete: this result is important to better understand how hERα can control gene expression. Thirdly, we address the difficult question of regulatory network inference. We tackle this problem by analyzing time series of biological measurements such as quantifications of mRNA levels or protein concentrations. Our approach uses well-established penalized linear regression models in which we impose sparseness on the connectivity of the regulatory network. We extend this method by enforcing the coherence of the regulatory dependencies: a TF must coherently behave as an activator, or a repressor, on all its targets. This requirement is implemented as constraints on the signs of the regressed coefficients in the penalized linear regression model. Our approach is better at reconstructing meaningful biological networks than previous methods based on penalized regression. The method is tested on the DREAM2 challenge of reconstructing a five-gene/TF regulatory network, obtaining the best performance in the "undirected signed excitatory" category. Thus, these bioinformatics methods, which are reliable, interpretable and fast enough to cover large biological datasets, have enabled us to better understand gene regulation in humans.
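The sign-constrained penalized regression idea can be sketched with a small proximal-gradient (ISTA-style) lasso solver in which each coefficient is clamped to a prescribed sign (+1 activator, -1 repressor). This is a simplified stand-in for the method described, with synthetic data:

```python
import numpy as np

def sign_coherent_lasso(X, y, signs, lam=0.05, iters=5000):
    """Lasso regression solved by proximal gradient descent, with each
    coefficient constrained to a prescribed sign (+1 or -1) as a simple
    version of the sign-coherence constraint described in the text."""
    n, p = X.shape
    lr = 1.0 / np.linalg.norm(X, 2) ** 2   # safe step from the spectral norm
    w = np.zeros(p)
    for _ in range(iters):
        grad = X.T @ (X @ w - y) / n
        w = w - lr * grad
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft threshold
        # Project onto the allowed sign for each regulator.
        w = np.where(signs > 0, np.maximum(w, 0.0), np.minimum(w, 0.0))
    return w

# Synthetic example: two activators, one repressor, one inactive TF.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
w_true = np.array([1.5, 0.8, -1.0, 0.0])
y = X @ w_true + 0.1 * rng.normal(size=200)
signs = np.array([1, 1, -1, 1])
w_hat = sign_coherent_lasso(X, y, signs)
```

The L1 penalty drives the inactive regulator's coefficient toward zero, while the sign projection keeps each recovered edge coherent with its assumed activator/repressor role.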

Relevance: 30.00%

Abstract:

Cuscuta spp. are holoparasitic plants that can simultaneously parasitise several host plants. It has been suggested that Cuscuta has evolved a foraging strategy based on a positive relationship between pre-uptake investment and subsequent reward on different host species. Here we establish reliable parasite size measures and show that parasitism on individuals of different host species alters the biomass of C. campestris, but that, within host species, host size and age also contribute to the heterogeneous resource landscape. We then performed two additional experiments to test whether C. campestris achieves greater resource acquisition by parasitising two host species rather than one and whether C. campestris forages in communities of hosts offering different rewards (a choice experiment). There was no evidence in either experiment for direct benefits of a mixed host diet. Cuscuta campestris foraged by parasitising the most rewarding hosts the fastest and then investing the most in them. We conclude that our data present strong evidence for foraging in the parasitic plant C. campestris.

Relevance: 30.00%

Abstract:

The Huqf Supergroup in Oman contains an exceptionally well-preserved and complete sedimentary record of the Middle to Late Neoproterozoic Era. Outcrops of the Huqf Supergroup in northern and central Oman are now well documented, but their correlation with a key succession in the Mirbat area of southern Oman, containing a sedimentary record of two Neoproterozoic glaciations, is poorly understood. Integration of lithostratigraphic, chemostratigraphic and new U-Pb detrital zircon data suggests that the Mirbat Group is best placed within the Cryogenian (c. 850-635 Ma) part of the Huqf Supergroup. The c. 1 km thick marine deposits of the Arkahawl and Marsham Formations of the Mirbat Group are thought to represent a stratigraphic interval between older Cryogenian and younger Cryogenian glaciations that is not preserved elsewhere in Oman. The bulk of detrital zircons in the Huqf Supergroup originate from Neoproterozoic parent rocks. However, older Mesoproterozoic, Palaeoproterozoic and even Archaean zircons can be recognized in the detrital population from the upper Mahara Group (Fiq Formation) and Nafun Group, suggesting the tapping of exotic sources, probably from the Arabian-Nubian Shield.

Relevance: 30.00%

Abstract:

This study investigated the contribution of sources and establishment characteristics to exposure to fine particulate matter (PM2.5) in the non-smoking sections of bars, cafes, and restaurants in central Zurich. PM2.5 exposure was determined with a nephelometer. A random sample of hospitality establishments was investigated on all weekdays, from morning until midnight. Each visit lasted 30 min. Numbers of smokers and other sources, such as candles and cooking processes, were recorded, as were seats, open windows, and open doors. Ambient air pollution data were obtained from public authorities. Data were analysed using robust MM regression. Over 14 warm, sunny days, 102 establishments were measured. Average establishment PM2.5 concentrations were 64.7 µg/m³ (s.d. = 73.2 µg/m³; 30-min maximum 452.2 µg/m³). PM2.5 was significantly associated with the number of smokers, the percentage of seats occupied by smokers, and outdoor PM. Each smoker increased PM2.5 on average by 15 µg/m³. No associations were found with other sources, open doors, or open windows. Bars had more smoking guests and showed significantly higher concentrations than restaurants and cafes. Smokers were the most important PM2.5 source in hospitality establishments, while outdoor PM defined the baseline. Concentrations are expected to be even higher during colder, unpleasant times of the year. PRACTICAL IMPLICATIONS: Smokers and ambient air pollution are the most important sources of fine airborne particulate matter (PM2.5) in the non-smoking sections of bars, restaurants, and cafes. Other sources do not significantly contribute to PM2.5 levels, while opening doors and windows is not an efficient means of removing pollutants. First, this demonstrates the impact that even a few smokers can have in affecting particle levels. 
Second, it implies that creating non-smoking sections and using natural ventilation are not sufficient to bring PM2.5 down to levels that pose no harm to employees and non-smoking clients. [Authors]
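The kind of regression model described (indoor PM2.5 explained by the number of smokers plus an outdoor-PM baseline) can be sketched on synthetic data; the study used robust MM regression, so the ordinary least-squares fit and all numbers below are simplifying illustrations:

```python
import numpy as np

# Synthetic establishments: indoor PM2.5 driven by smokers (~15 µg/m³ each,
# as in the abstract) on top of an outdoor-PM baseline, plus noise.
rng = np.random.default_rng(7)
n = 102                                  # one row per establishment
smokers = rng.integers(0, 8, n)
outdoor = rng.uniform(10.0, 30.0, n)     # ambient PM2.5 (µg/m³)
pm25 = 15.0 * smokers + 1.0 * outdoor + rng.normal(0.0, 8.0, n)

# Ordinary least-squares fit: intercept, per-smoker effect, outdoor slope.
A = np.column_stack([np.ones(n), smokers, outdoor])
coef, *_ = np.linalg.lstsq(A, pm25, rcond=None)
intercept, per_smoker, per_outdoor = coef
```

With enough establishments, the fitted per-smoker coefficient recovers the assumed ~15 µg/m³ increment, which is the quantity the abstract reports.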

Relevance: 30.00%

Abstract:

Surface-based ground-penetrating radar (GPR) and electrical resistance tomography (ERT) are common tools for aquifer characterization, because both methods provide data that are sensitive to hydrogeologically relevant quantities. To retrieve bulk subsurface properties at high resolution, we suggest incorporating structural information derived from GPR reflection data when inverting surface ERT data. This reduces resolution limitations, which might otherwise hinder quantitative interpretation. Surface-based GPR reflection and ERT data were recorded on an exposed gravel bar within a restored section of a previously channelized river in northeastern Switzerland to characterize the underlying gravel aquifer. The GPR reflection data acquired over an area of 240×40 m map the aquifer's thickness and two internal sub-horizontal regions with different depositional patterns. The interface between these two regions and the boundary of the aquifer with the underlying clay are incorporated in an unstructured ERT mesh. Subsequent inversions are performed without applying smoothness constraints across these boundaries. Inversion models obtained using these structural constraints contain subtle resistivity variations within the aquifer that are hardly visible in standard inversion models as a result of strong vertical smearing in the latter. In the upper aquifer region, with high GPR coherency and horizontal layering, the resistivity is moderately high (>300 Ωm). We suggest that this region consists of sediments that were rearranged during more than a century of channelized flow. In the lower, low-coherency region, the GPR image reveals fluvial features (e.g., foresets) and generally more heterogeneous deposits. In this region the resistivity is lower (~200 Ωm), which we attribute to increased amounts of fines in some of the well-sorted fluvial deposits. We also find elongated conductive anomalies that correspond to the location of river embankments that were removed in 2002.


Abstract:

Simulated-annealing-based conditional simulations provide a flexible means of quantitatively integrating diverse types of subsurface data. Although such techniques are being increasingly used in hydrocarbon reservoir characterization studies, their potential in environmental, engineering and hydrological investigations is still largely unexploited. Here, we introduce a novel simulated annealing (SA) algorithm geared towards the integration of high-resolution geophysical and hydrological data which, compared to more conventional approaches, provides significant advancements in the way that large-scale structural information in the geophysical data is accounted for. Model perturbations in the annealing procedure are made by drawing from a probability distribution for the target parameter conditioned to the geophysical data. This is the only place where geophysical information is utilized in our algorithm, which is in marked contrast to other approaches where model perturbations are made through the swapping of values in the simulation grid and agreement with soft data is enforced through a correlation coefficient constraint. Another major feature of our algorithm is the way in which available geostatistical information is utilized. Instead of constraining realizations to match a parametric target covariance model over a wide range of spatial lags, we constrain the realizations only at smaller lags where the available geophysical data cannot provide enough information. Thus we allow the larger-scale subsurface features resolved by the geophysical data to have much more direct control on the output realizations. Further, since the only component of the SA objective function required in our approach is a covariance constraint at small lags, our method has improved convergence and computational efficiency over more traditional methods.
Here, we present the results of applying our algorithm to the integration of porosity log and tomographic crosshole georadar data to generate stochastic realizations of the local-scale porosity structure. Our procedure is first tested on a synthetic data set, and then applied to data collected at the Boise Hydrogeophysical Research Site.
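The two distinguishing features of the algorithm described above — perturbations drawn from a distribution conditioned to the geophysical data, and an objective function restricted to small-lag geostatistics — can be sketched in a toy 1-D setting. Everything here is an assumed stand-in, not the published implementation: the smooth `geo` profile standing in for a tomographic porosity estimate, the target semivariogram values, and the annealing schedule are all invented for illustration.

```python
# Toy simulated-annealing conditional simulation: each perturbation draws a
# new value from a distribution centered on the geophysical estimate at the
# perturbed cell, and the objective only constrains the semivariogram at
# small lags, leaving large-scale structure to the geophysics.
import numpy as np

rng = np.random.default_rng(0)

def semivariogram(m, lags):
    """Experimental semivariogram of a 1-D sequence at the given lags."""
    return np.array([0.5 * np.mean((m[h:] - m[:-h]) ** 2) for h in lags])

# Hypothetical smooth porosity estimate (e.g., from crosshole georadar)
geo = 0.25 + 0.05 * np.sin(np.linspace(0, 4 * np.pi, 100))

lags = [1, 2, 3]                          # small lags only
target = semivariogram(geo, lags) + 2e-4  # assumed small-lag target model

def objective(m):
    return np.sum((semivariogram(m, lags) - target) ** 2)

m = geo.copy()
best, best_obj = m.copy(), objective(m)
T = 1e-7  # initial annealing temperature (arbitrary toy value)
for _ in range(3000):
    i = rng.integers(m.size)
    trial = m.copy()
    # perturbation: draw from a distribution conditioned to the geophysics
    trial[i] = rng.normal(geo[i], 0.01)
    d = objective(trial) - objective(m)
    if d < 0 or rng.random() < np.exp(-d / T):
        m = trial
        obj = objective(m)
        if obj < best_obj:
            best, best_obj = m.copy(), obj
    T *= 0.999  # geometric cooling

# `best` adds small-scale variability matching the target variogram while
# tracking the geophysical estimate at larger scales.
```

Because the accepted values are always drawn around `geo`, the realization can never drift far from the geophysically resolved large-scale structure, which is precisely the "direct control" the abstract claims over swap-based perturbation schemes.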


Abstract:

Quantifying the spatial configuration of hydraulic conductivity (K) in heterogeneous geological environments is essential for accurate predictions of contaminant transport, but is difficult because of the inherent limitations in resolution and coverage associated with traditional hydrological measurements. To address this issue, we consider crosshole and surface-based electrical resistivity geophysical measurements, collected over time during a saline tracer experiment. We use a Bayesian Markov chain Monte Carlo (MCMC) methodology to jointly invert the dynamic resistivity data, together with borehole tracer concentration data, to generate multiple posterior realizations of K that are consistent with all available information. We do this within a coupled inversion framework, whereby the geophysical and hydrological forward models are linked through an uncertain relationship between electrical resistivity and concentration. To minimize computational expense, a facies-based subsurface parameterization is developed. The Bayesian MCMC methodology allows us to explore the potential benefits of including the geophysical data in the inverse problem by examining their effect on our ability to identify fast flowpaths in the subsurface, and their impact on hydrological prediction uncertainty. Using a complex, geostatistically generated, two-dimensional numerical example representative of a fluvial environment, we demonstrate that flow model calibration is improved and prediction error is decreased when the electrical resistivity data are included. The worth of the geophysical data is found to be greatest for long spatial correlation lengths of subsurface heterogeneity with respect to wellbore separation, where flow and transport are largely controlled by highly connected flowpaths.
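The coupled-inversion idea in the abstract above — a Metropolis sampler whose likelihood combines hydrological and geophysical data through an uncertain petrophysical link — can be reduced to a two-parameter toy problem. This is a self-contained illustration, not the study's actual inversion: the Gaussian breakthrough curve, the linear concentration-to-resistivity link, the parameter names `K` and `a`, and all noise levels are invented assumptions.

```python
# Toy Bayesian MCMC coupled inversion: jointly sample a hydraulic parameter K
# and an uncertain petrophysical coefficient a linking concentration to the
# geophysical response, conditioned on both data types at once.
import numpy as np

rng = np.random.default_rng(1)
K_true, a_true = 2.0, 1.5  # synthetic "truth"

def hydro_forward(K, t):
    # hypothetical breakthrough curve: peak arrives earlier for larger K
    return np.exp(-(t - 5.0 / K) ** 2)

def geo_forward(K, a, t):
    # coupled link: geophysical response proportional (via a) to concentration
    return a * hydro_forward(K, t)

t = np.linspace(0.1, 10.0, 25)
d_hyd = hydro_forward(K_true, t) + rng.normal(0, 0.02, t.size)
d_geo = geo_forward(K_true, a_true, t) + rng.normal(0, 0.02, t.size)

def log_post(K, a):
    if not (0.1 < K < 10.0 and 0.1 < a < 10.0):  # uniform priors
        return -np.inf
    r_hyd = d_hyd - hydro_forward(K, t)
    r_geo = d_geo - geo_forward(K, a, t)          # both data sets enter jointly
    return -0.5 * (np.sum(r_hyd**2) + np.sum(r_geo**2)) / 0.02**2

K, a = 1.5, 1.0
lp = log_post(K, a)
samples = []
for _ in range(4000):  # random-walk Metropolis
    Kp, ap = K + rng.normal(0, 0.1), a + rng.normal(0, 0.1)
    lpp = log_post(Kp, ap)
    if np.log(rng.random()) < lpp - lp:
        K, a, lp = Kp, ap, lpp
    samples.append((K, a))

post = np.array(samples[2000:])  # discard burn-in
# posterior means recover the synthetic truth within the data noise
```

In the actual study the role of `K` is played by a facies-based K field and the forward models are full flow/transport and resistivity simulations, but the structure of the sampler — one posterior, two likelihood terms, an explicitly uncertain petrophysical parameter — is the same.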