893 results for SPARSE
Abstract:
Background: There are sparse data on the performance of different types of drug-eluting stents (DES) in acute, real-life settings. Objective: The aim of the study was to compare the safety and efficacy of first- versus second-generation DES in patients with acute coronary syndromes (ACS). Methods: This all-comer registry enrolled consecutive patients diagnosed with ACS and treated with percutaneous coronary intervention with implantation of first- or second-generation DES, with one-year follow-up. The primary efficacy endpoint was defined as major adverse cardiac and cerebrovascular event (MACCE), a composite of all-cause death, nonfatal myocardial infarction, target-vessel revascularization and stroke. The primary safety outcome was definite stent thrombosis (ST) at one year. Results: Of the 1916 patients enrolled in the registry, 1328 were diagnosed with ACS. Of them, 426 were treated with first- and 902 with second-generation DES. There was no significant difference in the incidence of MACCE between the two types of DES at one year. The rate of acute and subacute ST was higher with first- vs. second-generation DES (1.6% vs. 0.1%, p < 0.001, and 1.2% vs. 0.2%, p = 0.025, respectively), but there was no difference regarding late ST (0.7% vs. 0.2%, respectively, p = 0.18) and gastrointestinal bleeding (2.1% vs. 1.1%, p = 0.21). In Cox regression, first-generation DES was an independent predictor of cumulative ST (HR 3.29 [1.30-8.31], p = 0.01). Conclusions: In an all-comer registry of ACS, the one-year rate of MACCE was comparable in groups treated with first- and second-generation DES. The use of first-generation DES was associated with higher rates of acute and subacute ST and was an independent predictor of cumulative ST.
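The hazard ratio above comes from a Cox proportional-hazards regression. As a hedged illustration of how such an HR is typically estimated, the sketch below fits a Cox model with the lifelines library on purely synthetic data; the variable names, coding and follow-up times are invented for the example and are not the registry's.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 1328  # same order of magnitude as the ACS cohort; all values below are synthetic
df = pd.DataFrame({
    "first_gen_des": rng.integers(0, 2, n),              # hypothetical coding: 1 = first-generation DES
    "age": rng.normal(65, 10, n),                        # hypothetical covariate
    "days_to_st": rng.exponential(300, n).clip(1, 365),  # time to stent thrombosis or censoring (days)
    "st_event": rng.integers(0, 2, n),                   # 1 = definite ST observed, 0 = censored
})

# Fit the proportional-hazards model; exp(coef) in the summary is the hazard ratio,
# analogous to the HR with 95% CI reported in the abstract.
cph = CoxPHFitter()
cph.fit(df, duration_col="days_to_st", event_col="st_event")
cph.print_summary()
```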
Abstract:
BACKGROUND: Only a few studies have explored the relation between coffee and tea intake and head and neck cancers, with inconsistent results. METHODS: We pooled individual-level data from nine case-control studies of head and neck cancers, including 5,139 cases and 9,028 controls. Logistic regression was used to estimate odds ratios (OR) and 95% confidence intervals (95% CI), adjusting for potential confounders. RESULTS: Caffeinated coffee intake was inversely related to the risk of cancer of the oral cavity and pharynx: the ORs were 0.96 (95% CI, 0.94-0.98) for an increment of 1 cup per day and 0.61 (95% CI, 0.47-0.80) in drinkers of >4 cups per day versus nondrinkers. This latter estimate was consistent for different anatomic sites (OR, 0.46; 95% CI, 0.30-0.71 for oral cavity; OR, 0.58; 95% CI, 0.41-0.82 for oropharynx/hypopharynx; and OR, 0.61; 95% CI, 0.37-1.01 for oral cavity/pharynx not otherwise specified) and across strata of selected covariates. No association of caffeinated coffee drinking was found with laryngeal cancer (OR, 0.96; 95% CI, 0.64-1.45 in drinkers of >4 cups per day versus nondrinkers). Data on decaffeinated coffee were too sparse for detailed analysis, but indicated no increased risk. Tea intake was not associated with head and neck cancer risk (OR, 0.99; 95% CI, 0.89-1.11 for drinkers versus nondrinkers). CONCLUSIONS: This pooled analysis of case-control studies supports the hypothesis of an inverse association between caffeinated coffee drinking and risk of cancer of the oral cavity and pharynx. IMPACT: Given the widespread use of coffee and the relatively high incidence and low survival of head and neck cancers, the observed inverse association may have appreciable public health relevance.
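The OR per increment of 1 cup/day is the exponentiated coefficient of a logistic regression. A minimal sketch with statsmodels on synthetic data follows; the exposure distribution, confounder and effect sizes are invented, not the pooled-study values.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
cups_per_day = rng.poisson(2.0, n)                       # hypothetical coffee exposure
age = rng.normal(60, 10, n)                              # hypothetical confounder
logit = -2.0 - 0.04 * cups_per_day + 0.02 * (age - 60)   # synthetic protective effect of coffee
case = rng.binomial(1, 1 / (1 + np.exp(-logit)))         # 1 = case, 0 = control

# Logistic regression of case status on cups/day, adjusted for age
X = sm.add_constant(np.column_stack([cups_per_day, age]))
fit = sm.Logit(case, X).fit(disp=0)

# OR per additional cup per day, with its 95% CI (exponentiated coefficient and bounds)
print("OR per cup/day:", np.exp(fit.params[1]))
print("95% CI:", np.exp(fit.conf_int()[1]))
```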
Abstract:
Several studies have reported high levels of inflammatory biomarkers in hypertension, but data from the general population are sparse, and sex differences have been little explored. The CoLaus Study is a cross-sectional examination survey in a random sample of 6067 Caucasians aged 35-75 years in Lausanne, Switzerland. Blood pressure (BP) was assessed using a validated oscillometric device. Anthropometric parameters were also measured, including body composition, using electrical bioimpedance. Crude serum levels of interleukin-6 (IL-6), tumor necrosis factor α (TNF-α) and ultrasensitive C-reactive protein (hsCRP) were positively, and interleukin-1β (IL-1β) negatively (P<0.001 for all), associated with BP. For IL-6, IL-1β and TNF-α, the association disappeared in multivariable analysis, largely explained by differences in age and body mass index, in particular fat mass. In contrast, hsCRP remained independently and positively associated with systolic (β (95% confidence interval): 1.15 (0.64; 1.65); P<0.001) and diastolic (0.75 (0.42; 1.08); P<0.001) BP. Relationships of hsCRP, IL-6 and TNF-α with BP tended to be stronger in women than in men, partly related to the difference in fat mass, yet the interaction between sex and IL-6 persisted after correction for all tested confounders. In the general population, the associations between inflammatory biomarkers and rising levels of BP are mainly driven by age and fat mass. The stronger associations in women suggest that sex differences might exist in the complex interplay between BP and inflammation.
Abstract:
1. Model-based approaches have been used increasingly in conservation biology over recent years. Species presence data used for predictive species distribution modelling are abundant in natural history collections, whereas reliable absence data are sparse, most notably for vagrant species such as butterflies and snakes. As predictive methods such as generalized linear models (GLM) require absence data, various strategies have been proposed to select pseudo-absence data. However, only a few studies exist that compare different approaches to generating these pseudo-absence data. 2. Natural history collection data are usually available for long periods of time (decades or even centuries), thus allowing historical considerations. However, this historical dimension has rarely been assessed in studies of species distribution, although there is great potential for understanding current patterns, i.e. the past is the key to the present. 3. We used GLM to model the distributions of three 'target' butterfly species, Melitaea didyma, Coenonympha tullia and Maculinea teleius, in Switzerland. We developed and compared four strategies for defining pools of pseudo-absence data and applied them to natural history collection data from the last 10, 30 and 100 years. Pools included: (i) sites without target species records; (ii) sites where butterfly species other than the target species were present; (iii) sites without butterfly species but with habitat characteristics similar to those required by the target species; and (iv) a combination of the second and third strategies. Models were evaluated and compared by the total deviance explained, the maximized Kappa and the area under the curve (AUC). 4. Among the four strategies, model performance was best for strategy 3. Contrary to expectations, strategy 2 resulted in even lower model performance compared with models with pseudo-absence data simulated totally at random (strategy 1). 5. Independent of the strategy, model performance was enhanced when sites with historical species presence data were not considered as pseudo-absence data. Therefore, the combination of strategy 3 with species records from the last 100 years achieved the highest model performance. 6. Synthesis and applications. The protection of suitable habitat for species survival or reintroduction in rapidly changing landscapes is a high priority among conservationists. Model-based approaches offer planning authorities the possibility of delimiting priority areas for species detection or habitat protection. The performance of these models can be enhanced by fitting them with pseudo-absence data relying on large archives of natural history collection species presence data rather than using randomly sampled pseudo-absence data.
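As a hedged sketch of the workflow described above (a binomial GLM fitted to presences plus sampled pseudo-absences, then evaluated by AUC), the following uses entirely synthetic site data; the predictors, sample sizes and the pseudo-absence pool are placeholders, not the butterfly data or the four strategies themselves.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical site-level environmental predictors and true presences
n_sites = 2000
env = rng.normal(size=(n_sites, 2))
p_true = 1 / (1 + np.exp(-(1.5 * env[:, 0] - 1.0 * env[:, 1] - 2.0)))
presence = rng.binomial(1, p_true)

# Build a training set: all presences plus an equal number of pseudo-absences
# drawn from a pool of sites without presence records (stand-in for strategies i-iv)
presence_idx = np.where(presence == 1)[0]
absence_pool = np.where(presence == 0)[0]
pseudo_abs_idx = rng.choice(absence_pool, presence_idx.size, replace=False)

idx = np.concatenate([presence_idx, pseudo_abs_idx])
y = np.concatenate([np.ones(presence_idx.size), np.zeros(pseudo_abs_idx.size)])
X = sm.add_constant(env[idx])

# Binomial GLM (logistic link), then evaluation by the area under the ROC curve
glm = sm.GLM(y, X, family=sm.families.Binomial()).fit()
pred = glm.predict(sm.add_constant(env))
print("AUC:", roc_auc_score(presence, pred))
```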
Abstract:
1. Statistical modelling is often used to relate sparse biological survey data to remotely derived environmental predictors, thereby providing a basis for predictively mapping biodiversity across an entire region of interest. The most popular strategy for such modelling has been to model distributions of individual species one at a time. Spatial modelling of biodiversity at the community level may, however, confer significant benefits for applications involving very large numbers of species, particularly if many of these species are recorded infrequently. 2. Community-level modelling combines data from multiple species and produces information on spatial pattern in the distribution of biodiversity at a collective community level instead of, or in addition to, the level of individual species. Spatial outputs from community-level modelling include predictive mapping of community types (groups of locations with similar species composition), species groups (groups of species with similar distributions), axes or gradients of compositional variation, levels of compositional dissimilarity between pairs of locations, and various macro-ecological properties (e.g. species richness). 3. Three broad modelling strategies can be used to generate these outputs: (i) 'assemble first, predict later', in which biological survey data are first classified, ordinated or aggregated to produce community-level entities or attributes that are then modelled in relation to environmental predictors; (ii) 'predict first, assemble later', in which individual species are modelled one at a time as a function of environmental variables, to produce a stack of species distribution maps that is then subjected to classification, ordination or aggregation; and (iii) 'assemble and predict together', in which all species are modelled simultaneously, within a single integrated modelling process. These strategies each have particular strengths and weaknesses, depending on the intended purpose of modelling and the type, quality and quantity of data involved. 4. Synthesis and applications. The potential benefits of modelling large multispecies data sets using community-level, as opposed to species-level, approaches include faster processing, increased power to detect shared patterns of environmental response across rarely recorded species, and enhanced capacity to synthesize complex data into a form more readily interpretable by scientists and decision-makers. Community-level modelling therefore deserves to be considered more often, and more widely, as a potential alternative or supplement to modelling individual species.
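A minimal sketch of the 'predict first, assemble later' strategy on synthetic data: one logistic model per species is fitted, and the stacked predictions are then classified into community types. Species counts, predictors and the clustering choice (k-means with five groups) are illustrative assumptions, not prescriptions from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_sites, n_species = 500, 40
env = rng.normal(size=(n_sites, 3))                        # hypothetical environmental predictors
beta = rng.normal(size=(3, n_species))
occ = rng.binomial(1, 1 / (1 + np.exp(-(env @ beta))))     # synthetic presence/absence matrix

# 'Predict first': one model per species, stacked into a matrix of predicted probabilities
stack = np.column_stack([
    LogisticRegression(max_iter=1000).fit(env, occ[:, s]).predict_proba(env)[:, 1]
    for s in range(n_species)
])

# 'Assemble later': classify sites into community types from the stacked predictions,
# and derive a macro-ecological summary (predicted species richness per site)
community_type = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(stack)
richness = stack.sum(axis=1)
print(np.bincount(community_type), richness[:5])
```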
Abstract:
Aging is ubiquitous to the human condition. The MRI correlates of healthy aging have been extensively investigated using a range of modalities, including volumetric MRI, quantitative MRI (qMRI), and diffusion tensor imaging. Despite this, the reported brainstem-related changes remain sparse. This is, in part, due to the technical and methodological limitations in quantitatively assessing and statistically analyzing this region. By utilizing a new method of brainstem segmentation, a large cohort of 100 healthy adults was assessed in this study for the effects of aging within the human brainstem in vivo. Using qMRI, tensor-based morphometry (TBM), and voxel-based quantification (VBQ), the volumetric and quantitative changes across healthy adults between 19 and 75 years were characterized. In addition to the increased R2* in the substantia nigra, corresponding to increasing iron deposition with age, several novel findings were reported in the current study. These include selective volumetric loss of the brachium conjunctivum, with a corresponding decrease in magnetization transfer and increase in proton density (PD), accounting for the previously described "midbrain shrinkage." Additionally, we found increases in R1 and PD in several pontine and medullary structures. We consider these changes in the context of well-characterized, functional age-related changes, and propose potential biophysical mechanisms. This study provides detailed quantitative analysis of the internal architecture of the brainstem and provides a baseline for further studies of neurodegenerative diseases that are characterized by early, pre-clinical involvement of the brainstem, such as Parkinson's and Alzheimer's diseases.
Abstract:
Patients with defective ectodysplasin A (EDA) are affected by X-linked hypohidrotic ectodermal dysplasia (XLHED), a condition characterized by sparse hair, inability to sweat, decreased lacrimation, frequent pulmonary infections, and missing and malformed teeth. The canine model of XLHED was used to study the developmental impact of EDA on secondary dentition, since dogs have an entirely brachyodont, diphyodont dentition similar to that in humans, as opposed to mice, which have only permanent teeth (monophyodont dentition), some of which are very different (aradicular hypsodont) than brachyodont human teeth. Also, clinical signs in humans and dogs with XLHED are virtually identical, whereas several are missing in the murine equivalent. In our model, the genetically missing EDA was compensated for by postnatal intravenous administration of soluble recombinant EDA. Untreated XLHED dogs have an incomplete set of conically shaped teeth similar to those seen in human patients with XLHED. After treatment with EDA, significant normalization of adult teeth was achieved in four of five XLHED dogs. Moreover, treatment restored normal lacrimation and resistance to eye and airway infections and improved sweating ability. These results not only provide proof of concept for a potential treatment of this orphan disease but also demonstrate an essential role of EDA in the development of secondary dentition.
Abstract:
1. Wind pollination is thought to have evolved in response to selection for mechanisms to promote pollination success, when animal pollinators become scarce or unreliable. We might thus expect wind-pollinated plants to be less prone to pollen limitation than their insect-pollinated counterparts. Yet, if pollen loads on stigmas of wind-pollinated species decline with distance from pollen donors, seed set might nevertheless be pollen-limited in populations of plants that cannot self-fertilize their progeny, but not in self-compatible hermaphroditic populations.2. Here, we test this hypothesis by comparing pollen limitation between dioecious and hermaphroditic (monoecious) populations of the wind-pollinated herb Mercurialis annua.3. In natural populations, seed set was pollen-limited in low-density patches of dioecious, but not hermaphroditic, M. annua, a finding consistent with patterns of distance-dependent seed set by females in an experimental array. Nevertheless, seed set was incomplete in both dioecious and hermaphroditic populations, even at high local densities. Further, both factors limited the seed set of females and hermaphrodites, after we manipulated pollen and resource availability in a common garden experiment.4. Synthesis. Our results are consistent with the idea that pollen limitation plays a role in the evolution of combined vs. separate sexes in M. annua. Taken together, they point to the potential importance of pollen transfer between flowers on the same plant (geitonogamy) by wind as a mechanism of reproductive assurance and to the dual roles played by pollen and resource availability in limiting seed set. Thus, seed set can be pollen-limited in sparse populations of a wind-pollinated species, where mates are rare or absent, having potentially important demographic and evolutionary implications.
Abstract:
This project addresses the initial phases of a larger project whose goal is the automatic conversion of image sequences to 3D. The project has focused on the calibrated reconstruction of image collections using the technique known as structure from motion. This technique belongs to the field of computer vision and is used to obtain the position and orientation of the different cameras together with a 3D reconstruction of the scene in the form of a point cloud.
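A hedged two-view sketch of the structure-from-motion step described above, using OpenCV: feature matching, essential-matrix estimation, relative pose recovery and triangulation into a point cloud. The image paths and the intrinsic matrix K are placeholders; a full pipeline would use calibrated intrinsics, many views and bundle adjustment.

```python
import cv2
import numpy as np

# Placeholder inputs: two overlapping views and an assumed intrinsic matrix K
img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)
K = np.array([[1200.0, 0.0, 960.0], [0.0, 1200.0, 540.0], [0.0, 0.0, 1.0]])

# 1. Detect and match local features between the two views
orb = cv2.ORB_create(5000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# 2. Estimate the essential matrix and recover the relative camera pose (R, t)
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, prob=0.999, threshold=1.0)
_, R, t, mask_pose = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# 3. Triangulate the inlier matches into a sparse 3D point cloud
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
inl = mask_pose.ravel() > 0
pts4d = cv2.triangulatePoints(P1, P2, pts1[inl].T, pts2[inl].T)
cloud = (pts4d[:3] / pts4d[3]).T
print(cloud.shape)
```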
Abstract:
Owing to the large number of transistors per mm² found in today's conventional GPUs, in recent years these devices have been used for general-purpose computing, since they offer higher performance for parallel computation. This project implements the sparse matrix-vector product on OpenCL. The first chapters review the theoretical background needed to understand the problem. We then cover the fundamentals of OpenCL and of the hardware on which the developed libraries run. The following chapter describes the kernel code and its data flow. Finally, the software is evaluated through comparisons against the CPU.
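As a CPU reference for the kernel logic, the sketch below computes the sparse matrix-vector product in CSR format, with one loop iteration per row playing the role that one work-item per row would play in a simple scalar OpenCL kernel. It is a NumPy/SciPy stand-in for illustration, not the project's OpenCL code.

```python
import numpy as np
from scipy.sparse import random as sparse_random

def spmv_csr(data, indices, indptr, x):
    """Reference CSR sparse matrix-vector product: y = A @ x."""
    n_rows = len(indptr) - 1
    y = np.zeros(n_rows)
    for row in range(n_rows):           # in a scalar OpenCL kernel, each row is one work-item
        start, end = indptr[row], indptr[row + 1]
        y[row] = np.dot(data[start:end], x[indices[start:end]])
    return y

# Synthetic sparse matrix and dense vector for a quick check against SciPy
A = sparse_random(1000, 1000, density=0.01, format="csr", random_state=0)
x = np.random.default_rng(0).random(1000)
assert np.allclose(spmv_csr(A.data, A.indices, A.indptr, x), A @ x)
print("CSR SpMV reference matches SciPy")
```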
Abstract:
Long-term observations of individuals with the so-called Langer-Giedion syndrome (LGS) or tricho-rhino-phalangeal syndrome type II (TRPS2) are scarce. We report here on the follow-up of four LGS individuals, including one first described by Andres Giedion in 1969, and review the sparse publications on adults with this syndrome, which comprises ectodermal dysplasia, multiple cone-shaped epiphyses prior to puberty, multiple cartilaginous exostoses, and mostly mild intellectual impairment. LGS is caused by deletion of the chromosomal segment 8q24.11-q24.13 containing, among others, the genes EXT1 and TRPS1. Most patients with TRPS2 are only borderline or mildly cognitively delayed, and few are of normal intelligence. Their practical skills are better than their intellectual capability, and, for this reason and because of their low self-esteem, they are often underestimated. Some patients develop seizures at variable age. Osteomas on processes of cervical vertebrae may cause pressure on cervical nerves or dissection of cerebral arteries. Joint stiffness is observed during childhood and changes later to joint laxity, causing instability and proneness to trauma. Perthes disease is not rare. Almost all males become bald at or soon after puberty, and some develop (pseudo)gynecomastia. Growth hormone deficiency was found in a few patients, TSH deficiency so far only in one. Puberty and fertility are diminished, and no instance of transmission of the deletion from a non-mosaic parent to a child has been observed so far. Several affected females had vaginal atresia with consequent hydrometrocolpos.
Abstract:
Noonan syndrome (NS) and cardio-facio-cutaneous (CFC) syndrome are autosomal dominant disorders characterized by heart defects, facial dysmorphism, ectodermal abnormalities, and mental retardation. There is a significant clinical overlap between NS and CFC syndrome, but ectodermal abnormalities and mental retardation are more frequent in CFC syndrome. Mutations in PTPN11 and KRAS have been identified in patients with NS and those in KRAS, BRAF and MAP2K1/2 have been identified in patients with CFC syndrome, establishing a new role of the RAS/MAPK pathway in human development. Recently, mutations in the son of sevenless gene (SOS1) have also been identified in patients with NS. To clarify the clinical spectrum of patients with SOS1 mutations, we analyzed 24 patients with NS, including 3 patients in a three-generation family, and 30 patients with CFC syndrome without PTPN11, KRAS, HRAS, BRAF, and MAP2K1/2 (MEK1/2) mutations. We identified two SOS1 mutations in four NS patients, including three patients in the above-mentioned three-generation family. In the patients with a CFC phenotype, three mutations, including a novel three amino-acid insertion, were identified in one CFC patient and two patients with both NS and CFC phenotypes. These three patients exhibited ectodermal abnormalities, such as curly hair, sparse eyebrows, and dry skin, and two of them showed mental retardation. Our results suggest that patients with SOS1 mutations range from NS to CFC syndrome.
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields. The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity as well as the larger-scale trend of the prevailing hydraulic conductivity field.
The results also indicate that this novel data integration approach is remarkably flexible and robust and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges. In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov Chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems in order to allow for a more accurate and realistic quantification of the uncertainty associated with the thus inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigate two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution. The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach proves to be highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
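A generic sketch of a gradual-deformation (prior-preserving) proposal inside a Metropolis sampler, on a toy linear inverse problem: blending the current Gaussian field with an independent prior realization leaves the Gaussian prior invariant, so only the likelihood enters the acceptance ratio. The covariance, forward operator and tuning below are placeholders, not the thesis's georadar configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear inverse problem: m ~ N(0, C) with exponential covariance, d = G m + noise
n_cells, n_data, sigma_d = 60, 20, 0.05
dist = np.abs(np.subtract.outer(np.arange(n_cells), np.arange(n_cells)))
C = np.exp(-dist / 10.0)
L = np.linalg.cholesky(C + 1e-10 * np.eye(n_cells))
G = rng.normal(size=(n_data, n_cells)) / np.sqrt(n_cells)
m_true = L @ rng.standard_normal(n_cells)
d_obs = G @ m_true + sigma_d * rng.standard_normal(n_data)

def log_lik(m):
    r = G @ m - d_obs
    return -0.5 * np.dot(r, r) / sigma_d**2

def propose(m_cur, theta=0.2):
    # Gradual deformation: combine the current field with an independent prior
    # realization; for any theta the result is again N(0, C), so the prior cancels.
    m_indep = L @ rng.standard_normal(n_cells)
    return np.cos(theta) * m_cur + np.sin(theta) * m_indep

m = L @ rng.standard_normal(n_cells)
ll = log_lik(m)
accepted = 0
for _ in range(5000):
    m_prop = propose(m)
    ll_prop = log_lik(m_prop)
    if np.log(rng.uniform()) < ll_prop - ll:   # Metropolis acceptance on the likelihood ratio
        m, ll = m_prop, ll_prop
        accepted += 1
print("acceptance rate:", accepted / 5000)
```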
Abstract:
In the PhD thesis “Sound Texture Modeling” we deal with the statistical modelling of textural sounds such as water, wind and rain for synthesis and classification. Our initial model is based on a wavelet tree signal decomposition and the modelling of the resulting sequence by means of a parametric probabilistic model that can be situated within the family of models trainable via expectation maximization (hidden Markov tree model). Our model is able to capture key characteristics of the source textures (water, rain, fire, applause, crowd chatter) and faithfully reproduces some of the sound classes. In terms of a more general taxonomy of natural events proposed by Gaver, we worked on models for natural event classification and segmentation. While the event labels comprise physical interactions between materials that do not have textural properties in their entirety, those segmentation models can help in identifying textural portions of an audio recording useful for analysis and resynthesis. Following our work on concatenative synthesis of musical instruments, we have developed a pattern-based synthesis system that allows a database of units to be explored sonically by means of their representation in a perceptual feature space. Concatenative synthesis with “molecules” built from sparse atomic representations also allows capturing low-level correlations in perceptual audio features, while facilitating the manipulation of textural sounds based on their physical and perceptual properties. We have approached the problem of sound texture modelling for synthesis from different directions, namely a low-level signal-theoretic point of view through a wavelet transform, and a more high-level point of view driven by perceptual audio features in the concatenative synthesis setting. The developed framework provides a unified approach to the high-quality resynthesis of natural texture sounds. Our research is embedded within the Metaverse 1 European project (2008-2011), where our models contribute as low-level building blocks within a semi-automated soundscape generation system.
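A small sketch of the kind of wavelet tree decomposition such a probabilistic model is trained on, using PyWavelets on a synthetic noise texture; the wavelet family and depth are illustrative choices, and the hidden Markov tree training itself is not shown.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
fs = 16000
t = np.arange(0, 1.0, 1.0 / fs)
# Toy stand-in for a recorded texture: white noise with a slow amplitude envelope
signal = rng.standard_normal(t.size) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))

# Multi-level wavelet decomposition: coeffs[0] is the coarsest approximation band,
# coeffs[1:] are detail bands from coarse to fine. The parent-child dependencies
# across these scales are what a hidden Markov tree model would be trained on
# (e.g. via expectation maximization).
coeffs = pywt.wavedec(signal, "db4", level=6)
for level, c in enumerate(coeffs):
    print(level, c.size)

# The decomposition is invertible, which is what makes resynthesis possible
reconstructed = pywt.waverec(coeffs, "db4")
print(np.allclose(reconstructed[: signal.size], signal))
```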
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale for the purpose of improving predictions of groundwater flow and solute transport. However, extending corresponding approaches to the regional scale still represents one of the major challenges in the domain of hydrogeophysics. To address this problem, we have developed a regional-scale data integration methodology based on a two-step Bayesian sequential simulation approach. Our objective is to generate high-resolution stochastic realizations of the regional-scale hydraulic conductivity field in the common case where there exist spatially exhaustive but poorly resolved measurements of a related geophysical parameter, as well as highly resolved but spatially sparse collocated measurements of this geophysical parameter and the hydraulic conductivity. To integrate this multi-scale, multi-parameter database, we first link the low- and high-resolution geophysical data via a stochastic downscaling procedure. This is followed by relating the downscaled geophysical data to the high-resolution hydraulic conductivity distribution. After outlining the general methodology of the approach, we demonstrate its application to a realistic synthetic example where we consider as data high-resolution measurements of the hydraulic and electrical conductivities at a small number of borehole locations, as well as spatially exhaustive, low-resolution estimates of the electrical conductivity obtained from surface-based electrical resistivity tomography. The different stochastic realizations of the hydraulic conductivity field obtained using our procedure are validated by comparing their solute transport behaviour with that of the underlying 'true' hydraulic conductivity field. We find that, even in the presence of strong subsurface heterogeneity, our proposed procedure allows for the generation of faithful representations of the regional-scale hydraulic conductivity structure and reliable predictions of solute transport over long, regional-scale distances.
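The two-step structure (downscale the geophysical field, then simulate hydraulic conductivity conditioned on it) can be caricatured in one dimension as below. The actual method uses Bayesian sequential simulation with spatial conditioning at both steps, whereas this sketch substitutes simple interpolation and a borehole-fitted regression; all names and values are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- toy 1-D stand-ins for the multi-scale database ---
nx_coarse, nx_fine, n_bh = 20, 200, 15
x_coarse = np.linspace(0, 1, nx_coarse)
x_fine = np.linspace(0, 1, nx_fine)
log_sigma_coarse = rng.normal(-3.0, 0.3, nx_coarse)   # low-resolution, ERT-like log-conductivity

# Collocated borehole measurements of log(sigma) and log(K), linked by a synthetic petrophysical relation
idx_bh = rng.choice(nx_fine, n_bh, replace=False)
log_sigma_bh = np.interp(x_fine[idx_bh], x_coarse, log_sigma_coarse) + rng.normal(0, 0.1, n_bh)
log_K_bh = 2.0 * log_sigma_bh - 4.0 + rng.normal(0, 0.2, n_bh)

# --- step 1: "downscale" the geophysical field (here: interpolation plus a stochastic residual) ---
log_sigma_fine = np.interp(x_fine, x_coarse, log_sigma_coarse) + rng.normal(0, 0.1, nx_fine)

# --- step 2: simulate log(K) conditional on the downscaled geophysics via the borehole relation ---
slope, intercept = np.polyfit(log_sigma_bh, log_K_bh, 1)
resid_std = np.std(log_K_bh - (slope * log_sigma_bh + intercept))
log_K_realization = slope * log_sigma_fine + intercept + rng.normal(0, resid_std, nx_fine)
print(log_K_realization[:5])
```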