157 results for EFFICIENT SIMULATION
Abstract:
The likelihood of significant exposure to drugs in infants through breast milk is poorly defined, given the difficulties of conducting pharmacokinetics (PK) studies. Using fluoxetine (FX) as an example, we conducted a proof-of-principle study applying population PK (popPK) modeling and simulation to estimate drug exposure in infants through breast milk. We simulated data for 1,000 mother-infant pairs, assuming conservatively that the FX clearance in an infant is 20% of the allometrically adjusted value in adults. The model-generated estimate of the milk-to-plasma ratio for FX (mean: 0.59) was consistent with those reported in other studies. The median infant-to-mother ratio of FX steady-state plasma concentrations predicted by the simulation was 8.5%. Although the disposition of the active metabolite, norfluoxetine, could not be modeled, popPK-informed simulation may be valid for other drugs, particularly those without active metabolites, thereby providing a practical alternative to conventional PK studies for exposure risk assessment in this population.
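The exposure arithmetic behind such a simulation can be sketched in a few lines. The sketch below is illustrative only: the adult clearance, infant weight, and daily milk intake are hypothetical placeholders, not the study's popPK parameters; only the allometric exponent of 0.75, the conservative 20% clearance assumption, and the milk-to-plasma ratio of 0.59 follow the abstract.

```python
def infant_clearance(adult_cl, infant_wt_kg, adult_wt_kg=70.0, maturation=0.2):
    """Allometrically scale an adult clearance (L/h) to an infant, then apply
    the conservative assumption that infant clearance is 20% of that value."""
    return adult_cl * (infant_wt_kg / adult_wt_kg) ** 0.75 * maturation

def infant_to_mother_ratio(milk_to_plasma, milk_intake_l_day, infant_cl_l_day):
    """Steady-state infant/mother plasma concentration ratio.
    Infant dose rate (per unit maternal concentration) = M/P * milk intake;
    C_ss,infant = dose rate / clearance."""
    return milk_to_plasma * milk_intake_l_day / infant_cl_l_day

# Hypothetical inputs: 40 L/h adult clearance, 4-kg infant, 0.6 L/day milk,
# and the model-estimated milk-to-plasma ratio of 0.59 from the abstract.
cl_infant = infant_clearance(40.0, 4.0)                 # L/h
ratio = infant_to_mother_ratio(0.59, 0.6, cl_infant * 24.0)
```

With these placeholder inputs the predicted infant-to-mother ratio comes out at a few percent; reproducing the reported median of 8.5% requires the study's full popPK model with 1,000 simulated mother-infant pairs.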
Abstract:
In this paper we propose a highly accurate approximation procedure for ruin probabilities in the classical collective risk model, based on a quadrature/rational approximation procedure proposed in [2]. For a certain class of claim size distributions (which contains the completely monotone distributions) we give a theoretical justification for the method. We also show that under weaker assumptions on the claim size distribution, the method may still perform reasonably well in some cases. This in particular provides an efficient alternative to a related method proposed in [3]. A number of numerical illustrations of the performance of this procedure are provided for both completely monotone and other types of random variables.
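For intuition, the classical model admits a closed-form benchmark when claim sizes are exponential. A minimal sketch, assuming a compound Poisson surplus process with premium rate c, claim intensity lam, and mean claim size mu (the quadrature/rational machinery of [2] is not reproduced here), together with a crude Monte Carlo check of the kind such approximations aim to outperform:

```python
import math
import random

def ruin_prob_exponential(u, lam, mu, c):
    """Closed-form Cramer-Lundberg ruin probability for exponential claims
    with mean mu: psi(u) = (lam*mu/c) * exp(-(1/mu - lam/c) * u),
    valid when c > lam*mu (positive safety loading)."""
    rho = lam * mu / c
    if rho >= 1.0:
        return 1.0
    return rho * math.exp(-(1.0 / mu - lam / c) * u)

def ruin_prob_mc(u, lam, mu, c, horizon=100.0, n_paths=10000, seed=42):
    """Crude Monte Carlo check: simulate Poisson claim arrivals up to a
    finite horizon and count paths whose surplus u + c*t - S(t) drops
    below zero at some claim instant."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_paths):
        t, claims = 0.0, 0.0
        while True:
            t += rng.expovariate(lam)            # next claim arrival time
            if t > horizon:
                break
            claims += rng.expovariate(1.0 / mu)  # claim size with mean mu
            if u + c * t - claims < 0.0:
                ruined += 1
                break
    return ruined / n_paths
```

For lam = mu = 1 and premium rate c = 1.5, the closed form gives psi(1) ≈ 0.478, and the Monte Carlo estimate agrees to within sampling error; exactly this kind of brute-force baseline is what an efficient approximation procedure replaces for general claim distributions.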
Abstract:
Density-gradient-driven instabilities arise in a wide variety of flows. One example is the geological sequestration of carbon dioxide in porous media. The gas is injected at high pressure into deep saline aquifers. The density difference between brine saturated with dissolved CO2 and the surrounding brine induces favorable currents that transport it toward deeper geological layers. Density gradients can also cause the undesired transport of toxic substances, which may eventually lead to soil and groundwater pollution. The range of scales involved in this type of phenomenon is very wide, extending from the pore scale, where the instabilities grow, up to the aquifer scale, at which the long-term processes take place. Faithfully reproducing the physics by numerical simulation therefore remains a challenge because of the multiscale character of these phenomena in both space and time. It requires the development of high-performance algorithms and the use of modern computational tools. In combination with iterative solution methods, multiscale methods make it possible to solve large systems of algebraic equations efficiently. These methods were introduced as upscaling and downscaling techniques for the simulation of flow in porous media in order to handle strong heterogeneities of the permeability field. The principle relies on the parallel use of two grids: the first is chosen according to the resolution of the permeability field (fine grid), while the second (coarse grid) is used to approximate the fine problem at lower cost. The quality of the multiscale solution can be improved iteratively to prevent excessive errors when the permeability field is complex.
Adaptive methods that restrict the update procedures to regions with steep gradients make it possible to limit the additional computational cost. In the case of density-driven instabilities, the scale of the phenomena changes over time. Consequently, adaptive multiscale methods are required to account for these dynamics. The objective of this thesis is to develop efficient adaptive multiscale algorithms for the simulation of density-driven instabilities. To this end, we build on the Multiscale Finite-Volume (MsFV) method, which offers the advantage of resolving transport phenomena while conserving mass exactly. In the first part, we demonstrate that the approximations of the MsFV method generate unphysical fingering, whose suppression requires iterative correction operations. The additional computational cost of these operations can, however, be offset by adaptive methods. We also propose using the MsFV method as a downscaling technique: the coarse grid is used in zones where the flow is relatively homogeneous, while the finer grid is used to resolve steep gradients. In the second part, the multiscale method is extended to an arbitrary number of levels. We show that the generalized method is efficient for solving large systems of algebraic equations. In the last part, we focus our study on the scales that govern the evolution of density-driven instabilities. Identifying the local and global structure of the flow allows the instabilities to be upscaled at late times, while the small-scale structures are retained during the onset of the instability.
The results presented in this work extend the understanding of MsFV methods and provide efficient multiscale formulations for the simulation of density-driven instabilities. - Density-driven instabilities in porous media are of interest for a wide range of applications, for instance, for geological sequestration of CO2, during which CO2 is injected at high pressure into deep saline aquifers. Due to the density difference between the CO2-saturated brine and the surrounding brine, a downward migration of CO2 into deeper regions, where the risk of leakage is reduced, takes place. Similarly, undesired spontaneous mobilization of potentially hazardous substances that might endanger groundwater quality can be triggered by density differences. Over the last years, these effects have been investigated with the help of numerical groundwater models. Major challenges in simulating density-driven instabilities arise from the different scales of interest involved, i.e., the scale at which instabilities are triggered and the aquifer scale over which long-term processes take place. An accurate numerical reproduction is possible only if the finest scale is captured. For large aquifers, this leads to problems with a large number of unknowns. Advanced numerical methods are required to efficiently solve these problems with today's available computational resources. Besides efficient iterative solvers, multiscale methods are available to solve large numerical systems. Originally, multiscale methods were developed as upscaling-downscaling techniques to resolve strong permeability contrasts. In this case, two static grids are used: one is chosen with respect to the resolution of the permeability field (fine grid); the other (coarse grid) is used to approximate the fine-scale problem at low computational costs.
The quality of the multiscale solution can be iteratively improved to avoid large errors in case of complex permeability structures. Adaptive formulations, which restrict the iterative update to domains with large gradients, enable limiting the additional computational costs of the iterations. In case of density-driven instabilities, additional spatial scales appear which change with time. Flexible adaptive methods are required to account for these emerging dynamic scales. The objective of this work is to develop an adaptive multiscale formulation for the efficient and accurate simulation of density-driven instabilities. We consider the Multiscale Finite-Volume (MsFV) method, which is well suited for simulations including the solution of transport problems as it guarantees a conservative velocity field. In the first part of this thesis, we investigate the applicability of the standard MsFV method to density-driven flow problems. We demonstrate that approximations in MsFV may trigger unphysical fingers and that iterative corrections are necessary. Adaptive formulations (e.g., limiting a refined solution to domains with large concentration gradients where fingers form) can be used to balance the extra costs. We also propose to use the MsFV method as a downscaling technique: the coarse discretization is used in areas without significant change in the flow field, whereas the problem is refined in the zones of interest. This enables accounting for the dynamic change in scales of density-driven instabilities. In the second part of the thesis, the MsFV algorithm, which originally employs one coarse level, is extended to an arbitrary number of coarse levels. We prove that this keeps the MsFV method efficient for problems with a large number of unknowns. In the last part of this thesis, we focus on the scales that control the evolution of density fingers.
The identification of local and global flow patterns allows a coarse description at late times while conserving fine-scale details during the onset stage. Results presented in this work advance the understanding of the Multiscale Finite-Volume method and offer efficient dynamic multiscale formulations to simulate density-driven instabilities. - Aquifers characterized by porous structures and highly permeable fractures are of particular interest to hydrogeologists and environmental engineers. In these media, a wide variety of flows can be observed. The most common are the transport of contaminants by groundwater, reactive transport, and the simultaneous flow of several immiscible phases, such as oil and water. The scale that characterizes these flows is set by the interplay of geological heterogeneity and physical processes. A fluid at rest in the pore space of a porous medium can be destabilized by density gradients, which can be induced by local changes in temperature or by the dissolution of a chemical compound. Density-driven instabilities are of particular interest since they can ultimately compromise water quality. A striking example is the salinization of fresh water in aquifers through the penetration of denser salt water into deep regions. In density-driven flows, the characteristic scales range from the pore scale, where the instabilities grow, up to the aquifer scale, over which the long-term processes take place. Since in-situ investigations are practically impossible, numerical models are used to predict and assess the risks associated with density-driven instabilities.
A correct description of these phenomena relies on resolving all scales of the flow, which can span eight to ten orders of magnitude for large aquifers. This results in very large numerical problems that are extremely expensive to solve. Sophisticated numerical schemes are therefore necessary to perform accurate simulations of hydrodynamic instabilities at large scale. In this work, we present several numerical methods that allow density-driven instabilities to be simulated efficiently and accurately. These new methods are based on the Multiscale Finite-Volume method. The idea is to project the original problem onto a coarser scale, where it is cheaper to solve, and then to map the coarse solution back to the original scale. This technique is particularly well suited to problems in which a wide range of scales is involved and evolves in space and time. It reduces computational cost by limiting the detailed description of the problem to the regions that contain a moving concentration front. The achievements are illustrated by simulating phenomena such as salt-water intrusion and carbon dioxide sequestration.
Abstract:
Preface The starting point for this work, and eventually the subject of the whole thesis, was the question: how to estimate the parameters of affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for characteristic functions, has made them the models of choice for many theoretical constructions and practical applications. At the same time, estimating the parameters of stochastic volatility jump-diffusion models is not a straightforward task. The problem stems from the variance process, which is non-observable. There are several estimation methodologies that deal with the estimation of latent variables. One appeared particularly interesting: it proposes an estimator that, in contrast to the other methods, requires neither discretization nor simulation of the process, namely the Continuous Empirical Characteristic Function (ECF) estimator based on the unconditional characteristic function. However, the procedure was derived only for stochastic volatility models without jumps. Thus, it became the subject of my research. This thesis consists of three parts, each written as an independent, self-contained article. At the same time, the questions answered by the second and third parts of this work arise naturally from the issues investigated and the results obtained in the first one. The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for stochastic volatility models with jumps in both the asset price and variance processes. The estimation procedure is based on the joint unconditional characteristic function of the stochastic process. The major analytical result of this part, as well as of the whole thesis, is the closed-form expression for the joint unconditional characteristic function for the stochastic volatility jump-diffusion models.
The empirical part of the chapter suggests that, besides stochastic volatility, jumps in both the mean and the volatility equation are relevant for modelling returns of the S&P500 index, which was chosen as a general representative of the stock asset class. Hence, the next question is: what jump process to use to model returns of the S&P500. The decision about the jump process in the framework of affine jump-diffusion models boils down to defining the intensity of the compound Poisson process, either a constant or some function of the state variables, and to choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, there are at least three distributions of the jump size currently used for the asset log-prices: normal, exponential and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if we are to model the S&P500 index by a stochastic volatility jump-diffusion model. This is a surprising result: the exponential distribution has fatter tails, and for this reason either the exponential or the double exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data. The idea of testing the efficiency of the Continuous ECF estimator on simulated data had already appeared when the first estimation results of the first chapter were obtained. In the absence of a benchmark or any ground for comparison, it is unreasonable to be sure that our parameter estimates and the true parameters of the models coincide. The conclusion of the second chapter provides one more reason to perform that kind of test. Thus, the third part of this thesis concentrates on the estimation of parameters of stochastic volatility jump-diffusion models on the basis of asset price time series simulated from various "true" parameter sets.
The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of finding the true parameters, and the third chapter demonstrates that our estimator indeed has this ability. Once it is clear that the Continuous ECF estimator based on the unconditional characteristic function works, the next question immediately arises: can the computational effort be reduced without affecting the efficiency of the estimator, or can the efficiency of the estimator be improved without dramatically increasing the computational burden? The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function used in its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure. In practice, however, this relationship is not so straightforward, owing to the increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, bi-dimensional, unconditional characteristic function. As a result, the preference for one or the other depends on the model to be estimated; thus, the computational effort can in some cases be reduced without affecting the efficiency of the estimator. Improving the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter of this thesis, in addition to what was discussed above, compares the performance of the estimators with bi- and three-dimensional unconditional characteristic functions on simulated data.
It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, due to the limitations of the computing power and optimization toolboxes available to the general public. Thus, the Continuous ECF estimator based on the joint, bi-dimensional, unconditional characteristic function is well justified and can be used for the estimation of parameters of stochastic volatility jump-diffusion models.
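The core of a CF-matching estimator of this kind is easy to sketch. The toy below is a hypothetical one-dimensional analogue: it recovers the volatility of simulated Gaussian returns by minimizing the discretised distance between the empirical and model characteristic functions on a grid of arguments; the affine jump-diffusion CFs and the joint (multi-dimensional) construction of the thesis are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_cf(x, t):
    """Empirical characteristic function: phi_n(t) = (1/n) * sum exp(i t x_j)."""
    return np.exp(1j * np.outer(t, x)).mean(axis=1)

def gaussian_cf(t, mu, sigma):
    """Model characteristic function of N(mu, sigma^2)."""
    return np.exp(1j * t * mu - 0.5 * (sigma * t) ** 2)

# Simulated 'returns' with known truth sigma = 2; estimate sigma by matching
# the empirical CF to the model CF on a grid of t values (uniform weights).
x = rng.normal(0.0, 2.0, size=20000)
t = np.linspace(-2.0, 2.0, 41)
emp = empirical_cf(x, t)

def cf_distance(sigma):
    """Discretised integrated squared CF distance, mean mu fixed at 0."""
    return np.mean(np.abs(emp - gaussian_cf(t, 0.0, sigma)) ** 2)

sigma_grid = np.linspace(0.5, 4.0, 71)
sigma_hat = min(sigma_grid, key=cf_distance)
```

The grid search stands in for the numerical optimization whose cost, in higher CF dimensions, is exactly the practical bottleneck discussed above.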
Abstract:
Whole-body counting is a technique of choice for assessing the intake of gamma-emitting radionuclides. An appropriate calibration is necessary, which is done either by experimental measurement or by Monte Carlo (MC) calculation. The aim of this work was to validate an MC model for calibrating whole-body counters (WBCs) by comparing the results of computations with measurements performed on an anthropomorphic phantom, and to investigate the effect of a change in the phantom's position on the WBC counting sensitivity. The GEANT MC code was used for the calculations, and an IGOR phantom loaded with several types of radionuclides was used for the experimental measurements. The results show reasonable agreement between measurements and MC computation. A 1-cm error in phantom positioning changes the activity estimation by >2%. Considering that a 5-cm deviation in the positioning of the phantom may occur in a realistic counting scenario, this implies that the uncertainty of the activity measured by a WBC is ∼10-20%.
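The sensitivity of counting efficiency to positioning can be illustrated with a simple inverse-square sketch for a point source; the 50-cm detector distance below is a hypothetical geometry, not the counter studied, and real WBC geometries (extended sources, collimation) deviate from the pure point-source law.

```python
def efficiency_change(distance_cm, shift_cm):
    """Relative change in point-source counting efficiency when the source
    moves `shift_cm` farther from the detector (inverse-square law)."""
    return (distance_cm / (distance_cm + shift_cm)) ** 2 - 1.0

# Hypothetical detector-to-phantom distance of 50 cm.
change_1cm = efficiency_change(50.0, 1.0)   # roughly -4%
change_5cm = efficiency_change(50.0, 5.0)   # roughly -17%
```

With these illustrative numbers, a 1-cm shift changes the efficiency by a few percent and a 5-cm shift by well over 10%, the same order as the uncertainties quoted in the abstract.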
Abstract:
BACKGROUND: Physician training in smoking cessation counseling has been shown to be effective as a means to increase quit success. We assessed the cost-effectiveness ratio of a smoking cessation counseling training programme. Its effectiveness was previously demonstrated in a cluster-randomized, controlled trial performed in two Swiss university outpatient clinics, in which residents were randomized to receive training in smoking interventions or a control educational intervention. DESIGN AND METHODS: We used a Markov simulation model for the effectiveness analysis. This model incorporates the intervention efficacy, the natural quit rate, and the lifetime probability of relapse after 1-year abstinence. We used previously published results in addition to hospital service and outpatient clinic cost data. The time horizon was 1 year, and we opted for a third-party payer perspective. RESULTS: The incremental cost of the intervention amounted to US$2.58 per consultation by a smoker, translating into a cost per life-year saved of US$25.4 for men and US$35.2 for women. One-way sensitivity analyses yielded a range of US$4.0-107.1 in men and US$9.7-148.6 in women. Variations in the quit rate of the control intervention, the length of training effectiveness, and the discount rate had moderately large effects on the outcome. Variations in the natural cessation rate, the lifetime probability of relapse, the cost of physician training, the counseling time, the cost per hour of physician time, and the cost of the booklets had little effect on the cost-effectiveness ratio. CONCLUSIONS: Training residents in smoking cessation counseling is a very cost-effective intervention and may be more efficient than currently accepted tobacco control interventions.
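The headline ratio is simple arithmetic once the Markov model has produced the life-years gained per counselled smoker. In the back-of-envelope sketch below, the 2.6-percentage-point quit-rate gain and the 3.9 life-years per quitter are invented round numbers chosen only to show the calculation; they are not the study's Markov inputs.

```python
def cost_per_life_year(incremental_cost, quit_rate_gain, life_years_per_quitter):
    """Incremental cost-effectiveness ratio: extra cost per counselled smoker
    divided by the expected life-years gained per counselled smoker."""
    return incremental_cost / (quit_rate_gain * life_years_per_quitter)

# US$2.58 incremental cost per consultation is taken from the abstract; the
# 0.026 quit-rate gain and 3.9 discounted life-years per quitter are
# hypothetical placeholders that happen to land near the reported US$25.4.
icer = cost_per_life_year(2.58, 0.026, 3.9)
```

The sensitivity ranges in the abstract correspond to varying exactly these denominator quantities, which is why the quit-rate and relapse assumptions dominate the result.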
Abstract:
Patients with rectal cancer are at high risk of disease recurrence despite neoadjuvant radiochemotherapy with 5-Fluorouracil (5FU), a regimen that is now widely applied. In order to develop a regimen with increased antitumour activity, we previously established the recommended dose of neoadjuvant CPT-11 (three times weekly 90 mg m(-2)) concomitant to hyperfractionated accelerated radiotherapy (HART) followed by surgery within 1 week. Thirty-three patients (20 men) with a locally advanced adenocarcinoma of the rectum were enrolled in this prospective phase II trial (1 cT2, 29 cT3, 3 cT4 and 21 cN+). Median age was 60 years (range 43-75 years). All patients received all three injections of CPT-11, and all but two patients completed radiotherapy as planned. Surgery with total mesorectal excision (TME) was performed within 1 week (range 2-15 days). The preoperative chemoradiotherapy was overall well tolerated; 24% of the patients experienced grade 3 diarrhoea, which was easily manageable. At a median follow-up of 2 years, no local recurrence occurred; however, nine patients developed distant metastases. The 2-year disease-free survival was 66% (95% confidence interval 0.48-0.83). Neoadjuvant CPT-11 and HART allow for excellent local control; however, distant relapse remains a concern in this patient population.
Abstract:
The identification of CTL-defined tumor-associated Ags has allowed the development of new strategies for cancer immunotherapy. To potentiate CTL responses, peptide-based vaccines require the coadministration of adjuvants. Because oligodeoxynucleotides (ODN) containing CpG motifs are strong immunostimulators, we analyzed the ability of CpG ODN to act as an adjuvant of the CTL response against a tumor-derived synthetic peptide in the absence or presence of IFA. Mice transgenic for a chimeric MHC class I molecule were immunized with a peptide analog of MART-1/Melan-A(26-35) in the presence of CpG ODN alone or CpG ODN emulsified in IFA. The CTL response was monitored ex vivo by tetramer staining of lymphocytes. In blood, spleen, and lymph nodes, peptide mixed with CpG ODN alone elicited a stronger systemic CTL response than peptide emulsified in IFA. Moreover, CpG ODN in combination with IFA further enhanced the CTL response in terms of the frequency of tetramer+CD8+ T cells ex vivo. The CTL induced in vivo against the peptide analog in the presence of CpG ODN are functional, as they were able to recognize and kill melanoma cells in vitro. Overall, these results indicate that CpG ODN by itself is a good candidate adjuvant for the CTL response and can also enhance the effect of a classical adjuvant.
Abstract:
BACKGROUND: Silicone breast implants are used to a wide extent in the field of plastic surgery. However, capsular contracture remains a considerable concern. This study aimed to analyze the effectiveness and applicability of an ultracision knife for capsulectomy breast surgery. METHODS: A prospective, single-center, randomized study was performed in 2009. The inclusion criteria specified female patients 20-80 years of age with capsular contracture (Baker 3-4). Ventral capsulectomy was performed using an ultracision knife on one side and conventional Metzenbaum-type scissors and a surgical knife on the contralateral side of the breast. Measurements of the resected capsular ventral fragment, operative time, remaining breast tissue, drainage time, seroma and hematoma formation, visual analog scale pain score, and sensory function of the nipple-areola complex were assessed. In addition, histologic analysis of the resected capsule was performed. RESULTS: Five patients (median age, 59.2 years) were included in this study with a mean follow-up period of 6 months. Three patients had Baker grade 3 capsular contracture, and two patients had Baker grade 4 capsular contracture. The ultracision knife was associated with a significantly lower pain score, shorter operative time, smaller drainage volume, and shorter drainage time, and resulted in a larger amount of remaining breast tissue. Histologic analysis of the resected capsule showed no apoptotic cells in the study group or control group. CONCLUSIONS: The results suggest that ventral capsulectomy with Baker grade 3 or 4 contracture using the ultracision knife is feasible, safe, and more efficient than blunt dissection and monopolar cutting diathermy, and has a short learning curve. LEVEL OF EVIDENCE: II.
Abstract:
Flexible intramedullary nailing (FIN) is the gold standard treatment for femur fracture in school-aged children. It has been performed successfully in younger children, although spica cast immobilisation (SCI) has been the most widely used strategy to date. METHODS: A retrospective analysis was performed between two comparable groups of children aged 1-4 years with a femoral shaft fracture. Two university hospitals, each using specific treatment guidelines, participated in the study: SCI in Group I (Basel, Switzerland) and FIN in Group II (Lausanne, Switzerland). RESULTS: Group I included 19 children with a median age of 26 months (range 12-46 months). Median hospital stay was 1 day (range 0-5 days) and casts were retained for a median duration of 21 days (range 12-29 days). General anaesthesia was used in six children and sedation in four. Skin breakdown secondary to cast irritation occurred in two children (10.5%). The median follow-up was 114 months (range 37-171 months). No significant malunion was noted. Group II included 27 children with a median age of 38.4 months (range 18.7-46.7 months). Median hospital stay was 4 days (range 1-13 days). All children required general anaesthesia for insertion and removal of the nails. Free mobilisation and full weight bearing were allowed at a median of 2 days (range 1-10 days) and 7 days (range 1-30 days), respectively, postoperatively. Nail exteriorisation was noted in three children (11%). The median follow-up was 16.5 months (range 8-172 months). No significant malunion was reported. CONCLUSIONS: Young children with a femoral shaft fracture treated by SCI or FIN had similarly favourable outcomes and complication rates. FIN allowed earlier mobilisation and full weight bearing. Compared to SCI, a greater number of children required general anaesthesia.
In a pre-school child with a femoral shaft fracture, immediate SCI applied by a paediatric orthopaedic team following specific guidelines allowed early discharge from hospital with few complications.
Abstract:
We study the dynamics of a water-oil meniscus moving from a smaller to a larger pore. The process is characterised by an abrupt change in the configuration, yielding a sudden energy release. A theoretical study for static conditions provides analytical solutions for the surface energy content of the system. Although the configuration after the sudden energy release is energetically more favourable, an energy barrier must be overcome before the process can happen spontaneously. The energy barrier depends on the system geometry and on the flow parameters. The analytical results are compared to numerical simulations that solve the full Navier-Stokes equations in the pore space and employ the Volume Of Fluid (VOF) method to track the evolution of the interface. First, the numerical simulations of a quasi-static process are validated by comparison with the analytical solutions for a static meniscus; then numerical simulations with varying injection velocity are used to investigate dynamic effects on the configuration change. During the sudden energy jump the system exhibits an oscillatory behaviour. Extension to more complex geometries might elucidate the mechanisms leading to a dynamic capillary pressure and to bifurcations in the final distributions of fluid phases in porous media.
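The static side of such an analysis rests on Young-Laplace capillary pressures. A minimal sketch with illustrative values (the interfacial tension, pore radii, and swept volume below are assumptions for the example, not the paper's geometry):

```python
import math

def capillary_pressure(sigma, radius, theta_deg=0.0):
    """Young-Laplace pressure jump across a spherical meniscus of the
    given radius, with contact angle theta (degrees)."""
    return 2.0 * sigma * math.cos(math.radians(theta_deg)) / radius

def energy_release(sigma, r_small, r_large, swept_volume):
    """Rough estimate of the energy released when the meniscus jumps from a
    narrow throat (radius r_small) into a wider pore body (radius r_large):
    the capillary-pressure drop times the volume swept during the jump."""
    dp = capillary_pressure(sigma, r_small) - capillary_pressure(sigma, r_large)
    return dp * swept_volume

# Illustrative values: water-oil interfacial tension 0.03 N/m,
# 10-um throat, 50-um pore body, 1e-12 m^3 swept volume.
p_throat = capillary_pressure(0.03, 10e-6)
p_body = capillary_pressure(0.03, 50e-6)
released = energy_release(0.03, 10e-6, 50e-6, 1e-12)
```

The large pressure contrast between throat and body is what drives the abrupt jump; the energy barrier of the abstract comes from the intermediate interface shapes, which this back-of-envelope estimate does not resolve.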
Abstract:
PPARalpha and PPARbeta are expressed in the mouse epidermis during fetal development, but their expression progressively disappears after birth. However, the expression of PPARbeta is reactivated in adult mice upon proliferative stimuli, such as cutaneous injury. We show here that PPARbeta protects keratinocytes from growth factor deprivation, anoikis and TNF-alpha-induced apoptosis, by modulating both early and late apoptotic events via the Akt1 signaling pathway and DNA fragmentation, respectively. The control mechanisms involve direct transcriptional upregulation of ILK, PDK1, and ICAD-L. In accordance with the anti-apoptotic role of PPARbeta observed in vitro, the balance between proliferation and apoptosis is altered in the epidermis of wounded PPARbeta mutant mice, with increased keratinocyte proliferation and apoptosis. In addition, primary keratinocytes deleted for PPARbeta show defects in both cell-matrix and cell-cell contacts, and impaired cell migration. Together, these results suggest that the delayed wound closure observed in PPARbeta mutant mice involves the alteration of several key processes. Finally, comparison of PPARbeta and Akt1 knock-out mice reveals many similarities, and suggests that the ability of PPARbeta to modulate the Akt1 pathway has significant impact during skin wound healing.
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches to the regional scale represents a major, and as-of-yet largely unresolved, challenge. To address this problem, we have developed a downscaling procedure based on a non-linear Bayesian sequential simulation approach. The basic objective of this algorithm is to estimate the value of the sparsely sampled hydraulic conductivity at non-sampled locations based on its relation to the electrical conductivity, which is available throughout the model space. The in situ relationship between the hydraulic and electrical conductivities is described through a non-parametric multivariate kernel density function. This method is then applied to the stochastic integration of low-resolution, regional-scale electrical resistivity tomography (ERT) data in combination with high-resolution, local-scale downhole measurements of the hydraulic and electrical conductivities. Finally, the overall viability of this downscaling approach is tested and verified by performing and comparing flow and transport simulations through the original and the downscaled hydraulic conductivity fields. Our results indicate that the proposed procedure does indeed yield remarkably faithful estimates of the regional-scale hydraulic conductivity structure and correspondingly reliable predictions of the transport characteristics over relatively long distances.
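The regression step at the heart of such a procedure, estimating hydraulic conductivity at unsampled locations from colocated electrical conductivity via a kernel density, can be approximated by a Nadaraya-Watson kernel estimate. The sketch below is a simplified one-dimensional stand-in with a synthetic linear log-K/EC relation and a Gaussian kernel; it is not the Bayesian sequential simulation itself, which additionally honors spatial correlation and draws stochastic realizations.

```python
import numpy as np

rng = np.random.default_rng(1)

def nw_estimate(x_train, y_train, x_query, bandwidth):
    """Nadaraya-Watson kernel regression with a Gaussian kernel: a stand-in
    for estimating E[log K | EC] from a kernel density of colocated samples."""
    w = np.exp(-0.5 * ((x_query[:, None] - x_train[None, :]) / bandwidth) ** 2)
    return (w @ y_train) / w.sum(axis=1)

# Synthetic colocated borehole data: log hydraulic conductivity related to
# electrical conductivity (EC) by an assumed linear trend plus noise.
ec = rng.uniform(0.0, 1.0, 300)
log_k = -12.0 + 3.0 * ec + 0.1 * rng.normal(size=300)

# Predict log K where only EC is known (e.g., from regional-scale ERT).
log_k_hat = nw_estimate(ec, log_k, np.array([0.5]), bandwidth=0.1)
```

Because the estimator is non-parametric, it can track non-linear and noisy petrophysical relations, which is the motivation for using a kernel density rather than a fixed regression law in the downscaling procedure.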