936 results for "errors-in-variables model"
Abstract:
BACKGROUND: Assessment of lung volume (functional residual capacity, FRC) and ventilation inhomogeneities with an ultrasonic flowmeter and multiple breath washout (MBW) has been used to provide important information about lung disease in infants. Sub-optimal adjustment of the mainstream molar mass (MM) signal for temperature and external deadspace may lead to analysis errors in infants with critically small tidal volume changes during breathing. METHODS: We measured expiratory temperature in human infants at 5 weeks of age and examined the influence of temperature and deadspace changes on FRC results with computer simulation modeling. A new analysis method with optimized temperature and deadspace settings was then derived, tested for robustness to analysis errors and compared with the previously used analysis methods. RESULTS: Temperature in the facemask was higher, and variations of deadspace volumes larger, than previously assumed. Both had a considerable impact on FRC and lung clearance index (LCI) results, with high variability when obtained with the previously used analysis model. Using the measured temperature, we optimized the model parameters and tested a newly derived analysis method, which was found to be more robust to variations in deadspace. Comparison between both analysis methods showed systematic differences and a wide scatter. CONCLUSION: Corrected deadspace and more realistic temperature assumptions improved the stability of the analysis of MM measurements obtained by ultrasonic flowmeter in infants. This new analysis method, using the only currently available commercial ultrasonic flowmeter for infants, may help to improve the stability of the analysis and further facilitate assessment of lung volume and ventilation inhomogeneities in infants.
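For orientation, FRC and LCI are commonly derived from MBW recordings roughly as sketched below; the temperature and deadspace corrections of the molar-mass signal that this study optimizes happen upstream of this step. The 1/40 end-point is the conventional LCI definition, but the function and variable names are illustrative assumptions, not the study's software.

```python
import numpy as np

def frc_and_lci(net_tracer_exp_ml, et_tracer_conc, cumulative_exp_ml):
    """Illustrative FRC / LCI calculation from multiple-breath washout data.

    net_tracer_exp_ml : net expired tracer-gas volume per breath (mL)
    et_tracer_conc    : end-tidal tracer concentration per breath (fraction)
    cumulative_exp_ml : cumulative expired volume at each breath (mL)
    """
    et = np.asarray(et_tracer_conc, dtype=float)
    c_start, c_end = et[0], et[-1]
    # FRC: total tracer gas washed out divided by the end-tidal concentration drop
    frc_ml = np.sum(net_tracer_exp_ml) / (c_start - c_end)
    # LCI: cumulative expired volume at the breath where the end-tidal
    # concentration first falls to 1/40 of its start value, in FRC turnovers
    idx = int(np.argmax(et <= c_start / 40.0))
    lci = np.asarray(cumulative_exp_ml, dtype=float)[idx] / frc_ml
    return frc_ml, lci
```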
Abstract:
We introduce a diagnostic test for the mixing distribution in a generalised linear mixed model. The test is based on the difference between the marginal maximum likelihood and conditional maximum likelihood estimates of a subset of the fixed effects in the model. We derive the asymptotic variance of this difference, and propose a test statistic that has a limiting chi-square distribution under the null hypothesis that the mixing distribution is correctly specified. For the important special case of the logistic regression model with random intercepts, we evaluate via simulation the power of the test in finite samples under several alternative distributional forms for the mixing distribution. We illustrate the method by applying it to data from a clinical trial investigating the effects of hormonal contraceptives in women.
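As a rough illustration of the mechanics, the following sketch computes a Hausman-type quadratic-form statistic and its chi-square p-value from the two estimates of the shared fixed-effect subset. The asymptotic covariance of the difference is derived analytically in the paper; here it is simply assumed to be supplied, and all names and numbers are illustrative.

```python
import numpy as np
from scipy.stats import chi2

def mixing_distribution_test(beta_marginal, beta_conditional, var_diff):
    """Quadratic form in the difference between the marginal-ML and
    conditional-ML estimates of a shared fixed-effect subset.

    var_diff is the (k x k) asymptotic covariance of the difference,
    assumed here to be given (the paper derives it analytically).
    """
    d = np.asarray(beta_marginal, float) - np.asarray(beta_conditional, float)
    stat = float(d @ np.linalg.solve(var_diff, d))   # chi-square on k d.o.f.
    p_value = chi2.sf(stat, df=len(d))
    return stat, p_value

# Illustrative call with made-up numbers
stat, p = mixing_distribution_test([0.52, -1.10], [0.48, -1.02],
                                   [[0.004, 0.0], [0.0, 0.006]])
```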
Abstract:
Medical errors originating in health care facilities are a significant source of preventable morbidity, mortality, and healthcare costs. Voluntary error report systems that collect information on the causes and contributing factors of medical errors regardless of the resulting harm may be useful for developing effective harm prevention strategies. Some patient safety experts question the utility of data from errors that did not lead to harm to the patient, also called near misses. A near miss (a.k.a. close call) is an unplanned event that did not result in injury to the patient. Only a fortunate break in the chain of events prevented injury. We use data from a large voluntary reporting system of 836,174 medication errors from 1999 to 2005 to provide evidence that the causes and contributing factors of errors that result in harm are similar to the causes and contributing factors of near misses. We develop Bayesian hierarchical models for estimating the log odds of selecting a given cause (or contributing factor) of error given harm has occurred and the log odds of selecting the same cause given that harm did not occur. The posterior distribution of the correlation between these two vectors of log-odds is used as a measure of the evidence supporting the use of data from near misses and their causes and contributing factors to prevent medical errors. In addition, we identify the causes and contributing factors that have the highest or lowest log-odds ratio of harm versus no harm. These causes and contributing factors should also be a focus in the design of prevention strategies. This paper provides important evidence on the utility of data from near misses, which constitute the vast majority of errors in our data.
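The target quantity here is the correlation between cause-specific log odds under harm and under no harm. The sketch below computes a simple plug-in (non-Bayesian) analogue of that correlation from raw report counts; it is only meant to make the quantity concrete, and the continuity correction and names are assumptions rather than the paper's hierarchical model.

```python
import numpy as np

def log_odds_correlation(counts_harm, counts_no_harm):
    """Plug-in analogue of the paper's target quantity: correlation between
    the vectors of cause-selection log odds for harmful errors and for near
    misses. counts_* hold report counts per cause/contributing factor.
    (The paper estimates this with Bayesian hierarchical models and reports
    the posterior of the correlation; this empirical version is illustrative.)
    """
    def log_odds(counts):
        counts = np.asarray(counts, dtype=float)
        # 0.5 continuity correction so zero counts do not break the log odds
        p = (counts + 0.5) / (counts.sum() + 0.5 * counts.size)
        return np.log(p / (1.0 - p))

    return np.corrcoef(log_odds(counts_harm), log_odds(counts_no_harm))[0, 1]
```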
Abstract:
Heart rate variability (HRV) exhibits fluctuations characterized by a power law behavior of its power spectrum. The interpretation of this nonlinear HRV behavior, resulting from interactions between extracardiac regulatory mechanisms, could be clinically useful. However, the involvement of intrinsic variations of pacemaker rate in HRV has scarcely been investigated. We examined beating variability in spontaneously active incubating cultures of neonatal rat ventricular myocytes using microelectrode arrays. In networks of mathematical model pacemaker cells, we evaluated the variability induced by the stochastic gating of transmembrane currents and of calcium release channels and by the dynamic turnover of ion channels. In the cultures, spontaneous activity originated from a mobile focus. Both the beat-to-beat movement of the focus and beat rate variability exhibited a power law behavior. In the model networks, stochastic fluctuations in transmembrane currents and stochastic gating of calcium release channels did not reproduce the spatiotemporal patterns observed in vitro. In contrast, long-term correlations produced by the turnover of ion channels induced variability patterns with a power law behavior similar to those observed experimentally. Therefore, phenomena leading to long-term correlated variations in pacemaker cellular function may, in conjunction with extracardiac regulatory mechanisms, contribute to the nonlinear characteristics of HRV.
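The power-law behaviour referred to here is usually summarised by the exponent of the power spectrum on log-log axes. A minimal sketch of that estimation step, assuming an evenly sampled beat-interval (or beat-rate) series; the surrogate data and parameter choices are illustrative only.

```python
import numpy as np
from scipy.signal import welch

def spectral_exponent(series, fs=1.0):
    """Estimate the power-law exponent beta from a straight-line fit to the
    power spectrum on log-log axes, i.e. S(f) ~ f**(-beta). Illustrative only.
    """
    f, pxx = welch(series - np.mean(series), fs=fs,
                   nperseg=min(256, len(series)))
    mask = f > 0                       # drop the zero-frequency bin before log
    slope, _ = np.polyfit(np.log10(f[mask]), np.log10(pxx[mask]), 1)
    return -slope                      # beta

# Example: a Brownian-noise surrogate series (expected beta close to 2)
rng = np.random.default_rng(0)
beta_hat = spectral_exponent(np.cumsum(rng.standard_normal(4096)))
```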
Abstract:
The antibacterial activities of amoxicillin-gentamicin, trovafloxacin, trimethoprim-sulfamethoxazole (TMP-SMX) and the combination of trovafloxacin with TMP-SMX were compared in a model of meningoencephalitis due to Listeria monocytogenes in infant rats. At 22 h after intracisternal infection, the cerebrospinal fluid was cultured to document meningitis, and the treatment was started. Treatment was instituted for 48 h, and efficacy was evaluated 24 h after administration of the last dose. All tested treatment regimens exhibited significant activities in brain, liver, and blood compared to infected rats receiving saline (P < 0.001). In the brain, amoxicillin plus gentamicin was more active than all of the other regimens, and trovafloxacin was more active than TMP-SMX (bacterial titers of 4.1 +/- 0.5 log10 CFU/ml for amoxicillin-gentamicin, 5.0 +/- 0.4 log10 CFU/ml for trovafloxacin, and 5.8 +/- 0.5 log10 CFU/ml for TMP-SMX; P < 0.05). In liver, amoxicillin-gentamicin and trovafloxacin were similarly active (2.8 +/- 0.8 and 2.7 +/- 0.8 log10 CFU/ml, respectively) but more active than TMP-SMX (4.4 +/- 0.6 log10 CFU/ml; P < 0.05). The combination of trovafloxacin with TMP-SMX did not alter the antibacterial effect in the brain, but it did reduce the effect of trovafloxacin in the liver. Amoxicillin-gentamicin was the most active therapy in this study, but the activity of trovafloxacin suggests that further studies with this drug for the treatment of Listeria infections may be warranted.
Abstract:
Metals price risk management is a key issue related to financial risk in metal markets because of the uncertainty of commodity price fluctuations, exchange rates and interest rate changes, and the huge price risk borne by both metals' producers and consumers. Thus, it has been taken into account by all participants in metal markets, including metals' producers, consumers, merchants, banks, investment funds, speculators and traders. Managing price risk provides stable income for both metals' producers and consumers, so it increases the chance that a firm will invest in attractive projects. The purpose of this research is to evaluate risk management strategies in the copper market. The main tools and strategies of price risk management are hedging and other derivatives such as futures contracts, swaps and options contracts. Hedging is a transaction designed to reduce or eliminate price risk. Derivatives are financial instruments whose returns are derived from other financial instruments, and they are commonly used for managing financial risks. Although derivatives have been around in some form for centuries, their growth has accelerated rapidly during the last 20 years. Nowadays, they are widely used by financial institutions, corporations, professional investors, and individuals. This project is focused on the over-the-counter (OTC) market and its products such as exotic options, particularly Asian options. The first part of the project is a description of basic derivatives and risk management strategies. In addition, this part discusses basic concepts of spot and futures (forward) markets, benefits and costs of risk management, and risks and rewards of positions in the derivative markets. The second part considers the valuation of commodity derivatives. In this part, the options pricing model DerivaGem is applied to Asian call and put options on London Metal Exchange (LME) copper, because it is important to understand how Asian options are valued and to compare theoretical values of the options with their observed market values. Predicting future trends of copper prices is important and would be essential to manage market price risk successfully. Therefore, the third part is a discussion of econometric commodity models. Based on this literature review, the fourth part of the project reports the construction and testing of an econometric model designed to forecast the monthly average price of copper on the LME. More specifically, this part aims to show how LME copper prices can be explained by means of a simultaneous-equation structural model (two-stage least squares regression) connecting supply and demand variables. A simultaneous econometric model for the copper industry is built:

Q_t^D = e^{-5.0485} \cdot P_{t-1}^{-0.1868} \cdot GDP_t^{1.7151} \cdot e^{0.0158 \, IP_t}
Q_t^S = e^{-3.0785} \cdot P_{t-1}^{0.5960} \cdot T_t^{0.1408} \cdot P_{OIL,t}^{-0.1559} \cdot USDI_t^{1.2432} \cdot LIBOR_{t-6}^{-0.0561}
Q_t^D = Q_t^S

Solving the system for the price yields the reduced-form equation

P_{t-1}^{CU} = e^{-2.5165} \cdot GDP_t^{2.1910} \cdot e^{0.0202 \, IP_t} \cdot T_t^{-0.1799} \cdot P_{OIL,t}^{0.1991} \cdot USDI_t^{-1.5881} \cdot LIBOR_{t-6}^{0.0717}

where Q_t^D and Q_t^S are the world demand for and supply of copper at time t, respectively. P_{t-1} is the lagged price of copper, which is the focus of the analysis in this part. GDP_t is world gross domestic product at time t, which represents aggregate economic activity. In addition, industrial production should be considered here, so global industrial production growth, denoted IP_t, is included in the model.
T_t is the time variable, which is a useful proxy for technological change. A proxy variable for the cost of energy in producing copper is the price of oil at time t, denoted P_OIL(t). USDI_t is the U.S. dollar index at time t, which is an important variable for explaining copper supply and copper prices. Finally, LIBOR_(t-6) is the 1-year London Interbank Offered Rate, lagged by 6 months. Although the model can be applied to other base metals' industries, omitted exogenous variables, such as the price of a substitute or a combined variable related to the prices of substitutes, have not been considered in this study. Based on this econometric model and using a Monte Carlo simulation analysis, the probabilities that the monthly average copper prices in 2006 and 2007 will be greater than specific option strike prices are estimated. The final part evaluates risk management strategies, including options strategies, metal swaps and simple options, in relation to the simulation results. The basic options strategies, such as bull spreads, bear spreads and butterfly spreads, created by using both call and put options in 2006 and 2007, are evaluated. Each risk management strategy in 2006 and 2007 is then analyzed on the basis of the available data and the price prediction model. As a result, applications stemming from this project include valuing Asian options, developing a copper price prediction model, forecasting and planning, and decision making for price risk management in the copper market.
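As a rough illustration of the Monte Carlo step described above, the sketch below propagates assumed distributions of the explanatory variables through the reduced-form price equation (using the coefficients quoted in the abstract) and estimates the probability that the resulting price exceeds a strike. The input distributions, units and strike level are illustrative assumptions, not the values calibrated in the thesis.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Illustrative distributions for the explanatory variables (assumptions only).
gdp   = rng.normal(1.00, 0.02, n)    # world GDP index
ip    = rng.normal(3.5, 1.0, n)      # industrial production growth, %
t     = np.full(n, 40.0)             # time/technology trend
p_oil = rng.normal(65.0, 10.0, n)    # oil price, USD/bbl
usdi  = rng.normal(0.90, 0.03, n)    # U.S. dollar index
libor = rng.normal(5.0, 0.5, n)      # 1-year LIBOR, lagged 6 months, %

# Reduced-form copper price equation with the coefficients quoted above.
price = (np.exp(-2.5165) * gdp**2.1910 * np.exp(0.0202 * ip)
         * t**-0.1799 * p_oil**0.1991 * usdi**-1.5881 * libor**0.0717)

strike = 1.0  # strike in the same (scaled) units as the fitted price series
prob_above_strike = np.mean(price > strike)
```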
Abstract:
Simulations of forest stand dynamics in a modelling framework including the Forest Vegetation Simulator (FVS) are diameter driven; thus, the diameter or basal area increment model needs special attention. This dissertation critically evaluates diameter or basal area increment models and modelling approaches in the context of the Great Lakes region of the United States and Canada. A set of related studies is presented that critically evaluates the sub-model for change in individual tree basal diameter used in the Forest Vegetation Simulator (FVS), a dominant forestry model in the Great Lakes region. Various historical implementations of the STEMS (Stand and Tree Evaluation and Modeling System) family of diameter increment models, including the current public release of the Lake States variant of FVS (LS-FVS), were tested for the 30 most common tree species using data from the Michigan Forest Inventory and Analysis (FIA) program. The results showed that the current public release of the LS-FVS diameter increment model over-predicts 10-year diameter increment by 17% on average. The study also affirms that a simple adjustment factor expressed as a function of a single predictor, dbh (diameter at breast height), as used in past versions, provides an inadequate correction of model prediction bias. In order to re-engineer the basal diameter increment model, the historical, conceptual and philosophical differences among the individual tree increment model families and their modelling approaches were analyzed and discussed. Two underlying conceptual approaches toward diameter or basal area increment modelling have often been used: the potential-modifier (POTMOD) and composite (COMP) approaches, which are exemplified by the STEMS/TWIGS and Prognosis models, respectively. It is argued that both approaches essentially use a similar base function and neither is conceptually different from a biological perspective, even though they look different in their model forms. No matter what modelling approach is used, the base function is the foundation of an increment model. Two base functions – gamma and Box-Lucas – were identified as candidate base functions for forestry applications. The results of a comparative analysis of empirical fits showed that the quality of fit is essentially similar, and both are sufficiently detailed and flexible for forestry applications. The choice of base function for modelling diameter or basal area increment is largely a matter of preference; however, the gamma base function may be preferred over the Box-Lucas, as it fits the periodic increment data in both linear and nonlinear composite model forms. Finally, the utility of site index as a predictor variable has been criticized, as it has been widely used in models for complex, mixed-species forest stands even though it is not well suited for this purpose. An alternative to site index in an increment model was explored by comparing site index with a combination of climate variables and Forest Ecosystem Classification (FEC) ecosites, using data from the Province of Ontario, Canada. The results showed that a combination of climate and FEC ecosite variables can replace site index in the diameter increment model.
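For readers unfamiliar with the base functions mentioned above, the sketch below fits one common gamma-type increment form, Δd = b1 · dbh^b2 · exp(−b3 · dbh), to synthetic periodic-increment data with nonlinear least squares. The exact parameterizations compared in the dissertation may differ; this only shows how such a base function can be fitted, and the data and starting values are made up.

```python
import numpy as np
from scipy.optimize import curve_fit

def gamma_base(dbh, b1, b2, b3):
    """A gamma-type base function for periodic diameter increment: increment
    rises, peaks, and declines with dbh. The exact form used in the
    dissertation may differ; this is only an illustrative shape."""
    return b1 * dbh**b2 * np.exp(-b3 * dbh)

# Synthetic 10-year increment data (cm) for illustration only
rng = np.random.default_rng(1)
dbh = rng.uniform(5, 60, 300)
incr = gamma_base(dbh, 0.8, 0.9, 0.06) + rng.normal(0, 0.2, dbh.size)

params, cov = curve_fit(gamma_base, dbh, incr, p0=[1.0, 1.0, 0.05])
```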
Abstract:
BACKGROUND: Reperfusion injury is insufficiently addressed in current clinical management of acute limb ischemia. Controlled reperfusion carries an enormous clinical potential and was tested in a new reality-driven rodent model. METHODS AND RESULTS: Acute hind-limb ischemia was induced in Wistar rats and maintained for 4 hours. Unlike previous tourniquet models, femoral vessels were surgically prepared to facilitate controlled reperfusion and to prevent venous stasis. Rats were randomized into an experimental group (n=7), in which limbs were selectively perfused with a cooled isotonic heparin solution at a limited flow rate before blood flow was restored, and a conventional group (n=7; uncontrolled blood reperfusion). Rats were killed 4 hours after blood reperfusion. Nonischemic limbs served as controls. Ischemia/reperfusion injury was significant in both groups; total wet-to-dry ratio was 159+/-44% of normal (P=0.016), whereas muscle viability and contraction force were reduced to 65+/-13% (P=0.016) and 45+/-34% (P=0.045), respectively. Controlled reperfusion, however, attenuated reperfusion injury significantly. Tissue edema was less pronounced (132+/-16% versus 185+/-42%; P=0.011) and muscle viability (74+/-11% versus 57+/-9%; P=0.004) and contraction force (68+/-40% versus 26+/-7%; P=0.045) were better preserved than after uncontrolled reperfusion. Moreover, subsequent blood circulation as assessed by laser Doppler recovered completely after controlled reperfusion but stayed durably impaired after uncontrolled reperfusion (P=0.027). CONCLUSIONS: Reperfusion injury was significantly alleviated by basic modifications of the initial reperfusion period in a new in vivo model of acute limb ischemia. With this model, systematic optimization of the corresponding reperfusion protocols may eventually translate into improved clinical management of acute limb ischemia.
Abstract:
Our goal was to validate accuracy, consistency, and reproducibility/reliability of a new method for determining cup orientation in total hip arthroplasty (THA). This method allows matching the 3D-model from CT images or slices with the projected pelvis on an anteroposterior pelvic radiograph using a fully automated registration procedure. Cup orientation (inclination and anteversion) is calculated relative to the anterior pelvic plane, corrected for individual malposition of the pelvis during radiograph acquisition. Measurements on blinded and randomized radiographs of 80 cadaver and 327 patient hips were investigated. The method showed a mean accuracy of 0.7 +/- 1.7 degrees (-3.7 degrees to 4.0 degrees) for inclination and 1.2 +/- 2.4 degrees (-5.3 degrees to 5.6 degrees) for anteversion in the cadaver trials and 1.7 +/- 1.7 degrees (-4.6 degrees to 5.5 degrees) for inclination and 0.9 +/- 2.8 degrees (-5.2 degrees to 5.7 degrees) for anteversion in the clinical data when compared to CT-based measurements. No systematic errors in accuracy were detected with the Bland-Altman analysis. The software consistency and the reproducibility/reliability were very good. This software is an accurate, consistent, reliable, and reproducible method to measure cup orientation in THA using a sophisticated 2D/3D-matching technique. Its robust and accurate matching algorithm can be expanded to statistical models.
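The Bland-Altman check for systematic error reported above boils down to the bias (mean difference) and 95% limits of agreement between the radiograph-based and CT-based angles. A minimal sketch of that computation, with hypothetical input arrays; the function and variable names are illustrative, not taken from the paper's software.

```python
import numpy as np

def bland_altman(radiograph_deg, ct_deg):
    """Bias and 95% limits of agreement between two measurement methods
    (e.g. radiograph-based vs CT-based cup anteversion, in degrees)."""
    a = np.asarray(radiograph_deg, dtype=float)
    b = np.asarray(ct_deg, dtype=float)
    diff = a - b
    bias = diff.mean()                    # systematic offset between methods
    loa = 1.96 * diff.std(ddof=1)         # half-width of the limits of agreement
    return bias, bias - loa, bias + loa

# Example with made-up anteversion angles (degrees)
bias, lower, upper = bland_altman([14.2, 19.8, 22.1, 11.5], [13.0, 20.5, 21.0, 12.8])
```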
Abstract:
This review of late-Holocene palaeoclimatology represents the results from a PAGES/CLIVAR Intersection Panel meeting that took place in June 2006. The review is in three parts: the principal high-resolution proxy disciplines (trees, corals, ice cores and documentary evidence), emphasizing current issues in their use for climate reconstruction; the various approaches that have been adopted to combine multiple climate proxy records to provide estimates of past annual-to-decadal timescale Northern Hemisphere surface temperatures and other climate variables, such as large-scale circulation indices; and the forcing histories used in climate model simulations of the past millennium. We discuss the need to develop a framework through which current and new approaches to interpreting these proxy data may be rigorously assessed using pseudo-proxies derived from climate model runs, where the 'answer' is known. The article concludes with a list of recommendations. First, more raw proxy data are required from the diverse disciplines and from more locations, as well as replication, for all proxy sources, of the basic raw measurements to improve absolute dating, and to better distinguish the proxy climate signal from noise. Second, more effort is required to improve the understanding of what individual proxies respond to, supported by more site measurements and process studies. These activities should also be mindful of the correlation structure of instrumental data, indicating which adjacent proxy records ought to be in agreement and which not. Third, large-scale climate reconstructions should be attempted using a wide variety of techniques, emphasizing those for which quantified errors can be estimated at specified timescales. Fourth, a greater use of climate model simulations is needed to guide the choice of reconstruction techniques (the pseudo-proxy concept) and possibly help determine where, given limited resources, future sampling should be concentrated.
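A minimal sketch of the pseudo-proxy idea discussed above: local series taken from a model simulation are degraded with noise, a simple reconstruction method is applied, and its skill is scored against the known model "answer". The series, noise levels and the composite-plus-scale method used here are illustrative assumptions, not the review's recommendations.

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, n_sites = 1000, 15

# "Truth": a large-scale mean temperature series from a model run (here a
# random surrogate), plus local series at hypothetical proxy sites.
truth = np.cumsum(rng.normal(0, 0.05, n_years))
local = truth[:, None] + rng.normal(0, 0.3, (n_years, n_sites))

# Pseudo-proxies: local series degraded with white noise (SNR is a choice).
pseudo = local + rng.normal(0, 0.5, local.shape)

# Simple composite-plus-scale reconstruction, calibrated on the last 150 years.
composite = pseudo.mean(axis=1)
cal = slice(n_years - 150, n_years)
scale = truth[cal].std() / composite[cal].std()
recon = (composite - composite[cal].mean()) * scale + truth[cal].mean()

# Skill of the reconstruction where the answer is known
rmse = np.sqrt(np.mean((recon - truth) ** 2))
```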
Abstract:
We analyze the impact of stratospheric volcanic aerosols on the diurnal temperature range (DTR) over Europe using long-term subdaily station records. We compare the results with a 28-member ensemble of European Centre/Hamburg version 5.4 (ECHAM5.4) general circulation model simulations. Eight stratospheric volcanic eruptions during the instrumental period are investigated. Seasonal all- and clear-sky DTR anomalies are compared with contemporary (approximately 20 year) reference periods. Clear sky is used to eliminate cloud effects and better estimate the signal from the direct radiative forcing of the volcanic aerosols. We do not find a consistent effect of stratospheric aerosols on all-sky DTR. For clear skies, we find average DTR anomalies of −0.08°C (−0.13°C) in the observations (in the model), with the largest effect in the second winter after the eruption. Although the clear-sky DTR anomalies from different stations, volcanic eruptions, and seasons show heterogeneous signals in terms of order of magnitude and sign, the significantly negative DTR anomalies (e.g., after the Tambora eruption) are qualitatively consistent with other studies. Relating the clear-sky DTR anomalies to the radiative forcing from stratospheric volcanic eruptions, we find the resulting sensitivity to be of the same order of magnitude as previously published estimates for tropospheric aerosols during the so-called “global dimming” period (i.e., 1950s to 1980s). Analyzing cloud cover changes after volcanic eruptions reveals an increase in clear-sky days in both data sets. Quantifying the impact of stratospheric volcanic eruptions on clear-sky DTR over Europe provides valuable information for the study of the radiative effect of stratospheric aerosols and for geo-engineering purposes.
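A minimal sketch of the anomaly calculation described above: daily DTR is computed from maximum and minimum temperature, restricted to clear-sky days, averaged by season, and referenced to a roughly 20-year surrounding period. Column names, the season definition and the reference window are assumptions for illustration, not the study's processing chain.

```python
import pandas as pd

def seasonal_clear_sky_dtr_anomaly(df, eruption_year, ref_years=20):
    """df: daily station data with columns 'date' (datetime), 'tmax', 'tmin',
    and 'clear' (boolean clear-sky flag); names are illustrative assumptions.
    Returns post-eruption seasonal DTR anomalies relative to a reference
    period of +/- ref_years/2 around the eruption year."""
    d = df.copy()
    d["dtr"] = d["tmax"] - d["tmin"]
    d = d[d["clear"]]                          # clear-sky days only
    d["year"] = d["date"].dt.year
    d["season"] = d["date"].dt.month % 12 // 3  # 0~DJF, 1=MAM, 2=JJA, 3=SON
    seasonal = d.groupby(["year", "season"])["dtr"].mean().unstack()
    ref = seasonal.loc[eruption_year - ref_years // 2:
                       eruption_year + ref_years // 2].mean()
    return seasonal.loc[eruption_year:eruption_year + 2] - ref
```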
Abstract:
BACKGROUND Curcumin (CUR) is a dietary spice and food colorant (E100). Its potent anti-inflammatory activity by inhibiting the activation of Nuclear Factor-kappaB is well established. METHODS The aim of this study was to compare natural purified CUR (nCUR) with synthetically manufactured CUR (sCUR) with respect to their capacity to inhibit detrimental effects in an in vitro model of oral mucositis. The hypothesis was that nCUR and sCUR are bioequivalent. RESULTS The purity of sCUR was HPLC-confirmed. Adherence and invasion assays for bacteria to human pharyngeal epithelial cells demonstrated equivalence of nCUR and sCUR. Standard assays also demonstrated an identical inhibitory effect on pro-inflammatory cytokine/chemokine secretion (e.g., interleukin-8, interleukin-6) by Detroit pharyngeal cells exposed to bacterial stimuli. There was bioequivalence of sCUR and nCUR with respect to their antibacterial effects against various pharyngeal species. CONCLUSION nCUR and sCUR are equipotent in in vitro assays mimicking aspects of oral mucositis. The advantages of sCUR include that it is odorless and tasteless, more easily soluble in DMSO, and that it is a single, highly purified molecule, lacking the batch-to-batch variation of CUR content in nCUR. sCUR is a promising candidate for the development of an oral anti-mucositis agent.
Papain-induced in vitro disc degeneration model for the study of injectable nucleus pulposus therapy
Abstract:
BACKGROUND CONTEXT Proteolytic enzyme digestion of the intervertebral disc (IVD) offers a method to simulate a condition of disc degeneration for the study of cell-scaffold constructs in the degenerated disc. PURPOSE To characterize an in vitro disc degeneration model (DDM) with different severities of glycosaminoglycan (GAG) and water loss induced by papain, and to determine the initial response of human mesenchymal stem cells (MSCs) introduced into this DDM. STUDY DESIGN A disc degeneration model of a bovine disc explant with an end plate was induced by the injection of papain at various concentrations. Labeled MSCs were later introduced into this model. METHODS Phosphate-buffered saline (PBS control) or papain in various concentrations (3, 15, 30, 60, and 150 U/mL) was injected into the bovine caudal IVD explants. Ten days after the injection, GAG content of the discs was evaluated by dimethylmethylene blue assay and cell viability was determined by live/dead staining together with confocal microscopy. Overall matrix composition was evaluated by histology, and water content was visualized by magnetic resonance imaging. Compressive and torsional stiffness of the DDM were also recorded. In the second part, MSCs were labeled with a fluorescent cell membrane tracker and injected into the nucleus of the DDM or a PBS control. Mesenchymal stem cell viability and distribution were evaluated by confocal microscopy. RESULTS A large drop in the GAG and water content of the bovine disc was obtained by injecting >30 U/mL papain. Magnetic resonance imaging showed Grade II, III, and IV disc degeneration after injection of 30, 60, and 150 U/mL papain, respectively. A cavity in the center of the disc could facilitate later injection of the nucleus pulposus tissue engineering construct while retaining an intact annulus fibrosus. The remaining disc cell viability was not affected. Mesenchymal stem cells injected into the protease-treated DDM disc showed significantly higher cell viability than those injected into the PBS-injected control disc. CONCLUSIONS By varying the concentration of papain for injection, an increasing amount of GAG and water loss could be induced to simulate the different severities of disc degeneration. MSC suspension introduced into the disc has a very low short-term survival. However, it should be clear that this bovine IVD DDM does not reflect a clinical situation but offers exciting possibilities to test novel tissue engineering protocols.
Abstract:
Recombinant human erythropoietin (EPO) has been successfully tested as a neuroprotectant in brain injury models. The first large clinical trial with stroke patients, however, revealed negative results. The reasons are manifold and may include side-effects such as thrombotic complications or interactions with other medication, EPO concentration, penetration of the blood-brain barrier and/or route of application. The latter is restricted to systemic application. Here we hypothesize that EPO is neuroprotective in a rat model of acute subdural hemorrhage (ASDH) and that direct cortical application is a feasible route of application in this injury type. The subdural hematoma was surgically evacuated and EPO was applied directly onto the surface of the brain. We injected NaCl, 200, 2000 or 20,000 IU EPO per rat i.v. at 15 min post-ASDH (400 μl autologous venous blood) or NaCl, 0.02, 0.2 or 2 IU per rat onto the cortical surface after removal of the subdurally infused blood at 70 min post-ASDH. Arterial blood pressure (MAP), blood chemistry, intracranial pressure (ICP), cerebral blood flow (CBF) and brain tissue oxygen (ptiO2) were assessed during the first hour, and lesion volume at 2 days after ASDH. EPO 20,000 IU/rat (i.v.) elevated ICP significantly. EPO at 200 and 2000 IU reduced lesion volume from 38.2±0.6 mm³ (NaCl-treated group) to 28.5±0.9 and 22.2±1.3 mm³ (all p<0.05 vs. NaCl). Cortical application of 0.02 IU EPO after ASDH evacuation reduced injury from 36.0±5.2 to 11.2±2.1 mm³ (p=0.007), whereas 0.2 IU had no effect (38.0±9.0 mm³). The highest dose of both application routes (i.v. 20,000 IU; cortical 2 IU) enlarged the ASDH-induced damage significantly to 46.5±1.7 and 67.9±10.4 mm³ (all p<0.05 vs. NaCl). In order to test whether Tween-20, a solvent of the EPO formulation 'NeoRecomon®', was responsible for adverse effects, two groups were treated with NaCl or Tween-20 after the evacuation of the ASDH, but no difference in lesion volume was detected. In conclusion, EPO is neuroprotective in a model of ASDH in rats and was most efficacious at a very low dose in combination with subdural blood removal. High systemic and topically applied concentrations caused adverse effects on lesion size, which were partially due to increased ICP. Thus, patients with traumatic ASDH could be treated with cortically applied EPO, but with caution concerning concentration.
Abstract:
Water-conducting faults and fractures were studied in the granite-hosted Äspö Hard Rock Laboratory (SE Sweden). On a scale of decametres and larger, steeply dipping faults dominate and contain a variety of different fault rocks (mylonites, cataclasites, fault gouges). On a smaller scale, somewhat less regular fracture patterns were found. Conceptual models of the fault and fracture geometries and of the properties of rock types adjacent to fractures were derived and used as input for the modelling of in situ dipole tracer tests that were conducted in the framework of the Tracer Retention Understanding Experiment (TRUE-1) on a scale of metres. After the identification of all relevant transport and retardation processes, blind predictions of the breakthroughs of conservative to moderately sorbing tracers were calculated and then compared with the experimental data. This paper provides the geological basis and model calibration, while the predictive and inverse modelling work is the topic of the companion paper [J. Contam. Hydrol. 61 (2003) 175]. The TRUE-1 experimental volume is highly fractured and contains the same types of fault rocks and alterations as on the decametric scale. The experimental flow field was modelled on the basis of a 2D-streamtube formalism with an underlying homogeneous and isotropic transmissivity field. Tracer transport was modelled using the dual porosity medium approach, which is linked to the flow model by the flow porosity. Given the substantial pumping rates in the extraction borehole, the transport domain has a maximum width of a few centimetres only. It is concluded that both the uncertainty with regard to the length of individual fractures and the detailed geometry of the network along the flowpath between injection and extraction boreholes are not critical because flow is largely one-dimensional, whether through a single fracture or a network. Process identification and model calibration were based on a single uranine breakthrough (test PDT3), which clearly showed that matrix diffusion had to be included in the model even over the short experimental time scales, evidenced by a characteristic shape of the trailing edge of the breakthrough curve. Using the geological information and therefore considering limited matrix diffusion into a thin fault gouge horizon resulted in a good fit to the experiment. On the other hand, fresh granite was found not to interact noticeably with the tracers over the time scales of the experiments. While fracture-filling gouge materials are very efficient in retarding tracers over short periods of time (hours–days), their volume is very small and, with time progressing, retardation will be dominated by altered wall rock and, finally, by fresh granite. In such rocks, both porosity (and therefore the effective diffusion coefficient) and sorption Kds are more than one order of magnitude smaller compared to fault gouge, thus indicating that long-term retardation is expected to occur but to be less pronounced.