16 results for Subgrid-scale Modelling
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
The evolution of porosity due to dissolution/precipitation processes of minerals and the associated change of transport parameters are of major interest for natural geological environments and engineered underground structures. We designed a reproducible and quick-to-conduct 2D experiment, which is flexible enough to investigate several process couplings implemented in the numerical code OpenGeoSys-GEM (OGS-GEM). We investigated advective-diffusive transport of solutes, the effect of liquid-phase density on advective transport, and kinetically controlled dissolution/precipitation reactions causing porosity changes. In addition, the system allowed us to investigate the influence of microscopic (pore-scale) processes on macroscopic (continuum-scale) transport. A Plexiglas tank of dimensions 10 × 10 cm was filled with a 1 cm thick reactive layer consisting of a bimodal grain size distribution of celestite (SrSO4) crystals, sandwiched between two layers of sand. A barium chloride solution was injected into the tank, causing an asymmetric flow field to develop. As the barium chloride reached the celestite region, dissolution of celestite was initiated and barite precipitated. Due to the higher molar volume of barite, its precipitation caused a porosity decrease and thus also a decrease in the permeability of the porous medium. The change of flow in space and time was observed via injection of conservative tracers and analysis of effluents. In addition, an extensive post-mortem analysis of the reacted medium was conducted. We could successfully model the flow (with and without fluid density effects) and the transport of conservative tracers with a (continuum-scale) reactive transport model. The prediction of the reactive experiments initially failed. Only the inclusion of information from the post-mortem analysis gave a satisfactory match for the case where the flow field changed due to dissolution/precipitation reactions. We therefore concentrated on refining the post-mortem analysis and on investigating the dissolution/precipitation mechanisms at the pore scale. Our analytical techniques combined scanning electron microscopy (SEM) and synchrotron X-ray micro-diffraction/micro-fluorescence performed at the XAS beamline (Swiss Light Source). The newly formed phases include epitaxial growth of barite micro-crystals on large celestite crystals and a nano-crystalline barite phase (resulting from the dissolution of small celestite crystals) with residues of celestite crystals in the pore interstices. Classical nucleation theory, using well-established and estimated parameters describing barite precipitation, was applied to explain the mineralogical changes occurring in our system. Our pore-scale investigation showed the limits of the continuum-scale reactive transport model. Although kinetic effects were implemented by fixing two distinct rates for the dissolution of large and small celestite crystals, instantaneous precipitation of barite was assumed as soon as oversaturation occurred. Precipitation kinetics, passivation of large celestite crystals and the metastability of supersaturated solutions, i.e. the conditions under which nucleation cannot occur despite high supersaturation, were neglected. These results will be used to develop a pore-scale model that describes precipitation and dissolution of crystals for various transport and chemical conditions.
Pore scale modelling can be used to parameterize constitutive equations to introduce pore-scale corrections into macroscopic (continuum) reactive transport models. Microscopic understanding of the system is fundamental for modelling from the pore to the continuum scale.
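A note on the porosity-permeability coupling described above: continuum-scale reactive transport codes typically convert the net volume change of dissolving and precipitating minerals into a porosity update and then rescale permeability with an empirical relation. The sketch below uses approximate literature molar volumes for celestite and barite and a Kozeny-Carman type correction; the rate law, cell size and the specific porosity-permeability relation used in OGS-GEM are not given in the abstract, so everything here is illustrative only.

```python
# Minimal sketch of a porosity update from a dissolution/precipitation step and a
# Kozeny-Carman style permeability correction. Illustrative only; not the OGS-GEM implementation.

V_CELESTITE = 46.25e-6   # molar volume of SrSO4, m^3/mol (approximate literature value)
V_BARITE = 52.10e-6      # molar volume of BaSO4, m^3/mol (approximate literature value)

def update_porosity(phi, d_n_celestite, d_n_barite, cell_volume):
    """Porosity change from moles dissolved (negative) or precipitated (positive) in one cell."""
    d_phi = -(d_n_barite * V_BARITE + d_n_celestite * V_CELESTITE) / cell_volume
    return max(phi + d_phi, 1e-4)   # keep a small residual porosity

def kozeny_carman(k0, phi, phi0):
    """Permeability rescaled from a reference state (k0, phi0)."""
    return k0 * (phi / phi0) ** 3 * ((1.0 - phi0) / (1.0 - phi)) ** 2

# Example: 2e-6 mol of barite precipitates while 2e-6 mol of celestite dissolves in a 1 mm^3 cell
phi0, k0 = 0.33, 1e-12                  # initial porosity and permeability (m^2)
phi = update_porosity(phi0, d_n_celestite=-2e-6, d_n_barite=2e-6, cell_volume=1e-9)
print(phi, kozeny_carman(k0, phi, phi0))    # both porosity and permeability decrease
```

Because barite has the larger molar volume, equal moles exchanged still produce a net volume gain, which is exactly the clogging mechanism the experiment was designed to observe.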
Abstract:
Reactive transport modelling was used to simulate solute transport, thermodynamic reactions, ion exchange and biodegradation in the Porewater Chemistry (PC) experiment at the Mont Terri Rock Laboratory. Simulations show that the most important chemical processes controlling the fluid composition within the borehole and the surrounding formation during the experiment are ion exchange, biodegradation and dissolution/precipitation reactions involving pyrite and carbonate minerals. In contrast, thermodynamic mineral dissolution/precipitation reactions involving alumino-silicate minerals have little impact on the fluid composition on the time-scale of the experiment. With an accurate description of the initial chemical conditions in the formation, in combination with kinetic formulations describing the different stages of bacterial activity, it has been possible to reproduce the evolution of important system parameters, such as the pH, redox potential, total organic C, dissolved inorganic C and SO4 concentrations. Leaching of glycerol from the pH electrode may be the primary source of organic material that initiated bacterial growth, which caused the chemical perturbation in the borehole. Results from these simulations are consistent with data from the over-coring and demonstrate that the Opalinus Clay has a high buffering capacity in terms of chemical perturbations caused by bacterial activity. This buffering capacity can be attributed to the carbonate system as well as to the reactivity of clay surfaces.
Abstract:
The development of susceptibility maps for debris flows is of primary importance due to population pressure in hazardous zones. However, hazard assessment by process-based modelling at a regional scale is difficult due to the complex nature of the phenomenon, the variability of local controlling factors, and the uncertainty in modelling parameters. A regional assessment must consider a simplified approach that is not highly parameter dependent and that can provide zonation with minimum data requirements. A distributed empirical model has thus been developed for regional susceptibility assessments using essentially a digital elevation model (DEM). The model is called Flow-R for Flow path assessment of gravitational hazards at a Regional scale (available free of charge at http://www.flow-r.org) and has been successfully applied to different case studies in various countries with variable data quality. It provides a substantial basis for a preliminary susceptibility assessment at a regional scale. The model was also found relevant for assessing other natural hazards such as rockfall, snow avalanches and floods. The model allows for automatic source area delineation, given user criteria, and for the assessment of the propagation extent based on various spreading algorithms and simple frictional laws. We developed a new spreading algorithm, an improved version of Holmgren's direction algorithm, that is less sensitive to small variations of the DEM and avoids over-channelization, thus producing more realistic extents. The choice of datasets and algorithms is left to the user, which makes the model adaptable to various applications and levels of dataset availability. Among the possible datasets, the DEM is the only one that is strictly needed for both source area delineation and propagation assessment; its quality is of major importance for the accuracy of the results. We consider a 10 m DEM resolution to be a good compromise between processing time and quality of results. However, valuable results have still been obtained on the basis of lower-quality DEMs with 25 m resolution.
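The spreading rule mentioned above can be illustrated with a small sketch. Holmgren's multiple-flow-direction algorithm distributes flow to downslope neighbours in proportion to slope raised to an exponent x; the Flow-R modification adds a vertical offset dh to the central cell to damp sensitivity to small DEM variations. The exponent, offset and cell size below are illustrative placeholders, not the values recommended by the Flow-R authors.

```python
import numpy as np

def holmgren_spread(dem, row, col, x=4.0, dh=2.0, cellsize=10.0):
    """
    Fraction of flow passed to each of the 8 neighbours of (row, col) using a
    Holmgren-type multiple-flow-direction rule. Raising the central cell by dh
    (the idea behind the Flow-R modification) damps sensitivity to small DEM noise.
    Parameter values are illustrative only.
    """
    z0 = dem[row, col] + dh
    slopes = np.zeros(8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    for k, (dr, dc) in enumerate(offsets):
        r, c = row + dr, col + dc
        if 0 <= r < dem.shape[0] and 0 <= c < dem.shape[1]:
            dist = cellsize * np.hypot(dr, dc)
            drop = z0 - dem[r, c]
            if drop > 0:                      # only downslope neighbours receive flow
                slopes[k] = (drop / dist) ** x
    return slopes / slopes.sum() if slopes.sum() > 0 else slopes

# Tiny synthetic DEM: flow from the centre cell spreads mainly towards the lower-right corner
dem = np.array([[10.0, 9.5, 9.0],
                [ 9.5, 9.0, 8.0],
                [ 9.0, 8.0, 6.0]])
print(holmgren_spread(dem, 1, 1).round(2))
```

A larger exponent x concentrates flow along the steepest descent (more channelized), while a smaller x spreads it more widely; the dh offset is what makes the rule robust to metre-scale DEM roughness.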
Abstract:
The migration of radioactive and chemical contaminants in clay materials and argillaceous host rocks is characterised by diffusion and retention processes. Valuable information on such processes can be gained by combining diffusion studies at the laboratory scale with field migration tests. In this work, the outcome of a multi-tracer in situ migration test performed in the Opalinus Clay formation in the Mont Terri underground rock laboratory (Switzerland) is presented. Thus, 1.16 × 10^5 Bq/L of HTO, 3.96 × 10^3 Bq/L of Sr-85, 6.29 × 10^2 Bq/L of Co-60, 2.01 × 10^-3 mol/L Cs, 9.10 × 10^-4 mol/L I and 1.04 × 10^-3 mol/L Br were injected into the borehole. The decrease of the radioisotope concentrations in the borehole was monitored using in situ gamma-spectrometry. The other tracers were analysed with state-of-the-art laboratory procedures after sampling of small water aliquots from the reservoir. The diffusion experiment was carried out over a period of one year, after which the interval section was overcored and analysed. Based on the experimental data from the tracer evolution in the borehole and the tracer profiles in the rock, the diffusion of tracers was modelled with the numerical code CRUNCH. The results obtained for HTO (H-3), I- and Br- confirm previous laboratory and in situ diffusion data. Anionic fluxes into the formation were smaller compared to HTO because of anion exclusion effects. The migration of the cations Sr-85(2+), Cs+ and Co-60(2+) was found to be governed by both diffusion and sorption processes. For Sr-85(2+), the slightly higher diffusivity relative to HTO and the low sorption value are consistent with laboratory diffusion measurements on small-scale samples. In the case of Cs+, the numerically deduced high diffusivity and the Freundlich-type sorption behaviour are also supported by ongoing laboratory data. For Co, no laboratory diffusion data were yet available for comparison; however, the modelled data suggest that Co-60(2+) sorption was weaker than would be expected from available batch sorption data. Overall, the results demonstrate the feasibility of the experimental setup for obtaining high-quality diffusion data for conservative and sorbing tracers.
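For context, continuum models of tracer diffusion in clays such as the one described above commonly combine Fickian diffusion with linear sorption through a rock capacity factor; whether CRUNCH was parameterised in exactly this form is not stated in the abstract, so the following is the standard textbook formulation only.

$$\frac{\partial C}{\partial t} = \frac{D_e}{\alpha}\,\frac{\partial^2 C}{\partial x^2}, \qquad \alpha = \epsilon + \rho_b K_d, \qquad D_a = \frac{D_e}{\alpha},$$

where C is the porewater concentration, D_e the effective diffusion coefficient, ε the diffusion-accessible porosity, ρ_b the bulk dry density, K_d a linear sorption distribution coefficient and D_a the apparent diffusion coefficient. For a conservative tracer such as HTO, K_d = 0 and α reduces to the porosity; for Cs+, the Freundlich-type sorption noted in the abstract means a single K_d is only a local approximation.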
Abstract:
Understanding natural climate variability and its driving factors is crucial to assessing future climate change. Therefore, comparing proxy-based climate reconstructions with forcing factors as well as comparing these with paleoclimate model simulations is key to gaining insights into the relative roles of internal versus forced variability. A review of the state of modelling of the climate of the last millennium prior to the CMIP5–PMIP3 (Coupled Model Intercomparison Project Phase 5–Paleoclimate Modelling Intercomparison Project Phase 3) coordinated effort is presented and compared to the available temperature reconstructions. Simulations and reconstructions broadly agree on reproducing the major temperature changes and suggest an overall linear response to external forcing on multidecadal or longer timescales. Internal variability is found to have an important influence at hemispheric and global scales. The spatial distribution of simulated temperature changes during the transition from the Medieval Climate Anomaly to the Little Ice Age disagrees with that found in the reconstructions. Thus, either internal variability is a possible major player in shaping temperature changes through the millennium or the model simulations have problems realistically representing the response pattern to external forcing. A last millennium transient climate response (LMTCR) is defined to provide a quantitative framework for analysing the consistency between simulated and reconstructed climate. Beyond an overall agreement between simulated and reconstructed LMTCR ranges, this analysis is able to single out specific discrepancies between some reconstructions and the ensemble of simulations. The disagreement is found in the cases where the reconstructions show reduced covariability with external forcings or when they present high rates of temperature change.
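One plausible reading of the LMTCR concept is a regression of multidecadally smoothed hemispheric temperature on total external forcing, scaled to the forcing of a CO2 doubling (about 3.7 W m^-2); the exact definition and data treatment are given in the paper. The sketch below applies this idea to purely synthetic placeholder series.

```python
import numpy as np

# Hedged sketch of a "transient climate response"-style estimate from last-millennium series:
# regress smoothed temperature anomalies on total external forcing, then scale the slope to the
# forcing of a CO2 doubling. The series below are synthetic placeholders, not reconstructions.

rng = np.random.default_rng(0)
years = np.arange(850, 1850)
forcing = -0.3 + 0.15 * np.sin(2 * np.pi * (years - 850) / 400) + rng.normal(0, 0.05, years.size)
temperature = 0.4 * forcing + rng.normal(0, 0.08, years.size)   # K, synthetic

def smooth(series, window=31):
    """Simple running mean to isolate multidecadal variability."""
    kernel = np.ones(window) / window
    return np.convolve(series, kernel, mode="valid")

slope = np.polyfit(smooth(forcing), smooth(temperature), 1)[0]  # K per W m^-2
scaled_response = slope * 3.7                                   # K per CO2 doubling
print(f"regression slope: {slope:.2f} K (W m^-2)^-1, scaled response: {scaled_response:.2f} K")
```

Reduced covariability between a reconstruction and the forcing series shows up directly as a weaker, noisier slope, which is the kind of discrepancy the abstract describes.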
Abstract:
The Wetland and Wetland CH4 Intercomparison of Models Project (WETCHIMP) was created to evaluate our present ability to simulate large-scale wetland characteristics and corresponding methane (CH4) emissions. A multi-model comparison is essential to evaluate the key uncertainties in the mechanisms and parameters leading to methane emissions. Ten modelling groups joined WETCHIMP to run eight global and two regional models with a common experimental protocol using the same climate and atmospheric carbon dioxide (CO2) forcing datasets. We reported the main conclusions from the intercomparison effort in a companion paper (Melton et al., 2013). Here we provide technical details for the six experiments, which included an equilibrium, a transient, and an optimized run plus three sensitivity experiments (temperature, precipitation, and atmospheric CO2 concentration). The diversity of approaches used by the models is summarized through a series of conceptual figures, and is used to evaluate the wide range of wetland extent and CH4 fluxes predicted by the models in the equilibrium run. We discuss relationships among the various approaches and patterns in consistencies of these model predictions. Within this group of models, there are three broad classes of methods used to estimate wetland extent: prescribed based on wetland distribution maps, prognostic relationships between hydrological states based on satellite observations, and explicit hydrological mass balances. A larger variety of approaches was used to estimate the net CH4 fluxes from wetland systems. Even though modelling of wetland extent and CH4 emissions has progressed significantly over recent decades, large uncertainties still exist when estimating CH4 emissions: there is little consensus on model structure or complexity due to knowledge gaps, different aims of the models, and the range of temporal and spatial resolutions of the models.
Abstract:
Global wetlands are believed to be climate sensitive, and are the largest natural emitters of methane (CH4). Increased wetland CH4 emissions could act as a positive feedback to future warming. The Wetland and Wetland CH4 Inter-comparison of Models Project (WETCHIMP) investigated our present ability to simulate large-scale wetland characteristics and corresponding CH4 emissions. To ensure inter-comparability, we used a common experimental protocol driving all models with the same climate and carbon dioxide (CO2) forcing datasets. The WETCHIMP experiments were conducted for model equilibrium states as well as transient simulations covering the last century. Sensitivity experiments investigated model response to changes in selected forcing inputs (precipitation, temperature, and atmospheric CO2 concentration). Ten models participated, covering the spectrum from simple to relatively complex, including models tailored either for regional or global simulations. The models also varied in their methods of calculating wetland size and location, with some models simulating wetland area prognostically, while other models relied on remotely sensed inundation datasets, or used an approach intermediate between the two. Four major conclusions emerged from the project. First, the suite of models demonstrates extensive disagreement in simulated wetland areal extent and CH4 emissions, in both space and time. Simple metrics of wetland area, such as the latitudinal gradient, show large variability, principally between models that use inundation dataset information and those that independently determine wetland area. Agreement between the models improves for zonally summed CH4 emissions, but large variation between the models remains. For annual global CH4 emissions, the models vary by ±40% of the all-model mean (190 Tg CH4 yr−1). Second, all models show a strong positive response to increased atmospheric CO2 concentrations (857 ppm) in both CH4 emissions and wetland area. In response to increased global temperature (a spatially uniform +3.4 °C), the models on average decreased wetland area and CH4 fluxes, primarily in the tropics, but the magnitude and sign of the response varied greatly. Models were least sensitive to increased global precipitation (a spatially uniform +3.9%), with a consistent small positive response in CH4 fluxes and wetland area. Results from the 20th century transient simulation show that interactions between climate forcings could have strong non-linear effects. Third, we presently do not have wetland methane observation datasets adequate to evaluate model fluxes at a spatial scale comparable to model grid cells (commonly 0.5°). This limitation severely restricts our ability to model global wetland CH4 emissions with confidence. Our simulated wetland extents are also difficult to evaluate due to extensive disagreements between wetland mapping and remotely sensed inundation datasets. Fourth, the large range in predicted CH4 emission rates leads to the conclusion that there is both substantial parameter and structural uncertainty in large-scale CH4 emission models, even after uncertainties in wetland areas are accounted for.
Abstract:
For a three-dimensional vertically-oriented fault zone, we consider the coupled effects of fluid flow, heat transfer and reactive mass transport, to investigate the patterns of fluid flow, temperature distribution, mineral alteration and chemically induced porosity changes. We show, analytically and numerically, that finger-like convection patterns can arise in a vertically-oriented fault zone. The onset and patterns of convective fluid flow are controlled by the Rayleigh number, which is a function of the thermal properties of the fluid and the rock, the vertical temperature gradient, and the height and the permeability of the fault zone. Vigorous fluid flow causes low temperature gradients over a large region of the fault zone. In such a case, flow across lithological interfaces becomes the most important mechanism for the formation of sharp chemical reaction fronts. The degree of rock buffering, the extent and intensity of alteration, the alteration mineralogy and in some cases the formation of ore deposits are controlled by the magnitude of the flow velocity across these compositional interfaces in the rock. This indicates that alteration patterns along compositional boundaries in the rock may provide some insights into the convection pattern. The advective mass and heat exchanges between the fault zone and the wallrock depend on the permeability contrast between the fault zone and the wallrock. A high permeability contrast promotes focussed convective flow within the fault zone and diffusive exchange of heat and chemical reactants between the fault zone and the wallrock. However, a more gradual permeability change may lead to a regional-scale convective flow system where the flow pattern in the fault affects large-scale fluid flow, mass transport and chemical alteration in the wallrocks.
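For reference, the porous-medium (Horton-Rogers-Lapwood) Rayleigh number consistent with the dependencies listed above is usually written as

$$Ra = \frac{\rho_f\, g\, \alpha_f\, \Delta T\, k\, H}{\mu\, \kappa_m},$$

where ρ_f is the fluid density, g the gravitational acceleration, α_f the thermal expansion coefficient of the fluid, ΔT the temperature difference across a layer of height H, k the permeability, μ the fluid dynamic viscosity and κ_m the effective thermal diffusivity of the saturated rock. In the classical configuration with isothermal, impermeable boundaries, convection sets in for Ra greater than about 4π² ≈ 39.5; the abstract does not give the exact form or critical value used for the fault-zone geometry, so this is the standard expression only.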
Abstract:
We present quantitative reconstructions of regional vegetation cover in north-western Europe, western Europe north of the Alps, and eastern Europe for five time windows in the Holocene [around 6k, 3k, 0.5k, 0.2k, and 0.05k calendar years before present (bp)] at a 1° × 1° spatial scale, with the objective of producing vegetation descriptions suitable for climate modelling. The REVEALS model was applied to 636 pollen records from lakes and bogs to reconstruct the past cover of 25 plant taxa grouped into 10 plant-functional types and three land-cover types [evergreen trees, summer-green (deciduous) trees, and open land]. The model corrects for some of the biases in pollen percentages by using pollen productivity estimates and fall speeds of pollen, and by applying simple but robust models of pollen dispersal and deposition. The emerging patterns of tree migration and deforestation between 6k bp and modern time in the REVEALS estimates agree with our general understanding of the vegetation history of Europe based on pollen percentages. However, the degree of anthropogenic deforestation (i.e. cover of cultivated and grazing land) at 3k, 0.5k, and 0.2k bp is significantly higher than deduced from pollen percentages. This is also the case at 6k bp in some parts of Europe, in particular Britain and Ireland. Furthermore, the relationship between summer-green and evergreen trees, and between individual tree taxa, differs significantly when expressed as pollen percentages or as REVEALS estimates of tree cover. For instance, where Pinus is dominant over Picea in pollen percentages, Picea is dominant over Pinus in the REVEALS estimates. These differences play a major role in the reconstruction of European landscapes and for the study of land cover-climate interactions, biodiversity and human resources.
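To make the pollen-percentage versus cover contrast concrete, the sketch below shows only the productivity-correction step: taxa with high pollen productivity are down-weighted before proportions are recomputed. The real REVEALS model additionally describes pollen dispersal and deposition using fall speeds and basin characteristics, and the pollen productivity estimates (PPEs) below are placeholders, not published values.

```python
# Heavily simplified illustration of why productivity-corrected cover differs from raw
# pollen percentages. PPE values and counts are placeholders for illustration only.

pollen_counts = {"Pinus": 450, "Picea": 300, "Quercus": 150, "Poaceae": 100}
ppe = {"Pinus": 6.0, "Picea": 2.0, "Quercus": 5.0, "Poaceae": 1.0}   # relative to a reference taxon

raw_total = sum(pollen_counts.values())
raw_pct = {t: 100 * n / raw_total for t, n in pollen_counts.items()}

adjusted = {t: pollen_counts[t] / ppe[t] for t in pollen_counts}     # down-weight prolific producers
adj_total = sum(adjusted.values())
cover_pct = {t: 100 * v / adj_total for t, v in adjusted.items()}

for taxon in pollen_counts:
    print(f"{taxon:8s} pollen {raw_pct[taxon]:5.1f}%  ->  estimated cover {cover_pct[taxon]:5.1f}%")
# Pinus dominates the pollen sum, but after correcting for its high pollen productivity
# Picea and open-land taxa gain relative cover: the kind of reversal noted in the abstract.
```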
Abstract:
The present research analyses the adequacy of the widely used Career Satisfaction Scale (CSS; Greenhaus, Parasuraman, & Wormley, 1990) for measuring change over time. We used data of a sample of 1,273 professionals over a 5-year time period. First, we tested longitudinal measurement invariance of the CSS. Second, we analysed changes in career satisfaction by means of multiple indicator latent growth modelling (MLGM). Results revealed that the CSS can be reliably used in mean change analyses. Altogether, career satisfaction was relatively stable over time; however, we found significant variance in intra-individual growth trajectories and a negative correlation between the initial level of and changes in career satisfaction. Professionals who were initially highly satisfied became less satisfied over time. Theoretical and practical implications with respect to the construct of career satisfaction and its development over time (i.e., alpha, beta, and gamma change) are discussed.
Abstract:
This paper reviews the methods, benefits and challenges associated with the adoption and translation of computational fluid dynamics (CFD) modelling within cardiovascular medicine. CFD, a specialist area of mathematics and a branch of fluid mechanics, is used routinely in a diverse range of safety-critical engineering systems and is increasingly being applied to the cardiovascular system. By facilitating rapid, economical, low-risk prototyping, CFD modelling has already revolutionised research and development of devices such as stents, valve prostheses, and ventricular assist devices. Combined with cardiovascular imaging, CFD simulation enables detailed characterisation of complex physiological pressure and flow fields and the computation of metrics which cannot be directly measured, for example, wall shear stress. CFD models are now being translated into clinical tools for physicians to use across the spectrum of coronary, valvular, congenital, myocardial and peripheral vascular diseases. CFD modelling is apposite for minimally invasive patient assessment. Patient-specific modelling (incorporating data unique to the individual) and multi-scale modelling (combining models of different length- and time-scales) enable individualised risk prediction and virtual treatment planning. This represents a significant departure from traditional dependence upon registry-based, population-averaged data. Model integration is progressively moving towards 'digital patient' or 'virtual physiological human' representations. When combined with population-scale numerical models, these models have the potential to reduce the cost, time and risk associated with clinical trials. The adoption of CFD modelling signals a new era in cardiovascular medicine. While potentially highly beneficial, CFD modelling faces methodological, regulatory, education- and service-related challenges, which a number of academic and commercial groups are now addressing.
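As a simple illustration of a quantity that is computed rather than measured: for idealised steady Poiseuille flow in a straight cylindrical vessel, wall shear stress reduces to

$$\tau_w = \frac{4\,\mu\, Q}{\pi\, r^3},$$

with μ the dynamic viscosity of blood, Q the volumetric flow rate and r the lumen radius. In real arteries the flow is pulsatile and the geometry complex, which is precisely why CFD is needed; this idealised formula serves only as an order-of-magnitude check on simulated wall shear stress.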
Abstract:
BACKGROUND AND AIMS Hepatitis C (HCV) is a leading cause of morbidity and mortality in people who live with HIV. In many countries, access to direct-acting antiviral agents to treat HCV is restricted to individuals with advanced liver disease (METAVIR stage F3 or F4). Our goal was to estimate the long-term impact of deferring HCV treatment for men who have sex with men (MSM) who are coinfected with HIV and often have multiple risk factors for liver disease progression. METHODS We developed an individual-based model of liver disease progression in HIV/HCV coinfected MSM. We estimated liver-related morbidity and mortality as well as the median time spent with replicating HCV infection when individuals were treated in liver fibrosis stages F0, F1, F2, F3 or F4 on the METAVIR scale. RESULTS The percentage of individuals who died of liver-related complications was 2% if treatment was initiated in F0 or F1. It increased to 3% if treatment was deferred until F2, 7% if it was deferred until F3, and 22% if deferred until F4. The median time individuals spent with replicating HCV increased from 5 years if treatment was initiated in F2 to almost 15 years if it was deferred until F4. CONCLUSIONS Deferring HCV therapy until advanced liver fibrosis is established could increase liver-related morbidity and mortality in HIV/HCV coinfected individuals, and substantially prolong the time individuals spend with replicating HCV infection.
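A minimal sketch of how such an individual-based deferral analysis can be structured is given below. Fibrosis follows the METAVIR stages F0-F4, but every probability is a hypothetical placeholder rather than a parameter estimated in the study; only the mechanism (later treatment means more person-years with replicating virus and in advanced fibrosis) is meant to carry over.

```python
import random
import statistics

# All annual probabilities below are hypothetical placeholders chosen for illustration.
PROGRESSION = 0.10        # probability of advancing one fibrosis stage while uncured
CURE = 0.90               # probability that treatment, once started, clears HCV in a given year
DEATH_F4_UNCURED = 0.08   # liver-related death probability with untreated cirrhosis
DEATH_F4_CURED = 0.02     # assumed residual risk after cure in cirrhosis

def simulate(treat_at, horizon=40, rng=None):
    """Return (years with replicating HCV, died of liver-related cause) for one person."""
    rng = rng or random.Random()
    stage, cured, years_infected = 0, False, 0
    for _ in range(horizon):
        if not cured:
            years_infected += 1
            if stage >= treat_at and rng.random() < CURE:
                cured = True
            elif stage < 4 and rng.random() < PROGRESSION:
                stage += 1
        if stage == 4:
            risk = DEATH_F4_CURED if cured else DEATH_F4_UNCURED
            if rng.random() < risk:
                return years_infected, True
    return years_infected, False

rng = random.Random(42)
for treat_at in range(5):
    runs = [simulate(treat_at, rng=rng) for _ in range(20_000)]
    median_years = statistics.median(y for y, _ in runs)
    mortality = sum(d for _, d in runs) / len(runs)
    print(f"treat at F{treat_at}: median years with replicating HCV {median_years:.0f}, "
          f"liver-related mortality {100 * mortality:.1f}%")
# Deferring treatment lengthens the time with replicating HCV and increases exposure to
# cirrhosis-stage mortality risk, the mechanism behind the findings reported in the abstract.
```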
Abstract:
BACKGROUND The number of patients in need of second-line antiretroviral drugs is increasing in sub-Saharan Africa. We aimed to project the need of second-line antiretroviral therapy in adults in sub-Saharan Africa up to 2030. METHODS We developed a simulation model for HIV and applied it to each sub-Saharan African country. We used the WHO country intelligence database to estimate the number of adult patients receiving antiretroviral therapy from 2005 to 2014. We fitted the number of adult patients receiving antiretroviral therapy to observed estimates, and predicted first-line and second-line needs between 2015 and 2030. We present results for sub-Saharan Africa, and eight selected countries. We present 18 scenarios, combining the availability of viral load monitoring, speed of antiretroviral scale-up, and rates of retention and switching to second-line. HIV transmission was not included. FINDINGS Depending on the scenario, 8·7-25·6 million people are expected to receive antiretroviral therapy in 2020, of whom 0·5-3·0 million will be receiving second-line antiretroviral therapy. The proportion of patients on treatment receiving second-line therapy was highest (15·6%) in the scenario with perfect retention and immediate switching, no further scale-up, and universal routine viral load monitoring. In 2030, the estimated range of patients receiving antiretroviral therapy will remain constant, but the number of patients receiving second-line antiretroviral therapy will increase to 0·8-4·6 million (6·6-19·6%). The need for second-line antiretroviral therapy was two to three times higher if routine viral load monitoring was implemented throughout the region, compared with a scenario of no further viral load monitoring scale-up. For each monitoring strategy, the future proportion of patients receiving second-line antiretroviral therapy differed only minimally between countries. INTERPRETATION Donors and countries in sub-Saharan Africa should prepare for a substantial increase in the need for second-line drugs during the next few years as access to viral load monitoring improves. An urgent need exists to decrease the costs of second-line drugs. FUNDING World Health Organization, Swiss National Science Foundation, National Institutes of Health.
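A heavily simplified compartmental sketch of the projection logic is given below: patients on first-line therapy switch to second-line therapy at a rate that depends on whether viral load monitoring detects treatment failure. All starting numbers and rates are hypothetical placeholders; the study fitted country-specific models to WHO programme data, which this sketch does not attempt to reproduce.

```python
# Two compartments (patients on first-line and on second-line ART) evolve with annual
# enrolment, attrition and switching. Numbers and rates are illustrative placeholders only.

def project(first_line, second_line, years, enrolment, switch_rate, attrition):
    trajectory = []
    for year in years:
        switched = switch_rate * first_line          # detected failures moved to second line
        first_line = first_line * (1 - attrition) - switched + enrolment
        second_line = second_line * (1 - attrition) + switched
        trajectory.append((year, first_line, second_line))
    return trajectory

years = range(2015, 2031)
# Scenario A: routine viral load monitoring -> failures detected and switched more often
with_vl = project(10.0e6, 0.4e6, years, enrolment=0.9e6, switch_rate=0.03, attrition=0.05)
# Scenario B: limited monitoring -> lower switching rate, same enrolment and attrition
without_vl = project(10.0e6, 0.4e6, years, enrolment=0.9e6, switch_rate=0.01, attrition=0.05)

for (year, f1, s1), (_, f2, s2) in zip(with_vl, without_vl):
    if year in (2020, 2030):
        print(f"{year}: second-line {s1/1e6:.1f} M with routine VL vs {s2/1e6:.1f} M without "
              f"(total on ART ~{(f1 + s1)/1e6:.1f} M)")
# With these placeholder rates, wider viral load monitoring roughly doubles or triples the
# projected second-line need, qualitatively in line with the relationship reported above.
```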
Abstract:
Accurate rainfall data are the key input parameter for modelling river discharge and soil loss. Remote areas of Ethiopia often lack adequate precipitation data, and where these data are available, there may be substantial temporal or spatial gaps. To counter this challenge, the Climate Forecast System Reanalysis (CFSR) of the National Centers for Environmental Prediction (NCEP) readily provides weather data for any geographic location on earth between 1979 and 2014. This study assesses the applicability of CFSR weather data to three watersheds in the Blue Nile Basin in Ethiopia. To this end, the Soil and Water Assessment Tool (SWAT) was set up to simulate discharge and soil loss, using CFSR and conventional weather data, in three small-scale watersheds ranging from 112 to 477 ha. Calibrated simulation results were compared to observed river discharge and observed soil loss over a period of 32 years. The conventional weather data resulted in very good discharge outputs for all three watersheds, while the CFSR weather data resulted in unsatisfactory discharge outputs for all three gauging stations. Soil loss simulation with conventional weather inputs yielded satisfactory outputs for two of the three watersheds, while the CFSR weather input yielded unsatisfactory results for all three. Overall, the simulations with the conventional data produced far better results for discharge and soil loss than the simulations with CFSR data. The CFSR data were unable to adequately represent the specific regional climate of the three watersheds, performing even worse in climatic areas with two rainy seasons. Hence, CFSR data should not be used lightly in remote areas lacking conventional weather data, where no prior analysis is possible.
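The abstract grades simulations as "very good", "satisfactory" or "unsatisfactory"; SWAT studies commonly base such ratings on goodness-of-fit statistics such as the Nash-Sutcliffe efficiency (NSE), although the abstract does not name the metric or thresholds used. The sketch below uses hypothetical discharge values purely to show how such a statistic separates a close fit from a poor one.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); 1 indicates a perfect fit."""
    observed, simulated = np.asarray(observed, float), np.asarray(simulated, float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

# Hypothetical monthly discharge (m^3/s) at one gauging station
observed = [1.2, 3.4, 8.1, 12.5, 9.3, 4.2, 2.0, 1.1]
sim_conventional = [1.0, 3.1, 7.6, 11.8, 9.9, 4.6, 2.3, 1.3]   # close to the observations
sim_cfsr = [0.4, 1.0, 2.5, 18.0, 15.0, 9.0, 0.5, 0.2]          # misses the seasonal timing

print(f"conventional forcing: NSE = {nash_sutcliffe(observed, sim_conventional):.2f}")
print(f"CFSR forcing:         NSE = {nash_sutcliffe(observed, sim_cfsr):.2f}")
# An NSE near 1 indicates a close fit; values at or below 0 mean the simulation performs
# no better than simply using the observed mean discharge.
```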