969 results for intervention modelling experiments
Abstract:
Innovative gas cooled reactors, such as the pebble bed reactor (PBR) and the gas cooled fast reactor (GFR), offer higher efficiency and new application areas for nuclear energy. Numerical methods were applied and developed to analyse the specific features of these reactor types with fully three-dimensional calculation models. In the first part of this thesis, the discrete element method (DEM) was used for physically realistic modelling of the packing of fuel pebbles in PBR geometries, and methods were developed for utilising the DEM results in subsequent reactor physics and thermal-hydraulics calculations. In the second part, the flow and heat transfer for a single gas cooled fuel rod of a GFR were investigated with computational fluid dynamics (CFD) methods. An in-house DEM implementation was validated and used for packing simulations, in which the effect of several parameters on the resulting average packing density was investigated. The restitution coefficient was found to have the most significant effect. The results can be utilised in further work to obtain a pebble bed with a specific packing density. The packing structures of selected pebble beds were also analysed in detail and local variations in the packing density were observed, which should be taken into account especially in reactor core thermal-hydraulic analyses. Two open source DEM codes were used to produce stochastic pebble bed configurations to add realism and improve the accuracy of criticality calculations performed with the Monte Carlo reactor physics code Serpent. Russian ASTRA criticality experiments were calculated. Pebble beds corresponding to the experimental specifications within measurement uncertainties were produced in DEM simulations and successfully exported into the subsequent reactor physics analysis. With the developed approach, two typical issues in Monte Carlo reactor physics calculations of pebble bed geometries were avoided. A novel method was developed and implemented as a MATLAB code to calculate porosities in the cells of a CFD calculation mesh constructed over a pebble bed obtained from DEM simulations. The code was further developed to distribute power and temperature data accurately between discrete-based reactor physics and continuum-based thermal-hydraulics models to enable coupled reactor core calculations. The developed method was also found useful for analysing sphere packings in general. CFD calculations were performed to investigate the pressure losses and heat transfer in three-dimensional air cooled smooth and rib-roughened rod geometries, housed inside a hexagonal flow channel representing a sub-channel of a single fuel rod of a GFR. The CFD geometry represented the test section of the L-STAR experimental facility at Karlsruhe Institute of Technology, and the calculation results were compared to the corresponding experimental results. Knowledge was gained of the adequacy of various turbulence models and of the modelling requirements and issues related to this specific application. The obtained pressure loss results were in relatively good agreement with the experimental data. Heat transfer in the smooth rod geometry was somewhat underpredicted, which can partly be explained by unaccounted heat losses and uncertainties. In the rib-roughened geometry, heat transfer was severely underpredicted by the realisable k-epsilon turbulence model used.
An additional calculation with a v2-f turbulence model showed significant improvement in the heat transfer results, which is most likely due to the better performance of the model in separated flow problems. Further investigations are suggested before using CFD to draw conclusions about the heat transfer performance of rib-roughened GFR fuel rod geometries. It is suggested that the viewpoint of numerical modelling be included in the planning of experiments, to ease the challenging model construction and simulations and to avoid introducing additional sources of uncertainty. To facilitate the use of advanced calculation approaches, multi-physical aspects of experiments should also be considered and documented in reasonable detail.
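The porosity-mapping step mentioned above lends itself to a simple illustration. The sketch below is Python, not the thesis' MATLAB implementation; the Monte Carlo point-sampling approach and all names are assumptions made for illustration only. It estimates the porosity of each cell of a structured mesh laid over a DEM sphere packing.

```python
# Illustrative sketch (not the thesis' MATLAB code): estimate the porosity of each
# cell of a structured mesh laid over a DEM pebble packing by Monte Carlo sampling.
import numpy as np

def cell_porosities(centres, radius, cell_edges, n_samples=2000, rng=None):
    """centres: (N, 3) pebble centres; radius: pebble radius (monosized bed);
    cell_edges: (xs, ys, zs) 1-D arrays of cell boundaries along each axis."""
    rng = np.random.default_rng(rng)
    xs, ys, zs = cell_edges
    nx, ny, nz = len(xs) - 1, len(ys) - 1, len(zs) - 1
    porosity = np.empty((nx, ny, nz))
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                lo = np.array([xs[i], ys[j], zs[k]])
                hi = np.array([xs[i + 1], ys[j + 1], zs[k + 1]])
                pts = rng.uniform(lo, hi, size=(n_samples, 3))
                # distance of each sample point to the nearest pebble centre
                d2 = ((pts[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
                inside_solid = d2.min(axis=1) <= radius ** 2
                porosity[i, j, k] = 1.0 - inside_solid.mean()
    return porosity
```

The same per-cell solid fractions can then be used to map power and temperature data between the discrete and continuum models, which is the coupling role the abstract describes.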
Abstract:
The energy consumption of IT equipment is becoming an issue of increasing importance. In particular, network equipment such as routers and switches is a major contributor to the energy consumption of the Internet. It is therefore important to understand the relationship between input parameters such as bandwidth, number of active ports, traffic load and hibernation mode, and their impact on the energy consumption of a switch. In this paper, the energy consumption of a switch is analyzed in extensive experiments. A fuzzy rule-based model of the energy consumption of a switch is proposed based on the results of the experiments. The model can be used to predict the energy savings when deploying new switches, by controlling the parameters to achieve the desired energy consumption and subsequent performance. Furthermore, the model can also be used in further research on energy saving techniques such as energy-efficient routing protocols, dynamic link shutdown, etc.
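A minimal sketch of what a fuzzy rule-based power model can look like is given below (Python). The membership functions, rule set and wattages are illustrative placeholders, not the ones fitted in the paper.

```python
# Minimal zero-order Sugeno-style fuzzy sketch of switch power draw.
# Membership functions, rules and consequents are illustrative placeholders.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def switch_power(load, active_ports, n_ports=24):
    """load: traffic load in [0, 1]; active_ports: number of ports in use."""
    port_frac = active_ports / n_ports
    # Antecedent memberships (illustrative shapes).
    load_low, load_high = tri(load, -0.5, 0.0, 0.6), tri(load, 0.4, 1.0, 1.5)
    ports_few, ports_many = tri(port_frac, -0.5, 0.0, 0.6), tri(port_frac, 0.4, 1.0, 1.5)
    # Rules: firing strength = product of memberships; consequents are constant watts.
    rules = [
        (load_low * ports_few,   70.0),  # near-idle switch
        (load_low * ports_many,  85.0),  # many ports up, little traffic
        (load_high * ports_few,  90.0),
        (load_high * ports_many, 110.0), # fully busy switch
    ]
    w = np.array([r[0] for r in rules])
    z = np.array([r[1] for r in rules])
    return float((w * z).sum() / w.sum())

print(switch_power(load=0.3, active_ports=12))
```

The point of the rule-based form is that each rule stays human-readable ("if load is high and many ports are active, power is about 110 W"), which is what makes such a model usable for what-if questions about link shutdown or port consolidation.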
Abstract:
Mobile augmented reality applications are increasingly utilized as a medium for enhancing learning and engagement in history education. Although these digital devices facilitate learning through immersive and appealing experiences, their design should be driven by theories of learning and instruction. We provide an overview of an evidence-based approach to optimize the development of mobile augmented reality applications that teach students about history. Our research aims to evaluate and model the impact of design parameters on learning and engagement. The research program is interdisciplinary in that we apply techniques derived from design-based experiments and educational data mining. We outline the methodological and analytical techniques and discuss the implications of the anticipated findings.
Abstract:
Modern societies depend increasingly on computer systems, and there is therefore ever more pressure on development teams to produce high-quality software. Many companies use quality models, suites of programs that analyse and evaluate the quality of other programs, but building quality models is difficult because several questions remain unanswered in the literature. We studied quality modelling practices at a large company and identified three dimensions where additional research is desirable: support for the subjectivity of quality, techniques for tracking quality as software evolves, and the composition of quality across different levels of abstraction. Regarding subjectivity, we proposed the use of Bayesian models because they can handle ambiguous data. We applied our models to the problem of detecting design defects. In a study of two open-source systems, we found that our approach outperforms the rule-based techniques described in the state of the art. To support software evolution, we treated the scores produced by a quality model as signals that can be analysed with data mining techniques to identify patterns of quality evolution. We studied how design defects appear in and disappear from software. Software is typically designed as a hierarchy of components, but quality models do not take this organisation into account. In the last part of the dissertation, we present a two-level quality model. These models have three parts: a model at the component level, a model that evaluates the importance of each component, and another that evaluates the quality of a composite by combining the quality of its components. The approach was tested on predicting change-prone classes from the quality of their methods. We found that our two-level models identify change-prone classes better. Finally, we applied our two-level models to evaluating the navigability of web sites from the quality of their pages. Our models were able to distinguish between sites of very high quality and randomly chosen sites. Throughout the dissertation, we present not only theoretical problems and their solutions, but also experiments conducted to demonstrate the advantages and limitations of our solutions. Our results indicate that the state of the art can be improved along the three dimensions presented. In particular, our work on quality composition and importance modelling is the first to target this problem. We believe that our two-level models are an interesting starting point for further research.
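The two-level composition idea described above can be illustrated with a short sketch (Python). The importance weights and quality scores below are placeholders, not the dissertation's Bayesian models; the only point shown is how component-level scores and importances combine into a composite score.

```python
# Illustrative two-level aggregation: method-level quality scores are combined into
# a class-level score, weighted by how "important" each method is.
from dataclasses import dataclass

@dataclass
class Method:
    quality: float     # method-level quality score in [0, 1] (higher is better)
    importance: float  # relative importance weight (e.g. size, centrality, call-ins)

def class_quality(methods: list[Method]) -> float:
    """Weighted aggregation of method quality into a class-level score."""
    total_w = sum(m.importance for m in methods)
    if total_w == 0:
        return 1.0  # no weighted evidence: treat as neutral by convention
    return sum(m.quality * m.importance for m in methods) / total_w

# Example: one heavily used, low-quality method drags the class score down.
methods = [Method(0.9, 1.0), Method(0.8, 1.0), Method(0.2, 5.0)]
print(class_quality(methods))  # ~0.39
```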
Abstract:
In many real world contexts individuals find themselves in situations where they have to decide between behaviours that serve a collective purpose and behaviours that satisfy their private interests, ignoring the collective. In some cases the underlying social dilemma (Dawes, 1980) is solved and we observe collective action (Olson, 1965). In others, social mobilisation is unsuccessful. The central topic of social dilemma research is the identification and understanding of the mechanisms that lead to the observed cooperation and thereby resolve the social dilemma. The purpose of this thesis is to contribute to this research field for the case of public good dilemmas. To do so, existing work relevant to this problem domain is reviewed and a set of mandatory requirements is derived, which guides the theory and method development of the thesis. In particular, the thesis focusses on dynamic processes of social mobilisation which can foster or inhibit collective action. The basic understanding is that the success or failure of the required process of social mobilisation is determined by the heterogeneous individual preferences of the members of a providing group, the social structure in which the acting individuals are embedded, and the embedding of the individuals in economic, political, biophysical, or other external contexts. To account for these aspects and for the dynamics involved, the methodical approach of the thesis is computer simulation, in particular agent-based modelling and simulation of social systems. Particularly conducive are agent models that ground the simulation of human behaviour in suitable psychological theories of action. The thesis develops the action theory HAPPenInGS (Heterogeneous Agents Providing Public Goods) and demonstrates its embedding into different agent-based simulations. The thesis substantiates the particular added value of the methodical approach: starting out from a theory of individual behaviour, the emergence of collective patterns of behaviour becomes observable in simulations. In addition, the underlying collective dynamics may be scrutinised and assessed by scenario analysis. The results of such experiments reveal insights into processes of social mobilisation which go beyond classical empirical approaches and, in particular, yield policy recommendations on promising intervention measures.
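A toy agent-based sketch of social mobilisation in a public good dilemma is shown below (Python). It uses a simple conditional-cooperation threshold rule on a random network; it is not the HAPPenInGS theory itself, and all parameters are assumptions chosen only to show how heterogeneous preferences plus social structure produce a collective outcome.

```python
# Toy threshold model: heterogeneous agents contribute only if enough of their
# neighbours contributed in the previous round (conditional cooperation).
import random

def simulate(n=100, degree=6, seed_contributors=10, rounds=30, seed=1):
    rng = random.Random(seed)
    thresholds = [rng.uniform(0.0, 0.6) for _ in range(n)]      # heterogeneous preferences
    neighbours = [rng.sample([j for j in range(n) if j != i], degree) for i in range(n)]
    contributes = [i < seed_contributors for i in range(n)]     # initial activists
    for _ in range(rounds):
        share = [sum(contributes[j] for j in neighbours[i]) / degree for i in range(n)]
        contributes = [share[i] >= thresholds[i] for i in range(n)]
    return sum(contributes) / n  # final level of collective action

print(simulate())
```

Depending on the threshold distribution and the number of initial activists, mobilisation either cascades through the network or dies out, which is exactly the kind of scenario comparison the abstract refers to.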
Abstract:
Municipal landfill leachates are highly polluted wastewaters, characterised by high ammonium concentrations and a low content of biodegradable organic matter. Treating leachate through conventional nitrification-denitrification processes is costly because of the high oxygen demand and the need to add an external carbon source. In recent years, the feasibility of treating this type of influent with a combined partial nitritation-anammox process has been demonstrated. This thesis focuses on the treatment of landfill leachate through a partial nitritation process in an SBR, as a preparatory step for an anammox reactor. The results of the study demonstrate the feasibility of this technology for the treatment of landfill leachate. The work evolved from an initial laboratory scale, where the process was first tested, to successful long-term operation experiments at pilot scale. Finally, the thesis also includes the development, calibration and validation of a mathematical model of the process, aimed at increasing understanding of the process.
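For illustration, a generic Monod-type sketch of ammonium oxidation to nitrite during an SBR react phase is given below (Python, explicit Euler). It is not the calibrated model developed in the thesis; all parameter values are textbook-order placeholders and nitrogen incorporated into biomass is neglected.

```python
# Generic Monod-type sketch of partial nitritation (ammonium -> nitrite) in a batch
# react phase. Placeholder parameters only; not the thesis model.
mu_max, Y, K_nh, K_o2, b = 0.9 / 24, 0.15, 1.0, 0.5, 0.05 / 24  # 1/h, gCOD/gN, gN/m3, gO2/m3, 1/h
S_nh, S_no2, X_aob, S_o2 = 800.0, 0.0, 300.0, 1.5               # gN/m3, gN/m3, gCOD/m3, gO2/m3
dt, hours = 0.05, 12
for _ in range(int(hours / dt)):
    mu = mu_max * S_nh / (K_nh + S_nh) * S_o2 / (K_o2 + S_o2)   # specific growth rate of AOB
    r_nh = -mu * X_aob / Y                                       # ammonium uptake, gN/m3/h
    S_nh = max(S_nh + r_nh * dt, 0.0)
    S_no2 += -r_nh * dt                                          # nitrite produced
    X_aob += (mu - b) * X_aob * dt                               # biomass growth minus decay
print(round(S_nh, 1), round(S_no2, 1))  # ammonium left vs nitrite formed after 12 h
```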
Abstract:
Aquatic sediments often remove hydrophobic contaminants from fresh waters. The subsequent distribution and concentration of contaminants in bed sediments determine their effect on benthic organisms and the risk of re-entry into the water and/or leaching to groundwater. This study examines the transport of simazine and lindane in aquatic bed sediments with the aim of understanding the processes that determine their depth distribution. Experiments in flume channels (water flow of 10 cm s⁻¹) determined the persistence of the compounds in the absence of sediment with (a) de-ionised water and (b) a solution that had been in contact with river sediment. In further experiments with river bed sediments under light and dark conditions, measurements were made of the concentration of the compounds in the overlying water and of the development of bacterial/algal biofilms and bioturbation activity. At the end of the experiments, concentrations in sediments and associated pore waters were determined in sections of the sediment at 1 mm resolution down to 5 mm and then at 10 mm resolution to 50 mm depth, and these distributions were analysed using a sorption-diffusion-degradation model. The fine resolution of the depth profile permitted the detection of a maximum in the concentration of the compounds in the pore water near the surface, whereas concentrations in the sediment increased to a maximum at the surface itself. Experimental distribution coefficients determined from the pore water and sediment concentrations indicated a gradient with depth that was partly explained by an increase in the organic matter content and specific surface area of the solids near the interface. The modelling showed that degradation of lindane within the sediment was necessary to explain the concentration profiles, with the optimum agreement between the measured and theoretical profiles obtained with differential degradation in the oxic and anoxic zones. The compounds penetrated to a depth of 40-50 mm over a period of 42 days. © 2004 Society of Chemical Industry.
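A generic sorption-diffusion-degradation profile model of the kind referred to above can be sketched in a few lines (Python, explicit finite differences). The coefficients are placeholders, not the fitted values from this study; the only study-specific choices echoed here are the 1 mm layering, the 42-day duration and faster degradation in the oxic surface layer.

```python
# Generic 1-D sorption-diffusion-degradation sketch for pore-water concentration
# in a bed sediment. Placeholder coefficients; explicit time stepping.
import numpy as np

nz, dz = 50, 1e-3                 # 50 layers of 1 mm
D = 5e-10                         # effective diffusion coefficient, m2/s
Kd = 5e-3                         # linear sorption coefficient, m3/kg
rho_b, phi = 1200.0, 0.7          # bulk density kg/m3, porosity
R = 1.0 + rho_b * Kd / phi        # retardation factor from linear sorption
k = np.where(np.arange(nz) < 5, 2e-7, 2e-8)  # faster degradation in the oxic top 5 mm, 1/s
C = np.zeros(nz)                  # pore-water concentration profile
C_w = 1.0                         # overlying-water concentration (held constant here)
dt = 0.25 * dz**2 * R / D         # stable explicit time step
for _ in range(int(42 * 86400 / dt)):            # 42 days, as in the experiments
    Cg = np.concatenate(([C_w], C, [C[-1]]))     # top boundary = water; bottom = no-flux
    lap = (Cg[2:] - 2 * Cg[1:-1] + Cg[:-2]) / dz**2
    C = C + dt * (D * lap / R - k * C)
print(C[:10].round(3))            # near-surface pore-water profile
```

With faster degradation near the surface, such a model can reproduce a pore-water maximum just below the sediment-water interface, which is the qualitative feature the fine-resolution sampling revealed.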
Abstract:
Laboratory determined mineral weathering rates need to be normalised to allow their extrapolation to natural systems. The principal normalisation terms used in the literature are mass, and geometric and BET specific surface area (SSA). The purpose of this study was to determine how dissolution rates normalised to these terms vary with grain size. Different size fractions of anorthite and biotite, ranging from 180-150 to 20-10 µm, were dissolved in pH 3 HCl at 25 °C in flow-through reactors under far from equilibrium conditions. Steady state dissolution rates after 5376 h (anorthite) and 4992 h (biotite) were calculated from Si concentrations and were normalised to initial and final mass and geometric, geometric edge (biotite) and BET SSA. For anorthite, rates normalised to initial and final BET SSA ranged from 0.33 to 2.77 × 10⁻¹⁰ mol(feldspar) m⁻² s⁻¹, rates normalised to initial and final geometric SSA ranged from 5.74 to 8.88 × 10⁻¹⁰ mol(feldspar) m⁻² s⁻¹ and rates normalised to initial and final mass ranged from 0.11 to 1.65 mol(feldspar) g⁻¹ s⁻¹. For biotite, rates normalised to initial and final BET SSA ranged from 1.02 to 2.03 × 10⁻¹² mol(biotite) m⁻² s⁻¹, rates normalised to initial and final geometric SSA ranged from 3.26 to 16.21 × 10⁻¹² mol(biotite) m⁻² s⁻¹, rates normalised to initial and final geometric edge SSA ranged from 59.46 to 111.32 × 10⁻¹² mol(biotite) m⁻² s⁻¹ and rates normalised to initial and final mass ranged from 0.81 to 6.93 × 10⁻¹² mol(biotite) g⁻¹ s⁻¹. For all normalising terms, rates varied significantly (p ≤ 0.05) with grain size. The normalising terms which gave the least variation in dissolution rate between grain sizes for anorthite were initial BET SSA and initial and final geometric SSA. This is consistent with: (1) dissolution being dominated by the slower dissolving but area dominant non-etched surfaces of the grains and (2) the walls of etch pits and other dissolution features being relatively unreactive. These steady state normalised dissolution rates are likely to be constant with time. Normalisation to final BET SSA did not give constant rates across grain sizes due to a non-uniform distribution of dissolution features. After dissolution, coarser grains had a greater density of dissolution features with BET-measurable but unreactive wall surface area than the finer grains. The normalising term which gave the least variation in dissolution rates between grain sizes for biotite was initial BET SSA. Initial and final geometric edge SSA and final BET SSA gave the next least varied rates. The basal surfaces dissolved sufficiently rapidly to influence the bulk dissolution rate and prevent geometric edge SSA normalised dissolution rates from showing the least variation. Simple modelling indicated that biotite grain edges dissolved 71-132 times faster than basal surfaces. In this experiment, initial BET SSA best integrated the different areas and reactivities of the edge and basal surfaces of biotite. Steady state dissolution rates are likely to vary with time as dissolution alters the ratio of edge to basal surface area. Therefore they would be more properly termed pseudo-steady state rates, only appearing constant because the time period over which they were measured (1512 h) was less than the time period over which they would change significantly. © 2006 Elsevier Inc. All rights reserved.
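For readers unfamiliar with the normalisation step, the sketch below shows the standard way a steady-state, mass- or surface-area-normalised rate is obtained from flow-through reactor data (Python). The numbers are round placeholders, not the paper's measurements.

```python
# Standard flow-through normalisation of a steady-state dissolution rate
# (placeholder numbers for illustration only).
q = 1.0e-3 / 3600        # flow rate through the reactor, L/s (1 mL/h here)
Si_out = 2.0e-6          # steady-state outlet Si concentration, mol/L
nu_Si = 2.0              # mol Si released per mol anorthite (CaAl2Si2O8)
m = 1.5                  # mineral mass in the reactor, g
ssa = 0.1                # specific surface area (BET or geometric), m2/g

r_mass = q * Si_out / (nu_Si * m)   # mol(mineral) g^-1 s^-1
r_ssa = r_mass / ssa                # mol(mineral) m^-2 s^-1
print(f"{r_mass:.2e} mol/g/s, {r_ssa:.2e} mol/m2/s")
```

Whether the initial or final mass and SSA are used in the denominator is exactly the choice the study evaluates, since both change during dissolution.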
Abstract:
A new model, RothPC-1, is described for the turnover of organic C in the top metre of soil. RothPC-1 is a version of RothC-26.3, an earlier model for the turnover of C in topsoils. In RothPC-1 two extra parameters are used to model turnover in the top metre of soil: one, p, which moves organic C down the profile by an advective process, and the other, s, which slows decomposition with depth. RothPC-1 is parameterized and tested using measurements (described in Part 1, this issue) of total organic C and radiocarbon on soil profiles from the Rothamsted long-term field experiments, collected over a period of more than 100 years. RothPC-1 gives fits to measurements of organic C and radiocarbon in the 0-23, 23-46, 46-69 and 69-92 cm layers of soil that are almost all within (or close to) measurement error in two areas of regenerating woodland (Geescroft and Broadbalk Wildernesses) and an area of cultivated land from the Broadbalk Continuous Wheat Experiment. The fits to old grassland (the Park Grass Experiment) are less close. Two other sites that provide the requisite pre- and post-bomb data are also fitted; a prairie Chernozem from Russia and an annual grassland from California. Roth-PC-1 gives a close fit to measurements of organic C and radiocarbon down the Chernozem profile, provided that allowance is made for soil age; with the annual grassland the fit is acceptable in the upper part of the profile, but not in the clay-rich Bt horizon below. Calculations suggest that treating the top metre of soil as a homogeneous unit will greatly overestimate the effects of global warming in accelerating the decomposition of soil C and hence on the enhanced release of CO2 from soil organic matter; more realistic estimates will be obtained from multi-layer models such as RothPC-1.
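The roles of p and s can be illustrated with a deliberately simplified, single-pool-per-layer sketch (Python). This is not RothPC-1 itself, which retains the full RothC-26.3 pool structure; the rates, stocks and inputs below are placeholders.

```python
# Toy multi-layer carbon turnover in the spirit described above: a single pool per
# 23 cm layer, decomposition slowed with depth (s) and a fraction p of each layer's
# C moved down one layer per year. Not RothPC-1 itself; placeholder values.
layers = 4                   # 0-23, 23-46, 46-69, 69-92 cm
k = 0.03                     # topsoil decomposition rate, 1/yr (placeholder)
p, s = 0.05, 0.9             # advection fraction per year; per-layer slowing factor
inputs = 3.0                 # plant C input to the top layer, t C/ha/yr (placeholder)
C = [20.0, 12.0, 8.0, 5.0]   # initial C stocks, t C/ha (placeholders)

for year in range(100):
    decomposed = [k * (s ** i) * C[i] for i in range(layers)]   # slower at depth
    C = [C[i] - decomposed[i] for i in range(layers)]
    moved = [p * C[i] for i in range(layers)]
    C = [C[i] - moved[i] + (moved[i - 1] if i > 0 else inputs) for i in range(layers)]
    # C moved out of the bottom layer (moved[-1]) is lost below 92 cm
print([round(c, 1) for c in C])
```

Because decomposition at depth is slowed, warming a whole-profile model as if it were topsoil overstates the extra CO2 release, which is the point made at the end of the abstract.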
Abstract:
Northern hemisphere snow water equivalent (SWE) distributions from remote sensing (SSM/I), the ERA40 reanalysis product and the HadCM3 general circulation model are compared. Large differences are seen in the February climatologies, particularly over Siberia. The SSM/I retrieval algorithm may be overestimating SWE in this region, while comparison with independent runoff estimates suggests that HadCM3 is underestimating SWE. Treatment of snow grain size and vegetation parameterizations are concerns with the remotely sensed data. For this reason, ERA40 is used as 'truth' for the following experiments. Despite the climatology differences, HadCM3 is able to reproduce the distribution of ERA40 SWE anomalies when assimilating ERA40 anomaly fields of temperature, sea level pressure, atmospheric winds, and ocean temperature and salinity. However, when forecasts are released from these assimilated initial states, the SWE anomaly distribution diverges rapidly from that of ERA40. No predictability is seen from one season to another. Strong links between European SWE distribution and the North Atlantic Oscillation (NAO) are seen, but forecasts of this index by the assimilation scheme are poor. Longer term relationships between SWE and the NAO, and between SWE and the El Niño-Southern Oscillation (ENSO), are also investigated in a multi-century run of HadCM3. SWE is impacted by ENSO in the Himalayas and North America, while the NAO affects SWE in North America and Europe. While significant connections with the NAO index were only present in DJF (and to an extent SON), the link between ENSO and February SWE distribution was seen to exist from the previous JJA ENSO index onwards. This represents a long lead time for SWE prediction for hydrological applications such as flood and wildfire forecasting. Further work is required to develop reliable large scale observation-based SWE datasets with which to test these model-derived connections.
Abstract:
A model was devised to describe simultaneously the grain masses of water and dry matter against thermal time during grain filling and maturation of winter wheat. The model accounted for a linear increase in water mass, of duration anthesis to m1 (the end of the rapid water assimilation phase) and rate a, followed by a more stable water mass until m2, after which water mass declined rapidly at rate e. Grain dry matter was described as a linear increase of rate bgf until a maximum size (maxgf) was attained at m2. The model was fitted to plot data from weekly samples of grains taken from replicated field experiments investigating the effects of grain position (apical or medial), fungicide (five contrasting treatments), sowing date (early or late), cultivar (Malacca or Shamrock) and season (2001/2002 and 2002/2003) on grain filling. The model accounted for between 83 and 99% of the variation (R²) when fitted to data from individual plots, and between 97 and 99% when fitted to treatment means. Endosperm cell numbers of grains from early-sown plots in the first season were also counted. Differences in maxgf between grain positions and also between cultivars were mostly the result of effects on bgf and were empirically associated with water mass at m1. Fungicide application controlled S. tritici and powdery mildew infection, delayed flag leaf senescence, increased water mass at m1 (wm1), and also increased m2, bgf and maxgf. Fungicide effects on water mass were detected before fungicide effects on dry matter, but comparison of the effects of individual fungicide treatments showed no evidence that effects on wm1, or on endosperm cell numbers at about m1, were required for fungicide effects on maxgf. © 2005 Elsevier B.V. All rights reserved.
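As a reading aid, the piecewise-linear structure described above can be written out as a short sketch (Python). The breakpoints and rates are illustrative values, not fitted parameters from these experiments, and the timing of the rapid water loss follows our reading of the abstract.

```python
# Piecewise-linear sketch of the grain filling model as read from the abstract:
# water mass rises at rate a until m1, stays roughly constant, then falls at rate e;
# dry matter rises at rate bgf until it reaches maxgf at m2. Illustrative values only.
def water_mass(tt, a, m1, m2, e):
    """tt: thermal time after anthesis (degree-days)."""
    if tt <= m1:
        return a * tt
    if tt <= m2:
        return a * m1
    return max(a * m1 - e * (tt - m2), 0.0)

def dry_matter(tt, bgf, m2, maxgf):
    return min(bgf * tt, maxgf) if tt <= m2 else maxgf

# e.g. a grain reaching maxgf = 45 mg at m2 = 640 degree-days, filling at 0.0703 mg per degree-day
for tt in (100, 400, 700, 900):
    print(tt,
          round(water_mass(tt, a=0.09, m1=350, m2=640, e=0.1), 1),
          round(dry_matter(tt, bgf=0.0703, m2=640, maxgf=45.0), 1))
```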
Abstract:
The rate and scale of human-driven changes can exert profound impacts on ecosystems, the species that make them up and the services they provide that sustain humanity. Given the speed at which these changes are occurring, one of society's major challenges is to coexist within ecosystems and to manage ecosystem services in a sustainable way. The effect of possible scenarios of global change on ecosystem services can be explored using ecosystem models. Such models should adequately represent ecosystem processes above and below the soil surface (aboveground and belowground) and the interactions between them. We explore possibilities for including such interactions in ecosystem models at scales that range from global to local. At the regional to global scale we suggest expanding the plant functional type concept (aggregating plants into groups according to their physiological attributes) to include functional types of aboveground-belowground interactions. At the scale of discrete plant communities, process-based and organism-oriented models could be combined into "hybrid approaches" that include an organism-oriented mechanistic representation of a limited number of trophic interactions in an otherwise process-oriented approach. Under global change the density and activity of the organisms determining the processes may change non-linearly, and therefore explicit knowledge of the organisms and their responses should ideally be included. At the individual plant scale a common organism-based conceptual model of aboveground-belowground interactions has emerged. This conceptual model facilitates the formulation of research questions to guide experiments aiming to identify patterns that are common within, but differ between, ecosystem types and biomes. Such experiments inform modelling approaches at larger scales. Future ecosystem models should better incorporate this evolving knowledge of common patterns of aboveground-belowground interactions. Improved ecosystem models are necessary tools to reduce the uncertainty in the information that assists us in the sustainable management of our environment in a changing world. © 2004 Elsevier GmbH. All rights reserved.
Abstract:
The primary purpose of this study was to model the partitioning of evapotranspiration in a maize-sunflower intercrop at various canopy covers. The Shuttleworth-Wallace (SW) model was extended to intercropping systems to include both crop transpiration and soil evaporation, allowing interaction between the two. To test the accuracy of the extended SW model, two field experiments with a maize-sunflower intercrop were conducted in 1998 and 1999. Plant transpiration and soil evaporation were measured using sap flow gauges and lysimeters, respectively. The mean prediction error (simulated minus measured values) for transpiration was zero, indicating no overall bias in the estimation error, and its accuracy was not affected by the plant growth stages, but simulated transpiration during periods of high measured transpiration rates tended to be slightly underestimated. Overall, the predictions of daily soil evaporation were also accurate. Model estimation errors were probably due to the simplified modelling of soil water content, stomatal resistances and soil heat flux, as well as to uncertainties in characterising the micrometeorological conditions. The SW model's prediction of transpiration was most sensitive to the parameters most directly related to the canopy characteristics, such as the partitioning of captured solar radiation, canopy resistance and bulk boundary layer resistance.
Abstract:
Health care providers, purchasers and policy makers need to make informed decisions regarding the provision of cost-effective care. When a new health care intervention is to be compared with the current standard, an economic evaluation alongside an evaluation of health benefits provides useful information for the decision making process. We consider the information on cost-effectiveness which arises from an individual clinical trial comparing the two interventions. Recent methods for conducting a cost-effectiveness analysis for a clinical trial have focused on the net benefit parameter. The net benefit parameter, a function of costs and health benefits, is positive if the new intervention is cost-effective compared with the standard. In this paper we describe frequentist and Bayesian approaches to cost-effectiveness analysis which have been suggested in the literature and apply them to data from a clinical trial comparing laparoscopic surgery with open mesh surgery for the repair of inguinal hernias. We extend the Bayesian model to allow the total cost to be divided into a number of different components. The advantages and disadvantages of the different approaches are discussed. In January 2001, NICE issued guidance on the type of surgery to be used for inguinal hernia repair. We discuss our example in the light of this information. Copyright © 2003 John Wiley & Sons, Ltd.
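The net benefit parameter referred to above has a standard form: for a willingness to pay λ per unit of health benefit, net benefit is λ × effect − cost, and the new intervention is cost-effective if the incremental net benefit is positive. A minimal sketch (Python) with made-up numbers, not the hernia trial's data:

```python
# Incremental net benefit in its standard form. All numbers are illustrative,
# not the laparoscopic vs open mesh trial's results.
lam = 20000.0                                               # willingness to pay per QALY (assumed)
mean_effect = {"laparoscopic": 0.82, "open_mesh": 0.80}     # mean QALYs (illustrative)
mean_cost = {"laparoscopic": 1150.0, "open_mesh": 950.0}    # mean costs (illustrative)

inb = lam * (mean_effect["laparoscopic"] - mean_effect["open_mesh"]) \
      - (mean_cost["laparoscopic"] - mean_cost["open_mesh"])
print(f"incremental net benefit at lambda={lam:.0f}: {inb:.0f}")  # > 0 => cost-effective
```

In a trial-based analysis this quantity is estimated with uncertainty, either through a frequentist confidence interval or a Bayesian posterior, and the paper's extension partitions the cost term into its components.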