964 results for Uncertainty analysis
Abstract:
An uncertainty propagation methodology based on the Monte Carlo method is applied to PWR nuclear design analysis to assess the impact of nuclear data uncertainties. The importance of the nuclear data uncertainties for 235U, 238U and 239Pu, and of the thermal scattering library for hydrogen in water, is analyzed. The resulting uncertainty estimates are compared with the design and acceptance criteria to confirm the adequacy of the bounding estimates used in safety margins.
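The propagation scheme described here can be illustrated with a minimal sketch, assuming a toy surrogate for the core response and an invented covariance matrix for three nuclear-data parameters; nothing below reproduces the study's actual models or data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical relative covariance matrix for three nuclear-data parameters
# (e.g. 235U fission, 238U capture, H-in-H2O scattering); values are illustrative only.
rel_cov = np.array([[0.0004, 0.0001, 0.0000],
                    [0.0001, 0.0009, 0.0001],
                    [0.0000, 0.0001, 0.0016]])
nominal = np.ones(3)  # nominal data, normalised to 1

def k_eff_surrogate(x):
    """Toy surrogate for the core response; stands in for a full neutronics run."""
    s235, c238, sH2O = x
    return 1.000 + 0.35 * (s235 - 1.0) - 0.22 * (c238 - 1.0) + 0.10 * (sH2O - 1.0)

# Monte Carlo propagation: sample perturbed data sets, evaluate the response.
samples = rng.multivariate_normal(nominal, rel_cov, size=10_000)
k = np.apply_along_axis(k_eff_surrogate, 1, samples)

mean, std = k.mean(), k.std(ddof=1)
print(f"k-eff: mean = {mean:.5f}, 1-sigma = {std * 1e5:.0f} pcm")

# Compare the 95th-percentile deviation against a hypothetical design margin.
margin_pcm = 500.0
dev95_pcm = np.percentile(np.abs(k - mean), 95) * 1e5
print(f"95% deviation = {dev95_pcm:.0f} pcm -> within margin: {dev95_pcm < margin_pcm}")
```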
Abstract:
Four European fuel cycle scenarios involving transmutation options (consistent with the PATEROS and CPESFR EU projects) have been addressed from the point of view of resource utilization and economic estimates. The scenarios include: (i) the current fleet using Light Water Reactor (LWR) technology and an open fuel cycle, (ii) full replacement of the initial fleet with Fast Reactors (FR) burning U-Pu MOX fuel, (iii) a closed fuel cycle with Minor Actinide (MA) transmutation in a fraction of the FR fleet, and (iv) a closed fuel cycle with MA transmutation in dedicated Accelerator Driven Systems (ADS). All scenarios consider an intermediate period of GEN-III+ LWR deployment and extend over 200 years, seeking long-term equilibrium in the mass flows. The simulations were made using the TR_EVOL code, capable of assessing the management of the nuclear mass streams in each scenario as well as the economics, for the estimation of the levelized cost of electricity (LCOE) and other costs. Results reveal that all scenarios are feasible with respect to nuclear resource demand (natural and depleted U, and Pu). Additionally, we found, as expected, that the FR scenario considerably reduces the Pu inventory in repositories compared to the reference scenario. The elimination of the LWR MA legacy requires at most a 55% fraction (i.e., a peak value of 44 FR units) of the FR fleet dedicated to transmutation (MA in MOX fuel, homogeneous transmutation) or an average of 28 ADS plants (i.e., a peak value of 51 ADS units). Regarding the economic analysis, the main usefulness of the economic results provided is for relative comparison of scenarios and a breakdown of LCOE contributors rather than for absolute values, as technological readiness levels are low for most of the advanced fuel cycle stages. The estimates obtained show an increase of LCOE, averaged over the whole period, with respect to the reference open cycle scenario of 20% for the Pu management scenario and around 35% for both transmutation scenarios. The main contribution to LCOE is the capital cost of new facilities, quantified between 60% and 69% depending on the scenario. An uncertainty analysis around assumed low and high values for the processes and technologies is also provided.
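For orientation, the LCOE compared across these scenarios follows the standard levelised-cost definition (discounted lifetime costs over discounted generation); the sketch below illustrates that definition with invented cost and generation series, not TR_EVOL output:

```python
import numpy as np

def lcoe(capital, o_and_m, fuel_cycle, electricity_mwh, discount_rate):
    """Levelised cost of electricity: discounted costs over discounted generation.

    All inputs are per-year sequences of equal length (year 0 = start of the scenario).
    """
    years = np.arange(len(electricity_mwh))
    disc = (1.0 + discount_rate) ** -years
    total_cost = (np.asarray(capital) + np.asarray(o_and_m) + np.asarray(fuel_cycle)) * disc
    total_mwh = np.asarray(electricity_mwh) * disc
    return total_cost.sum() / total_mwh.sum()

# Hypothetical 5-year example (EUR and MWh); the real scenarios span 200 years.
capital = [4.0e9, 0.0, 0.0, 0.0, 0.0]
o_and_m = [0.0, 1.5e8, 1.5e8, 1.5e8, 1.5e8]
fuel    = [0.0, 6.0e7, 6.0e7, 6.0e7, 6.0e7]
mwh     = [0.0, 8.0e6, 8.0e6, 8.0e6, 8.0e6]

print(f"LCOE = {lcoe(capital, o_and_m, fuel, mwh, 0.05):.1f} EUR/MWh")
```

An uncertainty analysis of the kind mentioned in the abstract would then repeat this calculation with low and high unit-cost assumptions for each fuel cycle stage.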
Abstract:
1. Management decisions regarding invasive plants often have to be made quickly and in the face of fragmentary knowledge of their population dynamics. However, recommendations are commonly made on the basis of only a restricted set of parameters. Without addressing uncertainty and variability in model parameters we risk ineffective management, resulting in wasted resources and an escalating problem if early chances to control spread are missed. 2. Using available data for Pinus nigra in ungrazed and grazed grassland and shrubland in New Zealand, we parameterized a stage-structured spread model to calculate invasion wave speed, population growth rate and their sensitivities and elasticities to population parameters. Uncertainty distributions of parameters were used with the model to generate confidence intervals (CI) about the model predictions. 3. Ungrazed grassland environments were most vulnerable to invasion and the highest elasticities and sensitivities of invasion speed were to long-distance dispersal parameters. However, there was overlap between the elasticity and sensitivity CI on juvenile survival, seedling establishment and long-distance dispersal parameters, indicating overlap in their effects on invasion speed. 4. While elasticity of invasion speed to long-distance dispersal was highest in shrubland environments, there was overlap with the CI of elasticity to juvenile survival. In shrubland invasion speed was most sensitive to the probability of establishment, especially when establishment was low. In the grazed environment elasticity and sensitivity of invasion speed to the severity of grazing were consistently highest. Management recommendations based on elasticities and sensitivities depend on the vulnerability of the habitat. 5. Synthesis and applications. Despite considerable uncertainty in demography and dispersal, robust management recommendations emerged from the model. Proportional or absolute reductions in long-distance dispersal, juvenile survival and seedling establishment parameters have the potential to reduce wave speed substantially. Plantations of wind-dispersed invasive conifers should not be sited on exposed sites vulnerable to long-distance dispersal events, and trees in these sites should be removed. Invasion speed can also be reduced by removing seedlings, establishing competitive shrubs and grazing. Incorporating uncertainty into the modelling process increases our confidence in the wide applicability of the management strategies recommended here.
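The growth-rate sensitivities and elasticities referred to above follow standard matrix population model theory; a minimal sketch with a hypothetical three-stage matrix (the wave-speed calculation and the fitted Pinus nigra parameters are not reproduced here) is:

```python
import numpy as np

# Hypothetical stage-structured projection matrix (seedling, juvenile, adult);
# entries are illustrative, not the fitted Pinus nigra parameters.
A = np.array([[0.00, 0.00, 25.0],   # fecundity into the seedling stage
              [0.05, 0.60, 0.00],   # establishment / juvenile survival
              [0.00, 0.15, 0.95]])  # maturation / adult survival

# Population growth rate: dominant eigenvalue of A.
wvals, wvecs = np.linalg.eig(A)
vvals, vvecs = np.linalg.eig(A.T)
iw, iv = np.argmax(wvals.real), np.argmax(vvals.real)
lam = wvals.real[iw]
w = np.abs(wvecs[:, iw].real)   # stable stage distribution (right eigenvector)
v = np.abs(vvecs[:, iv].real)   # reproductive values (left eigenvector)

# Sensitivities s_ij = v_i * w_j / <v, w>; elasticities e_ij = (a_ij / lambda) * s_ij.
S = np.outer(v, w) / (v @ w)
E = (A / lam) * S

print(f"lambda = {lam:.3f}")
print("elasticities:\n", np.round(E, 3))
```

Sampling the matrix entries from their uncertainty distributions and repeating this calculation gives the confidence intervals on the sensitivities and elasticities described in the abstract.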
Abstract:
Increasing environmental awareness has been a significant driving force for innovations and process improvements in different sectors, and the field of chemistry is no exception. Innovating around industrial chemical processes in line with current environmental responsibilities is, however, no mean feat. One such hard-to-overhaul process is the production of methyl methacrylate (MMA), commonly produced via the acetone cyanohydrin (ACH) process developed back in the 1930s. Different alternatives to the ACH process have emerged over the years, and the Lucite Alpha process has been particularly promising, with a combined plant capacity of 370,000 metric tonnes in Singapore and Saudi Arabia. This study applied Life Cycle Assessment methodology to conduct a comparative analysis between the ACH and Lucite processes, with the aim of ascertaining the effect of applying principles of green chemistry, as a process improvement tool, on overall environmental impacts. A further comparison was made between the Lucite process and a lab-scale process that is a further improvement on the former, also based on green chemistry principles. Results showed that the Lucite process has higher impacts on resource scarcity and ecosystem health, whereas the ACH process has higher impacts on human health. On the other hand, compared to the Lucite process, the lab-scale process has higher impacts in both the ecosystem and human health categories, with lower impacts only in the resource scarcity category. It was observed that the benefits of process improvements based on green chemistry principles might not be apparent in some categories due to limitations of the methodology. A process contribution analysis was also performed, revealing that the contribution of energy is significant; therefore, a sensitivity analysis with different energy scenarios was performed. An uncertainty analysis using Monte Carlo simulation was also performed to check the consistency of the results in each of the comparisons.
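The Monte Carlo consistency check mentioned at the end can be sketched as a paired comparison of two alternatives under lognormal uncertainty; the medians and spreads below are placeholders rather than results from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000

def lognormal_sample(median, gsd, size):
    """Lognormal sample defined by its median and geometric standard deviation."""
    return rng.lognormal(mean=np.log(median), sigma=np.log(gsd), size=size)

# Hypothetical characterised scores for the same functional unit, with uncertainty.
ach_hh    = lognormal_sample(median=1.00, gsd=1.4, size=N)   # ACH route, human health
lucite_hh = lognormal_sample(median=0.80, gsd=1.4, size=N)   # Lucite route, human health

# Paired Monte Carlo comparison: fraction of runs in which ACH scores worse.
frac = np.mean(ach_hh > lucite_hh)
print(f"ACH > Lucite (human health) in {frac:.0%} of runs")
```

In practice the two alternatives would share correlated background processes, so the samples are drawn jointly rather than independently as in this toy sketch.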
Abstract:
This work assessed the environmental impacts of the production and use of 1 MJ of hydrous ethanol (E100) in Brazil in prospective scenarios (2020-2030), considering the deployment of technologies currently under development and better agricultural practices. The life cycle assessment technique was employed, using the CML method for the life cycle impact assessment and the Monte Carlo method for the uncertainty analysis. Abiotic depletion, global warming, human toxicity, ecotoxicity, photochemical oxidation, acidification, and eutrophication were the environmental impact categories analyzed. Results indicate that the proposed improvements (especially no-till farming, scenarios s2 and s4) would lead to environmental benefits in prospective scenarios compared to the current ethanol production (scenario s0). Combined first and second generation ethanol production (scenarios s3 and s4) would require less agricultural land but would not perform better than the projected first generation ethanol, although the uncertainties are relatively high. The best use of 1 ha of sugar cane was also assessed, considering the displacement of the conventional products by ethanol and electricity. No-till practices combined with the production of first generation ethanol and electricity (scenario s2) would lead to the largest mitigation effects for global warming and abiotic depletion. For the remaining categories, emissions would not be mitigated by the utilization of the sugar cane products. However, this conclusion is sensitive to the displaced electricity sources.
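For illustration, a CML-style category indicator is the sum of inventory flows multiplied by characterisation factors; the toy sketch below applies Monte Carlo to invented inventory amounts for the global warming category (the characterisation factors are approximate IPCC values, and none of the numbers come from the study):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10_000

# Toy inventory per 1 MJ of ethanol: (flow, mean amount in kg, relative std dev).
inventory = [("CO2, fossil", 1.2e-2, 0.10),
             ("CH4",         3.0e-5, 0.25),
             ("N2O",         1.5e-6, 0.40)]

# Approximate 100-year global warming characterisation factors (kg CO2-eq per kg).
gwp = {"CO2, fossil": 1.0, "CH4": 25.0, "N2O": 298.0}

# Monte Carlo on the inventory amounts; characterisation is the weighted sum.
score = np.zeros(N)
for flow, mean, rel_sd in inventory:
    amounts = np.clip(rng.normal(mean, rel_sd * mean, size=N), 0.0, None)
    score += gwp[flow] * amounts

p05, p95 = np.percentile(score, [5, 95])
print(f"GWP per MJ: mean {score.mean():.2e} kg CO2-eq (90% interval {p05:.2e}-{p95:.2e})")
```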
Abstract:
Master's degree in Radiation Applied to Health Technologies. Specialization area: Radiation Protection.
Abstract:
In this Master's thesis, a flow measurement rig for the gas nozzles of a gas turbine was designed and built. Uneven nozzle performance increases the exhaust temperature spread of the gas turbine. Based on the flow measurements, the effective flow cross-sectional area of the nozzles can be determined. The installation order of the nozzles is then optimized according to the area differences between nozzles, so that the fuel flow to the combustion chambers is as uniform as possible and the exhaust temperature spread decreases. The presentation of the MS6001 gas turbine concentrated on its most important components and on the parts relevant to fuel nozzle testing and their operation. The theory section examined the equations used to calculate volumetric flow and nozzle flow. The design and construction of the measurement rig formed the largest part of this work. The key parts of the rig are the throttling device and the nozzle test section. An orifice plate with an annular chamber was chosen as the throttling device and was designed according to the standard SFS-EN ISO 5167:2003. The results given by the equations in the standard were compared with results computed with a numerical flow (CFD) model. The same standard and numerical flow modelling were applied when designing the measurements of the nozzle bodies and nozzle tips, in order to find the optimal location for the pressure tapping. Particular attention was paid to evaluating the uncertainties arising in the measurements. In the experimental part, the flow through one refurbished nozzle, one used nozzle and one nozzle body was measured. Effective areas were calculated from the results and compared with the areas reported by the turbine manufacturer. Finally, the functionality of the rig was assessed on the basis of the measurement results. Based on the error assessment and the measurement results, technical improvement proposals were drawn up to ensure reliable operation of the nozzle test rig.
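For reference, the orifice-plate relation in ISO 5167 and a simplified effective-area estimate can be sketched as follows; the discharge coefficient is a fixed assumed value rather than the standard's full correlation, and the fluid data are placeholders:

```python
import math

def orifice_mass_flow(C, beta, eps, d, dp, rho1):
    """ISO 5167 orifice-plate equation:
    qm = C / sqrt(1 - beta^4) * eps * (pi/4) * d^2 * sqrt(2 * dp * rho1)
    """
    return (C / math.sqrt(1.0 - beta**4) * eps
            * (math.pi / 4.0) * d**2 * math.sqrt(2.0 * dp * rho1))

def effective_area(qm, dp, rho):
    """Simplified (incompressible) effective flow area: A_eff = qm / sqrt(2*rho*dp)."""
    return qm / math.sqrt(2.0 * rho * dp)

# Placeholder values: 100 mm orifice in a 200 mm pipe, air at roughly ambient conditions.
qm = orifice_mass_flow(C=0.61, beta=0.5, eps=0.99, d=0.100, dp=5_000.0, rho1=1.2)
print(f"orifice mass flow qm = {qm:.3f} kg/s")
print(f"nozzle A_eff (at 20 kPa) = {effective_area(qm, dp=20_000.0, rho=1.2) * 1e6:.0f} mm^2")
```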
Abstract:
This Master's thesis examines the uncertainties of the Level 2 probabilistic risk analysis of the Loviisa nuclear power plant. Level 2 risk studies examine nuclear power plant accidents in which part of the radioactive material in the reactor is released into the environment. The main result of these studies is the annual frequency of a large release, which is essentially a statistical expected value based on actual plant operating history. The credibility of this expected value can be improved by taking the most significant uncertainties related to the calculation into account. Uncertainties in the calculation arise, among other things, from severe reactor accident phenomena, safety system components, human actions, and undefined parts of the reliability model. The thesis describes how uncertainty analyses are integrated into the probabilistic risk analyses of the Loviisa plant. This is implemented with the auxiliary programs PRALA and PRATU, developed in this thesis, which make it possible to add uncertainty parameters derived from plant operating history to the reliability data of the risk analyses. In addition, as a calculation example, a confidence interval describing the variation of the annual large release frequency of the Loviisa plant has been computed. This example is based mainly on conservative uncertainty estimates rather than on actual statistical uncertainties. Based on the results of the example, the large release frequency of Loviisa has a wide range of variation; an error factor of 8.4 was obtained with the current uncertainty parameters. The confidence interval of the large release frequency can, however, be narrowed in the future when uncertainty parameters based on actual plant operating history are used.
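The quoted error factor summarises the spread of the large-release-frequency distribution; a minimal sketch of this kind of propagation, using a toy two-cut-set model with lognormal basic events (all values invented, and the error-factor convention p95/p50 assumed), is:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 100_000

def lognormal_from_median_ef(median, ef, size):
    """Lognormal sample parameterised by median and error factor EF = p95 / p50,
    so sigma = ln(EF) / 1.645."""
    return rng.lognormal(mean=np.log(median), sigma=np.log(ef) / 1.645, size=size)

# Toy model: large release frequency = f1*p1 + f2*p2 (two minimal cut sets).
f1 = lognormal_from_median_ef(1e-4, 3.0, N)   # initiating event frequency [1/a]
p1 = lognormal_from_median_ef(5e-3, 5.0, N)   # conditional failure probability
f2 = lognormal_from_median_ef(2e-5, 3.0, N)
p2 = lognormal_from_median_ef(2e-2, 10.0, N)

lrf = f1 * p1 + f2 * p2
p05, p50, p95 = np.percentile(lrf, [5, 50, 95])
print(f"LRF median = {p50:.2e} 1/a, 90% interval = [{p05:.2e}, {p95:.2e}]")
print(f"error factor (p95/p50) = {p95 / p50:.1f}")
```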
Abstract:
Physiologically based pharmacokinetic (PBPK) models simulate the internal dose of chemical substances on the basis of species-specific and substance-specific parameters. Existing quantitative structure-property relationship (QSPR) models can estimate the substance-specific parameters (partition coefficients (PCs) and metabolic constants), but their applicability is limited by their lack of consideration of the variability of their input parameters and by their restricted applicability domain (i.e., substances containing CH3, CH2, CH, C, C=C, H, Cl, F, Br, a benzene ring and H on the benzene ring). The objective of this study was to develop new knowledge and tools to broaden the applicability domain of QSPR-PBPK models for predicting the toxicokinetics of inhaled organic substances in humans. First, a unified mechanistic algorithm was developed from existing models to predict the PCs of 142 drugs and environmental pollutants at the macro (tissue and blood) and micro (cell and biological fluids) levels from tissue and blood composition and physicochemical properties. The resulting algorithm was applied to predict the tissue:blood, tissue:plasma and tissue:air PCs of rat muscle (n = 174), liver (n = 139) and adipose tissue (n = 141) for acidic, basic and neutral drugs as well as for ketones, acetate esters, ethers, alcohols, and aliphatic and aromatic hydrocarbons. A quantitative property-property relationship (QPPR) model was developed for the in vivo intrinsic clearance (CLint, computed as the ratio of Vmax (μmol/h/kg rat body weight) to Km (μM)) of CYP2E1 substrates (n = 26) as a function of the n-octanol:water PC, the blood:water PC and the ionization potential. The QPPR predictions, represented by the lower and upper bounds of the 95% confidence interval on the mean, were then integrated into a human PBPK model. Subsequently, the PC algorithm and the CLint QPPR were integrated with QSPR models for the hemoglobin:water and oil:air PCs to simulate the pharmacokinetics and cellular dosimetry of inhaled volatile organic compounds (VOCs) (benzene, 1,2-dichloroethane, dichloromethane, m-xylene, toluene, styrene, 1,1,1-trichloroethane and 1,2,4-trimethylbenzene) with a rat PBPK model. Finally, the variability of the tissue and blood composition parameters of the algorithm for rat tissue:air and human blood:air PCs was characterized by Markov chain Monte Carlo (MCMC) simulations. The resulting distributions were used to run Monte Carlo simulations to predict tissue:blood and blood:air PCs. The PC distributions, together with those of the physiological parameters and the cytochrome P450 CYP2E1 content, were incorporated into a PBPK model to characterize the variability of the blood toxicokinetics of four VOCs (benzene, chloroform, styrene and trichloroethylene) by Monte Carlo simulation. Overall, the quantitative approaches implemented for the PCs and CLint in this study allowed the use of generic molecular descriptors, rather than specific molecular fragments, to predict the pharmacokinetics of organic substances in humans.
This study characterized, for the first time, the variability of the biological parameters of the PC algorithms in order to extend the ability of PBPK models to predict population distributions of internal doses of organic substances before testing in animals or humans.
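As a rough illustration of the Monte Carlo step, the sketch below propagates assumed variability in tissue and blood composition through a simplified Poulin/Theil-style tissue:blood partition calculation; the formula is a simplification of the unified algorithm developed in the thesis, and all composition values and spreads are placeholders:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 10_000

def clipped_normal(mean, cv, size):
    """Normal sample with coefficient of variation cv, clipped at zero to avoid
    negative volume fractions."""
    return np.clip(rng.normal(mean, cv * mean, size=size), 0.0, None)

def p_tissue_blood(p_ow, nl_t, ph_t, w_t, nl_b, ph_b, w_b):
    """Simplified Poulin/Theil-style tissue:blood partition coefficient for a
    neutral compound: lipid and water terms in tissue over the same terms in blood."""
    num = p_ow * (nl_t + 0.3 * ph_t) + (w_t + 0.7 * ph_t)
    den = p_ow * (nl_b + 0.3 * ph_b) + (w_b + 0.7 * ph_b)
    return num / den

p_ow = 10 ** 2.13  # approximate log Kow of benzene

# Placeholder fractional contents (neutral lipid, phospholipid, water), assumed 20% CV.
liver = [clipped_normal(m, 0.2, N) for m in (0.035, 0.025, 0.75)]
blood = [clipped_normal(m, 0.2, N) for m in (0.003, 0.002, 0.82)]

pc = p_tissue_blood(p_ow, *liver, *blood)
print(f"liver:blood PC: median {np.median(pc):.1f}, "
      f"90% interval [{np.percentile(pc, 5):.1f}, {np.percentile(pc, 95):.1f}]")
```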
Abstract:
This work presents the sensitivity analysis of a brand-perception and marketing-investment-adjustment model developed at the Simulation Laboratory of the Universidad del Rosario. The degree project consists of an introduction to sensitivity analysis and its complement, uncertainty analysis. Both analyses are then demonstrated on a simple application example of the model, through an exhaustive and rigorous application of the steps described in the first part. This is followed by a discussion of the problem of measuring magnitudes, which proves to be the most complex aspect of applying the model in a practical context, and finally conclusions are drawn from the results of the analyses.
Abstract:
The implementation of the European Directive 91/271/EEC on urban wastewater treatment promoted the construction of new facilities as well as the introduction of new technologies for nutrient removal in areas designated as sensitive. Both the design of these new infrastructures and the redesign of existing ones were carried out using approaches based essentially on economic objectives, owing to the need to complete the works within a relatively short period of time. These studies were based on heuristic knowledge or numerical correlations derived from simplified deterministic models. As a result, many of the resulting wastewater treatment plants (WWTPs) were characterized by a lack of robustness and flexibility, poor controllability, frequent microbiological solids-separation problems in the secondary settler, high operating costs and only partial nutrient removal, keeping them far from optimal operation. Many of these problems arose from inadequate design, which made the scientific community aware of the importance of the early conceptual design stages. Precisely for this reason, traditional design methods must evolve toward more complex evaluation systems that take multiple objectives into account, thereby ensuring better plant performance. Despite the importance of conceptual design under multiple objectives, there is still a significant gap in the scientific literature addressing this research field. The objective of this thesis is to develop a conceptual design method for WWTPs that considers multiple objectives, so that it serves as a decision-support tool for selecting the best alternative among different design options. This research work contributes a modular and evolutionary design method that combines different techniques: hierarchical decision processes, multi-criteria analysis, preliminary multi-objective optimization based on sensitivity analysis, knowledge-extraction and data-mining techniques, multivariate analysis, and uncertainty analysis based on Monte Carlo simulations. This has been achieved by subdividing the design method developed in this thesis into four main blocks: (1) hierarchical generation and multi-criteria analysis of alternatives, (2) analysis of critical decisions, (3) multivariate analysis and (4) uncertainty analysis. The first block combines a hierarchical decision process with multi-criteria analysis. The hierarchical decision process subdivides the conceptual design into a series of questions that are easier to analyze and evaluate, while the multi-criteria analysis allows different objectives to be considered at the same time. In this way the number of alternatives to be evaluated is reduced, and the future design and operation of the plant is influenced by environmental, economic, technical and legal aspects. Finally, this block includes a sensitivity analysis of the weights, which provides information on how the ranking of alternatives varies as the relative importance of the design objectives changes. The second block combines sensitivity analysis, preliminary multi-objective optimization and knowledge-extraction techniques to support the conceptual design of WWTPs, selecting the best alternative once critical decisions have been identified.
Critical decisions are those in which a choice must be made among alternatives that satisfy the design objectives to a similar degree but with different implications for the future structure and operation of the plant. This type of analysis provides a broader view of the design space and makes it possible to identify desirable (or undesirable) directions in which the design process may evolve. The third block of the thesis provides the multivariate analysis of the multi-criteria matrices obtained during the evaluation of the design alternatives. Specifically, the techniques used in this research work include: (1) cluster analysis, (2) principal component analysis/factor analysis and (3) discriminant analysis. As a result, better access to the data is possible when selecting among alternatives, providing more information for a more effective evaluation and ultimately increasing knowledge of the evaluation process for the generated design alternatives. In the fourth and final block developed in this thesis, the different design alternatives are evaluated under uncertainty. The objective of this block is to study how decision making changes when an alternative is evaluated with or without uncertainty in the parameters of the models that describe its behaviour. Uncertainty in the model parameters is introduced through probability distributions. Monte Carlo simulations are then carried out, in which random numbers drawn from these distributions are substituted for the model parameters, making it possible to study how uncertainty propagates through the model. It is thus possible to analyze the variation in the overall fulfilment of the design objectives for each alternative, the contributions of environmental, legal, economic and technical aspects to that variation, and finally the change in the selection of alternatives when the relative importance of the design objectives varies. Compared with traditional design approaches, the method developed in this thesis addresses design/redesign problems taking into account multiple objectives and multiple criteria. At the same time, the decision-making process shows in an objective, transparent and systematic way why one alternative is selected over the others, providing the option that best fulfils the stated objectives, showing its strengths and weaknesses and the main correlations between objectives and alternatives, and finally taking into account the possible uncertainty inherent in the model parameters used during the analyses. The capabilities of the developed method are demonstrated in this thesis through different case studies: selection of the type of biological nitrogen removal (case study #1), optimization of a control strategy (case study #2), redesign of a plant to achieve simultaneous carbon, nitrogen and phosphorus removal (case study #3) and, finally, analysis of plant-wide control strategies (case studies #4 and #5).
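A compact sketch of the fourth block, assuming a simple weighted-sum aggregation of normalised criteria scores (the alternatives, scores, weights and uncertainties below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(11)
N = 5_000

# Design objectives and their relative weights (sum to 1); both invented.
weights = np.array([0.30, 0.30, 0.25, 0.15])   # environmental, economic, technical, legal

# Nominal normalised scores (0-1, higher is better) and assumed standard deviations.
alternatives = {
    "A1": (np.array([0.80, 0.55, 0.70, 0.90]), np.array([0.05, 0.10, 0.05, 0.02])),
    "A2": (np.array([0.65, 0.75, 0.68, 0.90]), np.array([0.08, 0.08, 0.06, 0.02])),
}

wins = dict.fromkeys(alternatives, 0)
for _ in range(N):
    # Sample uncertain scores, clip to [0, 1], aggregate with the weighted sum.
    totals = {name: float(weights @ np.clip(rng.normal(mean, std), 0.0, 1.0))
              for name, (mean, std) in alternatives.items()}
    wins[max(totals, key=totals.get)] += 1

for name, count in wins.items():
    print(f"{name} ranked first in {count / N:.0%} of Monte Carlo runs")
```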
Abstract:
A traditional method of validating the performance of a flood model, when remotely sensed data of the flood extent are available, is to compare the predicted flood extent to that observed. The performance measure employed often uses areal pattern-matching to assess the degree to which the two extents overlap. Recently, remote sensing of flood extents using synthetic aperture radar (SAR) and airborne scanning laser altimetry (LIDAR) has made the synoptic measurement of water surface elevations along flood waterlines more straightforward, and this has emphasised the possibility of using alternative performance measures based on height. This paper considers the advantages that can accrue from using a performance measure based on waterline elevations rather than one based on areal patterns of wet and dry pixels. The two measures were compared for their ability to estimate flood inundation uncertainty maps from a set of model runs carried out to span the acceptable model parameter range in a GLUE-based analysis. A 1 in 5-year flood on the Thames in 1992 was used as a test event. As is typical for UK floods, only a single SAR image of observed flood extent was available for model calibration and validation. A simple implementation of a two-dimensional flood model (LISFLOOD-FP) was used to generate model flood extents for comparison with that observed. The performance measure based on height differences of corresponding points along the observed and modelled waterlines was found to be significantly more sensitive to the channel friction parameter than the measure based on areal patterns of flood extent. The former was able to restrict the parameter range of acceptable model runs and hence reduce the number of runs necessary to generate an inundation uncertainty map. As a result, there was less uncertainty in the final flood risk map. The uncertainty analysis included the effects of uncertainties in the observed flood extent as well as in model parameters. The height-based measure was found to be more sensitive when increased heighting accuracy was achieved by requiring that observed waterline heights varied slowly along the reach. The technique allows for the decomposition of the reach into sections, with different effective channel friction parameters used in different sections, which in this case resulted in lower r.m.s. height differences between observed and modelled waterlines than those achieved by runs using a single friction parameter for the whole reach. However, a validation of the modelled inundation uncertainty using the calibration event showed a significant difference between the uncertainty map and the observed flood extent. While this was true for both measures, the difference was especially significant for the height-based one. This is likely to be due to the conceptually simple flood inundation model and the coarse application resolution employed in this case. The increased sensitivity of the height-based measure may lead to an increased onus being placed on the model developer in the production of a valid model.
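The GLUE-based inundation uncertainty map is obtained by weighting binary wet/dry predictions from behavioural runs; a minimal sketch with a toy height-based likelihood and synthetic stand-in model output (no LISFLOOD-FP data involved) is:

```python
import numpy as np

rng = np.random.default_rng(5)
n_runs, n_waterline_pts, grid = 200, 50, (40, 60)

# Stand-ins for model output: waterline heights and binary flood extents per run.
obs_heights   = rng.normal(10.0, 0.3, n_waterline_pts)
model_heights = obs_heights + rng.normal(0.0, 0.4, (n_runs, n_waterline_pts))
model_extent  = rng.random((n_runs, *grid)) < 0.5        # wet/dry maps (placeholder)

# Height-based likelihood: inverse RMSE of waterline elevations, zero below a threshold.
rmse = np.sqrt(((model_heights - obs_heights) ** 2).mean(axis=1))
likelihood = np.where(rmse < 0.5, 1.0 / rmse, 0.0)        # behavioural threshold: 0.5 m
behavioural = likelihood > 0
weights = likelihood[behavioural] / likelihood[behavioural].sum()

# Probability-of-inundation map: likelihood-weighted average of behavioural extents.
prob_map = np.tensordot(weights, model_extent[behavioural].astype(float), axes=1)
print(f"{behavioural.sum()} behavioural runs; "
      f"max inundation probability = {prob_map.max():.2f}")
```

An areal-pattern measure would simply replace the RMSE line with an overlap score between observed and modelled extents; the rest of the weighting is unchanged.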
Abstract:
Historic geomagnetic activity observations have been used to reveal centennial variations in the open solar flux and the near-Earth heliospheric conditions (the interplanetary magnetic field and the solar wind speed). The various methods are in very good agreement for the past 135 years when there were sufficient reliable magnetic observatories in operation to eliminate problems due to site-specific errors and calibration drifts. This review underlines the physical principles that allow these reconstructions to be made, as well as the details of the various algorithms employed and the results obtained. Discussion is included of: the importance of the averaging timescale; the key differences between “range” and “interdiurnal variability” geomagnetic data; the need to distinguish source field sector structure from heliospherically-imposed field structure; the importance of ensuring that regressions used are statistically robust; and uncertainty analysis. The reconstructions are exceedingly useful as they provide calibration between the in-situ spacecraft measurements from the past five decades and the millennial records of heliospheric behaviour deduced from measured abundances of cosmogenic radionuclides found in terrestrial reservoirs. Continuity of open solar flux, using sunspot number to quantify the emergence rate, is the basis of a number of models that have been very successful in reproducing the variation derived from geomagnetic activity. These models allow us to extend the reconstructions back to before the development of the magnetometer and to cover the Maunder minimum. Allied to the radionuclide data, the models are revealing much about how the Sun and heliosphere behaved outside of grand solar maxima and are providing a means of predicting how solar activity is likely to evolve now that the recent grand maximum (that had prevailed throughout the space age) has come to an end.
Abstract:
The potential impact of climate change on areas of strategic importance for water resources remains a concern. Here, river flow projections are presented for the River Medway above Teston in southeast England, just such an area of strategic importance. The river flow projections use climate inputs from the Hadley Centre Regional Climate Model (HadRM3) for the time period 1960–2080 (a subset of the early-release UKCP09 projections). River flow predictions are calculated using CATCHMOD, the main river flow prediction tool of the Environment Agency (EA) of England and Wales. In order to use this tool in the best way for climate change predictions, model setup and performance are analysed using sensitivity and uncertainty analysis. The model's representation of hydrological processes is discussed, and the direct percolation and first linear storage constant parameters are found to strongly affect model results in a complex way, with the former more important for low flows and the latter for high flows. The uncertainty in predictions resulting from the hydrological model parameters is demonstrated, and the projections of river flow under future climate are analysed. A clear climate change impact signal is evident in the results, with a persistent lowering of mean daily river flows for all months and for all projection time slices. Results indicate that a projection of lower flows under future climate is valid even taking into account the uncertainties considered in this modelling chain exercise. The model parameter uncertainty becomes more significant under future climate as the river flows become lower. This has significant implications for those making policy decisions based on such modelling results. Copyright © 2010 John Wiley & Sons, Ltd.
Abstract:
A global river routing scheme coupled to the ECMWF land surface model is implemented and tested within the framework of the Global Soil Wetness Project II, to evaluate the feasibility of modelling global river runoff at a daily time scale. The exercise is designed to provide the benchmark river runoff predictions needed to verify the land surface model. Ten years of daily runoff produced by the HTESSEL land surface scheme are input into the TRIP2 river routing scheme in order to generate daily river runoff. These are then compared to river runoff observations from the Global Runoff Data Centre (GRDC) in order to evaluate the potential and the limitations. A notable source of inaccuracy is the bias between observed and modelled discharges, which is not primarily due to the modelling system but rather to the forcing and the quality of the observations, and which appears uncorrelated with river catchment size. A global sensitivity analysis and a Generalised Likelihood Uncertainty Estimation (GLUE) uncertainty analysis are applied to the global routing model. The groundwater delay parameter is identified as the most sensitive calibration parameter. Significant uncertainties are found in the results, and those due to the parameterisation of the routing model are quantified. The difficulty involved in parameterising global river discharge models is discussed. Detailed river runoff simulations are shown for the river Danube, which match observed river runoff well at upstream river transects. Results show that although there are errors in runoff predictions, model results are encouraging and certainly indicative of useful runoff predictions, particularly for the purpose of verifying the land surface scheme hydrologically. The potential of this modelling system for future applications such as river runoff forecasting and climate impact studies is highlighted. Copyright © 2009 Royal Meteorological Society.
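A minimal sketch of such a GLUE exercise, using a toy single linear reservoir in place of TRIP2 and entirely synthetic runoff and observations, might look like this:

```python
import numpy as np

rng = np.random.default_rng(9)
n_days, n_runs = 365, 500

# Synthetic daily runoff input and a noisy "observed" discharge routed with k = 8 days.
runoff = np.maximum(rng.normal(2.0, 1.5, n_days), 0.0)

def route(runoff, k):
    """Toy single linear reservoir; k is a storage/delay parameter in days."""
    q, storage = np.zeros_like(runoff), 0.0
    for t, r in enumerate(runoff):
        storage += r
        q[t] = storage / k
        storage -= q[t]
    return q

obs = route(runoff, 8.0) * (1.0 + rng.normal(0.0, 0.1, n_days))

# GLUE: sample the delay parameter, score each run with Nash-Sutcliffe efficiency (NSE).
ks = rng.uniform(1.0, 30.0, n_runs)
sims = np.array([route(runoff, k) for k in ks])
nse = 1.0 - ((sims - obs) ** 2).sum(axis=1) / ((obs - obs.mean()) ** 2).sum()

behavioural = nse > 0.7     # assumed acceptance threshold; a full GLUE analysis would
                            # also weight the retained runs by their likelihood measure
lo, hi = (np.percentile(sims[behavioural], p, axis=0) for p in (5, 95))
print(f"{behavioural.sum()} behavioural runs out of {n_runs}; "
      f"mean width of the 5-95% discharge band = {(hi - lo).mean():.2f}")
```

Plotting the likelihood measure against each sampled parameter (a dotty plot) is the usual way such an analysis identifies the delay parameter as the dominant source of sensitivity.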