948 results for best estimate method
Abstract:
A Work Project, presented as part of the requirements for the Award of a Masters Degree in Finance from the NOVA – School of Business and Economics
Assessment of the Wind Gust Estimate Method in mesoscale modelling of storm events over West Germany
Abstract:
A physically based gust parameterisation is added to the atmospheric mesoscale model FOOT3DK to estimate wind gusts associated with storms over West Germany. The gust parameterisation follows the Wind Gust Estimate (WGE) method and its functionality is verified in this study. The method assumes that gusts occurring at the surface are induced by turbulent eddies in the planetary boundary layer, deflecting air parcels from higher levels down to the surface under suitable conditions. Model simulations are performed with horizontal resolutions of 20 km and 5 km. Ten historical storm events of different characteristics and intensities are chosen in order to include a wide range of typical storms affecting Central Europe. All simulated storms occurred between 1990 and 1998. The accuracy of the method is assessed objectively by validating the simulated wind gusts against data from 16 synoptic stations by means of “quality parameters”. Concerning these parameters, the temporal and spatial evolution of the simulated gusts is well reproduced. Simulated values for low altitude stations agree particularly well with the measured gusts. For orographically exposed locations, the gust speeds are partly underestimated. The absolute maximum gusts lie in most cases within the bounding interval given by the WGE method. Focussing on individual storms, the performance of the method is better for intense and large storms than for weaker ones. Particularly for weaker storms, the gusts are typically overestimated. The results for the sample of ten storms document that the method is generally applicable with the mesoscale model FOOT3DK for mid-latitude winter storms, even in areas with complex orography.
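The deflection criterion described above can be illustrated in a few lines. The profile names, units and the simplified energy comparison below are assumptions for illustration only; the actual WGE parameterisation integrates turbulent and buoyant energy over the model layers of FOOT3DK.

```python
import numpy as np

def wge_gust(z, wind, tke, buoyancy):
    """Simplified sketch of the WGE deflection criterion: an air parcel
    at level k can be brought down to the surface when the mean turbulent
    kinetic energy in the layer below it exceeds the buoyant energy
    opposing the descent. z, wind, tke, buoyancy are profiles on model
    levels (SI units, illustrative)."""
    candidates = [wind[0]]                      # the surface wind itself
    for k in range(1, len(z)):
        mean_tke = float(np.mean(tke[: k + 1]))
        # trapezoidal integral of buoyancy from the surface up to z[k]
        buoyant_energy = float(np.sum(
            0.5 * (buoyancy[1: k + 1] + buoyancy[:k]) * np.diff(z[: k + 1])))
        if mean_tke >= buoyant_energy:          # deflection is possible
            candidates.append(wind[k])
    gust = max(candidates)
    # bounding interval reported by the method: surface wind to PBL maximum
    return gust, (float(wind[0]), float(wind.max()))
```

In a neutral profile (zero buoyancy) every level qualifies, so the estimated gust equals the strongest wind in the boundary layer, i.e. the upper bound of the interval.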
Abstract:
One of the objectives of the European NURISP (NUclear Reactor Integrated Platform) project of the 7th Framework Programme is to advance the simulation of light water reactors by coupling best-estimate codes that deepen the physics of the core, two-phase thermal-hydraulics and fuel behaviour [1]. One of these codes is COBAYA3, a 3D multigroup pin-by-pin diffusion code developed at UPM, which requires libraries of cross sections homogenized at the level of the fuel pin.
Abstract:
Steam Generator Tube Rupture (SGTR) sequences in Pressurized Water Reactors are known to be among the most demanding transients for the operating crew. SGTR sequences are a special kind of transient, as they can lead to radiological releases without core damage or containment failure, since they constitute a direct path from the reactor coolant system to the environment. The first methodology used to perform the Deterministic Safety Analysis (DSA) of a SGTR did not credit operator action for the first 30 min of the transient, assuming that the operating crew was able to stop the primary-to-secondary leakage within that period of time. However, the real SGTR accidents that have occurred in the USA and elsewhere demonstrated that operators usually take more than 30 min to stop the leakage in actual sequences. Several methodologies were proposed to overcome that limitation, crediting operator actions from the beginning of the transient, as is done in Probabilistic Safety Analysis. This paper presents the results of comparing different assumptions regarding the single failure criterion and the operator actions taken from the most common methodologies included in the different Deterministic Safety Analyses. A single failure assumption that has not been analysed previously in the literature is also proposed and analysed in this paper. The comparison is done with a PWR Westinghouse three-loop model of Almaraz NPP in the TRACE code, with best estimate assumptions but including deterministic hypotheses such as the single failure criterion or loss of offsite power. The behaviour of the reactor is quite diverse depending on the assumptions made regarding the operator actions. On the other hand, although highly conservative hypotheses, such as the single failure criterion, are included, all the results are quite far from the regulatory limits.
In addition, some improvements to the Emergency Operating Procedures to minimize the offsite release from the damaged SG in case of a SGTR are outlined taking into account the offsite dose sensitivity results.
Abstract:
The simulation of design basis accidents in a containment building is usually conducted with a lumped-parameter model. The codes normally used by Westinghouse Electric Company (WEC) for such licensing analyses are WGOTHIC or COCO, which are suitable to provide an adequate estimate of the overall peak temperature and pressure of the containment. However, for a detailed study of the thermal-hydraulic behavior in every room and compartment of the containment building, it can be more convenient to model the containment with a more detailed 3D representation of the geometry of the whole building. The main objective of this project is to obtain both a standard PWR Westinghouse and an AP1000® containment model for a CFD code, in order to analyze the detailed thermal-hydraulic behavior during a design basis accident. In this paper the development and testing of both containment models is presented.
Abstract:
The objective of the project is the evaluation of the STAR-CCM+ code, as well as the establishment of guidelines and standardized procedures for the discretization of the domain of study and the selection of physical models suitable for the simulation of BWR fuel. For this purpose several BFBT experiments have been simulated; they provide a database of measured void fraction distributions under power changes, used to find the most appropriate models for the simulation of the problem.
Abstract:
This Master's thesis presents a survey of the deterministic safety analysis methods currently in use. Deterministic safety analyses are used to assess the safety of nuclear power plants during different operating states, and the safety systems of a power plant are dimensioned on the basis of their results. Deterministic safety analyses can be prepared using either a conservative or a statistical method. The conservative method aims to model the situation under consideration such that the actual behaviour of the plant is, with high confidence, milder than the analysis result; the uncertainties of the analysis are accounted for by conservative assumptions. The statistical method is based on the best-estimate approach, i.e. on modelling the plant behaviour as realistically as possible; the uncertainties of the analysis are then determined systematically by means of statistical mathematics. The thesis emphasizes the uncertainty analysis methods used to quantify the uncertainties of a statistical analysis. In the computational part of the thesis, the methods used for preparing a deterministic safety analysis are compared through the calculation of a thermal-hydraulic safety analysis example. The accident considered is a loss-of-coolant accident caused by a pipe break in the primary coolant circuit at the Olkiluoto 3 plant unit. On the basis of the calculated example case, the statistical and the conservative method can be regarded as alternative ways of preparing a safety analysis. Both analyses produced acceptable and mutually comparable results of the same order of magnitude.
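In best-estimate plus uncertainty analyses of this kind, the number of code runs is commonly fixed by Wilks' order-statistics formula. The abstract does not name the exact technique, so the following is an illustrative sketch rather than the thesis method.

```python
def wilks_sample_size(coverage=0.95, confidence=0.95):
    """Smallest N such that the largest of N independent code runs is a
    one-sided tolerance limit covering `coverage` of the output
    distribution with probability `confidence`:
    1 - coverage**N >= confidence."""
    n = 1
    while 1.0 - coverage ** n < confidence:
        n += 1
    return n
```

For the customary 95 %/95 % criterion this gives the well-known 59 runs.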
Abstract:
OBJECTIVES: Family studies typically use multiple sources of information on each individual including direct interviews and family history information. The aims of the present study were to: (1) assess agreement for diagnoses of specific substance use disorders between direct interviews and the family history method; (2) compare prevalence estimates according to the two methods; (3) test strategies to approximate prevalence estimates according to family history reports to those based on direct interviews; (4) determine covariates of inter-informant agreement; and (5) identify covariates that affect the likelihood of reporting disorders by informants. METHODS: Analyses were based on family study data which included 1621 distinct informant (first-degree relatives and spouses) - index subject pairs. RESULTS: Our main findings were: (1) inter-informant agreement was fair to good for all substance disorders, except for alcohol abuse; (2) the family history method underestimated the prevalence of drug but not alcohol use disorders; (3) lowering diagnostic thresholds for drug disorders and combining multiple family histories increased the accuracy of prevalence estimates for these disorders according to the family history method; (4) female sex of index subjects was associated with higher agreement for nearly all disorders; and (5) informants who themselves had a history of the same substance use disorder were more likely to report this disorder in their relatives, which entails the risk of overestimation of the size of familial aggregation. CONCLUSION: Our findings have important implications for the best-estimate procedure applied in family studies.
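Inter-informant agreement of the kind reported here ("fair to good") is typically quantified with Cohen's kappa; the abstract does not state the statistic used, so this generic sketch is illustrative only.

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two binary raters, e.g. direct interview vs.
    family history report of a diagnosis (1 = disorder present).
    a and b are equal-length sequences of 0/1 codes. Undefined when
    chance agreement is exactly 1 (both raters constant)."""
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n      # observed agreement
    pa, pb = sum(a) / n, sum(b) / n                    # marginal "yes" rates
    p_exp = pa * pb + (1 - pa) * (1 - pb)              # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)
```

Kappa is 1 for perfect agreement and 0 when agreement is no better than chance, which is why it is preferred over raw percent agreement for rare diagnoses.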
Abstract:
Time series of global and regional mean Surface Air Temperature (SAT) anomalies are a common metric used to estimate recent climate change. Various techniques can be used to create these time series from meteorological station data. The degree of difference arising from using five different techniques, based on existing temperature anomaly dataset techniques, to estimate Arctic SAT anomalies over land and sea ice were investigated using reanalysis data as a testbed. Techniques which interpolated anomalies were found to result in smaller errors than non-interpolating techniques relative to the reanalysis reference. Kriging techniques provided the smallest errors in estimates of Arctic anomalies and Simple Kriging was often the best kriging method in this study, especially over sea ice. A linear interpolation technique had, on average, Root Mean Square Errors (RMSEs) up to 0.55 K larger than the two kriging techniques tested. Non-interpolating techniques provided the least representative anomaly estimates. Nonetheless, they serve as useful checks for confirming whether estimates from interpolating techniques are reasonable. The interaction of meteorological station coverage with estimation techniques between 1850 and 2011 was simulated using an ensemble dataset comprising repeated individual years (1979-2011). All techniques were found to have larger RMSEs for earlier station coverages. This supports calls for increased data sharing and data rescue, especially in sparsely observed regions such as the Arctic.
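The ranking of techniques by Root Mean Square Error can be reproduced in miniature. The 1-D field, station spacing and sine-shaped anomalies below are purely illustrative stand-ins for the Arctic reanalysis test bed.

```python
import numpy as np

def rmse(est, ref):
    """Root Mean Square Error between an anomaly estimate and a
    reference field, as used to rank techniques against reanalysis."""
    est, ref = np.asarray(est, float), np.asarray(ref, float)
    return float(np.sqrt(np.mean((est - ref) ** 2)))

# Hypothetical 1-D illustration: anomalies known at a few "station"
# locations, estimated everywhere by linear interpolation, then scored
# against a dense "reanalysis" reference field.
x_ref = np.linspace(0.0, 10.0, 101)
ref_field = np.sin(x_ref)                       # stand-in reference anomalies
x_stations = np.array([0.0, 2.5, 5.0, 7.5, 10.0])
obs = np.sin(x_stations)                        # "observed" station anomalies
est_field = np.interp(x_ref, x_stations, obs)   # interpolating technique
err = rmse(est_field, ref_field)
```

Denser station coverage shrinks `err`, mirroring the paper's finding that all techniques degrade under the sparser early station networks.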
Abstract:
In this paper we make an exhaustive study of the fourth order linear operator u^{(4)} + M u coupled with the clamped beam conditions u(0) = u(1) = u'(0) = u'(1) = 0. We obtain the exact values of the real parameter M for which this operator satisfies an anti-maximum principle. Such a property is equivalent to the fact that the related Green's function is nonnegative in [0, 1] x [0, 1]. When M < 0 we obtain the best estimate by means of spectral theory, and for M > 0 we attain the optimal value by studying the oscillation properties of the solutions of the homogeneous equation u^{(4)} + M u = 0. By using the method of lower and upper solutions we deduce the existence of solutions for nonlinear problems coupled with these boundary conditions. (C) 2011 Elsevier Ltd. All rights reserved.
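The boundary value problem studied above can be written compactly as:

```latex
\[
  u^{(4)}(t) + M\,u(t) = f(t), \qquad t \in (0,1),
\]
\[
  u(0) = u(1) = u'(0) = u'(1) = 0 .
\]
```

The anti-maximum principle then holds exactly for those values of M for which the associated Green's function G(t, s) is nonnegative on [0, 1] x [0, 1].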
Abstract:
Phengites from the eclogite and blueschist-facies sequences of the Cycladic island of Syros (Greece) have been dated by the in situ UV-laser ablation Ar-40/Ar-39 method. A massive, phengite-rich eclogite and an omphacite-rich metagabbro were investigated. The phengites are euhedral and coarse-grained (several hundred μm), strain-free and exhibit no evidence of late brittle deformation or recrystallization. Apparent ages range from 43 to 50 Ma for the phengite-rich eclogite and 42 to 52 Ma for the omphacitic metagabbro. This large spread of ages is visible at all scales: within individual grains, in domains of several hundred μm, and across the entire sample (ca. 2 cm). Such variations have traditionally been attributed to metamorphic cooling or the incorporation of excess argon. However, the textural equilibrium between the phengites and other high-pressure phases and the subtle compositional variations within the phengites, especially the preservation of growth textures, alternatively suggest that the observed range in ages may reflect variations of radiogenic argon acquired during phengite formation and subsequent growth, thus dating a discrete event on the prograde path. This implies that the oldest phengite Ar-40/Ar-39 ages provide the best estimate of a minimum crystallization age, which is in agreement with recently reported U-Pb and Lu-Hf geochronological data. Our results are consistent with available stable isotope data and further suggest that, under fluid-restricted conditions, both stable and radiogenic isotopic systems can survive without significant isotopic exchange during subduction and exhumation from eclogite-facies P-T conditions. (C) 2004 Elsevier B.V. All rights reserved.
Abstract:
Dusting causes problems both in paper manufacture on the paper machine and at the customer's printing press. Dusting on the printing press causes cumulative deposits on printing blankets and plates and in the dampening water system. Dust deposits lead to degraded print quality, more frequent washing and production breaks. Increased printing with tacky offset inks and the use of fillers as a paper raw material have increased the dusting tendency of paper. Many factors affect the dusting of paper in manufacture, finishing and converting, which makes the phenomenon complex. To predict the dusting of paper in printing, the usability and reliability of various dust measurement methods were studied. The methods used in this work were the MacMillan Bloedel dust tester, the R.A. Emerson & Company dust measurement device, the Finntester, SOLA, Masuga, sheet printing, and IGT and Prüfbau laboratory printing tests. The reliability of the methods was evaluated by comparing the dust results obtained with those from a test printing and by using a Gage R&R study. In addition, the factors affecting the dusting of paper were examined, and the effect of process conditions on dusting was investigated. The study showed that changes made in the conditions of the production process cause substantial changes in the dusting of paper. The properties of the pulp used, the control of the chemistry of the production process and, among the paper properties, especially strength and formation are clearly connected with the dusting of paper in printing. Dust measurements during papermaking can give an indicative level of the dusting of paper in printing, but obtaining absolute results is impossible owing to the complexity of the phenomenon. Of the methods used, the best-performing dust measurement method for this production process is the R.A. Emerson & Company device. The device is simple and fast to use, and its implementation does not require major investments.
Abstract:
Seven groups have participated in an intercomparison study of calculations of radiative forcing (RF) due to stratospheric water vapour (SWV) and contrails. A combination of detailed radiative transfer schemes and codes for global-scale calculations have been used, as well as a combination of idealized simulations and more realistic global-scale changes in stratospheric water vapour and contrails. Detailed line-by-line codes agree within about 15 % for longwave (LW) and shortwave (SW) RF, except in one case where the difference is 30 %. Since the LW and SW RF due to contrails and SWV changes are of opposite sign, the differences between the models seen in the individual LW and SW components can be either compensated or strengthened in the net RF, and thus in relative terms uncertainties are much larger for the net RF. Some of the models used for global-scale simulations of changes in SWV and contrails differ substantially in RF from the more detailed radiative transfer schemes. For the global-scale calculations we use a method of weighting the results to calculate a best estimate based on their performance compared to the more detailed radiative transfer schemes in the idealized simulations.
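One plausible form of such performance weighting (an assumption; the abstract does not give the paper's exact scheme) is an inverse-error weighted mean of the global-model results:

```python
import numpy as np

def weighted_best_estimate(rf_values, errors):
    """Sketch of a performance-weighted best estimate: each global
    model's radiative forcing is weighted by the inverse of its error
    against the detailed line-by-line benchmarks in the idealized
    cases. Both the weighting form and the names are illustrative."""
    w = 1.0 / np.asarray(errors, dtype=float)   # smaller error, larger weight
    w /= w.sum()                                # normalize weights to 1
    return float(np.dot(w, np.asarray(rf_values, dtype=float)))
```

Models that reproduce the benchmark cases poorly thus contribute less to the combined estimate, without being excluded outright.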
Abstract:
The problem of state estimation occurs in many applications of fluid flow. For example, to produce a reliable weather forecast it is essential to find the best possible estimate of the true state of the atmosphere. To find this best estimate a nonlinear least squares problem has to be solved subject to dynamical system constraints. Usually this is solved iteratively by an approximate Gauss–Newton method where the underlying discrete linear system is in general unstable. In this paper we propose a new method for deriving low order approximations to the problem based on a recently developed model reduction method for unstable systems. To illustrate the theoretical results, numerical experiments are performed using a two-dimensional Eady model – a simple model of baroclinic instability, which is the dominant mechanism for the growth of storms at mid-latitudes. It is a suitable test model to show the benefit that may be obtained by using model reduction techniques to approximate unstable systems within the state estimation problem.
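The outer iteration described above, an approximate Gauss-Newton method for the nonlinear least squares problem, can be sketched generically. The unconstrained, dense formulation below is an illustration only, not the reduced-order scheme developed in the paper.

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=10):
    """Generic Gauss-Newton iteration for min ||r(x)||^2.
    residual: x -> r(x), an m-vector; jacobian: x -> J(x), an m x n
    matrix. Each step solves the linearised least squares subproblem
    J dx = -r, which is where a reduced-order model of the dynamics
    would be substituted in the state-estimation setting."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        dx, *_ = np.linalg.lstsq(J, -r, rcond=None)  # linearised update
        x = x + dx
    return x
```

On a toy scalar problem, r(x) = x^2 - 2, the iteration reduces to Newton's method for sqrt(2) and converges in a handful of steps.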