14 results for Uncertainty propagation
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
In the framework of a global transition to a low-carbon energy mix, interest in advanced nuclear Small Modular Reactors (SMRs) has been growing at the international level. Given the high level of maturity reached by Severe Accident Codes for currently operating reactors, their applicability to advanced SMRs is starting to be studied. Within the present thesis work, and in the framework of a collaboration between ENEA, UNIBO and IRSN, an ASTEC code model of a generic IRIS reactor has been developed. The simulation of a DBA sequence involving the operation of all the passive safety systems of the generic IRIS has been carried out to investigate the code model's capability to predict the thermal-hydraulics characterizing an integral SMR adopting a passive mitigation strategy. The subsequent simulation of four BDBA sequences explores the applicability of Severe Accident Codes to advanced SMRs in beyond-design and core-degradation conditions. The uncertainty affecting a code simulation can be estimated with the method of Input Uncertainty Propagation, applied here through the RAVEN-ASTEC coupling and its implementation on an HPC platform. This probabilistic methodology has been employed to study the uncertainty affecting the passive safety system operation in the ASTEC DBA simulation, providing a further characterization of the thermal-hydraulics of this sequence. The application of the Uncertainty Quantification method to early core-melt phenomena has been investigated in the framework of a BEPU analysis of the ASTEC simulation of the QUENCH test-6 experiment. A possible solution to the encountered challenges is proposed through the application of a Limit Surface search algorithm.
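As a rough illustration of the Input Uncertainty Propagation step that tools such as RAVEN automate around ASTEC, the sketch below shows the usual Monte Carlo/Wilks workflow: sample the uncertain inputs, run the code once per sample, and take an order statistic of the output as a 95 %/95 % tolerance limit. The `run_astec` function, the two inputs and their ranges are hypothetical stand-ins, not the thesis model.

```python
import numpy as np

rng = np.random.default_rng(42)

def wilks_sample_size(gamma=0.95, beta=0.95):
    """Smallest n such that the sample maximum bounds the gamma-quantile
    of the output with confidence beta (first-order, one-sided Wilks)."""
    n = 1
    while 1.0 - gamma ** n < beta:
        n += 1
    return n  # 59 for the classical 95 %/95 % case

def run_astec(params):
    """Hypothetical stand-in for one ASTEC run; toy peak-temperature response."""
    power, htc = params
    return 900.0 + 0.5 * power - 2.0 * htc

n = wilks_sample_size()
# Uncertain inputs: distributions and ranges are illustrative only.
power = rng.uniform(95.0, 105.0, n)   # % of nominal core power
htc = rng.normal(50.0, 5.0, n)        # W/(m^2 K), heat transfer coefficient
peaks = np.array([run_astec(p) for p in zip(power, htc)])

print(f"n = {n} runs, 95/95 upper tolerance limit: {peaks.max():.1f} K")
```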
Abstract:
High-frequency seismograms contain features that reflect the random inhomogeneities of the earth. In this work I use an imaging method to locate high-contrast small-scale heterogeneity with respect to the background earth medium. This method was first introduced by Nishigami (1991) and then applied to different volcanic and tectonically active areas (Nishigami, 1997; Nishigami, 2000; Nishigami, 2006). The scattering imaging method is applied to two volcanic areas: Campi Flegrei and Mt. Vesuvius. Volcanically and seismologically active areas are often characterized by complex velocity structures, due to the presence of rocks with different elastic properties. I introduce some modifications to the original method in order to make it suitable for small and highly complex media. In particular, for very complex media the single-scattering approximation assumed by Nishigami (1991) is not applicable, as the mean free path becomes short; the multiple-scattering or diffusive approximation becomes closer to reality. In this thesis, differently from the original Nishigami (1991) method, I use the mean of the recorded coda envelopes as the reference curve and calculate the variations from this average envelope. In this way I implicitly assume no particular scattering regime for the "average" scattered radiation, whereas I consider the variations as due to waves that are singly scattered from the strongest heterogeneities. The imaging method is applied to a relatively small area (20 × 20 km), a choice justified by the short length of the analyzed codas of the low-magnitude earthquakes. I apply the unmodified Nishigami method to the volcanic area of Campi Flegrei and compare the results with other tomographies performed in the same area. The scattering images, obtained with waves at frequencies around 18 Hz, show the presence of strong scatterers in correspondence with the submerged caldera rim in the southern part of the Pozzuoli bay. Strong scattering is also found below the Solfatara crater, characterized by the presence of densely fractured, fluid-filled rocks and by a strong thermal anomaly. The modified Nishigami technique is applied to the Mt. Vesuvius area. Results show a low-scattering area just below the central cone and a high-scattering area around it. The high-scattering zone seems to be due to the contrast between the high-rigidity body located beneath the crater and the low-rigidity materials located around it. The central low-scattering area overlaps the hydrothermal reservoirs located below the central cone. An interpretation of the results in terms of the geological properties of the medium is also supplied, aiming to find a correspondence between the scattering properties and the geological nature of the material. A complementary result reported in this thesis is that the strong heterogeneity of the volcanic medium creates a phenomenon called "coda localization": the shape of the seismograms recorded by the stations located at the top of the volcanic edifice of Mt. Vesuvius differs from the shape of those recorded at the bottom. This behavior is explained by the observation that, at large lapse times, the coda energy is not uniformly distributed within a region surrounding the source.
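The core of the modified method, the deviation of each station's coda envelope from the network-average reference curve, can be sketched in a few lines. The Hilbert-transform envelope, the smoothing window and the synthetic demo are assumptions made for illustration; the actual processing chain in the thesis may differ.

```python
import numpy as np
from scipy.signal import hilbert
from scipy.ndimage import uniform_filter1d

def coda_envelope(trace, smooth_samples=200):
    """Smoothed envelope of one seismogram via the Hilbert transform."""
    return uniform_filter1d(np.abs(hilbert(trace)), size=smooth_samples)

def envelope_residuals(traces):
    """Relative deviation of each station's envelope from the mean envelope.

    The mean is used as the reference curve (no scattering regime assumed
    for it); the residuals are read as single scattering off the strongest
    heterogeneities."""
    envs = np.array([coda_envelope(tr) for tr in traces])
    mean_env = envs.mean(axis=0)
    return (envs - mean_env) / mean_env

# Synthetic demo: three decaying noise codas, one with a local anomaly.
rng = np.random.default_rng(2)
t = np.arange(20000)
traces = [np.exp(-t / 8000.0) * rng.standard_normal(t.size) for _ in range(3)]
traces[1][10000:12000] *= 2.0
print(envelope_residuals(traces)[1, 10000:12000].mean())  # clearly positive
```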
Abstract:
The research activity carried out during the PhD course in Electrical Engineering belongs to the branch of electric and electronic measurements. The main subject of the present thesis is a distributed measurement system to be installed in Medium Voltage power networks, together with the method developed to analyze the data acquired by the measurement system itself and to monitor power quality. In chapter 2 the increasing interest towards power quality in electrical systems is illustrated, by reporting the international research activity on the problem and the relevant standards and guidelines issued. The quality of the voltage provided by utilities and influenced by customers at the various points of a network has emerged as an issue only in recent years, in particular as a consequence of the liberalization of the energy market. Traditionally, the concept of quality of the delivered energy has been associated mostly with its continuity; hence reliability was the main characteristic to be ensured for power systems. Nowadays, the number and duration of interruptions are the "quality indicators" commonly perceived by most customers; for this reason, a short section is also dedicated to network reliability and its regulation. In this context it should be noted that although the measurement system developed during the research activity belongs to the field of power quality evaluation systems, the information registered in real time by its remote stations can be used to improve system reliability too. Given the vast scenario of power-quality-degrading phenomena that can occur in distribution networks, the study has been focused on electromagnetic transients affecting line voltages. The outcome of the study has been the design and realization of a distributed measurement system which continuously monitors the phase signals at different points of a network, detects the occurrence of transients superposed on the fundamental steady-state component, and registers the time of occurrence of such events. The data set is finally used to locate the source of the transient disturbance propagating along the network lines. Most of the oscillatory transients affecting line voltages are due to faults occurring at any point of the distribution system and must be detected before the protection equipment intervenes. An important conclusion is that the method can improve the reliability of the monitored network, since knowing the location of a fault allows the energy manager to reduce as much as possible both the area of the network to be disconnected for protection purposes and the time spent by technical staff to recover from the abnormal condition and/or the damage. The part of the thesis presenting the results of this study and activity is structured as follows: chapter 3 deals with the propagation of electromagnetic transients in power systems, defining the characteristics and causes of the phenomena and briefly reporting the theory and approaches used to study transient propagation; the state of the art concerning methods to detect and locate faults in distribution networks is then presented; finally, attention is paid to the particular technique adopted for this purpose in the thesis, and to the methods developed on the basis of that approach. Chapter 4 reports the configuration of the distribution networks on which the fault location method has been applied by means of simulations, as well as the results obtained case by case. In this way the performance of the location procedure is tested first under ideal and then under realistic operating conditions. In chapter 5 the measurement system designed to implement the transient detection and fault location method is presented. The hardware belonging to the measurement chain of every acquisition channel in the remote stations is described. Then, the global measurement system is characterized by considering the non-ideal aspects of each device that can contribute to the final combined uncertainty on the estimated position of the fault in the network under test. Finally, this parameter is computed according to the Guide to the Expression of Uncertainty in Measurement, by means of a numeric procedure. The last chapter describes a device designed and realized during the PhD activity with the aim of replacing the commercial capacitive voltage divider belonging to the conditioning block of the measurement chain. This study has been carried out to provide an alternative to the transducer in use that could offer equivalent performance at lower cost. In this way, the economic impact of the investment associated with the whole measurement system would be significantly reduced, making the application of the method much more feasible.
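To make the last two steps concrete, the sketch below shows (i) fault location on a single line from the difference between the transient arrival times at the two line ends, and (ii) a numeric propagation of the timing and propagation-speed uncertainties in the spirit of the GUM's Monte Carlo supplement. Line length, speed and uncertainty figures are invented for illustration and are not the thesis values.

```python
import numpy as np

rng = np.random.default_rng(0)

def locate_fault(dt, line_length, v):
    """Fault distance d from end A, given the arrival-time difference dt of
    the travelling transient at the two ends: d - (L - d) = v*dt."""
    return 0.5 * (line_length + v * dt)

# Illustrative values: 10 km line, propagation speed about 2/3 of c.
L, v, dt = 10e3, 2.0e8, -6.0e-6

# GUM Supplement 1 style numeric procedure: sample each input from its
# assumed uncertainty distribution and look at the spread of the output.
N = 100_000
dt_s = rng.normal(dt, 0.2e-6, N)   # timing uncertainty (sync, trigger)
v_s = rng.normal(v, 0.02e8, N)     # propagation-speed uncertainty
d_s = locate_fault(dt_s, L, v_s)

print(f"fault at {d_s.mean():.0f} m, standard uncertainty {d_s.std():.0f} m")
```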
Abstract:
In the context of a “testing laboratory”, one of the most important aspects to deal with is the measurement result. Whenever decisions are based on measurement results, it is important to have some indication of the quality of the results. In every area concerned with noise measurement many standards are available, but without an expression of uncertainty it is impossible to judge whether two results are in compliance or not. ISO/IEC 17025 is an international standard related to the competence of calibration and testing laboratories. It contains the requirements that testing and calibration laboratories have to meet if they wish to demonstrate that they operate a quality system, are technically competent, and are able to generate technically valid results. ISO/IEC 17025 deals specifically with the requirements for the competence of laboratories performing testing and calibration and for the reporting of the results, which may or may not contain opinions and interpretations. The standard requires appropriate methods of analysis to be used for estimating the uncertainty of measurement. From this point of view, for a testing laboratory performing sound power measurement according to specific ISO standards and European Directives, the evaluation of measurement uncertainty is the most important factor to deal with. Sound power level measurement according to ISO 3744:1994, performed with a limited number of microphones distributed over a surface enveloping a source, is affected by a certain systematic error and a related standard deviation. Comparing measurements carried out with different microphone arrays is difficult, because the results are affected by systematic errors and standard deviations that are peculiar to the number of microphones arranged on the surface, their spatial positions and the complexity of the sound field. A statistical approach can give an overview of the differences between sound power levels evaluated with different microphone arrays and an evaluation of the errors that affect this kind of measurement. Unlike the classical approach, which tends to follow the ISO GUM, this thesis presents a different point of view on the problem of comparing results obtained from different microphone arrays.
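For reference, the basic ISO 3744 relation underlying such comparisons, a surface-averaged pressure level plus the measurement-surface term, is sketched below; the microphone counts, levels and hemispherical surface are hypothetical, and all environmental corrections are omitted.

```python
import numpy as np

def sound_power_level(spl_db, surface_area_m2, s0=1.0):
    """Lw = 10*log10(mean(10^(Lp/10))) + 10*log10(S/S0): energy average of
    the pressure levels over the surface, plus the surface-area term."""
    lp_mean = 10.0 * np.log10(np.mean(10.0 ** (np.asarray(spl_db) / 10.0)))
    return lp_mean + 10.0 * np.log10(surface_area_m2 / s0)

# Two hypothetical arrays over the same hemisphere (S = 2*pi*r^2, r = 2 m):
S = 2.0 * np.pi * 2.0 ** 2
array_10 = [78.1, 77.9, 78.4, 77.6, 78.0, 78.3, 77.8, 78.2, 77.7, 78.5]
array_5 = [78.2, 77.8, 78.4, 77.9, 78.1]
print(sound_power_level(array_10, S), sound_power_level(array_5, S))
```

The difference between the two results is exactly the kind of array-dependent discrepancy that the statistical comparison addresses.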
Abstract:
Investigation of impulsive signals originated by Partial Discharge (PD) phenomena represents an effective tool for preventing electric failures in High Voltage (HV) and Medium Voltage (MV) systems. The determination of both sensor and instrument bandwidths is the key to achieving meaningful measurements, that is to say, obtaining the maximum Signal-to-Noise Ratio (SNR). The optimum bandwidth depends on the characteristics of the system under test, which can often be represented as a transmission line characterized by signal attenuation and dispersion phenomena. It is therefore necessary to develop both models and techniques which can accurately characterize the PD propagation mechanisms in each system and work out the frequency characteristics of the PD pulses at the detection point, in order to design sensors able to carry out on-line PD measurements with maximum SNR. Analytical models will be devised in order to predict PD propagation in MV apparatuses. Furthermore, simulation tools will be used where complex geometries make analytical models unfeasible. In particular, PD propagation in MV cables, transformers and switchgear will be investigated, taking into account both radiated and conducted signals associated with PD events, in order to design proper sensors.
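A minimal frequency-domain sketch of what propagation does to a PD pulse: multiply the pulse spectrum by exp(-γ(f)·L), with γ(f) = α(f) + jβ(f). The square-root attenuation law and every numeric value below are illustrative assumptions, not a validated MV-cable model.

```python
import numpy as np

def propagate_pulse(pulse, fs, length_m, alpha0=2e-6, v=1.7e8):
    """Lossy-line propagation in the frequency domain. Attenuation is taken
    to grow with sqrt(f) (skin-effect-like); the constant phase velocity v
    gives a pure delay, so dispersion proper is not modelled here."""
    f = np.fft.rfftfreq(len(pulse), d=1.0 / fs)
    gamma = alpha0 * np.sqrt(f) + 1j * 2.0 * np.pi * f / v
    return np.fft.irfft(np.fft.rfft(pulse) * np.exp(-gamma * length_m),
                        n=len(pulse))

fs = 1e9                                          # 1 GS/s
t = np.arange(8192) / fs
pd_pulse = np.exp(-((t - 500e-9) / 10e-9) ** 2)   # ~10 ns Gaussian PD pulse
out = propagate_pulse(pd_pulse, fs, length_m=500.0)
print(f"peak in: {pd_pulse.max():.2f}, peak after 500 m: {out.max():.3f}")
```

The strong loss of high-frequency content after a few hundred metres is what drives the choice of sensor and instrument bandwidth at the detection point.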
Abstract:
Hydrologic risk (and the hydro-geologic risk closely related to it) is, and has always been, a very relevant issue, due to the severe consequences that flooding, and water in general, may cause in terms of human and economic losses. Floods are natural phenomena, often catastrophic, and cannot be avoided, but their damage can be reduced if they are predicted sufficiently in advance. For this reason, flood forecasting plays an essential role in hydro-geological and hydrological risk prevention. Thanks to the development of sophisticated meteorological, hydrologic and hydraulic models, flood forecasting has made significant progress in recent decades; nonetheless, models are imperfect, which means that we are still left with residual uncertainty about what will actually happen. This type of uncertainty is what is discussed and analyzed in this thesis. In operational problems, the ultimate aim of a forecasting system is not to reproduce the river's behavior: that is only a means of reducing the uncertainty associated with what will happen as a consequence of a precipitation event. In other words, the main objective is to assess whether or not preventive interventions should be adopted and which operational strategy may represent the best option. The main problem for a decision maker is to interpret model results and translate them into an effective intervention strategy. To make this possible, it is necessary to clearly define what is meant by uncertainty, since in the literature there is often confusion on this issue. Therefore, the first objective of this thesis is to clarify this concept, starting with a key question: should the choice of the intervention strategy be based on evaluating the model prediction for its ability to represent reality, or on evaluating what will actually happen on the basis of the information given by the model forecast? Once this idea is made unambiguous, the other main concern of this work is to develop a tool that can provide effective decision support, making objective and realistic risk evaluations possible. In particular, such a tool should provide an uncertainty assessment that is as accurate as possible. This means primarily three things: it must correctly combine all the available deterministic forecasts, it must assess the probability distribution of the predicted quantity, and it must quantify the flooding probability. Furthermore, given that the time to implement prevention strategies is often limited, the flooding probability has to be linked to the time of occurrence. For this reason, it is necessary to quantify the flooding probability within a time horizon related to the time required to implement the intervention strategy, and it is also necessary to assess the probability distribution of the flooding time.
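Once an ensemble of predicted level trajectories is available, the two quantities called for above, the flooding probability within the operational time horizon and the distribution of the flooding time, reduce to simple statistics over the ensemble members. The sketch below uses a purely synthetic ensemble; threshold, horizon and member count are invented.

```python
import numpy as np

def flooding_probability(ensemble_levels, threshold):
    """P(level exceeds the threshold at any time within the horizon),
    estimated from an ensemble of trajectories (members x time steps)."""
    return float(np.mean(ensemble_levels.max(axis=1) > threshold))

def flooding_time_quantiles(ensemble_levels, threshold, dt_hours,
                            q=(0.1, 0.5, 0.9)):
    """Quantiles of the first exceedance time among flooding members."""
    exceed = ensemble_levels > threshold
    flooded = exceed.any(axis=1)
    if not flooded.any():
        return None
    first = exceed[flooded].argmax(axis=1) * dt_hours
    return np.quantile(first, q)

rng = np.random.default_rng(1)
# Hypothetical 500-member ensemble over a 48 h horizon, hourly steps.
t = np.arange(48)
members = 4.0 + (1.5 * np.exp(-((t - 30) / 8.0) ** 2)
                 * rng.lognormal(0.0, 0.25, (500, 1)))
print(flooding_probability(members, threshold=5.5))         # about 0.5
print(flooding_time_quantiles(members, 5.5, dt_hours=1.0))  # hours
```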
Abstract:
On the basis of well-known literature, an analytical tool named LEAF (Linear Elastic Analysis of Fracture) was developed to predict the Damage Tolerance (DT) properties of aeronautical stiffened panels. The tool is based on linear elastic fracture mechanics and the displacement compatibility method. By means of LEAF, an extensive parametric analysis of stiffened panels representative of typical aeronautical constructions was performed to provide meaningful design guidelines. The effects of riveted, integral and adhesively bonded stringers on the fatigue crack propagation performance of stiffened panels were investigated, as well as the crack-retarding contribution of metallic straps (named doublers) bonded in the middle of the stringer bays. The effect of both perfectly bonded and partially debonded doublers was investigated as well. Adhesively bonded stiffeners showed the best DT properties in comparison with riveted and integral ones. A great reduction of the skin crack growth rate can be achieved with the adoption of additional doublers bonded between the stringers.
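A minimal sketch of the kind of comparison such a tool enables: integrating the Paris law with a geometry factor β(a) that drops below one as the crack tip approaches an intact stringer (load transfer into the stiffener). The Paris constants and the β profile are invented for illustration; they are not LEAF's displacement-compatibility output.

```python
import numpy as np

def crack_growth_life(a0, af, d_sigma, beta, C=1e-11, m=3.0, n_pts=20000):
    """Cycles to grow from a0 to af by integrating the Paris law
    da/dN = C*dK^m with dK = beta(a)*d_sigma*sqrt(pi*a)  [MPa*sqrt(m), m]."""
    a = np.linspace(a0, af, n_pts)
    dK = beta(a) * d_sigma * np.sqrt(np.pi * a)
    dN_da = 1.0 / (C * dK ** m)                  # cycles per metre of growth
    return float(np.sum(0.5 * (dN_da[1:] + dN_da[:-1]) * np.diff(a)))

unstiffened = lambda a: np.ones_like(a)
# Illustrative dip of the geometry factor near a bonded stringer at a = 100 mm:
stiffened = lambda a: 1.0 - 0.35 * np.exp(-((a - 0.10) / 0.02) ** 2)

for name, b in [("unstiffened", unstiffened), ("stiffened", stiffened)]:
    print(name, f"{crack_growth_life(0.01, 0.15, 80.0, b):,.0f} cycles")
```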
Abstract:
This work presents a comprehensive methodology for the reduction of analytical or numerical stochastic models characterized by uncertain input parameters or boundary conditions. The technique, based on Polynomial Chaos Expansion (PCE) theory, represents a versatile solution to direct and inverse problems of uncertainty propagation. The potential of the methodology is assessed by investigating different application contexts related to groundwater flow and transport scenarios, such as global sensitivity analysis, risk analysis and model calibration. This is achieved by implementing a numerical code, developed in the MATLAB environment, presented here in its main features and tested on literature examples. The procedure has been conceived under flexibility and efficiency criteria in order to ensure its adaptability to different fields of engineering, and it has been applied to different case studies related to flow and transport in porous media. Each application is associated with innovative elements such as (i) new analytical formulations describing motion and displacement of non-Newtonian fluids in porous media, (ii) application of global sensitivity analysis to a high-complexity numerical model inspired by a real case of risk of radionuclide migration in the subsurface environment, and (iii) development of a novel sensitivity-based strategy for parameter calibration and experiment design in laboratory-scale tracer transport.
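As a one-dimensional caricature of the approach, the sketch below builds a non-intrusive PCE of a toy model by regression on probabilists' Hermite polynomials (orthogonal under the standard normal measure) and reads the mean and variance directly off the coefficients. The toy model, degree and sample sizes are my choices; the thesis code handles the multivariate case and further post-processing such as sensitivity indices.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(3)

def model(x):
    """Stand-in for an expensive flow/transport model, with x ~ N(0, 1)."""
    return np.exp(0.3 * x) + 0.5 * x ** 2

# Non-intrusive PCE by least-squares regression on He_0 ... He_d.
degree, n_train = 6, 200
x = rng.standard_normal(n_train)
Psi = hermevander(x, degree)                 # design matrix of He_n(x)
coef, *_ = np.linalg.lstsq(Psi, model(x), rcond=None)

# Orthogonality E[He_m He_n] = n! * delta_mn gives the moments directly.
norms = np.array([factorial(k) for k in range(degree + 1)], dtype=float)
print(f"PCE mean {coef[0]:.4f}, variance {np.sum(coef[1:]**2 * norms[1:]):.4f}")

xs = rng.standard_normal(200_000)            # Monte Carlo cross-check
print(f"MC  mean {model(xs).mean():.4f}, variance {model(xs).var():.4f}")
```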
Abstract:
Environmental computer models are deterministic models devoted to predicting environmental phenomena such as air pollution or meteorological events. Numerical model output is given in terms of averages over grid cells, usually at high spatial and temporal resolution. However, these outputs are often biased, of unknown calibration, and not equipped with any information about the associated uncertainty. Conversely, data collected at monitoring stations are more accurate, since they essentially provide the true levels. Given the leading role played by numerical models, it is now important to compare model output with observations. Statistical methods developed to combine numerical model output and station data are usually referred to as data fusion. In this work, we first combine ozone monitoring data with ozone predictions from the Eta-CMAQ air quality model in order to forecast the real-time current 8-hour average ozone level, defined as the average of the previous four hours, the current hour, and the predictions for the next three hours. We propose a Bayesian downscaler model based on first differences, with a flexible coefficient structure and an efficient computational strategy to fit the model parameters. Model validation for the eastern United States shows a substantial improvement of our fully inferential approach over the current real-time forecasting system. Furthermore, we consider the introduction of temperature data from a weather forecast model into the downscaler, showing improved real-time ozone predictions. Finally, we introduce a hierarchical model to obtain the spatially varying uncertainty associated with numerical model output. We show how such uncertainty can be learned through suitable stochastic data fusion modeling using external validation data. We illustrate our Bayesian model by providing the uncertainty map associated with a temperature output over the northeastern United States.
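Stripped of its spatial and fully Bayesian structure, the core calibration idea of a downscaler, regressing accurate station data on biased numerical-model output and using the fit to correct new model values with an uncertainty band, reduces to the toy sketch below. All data are synthetic, and the static regression is only a caricature of the first-differences model proposed above.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic data: gridded forecasts (biased, noisy) vs. station observations.
n = 300
truth = 40.0 + 15.0 * rng.random(n)                   # ppb, "true" 8-h ozone
model_out = 0.8 * truth + 12.0 + rng.normal(0, 3, n)  # biased model output
obs = truth + rng.normal(0, 2, n)                     # accurate station data

# Downscaler core: obs = a + b * model_out, fitted by least squares.
X = np.column_stack([np.ones(n), model_out])
coef, res, *_ = np.linalg.lstsq(X, obs, rcond=None)
sigma = np.sqrt(res[0] / (n - 2))                     # residual std. deviation

new_model_value = 55.0
pred = coef[0] + coef[1] * new_model_value
print(f"calibrated forecast {pred:.1f} ppb +/- {1.96 * sigma:.1f} (approx. 95 %)")
```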
Abstract:
In this thesis we address a collection of Network Design problems which are strongly motivated by applications from Telecommunications, Logistics and Bioinformatics. In most cases we justify the need to take uncertainty in some of the problem parameters into account, and different Robust Optimization models are used to hedge against it. Mixed integer linear programming formulations, along with sophisticated algorithmic frameworks, are designed, implemented and rigorously assessed for the majority of the studied problems. The obtained results yield the following observations: (i) relevant real problems can be effectively represented as (discrete) optimization problems within the framework of network design; (ii) uncertainty can be appropriately incorporated into the decision process if a suitable robust optimization model is considered; (iii) optimal, or nearly optimal, solutions can be obtained for large instances if a tailored algorithm that exploits the structure of the problem is designed; (iv) a systematic and rigorous experimental analysis allows one to understand both the characteristics of the obtained (robust) solutions and the behavior of the proposed algorithm.
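One widely used way to hedge against such parameter uncertainty is the budgeted (Bertsimas-Sim) model, in which at most Γ coefficients deviate from their nominal values simultaneously. The toy evaluation below, with invented arc costs and deviations, shows how the budget trades nominal cost against worst-case protection for two hypothetical designs; it illustrates the modelling idea, not the thesis formulations.

```python
import numpy as np

def gamma_robust_cost(nominal, deviation, chosen, gamma):
    """Worst-case cost of a design under the budgeted uncertainty model:
    the adversary raises the `gamma` chosen arcs with the largest
    deviations to their worst-case cost nominal + deviation."""
    dev = np.where(chosen, deviation, 0.0)
    worst = np.sort(dev)[::-1][:gamma].sum()
    return float(np.where(chosen, nominal, 0.0).sum() + worst)

nominal = np.array([4.0, 6.0, 3.0, 5.0, 2.0])
deviation = np.array([3.0, 0.5, 2.5, 0.5, 2.0])
cheap = np.array([True, False, True, False, True])    # low nominal, fragile
stable = np.array([False, True, False, True, True])   # dearer, small deviations

for g in range(4):  # the cheap design wins for small budgets, loses at g = 3
    print(g, gamma_robust_cost(nominal, deviation, cheap, g),
          gamma_robust_cost(nominal, deviation, stable, g))
```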
Abstract:
Laser Shock Peening (LSP) is a surface enhancement treatment which induces a significant layer of beneficial compressive residual stresses, up to several mm underneath the surface of metal components, in order to mitigate the detrimental effects of crack growth. The aim of this thesis is to predict the crack growth behavior of thin aluminum specimens with one or more LSP stripes defining a compressive residual stress area. The LSP treatment has been applied as crack retardation stripes perpendicular to the crack growth direction, with the objective of slowing down the crack when it approaches the LSP patterns. Different finite element approaches have been implemented to predict the residual stress field left by the laser treatment, mostly by means of the commercial software Abaqus/Explicit. The Afgrow software has been used to predict the crack growth behavior of the component following the laser peening treatment and to quantify the improvement in fatigue life compared with the specimen baseline. Furthermore, an analytical model has been implemented in Matlab to make more accurate predictions of the fatigue life of the treated components. An educational internship at the Research and Technologies Germany - Hamburg department of Airbus helped to achieve the knowledge and experience needed to write this thesis. The main tasks of the thesis are the following:
- to carry out an up-to-date literature survey related to laser shock peening in metallic structures;
- to validate the FE models developed against experimental measurements at coupon level;
- to design crack growth slow-down in centered and edge-cracked tension specimens, based on a residual stress engineering approach using laser-peened patterns transversal to the crack path;
- to predict the crack growth behavior of thin aluminum panels;
- to validate numerical and analytical results by means of experimental tests.
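A crude sketch of the retardation mechanism: superposing a compressive residual-stress intensity factor K_res on the applied cycle lowers the effective ΔK (with crack closure enforced at K = 0) where the crack crosses an LSP stripe, and the Paris-law growth rate drops accordingly. The Paris constants and the K_res profile are illustrative placeholders, not Abaqus or Afgrow results.

```python
import numpy as np

def cycles_with_residual_stress(a0, af, d_sigma, R, K_res, C=1e-11, m=3.0):
    """Cycles to grow from a0 to af, superposing K_res(a) on the applied
    cycle and clipping negative K to zero (crack closure) before forming
    the effective range fed into the Paris law."""
    a = np.linspace(a0, af, 20000)
    K_max = d_sigma / (1.0 - R) * np.sqrt(np.pi * a) + K_res(a)
    K_min = R * d_sigma / (1.0 - R) * np.sqrt(np.pi * a) + K_res(a)
    dK_eff = np.clip(K_max, 0.0, None) - np.clip(K_min, 0.0, None)
    dN_da = 1.0 / (C * np.maximum(dK_eff, 1e-9) ** m)
    return float(np.sum(0.5 * (dN_da[1:] + dN_da[:-1]) * np.diff(a)))

# Compressive K_res dip where the crack crosses an LSP stripe at a = 25 mm.
lsp = lambda a: -8.0 * np.exp(-((a - 0.025) / 0.004) ** 2)  # MPa*sqrt(m)
baseline = lambda a: np.zeros_like(a)

for name, k in [("baseline", baseline), ("LSP stripe", lsp)]:
    print(name, f"{cycles_with_residual_stress(0.005, 0.05, 60.0, 0.1, k):,.0f} cycles")
```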