925 results for Uncertainty quantification
Abstract:
This paper presents an uncertainty quantification (UQ) study applied to the performance analysis of the ERCOFTAC conical diffuser. A deterministic CFD solver is coupled with a non-statistical generalised Polynomial Chaos (gPC) representation based on a pseudo-spectral projection method. Such an approach has the advantage of not requiring any modification of the CFD code for the propagation of random disturbances in the aerodynamic field. The stochastic results highlight the importance of the inlet velocity uncertainties on the pressure recovery, both alone and when coupled with a second uncertain variable. From a theoretical point of view, we investigate the possibility of building our gPC representation on arbitrary grids, thus increasing the flexibility of the stochastic framework.
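The non-intrusive pseudo-spectral projection described above calls the deterministic solver only at quadrature nodes and recovers the gPC coefficients by numerical integration. The snippet below is a minimal one-dimensional sketch, assuming a uniform inlet-velocity uncertainty and a hypothetical placeholder function pressure_recovery standing in for the CFD solver; it is an illustration of the technique, not the paper's implementation.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

# Hypothetical black-box CFD evaluation: maps an inlet-velocity sample to a
# pressure-recovery coefficient. Stands in for the deterministic solver.
def pressure_recovery(inlet_velocity):
    return 0.6 + 0.05 * np.tanh((inlet_velocity - 10.0) / 2.0)

order, n_quad = 4, 8                      # gPC order and quadrature level
xi, w = leggauss(n_quad)                  # Gauss-Legendre nodes/weights on [-1, 1]
u_mean, u_half = 10.0, 1.0                # assumed uniform inlet velocity in [9, 11] m/s
samples = np.array([pressure_recovery(u_mean + u_half * x) for x in xi])

# Pseudo-spectral projection: c_k = E[u * P_k] / E[P_k^2] for a uniform input
coeffs = np.array([(2 * k + 1) / 2.0 * np.sum(w * samples * Legendre.basis(k)(xi))
                   for k in range(order + 1)])

mean = coeffs[0]
variance = sum(coeffs[k] ** 2 / (2 * k + 1) for k in range(1, order + 1))
print(f"gPC mean = {mean:.4f}, std = {np.sqrt(variance):.4f}")
```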
Abstract:
The effect of structural and aerodynamic uncertainties on the performance predictions of a helicopter is investigated. An aerodynamic model based on blade element and momentum theory is used to predict the helicopter performance. The aeroelastic parameters, such as blade chord, rotor radius, two-dimensional lift-curve slope, blade profile drag coefficient, rotor angular velocity, blade pitch angle, and blade twist rate per radius of the rotor, are considered as random variables. The propagation of these uncertainties to the performance parameters, such as thrust coefficient and power coefficient, is studied using Monte Carlo simulations. The simulations are performed with 100,000 samples of structural and aerodynamic uncertain variables with a coefficient of variation ranging from 1 to 5%. The scatter in power predictions in hover, axial climb, and forward flight for the untwisted and linearly twisted blades is studied. It is found that the helicopter can require about 20-25% excess power relative to the deterministic predictions due to uncertainties.
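A Monte Carlo propagation of this kind can be sketched with a simplified blade element/momentum hover model. The nominal rotor values and the hover power formula below are illustrative assumptions, not taken from the paper; only the sample size and the coefficient-of-variation range mirror the numbers quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000                                   # Monte Carlo sample size (as in the study)
cov = 0.03                                    # coefficient of variation (within the 1-5% range)

# Nominal rotor parameters: solidity, lift-curve slope, pitch, profile drag (illustrative)
sigma_n, a_n, theta_n, cd0_n = 0.08, 5.7, np.radians(8.0), 0.01

def hover_power(sigma, a, theta, cd0, iters=50):
    """Blade-element/momentum hover model with uniform inflow (untwisted blade)."""
    lam = 0.05
    for _ in range(iters):                    # fixed-point iteration for the inflow ratio
        ct = 0.5 * sigma * a * (theta / 3.0 - lam / 2.0)
        lam = np.sqrt(np.maximum(ct, 0.0) / 2.0)
    return ct * lam + sigma * cd0 / 8.0       # induced + profile power coefficient

# Sample the uncertain parameters and propagate them through the model
params = [rng.normal(m, cov * m, n) for m in (sigma_n, a_n, theta_n, cd0_n)]
cp = hover_power(*params)

cp_det = hover_power(sigma_n, a_n, theta_n, cd0_n)
print(f"P95 / deterministic power ratio: {np.percentile(cp, 95) / cp_det:.3f}")
```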
Abstract:
In this paper we consider the problem of guided wave scattering from delamination in laminated composites, and further the problem of estimating delamination size and layer-wise location from the guided wave measurement. Damage location and region/size can be estimated from time of flight and wave packet spread, whereas depth information can be obtained from wavenumber modulation in the carrier packet. The key challenge is that this information is highly sensitive to various uncertainties. The variation in reflected and transmitted wave amplitude in a bar due to boundary/interface uncertainty is studied to illustrate such effects. The effect of uncertainty in material parameters on the time of flight is estimated for longitudinal wave propagation. To evaluate the effect of uncertainty on delamination detection, we employ a time domain spectral finite element (tSFEM) scheme, where wave propagation is modeled using higher-order interpolation with shape functions that have spectral convergence properties. A laminated composite beam with layer-wise placement of delamination is considered in the simulation. Scattering due to the presence of delamination is analyzed. For a single delamination, two identical waveforms are created at the two fronts of the delamination, whereas waves in the two sub-laminates create two independent waveforms with different wavelengths. Scattering due to multiple delaminations in a composite beam is also studied.
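The sensitivity of time of flight to material-parameter uncertainty mentioned above can be illustrated by propagating scatter in modulus and density through the one-dimensional bar-wave speed c = sqrt(E/rho). The nominal properties and scatter levels below are hypothetical placeholders, not the laminate data used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Illustrative nominal properties for a unidirectional composite bar (not from the paper)
E_mean, rho_mean, L = 140e9, 1600.0, 0.5      # Pa, kg/m^3, m (propagation distance)

E = rng.normal(E_mean, 0.05 * E_mean, n)      # assumed 5% scatter in axial modulus
rho = rng.normal(rho_mean, 0.02 * rho_mean, n)

c = np.sqrt(E / rho)                          # longitudinal (bar) wave speed
tof = L / c                                   # time of flight over the bar length

print(f"ToF mean = {tof.mean() * 1e6:.2f} us, CoV = {tof.std() / tof.mean():.3%}")
```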
Abstract:
Many engineering applications face the problem of bounding the expected value of a quantity of interest (performance, risk, cost, etc.) that depends on stochastic uncertainties whose probability distribution is not known exactly. Optimal uncertainty quantification (OUQ) is a framework that aims at obtaining the best bound in these situations by explicitly incorporating available information about the distribution. Unfortunately, this often leads to non-convex optimization problems that are numerically expensive to solve.
This thesis focuses on efficient numerical algorithms for OUQ problems. It begins by investigating several classes of OUQ problems that can be reformulated as convex optimization problems. Conditions on the objective function and information constraints under which a convex formulation exists are presented. Since the size of the optimization problem can become quite large, solutions for scaling up are also discussed. Finally, the capability of analyzing a practical system through such convex formulations is demonstrated by a numerical example of energy storage placement in power grids.
When an equivalent convex formulation is unavailable, it is possible to find a convex problem that provides a meaningful bound for the original problem, also known as a convex relaxation. As an example, the thesis investigates the setting used in Hoeffding's inequality. The naive formulation requires solving a collection of non-convex polynomial optimization problems whose number grows doubly exponentially. After structures such as symmetry are exploited, it is shown that both the number and the size of the polynomial optimization problems can be reduced significantly. Each polynomial optimization problem is then bounded by its convex relaxation using sums-of-squares. These bounds are found to be tight in all the numerical examples tested in the thesis and are significantly better than Hoeffding's bounds.
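To make the Hoeffding setting concrete: for independent variables bounded in [a_i, b_i], Hoeffding's inequality bounds the upper tail of their sum by exp(-2t^2 / sum_i (b_i - a_i)^2), and the thesis's OUQ bounds tighten exactly this kind of estimate. The sketch below only compares the classical bound against a Monte Carlo tail estimate for an arbitrary bounded distribution; it illustrates the baseline, not the sums-of-squares relaxations themselves.

```python
import numpy as np

rng = np.random.default_rng(2)
n_vars, n_samples, t = 20, 200_000, 2.0

# Independent variables bounded in [0, 1]; only the bounds enter Hoeffding's bound
X = rng.beta(2.0, 5.0, size=(n_samples, n_vars))
S = X.sum(axis=1)

empirical = np.mean(S - S.mean() >= t)          # Monte Carlo estimate of the tail probability
hoeffding = np.exp(-2.0 * t**2 / n_vars)        # sum of (b_i - a_i)^2 equals n_vars here

print(f"empirical tail ~ {empirical:.2e}, Hoeffding bound = {hoeffding:.2e}")
```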
Abstract:
Operational uncertainties such as throttle excursions, varying inlet conditions and geometry changes lead to variability in compressor performance. In this work, the main operational uncertainties inherent in a transonic axial compressor are quantified to determine their effect on performance. These uncertainties include the effects of inlet distortion, metal expansion, flow leakages and blade roughness. A 3D, validated RANS model of the compressor is utilized to simulate these uncertainties and quantify their effect on polytropic efficiency and pressure ratio. To propagate them, stochastic collocation and sparse pseudospectral approximations are used. We demonstrate that lower-order approximations are sufficient as these uncertainties are inherently linear. Results for epistemic uncertainties in the form of meshing methodologies are also presented. Finally, the uncertainties considered are ranked in order of their effect on efficiency loss. © 2012 AIAA.
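Stochastic collocation of the kind used here evaluates the expensive model at a small set of quadrature nodes and recovers output statistics from weighted sums. The sketch below assumes a low-level tensor-product Gauss-Hermite rule over two standard-normal inputs and a hypothetical near-linear efficiency surrogate in place of the RANS model; it shows why low-order rules suffice for near-linear responses.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

# Hypothetical surrogate for polytropic efficiency as a function of two uncertain
# inputs (e.g. normalized inlet-distortion magnitude and blade roughness); placeholder only.
def efficiency(z1, z2):
    return 0.91 - 0.004 * z1 - 0.006 * z2 - 0.001 * z1 * z2

level = 3                                        # low-order collocation is enough for
x, w = hermegauss(level)                         # near-linear responses (see text)
w = w / np.sqrt(2.0 * np.pi)                     # normalize to standard-normal weights

# Tensor-product collocation grid over the two standard-normal germs
Z1, Z2 = np.meshgrid(x, x, indexing="ij")
W = np.outer(w, w)

vals = efficiency(Z1, Z2)
mean = np.sum(W * vals)
var = np.sum(W * (vals - mean) ** 2)
print(f"efficiency mean = {mean:.4f}, std = {np.sqrt(var):.5f}")
```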
Abstract:
Approximate models (proxies) can be employed to reduce the computational costs of estimating uncertainty. The price to pay is that the approximations introduced by the proxy model can lead to biased estimates. To avoid this problem and ensure a reliable uncertainty quantification, we propose to combine functional data analysis and machine learning to build error models that allow us to obtain an accurate prediction of the exact response without solving the exact model for all realizations. We build the relationship between proxy and exact model on a learning set of geostatistical realizations for which both exact and approximate solvers are run. Functional principal component analysis (FPCA) is used to investigate the variability in the two sets of curves and reduce the dimensionality of the problem while maximizing the retained information. Once obtained, the error model can be used to predict the exact response of any realization on the basis of the sole proxy response. This methodology is purpose-oriented as the error model is constructed directly for the quantity of interest, rather than for the state of the system. Also, the dimensionality reduction performed by FPCA allows a diagnostic of the quality of the error model to assess the informativeness of the learning set and the fidelity of the proxy to the exact model. The possibility of obtaining a prediction of the exact response for any newly generated realization suggests that the methodology can be effectively used beyond the context of uncertainty quantification, in particular for Bayesian inference and optimization.
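A minimal sketch of such an error model is shown below, assuming ordinary PCA on discretized curves as a stand-in for FPCA and a random-forest regression as the machine-learning component (the abstract does not prescribe a specific regressor). The proxy and exact curves are synthetic placeholders for the two solvers' outputs on the learning set.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor

# Hypothetical learning set: proxy and exact response curves (n_realizations x n_times),
# standing in for running both solvers on the same geostatistical realizations.
rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 100)
shift = rng.uniform(0.5, 1.5, size=(200, 1))
proxy = np.exp(-shift * t)                                  # coarse-solver output
exact = np.exp(-shift * t) + 0.05 * np.sin(6 * t) * shift   # fine-solver output

# PCA on discretized curves (stand-in for FPCA): reduce proxy responses and proxy errors
pca_proxy, pca_err = PCA(n_components=3), PCA(n_components=3)
scores_proxy = pca_proxy.fit_transform(proxy)
scores_err = pca_err.fit_transform(exact - proxy)           # model the proxy error directly

# Error model: map proxy scores to error scores, then correct any new proxy run
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(scores_proxy, scores_err)

new_proxy = np.exp(-1.1 * t)[None, :]                       # proxy response of a new realization
pred_err = pca_err.inverse_transform(model.predict(pca_proxy.transform(new_proxy)))
exact_prediction = new_proxy + pred_err                     # predicted exact response
```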
Abstract:
Uncertainty quantification (UQ) is both an old and a new concept. The current novelty lies in the interactions and synthesis of mathematical models, computer experiments, statistics, field/real experiments, and probability theory, with a particular emphasis on large-scale simulations by computer models. The challenges come not only from the complexity of the scientific questions, but also from the size of the data. The focus of this thesis is to provide statistical models that are scalable to the massive data produced in computer experiments and real experiments, through fast and robust statistical inference.
Chapter 2 provides a practical approach for simultaneously emulating/approximating a massive number of functions, with an application to hazard quantification for the Soufrière Hills volcano on the island of Montserrat. Chapter 3 discusses another problem with massive data, in which the number of observations of a function is large; an exact algorithm that is linear in time is developed for the problem of interpolating methylation levels. Chapters 4 and 5 both concern robust inference for these models. Chapter 4 provides a new robustness criterion for parameter estimation, and several inference approaches are shown to satisfy it. Chapter 5 develops a new prior that satisfies additional criteria and is therefore proposed for use in practice.
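Emulation of a computer model, as in Chapter 2, is commonly done with Gaussian process regression: a modest number of simulator runs trains a statistical surrogate that predicts the output, with uncertainty, anywhere in the input space. The inputs, placeholder simulator, and kernel choice below are hypothetical stand-ins, not the volcano hazard model itself.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical design: computer-model runs at a handful of input settings
# (stand-in for a single hazard-model output at one map location).
rng = np.random.default_rng(4)
X = rng.uniform(0.0, 10.0, size=(30, 2))              # e.g. flow volume and initiation angle
y = np.sin(X[:, 0]) + 0.1 * X[:, 1]                   # placeholder simulator output

kernel = ConstantKernel(1.0) * RBF(length_scale=[1.0, 1.0])
emulator = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

X_new = np.array([[3.0, 5.0]])
mean, std = emulator.predict(X_new, return_std=True)  # fast prediction with uncertainty
print(f"emulated output = {mean[0]:.3f} +/- {2 * std[0]:.3f}")
```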
Abstract:
This paper demonstrates the procedures for probabilistic assessment of a pesticide fate and transport model, PCPF-1, to elucidate the modeling uncertainty using the Monte Carlo technique. Sensitivity analyses are performed to investigate the influence of herbicide characteristics and related soil properties on model outputs using four popular rice herbicides: mefenacet, pretilachlor, bensulfuron-methyl and imazosulfuron. Uncertainty quantification showed that the simulated concentrations in paddy water varied more than those in paddy soil. This tendency decreased as the simulation proceeded to later periods but remained important for herbicides having either high solubility or a high 1st-order dissolution rate. The sensitivity analysis indicated that the PCPF-1 parameters requiring careful determination are primarily those involved in herbicide adsorption (the organic carbon content, the bulk density and the volumetric saturated water content), secondarily the parameters related to herbicide mass distribution between paddy water and soil (1st-order desorption and dissolution rates) and, lastly, those involving herbicide degradation. © Pesticide Science Society of Japan.
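A Monte Carlo sensitivity analysis of this type can be sketched by sampling the uncertain parameters, evaluating the model for each sample, and ranking the inputs by a rank-correlation measure. The parameter ranges and the toy concentration surrogate below are hypothetical placeholders for PCPF-1, chosen only to show the workflow.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(5)
n = 10_000

# Hypothetical input distributions (placeholders, not the PCPF-1 parameter values)
params = {
    "organic_carbon_frac": rng.uniform(0.01, 0.05, n),
    "bulk_density": rng.normal(1.1, 0.05, n),                  # g/cm^3
    "desorption_rate": rng.lognormal(np.log(0.05), 0.3, n),    # 1/day
    "degradation_rate": rng.lognormal(np.log(0.03), 0.3, n),   # 1/day
}

def paddy_water_conc(p):
    """Toy surrogate for the simulated herbicide concentration in paddy water."""
    sorption = 1.0 / (1.0 + 50.0 * p["organic_carbon_frac"] * p["bulk_density"])
    return sorption * np.exp(-10.0 * p["degradation_rate"]) * (1.0 + p["desorption_rate"])

output = paddy_water_conc(params)
for name, values in params.items():
    rho, _ = spearmanr(values, output)                         # rank-based sensitivity measure
    print(f"{name:22s} Spearman rho = {rho:+.2f}")
```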
Abstract:
Representation and quantification of uncertainty in climate change impact studies are difficult tasks. Several sources of uncertainty arise in studies of hydrologic impacts of climate change, such as those due to the choice of general circulation models (GCMs), scenarios and downscaling methods. Recently, much work has focused on uncertainty quantification and modeling in regional climate change impacts. In this paper, an uncertainty modeling framework is evaluated, which uses a generalized uncertainty measure to combine GCM, scenario and downscaling uncertainties. The Dempster-Shafer (D-S) evidence theory is used for representing and combining uncertainty from various sources. A significant advantage of the D-S framework over the traditional probabilistic approach is that it allows for the allocation of a probability mass to sets or intervals, and can hence handle both aleatory (stochastic) uncertainty and epistemic (subjective) uncertainty. This paper shows how the D-S theory can be used to represent beliefs in hypotheses such as hydrologic drought or wet conditions, describe uncertainty and ignorance in the system, and give a quantitative measurement of belief and plausibility in results. The D-S approach has been used in this work for information synthesis using various evidence combination rules having different conflict modeling approaches. A case study is presented for hydrologic drought prediction using downscaled streamflow in the Mahanadi River at Hirakud in Orissa, India. Projections of the n most likely monsoon streamflow sequences are obtained from a conditional random field (CRF) downscaling model, using an ensemble of three GCMs for three scenarios, which are converted to monsoon standardized streamflow index (SSFI-4) series. This range is used to specify the basic probability assignment (bpa) for a Dempster-Shafer structure, which represents the uncertainty associated with each of the SSFI-4 classifications. These uncertainties are then combined across GCMs and scenarios using various evidence combination rules given by the D-S theory. A Bayesian approach is also presented for this case study, which models the uncertainty in the projected frequencies of the SSFI-4 classifications by deriving a posterior distribution for the frequency of each classification, using an ensemble of GCMs and scenarios. Results from the D-S and Bayesian approaches are compared, and the relative merits of each approach are discussed. Both approaches show an increasing probability of extreme, severe and moderate droughts and a decreasing probability of normal and wet conditions in Orissa as a result of climate change. © 2010 Elsevier Ltd. All rights reserved.
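The core D-S operations described above (basic probability assignments, Dempster's rule of combination, and the belief/plausibility measures) can be sketched directly. The frame of discernment is simplified to three classes and the bpa values are illustrative, not the SSFI-4 numbers from the case study.

```python
from itertools import product

# Simplified frame of discernment for the streamflow classifications
THETA = frozenset({"drought", "normal", "wet"})

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability assignments."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb                       # mass falling on the empty set
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

def belief(m, hypothesis):
    return sum(w for s, w in m.items() if s <= hypothesis)

def plausibility(m, hypothesis):
    return sum(w for s, w in m.items() if s & hypothesis)

# Hypothetical bpas derived from two GCM/scenario ensembles (illustrative numbers)
m_gcm1 = {frozenset({"drought"}): 0.5, frozenset({"drought", "normal"}): 0.3, THETA: 0.2}
m_gcm2 = {frozenset({"drought"}): 0.4, frozenset({"normal", "wet"}): 0.4, THETA: 0.2}

m = dempster_combine(m_gcm1, m_gcm2)
h = frozenset({"drought"})
print(f"Bel(drought) = {belief(m, h):.3f}, Pl(drought) = {plausibility(m, h):.3f}")
```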
Abstract:
Vibration and acoustic analysis at higher frequencies faces two challenges: computing the response without using an excessive number of degrees of freedom, and quantifying its uncertainty due to small spatial variations in geometry, material properties and boundary conditions. Efficient models make use of the observation that when the response of a decoupled vibro-acoustic subsystem is sufficiently sensitive to uncertainty in such spatial variations, the local statistics of its natural frequencies and mode shapes saturate to universal probability distributions. This holds irrespective of the causes that underlie these spatial variations and thus leads to a nonparametric description of uncertainty. This work deals with the identification of uncertain parameters in such models by using experimental data. One of the difficulties is that both experimental errors and modeling errors, due to the nonparametric uncertainty that is inherent to the model type, are present. This is tackled by employing a Bayesian inference strategy. The prior probability distribution of the uncertain parameters is constructed using the maximum entropy principle. The likelihood function that is subsequently computed takes the experimental information, the experimental errors and the modeling errors into account. The posterior probability distribution, which is computed with the Markov Chain Monte Carlo method, provides a full uncertainty quantification of the identified parameters, and indicates how well their uncertainty is reduced, with respect to the prior information, by the experimental data. © 2013 Taylor & Francis Group, London.
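The Bayesian identification workflow described above can be sketched with a random-walk Metropolis sampler: a bounded maximum-entropy (uniform) prior, a Gaussian likelihood whose standard deviation lumps experimental and modeling errors, and a posterior explored by MCMC. The forward model, measurements and noise level below are hypothetical placeholders, not the vibro-acoustic subsystem model used in the work.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical forward model: predicted natural frequencies as a function of a
# single stiffness-like parameter theta (placeholder, not a real subsystem model).
def predicted_freqs(theta):
    return np.array([100.0, 180.0, 260.0]) * np.sqrt(theta)

measured = np.array([103.0, 176.0, 265.0])     # "experimental" data (illustrative)
sigma = 5.0                                    # lumps experimental and modeling errors

def log_posterior(theta):
    if not (0.5 < theta < 2.0):                # bounded support: max-entropy prior is uniform
        return -np.inf
    resid = measured - predicted_freqs(theta)
    return -0.5 * np.sum((resid / sigma) ** 2) # Gaussian likelihood, flat prior inside bounds

# Random-walk Metropolis sampling of the posterior
theta, lp = 1.0, log_posterior(1.0)
chain = []
for _ in range(20_000):
    prop = theta + 0.05 * rng.standard_normal()
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
        theta, lp = prop, lp_prop
    chain.append(theta)

burned = np.array(chain[5_000:])               # discard burn-in
print(f"posterior mean = {burned.mean():.3f}, 95% CI = "
      f"({np.percentile(burned, 2.5):.3f}, {np.percentile(burned, 97.5):.3f})")
```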