997 results for uncertainty quantification


Relevance: 100.00%

Abstract:

This paper offers an uncertainty quantification (UQ) study applied to the performance analysis of the ERCOFTAC conical diffuser. A deterministic CFD solver is coupled with a non-statistical generalised Polynomial Chaos (gPC) representation based on a pseudo-spectral projection method. Such an approach has the advantage of not requiring any modification of the CFD code for the propagation of random disturbances in the aerodynamic field. The stochastic results highlight the importance of the inlet velocity uncertainties on the pressure recovery, both alone and when coupled with a second uncertain variable. From a theoretical point of view, we investigate the possibility of building our gPC representation on arbitrary grids, thus increasing the flexibility of the stochastic framework.
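As a rough illustration of the non-intrusive idea, the sketch below projects the output of a black-box model onto a Legendre gPC basis for a single uniform random input using Gauss-Legendre quadrature; the `model` function is an illustrative stand-in for the deterministic CFD solver, not the paper's actual code.

```python
import numpy as np
from numpy.polynomial import legendre as L

def model(xi):
    # Stand-in for one run of the deterministic CFD solver: maps the
    # random input (e.g., a perturbed inlet velocity) to a scalar
    # output such as the pressure recovery. Purely illustrative.
    return np.exp(0.3 * xi) + 0.1 * xi**2

order = 5                                 # highest gPC polynomial degree
nodes, weights = L.leggauss(order + 1)    # Gauss-Legendre quadrature on [-1, 1]

# Pseudo-spectral projection: c_k = (2k+1) * E[f(xi) P_k(xi)],
# with xi ~ Uniform(-1, 1), whose density is 1/2 on [-1, 1].
samples = model(nodes)                    # only (order+1) solver runs needed
coeffs = np.array([
    (2 * k + 1) * 0.5 * np.sum(weights * samples * L.Legendre.basis(k)(nodes))
    for k in range(order + 1)
])

# Mean and variance follow from Legendre orthogonality: E[P_k^2] = 1/(2k+1)
mean = coeffs[0]
variance = np.sum(coeffs[1:] ** 2 / (2 * np.arange(1, order + 1) + 1))
print(f"gPC mean = {mean:.4f}, variance = {variance:.6f}")
```

Because the solver is only sampled at the quadrature nodes, no modification of the CFD code is required, which is precisely the advantage claimed above.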

Relevance: 100.00%

Abstract:

The effect of structural and aerodynamic uncertainties on the performance predictions of a helicopter is investigated. An aerodynamic model based on blade element and momentum theory is used to predict the helicopter performance. The aeroelastic parameters, such as blade chord, rotor radius, two-dimensional lift-curve slope, blade profile drag coefficient, rotor angular velocity, blade pitch angle, and blade twist rate per radius of the rotor, are considered as random variables. The propagation of these uncertainties to the performance parameters, such as thrust coefficient and power coefficient, is studied using Monte Carlo simulations. The simulations are performed with 100,000 samples of the structural and aerodynamic uncertain variables, with coefficients of variation ranging from 1 to 5%. The scatter in power predictions in hover, axial climb, and forward flight for the untwisted and linearly twisted blades is studied. It is found that the helicopter can require about 20-25% excess power relative to the deterministic predictions due to uncertainties.
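A minimal sketch of this kind of Monte Carlo propagation is shown below, using the classical momentum-theory expression for the hover power coefficient, C_P = kappa * C_T^{3/2} / sqrt(2) + sigma * c_d0 / 8, rather than the paper's full blade element model; all nominal values are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000          # sample count used in the paper

# Nominal rotor parameters (illustrative values, not from the paper)
nominal = {"C_T": 0.008, "sigma": 0.1, "c_d0": 0.01, "kappa": 1.15}
cov = 0.05           # 5% coefficient of variation, upper end of the studied range

# Draw independent Gaussian perturbations of the uncertain inputs
C_T   = rng.normal(nominal["C_T"],   cov * nominal["C_T"],   N)
sigma = rng.normal(nominal["sigma"], cov * nominal["sigma"], N)
c_d0  = rng.normal(nominal["c_d0"],  cov * nominal["c_d0"],  N)

# Hover power coefficient: induced power (momentum theory, with induced
# power factor kappa) plus profile power
C_P = nominal["kappa"] * C_T**1.5 / np.sqrt(2.0) + sigma * c_d0 / 8.0

C_P_det = (nominal["kappa"] * nominal["C_T"]**1.5 / np.sqrt(2.0)
           + nominal["sigma"] * nominal["c_d0"] / 8.0)
excess = (np.percentile(C_P, 99) - C_P_det) / C_P_det
print(f"99th-percentile power excess over deterministic: {100 * excess:.1f}%")
```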

Relevance: 100.00%

Abstract:

In this paper we consider the problem of guided wave scattering from delamination in laminated composites, and further the problem of estimating delamination size and layer-wise location from guided wave measurements. Damage location and region/size can be estimated from the time of flight and the wave packet spread, whereas depth information can be obtained from wavenumber modulation in the carrier packet. The key challenge is that this information is highly sensitive to various uncertainties. Variation in reflected and transmitted wave amplitude in a bar due to boundary/interface uncertainty is studied to illustrate this effect. The effect of uncertainty in material parameters on the time of flight is estimated for longitudinal wave propagation. To evaluate the effect of uncertainty in delamination detection, we employ a time domain spectral finite element (tSFEM) scheme in which wave propagation is modeled using higher-order interpolation with shape functions that have spectral convergence properties. A laminated composite beam with layer-wise placement of delamination is considered in the simulation. Scattering due to the presence of delamination is analyzed. For a single delamination, two identical waveforms are created at the two fronts of the delamination, whereas waves in the two sub-laminates create two independent waveforms with different wavelengths. Scattering due to multiple delaminations in a composite beam is also studied.
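The sensitivity of the time of flight to material uncertainty can be illustrated with a one-dimensional sketch: for longitudinal waves in a bar the wave speed is c = sqrt(E/rho), so sampling the modulus and density directly yields the scatter in arrival time. The nominal values below are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50_000

# Illustrative nominal properties of a composite bar (not the paper's values)
L_bar = 1.0               # propagation distance [m]
E0, rho0 = 70e9, 1600.0   # effective axial modulus [Pa], density [kg/m^3]
cov = 0.03                # 3% coefficient of variation on material parameters

E   = rng.normal(E0,   cov * E0,   N)
rho = rng.normal(rho0, cov * rho0, N)

# Longitudinal (rod) wave speed c = sqrt(E/rho); time of flight t = L/c
tof = L_bar / np.sqrt(E / rho)

print(f"mean ToF = {1e6 * tof.mean():.2f} us, "
      f"std = {1e6 * tof.std():.3f} us "
      f"({100 * tof.std() / tof.mean():.2f}% scatter)")
```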

Relevance: 100.00%

Abstract:

Many engineering applications face the problem of bounding the expected value of a quantity of interest (performance, risk, cost, etc.) that depends on stochastic uncertainties whose probability distribution is not known exactly. Optimal uncertainty quantification (OUQ) is a framework that aims at obtaining the best bound in these situations by explicitly incorporating available information about the distribution. Unfortunately, this often leads to non-convex optimization problems that are numerically expensive to solve.
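In a common way of stating the problem (notation here is illustrative, since conventions vary across the OUQ literature), the task is to compute the tightest bound over all probability measures consistent with the available information:

$$\overline{U}(\mathcal{A}) \;=\; \sup_{\mu \in \mathcal{A}} \mathbb{E}_{\mu}\!\big[\, q(X) \,\big],$$

where \(q\) is the quantity of interest and \(\mathcal{A}\) is the admissible set encoding the known constraints (for example, support bounds and moment inequalities). This supremum is in general a non-convex, infinite-dimensional optimization problem, which is what makes efficient algorithms necessary.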

This thesis focuses on efficient numerical algorithms for OUQ problems. It begins by investigating several classes of OUQ problems that can be reformulated as convex optimization problems. Conditions on the objective function and information constraints under which a convex formulation exists are presented. Since the size of the optimization problem can become quite large, solutions for scaling up are also discussed. Finally, the capability of analyzing a practical system through such convex formulations is demonstrated by a numerical example of energy storage placement in power grids.

When an equivalent convex formulation is unavailable, it is possible to find a convex problem that provides a meaningful bound for the original problem, also known as a convex relaxation. As an example, the thesis investigates the setting used in Hoeffding's inequality. The naive formulation requires solving a collection of non-convex polynomial optimization problems whose number grows doubly exponentially. After structures such as symmetry are exploited, it is shown that both the number and the size of the polynomial optimization problems can be reduced significantly. Each polynomial optimization problem is then bounded by its convex relaxation using sums-of-squares. These bounds are found to be tight in all the numerical examples tested in the thesis and are significantly better than Hoeffding's bounds.
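For reference, the classical bound in question is Hoeffding's inequality: if \(X_1, \dots, X_n\) are independent with \(X_i \in [a_i, b_i]\) and \(S_n = \sum_{i} X_i\), then

$$\Pr\big(S_n - \mathbb{E}[S_n] \ge t\big) \;\le\; \exp\!\left( -\frac{2t^2}{\sum_{i=1}^{n}(b_i - a_i)^2} \right),$$

so any sums-of-squares bound computed for the same admissible set can be compared directly against this closed-form expression.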

Relevance: 100.00%

Abstract:

Operational uncertainties such as throttle excursions, varying inlet conditions and geometry changes lead to variability in compressor performance. In this work, the main operational uncertainties inherent in a transonic axial compressor are quantified to determine their effect on performance. These uncertainties include the effects of inlet distortion, metal expansion, flow leakages and blade roughness. A 3D, validated RANS model of the compressor is utilized to simulate these uncertainties and quantify their effect on polytropic efficiency and pressure ratio. To propagate them, stochastic collocation and sparse pseudospectral approximations are used. We demonstrate that lower-order approximations are sufficient, as the effects of these uncertainties are essentially linear. Results for epistemic uncertainties in the form of meshing methodologies are also presented. Finally, the uncertainties considered are ranked in order of their effect on efficiency loss. © 2012 AIAA.
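The collocation idea can be sketched in one dimension: the expensive model is evaluated only at Gauss quadrature nodes of the input distribution, and the output statistics follow by quadrature. The `efficiency` function below is an illustrative stand-in for a RANS run; note how the statistics stabilize at very low order when the response is nearly linear, consistent with the observation above.

```python
import numpy as np

def efficiency(roughness):
    # Stand-in for one 3-D RANS evaluation: polytropic efficiency as a
    # function of normalized blade roughness. Illustrative response only.
    return 0.91 - 0.015 * roughness - 0.002 * roughness**2

# Gauss-Hermite collocation for a Gaussian uncertain input:
# roughness ~ N(mu, sd^2); hermgauss gives nodes/weights for weight exp(-x^2)
mu, sd = 1.0, 0.2
for n_pts in (2, 3, 5):
    x, w = np.polynomial.hermite.hermgauss(n_pts)
    nodes = mu + np.sqrt(2.0) * sd * x          # map nodes to N(mu, sd^2)
    vals = efficiency(nodes)
    mean = np.sum(w * vals) / np.sqrt(np.pi)
    var = np.sum(w * vals**2) / np.sqrt(np.pi) - mean**2
    print(f"{n_pts} collocation points: mean={mean:.5f}, std={np.sqrt(var):.5f}")
```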

Relevance: 100.00%

Abstract:

Accurate forecasting of wind farm power generation is essential for the successful operation and management of wind farms and for minimizing the risks associated with their integration into energy systems. However, due to the inherent intermittency of wind, wind power forecasts are highly prone to error and often far from perfect. The purpose of this paper is to develop statistical methods for quantifying the uncertainties associated with wind power generation forecasts. Prediction intervals (PIs) with a prescribed confidence level are constructed using the delta and bootstrap methods for neural network forecasts. The moving block bootstrap method is applied to preserve the correlation structure in wind power observations. The effectiveness and efficiency of these two methods for uncertainty quantification are examined using two-month datasets taken from a wind farm in Australia. It is demonstrated that while all constructed PIs are theoretically valid, bootstrap PIs are more informative than delta PIs, and are therefore more useful for decision-making.
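A minimal sketch of the moving block bootstrap step is given below: overlapping blocks of historical forecast residuals are resampled and added to the point forecasts, and the PI bounds are taken as percentiles of the simulated paths. The function and parameter names are illustrative, not from the paper.

```python
import numpy as np

def moving_block_bootstrap_pi(point_forecast, residuals, block_len=24,
                              n_boot=1000, confidence=0.90, rng=None):
    """Percentile PIs from a moving block bootstrap of forecast residuals.

    A simplified sketch: the paper's PIs wrap neural network forecasts,
    but any 1-D arrays of point forecasts and historical residuals will
    do here. Overlapping blocks preserve short-range autocorrelation in
    the wind power series.
    """
    rng = rng or np.random.default_rng(0)
    h = len(point_forecast)
    n_blocks = int(np.ceil(h / block_len))
    starts_max = len(residuals) - block_len   # last admissible block start
    sims = np.empty((n_boot, h))
    for b in range(n_boot):
        starts = rng.integers(0, starts_max + 1, n_blocks)
        path = np.concatenate([residuals[s:s + block_len] for s in starts])[:h]
        sims[b] = point_forecast + path
    alpha = 1.0 - confidence
    lower = np.percentile(sims, 100 * alpha / 2, axis=0)
    upper = np.percentile(sims, 100 * (1 - alpha / 2), axis=0)
    return lower, upper
```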

Relevance: 100.00%

Abstract:

This paper proposes an innovative optimized parametric method for the construction of prediction intervals (PIs) for uncertainty quantification. The mean-variance estimation (MVE) method employs two separate neural network (NN) models to estimate the mean and variance of the targets. A new training method is developed in this study that adjusts the parameters of the NN models through minimization of a PI-based cost function. A simulated annealing method is applied to minimize this nonlinear, non-differentiable cost function. The performance of the proposed method for PI construction is examined using monthly datasets taken from a wind farm in Australia. PIs for the wind farm power generation are constructed with five confidence levels between 50% and 90%. The results indicate that valid PIs constructed using the optimized MVE method are of considerably higher quality than traditional MVE-based PIs.
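A hedged sketch of what such a PI-based cost might look like is given below, combining the two standard PI quality measures: coverage (PICP) and normalized width (PINAW). The exact functional form and the penalty constant used in the paper may differ; the interval bounds themselves would come from the two NN outputs as mu(x) ± z·sigma(x).

```python
import numpy as np

def pi_cost(y, lower, upper, confidence=0.90, eta=50.0):
    """Coverage-width cost for candidate prediction intervals.

    A sketch of the kind of PI-based objective the paper minimizes:
    it rewards narrow intervals (small normalized average width) but
    applies an exponential penalty whenever the empirical coverage
    (PICP) falls below the nominal confidence level. The form and the
    penalty constant `eta` are illustrative assumptions.
    """
    covered = (y >= lower) & (y <= upper)
    picp = covered.mean()                                  # PI coverage probability
    pinaw = (upper - lower).mean() / (y.max() - y.min())   # normalized width
    penalty = np.exp(-eta * (picp - confidence)) if picp < confidence else 0.0
    return pinaw * (1.0 + penalty)
```

In training, simulated annealing would perturb the NN weights, rebuild the intervals, and accept or reject each move according to the change in this cost.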

Relevance: 100.00%

Abstract:

An optimized statistical technique for the rapid development of reliable prediction intervals (PIs) is presented in this study. The mean-variance estimation (MVE) technique is employed for quantification of the uncertainties associated with wind power predictions. In this method, two separate neural network models are used to estimate the wind power generation and its variance. A novel PI-based training algorithm is also presented to enhance the performance of the MVE method and improve the quality of the PIs. For an in-depth analysis, comprehensive experiments are conducted with seasonal datasets taken from three geographically dispersed wind farms in Australia. PIs are constructed for five confidence levels between 50% and 90%. The obtained results show that while both traditional and optimized PIs are theoretically valid, the optimized PIs are much more informative than the traditional MVE PIs. The informativeness of these PIs paves the way for their application in the trouble-free operation and smooth integration of wind farms into energy systems. © 2014 Elsevier Ltd. All rights reserved.

Relevance: 100.00%

Abstract:

Approximate models (proxies) can be employed to reduce the computational cost of estimating uncertainty. The price to pay is that the approximations introduced by the proxy model can lead to biased estimates. To avoid this problem and ensure reliable uncertainty quantification, we propose to combine functional data analysis and machine learning to build error models that allow us to obtain an accurate prediction of the exact response without solving the exact model for all realizations. We build the relationship between the proxy and the exact model on a learning set of geostatistical realizations for which both the exact and approximate solvers are run. Functional principal component analysis (FPCA) is used to investigate the variability in the two sets of curves and to reduce the dimensionality of the problem while maximizing the retained information. Once obtained, the error model can be used to predict the exact response of any realization on the basis of the proxy response alone. This methodology is purpose-oriented, as the error model is constructed directly for the quantity of interest rather than for the state of the system. Moreover, the dimensionality reduction performed by FPCA provides a diagnostic of the quality of the error model, assessing the informativeness of the learning set and the fidelity of the proxy to the exact model. The possibility of predicting the exact response for any newly generated realization suggests that the methodology can be used effectively beyond the context of uncertainty quantification, in particular for Bayesian inference and optimization.
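The pipeline can be sketched with plain linear algebra: principal components (a discrete stand-in for FPCA) reduce both curve ensembles, and a regression maps proxy scores to exact scores. Here a linear least-squares map replaces the paper's machine learning regressor; the names and the choice of three components are illustrative assumptions.

```python
import numpy as np

def fit_error_model(proxy_curves, exact_curves, n_comp=3):
    """Learn a map from proxy responses to exact responses.

    A linear sketch of the paper's idea. Rows of each array are
    realizations; columns are time/space samples of the response curve.
    """
    mu_p, mu_e = proxy_curves.mean(0), exact_curves.mean(0)
    # Principal components via SVD of the centered ensembles
    _, _, Vp = np.linalg.svd(proxy_curves - mu_p, full_matrices=False)
    _, _, Ve = np.linalg.svd(exact_curves - mu_e, full_matrices=False)
    Vp, Ve = Vp[:n_comp], Ve[:n_comp]
    scores_p = (proxy_curves - mu_p) @ Vp.T
    scores_e = (exact_curves - mu_e) @ Ve.T
    # Linear regression (with intercept) from proxy scores to exact scores
    A = np.hstack([scores_p, np.ones((len(scores_p), 1))])
    B, *_ = np.linalg.lstsq(A, scores_e, rcond=None)

    def predict(new_proxy_curve):
        # Exact-response prediction from the proxy response alone
        s = (new_proxy_curve - mu_p) @ Vp.T
        return mu_e + (np.append(s, 1.0) @ B) @ Ve
    return predict
```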

Relevance: 100.00%

Abstract:

Uncertainty quantification (UQ) is both an old and a new concept. The current novelty lies in the interactions and synthesis of mathematical models, computer experiments, statistics, field/real experiments, and probability theory, with a particular emphasis on large-scale simulations by computer models. The challenges come not only from the complexity of the scientific questions, but also from the sheer volume of the information. The focus of this thesis is to provide statistical models that are scalable to the massive data produced in computer experiments and real experiments, through fast and robust statistical inference.

Chapter 2 provides a practical approach for simultaneously emulating/approximating a massive number of functions, with an application to hazard quantification for the Soufrière Hills volcano on the island of Montserrat. Chapter 3 discusses another problem with massive data, in which the number of observations of a function is large; an exact algorithm that is linear in time is developed for the interpolation of methylation levels. Chapters 4 and 5 both concern robust inference for such models. Chapter 4 proposes a new robustness criterion for parameter estimation, and several inference procedures are shown to satisfy it. Chapter 5 develops a new prior that satisfies additional criteria and is therefore proposed for use in practice.

Relevance: 70.00%

Abstract:

This paper demonstrates procedures for the probabilistic assessment of a pesticide fate and transport model, PCPF-1, to elucidate the modeling uncertainty using the Monte Carlo technique. Sensitivity analyses are performed to investigate the influence of herbicide characteristics and related soil properties on model outputs using four popular rice herbicides: mefenacet, pretilachlor, bensulfuron-methyl and imazosulfuron. Uncertainty quantification showed that the simulated concentrations in paddy water varied more than those in paddy soil. This tendency decreased as the simulation proceeded to later periods, but remained important for herbicides having either a high solubility or a high 1st-order dissolution rate. The sensitivity analysis indicated that the PCPF-1 parameters requiring careful determination are primarily those involved in herbicide adsorption (the organic carbon content, the bulk density and the volumetric saturated water content), secondarily the parameters governing herbicide mass distribution between paddy water and soil (1st-order desorption and dissolution rates), and lastly those involving herbicide degradation. © Pesticide Science Society of Japan.
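A hedged sketch of such a Monte Carlo sensitivity analysis appears below; the `paddy_water_conc` function is a toy stand-in, not the actual PCPF-1 equations, and the parameter ranges are illustrative. Rank correlations between the sampled inputs and the simulated concentration give a simple sensitivity ranking.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10_000

# Illustrative parameter ranges (the real PCPF-1 inputs and values differ)
params = {
    "f_oc":   rng.uniform(0.02, 0.06, N),   # organic carbon content [-]
    "rho_b":  rng.uniform(0.9, 1.3, N),     # bulk density [g/cm^3]
    "k_diss": rng.uniform(0.05, 0.5, N),    # 1st-order dissolution rate [1/d]
    "k_deg":  rng.uniform(0.01, 0.1, N),    # degradation rate [1/d]
}

def paddy_water_conc(p, t=7.0):
    # Toy stand-in for PCPF-1: sorption (f_oc * rho_b) lowers the water-phase
    # fraction, dissolution feeds the water phase over time, and degradation
    # removes mass. Not the actual model equations.
    sorption = 1.0 / (1.0 + 50.0 * p["f_oc"] * p["rho_b"])
    release = 1.0 - np.exp(-p["k_diss"] * t)
    return sorption * release * np.exp(-p["k_deg"] * t)

c = paddy_water_conc(params)
ranks = lambda x: np.argsort(np.argsort(x))   # ranks for Spearman correlation
for name, x in params.items():
    r = np.corrcoef(ranks(x), ranks(c))[0, 1]
    print(f"{name:7s} rank correlation with concentration: {r:+.2f}")
```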