Abstract:
Estimation of soil parameters by inverse modeling using observations of either surface soil moisture or crop variables has been successfully attempted in many studies, but difficulties in estimating root zone properties arise when heterogeneous layered soils are considered. The objective of this study was to explore the potential of combining observations of surface soil moisture and crop variables - leaf area index (LAI) and above-ground biomass - for estimating soil parameters (water holding capacity and soil depth) in a two-layered soil system using inversion of the crop model STICS. This was performed using the GLUE method on a synthetic data set covering varying soil types and on a data set from a field experiment carried out in two maize plots in South India. The main results were (i) the combination of surface soil moisture and above-ground biomass provided consistently good estimates, with small uncertainty, of the soil properties of the two soil layers for a wide range of soil parameter values, in both the synthetic and the field experiment; (ii) above-ground biomass gave relatively better estimates and lower uncertainty than LAI when combined with surface soil moisture, especially for estimating soil depth; (iii) surface soil moisture data, either alone or combined with crop variables, provided a very good estimate of the water holding capacity of the upper soil layer with very small uncertainty, whereas surface soil moisture alone gave very poor estimates of the soil properties of the deeper layer; and (iv) crop variables alone (either above-ground biomass or LAI) provided reasonable estimates of the deeper layer properties, depending on the soil type, but poor estimates of the first layer properties. The robustness of combining observations of surface soil moisture and above-ground biomass for estimating two-layer soil properties, demonstrated here using both synthetic and field experiments, now needs to be tested over a broader range of climatic conditions and crop types to assess its potential for spatial applications. (C) 2012 Elsevier B.V. All rights reserved.
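To make the inversion procedure concrete, the following is a minimal sketch of a GLUE-style parameter estimation loop in Python. The forward model, priors, behavioural cut-off, and all numerical values are invented stand-ins for the STICS simulations described above, not the authors' actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_model(theta, t):
    # Hypothetical stand-in for a STICS-like simulation: theta = (water
    # holding capacity, soil depth) drives a simple soil-moisture recession.
    whc, depth = theta
    return whc * np.exp(-t / depth)

t = np.linspace(0.0, 10.0, 50)
theta_true = (0.3, 4.0)
obs = forward_model(theta_true, t) + rng.normal(0, 0.01, t.size)

# GLUE: sample parameters from uniform priors, score each set against the
# observations, and keep a "behavioural" subset (here, the best 2% by SSE).
n = 10_000
thetas = np.column_stack([rng.uniform(0.1, 0.5, n),    # water holding capacity
                          rng.uniform(1.0, 8.0, n)])   # soil depth
sse = np.array([np.sum((forward_model(th, t) - obs) ** 2) for th in thetas])
behavioural = thetas[sse < np.quantile(sse, 0.02)]

# Parameter estimates and uncertainty from the behavioural ensemble.
for name, col in zip(("WHC", "depth"), behavioural.T):
    print(f"{name}: {col.mean():.3f} +/- {col.std():.3f}")
```

The spread of the behavioural ensemble plays the role of the parameter uncertainty reported in the study; adding a second observation type amounts to combining two such likelihood measures before the cut.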
Abstract:
Wavelet coefficients based on spatial wavelets are used as damage indicators to identify both the location and the size of damage in a laminated composite beam with localized matrix cracks. A finite element model of the composite beam is used in conjunction with a matrix-crack-based damage model to simulate the damaged composite beam structure. The modes of vibration of the beam are analyzed using the wavelet transform in order to identify the location and extent of the damage by sensing the local perturbations at the damage locations. The location of the damage is identified by a sudden change in the spatial distribution of the wavelet coefficients. Monte Carlo simulations (MCS) are used to investigate the effect of ply-level uncertainty in composite material properties, such as ply longitudinal stiffness, transverse stiffness, shear modulus and Poisson's ratio, on the damage detection parameter, the wavelet coefficient. In this study, numerical simulations are performed for single and multiple damage cases. It is observed that spatial wavelets can be used as a reliable damage detection tool for composite beams with localized matrix cracks, which can result from low-velocity impact damage.
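The following sketch illustrates the detection idea in Python: a local kink is superimposed on a smooth mode shape, the continuous wavelet transform of the (damaged minus pristine) residual is computed, and the peak coefficient is taken as the damage location inside a Monte Carlo loop over an assumed ply stiffness uncertainty. The mode shape, damage model, wavelet scale, and uncertainty level are all illustrative assumptions, not the paper's finite element model.

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 500)     # normalised beam coordinate
damage_loc = 0.6

# Monte Carlo over an assumed ply-level stiffness uncertainty (5% CoV).
found = []
for _ in range(200):
    stiffness = rng.normal(1.0, 0.05)
    healthy = np.sin(np.pi * x)                          # first bending mode
    kink = 0.01 / stiffness * np.maximum(0.0, x - damage_loc)
    damaged = healthy + kink + rng.normal(0, 1e-4, x.size)
    residual = damaged - healthy                         # local perturbation only
    coeffs, _ = pywt.cwt(residual, scales=[16], wavelet="gaus2")
    detail = np.abs(coeffs[0])
    detail[:30] = 0.0                                    # mask edge effects
    detail[-30:] = 0.0
    found.append(x[np.argmax(detail)])                   # sudden change -> damage

print("damage located at %.3f +/- %.3f (true: %.2f)"
      % (np.mean(found), np.std(found), damage_loc))
```

The scatter of the detected locations across the Monte Carlo samples is exactly the kind of sensitivity of the wavelet-coefficient indicator to material uncertainty that the study quantifies.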
Abstract:
The uncertainty associated with a rainfall-runoff and non-point source loading (NPS) model can be attributed to both the parameterization and the model structure. An interesting implication of the areal nature of NPS models is the direct relationship between model structure (i.e. sub-watershed size) and sample size for the parameterization of spatial data. The approach of this research is to find the structural limitations in scale for the use of the conceptual NPS model, and then to examine the scales at which suitable stochastic depictions of key parameter sets can be generated. The regions where these scales overlap are optimal (and possibly the only suitable regions) for conducting meaningful stochastic analysis with a given NPS model. Previous work has sought to find optimal scales for deterministic analysis (where, in fact, calibration can be adjusted to compensate for sub-optimal scale selection); however, the analysis of stochastic suitability and of the uncertainty associated with both the conceptual model and the parameter set, as presented here, is novel, as is the strategy of delineating a watershed based on the uncertainty distribution. The results of this paper demonstrate a narrow range of acceptable model structures for stochastic analysis in the chosen NPS model. In the case examined, the uncertainties associated with parameterization and parameter sensitivity are shown to be outweighed in significance by those resulting from structural and conceptual decisions. © 2011 Copyright IAHS Press.
Abstract:
Coupled hydrology and water quality models are an important tool for understanding and managing surface water and watershed areas. Such problems are generally subject to substantial uncertainty in parameters, process understanding, and data. Component models, drawing on different data, concepts, and structures, are affected differently by each of these uncertain elements. This paper proposes a framework wherein the response of component models to their respective uncertain elements can be quantified and assessed, using a hydrological model and a water quality model as two exemplars. The resulting assessments can be used to identify model coupling strategies that permit more appropriate use and calibration of the individual models, and a better overall coupled model response. One key finding was that an approximate balance of water quality and hydrological model responses can be obtained using both the QUAL2E and Mike11 water quality models. The balance point, however, does not support a particularly narrow surface response (or stringent calibration criteria) with respect to the water quality calibration data, at least in the case examined here. Additionally, it is clear from the results presented that the structural source of uncertainty is at least as significant as parameter-based uncertainties in areal models. © 2012 John Wiley & Sons, Ltd.
Abstract:
The problem of phase uncertainty arising in the calibration of test fixtures is investigated in this paper. It is shown that the problem exists no matter what kinds of calibration standards are used. It is also found that there is no need to determine the individual S-parameters of the test fixtures. In order to eliminate the problem of phase uncertainty, three different precise (known) reflection standards, or one known reflection standard plus one known transmission standard, should be used to calibrate symmetrical test fixtures. For the asymmetrical cases, three known standards, including at least one transmission standard, should be used. The thru-open-match (TOM) and thru-short-match (TSM) techniques are the simplest methods, and they have no bandwidth limitation. When the standards are imprecise (unknown), it is recommended to use any suitable technique, such as the thru-reflect-line, line-reflect-line, thru-short-delay, thru-open-delay, line-reflect-match, line-reflect-reflect-match, or multiline methods, to accurately determine the values of the required calibration terms and, in addition, to use the TOM or TSM method with the same imprecise standards to resolve the phase uncertainty.
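As background to the calibration terms discussed above, here is a minimal sketch of a one-port, three-standard calibration in Python, using the classic bilinear error-box relation. The error terms and standards are synthetic, and the sketch does not itself address the two-port phase-uncertainty issue the paper resolves.

```python
import numpy as np

# One-port error-box model: M = e00 + e01*e10*G / (1 - e11*G), with G the true
# reflection coefficient of a standard and M its raw measurement. Rearranging,
# M = a + (G*M)*b - G*c with a = e00, b = e11, c = e00*e11 - e01*e10,
# which is linear in the three error terms.
gammas = np.array([1.0 + 0j, -1.0 + 0j, 0.0 + 0j])   # ideal open, short, match

def measure(g, e00=0.05 + 0.02j, e11=0.10 - 0.05j, e01e10=0.90 + 0.10j):
    # Synthetic raw measurement through an invented error box.
    return e00 + e01e10 * g / (1 - e11 * g)

m = measure(gammas)
A = np.column_stack([np.ones(3), gammas * m, -gammas])
a, b, c = np.linalg.solve(A, m)                      # solve for the error terms
print("e00 =", a, " e11 =", b, " e01*e10 =", a * b - c)

# De-embed an unknown DUT from its raw measurement by inverting the relation.
g_dut = 0.3 + 0.2j
m_dut = measure(g_dut)
print("recovered DUT gamma:", (m_dut - a) / (b * m_dut - c))
```

Three known standards fully determine the three error terms, which mirrors the paper's point that three precise standards suffice to remove the remaining ambiguity.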
Abstract:
In this paper, a disturbance controller is designed to make a robotic system behave as a decoupled linear system, according to the concept of an internal model. Based on this linear system, the paper presents an iterative learning control algorithm for robotic manipulators. A sufficient condition for convergence is provided. The selection of the algorithm's parameter values is simple, and the convergence condition is easy to meet. The simulation results demonstrate the effectiveness of the algorithm.
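A minimal sketch of a P-type iterative learning control update on a toy scalar linear system is shown below; the plant, learning gain, and trajectory are invented for illustration, and only the generic convergence condition |1 - gamma*b| < 1 is used, not the paper's specific algorithm.

```python
import numpy as np

# P-type iterative learning control on a toy discrete linear system
# x[k+1] = a*x[k] + b*u[k], y[k] = x[k]. Values and gains are illustrative.
a, b = 0.9, 0.5
N = 50
t = np.arange(N)
y_ref = np.sin(2 * np.pi * t / N)          # desired trajectory

def run_trial(u):
    x, y = 0.0, np.zeros(N)
    for k in range(N):
        y[k] = x
        x = a * x + b * u[k]
    return y

u = np.zeros(N)
gamma = 0.8                                # learning gain; |1 - gamma*b| < 1
for trial in range(30):
    e = y_ref - run_trial(u)
    # ILC update: feed the previous trial's error forward one step.
    u[:-1] += gamma * e[1:]

print("final tracking RMS error:", np.sqrt(np.mean((y_ref - run_trial(u)) ** 2)))
```

With |1 - gamma*b| = 0.6, the trial-to-trial error contracts geometrically, which is the kind of sufficient convergence condition the abstract refers to.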
Abstract:
Wind energy is the energy source that contributes most to the renewable energy mix of European countries. While there are good wind resources throughout Europe, the intermittency of the wind represents a major problem for the deployment of wind energy into the electricity networks. To ensure grid security, a Transmission System Operator today needs, for each kilowatt of wind energy, either an equal amount of spinning reserve or a forecasting system that can predict the amount of energy that will be produced from wind over a period of 1 to 48 hours. In the range from 5 m/s to 15 m/s, a wind turbine's production increases with the third power of the wind speed. For this reason, a Transmission System Operator requires an accuracy of 1 m/s for wind speed forecasts in this wind speed range. Forecasting wind energy with a numerical weather prediction model in this context forms the background of this work. The author's goal was to present a pragmatic solution to this specific problem in the "real world". This work therefore has to be seen in a technical context and hence neither provides nor intends to provide a general overview of the benefits and drawbacks of wind energy as a renewable energy source. In the first part of this work, the accuracy requirements of the energy sector for wind speed predictions from numerical weather prediction models are described and analysed. A unique set of numerical experiments was carried out in collaboration with the Danish Meteorological Institute to investigate the forecast quality of an operational numerical weather prediction model for this purpose. The results of this investigation revealed that the accuracy requirements for wind speed and wind power forecasts from today's numerical weather prediction models can only be met at certain times. This means that the uncertainty of the forecast quality becomes a parameter as important as the wind speed and wind power themselves. Quantifying the uncertainty of a forecast valid for tomorrow requires an ensemble of forecasts. In the second part of this work, such an ensemble of forecasts was designed and verified for its ability to quantify the forecast error. This was accomplished by correlating the measured error and the forecast uncertainty of area-integrated wind speed and wind power in Denmark and Ireland. A correlation of 93% was achieved in these areas. This method cannot by itself satisfy the accuracy requirements of the energy sector. By knowing the uncertainty of the forecasts, however, the focus can be put on the accuracy requirements at times when it is possible to accurately predict the weather. This result therefore presents a major step forward in making wind energy a compatible energy source for the future.
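The spread-error verification described in the second part can be illustrated with a short sketch: an ensemble is generated around a synthetic truth with day-varying predictability, and the ensemble spread is correlated with the measured forecast error. All data here are synthetic, so the resulting correlation will differ from the 93% reported above.

```python
import numpy as np

rng = np.random.default_rng(2)
n_days, n_members = 365, 25

# Synthetic area-integrated wind speed: truth plus member-dependent noise whose
# magnitude varies day by day (mimicking flow-dependent predictability).
days = np.arange(n_days)
truth = 8 + 3 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 1, n_days)
spread_scale = rng.uniform(0.3, 2.5, n_days)           # "difficulty" of each day
members = truth[:, None] + rng.normal(0, 1, (n_days, n_members)) * spread_scale[:, None]

forecast = members.mean(axis=1)                        # ensemble mean forecast
predicted_uncertainty = members.std(axis=1)            # ensemble spread
measured_error = np.abs(forecast - truth)

r = np.corrcoef(predicted_uncertainty, measured_error)[0, 1]
print("spread-error correlation: %.2f" % r)
```

A high correlation means the ensemble spread can be used operationally to flag the times when the point forecast can, and cannot, be trusted.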
Abstract:
The paper extends Blackburn and Galindev's (Economics Letters, Vol. 79 (2003), pp. 417-421) stochastic growth model, in which productivity growth entails both external and internal learning behaviour, by introducing a constant relative risk aversion utility function and productivity shocks. Consequently, the relationship between long-term growth and short-term volatility depends not only on the relative importance of each learning mechanism but also on a parameter measuring individuals' attitude towards risk.
Abstract:
This paper addresses the estimation of the parameters of a Bayesian network from incomplete data. The task is usually tackled by running the Expectation-Maximization (EM) algorithm several times in order to obtain a high log-likelihood estimate. We argue that choosing the maximum log-likelihood estimate (as well as the maximum penalized log-likelihood and the maximum a posteriori estimate) has severe drawbacks, being affected by both overfitting and model uncertainty. Two ideas are discussed to overcome these issues: a maximum entropy approach and a Bayesian model averaging approach. Both ideas can be easily applied on top of EM, while the entropy idea can also be implemented in a more sophisticated way, through a dedicated non-linear solver. A vast set of experiments shows that these ideas produce significantly better estimates and inferences than the traditional and widely used maximum (penalized) log-likelihood and maximum a posteriori estimates. In particular, if EM is adopted as the optimization engine, the model averaging approach is the best performing one; its performance is matched by the entropy approach when implemented using the non-linear solver. The results suggest that the applicability of these ideas is immediate (they are easy to implement and to integrate into currently available inference engines) and that they constitute a better way to learn Bayesian network parameters.
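The contrast between keeping the maximum log-likelihood run and averaging over runs can be sketched on a toy naive Bayes model with a hidden class, where EM is genuinely multimodal. The model, data, and likelihood weighting below are illustrative assumptions, not the paper's exact entropy or model averaging schemes.

```python
import numpy as np

rng = np.random.default_rng(3)

# Data from a naive Bayes model with a hidden class Z and three observed
# binary features; the hidden variable makes the likelihood multimodal.
n = 300
z = rng.binomial(1, 0.4, n)
mu_true = np.array([[0.2, 0.3, 0.25],    # P(Y_j = 1 | Z = 0)
                    [0.8, 0.7, 0.75]])   # P(Y_j = 1 | Z = 1)
y = rng.binomial(1, mu_true[z])          # shape (n, 3)

def em(iters=200):
    pi, mu = rng.random(), rng.uniform(0.05, 0.95, (2, 3))
    for _ in range(iters):
        lik = lambda k: np.prod(mu[k] ** y * (1 - mu[k]) ** (1 - y), axis=1)
        r = pi * lik(1) / (pi * lik(1) + (1 - pi) * lik(0))    # E-step
        pi = r.mean()                                          # M-step
        mu[1] = (r[:, None] * y).sum(0) / r.sum()
        mu[0] = ((1 - r)[:, None] * y).sum(0) / (1 - r).sum()
    ll = np.log(pi * lik(1) + (1 - pi) * lik(0)).sum()
    return pi, mu, ll

# Many EM restarts; rather than keeping only the maximum-likelihood run,
# weight each run's *inference* (a label-invariant query) by its likelihood.
runs = [em() for _ in range(15)]
lls = np.array([ll for _, _, ll in runs])
w = np.exp(lls - lls.max()); w /= w.sum()
q = np.array([pi * mu[1].prod() + (1 - pi) * mu[0].prod() for pi, mu, _ in runs])
print("max-LL inference P(all Y = 1):", q[np.argmax(lls)])
print("model-averaged inference     :", (w * q).sum())
```

Averaging a label-invariant query sidesteps the label-switching ambiguity of the mixture and illustrates why averaging inferences over EM runs can beat committing to the single highest-likelihood point estimate.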
Abstract:
The assimilation of discrete higher-fidelity data points with model predictions can be used to reduce the uncertainty of the model input parameters that generate accurate predictions. The problem investigated here involves the prediction of limit-cycle oscillations using a High-Dimensional Harmonic Balance (HDHB) method. The efficiency of the HDHB method is exploited to enable calibration of structural input parameters using a Bayesian inference technique. Markov-chain Monte Carlo is employed to sample the posterior distributions. Parameter estimation is carried out on both a pitch/plunge aerofoil and a Goland wing configuration. In both cases, significant refinement was achieved in the distribution of possible structural parameters, allowing better predictions of their true deterministic values.
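A minimal sketch of the Bayesian calibration step, with random-walk Metropolis sampling of a single structural parameter: the closed-form amplitude function below is a stand-in for the HDHB solver, and all values are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy surrogate for the aeroelastic response: LCO amplitude as a known
# function of a single structural parameter (purely illustrative).
def lco_amplitude(k):
    return 2.0 / (1.0 + k ** 2)

k_true, sigma = 1.3, 0.05
obs = lco_amplitude(k_true) + rng.normal(0, sigma, 10)   # "higher fidelity" data

def log_post(k):
    if not 0.0 < k < 5.0:                                # uniform prior bounds
        return -np.inf
    return -0.5 * np.sum((obs - lco_amplitude(k)) ** 2) / sigma ** 2

# Random-walk Metropolis sampling of the posterior over k.
samples, k = [], 2.5
lp = log_post(k)
for _ in range(20_000):
    k_new = k + rng.normal(0, 0.1)
    lp_new = log_post(k_new)
    if np.log(rng.random()) < lp_new - lp:               # accept/reject
        k, lp = k_new, lp_new
    samples.append(k)

post = np.array(samples[5_000:])                         # discard burn-in
print("posterior for k: %.3f +/- %.3f (true %.1f)" % (post.mean(), post.std(), k_true))
```

The narrowing of the posterior relative to the prior bounds is the "significant refinement" of the parameter distribution that the abstract reports.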
Abstract:
In this paper, a stochastic programming approach is proposed for trading wind energy in a market environment under uncertainty. Uncertainty in energy market prices is the main cause of the high volatility of the profits achieved by power producers. The volatile and intermittent nature of wind energy represents another source of uncertainty. Hence, each uncertain parameter is modeled by scenarios, where each scenario represents a plausible realization of the uncertain parameters with an associated occurrence probability. An appropriate risk measure is also considered. The proposed approach is applied to a realistic case study based on a wind farm in Portugal. Finally, conclusions are duly drawn. (C) 2011 Elsevier Ltd. All rights reserved.
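A compact sketch of the scenario-based decision problem: price and wind scenarios with equal occurrence probabilities, an offer chosen by maximizing expected profit, and conditional value-at-risk (CVaR) as one possible risk measure. The market rules and numbers are invented, and the paper's actual formulation may differ.

```python
import numpy as np

rng = np.random.default_rng(5)

# Scenarios for the two uncertain parameters: day-ahead price (EUR/MWh) and
# wind farm output (MWh), each with equal occurrence probability.
n_scen = 1000
price = rng.normal(50, 10, n_scen)
wind = np.clip(rng.normal(60, 20, n_scen), 0, 100)   # 100 MWh capacity
prob = np.full(n_scen, 1.0 / n_scen)

# Imbalance prices: surplus sold at 85% of the day-ahead price, shortfall
# bought back at 115% (illustrative market rules).
def profit(offer):
    surplus = np.maximum(wind - offer, 0.0)
    shortfall = np.maximum(offer - wind, 0.0)
    return price * offer + 0.85 * price * surplus - 1.15 * price * shortfall

offers = np.linspace(0, 100, 201)
exp_profit = np.array([(prob * profit(E)).sum() for E in offers])

# Risk measure: CVaR at the 95% level (average of the worst 5% of scenarios).
def cvar(E, alpha=0.95):
    p = np.sort(profit(E))
    return p[: int(n_scen * (1 - alpha))].mean()

best = offers[np.argmax(exp_profit)]
print("risk-neutral offer: %.1f MWh, expected profit %.0f EUR" % (best, exp_profit.max()))
print("CVaR(95%%) at that offer: %.0f EUR" % cvar(best))
```

A risk-averse producer would trade some expected profit for a better CVaR, for instance by maximizing a weighted sum of the two objectives.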
Abstract:
In the field of structural dynamics, computer-based model validation techniques are now widespread. In this process, experimental modal data are used to correct a numerical model for further analyses. Nevertheless, the validated model represents only the dynamic behaviour of the structure actually tested. In reality, there are many factors that will inevitably lead to varying modal test results: changing ambient conditions during a test, slightly different test set-ups, a test on a nominally identical but different structure (e.g. from series production), etc. Before a stochastic simulation can be performed, a series of assumptions must be made for the random variables used. Consequently, an inverse method is needed that makes it possible to identify a stochastic model from experimental modal data. This work describes the development of a parameter-based approach for identifying stochastic simulation models in the field of structural dynamics. The developed method relies on first-order sensitivities, with which the parameter means and covariances of the numerical model can be determined from stochastic experimental modal data.
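The first-order identification step can be sketched as follows: given a sensitivity matrix relating parameters to eigenfrequencies, the parameter mean and covariance are recovered from scattered modal data via the pseudo-inverse. The sensitivity matrix, covariances, and noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical first-order sensitivity matrix S: d(eigenfrequency)/d(parameter)
# for 4 measured modes and 2 model parameters (values invented).
S = np.array([[1.2, 0.3],
              [0.5, 0.9],
              [0.8, 0.4],
              [0.2, 1.1]])

# "Experimental" modal data: scattered eigenfrequency shifts from repeated
# tests, generated here from an assumed true parameter mean and covariance.
theta_cov_true = np.array([[0.04, 0.01], [0.01, 0.09]])
thetas = rng.multivariate_normal([0.0, 0.0], theta_cov_true, 500)
freqs = thetas @ S.T + rng.normal(0, 0.01, (500, 4))    # small measurement noise

# Inverse step: recover the parameter mean shift and covariance from the modal
# statistics using the pseudo-inverse of S (first-order linearisation).
S_pinv = np.linalg.pinv(S)
theta_mean = S_pinv @ freqs.mean(axis=0)
theta_cov = S_pinv @ np.cov(freqs, rowvar=False) @ S_pinv.T
print("identified parameter mean :", np.round(theta_mean, 3))
print("identified parameter cov  :\n", np.round(theta_cov, 3))
```

Comparing the identified covariance with the assumed true one shows how well the first-order linearisation propagates modal scatter back to the parameter level.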
Abstract:
Despite the many models developed for phosphorus concentration prediction at differing spatial and temporal scales, there has been little effort to quantify the uncertainty in their predictions. Quantification of model prediction uncertainty is desirable for informed decision-making in river-system management. An uncertainty analysis of the process-based Integrated Catchment Model of Phosphorus (INCA-P), within the generalised likelihood uncertainty estimation (GLUE) framework, is presented. The framework is applied to the Lugg catchment (1,077 km²), a tributary of the River Wye on the England–Wales border. Daily discharge and monthly phosphorus (total reactive and total) observations, for a limited number of reaches, are used to initially assess the uncertainty and sensitivity of 44 model parameters identified as being most important for discharge and phosphorus predictions. This study demonstrates that parameter homogeneity assumptions (spatial heterogeneity is treated via fractional areas of land use type) can achieve higher model fits than a previous expertly calibrated parameter set. The model is capable of reproducing the hydrology, but a threshold Nash–Sutcliffe coefficient of determination (E or R²) of 0.3 is not achieved when simulating observed total phosphorus (TP) data in the upland reaches or total reactive phosphorus (TRP) in any reach. Despite this, the model reproduces the general dynamics of TP and TRP in the point-source-dominated lower reaches. This paper discusses why this application of INCA-P fails to find any parameter sets that simultaneously describe all the observed data acceptably. The discussion focuses on the uncertainty of readily available input data, and on whether such process-based models should be used when there are insufficient data to support their many parameters.
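For reference, the behavioural threshold used above can be sketched with the Nash–Sutcliffe efficiency; the observation and simulation series below are placeholders.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, < 0 worse than the mean."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Behavioural screening as in GLUE: keep parameter sets whose simulated
# series clears the threshold used above (E = 0.3); series are placeholders.
obs = np.array([1.2, 3.4, 2.1, 4.0, 2.8])
sims = {"set A": np.array([1.0, 3.0, 2.5, 3.8, 2.6]),
        "set B": np.array([2.0, 2.0, 2.0, 2.0, 2.0])}
for name, sim in sims.items():
    e = nse(obs, sim)
    print(f"{name}: E = {e:.2f} ->", "behavioural" if e > 0.3 else "rejected")
```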
Abstract:
Improvements in the resolution of satellite imagery have enabled the extraction of water surface elevations at the margins of a flood. Comparison between modelled and observed water surface elevations provides a new means of calibrating and validating flood inundation models; however, the uncertainty in these observed data has yet to be addressed. Here a flood inundation model is calibrated using a probabilistic treatment of the observed data. A LiDAR-guided snake algorithm is used to determine the outline of a 2006 flood event on the River Dee, North Wales, UK, from a 12.5 m ERS-1 image. Points at approximately 100 m intervals along this outline are selected, and the water surface elevation is recorded as the LiDAR DEM elevation at each point. Using a planar water surface interpolated from the gauged upstream and downstream water elevations as an approximation, the water surface elevations at points along the flooded extent are compared with their 'expected' values. The errors between the two are roughly normally distributed, but when plotted against coordinates they show obvious spatial autocorrelation. The source of this spatial dependency is investigated by comparing the errors to the slope gradient and aspect of the LiDAR DEM. A LISFLOOD-FP model of the flood event is set up to investigate the effect of observed data uncertainty on the calibration of flood inundation models. Multiple simulations are run using different combinations of friction parameters, from which the optimum parameter set is selected. For each simulation, a t-test is used to quantify the fit between modelled and observed water surface elevations. The points used in this t-test are selected based on their error, and the selection criteria enable an evaluation of the sensitivity of the choice of optimum parameter set to uncertainty in the observed data. This work explores the observed data in detail and highlights possible causes of error. The identification of significant error (RMSE = 0.8 m) between the approximate expected and actual observed elevations from the remotely sensed data emphasises the limitations of using these data in a deterministic manner within the calibration process. These limitations are addressed by developing a new probabilistic approach to using the observed data.
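A short sketch of the planar-surface comparison and the error-based point selection described above, with a paired t-test against a stand-in model output; coordinates, gauge levels, and error magnitudes are invented (the 0.8 m spread merely echoes the reported RMSE).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Shoreline points extracted from the image (distance in metres along the
# reach) with LiDAR elevations; all values are invented for illustration.
s = np.sort(rng.uniform(0, 5000, 60))             # distance downstream
z_up, z_down = 12.0, 9.5                          # gauged water levels (m)
planar = z_up + (z_down - z_up) * s / 5000.0      # planar 'expected' surface
z_obs = planar + rng.normal(0, 0.8, s.size)       # observed, spread ~ 0.8 m

err = z_obs - planar
print("RMSE vs planar surface: %.2f m" % np.sqrt(np.mean(err ** 2)))

# Keep only points whose deviation from the planar surface is small, then
# test a candidate model's water surface against them with a paired t-test.
keep = np.abs(err) < 0.5
z_model = planar + rng.normal(0, 0.3, s.size)     # stand-in model output
t, p = stats.ttest_rel(z_model[keep], z_obs[keep])
print("paired t-test on %d points: t = %.2f, p = %.3f" % (keep.sum(), t, p))
```

Varying the selection tolerance (here 0.5 m) and observing how the preferred friction parameters shift is one simple way to probe the sensitivity of the calibration to observed-data uncertainty.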