974 results for mismatched uncertainties
Abstract:
This paper presents a novel path planning method for minimizing the energy consumption of an autonomous underwater vehicle subjected to time-varying ocean disturbances and forecast model uncertainty. The algorithm determines four-dimensional path candidates using Nonlinear Robust Model Predictive Control (NRMPC), with solutions optimised using A*-like algorithms. Vehicle performance limits are incorporated into the algorithm, with disturbances represented as spatially and temporally varying ocean currents with a bounded uncertainty in their predictions. The proposed algorithm is demonstrated through simulations using a four-dimensional, spatially distributed time-series predictive ocean current model. Results show the combined NRMPC and A* approach is capable of generating energy-efficient paths that are resistant to both dynamic disturbances and ocean model uncertainty.
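As a rough illustration of the A*-like search component only (not the authors' implementation), a generic A* over a discretised waypoint graph might look as follows; the `cost` callable is where a current-dependent energy model, such as one driven by NRMPC candidate evaluation, would plug in:

```python
import heapq
import itertools

def a_star(start, goal, neighbours, cost, heuristic):
    # Generic A* search over a discretised path graph. The `cost`
    # callable is a stand-in for an energy model under predicted
    # currents; here it is left abstract.
    counter = itertools.count()          # tie-breaker for heap entries
    open_set = [(heuristic(start, goal), next(counter), start)]
    came_from = {start: None}
    g_score = {start: 0.0}
    closed = set()
    while open_set:
        _, _, node = heapq.heappop(open_set)
        if node in closed:
            continue
        closed.add(node)
        if node == goal:
            # Reconstruct the path by walking parent links back to start.
            path = [node]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for nb in neighbours(node):
            ng = g_score[node] + cost(node, nb)
            if ng < g_score.get(nb, float("inf")):
                g_score[nb] = ng
                came_from[nb] = node
                heapq.heappush(open_set,
                               (ng + heuristic(nb, goal), next(counter), nb))
    return None                          # goal unreachable
```

With a unit-cost grid and a Manhattan heuristic this returns a shortest waypoint sequence; an energy-aware `cost` simply reweights the same search.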
Abstract:
This paper presents an uncertainty quantification study of the performance analysis of the high pressure ratio single stage radial-inflow turbine used in the Sundstrand Power Systems T-100 Multi-purpose Small Power Unit. A deterministic 3D volume-averaged Computational Fluid Dynamics (CFD) solver is coupled with a non-statistical generalized Polynomial Chaos (gPC) representation based on a pseudo-spectral projection method. One of the advantages of this approach is that it does not require any modification of the CFD code for the propagation of random disturbances in the aerodynamic and geometric fields. The stochastic results highlight the importance of the blade thickness and trailing edge tip radius to the total-to-static efficiency of the turbine, compared to the angular velocity and trailing edge tip length. From a theoretical point of view, the use of the gPC representation on an arbitrary grid also allows investigation of the sensitivity of the turbine efficiency to the blade thickness profiles. The gPC approach is also applied to coupled random parameters. The results show that the most influential coupled random variables are the trailing edge tip radius coupled with the angular velocity.
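A minimal sketch of the non-intrusive pseudo-spectral projection idea, reduced to a single standard-normal random input (the paper's multi-parameter, coupled setting is far richer); `model` stands in for the black-box CFD solver, which is only ever evaluated at quadrature nodes:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def gpc_coefficients(model, order, n_quad=16):
    # Non-intrusive pseudo-spectral projection for one N(0,1) input:
    # the solver `model` is treated as a black box (no modification of
    # the CFD code), evaluated only at Gauss-Hermite quadrature nodes.
    nodes, weights = hermegauss(n_quad)
    weights = weights / np.sqrt(2.0 * np.pi)   # normalise to the N(0,1) measure
    f_vals = np.array([model(x) for x in nodes])
    coefs = []
    for k in range(order + 1):
        basis = np.zeros(k + 1)
        basis[k] = 1.0
        he_k = hermeval(nodes, basis)          # probabilists' Hermite He_k
        # Projection <f, He_k> / ||He_k||^2, with ||He_k||^2 = k! under N(0,1).
        coefs.append(np.sum(weights * f_vals * he_k) / math.factorial(k))
    return np.array(coefs)
```

The mean of the output is the zeroth coefficient and the variance follows from the remaining coefficients, so moments come at the cost of a handful of solver runs.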
Abstract:
We consider estimating the total load from frequent flow data but less frequent concentration data. There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are statistically subject to large biases, and their associated uncertainties are often not reported. This makes interpretation difficult and makes the estimation of trends or the determination of optimal sampling regimes impossible to assess. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates that minimize the biases and make use of informative predictive variables. The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized rating-curve approach with additional predictors that capture unique features in the flow data, such as the concept of the first flush, the location of the event on the hydrograph (e.g. rise or fall) and the discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. Incorporating this additional information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors. This method also has the capacity to incorporate measurement error incurred through the sampling of flow. We illustrate this approach for two rivers delivering to the Great Barrier Reef, Queensland, Australia. One is a dataset from the Burdekin River, consisting of total suspended sediment (TSS), nitrogen oxide (NO(x)) and gauged flow for 1997. The other dataset is from the Tully River, for the period of July 2000 to June 2008.
For NO(x) in the Burdekin, the new estimates are very similar to the ratio estimates even when there is no relationship between the concentration and the flow. However, for the Tully dataset, incorporating the additional predictive variables, namely the discounted flow and flow phases (rising or falling), substantially improved the model fit, and thus the certainty with which the load is estimated.
Abstract:
There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are statistically subject to large biases, and their associated uncertainties are often not reported. This makes interpretation difficult and makes the estimation of trends or the determination of optimal sampling regimes impossible to assess. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates by minimizing the biases and making use of possible predictive variables. The load estimation procedure can be summarized by the following four steps:
- (i) output the flow rates at regular time intervals (e.g. 10 minutes) using a time series model that captures all the peak flows;
- (ii) output the predicted flow rates as in (i) at the concentration sampling times, if the corresponding flow rates were not collected;
- (iii) establish a predictive model for the concentration data that incorporates all possible predictor variables, and output the predicted concentrations at the regular time intervals as in (i); and
- (iv) sum the products of the predicted flow and the predicted concentration over the regular time intervals to obtain an estimate of the load.
The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized regression (rating-curve) approach with additional predictors that capture unique features in the flow data, namely the concept of the first flush, the location of the event on the hydrograph (e.g. rise or fall) and cumulative discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. The model also has the capacity to accommodate autocorrelation in model errors resulting from intensive sampling during floods.
Incorporating this additional information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors. This method also has the capacity to incorporate measurement error incurred through the sampling of flow. We illustrate this approach using the concentrations of total suspended sediment (TSS) and nitrogen oxide (NOx) and gauged flow data from the Burdekin River, a catchment delivering to the Great Barrier Reef. The sampling biases for NOx concentrations range from a factor of 2 to 10, indicating severe bias. As expected, the traditional average and extrapolation methods produce much higher estimates than those obtained when sampling bias is taken into account.
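The four-step procedure described in this abstract can be sketched as follows; the rating-curve coefficients, the rising-limb flag, and the exponential discounting weight `alpha` are hypothetical placeholders, not values or forms taken from the paper:

```python
import numpy as np

def discounted_flow(flow, alpha=0.95):
    # Exponentially discounted cumulative flow: a simple proxy for
    # constituent exhaustion during an event (hypothetical weighting).
    d = np.zeros_like(flow, dtype=float)
    acc = 0.0
    for i, q in enumerate(flow):
        acc = alpha * acc + q
        d[i] = acc
    return d

def predict_concentration(flow, rising, coefs):
    # Step (iii): generalised rating curve on the log scale,
    # log C = b0 + b1*log Q + b2*(discounted flow) + b3*(rising-limb flag).
    b0, b1, b2, b3 = coefs
    return np.exp(b0 + b1 * np.log(flow)
                  + b2 * discounted_flow(flow) + b3 * rising)

def estimate_load(flow, rising, coefs, dt_seconds=600.0):
    # Step (iv): sum of predicted flow x predicted concentration over
    # the regular (e.g. 10-minute) intervals from steps (i)-(ii).
    conc = predict_concentration(flow, rising, coefs)
    return float(np.sum(flow * conc) * dt_seconds)
```

In practice the coefficients would be fitted to the sampled concentration data, and the standard error of the load would be propagated from the fitted model rather than ignored as here.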
Abstract:
In this paper, the trajectory tracking control of an autonomous underwater vehicle (AUV) in six degrees of freedom (6-DOF) is addressed. It is assumed that the system parameters are unknown and the vehicle is underactuated. An adaptive controller is proposed, based on Lyapunov's direct method and the back-stepping technique, which guarantees robustness against parameter uncertainties. The desired trajectory can be any sufficiently smooth bounded curve parameterized by time, even a straight line. In contrast with the majority of research in this field, the likelihood of actuator saturation is considered, and another adaptive controller is designed to overcome this problem, in which the control signals are bounded using saturation functions. The nonlinear adaptive control scheme yields asymptotic convergence of the vehicle to the reference trajectory in the presence of parametric uncertainties. The stability of the presented control laws is proved in the sense of Lyapunov theory and Barbalat's lemma. The efficiency of the controller using saturation functions is verified by comparing numerical simulations of both controllers.
Abstract:
A minimax filter is derived to estimate the state of a system, using observations corrupted by colored noise, when large uncertainties in the plant dynamics and process noise are present.
Abstract:
This paper presents a method of designing a minimax filter in the presence of large plant uncertainties and constraints on the mean squared values of the estimates. The minimax filtering problem is reformulated in the framework of a deterministic optimal control problem, and the method of solution invokes the matrix Minimum Principle. The constrained linear filter and its relation to singular control problems is illustrated. For the class of problems considered here, it is shown that the filter can be constrained separately after carrying out the minimaximization. Numerical examples are presented to illustrate the results.
Abstract:
By observing mergers of compact objects, future gravitational wave experiments would measure the luminosity distance to a large number of sources to high precision, but not their redshifts. Given the directional sensitivity of an experiment, a fraction of such sources (gold plated) can be identified optically as single objects in the direction of the source. We show that if an approximate distance-redshift relation is known, then it is possible to statistically resolve those sources that have multiple galaxies in the beam. We study the feasibility of using gold plated sources to iteratively resolve the unresolved sources, obtain the self-calibrated best possible distance-redshift relation, and provide an analytical expression for the accuracy achievable. We derive the lower limit on the total number of sources needed to achieve this accuracy through self-calibration. We show that this limit depends exponentially on the beam width and give estimates for various experimental parameters representative of the future gravitational wave experiments DECIGO and BBO.
Abstract:
We examine the exclusion limits set by the CDF and D0 experiments on the Standard Model Higgs boson mass from their searches at the Tevatron in the light of large theoretical uncertainties on the signal and background cross sections. We show that when these uncertainties are consistently taken into account, the sensitivity of the experiments becomes significantly lower, and the currently excluded mass range M_H = 158-175 GeV could be entirely reopened. The luminosity required to recover the current sensitivity is found to be a factor of two higher than the present one.
Abstract:
Cross-polarization from the dipolar reservoir for a range of mismatched Hartmann-Hahn conditions has been considered. Experiment, in general, agrees with the dispersive Lorentzian behavior expected on the basis of quasi-equilibrium theory. It is observed that the inclusion of additional mechanisms of polarization transfer leads to an improvement in the fit of the experimental results. The utility of extending the technique to the case of ordered long-chain molecules, such as liquid crystals, for the measurement of the local dipolar field is also presented.
Abstract:
A reliable method for service life estimation of the structural element is a prerequisite for service life design. A new methodology for durability-based service life estimation of reinforced concrete flexural elements with respect to chloride-induced corrosion of reinforcement is proposed. The methodology takes into consideration the fuzzy and random uncertainties associated with the variables involved in service life estimation by using a hybrid method combining the vertex method of fuzzy set theory with the Monte Carlo simulation technique. It is also shown how to determine the bounds for the characteristic value of failure probability from the resulting fuzzy set for failure probability with minimal computational effort. Using the methodology, the bounds for the characteristic value of failure probability for a reinforced concrete T-beam bridge girder have been determined. The service life of the structural element is determined by comparing the upper bound of the characteristic value of failure probability with the target failure probability. The methodology will be useful for durability-based service life design and also for making decisions regarding in-service inspections.
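A toy sketch of the hybrid vertex/Monte Carlo idea, using a hypothetical limit state g = R - S with a single fuzzy variable (the mean resistance, given as an interval at a chosen alpha-cut); the paper's chloride-ingress model involves many more variables, but the bounding logic is the same:

```python
import itertools
import numpy as np

def mc_failure_prob(mu_r, n=100_000, rng=None):
    # Monte Carlo estimate of P(g < 0) for the toy limit state g = R - S,
    # with random resistance R ~ N(mu_r, 1) and a fixed load S = 2
    # (all values hypothetical, for illustration only).
    rng = rng or np.random.default_rng(0)
    r = rng.normal(mu_r, 1.0, size=n)
    return float(np.mean(r < 2.0))

def pf_bounds(fuzzy_intervals, n=100_000):
    # Vertex method: evaluate the Monte Carlo estimate at every
    # combination of interval endpoints of the fuzzy variables at the
    # chosen alpha-cut; the min/max over vertices bound the failure
    # probability for that cut.
    vertices = itertools.product(*fuzzy_intervals)
    pfs = [mc_failure_prob(*v, n=n) for v in vertices]
    return min(pfs), max(pfs)
```

The upper bound would then be compared against the target failure probability to decide the service life, as the abstract describes.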
Abstract:
The performance of postdetection integration (PDI) techniques for the detection of Global Navigation Satellite Systems (GNSS) signals in the presence of uncertainties in frequency offsets, noise variance, and unknown data-bits is studied. It is shown that the conventional PDI techniques are generally not robust to uncertainty in the data-bits and/or the noise variance. Two new modified PDI techniques are proposed, and they are shown to be robust to these uncertainties. The receiver operating characteristics (ROC) and sample complexity performance of the PDI techniques in the presence of model uncertainties are analytically derived. It is shown that the proposed methods significantly outperform existing methods, and hence they could become increasingly important as GNSS receivers attempt to push the envelope on the minimum signal-to-noise ratio (SNR) for reliable detection.
Abstract:
Global change in climate and consequent large impacts on regional hydrologic systems have, in recent years, motivated significant research efforts in water resources modeling under climate change. In an integrated future hydrologic scenario, it is likely that water availability and demands will change significantly due to modifications in hydro-climatic variables such as rainfall, reservoir inflows, temperature, net radiation, wind speed and humidity. An integrated regional water resources management model should capture the likely impacts of climate change on water demands and water availability, along with uncertainties associated with climate change impacts and with management goals and objectives under non-stationary conditions. Uncertainties in an integrated regional water resources management model, accumulating from various stages of decision making, include climate model and scenario uncertainty in the hydro-climatic impact assessment, uncertainty due to conflicting interests of the water users, and uncertainty due to inherent variability of the reservoir inflows. This paper presents an integrated regional water resources management modeling approach considering uncertainties at various stages of decision making by integrating a hydro-climatic variable projection model, a water demand quantification model, a water quantity management model and a water quality control model. Modeling tools of canonical correlation analysis, stochastic dynamic programming and fuzzy optimization are used in an integrated framework in the approach presented here. The proposed modeling approach is demonstrated with the case study of the Bhadra Reservoir system in Karnataka, India.
Abstract:
In this work, the hypothesis testing problem of spectrum sensing in a cognitive radio is formulated as a Goodness-of-fit test against the general class of noise distributions used in most communications-related applications. A simple, general, and powerful spectrum sensing technique based on the number of weighted zero-crossings in the observations is proposed. For the cases of uniform and exponential weights, an expression for computing the near-optimal detection threshold that meets a given false alarm probability constraint is obtained. The proposed detector is shown to be robust to two commonly encountered types of noise uncertainties, namely, the noise model uncertainty, where the PDF of the noise process is not completely known, and the noise parameter uncertainty, where the parameters associated with the noise PDF are either partially or completely unknown. Simulation results validate our analysis, and illustrate the performance benefits of the proposed technique relative to existing methods, especially in the low SNR regime and in the presence of noise uncertainties.
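A minimal sketch of the weighted zero-crossing statistic with uniform weights (the paper also analyses exponential weights and derives near-optimal thresholds for a target false alarm rate; the simple symmetric threshold test below is illustrative only):

```python
import numpy as np

def zero_crossing_statistic(x, weights=None):
    # Count sign changes between consecutive samples, optionally weighted.
    s = np.signbit(np.asarray(x, dtype=float))
    crossings = (s[1:] != s[:-1]).astype(float)
    if weights is None:
        weights = np.ones_like(crossings)   # uniform-weight case
    return float(np.sum(weights * crossings))

def detect(x, threshold):
    # For iid zero-mean noise with a symmetric PDF, each consecutive
    # pair changes sign with probability 1/2 regardless of the noise
    # model or its parameters -- the source of the robustness property.
    # Declare the band occupied when the crossing count deviates from
    # that noise-only expectation by more than the threshold.
    n = len(x) - 1
    expected = n / 2.0
    return abs(zero_crossing_statistic(x) - expected) > threshold
```

Because the noise-only crossing rate does not depend on the noise variance or even the exact noise PDF (only on symmetry), the test sidesteps both kinds of noise uncertainty discussed in the abstract.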
Abstract:
We have recently proposed a generalized JKR model for non-slipping adhesive contact between two elastic spheres subjected to a pair of pulling forces and a mismatch strain (Chen, S., Gao, H., 2006c. Non-slipping adhesive contact between mismatched elastic spheres: a model of adhesion mediated deformation sensor. J. Mech. Phys. Solids 54, 1548-1567). Here we extend this model to adhesion between two mismatched elastic cylinders. Attention is focused on how the mismatch strain affects the contact area and the pull-off force. It is found that there exists a critical mismatch strain at which the contact spontaneously dissociates. The analysis suggests possible mechanisms by which mechanical deformation can affect binding between cells and molecules in biology.