147 results for parameter uncertainty


Relevance: 20.00%

Abstract:

In this paper we provide estimates for the coverage of parameter space when using Latin Hypercube Sampling, which forms the basis of building so-called populations of models. The estimates are obtained using combinatorial counting arguments to determine how many trials, k, are needed in order to obtain a specified parameter space coverage for a given discretisation size n. In the case of two dimensions, we show that if the ratio φ = k/n of trials to discretisation size is greater than 1, then as n becomes moderately large the fractional coverage behaves as 1 − exp(−φ). We compare these estimates with simulation results obtained from an implementation of Latin Hypercube Sampling in MATLAB.
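
A small simulation along these lines (a Python sketch rather than the paper's MATLAB implementation; it reads each of the k trials as one complete n-point Latin Hypercube) illustrates how the observed grid coverage approaches 1 − exp(−φ):

```python
import numpy as np

# Sketch: run k independent Latin Hypercube trials, each placing n points on
# an n-by-n grid, and compare the fraction of grid cells hit with the
# asymptotic estimate 1 - exp(-phi), where phi = k/n.
rng = np.random.default_rng(0)

def coverage(n, k, rng):
    hit = np.zeros((n, n), dtype=bool)
    for _ in range(k):
        # One Latin Hypercube trial: every column index 0..n-1 is used exactly
        # once, paired with a random permutation of the row indices.
        hit[np.arange(n), rng.permutation(n)] = True
    return hit.mean()

n, phi = 100, 2.0                       # discretisation size, trials/size ratio
k = int(phi * n)
est = np.mean([coverage(n, k, rng) for _ in range(20)])
print(f"simulated coverage {est:.4f} vs 1 - exp(-phi) = {1 - np.exp(-phi):.4f}")
```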

Relevance: 20.00%

Abstract:

In this paper the issue of finding uncertainty intervals for queries in a Bayesian Net is reconsidered. The investigation focuses on Bayesian Nets with discrete nodes and finite populations. An earlier asymptotic approach is compared with a simulation-based approach, together with two further alternatives: one based on a single sample from the Bayesian Net for a particular finite population size, and another which uses expected population sizes together with exact probabilities. We conclude that a query of a Bayesian Net should be expressed as a probability embedded in an uncertainty interval. Based on an investigation of two Bayesian Net structures, the preferred method is the simulation method. However, both the single-sample and expected-sample-size methods may be useful, and both are simpler to compute. Any of these methods is more useful than none when assessing a Bayesian Net under development, or when drawing conclusions from an 'expert' system.
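
The simulation-based approach can be illustrated on a toy two-node net (the structure and probabilities below are illustrative, not taken from the paper): simulate many finite populations of size N, evaluate the query in each, and report a percentile uncertainty interval.

```python
import numpy as np

# Toy net A -> B with discrete (binary) nodes and a finite population of size N.
rng = np.random.default_rng(1)

p_a = 0.3                      # P(A = 1)
p_b_given_a = {0: 0.2, 1: 0.8} # P(B = 1 | A)
N, reps = 200, 2000            # finite population size, simulated populations

queries = []
for _ in range(reps):
    a = rng.random(N) < p_a
    b = rng.random(N) < np.where(a, p_b_given_a[1], p_b_given_a[0])
    if b.any():                                  # query undefined if no B = 1
        queries.append(np.mean(a[b]))            # P(A = 1 | B = 1) in this population

lo, mid, hi = np.percentile(queries, [2.5, 50, 97.5])
print(f"P(A=1 | B=1) ~ {mid:.3f}, 95% uncertainty interval ({lo:.3f}, {hi:.3f})")
```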

Relevance: 20.00%

Abstract:

The method of generalized estimating equations (GEEs) provides consistent estimates of the regression parameters in a marginal regression model for longitudinal data, even when the working correlation model is misspecified (Liang and Zeger, 1986). However, the efficiency of a GEE estimate can be seriously affected by the choice of the working correlation model. This study addresses this problem by proposing a hybrid method that combines multiple GEEs based on different working correlation models, using the empirical likelihood method (Qin and Lawless, 1994). Analyses show that this hybrid method is more efficient than a GEE using a misspecified working correlation model. Furthermore, if one of the working correlation structures correctly models the within-subject correlations, then this hybrid method provides the most efficient parameter estimates. In simulations, the hybrid method's finite-sample performance is superior to a GEE under any of the commonly used working correlation models and is almost fully efficient in all scenarios studied. The hybrid method is illustrated using data from a longitudinal study of the respiratory infection rates in 275 Indonesian children.
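
The ingredient GEEs are readily fitted, for example with statsmodels in Python; the sketch below compares estimates under two working correlation structures on simulated longitudinal data (the empirical-likelihood combination step itself is not part of statsmodels):

```python
import numpy as np
import statsmodels.api as sm

# Simulate clustered data: a subject-level random effect induces
# within-subject correlation that the working correlation model may or may
# not capture.
rng = np.random.default_rng(2)

n_subjects, n_visits = 275, 4
groups = np.repeat(np.arange(n_subjects), n_visits)
x = rng.random(n_subjects * n_visits)
subject_effect = np.repeat(rng.normal(0, 0.5, n_subjects), n_visits)
y = 1.0 + 0.5 * x + subject_effect + rng.normal(0, 1, n_subjects * n_visits)
exog = sm.add_constant(x)

for cov in (sm.cov_struct.Independence(), sm.cov_struct.Exchangeable()):
    fit = sm.GEE(y, exog, groups=groups, cov_struct=cov).fit()
    print(type(cov).__name__, fit.params.round(3), fit.bse.round(3))
```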

Relevance: 20.00%

Abstract:

For a multi-armed bandit problem with exponential discounting, the optimal allocation rule is defined by a dynamic allocation index defined for each arm on its state space. The index for an arm is equal to the expected immediate reward from the arm, with an upward adjustment reflecting any uncertainty about the prospects of obtaining rewards from the arm and the possibility of resolving that uncertainty by selecting the arm. The learning component of the index is thus defined as the difference between the index and the expected immediate reward. For two arms with the same expected immediate reward, the learning component should be larger for the arm whose reward rate is more uncertain. This is shown to be true for arms based on independent samples from a fixed distribution with an unknown parameter in the Bernoulli and normal cases, and similar results are obtained in other cases.
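
A rough way to see the effect numerically is the standard calibration approximation of the index for a Bernoulli arm with a Beta posterior (a hedged sketch, not the paper's derivation): binary-search for the retirement reward rate at which playing the arm optimally is exactly as good as retiring, then subtract the posterior mean to obtain the learning component.

```python
from functools import lru_cache

GAMMA, HORIZON = 0.9, 60          # discount factor, recursion truncation depth

@lru_cache(maxsize=None)
def play_value(a, b, lam, depth=0):
    """Value of the stop-or-continue problem with per-step retirement rate lam."""
    p = a / (a + b)
    if depth == HORIZON:                     # truncate: commit to better option
        return max(lam, p) / (1 - GAMMA)
    cont = p * (1 + GAMMA * play_value(a + 1, b, lam, depth + 1)) \
         + (1 - p) * GAMMA * play_value(a, b + 1, lam, depth + 1)
    return max(lam / (1 - GAMMA), cont)

def index(a, b, tol=1e-4):
    lo, hi = a / (a + b), 1.0                # index lies above the mean reward
    while hi - lo > tol:
        lam = (lo + hi) / 2
        if play_value(a, b, lam) > lam / (1 - GAMMA) + 1e-9:
            lo = lam                         # still worth playing: raise lam
        else:
            hi = lam
    return (lo + hi) / 2

for a, b in [(1, 1), (10, 10)]:             # same mean 0.5, different certainty
    idx = index(a, b)
    print(f"Beta({a},{b}): index {idx:.3f}, learning component {idx - 0.5:.3f}")
```

The learning component comes out larger for the Beta(1,1) arm than for the Beta(10,10) arm, matching the claim that a more uncertain reward rate carries a larger upward adjustment.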

Relevance: 20.00%

Abstract:

This paper proposes solutions to three issues pertaining to the estimation of finite mixture models with an unknown number of components: the non-identifiability induced by overfitting the number of components, the mixing limitations of standard Markov chain Monte Carlo (MCMC) sampling techniques, and the related label-switching problem. An overfitting approach is used to estimate the number of components in a finite mixture model via the Zmix algorithm. Zmix provides a bridge between multidimensional samplers and test-based estimation methods, whereby priors are chosen to encourage extra groups to have weights approaching zero. MCMC sampling is made possible by the implementation of prior parallel tempering, an extension of parallel tempering. Zmix can accurately estimate the number of components, posterior parameter estimates and allocation probabilities given a sufficiently large sample size. The results reflect uncertainty in the final model, reporting the range of possible candidate models and their respective estimated probabilities from a single run. Label switching is resolved with a computationally lightweight method, Zswitch, developed for overfitted mixtures by exploiting the intuitiveness of allocation-based relabelling algorithms and the precision of label-invariant loss functions. Four simulation studies are included to illustrate Zmix and Zswitch, as well as three case studies from the literature. All methods are available as part of the R package Zmix, which can currently be applied to univariate Gaussian mixture models.
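
Zmix itself is an R package; as a loose Python analogue of the overfitting idea (variational rather than MCMC), one can fit a deliberately overfitted mixture with a sparse Dirichlet prior so that superfluous components are driven towards zero weight:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Two well-separated Gaussian groups, fitted with far too many components.
rng = np.random.default_rng(3)
data = np.concatenate([rng.normal(-3, 1, 300), rng.normal(3, 1, 300)])

gmm = BayesianGaussianMixture(
    n_components=10,                        # deliberately overfitted
    weight_concentration_prior=0.01,        # sparse prior empties extra groups
    max_iter=500, random_state=0,
).fit(data.reshape(-1, 1))

print("weights:", gmm.weights_.round(3))
print("non-empty components:", np.sum(gmm.weights_ > 0.01))
```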

Relevance: 20.00%

Abstract:

In this paper we tackle the problem of efficient video event detection. We argue that linear detection functions should be preferred in this regard due to their scalability and efficiency during estimation and evaluation. A popular approach is to represent a sequence using a bag-of-words (BOW) representation due to: (i) its fixed dimensionality irrespective of the sequence length, and (ii) its ability to compactly model the statistics of the sequence. A drawback of the BOW representation, however, is the intrinsic loss of temporal ordering information. In this paper we propose a new representation that leverages the uncertainty in relative temporal alignments between pairs of sequences while preserving temporal ordering. Our representation, like BOW, is of a fixed dimensionality, making it easily integrated with a linear detection function. Extensive experiments on the CK+, 6DMG and UvA-NEMO databases show significant performance improvements across both isolated and continuous event detection tasks.
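
The baseline BOW representation referred to above can be sketched in a few lines (the paper's alignment-aware descriptor is more involved): quantise per-frame features against a learned codebook and histogram the assignments, giving a fixed-length vector regardless of sequence length.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)

# Stand-in per-frame features; in practice these would be visual descriptors.
train_frames = rng.normal(size=(5000, 32))
codebook = KMeans(n_clusters=64, n_init=4, random_state=0).fit(train_frames)

def bow(sequence):
    """Map a (num_frames, 32) sequence to a fixed 64-dim normalised histogram."""
    words = codebook.predict(sequence)
    hist = np.bincount(words, minlength=64).astype(float)
    return hist / hist.sum()

short, long_ = rng.normal(size=(40, 32)), rng.normal(size=(400, 32))
print(bow(short).shape, bow(long_).shape)       # both (64,): length-invariant
```

Note that the histogram discards which codeword came before which, which is exactly the loss of temporal ordering the paper's representation is designed to avoid.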

Relevance: 20.00%

Abstract:

In this paper, the trajectory tracking control of an autonomous underwater vehicle (AUV) in six degrees of freedom (6-DOF) is addressed. It is assumed that the system parameters are unknown and the vehicle is underactuated. An adaptive controller is proposed, based on Lyapunov's direct method and the back-stepping technique, which guarantees robustness against parameter uncertainties. The desired trajectory can be any sufficiently smooth bounded curve parameterised by time, including a straight line. In contrast with the majority of research in this field, the likelihood of actuator saturation is considered, and a second adaptive controller is designed to overcome this problem, in which the control signals are bounded using saturation functions. The nonlinear adaptive control scheme yields asymptotic convergence of the vehicle to the reference trajectory in the presence of parametric uncertainties. The stability of the presented control laws is proved in the sense of Lyapunov theory and Barbalat's lemma. The efficiency of the saturated controller is verified by comparing numerical simulations of both controllers.
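
The flavour of the design can be illustrated on a scalar system (a hedged sketch; the paper's controller is a full 6-DOF back-stepping design): adaptive tracking for x' = θx + u with unknown θ, a Lyapunov-based adaptation law, and a tanh-saturated input.

```python
import numpy as np

theta_true, theta_hat = 2.0, 0.0      # unknown plant parameter and its estimate
k, gamma_a, u_max = 4.0, 5.0, 10.0    # control gain, adaptation gain, saturation
dt, x = 1e-3, 1.0

for step in range(int(10 / dt)):
    t = step * dt
    x_ref, x_ref_dot = np.sin(t), np.cos(t)       # smooth bounded reference
    e = x - x_ref
    u_raw = -theta_hat * x + x_ref_dot - k * e    # cancels estimated dynamics
    u = u_max * np.tanh(u_raw / u_max)            # smooth saturation function
    theta_hat += gamma_a * e * x * dt             # Lyapunov-based update law
    x += (theta_true * x + u) * dt                # Euler step of the plant

print(f"tracking error {x - np.sin(10):.4f}, theta estimate {theta_hat:.3f}")
```

With V = e²/2 + (θ − θ̂)²/(2γ), the update law θ̂' = γex gives V' = −ke² ≤ 0 when the input is unsaturated, which is the kind of Lyapunov argument the abstract refers to.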

Relevance: 20.00%

Abstract:

Background: Exercise referral schemes (ERS) aim to identify inactive adults in the primary care setting. The primary care professional refers the patient to a third-party service, which takes responsibility for prescribing and monitoring an exercise programme tailored to the needs of the patient. This paper examines the cost-effectiveness of ERS in promoting physical activity compared with usual care in the primary care setting.

Methods: A decision-analytic model was developed to estimate the cost-effectiveness of ERS from a UK NHS perspective. The costs and outcomes of ERS were modelled over the patient's lifetime. Data were derived from a systematic review of the literature on the clinical and cost-effectiveness of ERS and on parameter inputs in the modelling framework. Outcomes were expressed as incremental cost per quality-adjusted life-year (QALY). Deterministic and probabilistic sensitivity analyses investigated the impact of varying ERS cost and effectiveness assumptions. Sub-group analyses explored the cost-effectiveness of ERS in sedentary people with an underlying condition.

Results: Compared with usual care, the mean incremental lifetime cost per patient for ERS was £169 and the mean incremental QALY gain was 0.008, giving a base-case incremental cost-effectiveness ratio (ICER) of £20,876 per QALY in sedentary individuals without a diagnosed medical condition. There was a 51% probability that ERS was cost-effective at £20,000 per QALY and an 88% probability at £30,000 per QALY. In sub-group analyses, the cost per QALY for ERS in sedentary obese individuals was £14,618; in sedentary hypertensives and sedentary individuals with depression, the estimated cost per QALY was £12,834 and £8,414 respectively. Incremental lifetime costs and benefits associated with ERS were small, reflecting the preventative public health context of the intervention; as a result, the estimates of cost-effectiveness are sensitive to variations in the relative risk of becoming physically active and in the cost of ERS.

Conclusions: ERS is associated with a modest increase in lifetime costs and benefits. The cost-effectiveness of ERS is highly sensitive to small changes in its effectiveness and cost, and is subject to significant uncertainty, mainly due to limitations in the clinical effectiveness evidence base.
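
The probabilistic sensitivity analysis step can be sketched as follows (the distributions below are illustrative assumptions, not the paper's model inputs): sample incremental costs and QALYs around the reported means and estimate the probability of cost-effectiveness at a given willingness-to-pay threshold.

```python
import numpy as np

rng = np.random.default_rng(5)

n = 100_000
d_cost = rng.normal(169, 60, n)            # incremental lifetime cost, GBP
d_qaly = rng.normal(0.008, 0.004, n)       # incremental QALYs

for threshold in (20_000, 30_000):
    # ERS is cost-effective when its net monetary benefit is positive.
    nmb = threshold * d_qaly - d_cost
    print(f"P(cost-effective at GBP {threshold}/QALY) = {(nmb > 0).mean():.2f}")
```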

Relevance: 20.00%

Abstract:

Assessing build-up and wash-off process uncertainty is important for accurate interpretation of model outcomes, to facilitate informed decision making when developing effective stormwater pollution mitigation strategies. Uncertainty inherent to pollutant build-up and wash-off processes influences the variations in pollutant loads entrained in stormwater runoff from urban catchments. However, build-up and wash-off predictions from stormwater quality models do not adequately represent such variations, due to poor characterisation of the variability of these processes in mathematical models. Changing the mathematical form of current models to incorporate process variability facilitates accounting for process uncertainty without significantly affecting model prediction performance. Moreover, an investigation of uncertainty propagation from build-up to wash-off confirmed that uncertainty in the build-up process significantly influences wash-off process uncertainty. Specifically, the behaviour of particles <150 µm during build-up primarily drives uncertainty propagation, resulting in appreciable variations in the pollutant load and composition during a wash-off event.
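
A Monte Carlo sketch of the propagation idea, using the standard exponential build-up and wash-off model forms (assumed here; the paper modifies the model forms themselves to embed variability): sample build-up parameters, pass each build-up load through the wash-off equation, and inspect the spread of the predicted wash-off load.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 10_000

b_max = rng.normal(120, 20, n)            # illustrative maximum build-up load
k_b = rng.normal(0.4, 0.08, n)            # build-up rate constant, 1/day
buildup = b_max * (1 - np.exp(-k_b * 5))  # load after 5 antecedent dry days

k_w, intensity, duration = 0.05, 20.0, 1.0   # wash-off coefficient, mm/h, h
washoff = buildup * (1 - np.exp(-k_w * intensity * duration))

for name, v in (("build-up", buildup), ("wash-off", washoff)):
    print(f"{name}: mean {v.mean():.1f}, CV {v.std() / v.mean():.2f}")
```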

Relevance: 20.00%

Abstract:

Uncertainty inherent to heavy metal build-up and wash-off stems from process variability, which results in inaccurate interpretation of stormwater quality model predictions. This study characterised the variability in heavy metal build-up and wash-off processes based on the temporal variations in particle-bound heavy metals commonly found on urban roads. The study found that the distributions of Al, Cr, Mn, Fe, Ni, Cu, Zn, Cd and Pb were consistent over the particle size fractions <150 µm and >150 µm, with most metals concentrated in the fraction <150 µm. When build-up and wash-off are considered as independent processes, the temporal variations in the heavy metals load are consistent with variations in the particulate load. However, the temporal variations in the build-up and wash-off loads of heavy metals and particulates are not consistent for consecutive build-up and wash-off events occurring on a continuous timeline. These inconsistencies are attributed to interactions between heavy metals and particulates <150 µm and >150 µm, which are influenced by particle characteristics such as organic matter content. The behavioural variability of particles determines the variations in the heavy metals load entrained in stormwater runoff. Accordingly, the variability in build-up and wash-off of particle-bound pollutants needs to be characterised when describing pollutant attachment to particulates in stormwater quality modelling. This will ensure that process uncertainty is accounted for, thereby enhancing the interpretation of outcomes derived from modelling studies.

Relevance: 20.00%

Abstract:

There are scenarios in which Unmanned Aerial Vehicle (UAV) navigation becomes a challenge due to occlusion of GPS signals, the presence of obstacles, and constraints on the space in which a UAV operates. A further challenge arises when a target whose location is unknown must be found within a confined space. In this paper we present a UAV navigation and target-finding mission, modelled as a Partially Observable Markov Decision Process (POMDP) and solved with a state-of-the-art online solver, in a real scenario using a low-cost commercial multi-rotor UAV and a modular system architecture running under the Robot Operating System (ROS). Using a POMDP has several advantages over conventional approaches, as it takes uncertainty in sensor information into account. We present a framework for testing the mission with simulation tests and real flight tests, in which we model the system dynamics and the motion and perception uncertainties. The system uses a quad-copter aircraft with an on-board downward-looking camera, without the need for GPS, while avoiding obstacles within a confined area. Results indicate that the system has a 100% success rate in simulation and an 80% success rate during flight tests for finding targets located at different locations.
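
At the core of such a target-finding task is a belief over the target's location that is updated after each noisy observation; a minimal grid-based Bayes filter sketch (illustrative sensor model, not the paper's solver) looks like this:

```python
import numpy as np

P_DETECT, P_FALSE = 0.8, 0.1       # assumed true/false positive detection rates
belief = np.full((10, 10), 1 / 100)             # uniform prior over a 10x10 area

def update(belief, cell, detected):
    """Bayes update of the target belief after observing one inspected cell."""
    like = np.full(belief.shape, P_FALSE if detected else 1 - P_FALSE)
    like[cell] = P_DETECT if detected else 1 - P_DETECT
    post = like * belief
    return post / post.sum()

belief = update(belief, (3, 4), detected=False)  # inspected (3,4), saw nothing
belief = update(belief, (6, 2), detected=True)   # detection at (6,2)
print("most likely target cell:", np.unravel_index(belief.argmax(), belief.shape))
```

A POMDP solver plans over beliefs like this one, trading off motion cost against the expected information gained from inspecting each cell.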

Relevance: 20.00%

Abstract:

There has been a recent spate of high-profile infrastructure cost overruns in Australia and internationally. This is just the tip of a longer-term and more deeply seated problem with initial budget estimating practice, well recognised in both academic research and industry reviews: the problem of uncertainty. A case study of the Sydney Opera House is used to identify and illustrate the key causal factors and system dynamics of cost overruns. It is conventionally the role of risk management to deal with such uncertainty, but the type and extent of uncertainty involved in complex projects is shown to render established risk management techniques ineffective. This paper considers a radical advance on current budget estimating practice, which combines a particular approach to statistical modelling with explicit training in estimating practice. The statistical modelling approach combines the probability management techniques of Savage, which operate on actual distributions of values rather than flawed summary representations of distributions, with the data pooling technique of Skitmore, in which the size of the reference set is optimised. Estimating training employs the calibration development methods pioneered by Hubbard, which reduce expert bias caused by over-confidence and improve the consistency of subjective decision-making. A new framework for initial budget estimating practice is developed based on the combined statistical and training methods, with each technique explained and discussed.
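
The probability-management idea can be sketched by resampling an actual reference-class distribution rather than a fitted summary (the overrun ratios below are illustrative, not Savage's or Skitmore's data): pool historical cost-overrun ratios and set the budget at a chosen confidence level instead of using a single point estimate.

```python
import numpy as np

rng = np.random.default_rng(8)

# Pooled reference-class data: final cost / initial estimate for past projects.
overrun_ratios = np.array([1.05, 1.10, 0.98, 1.40, 1.22, 1.65, 1.00, 1.30,
                           2.10, 1.15, 1.08, 1.50, 0.95, 1.25, 1.80, 1.12])

base_estimate = 50e6                               # engineer's point estimate
sims = base_estimate * rng.choice(overrun_ratios, size=100_000, replace=True)

for q in (0.5, 0.8, 0.95):
    print(f"budget with P(no overrun) = {q:.0%}: £{np.quantile(sims, q):,.0f}")
```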