14 results for model uncertainty
at QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast
Abstract:
This paper studies a dynamic pricing problem faced by a retailer who holds limited inventory, is uncertain about the demand rate model, and aims to maximize expected discounted revenue over an infinite time horizon. The retailer doubts the demand model, which is estimated from historical data, and treats it as an approximation. Uncertainty in the demand rate model is represented by a notion of generalized relative entropy process, and the robust pricing problem is formulated as a two-player zero-sum stochastic differential game. The pricing policy is obtained through the Hamilton-Jacobi-Isaacs (HJI) equation. The existence and uniqueness of the solution of the HJI equation are established, and a verification theorem is proved showing that this solution is indeed the value function of the pricing problem. The results are illustrated by an example with an exponential nominal demand rate.
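For orientation, a schematic HJI equation of the broad type such robust pricing formulations lead to is shown below; the notation, the multiplicative distortion psi, and the entropy penalty term are assumptions for illustration, not the paper's exact equation.

    % x = remaining inventory, p = price, \lambda(p) = nominal demand rate,
    % \psi = adversarial distortion of the demand rate, \theta = ambiguity penalty weight.
    \rho V(x) = \sup_{p}\,\inf_{\psi > 0}\Big\{
        \psi\,\lambda(p)\,\big[p + V(x-1) - V(x)\big]
        + \theta\,\lambda(p)\,\big(\psi\ln\psi - \psi + 1\big)
    \Big\}, \qquad V(0) = 0.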
Abstract:
Radiocarbon-dated sediment cores from six lakes in the Ahklun Mountains, south-western Alaska, were used to interpolate the ages of late Quaternary tephra beds ranging in age from 25.4 to 0.4 ka. The lakes are located downwind of the Aleutian Arc and Alaska Peninsula volcanoes in the northern Bristol Bay area, between 159° and 161°W at around 60°N. Sedimentation-rate age models for each lake were based on a published spline-fit procedure that uses Monte Carlo simulation to determine age-model uncertainty. In all, 62 ¹⁴C ages were used to construct the six age models, including 23 ages presented here for the first time. The age model from Lone Spruce Pond is based on 18 ages, and is currently the best-resolved Holocene age model available from the region, with an average 2σ age uncertainty of about ±109 years over the past 14.5 ka. The sedimentary sequence from Lone Spruce Pond contains seven tephra beds, more than previously found in any other lake in the area. Of the 26 radiocarbon-dated tephra beds at the six lakes and from a soil pit, seven are correlated between two or more sites based on their ages. The major-element geochemistry of glass shards from most of these tephra beds supports the age-based correlations. The remaining tephra beds appear to be present at only one site, based on their unique geochemistry or age. The 5.8 ka tephra is similar to the widespread Aniakchak tephra [3.7 ± 0.2 (1σ) ka], but can be distinguished conclusively based on its trace-element geochemistry. The 3.1 and 0.4 ka tephras have glass major- and trace-element geochemical compositions indistinguishable from prominent Aniakchak tephra, and might represent redeposited beds. Only two tephra beds are found in all lakes: the Aniakchak tephra (3.7 ± 0.2 ka) and Tephra B (6.1 ± 0.3 ka). The tephra beds can be used as chronostratigraphic markers for other sedimentary sequences in the region, including cores from Cascade and Sunday lakes, which were previously undated and were analyzed in this study to correlate with the new regional tephrostratigraphy. © 2012 John Wiley & Sons, Ltd.
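The published spline-fit procedure itself is not reproduced here; the sketch below illustrates the general Monte Carlo approach to age-model uncertainty, with synthetic dated horizons (depths, ages, and errors are invented for illustration).

    import numpy as np
    from scipy.interpolate import UnivariateSpline

    rng = np.random.default_rng(0)

    # Assumed synthetic dated horizons: depth (cm), age (cal a BP), 1-sigma error.
    depths = np.array([10.0, 50.0, 120.0, 200.0, 310.0, 450.0])
    ages = np.array([400.0, 1800.0, 4200.0, 6900.0, 10100.0, 14200.0])
    sigma = np.array([40.0, 60.0, 80.0, 90.0, 120.0, 150.0])

    query = np.linspace(depths.min(), depths.max(), 200)
    sims = []
    for _ in range(2000):
        # Perturb each dated horizon within its reported uncertainty and
        # fit a smoothing spline through the perturbed ages.
        perturbed = rng.normal(ages, sigma)
        spline = UnivariateSpline(depths, perturbed, k=3, s=len(depths))
        model = spline(query)
        if np.all(np.diff(model) >= 0):   # keep stratigraphically ordered models
            sims.append(model)

    sims = np.array(sims)
    median = np.median(sims, axis=0)
    lo, hi = np.percentile(sims, [2.5, 97.5], axis=0)   # ~2-sigma envelope
    i = np.argmin(np.abs(query - 250.0))
    print(f"age at 250 cm: {median[i]:.0f} (+{hi[i] - median[i]:.0f}/"
          f"-{median[i] - lo[i]:.0f}) cal a BP")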
Abstract:
This paper addresses the estimation of parameters of a Bayesian network from incomplete data. The task is usually tackled by running the Expectation-Maximization (EM) algorithm several times in order to obtain a high log-likelihood estimate. We argue that choosing the maximum log-likelihood estimate (as well as the maximum penalized log-likelihood and the maximum a posteriori estimate) has severe drawbacks, being affected both by overfitting and model uncertainty. Two ideas are discussed to overcome these issues: a maximum entropy approach and a Bayesian model averaging approach. Both ideas can be easily applied on top of EM, while the entropy idea can be also implemented in a more sophisticated way, through a dedicated non-linear solver. A vast set of experiments shows that these ideas produce significantly better estimates and inferences than the traditional and widely used maximum (penalized) log-likelihood and maximum a posteriori estimates. In particular, if EM is adopted as optimization engine, the model averaging approach is the best performing one; its performance is matched by the entropy approach when implemented using the non-linear solver. The results suggest that the applicability of these ideas is immediate (they are easy to implement and to integrate in currently available inference engines) and that they constitute a better way to learn Bayesian network parameters.
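As a hedged sketch of the model-averaging idea on top of EM restarts: the paper's exact weighting scheme is not reproduced here, `run_em` is a placeholder for any EM implementation, and weighting restarts by exponentiated log-likelihood is one illustrative choice.

    import numpy as np

    def average_em_restarts(run_em, data, n_restarts=20, seed=0):
        """Illustrative model-averaging combination of EM restarts.

        run_em(data, rng) is assumed to return (param_table, log_likelihood),
        where param_table is an array of conditional probabilities. Instead of
        keeping only the maximum-likelihood restart, all restarts are averaged
        with weights proportional to exp(log-likelihood).
        """
        rng = np.random.default_rng(seed)
        params, loglik = zip(*(run_em(data, rng) for _ in range(n_restarts)))
        loglik = np.array(loglik)
        w = np.exp(loglik - loglik.max())   # stabilised softmax weights
        w /= w.sum()
        tables = np.array(params)
        return np.tensordot(w, tables, axes=1)   # weighted average of the CPTs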
Abstract:
The ability of an agent to make quick, rational decisions in an uncertain environment is paramount for its applicability in realistic settings. Markov Decision Processes (MDPs) provide such a framework, but can only model uncertainty that can be expressed as probabilities. Possibilistic counterparts of MDPs make it possible to model imprecise beliefs, yet they cannot accurately represent probabilistic sources of uncertainty and they lack the efficient online solvers found in the probabilistic MDP community. In this paper we advance the state of the art in three important ways. Firstly, we propose the first online planner for possibilistic MDPs by adapting the Monte-Carlo Tree Search (MCTS) algorithm. A key component is the development of efficient search structures to sample possibility distributions based on the DPY transformation as introduced by Dubois, Prade, and Yager. Secondly, we introduce a hybrid MDP model that allows us to express both possibilistic and probabilistic uncertainty, where the hybrid model is a proper extension of both probabilistic and possibilistic MDPs. Thirdly, we demonstrate that MCTS algorithms can readily be applied to solve such hybrid models.
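The DPY transformation referred to converts a possibility distribution into a probability distribution that can then be sampled during tree search. A minimal sketch follows; the variable names are mine, and the formula is the standard Dubois-Prade-Yager transformation.

    import numpy as np

    def dpy_probabilities(pi):
        """Dubois-Prade-Yager transformation of a possibility distribution
        into a probability distribution (pi assumed normalised: max(pi) == 1)."""
        order = np.argsort(pi)[::-1]            # sort degrees in decreasing order
        sorted_pi = np.append(pi[order], 0.0)   # pad with pi_{n+1} = 0
        n = len(pi)
        p_sorted = np.array([
            sum((sorted_pi[j] - sorted_pi[j + 1]) / (j + 1) for j in range(i, n))
            for i in range(n)
        ])
        p = np.empty(n)
        p[order] = p_sorted                     # undo the sort
        return p

    pi = np.array([1.0, 0.7, 0.7, 0.2])         # example possibility degrees
    p = dpy_probabilities(pi)                   # sums to 1 when max(pi) == 1
    rng = np.random.default_rng(0)
    draws = rng.choice(len(pi), size=5, p=p)    # sample outcomes for MCTS rollouts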
Abstract:
The paper extends Blackburn and Galindev's (Economics Letters, Vol. 79 (2003), pp. 417-421) stochastic growth model, in which productivity growth entails both external and internal learning behaviour, by introducing a constant relative risk aversion utility function and productivity shocks. Consequently, the relationship between long-term growth and short-term volatility depends not only on the relative importance of each learning mechanism but also on a parameter measuring individuals' attitude towards risk.
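The constant relative risk aversion utility referred to is presumably the standard form (the paper's exact parameterisation is not reproduced here):

    u(c) = \frac{c^{1-\sigma} - 1}{1 - \sigma} \quad (\sigma > 0,\ \sigma \neq 1),
    \qquad u(c) = \ln c \quad (\sigma = 1),

for which the coefficient of relative risk aversion, -c\,u''(c)/u'(c) = \sigma, is constant in consumption c; sigma is the risk-attitude parameter the abstract refers to.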
Abstract:
This paper studies the dynamic pricing problem of selling a fixed stock of perishable items over a finite horizon, where the decision maker does not have the historical data needed to estimate the distribution of uncertain demand, but has imprecise information about the quantity demanded. We model this uncertainty using fuzzy variables. The dynamic pricing problem based on credibility theory is formulated using three fuzzy programming models, viz. the fuzzy expected revenue maximization model, the α-optimistic revenue maximization model, and the credibility maximization model. Fuzzy simulations for functions with fuzzy parameters are given and embedded into a genetic algorithm to design a hybrid intelligent algorithm to solve these three models. Finally, a real-world example is presented to highlight the effectiveness of the developed model and algorithm.
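A hedged sketch of the first of the three models (fuzzy expected-revenue maximization) with a credibility-based evaluation embedded in a crude genetic algorithm; the triangular demand model, all coefficients, and the GA settings are assumptions for illustration, not the paper's.

    import numpy as np

    rng = np.random.default_rng(0)

    def cr_ge(r, a, b, c):
        """Credibility Cr{xi >= r} of a triangular fuzzy variable xi = (a, b, c),
        a < b < c, as the average of possibility and necessity."""
        pos = 1.0 if r <= b else max(0.0, (c - r) / (c - b))
        nec = 1.0 if r <= a else (max(0.0, (b - r) / (b - a)) if r <= b else 0.0)
        return 0.5 * (pos + nec)

    def expected_revenue(p):
        """E[p * D(p)] via E[xi] = integral of Cr{xi >= r} dr over r >= 0,
        with an assumed triangular fuzzy demand D(p) = (60-2p, 100-2p, 140-2p)."""
        a, b, c = p * (60 - 2 * p), p * (100 - 2 * p), p * (140 - 2 * p)
        grid = np.linspace(a, c, 400)                 # support of the revenue
        cr = np.array([cr_ge(r, a, b, c) for r in grid])
        return a + np.sum((cr[:-1] + cr[1:]) * np.diff(grid)) / 2.0

    # Crude genetic algorithm over price (kept in a range where demand stays positive).
    pop = rng.uniform(1.0, 29.0, size=30)
    for _ in range(60):
        fitness = np.array([expected_revenue(p) for p in pop])
        parents = pop[np.argsort(fitness)][-10:]      # truncation selection
        children = rng.choice(parents, size=20) + rng.normal(0.0, 0.5, size=20)
        pop = np.clip(np.concatenate([parents, children]), 1.0, 29.0)

    best = pop[np.argmax([expected_revenue(p) for p in pop])]
    print(f"best price ~ {best:.2f}, fuzzy expected revenue ~ {expected_revenue(best):.1f}")

The α-optimistic and credibility maximization variants would swap `expected_revenue` for the corresponding credibility-based objective while keeping the same search loop.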
Abstract:
George Brecht, an artist best known for his associations with Fluxus, is considered to have made significant contributions to emerging traditions of conceptual art and experimental music in the early 1960s. His Event scores, brief verbal scores that comprised lists of terms or open-ended instructions, provided a signature model for indeterminate composition and were ‘used extensively by virtually every Fluxus artist’. This article revisits Brecht’s early writings and research to argue that, while Event scores were adopted within Fluxus performance, they were intended as much more than performance devices. Specifically, Brecht conceived of his works as ‘structures of experience’ that, by revealing the underlying connections between chanced forms, could enable a kind of enlightenment rooted within an experience of a ‘unified reality’.
Abstract:
Flutter prediction as currently practiced is usually deterministic, with a single structural model used to represent an aircraft. By using interval analysis to take into account structural variability, recent work has demonstrated that small changes in the structure can lead to very large changes in the altitude at which flutter occurs (Marques, Badcock, et al., J. Aircraft, 2010). In this follow-up work we examine the same phenomenon using probabilistic collocation (PC), an uncertainty quantification technique which can efficiently propagate multivariate stochastic input through a simulation code, in this case an eigenvalue-based fluid-structure stability code. The resulting analysis predicts the consequences of an uncertain structure on the incidence of flutter in probabilistic terms, information that could be useful in planning flight-tests and assessing the risk of structural failure. The uncertainty in flutter altitude is confirmed to be substantial. Assuming that the structural uncertainty represents an epistemic uncertainty regarding the structure, it may be reduced with the availability of additional information, for example aeroelastic response data from a flight-test. Such data are used to update the structural uncertainty using Bayes' theorem. The consequent flutter uncertainty is significantly reduced across the entire Mach number range.
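A hedged illustration of the Bayes'-theorem update described above; the response surrogate `g`, the prior, and the flight-test datum are hypothetical stand-ins for the eigenvalue-based stability code and real data.

    import numpy as np

    def g(k):
        """Stand-in aeroelastic response (e.g., modal damping at a test condition)
        as a function of a normalised structural stiffness parameter k."""
        return 0.8 * k + 0.1 * k ** 2

    k = np.linspace(0.5, 1.5, 501)
    dk = k[1] - k[0]

    prior = np.exp(-0.5 * ((k - 1.0) / 0.15) ** 2)     # epistemic prior on k
    prior /= prior.sum() * dk

    y_obs, noise = 0.93, 0.02                          # assumed flight-test datum
    likelihood = np.exp(-0.5 * ((y_obs - g(k)) / noise) ** 2)

    posterior = prior * likelihood                     # Bayes' theorem, up to a constant
    posterior /= posterior.sum() * dk

    def mean_std(p):
        m = np.sum(k * p) * dk
        return m, np.sqrt(np.sum((k - m) ** 2 * p) * dk)

    print("prior mean/std:     %.3f / %.3f" % mean_std(prior))
    print("posterior mean/std: %.3f / %.3f" % mean_std(posterior))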
Abstract:
This paper considers the ways in which structural model parameter variability can influence aeroelastic stability. Previous work on formulating the stability calculation (with the Euler equations providing the aerodynamic predictions) is exploited, allowing this question to be investigated through Monte Carlo, interval, and perturbation calculations. Three routes are identified. The first involves variable normal-mode frequencies only. The second involves normal-mode frequencies and shapes. Finally, the third, in addition to normal-mode frequencies and shapes, also includes their influence on the static equilibrium. Previous work has suggested only considering the first route, which allows significant gains in computational efficiency if reduced-order models can be built for the aerodynamics. However, results in the current paper show that neglecting the mode-shape variation can give misleading results for the flutter-onset prediction, complicating the development of reduced aerodynamic models for variability analysis.
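As a hedged illustration of the Monte Carlo and interval routes: the quadratic surrogate `flutter_speed` and the frequency bounds below are invented, and varying only the two frequencies corresponds to the first of the three routes (mode-shape effects are not captured).

    import itertools
    import numpy as np

    def flutter_speed(f1, f2):
        """Hypothetical surrogate for flutter-onset speed (m/s) as a function
        of the first two normal-mode frequencies (Hz)."""
        return 120.0 + 15.0 * (f2 - f1) - 0.8 * (f2 - f1) ** 2

    bounds = [(4.5, 5.5), (9.0, 11.0)]   # assumed variability of the two frequencies

    # Interval route via vertex enumeration (valid here because the surrogate is
    # monotone in (f2 - f1) over the box; in general interior extrema must be checked).
    corners = [flutter_speed(*v) for v in itertools.product(*bounds)]
    print("interval bounds:   [%.1f, %.1f]" % (min(corners), max(corners)))

    # Monte Carlo route with uniform sampling over the same box.
    rng = np.random.default_rng(1)
    low, high = zip(*bounds)
    samples = rng.uniform(low, high, size=(20000, 2))
    speeds = flutter_speed(samples[:, 0], samples[:, 1])
    print("Monte Carlo range: [%.1f, %.1f]" % (speeds.min(), speeds.max()))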
Abstract:
Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which are often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates.
Abstract:
To provide timely reactions to a large volume of surveillance data, uncertainty-enabled event reasoning frameworks for CCTV- and sensor-based intelligent surveillance systems have been integrated to model and infer events of interest. However, most existing works do not consider decision making under uncertainty, which is important for surveillance operators. In this paper, we extend an event reasoning framework for decision support, which enables our framework to predict, rank and alarm threats from multiple heterogeneous sources.
Abstract:
There are many uncertainties in forecasting the charging and discharging capacity required by electric vehicles (EVs), often as a consequence of stochastic usage and intermittent travel. In terms of large-scale EV integration in future power networks, this paper develops a capacity forecasting model which considers eight particular uncertainties in three categories. Using the model, a typical application of EVs to load levelling is presented and exemplified using a UK 2020 case study. The results presented in this paper demonstrate that the proposed model is accurate for charge and discharge prediction and a feasible basis for the steady-state analysis required for large-scale EV integration.
Abstract:
The assimilation of discrete higher-fidelity data points with model predictions can be used to achieve a reduction in the uncertainty of the model input parameters which generate accurate predictions. The problem investigated here involves the prediction of limit-cycle oscillations using a High-Dimensional Harmonic Balance (HDHB) method. The efficiency of the HDHB method is exploited to enable calibration of structural input parameters using a Bayesian inference technique. Markov-chain Monte Carlo is employed to sample the posterior distributions. Parameter estimation is carried out on both a pitch/plunge aerofoil and a Goland wing configuration. In both cases significant refinement was achieved in the distribution of possible structural parameters, allowing better predictions of their true deterministic values.
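A hedged sketch of such an MCMC calibration loop; `lco_amplitude` is a hypothetical surrogate standing in for the HDHB solver, and the prior, datum, and noise level are assumptions for illustration.

    import numpy as np

    rng = np.random.default_rng(42)

    def lco_amplitude(theta):
        """Stand-in for the HDHB prediction of limit-cycle amplitude as a
        function of two normalised structural parameters."""
        k_alpha, k_h = theta
        return 0.4 * k_alpha + 0.2 * k_h ** 2

    y_obs, sigma = 0.55, 0.02            # observed amplitude and noise level

    def log_post(theta):
        if np.any(theta <= 0) or np.any(theta > 2):
            return -np.inf               # flat prior on (0, 2] for each parameter
        r = (y_obs - lco_amplitude(theta)) / sigma
        return -0.5 * r * r              # Gaussian likelihood, up to a constant

    # Random-walk Metropolis-Hastings sampling of the posterior.
    theta = np.array([1.0, 1.0])
    lp = log_post(theta)
    chain = []
    for _ in range(20000):
        prop = theta + rng.normal(0.0, 0.05, size=2)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop    # accept the proposed move
        chain.append(theta)
    chain = np.array(chain[5000:])       # discard burn-in
    print("posterior mean:", chain.mean(axis=0), "posterior std:", chain.std(axis=0))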