993 results for Probabilistic Algorithms
Abstract:
Credal nets are probabilistic graphical models which extend Bayesian nets to cope with sets of distributions. This feature makes the model particularly suited for the implementation of classifiers and knowledge-based systems. When working with sets of (instead of single) probability distributions, the identification of the optimal option can be based on different criteria, some of which may lead to multiple choices. Yet, most of the inference algorithms for credal nets are designed to compute only the bounds of the posterior probabilities. This prevents some of the existing criteria from being used. To overcome this limitation, we present two simple transformations for credal nets which make it possible to compute decisions based on the maximality and E-admissibility criteria without any modification to the inference algorithms. We also prove that these decision problems have the same complexity as standard inference, being NP^PP-hard for general credal nets and NP-hard for polytrees.
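The paper's transformations are not reproduced here, but the two decision criteria it targets are easy to illustrate. Below is a minimal Python sketch, assuming the credal set is given as a finite list of probability vectors (e.g., its extreme points); the function names and toy numbers are ours, not the authors'.

```python
# Hypothetical sketch (not the paper's algorithm): maximality and
# E-admissibility for a credal set given by finitely many distributions.
import numpy as np

def maximal_options(utilities, credal_set):
    """utilities: (n_options, n_states) payoff matrix.
    credal_set: iterable of probability vectors over the states.
    An option is maximal if no other option has strictly higher
    expected utility under *every* distribution in the set."""
    P = np.asarray(list(credal_set))          # (n_dists, n_states)
    U = np.asarray(utilities)                 # (n_options, n_states)
    E = U @ P.T                               # expected utility per option/dist
    n = U.shape[0]
    dominated = [any(np.all(E[j] > E[i]) for j in range(n) if j != i)
                 for i in range(n)]
    return [i for i in range(n) if not dominated[i]]

def e_admissible_options(utilities, credal_set):
    """An option is E-admissible if it maximizes expected utility
    under at least one distribution in the set."""
    P = np.asarray(list(credal_set))
    E = np.asarray(utilities) @ P.T           # (n_options, n_dists)
    best = E.argmax(axis=0)                   # best option per distribution
    return sorted(set(best.tolist()))

# Two options, two states, two extreme points of a credal set:
U = [[1.0, 0.0], [0.4, 0.6]]
K = [[0.8, 0.2], [0.3, 0.7]]
print(maximal_options(U, K), e_admissible_options(U, K))
```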
Abstract:
This paper explores semi-qualitative probabilistic networks (SQPNs), which combine numeric and qualitative information. We first show that exact inference with SQPNs is NP^PP-complete. We then show that existing qualitative relations in SQPNs (plus probabilistic logic and imprecise assessments) can be dealt with effectively through multilinear programming. We then discuss learning: we consider a maximum likelihood method that generates point estimates given an SQPN and empirical data, and we describe a Bayesian-minded method that employs the Imprecise Dirichlet Model to generate set-valued estimates.
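The set-valued estimation step can be illustrated directly: under the Imprecise Dirichlet Model with hyperparameter s, the probability of a category with count n_i out of N observations is bounded by n_i/(N+s) and (n_i+s)/(N+s). A minimal sketch (the interface is ours, not the paper's):

```python
# Illustrative sketch of the Imprecise Dirichlet Model (IDM): from counts
# n_1..n_k and a hyperparameter s, each category gets an interval estimate
#   [n_i / (N + s), (n_i + s) / (N + s)]
# rather than a single point estimate.
def idm_intervals(counts, s=2.0):
    N = sum(counts)
    return [(n / (N + s), (n + s) / (N + s)) for n in counts]

# e.g. 8 heads and 2 tails observed:
print(idm_intervals([8, 2], s=2.0))   # [(0.667, 0.833), (0.167, 0.333)]
```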
Abstract:
When an agent wants to fulfill its desires about the world, it usually has multiple plans to choose from, and these plans differ in their preconditions and in the side effects they produce beyond achieving the goals. Therefore, for further reasoning and interaction with the world, a plan selection strategy (usually based on plan cost estimation) is mandatory for an autonomous agent. This demand becomes even more critical when uncertainty in the observation of the world is taken into account, since in this case we consider not only the costs of different plans, but also their chances of success estimated according to the agent's beliefs. In addition, when multiple goals are considered together, different plans achieving the goals can conflict in their preconditions (contexts) or in the resources they require. Hence a plan selection strategy should be able to choose a subset of plans that fulfills the maximum number of goals while maintaining context consistency and resource-tolerance among the chosen plans. To address these two issues, in this paper we first propose several principles that a plan selection strategy should satisfy, and then we present selection strategies that stem from the principles, depending on whether plan cost is taken into account. In addition, we show that our selection strategy can partially recover intention revision.
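A brute-force Python sketch of the kind of selection problem the paper describes, assuming plans carry a goal set, a resource cost, and a pairwise context-consistency test; this is our toy formulation, not the authors' strategy:

```python
# Hypothetical sketch: choose a subset of plans that covers the most goals
# while keeping contexts mutually consistent and total resource use within
# a budget. Brute force over subsets, for clarity rather than efficiency.
from itertools import combinations

def select_plans(plans, budget, consistent):
    """plans: list of (name, goals_set, resource_cost).
    consistent(a, b): True if two plans' contexts do not conflict."""
    best, best_goals = [], set()
    for r in range(1, len(plans) + 1):
        for subset in combinations(plans, r):
            if sum(p[2] for p in subset) > budget:
                continue                      # violates resource-tolerance
            if any(not consistent(a, b) for a, b in combinations(subset, 2)):
                continue                      # violates context consistency
            goals = set().union(*(p[1] for p in subset))
            if len(goals) > len(best_goals):
                best, best_goals = list(subset), goals
    return best, best_goals

plans = [("p1", {"g1"}, 2), ("p2", {"g2"}, 3), ("p3", {"g1", "g2"}, 6)]
ok = lambda a, b: not (a[0] == "p1" and b[0] == "p3")   # toy conflict
print(select_plans(plans, budget=5, consistent=ok))
```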
Abstract:
The demand for sustainable development has resulted in rapid growth in wind power worldwide. Although various approaches have been proposed to improve forecast accuracy and to overcome the uncertainties associated with traditional methods, the stochastic and variable nature of wind remains the most challenging issue in accurately forecasting wind power. This paper presents a hybrid deterministic-probabilistic method in which a temporally local ‘moving window’ technique is used within a Gaussian Process to examine estimated forecasting errors. This temporally local Gaussian Process requires less measurement data while predicting wind power faster and more accurately at two wind farms, one in the USA and the other in Ireland. Statistical analysis of the results shows that the method can substantially reduce the forecasting error while being more likely to generate Gaussian-distributed residuals, particularly for short-term forecast horizons, owing to its capability to handle the time-varying characteristics of wind power.
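As a rough illustration of the ‘moving window’ idea (not the paper's exact method), the following sketch refits a Gaussian Process on only the most recent samples before each one-step-ahead prediction, using scikit-learn; the kernel and window size are arbitrary choices of ours:

```python
# Sketch of a temporally local ("moving window") Gaussian Process: refit a
# GP on only the most recent `window` samples and predict one step ahead,
# so the model tracks time-varying behaviour.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def moving_window_gp_forecast(y, window=48):
    """One-step-ahead forecasts for a series y using a sliding window."""
    kernel = RBF(length_scale=5.0) + WhiteKernel(noise_level=0.1)
    t = np.arange(len(y)).reshape(-1, 1)
    preds = []
    for i in range(window, len(y)):
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        gp.fit(t[i - window:i], y[i - window:i])
        mu, sigma = gp.predict(t[i:i + 1], return_std=True)
        preds.append((mu[0], sigma[0]))       # mean forecast and uncertainty
    return preds

rng = np.random.default_rng(0)
y = np.sin(np.linspace(0, 20, 200)) + 0.1 * rng.standard_normal(200)
print(moving_window_gp_forecast(y, window=48)[:3])
```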
Abstract:
Game-theoretic security resource allocation problems have generated significant interest in the area of designing and developing security systems. These approaches traditionally utilize the Stackelberg game model for security resource scheduling in order to improve the protection of critical assets. The basic assumption in Stackelberg games is that a defender acts first, and an attacker then chooses its best response after observing the defender’s strategy commitment (e.g., protecting a specific asset). Thus, the model requires the attacker to fully or partially observe the defender’s strategy, an assumption that is unrealistic in real-time threat recognition and prevention. In this paper, we propose a new solution concept (i.e., a method to predict how a game will be played) for deriving the defender’s optimal strategy based on the principle of acceptable costs of minimax regret. Moreover, we demonstrate the advantages of this solution concept by analyzing its properties.
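The core of minimax regret is easy to state: for each defender strategy, compute the worst-case regret relative to the best payoff the defender could have achieved against each attack, and pick the strategy minimizing that worst case. A toy numpy sketch over pure strategies (the paper's solution concept, with its acceptable-cost principle, is richer than this):

```python
# Minimal minimax-regret sketch. payoff[d, a] = defender's payoff when the
# defender plays strategy d and the attacker plays attack a.
import numpy as np

def minimax_regret_strategy(payoff):
    best_per_attack = payoff.max(axis=0)      # best achievable vs. each attack
    regret = best_per_attack - payoff         # regret[d, a]
    worst_regret = regret.max(axis=1)         # worst case per defender strategy
    return int(worst_regret.argmin()), regret

payoff = np.array([[5.0, 1.0],
                   [2.0, 4.0],
                   [3.0, 3.0]])
d, regret = minimax_regret_strategy(payoff)
print(d, regret[d])   # strategy 2 caps regret at 2 under either attack
```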
Abstract:
AgentSpeak is a logic-based programming language, based on the Belief-Desire-Intention (BDI) paradigm, suitable for building complex agent-based systems. To limit the computational complexity, agents in AgentSpeak rely on a plan library to reduce the planning problem to the much simpler problem of plan selection. However, such a plan library is often inadequate when an agent is situated in an uncertain environment. In this paper, we propose the AgentSpeak+ framework, which extends AgentSpeak with a mechanism for probabilistic planning. The beliefs of an AgentSpeak+ agent are represented using epistemic states to allow an agent to reason about its uncertain observations and the uncertain effects of its actions. Each epistemic state consists of a POMDP, used to encode the agent’s knowledge of the environment, and its associated probability distribution (or belief state). In addition, the POMDP is used to select the optimal actions for achieving a given goal, even when facing uncertainty.
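The belief-state machinery such an epistemic state relies on is the standard POMDP Bayesian update: after action a and observation o, b'(s') ∝ O(o|s',a) Σ_s T(s'|s,a) b(s). A minimal sketch with hypothetical matrices:

```python
# Standard POMDP belief-state update, as an AgentSpeak+-style epistemic
# state would use it (toy matrices; not the paper's implementation).
import numpy as np

def belief_update(b, T, O, a, o):
    """b: current belief over states, shape (S,).
    T[a]: transition matrix, T[a][s, s'] = P(s' | s, a).
    O[a]: observation matrix, O[a][s', o] = P(o | s', a)."""
    predicted = b @ T[a]                  # sum_s T(s'|s,a) * b(s)
    unnorm = O[a][:, o] * predicted       # weight by observation likelihood
    return unnorm / unnorm.sum()          # renormalize

T = {0: np.array([[0.9, 0.1], [0.2, 0.8]])}
O = {0: np.array([[0.7, 0.3], [0.1, 0.9]])}
b = np.array([0.5, 0.5])
print(belief_update(b, T, O, a=0, o=1))   # belief shifts toward state 1
```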
Abstract:
Rationale, aims and objectives: This study aimed to determine the value of using a mix of clinical pharmacy data and routine hospital admission spell data in the development of predictive algorithms. Exploration of risk factors in hospitalized patients, together with the targeting strategies devised, will enable the prioritization of clinical pharmacy services to optimize patient outcomes.
Methods: Predictive algorithms were developed through a number of detailed steps using a 75% sample of integrated medicines management (IMM) patients, and validated using the remaining 25%. IMM patients receive targeted clinical pharmacy input throughout their hospital stay. The algorithms were applied to the validation sample, and a predicted risk probability was generated for each patient from the coefficients. Risk thresholds for the algorithms were determined by identifying the cut-off points of risk scores at which the algorithms would have the highest discriminative performance. Clinical pharmacy staffing levels were obtained from the pharmacy department staffing database.
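For illustration only (this is not the study's actual model or data): a predicted risk probability can be obtained from logistic-regression coefficients, and a cut-off with the highest discriminative performance can be located via Youden's J on an ROC curve:

```python
# Illustrative sketch: risk probability from logistic coefficients, and a
# threshold chosen where Youden's J (sensitivity + specificity - 1) peaks.
import numpy as np
from sklearn.metrics import roc_curve

def predicted_risk(X, coef, intercept):
    """Logistic model: p = 1 / (1 + exp(-(intercept + X @ coef)))."""
    return 1.0 / (1.0 + np.exp(-(intercept + X @ coef)))

def best_threshold(y_true, risk):
    fpr, tpr, thresholds = roc_curve(y_true, risk)
    return thresholds[np.argmax(tpr - fpr)]   # maximizes Youden's J

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))             # e.g. admissions, meds, co-morbidity
coef, intercept = np.array([0.8, 0.5, 0.3]), -0.2
risk = predicted_risk(X, coef, intercept)
y = (rng.random(200) < risk).astype(int)      # synthetic outcomes
print(best_threshold(y, risk))
```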
Results: The numbers of previous emergency admissions and of admission medicines, together with age-adjusted co-morbidity and diuretic receipt, formed a 12-month post-discharge mortality and/or readmission risk algorithm. Age-adjusted co-morbidity proved to be the best index for predicting mortality. Increased numbers of clinical pharmacy staff at ward level were correlated with a reduction in the risk-adjusted mortality index (RAMI).
Conclusions: The algorithms created were valid in predicting the risk of in-hospital and post-discharge mortality and the risk of hospital readmission 3, 6 and 12 months post-discharge. The provision of ward-based clinical pharmacy services is a key component in reducing RAMI and enabling the full benefits of pharmacy input to patient care to be realized.
Abstract:
As an important type of spatial keyword query, the m-closest keywords (mCK) query finds a group of objects such that they cover all query keywords and have the smallest diameter, which is defined as the largest distance between any pair of objects in the group. The query is useful in many applications such as detecting locations of web resources. However, the existing work does not study the intractability of this problem and only provides exact algorithms, which are computationally expensive.
In this paper, we prove that the problem of answering mCK queries is NP-hard. We first devise a greedy algorithm that has an approximation ratio of 2. Then, we observe that an mCK query can be approximately answered by finding the circle with the smallest diameter that encloses a group of objects together covering all query keywords. We prove that the group enclosed in the circle can answer the mCK query with an approximation ratio of 2/√3. Based on this, we develop an algorithm for finding such a circle exactly, which has a high time complexity. To improve efficiency, we propose another two algorithms that find such a circle approximately, with a ratio of (2/√3 + ε). Finally, we propose an exact algorithm that utilizes the group found by the (2/√3 + ε)-approximation algorithm to obtain the optimal group. We conduct extensive experiments using real-life datasets. The experimental results offer insights into both the efficiency and the accuracy of the proposed approximation algorithms, and they also demonstrate that our exact algorithm outperforms the best-known existing algorithm by an order of magnitude.
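One plausible reading of the anchor-based greedy step, sketched in Python: for each object taken as an anchor, collect the nearest object carrying each query keyword and keep the group with the smallest diameter. This is our reconstruction from the abstract, not necessarily the authors' exact algorithm:

```python
# Hypothetical anchor-based greedy for mCK (in the spirit of the described
# 2-approximation): build one candidate group per anchor object, then keep
# the group whose diameter (largest pairwise distance) is smallest.
from itertools import combinations
from math import dist

def greedy_mck(objects, query):
    """objects: list of ((x, y), keywords_set). Returns (group, diameter)."""
    best, best_diam = None, float("inf")
    for anchor, _ in objects:
        group = []
        for kw in query:
            cands = [o for o in objects if kw in o[1]]
            if not cands:
                return None                   # some keyword is not coverable
            group.append(min(cands, key=lambda o: dist(anchor, o[0])))
        pts = {g[0] for g in group}
        diam = max((dist(p, q) for p, q in combinations(pts, 2)), default=0.0)
        if diam < best_diam:
            best, best_diam = group, diam
    return best, best_diam

objs = [((0, 0), {"cafe"}), ((1, 0), {"gym"}), ((5, 5), {"cafe", "gym"})]
print(greedy_mck(objs, {"cafe", "gym"}))   # single object covering both wins
```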