999 results for influence diagrams


Relevance: 100.00%

Publisher:

Abstract:

Influence diagrams are intuitive and concise representations of structured decision problems. When the problem is non-Markovian, an optimal strategy can be exponentially large in the size of the diagram. We can avoid this inherent intractability by constraining the size of admissible strategies, giving rise to limited-memory influence diagrams. A natural question is then how small strategies need to be to enable efficient optimal planning. Arguably, the smallest strategies one can conceive simply prescribe an action for each time step, without considering past decisions or observations. Previous work has shown that finding such optimal strategies is NP-hard even for polytree-shaped diagrams with ternary variables and a single value node, but the case of binary variables was left open. In this paper we settle that case: we first note that optimal strategies can be obtained in polynomial time for polytree-shaped diagrams with binary variables and a single value node, and we then show that the same problem is NP-hard if the diagram has multiple value nodes. These two results close the fixed-parameter complexity analysis of optimal strategy selection in influence diagrams parametrized by the shape of the diagram, the number of value nodes and the maximum variable cardinality.
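The "smallest" strategies described in this abstract can be illustrated with a minimal sketch: a memoryless strategy fixes one action per time step, ignoring all observations, and we simply enumerate the candidates. The two-step problem, prior, and utility function below are invented for illustration and are not from the paper.

```python
import itertools

# Hypothetical toy problem: one binary chance variable S and two binary
# decisions taken "blind" (without observing S or each other).
P_S = {0: 0.7, 1: 0.3}  # prior over the state of the world

def utility(a1, a2, s):
    # Reward 1 for each action that matches the hidden state.
    return (a1 == s) + (a2 == s)

def expected_utility(strategy):
    a1, a2 = strategy
    return sum(P_S[s] * utility(a1, a2, s) for s in P_S)

# A memoryless strategy just prescribes an action per time step,
# so with 2 binary decisions there are only 2^2 candidates.
strategies = list(itertools.product([0, 1], repeat=2))
best = max(strategies, key=expected_utility)
```

For binary variables and a single value node the paper shows this optimization is polynomial on polytrees; the brute-force enumeration here is only to make the strategy space concrete.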

Relevance: 100.00%

Publisher:

Abstract:

We present a new algorithm for exactly solving decision making problems represented as influence diagrams. We do not require the usual assumptions of no forgetting and regularity; this allows us to solve problems with simultaneous decisions and limited information. The algorithm is empirically shown to outperform a state-of-the-art algorithm on randomly generated problems of up to 150 variables and 10^64 solutions. We show that these problems are NP-hard even if the underlying graph structure of the problem has low treewidth and the variables take on a bounded number of states, and that they admit no provably good approximation if variables can take on an arbitrary number of states.

Relevance: 100.00%

Publisher:

Abstract:

We present a new algorithm for exactly solving decision making problems represented as influence diagrams. We do not require the usual assumptions of no forgetting and regularity; this allows us to solve problems with simultaneous decisions and limited information. The algorithm is empirically shown to outperform a state-of-the-art algorithm on randomly generated problems of up to 150 variables and 10^64 solutions. We show that the problem is NP-hard even if the underlying graph structure of the problem has small treewidth and the variables take on a bounded number of states, but that a fully polynomial time approximation scheme exists for these cases. Moreover, we show that the bound on the number of states is a necessary condition for any efficient approximation scheme.

Relevance: 100.00%

Publisher:

Abstract:

Influence diagrams allow for intuitive and yet precise description of complex situations involving decision making under uncertainty. Unfortunately, most of the problems described by influence diagrams are hard to solve. In this paper we discuss the complexity of approximately solving influence diagrams. We do not assume no-forgetting or regularity, which makes the class of problems we address very broad. Remarkably, we show that when both the treewidth and the cardinality of the variables are bounded the problem admits a fully polynomial-time approximation scheme.

Relevance: 100.00%

Publisher:

Abstract:

Due to incomplete paperwork, only available for consultation at Aston University Library and Information Services with prior arrangement.

Relevance: 60.00%

Publisher:

Abstract:

Emerging literature on climate adaptation suggests the need for effective ways of engaging or activating communities and supporting community roles, coupled with whole-of-system approaches to understanding climate change and adaptation needs. We have developed and evaluated a participatory approach to elicit community and stakeholder understanding of climate change adaptation needs, and to connect diverse community members and local office bearers towards potential action. The approach was trialed in a series of connected social-ecological systems along a transect from a rural area to the coast and islands of ecologically sensitive Moreton Bay in Queensland, Australia. We conducted 'climate roundtables' in each of three areas along the transect; a fourth roundtable then reviewed and extended the results to the region as a whole. Influence diagrams produced through the process show how each climate variable forecast to affect this region (heat, storm, flood, sea-level rise, fire, drought) affects the natural environment, infrastructure, economic and social behaviour patterns, and psychosocial responses, and how sets of people, species and ecosystems are affected, and act, differentially. The participatory process proved effective as a way of building local empathy and a local knowledge base, and of empowering participants to join together in future climate adaptation action. Key principles are highlighted to assist in adapting the process for use elsewhere.

Relevance: 60.00%

Publisher:

Abstract:

In many real-world situations, we make decisions in the presence of multiple, often conflicting and non-commensurate objectives. The process of optimizing systematically and simultaneously over a set of objective functions is known as multi-objective optimization. In multi-objective optimization, we have a (possibly exponentially large) set of decisions, and each decision has a set of alternatives. Each alternative depends on the state of the world and is evaluated with respect to a number of criteria. In this thesis, we consider decision-making problems in two scenarios: in the first, the current state of the world, under which the decisions are to be made, is known in advance; in the second, it is unknown at the time of making decisions.

For decision making under certainty, we consider the framework of multi-objective constraint optimization and focus on extending the algorithms that solve these models to the case where there are additional trade-offs. We focus especially on branch-and-bound algorithms that use a mini-buckets algorithm for generating the upper bound at each node of the search tree (in the context of maximizing values of objectives). Since the guiding upper-bound sets can become very large during the search, we introduce efficient methods for reducing these sets while still maintaining the upper-bound property. We define a formalism for imprecise trade-offs, which allows the decision maker, during the elicitation stage, to specify a preference for one multi-objective utility vector over another, and we use such preferences to infer other preferences. The induced preference relation is then used to eliminate dominated utility vectors during the computation. For testing dominance between multi-objective utility vectors, we present three approaches: the first is based on linear programming; the second on a distance-based algorithm (which uses a measure of the distance between a point and a convex cone); the third uses matrix multiplication, which yields much faster dominance checks with respect to the preference relation induced by the trade-offs. Furthermore, we show that our trade-offs approach, which is based on a preference-inference technique, can also be given an alternative semantics based on the well-known Multi-Attribute Utility Theory. Our comprehensive experimental results on common multi-objective constraint optimization benchmarks demonstrate that the proposed enhancements allow the algorithms to scale up to much larger problems than before.

For decision-making problems under uncertainty, we describe multi-objective influence diagrams, based on a set of p objectives, where utility values are vectors in R^p and are typically only partially ordered. These can be solved by a variable elimination algorithm, leading to a set of maximal values of expected utility. If the Pareto ordering is used, this set can often be prohibitively large. We consider approximate representations of the Pareto set based on ϵ-coverings, allowing much larger problems to be solved. In addition, we define a method for incorporating user trade-offs, which also greatly improves efficiency.
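The Pareto-maximality and ϵ-covering ideas in this abstract can be sketched as follows. The utility vectors and the greedy covering rule are illustrative assumptions, not the thesis's actual algorithms.

```python
def dominates(u, v):
    # u Pareto-dominates v (maximizing): at least as good in every
    # objective and strictly better in at least one.
    return all(a >= b for a, b in zip(u, v)) and u != v

def pareto_maximal(vectors):
    # Keep only the vectors not dominated by any other vector.
    return [u for u in vectors if not any(dominates(v, u) for v in vectors)]

def eps_cover(vectors, eps):
    # Greedy (1+eps)-covering: keep u only if no already-kept v
    # satisfies (1+eps)*v_i >= u_i in every objective i.
    kept = []
    for u in vectors:
        if not any(all((1 + eps) * vi >= ui for vi, ui in zip(v, u))
                   for v in kept):
            kept.append(u)
    return kept

# Toy utility vectors for two objectives.
utilities = [(1, 9), (2, 8), (9, 1), (5, 5), (4, 4)]
frontier = pareto_maximal(utilities)  # (4, 4) is dominated by (5, 5)
cover = eps_cover(frontier, eps=1.0)  # coarser approximate frontier
```

A smaller eps yields a cover closer to the full Pareto frontier, trading accuracy for the compactness that makes larger problems solvable.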

Relevance: 60.00%

Publisher:

Abstract:

We present a new algorithm for exactly solving decision-making problems represented as an influence diagram. We do not require the usual assumptions of no-forgetting and regularity, which allows us to solve problems with limited information. The algorithm, which implements a sophisticated variable elimination procedure, is empirically shown to outperform a state-of-the-art algorithm on randomly generated problems of up to 150 variables and 10^64 strategies.

Relevance: 60.00%

Publisher:

Abstract:

In this paper, we present a hybrid BDI-PGM framework in which PGMs (probabilistic graphical models) are incorporated into a BDI (belief-desire-intention) architecture. This work is motivated by the need to address the scalability and noisy-sensing issues in SCADA (Supervisory Control And Data Acquisition) systems. Our approach uses the incorporated PGMs to model the uncertainty reasoning and decision-making processes of agents situated in a stochastic environment. In particular, we use Bayesian networks to reason about an agent's beliefs about the environment based on its sensory observations, and we select optimal plans according to the utilities of actions defined in influence diagrams. This approach takes advantage of the scalability of the BDI architecture and the uncertainty-reasoning capability of PGMs. We present a prototype of the proposed approach using a transit scenario to validate its effectiveness.
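The belief-update-then-plan-selection loop described above can be sketched in a few lines. The fault/repair scenario, priors, and utilities below are invented for illustration and are not from the paper's SCADA case study.

```python
# Hypothetical setup: a hidden fault F with a noisy alarm sensor R.
prior_fault = 0.1
p_alarm_given = {True: 0.9, False: 0.2}  # P(R = alarm | F)

def posterior_fault():
    # Bayesian belief revision from the sensory observation (Bayes' rule),
    # as a two-node Bayesian network would compute it.
    num = p_alarm_given[True] * prior_fault
    den = num + p_alarm_given[False] * (1 - prior_fault)
    return num / den

def utility(plan, fault):
    # Utilities of plans, as would be read off an influence diagram's value node.
    if plan == "repair":
        return -5                  # fixed repair cost, fault or not
    return -30 if fault else 0     # ignoring a real fault is expensive

def best_plan(belief):
    # Pick the plan with the highest expected utility under the belief.
    plans = ["repair", "ignore"]
    eu = {p: belief * utility(p, True) + (1 - belief) * utility(p, False)
          for p in plans}
    return max(eu, key=eu.get)

belief = posterior_fault()  # 1/3 after observing the alarm
plan = best_plan(belief)    # "repair": EU -5 beats EU -10 for "ignore"
```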

Relevance: 60.00%

Publisher:

Abstract:

The redevelopment of Brownfields took off in the 1990s, supported by federal and state incentives and largely accomplished by local initiatives. Brownfields redevelopment has several associated benefits, including the revitalization of inner-city neighborhoods, the creation of jobs, the stimulation of tax revenues, greater protection of public health and natural resources, the renewal and reuse of existing civil infrastructure, and the protection of Greenfields. While these benefits are numerous, the obstacles to Brownfields redevelopment remain very real. Redevelopment issues typically embrace a host of financial and legal-liability concerns, technical and economic constraints, competing objectives, and uncertainties arising from inadequate site information. Because the resources for Brownfields redevelopment are usually limited, local programs require creativity in addressing these obstacles in a manner that stretches their limited resources toward returning Brownfields to productive uses. Such programs may benefit from a structured and defensible decision framework for prioritizing sites for redevelopment: one that incorporates the desired objectives, the corresponding variables and the uncertainties associated with Brownfields redevelopment. This thesis demonstrates the use of a decision-analytic tool, Bayesian influence diagrams, and related decision-analytic tools in developing quantitative decision models to evaluate and rank Brownfields sites on the basis of their redevelopment potential.
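A minimal sketch of the kind of expected-utility ranking such a decision model produces, assuming made-up site names, contamination probabilities, and dollar figures (the thesis's actual models are Bayesian influence diagrams, which this single-score simplification does not capture):

```python
# Hypothetical sites: (P(contaminated), redevelopment benefit, cleanup cost).
sites = {
    "Mill St":   (0.2, 100, 150),
    "Dockyard":  (0.6, 120, 150),
    "Rail Yard": (0.1,  60, 150),
}

def expected_value(p_contaminated, benefit, cleanup_cost):
    # Expected net value: benefit minus the cleanup cost,
    # incurred only if the site turns out to be contaminated.
    return benefit - p_contaminated * cleanup_cost

# Rank sites by expected net value, best first.
ranking = sorted(sites, key=lambda s: expected_value(*sites[s]), reverse=True)
```

A full influence-diagram model would add value nodes for each objective (jobs, tax revenue, public health) and decision nodes for remediation options, rather than collapsing everything into one monetary score.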

Relevance: 40.00%

Publisher:

Abstract:

Phase studies have been performed for quaternary systems composed of egg lecithin, cosurfactant, water and oil. The lecithin used was the commercially available egg lecithin Ovothin 200 (which comprises ≥ 92% phosphatidylcholine). The cosurfactants employed were propanol and butanol, used at lecithin/cosurfactant mixing ratios (Km) of 1:1 and 1.94:1 (weight basis). Six polar oils were investigated, including the alkanoic acids octanoic and oleic, their corresponding ethyl esters, and the medium- and long-chain triglycerides Miglyol 812 and soybean oil. All oils, irrespective of the alcohol and the Km used, gave rise to systems that produced a stable isotropic region along the surfactant/oil axis (designated a reverse microemulsion system). In addition, the systems incorporating propanol at both Km values, and butanol at a Km of 1.94:1, generally gave rise to a liquid crystalline region and, in some cases, a second isotropic non-birefringent area (designated a normal microemulsion system). The phase behaviour observed was largely dependent upon the alcohol and Km used and on the size and polarity of the oil present.

Relevance: 30.00%

Publisher:

Abstract:

In the framework of the lattice fluid model, the Gibbs energy and equation of state are derived by introducing the energy (E_s) stored during flow for polymer blends under shear. From the calculation of the spinodal of poly(vinyl methyl ether) (PVME) and polystyrene (PS) mixtures, we have found that the influence of E_s on the equation of state of a pure component is inappreciable, but it is appreciable in the mixture. Moreover, the effect of E_s on phase-separation behavior is extremely striking. In the calculated spinodal for the PVME/PS system, a thin, long, banana-shaped miscibility gap generated by shear appears beside the miscibility gap with a lower critical solution temperature. Meanwhile, a binodal coalescence of the upper and lower miscibility gaps occurs. The three points of the three-phase equilibrium are predicted. The shear-rate dependence of the cloud-point temperature at a certain composition is discussed. The calculated results are acceptable compared with the experimental values obtained by Higgins et al. However, the maximum positive shift and the minimum negative shift of cloud-point temperature conjectured by Higgins are not obtained. Furthermore, the combined effects of pressure and shear on the spinodal shift are predicted.

Relevance: 30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)