831 results for symbolic solving
On the complexity of solving polytree-shaped limited memory influence diagrams with binary variables
Abstract:
Influence diagrams are intuitive and concise representations of structured decision problems. When the problem is non-Markovian, an optimal strategy can be exponentially large in the size of the diagram. We can avoid this inherent intractability by constraining the size of admissible strategies, giving rise to limited memory influence diagrams. A natural question is then how small strategies need to be to enable efficient optimal planning. Arguably, the smallest strategies one can conceive simply prescribe an action for each time step, without considering past decisions or observations. Previous work has shown that finding such optimal strategies even for polytree-shaped diagrams with ternary variables and a single value node is NP-hard, but the case of binary variables was left open. In this paper we address that case, first noting that optimal strategies can be obtained in polynomial time for polytree-shaped diagrams with binary variables and a single value node. We then show that the same problem is NP-hard if the diagram has multiple value nodes. These two results close the fixed-parameter complexity analysis of optimal strategy selection in influence diagrams parametrized by the shape of the diagram, the number of value nodes and the maximum variable cardinality.
Abstract:
We present a new algorithm for exactly solving decision-making problems represented as an influence diagram. We do not require the usual assumptions of no-forgetting and regularity, which allows us to solve problems with limited information. The algorithm, which implements a sophisticated variable elimination procedure, is empirically shown to outperform a state-of-the-art algorithm on randomly generated problems of up to 150 variables and 10^64 strategies.
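As a rough illustration of the kind of computation a variable elimination procedure performs on an influence diagram (not the authors' algorithm, which handles general limited-memory diagrams), the following Python sketch evaluates a toy diagram with one chance variable, one decision and one value node by summing out the chance variable and maximizing over the decision; all names and numbers are invented for the example.

```python
# Toy influence diagram: chance variable S (state), decision D, utility U(S, D).
# Illustrative only; not the paper's variable elimination algorithm.

# Prior over the chance variable S (hypothetical numbers).
p_s = {"good": 0.7, "bad": 0.3}

# Utility table U(s, d) (hypothetical numbers).
utility = {
    ("good", "invest"): 100, ("good", "hold"): 20,
    ("bad", "invest"): -80,  ("bad", "hold"): 10,
}

# "Eliminate" S by taking the expectation for each decision,
# then maximize over the decision to obtain an optimal strategy.
expected = {
    d: sum(p_s[s] * utility[(s, d)] for s in p_s)
    for d in ("invest", "hold")
}
best_decision = max(expected, key=expected.get)
print(expected, best_decision)  # {'invest': 46.0, 'hold': 17.0} 'invest'
```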
Abstract:
We present a new algorithm for exactly solving decision-making problems represented as influence diagrams. We do not require the usual assumptions of no-forgetting and regularity; this allows us to solve problems with simultaneous decisions and limited information. The algorithm is empirically shown to outperform a state-of-the-art algorithm on randomly generated problems of up to 150 variables and 10^64 solutions. We show that these problems are NP-hard even if the underlying graph structure of the problem has low treewidth and the variables take on a bounded number of states, and that they admit no provably good approximation if variables can take on an arbitrary number of states.
Abstract:
We present a new algorithm for exactly solving decision-making problems represented as influence diagrams. We do not require the usual assumptions of no-forgetting and regularity; this allows us to solve problems with simultaneous decisions and limited information. The algorithm is empirically shown to outperform a state-of-the-art algorithm on randomly generated problems of up to 150 variables and 10^64 solutions. We show that the problem is NP-hard even if the underlying graph structure of the problem has small treewidth and the variables take on a bounded number of states, but that a fully polynomial-time approximation scheme exists for these cases. Moreover, we show that the bound on the number of states is a necessary condition for any efficient approximation scheme.
Abstract:
Influence diagrams allow for an intuitive and yet precise description of complex situations involving decision making under uncertainty. Unfortunately, most of the problems described by influence diagrams are hard to solve. In this paper we discuss the complexity of approximately solving influence diagrams. We do not assume no-forgetting or regularity, which makes the class of problems we address very broad. Remarkably, we show that when both the treewidth and the cardinality of the variables are bounded, the problem admits a fully polynomial-time approximation scheme.
Abstract:
We introduce a new parallel pattern derived from a specific application domain and show how it turns out to have application beyond its domain of origin. The pool evolution pattern models the parallel evolution of a population subject to mutations and evolving in such a way that a given fitness function is optimized. The pattern has been demonstrated to be suitable for capturing and modeling the parallel patterns underpinning various evolutionary algorithms, as well as other parallel patterns typical of symbolic computation. In this paper we introduce the pattern, discuss its implementation on modern multi-/many-core architectures, and finally present experimental results obtained with FastFlow and Erlang implementations to assess its feasibility and scalability.
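A minimal sequential sketch of the pool evolution idea (in Python, not the FastFlow or Erlang implementations discussed in the paper): a candidate pool is repeatedly sampled, mutated, evaluated against a fitness function and merged back. The fitness function, mutation operator and toy problem below are placeholders, not taken from the paper.

```python
import random

def pool_evolution(pool, fitness, mutate, generations=50, selection_size=10):
    """Sketch of the pool evolution pattern: select, evolve, evaluate, merge.
    In the paper this loop is parallelized; here it is sequential for clarity."""
    for _ in range(generations):
        selected = random.sample(pool, min(selection_size, len(pool)))
        offspring = [mutate(candidate) for candidate in selected]   # evolve step
        pool = sorted(pool + offspring, key=fitness)[: len(pool)]   # merge, keep best
    return pool

# Toy usage: minimize (x - 3)^2 over a pool of real numbers (hypothetical problem).
best = pool_evolution(
    pool=[random.uniform(-10, 10) for _ in range(20)],
    fitness=lambda x: (x - 3) ** 2,
    mutate=lambda x: x + random.gauss(0, 0.5),
)
print(round(best[0], 2))  # should be close to 3
```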
Abstract:
A framework for assessing the robustness of long-duration repetitive orchestrations in uncertain, evolving environments is proposed. The model assumes that service-based evaluation environments are stable over short time-frames only; over longer periods, service-based environments evolve as demand fluctuates and contention for shared resources varies. The behaviour of a short-duration orchestration E in a stable environment is assessed by an uncertainty profile U and a corresponding zero-sum angel-daemon game Γ(U) [2]. Here the angel-daemon approach is extended to assess evolving environments by means of a subfamily of stochastic games. These games are called strategy oblivious because their transition probabilities are strategy independent. It is shown that the value of a strategy-oblivious stochastic game is well defined and that it can be computed by solving a linear system. Finally, the proposed stochastic framework is used to assess the evolution of the Gabrmn IT system.
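The abstract does not spell out the linear system, but when transitions are strategy independent the per-state game values decouple from the dynamics. Under a discounted criterion (an assumption here; the paper may use a different criterion), the overall values v satisfy v = g + βPv, where g collects the per-state zero-sum game values. A NumPy sketch with hypothetical numbers:

```python
import numpy as np

# Hypothetical instance: 3 environment states, discount factor beta.
# g[i] is the value of the zero-sum angel-daemon game played in state i
# (assumed already computed, e.g. by linear programming).
g = np.array([1.0, -0.5, 2.0])
beta = 0.9

# Strategy-oblivious transition matrix: rows are states, independent of actions.
P = np.array([
    [0.6, 0.3, 0.1],
    [0.2, 0.5, 0.3],
    [0.1, 0.2, 0.7],
])

# Discounted values satisfy v = g + beta * P @ v, i.e. (I - beta * P) v = g.
v = np.linalg.solve(np.eye(3) - beta * P, g)
print(v)
```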
Abstract:
Objective: Communication skills can be trained alongside clinical reasoning, history taking or clinical examination skills. This is advocated as a solution to the low transfer of communication skills. Still, at some point students have to integrate the knowledge and skills acquired in different parts of the curriculum into patient consultations. How do medical students experience these integrated consultations in a simulated environment and in real practice when dealing with responsibility?
Methods: Six focus groups were conducted with pre-clerkship and clerkship students.
Results: Students were motivated to practice integrated consultations with simulated patients and felt like 'real physicians'. However, their focus on medical problem solving drew attention away from improving their communication skills. Responsibility for real patients triggered students' identity development. This identity formation guided the development of their own consultation style, a process that was hampered by the conflicting demands of role models.
Conclusion: Practicing complete consultations creates the dilemma of prioritizing medical problem solving over attention to patient communication. Integrated consultation training brings this dilemma forward to the pre-clerkship period. During clerkships the dilemma is heightened because real patients trigger empathy and responsibility, which invites students to define their role as doctors.
Practice Implications: When training integrated consultations, educators should pay attention to students' learning priorities and support the development of students' professional identity.
Abstract:
Higham et al. (2010) published a large series of new dates from the key French Palaeolithic site of the Grotte du Renne at Arcy-sur-Cure. The site is important because it is one of only two sites in Europe in which Châtelperronian lithic remains co-occur with Neanderthal human remains. A large series of dates from the Mousterian, Châtelperronian, Aurignacian and Gravettian levels of the site was obtained. The 14C results showed great variability, which Higham et al. (2010) interpreted as most likely due to mixing of archaeological material in the site. In contrast, Caron et al. (2011) suggested that the site stratigraphy is well preserved and that the variability in the radiocarbon ages was due to unremoved contamination in the dated bone. In this paper we address their critique of the original Higham et al. (2010) paper.
Abstract:
The ability of an agent to make quick, rational decisions in an uncertain environment is paramount for its applicability in realistic settings. Markov Decision Processes (MDPs) provide such a framework, but can only model uncertainty that can be expressed as probabilities. Possibilistic counterparts of MDPs make it possible to model imprecise beliefs, yet they cannot accurately represent probabilistic sources of uncertainty and they lack the efficient online solvers found in the probabilistic MDP community. In this paper we advance the state of the art in three important ways. Firstly, we propose the first online planner for possibilistic MDPs by adapting the Monte-Carlo Tree Search (MCTS) algorithm. A key component is the development of efficient search structures to sample possibility distributions based on the DPY transformation as introduced by Dubois, Prade, and Yager. Secondly, we introduce a hybrid MDP model that allows us to express both possibilistic and probabilistic uncertainty, where the hybrid model is a proper extension of both probabilistic and possibilistic MDPs. Thirdly, we demonstrate that MCTS algorithms can readily be applied to solve such hybrid models.
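The planner above relies on sampling possibility distributions via the DPY transformation. As a rough sketch, assuming the commonly cited Dubois-Prade style formula p_i = Σ_{j≥i} (π_j − π_{j+1}) / j for a normalized possibility distribution sorted in descending order (the paper's exact construction and search structures may differ), one can convert a possibility distribution to a probability distribution and sample from it:

```python
import random

def dpy_probabilities(possibilities):
    """Transform a normalized possibility distribution (max value 1) into a
    probability distribution via p_i = sum_{j>=i} (pi_j - pi_{j+1}) / j,
    with the pi_j sorted in descending order. Sketch only; assumed formula."""
    items = sorted(possibilities.items(), key=lambda kv: kv[1], reverse=True)
    pis = [v for _, v in items] + [0.0]          # append pi_{n+1} = 0
    probs = {}
    for i, (outcome, _) in enumerate(items):
        probs[outcome] = sum((pis[j] - pis[j + 1]) / (j + 1)
                             for j in range(i, len(items)))
    return probs

def sample(possibilities):
    probs = dpy_probabilities(possibilities)
    outcomes, weights = zip(*probs.items())
    return random.choices(outcomes, weights=weights)[0]

# Hypothetical possibility distribution over next states.
pi = {"s1": 1.0, "s2": 0.6, "s3": 0.2}
print(dpy_probabilities(pi), sample(pi))
```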
Abstract:
Boolean games are a framework for reasoning about the rational behavior of agents whose goals are formalized using propositional formulas. Compared to normal-form games, a well-studied and related game framework, Boolean games allow for an intuitive and more compact representation of the agents' goals. So far, Boolean games have mainly been studied in the literature from the Knowledge Representation perspective, and less attention has been paid to the algorithmic issues underlying the computation of solution concepts. Although some suggestions for solving specific classes of Boolean games have been made in the literature, there is currently no work available on their practical performance. In this paper, we propose the first technique to solve general Boolean games that does not require an exponential translation to normal-form games. Our method is based on disjunctive answer set programming and computes solutions (equilibria) of arbitrary Boolean games. It can be applied to a wide variety of solution concepts, and can naturally deal with extensions of Boolean games such as constraints and costs. We present detailed experimental results in which we compare the proposed method against a number of existing methods for solving specific classes of Boolean games, as well as adaptations of methods that were initially designed for normal-form games. We found that heuristic methods that do not require all payoff matrix entries performed well for smaller Boolean games, while our ASP-based technique was faster when the problem instances had a higher number of agents or action variables.
Abstract:
Boolean games are a framework for reasoning about the rational behaviour of agents whose goals are formalized using propositional formulas. They offer an attractive alternative to normal-form games, because they allow for a more intuitive and more compact encoding. Unfortunately, there is currently no general, tailor-made method available to compute the equilibria of Boolean games. In this paper, we introduce a method for finding the pure Nash equilibria based on disjunctive answer set programming. Our method is furthermore capable of finding the core elements and the Pareto-optimal equilibria, and can easily be modified to support other forms of optimality, thanks to the declarative nature of disjunctive answer set programming. Experimental results clearly demonstrate the effectiveness of the proposed method.
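This is not the ASP encoding developed in these papers, but as a self-contained illustration of the solution concept involved, here is a brute-force Python sketch that enumerates the pure Nash equilibria of a tiny Boolean game in which each agent controls one propositional variable and has a goal formula over all variables; the game and goals are invented for the example.

```python
from itertools import product

# Toy Boolean game: agent 0 controls x, agent 1 controls y.
# Goals are propositional formulas over (x, y), given here as Python predicates.
goals = [
    lambda x, y: x == y,        # agent 0 wants coordination (hypothetical goal)
    lambda x, y: x != y,        # agent 1 wants anti-coordination
]

def is_pure_nash(profile):
    """profile[i] is the truth value agent i assigns to its own variable.
    A profile is a pure Nash equilibrium if no agent whose goal is unsatisfied
    can satisfy it by flipping only its own variable."""
    for i, goal in enumerate(goals):
        if goal(*profile):
            continue
        deviation = list(profile)
        deviation[i] = not deviation[i]
        if goal(*deviation):
            return False
    return True

equilibria = [p for p in product([False, True], repeat=len(goals)) if is_pure_nash(p)]
print(equilibria)  # matching-pennies-style goals have no pure equilibrium: []
```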
Abstract:
In a number of species, individuals showing lateralized hand/paw usage (i.e. the preferential use of either the right or the left paw) have been shown to be more proactive in novel situations than ambilateral individuals. In the current study we used an established test to assess preferential paw usage in dogs (the Kong test) and then compared the performance of ambilateral and lateralized dogs, as well as left- vs. right-pawed dogs, in a novel manipulative problem solving task. Results showed an equal proportion of ambilateral and lateralized dogs but, contrary to predictions, non-lateralized dogs were faster at accessing the apparatus in test trials. No differences emerged between right- and left-pawed dogs. Results are discussed in relation to previous studies on lateralization.
Abstract:
Generating timetables for an institution is a challenging and time-consuming task due to the different demands on the overall structure of the timetable. In this paper, a new hybrid method combining a great deluge and an artificial bee colony algorithm (INMGD-ABC) is proposed to address the university timetabling problem. The artificial bee colony algorithm (ABC) is a population-based method that has been introduced in recent years and has proven successful in solving various optimization problems effectively. However, as with many search-based approaches, there exist weaknesses in its exploration and exploitation abilities, which tend to induce slow convergence of the overall search process. Therefore, hybridization is proposed to compensate for the identified weaknesses of the ABC. Also, inspired by imperialist competitive algorithms, an assimilation policy is implemented in order to improve the global exploration ability of the ABC algorithm. In addition, the Nelder–Mead simplex search method is incorporated within the great deluge algorithm (NMGD) with the aim of enhancing the exploitation ability of the hybrid method in fine-tuning the problem search region. The proposed method is tested on two different benchmark datasets, i.e. examination and course timetabling datasets. A statistical t-test shows that the performance of the proposed approach is significantly better than that of the basic ABC algorithm. Finally, the experimental results are compared against state-of-the-art methods in the literature; the results obtained are competitive and in certain cases achieve some of the current best results in the literature.
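The hybrid INMGD-ABC method itself is not reproduced here, but the great deluge component it builds on follows a simple acceptance rule: a candidate is accepted if it is better than the current solution or better than a steadily lowered "water level". A minimal Python sketch of that rule, with a placeholder neighbourhood move and cost function (the paper additionally integrates Nelder-Mead, ABC and an assimilation policy):

```python
import random

def great_deluge(initial, cost, neighbour, iterations=10_000, decay=None):
    """Minimization by the great deluge rule: accept a neighbour if its cost is
    below the current cost or below the water level, which decreases each step.
    Sketch of the basic heuristic only, not the paper's hybrid INMGD-ABC."""
    current, current_cost = initial, cost(initial)
    level = current_cost                              # initial water level
    decay = decay or current_cost / iterations        # linear lowering rate
    best, best_cost = current, current_cost
    for _ in range(iterations):
        candidate = neighbour(current)
        candidate_cost = cost(candidate)
        if candidate_cost <= current_cost or candidate_cost <= level:
            current, current_cost = candidate, candidate_cost
            if candidate_cost < best_cost:
                best, best_cost = candidate, candidate_cost
        level -= decay
    return best, best_cost

# Toy usage: minimize a 1-D function (hypothetical, stands in for a timetable cost).
result = great_deluge(
    initial=random.uniform(-10, 10),
    cost=lambda x: (x - 2) ** 2 + 1,
    neighbour=lambda x: x + random.gauss(0, 0.1),
)
print(result)
```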