130 results for Probabilistic Reasoning


Relevance:

20.00%

Publisher:

Abstract:

The effect of additivity pretraining on blocking has been taken as evidence for a reasoning account of human and animal causal learning. If inferential reasoning underpins this effect, then developmental differences in the magnitude of this effect in children would be expected. Experiment 1 examined cue competition effects in children's (4- to 5-year-olds and 6- to 7-year-olds) causal learning using a new paradigm analogous to the food allergy task used in studies of human adult causal learning. Blocking was stronger in the older than the younger children, and additivity pretraining affected blocking only in the older group. Unovershadowing was not affected by age or by pretraining. In Experiment 2, levels of blocking were found to be correlated with the ability to answer questions that required children to reason about additivity. Our results support an inferential reasoning explanation of cue competition effects. (c) 2012 APA, all rights reserved.

Relevance:

20.00%

Publisher:

Abstract:

Four- and five-year-olds completed two sets of tasks that involved reasoning about the temporal order in which events had occurred in the past or were to occur in the future. Four-year-olds succeeded on the tasks that involved reasoning about the order of past events but not those that involved reasoning about the order of future events, whereas five-year-olds passed both types of tasks. Individual children who failed the past-event tasks were not particularly likely to fail the more difficult future-event tasks. However, children's performance on the reasoning tasks was predictive of their performance on a task assessing their comprehension of the terms “before” and “after.” Our results suggest that there may be a developmental change over this age range in the ability to flexibly represent and reason about the before-and-after relationships between events.

Relevance:

20.00%

Publisher:

Abstract:

Reliable prediction of long-term medical device performance using computer simulation requires consideration of variability in the surgical procedure as well as patient-specific factors. However, even deterministic simulation of long-term failure processes for such devices is time- and resource-consuming, so including variability can make the time needed to obtain useful predictions prohibitive. This study investigates the use of an accelerated probabilistic framework for predicting the likely performance envelope of a device and applies it to femoral prosthesis loosening in cemented hip arthroplasty.
A creep and fatigue damage failure model for bone cement, in conjunction with an interfacial fatigue model for the implant–cement interface, was used to simulate loosening of a prosthesis within a cement mantle. A deterministic set of trial simulations was used to account for variability of a set of surgical and patient factors, and a response surface method was used to perform and accelerate a Monte Carlo simulation to achieve an estimate of the likely range of prosthesis loosening. The proposed framework was used to conceptually investigate the influence of prosthesis selection and surgical placement on prosthesis migration.
Results demonstrate that the response surface method is capable of dramatically reducing the time to achieve convergence in the mean and variance of the predicted response variables. A critical requirement for realistic predictions is the size and quality of the initial training dataset used to generate the response surface, and further work is required to establish recommendations for a minimum number of initial trials. Results of this conceptual application predicted that loosening was sensitive to implant size and femoral width. Furthermore, different rankings of implant performance were predicted when only individual simulations (e.g. an average condition) were used to rank implants than when stochastic simulations were used. In conclusion, the proposed framework provides a viable approach to predicting realistic ranges of loosening behaviour for orthopaedic implants in reduced timeframes compared with conventional Monte Carlo simulations.
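The acceleration described above rests on a surrogate model: a small set of expensive deterministic trials is used to fit a response surface, and the Monte Carlo sampling is then run on the cheap surrogate rather than on the simulator itself. The sketch below shows that workflow in outline only; the quadratic surrogate, the two input factors and the stub simulator are illustrative assumptions, not the study's creep/fatigue finite-element model.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_simulation(x):
    # Stand-in for one long-running deterministic loosening simulation;
    # x = (implant size index, femoral width in mm), output = migration in mm.
    implant_size, femoral_width = x
    return 0.5 + 0.3 * implant_size - 0.02 * femoral_width + 0.005 * implant_size * femoral_width

# 1) Small deterministic trial set spanning the surgical/patient factors.
train_x = rng.uniform([1.0, 35.0], [5.0, 50.0], size=(30, 2))
train_y = np.array([expensive_simulation(x) for x in train_x])

# 2) Fit a quadratic response surface (surrogate) by least squares.
def features(x):
    a, b = x[..., 0], x[..., 1]
    return np.stack([np.ones_like(a), a, b, a * b, a**2, b**2], axis=-1)

coef, *_ = np.linalg.lstsq(features(train_x), train_y, rcond=None)

# 3) Monte Carlo on the cheap surrogate instead of the expensive simulator.
mc_x = rng.uniform([1.0, 35.0], [5.0, 50.0], size=(100_000, 2))
mc_y = features(mc_x) @ coef

print(f"mean migration approx. {mc_y.mean():.3f} mm, 95% range approx. "
      f"[{np.percentile(mc_y, 2.5):.3f}, {np.percentile(mc_y, 97.5):.3f}] mm")
```

In the study itself each simulator call is a full creep and fatigue damage simulation, which is why the cost of the Monte Carlo step has to be shifted onto the surrogate; the quality of the resulting estimates then hinges on the size and coverage of the initial trial set, as noted above.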

Relevance:

20.00%

Publisher:

Abstract:

When people evaluate syllogisms, their judgments of validity are often biased by the believability of the conclusions of the problems. Thus, it has been suggested that syllogistic reasoning performance is based on an interplay between a conscious and effortful evaluation of logicality and an intuitive appreciation of the believability of the conclusions (e.g., Evans, Newstead, Allen, & Pollard, 1994). However, logic effects in syllogistic reasoning emerge even when participants are unlikely to carry out a full logical analysis of the problems (e.g., Shynkaruk & Thompson, 2006). There is also evidence that people can implicitly detect the conflict between their beliefs and the validity of the problems, even if they are unable to consciously produce a logical response (e.g., De Neys, Moyens, & Vansteenwegen, 2010). In four experiments, we demonstrate that people intuitively detect the logicality of syllogisms and that this effect emerges independently of participants' conscious mindset and their cognitive capacity. This logic effect is also unrelated to the superficial structure of the problems. Additionally, we provide evidence that the logicality of the syllogisms is detected through slight changes in participants' affective states. In fact, subliminal affective priming had an effect on participants' subjective evaluations of the problems. Finally, when participants misattributed their emotional reactions to background music, this significantly reduced the logic effect.

Relevance:

20.00%

Publisher:

Abstract:

This paper introduces a logical model of inductive generalization, and specifically of the machine learning task of inductive concept learning (ICL). We argue that some inductive processes, like ICL, can be seen as a form of defeasible reasoning. We define a consequence relation characterizing which hypotheses can be induced from given sets of examples, and study its properties, showing that they correspond to a rather well-behaved non-monotonic logic. We also show that with the addition of a preference relation on inductive theories we can characterize the inductive bias of ICL algorithms. The second part of the paper shows how this logical characterization of inductive generalization can be integrated with another form of non-monotonic reasoning (argumentation) to define a model of multiagent ICL. This integration allows two or more agents to learn, in a consistent way, both from induction and from arguments used in the communication between them. We show that the inductive theories achieved by multiagent induction plus argumentation are sound, i.e. they are precisely the same as the inductive theories built by a single agent with all data. © 2012 Elsevier B.V.
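As a loose illustration of the claim that induction is defeasible (a toy sketch, not the paper's consequence relation or its preference ordering over inductive theories), the fragment below induces the most specific hypothesis covering the positive examples and withdraws it as soon as a newly presented negative example contradicts it:

```python
# Toy defeasible inductive concept learning: hypotheses are attribute tuples
# with '?' wildcards; conclusions are retracted when new evidence defeats them.
def lgg(h, example):
    """Least general generalization of hypothesis h and a positive example."""
    return tuple(hv if hv == ev else '?' for hv, ev in zip(h, example))

def covers(h, x):
    return all(hv in ('?', xv) for hv, xv in zip(h, x))

def induce(examples):
    """Induce the most specific hypothesis covering all positive examples
    (a specificity preference standing in for an inductive bias)."""
    positives = [x for x, label in examples if label]
    h = positives[0]
    for x in positives[1:]:
        h = lgg(h, x)
    # Defeasibility: withdraw the hypothesis if it now covers a negative.
    if any(covers(h, x) for x, label in examples if not label):
        return None
    return h

examples = [(('red', 'round'), True), (('red', 'square'), True)]
print(induce(examples))                               # ('red', '?')
print(induce(examples + [(('red', 'oval'), False)]))  # None: conclusion retracted
```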

Relevance:

20.00%

Publisher:

Abstract:

Decision making is an important element throughout the life-cycle of large-scale projects. Decisions are critical, as they have a direct impact upon the success or outcome of a project and are affected by many factors, including the certainty and precision of the available information. In this paper we present an evidential reasoning framework which applies Dempster-Shafer Theory and its variant, Dezert-Smarandache Theory, to aid decision makers in making decisions where the available knowledge may be imprecise, conflicting and uncertain. This conceptual framework is novel in that natural-language-based information extraction techniques are used to extract and estimate beliefs from diverse textual information sources, rather than assuming these estimates are already given. Furthermore, we describe an algorithm to define a set of maximal consistent subsets before fusion occurs in the reasoning framework. This is important because inconsistencies between subsets may produce incorrect or adverse results in the decision-making process. The proposed framework can be applied to problems involving material selection, and a use case from the engineering domain is presented to illustrate the approach. © 2013 Elsevier B.V. All rights reserved.
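For readers unfamiliar with the underlying machinery, the fragment below shows plain Dempster-Shafer combination of two mass functions over a hypothetical material-selection frame. It is a minimal illustration of the fusion step only, not the paper's framework, which additionally estimates the masses from textual sources and restricts fusion to maximal consistent subsets.

```python
# Dempster's rule of combination; subsets of the frame encoded as frozensets.
from itertools import product

def combine(m1, m2):
    """Conjunctively combine two mass functions, normalising away conflict."""
    combined, conflict = {}, 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence cannot be combined")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

m_source1 = {frozenset({"steel"}): 0.6, frozenset({"steel", "alloy"}): 0.4}
m_source2 = {frozenset({"steel"}): 0.5, frozenset({"alloy", "polymer"}): 0.5}
print(combine(m_source1, m_source2))
# {steel}: ~0.714, {alloy}: ~0.286, after conflict mass 0.3 is normalised away
```

When the conflict term approaches one, the normalisation becomes unstable and can yield counter-intuitive results, which is one motivation for fusing only maximal consistent subsets of the available evidence.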

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a novel method that leverages reasoning capabilities in a computer vision system dedicated to human action recognition. The proposed methodology is decomposed into two stages. First, a machine-learning-based algorithm, known as bag of words, gives a first estimate of the action classification from video sequences by performing an image feature analysis. These results are then passed to a common-sense reasoning system, which analyses, selects and corrects the initial estimate yielded by the machine learning algorithm. This second stage draws on the knowledge implicit in the rationality that motivates human behaviour. Experiments are performed in realistic conditions, where the poor recognition rates obtained by the machine learning techniques alone are significantly improved by the second stage, in which common-sense knowledge and reasoning capabilities are leveraged. This demonstrates the value of integrating common-sense capabilities into a computer vision pipeline. © 2012 Elsevier B.V. All rights reserved.
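The division of labour between the two stages can be pictured as follows; the actions, object requirements and weights are invented for illustration and do not reflect the paper's knowledge base or reasoning engine:

```python
# Hypothetical second stage: common-sense rules about the observed context
# down-weight action hypotheses that are implausible, then re-rank them.
def reasoning_stage(scores, context_objects):
    requires = {"drinking": {"cup", "bottle"},      # an action needs its typical object
                "phoning": {"phone"},
                "typing": {"keyboard", "laptop"}}
    adjusted = dict(scores)
    for action, needed in requires.items():
        if action in adjusted and not (needed & context_objects):
            adjusted[action] *= 0.1                 # implausible without the object
    return max(adjusted, key=adjusted.get)

bow_scores = {"drinking": 0.45, "phoning": 0.40, "typing": 0.15}  # stage-1 output
print(reasoning_stage(bow_scores, context_objects={"phone"}))     # 'phoning'
```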

Relevance:

20.00%

Publisher:

Abstract:

We address the problem of multi-target tracking in realistic crowded conditions by introducing a novel dual-stage online tracking algorithm. The problem of data association between tracks and detections, based on appearance, is often complicated by partial occlusion. In the first stage, we address the issue of occlusion with a novel method of robust data association that can be used to compute the appearance similarity between tracks and detections without explicit knowledge of the occluded regions. In the second stage, broken tracks are linked based on motion and appearance, using an online-learned linking model. The online-learned motion model for track linking uses the confident tracks from the first-stage tracker as training examples. The new approach has been tested on the Town Centre dataset and its performance is comparable with the present state of the art.
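One plausible way to make an appearance score tolerant of partial occlusion without explicitly knowing the occluded region, sketched below with hypothetical details rather than the authors' exact formulation, is to compare part-based descriptors and discount the worst-matching parts:

```python
import numpy as np

def stripe_histograms(patch, n_stripes=4, bins=16):
    # Split the appearance patch into horizontal stripes and build a
    # normalised intensity histogram for each stripe.
    h = patch.shape[0]
    hists = []
    for i in range(n_stripes):
        stripe = patch[i * h // n_stripes:(i + 1) * h // n_stripes]
        hist, _ = np.histogram(stripe, bins=bins, range=(0, 256))
        hists.append(hist / max(hist.sum(), 1))
    return hists

def robust_similarity(track_patch, det_patch):
    # Histogram intersection per stripe; keep only the best-matching half so
    # that stripes hidden by a partial occluder cannot dominate the score.
    sims = sorted((np.minimum(a, b).sum()
                   for a, b in zip(stripe_histograms(track_patch),
                                   stripe_histograms(det_patch))), reverse=True)
    return float(np.mean(sims[:len(sims) // 2]))

rng = np.random.default_rng(1)
track = rng.integers(0, 256, size=(64, 32))
det = track.copy()
det[32:] = 0   # simulate the lower half of the detection being occluded
print(robust_similarity(track, det))   # close to 1.0 despite the occlusion
```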

Relevance:

20.00%

Publisher:

Abstract:

In conditional probabilistic logic programming, the two most common forms of answer to a query are either a probability interval or a precise probability obtained by using the maximum entropy principle. The former can be noninformative (e.g., the interval [0, 1]) and the reliability of the latter is questionable when the prior knowledge is imprecise. To address this problem, in this paper we propose methods to quantitatively measure whether a probability interval or a single probability is sufficient for answering a query. We first propose an approach to measuring the ignorance of a probabilistic logic program with respect to a query. The measure of ignorance (w.r.t. a query) reflects how reliable a precise probability for the query can be, and a high value of ignorance suggests that a single probability is not suitable for the query. We then propose a method to measure the probability that the exact probability of a query falls in a given interval, that is, a second-order probability. We call it the degree of satisfaction. If the degree of satisfaction is high enough w.r.t. the query, then the given interval can be accepted as the answer to the query. We also prove that our measures satisfy many properties, and we use a case study to demonstrate the significance of the measures. © Springer Science+Business Media B.V. 2012
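The motivation for the two measures can be seen in a small example. Under imprecise knowledge, the tightest exact answer to a query is an interval obtained by optimising over all distributions consistent with the constraints; the sketch below (illustrative constraints and a standard linear-programming formulation, not the paper's own algorithms) produces exactly the kind of wide interval whose adequacy the proposed measures are meant to judge.

```python
import numpy as np
from scipy.optimize import linprog

# Possible worlds over {bird, fly}: (b,f), (b,~f), (~b,f), (~b,~f)
query = np.array([1.0, 0.0, 1.0, 0.0])       # objective: P(fly)
A_eq, b_eq = [[1.0, 1.0, 1.0, 1.0]], [1.0]   # the world probabilities sum to one
A_ub = [[ 1.0,  1.0, 0.0, 0.0],              # P(bird) <= 0.7
        [-1.0, -1.0, 0.0, 0.0],              # P(bird) >= 0.5
        [-0.1,  0.9, 0.0, 0.0]]              # P(fly | bird) >= 0.9
b_ub = [0.7, -0.5, 0.0]
bounds = [(0.0, 1.0)] * 4

low = linprog(query, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds).fun
high = -linprog(-query, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds).fun
print(f"P(fly) lies in [{low:.2f}, {high:.2f}]")   # a wide, weakly informative interval
```

Computing the degree of satisfaction would additionally require a second-order distribution over the feasible first-order distributions, which is beyond this sketch.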

Relevance:

20.00%

Publisher:

Abstract:

Reasoning about problems with empirically false content can be hard, as the inferences that people draw are heavily influenced by their background knowledge. However, presenting empirically false premises in a fantasy context helps children and adolescents to disregard their beliefs, and to reason on the basis of the premises. The aim of the present experiments was to see if high-functioning adolescents with autism are able to utilize fantasy context to the same extent as typically developing adolescents when they reason about empirically false premises. The results indicate that problems with engaging in pretence in autism persist into adolescence, and this hinders the ability of autistic individuals to disregard their beliefs when empirical knowledge is irrelevant.

Relevance:

20.00%

Publisher:

Abstract:

Based on the Dempster-Shafer (D-S) theory of evidence and G. Yen's (1989) extension of the theory, the authors propose approaches to representing heuristic knowledge by evidential mapping and to pooling the mass distribution in a complex frame by partitioning that frame using Shafer's partition technique. The authors have generalized Yen's model from Bayesian probability theory to the D-S theory of evidence. Based on such a generalized model, an extended framework for evidential reasoning systems is briefly specified, in which a semi-graph method is used to describe the heuristic knowledge. The advantage of such a method is that it avoids the complexity of graphs without losing their explicitness. The extended framework can be widely used to build expert systems.
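In outline, an evidential mapping redistributes the mass assigned to subsets of an evidence frame onto subsets of a hypothesis frame through heuristic rule strengths. The fragment below shows only that general idea with invented numbers; it is not the authors' generalized model or the partition-based pooling step.

```python
# Minimal evidential mapping: evidence-frame mass is redistributed to
# hypothesis-frame subsets according to heuristic rule strengths.
def apply_evidential_mapping(m_evidence, mapping):
    m_hyp = {}
    for e_subset, w in m_evidence.items():
        for h_subset, strength in mapping[e_subset].items():
            m_hyp[h_subset] = m_hyp.get(h_subset, 0.0) + w * strength
    return m_hyp

E_FEVER, E_NONE = frozenset({"fever"}), frozenset({"no_fever"})
FLU, COLD, THETA = frozenset({"flu"}), frozenset({"cold"}), frozenset({"flu", "cold"})

m_evidence = {E_FEVER: 0.7, E_NONE: 0.3}
mapping = {E_FEVER: {FLU: 0.8, THETA: 0.2},   # heuristic rule strengths
           E_NONE: {COLD: 0.6, THETA: 0.4}}
print(apply_evidential_mapping(m_evidence, mapping))
# {flu}: 0.56, {cold}: 0.18, {flu, cold}: 0.26
```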