103 results for PROBABILITY REPRESENTATION
Abstract:
In this paper, we introduce an efficient method for particle selection for tracking objects in complex scenes. First, we improve the proposal distribution of the tracking algorithm by including the current observation, reducing the cost of evaluating particles with very low likelihood. In addition, we use a partitioned sampling approach to decompose the dynamic state into several stages. This makes it possible to handle high-dimensional states without excessive computational cost. To represent the color distribution, the appearance of the tracked object is modelled by sampled pixels. Based on this representation, the probability of any observation is estimated using non-parametric techniques in color space. As a result, we obtain a color Probability Density Image (PDI) in which each pixel indicates its membership of the target color model. In this way, the evaluation of all particles is accelerated by computing the likelihood p(z|x) using the Integral Image of the PDI.
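A minimal sketch of the accelerated likelihood evaluation described above, assuming a PDI has already been computed; the function names, the rectangular particle parameterization, and the toy image are illustrative, not the paper's implementation:

```python
import numpy as np

def integral_image(pdi):
    """Cumulative sums over rows and columns with a zero border,
    so any rectangular region sum can be read off in O(1)."""
    ii = np.zeros((pdi.shape[0] + 1, pdi.shape[1] + 1))
    ii[1:, 1:] = pdi.cumsum(axis=0).cumsum(axis=1)
    return ii

def region_sum(ii, x, y, w, h):
    """Sum of the PDI over the rectangle [x, x+w) x [y, y+h)."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def likelihood(ii, particle):
    """Mean target-color probability inside a particle's box,
    a stand-in for p(z|x)."""
    x, y, w, h = particle
    return region_sum(ii, x, y, w, h) / (w * h)

# Toy PDI: a bright block marks pixels matching the target color model.
pdi = np.zeros((120, 160))
pdi[40:80, 60:100] = 0.9
ii = integral_image(pdi)
print(likelihood(ii, (60, 40, 40, 40)))  # high score on the target
print(likelihood(ii, (0, 0, 40, 40)))    # near zero on background
```

Because each region sum costs only four lookups into the integral image, the per-particle evaluation cost is independent of the particle's window size.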
Abstract:
An important issue in risk analysis is the distinction between epistemic and aleatory uncertainties. In this paper, the use of distinct representation formats for aleatory and epistemic uncertainties is advocated, the latter being modelled by sets of possible values. Modern uncertainty theories based on convex sets of probabilities are known to be instrumental for hybrid representations where aleatory and epistemic components of uncertainty remain distinct. Simple uncertainty representation techniques based on fuzzy intervals and p-boxes are used in practice. This paper outlines a risk analysis methodology from elicitation of knowledge about parameters to decision. It proposes an elicitation methodology where the chosen representation format depends on the nature and the amount of available information. Uncertainty propagation methods then blend Monte Carlo simulation and interval analysis techniques. Nevertheless, the results provided by these techniques, often in terms of probability intervals, may be too complex for a decision-maker to interpret, and we therefore propose to compute a unique indicator of the likelihood of risk, called a confidence index. It explicitly accounts for the decision-maker's attitude in the face of ambiguity. This step takes place at the end of the risk analysis process, when no further collection of evidence is possible that might reduce the ambiguity due to epistemic uncertainty. This last feature stands in contrast with the Bayesian methodology, where epistemic uncertainties on input parameters are modelled by single subjective probabilities at the beginning of the risk analysis process.
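The hybrid propagation step can be illustrated with a small sketch that samples the aleatory input while carrying the epistemic input as an interval; the model, the interval bounds, and the Hurwicz-style mixing of the resulting probability bounds are assumptions of this sketch, not the paper's exact formulas:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(a, e):
    # Illustrative risk model; monotone in e, so evaluating the interval
    # endpoints bounds the output (an assumption of this sketch).
    return a * e

a_samples = rng.lognormal(mean=0.0, sigma=0.5, size=10_000)  # aleatory: sampled
e_lo, e_hi = 0.8, 1.2                                        # epistemic: interval only

lo = model(a_samples, e_lo)   # lower envelope of each sample's output interval
hi = model(a_samples, e_hi)   # upper envelope

# Bounds on P(output > threshold): a simple p-box readout.
threshold = 1.5
p_lower = np.mean(lo > threshold)
p_upper = np.mean(hi > threshold)
print(f"P(output > {threshold}) in [{p_lower:.3f}, {p_upper:.3f}]")

# A Hurwicz-style confidence index weighting the bounds by the
# decision-maker's optimism alpha (illustrative, not the paper's index).
alpha = 0.6
print(f"confidence index: {alpha * p_upper + (1 - alpha) * p_lower:.3f}")
```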
Abstract:
Handling appearance variations is a very challenging problem for visual tracking. Existing methods usually solve this problem by relying on an effective appearance model with two features: (1) being capable of discriminating the tracked target from its background, and (2) being robust to the target's appearance variations during tracking. Instead of integrating the two requirements into a single appearance model, in this paper we propose a tracking method that deals with these problems separately, based on sparse representation in a particle filter framework. Each target candidate defined by a particle is linearly represented by the target and background templates with an additive representation error. Discriminating the target from its background is achieved by activating the target templates or the background templates in the linear system in a competitive manner. The target's appearance variations are directly modeled as the representation error. An online algorithm is used to learn the basis functions that sparsely span the representation error. The linear system is solved via ℓ1 minimization. The candidate with the smallest reconstruction error using the target templates is selected as the tracking result. We test the proposed approach on four sequences with heavy occlusions, large pose variations, drastic illumination changes and low foreground-background contrast. The proposed approach shows excellent performance in comparison with two recent state-of-the-art trackers.
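The ℓ1 step can be sketched with a generic iterative shrinkage-thresholding (ISTA) solver standing in for whatever solver the authors use; the dictionary sizes and templates below are synthetic:

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(D, y, lam=0.05, n_iter=200):
    """Solve min_c 0.5*||D c - y||^2 + lam*||c||_1 by iterative
    shrinkage-thresholding (a generic stand-in for the paper's solver)."""
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
    c = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ c - y)
        c = soft_threshold(c - grad / L, lam / L)
    return c

rng = np.random.default_rng(1)
d, n_tgt, n_bg = 64, 5, 20
T = rng.normal(size=(d, n_tgt))          # target templates
B = rng.normal(size=(d, n_bg))           # background templates
D = np.hstack([T, B])                    # combined dictionary

# A candidate that truly comes from the target subspace.
y = T @ np.array([0.9, 0.1, 0.0, 0.0, 0.0]) + 0.01 * rng.normal(size=d)

c = ista(D, y)
recon_err = np.linalg.norm(y - T @ c[:n_tgt])  # error using target templates only
print(recon_err)  # small error -> candidate scored as target-like
```

Candidates whose reconstruction instead activates background templates incur a large target-template error and are rejected, which is the competitive mechanism the abstract describes.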
Abstract:
This paper compares the applicability of three ground survey methods for modelling terrain: one-man electronic tachymetry (TPS), real-time kinematic GPS (GPS), and terrestrial laser scanning (TLS). Vertical accuracy of digital terrain models (DTMs) derived from GPS, TLS and airborne laser scanning (ALS) data is assessed. Point elevations acquired by the four methods represent two sections of a mountainous area in Cumbria, England. They were chosen so that the presence of non-terrain features was constrained to the smallest amount. The vertical accuracy of the DTMs was addressed by subtracting each DTM from TPS point elevations. The error was assessed using exploratory measures including statistics, histograms, and normal probability plots. The results showed that the internal measurement accuracy of TPS, GPS, and TLS was below a centimetre. TPS and GPS can be considered equally applicable alternatives for sampling the terrain in areas accessible on foot. The highest DTM vertical accuracy was achieved with GPS data, both on sloped terrain (RMSE 0.16 m) and flat terrain (RMSE 0.02 m). TLS surveying was the most efficient overall, but the veracity of terrain representation was subject to dense vegetation cover. Therefore, the DTM accuracy was the lowest for the sloped area with dense bracken (RMSE 0.52 m), although it was the second highest on the flat unobscured terrain (RMSE 0.07 m). ALS data represented the sloped terrain more realistically (RMSE 0.23 m) than the TLS. However, due to a systematic bias identified on the flat terrain, the ALS DTM accuracy there was the lowest (RMSE 0.29 m), exceeding the error level stated by the data provider. Error distributions were more closely approximated by a normal distribution defined using the median and normalized median absolute deviation, which supports the use of robust measures in DEM error modelling and its propagation. © 2012 Elsevier Ltd.
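As a minimal illustration of the accuracy measures involved, the sketch below contrasts RMSE with the normalized median absolute deviation (NMAD) on synthetic check errors; the error values are invented, not the paper's data:

```python
import numpy as np

def rmse(errors):
    return np.sqrt(np.mean(np.square(errors)))

def nmad(errors):
    """Normalized median absolute deviation: a robust spread estimate
    that equals the standard deviation for normally distributed errors."""
    return 1.4826 * np.median(np.abs(errors - np.median(errors)))

# Illustrative check errors: DTM elevation minus TPS point elevation (metres).
rng = np.random.default_rng(2)
errors = rng.normal(0.0, 0.15, size=500)
errors[:10] += 1.0                    # a few vegetation-induced outliers

print(f"RMSE {rmse(errors):.2f} m, NMAD {nmad(errors):.2f} m")
# NMAD stays near 0.15 m while RMSE is inflated by the outliers,
# which is why robust measures suit DEM error modelling.
```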
Abstract:
In three studies we looked at two typical misconceptions of probability: the representativeness heuristic and the equiprobability bias. The literature on statistics education predicts that some typical errors and biases (e.g., the equiprobability bias) increase with education, whereas others decrease. This contrasts with the prediction of reasoning theorists, who propose that education reduces misconceptions in general. They also predict that students with higher cognitive ability and higher need for cognition are less susceptible to biases. In Experiments 1 and 2 we found that the equiprobability bias increased with statistics education, and it was negatively correlated with students' cognitive abilities. The representativeness heuristic was mostly unaffected by education, and it was also unrelated to cognitive abilities. In Experiment 3 we demonstrated through an instruction manipulation (asking participants to think logically vs. rely on their intuitions) that the reason for these differences was that the biases originated in different cognitive processes.
Dual-processes in learning and judgment: Evidence from the multiple cue probability learning paradigm
Abstract:
Multiple cue probability learning (MCPL) involves learning to predict a criterion based on a set of novel cues when feedback is provided in response to each judgment made. But to what extent does MCPL require controlled attention and explicit hypothesis testing? The results of two experiments show that this depends on cue polarity. Learning about cues that predict positively is aided by automatic cognitive processes, whereas learning about cues that predict negatively is especially demanding on controlled attention and hypothesis testing processes. In the studies reported here, negative, but not positive, cue learning was related to individual differences in working memory capacity, both on measures of overall judgment performance and in modelling of the implicit learning process. However, the introduction of a novel method to monitor participants' explicit beliefs about a set of cues on a trial-by-trial basis revealed that participants were engaged in explicit hypothesis testing about positive and negative cues, and explicit beliefs about both types of cues were linked to working memory capacity. Taken together, our results indicate that while people are engaged in explicit hypothesis testing during cue learning, explicit beliefs are applied to judgment only when cues are negative. © 2012 Elsevier Inc.
Abstract:
Multiple-cue probability learning (MCPL) involves learning to predict a criterion when outcome feedback is provided for multiple cues. A great deal of research suggests that working memory capacity (WMC) is involved in a wide range of tasks that draw on higher level cognitive processes. In three experiments, we examined the role of WMC in MCPL by introducing measures of working memory capacity, as well as other task manipulations. While individual differences in WMC positively predicted performance in some kinds of multiple-cue tasks, performance on other tasks was entirely unrelated to these differences. Performance on tasks that contained negative cues was correlated with working memory capacity, as well as measures of explicit knowledge obtained in the learning process. When the relevant cues predicted positively, however, WMC became irrelevant. The results are discussed in terms of controlled and automatic processes in learning and judgement. © 2011 The Experimental Psychology Society.
Abstract:
Interest in ‘mutual gains’ has principally been confined to studies of the unionised sector. Yet there is no reason why this conceptual dynamic cannot be extended to the non-unionised realm, specifically in relation to non-union employee representation (NER). Although extant research views NER as unfertile terrain for mutual gains, the paper examines whether NER developed in response to the European Directive on Information and Consultation (I&C) of Employees may offer a potentially more fruitful route. The paper examines this possibility by considering three cases of NER established under the I&C Directive in Ireland, assessing the extent to which mutual gains were achieved.
Abstract:
A new scheme, sketch-map, for obtaining a low-dimensional representation of the region of phase space explored during an enhanced dynamics simulation is proposed. We show evidence, from an examination of the distribution of pairwise distances between frames, that some features of the free-energy surface are inherently high-dimensional. This makes dimensionality reduction problematic because the data do not satisfy the assumptions made in conventional manifold learning algorithms. We therefore propose that when dimensionality reduction is performed on trajectory data one should think of the resultant embedding as a quickly sketched set of directions rather than a road map. In other words, the embedding tells one about the connectivity between states but does not provide the vectors that correspond to the slow degrees of freedom. This realization informs the development of sketch-map, which endeavors to reproduce the proximity information from the high-dimensional description in a space of lower dimensionality, even when a faithful embedding is not possible.
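As a rough illustration of proximity-preserving embedding, the sketch below uses classical multidimensional scaling as a generic stand-in; sketch-map itself matches a transformed (sigmoidal) function of the distances rather than raw distances, so this is not the authors' algorithm:

```python
import numpy as np

def classical_mds(D, dim=2):
    """Embed points so low-dimensional distances approximate D
    (classical MDS; a generic stand-in for sketch-map)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]              # top eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Toy trajectory frames in a high-dimensional space.
rng = np.random.default_rng(3)
frames = rng.normal(size=(50, 30))
D = np.linalg.norm(frames[:, None, :] - frames[None, :, :], axis=-1)

X = classical_mds(D, dim=2)
print(X.shape)   # (50, 2): a sketched 2D map of frame connectivity
```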
Abstract:
Soil fauna in the extreme conditions of Antarctica consists of a few microinvertebrate species patchily distributed at different spatial scales. Populations of the prostigmatic mite Stereotydeus belli and the collembolan Gressittacantha terranova from northern Victoria Land (Antarctica) were used as models to study the effect of soil properties on microarthropod distributions. In agreement with the general assumption that the development and distribution of life in these ecosystems is mainly controlled by abiotic factors, we found that the probability of occurrence of S. belli depends on soil moisture and texture and on the sampling period (which affects the general availability of water); surprisingly, none of the analysed variables were significantly related to the G. terranova distribution. Based on our results and literature data, we propose a theoretical model that introduces biotic interactions among the major factors driving the local distribution of collembolans in Antarctic terrestrial ecosystems. © 2007 Elsevier Ltd. All rights reserved.
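The reported dependence of occurrence probability on soil moisture and texture is the kind of presence/absence relationship commonly modelled with logistic regression; the sketch below fits such a model to simulated data with invented coefficients, and is not the analysis used in the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated presence/absence data in which occurrence probability rises
# with soil moisture; the coefficients are invented for illustration and
# are not estimates from the paper.
n = 200
moisture = rng.uniform(0, 1, n)
texture = rng.uniform(0, 1, n)
true_logit = -2.0 + 4.0 * moisture + 1.0 * texture
presence = rng.uniform(size=n) < 1 / (1 + np.exp(-true_logit))

# Fit a logistic regression by gradient ascent on the mean log-likelihood.
X = np.column_stack([np.ones(n), moisture, texture])
beta = np.zeros(3)
for _ in range(20_000):
    p = 1 / (1 + np.exp(-X @ beta))
    beta += 0.5 * X.T @ (presence - p) / n

print(beta)  # coefficients roughly recovering the simulated values
```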
Abstract:
Ninety-one patients were studied serially for chimeric status following allogeneic stem cell transplantation (SCT) for severe aplastic anaemia (SAA) or Fanconi anaemia (FA). Short tandem repeat polymerase chain reaction (STR-PCR) was used to stratify patients into five groups: (A) complete donor chimeras (n = 39), (B) transient mixed chimeras (n = 15), (C) stable mixed chimeras (n = 18), (D) progressive mixed chimeras (n = 14), and (E) recipient chimeras with early graft rejection (n = 5). As serial sampling was not possible in Group E, serial chimerism results for 86 patients were available for analysis. The following factors were analysed for association with chimeric status: age, sex match, donor type, aetiology of aplasia, source of stem cells, number of cells engrafted, conditioning regimen, graft-versus-host disease (GvHD) prophylaxis, occurrence of acute and chronic GvHD, and survival. Progressive mixed chimeras (PMCs) were at high risk of late graft rejection (n = 10, P < 0.0001). Seven of these patients lost their graft during withdrawal of immunosuppressive therapy. STR-PCR indicated an inverse correlation between detection of recipient cells post-SCT and occurrence of acute GvHD (P = 0.008). PMC was a bad prognostic indicator of survival (P = 0.003). Monitoring of chimeric status during cyclosporin withdrawal may facilitate therapeutic intervention to prevent late graft rejection in patients transplanted for SAA.
Abstract:
Bayesian probabilistic analysis offers a new approach to characterizing semantic representations by inferring the most likely feature structure directly from patterns of brain activity. In this study, infinite latent feature models (ILFM) [1] are used to recover the semantic features that give rise to the brain activation vectors when people think about properties associated with 60 concrete concepts. The semantic features recovered by ILFM are consistent with the human ratings of the shelter, manipulation, and eating factors recovered by a previous factor analysis. Furthermore, different areas of the brain encode different perceptual and conceptual features. This neurally-inspired semantic representation is consistent with some existing conjectures regarding the role of different brain areas in processing different semantic and perceptual properties. © 2012 Springer-Verlag.
Abstract:
The equiprobability bias is a tendency for individuals to think of probabilistic events as 'equiprobable' by nature, and to judge outcomes that occur with different probabilities as equally likely. The equiprobability bias has been repeatedly found to be related to formal education in statistics, and it is claimed to be based on a misunderstanding of the concept of randomness.
Abstract:
A practical machine-vision-based system is developed for fast detection of defects occurring on the surface of bottle caps. The system extracts the circular region of the cap surface as the region of interest (ROI) and then uses the circular region projection histogram (CRPH) as the matching feature. We establish two dictionaries, one for the template and one for possible defects. Owing to the requirements of high-speed production as well as detection quality, a fast algorithm based on sparse representation is proposed to speed up the search. In the sparse representation, non-zero elements in the sparse factors indicate the defect's size and position. Experimental results from industrial trials show that the proposed method outperforms the orientation code method (OCM) and is able to produce promising results for detecting defects on the surface of bottle caps.
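One plausible reading of the CRPH feature is a histogram of mean intensity per radial ring inside the circular ROI; the sketch below implements that reading, with the definition and the toy images being assumptions rather than the paper's exact specification:

```python
import numpy as np

def crph(image, center, radius, n_bins=32):
    """Circular region projection histogram: mean intensity per radial
    ring inside the circular ROI (an assumed reading of CRPH)."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    r = np.hypot(xs - center[0], ys - center[1])
    inside = r < radius
    bins = np.minimum((r[inside] / radius * n_bins).astype(int), n_bins - 1)
    sums = np.bincount(bins, weights=image[inside], minlength=n_bins)
    counts = np.bincount(bins, minlength=n_bins)
    return sums / np.maximum(counts, 1)

# Toy cap images: a clean template and a cap with a bright scratch.
cap = np.full((100, 100), 0.5)
defective = cap.copy()
defective[48:52, 20:80] = 1.0       # scratch through the middle

f_tpl = crph(cap, (50, 50), 45)
f_def = crph(defective, (50, 50), 45)
print(np.linalg.norm(f_def - f_tpl))  # nonzero distance flags the defect
```

Projecting onto radial rings makes the feature rotation-invariant about the cap center, which suits circular parts on a production line.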