934 results for PROBABILISTIC TELEPORTATION


Relevance:

10.00%

Publisher:

Abstract:

Housing stock models can be useful tools in helping to assess the environmental and socio-economic impacts of retrofits to residential buildings; however, existing housing stock models are not able to quantify the uncertainties that arise in the modelling process from various sources, thus limiting the role that they can play in helping decision makers. This paper examines the different sources of uncertainty involved in housing stock models and proposes a framework for handling these uncertainties. This framework involves integrating probabilistic sensitivity analysis with a Bayesian calibration process in order to quantify uncertain parameters more accurately. The proposed framework is tested on a case study building, and suggestions are made on how to expand the framework for retrofit analysis at an urban scale. © 2011 Elsevier Ltd.
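As a rough, self-contained illustration of the kind of framework described here — probabilistic sensitivity analysis to screen inputs, followed by Bayesian calibration of the influential ones against a measurement — the sketch below uses a made-up one-line energy model and illustrative parameter ranges; none of the numbers or variable names come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical annual-energy model of one dwelling: a stand-in for a full
# dynamic simulation, with uncertain U-value, infiltration and setpoint.
def energy_model(u_value, infiltration, setpoint):
    return 120.0 * u_value + 35.0 * infiltration + 9.0 * (setpoint - 18.0)

# --- Step 1: probabilistic sensitivity analysis (Monte Carlo screening) ---
n = 2000
samples = np.column_stack([
    rng.uniform(0.3, 2.0, n),    # wall U-value (W/m2K)
    rng.uniform(0.2, 1.5, n),    # infiltration rate (ach)
    rng.uniform(18.0, 23.0, n),  # heating setpoint (degC)
])
outputs = energy_model(*samples.T)
# Rank inputs by squared correlation with the output (a crude importance measure).
importance = [np.corrcoef(samples[:, j], outputs)[0, 1] ** 2 for j in range(3)]

# --- Step 2: Bayesian calibration of the most influential parameter ---
observed = 210.0          # measured annual energy use (kWh/m2), illustrative
sigma_obs = 10.0          # assumed measurement/model discrepancy std

def log_post(u):
    if not 0.3 <= u <= 2.0:
        return -np.inf    # flat prior over the screened range
    resid = observed - energy_model(u, 0.7, 20.0)
    return -0.5 * (resid / sigma_obs) ** 2

u_chain, u_current = [], 1.0
for _ in range(5000):     # random-walk Metropolis over the U-value
    u_prop = u_current + rng.normal(0.0, 0.05)
    if np.log(rng.uniform()) < log_post(u_prop) - log_post(u_current):
        u_current = u_prop
    u_chain.append(u_current)

print("input importances:", np.round(importance, 2))
print("posterior U-value: %.2f +/- %.2f" % (np.mean(u_chain[1000:]), np.std(u_chain[1000:])))
```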

Relevance:

10.00%

Publisher:

Abstract:

Climate change is expected to have a significant impact on the future thermal performance of buildings. Building simulation and sensitivity analysis can be employed to predict these impacts, guiding interventions to adapt buildings to future conditions. This article explores the use of simulation to study the impact of climate change on a theoretical office building in the UK, employing a probabilistic approach. The work studies (1) appropriate performance metrics and underlying modelling assumptions, (2) the sensitivity of computational results, so as to identify key design parameters, and (3) the impact of zonal resolution. The conclusions highlight the importance of assumptions about electricity conversion factors, the proper management of internal heat gains, and the need for an appropriately detailed zonal resolution. © 2010 Elsevier B.V. All rights reserved.

Relevance:

10.00%

Publisher:

Abstract:

We present in this paper a new multivariate probabilistic approach to Acoustic Pulse Recognition (APR) for tangible interface applications. This model uses Principal Component Analysis (PCA) in a probabilistic framework to classify tapping pulses with a high degree of variability. It was found that this model achieves higher robustness to pulse variability than simpler template-matching methods, particularly when allowed to train on data containing high variability. © 2011 IEEE.
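A minimal sketch of one way PCA can be used inside a probabilistic classifier, assuming pulses arrive as fixed-length feature vectors: fit one probabilistic-PCA density per tap class and classify by maximum log-likelihood (scikit-learn's PCA.score_samples implements the Tipping & Bishop likelihood). The synthetic pulse generator and all parameters are placeholders, not the paper's model.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

# Hypothetical training data: short acoustic-pulse feature vectors (e.g. windowed
# spectra) for two tap classes; real pulses would come from the sensor front-end.
def make_pulses(centre, n=60, dim=32):
    t = np.linspace(0, 1, dim)
    base = np.exp(-((t - centre) ** 2) / 0.01)
    return base + 0.15 * rng.normal(size=(n, dim))   # high within-class variability

classes = {0: make_pulses(0.3), 1: make_pulses(0.7)}

# One probabilistic PCA density per class: PCA.score_samples returns the
# log-likelihood of a sample under the probabilistic PCA model.
models = {c: PCA(n_components=5).fit(X) for c, X in classes.items()}

def classify(pulse):
    scores = {c: m.score_samples(pulse[None, :])[0] for c, m in models.items()}
    return max(scores, key=scores.get)

test = make_pulses(0.7, n=1)[0]
print("predicted class:", classify(test))
```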

Relevance:

10.00%

Publisher:

Abstract:

Information-theoretic active learning has been widely studied for probabilistic models. For simple regression an optimal myopic policy is easily tractable. However, for other tasks and with more complex models, such as classification with nonparametric models, the optimal solution is harder to compute, and current approaches make approximations to achieve tractability. We propose an approach that expresses information gain in terms of predictive entropies and apply this method to the Gaussian Process Classifier (GPC). Our approach makes minimal approximations to the full information-theoretic objective. In experiments it compares favourably to many popular active learning algorithms, with equal or lower computational complexity, and it also compares well to decision-theoretic approaches, which are privy to more information and require much more computational time. Finally, by further developing a reformulation of binary preference learning as a classification problem, we extend our algorithm to Gaussian Process preference learning.
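A minimal sketch of the kind of acquisition score this describes: the information gain at a candidate point written as the entropy of the marginal predictive minus the expected entropy of the likelihood under the latent posterior. It assumes a probit likelihood and a Gaussian approximation N(mu, var) to the latent GP posterior at the point; the Monte Carlo estimate of the second term stands in for whatever approximation the paper derives.

```python
import numpy as np
from scipy.stats import norm

def binary_entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def information_gain(mu, var, n_samples=2000, rng=np.random.default_rng(0)):
    # First term: entropy of the marginal predictive of a probit GPC,
    # p(y = 1 | x, D) = Phi(mu / sqrt(1 + var)).
    marginal_p = norm.cdf(mu / np.sqrt(1.0 + var))
    # Second term: expected entropy of p(y | f), estimated by sampling the latent f.
    f = rng.normal(mu, np.sqrt(var), n_samples)
    expected_entropy = binary_entropy(norm.cdf(f)).mean()
    return binary_entropy(marginal_p) - expected_entropy

# Query the candidate whose latent posterior gives the largest gain
# (the (mu, var) pairs below are purely illustrative).
candidates = [(0.1, 2.0), (1.5, 0.2), (0.0, 0.1)]
print(max(candidates, key=lambda c: information_gain(*c)))
```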

Relevance:

10.00%

Publisher:

Abstract:

Recent work in the area of probabilistic user simulation for training statistical dialogue managers has investigated a new agenda-based user model and presented preliminary experiments with a handcrafted model parameter set. Training the model on dialogue data is an important next step, but non-trivial since the user agenda states are not observable in data and the space of possible states and state transitions is intractably large. This paper presents a summary-space mapping which greatly reduces the number of state transitions and introduces a tree-based method for representing the space of possible agenda state sequences. Treating the user agenda as a hidden variable, the forward/backward algorithm can then be successfully applied to iteratively estimate the model parameters on dialogue data. © 2007 Association for Computational Linguistics.
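For reference, the forward/backward machinery mentioned here, written for a generic discrete HMM: the transition, emission and initial distributions below are illustrative only and do not reflect the paper's summary-space agenda model.

```python
import numpy as np

# A minimal forward/backward pass for a generic HMM with hidden states z_t and
# observations x_t -- the same machinery applied in the paper to hidden
# agenda-state sequences.
def forward_backward(obs, pi, A, B):
    T, S = len(obs), len(pi)
    alpha, beta = np.zeros((T, S)), np.zeros((T, S))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):                       # forward recursion
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):              # backward recursion
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)   # posterior over hidden states
    return gamma

pi = np.array([0.6, 0.4])                       # illustrative parameters
A = np.array([[0.7, 0.3], [0.2, 0.8]])
B = np.array([[0.9, 0.1], [0.3, 0.7]])
print(forward_backward([0, 1, 1, 0], pi, A, B))
```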

Relevance:

10.00%

Publisher:

Abstract:

It is extremely difficult to explore mRNA folding structure by biological experiments. In this report, we use stochastic sampling and folding simulation to test the existence of stable secondary structural units of mRNA, look for the folding units, and explore the probabilistic stability of these units. Using this method, we simulated all possible local optimum secondary structures of a single-stranded mRNA within a certain range and searched for the common parts of the secondary structures. The consensus secondary structure units (CSSUs) extracted by this method are mainly hairpins, with a few single strands. These CSSUs suggest that the mRNA folding units could be relatively stable and could perform specific biological functions. The significance of these observations for the mRNA folding problem in general is also discussed. (c) 2004 Elsevier B.V. All rights reserved.
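A small sketch of the "common parts" idea, assuming the sampled local-optimum structures are already available as sets of (i, j) base pairs (the folding simulation itself is not shown): count base pairs across samples, keep the near-universal ones, and group stacked pairs into candidate stems.

```python
from collections import Counter

# Hypothetical input: sampled local-optimum secondary structures for one mRNA,
# each given as a set of (i, j) base pairs produced by a folding/simulation tool.
sampled_structures = [
    {(2, 20), (3, 19), (4, 18), (30, 45), (31, 44)},
    {(2, 20), (3, 19), (4, 18), (33, 47)},
    {(2, 20), (3, 19), (4, 18), (30, 45), (32, 43)},
]

# Count how often each base pair occurs across samples; pairs present in (nearly)
# every sample are candidate consensus secondary structure units (CSSUs).
counts = Counter(bp for s in sampled_structures for bp in s)
threshold = 0.9 * len(sampled_structures)
consensus = sorted(bp for bp, c in counts.items() if c >= threshold)

# Group stacked consecutive pairs (i, j), (i+1, j-1), ... into helices/hairpin stems.
stems, current = [], []
for i, j in consensus:
    if current and (i, j) == (current[-1][0] + 1, current[-1][1] - 1):
        current.append((i, j))
    else:
        if current:
            stems.append(current)
        current = [(i, j)]
if current:
    stems.append(current)
print("consensus stems:", stems)
```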

Relevance:

10.00%

Publisher:

Abstract:

Players cooperate in experiments more than game theory would predict. We introduce the ‘returns-based beliefs’ approach: beliefs formed from the expected returns of a particular strategy in proportion to the total expected returns of all strategies. Using a decision-analytic solution concept, Luce’s (1959) probabilistic choice model, and ‘hyperpriors’ for ambiguity in players’ cooperability, our approach explains empirical observations in various classes of games, including the Prisoner’s and Traveler’s Dilemmas. Testing the closeness of fit of our model on Selten and Chmura’s (2008) data for completely mixed 2 × 2 games shows that, with loss aversion, returns-based beliefs explain the data better than other equilibrium concepts.
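The core idea in miniature, leaving out the hyperpriors and loss aversion the paper adds: in a Prisoner's Dilemma, choice probabilities are set in proportion to expected returns (Luce-style), with the belief about the co-player tied to the same probabilities by symmetry and iterated to a fixed point. The payoff numbers are the usual illustrative values, not the paper's.

```python
import numpy as np

# Prisoner's Dilemma row-player payoffs: rows are own action (C, D),
# columns are the co-player's action (C, D).
payoff = np.array([[3.0, 0.0],
                   [5.0, 1.0]])

p = np.array([0.5, 0.5])          # initial beliefs about the co-player's (C, D) play
for _ in range(200):
    expected = payoff @ p         # expected return of playing C and of playing D
    p_new = expected / expected.sum()   # Luce-style: choose in proportion to returns
    if np.allclose(p_new, p, atol=1e-10):
        break
    p = p_new                     # symmetry: beliefs track the choice probabilities

# Cooperation keeps positive probability, unlike the all-defect Nash prediction.
print("returns-based choice probabilities (C, D): %.3f, %.3f" % tuple(p))
```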

Relevance:

10.00%

Publisher:

Abstract:

This paper tackles the novel challenging problem of 3D object phenotype recognition from a single 2D silhouette. To bridge the large pose (articulation or deformation) and camera viewpoint changes between the gallery images and query image, we propose a novel probabilistic inference algorithm based on 3D shape priors. Our approach combines both generative and discriminative learning. We use latent probabilistic generative models to capture 3D shape and pose variations from a set of 3D mesh models. Based on these 3D shape priors, we generate a large number of projections for different phenotype classes, poses, and camera viewpoints, and implement Random Forests to efficiently solve the shape and pose inference problems. By model selection in terms of the silhouette coherency between the query and the projections of 3D shapes synthesized using the galleries, we achieve the phenotype recognition result as well as a fast approximate 3D reconstruction of the query. To verify the efficacy of the proposed approach, we present new datasets which contain over 500 images of various human and shark phenotypes and motions. The experimental results clearly show the benefits of using the 3D priors in the proposed method over previous 2D-based methods. © 2011 IEEE.

Relevance:

10.00%

Publisher:

Abstract:

The diversity of non-domestic buildings at the urban scale poses a number of difficulties for developing building stock models. To address these issues, this research proposes a probabilistic, engineering-based bottom-up stock model. School buildings are used to illustrate the application of this probabilistic method. Two sampling-based global sensitivity methods are used to identify the key factors affecting building energy performance. These sensitivity analysis methods can also create statistical regression models for inverse analysis, which are used to estimate input information for building stock energy models. The effects of different energy-saving measures are analysed by changing these building stock input distributions.
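A compact sketch of the sampling-plus-regression step described here: draw inputs from assumed ranges, run a stand-in energy model, and use standardised regression coefficients both as a global sensitivity ranking and as a simple metamodel for inverse analysis. The toy school model, ranges and coefficients are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy school-building energy model standing in for a dynamic simulation (kWh/m2);
# the inputs and coefficients are illustrative only.
def school_energy(u_wall, lighting_power, occupancy_density, ach):
    return (60 * u_wall + 1.8 * lighting_power + 0.4 * occupancy_density + 25 * ach
            + 3.0 * rng.normal())   # residual variation between similar schools

n = 1000
X = np.column_stack([
    rng.uniform(0.3, 2.5, n),    # wall U-value (W/m2K)
    rng.uniform(8, 20, n),       # lighting power density (W/m2)
    rng.uniform(20, 60, n),      # occupancy density (m2/person)
    rng.uniform(0.5, 3.0, n),    # air change rate (1/h)
])
y = np.array([school_energy(*row) for row in X])

# Standardised regression coefficients: a sampling-based global sensitivity
# measure that doubles as a simple regression metamodel.
Xs = (X - X.mean(0)) / X.std(0)
ys = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
for name, s in zip(["U-value", "lighting", "occupancy", "ACH"], src):
    print(f"{name:10s} SRC = {s:+.2f}")
```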

Relevance:

10.00%

Publisher:

Abstract:

The visual system must learn to infer the presence of objects and features in the world from the images it encounters, and as such it must, either implicitly or explicitly, model the way these elements interact to create the image. Do the response properties of cells in the mammalian visual system reflect this constraint? To address this question, we constructed a probabilistic model in which the identity and attributes of simple visual elements were represented explicitly and learnt the parameters of this model from unparsed, natural video sequences. After learning, the behaviour and grouping of variables in the probabilistic model corresponded closely to functional and anatomical properties of simple and complex cells in the primary visual cortex (V1). In particular, feature identity variables were activated in a way that resembled the activity of complex cells, while feature attribute variables responded much like simple cells. Furthermore, the grouping of the attributes within the model closely parallelled the reported anatomical grouping of simple cells in cat V1. Thus, this generative model makes explicit an interpretation of complex and simple cells as elements in the segmentation of a visual scene into basic independent features, along with a parametrisation of their moment-by-moment appearances. We speculate that such a segmentation may form the initial stage of a hierarchical system that progressively separates the identity and appearance of more articulated visual elements, culminating in view-invariant object recognition.

Relevance:

10.00%

Publisher:

Abstract:

The brain extracts useful features from a maelstrom of sensory information, and a fundamental goal of theoretical neuroscience is to work out how it does so. One proposed feature extraction strategy is motivated by the observation that the meaning of sensory data, such as the identity of a moving visual object, is often more persistent than the activation of any single sensory receptor. This notion is embodied in the slow feature analysis (SFA) algorithm, which uses “slowness” as a heuristic by which to extract semantic information from multi-dimensional time-series. Here, we develop a probabilistic interpretation of this algorithm, showing that inference and learning in the limiting case of a suitable probabilistic model yield exactly the results of SFA. Similar equivalences have proved useful in interpreting and extending comparable algorithms such as independent component analysis. For SFA, we use the equivalent probabilistic model as a conceptual springboard with which to motivate several novel extensions to the algorithm.
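For concreteness, one standard probabilistic reading of SFA (notation ours, not necessarily the paper's): independent slow Gaussian AR(1) latents driving a linear-Gaussian observation model, with slowness encoded by AR coefficients close to one; in a suitable limiting case, inference and learning in such a model reproduce the SFA solution.

```latex
\begin{align}
  x_{t,i} &= \lambda_i\, x_{t-1,i} + \eta_{t,i}, &
  \eta_{t,i} &\sim \mathcal{N}\!\left(0,\; 1-\lambda_i^2\right),
  \quad \lambda_i \lesssim 1,\\
  y_t &= W x_t + \epsilon_t, &
  \epsilon_t &\sim \mathcal{N}\!\left(0,\; \sigma^2 I\right).
\end{align}
```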

Relevance:

10.00%

Publisher:

Abstract:

We study unsupervised learning in a probabilistic generative model for occlusion. The model uses two types of latent variables: one indicates which objects are present in the image, and the other how they are ordered in depth. This depth order then determines how the positions and appearances of the objects present, specified in the model parameters, combine to form the image. We show that the object parameters can be learnt from an unlabelled set of images in which objects occlude one another. Exact maximum-likelihood learning is intractable. However, we show that tractable approximations to Expectation Maximization (EM) can be found if the training images each contain only a small number of objects on average. In numerical experiments it is shown that these approximations recover the correct set of object parameters. Experiments on a novel version of the bars test using colored bars, and experiments on more realistic data, show that the algorithm performs well in extracting the generating causes. Experiments based on the standard bars benchmark test for object learning show that the algorithm performs well in comparison to other recent component extraction approaches. The model and the learning algorithm thus connect research on occlusion with the research field of multiple-causes component extraction methods.
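An illustrative forward sampler for an occlusion-style generative model of this kind — presence indicators, a sampled depth order, and back-to-front painting so nearer objects overwrite farther ones. The bar-shaped templates and all parameters are placeholders; the paper's learnt parameters and EM approximations are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical object "templates": a mask (where the object paints) and an
# appearance value; in the model these are the parameters to be learnt.
H = W = 8
def bar(col):
    mask = np.zeros((H, W), bool)
    mask[:, col] = True
    return mask

objects = [(bar(1), 0.9), (bar(4), 0.5), (bar(6), 0.2)]

def generate_image(p_present=0.6, noise=0.05):
    """Sample one image: which objects are present, in what depth order, then
    paint back-to-front so nearer objects occlude farther ones."""
    present = [i for i in range(len(objects)) if rng.uniform() < p_present]
    order = list(rng.permutation(present))           # farthest first
    image = np.zeros((H, W))
    for i in order:
        mask, value = objects[i]
        image[mask] = value                          # nearer object overwrites
    return image + noise * rng.normal(size=(H, W)), present, order

img, present, order = generate_image()
print("present:", present, "depth order (far -> near):", order)
```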

Relevance:

10.00%

Publisher:

Abstract:

Variational methods are a key component of the approximate inference and learning toolbox. These methods fill an important middle ground, retaining distributional information about uncertainty in latent variables, unlike maximum a posteriori (MAP) methods, and yet generally requiring less computational time than Markov chain Monte Carlo methods. In particular, the variational Expectation Maximisation (vEM) and variational Bayes algorithms, both involving variational optimisation of a free-energy, are widely used in time-series modelling. Here, we investigate the success of vEM in simple probabilistic time-series models. First we consider the inference step of vEM, and show that a consequence of the well-known compactness property of variational inference is a failure to propagate uncertainty in time, thus limiting the usefulness of the retained distributional information. In particular, the uncertainty may appear to be smallest precisely when the approximation is poorest. Second, we consider parameter learning and analytically reveal systematic biases in the parameters found by vEM. Surprisingly, simpler variational approximations (such as mean-field) can lead to less bias than more complicated structured approximations.
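The compactness effect in miniature: for a correlated Gaussian, the optimal fully factorised (mean-field) approximation matches the conditional rather than the marginal variances, so the reported uncertainty shrinks exactly as the true distribution becomes more strongly coupled and the approximation becomes worse. This is a textbook two-variable illustration, not the paper's time-series analysis.

```python
import numpy as np

# Optimal mean-field variance for a zero-mean bivariate Gaussian is the
# reciprocal of the corresponding precision-matrix diagonal entry.
for rho in [0.0, 0.5, 0.9, 0.99]:
    cov = np.array([[1.0, rho], [rho, 1.0]])
    precision = np.linalg.inv(cov)
    mean_field_var = 1.0 / precision[0, 0]       # optimal q(x1) variance = 1 - rho^2
    true_marginal_var = cov[0, 0]
    print(f"rho={rho:4.2f}  true marginal var={true_marginal_var:.2f}  "
          f"mean-field var={mean_field_var:.2f}")
```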

Relevance:

10.00%

Publisher:

Abstract:

Change detection is a classic paradigm that has been used for decades to argue that working memory can hold no more than a fixed number of items ("item-limit models"). Recent findings force us to consider the alternative view that working memory is limited by the precision in stimulus encoding, with mean precision decreasing with increasing set size ("continuous-resource models"). Most previous studies that used the change detection paradigm have ignored effects of limited encoding precision by using highly discriminable stimuli and only large changes. We conducted two change detection experiments (orientation and color) in which change magnitudes were drawn from a wide range, including small changes. In a rigorous comparison of five models, we found no evidence of an item limit. Instead, human change detection performance was best explained by a continuous-resource model in which encoding precision is variable across items and trials even at a given set size. This model accounts for comparison errors in a principled, probabilistic manner. Our findings sharply challenge the theoretical basis for most neural studies of working memory capacity.
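A simplified simulation of a variable-precision account, for intuition only: each trial draws the probed item's encoding precision from a gamma distribution whose mean falls with set size, and the simulated observer reports a change when the measured difference exceeds a fixed criterion (the paper's model uses a principled probabilistic decision rule instead). All parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_trial(set_size, change_mag, mean_precision=6.0, criterion=0.35):
    """One change-detection trial: precision of the probed item is gamma-distributed
    with mean falling as 1/set_size; memory and probe measurements are corrupted by
    Gaussian noise with matching variance."""
    J = rng.gamma(shape=2.0, scale=(mean_precision / set_size) / 2.0)
    sigma = 1.0 / np.sqrt(J)
    memory = rng.normal(0.0, sigma)              # noisy remembered value
    probe = rng.normal(change_mag, sigma)        # noisy probe (true change = change_mag)
    return abs(probe - memory) > criterion       # simplified threshold decision

# Hit rate as a function of set size for one (small) change magnitude.
for set_size in [2, 4, 8]:
    hits = np.mean([simulate_trial(set_size, 0.5) for _ in range(5000)])
    print(f"set size {set_size}: P(report change | change = 0.5) = {hits:.2f}")
```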

Relevance:

10.00%

Publisher:

Abstract:

Quantile regression refers to the process of estimating the quantiles of a conditional distribution and has many important applications within econometrics and data mining, among other domains. In this paper, we show how to estimate these conditional quantile functions within a Bayes risk minimization framework using a Gaussian process prior. The resulting non-parametric probabilistic model is easy to implement and allows non-crossing quantile functions to be enforced. Moreover, it can be used directly in combination with tools and extensions of standard Gaussian processes, such as principled hyperparameter estimation, sparsification, and quantile regression with input-dependent noise rates. No existing approach enjoys all of these desirable properties. Experiments on benchmark datasets show that our method is competitive with state-of-the-art approaches. © 2009 IEEE.
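For reference, the tilted ("pinball") loss that defines the τ-th conditional quantile; the paper embeds this risk in a Bayes-risk-minimisation framework with a Gaussian process prior (notation here is ours).

```latex
\begin{equation}
  \ell_\tau(y, q) \;=\;
  \begin{cases}
    \tau\,(y - q), & y \ge q,\\[2pt]
    (1 - \tau)\,(q - y), & y < q,
  \end{cases}
  \qquad
  q_\tau(x) \;=\; \arg\min_{q}\; \mathbb{E}\!\left[\,\ell_\tau(y, q) \mid x\,\right].
\end{equation}
```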