117 results for SPARSE REPRESENTATION


Relevance:

20.00%

Abstract:

Deep belief networks are a powerful way to model complex probability distributions. However, learning the structure of a belief network, particularly one with hidden units, is difficult. The Indian buffet process has been used as a nonparametric Bayesian prior on the directed structure of a belief network with a single infinitely wide hidden layer. In this paper, we introduce the cascading Indian buffet process (CIBP), which provides a nonparametric prior on the structure of a layered, directed belief network that is unbounded in both depth and width, yet allows tractable inference. We use the CIBP prior with the nonlinear Gaussian belief network so each unit can additionally vary its behavior between discrete and continuous representations. We provide Markov chain Monte Carlo algorithms for inference in these belief networks and explore the structures learned on several image data sets.
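
The abstract above is a verbatim paper summary, so the sketch below is only an illustration of the generative idea it describes, not the authors' code: each layer's binary connectivity matrix is drawn from an Indian buffet process whose rows are the units of the layer below, and the cascade stops once a draw introduces no new units. The function names, `alpha`, and the `max_depth` guard are assumptions made for the example.

```python
import numpy as np

def sample_ibp(num_rows, alpha, rng):
    """One draw of a binary matrix from the Indian buffet process."""
    columns, counts = [], []                     # per-dish column entries and usage counts
    for i in range(1, num_rows + 1):
        # revisit each existing dish with probability m_k / i
        for k, m_k in enumerate(counts):
            take = int(rng.random() < m_k / i)
            columns[k].append(take)
            counts[k] += take
        # try Poisson(alpha / i) brand-new dishes for this customer
        for _ in range(rng.poisson(alpha / i)):
            columns.append([0] * (i - 1) + [1])
            counts.append(1)
    if not columns:
        return np.zeros((num_rows, 0), dtype=int)
    return np.array(columns, dtype=int).T        # shape: num_rows x num_dishes

def sample_cibp_structure(num_visible, alpha, rng, max_depth=20):
    """Cascade IBP draws upward until a layer adds no new units (or max_depth)."""
    layers, width = [], num_visible
    while width > 0 and len(layers) < max_depth:
        Z = sample_ibp(width, alpha, rng)        # connections from layer t up to layer t+1
        layers.append(Z)
        width = Z.shape[1]                       # one unit in the next layer per dish
    return layers

rng = np.random.default_rng(0)
for t, Z in enumerate(sample_cibp_structure(num_visible=6, alpha=1.5, rng=rng)):
    print(f"layer {t}: {Z.shape[0]} units connect to {Z.shape[1]} units above")
```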

Relevance:

20.00%

Abstract:

A nonparametric Bayesian extension of Factor Analysis (FA) is proposed where observed data $\mathbf{Y}$ is modeled as a linear superposition, $\mathbf{G}$, of a potentially infinite number of hidden factors, $\mathbf{X}$. The Indian Buffet Process (IBP) is used as a prior on $\mathbf{G}$ to incorporate sparsity and to allow the number of latent features to be inferred. The model's utility for modeling gene expression data is investigated using randomly generated data sets based on a known sparse connectivity matrix for E. coli, and on three biological data sets of increasing complexity.
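
As a rough companion to the abstract above (not the paper's implementation), the snippet below generates data from a sparse factor model of the form $\mathbf{Y} = \mathbf{G}\mathbf{X} + \text{noise}$, using the standard finite beta-Bernoulli approximation to the IBP prior on the support of $\mathbf{G}$. The dimensions, `alpha`, and noise level are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
D, N, K, alpha, noise_std = 10, 200, 25, 2.0, 0.1    # genes, samples, candidate factors

# Finite approximation to the IBP: feature k is switched on for row d with
# probability pi_k, where pi_k ~ Beta(alpha / K, 1); most columns stay empty.
pi = rng.beta(alpha / K, 1.0, size=K)
Z = (rng.random((D, K)) < pi).astype(float)          # binary support of G
G = Z * rng.normal(size=(D, K))                      # sparse mixing matrix

X = rng.normal(size=(K, N))                          # hidden factor activations
Y = G @ X + noise_std * rng.normal(size=(D, N))      # observed data

print(f"{int(Z.any(axis=0).sum())} of {K} candidate factors are actually used")
```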

Relevance:

20.00%

Abstract:

The use of L1 regularisation for sparse learning has generated immense research interest, with successful application in such diverse areas as signal acquisition, image coding, genomics and collaborative filtering. While existing work highlights the many advantages of L1 methods, in this paper we find that L1 regularisation often dramatically underperforms in terms of predictive performance when compared with other methods for inferring sparsity. We focus on unsupervised latent variable models, and develop L1 minimising factor models, Bayesian variants of "L1", and Bayesian models with a stronger L0-like sparsity induced through spike-and-slab distributions. These spike-and-slab Bayesian factor models encourage sparsity while accounting for uncertainty in a principled manner and avoiding unnecessary shrinkage of non-zero values. We demonstrate on a number of data sets that in practice spike-and-slab Bayesian methods outperform L1 minimisation, even when restricted to a comparable computational budget. We thus highlight the need to re-assess the wide use of L1 methods in sparsity-reliant applications, particularly when we care about generalising to previously unseen data, and provide an alternative that, over many varying conditions, provides improved generalisation performance.
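
A tiny, self-contained illustration of the shrinkage point made above (my own sketch, not the paper's models): for a single noisy coefficient, the L1/MAP estimate subtracts a fixed amount from every surviving value, while the spike-and-slab posterior mean suppresses weak signals yet leaves clearly nonzero ones almost unshrunk. The values of `sigma`, `tau`, `lam`, and `p` are arbitrary.

```python
import numpy as np

sigma, tau, lam, p = 1.0, 3.0, 1.5, 0.2          # noise sd, slab sd, L1 penalty, slab prob

def gauss_pdf(x, sd):
    return np.exp(-0.5 * (x / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

def lasso_map(y):
    """MAP under a Laplace prior: soft-thresholding, shrinks every surviving value."""
    return np.sign(y) * max(abs(y) - lam * sigma**2, 0.0)

def spike_slab_mean(y):
    """Posterior mean when theta = 0 w.p. 1-p and theta ~ N(0, tau^2) w.p. p."""
    m_slab = p * gauss_pdf(y, np.sqrt(sigma**2 + tau**2))
    m_spike = (1.0 - p) * gauss_pdf(y, sigma)
    w = m_slab / (m_slab + m_spike)               # posterior probability of the slab
    return w * (tau**2 / (tau**2 + sigma**2)) * y

for y in [0.5, 2.0, 5.0, 10.0]:
    print(f"y = {y:4.1f}   L1 MAP = {lasso_map(y):6.2f}   "
          f"spike-and-slab mean = {spike_slab_mean(y):6.2f}")
```

With these settings the weak observation (y = 0.5) is zeroed by both methods, but the strong one (y = 10) loses a fixed 1.5 under L1 while the spike-and-slab mean stays close to the observed value, which is the "unnecessary shrinkage" the abstract refers to.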

Relevance:

20.00%

Abstract:

In this paper we present Poisson sum series representations for α-stable (αS) random variables and α-stable processes, in particular concentrating on continuous-time autoregressive (CAR) models driven by α-stable Lévy processes. Our representations aim to provide a conditionally Gaussian framework, which will allow parameter estimation using Rao-Blackwellised versions of state-of-the-art Bayesian computational methods such as particle filters and Markov chain Monte Carlo (MCMC). To overcome the issues due to truncation of the series, novel residual approximations are developed. Simulations demonstrate the potential of these Poisson sum representations for inference in otherwise intractable α-stable models. © 2011 IEEE.
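
The sketch below (an illustration, not the paper's algorithm) shows the kind of truncated Poisson/LePage series the abstract refers to: with Γ_j the arrival times of a unit-rate Poisson process and W_j standard Gaussian, the sum of Γ_j^(-1/α) W_j is, up to a scale constant, symmetric α-stable, and it is exactly Gaussian once the Γ_j are conditioned on, which is what enables Rao-Blackwellised inference. The truncation level `M` is arbitrary and the paper's residual approximation is omitted.

```python
import numpy as np

def truncated_stable_sample(alpha, M, rng):
    """One approximate symmetric alpha-stable draw and its conditional variance."""
    gammas = np.cumsum(rng.exponential(1.0, size=M))   # Poisson-process arrival times
    scales = gammas ** (-1.0 / alpha)                  # series coefficients
    w = rng.normal(size=M)                             # conditionally Gaussian terms
    return float(np.sum(scales * w)), float(np.sum(scales ** 2))

rng = np.random.default_rng(2)
x, cond_var = truncated_stable_sample(alpha=1.5, M=200, rng=rng)
print(f"draw = {x:.3f}, variance given the Poisson arrivals = {cond_var:.3f}")

# Heavy tails relative to a Gaussian show up in the excess kurtosis of many draws.
draws = np.array([truncated_stable_sample(1.5, 200, rng)[0] for _ in range(20000)])
print("sample kurtosis:", np.mean(draws**4) / np.mean(draws**2) ** 2)
```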

Relevance:

20.00%

Abstract:

To explore the neural mechanisms related to representation of the manipulation dynamics of objects, we performed whole-brain fMRI while subjects balanced an object in stable and highly unstable states and while they balanced a rigid object and a flexible object in the same unstable state, in all cases without vision. In this way, we varied the extent to which an internal model of the manipulation dynamics was required in the moment-to-moment control of the object's orientation. We hypothesized that activity in primary motor cortex would reflect the amount of muscle activation under each condition. In contrast, we hypothesized that cerebellar activity would be more strongly related to the stability and complexity of the manipulation dynamics because the cerebellum has been implicated in internal model-based control. As hypothesized, the dynamics-related activation of the cerebellum was quite different from that of the primary motor cortex. Changes in cerebellar activity were much greater than would have been predicted from differences in muscle activation when the stability and complexity of the manipulation dynamics were contrasted. On the other hand, the activity of the primary motor cortex more closely resembled the mean motor output necessary to execute the task. We also discovered a small region near the anterior edge of the ipsilateral (right) inferior parietal lobule where activity was modulated with the complexity of the manipulation dynamics. We suggest that this is related to imagining the location and motion of an object with complex manipulation dynamics.