17 results for Sparsity
in the Cambridge University Engineering Department Publications Database
Restoration of images and 3D data to higher resolution by deconvolution with sparsity regularization
Abstract:
Image convolution is conventionally approximated by the LTI discrete model. It is well recognized that the higher the sampling rate, the better the approximation. However, sometimes images or 3D data are only available at a lower sampling rate due to physical constraints of the imaging system. In this paper, we model the under-sampled observation as the result of combining convolution and subsampling. Because the wavelet coefficients of piecewise smooth images tend to be sparse and well modelled by tree-like structures, we propose the L0 reweighted-L2 minimization (L0RL2) algorithm to solve this problem. This promotes model-based sparsity by minimizing the reweighted L2 norm, which approximates the L0 norm, and by enforcing a tree model over the weights. We test the algorithm on three examples: a simple ring, the cameraman image and a 3D microscope dataset, and show that good results can be obtained. © 2010 IEEE.
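As a rough illustration of the reweighted-L2 idea, a minimal sketch follows (my own illustration, not the authors' implementation: the tree model over the weights and the combined convolution-and-subsampling operator are omitted, and A stands for a generic observation matrix). Each pass is an ordinary weighted ridge solve, which is what makes the L2 reweighting attractive as a surrogate for direct L0 minimization.

    import numpy as np

    def l0_reweighted_l2(A, y, lam=1e-2, eps=1e-3, n_iter=50):
        # Illustrative sketch, not the paper's code. Minimizes
        # ||y - A x||^2 + lam * sum_i w_i * x_i^2 with w_i = 1/(x_i^2 + eps),
        # so large coefficients are barely penalized and small ones heavily,
        # approximating an L0 penalty.
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            w = 1.0 / (x**2 + eps)
            x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ y)
        return x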
Abstract:
Statistical dependencies among wavelet coefficients are commonly represented by graphical models such as hidden Markov trees (HMTs). However, in linear inverse problems such as deconvolution, tomography, and compressed sensing, the presence of a sensing or observation matrix produces a linear mixing of the simple Markovian dependency structure. This leads to reconstruction problems that are non-convex optimizations. Past work has dealt with this issue by resorting to greedy or suboptimal iterative reconstruction methods. In this paper, we propose new modeling approaches based on group-sparsity penalties that lead to convex optimizations that can be solved exactly and efficiently. We show that the methods we develop perform significantly better in deconvolution and compressed sensing applications, while being as computationally efficient as standard coefficient-wise approaches such as the lasso. © 2011 IEEE.
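The group-sparsity penalty at the heart of this construction is easy to sketch (illustrative only; how parent and child wavelet coefficients are grouped, and the convex solver wrapped around the penalty, are simplified away). The proximal operator of tau * sum_g ||x_g||_2 shrinks each group as a block, so grouped coefficients switch on or off together:

    import numpy as np

    def group_soft_threshold(x, groups, tau):
        # Illustrative sketch: proximal operator of tau * sum_g ||x_g||_2.
        # Each group (g is an index array) is either zeroed out entirely or
        # shrunk toward zero by tau in Euclidean norm.
        out = np.zeros_like(x, dtype=float)
        for g in groups:
            norm = np.linalg.norm(x[g])
            if norm > tau:
                out[g] = (1.0 - tau / norm) * x[g]
        return out

Plugged into a proximal-gradient loop such as ISTA or FISTA, an operator like this gives a convex solver in the spirit of what the abstract describes.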
Abstract:
Most of the manual labor needed to create the geometric building information model (BIM) of an existing facility is spent converting raw point cloud data (PCD) to a BIM description. Automating this process would drastically reduce the modeling cost. Surface extraction from PCD is a fundamental step in this process. Compact modeling of the redundant points in PCD as a set of planes leads to smaller file sizes and fast interactive visualization on inexpensive hardware. Traditional approaches to smooth surface reconstruction do not explicitly model the sparse scene structure or significantly exploit the redundancy. This paper proposes a method based on sparsity-inducing optimization to address the planar surface extraction problem. Through sparse optimization, points in the PCD are segmented according to their embedded linear subspaces. Within each segmented part, plane models can be estimated. Experimental results on a typical noisy PCD demonstrate the effectiveness of the algorithm.
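The sparse-optimization segmentation itself does not reduce to a short snippet, but the per-segment plane-estimation step it enables can be sketched (a standard total-least-squares fit via the SVD, not the paper's code):

    import numpy as np

    def fit_plane(points):
        # Illustrative sketch: points is an (N, 3) array from one segmented
        # linear subspace. The plane normal is the direction of least
        # variance, i.e. the right singular vector with the smallest
        # singular value of the centred points.
        centroid = points.mean(axis=0)
        _, _, vt = np.linalg.svd(points - centroid)
        return vt[-1], centroid  # unit normal and a point on the plane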
Abstract:
Computational models of visual cortex, and in particular those based on sparse coding, have enjoyed much recent attention. Despite this currency, the question of how sparse, or how over-complete, a sparse representation should be has gone without a principled answer. Here, we use Bayesian model-selection methods to address these questions for a sparse-coding model based on a Student-t prior. Having validated our methods on toy data, we find that natural images are indeed best modelled by extremely sparse distributions, although for the Student-t prior the associated optimal basis size is only modestly over-complete.
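For concreteness, a Student-t prior enters MAP-style sparse-coding inference through an energy term like the following (a minimal sketch; nu and scale are placeholder hyperparameters, and the normalizing constant is dropped):

    import numpy as np

    def student_t_neg_log_prior(a, nu=2.0, scale=1.0):
        # Illustrative sketch: negative log of a Student-t density on the
        # coefficients a, up to an additive constant. The density is sharply
        # peaked at zero with heavy tails, so most coefficients are driven
        # toward zero while a few remain large.
        return 0.5 * (nu + 1.0) * np.sum(np.log1p((a / scale) ** 2 / nu))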
Abstract:
Sensor networks can be naturally represented as graphical models, where the edge set encodes the presence of sparsity in the correlation structure between sensors. Such graphical representations can be valuable for information-mining purposes as well as for optimizing bandwidth and battery usage with minimal loss of estimation accuracy. We use a computationally efficient technique for estimating sparse graphical models which fits a sparse linear regression locally at each node of the graph via the Lasso estimator. Using a recently suggested online, temporally adaptive implementation of the Lasso, we propose an algorithm for streaming graphical model selection over sensor networks. With battery-consumption minimization applications in mind, we use this algorithm as the basis of an adaptive querying scheme. We discuss implementation issues in the context of environmental monitoring using sensor networks, where the objective is short-term forecasting of local wind direction. The algorithm is tested against real UK weather data, and conclusions are drawn about certain tradeoffs inherent in decentralized sensor-network data analysis. © 2010 The Author. Published by Oxford University Press on behalf of The British Computer Society. All rights reserved.
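The local-Lasso construction is simple to sketch in its batch form (the streaming, temporally adaptive variant used in the paper is not reproduced here; scikit-learn's Lasso stands in for the solver):

    import numpy as np
    from sklearn.linear_model import Lasso

    def neighbourhood_selection(X, alpha=0.1):
        # Illustrative sketch: X is an (n_samples, n_nodes) matrix of sensor
        # readings. Each node is regressed on all others; a nonzero Lasso
        # coefficient proposes an edge in the graphical model.
        n_nodes = X.shape[1]
        adj = np.zeros((n_nodes, n_nodes), dtype=bool)
        for j in range(n_nodes):
            others = [k for k in range(n_nodes) if k != j]
            coef = Lasso(alpha=alpha).fit(X[:, others], X[:, j]).coef_
            adj[j, others] = coef != 0
        return adj | adj.T  # symmetrise with an OR rule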
Abstract:
A nonparametric Bayesian extension of Factor Analysis (FA) is proposed where observed data $\mathbf{Y}$ is modeled as a linear superposition, $\mathbf{G}$, of a potentially infinite number of hidden factors, $\mathbf{X}$. The Indian Buffet Process (IBP) is used as a prior on $\mathbf{G}$ to incorporate sparsity and to allow the number of latent features to be inferred. The model's utility for modeling gene expression data is investigated using randomly generated data sets based on a known sparse connectivity matrix for E. coli, and on three biological data sets of increasing complexity.
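The generative side of the IBP prior can be sketched with a toy sampler (not the paper's inference code; alpha controls the expected number of active latent features):

    import numpy as np

    def sample_ibp(n_rows, alpha=2.0, seed=None):
        # Illustrative sketch: row i takes an existing column k with
        # probability m_k / i (m_k = how many earlier rows took it), then
        # opens Poisson(alpha / i) new columns, so the number of latent
        # features is inferred rather than fixed in advance.
        rng = np.random.default_rng(seed)
        cols = []
        for i in range(1, n_rows + 1):
            for c in cols:
                c.append(int(rng.random() < sum(c) / i))
            cols += [[0] * (i - 1) + [1]
                     for _ in range(rng.poisson(alpha / i))]
        return np.array(cols, dtype=int).T if cols else np.zeros((n_rows, 0), int)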
Abstract:
The use of L1 regularisation for sparse learning has generated immense research interest, with successful application in such diverse areas as signal acquisition, image coding, genomics and collaborative filtering. While existing work highlights the many advantages of L1 methods, in this paper we find that L1 regularisation often dramatically underperforms in terms of predictive performance when compared with other methods for inferring sparsity. We focus on unsupervised latent variable models, and develop L1-minimising factor models, Bayesian variants of "L1", and Bayesian models with a stronger L0-like sparsity induced through spike-and-slab distributions. These spike-and-slab Bayesian factor models encourage sparsity while accounting for uncertainty in a principled manner and avoiding unnecessary shrinkage of non-zero values. We demonstrate on a number of data sets that in practice spike-and-slab Bayesian methods outperform L1 minimisation, even when the computational budget is taken into account. We thus highlight the need to re-assess the wide use of L1 methods in sparsity-reliant applications, particularly when we care about generalising to previously unseen data, and provide an alternative that, over many varying conditions, provides improved generalisation performance.
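The spike-and-slab prior itself is the simple part, and sampling from it shows why it avoids shrinking the retained values (a toy sketch; pi and slab_std are placeholder hyperparameters):

    import numpy as np

    def sample_spike_and_slab(shape, pi=0.1, slab_std=1.0, seed=None):
        # Illustrative sketch: each entry is exactly zero with probability
        # 1 - pi (the spike), otherwise Gaussian (the slab). This gives
        # genuine L0-style sparsity with no shrinkage of nonzero values,
        # unlike the L1 penalty.
        rng = np.random.default_rng(seed)
        mask = rng.random(shape) < pi
        return mask * rng.normal(0.0, slab_std, shape)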
Abstract:
The adaptive BDDC method is extended to the selection of face constraints in three dimensions. A new implementation of the BDDC method is presented based on a global formulation without an explicit coarse problem, with massive parallelism provided by a multifrontal solver. Constraints are implemented by a projection, and the sparsity of the projected operator is preserved by a generalized change of variables. The effectiveness of the method is illustrated on several engineering problems.
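The projection mechanism can be sketched in its dense form (illustrative only; BDDC's generalized change of variables exists precisely to avoid the fill-in that this dense projection would cause at scale):

    import numpy as np

    def project_out_constraints(S, C):
        # Illustrative sketch: enforce C x = 0 on a symmetric operator S.
        # P projects onto the null space of C (C must have full row rank),
        # and the constrained operator is P.T @ S @ P.
        P = np.eye(S.shape[0]) - C.T @ np.linalg.solve(C @ C.T, C)
        return P.T @ S @ P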
Abstract:
This paper develops an algorithm for finding sparse signals from limited observations of a linear system. We assume an adaptive Gaussian model for sparse signals. This model results in a least-squares problem with an iteratively reweighted L2 penalty that approximates the L0 norm. We propose a fast algorithm to solve the problem within a continuation framework. In our examples, we show that the correct sparsity map and sparsity level are gradually learnt during the iterations, even when the number of observations is reduced or observation noise is present. In addition, with the help of sophisticated interscale signal models, the algorithm is able to recover signals to better accuracy and with a reduced number of observations than typical L1-norm and reweighted-L1-norm methods. © 2010 IEEE.
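The continuation framework can be sketched on top of the reweighted-L2 iteration (my own illustration, omitting the interscale signal models; eps0 and decay are placeholder schedule parameters):

    import numpy as np

    def irls_continuation(A, y, lam=1e-2, eps0=1.0, decay=0.5,
                          n_stages=8, inner=10):
        # Illustrative sketch: start with a large eps, where the penalty is
        # nearly an ordinary ridge term, and shrink it between stages so the
        # reweighted L2 penalty tightens toward the L0 norm as the sparsity
        # map is gradually learnt.
        x, eps = np.zeros(A.shape[1]), eps0
        for _ in range(n_stages):
            for _ in range(inner):
                w = 1.0 / (x**2 + eps)
                x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ y)
            eps *= decay
        return x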
Abstract:
A dynamical system can exhibit structure on multiple levels, and different system representations capture different elements of that structure. We consider LTI input-output dynamical systems and present four representations of structure: complete computational structure, subsystem structure, signal structure, and input-output sparsity structure. We then explore some of the mathematical relationships among these different representations of structure. In particular, we show that signal and subsystem structure are fundamentally different ways of representing system structure: a signal structure does not always specify a unique subsystem structure, nor does a subsystem structure always specify a unique signal structure. We illustrate these concepts with a numerical example. © 2011 AACC American Automatic Control Council.
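Of the four representations, the input-output sparsity structure is the easiest to make concrete from a state-space model (a sketch under assumed exact arithmetic; numerical cancellation can mask a genuine dependency, and this is my reading of the term rather than the paper's definition):

    import numpy as np

    def io_sparsity_structure(A, B, C, D, tol=1e-9):
        # Illustrative sketch: entry (i, j) is True when output i depends on
        # input j, i.e. D[i, j] != 0 or C_i A^k B_j != 0 for some k < n
        # (higher powers add nothing, by the Cayley-Hamilton theorem).
        n = A.shape[0]
        pattern = np.abs(D) > tol
        M = B.copy()
        for _ in range(n):
            pattern |= np.abs(C @ M) > tol
            M = A @ M
        return pattern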