521 results for Tikhonov regularization
Abstract:
Removing noise from piecewise constant (PWC) signals is a challenging signal processing problem arising in many practical contexts. For example, in exploration geosciences, noisy drill hole records need to be separated into stratigraphic zones, and in biophysics, jumps between molecular dwell states have to be extracted from noisy fluorescence microscopy signals. Many PWC denoising methods exist, including total variation regularization, mean shift clustering, stepwise jump placement, running medians, convex clustering shrinkage and bilateral filtering; conventional linear signal processing methods are fundamentally unsuited. This paper (part I, the first of two) shows that most of these methods are associated with a special case of a generalized functional, minimized to achieve PWC denoising. The minimizer can be obtained by diverse solver algorithms, including stepwise jump placement, convex programming, finite differences, iterated running medians, least angle regression, regularization path following and coordinate descent. In the second paper, part II, we introduce novel PWC denoising methods and present comparisons between these and existing methods on synthetic and real signals, showing that the new understanding of the problem gained in part I leads to new methods with a useful role to play.
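A minimal sketch of the total variation special case of such a functional, written with the cvxpy modeling library (the objective form and the value of lam below are illustrative assumptions, not the paper's exact generalized functional):

```python
import numpy as np
import cvxpy as cp

def tv_denoise(y, lam):
    """PWC denoising by total variation regularization:
    minimize 0.5*||x - y||^2 + lam * sum_i |x[i+1] - x[i]|."""
    x = cp.Variable(len(y))
    cost = 0.5 * cp.sum_squares(x - y) + lam * cp.tv(x)
    cp.Problem(cp.Minimize(cost)).solve()
    return x.value

# a noisy two-level step signal; larger lam yields flatter, blockier output
y = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * np.random.randn(100)
x_hat = tv_denoise(y, lam=1.0)
```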
Abstract:
We consider the problem of stable determination of a harmonic function from knowledge of the solution and its normal derivative on a part of the boundary of the (bounded) solution domain. The alternating method is a procedure to generate an approximation to the harmonic function from such Cauchy data and we investigate a numerical implementation of this procedure based on Fredholm integral equations and Nyström discretization schemes, which makes it possible to perform a large number of iterations (millions) with minor computational cost (seconds) and high accuracy. Moreover, the original problem is rewritten as a fixed point equation on the boundary, and various other direct regularization techniques are discussed to solve that equation. We also discuss how knowledge of the smoothness of the data can be used to further improve the accuracy. Numerical examples are presented showing that accurate approximations of both the solution and its normal derivative can be obtained with much less computational time than in previous works.
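Not the authors' integral-equation/Nyström scheme, but a toy spectral version of the same continuation problem on a periodic strip illustrates both the instability (the cosh(ky) amplification of high modes) and a simple direct regularization by frequency truncation; the cutoff k_cut is an assumed tuning parameter:

```python
import numpy as np

def cauchy_continue(f, g, y, k_cut):
    """Continue a harmonic function u on a 2*pi-periodic strip from Cauchy
    data u(x, 0) = f, u_y(x, 0) = g. Mode k evolves as
    u_hat(k, y) = f_hat*cosh(k*y) + g_hat*sinh(k*y)/k, so noise in high
    modes explodes; zeroing |k| > k_cut is a crude direct regularizer."""
    n = len(f)
    k = np.fft.fftfreq(n, d=1.0 / n)              # integer wavenumbers
    fh, gh = np.fft.fft(f), np.fft.fft(g)
    uh = np.zeros(n, dtype=complex)
    nz = k != 0
    uh[nz] = fh[nz] * np.cosh(k[nz] * y) + gh[nz] * np.sinh(k[nz] * y) / k[nz]
    uh[~nz] = fh[~nz] + gh[~nz] * y               # the k = 0 mode is linear in y
    uh[np.abs(k) > k_cut] = 0.0
    return np.fft.ifft(uh).real

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
f = np.cos(x) + 1e-3 * np.random.randn(256)       # noisy Dirichlet data
g = np.zeros_like(x)                              # Neumann data
u_top = cauchy_continue(f, g, y=1.0, k_cut=5)     # stable continuation to y = 1
```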
Abstract:
The shape of a plane acoustical sound-soft obstacle is detected from knowledge of the far field pattern for one time-harmonic incident field. Two methods based on solving a system of integral equations for the incoming wave and the far field pattern are investigated. Properties of the integral operators required in order to apply regularization, i.e. injectivity and denseness of the range, are proved.
Abstract:
We extend a meshless method of fundamental solutions, recently proposed by the authors for the one-dimensional two-phase inverse linear Stefan problem, to the nonlinear case. In this latter situation the free surface is also considered unknown, which is more realistic from a practical point of view. Building on the earlier work, the solution is approximated in each phase by a linear combination of fundamental solutions to the heat equation. The implementation and analysis are more complicated in the present situation, since one needs to deal with a nonlinear minimization problem to identify the free surface. Furthermore, the inverse problem is ill-posed, since small errors in the input measured data can cause large deviations in the desired solution. Therefore, regularization needs to be incorporated into the objective function which is minimized in order to obtain a stable solution. Numerical results are presented and discussed.
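The method-of-fundamental-solutions building block is easy to sketch for a toy one-phase direct problem (the paper's setting is inverse, nonlinear and two-phase, with the free boundary identified by a separate regularized minimization); the source placement, the data, and the Tikhonov parameter lam below are all illustrative assumptions:

```python
import numpy as np

def heat_fs(x, t):
    """Fundamental solution of the 1D heat equation (zero for t <= 0)."""
    out = np.zeros_like(t, dtype=float)
    pos = t > 0
    out[pos] = np.exp(-x[pos] ** 2 / (4 * t[pos])) / np.sqrt(4 * np.pi * t[pos])
    return out

m = 40
xs = np.linspace(0.0, 1.0, m)
ts = np.linspace(1e-3, 1.0, m)

# collocation: initial line t = 0 and the two lateral boundaries x = 0, x = 1
col_x = np.concatenate([xs, np.zeros(m), np.ones(m)])
col_t = np.concatenate([np.zeros(m), ts, ts])
data  = np.concatenate([np.sin(np.pi * xs), np.zeros(m), np.zeros(m)])

# sources placed outside the space-time domain (earlier in time, wider in space)
src_x = np.linspace(-1.0, 2.0, 2 * m)
src_t = np.full(2 * m, -0.5)

A = heat_fs(col_x[:, None] - src_x[None, :], col_t[:, None] - src_t[None, :])
lam = 1e-8                                   # Tikhonov regularization parameter
c = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ data)

def u(x, t):                                 # MFS approximation inside the domain
    return heat_fs(x - src_x, t - src_t) @ c
```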
Abstract:
In this paper, we explore the idea of social role theory (SRT) and propose a novel regularized topic model which incorporates SRT into the generative process of social media content. We assume that a user can play multiple social roles, each serving to fulfil different duties and associated with a role-driven distribution over latent topics. In particular, we focus on social roles corresponding to the most common social activities on social networks. Our model is instantiated on microblogs (Twitter) and community question answering (cQA; Yahoo! Answers), where social roles on Twitter include "originators" and "propagators", and roles on cQA are "askers" and "answerers". Both explicit and implicit interactions between users are taken into account and modeled as regularization factors. To evaluate the performance of our proposed method, we have conducted extensive experiments on two Twitter datasets and two cQA datasets. Furthermore, we also consider multi-role modeling for scientific papers, where an author's research expertise area is treated as a social role. We also present a novel application that detects users' research interests through topical keyword labeling based on the results of our multi-role model. The evaluation results show the feasibility and effectiveness of our model.
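As a schematic of what "modeled as regularization factors" can mean (the paper's actual objective and inference are more involved; theta, edges, and gamma below are hypothetical names), an explicit-interaction penalty might pull interacting users toward similar role-driven topic distributions:

```python
import numpy as np

def interaction_regularizer(theta, edges, gamma):
    """Penalty added to the topic model objective: users u, v who interact
    (an edge) are encouraged to have similar distributions theta[u], theta[v]."""
    return gamma * sum(np.sum((theta[u] - theta[v]) ** 2) for u, v in edges)

theta = {0: np.array([0.7, 0.2, 0.1]),     # per-user topic/role distributions
         1: np.array([0.6, 0.3, 0.1]),
         2: np.array([0.1, 0.1, 0.8])}
penalty = interaction_regularizer(theta, edges=[(0, 1), (1, 2)], gamma=0.5)
```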
Abstract:
Let H be a real Hilbert space and T be a maximal monotone operator on H. A well-known algorithm for solving the problem (P) "find x ∈ H such that 0 ∈ Tx", developed by R. T. Rockafellar [16], is the proximal point algorithm. Several generalizations have been considered by various authors: introduction of a perturbation, introduction of a variable metric in the perturbed algorithm, introduction of a pseudo-metric in place of the classical regularization, etc. We summarize some of these extensions by taking simultaneously into account a pseudo-metric as regularization and a perturbation in an inexact version of the algorithm.
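In its classical, unperturbed, fixed-metric form (the special case that the abstract's extensions generalize), the iteration is x_{k+1} = (I + λT)^{-1} x_k. A minimal sketch for T = ∂‖·‖₁, whose resolvent is soft thresholding:

```python
import numpy as np

def prox_l1(v, lam):
    """Resolvent (I + lam*T)^{-1} for T = subdifferential of ||.||_1,
    i.e. componentwise soft thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def proximal_point(x0, lam=0.5, iters=100):
    """Classical proximal point algorithm: iterate the resolvent.
    Its fixed points are exactly the zeros of T (here, the origin)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = prox_l1(x, lam)
    return x

x_star = proximal_point(np.array([3.0, -2.0, 0.4]))   # converges to 0, the zero of T
```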
Abstract:
We are concerned with two-level optimization problems called strong-weak Stackelberg problems, generalizing the class of Stackelberg problems in the strong and weak sense. Since the considered two-level optimization problems may fail to have a solution under mild assumptions, we consider a regularization involving ε-approximate optimal solutions in the lower level problems. We prove the existence of optimal solutions for such regularized problems and present some approximation results as the parameter ε goes to zero. Finally, as an example, we consider an optimization problem associated to a best bound given in [2] for a system of nondifferentiable convex inequalities.
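A brute-force grid sketch of the ε-regularization idea on hypothetical leader/follower costs F and f (all functions and grids below are illustrative assumptions): the strong (optimistic) value minimizes the leader's cost over the follower's ε-approximate solution set, while the weak (pessimistic) value maximizes over it:

```python
import numpy as np

F = lambda x, y: (x - 1.0) ** 2 + x * y        # leader's cost
f = lambda x, y: (y - x) ** 2                  # follower's cost

xs = np.linspace(-2.0, 2.0, 201)
ys = np.linspace(-2.0, 2.0, 201)

def stackelberg_values(eps):
    strong, weak = np.inf, np.inf
    for x in xs:
        fx = f(x, ys)
        S_eps = ys[fx <= fx.min() + eps]       # eps-approximate lower-level solutions
        Fv = F(x, S_eps)
        strong = min(strong, Fv.min())         # optimistic (strong) regularized value
        weak = min(weak, Fv.max())             # pessimistic (weak) regularized value
    return strong, weak

print(stackelberg_values(0.1), stackelberg_values(0.01))  # behavior as eps -> 0
```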
Abstract:
Two jamming cancellation algorithms are developed based on a stable solution of the least squares problem (LSP) provided by regularization. They rely on filtered singular value decomposition (SVD) and on modifications of the Greville formula. Both algorithms allow an efficient hardware implementation. Test results on artificial data modeling difficult real-world situations are also provided.
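The filtered SVD half of the approach reduces to Tikhonov filter factors applied to the singular spectrum; a minimal sketch (the Greville-formula recursion and the hardware mapping are not shown):

```python
import numpy as np

def filtered_svd_solve(A, b, alpha):
    """Regularized least squares via filtered SVD:
    x = sum_i s_i/(s_i^2 + alpha) * (u_i . b) * v_i.
    Small singular values (near-collinear jammer directions) are damped
    instead of being inverted, which stabilizes the solution."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    filt = s / (s ** 2 + alpha)
    return Vt.T @ (filt * (U.T @ b))
```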
Abstract:
AMS Subj. Classification: 49J15, 49M15
Abstract:
2000 Mathematics Subject Classification: 90C25, 68W10, 49M37.
An efficient, approximate path-following algorithm for elastic net based nonlinear spike enhancement
Abstract:
Unwanted spike noise in a digital signal is a common problem in digital filtering. Sometimes, however, the spikes are wanted and other, superimposed signals are unwanted, and linear, time-invariant (LTI) filtering is ineffective because the spikes are wideband, overlapping with independent noise in the frequency domain. So, no LTI filter can separate them, necessitating nonlinear filtering. However, there are applications in which the noise includes drift or smooth signals for which LTI filters are ideal. We describe a nonlinear filter, formulated as the solution to an elastic net regularization problem, which attenuates band-limited signals and independent noise while enhancing superimposed spikes. Making use of known analytic solutions, we give a novel, approximate path-following algorithm that provides a good filtered output with reduced computational effort compared to standard convex optimization methods. Accurate performance is shown on real, noisy electrophysiological recordings of neural spikes.
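One of the known analytic solutions such an algorithm can exploit is the elastic net minimizer for a trivial (identity) design, a scaled soft threshold; a sketch of that building block and a crude λ₁ path over a grid (the paper's filter embeds band-limiting structure and a more careful path approximation):

```python
import numpy as np

def elastic_net_identity(y, lam1, lam2):
    """argmin_x 0.5*||x - y||^2 + lam1*||x||_1 + 0.5*lam2*||x||^2
    for an identity design: soft thresholding followed by shrinkage."""
    return np.sign(y) * np.maximum(np.abs(y) - lam1, 0.0) / (1.0 + lam2)

def crude_path(y, lam1_grid, lam2):
    """Evaluate the solution along a decreasing grid of lam1 values."""
    return [elastic_net_identity(y, l1, lam2) for l1 in sorted(lam1_grid, reverse=True)]

y = np.array([0.1, 4.0, -0.2, -3.5, 0.05])        # spikes riding on small noise
solutions = crude_path(y, lam1_grid=[2.0, 1.0, 0.5], lam2=0.1)
```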
Abstract:
2000 Mathematics Subject Classification: 62H30, 62P99
Abstract:
2002 Mathematics Subject Classification: 35L15, 35L80, 35S05, 35S30
Abstract:
Big data comes in various ways, types, shapes, forms and sizes. Indeed, almost all areas of science, technology, medicine, public health, economics, business, linguistics and social science are bombarded by ever increasing flows of data begging to be analyzed efficiently and effectively. In this paper, we propose a rough idea of a possible taxonomy of big data, along with some of the most commonly used tools for handling each particular category of bigness. The dimensionality p of the input space and the sample size n are usually the main ingredients in the characterization of data bigness. The specific statistical machine learning technique used to handle a particular big data set depends on which category it falls in within the bigness taxonomy. Large p, small n data sets, for instance, require a different set of tools from the large n, small p variety. Among other tools, we discuss Preprocessing, Standardization, Imputation, Projection, Regularization, Penalization, Compression, Reduction, Selection, Kernelization, Hybridization, Parallelization, Aggregation, Randomization, Replication, and Sequentialization. Indeed, it is important to emphasize right away that the so-called no free lunch theorem applies here, in the sense that there is no universally superior method that outperforms all other methods on all categories of bigness. It is also important to stress that simplicity, in the sense of Ockham's razor non-plurality principle of parsimony, tends to reign supreme when it comes to massive data. We conclude with a comparison of the predictive performance of some of the most commonly used methods on a few data sets.
Abstract:
This research evaluates pattern recognition techniques on a subclass of big data where the dimensionality of the input space (p) is much larger than the number of observations (n). Specifically, we evaluate massive gene expression microarray cancer data where the ratio κ = n/p is less than one. We explore the statistical and computational challenges inherent in these high dimensional, low sample size (HDLSS) problems and present statistical machine learning methods used to tackle and circumvent these difficulties. Regularization and kernel algorithms were explored in this research using seven datasets where κ < 1. These techniques require special attention to tuning, necessitating the investigation of several extensions of cross-validation to support better predictive performance. While no single algorithm was universally the best predictor, the regularization technique produced lower test errors in five of the seven datasets studied.
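A minimal sketch of the regularization-plus-cross-validation recipe on synthetic HDLSS data (the data generation, fold count, and penalty grid below are illustrative assumptions; the studied datasets were real microarray data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n, p = 60, 2000                          # kappa = n/p << 1, microarray-like shape
X = rng.standard_normal((n, p))
y = (X[:, :5].sum(axis=1) + 0.5 * rng.standard_normal(n) > 0).astype(int)

# L2-regularized (ridge-style) logistic regression; the penalty strength is
# tuned by cross-validation, which needs special care when n is this small
model = make_pipeline(
    StandardScaler(),
    LogisticRegressionCV(Cs=10, cv=5, penalty="l2", max_iter=5000),
)
model.fit(X, y)
print("CV-chosen inverse penalty C:", model[-1].C_[0])
```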