404 results for Regularization
Abstract:
We present a novel array RLS algorithm with forgetting factor that circumvents the problem of fading regularization, inherent to the standard exponentially-weighted RLS, by allowing for time-varying regularization matrices with generic structure. Simulations in finite precision show the algorithm's superiority over alternative algorithms in the context of adaptive beamforming.
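For context, here is a minimal numpy sketch (not the authors' array algorithm; the interface and the parameter names lam and delta are illustrative) of the standard exponentially-weighted RLS recursion, in which the initial regularization delta*I enters the cost damped by lam**n and therefore fades as more samples arrive:

```python
# Minimal sketch of standard exponentially-weighted RLS (illustrative only).
import numpy as np

def ewrls(X, d, lam=0.99, delta=1.0):
    """Minimizes sum_i lam**(n-i) * (d_i - x_i^T w)**2 + lam**n * delta * ||w||^2.
    The delta*||w||^2 term is damped by lam**n: this is the fading
    regularization that the array algorithm above is designed to avoid."""
    n, m = X.shape
    w = np.zeros(m)
    P = np.eye(m) / delta                   # P_0 = (delta * I)^{-1}
    for i in range(n):
        x = X[i]
        k = P @ x / (lam + x @ P @ x)       # gain vector
        w = w + k * (d[i] - x @ w)          # update with the a priori error
        P = (P - np.outer(k, x @ P)) / lam  # Riccati update of the inverse
    return w
```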
Abstract:
The problem of extracting pore size distributions from characterization data is solved here with particular reference to adsorption. The technique developed is based on a finite element collocation discretization of the adsorption integral, with fitting of the isotherm data by least squares using regularization. A rapid and simple technique is also developed to ensure non-negativity of the solutions by modifying any solution that exhibits negativity. The technique yields stable and converged solutions, and is implemented in the package RIDFEC. The package is demonstrated to be robust, yielding results that are less sensitive to experimental error than conventional methods, with fitting errors matching the known data error. It is shown that the choice of relative or absolute error norm in the least-squares analysis is best guided by the nature of the error in the data.
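A minimal sketch of the general approach (Tikhonov-regularized least squares on a discretized integral equation, with non-negativity enforced via NNLS); this is not the RIDFEC implementation, and the toy kernel and parameters below are illustrative assumptions:

```python
# Regularized, non-negative inversion of a discretized integral (sketch).
import numpy as np
from scipy.optimize import nnls

def invert_isotherm(K, b, alpha):
    """Solve min ||K f - b||^2 + alpha * ||f||^2 subject to f >= 0
    by augmenting the system and calling non-negative least squares."""
    m = K.shape[1]
    K_aug = np.vstack([K, np.sqrt(alpha) * np.eye(m)])
    b_aug = np.concatenate([b, np.zeros(m)])
    f, _ = nnls(K_aug, b_aug)
    return f

# Toy example: smooth placeholder kernel relating pore radius to uptake.
r = np.linspace(0.5, 5.0, 60)            # pore radii (illustrative units)
p = np.linspace(0.01, 0.99, 40)          # relative pressures
K = 1.0 / (1.0 + np.exp(-(p[:, None] - 0.2 * r[None, :]) * 10))
f_true = np.exp(-((r - 2.0) ** 2))       # assumed pore size distribution
b = K @ f_true + 1e-3 * np.random.randn(len(p))
f_est = invert_isotherm(K, b, alpha=1e-2)
```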
Abstract:
In this paper we present a modified regularization scheme for Mathematical Programs with Complementarity Constraints. In the regularized formulations, the complementarity condition is replaced by a constraint involving a positive parameter that can be driven to zero. In our approach, both the complementarity condition and the non-negativity constraints are relaxed. An iterative algorithm is implemented in the MATLAB language and tested on a set of AMPL problems from the MacMPEC database.
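A minimal sketch of the relaxation idea on a toy two-variable problem; SciPy's SLSQP as the inner solver and the decreasing schedule for t are assumptions of this sketch, since the paper's implementation is in MATLAB and tested on AMPL models:

```python
# Relaxed complementarity: both x*y <= t and the bounds x, y >= -t use the
# same positive parameter t, which is driven toward zero (sketch only).
import numpy as np
from scipy.optimize import minimize

def solve_relaxed_mpcc(f, x0, ts=(1.0, 0.1, 0.01, 1e-4)):
    """min f(z) over z = (x, y) s.t. x >= -t, y >= -t, x*y <= t, t decreasing."""
    z = np.asarray(x0, dtype=float)
    for t in ts:
        cons = [
            {"type": "ineq", "fun": lambda z, t=t: z[0] + t},         # x >= -t
            {"type": "ineq", "fun": lambda z, t=t: z[1] + t},         # y >= -t
            {"type": "ineq", "fun": lambda z, t=t: t - z[0] * z[1]},  # x*y <= t
        ]
        z = minimize(f, z, constraints=cons, method="SLSQP").x  # warm start
    return z

# Toy problem: min (x-1)^2 + (y-1)^2 with complementarity between x and y.
sol = solve_relaxed_mpcc(lambda z: (z[0] - 1) ** 2 + (z[1] - 1) ** 2, [0.5, 0.5])
```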
Abstract:
Diffusion MRI is a well-established imaging modality providing a powerful way to probe the structure of the white matter non-invasively. Despite its potential, the intrinsically long scan times of these sequences have hampered their use in clinical practice. For this reason, a large variety of methods have recently been proposed to shorten the acquisition times. Among them, spherical deconvolution approaches have gained a lot of interest for their ability to reliably recover the intra-voxel fiber configuration with a relatively small number of data samples. To overcome the intrinsic instabilities of deconvolution, these methods use regularization schemes generally based on the assumption that the fiber orientation distribution (FOD) to be recovered in each voxel is sparse. The well-known Constrained Spherical Deconvolution (CSD) approach resorts to Tikhonov regularization, based on an ℓ₂-norm prior, which promotes a weak version of sparsity. In the last few years, compressed sensing has also been advocated to further accelerate the acquisitions, and ℓ₁-norm minimization is generally employed as a means to promote sparsity in the recovered FODs. In this paper, we provide evidence that the use of an ℓ₁-norm prior to regularize this class of problems is somewhat inconsistent with the fact that the fiber compartments all sum up to unity. To overcome this ℓ₁ inconsistency while exploiting sparsity more effectively than an ℓ₂ prior allows, we reformulate the reconstruction as a constrained problem that couples a data-fidelity term with a sparsity prior consisting of an explicit bound on the ℓ₀-norm of the FOD, i.e. on the number of fibers. The method has been tested on both synthetic and real data. Experimental results show that the proposed ℓ₀ formulation significantly reduces modeling errors compared to the state-of-the-art ℓ₂ and ℓ₁ regularization approaches.
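A minimal sketch of deconvolution under an explicit ℓ₀ bound, using projected gradient steps with hard thresholding; the generic dictionary Phi and the solver choice are assumptions of this sketch, not the authors' algorithm:

```python
# Approximately solve min ||Phi x - y||^2 s.t. ||x||_0 <= k, x >= 0 (sketch).
import numpy as np

def l0_deconvolve(Phi, y, k, n_iter=500):
    """Projected gradient with hard thresholding: at most k nonnegative
    FOD coefficients (i.e. k fiber compartments) are retained."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2   # 1 / Lipschitz constant
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        x = x - step * Phi.T @ (Phi @ x - y)   # gradient step on the data term
        x = np.maximum(x, 0.0)                 # nonnegativity projection
        if np.count_nonzero(x) > k:            # hard threshold: keep k largest
            x[np.argsort(x)[:-k]] = 0.0
    return x
```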
Abstract:
For a few years now, the study of quantum field theories in partially compactified space-time manifolds has acquired increasing importance in several domains of quantum physics. Let me just mention the issues of dimensional reduction and spontaneous compactification, and the multiple questions associated with the study of quantum field theories in the presence of boundaries (like the Casimir effect) and on curved space-time (manifolds with curvature and nontrivial topology), a step towards quantum gravity.
Abstract:
We present an alternative approach to the usual treatments of singular Lagrangians. It is based on a Hamiltonian regularization scheme inspired by the coisotropic embedding of presymplectic systems. A Lagrangian regularization of a singular Lagrangian is a regular Lagrangian defined on an extended velocity phase space that reproduces the original theory when restricted to the initial configuration space. A Lagrangian regularization does not always exist, but a family of singular Lagrangians is studied for which such a regularization can be described explicitly. These regularizations turn out to be essentially unique and provide an alternative setting for quantizing the corresponding physical systems. These ideas can be applied both in classical mechanics and in field theory. Several examples are discussed in detail.
Abstract:
Although fetal anatomy can be adequately viewed in new multi-slice MR images, many critical limitations remain for quantitative data analysis. To this end, several research groups have recently developed advanced image processing methods, often referred to as super-resolution (SR) techniques, to reconstruct a high-resolution (HR) motion-free volume from a set of clinical low-resolution (LR) images. The reconstruction is usually modeled as an inverse problem in which the regularization term plays a central role in the reconstruction quality. The literature has largely favored Total Variation energies for their edge-preserving ability, but only standard explicit steepest-gradient techniques have been applied for optimization. In preliminary work, it was shown that novel fast convex optimization techniques can be successfully applied to design an efficient Total Variation optimization algorithm for the super-resolution problem. In this work, two major contributions are presented. First, we briefly review the Bayesian and variational dual formulations of current state-of-the-art methods dedicated to fetal MRI reconstruction. Second, we present an extensive quantitative evaluation of our previously introduced SR algorithm on both simulated fetal and real clinical data (with both normal and pathological subjects). Specifically, we study the robustness of regularization terms in the presence of residual registration errors, and we present a novel strategy for automatically selecting the weight of the regularization term relative to the data fidelity term. Our results show that our TV implementation is highly robust to motion artifacts and offers the best trade-off between speed and accuracy for fetal MRI recovery in comparison with state-of-the-art methods.
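A minimal 1D sketch of the TV-regularized super-resolution model, using a smoothed TV penalty, plain gradient descent, and a toy average-pooling acquisition operator; the fast convex (primal-dual) optimization used by the authors is not reproduced here:

```python
# 1D toy sketch of TV-regularized super-resolution (illustrative only).
import numpy as np

def downsample(x, f):              # toy acquisition operator: average-pool by f
    return x.reshape(-1, f).mean(axis=1)

def upsample(y, f):                # its adjoint, including the 1/f scaling
    return np.repeat(y, f) / f

def sr_tv(y, f, lam=0.05, eps=1e-2, n_iter=2000):
    """min_x 0.5 * ||A x - y||^2 + lam * sum_i sqrt((x_{i+1} - x_i)^2 + eps^2)."""
    step = 1.0 / (1.0 + 4.0 * lam / eps)       # crude Lipschitz bound for stability
    x = np.repeat(y, f)                        # initialize with upsampled data
    for _ in range(n_iter):
        data_grad = upsample(downsample(x, f) - y, f)
        dx = np.diff(x)
        w = dx / np.sqrt(dx ** 2 + eps ** 2)   # derivative of the smoothed |.|
        tv_grad = np.zeros_like(x)
        tv_grad[:-1] -= w                      # each finite difference touches
        tv_grad[1:] += w                       # two neighboring samples
        x -= step * (data_grad + lam * tv_grad)
    return x
```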
Abstract:
Solid-state nuclear magnetic resonance (NMR) spectroscopy is a powerful technique for studying structural and dynamical properties of disordered and partially ordered materials, such as glasses, polymers, liquid crystals, and biological materials. In particular, two-dimensional (2D) NMR methods such as ¹³C-¹³C correlation spectroscopy under magic-angle-spinning (MAS) conditions have been used to measure structural constraints on the secondary structure of proteins and polypeptides. Amyloid fibrils implicated in a broad class of diseases such as Alzheimer's are known to contain a particular repeating structural motif, called a β-sheet. However, the details of such structures are poorly understood, primarily because the structural constraints extracted from the 2D NMR data in the form of the so-called Ramachandran (backbone torsion) angle distributions, g(φ,ψ), are strongly model-dependent. Inverse theory methods are used to extract Ramachandran angle distributions from a set of 2D MAS and constant-time double-quantum-filtered dipolar recoupling (CTDQFD) data. This is a vastly underdetermined problem, and the stability of the inverse mapping is problematic. Tikhonov regularization is a well-known method of improving the stability of the inverse; in this work it is extended to use a new regularization functional based on the Laplacian rather than on the norm of the function itself. In this way, one makes use of the inherently two-dimensional nature of the underlying Ramachandran maps. In addition, a modification of the existing numerical procedure is performed, as appropriate for an underdetermined inverse problem. Stability of the algorithm with respect to the signal-to-noise (S/N) ratio is examined using a simulated data set. The results show excellent convergence to the true angle distribution function g(φ,ψ) for S/N ratios above 100.
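A minimal sketch (toy sizes; K stands in for the forward model mapping g(φ,ψ) to the NMR data and is an assumption of this sketch, not the CTDQFD kernel) of Tikhonov regularization in which the penalty is placed on the discrete Laplacian of g rather than on its norm:

```python
# Tikhonov regularization with a 2D Laplacian penalty (illustrative only).
import numpy as np

def laplacian_2d(n):
    """Discrete Laplacian on an n x n grid (Dirichlet boundaries)."""
    D = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    I = np.eye(n)
    return np.kron(D, I) + np.kron(I, D)

def tikhonov_laplacian(K, d, n, alpha):
    """min_g ||K g - d||^2 + alpha * ||L g||^2, solved as a stacked
    least-squares problem; g is the flattened n x n Ramachandran map."""
    L = laplacian_2d(n)
    A = np.vstack([K, np.sqrt(alpha) * L])
    b = np.concatenate([d, np.zeros(n * n)])
    g, *_ = np.linalg.lstsq(A, b, rcond=None)
    return g.reshape(n, n)          # g(phi, psi) on the angle grid
```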
Abstract:
We have previously shown that regularization principles lead to approximation schemes, such as Radial Basis Functions, that are equivalent to networks with one layer of hidden units, called Regularization Networks. In this paper we show that Regularization Networks encompass a much broader range of approximation schemes, including many of the popular general additive models, Breiman's hinge functions, and some forms of Projection Pursuit Regression. In the probabilistic interpretation of regularization, the different classes of basis functions correspond to different classes of prior probabilities on the approximating function spaces, and therefore to different types of smoothness assumptions. In the final part of the paper, we also show a relation between activation functions of the Gaussian and sigmoidal type.
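A minimal sketch of a Regularization Network with a Gaussian kernel: the regularized solution is a radial-basis expansion centered at the training points, with coefficients solving the linear system (G + nλI)c = y; the hyperparameters are illustrative:

```python
# Regularization Network / RBF fit via regularized kernel least squares.
import numpy as np

def gaussian_kernel(X1, X2, sigma=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_regularization_network(X, y, lam=1e-2, sigma=1.0):
    G = gaussian_kernel(X, X, sigma)                       # one hidden unit per example
    c = np.linalg.solve(G + lam * len(y) * np.eye(len(y)), y)
    return lambda Xnew: gaussian_kernel(Xnew, X, sigma) @ c

# Toy usage: fit a noisy sine.
X = np.linspace(0, 2 * np.pi, 30)[:, None]
y = np.sin(X[:, 0]) + 0.1 * np.random.randn(30)
f = fit_regularization_network(X, y)
```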
Abstract:
We derive a new representation for a function as a linear combination of local correlation kernels at optimal sparse locations and discuss its relation to PCA, regularization, sparsity principles and Support Vector Machines. We first review previous results for the approximation of a function from discrete data (Girosi, 1998) in the context of Vapnik's feature space and dual representation (Vapnik, 1995). We apply them to show 1) that a standard regularization functional with a stabilizer defined in terms of the correlation function induces a regression function in the span of the feature space of classical Principal Components and 2) that there exists a dual representation of the regression function in terms of a regularization network with a kernel equal to a generalized correlation function. We then describe the main observation of the paper: the dual representation in terms of the correlation function can be sparsified using the Support Vector Machines (Vapnik, 1982) technique, and this operation is equivalent to sparsifying a large dictionary of basis functions adapted to the task, using a variation of Basis Pursuit De-Noising (Chen, Donoho and Saunders, 1995; see also related work by Donahue and Geiger, 1994; Olshausen and Field, 1995; Lewicki and Sejnowski, 1998). In addition to extending the close relations between regularization, Support Vector Machines and sparsity, our work also illuminates and formalizes the LFA concept of Penev and Atick (1996). We discuss the relation between our results, which concern regression, and the different problem of pattern classification.
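A minimal sketch of the Basis Pursuit De-Noising step, implemented here as plain ISTA (iterative soft thresholding) over a generic dictionary D; the SVM route to sparsity taken in the paper is not reproduced:

```python
# Basis Pursuit De-Noising via ISTA (illustrative only).
import numpy as np

def bpdn_ista(D, y, lam=0.1, n_iter=500):
    """min_c 0.5 * ||D c - y||^2 + lam * ||c||_1 by soft-thresholded gradient steps."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2     # 1 / Lipschitz constant of the data term
    c = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = c - step * (D.T @ (D @ c - y))                        # gradient step
        c = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return c                                   # sparse coefficients over the dictionary
```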
Abstract:
Regularization Networks and Support Vector Machines are techniques for solving certain problems of learning from examples, in particular the regression problem of approximating a multivariate function from sparse data. We present both formulations in a unified framework, namely in the context of Vapnik's theory of statistical learning, which provides a general foundation for the learning problem, combining functional analysis and statistics.
Abstract:
This paper presents an efficient construction algorithm for obtaining sparse kernel density estimates based on a regression approach that directly optimizes model generalization capability. Computational efficiency of the density construction is ensured using an orthogonal forward regression, and the algorithm incrementally minimizes the leave-one-out test score. A local regularization method is incorporated naturally into the density construction process to further enforce sparsity. An additional advantage of the proposed algorithm is that it is fully automatic: the user is not required to specify any criterion to terminate the density construction procedure. This is in contrast to an existing state-of-the-art kernel density estimation method based on the support vector machine (SVM), where the user is required to specify a critical algorithm parameter. Several examples are included to demonstrate the ability of the proposed algorithm to effectively construct a very sparse kernel density estimate with accuracy comparable to that of the full-sample optimized Parzen window density estimate. Our experimental results also demonstrate that the proposed algorithm compares favorably with the SVM method, in terms of both test accuracy and sparsity, for constructing kernel density estimates.
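A minimal sketch of the regression view of sparse kernel density estimation: nonnegative kernel weights are refit to the full Parzen estimate, and the numerical rank deficiency of the smooth kernel matrix typically leaves only a few nonzero weights. The orthogonal forward regression and leave-one-out machinery of the paper are omitted, and all parameters are illustrative:

```python
# Sparse reweighting of a Parzen window estimate via NNLS (illustrative only).
import numpy as np
from scipy.optimize import nnls

def sparse_kde(x, h=0.3, grid_size=200):
    g = np.linspace(x.min() - 3 * h, x.max() + 3 * h, grid_size)
    K = np.exp(-0.5 * ((g[:, None] - x[None, :]) / h) ** 2) / (h * np.sqrt(2 * np.pi))
    parzen = K.mean(axis=1)         # full Parzen window estimate on the grid
    w, _ = nnls(K, parzen)          # nonnegative refit; few weights survive
    w /= w.sum()                    # renormalize so the weights sum to one
    return g, K @ w, x[w > 0]       # grid, sparse density, retained centers

# Toy usage: bimodal sample.
x = np.concatenate([np.random.randn(100), 3 + 0.5 * np.random.randn(50)])
grid, density, centers = sparse_kde(x)
```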