12 results for Regularization

in Bulgarian Digital Mathematics Library at IMI-BAS


Relevance:

20.00%

Abstract:

AMS subject classification: 65K10, 49M07, 90C25, 90C48.

Relevance:

10.00%

Abstract:

Let H be a real Hilbert space and T a maximal monotone operator on H. A well-known algorithm for solving the problem (P), "find x ∈ H such that 0 ∈ Tx," is the proximal point algorithm developed by R. T. Rockafellar [16]. Several generalizations have been proposed by various authors: the introduction of a perturbation, of a variable metric in the perturbed algorithm, of a pseudo-metric in place of the classical regularization, and so on. We summarize some of these extensions by simultaneously taking into account a pseudo-metric as regularization and a perturbation in an inexact version of the algorithm.
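To make the iteration concrete, here is a minimal sketch of the classical proximal point step x_{k+1} = (I + λT)⁻¹x_k, assuming T = ∂f for f the ℓ1 norm, whose resolvent is the soft-thresholding operator; the helper names and parameter values are illustrative, not from the paper.

```python
import numpy as np

def soft_threshold(x, lam):
    # Resolvent (I + lam*T)^{-1} for T = subdifferential of ||.||_1,
    # i.e. the proximal operator of lam * ||.||_1.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def proximal_point(x0, resolvent, lam=1.0, tol=1e-8, max_iter=500):
    # Classical proximal point iteration x_{k+1} = (I + lam*T)^{-1} x_k.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_next = resolvent(x, lam)
        if np.linalg.norm(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# The unique zero of T = subdifferential of ||.||_1 is x = 0;
# the iterates shrink toward it by lam per step.
print(proximal_point(np.array([3.0, -2.0, 0.5]), soft_threshold))
```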

Relevance:

10.00%

Abstract:

We study the search for a zero of a maximal monotone operator on a real Hilbert space. Following recent progress on the proximal point algorithm for this problem, we simultaneously introduce a variable metric and a kind of relaxation into the perturbed Tikhonov algorithm studied by P. Tossings. This leads us to work within the framework of variational convergence theory.
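As a rough illustration of the Tikhonov-regularized variant, the sketch below applies the proximal step to the regularized operator T + ε_k I with a vanishing ε_k, again assuming T = ∂‖·‖₁ so the regularized resolvent has a closed form; the schedule and parameter values are hypothetical.

```python
import numpy as np

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def tikhonov_prox_point(x0, lam=1.0, eps0=1.0, decay=0.5, n_iter=50):
    # Proximal point step applied to the Tikhonov-regularized operator
    # T + eps_k * I with eps_k -> 0, here for T = subdifferential of ||.||_1.
    x = np.asarray(x0, dtype=float)
    eps = eps0
    for _ in range(n_iter):
        # (I + lam*(T + eps*I))^{-1} x = soft_threshold(x, lam) / (1 + lam*eps)
        x = soft_threshold(x, lam) / (1.0 + lam * eps)
        eps *= decay  # vanishing regularization
    return x

print(tikhonov_prox_point(np.array([4.0, -1.5, 0.2])))
```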

Relevance:

10.00%

Abstract:

* Supported by the Army Research Office under grant DAAD-19-02-10059.

Relevance:

10.00%

Abstract:

We are concerned with two-level optimization problems called strong-weak Stackelberg problems, which generalize the class of Stackelberg problems in the strong and weak sense. Since the considered two-level optimization problems may fail to have a solution under mild assumptions, we consider a regularization involving ε-approximate optimal solutions of the lower-level problems. We prove the existence of optimal solutions for such regularized problems and present some approximation results as the parameter ε goes to zero. Finally, as an example, we consider an optimization problem associated with a best bound given in [2] for a system of nondifferentiable convex inequalities.
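The following sketch shows the ε-regularization idea on a toy grid: for each leader choice the ε-approximate lower-level solution set is formed, and the leader then takes the worst case over it (the pessimistic, or "weak", selection, one of the two senses the paper generalizes). The cost functions F and f and all numbers here are invented for illustration.

```python
import numpy as np

def eps_regularized_stackelberg(F, f, xs, ys, eps):
    # For each leader choice x, build the eps-approximate follower set
    # S_eps(x) = {y : f(x,y) <= min_y f(x,y) + eps}, then evaluate the
    # leader's worst-case cost over it and keep the best x.
    best_x, best_val = None, np.inf
    for x in xs:
        lower = np.array([f(x, y) for y in ys])
        s_eps = ys[lower <= lower.min() + eps]   # eps-optimal followers
        val = max(F(x, y) for y in s_eps)        # pessimistic leader cost
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Toy instance (hypothetical): leader cost F, follower cost f.
F = lambda x, y: (x - 1.0) ** 2 + y ** 2
f = lambda x, y: (y - x) ** 2
grid = np.linspace(-2, 2, 401)
print(eps_regularized_stackelberg(F, f, grid, grid, eps=0.01))
```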

Relevance:

10.00%

Abstract:

Two jamming cancellation algorithms are developed based on a stable solution of the least squares problem (LSP) provided by regularization. They are based on a filtered singular value decomposition (SVD) and on modifications of the Greville formula. Both algorithms admit an efficient hardware implementation. Test results on artificial data modeling difficult real-world situations are also provided.
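A minimal sketch of the filtered-SVD route to a stable least squares solution: instead of inverting small singular values directly, each is damped by a Tikhonov filter factor. The filter s/(s² + α²) used below is one standard choice, not necessarily the paper's, and the test system is synthetic.

```python
import numpy as np

def filtered_svd_lsq(A, b, alpha):
    # Regularized least squares via a filtered SVD: replace 1/s_i with
    # the damped reciprocal s_i / (s_i^2 + alpha^2), which suppresses
    # noise amplification from small singular values.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    filt = s / (s ** 2 + alpha ** 2)
    return Vt.T @ (filt * (U.T @ b))

# Ill-conditioned toy system: the filtered solution stays close to the
# true coefficients despite singular values spanning ten decades.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 10)) @ np.diag(10.0 ** -np.arange(10))
x_true = np.ones(10)
b = A @ x_true + 1e-6 * rng.standard_normal(100)
print(np.linalg.norm(filtered_svd_lsq(A, b, alpha=1e-4) - x_true))
```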

Relevance:

10.00%

Abstract:

AMS Subj. Classification: 49J15, 49M15

Relevance:

10.00%

Abstract:

2000 Mathematics Subject Classification: 90C25, 68W10, 49M37.

Relevance:

10.00%

Abstract:

2000 Mathematics Subject Classification: 62H30, 62P99

Relevance:

10.00%

Abstract:

2002 Mathematics Subject Classification: 35L15, 35L80, 35S05, 35S30

Relevance:

10.00%

Abstract:

Big data comes in various ways, types, shapes, forms and sizes. Indeed, almost all areas of science, technology, medicine, public health, economics, business, linguistics and social science are bombarded by ever increasing flows of data begging to be analyzed efficiently and effectively. In this paper, we propose a rough idea of a possible taxonomy of big data, along with some of the most commonly used tools for handling each particular category of bigness. The dimensionality p of the input space and the sample size n are usually the main ingredients in the characterization of data bigness. The specific statistical machine learning technique used to handle a particular big data set will depend on which category it falls in within the bigness taxonomy. Large p, small n data sets, for instance, require a different set of tools from the large n, small p variety. Among other tools, we discuss Preprocessing, Standardization, Imputation, Projection, Regularization, Penalization, Compression, Reduction, Selection, Kernelization, Hybridization, Parallelization, Aggregation, Randomization, Replication, Sequentialization. Indeed, it is important to emphasize right away that the so-called no free lunch theorem applies here, in the sense that there is no universally superior method that outperforms all other methods on all categories of bigness. It is also important to stress the fact that simplicity, in the sense of Ockham's razor and its non-plurality principle of parsimony, tends to reign supreme when it comes to massive data. We conclude with a comparison of the predictive performance of some of the most commonly used methods on a few data sets.
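As a small illustration of pairing tools with a bigness category, the sketch below combines two entries from the list above, Standardization and Penalization (regularization), on a synthetic large-p-small-n data set; the data, parameters, and model choice are invented for illustration and are not the paper's benchmark.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic large-p-small-n data: p = 500 features, n = 60 samples.
rng = np.random.default_rng(0)
X = rng.standard_normal((60, 500))
y = (X[:, 0] - X[:, 1] + 0.5 * rng.standard_normal(60) > 0).astype(int)

# Standardize, then fit an l2-penalized (regularized) classifier:
# the penalty keeps the fit stable even though p >> n.
model = make_pipeline(StandardScaler(), LogisticRegression(C=0.1, max_iter=1000))
print(cross_val_score(model, X, y, cv=5).mean())
```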

Relevance:

10.00%

Abstract:

This research evaluates pattern recognition techniques on a subclass of big data where the dimensionality of the input space (p) is much larger than the number of observations (n). Specifically, we evaluate massive gene expression microarray cancer data where the ratio κ = n/p is less than one. We explore the statistical and computational challenges inherent in these high dimensional low sample size (HDLSS) problems and present statistical machine learning methods used to tackle and circumvent these difficulties. Regularization and kernel algorithms were explored in this research using seven datasets with κ < 1. These techniques require careful tuning, so several extensions of cross-validation were investigated to support better predictive performance. While no single algorithm was universally the best predictor, the regularization technique produced lower test errors in five of the seven datasets studied.
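The sketch below mirrors the study's setup in miniature: a regularization method and a kernel method are each tuned by an inner cross-validation loop and scored by an outer one on synthetic HDLSS data. The data, models, and tuning grids are stand-ins, not the seven microarray sets or the extended cross-validation schemes from the study.

```python
import numpy as np
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.svm import SVC

# Synthetic HDLSS stand-in for a microarray set: n = 80 samples, p = 2000.
rng = np.random.default_rng(1)
X = rng.standard_normal((80, 2000))
y = (X[:, :5].sum(axis=1) + 0.5 * rng.standard_normal(80) > 0).astype(int)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
models = {
    "regularization (ridge)": GridSearchCV(
        RidgeClassifier(), {"alpha": [0.1, 1.0, 10.0, 100.0]}, cv=cv),
    "kernel (RBF SVM)": GridSearchCV(
        SVC(kernel="rbf"), {"C": [0.1, 1.0, 10.0], "gamma": ["scale", 1e-4]}, cv=cv),
}
for name, model in models.items():
    # Outer CV estimates test error; the inner GridSearchCV does the tuning.
    score = cross_val_score(model, X, y, cv=cv).mean()
    print(f"{name}: mean accuracy {score:.3f}")
```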