21 results for Lanczos, Linear systems, Generalized cross validation

in Bulgarian Digital Mathematics Library at IMI-BAS


Relevance:

100.00%

Publisher:

Abstract:

The paper has been presented at the 12th International Conference on Applications of Computer Algebra, Varna, Bulgaria, June, 2006

Relevance:

100.00%

Publisher:

Abstract:

Let C = (C, g^1_4) be a tetragonal curve. We consider the scrollar invariants e_1, e_2, e_3 of g^1_4. We prove that if W^1_4(C) is a non-singular variety, then every g^1_4 ∈ W^1_4(C) has the same scrollar invariants.

Relevance:

100.00%

Publisher:

Abstract:

A modification of the Nekrassov method for finding a solution of a linear system of algebraic equations is given and a numerical example is shown.
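For orientation, below is a minimal sketch of the classical Nekrassov (Gauss–Seidel-type) sweep that such modifications typically start from; the specific modification discussed in the paper is not reproduced here, and the function name and example system are purely illustrative.

```python
# Minimal sketch of the classical Nekrassov (Gauss-Seidel-type) iteration for Ax = b.
# The paper's modification is NOT reproduced here; this only illustrates the baseline
# scheme on an illustrative, diagonally dominant system.
import numpy as np

def nekrassov_sweep(A, b, x0=None, tol=1e-10, max_iter=1000):
    """Update x_i <- (b_i - sum_{j<i} a_ij x_j(new) - sum_{j>i} a_ij x_j(old)) / a_ii."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        x_prev = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_prev[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_prev, np.inf) < tol:
            break
    return x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
print(nekrassov_sweep(A, b))   # agrees with np.linalg.solve(A, b) to the tolerance
```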

Relevance:

100.00%

Publisher:

Abstract:

This work reports on new software for solving linear systems involving affine-linear dependencies between complex-valued interval parameters. We discuss the implementation of a parametric residual iteration for linear interval systems via advanced communication between the Mathematica system and the C-XSC library, which supports rigorous complex interval arithmetic. An example of an AC electrical circuit illustrates the use of the presented software.
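The following is a rough, non-rigorous sketch of the residual-iteration idea for an interval linear system, written in midpoint-radius arithmetic with NumPy. The directed rounding, the affine parameter dependencies and the verified inclusion test that the actual Mathematica/C-XSC software handles rigorously are all omitted; every name below is an illustrative assumption.

```python
# Non-rigorous sketch of a residual iteration for an interval system [A]x = [b],
# using midpoint-radius arithmetic without directed rounding.
import numpy as np

def iv_matvec(M_mid, M_rad, v_mid, v_rad):
    """Midpoint-radius product of an interval matrix with an interval vector."""
    mid = M_mid @ v_mid
    rad = np.abs(M_mid) @ v_rad + M_rad @ np.abs(v_mid) + M_rad @ v_rad
    return mid, rad

def residual_enclosure(A_mid, A_rad, b_mid, b_rad, sweeps=20):
    x_tilde = np.linalg.solve(A_mid, b_mid)   # approximate midpoint solution
    R = np.linalg.inv(A_mid)                  # approximate inverse as preconditioner
    # residual [z] = R([b] - [A] x_tilde)
    z_mid = R @ (b_mid - A_mid @ x_tilde)
    z_rad = np.abs(R) @ (b_rad + A_rad @ np.abs(x_tilde))
    # iteration matrix [C] = I - R[A]
    C_mid = np.eye(len(b_mid)) - R @ A_mid
    C_rad = np.abs(R) @ A_rad
    # fixed-point iteration [x] <- [z] + [C][x]  (the rigorous inclusion check is omitted)
    x_mid, x_rad = z_mid.copy(), z_rad.copy()
    for _ in range(sweeps):
        cx_mid, cx_rad = iv_matvec(C_mid, C_rad, x_mid, x_rad)
        x_mid, x_rad = z_mid + cx_mid, z_rad + cx_rad
    return x_tilde + x_mid, x_rad             # enclosure: midpoint and radius

A_mid = np.array([[4.0, 1.0], [1.0, 3.0]]); A_rad = 0.01 * np.ones((2, 2))
b_mid = np.array([1.0, 2.0]);               b_rad = 0.01 * np.ones(2)
print(residual_enclosure(A_mid, A_rad, b_mid, b_rad))
```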

Relevance:

100.00%

Publisher:

Abstract:

The paper has been presented at the International Conference Pioneers of Bulgarian Mathematics, Dedicated to Nikola Obreshkoff and Lubomir Tschakaloff, Sofia, July 2006.

Relevance:

100.00%

Publisher:

Abstract:

* This paper is partially supported by the National Science Fund of Bulgarian Ministry of Education and Science under contract № I–1401\2004 "Interactive Algorithms and Software Systems Supporting Multicriteria Decision Making".

Relevance:

100.00%

Publisher:

Abstract:

Research partially supported by INTAS grant 97-1644

Relevance:

100.00%

Publisher:

Abstract:

* This work was supported by RFBR (Russian Foundation for Basic Research), grants 07-01-00331-a and 08-01-00944-a.

Relevance:

100.00%

Publisher:

Abstract:

Theodore Motzkin proved, in 1936, that any polyhedral convex set can be expressed as the (Minkowski) sum of a polytope and a polyhedral convex cone. We provide several characterizations of the larger class of Motzkin decomposable sets in finite-dimensional Euclidean spaces, that is, of the closed convex sets that are the sum of a compact convex set and a closed convex cone. These characterizations involve different types of representations of closed convex sets, via support functions, dual cones and linear systems, whose relationships are also analyzed. We also discuss how information about a given closed convex set F, and about the parametric linear optimization problem with feasible set F, can be obtained from each of its representations, including the Motzkin decomposition. Another result establishes that a closed convex set is Motzkin decomposable if and only if the set of extreme points of its intersection with the linear subspace orthogonal to its lineality space is bounded. We characterize the class of extended real-valued functions whose epigraphs are Motzkin decomposable sets, showing in particular that these functions attain their global minima when they are bounded from below. A calculus of Motzkin decomposable sets and functions is also provided.
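For orientation, the central notion can be stated compactly; the notation below (extr, lin) is assumed for illustration and may differ from the paper's.

```latex
% Motzkin decomposability of a closed convex set F in R^n:
%   F = C + D, with C compact convex and D a closed convex cone.
% Motzkin (1936): every polyhedral convex set admits such a decomposition
% with C a polytope and D a polyhedral cone.
\[
  F = C + D, \qquad C \ \text{compact convex}, \qquad D \ \text{closed convex cone}.
\]
% Characterization recalled in the abstract (lin F = lineality space of F,
% extr = set of extreme points):
\[
  F \ \text{Motzkin decomposable} \iff
  \operatorname{extr}\bigl(F \cap (\operatorname{lin} F)^{\perp}\bigr) \ \text{is bounded}.
\]
```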

Relevance:

100.00%

Publisher:

Abstract:

Stochastic arithmetic has been developed as a model for exact computing with imprecise data. Stochastic arithmetic provides confidence intervals for the numerical results and can be implemented in any existing numerical software by redefining the types of the variables and overloading the operators on them. Here some properties of stochastic arithmetic are further investigated and applied to the computation of inner products and the solution of linear systems. Several numerical experiments are performed, showing the efficiency of the proposed approach.
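Below is a toy sketch of the stochastic-arithmetic idea applied to an inner product: the random last-bit perturbation merely emulates the random rounding of a CESTAC-style library with overloaded operators, and all names and parameters are illustrative assumptions rather than the paper's software.

```python
# Toy sketch of stochastic arithmetic for an inner product: the result is computed
# several times with randomly perturbed roundings, and a Student-t confidence
# interval is formed from the samples. This emulates, not implements, a rigorous
# CESTAC-style implementation with overloaded operators.
import numpy as np

N_SAMPLES = 3                        # classical choice in the CESTAC method

def perturb(x, rng):
    """Randomly perturb the last bits of x to emulate a random rounding mode."""
    return x * (1.0 + rng.choice([-1.0, 1.0]) * rng.random() * 2.0 ** -52)

def stochastic_dot(a, b, rng):
    """Return N_SAMPLES perturbed evaluations of the inner product <a, b>."""
    samples = []
    for _ in range(N_SAMPLES):
        s = 0.0
        for ai, bi in zip(a, b):
            s = perturb(s + ai * bi, rng)
        samples.append(s)
    return np.array(samples)

def confidence_interval(samples, t=4.303):   # Student t, 2 d.o.f., 95 %
    m, sd = samples.mean(), samples.std(ddof=1)
    half = t * sd / np.sqrt(len(samples))
    return m - half, m + half

rng = np.random.default_rng(0)
a = np.array([1.0, 1e16, -1e16]); b = np.array([1.0, 1.0, 1.0])
# Catastrophic cancellation: the wide interval signals the loss of significant digits.
print(confidence_interval(stochastic_dot(a, b, rng)))
```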

Relevance:

100.00%

Publisher:

Abstract:

Big data comes in various ways, types, shapes, forms and sizes. Indeed, almost all areas of science, technology, medicine, public health, economics, business, linguistics and social science are bombarded by ever-increasing flows of data begging to be analyzed efficiently and effectively. In this paper, we propose a rough taxonomy of big data, along with some of the most commonly used tools for handling each particular category of bigness. The dimensionality p of the input space and the sample size n are usually the main ingredients in the characterization of data bigness. The specific statistical machine learning technique used to handle a particular big data set depends on which category it falls into within the bigness taxonomy. Large p, small n data sets, for instance, require a different set of tools from the large n, small p variety. Among other tools, we discuss Preprocessing, Standardization, Imputation, Projection, Regularization, Penalization, Compression, Reduction, Selection, Kernelization, Hybridization, Parallelization, Aggregation, Randomization, Replication and Sequentialization. It is important to emphasize right away that the so-called no free lunch theorem applies here, in the sense that there is no universally superior method that outperforms all other methods on all categories of bigness. It is also important to stress that simplicity, in the sense of Ockham's razor principle of parsimony, tends to reign supreme when it comes to massive data. We conclude with a comparison of the predictive performance of some of the most commonly used methods on a few data sets.

Relevance:

100.00%

Publisher:

Abstract:

This research evaluates pattern recognition techniques on a subclass of big data where the dimensionality of the input space (p) is much larger than the number of observations (n). Specifically, we evaluate massive gene expression microarray cancer data where the ratio κ is less than one. We explore the statistical and computational challenges inherent in these high dimensional low sample size (HDLSS) problems and present statistical machine learning methods used to tackle and circumvent these difficulties. Regularization and kernel algorithms were explored in this research using seven datasets where κ < 1. These techniques require special attention to tuning, necessitating the investigation of several extensions of cross-validation to support better predictive performance. While no single algorithm was universally the best predictor, the regularization technique produced lower test errors in five of the seven datasets studied.
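The following is not the paper's pipeline or data, but a minimal scikit-learn sketch of the general recipe it describes: a regularized classifier tuned by cross-validation on a synthetic p ≫ n problem. All parameters, the synthetic data set and the choice of L2-penalized logistic regression are illustrative assumptions.

```python
# Minimal sketch (synthetic data, not the paper's microarray sets) of tuning a
# regularized classifier by cross-validation in a p >> n setting.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# n = 100 observations, p = 5000 features (far fewer samples than features).
X, y = make_classification(n_samples=100, n_features=5000,
                           n_informative=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

pipe = make_pipeline(StandardScaler(),
                     LogisticRegression(penalty="l2", solver="liblinear",
                                        max_iter=1000))
grid = GridSearchCV(pipe,
                    {"logisticregression__C": np.logspace(-3, 2, 10)},
                    cv=5)                     # 5-fold cross-validation for tuning
grid.fit(X_tr, y_tr)
print("best C:", grid.best_params_)
print("test accuracy:", grid.score(X_te, y_te))
```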

Relevance:

100.00%

Publisher:

Abstract:

MSC 2010: 05C50, 15A03, 15A06, 65K05, 90C08, 90C35

Relevance:

40.00%

Publisher:

Abstract:

This work was supported by the Bulgarian National Science Fund under grant BY-TH-105/2005.

Relevance:

40.00%

Publisher:

Abstract:

We present some results on the formation of singularities for C^1-solutions of the quasi-linear N × N strictly hyperbolic system U_t + A(U)U_x = 0 in [0, +∞) × R_x. Under certain weak non-linearity conditions (weaker than genuine non-linearity), we prove that the first-order derivative of the solution blows up in finite time.