11 results for Almost Optimal Density Function

in the Bulgarian Digital Mathematics Library at IMI-BAS


Relevance:

100.00%

Publisher:

Abstract:

An iterative Monte Carlo algorithm for evaluating linear functionals of the solution of integral equations with polynomial non-linearity is proposed and studied. The method is based on the simulation of branching stochastic processes. It is proved that the mathematical expectation of the introduced random variable equals a linear functional of the solution. The algorithm uses the so-called almost optimal density function. Numerical examples are considered. A parallel implementation of the algorithm is also realized, using the ATHAPASCAN package as an environment for parallel execution. The computational results demonstrate the high parallel efficiency of the presented algorithm and show that it gives a good solution when the almost optimal density function is used as a transition density.
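To illustrate the almost-optimal-density idea in its simplest form (a hypothetical one-dimensional sketch, not the branching-process algorithm of the paper): to estimate I = ∫₀¹ eˣ dx by importance sampling, one draws samples from a density roughly proportional to the integrand — here p(x) = (1 + x)/1.5, a normalized linear approximation of eˣ — and averages the reweighted values. The closer p is to the optimal density (proportional to the integrand), the smaller the variance of the estimator.

```python
import math
import random

def sample_p(u):
    # Inverse CDF of the almost optimal density p(x) = (1 + x) / 1.5 on [0, 1]:
    # F(x) = (x + x**2 / 2) / 1.5 = u  =>  x = -1 + sqrt(1 + 3u)
    return -1.0 + math.sqrt(1.0 + 3.0 * u)

def estimate_integral(n, seed=0):
    # Importance-sampling estimate of I = integral of exp(x) over [0, 1]
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = sample_p(rng.random())
        total += math.exp(x) / ((1.0 + x) / 1.5)  # integrand / sampling density
    return total / n

est = estimate_integral(200_000)  # close to e - 1 ≈ 1.7183
```

Because the weight eˣ · 1.5/(1 + x) varies only mildly over [0, 1], the estimator has far lower variance than sampling x uniformly.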

Relevance:

100.00%

Publisher:

Abstract:

2000 Mathematics Subject Classification: 62G07, 60F10.

Relevance:

100.00%

Publisher:

Abstract:

Евелина Илиева Велева - The Wishart distribution arises in practice as the distribution of the sample covariance matrix of observations from a multivariate normal distribution. Some marginal densities, obtained by integrating the density of the Wishart distribution, are derived. Necessary and sufficient conditions for the positive definiteness of a matrix are proved, which supply the required limits of integration.
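Conditions of this kind are commonly stated via Sylvester's criterion: a symmetric matrix is positive definite iff all of its leading principal minors are positive. A minimal pure-Python check (an illustrative sketch, not the paper's derivation):

```python
def leading_minor(a, k):
    # Determinant of the k x k leading principal submatrix,
    # computed by Gaussian elimination with partial pivoting.
    m = [row[:k] for row in a[:k]]
    det = 1.0
    for i in range(k):
        p = max(range(i, k), key=lambda r: abs(m[r][i]))
        if abs(m[p][i]) < 1e-12:
            return 0.0  # singular submatrix
        if p != i:
            m[i], m[p] = m[p], m[i]
            det = -det  # row swap flips the sign
        det *= m[i][i]
        for r in range(i + 1, k):
            f = m[r][i] / m[i][i]
            for c in range(i, k):
                m[r][c] -= f * m[i][c]
    return det

def is_positive_definite(a):
    # Sylvester's criterion: all leading principal minors must be positive.
    return all(leading_minor(a, k) > 0 for k in range(1, len(a) + 1))
```

For example, [[2, 1], [1, 2]] passes (minors 2 and 3), while [[1, 2], [2, 1]] fails (its determinant is −3).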

Relevance:

100.00%

Publisher:

Abstract:

2002 Mathematics Subject Classification: 65C05.

Relevance:

100.00%

Publisher:

Abstract:

2000 Mathematics Subject Classification: 62H15, 62P10.

Relevance:

100.00%

Publisher:

Abstract:

2010 Mathematics Subject Classification: 35R60, 60H15, 74H35.

Relevance:

40.00%

Publisher:

Abstract:

2000 Mathematics Subject Classification: 37F21, 70H20, 37L40, 37C40, 91G80, 93E20.

Relevance:

30.00%

Publisher:

Abstract:

In the present paper, the problem of the optimal control of systems with constraints imposed on the control is considered. The optimality conditions are given in the form of Pontryagin's maximum principle. The resulting piecewise linear function is approximated using a feedforward neural network. A numerical example is given.
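A control law obtained from the maximum principle under box constraints is typically a saturated, piecewise linear function of its argument, and such a function can be represented exactly by a tiny feedforward network with two ReLU hidden units (a hand-constructed sketch, not the trained network of the paper): clip(x, −1, 1) = ReLU(x + 1) − ReLU(x − 1) − 1.

```python
def relu(z):
    return max(0.0, z)

def nn_control(x):
    # One hidden layer with two ReLU units, linear output layer.
    h1 = relu(x + 1.0)   # hidden unit 1: weight 1, bias +1
    h2 = relu(x - 1.0)   # hidden unit 2: weight 1, bias -1
    return h1 - h2 - 1.0  # output weights (1, -1), bias -1

def saturate(x):
    # The piecewise linear control given by the maximum principle
    # for a bounded control |u| <= 1.
    return max(-1.0, min(1.0, x))
```

The network reproduces the saturation exactly; a trained network would instead approximate the piecewise linear law from sample trajectories.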

Relevance:

30.00%

Publisher:

Abstract:

Л. И. Каранджулов, Н. Д. Сиракова - In this work the Poincaré method is applied to the solution of almost regular nonlinear boundary value problems with general boundary conditions. The differential system is assumed to contain a function that is singular with respect to the small parameter. Under certain conditions, the asymptotic character of the solution of the posed problem is proved.

Relevance:

30.00%

Publisher:

Abstract:

Здравко Д. Славов - This article considers a mathematical model of an economy with fixed total resources and a finite number of agents and goods. The role of certain assumptions about the preference relations of the economic agents, which influence the characteristics of the optimal allocations, is discussed. It is proved that the set of optimal allocations is contractible and has the fixed-point property.

Relevance:

30.00%

Publisher:

Abstract:

Big data comes in various ways, types, shapes, forms and sizes. Indeed, almost all areas of science, technology, medicine, public health, economics, business, linguistics and social science are bombarded by ever-increasing flows of data begging to be analyzed efficiently and effectively. In this paper, we propose a rough idea of a possible taxonomy of big data, along with some of the most commonly used tools for handling each particular category of bigness. The dimensionality p of the input space and the sample size n are usually the main ingredients in the characterization of data bigness. The specific statistical machine learning technique used to handle a particular big data set depends on which category it falls into within the bigness taxonomy. Large-p-small-n data sets, for instance, require a different set of tools from the large-n-small-p variety. Among other tools, we discuss Preprocessing, Standardization, Imputation, Projection, Regularization, Penalization, Compression, Reduction, Selection, Kernelization, Hybridization, Parallelization, Aggregation, Randomization, Replication and Sequentialization. Indeed, it is important to emphasize right away that the so-called no-free-lunch theorem applies here, in the sense that there is no universally superior method that outperforms all other methods on all categories of bigness. It is also important to stress that simplicity, in the sense of Ockham's razor and its non-plurality principle of parsimony, tends to reign supreme when it comes to massive data. We conclude with a comparison of the predictive performance of some of the most commonly used methods on a few data sets.
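For the large-p-small-n regime, Regularization/Penalization is the canonical tool: adding a ridge penalty λ‖β‖² makes the normal equations well-posed even when ordinary least squares is unstable or undefined. A minimal pure-Python sketch with hypothetical toy data (p = 2), showing how the penalty shrinks the coefficients relative to least squares:

```python
def ridge_2d(X, y, lam):
    # Solve the ridge normal equations (X^T X + lam * I) beta = X^T y
    # for p = 2, using Cramer's rule on the 2 x 2 system.
    a = sum(x[0] * x[0] for x in X) + lam
    b = sum(x[0] * x[1] for x in X)
    d = sum(x[1] * x[1] for x in X) + lam
    r0 = sum(x[0] * t for x, t in zip(X, y))
    r1 = sum(x[1] * t for x, t in zip(X, y))
    det = a * d - b * b
    return [(r0 * d - b * r1) / det, (a * r1 - b * r0) / det]

# Hypothetical toy data: n = 3 observations, p = 2 predictors.
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
y = [1.0, 2.0, 3.0]

beta_ols = ridge_2d(X, y, 0.0)    # ordinary least squares: [1.0, 2.0]
beta_ridge = ridge_2d(X, y, 0.1)  # ridge: shrunk toward zero
```

With λ = 0 the solver reduces to ordinary least squares; any λ > 0 shrinks the coefficient vector, trading a little bias for stability — the essential mechanism behind penalized methods for high-dimensional data.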