5 results for Optimal Portfolio Selection

in the Bulgarian Digital Mathematics Library at IMI-BAS


Relevance:

90.00%

Abstract:

Portfolio analysis has existed, perhaps, for as long as people have thought about making rational decisions on the use of limited resources. The birth of portfolio analysis as a discipline, however, can be dated quite precisely to the publication of Harry Markowitz's pioneering paper (Markowitz, H., "Portfolio Selection") in 1952. The model proposed in that work, simple in essence, captured the basic features of the financial market from the investor's point of view and supplied the investor with a tool for making rational investment decisions. The central problem in Markowitz's theory is the choice of a portfolio, that is, of a set of operations. In evaluating both individual operations and their portfolios, two major factors are considered: the profitability and the risk of the operations and of the portfolios, with risk receiving a quantitative estimate. An essential point of the theory is that it accounts for the mutual correlations between the profitabilities of operations. This makes effective diversification of the portfolio possible, leading to a substantial reduction of portfolio risk compared with the risk of the individual operations included in it. Finally, the quantitative characterization of the basic investment properties allows the choice of an optimal portfolio to be posed and solved as a quadratic optimization problem.
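As an illustration (not part of the original abstract), the quadratic optimization problem mentioned above can be sketched in Python with NumPy: minimize portfolio variance w'Σw subject to a target expected return w'μ = r and full investment Σw = 1, solved via its KKT linear system. The three-asset inputs below are made-up numbers, and short selling is allowed for simplicity.

```python
import numpy as np

def min_variance_weights(mu, sigma, target_return):
    """Minimum-variance portfolio weights achieving a target expected
    return (Markowitz mean-variance model with equality constraints).

    Solves the KKT system of:  min w' Sigma w
                               s.t. w'mu = r,  sum(w) = 1.
    """
    n = len(mu)
    ones = np.ones(n)
    # KKT matrix: [2*Sigma  mu  1 ; mu' 0 0 ; 1' 0 0]
    kkt = np.zeros((n + 2, n + 2))
    kkt[:n, :n] = 2.0 * sigma
    kkt[:n, n] = mu
    kkt[:n, n + 1] = ones
    kkt[n, :n] = mu
    kkt[n + 1, :n] = ones
    rhs = np.concatenate([np.zeros(n), [target_return, 1.0]])
    sol = np.linalg.solve(kkt, rhs)
    return sol[:n]  # drop the two Lagrange multipliers

# Illustrative (made-up) data: expected returns and covariance of 3 assets.
mu = np.array([0.08, 0.12, 0.15])
sigma = np.array([[0.040, 0.006, 0.002],
                  [0.006, 0.090, 0.010],
                  [0.002, 0.010, 0.160]])
w = min_variance_weights(mu, sigma, target_return=0.10)
```

By construction the solution satisfies both constraints exactly (up to floating-point error); varying `target_return` traces out the efficient frontier, and the diversification effect the abstract describes shows up as a portfolio variance `w @ sigma @ w` below that of any single asset with comparable return.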

Relevance:

80.00%

Abstract:

AMS subject classification: 93C95, 90A09.

Relevance:

80.00%

Abstract:

AMS subject classification: 90C31, 90A09, 49K15, 49L20.

Relevance:

30.00%

Abstract:

The purpose of the optimal valid partitioning (OVP) methods discussed here is to uncover the effect of ordinal or continuous explanatory variables on outcome variables of various types. The OVP approach searches for partitions of the explanatory-variable space that best separate observations with different outcome levels. Partitions of single-variable ranges, or of two-dimensional admissible regions for pairs of variables, are sought within corresponding families. The statistical validity of the revealed regularities is estimated with a permutation test that repeats the search for the optimal partition on each permuted dataset. A method for selecting the output regularities is discussed, based on evaluating validity with two types of permutation tests.
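The permutation-test idea in this abstract can be sketched as follows. This is not the authors' algorithm, only a minimal stand-in: the "partition family" is reduced to single-threshold splits of one explanatory variable, and the separation statistic to the absolute difference of group mean outcomes; the p-value is the fraction of permuted datasets whose best split separates at least as well as the observed one.

```python
import random

def best_split_stat(x, y):
    """Best single-threshold partition of x: the largest absolute
    difference in mean outcome between the two resulting groups."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    best = 0.0
    for k in range(1, len(x)):
        left = [y[order[i]] for i in range(k)]
        right = [y[order[i]] for i in range(k, len(x))]
        diff = abs(sum(left) / len(left) - sum(right) / len(right))
        best = max(best, diff)
    return best

def permutation_p_value(x, y, n_perm=200, seed=0):
    """Validity estimate: re-run the optimal-partition search on each
    permuted dataset and compare with the observed statistic."""
    rng = random.Random(seed)
    observed = best_split_stat(x, y)
    y_perm = list(y)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(y_perm)  # break any x-y association
        if best_split_stat(x, y_perm) >= observed:
            count += 1
    # add-one correction keeps the estimate strictly positive
    return (count + 1) / (n_perm + 1)
```

Repeating the full search on every permuted dataset (rather than reusing the observed partition) is the point: it accounts for the selection effect of optimizing over a whole family of partitions.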

Relevance:

30.00%

Abstract:

Big data comes in various types, shapes, forms and sizes. Indeed, almost all areas of science, technology, medicine, public health, economics, business, linguistics and social science are bombarded by ever-increasing flows of data begging to be analyzed efficiently and effectively. In this paper, we propose a rough idea of a possible taxonomy of big data, along with some of the most commonly used tools for handling each particular category of bigness. The dimensionality p of the input space and the sample size n are usually the main ingredients in the characterization of data bigness. The specific statistical machine learning technique used to handle a particular big data set will depend on which category of the bigness taxonomy it falls in. Large p, small n data sets, for instance, require a different set of tools from the large n, small p variety. Among other tools, we discuss Preprocessing, Standardization, Imputation, Projection, Regularization, Penalization, Compression, Reduction, Selection, Kernelization, Hybridization, Parallelization, Aggregation, Randomization, Replication, Sequentialization. Indeed, it is important to emphasize right away that the so-called no free lunch theorem applies here, in the sense that there is no universally superior method that outperforms all other methods on all categories of bigness. It is also important to stress that simplicity, in the sense of Ockham's razor and its non-plurality principle of parsimony, tends to reign supreme when it comes to massive data. We conclude with a comparison of the predictive performance of some of the most commonly used methods on a few data sets.
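To illustrate one cell of such a taxonomy (not taken from the paper itself): with large p and small n, ordinary least squares is ill-posed because X'X is singular, and Regularization is one of the tools the abstract lists for exactly this regime. The sketch below uses ridge regression in its dual (kernel) form, so the linear solve is n x n rather than p x p; the data are synthetic and the choice of λ = 1 is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 500                    # large p, small n
beta = np.zeros(p)
beta[:5] = 1.0                    # only 5 informative features
X = rng.standard_normal((n, p))
y = X @ beta + 0.1 * rng.standard_normal(n)

# OLS fails here: X.T @ X is a 500 x 500 singular matrix.
# Ridge regularization makes the problem well-posed, and the dual form
#   w = X' (X X' + lam * I)^{-1} y
# needs only an n x n solve (Kernelization of the same idea swaps
# X X' for a general Gram matrix).
lam = 1.0
alpha = np.linalg.solve(X @ X.T + lam * np.eye(n), y)
w_ridge = X.T @ alpha             # recover the p-dimensional weights
```

The same computation done in the primal would require factorizing a p x p matrix; choosing the representation to match which of n and p is small is a simple instance of picking tools by bigness category.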