5 results for Ragin, Charles C.: Fuzzy-set social science
in Bulgarian Digital Mathematics Library at IMI-BAS
Abstract:
The problem of formalizing the matching of the functioning characteristics of different management subjects, obtained from the analysis of financial flows, is considered. Formal generalizations for acquiring elements of the knowledge base of an economic-security system are presented. One direction for establishing feedback between the knowledge base of the economic-security system and the analysis of the financial-flows database is substantiated.
Abstract:
A basic-matrices method is proposed for analyzing the Leontief model (LM) when some of its components are given only fuzzily. The LM can be construed as the task of forecasting product expenses and output from known statistical information when the values of several elements of the technological matrix, of the restriction vector, and of the variable bounds are fuzzily given. The elements of the technological matrix and the right-hand sides of the restriction vector of the LM may also appear as functions of some arguments; in that case a dynamic analogue of the task arises. An essential complication of the LM is the inclusion of restrictions on the variables and of a criterion function.
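As a point of reference for the crisp case that the paper generalizes, the sketch below solves the classical Leontief balance (I - A)x = d for a small technological matrix A and final-demand vector d, and then re-solves it over an interval for one matrix element to mimic an indistinctly given value. The matrix entries and the interval-sampling scheme are illustrative assumptions only; this is not the basic-matrices method of the paper.

```python
import numpy as np

# Crisp Leontief model: gross output x satisfies x = A x + d,
# i.e. (I - A) x = d, where A is the technological matrix and
# d is the final-demand (restriction) vector.
A = np.array([[0.2, 0.3],      # illustrative technological matrix
              [0.4, 0.1]])
d = np.array([100.0, 50.0])    # illustrative final demand

x = np.linalg.solve(np.eye(2) - A, d)
print("crisp output:", x)

# Indistinctly given element: suppose A[0, 1] is only known to lie
# in the interval [0.25, 0.35].  Sampling the interval yields a
# band of feasible outputs -- a crude stand-in for the fuzzy case.
outputs = []
for a01 in np.linspace(0.25, 0.35, 11):
    A_var = A.copy()
    A_var[0, 1] = a01
    outputs.append(np.linalg.solve(np.eye(2) - A_var, d))
outputs = np.array(outputs)
print("output range per sector:",
      outputs.min(axis=0), "to", outputs.max(axis=0))
```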
Abstract:
* This work is partially supported by CICYT (Spain) under project TIN 2005-08943-C02-001 and by UPM-CAM (Spain) under project R05/11240.
Abstract:
For inference purposes in both classical and fuzzy logic, the information itself should not be contradictory, nor should any of the items of available information contradict each other. In order to avoid these troubles in fuzzy logic, a study of contradiction was initiated by Trillas et al. in [5] and [6]. They introduced the concepts of a self-contradictory fuzzy set and of contradiction between two fuzzy sets. Moreover, the need to study not only contradiction but also its degree is pointed out in [1] and [2], where some measures for this purpose are suggested. Nevertheless, contradiction could be measured in other ways. This paper focuses on the study of contradiction between two fuzzy sets, dealing with the problem from a geometrical point of view that allows us to find new ways to measure the degree of contradiction. To do this, the two fuzzy sets are interpreted as a subset of the unit square, and the so-called contradiction region is determined. In particular, we tackle the case in which both sets represent a curve in [0,1]². This new geometrical approach allows us to obtain different functions that measure contradiction by means of distances. Moreover, some properties of these contradiction-measure functions are established and, in some particular cases, the relations among the different functions are obtained.
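To make the geometrical setting concrete: with the standard negation n(x) = 1 - x, two fuzzy sets A and B are contradictory when every pair of membership values (μ_A(x), μ_B(x)) lies in the region {(u, v) ∈ [0,1]² : u + v ≤ 1} of the unit square. The sketch below computes one plausible distance-based degree over that region; the function name and the normalization are illustrative assumptions, not necessarily one of the measures proposed in the paper.

```python
import numpy as np

def contradiction_degree(mu_a, mu_b):
    """Distance-based contradiction degree between two fuzzy sets.

    mu_a, mu_b: arrays of membership values on a common universe.
    With the standard negation n(x) = 1 - x, A and B are
    contradictory iff every pair (mu_a[i], mu_b[i]) lies in the
    contradiction region {(u, v) in [0,1]^2 : u + v <= 1}.
    Here the degree is the normalized Euclidean distance from the
    pair closest to the boundary line u + v = 1, and 0 if some
    pair escapes the region -- one illustrative choice of measure.
    """
    pts = np.column_stack([mu_a, mu_b])
    if np.any(pts.sum(axis=1) > 1.0):    # not contradictory at all
        return 0.0
    # Distance from (u, v) to the line u + v = 1 is (1 - u - v)/sqrt(2);
    # normalize by the maximal possible distance, attained at (0, 0).
    depth = (1.0 - pts.sum(axis=1)) / np.sqrt(2.0)
    return float(depth.min() / (1.0 / np.sqrt(2.0)))

# Example: B close to the complement of A -> positive degree.
x = np.linspace(0, 1, 101)
mu_a = np.clip(x, 0, 0.4)        # illustrative membership functions
mu_b = 0.5 * (1 - mu_a)
print(contradiction_degree(mu_a, mu_b))   # prints 0.3
```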
Abstract:
Big data comes in various ways, types, shapes, forms and sizes. Indeed, almost all areas of science, technology, medicine, public health, economics, business, linguistics and social science are bombarded by ever-increasing flows of data begging to be analyzed efficiently and effectively. In this paper, we propose a rough taxonomy of big data, along with some of the most commonly used tools for handling each particular category of bigness. The dimensionality p of the input space and the sample size n are usually the main ingredients in the characterization of data bigness. The specific statistical machine learning technique used to handle a particular big data set depends on which category it falls into within the bigness taxonomy. Large p, small n data sets, for instance, require a different set of tools from the large n, small p variety. Among other tools, we discuss Preprocessing, Standardization, Imputation, Projection, Regularization, Penalization, Compression, Reduction, Selection, Kernelization, Hybridization, Parallelization, Aggregation, Randomization, Replication, and Sequentialization. It is important to emphasize right away that the so-called no-free-lunch theorem applies here, in the sense that no universally superior method outperforms all other methods on all categories of bigness. It is also important to stress that simplicity, in the sense of Ockham's razor and its non-plurality principle of parsimony, tends to reign supreme when it comes to massive data. We conclude with a comparison of the predictive performance of some of the most commonly used methods on a few data sets.
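As a flavor of the kind of comparison the abstract ends with, the sketch below benchmarks a few commonly used learners by cross-validation on a synthetic "large p, small n" data set; the data set, the chosen models, and their settings are illustrative assumptions, not the ones used in the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic "large p, small n" data set: 50 samples, 500 features.
X, y = make_classification(n_samples=50, n_features=500,
                           n_informative=10, random_state=0)

# A few commonly used learners; the no-free-lunch theorem warns
# that the ranking below need not carry over to other data sets.
models = {
    "logistic (L2 penalized)": LogisticRegression(max_iter=1000),
    "linear SVM": SVC(kernel="linear"),
    "random forest": RandomForestClassifier(n_estimators=200,
                                            random_state=0),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold accuracy
    print(f"{name:25s} accuracy = {scores.mean():.3f} "
          f"+/- {scores.std():.3f}")
```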