13 results for "downloading of data"

in the Bulgarian Digital Mathematics Library at IMI-BAS


Relevance: 100.00%

Abstract:

This paper studies one of the most important problems of e-learning systems: building a data domain model. The data domain model relies on a correctly organized knowledge base. A production-frame model is proposed that allows the data domain to be structured and a flexible, understandable inference system, based on a production system, to be built.
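
The production-frame idea can be sketched as follows: frames hold structured slots describing the data domain, and production rules fire on slot values to derive new knowledge. The slot and rule names below are hypothetical illustrations, not the paper's model.

```python
# Minimal sketch of a production-frame knowledge base (illustrative names only).
# Frames store slots describing the data domain; production rules are
# condition->action pairs evaluated by a simple forward-chaining loop.

frames = {
    "learner_1": {"is_a": "learner", "completed_tests": 4, "avg_score": 0.55},
    "topic_sets": {"is_a": "topic", "difficulty": "basic"},
}

# Each rule: (name, condition over the frame base, action that updates a slot).
rules = [
    ("needs_revision",
     lambda f: f["learner_1"]["avg_score"] < 0.6,
     lambda f: f["learner_1"].setdefault("recommendation", "repeat_basic_topics")),
    ("ready_for_advanced",
     lambda f: f["learner_1"]["avg_score"] >= 0.6,
     lambda f: f["learner_1"].setdefault("recommendation", "advance")),
]

def infer(frames, rules):
    """Fire applicable rules until no rule changes the frame base."""
    changed = True
    while changed:
        changed = False
        for name, cond, act in rules:
            before = {k: dict(v) for k, v in frames.items()}
            if cond(frames):
                act(frames)
            if frames != before:
                changed = True
    return frames

print(infer(frames, rules)["learner_1"])
```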

Relevance: 100.00%

Abstract:

The question of forming an aim-oriented description of the object domain of a decision support process is outlined. Two main problems of estimating and evaluating data and knowledge uncertainty in decision support systems, the direct and the inverse, are formulated. Three conditions that serve as formalized criteria for aim-oriented construction of the input, internal and output spaces of a decision support system are proposed. Definitions of apparent and hidden data uncertainties on a measuring scale are given.

Relevance: 100.00%

Abstract:

A method (the BIDIMS algorithm) is described for mapping multivariate objects onto a two-dimensional structure in which the sum of differences between the properties of each object and those of its nearest neighbors is minimal. Under this ordering the basic regularities of the object set become evident. Moreover, such structures (tables) have high inductive capability: many latent properties of objects can be predicted from their coordinates in the table. The capabilities of the method are illustrated by a two-dimensional ordering of the chemical elements; the resulting table practically coincides with Mendeleev's periodic table.
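
The objective behind such an ordering can be illustrated with a small greedy sketch that swaps grid cells whenever a swap lowers the total difference between neighbors; this only illustrates the criterion and is not the BIDIMS algorithm itself.

```python
import random

# Illustrative greedy neighbor-smoothing on a grid (not the actual BIDIMS method):
# objects are scalar "properties"; pairs are swapped whenever a swap lowers the
# total difference between grid neighbors.

def neighbor_cost(grid):
    rows, cols = len(grid), len(grid[0])
    cost = 0.0
    for i in range(rows):
        for j in range(cols):
            if i + 1 < rows:
                cost += abs(grid[i][j] - grid[i + 1][j])
            if j + 1 < cols:
                cost += abs(grid[i][j] - grid[i][j + 1])
    return cost

def greedy_order(values, rows, cols, sweeps=50):
    cells = [(i, j) for i in range(rows) for j in range(cols)]
    grid = [[values[i * cols + j] for j in range(cols)] for i in range(rows)]
    for _ in range(sweeps):
        improved = False
        for (i1, j1) in cells:
            for (i2, j2) in cells:
                if (i1, j1) >= (i2, j2):
                    continue
                before = neighbor_cost(grid)
                grid[i1][j1], grid[i2][j2] = grid[i2][j2], grid[i1][j1]
                if neighbor_cost(grid) >= before:
                    grid[i1][j1], grid[i2][j2] = grid[i2][j2], grid[i1][j1]  # undo
                else:
                    improved = True
        if not improved:
            break
    return grid

values = list(range(12))
random.Random(1).shuffle(values)
print(greedy_order(values, rows=3, cols=4))
```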

Relevance: 100.00%

Abstract:

A major drawback of artificial neural networks is their black-box character. Rule extraction algorithms are therefore becoming increasingly important for explaining what a trained network has learned. In this paper we use a method for symbolic knowledge extraction from neural networks once they have been trained to the desired function. The method is based on the weights of the trained network and allows knowledge and rule extraction from networks with continuous inputs and outputs. An application example is shown, based on extracting the average load demand of a power plant.
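
A minimal sketch of the weight-based idea, assuming a single linear neuron and a hypothetical load model (the paper's actual extraction method and power plant data are not reproduced here): train the neuron, then read its weights back as a symbolic rule.

```python
# Illustrative sketch only: train a single linear neuron by gradient descent and
# print the learned weights as a symbolic (linear) rule.

def train_neuron(samples, lr=0.01, epochs=2000):
    """samples: list of ((x1, x2), target). Returns (w1, w2, bias)."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), t in samples:
            y = w[0] * x1 + w[1] * x2 + b
            err = y - t
            w[0] -= lr * err * x1
            w[1] -= lr * err * x2
            b -= lr * err
    return w[0], w[1], b

# Hypothetical training data following load = 2.0*temperature + 0.5*hour + 10
data = [((t, h), 2.0 * t + 0.5 * h + 10.0) for t in range(5) for h in range(5)]
w1, w2, b = train_neuron(data)

# The "extracted rule" is the learned linear relation, printed symbolically.
print(f"load ~ {w1:.2f}*temperature + {w2:.2f}*hour + {b:.2f}")
```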

Relevance: 100.00%

Abstract:

Vladimir Dimitrov - The aim of the present report is a formal specification of the relational data model. This specification can then be extended to the object-relational data model and to data streams.
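
As a rough illustration of the relational model that the report formalizes (not its specification language), a relation can be treated as a set of tuples over attributes, with selection and projection defined set-theoretically:

```python
# Illustrative sketch: a relation as a set of attribute-value tuples, with the
# selection and projection operators defined over it.

def relation(rows):
    """A relation is a set of tuples over the same attributes (no duplicates)."""
    return frozenset(tuple(sorted(r.items())) for r in rows)

def select(rel, predicate):
    """sigma: keep the tuples satisfying the predicate."""
    return frozenset(t for t in rel if predicate(dict(t)))

def project(rel, attrs):
    """pi: keep only the named attributes (duplicates collapse automatically)."""
    return frozenset(tuple((a, v) for a, v in t if a in attrs) for t in rel)

students = relation([
    {"id": 1, "name": "Ana", "year": 2},
    {"id": 2, "name": "Ivan", "year": 3},
])
print(project(select(students, lambda r: r["year"] > 2), {"name"}))
```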

Relevance: 100.00%

Abstract:

Tihomir Trifonov, Tsvetanka Georgieva-Trifonova - This article presents the bgBell/OLAP system for warehousing and online analytical processing of data about unique Bulgarian bells. The implemented system makes it possible to produce summary reports and to analyse various characteristics of the bells in order to extract previously unknown and potentially useful information.
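
An OLAP-style roll-up of the kind such summary reports rely on can be sketched as below; the bell attributes (region, century, weight) are hypothetical and not taken from the bgBell/OLAP schema.

```python
# Illustrative roll-up in the spirit of an OLAP summary report.

from collections import defaultdict

bells = [
    {"region": "Veliko Tarnovo", "century": 19, "weight_kg": 120.0},
    {"region": "Veliko Tarnovo", "century": 18, "weight_kg": 95.5},
    {"region": "Plovdiv",        "century": 19, "weight_kg": 210.0},
]

def rollup(rows, dims, measure):
    """Aggregate a measure (count and average) grouped by the given dimensions."""
    groups = defaultdict(list)
    for r in rows:
        key = tuple(r[d] for d in dims)
        groups[key].append(r[measure])
    return {k: (len(v), sum(v) / len(v)) for k, v in groups.items()}

# Summary by region, then by (region, century), as in a drill-down.
print(rollup(bells, ["region"], "weight_kg"))
print(rollup(bells, ["region", "century"], "weight_kg"))
```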

Relevance: 100.00%

Abstract:

2000 Mathematics Subject Classification: 62P10, 92C40

Relevance: 100.00%

Abstract:

Information extraction, or knowledge discovery from large data sets, should be linked to a data aggregation process. Data aggregation can produce a new data representation with a reduced number of objects in a given set. A deterministic approach to separable data aggregation yields fewer objects without mixing objects from different categories. A statistical approach is less restrictive and allows almost separable data aggregation with a low level of mixing of objects from different categories. Layers of formal neurons can be designed for data aggregation in both the deterministic and the statistical case. The proposed design method is based on minimization of convex and piecewise linear (CPL) criterion functions.
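
One familiar instance of a convex and piecewise linear criterion is a perceptron/hinge-type function, which plain subgradient descent can minimize; the sketch below illustrates that criterion only and is not the paper's layer design procedure.

```python
# Sketch: minimize a convex piecewise-linear (hinge-type) criterion by
# subgradient descent to find one separating hyperplane for two categories.

def cpl_criterion(w, b, points):
    """Sum of hinge-type penalties max(0, 1 - y*(w.x + b)) over labeled points."""
    total = 0.0
    for (x1, x2), y in points:
        total += max(0.0, 1.0 - y * (w[0] * x1 + w[1] * x2 + b))
    return total

def minimize_cpl(points, lr=0.05, steps=500):
    w, b = [0.0, 0.0], 0.0
    for _ in range(steps):
        for (x1, x2), y in points:
            if y * (w[0] * x1 + w[1] * x2 + b) < 1.0:  # subgradient is nonzero
                w[0] += lr * y * x1
                w[1] += lr * y * x2
                b += lr * y
    return w, b

# Two linearly separable categories (labels +1 / -1).
points = [((1.0, 1.0), 1), ((2.0, 1.5), 1), ((-1.0, -1.0), -1), ((-2.0, -0.5), -1)]
w, b = minimize_cpl(points)
print("criterion value:", cpl_criterion(w, b, points), "weights:", w, "bias:", b)
```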

Relevance: 100.00%

Abstract:

Questions of forming learning sets for artificial neural networks in lossless data compression problems are considered. Methods of constructing and using learning sets are studied. A way of forming the learning set while training an artificial neural network on a data stream is proposed.
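
The stream-based formation of a learning set can be sketched as sliding a fixed-length context window over the data and pairing each context with the byte that follows it; the window length here is an assumption, not the paper's choice.

```python
# Sketch of forming a learning set from a data stream for a predictive model
# of the kind used in lossless compression: each example is a fixed-length
# context of preceding bytes and the byte that follows it.

def learning_set_from_stream(stream, context_len=4):
    """Yield (context_bytes, next_byte) pairs as the stream is consumed."""
    window = []
    for byte in stream:
        if len(window) == context_len:
            yield tuple(window), byte
        window.append(byte)
        if len(window) > context_len:
            window.pop(0)

data = b"abracadabra"
for context, target in learning_set_from_stream(data):
    print(bytes(context), "->", bytes([target]))
```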

Relevance: 100.00%

Abstract:

In the global strategy for the preservation of farm animal genetic resources, the implementation of information technology is of great importance. In this regard, platform-independent information tools and approaches for data exchange are needed in order to obtain aggregate values for the regions and countries in which a given breed is spread. The current paper presents an XML-based solution for data exchange in the management of the genetic resources of small farm animal populations. The exchanged documents have specific requirements arising from the goal of data analysis. Three main types of documents are distinguished and their XML formats are discussed; a DTD and an XML Schema are suggested for each type, and some example XML documents are given.
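
A rough illustration of building and aggregating such exchange documents with the Python standard library is given below; the element names (breed, population, females) are hypothetical and do not reproduce the paper's DTD or XML Schema.

```python
# Illustrative construction and parsing of an exchange document; all element
# and attribute names are hypothetical.

import xml.etree.ElementTree as ET

root = ET.Element("breed", attrib={"name": "Karakachan sheep"})
pop = ET.SubElement(root, "population", attrib={"region": "Smolyan"})
ET.SubElement(pop, "females").text = "420"
ET.SubElement(pop, "males").text = "35"

document = ET.tostring(root, encoding="unicode")
print(document)

# A receiving side can aggregate values across such documents per region/country.
parsed = ET.fromstring(document)
total_females = sum(int(p.findtext("females")) for p in parsed.iter("population"))
print("total females:", total_females)
```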

Relevance: 100.00%

Abstract:

2000 Mathematics Subject Classification: 62H30, 62J20, 62P12, 68T99

Relevance: 100.00%

Abstract:

Big data comes in various ways, types, shapes, forms and sizes. Indeed, almost all areas of science, technology, medicine, public health, economics, business, linguistics and social science are bombarded by ever increasing flows of data begging to be analyzed efficiently and effectively. In this paper, we propose a rough idea of a possible taxonomy of big data, along with some of the most commonly used tools for handling each particular category of bigness. The dimensionality p of the input space and the sample size n are usually the main ingredients in the characterization of data bigness. The specific statistical machine learning technique used to handle a particular big data set will depend on which category it falls in within the bigness taxonomy. Large p, small n data sets, for instance, require a different set of tools from the large n, small p variety. Among other tools, we discuss preprocessing, standardization, imputation, projection, regularization, penalization, compression, reduction, selection, kernelization, hybridization, parallelization, aggregation, randomization, replication and sequentialization. Indeed, it is important to emphasize right away that the so-called no free lunch theorem applies here, in the sense that there is no universally superior method that outperforms all other methods on all categories of bigness. It is also important to stress that simplicity, in the sense of Ockham's razor and the principle of parsimony, tends to reign supreme when it comes to massive data. We conclude with a comparison of the predictive performance of some of the most commonly used methods on a few data sets.
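
For the "large p, small n" category mentioned above, penalization is one of the listed tools; a minimal sketch using closed-form ridge regression on synthetic data (not the paper's actual comparison) is:

```python
# Illustrative "large p, small n" situation handled with a penalization tool
# (ridge regression in closed form); the data are synthetic.

import numpy as np

rng = np.random.default_rng(0)
n, p = 30, 200                      # far more predictors than observations
X = rng.normal(size=(n, p))
true_beta = np.zeros(p)
true_beta[:5] = [3.0, -2.0, 1.5, 1.0, -1.0]   # only a few informative features
y = X @ true_beta + rng.normal(scale=0.5, size=n)

lam = 5.0                           # ridge penalty keeps X^T X + lam*I invertible
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

print("estimated leading coefficients:", np.round(beta_ridge[:5], 2))
```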

Relevance: 100.00%

Abstract:

2010 Mathematics Subject Classification: 94A17.