3 results for Big Science projects

in Bulgarian Digital Mathematics Library at IMI-BAS


Relevance:

30.00%

Publisher:

Abstract:

In this paper some current digitization projects carried out by the Mathematical Institute of the Serbian Academy of Sciences and Arts, Belgrade, and the Faculty of Mathematics, Belgrade, are described. The projects concern the development of a virtual library of retro-digitized books, an Internet database and presentation of electronic editions of some leading Serbian journals in science and the arts, and the work on the South-Eastern European Digitization Initiative (SEEDI).

Relevance:

30.00%

Publisher:

Abstract:

Digitization offers excellent opportunities for the preservation and safekeeping of valuable library collections. The article recounts the first coordinated attempts of the “Ivan Vazov” Public Library – Plovdiv at digitizing some of its treasured collections, such as manuscripts, early printed books and archives, through partner projects, and at revealing them to the world community.

Relevance:

30.00%

Publisher:

Abstract:

Big data comes in various ways, types, shapes, forms and sizes. Indeed, almost all areas of science, technology, medicine, public health, economics, business, linguistics and social science are bombarded by ever-increasing flows of data begging to be analyzed efficiently and effectively. In this paper, we propose a rough idea of a possible taxonomy of big data, along with some of the most commonly used tools for handling each particular category of bigness. The dimensionality p of the input space and the sample size n are usually the main ingredients in the characterization of data bigness. The specific statistical machine learning technique used to handle a particular big data set will depend on which category it falls into within the bigness taxonomy. Large p, small n data sets, for instance, require a different set of tools from the large n, small p variety. Among other tools, we discuss Preprocessing, Standardization, Imputation, Projection, Regularization, Penalization, Compression, Reduction, Selection, Kernelization, Hybridization, Parallelization, Aggregation, Randomization, Replication and Sequentialization. Indeed, it is important to emphasize right away that the so-called no-free-lunch theorem applies here, in the sense that there is no universally superior method that outperforms all other methods on all categories of bigness. It is also important to stress that simplicity, in the sense of Ockham's razor non-plurality principle of parsimony, tends to reign supreme when it comes to massive data. We conclude with a comparison of the predictive performance of some of the most commonly used methods on a few data sets.
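The large p, small n category mentioned in the abstract can be made concrete. When p > n, the ordinary least-squares normal equations are singular, while a penalization such as ridge regression restores a unique solution. The following is a minimal sketch in NumPy under stated assumptions: the function name `ridge_fit`, the penalty parameter `lam`, and the synthetic data are all illustrative, not taken from the paper.

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X'X + lam*I)^{-1} X'y.
    The penalty term lam*I makes the system solvable even when p > n,
    where plain least squares (X'X)^{-1} X'y would fail."""
    n, p = X.shape
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Large p, small n: here X'X is a 200x200 matrix of rank at most 20,
# so it is singular; the ridge penalty yields a unique estimate anyway.
rng = np.random.default_rng(0)
n, p = 20, 200
X = rng.standard_normal((n, p))
w_true = np.zeros(p)
w_true[:5] = 1.0                      # sparse illustrative ground truth
y = X @ w_true + 0.01 * rng.standard_normal(n)

w_hat = ridge_fit(X, y, lam=1.0)
print(w_hat.shape)                    # (200,)
```

A different bigness category calls for different tools: for large n, small p, the bottleneck is the n-by-n scale of the data rather than singularity, which is where the paper's parallelization, aggregation and sequentialization tools come into play.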