5 results for simplicity

in the Bulgarian Digital Mathematics Library at IMI-BAS


Relevance: 10.00%

Abstract:

An eMathTeacher [Sánchez-Torrubia 2007a] is an online eLearning self-assessment tool that helps students actively learn mathematical algorithms on their own, correcting their mistakes and providing them with clues to find the right solution. The tool presented in this paper is an example of this new concept in Computer Aided Instruction (CAI) resources; it has been implemented as a Java applet and designed as an auxiliary instrument for both classroom teaching and individual practice of Fleury’s algorithm. This tool, included within a set of eMathTeacher tools, has been designed as an educational complement for the active learning of graph algorithms by first-year students. Its visualization, simplicity and interactivity make this tutorial a pedagogical instrument of great value.
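The applet's source code is not part of this record; purely as a point of reference, a minimal sketch of Fleury's algorithm itself (always traverse a non-bridge edge when one is available) might look as follows. The graph representation and the function names are assumptions of this example, not the applet's implementation:

```python
# Illustrative sketch of Fleury's algorithm for an Eulerian circuit.
# Assumes a connected multigraph in which every vertex has even degree.
from collections import defaultdict

def fleury(edges):
    """Return an Eulerian circuit as a list of vertices."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    def reachable(start):
        # Count vertices reachable from `start` in the current graph.
        seen, stack = {start}, [start]
        while stack:
            x = stack.pop()
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        return len(seen)

    def is_bridge(u, v):
        # Tentatively remove (u, v); it is a bridge if u's component shrinks.
        before = reachable(u)
        adj[u].remove(v)
        adj[v].remove(u)
        after = reachable(u)
        adj[u].append(v)
        adj[v].append(u)
        return after < before

    circuit = [next(iter(adj))]
    while any(adj.values()):
        u = circuit[-1]
        # Prefer a non-bridge edge; take a bridge only when forced.
        candidates = adj[u]
        v = next((w for w in candidates if not is_bridge(u, w)),
                 candidates[0])
        adj[u].remove(v)
        adj[v].remove(u)
        circuit.append(v)
    return circuit

# A 4-cycle: every vertex has degree 2, so an Eulerian circuit exists.
print(fleury([(0, 1), (1, 2), (2, 3), (3, 0)]))  # [0, 1, 2, 3, 0]
```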

Relevance: 10.00%

Abstract:

This work was supported by the Bulgarian National Science Fund under grant BY-TH-105/2005.

Relevance: 10.00%

Abstract:

In 2000, A. Alesina and M. Galuzzi presented Vincent’s theorem “from a modern point of view”, along with two new bisection methods derived from it, B and C. Their profound understanding of Vincent’s theorem accounts for the simplicity that characterizes these two methods. In this paper we compare the performance of these two new bisection methods (the time they take, as well as the number of intervals they examine in order to isolate the real roots of polynomials) against that of the well-known Vincent-Collins-Akritas method, the first bisection method derived from Vincent’s theorem, back in 1976. Experimental results indicate that REL, the fastest implementation of the Vincent-Collins-Akritas method, is still the fastest of the three bisection methods, but the number of intervals it examines is almost the same as that of B. Therefore, further research on speeding up B while preserving its simplicity looks promising.
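As a point of reference, the bisection idea that these methods share (bound the root count of a transformed polynomial by Descartes' rule of signs, then split the interval) can be sketched as follows. This is a minimal illustration assuming a square-free polynomial with rational coefficients given in ascending order, restricted to (0, 1); it is not the optimized REL implementation:

```python
# Descartes-rule bisection for isolating the real roots of a square-free
# polynomial in (0, 1). Coefficients are Fractions: p[i] is the
# coefficient of x^i. Names and conventions are assumptions of this sketch.
from fractions import Fraction

def taylor_shift(p):
    """Coefficients (ascending) of p(x + 1), via repeated accumulation."""
    p = list(p)
    for i in range(len(p) - 1):
        for j in range(len(p) - 2, i - 1, -1):
            p[j] += p[j + 1]
    return p

def descartes_bound(p):
    """Bound on the number of roots of p in (0, 1): the sign variations
    of (x + 1)^n * p(1 / (x + 1))."""
    q = taylor_shift(p[::-1])            # reversal gives x^n * p(1/x)
    signs = [c for c in q if c != 0]
    return sum(a * b < 0 for a, b in zip(signs, signs[1:]))

def isolate(p, lo=Fraction(0), hi=Fraction(1)):
    """Disjoint subintervals of (lo, hi), each holding exactly one root."""
    v = descartes_bound(p)
    if v == 0:
        return []
    if v == 1:
        return [(lo, hi)]
    mid = (lo + hi) / 2
    left = [c / 2 ** i for i, c in enumerate(p)]   # p(x/2): (lo, mid)
    right = taylor_shift(left)                     # p((x+1)/2): (mid, hi)
    found = [(mid, mid)] if right[0] == 0 else []  # exact root at mid
    return isolate(left, lo, mid) + found + isolate(right, mid, hi)

# (x - 1/3)(x - 2/3): two roots, isolated as (0, 1/2) and (1/2, 1).
p = [Fraction(2, 9), Fraction(-1), Fraction(1)]
print(isolate(p))
```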

Relevance: 10.00%

Abstract:

ACM Computing Classification System (1998): E.4, C.2.1.

Relevance: 10.00%

Abstract:

Big data comes in various ways, types, shapes, forms and sizes. Indeed, almost all areas of science, technology, medicine, public health, economics, business, linguistics and social science are bombarded by ever-increasing flows of data begging to be analyzed efficiently and effectively. In this paper, we propose a rough idea of a possible taxonomy of big data, along with some of the most commonly used tools for handling each particular category of bigness. The dimensionality p of the input space and the sample size n are usually the main ingredients in the characterization of data bigness. The specific statistical machine learning technique used to handle a particular big data set depends on which category it falls into within the bigness taxonomy. Large p, small n data sets, for instance, require a different set of tools from the large n, small p variety. Among other tools, we discuss Preprocessing, Standardization, Imputation, Projection, Regularization, Penalization, Compression, Reduction, Selection, Kernelization, Hybridization, Parallelization, Aggregation, Randomization, Replication and Sequentialization. It is important to emphasize right away that the so-called no free lunch theorem applies here, in the sense that there is no universally superior method that outperforms all other methods on all categories of bigness. It is also important to stress that simplicity, in the sense of Ockham’s razor and its non-plurality principle of parsimony, tends to reign supreme when it comes to massive data. We conclude with a comparison of the predictive performance of some of the most commonly used methods on a few data sets.
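The abstract names no specific software; purely as an illustration, two items from the list above, Standardization and Penalization, could be combined as follows on a synthetic large p, small n problem. The use of scikit-learn, the ridge penalty and all parameter choices are assumptions of this sketch, not the paper's method:

```python
# Illustration of Standardization + Penalization on a synthetic
# "large p, small n" regression problem (p = 500 features, n = 50 samples).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n, p = 50, 500                        # far more features than samples
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = [3, -2, 1.5, -1, 0.5]      # sparse truth: 5 relevant features
y = X @ beta + rng.normal(scale=0.5, size=n)

# Ordinary least squares is ill-posed when p > n; an L2 (ridge) penalty
# restores a unique, stable solution, at the cost of some bias.
model = make_pipeline(StandardScaler(), Ridge(alpha=10.0))
score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
print(f"ridge cross-validated R^2: {score:.2f}")
```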