9 results for specific learning difficulties
in the Bulgarian Digital Mathematics Library at IMI-BAS
Abstract:
The article deals with the topicality and the problems of using information and communication technologies in secondary education, and with the conditions and methods for teaching the Ukrainian language with distance support in senior classes. The article shows the fundamental similarity between distance learning and traditional classroom training. The common and specific principles of creating teaching materials for a distance learning course are described. The article reveals the conditions for the effective organization of Ukrainian language learning with distance support, based on the material of the distance course “Business Ukrainian and Culture of Communication”.
Abstract:
This paper presents research on the application of collaborative learning and authoring during all delivery phases of e-learning programmes or e-courses offered by educational institutions. The possibilities for modelling an e-project as a specific management process, based on planned, dynamically changing, or accidentally arising sequences of learning activities, are discussed. New approaches to project-based and collaborative learning and authoring are presented. Special types of test questions are introduced which allow test generation and authoring based on learners’ answers accumulated within a given e-course. Experiments were carried out in an e-learning environment named BEST.
Abstract:
The paper presents a verification of a previously developed conceptual model of security-related processes in DRM implementation. The applicability of the established security requirements in practice is also checked by comparing these requirements against four real DRM implementations (Microsoft Media DRM, Apple's iTunes, SunnComm Technologies’ MediaMax DRM and First4Internet’s XCP DRM). The exploited weaknesses of these systems, resulting from the violation of specific security requirements, are explained, and the possibilities of avoiding the attacks by implementing the requirements at the design stage are discussed.
Abstract:
Advances in building learning technology now have to emphasize the individual's learning, in addition to the popular focus on the technology per se. Unlike common research, where a great deal of effort has gone into finding ways to build, manage, classify, categorize and search knowledge on the server, our work is interested in knowledge development in the individual's learning. We build the technology that resides behind the knowledge sharing platform where an individual's learning and sharing activities take place. The system that we built, KFTGA (Knowledge Flow Tracer and Growth Analyzer), demonstrates the capability of identifying the topics and subjects that an individual engages with during a knowledge sharing session and of measuring the growth of the individual's knowledge of a specific subject over a given time span.
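The abstract does not describe KFTGA's internal algorithms. Purely as a hypothetical illustration of what tracing topic engagement and measuring its growth across sessions could look like, the sketch below counts topic-keyword occurrences per session; all names and keyword lists are assumptions, not taken from the paper.

```python
from collections import Counter

# Hypothetical topic keyword lists (illustrative only; not from the KFTGA paper).
TOPICS = {
    "recursion": {"recursion", "recursive", "recursively"},
    "percent": {"percent", "ratio", "deviation"},
}

def topic_counts(messages):
    """Count how often each topic's keywords appear in one session's messages."""
    counts = Counter()
    for text in messages:
        words = [w.strip(".,?!").lower() for w in text.split()]
        for topic, keywords in TOPICS.items():
            counts[topic] += sum(w in keywords for w in words)
    return counts

def knowledge_growth(earlier_session, later_session, topic):
    """Naive growth measure: change in topic engagement between two sessions."""
    return topic_counts(later_session)[topic] - topic_counts(earlier_session)[topic]

# Example: engagement with "recursion" grows from an earlier to a later session.
earlier = ["what does recursion even mean?"]
later = ["my recursive function finally works", "recursion makes sense now"]
print(knowledge_growth(earlier, later, "recursion"))  # 1
```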
Abstract:
Ironically, the “learning of percent” is one of the most problematic aspects of school mathematics. In our view, these difficulties are not associated with the arithmetic aspects of “percent problems”, but mostly with two methodological issues: firstly, providing students with a simple and accurate understanding of the rationale behind the use of percent, and secondly, overcoming the psychological complexities of a fluent and comprehensive understanding by the students of the sometimes specific wordings of “percent problems”. Before we talk about percent, it is necessary to acquaint students with the much more fundamental and important (regrettably, not covered by the school syllabus) classical concepts of quantitative and qualitative comparison of values, to give students the opportunity to learn the relevant standard terminology and to become accustomed to conventional turns of speech. Further, it makes sense to briefly touch on the issue (important in its own right) of different representations of numbers. Percent is just one of the technical, but common, forms of data representation: p% = p × % = p × 0.01 = p × 1/100 = p/100 = p × 10⁻². “Percent problems” involve just two cases: I. the ratio of a variation m to the standard M; II. the relative deviation of a variation m from the standard M. The hardest and most essential part of each specific “percent problem” is not the routine arithmetic involved, but the ability to figure out, and clearly understand, which of the variables in the problem statement is the standard and which is the variation. And in the first place, this is what teachers need to patiently and persistently teach their students. As a matter of fact, most primary school pupils are not yet quite ready for the lexical specificity of “percent problems”. ... Math teachers should closely, hand in hand with their students, carry out a linguistic analysis of the wording of each problem ... Schoolchildren must firmly understand that a comparison of objects is only meaningful when we speak about properties which can be objectively expressed in terms of actual numerical characteristics. In our opinion, an adequate acquisition of the teaching unit on percent cannot be achieved in primary school, due to objective psychological specificities related to this age and to the level of the general training of students. Yet, if we want to make this topic truly accessible and practically useful, it should be taught in high school. A final question to the reader (quickly, please): what is greater, π% of e or e% of π?
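As a brief worked illustration of the two cases above (added here for clarity; not part of the original abstract), the following sketch computes case I (the ratio of a variation to the standard) and case II (the relative deviation), and checks the closing question:

```python
import math

def ratio_percent(m, M):
    """Case I: the ratio of a variation m to the standard M, expressed in percent."""
    return 100 * m / M

def relative_deviation_percent(m, M):
    """Case II: the relative deviation of a variation m from the standard M, in percent."""
    return 100 * (m - M) / M

# Example: a price rises from the standard M = 80 to the variation m = 92.
print(ratio_percent(92, 80))               # 115.0 -> m is 115% of M
print(relative_deviation_percent(92, 80))  # 15.0  -> m exceeds M by 15%

# The closing question: pi% of e equals e% of pi, since both equal pi*e/100.
print(math.pi / 100 * math.e, math.e / 100 * math.pi)
```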
Abstract:
Pavel Azalov - Recursion is a powerful technique for producing simple algorithms. It is a main topic in almost every introductory programming course. However, educators often refer to difficulties in learning recursion and suggest methods for teaching it. This paper offers a possible solution to the problem by (1) expressing recursive definitions through base operations, which have been predefined as a set of base functions, and (2) practising recursion by solving sequences of problems. The base operations are specific to each sequence of problems, resulting in a smooth transition from recursive definitions to recursive functions. Base functions hide the particularities of the concrete programming language and allow the students to focus solely on the formulation of recursive definitions.
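To make the idea concrete (a minimal sketch, assuming a hypothetical sequence of list problems; the function names are illustrative and not the paper's actual base operations), a small set of predefined base functions can hide the language's list mechanics so that students write only the recursive definitions:

```python
# Hypothetical base functions for a sequence of problems on lists
# (illustrative names; not taken from the paper).
def empty(xs):
    return len(xs) == 0

def head(xs):
    return xs[0]

def tail(xs):
    return xs[1:]

# Students then express recursive definitions purely through the base functions.
def total(xs):
    """Sum of a list, written as a recursive definition over the base operations."""
    return 0 if empty(xs) else head(xs) + total(tail(xs))

def count(xs, x):
    """How many times x occurs in the list."""
    if empty(xs):
        return 0
    return (1 if head(xs) == x else 0) + count(tail(xs), x)

print(total([3, 1, 4, 1, 5]))     # 14
print(count([3, 1, 4, 1, 5], 1))  # 2
```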
Abstract:
Report published in the Proceedings of the National Conference on "Education in the Information Society", Plovdiv, May 2013.
Abstract:
Big data comes in various ways, types, shapes, forms and sizes. Indeed, almost all areas of science, technology, medicine, public health, economics, business, linguistics and social science are bombarded by ever increasing flows of data begging to be analyzed efficiently and effectively. In this paper, we propose a rough idea of a possible taxonomy of big data, along with some of the most commonly used tools for handling each particular category of bigness. The dimensionality p of the input space and the sample size n are usually the main ingredients in the characterization of data bigness. The specific statistical machine learning technique used to handle a particular big data set will depend on which category of the bigness taxonomy it falls into. Large p, small n data sets, for instance, require a different set of tools from the large n, small p variety. Among other tools, we discuss Preprocessing, Standardization, Imputation, Projection, Regularization, Penalization, Compression, Reduction, Selection, Kernelization, Hybridization, Parallelization, Aggregation, Randomization, Replication, and Sequentialization. Indeed, it is important to emphasize right away that the so-called no free lunch theorem applies here, in the sense that there is no universally superior method that outperforms all other methods on all categories of bigness. It is also important to stress the fact that simplicity, in the sense of Ockham’s razor and its non-plurality principle of parsimony, tends to reign supreme when it comes to massive data. We conclude with a comparison of the predictive performance of some of the most commonly used methods on a few data sets.
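As a small, generic illustration of how a few of the listed tools combine in practice (a sketch assuming scikit-learn is available; this is not the paper's own experiment), Standardization and Regularization/Penalization are often chained into a single pipeline for a large p, small n problem:

```python
# Generic sketch: Standardization + L2 Regularization (Penalization) for p >> n.
from sklearn.datasets import make_classification
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic data with p >> n (2,000 features, 100 observations).
X, y = make_classification(n_samples=100, n_features=2000,
                           n_informative=20, random_state=0)

# Standardize each feature, then fit a ridge-style (L2-penalized) logistic regression.
model = make_pipeline(StandardScaler(),
                      LogisticRegression(penalty="l2", C=0.1, max_iter=5000))

scores = cross_val_score(model, X, y, cv=5)
print(f"5-fold CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```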
Abstract:
This research evaluates pattern recognition techniques on a subclass of big data where the dimensionality of the input space (p) is much larger than the number of observations (n). Specifically, we evaluate massive gene expression microarray cancer data where the ratio κ = n/p is less than one. We explore the statistical and computational challenges inherent in these high dimensional low sample size (HDLSS) problems and present statistical machine learning methods used to tackle and circumvent these difficulties. Regularization and kernel algorithms were explored in this research using seven datasets where κ < 1. These techniques require special attention to tuning, necessitating the investigation of several extensions of cross-validation to support better predictive performance. While no single algorithm was universally the best predictor, the regularization technique produced lower test errors in five of the seven datasets studied.
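As a hedged sketch of the HDLSS setting described above (synthetic data stands in for the microarray data, and the specific estimators and parameter grids are assumptions rather than the paper's actual methods), a regularization approach and a kernel approach can each be tuned by cross-validation and compared on a held-out set:

```python
# Sketch of the p >> n (kappa = n/p < 1) setting: regularization vs. a kernel method,
# each tuned by cross-validation. Synthetic data; not the paper's datasets or code.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# n = 80 samples, p = 5000 features, so kappa = n/p = 0.016 < 1.
X, y = make_classification(n_samples=80, n_features=5000,
                           n_informative=30, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=1)

candidates = {
    "regularization": GridSearchCV(
        LogisticRegression(penalty="l2", max_iter=5000),
        {"C": [0.01, 0.1, 1.0, 10.0]}, cv=5),
    "kernel": GridSearchCV(
        SVC(kernel="rbf"),
        {"C": [1.0, 10.0], "gamma": ["scale", 1e-4]}, cv=5),
}

for name, search in candidates.items():
    search.fit(X_train, y_train)
    print(name, "test accuracy:", round(search.score(X_test, y_test), 3))
```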