819 results for E-Learning Systems
Abstract:
This paper considers various questions concerning the creation of an integrated development environment for computer training systems. The information technologies that can be used to build such an environment are described, and the didactic aspects of implementing such systems are analyzed. Ways to improve the efficiency and quality of the learning process with computer training systems for distance education are outlined.
Abstract:
This paper introduces a learning algorithm for adjusting the weight coefficients of the Cascade Neo-Fuzzy Neural Network (CNFNN) in sequential mode. The architecture is structurally similar to the Cascade-Correlation Learning Architecture proposed by S. E. Fahlman and C. Lebiere, but differs in the type of artificial neuron used. The CNFNN consists of neo-fuzzy neurons, which can be adjusted using high-speed linear learning procedures. Compared with conventional neural networks, the proposed CNFNN is characterized by a high learning rate and a small required learning sample, and its operation can be described by fuzzy linguistic “if-then” rules, providing “transparency” of the obtained results. The online learning algorithm allows input data to be processed sequentially in real time.
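To make the neo-fuzzy mechanism concrete, here is a minimal Python sketch of a single neo-fuzzy neuron with triangular membership functions and a sequential LMS weight update; the class and parameter names (NeoFuzzyNeuron, n_mf, eta) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def tri_memberships(x, centers):
    """Triangular membership degrees of scalar x over evenly spaced centers.
    Adjacent functions overlap so that the degrees sum to 1."""
    mu = np.zeros(len(centers))
    for j in range(len(centers) - 1):
        left, right = centers[j], centers[j + 1]
        if left <= x <= right:
            mu[j] = (right - x) / (right - left)
            mu[j + 1] = 1.0 - mu[j]
    return mu

class NeoFuzzyNeuron:
    """One neo-fuzzy neuron: a sum of per-input fuzzy synapses, linear in its weights."""
    def __init__(self, n_inputs, n_mf=5, lo=0.0, hi=1.0, eta=0.1):
        self.centers = np.linspace(lo, hi, n_mf)
        self.w = np.zeros((n_inputs, n_mf))
        self.eta = eta

    def predict(self, x):
        return sum(self.w[i] @ tri_memberships(xi, self.centers)
                   for i, xi in enumerate(x))

    def update(self, x, y):
        # Sequential (online) LMS step: the output is linear in the weights,
        # so one gradient step per sample is a fast, well-behaved update.
        err = y - self.predict(x)
        for i, xi in enumerate(x):
            self.w[i] += self.eta * err * tri_memberships(xi, self.centers)
        return err
```

Because the model is linear in its weights, each update is a cheap closed-form step, which is what gives the architecture its high learning rate.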
Abstract:
Increased global uptake of entertainment gaming has the potential to lead to high expectations of engagement and interactivity from users of technology-enhanced learning environments. Blended approaches to implementing game-based learning as part of distance or technology-enhanced education have led to demonstrations of the benefits they might bring, allowing learners to interact with immersive technologies as part of a broader, structured learning experience. In this article, we explore how the integration of a serious game can be extended to a learning content management system (LCMS) to support a blended and holistic approach, described as an 'intuitive-guided' method. Through a case study within the EU-funded Adaptive Learning via Intuitive/Interactive, Collaborative and Emotional Systems (ALICE) project, a technical integration of a gaming engine with a proprietary LCMS is demonstrated, building upon earlier work and showing how this approach might be realized. In particular, we consider how this method can support an intuitive-guided approach to learning, whereby the learner is given the potential to explore a non-linear environment whilst scaffolding and blending provide guidance, ensuring that targeted learning objectives are met. Through an evaluation of the developed prototype with 32 students aged 14-16 across two Italian schools, a varied response from learners is observed, coupled with a positive reception from tutors. The study demonstrates that challenges remain in providing high-fidelity content in a classroom environment, particularly as an increasing gap in technology availability between leisure and school times emerges.
Abstract:
Nikola Valchanov, Todorka Terzieva, Vladimir Shkurtov, Anton Iliev - One of the main application areas of computer informatics is the automation of mathematical computations. Information systems cover various domains such as accounting, e-learning/testing, simulation environments, and so on. They rely on computational libraries that are specific to each system's scope. Even if such systems are flawless when built, they become obsolete if they are not maintained. In this work we describe a mechanism that loads computational libraries dynamically and decides at run time (intelligently or interactively) how and when to use them. The aim of this article is to present an architecture for computation-driven systems. It focuses on the benefits of using the right design patterns to ensure extensibility and reduce complexity.
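As an illustration of the computation-driven idea, here is a minimal Python sketch of a run-time registry that dispatches to dynamically registered computational routines (a strategy-pattern variant); ComputationRegistry and all other names are hypothetical, not from the article.

```python
from typing import Callable, Dict

class ComputationRegistry:
    """Run-time registry of computational routines; the system decides at
    execution time which registered implementation to dispatch to."""
    def __init__(self):
        self._impls: Dict[str, Callable] = {}

    def register(self, name: str, impl: Callable) -> None:
        # New computations can be plugged in without changing the caller,
        # which is what keeps the architecture extensible.
        self._impls[name] = impl

    def run(self, name: str, *args, **kwargs):
        if name not in self._impls:
            raise KeyError(f"no computation registered under {name!r}")
        return self._impls[name](*args, **kwargs)

registry = ComputationRegistry()
registry.register("mean", lambda xs: sum(xs) / len(xs))
print(registry.run("mean", [1, 2, 3]))  # 2.0
```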
Abstract:
Report published in the Proceedings of the National Conference on "Education in the Information Society", Plovdiv, May 2013
Abstract:
A rough set approach to attribute reduction is an important research subject in data mining and machine learning. However, most attribute reduction methods operate on a complete decision table. In this paper, we propose methods for attribute reduction in static incomplete decision systems and in dynamic incomplete decision systems whose conditional attributes increase or decrease over time. Our methods use a generalized discernibility matrix and discernibility function in tolerance-based rough sets.
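A small Python sketch of a tolerance-based discernibility matrix may help: an attribute discerns two objects only when both values are known and differ, and entries are kept only for object pairs with different decisions. The encoding (None marking a missing value) and all names are illustrative assumptions, not the paper's notation.

```python
MISSING = None  # placeholder for an unknown attribute value

def discernibility_matrix(objects, decisions):
    """Entry (i, j) lists the conditional attributes that discern objects
    i and j under the tolerance relation: a missing value is compatible
    with anything, so it never discerns."""
    n = len(objects)
    matrix = {}
    for i in range(n):
        for j in range(i + 1, n):
            if decisions[i] == decisions[j]:
                continue  # only pairs with different decisions matter
            entry = {a for a, (u, v) in enumerate(zip(objects[i], objects[j]))
                     if u is not MISSING and v is not MISSING and u != v}
            matrix[(i, j)] = entry
    return matrix

# Toy incomplete decision table: rows are objects, None marks a missing value.
objs = [(1, 0, None), (1, 1, 2), (0, None, 2)]
dec  = ["yes", "no", "no"]
print(discernibility_matrix(objs, dec))  # {(0, 1): {1}, (0, 2): {0}}
```

A reduct is then any minimal attribute subset that intersects every non-empty entry, which is what the discernibility function encodes.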
Abstract:
Resilience is a term that is gaining currency in conservation and sustainable development, though its meaning and value in this context are yet to be defined. Searching for Resilience in Sustainable Development examines ways in which resilience may be created within the web of ecological, socio-economic and cultural systems that make up the world we live in. The authors embark upon a learning journey, exploring both robust and fragile systems and asking questions of groups and individuals actively involved in building or maintaining resilience.
Abstract:
We first review the potential of Asian knowledge management and organisational learning and contrast this with Western precepts, finding that there seems to be little incentive to 'look after one's fellows' in China (and perhaps across Asia) outside of tight personal guanxi networks. This is likely to be the case in the intense production regions of China, where little time is allowed for 'organisational learning' by the staff and there is little incentive for senior managers to initiate 'knowledge management'. Thus the 'tragedy of the commons' will be enacted from individuals through township and provincial leaders up to top ministers: no one will care for the climate or pollution, only for their own group and their wealth-creation prospects. The paper ends with a brief discussion of climate change and suggests that a practical solution would be to transfer much of the current air, sea and long-haul trucking of intercontinental freight between China and Europe (and the USA) to maglev systems. Copyright © 2011 Inderscience Enterprises Ltd.
Abstract:
For the treatment and monitoring of Parkinson's disease (PD) to be scientific, a key requirement is that measurement of disease stages and severity is quantitative, reliable, and repeatable. The last 50 years in PD research have been dominated by qualitative, subjective ratings obtained by human interpretation of the presentation of disease signs and symptoms at clinical visits. More recently, “wearable,” sensor-based, quantitative, objective, and easy-to-use systems for quantifying PD signs for large numbers of participants over extended durations have been developed. This technology has the potential to significantly improve both clinical diagnosis and management in PD and the conduct of clinical studies. However, the large-scale, high-dimensional character of the data captured by these wearable sensors requires sophisticated signal processing and machine-learning algorithms to transform it into scientifically and clinically meaningful information. Such algorithms that “learn” from data have shown remarkable success in making accurate predictions for complex problems in which human skill has been required to date, but they are challenging to evaluate and apply without a basic understanding of the underlying logic on which they are based. This article contains a nontechnical tutorial review of relevant machine-learning algorithms, also describing their limitations and how these can be overcome. It discusses implications of this technology and a practical road map for realizing the full potential of this technology in PD research and practice. © 2016 International Parkinson and Movement Disorder Society.
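As a minimal illustration of the kind of supervised pipeline such a tutorial covers, the sketch below trains a classifier on synthetic stand-ins for wearable-sensor features and reports cross-validated accuracy; the features, labels, and model choice are entirely illustrative assumptions, not from the article.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for wearable-sensor feature vectors (e.g. tremor-band
# power, gait cadence) with binary clinician labels; real data would replace this.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# Cross-validation gives the repeatable, held-out performance estimate that
# quantitative PD measurement requires, rather than accuracy on training data.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())
```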
Abstract:
Markovian models are widely used to analyse quality-of-service properties of both system designs and deployed systems. Thanks to the emergence of probabilistic model checkers, this analysis can be performed with high accuracy. However, its usefulness is heavily dependent on how well the model captures the actual behaviour of the analysed system. Our work addresses this problem for a class of Markovian models termed discrete-time Markov chains (DTMCs). We propose a new Bayesian technique for learning the state transition probabilities of DTMCs based on observations of the modelled system. Unlike existing approaches, our technique weighs observations based on their age, to account for the fact that older observations are less relevant than more recent ones. A case study from the area of bioinformatics workflows demonstrates the effectiveness of the technique in scenarios where the model parameters change over time.
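A minimal Python sketch of the general idea, assuming a Dirichlet prior and exponential forgetting as the age-weighting scheme (the paper's exact formulation may differ; all names are illustrative):

```python
import numpy as np

class AgingDtmcEstimator:
    """Bayesian estimate of DTMC transition probabilities from observed
    transitions, with exponentially decaying weight for older observations."""
    def __init__(self, n_states, prior=1.0, decay=0.99):
        self.counts = np.zeros((n_states, n_states))
        self.prior = prior    # Dirichlet pseudo-count per transition
        self.decay = decay    # per-observation forgetting factor

    def observe(self, s, t):
        self.counts *= self.decay   # age all earlier observations
        self.counts[s, t] += 1.0    # the newest observation gets full weight

    def transition_matrix(self):
        c = self.counts + self.prior          # posterior pseudo-counts
        return c / c.sum(axis=1, keepdims=True)  # row-normalized estimates

est = AgingDtmcEstimator(n_states=2)
for s, t in [(0, 1), (1, 0), (0, 0), (0, 1)]:
    est.observe(s, t)
print(est.transition_matrix())
```

The decay factor makes recent behaviour dominate the estimate, which is what lets the model track parameters that drift over time.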
Abstract:
The main challenges of multimedia data retrieval lie in the effective mapping between low-level features and high-level concepts, and in the individual users' subjective perceptions of multimedia content. The objectives of this dissertation are to develop an integrated multimedia indexing and retrieval framework with the aim of bridging the gap between semantic concepts and low-level features. To achieve this goal, a set of core techniques have been developed, including image segmentation, content-based image retrieval, object tracking, video indexing, and video event detection. These core techniques are integrated in a systematic way to enable semantic search for images/videos, and can be tailored to solve problems in other multimedia-related domains. In image retrieval, two new methods of bridging the semantic gap are proposed: (1) For general content-based image retrieval, a stochastic mechanism is utilized to enable the long-term learning of high-level concepts from a set of training data, such as user access frequencies and access patterns of images. (2) In addition to whole-image retrieval, a novel multiple instance learning framework is proposed for object-based image retrieval, by which a user is allowed to more effectively search for images that contain multiple objects of interest. An enhanced image segmentation algorithm is developed to extract the object information from images. This segmentation algorithm is further used in video indexing and retrieval, by which a robust video shot/scene segmentation method is developed based on low-level visual feature comparison, object tracking, and audio analysis. Based on shot boundaries, a novel data mining framework is further proposed to detect events in soccer videos, while fully utilizing the multi-modality features and object information obtained through video shot/scene detection. Another contribution of this dissertation is the potential of the above techniques to be tailored and applied to other multimedia applications. This is demonstrated by their utilization in traffic video surveillance applications. The enhanced image segmentation algorithm, coupled with an adaptive background learning algorithm, improves the performance of vehicle identification. A sophisticated object tracking algorithm is proposed to track individual vehicles, while the spatial and temporal relationships of vehicle objects are modeled by an abstract semantic model.
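As a sketch of the multiple-instance intuition behind object-based retrieval (not the dissertation's actual framework), the following Python snippet scores an image by its best-matching segmented region, using a Gaussian kernel as a stand-in instance model; all names and the 2-D features are illustrative.

```python
import numpy as np

def instance_score(instance, target, scale=1.0):
    """Similarity of one region's feature vector to a learned concept point;
    a Gaussian kernel stands in for whatever instance model is trained."""
    return np.exp(-scale * np.sum((instance - target) ** 2))

def bag_score(bag, target):
    """MIL assumption: an image (bag) is relevant if at least one of its
    regions (instances) matches, so the bag takes its best instance's score."""
    return max(instance_score(inst, target) for inst in bag)

# Two images, each segmented into region feature vectors (2-D here for brevity).
image_a = [np.array([0.1, 0.9]), np.array([0.5, 0.5])]
image_b = [np.array([0.9, 0.1])]
concept = np.array([0.5, 0.5])
print(bag_score(image_a, concept), bag_score(image_b, concept))
```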
Abstract:
The Unified Modeling Language (UML) has quickly become the industry standard for object-oriented software development. It is being widely used in organizations and institutions around the world. However, UML is often found to be too complex for novice systems analysts. Although prior research has identified difficulties novice analysts encounter in learning UML, no viable solution has been proposed to address these difficulties. Sequence-diagram modeling, in particular, has largely been overlooked. The sequence diagram models the behavioral aspects of an object-oriented software system in terms of interactions among its building blocks, i.e. objects and classes. It is one of the most commonly used UML diagrams in practice. However, there has been little research on sequence-diagram modeling. The current literature scarcely provides effective guidelines for developing a sequence diagram. Such guidelines would be greatly beneficial to novice analysts who, unlike experienced systems analysts, do not possess the prior experience needed to easily learn how to develop a sequence diagram. There is thus a need for an effective sequence-diagram modeling technique for novices. This dissertation reports a research study that identified novice difficulties in modeling a sequence diagram and proposed a technique called CHOP (CHunking, Ordering, Patterning), designed to reduce cognitive load by addressing the cognitive complexity of sequence-diagram modeling. The CHOP technique was evaluated in a controlled experiment against a technique recommended in a well-known textbook, which was found to be representative of approaches provided in many textbooks as well as the practitioner literature. The results indicated that novice analysts were able to perform better using the CHOP technique. This outcome seems to have been enabled by the pattern-based heuristics provided by the technique. Meanwhile, novice analysts rated the CHOP technique as more useful, although not significantly easier to use, than the control technique. The study established that the CHOP technique is an effective sequence-diagram modeling technique for novice analysts.
Abstract:
This dissertation establishes a novel system for human face learning and recognition based on incremental multilinear Principal Component Analysis (PCA). Most existing face recognition systems need training data during the learning process. The system proposed in this dissertation utilizes an unsupervised or weakly supervised learning approach, in which the learning phase requires a minimal amount of training data. It also overcomes the inability of traditional systems to adapt during the testing phase, in which the decision process for newly acquired images continues to rely on the same old training data set. Consequently, whenever a new training set is to be used, the traditional approach requires regenerating the entire eigensystem. To speed up this computation, the proposed method instead uses the eigensystem generated from the old training set together with the new images to generate the new eigensystem more efficiently, in a so-called incremental learning process. In the empirical evaluation phase, two key factors are essential in evaluating the performance of the proposed method: (1) recognition accuracy and (2) computational complexity. In order to establish the most suitable algorithm for this research, a comparative analysis of the best-performing methods was carried out first; its results supported the initial utilization of multilinear PCA in this research. To address the computational complexity of the subspace update procedure, a novel incremental algorithm was established that combines the traditional sequential Karhunen-Loeve (SKL) algorithm with a newly developed incremental modified fast PCA algorithm. In order to utilize multilinear PCA in the incremental process, a new unfolding method was developed to affix the newly added data at the end of the previous data. Results of the incremental process based on these two methods bear out these theoretical improvements. Object tracking results on video images are also provided, as a further challenging task, to demonstrate the soundness of this incremental multilinear learning method.
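A compact Python sketch of a sequential Karhunen-Loeve style subspace update may clarify the incremental idea; this is a generic SKL step (mean updating and the dissertation's multilinear unfolding are omitted), with all names illustrative.

```python
import numpy as np

def skl_update(U, S, X_new, k):
    """One sequential Karhunen-Loeve step: fold a block of new (centered)
    column vectors X_new into an existing rank-k eigenbasis (U, S) without
    recomputing the eigensystem from all past data."""
    # Split the new data into its component inside and outside the subspace.
    proj = U.T @ X_new
    resid = X_new - U @ proj
    Q, R = np.linalg.qr(resid)
    # SVD of a small augmented matrix rotates and re-ranks the basis.
    top = np.hstack([np.diag(S), proj])
    bottom = np.hstack([np.zeros((R.shape[0], S.size)), R])
    Up, Sp, _ = np.linalg.svd(np.vstack([top, bottom]), full_matrices=False)
    U_new = np.hstack([U, Q]) @ Up
    return U_new[:, :k], Sp[:k]

# Start from the SVD of an initial batch, then fold in a new batch.
X0 = np.random.randn(10, 5)
U, S, _ = np.linalg.svd(X0, full_matrices=False)
U, S = skl_update(U[:, :3], S[:3], np.random.randn(10, 4), k=3)
print(U.shape, S)
```

The cost of each step depends on the subspace rank and the new block size, not on the total number of images seen so far, which is the source of the speed-up.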
Abstract:
With the explosive growth of the volume and complexity of document data (e.g., news, blogs, web pages), it has become a necessity to semantically understand documents and deliver meaningful information to users. These problems cut across data mining, information retrieval, and machine learning. For example, document clustering and summarization are two fundamental techniques for understanding document data and have attracted much attention in recent years. Given a collection of documents, document clustering aims to partition them into different groups to provide efficient document browsing and navigation mechanisms. One open question in document clustering is how to generate a meaningful interpretation for each document cluster produced by the clustering process. Document summarization is another effective technique for document understanding, which generates a summary by selecting sentences that deliver the major or topic-relevant information in the original documents. Improving automatic summarization performance and applying it to newly emerging problems are two valuable research directions. To assist people in capturing the semantics of documents effectively and efficiently, the dissertation focuses on developing effective data mining and machine learning algorithms and systems for (1) integrating document clustering and summarization to obtain meaningful document clusters with summarized interpretation, (2) improving document summarization performance and building document understanding systems to solve real-world applications, and (3) summarizing the differences and evolution of multiple document sources.
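To illustrate the integrated clustering-plus-interpretation idea in miniature (not the dissertation's actual algorithms), the following scikit-learn sketch clusters documents in TF-IDF space and labels each cluster with the terms closest to its centroid; the toy corpus and all parameter choices are assumptions.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "stock markets fell as investors weighed inflation data",
    "the central bank raised interest rates again",
    "the team won the championship after a late goal",
    "injury forces the striker to miss the final match",
]

# Cluster documents in TF-IDF space, then interpret each cluster by the
# highest-weight terms in its centroid -- a crude cluster "summary".
vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(docs)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for c, centroid in enumerate(km.cluster_centers_):
    top = centroid.argsort()[::-1][:3]
    print(f"cluster {c}: " + ", ".join(terms[i] for i in top))
```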
Abstract:
Public schools traditionally have been held accountable for educating the majority of the nation’s school children, and through the years, these schools have been evaluated in a variety of ways. Currently, evaluation measures for accountability purposes consist solely of standardized test scores. In the past, only test scores of general education students were analyzed. Laws governing the education of students with disabilities, however, have extended accountability measures not only to include those students, but to report their scores in a disaggregated form (No Child Left Behind Act, 2001). The recent emphasis on accountability and compliance has resulted in the need for schools to carefully examine how programs, services, and policies impact student achievement (Bowers & Figgers, 2003). Standards-based school reform and accountability systems have raised expectations about student learning outcomes for all students, including those with disabilities and minority students. Yet, overall, racial/ethnic minority students are performing well below their White non-Hispanic peers in most academic areas. Additionally, with respect to special education, there exists an enduring problem of disproportionate representation of racial/ethnic minority students (National Research Council, 2000). This study examined classroom placement (inclusive versus non-inclusive) relative to the academic performance of urban, low-socioeconomic Hispanic students with and without disabilities in secondary content-area classrooms. A mixed-method research design was used to investigate this important issue using data from a local school district and results from field observations. The study compared performance levels of four middle school Hispanic student subgroups (students with disabilities in inclusive settings, students without disabilities in inclusive settings, students with disabilities in resource settings, and students without disabilities in general education settings), each in their respective placements for two consecutive years, exploring existing practices within authentic settings. Significant differences were found in the relationship between educational placement and achievement across grade level and disability in math and reading. Additionally, clear and important differences were observed in student-teacher interactions. Recommendations for future researchers and stakeholders include soliciting responses from school teams composed of general education and special education teachers, administrative personnel, and students, as well as broadening the study across grade levels and exceptionalities.