819 results for e-learning systems
Abstract:
Recently, major processor manufacturers have announced a dramatic shift in their paradigm for increasing computing power over the coming years. Instead of focusing on faster clock speeds and more powerful single-core CPUs, the trend is clearly towards multi-core systems. This will also result in a paradigm shift for the development of algorithms for computationally expensive tasks, such as data mining applications. Work on parallel algorithms is, of course, not new per se, but concentrated efforts in many application domains are still missing. Multi-core systems, as well as clusters of workstations and even large-scale distributed computing infrastructures, provide new opportunities and pose new challenges for the design of parallel and distributed algorithms. Since data mining and machine learning systems rely on high-performance computing, research on the corresponding algorithms must be at the forefront of parallel algorithm research in order to keep pushing data mining and machine learning applications to be more powerful and, especially for the former, interactive. To bring together researchers and practitioners working in this exciting field, a workshop on parallel data mining was organized as part of PKDD/ECML 2006 (Berlin, Germany). The six contributions selected for the program describe various aspects of data mining and machine learning approaches featuring low to high degrees of parallelism. The first contribution addresses the classic problem of distributed association rule mining and focuses on communication efficiency to improve the state of the art. After this, a parallelization technique for speeding up decision tree construction by means of thread-level parallelism on shared-memory systems is presented. The next paper discusses the design of a parallel approach to the frequent subgraph mining problem for distributed-memory systems; this approach is based on a hierarchical communication topology to address issues related to multi-domain computational environments. The fourth paper describes the combined use and customization of software packages to facilitate top-down parallelism in the tuning of Support Vector Machines (SVMs), and the next contribution presents an interesting idea concerning parallel training of Conditional Random Fields (CRFs) and motivates their use in labeling sequential data. The last contribution focuses on very efficient feature selection: it describes a parallel algorithm for feature selection from random subsets. Selecting the papers included in this volume would not have been possible without the help of an international Program Committee that provided detailed reviews for each paper. We would also like to thank Matthew Otey, who helped with publicity for the workshop.
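As an illustration of the thread-level parallelism mentioned above for decision tree construction, the sketch below evaluates candidate split attributes concurrently on a shared-memory machine. It is a minimal Python sketch under assumed choices (Gini impurity, a thread pool over features) and does not reproduce the workshop paper's actual scheme.

```python
# Minimal sketch: evaluating candidate split attributes in parallel threads.
# The dataset, the Gini impurity criterion and the thread-pool layout are
# illustrative assumptions. Note: in CPython the GIL limits speedups for
# pure-Python work; the structure of the parallelization is what matters here.
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def gini(labels):
    """Gini impurity of a label array."""
    if labels.size == 0:
        return 0.0
    _, counts = np.unique(labels, return_counts=True)
    p = counts / labels.size
    return 1.0 - np.sum(p ** 2)

def best_split_for_feature(X, y, feature):
    """Best threshold (by weighted Gini) for one feature."""
    best = (np.inf, None)
    for t in np.unique(X[:, feature]):
        left, right = y[X[:, feature] <= t], y[X[:, feature] > t]
        score = (left.size * gini(left) + right.size * gini(right)) / y.size
        if score < best[0]:
            best = (score, t)
    return feature, best

def parallel_best_split(X, y, n_threads=4):
    """Evaluate all features concurrently and keep the overall best split."""
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        results = pool.map(lambda f: best_split_for_feature(X, y, f),
                           range(X.shape[1]))
    return min(results, key=lambda r: r[1][0])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 8))
    y = (X[:, 0] + X[:, 3] > 0).astype(int)
    print(parallel_best_split(X, y))
```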
Abstract:
A new sparse kernel density estimator is introduced based on the minimum integrated square error criterion for the finite mixture model. Since the mixing coefficients of the finite mixture model are constrained to lie on the multinomial manifold, we use the well-known Riemannian trust-region (RTR) algorithm to solve this problem. The first- and second-order Riemannian geometry of the multinomial manifold is derived and utilized in the RTR algorithm. Numerical examples are employed to demonstrate that the proposed approach is effective in constructing sparse kernel density estimators with an accuracy competitive with that of existing kernel density estimators.
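As a rough illustration of the problem setting, and not of the paper's Riemannian trust-region solver, the sketch below fits the mixing coefficients of a Gaussian kernel mixture on the probability simplex by projected gradient ascent on a held-out log-likelihood, then prunes negligible weights to obtain a sparse estimator. The bandwidth, step size and pruning threshold are assumptions.

```python
# Sketch of a sparse kernel mixture: weights constrained to the probability
# simplex (a Euclidean stand-in for the multinomial-manifold constraint),
# optimized by projected gradient ascent and then pruned.
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1))[0][-1]
    theta = (css[rho] - 1) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def kernel_matrix(x_eval, centers, h):
    """Gaussian kernels centered at `centers`, evaluated at `x_eval` (1-D data)."""
    d = x_eval[:, None] - centers[None, :]
    return np.exp(-0.5 * (d / h) ** 2) / (h * np.sqrt(2 * np.pi))

def sparse_kde_weights(train, val, h=0.3, steps=500, lr=0.05, tol=1e-3):
    """Projected-gradient fit of mixture weights on the simplex, then prune."""
    K = kernel_matrix(val, train, h)          # kernel densities at validation points
    w = np.full(train.size, 1.0 / train.size)
    for _ in range(steps):
        f = K @ w                             # estimated density at validation points
        grad = K.T @ (1.0 / np.maximum(f, 1e-12)) / val.size  # grad of mean log-likelihood
        w = project_simplex(w + lr * grad)
    w[w < tol] = 0.0                          # sparsify: drop negligible kernels
    return w / w.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = np.concatenate([rng.normal(-2, 0.5, 150), rng.normal(1, 1.0, 150)])
    rng.shuffle(data)
    w = sparse_kde_weights(data[:200], data[200:])
    print("active kernels:", np.count_nonzero(w), "of", w.size)
```

The simplex projection plays the role of the multinomial-manifold constraint that the paper handles geometrically within the RTR algorithm.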
Abstract:
Subspace clustering groups a set of samples drawn from a union of several linear subspaces into clusters, so that the samples in the same cluster come from the same linear subspace. In the majority of the existing work on subspace clustering, clusters are built based on feature information, while sample correlations in their original spatial structure are simply ignored. Besides, the original high-dimensional feature vectors contain noisy/redundant information, and the time complexity grows exponentially with the number of dimensions. To address these issues, we propose a tensor low-rank representation (TLRR) and sparse coding-based (TLRRSC) subspace clustering method that simultaneously considers feature information and spatial structure. TLRR seeks the lowest-rank representation over the original spatial structure along all spatial directions. Sparse coding learns a dictionary along the feature space, so that each sample can be represented by a few atoms of the learned dictionary. The affinity matrix used for spectral clustering is built from the joint similarities in both the spatial and the feature spaces. TLRRSC can thus capture the global structure and inherent feature information of the data, and provides a robust subspace segmentation from corrupted data. Experimental results on both synthetic and real-world data sets show that TLRRSC outperforms several established state-of-the-art methods.
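The affinity-plus-spectral-clustering step can be illustrated with a much simplified stand-in that uses sparse self-representation (Lasso) in place of the paper's tensor low-rank formulation; the regularization strength and the symmetrization rule are assumptions.

```python
# Simplified stand-in for the affinity-building step: sparse self-representation
# followed by spectral clustering. The tensor low-rank part of TLRRSC is not
# reproduced here; alpha and the symmetrization are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def sparse_self_representation(X, alpha=0.01):
    """Represent each sample as a sparse combination of the other samples.
    X has shape (n_features, n_samples)."""
    n = X.shape[1]
    C = np.zeros((n, n))
    for i in range(n):
        others = np.delete(np.arange(n), i)
        lasso = Lasso(alpha=alpha, max_iter=5000)
        lasso.fit(X[:, others], X[:, i])
        C[others, i] = lasso.coef_
    return C

def subspace_cluster(X, n_clusters, alpha=0.01):
    C = sparse_self_representation(X, alpha)
    W = np.abs(C) + np.abs(C).T            # symmetric affinity matrix
    sc = SpectralClustering(n_clusters=n_clusters, affinity="precomputed",
                            random_state=0)
    return sc.fit_predict(W)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two toy 1-D subspaces embedded in R^20
    basis = rng.normal(size=(20, 2))
    X = np.hstack([basis[:, [0]] @ rng.normal(size=(1, 30)),
                   basis[:, [1]] @ rng.normal(size=(1, 30))])
    print(subspace_cluster(X, n_clusters=2))
```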
Abstract:
Increasing costs and competitive business strategies are pushing sawmill enterprises to make an effort to optimize their process management. Organizational decisions mainly concentrate on performance and on the reduction of operational costs in order to maintain profit margins. Although many efforts have been made, effective utilization of resources, optimal planning and maximum productivity in sawmills are still challenging for the sawmill industry. Many researchers have proposed simulation models in combination with optimization techniques to address problems of integrated logistics optimization. The combination of simulation and optimization identifies an optimal strategy by simulating all the complex behaviours of the system under consideration, including its objectives and constraints. During the past decade, a large number of studies were conducted to simulate operational inefficiencies in order to find optimal solutions. This paper reviews recent developments and challenges associated with simulation and optimization techniques. It is believed that this review will provide a solid foundation for the authors' further work on optimizing sawmill yard operations.
Abstract:
Mobile learning involves the use of mobile devices to participate in learning activities. Most e-learning activities are available to participants through learning systems such as learning content management systems (LCMS). Due to certain challenges, LCMS are not equally accessible on all mobile devices. This study investigates the actual use, perceived usefulness and user experience of LCMS use on mobile phones at Makerere University in Uganda. The study identifies challenges pertaining to use and discusses how to improve LCMS use on mobile phones. Such solutions are a cornerstone in enabling and improving mobile learning. Data was collected by means of focus group discussions, an online survey designed on the basis of the Technology Acceptance Model (TAM), and LCMS log files of user activities. Data was collected from two courses in which Moodle was used as the learning platform. The results indicate positive attitudes towards the use of LCMS on phones, but also substantial challenges, which are content-related and technical in nature.
Abstract:
Bin planning (arrangement) is a key factor in the timber industry. Improper planning of the storage bins may lead to inefficient transportation of resources, which threatens the overall efficiency and thereby limits the profit margins of sawmills. To address this challenge, a simulation model has been developed. However, as numerous alternatives are available for arranging bins, simulating all possibilities would take an enormous amount of time and is computationally infeasible. A discrete-event simulation model incorporating meta-heuristic algorithms has therefore been investigated in this study. Preliminary investigations indicate that the results achieved by the GA-based simulation model are promising and better than those of the other meta-heuristic algorithm. Further, a sensitivity analysis has been carried out on the GA-based optimal arrangement, which contributes to gaining insights and knowledge about the real system and ultimately leads to improved efficiency in sawmill yards. It is expected that the results achieved in this work will support the timber industry in making optimal decisions with respect to the arrangement of storage bins in a sawmill yard.
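The simulation-plus-meta-heuristic pattern described above can be sketched as a genetic algorithm whose fitness function is a simulation run. In the sketch below the "simulation" is a toy cost model, and the permutation encoding, operators and GA parameters are illustrative assumptions rather than the study's actual model.

```python
# Sketch of a GA whose fitness comes from a (stand-in) simulation of bin
# arrangements. The cost model below replaces a real discrete-event simulation
# and, like the GA parameters, is an illustrative assumption.
import random

N_BINS = 12
_rng = random.Random(1)
DEMAND = [_rng.randint(1, 20) for _ in range(N_BINS)]   # handling demand per bin (toy data)

def simulate_cost(arrangement):
    """Stand-in 'simulation': travel cost = bin position x handling demand."""
    return sum(pos * DEMAND[bin_id] for pos, bin_id in enumerate(arrangement))

def order_crossover(p1, p2, rng):
    """Order crossover (OX) for permutation chromosomes."""
    a, b = sorted(rng.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    fill = [g for g in p2 if g not in child]
    for i in range(len(child)):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child

def mutate(perm, rng, rate=0.2):
    """Swap mutation."""
    if rng.random() < rate:
        i, j = rng.sample(range(len(perm)), 2)
        perm[i], perm[j] = perm[j], perm[i]
    return perm

def run_ga(pop_size=40, generations=100, seed=0):
    rng = random.Random(seed)
    pop = [rng.sample(range(N_BINS), N_BINS) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=simulate_cost)
        elite = pop[: pop_size // 2]
        children = [mutate(order_crossover(rng.choice(elite), rng.choice(elite), rng), rng)
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    best = min(pop, key=simulate_cost)
    return best, simulate_cost(best)

if __name__ == "__main__":
    arrangement, cost = run_ga()
    print("best arrangement:", arrangement, "cost:", cost)
```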
Abstract:
This work analyses the application of the so-called corporate university model in business enterprises in Brazil. What impacts are observed in the training and development structures of companies that adopt learning systems defined within the corporate university? It aims to offer an analytical basis to those interested in this educational trend and in knowledge management as a strategy for a sustained competitive edge in today's business environment.
Abstract:
Distance education has undergone major transformations, especially after the advent of the internet and of information and communication technologies (ICTs). The growth of this modality has raised numerous questions about quality and learning outcomes in virtual environments. Researchers have investigated methods for assessing the benefits promoted by e-learning from a diverse range of perspectives. The objective of this research is to evaluate the impact of the constructs system quality, information quality and service quality on student satisfaction and on the use of virtual learning environments in e-learning settings, using as its theoretical basis the e-learning success model adapted from the DeLone and McLean model by Holsapple and Lee-Post. A survey was administered through an online course offered to 291 students from public and private institutions in all regions of Brazil. Structural equation modeling and confirmatory factor analysis were used for data treatment and analysis. The results show that system use is affected by variation in the constructs system quality, information quality and service quality, whereas student satisfaction is preceded by the perceived quality of information and of service. Many of the benefits generated by distance education stem from student satisfaction and from the intensity with which students use the learning system. By identifying the indicators that precede these variables, educational managers can plan their investments to meet the most important demands, and can also use this information to deal with one of the biggest problems in distance education: dropout.
Abstract:
The Backpropagation Algorithm (BA) is the standard method for training multilayer Artificial Neural Networks (ANN), although it converges very slowly and can get stuck in a local minimum. We present a new method for neural network training using the BA inspired by constructivism, a literacy method proposed by Emilia Ferreiro based on Piaget's philosophy. Simulation results show that the proposed configuration usually obtains a lower final mean square error when compared with the standard BA and with the BA with a momentum factor.
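For reference, a minimal version of the baselines the paper compares against, standard backpropagation with an optional momentum term on a one-hidden-layer network, is sketched below; the constructivism-inspired training itself is not reproduced, and the layer size, learning rate and momentum are assumed values.

```python
# Minimal standard backpropagation with an optional momentum term for a
# one-hidden-layer network (sigmoid units, mean-squared-error loss).
# Layer size, learning rate and momentum are illustrative assumptions.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_mlp(X, y, hidden=8, lr=0.5, momentum=0.9, epochs=2000, seed=0):
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    V1 = np.zeros_like(W1)                      # momentum buffers
    V2 = np.zeros_like(W2)
    for _ in range(epochs):
        # forward pass
        H = sigmoid(X @ W1)
        out = sigmoid(H @ W2)
        # backward pass (MSE loss)
        err = out - y
        d_out = err * out * (1 - out)
        d_hid = (d_out @ W2.T) * H * (1 - H)
        g2 = H.T @ d_out / X.shape[0]
        g1 = X.T @ d_hid / X.shape[0]
        # momentum update
        V2 = momentum * V2 - lr * g2
        V1 = momentum * V1 - lr * g1
        W2 += V2
        W1 += V1
    return W1, W2

if __name__ == "__main__":
    # XOR toy problem
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)
    W1, W2 = train_mlp(X, y)
    print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))
```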
Abstract:
In this article, an implementation of structural health monitoring process automation based on vibration measurements is proposed. The work presents an alternative approach whose intent is to exploit the capability of model updating techniques, associated with neural networks, to be used in a process of automated fault detection. The updating procedure supplies a reliable model that makes it possible to simulate any damage condition in order to establish a direct correlation between faults and deviations in the response of the model. The ability of the neural networks to recognize, from known signatures, changes in the actual data of a model in real time is explored to investigate changes in the actual operating conditions of the system. The network is trained using a compressed spectrum signal created for each specific type of fault. Different fault conditions for a frame structure are evaluated using simulated data as well as measured experimental data.
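As a rough illustration of the monitoring idea (not the authors' model-updating pipeline), the sketch below trains a small neural network to map compressed spectrum features to fault classes; the synthetic vibration signals, the band-averaging compression and the classifier settings are all assumptions.

```python
# Illustrative sketch: classify fault conditions from compressed spectra with a
# small neural network. The synthetic signals, the band-averaging compression
# and the MLP settings are assumptions, not the paper's procedure.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def make_signal(fault, rng, n=1024):
    """Toy vibration signal whose dominant frequency shifts with the fault class."""
    t = np.arange(n) / n
    freq = 40 + 15 * fault                       # the fault changes the resonance
    return np.sin(2 * np.pi * freq * t) + 0.3 * rng.normal(size=n)

def compressed_spectrum(signal, n_bands=32):
    """Compress the magnitude spectrum by averaging it into frequency bands."""
    mag = np.abs(np.fft.rfft(signal))
    return mag[: (mag.size // n_bands) * n_bands].reshape(n_bands, -1).mean(axis=1)

rng = np.random.default_rng(0)
X = np.array([compressed_spectrum(make_signal(f, rng))
              for f in range(3) for _ in range(100)])
y = np.repeat(np.arange(3), 100)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(Xtr, ytr)
print("test accuracy:", clf.score(Xte, yte))
```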
Abstract:
This article reviews the main codes used to detect and correct errors in data communication, specifically in computer networks. The Hamming code and the Cyclic Redundancy Check (CRC) are the focus of this article, together with a CRC hardware implementation. Each code is reviewed in detail in order to fill gaps in the literature and to make the subject accessible to computer science and engineering students, as well as to anyone interested in learning how errors in data communication are handled.
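A minimal software sketch of the CRC principle, bitwise modulo-2 polynomial division, is given below; the CRC-8 polynomial 0x07 is chosen only as an example, and the article's hardware implementation is not reproduced.

```python
# Bitwise CRC computation by polynomial (modulo-2) division. The CRC-8
# polynomial x^8 + x^2 + x + 1 (0x07) is chosen only as an example.
def crc8(data: bytes, poly: int = 0x07, init: int = 0x00) -> int:
    crc = init
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 0x80:                 # MSB set: subtract (XOR) the polynomial
                crc = ((crc << 1) ^ poly) & 0xFF
            else:
                crc = (crc << 1) & 0xFF
    return crc

if __name__ == "__main__":
    msg = b"123456789"
    checksum = crc8(msg)
    print(f"CRC-8 of {msg!r}: 0x{checksum:02X}")
    # The receiver recomputes the CRC over the message and compares it with
    # the transmitted checksum; a mismatch signals a transmission error.
    assert crc8(msg) == checksum
```

A Hamming encoder could be sketched in the same spirit by XOR-ing parity bits over fixed bit positions of the data word.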
Abstract:
Interactive visual representations complement traditional statistical and machine learning techniques for data analysis, allowing users to play a more active role in the knowledge discovery process and making the whole process more understandable. Though visual representations are applicable to several stages of the knowledge discovery process, a common use of visualization is in the initial stages, to explore and organize a sometimes unknown and complex data set. In this context, the integrated and coordinated use of multiple graphical representations (that is, where user actions can affect multiple visualizations when desired) allows data to be observed from several perspectives and offers richer information than isolated representations. In this paper we propose an underlying model for an extensible and adaptable environment that allows independently developed visualization components to be gradually integrated into a user-configured knowledge discovery application. Because a major requirement when using multiple visual techniques is the ability to link them, so that user actions executed on one representation propagate to the others when desired, the model also allows runtime configuration of coordinated user actions over different visual representations. We illustrate how this environment is being used to assist data exploration and organization in a climate classification problem.
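The coordination mechanism can be illustrated with a tiny observer-style sketch in which independently developed visualization components register with a coordinator that propagates user actions such as selection; all class and method names here are hypothetical and are not taken from the proposed model.

```python
# Hypothetical sketch of coordinated visualizations: components register with a
# coordinator, and a user action on one component propagates to the others.
from typing import Callable, Dict, List

class Coordinator:
    """Routes user actions (e.g. 'select') to every registered component."""
    def __init__(self):
        self._handlers: Dict[str, List[Callable]] = {}

    def register(self, action: str, handler: Callable) -> None:
        self._handlers.setdefault(action, []).append(handler)

    def broadcast(self, action: str, payload, source) -> None:
        for handler in self._handlers.get(action, []):
            if getattr(handler, "__self__", None) is not source:  # don't echo to originator
                handler(payload)

class ScatterPlot:
    def __init__(self, name, coordinator):
        self.name, self.coordinator = name, coordinator
        coordinator.register("select", self.on_select)

    def user_selects(self, item_ids):
        print(f"{self.name}: user selected {item_ids}")
        self.coordinator.broadcast("select", item_ids, source=self)

    def on_select(self, item_ids):
        print(f"{self.name}: highlighting {item_ids}")

if __name__ == "__main__":
    coord = Coordinator()
    a, b = ScatterPlot("scatter", coord), ScatterPlot("parallel-coords", coord)
    a.user_selects([3, 7, 9])   # selection propagates from 'scatter' to 'parallel-coords'
```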
Abstract:
Autonomous robots must be able to learn and maintain models of their environments. In this context, the present work considers techniques for the classification and extraction of features from images, combined with artificial neural networks, in order to use them in the mapping and localization system of the mobile robot of the Laboratory of Automation and Evolutive Computer (LACE). To do this, the robot uses a sensory system composed of ultrasound sensors and a catadioptric vision system formed by a camera and a conical mirror. The mapping system is composed of three modules, two of which are presented in this paper: the classifier and the characterizer modules. The first module uses a hierarchical neural network to perform the classification; the second uses techniques for extracting image attributes and recognizing invariant patterns extracted from the set of place images. The neural network of the classifier module is structured in two layers, reason and intuition, and is trained to classify each place explored by the robot into one of four predefined classes. The final result of the exploration is the construction of a topological map of the explored environment. Results obtained through simulation of both modules of the mapping system are presented in this paper. © 2008 IEEE.
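The topological-map outcome described above can be illustrated with a small sketch that turns a sequence of classified places into a graph whose nodes are places and whose edges link consecutively visited places; the class names and the visit sequence are illustrative assumptions, not the paper's data.

```python
# Hypothetical sketch of building a topological map during exploration: each
# classified place becomes a node and consecutively visited places are linked.
from collections import defaultdict

PLACE_CLASSES = ("corridor", "room", "hall", "doorway")   # four predefined classes (names assumed)

class TopologicalMap:
    def __init__(self):
        self.edges = defaultdict(set)

    def add_transition(self, place_a: str, place_b: str) -> None:
        """Link two places visited consecutively by the robot."""
        if place_a != place_b:
            self.edges[place_a].add(place_b)
            self.edges[place_b].add(place_a)

def build_map(classified_sequence):
    """classified_sequence: place labels output by the classifier module, in visit order."""
    topo = TopologicalMap()
    for prev, curr in zip(classified_sequence, classified_sequence[1:]):
        topo.add_transition(prev, curr)
    return topo

if __name__ == "__main__":
    visits = ["corridor", "corridor", "doorway", "room", "doorway", "corridor", "hall"]
    m = build_map(visits)
    for place, neighbours in m.edges.items():
        print(place, "->", sorted(neighbours))
```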
Abstract:
Musical genre classification has become paramount in recent years, mainly in large multimedia datasets, to which new songs and genres can be added at any moment by anyone. In this context, we have seen the growth of music recommendation systems, which can benefit several applications, such as social networks and collective music libraries. In this work, we introduce a recent machine learning technique named Optimum-Path Forest (OPF) for musical genre classification, which has been demonstrated to be comparable to state-of-the-art pattern recognition techniques, but much faster for some applications. Experiments on two public datasets were conducted against Support Vector Machines and a Bayesian classifier to show the validity of our work. In addition, we carried out an experiment using very recent hybrid feature selection techniques based on OPF to speed up the feature extraction process. © 2011 International Society for Music Information Retrieval.
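For illustration, the sketch below is a compact re-implementation of the core supervised OPF idea with the f_max path-cost function (MST-based prototype selection, conquest of the training set by minimum-cost paths, and classification by the cheapest path offered by any training node); it is not necessarily the implementation used by the authors, and the toy data are assumed.

```python
# Simplified sketch of supervised Optimum-Path Forest (OPF) classification with
# the f_max path-cost function. Compact re-implementation for illustration only.
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse.csgraph import minimum_spanning_tree

def opf_fit(X, y):
    D = cdist(X, X)
    mst = minimum_spanning_tree(D).toarray()
    # Prototypes: endpoints of MST edges connecting different classes.
    proto = np.zeros(len(y), dtype=bool)
    for i, j in zip(*np.nonzero(mst)):
        if y[i] != y[j]:
            proto[i] = proto[j] = True
    # Conquer training samples with f_max costs (Prim/Dijkstra-like, O(n^2)).
    cost = np.where(proto, 0.0, np.inf)
    label = y.copy()
    done = np.zeros(len(y), dtype=bool)
    for _ in range(len(y)):
        s = np.argmin(np.where(done, np.inf, cost))
        done[s] = True
        new_cost = np.maximum(cost[s], D[s])
        better = (~done) & (new_cost < cost)
        cost[better] = new_cost[better]
        label[better] = label[s]
    return cost, label

def opf_predict(X_train, cost, label, X_test):
    D = cdist(X_test, X_train)
    path_cost = np.maximum(cost[None, :], D)   # f_max over (training node -> test sample)
    return label[np.argmin(path_cost, axis=1)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(3, 1, (50, 4))])
    y = np.repeat([0, 1], 50)
    cost, label = opf_fit(X, y)
    print(opf_predict(X, cost, label, X[:5] + 0.1))
```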
Abstract:
Voice-based user interfaces have been actively pursued with the aim of helping individuals with motor impairments, providing natural interfaces for communicating with machines. In this work, we introduce a recent machine learning technique named Optimum-Path Forest (OPF) for a voice-based robot interface, which has been demonstrated to be comparable to state-of-the-art pattern recognition techniques, but much faster. Experiments were conducted against Support Vector Machines, Neural Networks and a Bayesian classifier to show the robustness of OPF. The proposed architecture provides high accuracy rates combined with low computational times. © 2012 IEEE.