779 results for computer-mediated learning


Relevance:

30.00%

Publisher:

Abstract:

Scheduling problems are generally NP-hard combinatorial problems, and much research has gone into solving them heuristically. However, most previous approaches are problem-specific, and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary process of the survival of the fittest, Genetic Algorithms (GAs) have attracted much attention for solving difficult scheduling problems in recent years. Some obstacles exist when using GAs: there is no canonical mechanism to deal with constraints, which arise in most real-world scheduling problems, and making small changes to a solution is difficult. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs work on a mapped solution space and separate decoding routines then build solutions to the original problem. In our previous indirect GAs, learning is implicit and is restricted to the efficient adjustment of weights for a set of rules that are used to construct schedules. The major limitation of those approaches is that they learn in a non-human way: like most existing construction algorithms, once the best weight combination is found, the rules used in the construction process are fixed at each iteration. However, a long sequence of moves is normally needed to construct a schedule, and using fixed rules at each move is therefore unreasonable and inconsistent with human learning. When a human scheduler is working, he normally builds a schedule step by step following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and judge solution quality even before the schedule is complete, and can therefore finish it using flexible, rather than fixed, rules. In this research we intend to design more human-like scheduling algorithms, using ideas derived from Bayesian Optimization Algorithms (BOA) and Learning Classifier Systems (LCS) to implement explicit learning from past solutions. BOA can be applied to learn to identify good partial solutions and to complete them by building a Bayesian network of the joint distribution of solutions [3]. A Bayesian network is a directed acyclic graph with each node corresponding to one variable, and each variable corresponding to an individual rule by which a schedule is constructed step by step. The conditional probabilities are computed from an initial set of promising solutions. Subsequently, a new value for each node is generated using the corresponding conditional probabilities, until values for all nodes have been generated. Another set of rule strings is generated in this way, some of which replace previous strings based on fitness selection. If the stopping conditions are not met, the Bayesian network is updated again using the current set of good rule strings. The algorithm thereby tries to explicitly identify and mix promising building blocks. It should be noted that for most scheduling problems the structure of the network model is known and all the variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data, so learning can amount to 'counting' in the case of multinomial distributions.
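To make the learning-by-counting idea concrete, the following minimal sketch treats the network as a simple chain in which the rule chosen at each construction step depends only on the rule chosen at the previous step. This chain structure, the names `good_rule_strings` and `n_rules`, and the Laplace smoothing are illustrative assumptions, not the network or algorithm described in [3] or used in this research.

```python
import numpy as np

def learn_model(good_rule_strings, n_rules, alpha=1.0):
    """Fit the model by counting: a marginal for step 0 and Laplace-smoothed
    conditionals P(rule at step t | rule at step t-1) for the later steps."""
    S = np.asarray(good_rule_strings)                    # shape (n_solutions, n_steps)
    p0 = np.bincount(S[:, 0], minlength=n_rules) + alpha
    p0 = p0 / p0.sum()
    cond = np.full((S.shape[1] - 1, n_rules, n_rules), alpha)
    for s in S:
        for t in range(1, len(s)):
            cond[t - 1, s[t - 1], s[t]] += 1
    cond /= cond.sum(axis=2, keepdims=True)
    return p0, cond

def sample_rule_string(p0, cond, rng=None):
    """Generate a new rule string node by node from the learned distributions."""
    rng = rng or np.random.default_rng()
    string = [int(rng.choice(len(p0), p=p0))]
    for t in range(cond.shape[0]):
        string.append(int(rng.choice(cond.shape[2], p=cond[t, string[-1]])))
    return string
```

Strings sampled this way would then compete with the existing ones under fitness selection, as described above, before the counts are refreshed from the surviving set.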
In the LCS approach, each rule has a strength indicating its current usefulness in the system, and this strength is constantly reassessed [4]. To implement sophisticated learning based on previous solutions, an improved LCS-based algorithm is designed, consisting of three steps: initialization, reinforcement and selection. The initialization step assigns each rule at each stage a constant initial strength; rules are then chosen using the Roulette Wheel strategy. The reinforcement step strengthens the rules used in the previous solution, leaving the strength of unused rules unchanged. The selection step selects fitter rules for the next generation. It is envisaged that the LCS part of the algorithm will be used as a hill climber for the BOA. This is exciting and ambitious research, which might provide the stepping-stone for a new class of scheduling algorithms. Data sets from nurse scheduling and mall problems will be used as test-beds. It is envisaged that once the concept has been proven successful, it will be incorporated into general scheduling algorithms. It is also hoped that this research will give some preliminary answers about how to include human-like learning in scheduling algorithms, and it may therefore be of interest to researchers and practitioners in the areas of scheduling and evolutionary computation.
References
1. Aickelin, U. and Dowsland, K. (2003) 'Indirect Genetic Algorithm for a Nurse Scheduling Problem', Computers & Operations Research (in press).
2. Li, J. and Kwan, R.S.K. (2003) 'Fuzzy Genetic Algorithm for Driver Scheduling', European Journal of Operational Research 147(2): 334-344.
3. Pelikan, M., Goldberg, D. and Cantu-Paz, E. (1999) 'BOA: The Bayesian Optimization Algorithm', IlliGAL Report No. 99003, University of Illinois.
4. Wilson, S. (1994) 'ZCS: A Zeroth-level Classifier System', Evolutionary Computation 2(1): 1-18.
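As a companion sketch of the strength-based selection and reinforcement described above, one cycle might look roughly as follows. The per-stage strength table, the `evaluate` callback that decodes the chosen rules into a schedule, and the reward value are all hypothetical placeholders; the abstract does not specify them.

```python
import numpy as np

def roulette_select(strengths, rng):
    """Roulette Wheel selection: pick a rule with probability proportional to its strength."""
    p = strengths / strengths.sum()
    return int(rng.choice(len(strengths), p=p))

def lcs_cycle(strengths, evaluate, reward=1.0, rng=None):
    """One cycle: choose a rule per stage, build and evaluate the schedule, then
    reinforce the rules that were used when the solution improves; unused rules
    keep their strength unchanged."""
    rng = rng or np.random.default_rng()
    chosen = [roulette_select(stage_strengths, rng) for stage_strengths in strengths]
    fitness, improved = evaluate(chosen)      # hypothetical decoder/evaluator
    if improved:
        for stage, rule in enumerate(chosen):
            strengths[stage][rule] += reward
    return chosen, fitness

# Initialization step, e.g. a constant initial strength per rule at each stage:
# strengths = [np.ones(n_rules) for _ in range(n_stages)]
```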

Relevance:

30.00%

Publisher:

Abstract:

Introduction: Brain-computer interface (BCI) is a promising new technology with possible applications in neurorehabilitation after spinal cord injury. A BCI based on movement imagination or attempted movement, coupled with functional electrical stimulation (FES), enables the simultaneous activation of the motor cortices and the muscles they control. When using the BCI coupled with FES (known as BCI-FES), the subject activates the motor cortex by attempting or imagining movement of a limb. The BCI system detects this motor cortex activation and activates the FES attached to the muscles of the limb the subject is attempting or imagining to move. In this way the afferent and efferent pathways of the nervous system are activated simultaneously. This simultaneous activation encourages Hebbian-type learning, which could be beneficial in functional rehabilitation after spinal cord injury (SCI). FES is already in use in several SCI rehabilitation units, but there is currently not enough clinical evidence to support the use of BCI-FES for rehabilitation. Aims: The main aim of this thesis is to assess outcomes in sub-acute tetraplegic patients using BCI-FES for functional hand rehabilitation. In addition, the thesis explores different methods for assessing neurological rehabilitation, especially after BCI-FES therapy. The thesis also investigates mental rotation as a possible rehabilitation method in SCI. Methods: Following an investigation into applicable methods for implementing a rehabilitative BCI, a BCI based on attempted movement was built and then used to construct a BCI-FES system. The BCI-FES system was used to deliver therapy to seven sub-acute tetraplegic patients, who were scheduled to receive the therapy over a total period of 20 working days; these seven patients form the 'BCI-FES' group. Five further patients were recruited and offered an equivalent quantity of FES without the BCI; these five patients form the 'FES-only' group. Neurological and functional measures were investigated and used to assess both patient groups before and after therapy. Results: The results of the two groups were compared. The patients in the BCI-FES group showed greater improvement on outcome measures assessing neurological change. The neurological changes following the use of BCI-FES showed that, during movement attempts, the activation of the motor cortex areas of the SCI patients became closer to the activation found in healthy individuals. Both the intensity of the activation and its spatial localisation improved, suggesting desirable cortical reorganisation. Furthermore, the responses of the somatosensory cortex during sensory stimulation provided clear evidence of greater improvement in the patients who used the BCI-FES: missing somatosensory evoked potential peaks returned more often in the BCI-FES group, while there was no overall change in the FES-only group. Although the BCI-FES group showed greater neurological improvement, they did not show greater functional improvement than the FES-only group. This was attributed mainly to the short duration of the study, in which therapy was delivered for only 20 working days. Conclusions: The results obtained from this study show that BCI-FES may induce cortical changes in the desired direction, at least faster than FES alone.
The observation of greater improvement in the patients who used the BCI-FES is an encouraging result in neurorehabilitation and shows the potential of thought-controlled FES as a neurorehabilitation tool. These results support other studies that have shown the potential of BCI-FES in rehabilitation following neurological injuries that lead to movement impairment. Although the results are promising, further studies are necessary given the small number of subjects in the current study.
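As a rough illustration of the closed loop described above (and only an illustration: the thesis does not specify its detection algorithm, channels or thresholds, and the `stimulator` object is hypothetical), attempted movement is commonly detected as a drop in sensorimotor mu-band power (event-related desynchronisation), which can then gate the FES:

```python
import numpy as np
from scipy.signal import welch

def mu_band_power(eeg_window, fs=250, band=(8.0, 12.0)):
    """Mean power in the mu band over the motor-cortex channels of one EEG window."""
    freqs, psd = welch(eeg_window, fs=fs, nperseg=min(fs, eeg_window.shape[-1]), axis=-1)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[..., mask].mean()

def bci_fes_step(eeg_window, baseline_power, stimulator, ratio=0.7):
    """Trigger FES when mu power falls below a fraction of the resting baseline,
    i.e. when attempted movement is detected."""
    if mu_band_power(eeg_window) < ratio * baseline_power:
        stimulator.trigger()          # hypothetical FES interface
        return True
    return False
```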

Relevance:

30.00%

Publisher:

Abstract:

The use of case studies can be regarded as an important link between theory and practice. Case studies allow foundational theoretical knowledge to be applied and interdisciplinary competences to be developed. They can thus make an important contribution to professional competence precisely where practical experience is not possible within initial and continuing education. For this reason, the use of case studies should not be reserved for the 'classical' applied disciplines such as law, business administration or psychology. In computer science, too, they can be an important complement to the methods used so far. The concept presented here for the didactic and technical preparation of case studies, developed in the context of the New Economy project and illustrated with the example of IT education and training, is intended to stimulate this discussion. With the presented approach it is possible to automatically generate different methodological approaches to a case study for computer-based presentation and to link them with subject content. This provides a decisive added value over the previous static and self-contained presentations. The resulting leap in quality in the use of case studies in university and in-company education and training makes an important contribution to the practice-oriented design of blended learning approaches. (DIPF/Orig.)

Relevance:

30.00%

Publisher:

Abstract:

Visual recognition is a fundamental research topic in computer vision. This dissertation explores the datasets, features, learning methods, and models used for visual recognition. In order to train visual models and evaluate different recognition algorithms, the dissertation develops an approach to collecting object image datasets from web pages, using an analysis of both the text around an image and the image's appearance. The method exploits established online knowledge resources (Wikipedia pages for text; the Flickr and Caltech datasets for images), which provide rich text and object appearance information. Results are reported on two datasets: on Berg's collection of 10 animal categories we significantly outperform previous approaches, and on an additional set of 5 categories experimental results show the effectiveness of the method. Images are represented as features for visual recognition. This dissertation introduces a text-based image feature and demonstrates that it consistently improves performance on hard object classification problems. The feature is built using an auxiliary dataset of images annotated with tags, downloaded from the Internet; image tags are noisy. The method obtains the text feature of an unannotated image from the tags of its k nearest neighbours in this auxiliary collection. A visual classifier presented with an object viewed under novel circumstances (say, a new viewing direction) must rely on its visual examples, but this text feature may not change, because the auxiliary dataset likely contains a similar picture: while the tags associated with images are noisy, they are more stable than appearance when viewing conditions change. The performance of this feature is tested on the PASCAL VOC 2006 and 2007 datasets. The feature performs well: it consistently improves the performance of visual object classifiers and is particularly effective when the training dataset is small. As more and more training data is collected, computational cost becomes a bottleneck, especially when training sophisticated classifiers such as kernelized SVMs. This dissertation proposes a fast training algorithm called the Stochastic Intersection Kernel Machine (SIKMA). The proposed training method will be useful for many vision problems, as it can produce a kernel classifier that is more accurate than a linear classifier and can be trained on tens of thousands of examples in two minutes. It processes training examples one by one in sequence, so memory is no longer the bottleneck for processing large-scale datasets. This approach is applied to train classifiers for Flickr groups, which have many training examples per group. The resulting Flickr group prediction scores can be used to measure the similarity between two images. Experimental results on the Corel dataset and a PASCAL VOC dataset show that the learned Flickr features perform better for image matching, retrieval, and classification than conventional visual features. Visual models are usually trained to best separate positive and negative training examples. However, when recognizing a large number of object categories, there may not be enough training examples for most objects, due to the intrinsic long-tailed distribution of objects in the real world. This dissertation proposes an approach that uses comparative object similarity.
The key insight is that, given a set of object categories which are similar and a set of categories which are dissimilar, a good object model should respond more strongly to examples from similar categories than to examples from dissimilar categories. The dissertation develops a regularized kernel machine algorithm that uses this category-dependent similarity regularization. Experiments on hundreds of categories show that the method yields significant improvements for categories with few or even no positive examples.
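To illustrate the text-based image feature described above, a minimal sketch of the k-nearest-neighbour tag transfer might look like this. The Euclidean distance on visual descriptors, the uniform averaging of neighbour tags, and the variable names are assumptions made for illustration rather than the dissertation's exact design.

```python
import numpy as np

def knn_text_feature(query_desc, aux_descs, aux_tags, k=50):
    """Text feature for an unannotated image: average the tag vectors of its
    k visually nearest neighbours in a tagged auxiliary image collection."""
    dists = np.linalg.norm(aux_descs - query_desc, axis=1)   # distances to every auxiliary image
    nearest = np.argsort(dists)[:k]                          # indices of the k nearest neighbours
    feature = aux_tags[nearest].mean(axis=0)                 # pooled (noisy) tag histogram
    return feature / (np.linalg.norm(feature) + 1e-12)       # L2-normalise for the classifier
```

Because the pooled tag histogram is averaged over many neighbours, it tends to stay stable when the query object's appearance changes, which is the property the abstract emphasises.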

Relevance:

30.00%

Publisher:

Abstract:

In this paper we envision didactical concepts for university education based on self-responsible and project-based learning, and we outline principles of adequate technical support. Using the scenario technique, we describe how a fictional student named Anna organizes her studies of informatics at a fictional university, from the first days of her studies onward, in order to build a career for herself. (DIPF/Orig.)

Relevance:

30.00%

Publisher:

Abstract:

This commentary will use recent events in Cornwall to highlight the ongoing abuse of adults with learning disabilities in England. It will critically explore how two parallel policy agendas – namely, the promotion of choice and independence for adults with learning disabilities and the development of adult protection policies – have failed to connect, thus allowing abuse to continue to flourish. It will be argued that the abuse of people with learning disabilities can only be minimised by policies which reflect an understanding that choice and independence must necessarily be mediated by effective adult protection measures. Such protection needs to include not only an appropriate regulatory framework, access to justice and well-qualified staff, but also a more critical and reflective approach to the current orthodoxy which promotes choice and independence as the only acceptable goals for any person with a learning disability.

Relevance:

30.00%

Publisher:

Abstract:

The size of online image datasets is constantly increasing. For a dataset with millions of images, retrieval becomes seemingly intractable for exhaustive similarity-search algorithms. Hashing methods, which encode high-dimensional descriptors into compact binary strings, have become very popular because of their efficiency in search and their compact storage. In the first part, we propose a multimodal retrieval method based on latent feature models. The procedure consists of a nonparametric Bayesian framework for learning underlying, semantically meaningful abstract features in a multimodal dataset, a probabilistic retrieval model that allows cross-modal queries, and an extension for relevance feedback. In the second part, we focus on supervised hashing with kernels. We describe a flexible hashing procedure that treats binary codes and pairwise semantic similarity as latent and observed variables, respectively, in a probabilistic model based on Gaussian processes for binary classification. We present a scalable inference algorithm using the sparse pseudo-input Gaussian process (SPGP) model and distributed computing. In the last part, we define an incremental hashing strategy for dynamic databases, where new images are added frequently. The method is based on a two-stage classification framework using binary and multi-class SVMs, and it enforces balance in the binary codes through an imbalance penalty, yielding higher-quality codes. Hash functions are learned by an efficient algorithm in which the NP-hard problem of finding optimal binary codes is solved via cyclic coordinate descent and the SVMs are trained in a parallelized, incremental manner. For modifications such as adding images from an unseen class, we propose an incremental procedure for effective and efficient updates to the previous hash functions. Experiments on three large-scale image datasets demonstrate that the incremental strategy can update hash functions efficiently while matching the retrieval performance of hashing from scratch.
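As background for why compact binary codes make retrieval cheap, the following baseline sketch uses random hyperplanes (unsupervised locality-sensitive hashing) and Hamming ranking. The thesis's supervised, kernel-based and incremental hash functions would replace the random projection, but the encode-then-rank pipeline is the same in spirit; all names here are illustrative assumptions.

```python
import numpy as np

def fit_random_hyperplanes(dim, n_bits=64, seed=0):
    """Draw one random hyperplane per bit (an unsupervised LSH baseline)."""
    return np.random.default_rng(seed).standard_normal((n_bits, dim))

def encode(descriptors, hyperplanes):
    """Map real-valued descriptors to compact binary codes (one sign bit per hyperplane)."""
    return (descriptors @ hyperplanes.T > 0).astype(np.uint8)

def hamming_rank(query_code, db_codes):
    """Rank database images by Hamming distance to the query's binary code."""
    return np.argsort((db_codes != query_code).sum(axis=1))
```

With 64-bit codes, a database of millions of images fits in a few hundred megabytes and Hamming distances reduce to cheap bitwise operations, which is what makes exhaustive comparison feasible again.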

Relevance:

30.00%

Publisher:

Abstract:

In contemporary societies, higher education must shape individuals who are able to solve problems in a workable and simple manner; a multidisciplinary view of problems, with insights from disciplines such as psychology, mathematics or computer science, therefore becomes mandatory. Undeniably, the great challenge for teachers is to provide comprehensive training in General Chemistry to high standards of quality, aiming not only at promoting students' academic success but also at understanding the competences and skills required for their future work. This work therefore focuses on the development of an intelligent system to assess the Quality-of-General-Chemistry-Learning, based on factors related to the subject, the teachers and the students.

Relevance:

30.00%

Publisher:

Abstract:

A combined Short-Term Learning (STL) and Long-Term Learning (LTL) approach to solving mobile robot navigation problems is presented and tested in both real and simulated environments. The LTL phase consists of rapid simulations that use a Genetic Algorithm to derive diverse sets of behaviours. These sets are then transferred to an idiotypic Artificial Immune System (AIS), which forms the STL phase, and the system is said to be seeded. The combined LTL-STL approach is compared with using STL only and with using a hand-designed controller. In addition, the STL phase is tested with the idiotypic mechanism turned off. The results provide substantial evidence that the best option is the seeded idiotypic system, i.e. the architecture that merges LTL with an idiotypic AIS for the STL. They also show that structurally different environments can be used for the two phases without compromising transferability.
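A highly simplified sketch of the seeded STL phase is given below, assuming the GA-derived behaviours are parameter vectors and using a generic stimulation-minus-suppression rule inspired by idiotypic network models. The affinity measure, suppression constant and reinforcement step are illustrative assumptions, not the paper's equations.

```python
import numpy as np

class IdiotypicSelector:
    """Chooses among GA-seeded behaviours ('antibodies') given the current
    sensor state ('antigen'), damping behaviours that closely resemble
    stronger ones (idiotypic suppression)."""

    def __init__(self, seeded_behaviours, k_suppress=0.4):
        self.behaviours = np.asarray(seeded_behaviours, dtype=float)  # from the LTL/GA phase
        self.conc = np.ones(len(self.behaviours))                     # antibody concentrations
        self.k = k_suppress
        # pairwise similarity between behaviours, used for suppression
        diffs = self.behaviours[:, None, :] - self.behaviours[None, :, :]
        self.sim = 1.0 / (1.0 + np.linalg.norm(diffs, axis=2))

    def select(self, antigen):
        affinity = 1.0 / (1.0 + np.linalg.norm(self.behaviours - antigen, axis=1))
        stimulation = affinity * self.conc
        suppression = self.k * self.sim.dot(stimulation)
        winner = int(np.argmax(stimulation - suppression))
        self.conc[winner] += 0.1                                      # reinforce the chosen behaviour (STL)
        return winner
```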

Relevance:

30.00%

Publisher:

Abstract:

International audience

Relevance:

30.00%

Publisher:

Abstract:

Part 13: Virtual Reality and Simulation

Relevance:

30.00%

Publisher:

Abstract:

A large proportion of the human population suffers memory impairment, caused either by normal aging or by diverse neurological and neurodegenerative diseases. Memory enhancers and other drugs tested so far against memory loss have failed to show therapeutic efficacy in clinical trials, and thus there is a need to find a remedy for this disorder. In the search for a cure for memory loss, our laboratory discovered a robust memory enhancer called RGS14(414). Treating the brain with its gene produces an enduring effect on memory that lasts for the lifetime of rats. The current thesis work was therefore designed to investigate whether RGS14(414) treatment can prevent memory loss and, furthermore, to explore the biological processes responsible for RGS-mediated memory enhancement. We found that RGS14(414) gene treatment prevented episodic memory loss in rodent models of normal aging and Alzheimer's disease. Memory loss was observed in normal rats at 18 months of age; however, when rats were treated with the RGS14(414) gene at 3 months of age, this deficit was abrogated and their memory remained intact until the age of 22 months. The memory enhancer treatment produced a similar effect in a mouse model of Alzheimer's disease (AD mice). AD mice treated with the RGS14(414) gene at the age of 2 months, a period when memory was still intact, not only showed prevention of the memory loss otherwise observed at 4 months of age, but also maintained normal memory 6 months after the treatment. We posit that the long-lasting memory enhancement and prevention of memory loss mediated by RGS14(414) might be due to a permanent structural change caused by a surge in neuronal connections and enhanced neuronal remodeling, key processes in long-term memory formation. A neuronal arborization analysis of both pyramidal and non-pyramidal neurons in the brains of RGS14(414)-treated rats showed a robust rise in neurite outgrowth in both kinds of cells, and an increase in the number of branches from the apical dendrite of pyramidal neurons, reaching almost three times that of control animals. To further understand the mechanism by which RGS14(414) induces neuronal arborization, we investigated neurotrophic factors and observed that RGS14 treatment induces a selective increase in BDNF. The role of BDNF in neuronal arborization, as well as its implication in learning and memory processes, is well described. In addition, our results showing a dynamic expression pattern of BDNF during ORM processing that overlapped with memory consolidation further support the implication of this neurotrophin in the formation of long-term memory in RGS-treated animals. On the other hand, in expression-profiling studies of RGS-treated animals, we have demonstrated that the 14-3-3ζ protein displays a coherent relationship to RGS-mediated ORM enhancement. Recent studies have demonstrated that the interaction of receptor for activated C kinase 1 (RACK1) with 14-3-3ζ is essential for its nuclear translocation, where the RACK1-14-3-3ζ complex binds at the promoter IV region of BDNF and promotes an increase in BDNF gene transcription. These observations suggest that 14-3-3ζ might regulate the elevated level of BDNF seen in RGS14(414) gene-treated animals. It therefore appears that an RGS-mediated surge in 14-3-3ζ causes the elevated BDNF synthesis needed for neuronal arborization and enhanced ORM.
The prevention of memory loss might be mediated through a restoration of BDNF and 14-3-3ζ protein levels, which are significantly decreased in aging and Alzheimer's disease. Additionally, our results demonstrate that RGS14(414) treatment could be a viable strategy against episodic memory loss.