719 results for Learning and teaching
Abstract:
How do the layered circuits of prefrontal and motor cortex carry out working memory storage, sequence learning, and voluntary sequential item selection and performance? A neural model called LIST PARSE is presented to explain and quantitatively simulate cognitive data about both immediate serial recall and free recall, including bowing of the serial position performance curves, error-type distributions, temporal limitations upon recall, and list length effects. The model also qualitatively explains cognitive effects related to attentional modulation, temporal grouping, variable presentation rates, phonemic similarity, presentation of non-words, word frequency/item familiarity and list strength, distracters and modality effects. In addition, the model quantitatively simulates neurophysiological data from the macaque prefrontal cortex obtained during sequential sensory-motor imitation and planned performance. The article further develops a theory concerning how the cerebral cortex works by showing how variations of the laminar circuits that have previously clarified how the visual cortex sees can also support cognitive processing of sequentially organized behaviors.
Abstract:
This article introduces a new neural network architecture, called ARTMAP, that autonomously learns to classify arbitrarily many, arbitrarily ordered vectors into recognition categories based on predictive success. This supervised learning system is built up from a pair of Adaptive Resonance Theory modules (ARTa and ARTb) that are capable of self-organizing stable recognition categories in response to arbitrary sequences of input patterns. During training trials, the ARTa module receives a stream {a^(p)} of input patterns, and ARTb receives a stream {b^(p)} of input patterns, where b^(p) is the correct prediction given a^(p). These ART modules are linked by an associative learning network and an internal controller that ensures autonomous system operation in real time. During test trials, the remaining patterns a^(p) are presented without b^(p), and their predictions at ARTb are compared with b^(p). Tested on a benchmark machine learning database in both on-line and off-line simulations, the ARTMAP system learns orders of magnitude more quickly, efficiently, and accurately than alternative algorithms, and achieves 100% accuracy after training on less than half the input patterns in the database. It achieves these properties by using an internal controller that conjointly maximizes predictive generalization and minimizes predictive error by linking predictive success to category size on a trial-by-trial basis, using only local operations. This computation increases the vigilance parameter ρa of ARTa by the minimal amount needed to correct a predictive error at ARTb. Parameter ρa calibrates the minimum confidence that ARTa must have in a category, or hypothesis, activated by an input a^(p) in order for ARTa to accept that category, rather than search for a better one through an automatically controlled process of hypothesis testing.
Parameter ρa is compared with the degree of match between a^(p) and the top-down learned expectation, or prototype, that is read out after activation of an ARTa category. Search occurs if the degree of match is less than ρa. ARTMAP is hereby a type of self-organizing expert system that calibrates the selectivity of its hypotheses based upon predictive success. As a result, rare but important events can be quickly and sharply distinguished even if they are similar to frequent events with different consequences. Between input trials, ρa relaxes to a baseline vigilance ρ̄a. When ρa is large, the system runs in a conservative mode, wherein predictions are made only if the system is confident of the outcome. Very few false-alarm errors then occur at any stage of learning, yet the system reaches asymptote with no loss of speed. Because ARTMAP learning is self-stabilizing, it can continue learning one or more databases, without degrading its corpus of memories, until its full memory capacity is utilized.
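The match-tracking mechanism described in this abstract can be sketched in a few lines. The following is a hypothetical, simplified hypothesis-search loop, not the paper's own implementation: category names, the choice-by-match ordering, and the epsilon increment are illustrative assumptions.

```python
def artmap_search(matches, predictions, target, rho_baseline, eps=1e-3):
    """Sketch of ARTMAP hypothesis search with match tracking.

    matches:     category -> ARTa match value for the current input
    predictions: category -> predicted ARTb outcome for that category
    On a wrong prediction, vigilance rho_a is raised by the minimal
    amount (eps) above the offending category's match, rejecting it
    and forcing search for a finer hypothesis.
    """
    rho_a = rho_baseline
    # Consider categories in order of decreasing match (a simplification
    # of the ART choice function).
    for cat in sorted(matches, key=matches.get, reverse=True):
        if matches[cat] < rho_a:
            continue                   # fails the vigilance test: skip
        if predictions[cat] == target:
            return cat, rho_a          # resonance: accept this hypothesis
        rho_a = matches[cat] + eps     # match tracking: minimal raise
    return None, rho_a                 # nothing fits: commit a new category
```

When every remaining category fails the raised vigilance, the system commits a new node, which is how rare events similar to frequent events with different consequences become sharply distinguished.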
Abstract:
This article describes neural network models for adaptive control of arm movement trajectories during visually guided reaching and, more generally, a framework for unsupervised real-time error-based learning. The models clarify how a child, or untrained robot, can learn to reach for objects that it sees. Piaget has provided basic insights with his concept of a circular reaction: As an infant makes internally generated movements of its hand, the eyes automatically follow this motion. A transformation is learned between the visual representation of hand position and the motor representation of hand position. Learning of this transformation eventually enables the child to accurately reach for visually detected targets. Grossberg and Kuperstein have shown how the eye movement system can use visual error signals to correct movement parameters via cerebellar learning. Here it is shown how endogenously generated arm movements lead to adaptive tuning of arm control parameters. These movements also activate the target position representations that are used to learn the visuo-motor transformation that controls visually guided reaching. The AVITE model presented here is an adaptive neural circuit based on the Vector Integration to Endpoint (VITE) model for arm and speech trajectory generation of Bullock and Grossberg. In the VITE model, a Target Position Command (TPC) represents the location of the desired target. The Present Position Command (PPC) encodes the present hand-arm configuration. The Difference Vector (DV) population continuously computes the difference between the PPC and the TPC. A speed-controlling GO signal multiplies DV output. The PPC integrates the (DV)·(GO) product and generates an outflow command to the arm. Integration at the PPC continues at a rate dependent on GO signal size until the DV reaches zero, at which time the PPC equals the TPC. The AVITE model explains how self-consistent TPC and PPC coordinates are autonomously generated and learned.
Learning of AVITE parameters is regulated by activation of a self-regulating Endogenous Random Generator (ERG) of training vectors. Each vector is integrated at the PPC, giving rise to a movement command. The generation of each vector induces a complementary postural phase during which ERG output stops and learning occurs. Then a new vector is generated and the cycle is repeated. This cyclic, biphasic behavior is controlled by a specialized gated dipole circuit. ERG output autonomously stops in such a way that, across trials, a broad sample of workspace target positions is generated. When the ERG shuts off, a modulator gate opens, copying the PPC into the TPC. Learning of a transformation from TPC to PPC occurs using the DV as an error signal that is zeroed due to learning. This learning scheme is called a Vector Associative Map, or VAM. The VAM model is a general-purpose device for autonomous real-time error-based learning and performance of associative maps. The DV stage serves the dual function of reading out new TPCs during performance and reading in new adaptive weights during learning, without a disruption of real-time operation. VAMs thus provide an on-line unsupervised alternative to the off-line properties of supervised error-correction learning algorithms. VAMs and VAM cascades for learning motor-to-motor and spatial-to-motor maps are described. VAM models and Adaptive Resonance Theory (ART) models exhibit complementary matching, learning, and performance properties that together provide a foundation for designing a total sensory-cognitive and cognitive-motor autonomous system.
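The TPC/PPC/DV/GO dynamics summarized above can be sketched as a small Euler-integrated system. This is a minimal scalar sketch under simplifying assumptions: the rate constant gamma, step size dt, and the use of a single unrectified DV channel are illustrative choices (the model itself uses rectified opponent muscle channels).

```python
def vite_trajectory(target, present, go=1.0, gamma=10.0, dt=0.01, steps=1000):
    """Minimal sketch of VITE trajectory generation.

    The DV relaxes toward TPC - PPC, and the PPC integrates the
    (DV)·(GO) product until the DV reaches zero, at which point the
    PPC equals the TPC.
    """
    T, P, V = float(target), float(present), 0.0   # TPC, PPC, DV
    trajectory = [P]
    for _ in range(steps):
        V += dt * gamma * (-V + T - P)   # DV tracks the remaining distance
        P += dt * go * V                 # PPC integrates gated DV output
        trajectory.append(P)
    return trajectory
```

With go = 0 the PPC never moves even though the DV encodes the remaining distance, which illustrates how the GO signal gates movement onset and controls speed.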
Abstract:
This article introduces ART 2-A, an efficient algorithm that emulates the self-organizing pattern recognition and hypothesis testing properties of the ART 2 neural network architecture, but at a speed two to three orders of magnitude faster. Analysis and simulations show how the ART 2-A systems correspond to ART 2 dynamics at both the fast-learn limit and at intermediate learning rates. Intermediate learning rates permit fast commitment of category nodes but slow recoding, analogous to properties of word frequency effects, encoding specificity effects, and episodic memory. Better noise tolerance is hereby achieved without a loss of learning stability. The ART 2 and ART 2-A systems are contrasted with the leader algorithm. The speed of ART 2-A makes practical the use of ART 2 modules in large-scale neural computation.
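The fast-commit slow-recode distinction described above can be illustrated with a prototype-update rule. This is a hedged sketch of an ART 2-A style update, omitting the choice rule and contrast enhancement; the parameter name beta and its default value are illustrative.

```python
import numpy as np

def art2a_update(w, x, beta=0.1):
    """Sketch of an ART 2-A style prototype update at an intermediate
    learning rate. beta = 1 recovers the fast-learn limit, where the
    prototype jumps to the normalized input; small beta gives slow
    recoding of an already-committed category."""
    x = np.asarray(x, dtype=float)
    x = x / np.linalg.norm(x)               # ART 2-A works with unit vectors
    w_new = (1.0 - beta) * w + beta * x     # convex step toward the input
    return w_new / np.linalg.norm(w_new)    # renormalize the prototype
```

A newly committed node can use beta = 1 (fast commitment), while established nodes recode slowly, buffering memory against noise, analogous to the word-frequency and encoding-specificity properties the abstract mentions.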
Abstract:
A Fuzzy ART model capable of rapid stable learning of recognition categories in response to arbitrary sequences of analog or binary input patterns is described. Fuzzy ART incorporates computations from fuzzy set theory into the ART 1 neural network, which learns to categorize only binary input patterns. The generalization to learning both analog and binary input patterns is achieved by replacing appearances of the intersection operator (∩) in ART 1 by the MIN operator (∧) of fuzzy set theory. The MIN operator reduces to the intersection operator in the binary case. Category proliferation is prevented by normalizing input vectors at a preprocessing stage. A normalization procedure called complement coding leads to a symmetric theory in which the MIN operator (∧) and the MAX operator (∨) of fuzzy set theory play complementary roles. Complement coding uses on-cells and off-cells to represent the input pattern, and preserves individual feature amplitudes while normalizing the total on-cell/off-cell vector. Learning is stable because all adaptive weights can only decrease in time. Decreasing weights correspond to increasing sizes of category "boxes". Smaller vigilance values lead to larger category boxes. Learning stops when the input space is covered by boxes. With fast learning and a finite input set of arbitrary size and composition, learning stabilizes after just one presentation of each input pattern. A fast-commit slow-recode option combines fast learning with a forgetting rule that buffers system memory against noise. Using this option, rare events can be rapidly learned, yet previously learned memories are not rapidly erased in response to statistically unreliable input fluctuations.
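The complement coding and MIN-operator computations described above are compact enough to sketch directly. The following is a minimal fast-learning Fuzzy ART, assuming the standard choice function T_j = |I ∧ w_j| / (α + |w_j|); parameter values and class structure are illustrative, not the paper's specification.

```python
import numpy as np

def complement_code(a):
    """Complement coding: represent input a by [a, 1 - a], normalizing
    total on-cell/off-cell activity while preserving feature amplitudes."""
    a = np.asarray(a, dtype=float)
    return np.concatenate([a, 1.0 - a])

class FuzzyART:
    """Minimal Fuzzy ART sketch (fast learning when beta = 1)."""

    def __init__(self, rho=0.7, alpha=0.001, beta=1.0):
        self.rho, self.alpha, self.beta = rho, alpha, beta
        self.w = []   # one complement-coded weight vector per category

    def train(self, a):
        I = complement_code(a)
        # Choice function T_j = |I ∧ w_j| / (alpha + |w_j|), MIN as ∧
        order = sorted(
            range(len(self.w)),
            key=lambda j: -np.minimum(I, self.w[j]).sum()
                          / (self.alpha + self.w[j].sum()))
        for j in order:
            match = np.minimum(I, self.w[j]).sum() / I.sum()
            if match >= self.rho:                 # vigilance test: resonance
                self.w[j] = (self.beta * np.minimum(I, self.w[j])
                             + (1.0 - self.beta) * self.w[j])
                return j
        self.w.append(I.copy())                   # commit a new category
        return len(self.w) - 1
```

Because the update takes a componentwise MIN, weights can only decrease over time, which is exactly the monotone "growing box" stability property the abstract describes.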
Abstract:
A neural model is described of how adaptively timed reinforcement learning occurs. The adaptive timing circuit is suggested to exist in the hippocampus, and to involve convergence of dentate granule cells on CA3 pyramidal cells, and NMDA receptors. This circuit forms part of a model neural system for the coordinated control of recognition learning, reinforcement learning, and motor learning, whose properties clarify how an animal can learn to acquire a delayed reward. Behavioral and neural data are summarized in support of each processing stage of the system. The relevant anatomical sites are in thalamus, neocortex, hippocampus, hypothalamus, amygdala, and cerebellum. Cerebellar influences on motor learning are distinguished from hippocampal influences on adaptive timing of reinforcement learning. The model simulates how damage to the hippocampal formation disrupts adaptive timing, eliminates attentional blocking, and causes symptoms of medial temporal amnesia. It suggests how normal acquisition of subcortical emotional conditioning can occur after cortical ablation, even though extinction of emotional conditioning is retarded by cortical ablation. The model simulates how increasing the duration of an unconditioned stimulus increases the amplitude of emotional conditioning, but does not change adaptive timing; and how an increase in the intensity of a conditioned stimulus "speeds up the clock", but an increase in the intensity of an unconditioned stimulus does not. Computer simulations of the model fit parametric conditioning data, including a Weber law property and an inverted U property. Both primary and secondary adaptively timed conditioning are simulated, as are data concerning conditioning using multiple interstimulus intervals (ISIs), gradually or abruptly changing ISIs, partial reinforcement, and multiple stimuli that lead to time-averaging of responses. Neurobiologically testable predictions are made to facilitate further tests of the model.
Abstract:
Organizations that leverage lessons learned from their experience in the practice of complex real-world activities are faced with five difficult problems. First, how to represent the learning situation in a recognizable way. Second, how to represent what was actually done in terms of repeatable actions. Third, how to assess performance taking account of the particular circumstances. Fourth, how to abstract lessons learned that are re-usable on future occasions. Fifth, how to determine whether to pursue practice maturity or strategic relevance of activities. Here, organizational learning and performance improvement are investigated in a field study using the Context-based Intelligent Assistant Support (CIAS) approach. A new conceptual framework for practice-based organizational learning and performance improvement is presented that helps researchers and practitioners address the problems evoked and contributes to a practice-based approach to activity management. The novelty of the research lies in the simultaneous study of the different levels involved in the activity. Route selection in light rail infrastructure projects involves practices at both the strategic and operational levels; it is part managerial/political and part engineering. Aspectual comparison of practices represented in Contextual Graphs constitutes a new approach to the selection of Key Performance Indicators (KPIs). This approach is free from causality assumptions and forms the basis of a new approach to practice-based organizational learning and performance improvement. The evolution of practices in contextual graphs is shown to be an objective and measurable expression of organizational learning. This diachronic representation is interpreted using a practice-based organizational learning novelty typology. This dissertation shows how lessons learned, when effectively leveraged by an organization, lead to practice maturity.
The practice maturity level of an activity in combination with an assessment of an activity’s strategic relevance can be used by management to prioritize improvement effort.
Abstract:
Angelman syndrome (AS) is a neurobehavioral disorder associated with mental retardation, absence of language development, characteristic electroencephalography (EEG) abnormalities and epilepsy, happy disposition, movement or balance disorders, and autistic behaviors. The molecular defects underlying AS are heterogeneous, including large maternal deletions of chromosome 15q11-q13 (70%), paternal uniparental disomy (UPD) of chromosome 15 (5%), imprinting mutations (rare), and mutations in the E6-AP ubiquitin ligase gene UBE3A (15%). Although patients with UBE3A mutations have a wide spectrum of neurological phenotypes, their features are usually milder than those of AS patients with deletions of 15q11-q13. Using a chromosomal engineering strategy, we generated mutant mice with a 1.6-Mb chromosomal deletion from Ube3a to Gabrb3, which inactivated the Ube3a and Gabrb3 genes and deleted the Atp10a gene. Homozygous deletion mutant mice died in the perinatal period due to a cleft palate resulting from the null mutation in the Gabrb3 gene. Mice with a maternal deletion (m-/p+) were viable and did not have any obvious developmental defects. Expression analysis of the maternal and paternal deletion mice confirmed that the Ube3a gene is maternally expressed in brain, and showed that the Atp10a and Gabrb3 genes are biallelically expressed in all brain sub-regions studied. Maternal (m-/p+), but not paternal (m+/p-), deletion mice had increased spontaneous seizure activity and abnormal EEG. Extensive behavioral analyses revealed significant impairment in motor function, learning and memory tasks, and anxiety-related measures assayed in the light-dark box in maternal deletion but not paternal deletion mice. Ultrasonic vocalization (USV) recording in newborns revealed that maternal deletion pups emitted significantly more USVs than wild-type littermates.
The increased USV in maternal deletion mice suggests abnormal signaling behavior between mothers and pups that may reflect abnormal communication behaviors in human AS patients. Thus, mutant mice with a maternal deletion from Ube3a to Gabrb3 provide an AS mouse model that is molecularly more similar to the contiguous gene deletion form of AS in humans than mice with Ube3a mutation alone. These mice will be valuable for future comparative studies to mice with maternal deficiency of Ube3a alone.
Abstract:
Based upon relevant literature, this study investigated the assessment policy and practices for the BSc (Hons) Computing Science programme at the University of Greenwich (UOG), contextualising these in terms of broad social and educational purposes. It discusses assessment and then gives a critical evaluation of the assessment policy and practices at the UOG. Although this is one case study, because many of the features of the programme are generic to other programmes and institutions, it is of wider value and has further implications. The study was concluded in the summer of 2002. It concludes that overall, the programme's assessment policy and practices are well considered in terms of broad social and educational purposes, although it identifies and outlines several possible improvements, as well as raising some major issues still to be addressed which go beyond assessment practices.
Abstract:
This paper presents the findings of an experiment which looked at the effects of performing applied tasks (action learning) prior to the completion of the theoretical learning of these tasks (explanation-based learning), and vice versa. The applied tasks took the form of laboratories for the Object-Oriented Analysis and Design (OOAD) course, while theoretical learning was via lectures.
Abstract:
This poster describes a "real world" example of the teaching of Human-Computer Interaction at the final level of a Computer Science degree. It highlights many of the problems of the ever expanding HCI domain and the consequential issues of what to teach and why. The poster describes the conception and development of a new HCI course, its historical background, the justification for decisions made, lessons learnt from its implementation, and questions arising from its implementation that are yet to be addressed. For example, should HCI be taught as a course in its own right or as a component of another course? At what level is the teaching of HCI appropriate, and how is teaching influenced by industry? It considers suitable learning pedagogies as well as the demands and the contribution of industry. The experiences presented will no doubt be familiar to many HCI educators. Whilst the poster raises more questions than it answers, the resolution of some questions will hopefully be achieved by the workshop.
Abstract:
Since 1984 David Kolb’s Experiential Learning Theory (ELT) has been a leading influence in the development of learner-centred pedagogy in management and business. It forms the basis of Kolb’s own Learning Style Inventory and those of other authors including Honey and Mumford (2000). It also provides powerful underpinning for the emphasis, nay insistence, on reflection as a way of learning and the use of reflective practice in the preparation of students for business and management and other professions. In this paper, we confirm that Kolb’s ELT is still the most commonly cited source used in relation to reflective practice. Kolb himself continues to propound its relevance to teaching and learning in general. However, we also review some of the criticisms that ELT has attracted over the years and advance new criticisms that challenge its relevance to higher education and its validity as a model for formal, intentional learning.
Abstract:
The Student Experience of E-learning Laboratory (SEEL) is a three year initiative that seeks to develop the University’s capacity to discover more about the impact of e-learning on our students in an attempt to narrow the gap between the digital natives and immigrants (Prensky, 2001). In its first year the project team have gathered data on the student experience of using technology in support of their learning from across the University. Initial analysis suggests we should listen more carefully to our students and may need to review some of our current practices in relation to e-learning and explore some new ways of working. In this workshop we will outline some of the findings and consider implications for our future practice.
Abstract:
The aim of the present review was to perform a systematic in-depth review of the best evidence from controlled trial studies that have investigated the effects of nutrition, diet and dietary change on learning, education and performance in school-aged children (4-18 years) from the UK and other developed countries. The twenty-nine studies identified for the review examined the effects of breakfast consumption, sugar intake, fish oil and vitamin supplementation and 'good diets'. In summary, the studies included in the present review suggest there is insufficient evidence to identify any effect of nutrition, diet and dietary change on learning, education or performance of school-aged children from the developed world. However, there is emerging evidence for the effects of certain fatty acids which appear to be a function of dose and time. Further research is required in settings of relevance to the UK and must be of high quality, representative of all populations, undertaken for longer durations and use universal validated measures of educational attainment. However, challenges in terms of interpreting the results of such studies within the context of factors such as family and community context, poverty, disease and the rate of individual maturation and neurodevelopment will remain. Whilst the importance of diet in educational attainment remains under investigation, the evidence for promotion of lower-fat, -salt and -sugar diets, high in fruits, vegetables and complex carbohydrates, as well as promotion of physical activity remains unequivocal in terms of health outcomes for all schoolchildren.
Abstract:
Collaborative approaches in leadership and management are increasingly acknowledged to play a key role in successful institutions in the learning and skills sector (LSS) (Ofsted, 2004). Such approaches may be important in bridging the potential 'distance' (psychological, cultural, interactional and geographical) (Collinson, 2005) that may exist between 'leaders' and 'followers', fostering more democratic communal solidarity. This paper reports on a 2006-07 research project funded by the Centre for Excellence in Leadership (CEL) that aimed to collect and analyse data on 'collaborative leadership' (CL) in the learning and skills sector. The project investigated collaborative leadership and its potential for benefiting staff through trust and knowledge-sharing in communities of practice (CoPs). The project forms part of longer-term educational research investigating leadership in a collaborative inquiry process (Jameson et al., 2006). The research examined the potential for CL to benefit institutions, analysing respondents' understanding of and resistance to collaborative practices. Quantitative and qualitative data from senior managers and lecturers were analysed using electronic data in SPSS and Tropes Zoom. The project aimed to recommend systems and practices for more inclusive, diverse leadership (Lumby et al., 2005). Collaborative leadership has increasingly gained international prominence as emphasis shifted towards team leadership beyond zero-sum 'leadership'/'followership' polarities into more mature conceptions of shared leadership spaces, within which synergistic leadership spaces can be mediated. The relevance of collaboration within the LSS has been highlighted following a spate of recent government-driven policy developments in FE.
The promotion of CL addresses concerns about the apparent 'remoteness' of some senior managers, and the 'neo-management' control of professionals which can increase 'distance' between leaders and 'followers' and may de-professionalise staff in an already disempowered sector. Positive benefit from 'collaborative advantage' tends to be assumed in idealistic interpretations of CL, but potential 'collaborative inertia' may be problematic in a sector characterised by rapid top-down policy changes and continuous external audit and surveillance. Constant pressure for achievement against goals leaves little time for democratic group negotiations, despite the desires of leaders to create a more collaborative ethos. Yet prior models of intentional communities of practice potentially offer promise for CL practice to improve group performance despite multiple constraints. The CAMEL CoP model (JISC infoNet, 2006) was linked to the project, providing one practical way of implementing CL within situated professional networks. The project found that a good understanding of CL was demonstrated by most respondents, who thought it could enable staff to share power and work in partnership to build trust and conjoin skills, abilities and experience to achieve common goals for the good of the sector. However, although most respondents expressed agreement with the concept and ideals of CL, many thought this was currently an idealistically democratic, unachievable pipe dream in the LSS. Many respondents expressed concerns with the 'audit culture' and authoritarian management structures in FE. While there was a strong desire to see greater levels of implementation of CL, and 'collaborative advantage' from the 'knowledge sharing benefit potential' of team leadership, respondents also strongly advised against the pitfalls of 'collaborative inertia'.
A 'distance' between senior leadership views and those of staff lower down the hierarchy regarding aspects of leadership performance in the sector was reported. Finally, the project found that more research is needed to investigate CL and develop innovative methods of practical implementation within autonomous communities of professional practice.