957 results for Function Learning
Abstract:
Computer-Based Learning systems of one sort or another have been in existence for almost 20 years, but they have yet to achieve real credibility within Commerce, Industry or Education. A variety of reasons could be postulated for this, typically: cost, complexity, inefficiency, inflexibility and tedium. Obviously different systems deserve different levels and types of criticism, but it remains true that Computer-Based Learning (CBL) is falling significantly short of its potential. Experience of a small but highly successful CBL system within a large, geographically distributed industry (the National Coal Board) prompted an investigation into currently available packages, the original intention being to purchase the most suitable software and run it on existing computer hardware, alongside existing software systems. It became apparent that none of the available CBL packages was suitable, and a decision was taken to develop an in-house Computer-Assisted Instruction system according to the following criteria: cheap to run; easy to author course material; easy to use; requiring no computing knowledge to use (as either an author or a student); efficient in its use of computer resources; and offering a comprehensive range of facilities at all levels. This thesis describes the initial investigation, the resultant observations, and the design, development and implementation of the SCHOOL system. One of the principal characteristics of SCHOOL is that it uses a hierarchical database structure for the storage of course material, thereby inherently providing a great deal of the power, flexibility and efficiency originally required. Trials using the SCHOOL system on IBM 303X series equipment are also detailed, along with proposed and current development work on what is essentially an operational CBL system within a large-scale Industrial environment.
Abstract:
Background: We introduced a series of computer-supported workshops in our undergraduate statistics courses, in the hope that it would help students to gain a deeper understanding of statistical concepts. This raised questions about the appropriate design of the Virtual Learning Environment (VLE) in which such an approach had to be implemented. Therefore, we investigated two competing software design models for VLEs. In the first system, all learning features were a function of the classical VLE. The second system was designed from the perspective that learning features should be a function of the course's core content (statistical analyses), which required us to develop a specific-purpose Statistical Learning Environment (SLE) based on Reproducible Computing and newly developed Peer Review (PR) technology. Objectives: The main research question is whether the second VLE design improved learning efficiency as compared to the standard type of VLE design that is commonly used in education. As a secondary objective, we provide empirical evidence about the usefulness of PR as a constructivist learning activity which supports non-rote learning. Finally, this paper illustrates that it is possible to introduce a constructivist learning approach in large student populations, based on adequately designed educational technology, without subsuming educational content to technological convenience. Methods: Both VLE systems were tested within a two-year quasi-experiment based on a Reliable Nonequivalent Group Design. This approach allowed us to draw valid conclusions about the treatment effect of the changed VLE design, even though the systems were implemented in successive years. The methodological aspects of the experiment's internal validity are explained extensively.
Results: The effect of the design change is shown to have substantially increased the efficiency of constructivist, computer-assisted learning activities for all cohorts of the student population under investigation. The findings demonstrate that a content-based design outperforms the traditional VLE-based design. © 2011 Wessa et al.
Abstract:
The problem of learning by examples in ultrametric committee machines (UCMs) is studied within the framework of statistical mechanics. Using the replica formalism we calculate the average generalization error in UCMs with L hidden layers and for a large enough number of units. In most of the regimes studied we find that the generalization error, as a function of the number of examples presented, develops a discontinuous drop at a critical value of the load parameter. We also find that when L>1 a number of teacher networks with the same number of hidden layers and different overlaps induce learning processes with the same critical points.
Abstract:
To solve multi-objective problems, multiple reward signals are often scalarized into a single value and further processed using established single-objective problem solving techniques. While the field of multi-objective optimization has made many advances in applying scalarization techniques to obtain good solution trade-offs, the utility of applying these techniques in the multi-objective multi-agent learning domain has not yet been thoroughly investigated. Agents learn the value of their decisions by linearly scalarizing their reward signals at the local level, while acceptable system-wide behaviour results. However, the non-linear relationship between the weighting parameters of the scalarization function and the learned policy makes the discovery of system-wide trade-offs time-consuming. Our first contribution is a thorough analysis of well-known scalarization schemes within the multi-objective multi-agent reinforcement learning setup. The analysed approaches intelligently explore the weight space in order to find a wider range of system trade-offs. In our second contribution, we propose a novel adaptive weight algorithm which interacts with the underlying local multi-objective solvers and allows for a better coverage of the Pareto front. Our third contribution is the experimental validation of our approach by learning bi-objective policies in self-organising smart camera networks. We note that our algorithm (i) explores the objective space faster on many problem instances, (ii) obtains solutions that exhibit a larger hypervolume, while (iii) acquiring a greater spread in the objective space.
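The linear scalarization this abstract describes can be sketched in a few lines. A minimal illustration, where the bi-objective reward and the weighting parameters are hypothetical (e.g. tracking quality versus communication cost in a smart camera network), not values from the paper:

```python
import numpy as np

def linear_scalarize(reward_vec, weights):
    """Collapse a multi-objective reward vector into a single scalar value."""
    return float(np.dot(weights, reward_vec))

# Hypothetical bi-objective reward and weighting parameters;
# the weights steer which trade-off the agent learns.
r = np.array([0.8, -0.3])
w = np.array([0.7, 0.3])
scalar_r = linear_scalarize(r, w)  # = 0.7*0.8 + 0.3*(-0.3) = 0.47
```

Because the mapping from the weights to the learned policy is non-linear, uniformly spaced weight vectors do not yield uniformly spaced points on the Pareto front, which is what motivates an adaptive weight scheme.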
Abstract:
The role that student friendship groups play in learning was investigated here. Employing a critical realist design, two focus groups with undergraduates were conducted to explore their experience of studying. Data from the "case-by-case" analysis suggested that student-to-student friendships produced social contexts which facilitated conceptual understanding through discussion, explanation, and application to "real life" contemporary issues. However, the students did not conceive of this as a learning experience or suggest that the function of their friendships involved learning. These data therefore challenge the perspective that student groups in higher education are formed and regulated for the primary function of learning. Given these findings, further research is needed to assess the role student friendships play in developing disciplinary conceptual understanding.
Abstract:
A rough set approach to attribute reduction is an important research subject in data mining and machine learning. However, most attribute reduction methods are performed on a complete decision system table. In this paper, we propose methods for attribute reduction in static incomplete decision systems and in dynamic incomplete decision systems with dynamically increasing and decreasing conditional attributes. Our methods use a generalized discernibility matrix and discernibility function in tolerance-based rough sets.
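As a rough sketch of the machinery this abstract refers to, not the paper's own algorithm: in a tolerance-based rough set, two objects of an incomplete table are indiscernible when they agree on every attribute where both values are known, and each discernibility-matrix entry collects the attributes that tell a pair of objects apart. The attribute names and the use of None as the missing-value marker are illustrative assumptions.

```python
MISSING = None  # assumed marker for an unknown attribute value

def tolerant(x, y, attrs):
    """Tolerance relation for incomplete tables: x and y are indiscernible
    if, on every attribute, the values agree or at least one is missing."""
    return all(x[a] == y[a] or x[a] is MISSING or y[a] is MISSING
               for a in attrs)

def discerning_attrs(x, y, attrs):
    """Discernibility-matrix entry for the pair (x, y): the attributes on
    which both values are known and differ."""
    return {a for a in attrs
            if x[a] is not MISSING and y[a] is not MISSING and x[a] != y[a]}

# Two objects of a toy incomplete decision system
o1 = {"colour": "red", "size": MISSING, "weight": 10}
o2 = {"colour": "red", "size": "large", "weight": 12}
attrs = ["colour", "size", "weight"]
# o1 and o2 are discerned only by "weight"; dropping that attribute
# would make them tolerant, which is what a reduct must avoid.
```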
Abstract:
The purpose of this study was to investigate the ontogeny of auditory learning via operant contingency in Northern bobwhite (Colinus virginianus) hatchlings, and the possible interaction between attention, orienting and learning during early development. Chicks received individual 5 min training sessions in which they received a playback of a bobwhite maternal call at a single delay following each vocalization they emitted. Playback either came from a single randomly chosen speaker or switched back and forth semi-randomly between two speakers during training. Chicks were tested 24 hours later in a simultaneous choice test between the familiar and an unfamiliar maternal call. Day-old chicks showed a significant time-specific decrement in auditory learning when trained with delays in the range of 470–910 ms between their vocalizations and call playback, but only when training involved two speakers. Two-day-old birds showed an even more sustained disruption of learning than day-old chicks, whereas three-day-old chicks showed a pattern of intermittent interference with their learning when trained at such delays. A similar but less severe decrement in auditory learning was found when chicks were given motor training in which playback was contingent upon entering and exiting one of two colored squares placed on the floor of the arena. Chicks provided with playback of the call at randomly chosen delays each time they vocalized exhibited large fluctuations in their responsivity to the auditory stimulus as a function of delay; these fluctuations correlated significantly with measures of chick learning, particularly at two days of age. When playback was limited to a single location, chicks no longer showed a time-specific disruption of their learning of the auditory stimulus.
Sequential analyses revealed several patterns suggesting that an attentional process similar or analogous to attentional blink may have contributed both to the observed fluctuations in chick responsivity to the auditory stimulus as a function of delay and to the time-specific learning deficit shown by chicks provided with two-speaker training. The study highlights that learning can be substantially modulated by processes of orienting and attention and has a number of important implications for research within cognitive neuroscience, animal behavior and learning.
Abstract:
Subspaces and manifolds are two powerful models for high dimensional signals. Subspaces model linear correlation and are a good fit to signals generated by physical systems, such as frontal images of human faces and multiple sources impinging at an antenna array. Manifolds model sources that are not linearly correlated, but where signals are determined by a small number of parameters. Examples are images of human faces under different poses or expressions, and handwritten digits with varying styles. However, there will always be some degree of model mismatch between the subspace or manifold model and the true statistics of the source. This dissertation exploits subspace and manifold models as prior information in various signal processing and machine learning tasks.
A near-low-rank Gaussian mixture model measures proximity to a union of linear or affine subspaces. This simple model can effectively capture the signal distribution when each class is near a subspace. This dissertation studies how the pairwise geometry between these subspaces affects classification performance. When model mismatch is vanishingly small, the probability of misclassification is determined by the product of the sines of the principal angles between subspaces. When the model mismatch is more significant, the probability of misclassification is determined by the sum of the squares of the sines of the principal angles. Reliability of classification is derived in terms of the distribution of signal energy across principal vectors. Larger principal angles lead to smaller classification error, motivating a linear transform that optimizes principal angles. This linear transformation, termed TRAIT, also preserves some specific features in each class, being complementary to a recently developed Low Rank Transform (LRT). Moreover, when the model mismatch is more significant, TRAIT shows superior performance compared to LRT.
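The principal angles that drive these classification-error bounds can be computed from the singular values of the product of orthonormal bases for the two subspaces. A minimal numpy sketch; the two example subspaces are illustrative, not from the dissertation:

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles (in radians) between the column spans of A and B."""
    Qa, _ = np.linalg.qr(A)          # orthonormal basis for span(A)
    Qb, _ = np.linalg.qr(B)          # orthonormal basis for span(B)
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)  # cosines of the angles
    return np.arccos(np.clip(s, -1.0, 1.0))

# Two planes in R^3 sharing one direction: the angles are 0 and pi/2, so the
# product of sines (the small-mismatch regime above) is 0 * 1 = 0, i.e. the
# shared direction makes the two classes hard to separate.
A = np.array([[1., 0.], [0., 1.], [0., 0.]])
B = np.array([[1., 0.], [0., 0.], [0., 1.]])
angles = principal_angles(A, B)
```

A transform such as the dissertation's TRAIT would act on the data so as to enlarge these angles, since larger angles give a smaller error in both regimes.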
The manifold model enforces a constraint on the freedom of data variation. Learning features that are robust to data variation is very important, especially when the size of the training set is small. A learning machine with a large number of parameters, e.g., a deep neural network, can describe a very complicated data distribution well. However, it is also more likely to be sensitive to small perturbations of the data, and to suffer from degraded performance when generalizing to unseen (test) data.
From the perspective of the complexity of function classes, such a learning machine has a huge capacity (complexity), which tends to overfit. The manifold model provides us with a way of regularizing the learning machine, so as to reduce the generalization error and therefore mitigate overfitting. Two different overfitting-preventing approaches are proposed, one from the perspective of data variation, the other from capacity/complexity control. In the first approach, the learning machine is encouraged to make decisions that vary smoothly for data points in local neighborhoods on the manifold. In the second approach, a graph adjacency matrix is derived for the manifold, and the learned features are encouraged to be aligned with the principal components of this adjacency matrix. Experimental results on benchmark datasets are presented, showing an obvious advantage of the proposed approaches when the training set is small.
Stochastic optimization makes it possible to track a slowly varying subspace underlying streaming data. By approximating local neighborhoods using affine subspaces, a slowly varying manifold can be efficiently tracked as well, even with corrupted and noisy data. The more local neighborhoods are used, the better the approximation, but the higher the computational complexity. A multiscale approximation scheme is proposed, where the local approximating subspaces are organized in a tree structure. Splitting and merging of the tree nodes then allows efficient control of the number of neighborhoods. The deviation of each datum from the learned model is estimated, yielding a series of statistics for anomaly detection. This framework extends the classical changepoint detection technique, which only works for one-dimensional signals. Simulations and experiments highlight the robustness and efficacy of the proposed approach in detecting an abrupt change in an otherwise slowly varying low-dimensional manifold.
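For a single (non-affine) subspace, the deviation statistic described above reduces to the norm of the projection residual. A minimal sketch; the subspace and datum are illustrative, and the full framework would maintain one such subspace per tree node:

```python
import numpy as np

def deviation(x, U):
    """Distance from datum x to the subspace spanned by the orthonormal
    columns of U; large values flag potential anomalies / changepoints."""
    return float(np.linalg.norm(x - U @ (U.T @ x)))

# A plane in R^3 and a datum lying 2 units away from it
U = np.array([[1., 0.], [0., 1.], [0., 0.]])
x = np.array([1., 1., 2.])
score = deviation(x, U)  # residual is [0, 0, 2], so the score is 2.0
```

Thresholding a running series of such scores generalizes one-dimensional changepoint detection to data on a tracked low-dimensional model.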
Abstract:
This paper reports the findings from a study of the learning of English intonation by Spanish speakers within the discourse mode of L2 oral presentation. The purpose of this experiment is, firstly, to compare four prosodic parameters before and after an L2 discourse intonation training programme and, secondly, to confirm whether subjects, after the aforementioned L2 discourse intonation training, are able to match the form of these four prosodic parameters to the discourse-pragmatic function of dominance and control. The study designed the instructions and tasks to create the oral and written corpora, and Brazil's Pronunciation for Advanced Learners of English was adapted for the pedagogical aims of the present study. The learners' pre- and post-tasks were acoustically analysed, and a pre-/post-questionnaire design was applied to interpret the acoustic analysis. Results indicate that most of the subjects acquired a wider choice of the four prosodic parameters, partly due to the prosodically annotated transcripts that were developed throughout the L2 discourse intonation course. Conversely, qualitative and quantitative data reveal that most subjects failed to match the forms to their appropriate pragmatic functions to express dominance and control in an L2 oral presentation.
Abstract:
Learning Bayesian networks with bounded tree-width has attracted much attention recently, because low tree-width allows exact inference to be performed efficiently. Some existing methods [korhonen2exact, nie2014advances] tackle the problem by using k-trees to learn the optimal Bayesian network with tree-width up to k. Finding the best k-tree, however, is computationally intractable. In this paper, we propose a sampling method to efficiently find representative k-trees by introducing an informative score function to characterize the quality of a k-tree. To further improve the quality of the k-trees, we propose a probabilistic hill climbing approach that locally refines the sampled k-trees. The proposed algorithm can efficiently learn a quality Bayesian network with tree-width at most k. Experimental results demonstrate that our approach is more computationally efficient than the exact methods with comparable accuracy, and outperforms most existing approximate methods.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Professional Practice in Learning and Development: How to Design and Deliver Plans for the Workplace
Abstract:
Introduction The world is changing! It is volatile, uncertain, complex and ambiguous. As cliché as it may sound, the evidence of such dynamism in the external environment is growing. Business-as-usual is more the exception than the norm. Organizational change is the rule, be it to accommodate and adapt to change, or to instigate and lead change. A constantly changing environment is a situation that all organizations have to live with. What, however, makes some organizations able to thrive better than others? Many scholars and practitioners believe that this is due to the ability to learn. Therefore, this book on developing Learning and Development (L&D) professionals is timely, as it explores and discusses trends and practices that impact organizations, the workforce and L&D professionals. Being able to learn and develop effectively is the cornerstone of motivation, as it helps to address people's need to be competent and to be autonomous (Deci & Ryan, 2002; Loon & Casimir, 2008; Ryan & Deci, 2000). L&D stimulates and empowers people to perform. Organizations that are better at learning at all levels (the individual, group and organizational levels) will always have a better chance of surviving and performing. Given the new reality of a dynamic external environment and constant change, L&D professionals now play an even more important role in their organizations than ever before. However, L&D professionals themselves are not immune to the turbulent changes, as their practices are also impacted. Therefore, the challenges that L&D professionals face are two-pronged: firstly, helping and supporting their organization and its workforce in adapting to change; secondly, developing themselves effectively and efficiently so that they stay one step ahead of the workforce they are meant to help develop.
These challenges are recognised by the CIPD, which recently launched a new L&D qualification that has served as an inspiration for this book. L&D plays a crucial role at both strategic (e.g. organizational capability) and operational (e.g. delivery of training) levels. L&D professionals have moved from being reactive (e.g. following up action after performance appraisals) to being more proactive (e.g. shaping capability). L&D is increasingly viewed as a driver for organizational performance. The CIPD (2014) suggest that L&D is increasingly expected not only to take more responsibility but also accountability for building both individual and organizational knowledge and capability, and to nurture an organizational culture that prizes learning and development. This book is for L&D professionals. Nonetheless, it is also suited to those studying Human Resource Development (HRD) at an intermediate level. The term HRD is more common in academia, and is largely synonymous with L&D (Stewart & Sambrook, 2012). Stewart (1998) defined HRD as follows: 'the practice of HRD is constituted by the deliberate, purposive and active interventions in the natural learning process. Such interventions can take many forms, most capable of categorising as education or training or development' (p. 9). In fact, many parts of this book (e.g. Chapters 5 and 7) are appropriate for anyone who is involved in training and development. This may include a variety of individuals within the L&D community, such as line managers, professional trainers, training solutions vendors, instructional designers, external consultants and mentors (Mayo, 2004). The CIPD (2014) go further, arguing that the role of L&D is broad and plays a significant role in Organizational Development (OD) and Talent Management (TM), as well as in Human Resource Management (HRM) in general.
OD, TM, HRM and L&D are symbiotic in enabling the ‘people management function’ to provide organizations with the capabilities that they need.
Abstract:
This thesis addresses Batch Reinforcement Learning methods in Robotics. This sub-class of Reinforcement Learning has shown promising results and has been the focus of recent research. Three contributions are proposed that aim to extend the state-of-the-art methods, allowing for the faster and more stable learning process required for learning in Robotics. The Q-learning update rule is widely applied, since it allows learning without a model of the environment. However, this update rule is transition-based and does not take advantage of the underlying episodic structure of the collected batch of interactions. The Q-Batch update rule is proposed in this thesis to process experiences along the trajectories collected in the interaction phase. This allows a faster propagation of obtained rewards and penalties, resulting in faster and more robust learning. Non-parametric function approximators are explored, such as Gaussian Processes. This type of approximator allows prior knowledge about the latent function to be encoded in the form of kernels, providing a higher level of flexibility and accuracy. The application of Gaussian Processes in Batch Reinforcement Learning achieved higher performance in learning tasks than other function approximators used in the literature. Lastly, in order to extract more information from the experiences collected by the agent, model-learning techniques are incorporated to learn the system dynamics. In this way, it is possible to augment the set of collected experiences with experiences generated through planning using the learned models. Experiments were carried out mainly in simulation, with some tests carried out on a physical robotic platform. The obtained results show that the proposed approaches are able to outperform the classical Fitted Q Iteration.
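The thesis's exact Q-Batch rule is not reproduced in the abstract, but its key idea, processing experiences backwards along a trajectory instead of as isolated transitions, can be sketched as follows. The state/action encoding, learning rate and discount factor below are illustrative assumptions, not the thesis's settings:

```python
from collections import defaultdict

def backward_sweep(Q, episode, n_actions, alpha=0.5, gamma=0.9):
    """One sweep over an episode of (s, a, r, s_next) tuples, from the last
    transition to the first, so that a reward obtained at the end of the
    trajectory propagates through the whole episode in a single pass
    (s_next is None marks the terminal transition)."""
    for s, a, r, s_next in reversed(episode):
        if s_next is None:
            target = r
        else:
            target = r + gamma * max(Q[(s_next, b)] for b in range(n_actions))
        Q[(s, a)] += alpha * (target - Q[(s, a)])
    return Q

Q = defaultdict(float)
episode = [(0, 0, 0.0, 1), (1, 0, 1.0, None)]  # two-step chain, reward at the end
backward_sweep(Q, episode, n_actions=1)
# After one sweep the initial state already credits the final reward,
# whereas a transition-ordered pass would need two sweeps to do so.
```

A purely transition-based update applied in collection order would need as many passes over the batch as the episode is long to achieve the same propagation, which is the inefficiency the trajectory-based rule targets.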
Abstract:
Socioeconomic status (SES) influences language and cognitive development, with discrepancies particularly noticeable in vocabulary development. This study examines how SES-related differences impact the development of syntactic processing, cognitive inhibition, and word learning. Thirty-eight 4- to 5-year-olds from higher- and lower-SES backgrounds completed a word-learning task in which novel words were embedded in active and passive sentences. Critically, unlike the active sentences, all passive sentences required a syntactic revision. Measures of cognitive inhibition were obtained through a modified Stroop task. Results indicate that lower-SES participants had more difficulty using inhibitory functions to resolve conflict compared to their higher-SES counterparts. However, SES did not impact language processing, as the language outcomes were similar across SES backgrounds. Additionally, stronger inhibitory processes were related to better language outcomes in the passive sentence condition. These results suggest that cognitive inhibition impacts language processing, but that this function may vary across children from different SES backgrounds.