55 results for on-line learning


Relevance:

100.00%

Publisher:

Abstract:

We propose a self-reference multiplexed fibre interferometer (MFI) based on a tunable laser and a fibre Bragg grating (FBG). The optical measurement system multiplexes two Michelson fibre interferometers that share an optical path in the main part of the system. One interferometer serves as a reference to monitor and maintain the accuracy of the measurement system under environmental perturbations; the other serves as the measurement interferometer and obtains information from the target. An active phase-tracking homodyne (APTH) technique is applied for signal processing to achieve high resolution. The MFI can be used for high-precision absolute displacement measurement with different combinations of wavelengths from the tunable laser. By means of the wavelength-division multiplexing (WDM) technique, the MFI is also capable of on-line surface measurement, in which traditional stylus scanning is replaced by spatial light-wave scanning, greatly improving measurement speed and robustness. © 2004 Optical Society of America.
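The abstract does not give the working equations; as a hedged illustration (the symbols below are not taken from the paper) of how combining two wavelengths from the tunable laser yields an absolute rather than modulo-wavelength displacement, the standard two-wavelength relations for a double-pass (Michelson) geometry are

\[
\Lambda = \frac{\lambda_1 \lambda_2}{\lvert \lambda_1 - \lambda_2 \rvert},
\qquad
L = \frac{\Delta\phi}{2\pi}\,\frac{\Lambda}{2},
\]

where \(\lambda_1, \lambda_2\) are the two laser wavelengths, \(\Delta\phi\) is the phase difference measured between them, and the synthetic wavelength \(\Lambda\), much larger than either optical wavelength, sets the unambiguous range of the absolute measurement.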

Relevance:

100.00%

Publisher:

Abstract:

Attention-deficit hyperactivity disorder (ADHD) is the most prevalent and impairing neurodevelopmental disorder, with a worldwide prevalence estimate of 5.29%. ADHD is clinically characterized by hyperactivity-impulsivity and inattention, with neuropsychological deficits in executive functions, attention, working memory and inhibition. These cognitive processes rely on prefrontal cortex function; cognitive training programs enhance the performance of ADHD participants, supporting the idea of neuronal plasticity. Here we propose the development of an on-line puzzle-game-based assessment and training tool in which participants must deduce the ‘winning symbol’ out of N distracters. To increase the ecological validity of the assessments, strategically triggered Twitter/Facebook notifications will challenge the ability to ignore distracters. In the UK, the cost of the disorder to health, social and education services stands at £23m a year. Thus the potential impact of neuropsychological assessment and training on our understanding of the pathophysiology of ADHD, and hence on our treatment interventions and patient outcomes, cannot be overstated.
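The abstract describes the task only at a high level; the following Python sketch of a single assessment block is purely hypothetical (the symbol set, trial structure, feedback and scoring are illustrative assumptions, not details from the proposal) and is meant only to make the "deduce the winning symbol out of N distracters" idea concrete:

```python
import random

SYMBOLS = list("ABCDEFGH")  # hypothetical symbol alphabet

def run_block(n_distracters: int = 3, n_trials: int = 10) -> list:
    """One block: a fixed winning symbol among distracters, deduced from feedback."""
    winner = random.choice(SYMBOLS)
    others = [s for s in SYMBOLS if s != winner]
    results = []
    for trial in range(n_trials):
        display = random.sample(others, n_distracters) + [winner]
        random.shuffle(display)
        choice = input(f"Trial {trial + 1}: pick one of {display} ").strip().upper()
        correct = (choice == winner)
        print("win" if correct else "lose")  # feedback the participant uses to deduce the winner
        results.append({"trial": trial + 1, "choice": choice, "correct": correct})
    return results

if __name__ == "__main__":
    data = run_block()
    print("accuracy:", sum(r["correct"] for r in data) / len(data))
```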

Relevance:

100.00%

Publisher:

Abstract:

Natural gradient learning is an efficient and principled method for improving on-line learning. In practical applications, however, estimating and inverting the Fisher information matrix adds a significant computational cost. We propose to use the matrix momentum algorithm to carry out the inversion efficiently, and we study the efficacy of a single-step estimate of the Fisher information matrix. We analyse the proposed algorithm in a two-layer network, using a statistical mechanics framework that allows us to describe the learning dynamics analytically, and compare its performance with true natural gradient learning and standard gradient descent.
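As a point of reference, here is a minimal sketch of plain natural gradient descent for a logistic-regression model, with the Fisher matrix estimated from an exponentially weighted average of per-example gradient outer products and inverted explicitly at each step; the matrix momentum algorithm studied in the abstract replaces this explicit inversion, and the model, decay factor and damping constant below are illustrative assumptions only:

```python
import numpy as np

def natural_gradient_logistic(X, y, eta=0.1, decay=0.99, damping=1e-3):
    """On-line natural gradient for logistic regression (illustrative sketch)."""
    n_features = X.shape[1]
    w = np.zeros(n_features)
    fisher = np.eye(n_features) * damping            # running empirical Fisher estimate
    for x_t, y_t in zip(X, y):
        p = 1.0 / (1.0 + np.exp(-w @ x_t))           # predicted probability
        grad = (p - y_t) * x_t                       # per-example gradient of the log-loss
        # single-example (rank-one) contribution to the empirical Fisher matrix
        fisher = decay * fisher + (1.0 - decay) * np.outer(grad, grad)
        nat_grad = np.linalg.solve(fisher + damping * np.eye(n_features), grad)
        w -= eta * nat_grad                          # natural-gradient step
    return w

# toy usage on synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
w_true = rng.normal(size=5)
y = (X @ w_true > 0).astype(float)
print(natural_gradient_logistic(X, y))
```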

Relevance:

100.00%

Publisher:

Abstract:

We explore the effects of over-specificity in learning algorithms by investigating the behavior of a student, suited to learn optimally from a teacher B, that instead learns from a teacher B' ≠ B. We consider only the supervised, on-line learning scenario with teachers selected from a particular family. We find that, in the general case, applying the optimal algorithm to the wrong teacher produces a residual generalization error, even if the true teacher is harder to learn. By imposing mild conditions on the form of the learning algorithm, we obtain an approximation for this residual generalization error. Simulations carried out in finite networks validate the estimate.
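The abstract does not specify the family of teachers; assuming, purely for illustration, boolean perceptron students and teachers with spherically symmetric inputs, the generalization error in this literature is the probability of disagreement on a random example, which depends only on the normalized overlap between the student weight vector \(J\) and the teacher weight vector \(B'\):

\[
\epsilon_g = \frac{1}{\pi}\arccos\rho,
\qquad
\rho = \frac{J \cdot B'}{\lVert J \rVert\,\lVert B' \rVert}.
\]

A residual generalization error then corresponds to \(\rho\) saturating strictly below 1 as the number of examples grows.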

Relevance:

100.00%

Publisher:

Abstract:

We study the effect of two types of noise, data noise and model noise, in an on-line gradient-descent learning scenario for a general two-layer student network with an arbitrary number of hidden units. Training examples are randomly drawn input vectors labeled by a two-layer teacher network, also with an arbitrary number of hidden units. The data are then corrupted by Gaussian noise affecting either the output or the model itself. We examine the effect of both types of noise on the evolution of the order parameters and the generalization error in the various phases of the learning process.
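A minimal simulation sketch of this scenario is given below, assuming the soft-committee-machine form commonly used in this line of work (erf hidden units, linear output with no adjustable output weights) and Gaussian output (data) noise; the sizes, learning rate and noise level are illustrative assumptions, and model noise could be added analogously by perturbing the student update itself:

```python
import numpy as np
from scipy.special import erf

def g(x):
    return erf(x / np.sqrt(2.0))                     # hidden-unit activation

def g_prime(x):
    return np.sqrt(2.0 / np.pi) * np.exp(-x**2 / 2.0)

N, K, M = 300, 3, 3                                  # input dim, student/teacher hidden units
eta, sigma_out, n_steps = 0.5, 0.1, 100_000          # learning rate, output-noise level, examples

rng = np.random.default_rng(1)
B = rng.normal(size=(M, N))                          # teacher weight vectors
J = rng.normal(size=(K, N)) * 1e-3                   # student weight vectors, small random init

for _ in range(n_steps):
    x = rng.normal(size=N)                           # random input example
    y = g(B @ x / np.sqrt(N)).sum() + sigma_out * rng.normal()   # label with Gaussian output noise
    h = J @ x / np.sqrt(N)                           # student hidden fields
    delta = y - g(h).sum()                           # output error
    J += (eta / np.sqrt(N)) * delta * g_prime(h)[:, None] * x    # on-line gradient-descent step

# report student-teacher overlaps (order parameters R_km = J_k . B_m / N)
print(J @ B.T / N)
```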

Relevance:

100.00%

Publisher:

Abstract:

The learning properties of a universal approximator, a normalized committee machine with adjustable biases, are studied for on-line back-propagation learning. Within a statistical mechanics framework, numerical studies show that this model has features which do not exist in previously studied two-layer network models without adjustable biases, e.g., attractive suboptimal symmetric phases even for realizable cases and noiseless data.
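The abstract does not write the model out; under one common convention (an assumption here, not taken from the paper), a normalized committee machine with \(K\) hidden units, weight vectors \(J_k\), adjustable biases \(\theta_k\) and activation \(g\) maps an input \(\xi\) to

\[
\sigma(\xi) = \frac{1}{K}\sum_{k=1}^{K} g\bigl(J_k \cdot \xi + \theta_k\bigr),
\]

with the biases \(\theta_k\) trained by on-line back-propagation alongside the weights.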

Relevance:

100.00%

Publisher:

Abstract:

We study the effect of regularization in an on-line gradient-descent learning scenario for a general two-layer student network with an arbitrary number of hidden units. Training examples are randomly drawn input vectors labelled by a two-layer teacher network with an arbitrary number of hidden units; the labels may be corrupted by Gaussian output noise. We examine the effect of weight decay regularization on the dynamical evolution of the order parameters and the generalization error in the various phases of the learning process, in both noiseless and noisy scenarios.
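As a minimal illustration of how weight decay enters the on-line update (the notation and scaling are assumptions, not taken from the abstract), the gradient-descent step for a student hidden-unit weight vector \(J_k\) on an example \((\xi, y)\), with squared error \(\tfrac{1}{2}(y-\sigma(\xi))^2\), learning rate \(\eta\) and decay strength \(\gamma\), reads

\[
J_k \leftarrow (1 - \eta\gamma)\,J_k + \eta\,\bigl(y - \sigma(\xi)\bigr)\,g'\!\left(J_k\cdot\xi\right)\,\xi ,
\]

so the decay term shrinks the weights multiplicatively at every step while the data-driven term pulls them towards the teacher.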

Relevance:

100.00%

Publisher:

Abstract:

A method for calculating the globally optimal learning rate in on-line gradient-descent training of multilayer neural networks is presented. The method is based on a variational approach which maximizes the decrease in generalization error over a given time frame. We demonstrate the method by computing optimal learning rates in typical learning scenarios. The method can also be employed when different learning rates are allowed for different parameter vectors, and it can be used to assess the relevance of related training algorithms based on modifications to the basic gradient-descent rule.
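To make the idea concrete with a toy case that is not taken from the paper: for a single weight with quadratic generalization error \(\epsilon_g(w) = \tfrac{1}{2}\lambda (w - w^{*})^2\) and the gradient step \(w \leftarrow w - \eta\,\epsilon_g'(w)\), one step gives

\[
\epsilon_g\bigl(w - \eta\lambda(w - w^{*})\bigr) = \tfrac{1}{2}\lambda\,(1 - \eta\lambda)^2\,(w - w^{*})^2 ,
\]

which is minimized by the greedy choice \(\eta = 1/\lambda\). The variational approach of the abstract generalizes this single-step reasoning to maximizing the total decrease in generalization error over an extended window of the learning dynamics.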

Relevance:

100.00%

Publisher:

Abstract:

The authors present a model of the multilevel effects of diversity on individual learning performance in work groups. For ethnically diverse work groups, the model predicts that group diversity elicits either positive or negative effects on individual learning performance, depending on whether a focal individual’s ethnic dissimilarity from other group members is high or low. By further considering the status of an individual’s ethnic origin within society (Anglo versus non-Anglo for our U.K. context), the authors hypothesize that the model’s predictions hold more strongly for non-Anglo group members than for Anglo group members. We test this model with data from 412 individuals working on a 24-week business simulation in 87 four- to seven-person groups with varying degrees of ethnic diversity. Two of the three hypotheses derived from the model received full support and one hypothesis received partial support. Implications for theory development, methods, and practice in applied group diversity research are discussed.

Relevance:

100.00%

Publisher:

Abstract:

Bayesian algorithms pose a limit on the performance that learning algorithms can achieve. Natural selection should guide the evolution of information processing systems towards those limits. What can we learn from this evolution, and what properties do the intermediate stages have? While this question is too general to permit any answer, progress can be made by restricting the class of information processing systems under study. We present analytical and numerical results for the evolution of on-line algorithms for learning from examples for neural network classifiers, which may or may not include a hidden layer. The analytical results are obtained by solving a variational problem to determine the learning algorithm that leads to maximum generalization ability. Simulations using evolutionary programming, over programs that implement learning algorithms, confirm and expand these results. The principal result is not just that the evolution is towards the Bayesian limit; indeed, that limit is essentially reached. In addition, we find that evolution is driven by the discovery of useful structures, or combinations of variables and operators, and that across different runs the temporal order in which such combinations are discovered is unique. The main result is that combinations that signal the surprise brought by an example always arise before combinations that serve to gauge the performance of the learning algorithm; these latter structures can be used to implement annealing schedules. The temporal ordering can also be understood analytically by performing the functional optimization in restricted functional spaces. We also show that there is data suggesting that the appearance of these traits follows the same temporal ordering in biological systems. © 2006 American Institute of Physics.
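A minimal sketch of the kind of experiment described, under assumptions not taken from the paper: on-line perceptron learning rules of the modulated-Hebbian form \(w \leftarrow w + f(h, y)\, y\, x / \sqrt{N}\) are parameterized by a small vector of coefficients, and a simple evolutionary-programming loop (mutation plus selection on generalization error) searches over those coefficients; the parameterization, population size and mutation scale are illustrative choices only. The "surprise" variable below flags examples the current student would misclassify.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200                                            # input dimension

def gen_error(w, teacher):
    """Generalization error of a perceptron student: probability of disagreement."""
    rho = w @ teacher / (np.linalg.norm(w) * np.linalg.norm(teacher))
    return np.arccos(np.clip(rho, -1.0, 1.0)) / np.pi

def run_learning(theta, n_examples=3000):
    """On-line learning with a modulated Hebbian rule parameterized by theta."""
    teacher = rng.normal(size=N)
    w = rng.normal(size=N) * 1e-3
    a, b = theta                                   # base rate and extra weight on surprising examples
    for _ in range(n_examples):
        x = rng.normal(size=N)
        y = np.sign(teacher @ x)                   # teacher label
        h = w @ x / np.sqrt(N)                     # student field
        surprised = 1.0 if y * h < 0 else 0.0      # example the student would get wrong
        w += (a + b * surprised) * y * x / np.sqrt(N)   # modulated Hebbian update
    return gen_error(w, teacher)

# evolutionary programming: mutate rule parameters, keep the fitter half
population = [rng.uniform(0, 1, size=2) for _ in range(8)]
for generation in range(12):
    scored = sorted(population, key=run_learning)
    parents = scored[:4]
    children = [p + 0.1 * rng.normal(size=2) for p in parents]
    population = parents + children
    print(f"gen {generation}: best error {run_learning(scored[0]):.3f}")
```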