909 results for "on the job learning"
Abstract:
Promoted as the key policy response to unemployment, the Job Network constitutes an array of interlocking processes that position unemployed people as 'problems' in need of remediation. Unemployment is presented as a primary risk threatening society, and unemployed people are presented as displaying various degrees of riskiness. The Job Seeker Classification Instrument (JSCI) is a 'technology' employed by Centrelink to assess 'risk' and to determine the type of interaction that unemployed people have with the Job Network. In the first instance, we critically examine the development of the JSCI and expose issues that erode its credibility and legitimacy. Second, employing the analytical tools of discourse analysis, we show how the JSCI both assumes and imposes particular subject identities on unemployed people. The purpose of this latter analysis is to illustrate the consequences of the sorts of technologies and interventions used within the Job Network.
Abstract:
The cross-entropy (CE) method is a new generic approach to combinatorial and multi-extremal optimization and rare-event simulation. The purpose of this tutorial is to give a gentle introduction to the CE method. We present the CE methodology, the basic algorithm and its modifications, and discuss applications in combinatorial optimization and machine learning.
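For readers unfamiliar with the CE method summarised above, the following is a minimal sketch of its basic loop for a binary combinatorial problem: sample candidates from a parametric distribution, keep an elite fraction, and refit the distribution to the elites. The objective function, parameter values, and smoothing scheme are illustrative assumptions, not taken from the tutorial itself.

```python
# Minimal cross-entropy method sketch for maximizing a score over bit vectors.
import numpy as np

def cross_entropy_maximize(score, n_bits, n_samples=100, elite_frac=0.1,
                           n_iters=50, smoothing=0.7, seed=0):
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)              # Bernoulli parameter per bit
    n_elite = max(1, int(elite_frac * n_samples))
    for _ in range(n_iters):
        samples = rng.random((n_samples, n_bits)) < p   # draw candidate solutions
        scores = np.array([score(s) for s in samples])
        elite = samples[np.argsort(scores)[-n_elite:]]  # best-scoring candidates
        p = smoothing * elite.mean(axis=0) + (1 - smoothing) * p  # refit + smooth
    return p

# Toy objective: count of ones (optimum is the all-ones vector).
print(cross_entropy_maximize(lambda s: s.sum(), n_bits=20).round(2))
```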
Abstract:
The organisation of the human neuromuscular-skeletal system allows an extremely wide variety of actions to be performed, often with great dexterity. Adaptations associated with skill acquisition occur at all levels of the neuromuscular-skeletal system although all neural adaptations are inevitably constrained by the organisation of the actuating apparatus (muscles and bones). We quantified the extent to which skill acquisition in an isometric task set is influenced by the mechanical properties of the muscles used to produce the required actions. Initial performance was greatly dependent upon the specific combination of torques required in each variant of the experimental task. Five consecutive days of practice improved the performance to a similar degree across eight actions despite differences in the torques required about the elbow and forearm. The proportional improvement in performance was also similar when the actions were performed at either 20 or 40% of participants' maximum voluntary torque capacity. The skill acquired during practice was successfully extrapolated to variants of the task requiring more torque than that required during practice. We conclude that while the extent to which skill can be acquired in isometric actions is independent of the specific combination of joint torques required for target acquisition, the nature of the kinetic adaptations leading to the performance improvement in isometric actions is influenced by the neural and mechanical properties of the actuating muscles.
Abstract:
This research examines the relationship between perceived group diversity and group conflict, and the moderating role of team context. Currently, diversity research predominantly focuses on surface and job-related dimensions, largely to the neglect of deep-level diversity (in terms of values, attitudes and beliefs). First, this research hypothesised that all three dimensions of diversity would be positively related to group conflict, with deep-level diversity the strongest predictor of task conflict. Second, it was hypothesised that team context would moderate the relationship between deep-level diversity and group conflict. Team context refers to the extent to which the work performed (1) has high consequences (in terms of health and well-being for team members and others); (2) is relatively isolating; (3) requires a high reliance upon team members; (4) is volatile; and (5) demands interpersonal attraction and mutual helpfulness. Two studies were conducted. The first study employed 44 part-time employees across a range of occupations, and the second study employed 66 full-time employees from a mining company in Australia. A series of hierarchical multiple regressions and moderated multiple regressions confirmed both hypotheses. Practical implications and future research directions are discussed.
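As an illustration of the moderated multiple regression mentioned in this abstract, the sketch below tests whether team context moderates the diversity-conflict relationship via an interaction term. The data file and column names are hypothetical, not taken from either study.

```python
# Minimal moderated regression sketch: conflict on deep-level diversity,
# team context, and their interaction (the moderation term).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("teams.csv")  # hypothetical dataset, one row per team

# The '*' expands to both main effects plus the interaction; a significant
# deep_diversity:team_context coefficient indicates moderation.
model = smf.ols("conflict ~ deep_diversity * team_context", data=df).fit()
print(model.summary())
```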
Abstract:
Most widely-used computer software packages, such as word processors, spreadsheets and web browsers, incorporate comprehensive help systems, partly because the software is meant for those with little technical knowledge. This paper identifies four systematic philosophies or approaches to help system delivery: the documentation approach, based on written documents, either paper-based or online; the training approach, offered either before the user starts working on the software or on the job; intelligent help, that is, online context-sensitive help or help that relies on software agents; and finally an approach based on minimalism, defined as providing help only when and where it is needed.
Abstract:
Foreign exchange trading has emerged in recent times as a significant activity in many countries. As with most forms of trading, the activity is influenced by many random parameters, so a system that effectively emulates the trading process is very useful. In this paper, we attempt to create such a system with a genetic algorithm engine to emulate trader behaviour on the foreign exchange market and to find the most profitable trading strategy.
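To make the idea of a genetic algorithm engine for trading strategies concrete, here is a minimal sketch that evolves the window lengths of a moving-average crossover rule against a synthetic price series. The rule encoding, fitness measure, and data are illustrative assumptions and do not reflect the paper's actual engine.

```python
# Minimal GA sketch: evolve (short, long) moving-average windows for a
# crossover trading rule; fitness is cumulative profit on a synthetic series.
import numpy as np

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0, 1, 500)) + 100   # synthetic price series

def fitness(genome):
    short, long_ = int(genome[0]), int(genome[1])
    if short >= long_:
        return -np.inf                             # invalid rule
    fast = np.convolve(prices, np.ones(short) / short, mode="valid")
    slow = np.convolve(prices, np.ones(long_) / long_, mode="valid")
    n = min(len(fast), len(slow))
    position = (fast[-n:] > slow[-n:]).astype(float)  # long when fast > slow
    returns = np.diff(prices[-n:])
    return float(position[:-1] @ returns)             # strategy profit

def evolve(pop_size=30, generations=40):
    pop = rng.integers(2, 60, size=(pop_size, 2)).astype(float)
    for _ in range(generations):
        scores = np.array([fitness(g) for g in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]        # selection
        children = parents[rng.integers(len(parents), size=pop_size)].copy()
        children += rng.normal(0, 2, children.shape)              # mutation
        pop = np.clip(children, 2, 120)
        pop[0] = parents[-1]                                      # elitism
    return pop[0]

print(evolve())   # best (short, long) window pair found
```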
Abstract:
Social networks constitute a major channel for the diffusion of information and the formation of attitudes in a society. Introducing a dynamic model of social learning, the first part of this thesis studies the emergence of socially influential individuals and groups, and identifies the characteristics that make them influential. The second part uses a Bayesian network game to analyse the role of social interaction and conformism in the making of decisions whose returns or costs are ex ante uncertain.
Abstract:
The Vapnik-Chervonenkis (VC) dimension is a combinatorial measure of a certain class of machine learning problems, which may be used to obtain upper and lower bounds on the number of training examples needed to learn to prescribed levels of accuracy. Most of the known bounds apply to the Probably Approximately Correct (PAC) framework, which is the framework within which we work in this paper. For a learning problem with some known VC dimension, much is known about the order of growth of the sample-size requirement of the problem, as a function of the PAC parameters. The exact value of the sample-size requirement is, however, less well known, and depends heavily on the particular learning algorithm being used. This is a major obstacle to the practical application of the VC dimension. Hence it is important to know exactly how the sample-size requirement depends on the VC dimension, and with that in mind, we describe a general algorithm for learning problems having VC dimension 1. Its sample-size requirement is minimal (as a function of the PAC parameters), and turns out to be the same for all non-trivial learning problems having VC dimension 1. While the method used cannot be naively generalised to higher VC dimension, it suggests that optimal algorithm-dependent bounds may improve substantially on current upper bounds.
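For context, the "order of growth of the sample-size requirement" referred to above is usually stated as follows. This is the standard formulation for a concept class of VC dimension d in the realizable PAC setting, quoted from the general literature rather than from this paper, with constants omitted.

```latex
% Classical order-of-growth bounds on the PAC sample complexity m(\epsilon,\delta)
% for a concept class of VC dimension d (realizable setting; constants omitted).
\[
  \Omega\!\left(\frac{d + \ln(1/\delta)}{\epsilon}\right)
  \;\le\; m(\epsilon,\delta) \;\le\;
  O\!\left(\frac{d\,\ln(1/\epsilon) + \ln(1/\delta)}{\epsilon}\right)
\]
```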
Abstract:
Using techniques from Statistical Physics, the annealed VC entropy for hyperplanes in high dimensional spaces is calculated as a function of the margin for a spherical Gaussian distribution of inputs.
Abstract:
The G-protein coupled receptors, or GPCRs, comprise simultaneously one of the largest and one of the most multi-functional protein families known to modern-day molecular bioscience. From a drug discovery and pharmaceutical industry perspective, the GPCRs constitute one of the most commercially and economically important groups of proteins known. The GPCRs undertake numerous vital metabolic functions and interact with a hugely diverse range of small and large ligands. Many different methodologies have been developed to efficiently and accurately classify the GPCRs. These range from motif-based techniques to machine learning as well as a variety of alignment-free techniques based on the physicochemical properties of sequences. We review here the available methodologies for the classification of GPCRs. Part of this work focuses on how we have tried to build the intrinsically hierarchical nature of sequence relations, implicit within the family, into an adaptive approach to classification. Importantly, we also allude to some of the key innate problems in developing an effective approach to classifying the GPCRs: the lack of sequence similarity between the six classes that comprise the GPCR family and the low sequence similarity to other family members evinced by many newly revealed members of the family.
Abstract:
The assessment of the reliability of systems which learn from data is a key issue to investigate thoroughly before the actual application of information processing techniques to real-world problems. Over recent years Gaussian processes and Bayesian neural networks have come to the fore, and in this thesis their generalisation capabilities are analysed from theoretical and empirical perspectives. Upper and lower bounds on the learning curve of Gaussian processes are investigated in order to estimate the amount of data required to guarantee a certain level of generalisation performance. In this thesis we analyse the effects on the bounds and the learning curve induced by the smoothness of stochastic processes described by four different covariance functions. We also explain the early, linearly-decreasing behaviour of the curves and we investigate the asymptotic behaviour of the upper bounds. The effects of the noise and the characteristic lengthscale of the stochastic process on the tightness of the bounds are also discussed. The analysis is supported by several numerical simulations. The generalisation error of a Gaussian process is affected by the dimension of the input vector and may be decreased by input-variable reduction techniques. In conventional approaches to Gaussian process regression, the positive definite matrix estimating the distance between input points is often taken to be diagonal. In this thesis we show that a general distance matrix is able to estimate the effective dimensionality of the regression problem as well as to discover the linear transformation from the manifest variables to the hidden-feature space, with a significant reduction of the input dimension. Numerical simulations confirm the significant superiority of the general distance matrix with respect to the diagonal one. In the thesis we also present an empirical investigation of the generalisation errors of neural networks trained by two Bayesian algorithms, the Markov Chain Monte Carlo method and the evidence framework; the neural networks have been trained on the task of labelling segmented outdoor images.
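The contrast between a diagonal and a general distance matrix in a Gaussian process covariance, as discussed in this abstract, can be illustrated with the sketch below: a squared-exponential kernel whose distance matrix M is either diagonal (one lengthscale per input) or a full positive semidefinite matrix M = L^T L encoding a linear map to a lower-dimensional feature space. The data and matrix values are illustrative assumptions, not those used in the thesis.

```python
# Squared-exponential GP covariance with an arbitrary distance matrix M:
# k(x, x') = variance * exp(-0.5 * (x - x')^T M (x - x')).
import numpy as np

def se_kernel(X1, X2, M, variance=1.0):
    diff = X1[:, None, :] - X2[None, :, :]                 # pairwise differences
    sq_dist = np.einsum("ijk,kl,ijl->ij", diff, M, diff)   # Mahalanobis distances
    return variance * np.exp(-0.5 * sq_dist)

X = np.random.default_rng(1).normal(size=(5, 3))

M_diag = np.diag([1.0, 0.5, 0.1])          # diagonal case: one lengthscale per input
L = np.array([[1.0, 0.8, 0.0],             # general case: M = L^T L can capture
              [0.0, 0.5, 0.3]])            # a linear map to a 2-D feature space
M_full = L.T @ L

print(se_kernel(X, X, M_diag).shape, se_kernel(X, X, M_full).shape)
```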
Abstract:
Changes in the strength of signalling between neurones are thought to provide a cellular substrate for learning and memory. In the cerebellar cortex, raising the frequency and the strength of parallel fibre (PF) stimulation leads to a long-term depression (LTD) of the strength of signalling at the synapse between PFs and Purkinje cells (PCs), which spreads to distant synapses on the same cell via a nitric oxide (NO) dependent mechanism. At the same synapse, but under conditions of reduced post-synaptic calcium activity, raised frequency stimulation (RFS) of PFs triggers a long-term potentiation of synaptic transmission. The aims of the work described in this thesis were to investigate the conditions necessary for LTD and LTP at this synapse following RFS and to identify the origins and second messenger cascades involved in the induction and spread of LTP and LTD. In thin, parasagittal cerebellar slices, whole-cell patch clamp recordings were made from PCs and the effects of RFS of one of two independent PF inputs to the same PC were examined under a range of experimental conditions. Under conditions designed to reduce post-synaptic calcium activity, RFS to a single PF input led to LTP and a decrease in paired pulse facilitation (PPF) in both pathways. This heterosynaptic potentiation was prevented by inhibition of protein kinase A (PKA) or by inhibition of NO synthase with either 7-nitroindazole (7-NI) or NG-nitro-L-arginine methyl ester. Inhibition of guanylate cyclase (GC) or protein kinase G (PKG) had no effect. A similar potentiation was observed upon application of the adenylyl cyclase (AC) activator forskolin or the NO donor spermine NONOate. Both of these treatments also resulted in an increase in the frequency of mEPSCs, which provides further evidence for a presynaptic origin of LTP. Forskolin-induced potentiation and the increase in mEPSC frequency were blocked by 7-NI. The styryl dye FM1-43, a fluorescent reporter of endo- and exocytosis, was also used to further examine the possible pre-synaptic origins of LTP. RFS or forskolin application enhanced FM1-43 de-staining, and NOS inhibitors blocked this effect. Application of NONOate also enhanced FM1-43 de-staining. When post-synaptic calcium activity was less strictly buffered, RFS to a single PF input led to a transient potentiation that was succeeded by LTD in both pathways. This LTD, which resembled previously described forms, was prevented by inhibition of the NO/cGMP/PKG cascade. Modification of the AC/cAMP/PKA cascade had no effect. In summary, the direction of synaptic plasticity at the PF-PC synapse in response to RFS depends largely on the level of post-synaptic calcium activity. LTP and LTD were non-input specific and both forms of plasticity were dependent on NOS activity. Induction of LTP was mediated by a presynaptic mechanism and depended on NO and cAMP production. LTD, on the other hand, was a post-synaptic process and required activity of the NO/cGMP/PKG signalling cascade.