904 results for Learning algorithm
Abstract:
This paper presents an overview of the characteristics, roles and responsibilities of those who are in charge of the Corporate Educational Systems in several organizations from distinct industries in Brazil, based on research carried out by the authors. The analysis compares the findings with what is available in the literature on this subject, so as to provide insights into how Brazilian companies have dealt with the difficult task of developing competences in their employees. Special attention is given to the Chief Learning Officer's role (or the lack of it) - someone who is supposed to be in charge of the employees' development processes in a given organization. The results show that this role is not yet a clear or unanimous concept, either in terms of the functions to be performed or the so-called strategic importance given to this sort of executive. This research is both exploratory and descriptive, and due to the use of an intentional sample, the inferences are limited. Despite these limitations, its findings may enrich the discussion on this subject.
Abstract:
A graph clustering algorithm constructs groups of closely related parts and machines separately. After the two groupings are matched for the fewest intercell moves, a refining process runs on the initial cell formation to further decrease the number of intercell moves. A simple modification of this main approach can handle practical constraints, such as the popular constraint bounding the maximum number of machines in a cell. Our approach yields a large improvement in computational time and, more importantly, in the number of intercell moves when the computational results are compared with the best known solutions from the literature. (C) 2009 Elsevier Ltd. All rights reserved.
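To make the refining step concrete, here is a minimal Python sketch of a greedy refinement pass over an initial cell formation. It assumes a binary part-machine incidence matrix and integer cell assignments; the function names and the greedy move strategy are illustrative assumptions, not the authors' algorithm, which matches separately clustered part and machine groups before refining.

```python
import numpy as np

def intercell_moves(A, part_cell, machine_cell):
    """Count operations (1s in the binary part-machine incidence
    matrix A) whose machine lies outside the part's assigned cell."""
    parts, machines = np.nonzero(A)
    return int(np.sum(part_cell[parts] != machine_cell[machines]))

def refine(A, part_cell, machine_cell, n_cells, max_machines):
    """Greedy refinement: move each machine to the cell that lowers
    the intercell-move count, respecting the cell-size bound.
    part_cell and machine_cell are integer numpy arrays."""
    improved = True
    while improved:
        improved = False
        for m in range(A.shape[1]):
            best = machine_cell[m]
            best_cost = intercell_moves(A, part_cell, machine_cell)
            for c in range(n_cells):
                if c == machine_cell[m]:
                    continue
                if np.sum(machine_cell == c) >= max_machines:
                    continue  # respect the bound on machines per cell
                old = machine_cell[m]
                machine_cell[m] = c
                cost = intercell_moves(A, part_cell, machine_cell)
                machine_cell[m] = old
                if cost < best_cost:
                    best, best_cost = c, cost
            if best != machine_cell[m]:
                machine_cell[m] = best
                improved = True
    return machine_cell
```

A move is accepted only when it strictly lowers the intercell-move count, so the loop terminates.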
Abstract:
The acquisition and extinction of affective valence to neutral geometrical shape conditional stimuli was investigated in three experiments. Experiment 1 employed a differential conditioning procedure with aversive shock USs. Differential electrodermal responding was evident during acquisition and lost during extinction. As indexed by verbal ratings, the CS+ acquired negative valence during acquisition, which was reduced after extinction. Affective priming, a reaction-time-based, demand-free measure of stimulus valence, failed to provide evidence for affective learning. Experiment 2 employed pictures of happy and angry faces as USs. Valence ratings after acquisition were more positive for the CS paired with happy faces (CS-H) and less positive for the CS paired with angry faces (CS-A) than during baseline. Extinction training reduced the extent of acquired valence significantly for both CSs; however, ratings of the CS-A remained different from baseline. Affective priming confirmed these results, yielding differences between CS-A and CS-H after acquisition for pleasant and unpleasant targets, but for pleasant targets only after extinction. Experiment 3 replicated the design of Experiment 2, but presented the US pictures backwardly masked. Neither rating nor affective priming measures yielded any evidence for affective learning. The present results confirm across two different experimental procedures that, contrary to predictions from dual-process accounts of human learning, affective learning is subject to extinction.
Abstract:
Fear-relevant stimuli, such as snakes, spiders and heights, preferentially capture attention as compared to non-fear-relevant stimuli. This is said to reflect an encapsulated mechanism whereby attention is captured by the simple perceptual features of stimuli that have evolutionary significance. Research using pictures of snakes and spiders has found some support for this account; however, participants may have had prior fear of snakes and spiders that influenced results. The current research compared the responses of snake and spider experts who had little fear of snakes and spiders with those of control participants across a series of affective priming and visual search tasks. Experts discriminated between dangerous and nondangerous snakes and spiders, and expert responses to pictures of nondangerous snakes and spiders differed from those of control participants. The current results dispute the claim that stimulus fear relevance is based purely on perceptual features and provide support for the role of learning and experience.
Abstract:
The Australian Universities Teaching Committee (AUTC) funds projects intended to improve the quality of teaching and learning in specific disciplinary areas. The project brief for 'Learning Outcomes and Curriculum Development in Psychology' for 2004/2005 was to 'produce an evaluative overview of courses ... with a focus on the specification and assessment of learning outcomes and ... identify strategic directions for universities to enhance teaching and learning'. This project was awarded to a consortium from The University of Queensland, University of Tasmania, and Southern Cross University. The starting point for this project is an analysis of the scientist-practitioner model and its role in curriculum design, a review of current challenges at a conceptual level, and consideration of the implications of recent changes to universities relating to such things as internationalisation of programs and technological advances. The project will seek to bring together stakeholders from around the country in order to survey the widest possible range of perspectives on the project brief requirements. It is hoped also to establish mechanisms for future scholarly discussion of these issues, including the establishment of an Australian Society for the Teaching of Psychology and an annual conference.
Abstract:
Protein malnutrition induces structural, neurochemical and functional changes in the central nervous system, leading to alterations in the cognitive and behavioral development of rats. The aim of this work was to investigate the effects of postnatal protein malnutrition on learning and memory tasks. Previously malnourished (6% protein) and well-nourished (16% protein) rats were tested in three experiments: working memory tasks in the Morris water maze (Experiment I), recognition memory of objects (Experiment II), and working memory in the water T-maze (Experiment III). The results showed higher escape latencies in malnourished animals in Experiment I, lower recognition indexes in malnourished animals in Experiment II, and no differences due to diet in Experiment III. It is suggested that protein malnutrition imposed early in the life of rats can produce impairments in both working memory in the Morris maze and recognition memory in the open field test.
Abstract:
There is no specific test to diagnose Alzheimer's disease (AD). Its diagnosis should be based upon clinical history, neuropsychological and laboratory tests, neuroimaging and electroencephalography (EEG). Therefore, new approaches are necessary to enable earlier and more accurate diagnosis and to follow treatment results. In this study we used a Machine Learning (ML) technique, named Support Vector Machine (SVM), to search for patterns in EEG epochs to differentiate AD patients from controls. As a result, we developed a quantitative EEG (qEEG) processing method for automatic differentiation of patients with AD from normal individuals, as a complement to the diagnosis of probable dementia. We studied EEGs from 19 normal subjects (14 females/5 males, mean age 71.6 years) and 16 patients with probable mild to moderate AD (14 females/2 males, mean age 73.4 years). The analysis of individual EEG epochs yielded 79.9% accuracy and 83.2% sensitivity. The analysis considering the diagnosis of each individual patient reached 87.0% accuracy and 91.7% sensitivity.
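As a rough illustration of the epoch-classification setup (not the authors' qEEG pipeline), the following Python sketch trains an RBF-kernel SVM on per-epoch feature vectors with scikit-learn; the feature matrix and labels here are random placeholders standing in for spectral features extracted from EEG epochs.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: one row of features per EEG epoch (placeholder values),
# y: 1 for epochs from AD patients, 0 for controls.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))    # hypothetical 40 features per epoch
y = rng.integers(0, 2, size=200)  # hypothetical labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"epoch-level accuracy: {scores.mean():.3f}")
```

For the per-patient diagnosis step, all epochs from one subject should fall in the same cross-validation fold (e.g. with GroupKFold), otherwise the estimated accuracy is inflated.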
Abstract:
This special issue represents a further exploration of some issues raised at a symposium entitled “Functional magnetic resonance imaging: From methods to madness” presented during the 15th annual Theoretical and Experimental Neuropsychology (TENNET XV) meeting in Montreal, Canada in June, 2004. The special issue's theme is methods and learning in functional magnetic resonance imaging (fMRI), and it comprises 6 articles (3 reviews and 3 empirical studies). The first (Amaro and Barker) provides a beginner's guide to fMRI and the BOLD effect (perhaps an alternative title might have been “fMRI for dummies”). While fMRI is now commonplace, there are still researchers who have yet to employ it as an experimental method and need some basic questions answered before they venture into new territory. This article should serve them well. A key issue of interest at the symposium was how fMRI could be used to elucidate cerebral mechanisms responsible for new learning. The next 4 articles address this directly, with the first (Little and Thulborn) an overview of data from fMRI studies of category learning, and the second from the same laboratory (Little, Shin, Siscol, and Thulborn) an empirical investigation of changes in brain activity occurring across different stages of learning. While a role for medial temporal lobe (MTL) structures in episodic memory encoding has been acknowledged for some time, the different experimental tasks and stimuli employed across neuroimaging studies have, unsurprisingly, produced conflicting data in terms of the precise subregion(s) involved. The next paper (Parsons, Haut, Lemieux, Moran, and Leach) addresses this by examining effects of stimulus modality during verbal memory encoding. Typically, BOLD fMRI studies of learning are conducted over short time scales; however, the fourth paper in this series (Olson, Rao, Moore, Wang, Detre, and Aguirre) describes an empirical investigation of learning occurring over a longer than usual period, achieving this by employing a relatively novel technique called perfusion fMRI. This technique shows considerable promise for future studies. The final article in this special issue (de Zubicaray) represents a departure from the more familiar cognitive neuroscience applications of fMRI, instead describing how neuroimaging studies might be conducted to both inform and constrain information processing models of cognition.
Abstract:
Extended gcd computation is interesting in itself. It also plays a fundamental role in other calculations. We present a new algorithm for solving the extended gcd problem. The algorithm has a particularly simple description and is practical. It also provides refined bounds on the size of the multipliers obtained.
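For reference, the classical iterative extended Euclidean algorithm is sketched below in Python; the paper's contribution is a new variant with refined bounds on the multipliers, which is not reproduced here.

```python
def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and a*x + b*y == g,
    via the classical iterative extended Euclidean algorithm."""
    x0, x1, y0, y1 = 1, 0, 0, 1
    while b:
        q, a, b = a // b, b, a % b
        x0, x1 = x1, x0 - q * x1   # maintain a == a0*x0 + b0*y0
        y0, y1 = y1, y0 - q * y1
    return a, x0, y0

g, x, y = extended_gcd(240, 46)
assert g == 240 * x + 46 * y       # 2 == 240*(-9) + 46*47
```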
Abstract:
Qu-Prolog is an extension of Prolog which performs meta-level computations over object languages, such as predicate calculi and lambda-calculi, which have object-level variables and quantifier or binding symbols creating local scopes for those variables. As in Prolog, the instantiable (meta-level) variables of Qu-Prolog range over object-level terms, and in addition other Qu-Prolog syntax denotes the various components of the object-level syntax, including object-level variables. Further, the meta-level operation of substitution into object-level terms is directly represented by appropriate Qu-Prolog syntax. Again as in Prolog, the driving mechanism in Qu-Prolog computation is a form of unification, but this is substantially more complex than for Prolog because of Qu-Prolog's greater generality, and especially because substitution operations are evaluated during unification. In this paper, the Qu-Prolog unification algorithm is specified, formalised and proved correct. Further, the analysis of the algorithm is carried out in a framework which straightforwardly allows the 'completeness' of the algorithm to be proved: though fully explicit answers to unification problems are not always provided, no information is lost in the unification process.
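As background only, here is a sketch of standard first-order syntactic unification in Python. Qu-Prolog's algorithm is substantially more general (object-level variables, binders, and substitutions evaluated during unification), none of which this toy handles; the term representation is an assumption made for illustration.

```python
def is_var(t):
    """Variables are strings starting with an uppercase letter."""
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    """Follow variable bindings to a representative term."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(t1, t2, subst=None):
    """First-order unification; terms are variables, atoms (lowercase
    strings), or tuples ('functor', arg1, ...). Returns a substitution
    dict or None. Occurs check omitted, as in standard Prolog."""
    subst = {} if subst is None else subst
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if is_var(t1):
        return {**subst, t1: t2}
    if is_var(t2):
        return {**subst, t2: t1}
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and len(t1) == len(t2) and t1[0] == t2[0]):
        for a, b in zip(t1[1:], t2[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None

# unify(('f', 'X', 'b'), ('f', 'a', 'Y')) -> {'X': 'a', 'Y': 'b'}
```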
Abstract:
An algorithm for explicit integration of structural dynamics problems with multiple time steps is proposed that averages accelerations to obtain subcycle states at a nodal interface between regions integrated with different time steps. With integer time step ratios, the resulting subcycle updates at the interface sum to give the same effect as a central difference update over a major cycle. The algorithm is shown to have good accuracy, and stability properties in linear elastic analysis similar to those of constant-velocity subcycling algorithms. The implementation of a generalised form of the algorithm with non-integer time step ratios is presented. (C) 1997 by John Wiley & Sons, Ltd.
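For context, the baseline explicit central-difference (leapfrog) update that the subcycle updates reproduce over a major cycle looks like the following Python sketch for an undamped linear system; the paper's interface acceleration-averaging scheme itself is not reproduced here.

```python
import numpy as np

def central_difference(M_inv, K, f, u0, v0, dt, n_steps):
    """Explicit central-difference stepping for the undamped linear
    system M a + K u = f (f held constant here for simplicity)."""
    u = u0.copy()
    a = M_inv @ (f - K @ u)
    v_half = v0 + 0.5 * dt * a   # stagger velocity to t + dt/2
    for _ in range(n_steps):
        u = u + dt * v_half      # displacement update
        a = M_inv @ (f - K @ u)  # new acceleration
        v_half = v_half + dt * a # velocity update to next half step
    return u, v_half
```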
Abstract:
The popular Newmark algorithm, used for implicit direct integration of structural dynamics, is extended by means of a nodal partition to permit the use of different timesteps in different regions of a structural model. The algorithm developed has as a special case an explicit-explicit subcycling algorithm previously reported by Belytschko, Yen and Mullen. That algorithm has been shown, in the absence of damping or other energy dissipation, to exhibit instability over narrow timestep ranges that become narrower as the number of degrees of freedom increases, making them unlikely to be encountered in practice. The present algorithm avoids such instabilities in the case of a one-to-two timestep ratio (two subcycles), achieving unconditional stability in an exponential sense for a linear problem. However, with three or more subcycles, the trapezoidal rule exhibits stability that becomes conditional, falling towards that of the central difference method as the number of subcycles increases. Instabilities over narrow timestep ranges, which become narrower as the model size increases, also appear with three or more subcycles. However, by moving the partition between timesteps one row of elements into the region suitable for integration with the larger timestep, the unstable timestep ranges become extremely narrow, even in simple systems with a few degrees of freedom. Accuracy is also improved. Use of a version of the Newmark algorithm that dissipates high frequencies minimises or eliminates these narrow bands of instability. Viscous damping is also shown to remove these instabilities, at the expense of having more effect on the low-frequency response.
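A minimal sketch of the single-timestep Newmark scheme that the paper extends is given below (Python, linear system, constant effective matrix); the nodal partition and subcycling logic are not reproduced.

```python
import numpy as np

def newmark(M, C, K, f, u0, v0, dt, n_steps, beta=0.25, gamma=0.5):
    """Implicit Newmark integration of M a + C v + K u = f(t) on a
    single uniform timestep (no subcycling)."""
    u, v = u0.copy(), v0.copy()
    a = np.linalg.solve(M, f(0.0) - C @ v - K @ u)
    S = M + gamma * dt * C + beta * dt**2 * K  # effective matrix
    for n in range(1, n_steps + 1):
        u_pred = u + dt * v + dt**2 * (0.5 - beta) * a  # predictors
        v_pred = v + dt * (1.0 - gamma) * a
        a = np.linalg.solve(S, f(n * dt) - C @ v_pred - K @ u_pred)
        u = u_pred + beta * dt**2 * a                   # correctors
        v = v_pred + gamma * dt * a
    return u, v
```

With beta = 1/4 and gamma = 1/2 this is the trapezoidal rule; taking gamma > 1/2 introduces the high-frequency numerical dissipation the abstract mentions.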
Abstract:
We propose a simulated-annealing-based genetic algorithm for solving model parameter estimation problems. The algorithm incorporates advantages of both genetic algorithms and simulated annealing. Tests on computer-generated synthetic data that closely resemble the optical constants of a metal were performed to compare the efficiency of plain genetic algorithms against the simulated-annealing-based genetic algorithm. These tests assess the ability of the algorithms to find the global minimum and the accuracy of the values obtained for the model parameters. Finally, the algorithm with the best performance is used to fit a model dielectric function to data for platinum and aluminum. (C) 1997 Optical Society of America.
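As an illustration of the hybrid idea (genetic-algorithm variation operators combined with simulated-annealing-style Metropolis acceptance), here is a small Python sketch for minimizing a model-fitting loss; all operator details, parameter defaults, and the cooling schedule are assumptions, not the authors' algorithm.

```python
import math
import random

def sa_ga(loss, bounds, pop_size=40, generations=200, t0=1.0, cooling=0.98):
    """GA with Metropolis acceptance: a child replaces its parent
    always when better, and with probability exp(-delta/T) when
    worse; the temperature T is cooled each generation."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    T = t0
    for _ in range(generations):
        new_pop = []
        for parent in pop:
            mate = min(random.sample(pop, 3), key=loss)  # tournament selection
            cut = random.randrange(dim)
            child = parent[:cut] + mate[cut:]            # one-point crossover
            i = random.randrange(dim)                    # Gaussian mutation, clipped
            lo, hi = bounds[i]
            child[i] = min(hi, max(lo, child[i] + random.gauss(0, 0.1 * (hi - lo))))
            delta = loss(child) - loss(parent)
            if delta < 0 or random.random() < math.exp(-delta / T):
                new_pop.append(child)                    # Metropolis acceptance
            else:
                new_pop.append(parent)
        pop, T = new_pop, T * cooling                    # cool the temperature
    return min(pop, key=loss)

# Toy usage: recover the minimizer of a quadratic "model-fitting" loss.
best = sa_ga(lambda p: (p[0] - 3.0)**2 + (p[1] + 1.0)**2, [(-5, 5), (-5, 5)])
```

The annealed acceptance lets the population escape local minima early on, while cooling gradually turns the search into plain greedy selection.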