885 results for Q-learning algorithm
Abstract:
Fear-relevant stimuli, such as snakes, spiders and heights, preferentially capture attention as compared to nonfear-relevant stimuli. This is said to reflect an encapsulated mechanism whereby attention is captured by the simple perceptual features of stimuli that have evolutionary significance. Research, using pictures of snakes and spiders, has found some support for this account; however, participants may have had prior fear of snakes and spiders that influenced results. The current research compared responses of snake and spider experts who had little fear of snakes and spiders, and control participants, across a series of affective priming and visual search tasks. Experts discriminated between dangerous and nondangerous snakes and spiders, and expert responses to pictures of nondangerous snakes and spiders differed from those of control participants. The current results dispute that stimulus fear relevance is based purely on perceptual features, and provide support for the role of learning and experience.
Abstract:
The Australian Universities Teaching Committee (AUTC) funds projects intended to improve the quality of teaching and learning in specific disciplinary areas. The project brief for 'Learning Outcomes and Curriculum Development in Psychology' for 2004/2005 was to 'produce an evaluative overview of courses ... with a focus on the specification and assessment of learning outcomes and ... identify strategic directions for universities to enhance teaching and learning'. This project was awarded to a consortium from The University of Queensland, University of Tasmania, and Southern Cross University. The starting point for this project is an analysis of the scientist-practitioner model and its role in curriculum design, a review of current challenges at a conceptual level, and consideration of the implications of recent changes to universities relating to such things as internationalisation of programs and technological advances. The project will seek to bring together stakeholders from around the country in order to survey the widest possible range of perspectives on the project brief requirements. It is hoped also to establish mechanisms for future scholarly discussion of these issues, including the establishment of an Australian Society for the Teaching of Psychology and an annual conference.
Abstract:
Protein malnutrition induces structural, neurochemical and functional changes in the central nervous system, leading to alterations in the cognitive and behavioral development of rats. The aim of this work was to investigate the effects of postnatal protein malnutrition on learning and memory tasks. Previously malnourished (6% protein) and well-nourished rats (16% protein) were tested in three experiments: working memory tasks in the Morris water maze (Experiment I), recognition memory of objects (Experiment II), and working memory in the water T-maze (Experiment III). The results showed higher escape latencies in malnourished animals in Experiment I, lower recognition indexes of malnourished animals in Experiment II, and no differences due to diet in Experiment III. It is suggested that protein malnutrition imposed early in the life of rats can produce impairments in both working memory in the Morris maze and recognition memory in the open field tests.
Abstract:
This paper proposes the use of the q-Gaussian mutation with self-adaptation of the shape of the mutation distribution in evolutionary algorithms. The shape of the q-Gaussian mutation distribution is controlled by a real parameter q. In the proposed method, the real parameter q of the q-Gaussian mutation is encoded in the chromosome of individuals and hence is allowed to evolve during the evolutionary process. In order to test the new mutation operator, evolution strategy and evolutionary programming algorithms with self-adapted q-Gaussian mutation generated from anisotropic and isotropic distributions are presented. The theoretical analysis of the q-Gaussian mutation is also provided. In the experimental study, the q-Gaussian mutation is compared to Gaussian and Cauchy mutations in the optimization of a set of test functions. Experimental results show the efficiency of the proposed method of self-adapting the mutation distribution in evolutionary algorithms.
Abstract:
There is no specific test to diagnose Alzheimer's disease (AD). Its diagnosis should be based upon clinical history, neuropsychological and laboratory tests, neuroimaging and electroencephalography (EEG). Therefore, new approaches are necessary to enable earlier and more accurate diagnosis and to follow treatment results. In this study we used a Machine Learning (ML) technique, named Support Vector Machine (SVM), to search patterns in EEG epochs to differentiate AD patients from controls. As a result, we developed a quantitative EEG (qEEG) processing method for automatic differentiation of patients with AD from normal individuals, as a complement to the diagnosis of probable dementia. We studied EEGs from 19 normal subjects (14 females/5 males, mean age 71.6 years) and 16 patients with probable mild to moderate AD (14 females/2 males, mean age 73.4 years). Analysis of individual EEG epochs yielded an accuracy of 79.9% and a sensitivity of 83.2%. The analysis considering the diagnosis of each individual patient reached 87.0% accuracy and 91.7% sensitivity.
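A minimal sketch of this epoch-classification workflow, assuming scikit-learn is available and using purely synthetic stand-in features (real qEEG features such as per-band spectral powers would replace the random vectors):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-ins for per-epoch qEEG feature vectors:
# class 0 ~ "control", class 1 ~ "AD-like" with shifted feature means.
n, d = 200, 8
X0 = rng.normal(0.0, 1.0, size=(n, d))
X1 = rng.normal(1.0, 1.0, size=(n, d))
X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n)

# Train an SVM on individual epochs, then score held-out epochs
Xtr, Xte, ytr, yte = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(Xtr, ytr)
acc = clf.score(Xte, yte)
print(f"epoch-level accuracy: {acc:.2f}")
```

The study's second, higher figure (87.0% accuracy) comes from aggregating epoch-level decisions per patient, which in a sketch like this would amount to a majority vote over each subject's epochs.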
Abstract:
This special issue represents a further exploration of some issues raised at a symposium entitled “Functional magnetic resonance imaging: From methods to madness” presented during the 15th annual Theoretical and Experimental Neuropsychology (TENNET XV) meeting in Montreal, Canada in June, 2004. The special issue’s theme is methods and learning in functional magnetic resonance imaging (fMRI), and it comprises 6 articles (3 reviews and 3 empirical studies). The first (Amaro and Barker) provides a beginner's guide to fMRI and the BOLD effect (perhaps an alternative title might have been “fMRI for dummies”). While fMRI is now commonplace, there are still researchers who have yet to employ it as an experimental method and need some basic questions answered before they venture into new territory. This article should serve them well. A key issue of interest at the symposium was how fMRI could be used to elucidate cerebral mechanisms responsible for new learning. The next 4 articles address this directly, with the first (Little and Thulborn) an overview of data from fMRI studies of category-learning, and the second from the same laboratory (Little, Shin, Siscol, and Thulborn) an empirical investigation of changes in brain activity occurring across different stages of learning. While a role for medial temporal lobe (MTL) structures in episodic memory encoding has been acknowledged for some time, the different experimental tasks and stimuli employed across neuroimaging studies have, not surprisingly, produced conflicting data in terms of the precise subregion(s) involved. The next paper (Parsons, Haut, Lemieux, Moran, and Leach) addresses this by examining effects of stimulus modality during verbal memory encoding.
Typically, BOLD fMRI studies of learning are conducted over short time scales; however, the fourth paper in this series (Olson, Rao, Moore, Wang, Detre, and Aguirre) describes an empirical investigation of learning occurring over a longer than usual period, achieving this by employing a relatively novel technique called perfusion fMRI. This technique shows considerable promise for future studies. The final article in this special issue (de Zubicaray) represents a departure from the more familiar cognitive neuroscience applications of fMRI, instead describing how neuroimaging studies might be conducted to both inform and constrain information processing models of cognition.
Abstract:
Objective: To examine the quality of diabetes care and prevention of cardiovascular disease (CVD) in Australian general practice patients with type 2 diabetes and to investigate its relationship with coronary heart disease absolute risk (CHDAR). Methods: A total of 3286 patient records were extracted from registers of patients with type 2 diabetes held by 16 divisions of general practice (250 practices) across Australia for the year 2002. CHDAR was estimated using the United Kingdom Prospective Diabetes Study algorithm, with higher CHDAR set at a 10 year risk of >15%. Multivariate multilevel logistic regression investigated the association between CHDAR and diabetes care. Results: 47.9% of diabetic patient records had glycosylated haemoglobin (HbA1c) >7%, 87.6% had total cholesterol >= 4.0 mmol/l, and 73.8% had blood pressure (BP) >= 130/85 mm Hg. 57.6% of patients were at a higher CHDAR, of whom 76.8% were not on lipid modifying medication and 66.2% were not on antihypertensive medication. After adjusting for clustering at the general practice level and age, lipid modifying medication was negatively related to CHDAR (odds ratio (OR) 0.84) and total cholesterol. Antihypertensive medication was positively related to systolic BP but negatively related to CHDAR (OR 0.88). Referral to ophthalmologists/optometrists and attendance at other health professionals were not related to CHDAR. Conclusions: At the time of the study, diabetes and CVD preventive care in Australian general practice was suboptimal, even after a number of national initiatives. The Australian Pharmaceutical Benefits Scheme (PBS) guidelines need to be modified to improve CVD preventive care in patients with type 2 diabetes.
Abstract:
Extended gcd computation is interesting in itself. It also plays a fundamental role in other calculations. We present a new algorithm for solving the extended gcd problem. This algorithm has a particularly simple description and is practical. It also provides refined bounds on the size of the multipliers obtained.
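For reference, the classic extended Euclidean algorithm that this problem statement builds on fits in a few lines; the paper's own algorithm, with its refined bounds on the multipliers, is not reproduced here.

```python
def ext_gcd(a, b):
    """Extended Euclidean algorithm.

    Returns (g, x, y) such that a*x + b*y == g == gcd(a, b).
    The pair (x, y) is a set of Bezout multipliers for (a, b).
    """
    old_r, r = a, b
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return old_r, old_x, old_y

g, x, y = ext_gcd(240, 46)
print(g, x, y)  # g == 2, and 240*(-9) + 46*47 == 2
```

The "size of the multipliers" mentioned in the abstract refers to how large x and y can grow; the classic algorithm already keeps them bounded by roughly b/g and a/g, and the paper's contribution is a refined bound.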
Abstract:
Qu-Prolog is an extension of Prolog which performs meta-level computations over object languages, such as predicate calculi and lambda-calculi, which have object-level variables, and quantifier or binding symbols creating local scopes for those variables. As in Prolog, the instantiable (meta-level) variables of Qu-Prolog range over object-level terms, and in addition other Qu-Prolog syntax denotes the various components of the object-level syntax, including object-level variables. Further, the meta-level operation of substitution into object-level terms is directly represented by appropriate Qu-Prolog syntax. Again as in Prolog, the driving mechanism in Qu-Prolog computation is a form of unification, but this is substantially more complex than for Prolog because of Qu-Prolog's greater generality, and especially because substitution operations are evaluated during unification. In this paper, the Qu-Prolog unification algorithm is specified, formalised and proved correct. Further, the analysis of the algorithm is carried out in a framework that straightforwardly allows the 'completeness' of the algorithm to be proved: though fully explicit answers to unification problems are not always provided, no information is lost in the unification process.
Abstract:
Quantum Lie algebras are generalizations of Lie algebras which have the quantum parameter h built into their structure. They have been defined concretely as certain submodules L-h(g) of the quantized enveloping algebras U-h(g). On them the quantum Lie product is given by the quantum adjoint action. Here we define for any finite-dimensional simple complex Lie algebra g an abstract quantum Lie algebra g(h) independent of any concrete realization. Its h-dependent structure constants are given in terms of inverse quantum Clebsch-Gordan coefficients. We then show that all concrete quantum Lie algebras L-h(g) are isomorphic to an abstract quantum Lie algebra g(h). In this way we prove two important properties of quantum Lie algebras: 1) all quantum Lie algebras L-h(g) associated to the same g are isomorphic, 2) the quantum Lie product of any L-h(g) is q-antisymmetric. We also describe a construction of L-h(g) which establishes their existence.
Abstract:
We describe the twisted affine superalgebra sl(2|2)((2)) and its quantized version U-q[sl(2|2)((2))]. We investigate the tensor product representation of the four-dimensional grade star representation for the fixed-point subsuperalgebra U-q[osp(2|2)]. We work out the tensor product decomposition explicitly and find that the decomposition is not completely reducible. Associated with this four-dimensional grade star representation we derive two U-q[osp(2|2)] invariant R-matrices: one of them corresponds to U-q[sl(2|2)((2))] and the other to U-q[osp(2|2)((1))]. Using the R-matrix for U-q[sl(2|2)((2))], we construct a new U-q[osp(2|2)] invariant strongly correlated electronic model, which is integrable in one dimension. Interestingly, this model reduces, in the q = 1 limit, to the one proposed by Essler et al which has a larger sl(2|2) symmetry.
Abstract:
An algorithm for explicit integration of structural dynamics problems with multiple time steps is proposed that averages accelerations to obtain subcycle states at a nodal interface between regions integrated with different time steps. With integer time step ratios, the resulting subcycle updates at the interface sum to give the same effect as a central difference update over a major cycle. The algorithm is shown to have good accuracy, and stability properties in linear elastic analysis similar to those of constant velocity subcycling algorithms. The implementation of a generalised form of the algorithm with non-integer time step ratios is presented. (C) 1997 by John Wiley & Sons, Ltd.
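The central difference update that the subcycling scheme reproduces over a major cycle can be sketched for a single degree of freedom. This is the textbook explicit scheme, not the paper's multi-timestep algorithm; the nodal-interface averaging itself is not reproduced.

```python
import math

def central_difference(m, k, x0, v0, dt, n_steps):
    """Explicit central difference integration of m*x'' + k*x = 0.

    Stable only for dt < 2/omega with omega = sqrt(k/m), which is why
    stiff regions of a mesh force a small timestep and motivate subcycling.
    """
    # Fictitious previous step x_{-1} from a Taylor expansion at t = 0
    a0 = -k * x0 / m
    x_prev = x0 - dt * v0 + 0.5 * dt**2 * a0
    x = x0
    xs = [x0]
    for _ in range(n_steps):
        a = -k * x / m
        x_next = 2.0 * x - x_prev + dt**2 * a   # central difference update
        x_prev, x = x, x_next
        xs.append(x)
    return xs

# Undamped oscillator with natural period T = 1: after one full period
# the displacement should return close to its initial value of 1.0.
m, k = 1.0, (2.0 * math.pi) ** 2
xs = central_difference(m, k, x0=1.0, v0=0.0, dt=0.01, n_steps=100)
print(f"x(T) = {xs[-1]:.4f}")
```

In a subcycled mesh, a region integrated with timestep dt/n would take n of these updates per major cycle, and the paper's contribution is how to form consistent interface states between the two regions.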
Abstract:
The popular Newmark algorithm, used for implicit direct integration of structural dynamics, is extended by means of a nodal partition to permit use of different timesteps in different regions of a structural model. The algorithm developed has as a special case an explicit-explicit subcycling algorithm previously reported by Belytschko, Yen and Mullen. That algorithm has been shown, in the absence of damping or other energy dissipation, to exhibit instability over narrow timestep ranges that become narrower as the number of degrees of freedom increases, making them unlikely to be encountered in practice. The present algorithm avoids such instabilities in the case of a one-to-two timestep ratio (two subcycles), achieving unconditional stability in an exponential sense for a linear problem. However, with three or more subcycles, the trapezoidal rule exhibits stability that becomes conditional, falling towards that of the central difference method as the number of subcycles increases. Instabilities over narrow timestep ranges, which become narrower as the model size increases, also appear with three or more subcycles. However, by moving the partition between timesteps one row of elements into the region suitable for integration with the larger timestep, the unstable timestep ranges become extremely narrow, even in simple systems with a few degrees of freedom. As well, accuracy is improved. Use of a version of the Newmark algorithm that dissipates high frequencies minimises or eliminates these narrow bands of instability. Viscous damping is also shown to remove these instabilities, at the expense of having more effect on the low frequency response.
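The single-timestep Newmark update that the partitioned algorithm extends can be sketched for a linear single-degree-of-freedom system; beta = 1/4, gamma = 1/2 gives the trapezoidal rule discussed above. The nodal partitioning and subcycling are not reproduced here, only the baseline integrator.

```python
import math

def newmark_sdof(m, c, k, x0, v0, dt, n_steps,
                 beta=0.25, gamma=0.5, f=lambda t: 0.0):
    """Implicit Newmark integration of m*x'' + c*x' + k*x = f(t), one DOF.

    beta=1/4, gamma=1/2 is the unconditionally stable trapezoidal rule;
    gamma > 1/2 introduces the high-frequency dissipation mentioned above.
    """
    a = (f(0.0) - c * v0 - k * x0) / m    # consistent initial acceleration
    x, v = x0, v0
    # Effective stiffness: constant for a linear system, so formed once
    k_eff = k + gamma * c / (beta * dt) + m / (beta * dt**2)
    xs = [x0]
    for n in range(1, n_steps + 1):
        t = n * dt
        rhs = (f(t)
               + m * (x / (beta * dt**2) + v / (beta * dt)
                      + (0.5 / beta - 1.0) * a)
               + c * (gamma * x / (beta * dt) + (gamma / beta - 1.0) * v
                      + dt * (0.5 * gamma / beta - 1.0) * a))
        x_new = rhs / k_eff               # implicit solve (scalar here)
        a_new = ((x_new - x) / (beta * dt**2) - v / (beta * dt)
                 - (0.5 / beta - 1.0) * a)
        v_new = v + dt * ((1.0 - gamma) * a + gamma * a_new)
        x, v, a = x_new, v_new, a_new
        xs.append(x)
    return xs

# Trapezoidal-rule Newmark on an undamped oscillator with period T = 1:
# one period of integration returns the displacement close to 1.0.
xs = newmark_sdof(m=1.0, c=0.0, k=(2.0 * math.pi) ** 2,
                  x0=1.0, v0=0.0, dt=0.01, n_steps=100)
print(f"x(T) = {xs[-1]:.4f}")
```

In the multi-DOF setting of the paper, the scalar solve becomes a linear system, and the nodal partition assigns different dt values to different blocks of that system, which is where the stability questions analysed above arise.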