977 results for Induction Learning


Relevance: 30.00%

Publisher:

Abstract:

Model trees are a particular case of decision trees employed to solve regression problems. They have the advantage of presenting an interpretable output, helping the end-user gain confidence in the prediction and providing a basis for new insight about the data, confirming or rejecting previously formed hypotheses. Moreover, model trees present an acceptable level of predictive performance in comparison to most techniques used for solving regression problems. Since generating the optimal model tree is an NP-complete problem, traditional model tree induction algorithms use a greedy top-down divide-and-conquer strategy, which may not converge to the globally optimal solution. In this paper, we propose a novel algorithm based on the evolutionary algorithms paradigm as an alternative heuristic for generating model trees, in order to improve convergence to globally near-optimal solutions. We call our new approach evolutionary model tree induction (E-Motion). We test its predictive performance on public UCI data sets and compare the results to traditional greedy regression/model tree induction algorithms, as well as to other evolutionary approaches. Results show that our method presents a good trade-off between predictive performance and model comprehensibility, which may be crucial in many machine learning applications. (C) 2010 Elsevier Inc. All rights reserved.
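The greedy-versus-evolutionary contrast can be made concrete with a toy example. The sketch below is an illustration only, not the paper's E-Motion algorithm: it evolves the split threshold of a one-split model tree (a linear model in each leaf) with a simple elitist (mu+lambda) loop, selecting on squared error. All names and parameters here are invented for the illustration.

```python
import random

def leaf_fit(xs, ys):
    # least-squares line y = a*x + b for one leaf (constant if degenerate)
    n = len(xs)
    if n == 0:
        return 0.0, 0.0
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx if sxx else 0.0
    return a, my - a * mx

def tree_error(t, xs, ys):
    # sum of squared residuals of a one-split model tree with split "x <= t"
    err = 0.0
    for side in ([(x, y) for x, y in zip(xs, ys) if x <= t],
                 [(x, y) for x, y in zip(xs, ys) if x > t]):
        a, b = leaf_fit([x for x, _ in side], [y for _, y in side])
        err += sum((y - (a * x + b)) ** 2 for x, y in side)
    return err

def evolve_split(xs, ys, gens=80, pop=12, seed=0):
    # elitist (mu+lambda) evolution of the split threshold
    rng = random.Random(seed)
    lo, hi = min(xs), max(xs)
    parents = [rng.uniform(lo, hi) for _ in range(pop)]
    for _ in range(gens):
        children = [t + rng.gauss(0, (hi - lo) * 0.1) for t in parents]
        parents = sorted(parents + children,
                         key=lambda t: tree_error(t, xs, ys))[:pop]
    return parents[0]
```

On piecewise-linear data with a break at x = 5, the evolved threshold lands in the zero-error region around the true breakpoint; a full evolutionary model tree inducer would of course evolve entire trees, not a single threshold.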


Immediate early genes (IEGs) are presumed to be activated in response to stress, novelty, and learning. Evidence supports the involvement of prefrontal and hippocampal areas in stress and learning, but also in the detection of novel events. This study examined whether a previous experience with shocks changes the pattern of Fos and Egr-1 expression in the medial prefrontal cortex (mPFC), the hippocampal cornu ammonis 1 (CA1), and the dentate gyrus (DG) of adult male Wistar rats that learned to escape in an operant aversive test. Subjects previously exposed to inescapable footshocks that learned to escape from shocks were assigned to the treated group (EXP). Subjects from the novelty group (NOV) rested undisturbed during treatment and also learned to escape in the test. The nonshock group (NSH) rested undisturbed in both sessions. Standard immunohistochemistry procedures were used to detect the proteins in brain sections. The results show that a previous experience with shocks changed the pattern of IEG expression, demonstrating c-fos and egr-1 induction to be experience-dependent events. Compared with NSH and EXP, enhanced Fos expression was detected in the mPFC and CA1 subfield of the NOV group, which also exhibited increased Egr-1 expression in the mPFC and DG in comparison to NSH. No differences were found in the DG for Fos, or in CA1 for Egr-1. Novelty, and not the operant aversive escape learning, seems to have generated the IEG induction. The results suggest novel stimuli as a possible confounding factor in studies of Fos and/or Egr-1 expression under aversive conditions.


This thesis concerns artificially intelligent natural language processing systems that are capable of learning the properties of lexical items (properties like verbal valency or inflectional class membership) autonomously while fulfilling the tasks for which they were deployed in the first place. Many of these tasks require a deep analysis of language input, which can be characterized as a mapping of utterances in a given input C to a set S of linguistically motivated structures, with the help of linguistic information encoded in a grammar G and a lexicon L: G + L + C → S (1) The idea underlying intelligent lexical acquisition systems is to modify this schematic formula in such a way that the system is able to exploit the information encoded in S to create a new, improved version of the lexicon: G + L + S → L' (2) Moreover, the thesis claims that a system can only be considered intelligent if it does not just make maximum use of the learning opportunities in C, but is also able to revise falsely acquired lexical knowledge. Hence, one of the central elements of this work is the formulation of a set of criteria for intelligent lexical acquisition systems, subsumed under one paradigm: the Learn-Alpha design rule. The thesis describes the design and quality of a prototype for such a system, whose acquisition components have been developed from scratch and built on top of one of the state-of-the-art Head-driven Phrase Structure Grammar (HPSG) processing systems. The quality of this prototype is investigated in a series of experiments in which the system is fed with extracts of a large English corpus. While the idea of using machine-readable language input to automatically acquire lexical knowledge is not new, we are not aware of a system that fulfills Learn-Alpha and is able to deal with large corpora.
To illustrate four major challenges in constructing such a system: a) the high number of possible structural descriptions caused by highly underspecified lexical entries demands a parser with a very effective ambiguity management system; b) the automatic construction of concise lexical entries out of a bulk of observed lexical facts requires a special technique of data alignment; c) the reliability of these entries depends on the system's decision on whether it has seen 'enough' input; and d) general properties of language might render some lexical features indeterminable if the system tries to acquire them with too high a precision. The cornerstone of this dissertation is the motivation and development of a general theory of automatic lexical acquisition that is applicable to every language and independent of any particular theory of grammar or lexicon. This work is divided into five chapters. The introductory chapter first contrasts three different and mutually incompatible approaches to (artificial) lexical acquisition: cue-based queries, head-lexicalized probabilistic context-free grammars, and learning by unification. Then the postulation of the Learn-Alpha design rule is presented. The second chapter outlines the theory that underlies Learn-Alpha and exposes all the related notions and concepts required for a proper understanding of artificial lexical acquisition. Chapter 3 develops the prototyped acquisition method, called ANALYZE-LEARN-REDUCE, a framework which implements Learn-Alpha. The fourth chapter presents the design and results of a bootstrapping experiment conducted on this prototype: lexeme detection, learning of verbal valency, categorization into nominal count/mass classes, and selection of prepositions and sentential complements, among others. The thesis concludes with a review of the conclusions and motivation for further improvements, as well as proposals for future research on the automatic induction of lexical features.


We present a model for plasticity induction in reinforcement learning which is based on a cascade of synaptic memory traces. In the cascade of these so-called eligibility traces, presynaptic input is first correlated with postsynaptic events, next with the behavioral decisions, and finally with the external reinforcement. A population of leaky integrate-and-fire neurons endowed with this plasticity scheme is studied by simulation on different tasks. For operant conditioning with delayed reinforcement, learning succeeds even when the delay is so large that the delivered reward reflects the appropriateness not of the immediately preceding response, but of a decision made earlier on in the stimulus-decision sequence. The proposed model therefore does not rely on temporal contiguity between decision and pertinent reward, and thus provides a viable means of addressing the temporal credit assignment problem. In the same task, learning speeds up with increasing population size, showing that the plasticity cascade simultaneously addresses the spatial problem of assigning credit to the different population neurons. Simulations on other tasks, such as sequential decision making, serve to highlight the robustness of the proposed scheme and, further, contrast its performance to that of temporal difference based approaches to reinforcement learning.


In learning from trial and error, animals need to relate behavioral decisions to environmental reinforcement even though it may be difficult to assign credit to a particular decision when outcomes are uncertain or subject to delays. When considering the biophysical basis of learning, the credit-assignment problem is compounded because the behavioral decisions themselves result from the spatio-temporal aggregation of many synaptic releases. We present a model of plasticity induction for reinforcement learning in a population of leaky integrate-and-fire neurons which is based on a cascade of synaptic memory traces. Each synaptic cascade correlates presynaptic input first with postsynaptic events, next with the behavioral decisions, and finally with external reinforcement. For operant conditioning, learning succeeds even when reinforcement is delivered with a delay so large that temporal contiguity between decision and pertinent reward is lost due to intervening decisions which are themselves subject to delayed reinforcement. This shows that the model provides a viable mechanism for temporal credit assignment. Further, learning speeds up with increasing population size, so the plasticity cascade simultaneously addresses the spatial problem of assigning credit to synapses in different population neurons. Simulations on other tasks, such as sequential decision making, serve to contrast the performance of the proposed scheme with that of temporal difference-based learning. We argue that, due to their comparative robustness, synaptic plasticity cascades are attractive basic models of reinforcement learning in the brain.
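The cascade idea can be sketched in a few lines. The toy below uses hypothetical parameters and a two-action readout rather than the paper's spiking population: a fast trace `e1` records which action was taken, a slower decision-gated trace `e2` integrates `e1` (the cascade), and `e2` is finally correlated with a reward that arrives only after a delay, bridging the gap between decision and reinforcement.

```python
import random

def run(good=0, steps=4000, delay=5, eta=0.05, tau1=0.5, tau2=0.9, seed=2):
    # good: which of the two actions is rewarded (with a delay)
    rng = random.Random(seed)
    w = [0.0, 0.0]          # "synaptic weights", one per action
    e1 = [0.0, 0.0]         # stage 1: fast pre/post coincidence trace
    e2 = [0.0, 0.0]         # stage 2: slower decision-gated trace
    rewards = [0.0] * (steps + delay)
    for t in range(steps):
        # epsilon-greedy decision read out from the weights
        a = rng.randrange(2) if rng.random() < 0.1 else (0 if w[0] >= w[1] else 1)
        e1 = [tau1 * x for x in e1]
        e1[a] += 1.0
        e2 = [tau2 * x + (1 - tau2) * y for x, y in zip(e2, e1)]  # cascade
        rewards[t + delay] = 1.0 if a == good else -1.0  # delayed reinforcement
        # final stage: correlate the slow trace with the reward arriving now
        w = [wi + eta * rewards[t] * ei for wi, ei in zip(w, e2)]
    return w
```

Because `e2` decays slowly, the reward delivered `delay` steps late still overlaps the trace of the decision that earned it, so the correct action's weight wins regardless of which action is the good one.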


We present a model of spike-driven synaptic plasticity inspired by experimental observations and motivated by the desire to build an electronic hardware device that can learn to classify complex stimuli in a semisupervised fashion. During training, patterns of activity are sequentially imposed on the input neurons, and an additional instructor signal drives the output neurons toward the desired activity. The network is made of integrate-and-fire neurons with constant leak and a floor. The synapses are bistable, and they are modified by the arrival of presynaptic spikes. The sign of the change is determined by both the depolarization and the state of a variable that integrates the postsynaptic action potentials. Following the training phase, the instructor signal is removed, and the output neurons are driven purely by the activity of the input neurons weighted by the plastic synapses. In the absence of stimulation, the synapses preserve their internal state indefinitely. Memories are also very robust to the disruptive action of spontaneous activity. A network of 2000 input neurons is shown to classify correctly a large number (thousands) of highly overlapping patterns (300 classes of preprocessed LaTeX characters, 30 patterns per class, and a subset of the NIST characters data set) and to generalize with performance that is better than or comparable to that of artificial neural networks. Finally, we show that the synaptic dynamics are compatible with many of the experimental observations on the induction of long-term modifications (spike-timing-dependent plasticity and its dependence on both the postsynaptic depolarization and the frequency of pre- and postsynaptic neurons).
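The bistable-synapse mechanism can be sketched as a one-variable update rule (illustrative constants, not the paper's hardware parameters): on a presynaptic spike, the postsynaptic depolarization picks the sign of the jump in an internal variable X, and between spikes X drifts toward the nearer of two stable states (0 or 1), which is why the state persists indefinitely without stimulation.

```python
def step(X, pre_spike=False, depol=0.0, theta=0.5, a=0.3, b=0.3, drift=0.05):
    # On a presynaptic spike, depolarization above theta potentiates (+a),
    # below theta depresses (-b); otherwise X drifts toward 0 or 1.
    if pre_spike:
        X += a if depol > theta else -b
    else:
        X += drift if X > 0.5 else -drift
    return min(1.0, max(0.0, X))
```

A synapse left below the barrier decays to the low state and stays there; a pair of potentiating spikes pushes it over the barrier, after which the drift alone latches it at the high state.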


Withdrawal reflexes of the mollusk Aplysia exhibit sensitization, a simple form of long-term memory (LTM). Sensitization is due, in part, to long-term facilitation (LTF) of sensorimotor neuron synapses. LTF is induced by the modulatory actions of serotonin (5-HT). Pettigrew et al. developed a computational model of the nonlinear intracellular signaling and gene network that underlies the induction of 5-HT-induced LTF. The model simulated empirical observations that repeated applications of 5-HT induce persistent activation of protein kinase A (PKA) and that this persistent activation requires a suprathreshold exposure to 5-HT. This study extends the analysis of the Pettigrew model by applying bifurcation analysis, singularity theory, and numerical simulation. Using singularity theory, classification diagrams of parameter space were constructed, identifying regions with qualitatively different steady-state behaviors. The graphical representation of these regions illustrates their robustness to changes in model parameters. Because persistent PKA activity correlates with Aplysia LTM, the analysis focuses on a positive feedback loop in the model that tends to maintain PKA activity. In this loop, PKA phosphorylates a transcription factor (TF-1), thereby increasing the expression of a ubiquitin hydrolase (Ap-Uch). Ap-Uch then acts to increase PKA activity, closing the loop. This positive feedback loop manifests multiple, coexisting steady states, or multiplicity, which provides a mechanism for a bistable switch in PKA activity. After the removal of 5-HT, the PKA activity either returns to its basal level (reversible switch) or remains at a high level (irreversible switch). Such an irreversible switch might be a mechanism that contributes to the persistence of LTM. The classification diagrams also identify parameters and processes that might be manipulated, perhaps pharmacologically, to enhance the induction of memory. Rational drug design, to affect complex processes such as memory formation, can benefit from this type of analysis.
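A minimal caricature of such a bistable switch (toy equations and constants, not the Pettigrew model itself) replaces the PKA → TF-1 → Ap-Uch → PKA loop with a single Hill-type self-activation term. A suprathreshold transient pulse, standing in for repeated 5-HT exposure, latches the activity at the high steady state long after the stimulus ends; a subthreshold pulse relaxes back to baseline.

```python
def simulate(pulse, t_pulse=20.0, t_end=200.0, dt=0.01):
    # P: normalized "PKA activity"; the Hill term stands in for the
    # positive feedback loop; pulse is a transient external drive.
    P = 0.0
    k_fb, K, n, k_deg = 1.0, 0.5, 4, 1.0   # assumed toy parameters
    steps = int(t_end / dt)
    for i in range(steps):
        stim = pulse if i * dt < t_pulse else 0.0
        P += dt * (stim + k_fb * P**n / (K**n + P**n) - k_deg * P)
    return P
```

With these constants the system has a low stable state near 0 and a high stable state near 0.92, separated by an unstable threshold: the value of P when the pulse ends determines which basin it falls into, which is the bistable (and, for the high branch, effectively irreversible) switch described above.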


Neuropathic pain is a debilitating neurological disorder that may appear after peripheral nerve trauma and is characterized by persistent, intractable pain. The well-studied phenomenon of long-term hyperexcitability (LTH), in which sensory somata become hyperexcitable following peripheral nerve injury, may be important for both chronic pain and long-lasting memory formation, since similar cellular alterations take place after both injury and learning. Though axons have previously been considered simple conducting cables, spontaneous afferent signals develop from some neuromas that form at severed nerve tips, indicating that intrinsic changes in sensory axonal excitability may contribute to this intractable pain. Here we show that nerve transection, exposure to serotonin, and transient depolarization induce long-lasting sensory axonal hyperexcitability that is localized to the treated nerve segment and requires local translation of new proteins. Long-lasting functional plasticity may be a general property of axons, since both injured and transiently depolarized motor axons display LTH as well. Axonal hyperexcitability may represent an adaptive mechanism to overcome conduction failure after peripheral injury, but it also displays key features shared with cellular analogues of memory, including site-specific changes in neuronal function, dependence on transient, focal depolarization for induction, and a requirement for synthesis of new proteins for expression of long-lasting effects. The finding of axonal hyperexcitability after nerve injury sheds new light on the clinical problem of chronic neuropathic pain and provides further support for the hypothesis that mechanisms of long-term memory storage evolved from primitive adaptive responses to injury.


The present work examines the role of cAMP in the induction of the type of long-term morphological changes that have been shown to be correlated with long-term sensitization in Aplysia. To examine this issue, cAMP was injected into individual tail sensory neurons in the pleural ganglion to mimic, at the single-cell level, the effects of behavioral training. After a 22 hr incubation period, the same cells were filled with horseradish peroxidase, and 2 hours later the tissue was fixed and processed. Morphological analysis revealed that cAMP induced an increase in two morphological features of the neurons, varicosities and branch points. These structural alterations, which are similar to those seen in siphon sensory neurons of the abdominal ganglion following long-term sensitization training of the siphon-gill withdrawal reflex, could subserve the altered behavioral response of the animal. These results expose another role played by cAMP in the induction of learning: the initiation of a structural substrate which, in concert with other correlates, underlies learning. cAMP was also injected into sensory neurons in the presence of the reversible protein synthesis inhibitor anisomycin. The presence of anisomycin during and immediately following the nucleotide injection completely blocked the structural remodeling. These results indicate that the induction of morphological changes by cAMP is a process dependent on protein synthesis. To further examine the temporal requirement for protein synthesis in the induction of these changes, the time of anisomycin exposure was varied. The results indicate that the cellular processes triggered by cAMP are sensitive to the inhibition of protein synthesis for at least 7 hours after the nucleotide injection. This is a longer period of sensitivity than that for the induction of another correlate of long-term sensitization, facilitation of the sensory-to-motor-neuron synaptic connection. Thus, these findings demonstrate that the period of sensitivity to protein synthesis inhibition is not identical for all correlates of learning. In addition, since the induction of the morphological changes can be blocked by anisomycin pulses administered at different times during and following the cAMP injection, this suggests that cAMP triggers a cascade of protein synthesis, with successive rounds of synthesis dependent on successful completion of preceding rounds. Inhibition at any time during this cascade can block the entire process and so prevent the development of the structural changes. The extent to which cAMP can mimic the structural remodeling induced by long-term training was also examined. Animals were subjected to unilateral sensitization training, and the morphology of the sensory neurons was examined twenty-four hours later. Both cAMP injection and long-term training produced a twofold increase in varicosities and approximately a fifty percent increase in the number of branch points in the sensory neuron arborization within the pleural ganglion.


Pragmatism is the leading motivation for regularization. We can understand regularization as a modification of the maximum-likelihood estimator so that a reasonable answer can be given in an unstable or ill-posed situation. To mention some typical examples, this happens when fitting parametric or non-parametric models with more parameters than data, or when estimating large covariance matrices. Regularization is also commonly used to improve the bias-variance tradeoff of an estimation. The definition of regularization is therefore quite general, and, although the introduction of a penalty is probably the most popular type, it is just one of many forms of regularization. In this dissertation, we focus on the applications of regularization for obtaining sparse or parsimonious representations, where only a subset of the inputs is used. A particular form of regularization, L1-regularization, plays a key role in reaching sparsity. Most of the contributions presented here revolve around L1-regularization, although other forms of regularization are explored (also pursuing sparsity in some sense). In addition to presenting a compact review of L1-regularization and its applications in statistics and machine learning, we devise methodology for regression, supervised classification, and structure induction of graphical models. Within the regression paradigm, we focus on kernel smoothing, proposing techniques for kernel design that are suitable for high-dimensional settings and sparse regression functions. We also present an application of regularized regression techniques for modeling the response of biological neurons. The supervised classification advances deal, on the one hand, with the application of regularization for obtaining a naïve Bayes classifier and, on the other hand, with a novel algorithm for brain-computer interface design that uses group regularization in an efficient manner.
Finally, we present a heuristic for inducing the structure of Gaussian Bayesian networks using L1-regularization as a filter.
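The sparsity-inducing effect of L1-regularization can be seen directly in the lasso coordinate-descent update, whose soft-thresholding step sets small coefficients exactly to zero. The sketch below is a generic textbook formulation, not the thesis's own methodology, and assumes centered features:

```python
def soft_threshold(z, lam):
    # the shrinkage operator induced by the L1 penalty
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

def lasso_cd(X, y, lam, iters=100):
    # cyclic coordinate descent on (1/2n)||y - X beta||^2 + lam*||beta||_1
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # partial residual excluding feature j
            r = [y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            norm = sum(X[i][j] ** 2 for i in range(n)) / n
            beta[j] = soft_threshold(rho, lam) / norm
    return beta
```

On data where only the first feature matters, the relevant coefficient is merely shrunk by lam while the irrelevant one is thresholded to exactly zero, which is the parsimonious representation the dissertation pursues.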


Calretinin (Cr) is a Ca2+-binding protein present in various populations of neurons distributed in the central and peripheral nervous systems. We have generated Cr-deficient (Cr−/−) mice by gene targeting and have investigated the associated phenotype. Cr−/− mice were viable, and a large number of morphological, biochemical, and behavioral parameters were found unaffected. In the normal mouse hippocampus, Cr is expressed in a widely distributed subset of GABAergic interneurons and in hilar mossy cells of the dentate gyrus. Because both types of cells are part of local pathways innervating dentate granule cells and/or pyramidal neurons, we have explored in Cr−/− mice the synaptic transmission between the perforant pathway and granule cells and at the Schaffer commissural input to CA1 pyramidal neurons. Cr−/− mice showed no alteration in basal synaptic transmission, but long-term potentiation (LTP) was impaired in the dentate gyrus. Normal LTP could be restored in the presence of the GABAA receptor antagonist bicuculline, suggesting that in the Cr−/− dentate gyrus an excess of γ-aminobutyric acid (GABA) release interferes with LTP induction. Synaptic transmission and LTP were normal in the CA1 area, which contains only few Cr-positive GABAergic interneurons. Cr−/− mice performed normally in a spatial memory task. These results suggest that expression of Cr contributes to the control of synaptic plasticity in the mouse dentate gyrus by indirectly regulating the activity of GABAergic interneurons, and that Cr−/− mice represent a useful tool for understanding the role of dentate LTP in learning and memory.


Auditory cortical receptive field plasticity produced during behavioral learning may be considered to constitute "physiological memory" because it has major characteristics of behavioral memory: associativity, specificity, rapid acquisition, and long-term retention. To investigate basal forebrain mechanisms in receptive field plasticity, we paired a tone with stimulation of the nucleus basalis, the main subcortical source of cortical acetylcholine, in the adult guinea pig. Nucleus basalis stimulation produced electroencephalogram desynchronization that was blocked by systemic and cortical atropine. Paired tone/nucleus basalis stimulation, but not unpaired stimulation, induced receptive field plasticity similar to that produced by behavioral learning. Thus paired activation of the nucleus basalis is sufficient to induce receptive field plasticity, possibly via cholinergic actions in the cortex.


Long-term potentiation (LTP), an increase in synaptic efficacy believed to underlie learning and memory mechanisms, has been proposed to involve structural modifications of synapses. Precise identification of the morphological changes associated with LTP has, however, been hindered by the difficulty of distinguishing potentiated or activated synapses from nonstimulated ones. Here we used a cytochemical method that allowed detection in CA1 hippocampus, at the electron microscopy level, of a stimulation-specific, D-AP5-sensitive accumulation of calcium in postsynaptic spines and presynaptic terminals following application of high-frequency trains. Morphometric analyses carried out 30-40 min after LTP induction revealed dramatic ultrastructural differences between labeled and nonlabeled synapses. The majority of labeled synapses (60%) exhibited perforated postsynaptic densities, whereas this proportion was only 20% in nonlabeled synaptic contacts. Labeled synaptic profiles were also characterized by a larger apposition zone between pre- and postsynaptic structures, longer postsynaptic densities, and enlarged spine profiles. These results add strong support to the idea that ultrastructural modifications, and specifically an increase in perforated synapses, are associated with LTP induction in field CA1 of the hippocampus, and they suggest that a majority of activated contacts may exhibit such changes.


This paper describes how the statistical technique of cluster analysis and the machine learning technique of rule induction can be combined to explore a database. The ways in which such an approach alleviates the problems associated with other techniques for data analysis are discussed. We report the results of experiments carried out on a database from the medical diagnosis domain. Finally, we describe the future developments we plan to carry out to build on our current work.
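The two-stage combination can be illustrated with a minimal sketch (invented details, not the paper's actual method): cluster the records with k-means, then induce a simple single-attribute rule that characterizes each discovered cluster in terms of the original attributes.

```python
def kmeans(points, k=2, iters=20):
    # plain k-means with deterministic init (first k points as centroids)
    cent = [list(p) for p in points[:k]]
    assign = [0] * len(points)
    for _ in range(iters):
        assign = [min(range(k),
                      key=lambda c: sum((a - b) ** 2 for a, b in zip(p, cent[c])))
                  for p in points]
        for c in range(k):
            members = [p for p, g in zip(points, assign) if g == c]
            if members:
                cent[c] = [sum(v) / len(members) for v in zip(*members)]
    return assign

def induce_rule(points, labels, cluster):
    # best rule "attribute <= threshold" (or its negation) for membership
    best = (0.0, None, None, None)  # (accuracy, attr, threshold, direction)
    for attr in range(len(points[0])):
        for t in sorted({p[attr] for p in points}):
            for le in (True, False):
                pred = [(p[attr] <= t) == le for p in points]
                acc = sum(pr == (l == cluster)
                          for pr, l in zip(pred, labels)) / len(points)
                if acc > best[0]:
                    best = (acc, attr, t, le)
    return best
```

On two well-separated blobs, k-means recovers the grouping and the induced rule describes each cluster with a single interpretable threshold, which is the exploratory payoff of combining the two techniques.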


This thesis describes a novel connectionist machine utilizing induction by a Hilbert hypercube representation. This representation offers a number of distinct advantages, which are described. We construct a theoretical and practical learning machine which lies in an area of overlap between three disciplines - neural nets, machine learning and knowledge acquisition - hence it is referred to as a "coalesced" machine. To this unifying aspect is added the various advantages of its orthogonal lattice structure as against less structured nets. We discuss the case for such a fundamental and low-level empirical learning tool, and the assumptions behind the machine are clearly outlined. Our theory of an orthogonal lattice structure, the Hilbert hypercube of an n-dimensional space using a complemented distributed lattice as a basis for supervised learning, is derived from first principles on clearly laid-out scientific grounds. The resulting "subhypercube theory" was implemented in a development machine, which was then used to test the theoretical predictions, again under strict scientific guidelines. The scope, advantages and limitations of this machine were tested in a series of experiments. Novel and seminal properties of the machine include: the "metrical", deterministic and global nature of its search; complete convergence, invariably producing minimum polynomial solutions for both disjuncts and conjuncts even with moderate levels of noise present; a learning engine which is mathematically analysable in depth based upon the "complexity range" of the function concerned; a strong bias towards the simplest possible globally (rather than locally) derived "balanced" explanation of the data; the ability to cope with variables in the network; and new ways of reducing the exponential explosion. Performance issues were addressed, and comparative studies with other learning machines indicate that our novel approach has definite value and should be further researched.