992 results for Neural Dynamics


Relevance:

30.00%

Publisher:

Abstract:

The temporal dynamics of the neural activity that implements the dimensions valence and arousal during processing of emotional stimuli were studied in two multi-channel ERP experiments that used visually presented emotional words (experiment 1) and emotional pictures (experiment 2) as stimulus material. Thirty-two healthy subjects participated (mean age 26.8 ± 6.4 years, 24 women). The stimuli in both experiments were selected on the basis of verbal reports such that the temporal dynamics of one dimension could be mapped while controlling for the other. Words (pictures) were centrally presented for 450 (600) ms with interstimulus intervals of 1,550 (1,400) ms. ERP microstate analysis of the entire epochs of stimulus presentation parsed the data into sequential steps of information processing. The results revealed that in several microstates of both experiments, processing of pleasant and unpleasant valence (experiment 1, microstate #3: 118-162 ms, #6: 218-238 ms, #7: 238-266 ms, #8: 266-294 ms; experiment 2, microstate #5: 142-178 ms, #6: 178-226 ms, #7: 226-246 ms, #9: 262-302 ms, #10: 302-330 ms) as well as of low and high arousal (experiment 1, microstate #8: 266-294 ms, #9: 294-346 ms; experiment 2, microstate #10: 302-330 ms, #15: 562-600 ms) involved different neural assemblies. The results also revealed that in both experiments, information about valence was extracted before information about arousal. The last microstate of valence extraction was identical to the first microstate of arousal extraction.
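
As a rough illustration of the kind of segmentation a microstate analysis performs (not the authors' pipeline; the channel count, peak selection, number of map classes, and the use of scikit-learn's KMeans are assumptions made only to keep the sketch self-contained), one can cluster scalp topographies at global-field-power peaks and read off contiguous time segments dominated by one map:

```python
import numpy as np
from scipy.signal import argrelmax
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
erp = rng.standard_normal((32, 500))            # 32 channels x 500 samples (synthetic data)

gfp = erp.std(axis=0)                           # global field power at each time point
peaks = argrelmax(gfp)[0]                       # GFP maxima: moments of most stable topography

# Cluster the normalized topographies at GFP peaks into a small set of template maps.
topo = erp[:, peaks].T
topo /= np.linalg.norm(topo, axis=1, keepdims=True)
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(topo)

# Label every sample with its best-matching template (polarity-invariant similarity);
# runs of identical labels are candidate microstates, with boundaries between them.
templates = km.cluster_centers_
similarity = np.abs(templates @ (erp / np.linalg.norm(erp, axis=0)))
labels = similarity.argmax(axis=0)
boundaries = np.flatnonzero(np.diff(labels)) + 1
print("first segment boundaries (samples):", boundaries[:10])
```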

Relevance:

30.00%

Publisher:

Abstract:

During the generalization of epileptic seizures, pathological activity in one brain area recruits distant brain structures into joint synchronous discharges. However, it remains unknown whether specific changes in local circuit activity are related to the aberrant recruitment of anatomically distant structures into epileptiform discharges. Further, it is not known whether aberrant areas recruit or entrain healthy ones into pathological activity. Here we study the dynamics of local circuit activity during the spread of epileptiform discharges in the zero-magnesium in vitro model of epilepsy. We employ high-speed multi-photon imaging in combination with dual whole-cell recordings in acute thalamocortical (TC) slices of the juvenile mouse to characterize the generalization of epileptic activity between neocortex and thalamus. We find that, although both structures are exposed to zero-magnesium, the initial onset of focal epileptiform discharge occurs in cortex. This suggests that local recurrent connectivity that is particularly prevalent in cortex is important for the initiation of seizure activity. Subsequent recruitment of thalamus into joint, generalized discharges is coincident with an increase in the coherence of local cortical circuit activity that itself does not depend on thalamus. Finally, the intensity of population discharges is positively correlated between both brain areas. This suggests that during and after seizure generalization not only the timing but also the amplitude of epileptiform discharges in thalamus is entrained by cortex. Together these results suggest a central role of neocortical activity for the onset and the structure of pathological recruitment of thalamus into joint synchronous epileptiform discharges.
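
A minimal sketch of the two population-level measures highlighted above, computed here on synthetic traces (the sampling rate, window length, frequency of interest, and the use of peak amplitude per window are assumptions, not values from the study): spectral coherence between a cortical and a thalamic signal, and the correlation of per-window discharge amplitudes.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(1)
fs = 1000.0                                     # Hz, assumed sampling rate
t = np.arange(0, 60, 1 / fs)

# Synthetic "cortex" and "thalamus" population traces sharing a common 3-Hz component.
common = np.sin(2 * np.pi * 3 * t)
cortex = common + 0.5 * rng.standard_normal(t.size)
thalamus = 0.8 * common + 0.5 * rng.standard_normal(t.size)

# Coherence between the two areas as a function of frequency.
f, coh = coherence(cortex, thalamus, fs=fs, nperseg=2048)
print("coherence near 3 Hz:", coh[np.argmin(np.abs(f - 3))])

# Correlation of discharge intensities: here, peak amplitudes within fixed 2-s windows.
win = int(2 * fs)
amp_ctx = cortex[: t.size // win * win].reshape(-1, win).max(axis=1)
amp_thl = thalamus[: t.size // win * win].reshape(-1, win).max(axis=1)
print("amplitude correlation:", np.corrcoef(amp_ctx, amp_thl)[0, 1])
```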

Relevance:

30.00%

Publisher:

Abstract:

A new method to study large-scale neural networks is presented in this paper. The basis is the use of Feynman-like diagrams. These diagrams allow the analysis of collective and cooperative phenomena with a methodology similar to that employed in the many-body problem. The proposed method is applied to a very simple structure composed of a string of neurons with interactions among them. It is shown that a new behavior appears at the end of the string, different from the initial dynamics of a single cell. When feedback is present, as in the case of the hippocampus, the situation becomes more complex, with a whole set of new frequencies different from the proper frequencies of the individual neurons. An application to an optical neural network is also reported.
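
Purely as an illustration of the structure analyzed, not of the diagrammatic formalism itself (the cell model, coupling strength, drive, and the form of the feedback loop below are assumptions), the sketch simulates a short string of leaky units coupled to their neighbours, with an optional feedback connection from the last unit back to the first, and compares the dynamics at the two ends of the chain:

```python
import numpy as np

def simulate_chain(n=10, steps=2000, dt=0.01, coupling=0.8, feedback=0.0):
    """Leaky units x_i driven at one end, each coupled to its predecessor."""
    x = np.zeros((steps, n))
    drive = np.sin(2 * np.pi * 1.0 * np.arange(steps) * dt)    # external input to unit 0
    for t in range(1, steps):
        inp = np.zeros(n)
        inp[0] = drive[t] + feedback * x[t - 1, -1]            # optional closed loop
        inp[1:] = coupling * x[t - 1, :-1]                     # nearest-neighbour interaction
        x[t] = x[t - 1] + dt * (-x[t - 1] + np.tanh(inp))      # leaky integration
    return x

x_open = simulate_chain(feedback=0.0)
x_loop = simulate_chain(feedback=1.5)
# Compare activity at the start vs. the end of the string, with and without feedback.
print(np.std(x_open[:, 0]), np.std(x_open[:, -1]), np.std(x_loop[:, -1]))
```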

Relevance:

30.00%

Publisher:

Abstract:

The cAMP response element-binding protein (CREB) is an activity-dependent transcription factor that is involved in neural plasticity. The kinetics of CREB phosphorylation have been suggested to be important for gene activation, with sustained phosphorylation being associated with downstream gene expression. If so, the duration of CREB phosphorylation might serve as an indicator for time-sensitive plastic changes in neurons. To screen for regions potentially involved in dopamine-mediated plasticity in the basal ganglia, we used organotypic slice cultures to study the patterns of dopamine- and calcium-mediated CREB phosphorylation in the major subdivisions of the striatum. Different durations of CREB phosphorylation were evoked in the dorsal and ventral striatum by activation of dopamine D1-class receptors. The same D1 stimulus elicited (i) transient phosphorylation (≤15 min) in the matrix of the dorsal striatum; (ii) sustained phosphorylation (≤2 hr) in limbic-related structures including striosomes, the nucleus accumbens, the fundus striati, and the bed nucleus of the stria terminalis; and (iii) prolonged phosphorylation (up to 4 hr or more) in cellular islands in the olfactory tubercle. Elevation of Ca2+ influx by stimulation of L-type Ca2+ channels, NMDA, or KCl induced strong CREB phosphorylation in the dorsal striatum but not in the olfactory tubercle. These findings differentiate the response of CREB to dopamine and calcium signals in different striatal regions and suggest that dopamine-mediated CREB phosphorylation is persistent in limbic-related regions of the neonatal basal ganglia. The downstream effects activated by persistent CREB phosphorylation may include time-sensitive neuroplasticity modulated by dopamine.

Relevance:

30.00%

Publisher:

Abstract:

The GTPase dynamin has been clearly implicated in clathrin-mediated endocytosis of synaptic vesicle membranes at the presynaptic nerve terminal. Here we describe a novel 52-kDa protein in rat brain that binds the proline-rich C terminus of dynamin. Syndapin I (synaptic, dynamin-associated protein I) is highly enriched in brain where it exists in a high molecular weight complex. Syndapin I can be involved in multiple protein–protein interactions via a src homology 3 (SH3) domain at the C terminus and two predicted coiled-coil stretches. Coprecipitation studies and blot overlay analyses revealed that syndapin I binds the brain-specific proteins dynamin I, synaptojanin, and synapsin I via an SH3 domain-specific interaction. Coimmunoprecipitation of dynamin I with antibodies recognizing syndapin I and colocalization of syndapin I with dynamin I at vesicular structures in primary neurons indicate that syndapin I associates with dynamin I in vivo and may play a role in synaptic vesicle endocytosis. Furthermore, syndapin I associates with the neural Wiskott-Aldrich syndrome protein, an actin-depolymerizing protein that regulates cytoskeletal rearrangement. These characteristics of syndapin I suggest a molecular link between cytoskeletal dynamics and synaptic vesicle recycling in the nerve terminal.

Relevance:

30.00%

Publisher:

Abstract:

This thesis contributes to research toward artificial intelligence using connectionist methods. Recurrent neural networks are an increasingly popular family of sequential models that are in principle capable of learning arbitrary algorithms. These models perform deep learning, a type of machine learning. Their generality and empirical success make them an interesting subject for research and a promising tool for building more general artificial intelligence. The first chapter of this thesis gives a brief overview of the background topics: artificial intelligence, machine learning, deep learning, and recurrent neural networks. The following three chapters cover these topics in increasingly specific detail. Finally, we present some contributions to recurrent neural networks. Chapter \ref{arxiv1} presents our work on regularizing recurrent neural networks. Regularization aims to improve a model's ability to generalize and plays a key role in the performance of several applications of recurrent neural networks, particularly in speech recognition. Our approach achieves state-of-the-art results on TIMIT, a standard benchmark for this task. Chapter \ref{cpgp} presents a second, still ongoing line of work that explores a new architecture for recurrent neural networks. Recurrent neural networks maintain a hidden state that represents their past observations. The idea of this work is to encode certain abstract dynamics in the hidden state, giving the network a natural way to represent coherent trends in the state of its environment. Our work builds on an existing model; we describe that model and our contributions, including a preliminary experiment.
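
A minimal sketch of the setting the regularization chapter is about: a plain tanh RNN whose training loss carries an extra penalty term. The particular penalty shown here (an L2 term on changes in hidden-state norm across time steps) is only one published option and is not claimed to be the thesis's exact formulation; all dimensions and values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_forward(x_seq, W_xh, W_hh, b_h):
    """Plain tanh RNN; returns the sequence of hidden states."""
    h = np.zeros(W_hh.shape[0])
    states = []
    for x in x_seq:
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)
        states.append(h)
    return np.stack(states)

def regularized_loss(states, targets, W_out, lam=1.0):
    preds = states @ W_out.T
    task_loss = np.mean((preds - targets) ** 2)
    # Illustrative regularizer: penalize changes in the hidden-state norm over time,
    # one way of stabilizing recurrent dynamics (not necessarily the thesis's method).
    norms = np.linalg.norm(states, axis=1)
    penalty = np.mean((norms[1:] - norms[:-1]) ** 2)
    return task_loss + lam * penalty

T, d_in, d_h, d_out = 50, 8, 32, 4
x_seq = rng.standard_normal((T, d_in))
targets = rng.standard_normal((T, d_out))
W_xh = 0.1 * rng.standard_normal((d_h, d_in))
W_hh = 0.1 * rng.standard_normal((d_h, d_h))
W_out = 0.1 * rng.standard_normal((d_out, d_h))
states = rnn_forward(x_seq, W_xh, W_hh, np.zeros(d_h))
print(regularized_loss(states, targets, W_out))
```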

Relevance:

30.00%

Publisher:

Abstract:

To examine the role of the effector dynamics of the wrist in the production of rhythmic motor activity, we estimated the phase shifts between the EMG and the task-related output for a rhythmic isometric torque production task and an oscillatory movement, and found a substantial difference (45-52°) between the two. For both tasks, the relation between EMG and task-related output (torque or displacement) was adequately reproduced with a physiologically motivated musculoskeletal model. The model simulations demonstrated the importance of the contribution of passive structures to the overall dynamics and provided an account for the observed phase shifts in the dynamic task. Additional simulations of the musculoskeletal model with added load suggested that particular changes in the phase relation between EMG and movement may follow largely from the intrinsic muscle dynamics, rather than being the result of adaptations in the neural control of joint stiffness. The implications of these results are discussed in relation to (models of) interlimb coordination in rhythmic tasks. © 2004 Elsevier B.V. All rights reserved.
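
A minimal sketch of how a phase shift between an EMG envelope and a task signal can be estimated at the movement frequency (the 2-Hz frequency, sampling rate, and the single-bin Fourier estimator are assumptions, not the study's methods):

```python
import numpy as np

fs = 1000.0                                     # Hz, assumed sampling rate
f0 = 2.0                                        # Hz, assumed movement frequency
t = np.arange(0, 10, 1 / fs)

emg_env = 1 + np.sin(2 * np.pi * f0 * t)                    # rectified, low-passed EMG (synthetic)
torque = np.sin(2 * np.pi * f0 * t - np.deg2rad(50))        # task output lagging by 50 degrees

def phase_at(sig, f):
    """Phase of the Fourier component closest to frequency f."""
    freqs = np.fft.rfftfreq(sig.size, 1 / fs)
    spec = np.fft.rfft(sig - sig.mean())
    return np.angle(spec[np.argmin(np.abs(freqs - f))])

shift_deg = np.rad2deg(phase_at(emg_env, f0) - phase_at(torque, f0))
print(f"EMG leads torque by {shift_deg % 360:.1f} degrees")
```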

Relevance:

30.00%

Publisher:

Abstract:

The learning properties of a universal approximator, a normalized committee machine with adjustable biases, are studied for on-line back-propagation learning. Within a statistical mechanics framework, numerical studies show that this model has features which do not exist in previously studied two-layer network models without adjustable biases, e.g., attractive suboptimal symmetric phases even for realizable cases and noiseless data.
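
A rough sketch of the model class referred to: a normalized (soft) committee machine with adjustable biases, trained by on-line back-propagation on one fresh example per step. The teacher network, dimensions, and learning rate below are assumptions made only so the sketch is self-contained and runnable.

```python
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(0)
N, K, eta = 100, 3, 0.1                         # input dim, hidden units, learning rate (assumed)

def committee(x, W, theta):
    """Normalized committee machine: mean of erf hidden units with adjustable biases."""
    return erf((W @ x + theta) / np.sqrt(2)).mean()

# Teacher network generating the targets; student learns on-line, one example per step.
B, theta_B = rng.standard_normal((K, N)) / np.sqrt(N), rng.standard_normal(K)
W, theta = rng.standard_normal((K, N)) / np.sqrt(N), np.zeros(K)

for step in range(20000):
    x = rng.standard_normal(N)
    err = committee(x, W, theta) - committee(x, B, theta_B)
    pre = W @ x + theta
    dg = np.sqrt(2 / np.pi) * np.exp(-pre ** 2 / 2) / K        # d(output)/d(pre-activation)
    W -= (eta / N) * np.outer(err * dg, x)                      # on-line back-propagation step
    theta -= eta * err * dg                                     # bias update

x = rng.standard_normal(N)
print("squared error on a fresh example:",
      (committee(x, W, theta) - committee(x, B, theta_B)) ** 2)
```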

Relevance:

30.00%

Publisher:

Abstract:

We complement recent advances in thermodynamic-limit analyses of mean on-line gradient descent learning dynamics in multi-layer networks by calculating the fluctuations exhibited by finite-dimensional systems. Fluctuations from the mean dynamics are largest at the onset of specialisation, as student hidden-unit weight vectors begin to imitate specific teacher vectors, and increase with the degree of symmetry of the initial conditions. In light of this, we include a term to stimulate asymmetry in the learning process, which typically also leads to a significant decrease in training time.
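
One way such an asymmetry-stimulating term can be realized, shown only as an illustration (the repulsive form and its strength are assumptions, not the paper's definition): a small penalty that pushes the student hidden-unit weight vectors apart, helping break out of the symmetric plateau.

```python
import numpy as np

def asymmetry_term_grad(W, gamma=1e-3):
    """Gradient of the penalty  P(W) = -gamma * sum_{i<j} ||w_i - w_j||^2,
    which, under gradient descent on loss + P, nudges the rows of W apart."""
    K = W.shape[0]
    # dP/dw_i = -2 * gamma * sum_j (w_i - w_j) = -2 * gamma * (K * w_i - sum_j w_j)
    return -2 * gamma * (K * W - W.sum(axis=0, keepdims=True))

# Usage inside an on-line update:  W -= eta * (task_gradient + asymmetry_term_grad(W))
W = np.random.default_rng(0).standard_normal((3, 100)) / 10
print(asymmetry_term_grad(W).shape)
```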

Relevance:

30.00%

Publisher:

Abstract:

On-line learning is examined for the radial basis function network, an important and practical type of neural network. The evolution of generalization error is calculated within a framework which allows the phenomena of the learning process, such as the specialization of the hidden units, to be analyzed. The distinct stages of training are elucidated, and the role of the learning rate described. The three most important stages of training, the symmetric phase, the symmetry-breaking phase, and the convergence phase, are analyzed in detail; the convergence phase analysis allows derivation of maximal and optimal learning rates. As well as finding the evolution of the mean system parameters, the variances of these parameters are derived and shown to be typically small. Finally, the analytic results are strongly confirmed by simulations.
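
A minimal sketch of on-line gradient descent for a radial basis function network with adaptable centers and output weights (dimensions, basis width, and learning rate are assumptions; this shows the model class being analyzed, not the paper's analytic framework):

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, sigma, eta = 4, 3, 2.0, 0.05              # input dim, basis functions, width, rate (assumed)

def rbf_out(x, C, w):
    """RBF network: weighted sum of Gaussian basis functions centred at the rows of C."""
    phi = np.exp(-np.sum((x - C) ** 2, axis=1) / (2 * sigma ** 2))
    return w @ phi, phi

# Teacher RBF network providing targets; the student sees one random example at a time.
C_t, w_t = rng.standard_normal((K, N)), rng.standard_normal(K)
C, w = rng.standard_normal((K, N)), np.zeros(K)

for step in range(50000):
    x = rng.standard_normal(N)
    y_t, _ = rbf_out(x, C_t, w_t)
    y, phi = rbf_out(x, C, w)
    err = y - y_t
    w -= eta * err * phi                                         # output-weight update
    C -= eta * err * (w * phi)[:, None] * (x - C) / sigma ** 2   # center update (chain rule)

x = rng.standard_normal(N)
print("student vs teacher on a fresh input:", rbf_out(x, C, w)[0], rbf_out(x, C_t, w_t)[0])
```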

Relevance:

30.00%

Publisher:

Abstract:

We analyse the matrix momentum algorithm, which provides an efficient approximation to on-line Newton's method, by extending a recent statistical mechanics framework to include second order algorithms. We study the efficacy of this method when the Hessian is available and also consider a practical implementation which uses a single example estimate of the Hessian. The method is shown to provide excellent asymptotic performance, although the single example implementation is sensitive to the choice of training parameters. We conjecture that matrix momentum could provide efficient matrix inversion for other second order algorithms.
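
A hedged sketch of the idea behind matrix momentum on a quadratic test problem (the problem, step sizes, and the exact way the Hessian enters the momentum matrix are illustrative assumptions, not the paper's prescription): ordinary momentum with its scalar coefficient replaced by a matrix built from a Hessian estimate, so that the effective step becomes Newton-like. In the single-example variant discussed above, a one-sample estimate of H would be used in place of the exact Hessian.

```python
import numpy as np

rng = np.random.default_rng(0)

# Quadratic test problem  E(w) = 0.5 * (w - w_star)^T H (w - w_star)
d = 20
A = rng.standard_normal((d, d))
H = A @ A.T / d + 0.5 * np.eye(d)               # positive-definite Hessian
w_star = rng.standard_normal(d)

eps, mu = 0.05, 0.1                              # step size and momentum scale (assumed)
w = np.zeros(d)
dw = np.zeros(d)
M = np.eye(d) - mu * H                           # matrix-valued momentum coefficient

for t in range(2000):
    grad = H @ (w - w_star)
    dw = -eps * grad + M @ dw                    # matrix momentum: scalar beta -> (I - mu * H)
    w = w + dw

print("distance to optimum:", np.linalg.norm(w - w_star))
```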

Relevance:

30.00%

Publisher:

Abstract:

The dynamics of supervised learning in layered neural networks were studied in the regime where the size of the training set is proportional to the number of inputs. The evolution of macroscopic observables, including the two relevant performance measures, can be predicted using dynamical replica theory. Three approximation schemes aimed at eliminating the need to solve a functional saddle-point equation at each time step have been derived.
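
For orientation only: the macroscopic observables customarily tracked in this line of work, written here for a simple perceptron student w learning from a teacher B on N-dimensional inputs with the teacher normalized so that B·B = N (the paper's own observables and performance measures may differ).

```latex
Q = \frac{1}{N}\,\mathbf{w}\cdot\mathbf{w}, \qquad
R = \frac{1}{N}\,\mathbf{w}\cdot\mathbf{B}, \qquad
E_g = \frac{1}{\pi}\arccos\!\left(\frac{R}{\sqrt{Q}}\right)
```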

Relevance:

30.00%

Publisher:

Abstract:

This study used magnetoencephalography (MEG) to examine the dynamic patterns of neural activity underlying the auditory steady-state response. We examined the continuous time-series of responses to a 32-Hz amplitude modulation. Fluctuations in the amplitude of the evoked response were found to be mediated by non-linear interactions with oscillatory processes both at the same source, in the alpha and beta frequency bands, and in the opposite hemisphere. © 2005 Elsevier Ireland Ltd. All rights reserved.
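
A small sketch of one way to track the time-varying amplitude of a 32-Hz steady-state response on a single synthetic trace (band edges, filter order, and sampling rate are assumptions; this is not the authors' MEG pipeline):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 600.0                                      # Hz, assumed sampling rate
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(0)

# Synthetic sensor trace: a 32-Hz steady-state response whose amplitude waxes and wanes,
# buried in broadband noise.
envelope_true = 1 + 0.5 * np.sin(2 * np.pi * 0.3 * t)
meg = envelope_true * np.sin(2 * np.pi * 32 * t) + rng.standard_normal(t.size)

# Band-pass around 32 Hz, then the analytic-signal envelope gives the response amplitude.
b, a = butter(4, [28 / (fs / 2), 36 / (fs / 2)], btype="band")
narrow = filtfilt(b, a, meg)
amplitude = np.abs(hilbert(narrow))
print("mean recovered 32-Hz amplitude:", amplitude.mean())
```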

Relevance:

30.00%

Publisher:

Abstract:

Recently there has been a surge of interest in extending topographic maps of vectorial data to more general data structures, such as sequences or trees. However, there is no general consensus as to how best to process sequences using topographic maps, and this topic remains an active focus of neurocomputational research. The representational capabilities and internal representations of the models are not well understood. Here, we rigorously analyze a generalization of the self-organizing map (SOM) for processing sequential data, the recursive SOM (RecSOM) (Voegtlin, 2002), as a nonautonomous dynamical system consisting of a set of fixed-input maps. We argue that contractive fixed-input maps are likely to produce Markovian organizations of receptive fields on the RecSOM map. We derive bounds on the parameter β (weighting the importance of past information when processing sequences) under which contractiveness of the fixed-input maps is guaranteed. Some generalizations of SOM contain a dynamic module responsible for processing temporal contexts as an integral part of the model. We show that Markovian topographic maps of sequential data can be produced using a simple fixed (nonadaptable) dynamic module externally feeding a standard topographic model designed to process static vectorial data of fixed dimensionality (e.g., SOM). However, by allowing trainable feedback connections, one can obtain Markovian maps with superior memory depth and topography preservation. We elaborate on the importance of non-Markovian organizations in topographic maps of sequential data. © 2006 Massachusetts Institute of Technology.
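
A compact sketch of the RecSOM fixed-input map discussed above (map size, α, and β are illustrative values; the contraction probe is a crude numerical estimate over random pairs, not the paper's analytical bound):

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, d_in = 25, 3                           # map size and input dimension (assumed)
alpha, beta = 0.5, 0.3                          # RecSOM mixing parameters (assumed)
W = rng.uniform(-1, 1, (n_units, d_in))         # input weights
C = rng.uniform(0, 1, (n_units, n_units))       # context weights

def fixed_input_map(y_prev, s):
    """RecSOM activation for a fixed input s: y_i = exp(-a*||s-w_i||^2 - b*||y_prev-c_i||^2)."""
    d_inp = np.sum((s - W) ** 2, axis=1)
    d_ctx = np.sum((y_prev - C) ** 2, axis=1)
    return np.exp(-alpha * d_inp - beta * d_ctx)

# Crude numerical probe of contractiveness: ratio of output to input distances for
# random pairs of context activations under the same fixed input s.
s = rng.uniform(-1, 1, d_in)
ratios = []
for _ in range(1000):
    y1, y2 = rng.uniform(0, 1, n_units), rng.uniform(0, 1, n_units)
    num = np.linalg.norm(fixed_input_map(y1, s) - fixed_input_map(y2, s))
    ratios.append(num / np.linalg.norm(y1 - y2))
print("max observed expansion ratio (<1 suggests contraction):", max(ratios))
```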