285 results for gephyrin, synapse
Abstract:
The recombination-activating gene products, RAG1 and RAG2, initiate V(D)J recombination during lymphocyte development by cleaving DNA adjacent to conserved recombination signal sequences (RSSs). The reaction involves DNA binding, synapsis, and cleavage at two RSSs located on the same DNA molecule, and results in the assembly of antigen receptor genes. Since their discovery, full-length RAG1 and RAG2 have been difficult to purify, and core derivatives have proved most active when purified from adherent 293-T cells. However, the protein yield from adherent 293-T cells is limited. Here we develop a purification from human suspension cells and change the expression vector, boosting RAG production 6-fold. We use these purified RAG proteins to investigate V(D)J recombination mechanistically at the single-molecule level. As a result, we are able to measure the binding statistics (dwell times and binding energies) of the initial RAG binding events with or without its co-factor, high mobility group box protein 1 (HMGB1), and to characterize synapse formation at the single-molecule level, yielding insights into the distribution of dwell times in the paired complex and the propensity for cleavage upon forming the synapse. We then investigate HMGB1 further by measuring its compaction of single DNA molecules. We observe concentration-dependent DNA compaction that differs with the divalent cation type, and find that, at a given HMGB1 concentration, the percentage of DNA compacted is conserved across DNA lengths. Lastly, we investigate another HMGB protein, TFAM, which is essential for packaging the mitochondrial genome. We present crystal structures of TFAM bound to the heavy strand promoter 1 (HSP1) and to nonspecific DNA. We show that TFAM dimerization is dispensable for DNA bending and transcriptional activation but is required for mtDNA compaction. We propose that TFAM dimerization enhances mtDNA compaction by promoting looping of mtDNA.
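The dwell-time analysis mentioned above can be illustrated with a minimal sketch, assuming a two-state (bound/unbound) model in which a single-molecule trace has been binarized and bound-state dwell times are exponentially distributed; the function names, the example trace, and the frame interval are all hypothetical.

```python
# Hypothetical sketch: extracting bound-state dwell times from a
# binarized single-molecule binding trace (1 = protein bound), assuming
# a two-state model with exponentially distributed dwell times.
import numpy as np

def dwell_times(bound, dt):
    """Durations of contiguous bound intervals, in the units of dt."""
    bound = np.asarray(bound, dtype=bool)
    # Pad with False so intervals touching the trace edges are counted.
    edges = np.diff(np.concatenate(([False], bound, [False])).astype(int))
    starts = np.flatnonzero(edges == 1)    # 0 -> 1 transitions
    ends = np.flatnonzero(edges == -1)     # 1 -> 0 transitions
    return (ends - starts) * dt

def mean_dwell(dwells):
    """MLE of the mean dwell time tau for an exponential distribution;
    the dissociation rate is then k_off = 1 / tau."""
    return float(np.mean(dwells))

trace = [0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0]      # toy binarized trace
d = dwell_times(trace, dt=0.1)                 # three bound intervals
print(mean_dwell(d))                           # mean dwell time in seconds
```

The same interval-extraction step works for unbound dwells (invert the trace), which together give the on- and off-rates characterizing the binding statistics.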
Abstract:
Dendritic cells (DCs) are essential for combating invading viruses and triggering antiviral responses. Paradoxically, in the case of HIV-1, DCs might contribute to viral pathogenesis through trans-infection, a mechanism that promotes viral capture and transmission to target cells, especially after DC maturation. In this review, we highlight recent evidence identifying sialyllactose-containing gangliosides in the viral membrane and the cellular lectin Siglec-1 as critical determinants of HIV-1 capture and storage by mature DCs and of DC-mediated trans-infection of T cells. In contrast, DC-SIGN, long considered the main receptor for DC capture of HIV-1, plays a minor role in mature DC-mediated HIV-1 capture and trans-infection.
Abstract:
Background: Ubiquitination is known to regulate physiological neuronal functions and to be involved in a number of neuronal diseases. Several ubiquitin proteomic approaches have been developed during the last decade but, as they have mostly been applied to non-neuronal cell cultures, very little is yet known about neuronal ubiquitination pathways in vivo. Methodology/Principal Findings: Using an in vivo biotinylation strategy, we isolated and identified the ubiquitinated proteome in neurons of both the developing embryonic brain and the adult eye of Drosophila melanogaster. Bioinformatic comparison of the two datasets indicates a significant difference in ubiquitin substrates, which correlates with the processes most active at each developmental stage. Detection within the isolated material of two ubiquitin E3 ligases, Parkin and Ube3a, indicates their ubiquitinating activity in the studied tissues. Further identification of the proteins that accumulate upon interference with the proteasomal degradative pathway indicates which proteins are targeted for clearance in neurons. Lastly, we report a proof-of-principle validation of two lysine residues required for nSyb ubiquitination. Conclusions/Significance: These data shed light on the ubiquitination pathways that differ between, and are shared by, embryonic and adult neurons, and will hence contribute to understanding the mechanisms by which neuronal function is regulated. The in vivo biotinylation methodology described here complements other approaches to ubiquitome study, offers unique advantages, and is poised to provide further insight into disease mechanisms related to the ubiquitin proteasome system.
Abstract:
Synapses exhibit an extraordinary degree of short-term malleability, with release probabilities and effective synaptic strengths changing markedly over multiple timescales. From the perspective of a fixed computational operation in a network, this seems like a most unacceptable degree of added variability. We suggest an alternative theory according to which short-term synaptic plasticity plays a normatively justifiable role. This theory starts from the commonplace observation that the spiking of a neuron is an incomplete, digital report of the analog quantity that contains all the critical information, namely its membrane potential. We suggest that a synapse solves the inverse problem of estimating the presynaptic membrane potential from the spikes it receives, acting as a recursive filter. We show that the dynamics of short-term synaptic depression closely resemble those required for optimal filtering, and that they indeed support high-quality estimation. Under this account, the local postsynaptic potential and the level of synaptic resources track the (scaled) mean and variance of the estimated presynaptic membrane potential. We make experimentally testable predictions for how the statistics of subthreshold membrane potential fluctuations and the form of the spiking nonlinearity should be related to the properties of short-term plasticity in any particular cell type.
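The recursive-filtering idea can be sketched as a toy update rule (not the paper's actual model): between spikes the estimated mean relaxes toward a prior resting value while the variance grows back toward the prior variance; each spike is evidence of a high potential, pulling the mean up and shrinking the variance, qualitatively like depression of synaptic resources. All parameter values and the threshold-based observation model below are illustrative assumptions.

```python
# Toy recursive filter estimating a presynaptic membrane potential from
# spikes. Assumed (hypothetical) parameters: relaxation time tau, prior
# mean mu0 and variance sigma2, spiking threshold theta, and gain.
def filter_step(mu, var, spiked, dt, tau=20.0, mu0=-65.0, sigma2=25.0, gain=0.5):
    """One time step: (mu, var) -> updated (mu, var) in mV / mV^2."""
    lam = dt / tau
    # Prior dynamics: relax the mean toward rest, variance toward prior.
    mu = mu + lam * (mu0 - mu)
    var = var + lam * (sigma2 - var)
    if spiked:
        theta = -50.0                      # assumed spiking threshold (mV)
        k = gain * var / (var + sigma2)    # pseudo Kalman gain
        mu = mu + k * (theta - mu)         # spike pulls estimate upward
        var = (1.0 - k) * var              # spike reduces uncertainty
    return mu, var
```

Iterating `filter_step` over a spike train yields a running (mean, variance) estimate, the two quantities the abstract proposes are tracked by the postsynaptic potential and the synaptic-resource level.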
Abstract:
Among the many applications of biosensors, an exciting frontier is to use these devices as post-synaptic sensing elements in chemical coupling between neurons and solid-state systems. The first step toward this goal is to realize a highly efficient detector for the neurotransmitter acetylcholine (ACh). Herein, we demonstrate that combining a floating-gate ion-sensitive field effect transistor (ISFET) configuration with dilute covalent anchoring of the enzyme acetylcholinesterase (AChE) onto the device sensing area yields a remarkable four-orders-of-magnitude improvement in the dose response to ACh. This high sensitivity, together with the benefits of the dedicated microelectronic design, shows that the presented hybrid provides a competent platform for assembling an artificial chemical synapse junction. Furthermore, our system exhibits a clear response to eserine, a competitive inhibitor of AChE, and can therefore also be implemented as an effective sensor of pharmacological reagents, organophosphates, and nerve gases. © 2007 Materials Research Society.
Abstract:
The Double Synapse Weighted Neuron (DSWN) is a general-purpose neuron model that can be configured as a Hyper-sausage neuron (HSN). After introducing the design method for the hardware DSWN synapse, this paper proposes a DSWN-based special-purpose neural computing device, CASSANN-IIspr. As an application, a rigid-body recognition system was developed on CASSANN-IIspr, achieving better performance than the RIBF-SVMs system.
Abstract:
Do humans and animals learn exemplars or prototypes when they categorize objects and events in the world? How are different degrees of abstraction realized through learning by neurons in inferotemporal and prefrontal cortex? How do top-down expectations influence the course of learning? Thirty related human cognitive experiments (the 5-4 category structure) have been used to test competing views in the prototype-exemplar debate. In these experiments, during the test phase, subjects unlearn in a characteristic way items that they had learned to categorize perfectly in the training phase. Many cognitive models do not describe how an individual learns or forgets such categories through time. Adaptive Resonance Theory (ART) neural models provide such a description, and also clarify both psychological and neurobiological data. Matching of bottom-up signals with learned top-down expectations plays a key role in ART model learning. Here, an ART model is used to learn incrementally in response to 5-4 category structure stimuli. Simulation results agree with experimental data, achieving perfect categorization in training and a good match to the pattern of errors exhibited by human subjects in the testing phase. These results show how the model learns both prototypes and certain exemplars in the training phase. ART prototypes are, however, unlike the ones posited in the traditional prototype-exemplar debate. Rather, they are critical patterns of features to which a subject learns to pay attention based on past predictive success and the order in which exemplars are experienced. Perturbations of old memories by newly arriving test items generate a performance curve that closely matches the performance pattern of human subjects. The model also clarifies exemplar-based accounts of data concerning amnesia.
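The key ART mechanism described above, matching bottom-up input against learned top-down expectations with resonance or search for a new category, can be illustrated with Fuzzy ART, a standard simplified variant of the theory (the specific model in the abstract likely differs in its details); the vigilance `rho`, choice parameter `alpha`, and learning rate `beta` values here are illustrative.

```python
# Minimal Fuzzy ART sketch: complement coding, category choice,
# vigilance-based matching, and fast learning of category prototypes.
import numpy as np

def fuzzy_art_learn(inputs, rho=0.75, alpha=0.001, beta=1.0):
    """Cluster inputs (features in [0,1]); returns weights and labels."""
    weights, labels = [], []
    for x in inputs:
        I = np.concatenate([x, 1.0 - x])           # complement coding
        # Choice function for each committed category.
        T = [np.minimum(I, w).sum() / (alpha + w.sum()) for w in weights]
        for j in np.argsort(T)[::-1]:              # try best match first
            match = np.minimum(I, weights[j]).sum() / I.sum()
            if match >= rho:                       # vigilance passed: resonance
                weights[j] = beta * np.minimum(I, weights[j]) \
                             + (1 - beta) * weights[j]
                labels.append(j)
                break
        else:                                      # all reset: commit new category
            weights.append(I.copy())
            labels.append(len(weights) - 1)
    return weights, labels
```

Each prototype here is a conjunction of critical feature ranges that shrinks only when matching inputs resonate with it, which is the sense in which ART prototypes depend on attention and on the order in which exemplars are experienced.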
Abstract:
Memories in Adaptive Resonance Theory (ART) networks are based on matched patterns that focus attention on those portions of bottom-up inputs that match active top-down expectations. While this learning strategy has proved successful for both brain models and applications, computational examples show that attention to early critical features may later distort memory representations during online fast learning. For supervised learning, biased ARTMAP (bARTMAP) solves the problem of over-emphasis on early critical features by directing attention away from previously attended features after the system makes a predictive error. Small-scale, hand-computed analog and binary examples illustrate key model dynamics. Two-dimensional simulation examples demonstrate the evolution of bARTMAP memories as they are learned online. Benchmark simulations show that featural biasing also improves performance on large-scale examples. One example, which predicts movie genres and is based, in part, on the Netflix Prize database, was developed for this project. Both first principles and consistent performance improvements on all simulation studies suggest that featural biasing should be incorporated by default in all ARTMAP systems. Benchmark datasets and bARTMAP code are available from the CNS Technology Lab Website: http://techlab.bu.edu/bART/.
Abstract:
Grouping of collinear boundary contours is a fundamental process during visual perception. Illusory contour completion vividly illustrates how stable perceptual boundaries interpolate between pairs of contour inducers, but do not extrapolate from a single inducer. Neural models have simulated how perceptual grouping occurs in laminar visual cortical circuits. These models predicted the existence of grouping cells that obey a bipole property whereby grouping can occur inwardly between pairs or greater numbers of similarly oriented and co-axial inducers, but not outwardly from individual inducers. These models have not, however, incorporated spiking dynamics. Perceptual grouping is a challenge for spiking cells because its properties of collinear facilitation and analog sensitivity to inducer configurations occur despite irregularities in spike timing across all the interacting cells. Other models have demonstrated spiking dynamics in laminar neocortical circuits, but not how perceptual grouping occurs. The current model begins to unify these two modeling streams by implementing a laminar cortical network of spiking cells whose intracellular temporal dynamics interact with recurrent intercellular spiking interactions to quantitatively simulate data from neurophysiological experiments about perceptual grouping, the structure of non-classical visual receptive fields, and gamma oscillations.
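The bipole property described above reduces to a simple gating rule: a grouping cell responds only when both receptive-field branches receive collinear, like-oriented support, so contours interpolate between inducers but never extrapolate from a single one. A toy illustration (thresholds and the min-combination rule are illustrative assumptions, not the models' actual equations):

```python
# Toy bipole gate: inward grouping requires support on BOTH sides.
def bipole_output(left_support, right_support, threshold=0.5):
    """Analog grouping response; zero unless both branches are active."""
    if left_support > threshold and right_support > threshold:
        return min(left_support, right_support)  # analog sensitivity to inducers
    return 0.0

print(bipole_output(0.8, 0.9))  # two inducers: grouping occurs
print(bipole_output(0.8, 0.0))  # one inducer: no outward extrapolation
```

The modeling challenge the abstract addresses is preserving exactly this AND-like analog behavior when the supporting inputs arrive as irregularly timed spikes rather than smooth rates.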
Abstract:
Computational models of learning typically train on labeled input patterns (supervised learning), unlabeled input patterns (unsupervised learning), or a combination of the two (semisupervised learning). In each case input patterns have a fixed number of features throughout training and testing. Human and machine learning contexts present additional opportunities for expanding incomplete knowledge from formal training, via self-directed learning that incorporates features not previously experienced. This article defines a new self-supervised learning paradigm to address these richer learning contexts, introducing a neural network called self-supervised ARTMAP. Self-supervised learning integrates knowledge from a teacher (labeled patterns with some features), knowledge from the environment (unlabeled patterns with more features), and knowledge from internal model activation (self-labeled patterns). Self-supervised ARTMAP learns about novel features from unlabeled patterns without destroying partial knowledge previously acquired from labeled patterns. A category selection function bases system predictions on known features, and distributed network activation scales unlabeled learning to prediction confidence. Slow distributed learning on unlabeled patterns focuses on novel features and confident predictions, defining classification boundaries that were ambiguous in the labeled patterns. Self-supervised ARTMAP improves test accuracy on illustrative low-dimensional problems and on high-dimensional benchmarks. Model code and benchmark data are available from: http://techlab.bu.edu/SSART/.
Abstract:
Studies of perceptual learning have focused on aspects of learning that are related to early stages of sensory processing. However, claims that perceptual learning results in low-level sensory plasticity remain highly controversial, largely because such learning can often be attributed to plasticity in later stages of sensory processing or in the decision processes. To address this controversy, we developed a novel random dot motion (RDM) stimulus to target motion cells selective to contrast polarity, by ensuring that the motion direction information arises only from signal dot onsets and not their offsets, and used these stimuli in conjunction with the paradigm of task-irrelevant perceptual learning (TIPL). In TIPL, learning is achieved in response to a stimulus by subliminally pairing that stimulus with the targets of an unrelated training task. In this manner, we are able to probe learning for an aspect of motion processing thought to be a function of directional V1 simple cells with a learning procedure that dissociates the learned stimulus from the decision processes relevant to the training task. Our results show learning for the exposed contrast polarity, and that this learning does not transfer to the unexposed contrast polarity. These results suggest that TIPL for motion stimuli may occur at the stage of directional V1 simple cells.
Abstract:
SyNAPSE program of the Defense Advanced Research Projects Agency (Hewlett-Packard Company, subcontract under DARPA prime contract HR0011-09-3-0001, and HRL Laboratories LLC, subcontract #801881-BS under DARPA prime contract HR0011-09-C-0001); CELEST, an NSF Science of Learning Center (SBE-0354378)
Abstract:
Anterior inferotemporal cortex (ITa) plays a key role in visual object recognition. Recognition is tolerant to object position, size, and view changes, yet recent neurophysiological data show ITa cells with high object selectivity often have low position tolerance, and vice versa. A neural model learns to simulate both this tradeoff and ITa responses to image morphs using large-scale and small-scale IT cells whose population properties may support invariant recognition.
Abstract:
How do humans use predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to more efficiently guide search in familiar scenes? For example, a certain combination of objects can define a context for a kitchen and trigger a more efficient search for a typical object, such as a sink, in that context. A neural model, ARTSCENE Search, is developed to illustrate the neural mechanisms of such memory-based contextual learning and guidance, and to explain challenging behavioral data on positive/negative, spatial/object, and local/distant global cueing effects during visual search. The model proposes how global scene layout at a first glance rapidly forms a hypothesis about the target location. This hypothesis is then incrementally refined by enhancing target-like objects in space as a scene is scanned with saccadic eye movements. The model clarifies the functional roles of neuroanatomical, neurophysiological, and neuroimaging data in visual search for a desired goal object. In particular, the model simulates the interactive dynamics of spatial and object contextual cueing in the cortical What and Where streams starting from early visual areas through medial temporal lobe to prefrontal cortex. After learning, model dorsolateral prefrontal cortical cells (area 46) prime possible target locations in posterior parietal cortex based on goal-modulated percepts of spatial scene gist represented in parahippocampal cortex, whereas model ventral prefrontal cortical cells (area 47/12) prime possible target object representations in inferior temporal cortex based on the history of viewed objects represented in perirhinal cortex. The model hereby predicts how the cortical What and Where streams cooperate during scene perception, learning, and memory to accumulate evidence over time to drive efficient visual search of familiar scenes.
Abstract:
Grid cells in the dorsal segment of the medial entorhinal cortex (dMEC) show remarkable hexagonal activity patterns, at multiple spatial scales, during spatial navigation. How these hexagonal patterns arise has excited intense interest. It has previously been shown how a self-organizing map can convert firing patterns across entorhinal grid cells into hippocampal place cells that are capable of representing much larger spatial scales. Can grid cell firing fields also arise during navigation through learning within a self-organizing map? A neural model is proposed that converts path integration signals into hexagonal grid cell patterns of multiple scales. This GRID model creates only grid cell patterns with the observed hexagonal structure, predicts how these hexagonal patterns can be learned from experience, and can process biologically plausible neural input and output signals during navigation. These results support a unified computational framework for explaining how entorhinal-hippocampal interactions support spatial navigation.
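The self-organizing-map mechanism that the GRID model builds on can be sketched generically: map cells compete for each input pattern, and the winner's weights move toward that input, so recurring input combinations (e.g. grid-cell population vectors at a given position) come to drive dedicated winners. This is a bare competitive-learning sketch, not the GRID model itself; cell count, learning rate, and data below are illustrative.

```python
# Generic winner-take-all self-organizing map sketch (no neighborhood
# function, for brevity): winners' weights are drawn toward the inputs.
import numpy as np

def som_train(patterns, n_cells=4, lr=0.2, epochs=20, seed=0):
    """Train map weights on an (N, d) array of input patterns."""
    rng = np.random.default_rng(seed)
    W = rng.random((n_cells, patterns.shape[1]))        # random initial weights
    for _ in range(epochs):
        for x in patterns:
            winner = np.argmin(np.linalg.norm(W - x, axis=1))  # competition
            W[winner] += lr * (x - W[winner])                  # move toward input
    return W

def winners(W, patterns):
    """Index of the winning map cell for each pattern."""
    return [int(np.argmin(np.linalg.norm(W - x, axis=1))) for x in patterns]
```

After training on two clusters of inputs, distinct cells win for each cluster, which is the sense in which downstream cells can come to represent conjunctions of their input patterns.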