12 results for motor imagery

at Duke University


Relevance: 20.00%

Abstract:

BACKGROUND: Kinesin motors hydrolyze ATP to produce force and move along microtubules, converting chemical energy into work by a mechanism that is only poorly understood. Key transitions and intermediate states in the process are still structurally uncharacterized, and remain outstanding questions in the field. Perturbing the motor by introducing point mutations could stabilize transitional or unstable states, providing critical information about these rarer states. RESULTS: Here we show that mutation of a single residue in the kinesin-14 Ncd causes the motor to release ADP and hydrolyze ATP faster than wild type, but move more slowly along microtubules in gliding assays, uncoupling nucleotide hydrolysis from force generation. A crystal structure of the motor shows a large rotation of the stalk, a conformation representing a force-producing stroke of Ncd. Three C-terminal residues of Ncd, visible for the first time, interact with the central beta-sheet and dock onto the motor core, forming a structure resembling the kinesin-1 neck linker, which has been proposed to be the primary force-generating mechanical element of kinesin-1. CONCLUSIONS: Force generation by minus-end Ncd involves docking of the C-terminus, which forms a structure resembling the kinesin-1 neck linker. The mechanism by which the plus- and minus-end motors produce force to move to opposite ends of the microtubule appears to involve the same conformational changes, but distinct structural linkers. Unstable ADP binding may destabilize the motor-ADP state, triggering Ncd stalk rotation and C-terminus docking, producing a working stroke of the motor.

Relevance: 20.00%

Abstract:

A tree-based dictionary learning model is developed for joint analysis of imagery and associated text. The dictionary learning may be applied directly to image patches, or to general feature vectors extracted from patches or superpixels (using any existing method for image feature extraction). Each image is associated with a path through the tree (from root to a leaf), and each of the multiple patches in a given image is associated with one node in that path. Nodes near the tree root are shared between multiple paths, representing image characteristics that are common among different types of images. Moving toward the leaves, nodes become specialized, representing details in image classes. If available, words (text) are also jointly modeled, with a path-dependent probability over words. The tree structure is inferred via a nested Dirichlet process, and a retrospective stick-breaking sampler is used to infer the tree depth and width.
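
To make the tree prior concrete, the following is a minimal sketch of a truncated stick-breaking construction for sampling image paths and patch-to-node assignments. It uses a fixed maximum depth and width for simplicity, whereas the paper infers depth and width with a retrospective sampler; all function and parameter names (e.g. `max_depth`, `max_width`) are illustrative, not from the paper.

```python
# Minimal sketch: sampling root-to-leaf paths under a truncated
# nested stick-breaking tree prior. Assumed simplification: fixed
# max depth/width instead of the paper's retrospective sampler.
import numpy as np

rng = np.random.default_rng(0)

def stick_breaking_weights(alpha, truncation, rng):
    """Truncated stick-breaking weights for a Dirichlet process with concentration alpha."""
    betas = rng.beta(1.0, alpha, size=truncation)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas[:-1])))
    return betas * remaining

def sample_image_path(alpha, max_depth, max_width, rng):
    """Each image picks one child at every level, yielding a root-to-leaf path."""
    path = []
    for _ in range(max_depth):
        w = stick_breaking_weights(alpha, max_width, rng)
        w = w / w.sum()                      # renormalize the truncated sticks
        path.append(int(rng.choice(max_width, p=w)))
    return tuple(path)

def assign_patches_to_path(n_patches, max_depth, rng):
    """Each patch of an image is assigned to one node (depth) along the image's path."""
    return rng.integers(0, max_depth, size=n_patches)

# Toy usage: 3 images, 8 patches each, depth-4 tree.
for img in range(3):
    path = sample_image_path(alpha=1.0, max_depth=4, max_width=5, rng=rng)
    depths = assign_patches_to_path(n_patches=8, max_depth=4, rng=rng)
    print(f"image {img}: path={path}, patch depths={depths.tolist()}")
```

Shared prefixes of the sampled paths correspond to the shared nodes near the root described above; deeper, rarely shared nodes capture class-specific detail.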

Relevance: 20.00%

Abstract:

Vocal learning is a critical behavioral substrate for spoken human language. It is a rare trait found in three distantly related groups of birds: songbirds, hummingbirds, and parrots. These avian groups have remarkably similar systems of cerebral vocal nuclei for the control of learned vocalizations that are not found in their more closely related vocal non-learning relatives. These findings led to the hypothesis that brain pathways for vocal learning in different groups evolved independently from a common ancestor but under pre-existing constraints. Here, we suggest one constraint, a pre-existing system for movement control. Using behavioral molecular mapping, we discovered that in songbirds, parrots, and hummingbirds, all cerebral vocal learning nuclei are adjacent to discrete brain areas active during limb and body movements. Similar to the relationships between vocal nuclei activation and singing, activation in the adjacent areas correlated with the amount of movement performed and was independent of auditory and visual input. These same movement-associated brain areas were also present in female songbirds that do not learn vocalizations and have atrophied cerebral vocal nuclei, and in ring doves that are vocal non-learners and do not have cerebral vocal nuclei. A compilation of previous neural tracing experiments in songbirds suggests that the movement-associated areas are connected in a network that is in parallel with the adjacent vocal learning system. This study is the first global mapping that we are aware of for movement-associated areas of the avian cerebrum, and it indicates that brain systems that control vocal learning in distantly related birds are directly adjacent to brain systems involved in movement control. Based upon these findings, we propose a motor theory for the origin of vocal learning, namely that the brain areas specialized for vocal learning in vocal learners evolved as a specialization of a pre-existing motor pathway that controls movement.

Relevance: 20.00%

Abstract:

Mechanisms for the evolution of convergent behavioral traits are largely unknown. Vocal learning is one such trait that evolved multiple times and is necessary in humans for the acquisition of spoken language. Among birds, vocal learning has evolved in songbirds, parrots, and hummingbirds. Each time, similar forebrain song nuclei specialized for vocal learning and production have evolved. This finding led to the hypothesis that the behavioral and neuroanatomical convergences for vocal learning could be associated with molecular convergence. We previously found that the neural activity-induced gene dual specificity phosphatase 1 (dusp1) was up-regulated in non-vocal circuits, specifically in sensory-input neurons of the thalamus and telencephalon; however, dusp1 was not up-regulated in higher order sensory neurons or motor circuits. Here we show that song motor nuclei are an exception to this pattern. The song nuclei of species from all known vocal learning avian lineages showed motor-driven up-regulation of dusp1 expression induced by singing. There was no detectable motor-driven dusp1 expression throughout the rest of the forebrain after non-vocal motor performance. This pattern contrasts with expression of the commonly studied activity-induced gene egr1, which shows motor-driven expression in song nuclei induced by singing, but also motor-driven expression in adjacent brain regions after non-vocal motor behaviors. In the vocal non-learning avian species, we found no detectable vocalizing-driven dusp1 expression in the forebrain. These findings suggest that independent evolutions of neural systems for vocal learning were accompanied by selection for specialized motor-driven expression of the dusp1 gene in those circuits. This specialized expression of dusp1 could potentially lead to differential regulation of dusp1-modulated molecular cascades in vocal learning circuits.

Relevance: 20.00%

Abstract:

OBJECTIVE: To review the experience at a single institution with motor evoked potential (MEP) monitoring during intracranial aneurysm surgery to determine the incidence of unacceptable movement. METHODS: Neurophysiology event logs and anesthetic records from 220 craniotomies for aneurysm clipping were reviewed for unacceptable patient movement or reason for cessation of MEPs. Muscle relaxants were not given after intubation. Transcranial MEPs were recorded from bilateral abductor hallucis and abductor pollicis muscles. MEP stimulus intensity was increased up to 500 V until evoked potential responses were detectable. RESULTS: Out of 220 patients, 7 (3.2%) exhibited unacceptable movement with MEP stimulation: 2 had nociception-induced movement and 5 had excessive field movement. In all but one case, MEP monitoring could be resumed, yielding a 99.5% monitoring rate. CONCLUSIONS: With this anesthetic and monitoring regimen, the authors were able to record MEPs of the upper and lower extremities in all patients and found that only 3.2% demonstrated unacceptable movement. With a suitable anesthetic technique, MEP monitoring in the upper and lower extremities appears to be feasible in most patients and should not be withheld because of concern for movement during neurovascular surgery.

Relevance: 20.00%

Abstract:

Remembering past events - or episodic retrieval - consists of several components. There is evidence that mental imagery plays an important role in retrieval and that the brain regions supporting imagery overlap with those supporting retrieval. An open issue is to what extent these regions support successful vs. unsuccessful imagery and retrieval processes. Previous studies that examined regional overlap between imagery and retrieval used uncontrolled memory conditions, such as autobiographical memory tasks, that cannot distinguish between successful and unsuccessful retrieval. A second issue is that fMRI studies that compared imagery and retrieval have used modality-aspecific cues that are likely to activate auditory and visual processing regions simultaneously. Thus, it is not clear to what extent identified brain regions support modality-specific or modality-independent imagery and retrieval processes. In the current fMRI study, we addressed this issue by comparing imagery to retrieval under controlled memory conditions in both auditory and visual modalities. We also obtained subjective measures of imagery quality allowing us to dissociate regions contributing to successful vs. unsuccessful imagery. Results indicated that auditory and visual regions contribute both to imagery and retrieval in a modality-specific fashion. In addition, we identified four sets of brain regions with distinct patterns of activity that contributed to imagery and retrieval in a modality-independent fashion. The first set of regions, including hippocampus, posterior cingulate cortex, medial prefrontal cortex and angular gyrus, showed a pattern common to imagery/retrieval and consistent with successful performance regardless of task. The second set of regions, including dorsal precuneus, anterior cingulate and dorsolateral prefrontal cortex, also showed a pattern common to imagery and retrieval, but consistent with unsuccessful performance during both tasks. Third, left ventrolateral prefrontal cortex showed an interaction between task and performance and was associated with successful imagery but unsuccessful retrieval. Finally, the fourth set of regions, including ventral precuneus, midcingulate cortex and supramarginal gyrus, showed the opposite interaction, supporting unsuccessful imagery, but successful retrieval performance. Results are discussed in relation to reconstructive, attentional, semantic memory, and working memory processes. This is the first study to separate the neural correlates of successful and unsuccessful performance for both imagery and retrieval and for both auditory and visual modalities.

Relevance: 20.00%

Abstract:

Behavior, neuropsychology, and neuroimaging suggest that episodic memories are constructed from interactions among the following basic systems: vision, audition, olfaction, other senses, spatial imagery, language, emotion, narrative, motor output, explicit memory, and search and retrieval. Each system has its own well-documented functions, neural substrates, processes, structures, and kinds of schemata. However, the systems have not been considered as interacting components of episodic memory, as is proposed here. Autobiographical memory and oral traditions are used to demonstrate the usefulness of the basic-systems model in accounting for existing data and predicting novel findings, and to argue that the model, or one similar to it, is the only way to understand episodic memory for complex stimuli routinely encountered outside the laboratory.

Relevance: 20.00%

Abstract:

Line drawings were presented in either a spatial or a nonspatial format. Subjects recalled each of four sets of 24 items in serial order. Amount recalled in the correct serial order and sequencing errors were scored. In Experiment 1 items appeared either in consecutive locations of a matrix or in one central location. Subjects who saw the items in different locations made fewer sequencing errors than those who saw each item in a central location, but serial recall levels for these two conditions did not differ. When items appeared in nonconsecutive locations in Experiment 2, the advantage of the spatial presentation on sequencing errors disappeared. Experiment 3 included conditions in which both the consecutive and nonconsecutive spatial formats were paired with retrieval cues that either did or did not indicate the sequence of locations in which the items had appeared. Spatial imagery aided sequencing when, and only when, the order of locations in which the stimuli appeared could be reconstructed at retrieval.

Relevance: 20.00%

Abstract:

Imagery and concreteness norms and percentage noun usage were obtained on the 1,080 verbal items from the Toronto Word Pool. Imagery was defined as the rated ease with which a word aroused a mental image, and concreteness was defined in relation to level of abstraction. The degree to which a word was functionally a noun was estimated in a sentence generation task. The mean and standard deviation of the imagery and concreteness ratings for each item are reported together with letter and printed frequency counts for the words and indications of sex differences in the ratings. Additional data in the norms include a grammatical function code derived from dictionary definitions, a percent noun judgment, indexes of statistical approximation to English, and an orthographic neighbor ratio. Validity estimates for the imagery and concreteness ratings are derived from comparisons with scale values drawn from the Paivio, Yuille, and Madigan (1968) noun pool and the Toglia and Battig (1978) norms.
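
To illustrate the basic norming computation (per-item means and standard deviations, plus a split by rater sex), here is a minimal sketch on hypothetical data; the column names and values are invented for illustration and are not the Toronto Word Pool files or the published norm format.

```python
# Hypothetical sketch: per-item imagery/concreteness norms from raw ratings.
# Column names and data are illustrative only.
import pandas as pd

ratings = pd.DataFrame({
    "word":         ["anchor", "anchor", "justice", "justice"],
    "rater_sex":    ["F", "M", "F", "M"],
    "imagery":      [6.4, 6.1, 2.3, 2.0],   # rated ease of arousing a mental image
    "concreteness": [6.8, 6.7, 1.9, 2.1],   # rated level of abstraction
})

# Per-item mean and standard deviation, the quantities reported in norm tables.
norms = ratings.groupby("word")[["imagery", "concreteness"]].agg(["mean", "std"])

# Sex differences in the ratings: mean imagery rating by rater sex for each item.
sex_split = ratings.pivot_table(index="word", columns="rater_sex",
                                values="imagery", aggfunc="mean")

print(norms)
print(sex_split)
```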

Relevance: 20.00%

Abstract:

The ability to quickly detect and respond to visual stimuli in the environment is critical to many human activities. While such perceptual and visual-motor skills are important in a myriad of contexts, considerable variability exists between individuals in these abilities. To better understand the sources of this variability, we assessed perceptual and visual-motor skills in a large sample of 230 healthy individuals via the Nike SPARQ Sensory Station, and compared variability in their behavioral performance to demographic, state, sleep, and consumption characteristics. Dimension reduction and regression analyses indicated three underlying factors: Visual-Motor Control, Visual Sensitivity, and Eye Quickness, which accounted for roughly half of the overall population variance in performance on this battery. Inter-individual variability in Visual-Motor Control was correlated with gender and circadian patterns such that performance on this factor was better for males and for those who had been awake for a longer period of time before assessment. The current findings indicate that abilities involving coordinated hand movements in response to stimuli are subject to greater individual variability, while visual sensitivity and oculomotor control are largely stable across individuals.
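
The general analysis pattern described here (extract a few latent factors from the task battery, then relate factor scores to demographic and state variables) can be sketched as follows. This is a toy illustration on synthetic data, not the study's actual pipeline, battery scores, or predictor coding.

```python
# Illustrative sketch only: factor extraction followed by regression of
# factor scores on demographic/state predictors. Synthetic data throughout.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

n_subjects, n_tasks = 230, 9
scores = rng.normal(size=(n_subjects, n_tasks))      # stand-in for task battery scores
gender = rng.integers(0, 2, size=n_subjects)         # hypothetical 0/1 coding
hours_awake = rng.uniform(1, 16, size=n_subjects)    # hypothetical time awake before testing

# Step 1: reduce the task battery to a small number of latent factors.
fa = FactorAnalysis(n_components=3, random_state=0)
factor_scores = fa.fit_transform(scores)             # shape (n_subjects, 3)

# Step 2: regress each factor on the candidate predictors.
X = np.column_stack([gender, hours_awake])
for k in range(factor_scores.shape[1]):
    model = LinearRegression().fit(X, factor_scores[:, k])
    r2 = model.score(X, factor_scores[:, k])
    print(f"factor {k}: coefficients={model.coef_}, R^2={r2:.3f}")
```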

Relevance: 20.00%

Abstract:

Saccadic eye movements can be elicited by more than one type of sensory stimulus. This implies substantial transformations of signals originating in different sense organs as they reach a common motor output pathway. In this study, we compared the prevalence and magnitude of auditory- and visually evoked activity in a structure implicated in oculomotor processing, the primate frontal eye fields (FEF). We recorded from 324 single neurons while 2 monkeys performed delayed saccades to visual or auditory targets. We found that 64% of FEF neurons were active on presentation of auditory targets and 87% were active during auditory-guided saccades, compared with 75 and 84% for visual targets and saccades. As saccade onset approached, the average level of population activity in the FEF became indistinguishable on visual and auditory trials. FEF activity was better correlated with the movement vector than with the target location for both modalities. In summary, the large proportion of auditory-responsive neurons in the FEF, the similarity between visual and auditory activity levels at the time of the saccade, and the strong correlation between the activity and the saccade vector suggest that auditory signals are tailored to roughly match the strength of visual signals present in the FEF, facilitating access to a common motor output pathway.

Relevance: 20.00%

Abstract:

Recent memories are generally recalled from a first-person perspective whereas older memories are often recalled from a third-person perspective. We investigated how repeated retrieval affects the availability of visual information, and whether it could explain the observed shift in perspective with time. In Experiment 1, participants performed mini-events and nominated memories of recent autobiographical events in response to cue words. Next, they described their memory for each event and rated its phenomenological characteristics. Over the following three weeks, they repeatedly retrieved half of the mini-event and cue-word memories. No instructions were given about how to retrieve the memories. In Experiment 2, participants were asked to adopt either a first- or third-person perspective during retrieval. One month later, participants retrieved all of the memories and again provided phenomenology ratings. When first-person visual details from the event were repeatedly retrieved, this information was retained better and the shift in perspective was slowed.