Abstract:
Vocal learning is a critical behavioral substrate for spoken human language. It is a rare trait found in three distantly related groups of birds: songbirds, hummingbirds, and parrots. These avian groups have remarkably similar systems of cerebral vocal nuclei for the control of learned vocalizations that are not found in their more closely related vocal non-learning relatives. These findings led to the hypothesis that brain pathways for vocal learning in different groups evolved independently from a common ancestor but under pre-existing constraints. Here, we suggest one constraint: a pre-existing system for movement control. Using behavioral molecular mapping, we discovered that in songbirds, parrots, and hummingbirds, all cerebral vocal learning nuclei are adjacent to discrete brain areas active during limb and body movements. Similar to the relationships between vocal nuclei activation and singing, activation in the adjacent areas correlated with the amount of movement performed and was independent of auditory and visual input. These same movement-associated brain areas were also present in female songbirds that do not learn vocalizations and have atrophied cerebral vocal nuclei, and in ring doves that are vocal non-learners and do not have cerebral vocal nuclei. A compilation of previous neural tracing experiments in songbirds suggests that the movement-associated areas are connected in a network that runs in parallel with the adjacent vocal learning system. This study is the first global mapping that we are aware of for movement-associated areas of the avian cerebrum, and it indicates that brain systems that control vocal learning in distantly related birds are directly adjacent to brain systems involved in movement control. Based upon these findings, we propose a motor theory for the origin of vocal learning: the brain areas specialized for vocal learning in vocal learners evolved as a specialization of a pre-existing motor pathway that controls movement.
Abstract:
Remembering past events, or episodic retrieval, consists of several components. There is evidence that mental imagery plays an important role in retrieval and that the brain regions supporting imagery overlap with those supporting retrieval. An open issue is to what extent these regions support successful vs. unsuccessful imagery and retrieval processes. Previous studies that examined regional overlap between imagery and retrieval used uncontrolled memory conditions, such as autobiographical memory tasks, that cannot distinguish between successful and unsuccessful retrieval. A second issue is that fMRI studies comparing imagery and retrieval have used modality-nonspecific cues that are likely to activate auditory and visual processing regions simultaneously. Thus, it is not clear to what extent the identified brain regions support modality-specific or modality-independent imagery and retrieval processes. In the current fMRI study, we addressed this issue by comparing imagery to retrieval under controlled memory conditions in both auditory and visual modalities. We also obtained subjective measures of imagery quality, allowing us to dissociate regions contributing to successful vs. unsuccessful imagery. Results indicated that auditory and visual regions contribute to both imagery and retrieval in a modality-specific fashion. In addition, we identified four sets of brain regions with distinct patterns of activity that contributed to imagery and retrieval in a modality-independent fashion. The first set of regions, including the hippocampus, posterior cingulate cortex, medial prefrontal cortex, and angular gyrus, showed a pattern common to imagery and retrieval and consistent with successful performance regardless of task. The second set of regions, including the dorsal precuneus, anterior cingulate, and dorsolateral prefrontal cortex, also showed a pattern common to imagery and retrieval, but one consistent with unsuccessful performance during both tasks. Third, the left ventrolateral prefrontal cortex showed an interaction between task and performance and was associated with successful imagery but unsuccessful retrieval. Finally, the fourth set of regions, including the ventral precuneus, midcingulate cortex, and supramarginal gyrus, showed the opposite interaction, supporting unsuccessful imagery but successful retrieval. Results are discussed in relation to reconstructive, attentional, semantic memory, and working memory processes. This is the first study to separate the neural correlates of successful and unsuccessful performance for both imagery and retrieval and for both auditory and visual modalities.
Abstract:
When recalling autobiographical memories, individuals often experience visual images associated with the event. These images can be constructed from two different perspectives: first person, in which the event is visualized from the viewpoint experienced at encoding, or third person, in which the event is visualized from an external vantage point. Using a novel technique to measure visual perspective, we examined where the external vantage point is situated in third-person images. Individuals in two studies were asked to recall either 10 or 15 events from their lives and describe the perspectives they experienced. Wide variation in spatial locations was observed within third-person perspectives, with the location of these perspectives relating to the event being recalled. Results suggest remembering from an external viewpoint may be more common than previous studies have demonstrated.
Abstract:
The number of studies examining visual perspective during retrieval has grown in recent years. However, the way perspective has been conceptualized differs across studies. Some studies have suggested that perspective is experienced as either a first-person or a third-person perspective, whereas others have suggested that both perspectives can be experienced during a single retrieval attempt. We examined this aspect of perspective across three studies using different measurement techniques common in the perspective literature. Results suggest that individuals can experience more than one perspective when recalling events. Furthermore, the experience of the two perspectives correlated differentially with ratings of vividness, suggesting that the two perspectives should not be considered in opposition to one another. We also found evidence of a gender effect in the experience of perspective, with females experiencing third-person perspectives more often than males. Future studies should allow for the experience of more than one perspective during retrieval.
Abstract:
Amnesia typically results from trauma to the medial temporal regions that coordinate activation among the disparate areas of cortex that represent the information that makes up autobiographical memories. We proposed that amnesia should also result from damage to these cortical regions, particularly regions that subserve long-term visual memory [Rubin, D. C., & Greenberg, D. L. (1998). Visual memory-deficit amnesia: A distinct amnesic presentation and etiology. Proceedings of the National Academy of Sciences of the USA, 95, 5413-5416]. We previously found 11 such cases in the literature, and all 11 had amnesia. We now present a detailed investigation of one of these patients. M.S. suffers from long-term visual memory loss along with some semantic deficits; he also manifests a severe retrograde amnesia and a moderate anterograde amnesia. The presentation of his amnesia differs from that of the typical medial-temporal or lateral-temporal amnesic; we suggest that his visual deficits may be contributing to his autobiographical amnesia.
Abstract:
OBJECTIVE: The authors sought to increase understanding of the brain mechanisms involved in cigarette addiction by identifying neural substrates modulated by visual smoking cues in nicotine-deprived smokers. METHOD: Event-related functional magnetic resonance imaging (fMRI) was used to detect brain activation after exposure to smoking-related images in a group of nicotine-deprived smokers and a nonsmoking comparison group. Subjects viewed a pseudo-random sequence of smoking images, neutral nonsmoking images, and rare targets (photographs of animals). Subjects pressed a button whenever a rare target appeared. RESULTS: In smokers, the fMRI signal was greater after exposure to smoking-related images than after exposure to neutral images in mesolimbic dopamine reward circuits known to be activated by addictive drugs (right posterior amygdala, posterior hippocampus, ventral tegmental area, and medial thalamus) as well as in areas related to visuospatial attention (bilateral prefrontal and parietal cortex and right fusiform gyrus). In nonsmokers, no significant differences in fMRI signal following exposure to smoking-related and neutral images were detected. In most regions studied, both subject groups showed greater activation following presentation of rare target images than after exposure to neutral images. CONCLUSIONS: In nicotine-deprived smokers, both reward and attention circuits were activated by exposure to smoking-related images. Smoking cues are processed like rare targets in that they activate attentional regions. These cues are also processed like addictive drugs in that they activate mesolimbic reward regions.
Abstract:
We describe a form of amnesia, which we have called visual memory-deficit amnesia, that is caused by damage to areas of the visual system that store visual information. Because it is caused by a deficit in access to stored visual material and not by an impaired ability to encode or retrieve new material, it has the otherwise infrequent properties of a more severe retrograde than anterograde amnesia with no temporal gradient in the retrograde amnesia. Of the 11 cases of long-term visual memory loss found in the literature, all had amnesia extending beyond a loss of visual memory, often including a near total loss of pretraumatic episodic memory. Of the 6 cases in which both the severity of retrograde and anterograde amnesia and the temporal gradient of the retrograde amnesia were noted, 4 had a more severe retrograde amnesia with no temporal gradient and 2 had a less severe retrograde amnesia with a temporal gradient.
Abstract:
The main impetus for a mini-symposium on corticothalamic interrelationships was the growing number of recent studies highlighting the role of the thalamus in aspects of cognition beyond sensory processing. The thalamus contributes to a range of basic cognitive behaviors that include learning and memory, inhibitory control, decision-making, and the control of visual orienting responses. Its functions are deeply intertwined with those of the better-studied cortex, although the principles governing its coordination with the cortex remain opaque, particularly in higher-level aspects of cognition. How should the thalamus be viewed in the context of the rest of the brain? Although its role extends well beyond relaying sensory information from the periphery, the main function of many of its subdivisions does appear to be that of a relay station, transmitting neural signals primarily to the cerebral cortex from a number of brain areas. In cognition, its main contribution may thus be to coordinate signals between diverse regions of the telencephalon, including the neocortex, hippocampus, amygdala, and striatum. This central coordination is further subject to considerable extrinsic control, for example, inhibition from the basal ganglia, zona incerta, and pretectal regions, and chemical modulation from ascending neurotransmitter systems. What follows is a brief review of the role of the thalamus in aspects of cognition and behavior, focusing on a summary of the topics covered in a mini-symposium held at the 2014 Society for Neuroscience meeting.
Abstract:
Although it is known that brain regions in one hemisphere may interact very closely with their corresponding contralateral regions (collaboration) or operate relatively independently of them (segregation), the specific brain regions (where) and conditions (how) associated with collaboration or segregation are largely unknown. We investigated these issues using a split-field matching task in which participants matched the meaning of words or the visual features of faces presented to the same (unilateral) or to different (bilateral) visual fields. Matching difficulty was manipulated by varying the semantic similarity of words or the visual similarity of faces. We assessed the white matter using the fractional anisotropy (FA) measure provided by diffusion tensor imaging (DTI), and cross-hemispheric communication in terms of fMRI-based connectivity between homotopic pairs of cortical regions. For both perceptual and semantic matching, bilateral trials became faster than unilateral trials as difficulty increased (the bilateral processing advantage, BPA). The study yielded three novel findings. First, whereas FA in the anterior corpus callosum (genu) correlated with word-matching BPA, FA in the posterior corpus callosum (splenium-occipital) correlated with face-matching BPA. Second, as matching difficulty intensified, cross-hemispheric functional connectivity (CFC) increased in domain-general frontopolar cortex (for both word and face matching) but decreased in domain-specific ventral temporal lobe regions (temporal pole for word matching and fusiform gyrus for face matching). Last, a mediation analysis linking the DTI and fMRI data showed that CFC mediated the effect of callosal FA on the BPA. These findings clarify the mechanisms by which the hemispheres interact to perform complex cognitive tasks.
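To make the linking analysis concrete, below is a minimal sketch of a regression-based mediation test of the kind described in this abstract (does CFC carry the effect of callosal FA on the BPA?). The data are synthetic, and the sample size, variable names, and Sobel-test approach are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal mediation sketch: predictor FA, mediator CFC, outcome BPA.
# All data are simulated; names and parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 40                                    # hypothetical number of participants
fa = rng.normal(0.5, 0.05, n)             # callosal FA values (simulated)
cfc = 2.0 * fa + rng.normal(0, 0.05, n)   # mediator driven by FA (simulated)
bpa = 1.5 * cfc + rng.normal(0, 0.05, n)  # outcome driven by the mediator

def ols_slope(x, y):
    """Slope and standard error for y ~ 1 + x."""
    X = np.column_stack([np.ones_like(x), x])
    beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2 = res[0] / (len(y) - 2)
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1], se

a, se_a = ols_slope(fa, cfc)              # path a: FA -> CFC

# Path b: CFC -> BPA, controlling for FA
X = np.column_stack([np.ones(n), fa, cfc])
beta, res, *_ = np.linalg.lstsq(X, bpa, rcond=None)
sigma2 = res[0] / (n - 3)
b = beta[2]
se_b = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[2, 2])

# Sobel test for the indirect (mediated) effect a*b
sobel_z = (a * b) / np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
print(f"indirect effect a*b = {a*b:.3f}, Sobel z = {sobel_z:.2f}")
```

A significant indirect effect with this logic is what "CFC mediated the effect of callosal FA on the BPA" amounts to statistically; published work typically adds bootstrap confidence intervals on a*b.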
Abstract:
Successful interaction with the world depends on accurate perception of the timing of external events. Neurons at early stages of the primate visual system represent time-varying stimuli with high precision. However, it is unknown whether this temporal fidelity is maintained in the prefrontal cortex, where changes in neuronal activity generally correlate with changes in perception. One reason to suspect that it is not maintained is that humans experience surprisingly large fluctuations in the perception of time. To investigate the neuronal correlates of time perception, we recorded from neurons in the prefrontal cortex and midbrain of monkeys performing a temporal-discrimination task. Visual time intervals were presented at a timescale relevant to natural behavior (<500 ms). At this brief timescale, neuronal adaptation (time-dependent changes in the size of successive responses) occurs. We found that visual activity fluctuated with timing judgments in the prefrontal cortex but not in comparable midbrain areas. Surprisingly, only response strength, not timing, predicted task performance. Intervals perceived as longer were associated with larger visual responses and shorter intervals with smaller responses, matching the dynamics of adaptation. These results suggest that the magnitude of prefrontal activity may be read out to provide temporal information that contributes to judging the passage of time.
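As a toy illustration of the adaptation dynamics invoked here, the sketch below assumes a simple exponential-recovery model in which the response to the second of two stimuli grows with the interval separating them, so larger responses accompany longer intervals. The time constant and functional form are assumptions for illustration, not values fitted to the reported data.

```python
# Toy exponential-recovery model of neuronal adaptation: the response to a
# second stimulus recovers toward full size as the inter-stimulus interval
# grows. Time constant is an assumed, illustrative value.
import numpy as np

TAU = 150.0  # recovery time constant in ms (assumed)

def second_response(dt_ms, r_max=1.0, tau=TAU):
    """Relative response magnitude after an interval dt_ms."""
    return r_max * (1.0 - np.exp(-dt_ms / tau))

for dt in (100, 200, 300, 400, 500):  # intervals on the <500 ms scale studied
    print(f"interval {dt:3d} ms -> relative response {second_response(dt):.2f}")
```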
Abstract:
The image on the retina may move because the eyes move, or because something in the visual scene moves. The brain is not fooled by this ambiguity. Even as we make saccades, we are able to detect whether visual objects remain stable or move. Here we test whether this ability to assess visual stability across saccades is present at the single-neuron level in the frontal eye field (FEF), an area that receives both visual input and information about imminent saccades. Our hypothesis was that neurons in the FEF report whether a visual stimulus remains stable or moves as a saccade is made. Monkeys made saccades in the presence of a visual stimulus outside of the receptive field. In some trials, the stimulus remained stable, but in other trials, it moved during the saccade. In every trial, the stimulus occupied the center of the receptive field after the saccade, thus evoking a reafferent visual response. We found that many FEF neurons signaled, in the strength and timing of their reafferent response, whether the stimulus had remained stable or moved. Reafferent responses were tuned for the amount of stimulus translation, and, in accordance with human psychophysics, tuning was better (more prevalent, stronger, and quicker) for stimuli that moved perpendicular, rather than parallel, to the saccade. Tuning was sometimes present as well for nonspatial transaccadic changes (in color, size, or both). Our results indicate that FEF neurons evaluate visual stability during saccades and may be general-purpose detectors of transaccadic visual change.
Abstract:
Organisms in the wild develop with varying food availability. During periods of nutritional scarcity, development may slow or arrest until conditions improve. The ability to modulate developmental programs in response to poor nutritional conditions requires a means of sensing the changing nutritional environment and limiting tissue growth. The mechanisms by which organisms accomplish this adaptation are not well understood. We sought to study this question by examining the effects of nutrient deprivation on Caenorhabditis elegans development during the late larval stages, L3 and L4, a period of extensive tissue growth and morphogenesis. By removing animals from food at different times, we show here that specific checkpoints exist in the early L3 and early L4 stages that systemically arrest the development of diverse tissues and cellular processes. These checkpoints occur once in each larval stage, after molting and prior to initiation of the subsequent molting cycle. DAF-2, the insulin/insulin-like growth factor receptor, regulates passage through the L3 and L4 checkpoints in response to nutrition. The FOXO transcription factor DAF-16, a major target of insulin-like signaling, functions cell-nonautonomously in the hypodermis (skin) to arrest development upon nutrient removal. The effects of DAF-16 on progression through the L3 and L4 stages are mediated by DAF-9, a cytochrome P450 ortholog involved in the production of C. elegans steroid hormones. Our results identify a novel mode of C. elegans growth in which development progresses from one checkpoint to the next. At each checkpoint, nutritional conditions determine whether animals remain arrested or continue development to the next checkpoint.
Abstract:
Neuronal receptive fields (RFs) provide the foundation for understanding systems-level sensory processing. In early visual areas, investigators have mapped RFs in detail using stochastic stimuli and sophisticated analytical approaches. Much less is known about RFs in prefrontal cortex. Visual stimuli used for mapping RFs in prefrontal cortex tend to cover a small range of spatial and temporal parameters, making it difficult to understand their role in visual processing. To address these shortcomings, we implemented a generalized linear model to measure the RFs of neurons in the macaque frontal eye field (FEF) in response to sparse, full-field stimuli. Our high-resolution, probabilistic approach tracked the evolution of RFs during passive fixation, and we validated our results against conventional measures. We found that FEF neurons exhibited a surprising level of sensitivity to stimuli presented as briefly as 10 ms or to multiple dots presented simultaneously, suggesting that FEF visual responses are more precise than previously appreciated. FEF RF spatial structures were largely maintained over time and between stimulus conditions. Our results demonstrate that the application of probabilistic RF mapping to FEF and similar association areas is an important tool for clarifying the neuronal mechanisms of cognition.
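A minimal sketch of the general idea of GLM-based receptive-field mapping from sparse stimuli follows, using simulated data and an off-the-shelf Poisson regression. The grid size, stimulus statistics, response latency handling, and regularization are illustrative assumptions, not the authors' actual model.

```python
# Toy GLM receptive-field mapping: regress simulated spike counts on sparse
# binary stimulus frames, then read the fitted weights back as a spatial map.
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(1)
grid = 8                                     # 8x8 stimulus grid (illustrative)
n_frames = 5000
# Sparse stimuli: each location lit with low probability per brief frame
stim = (rng.random((n_frames, grid * grid)) < 0.02).astype(float)

# Ground-truth Gaussian RF used only to simulate the neuron
yy, xx = np.mgrid[0:grid, 0:grid]
true_rf = np.exp(-((xx - 5) ** 2 + (yy - 3) ** 2) / 4.0).ravel()

# Poisson spike counts driven by the stimulus projected onto the RF
rate = np.exp(stim @ true_rf - 2.0)
spikes = rng.poisson(rate)

# Fit the GLM; a small ridge penalty regularizes the sparse design
glm = PoissonRegressor(alpha=1e-3, max_iter=300)
glm.fit(stim, spikes)
rf_map = glm.coef_.reshape(grid, grid)       # estimated spatial RF
print("peak of estimated RF:", np.unravel_index(rf_map.argmax(), rf_map.shape))
```

A real analysis would also include a lag structure (a spatiotemporal design matrix) so the RF's evolution in time, not just space, is recovered.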
Abstract:
Periodic visual stimulation and analysis of the resulting steady-state visual evoked potentials were first introduced over 80 years ago as a means to study visual sensation and perception. From the first single-channel recordings of responses to modulated light to the present use of sophisticated digital displays composed of complex visual stimuli and high-density recording arrays, steady-state methods have been applied in a broad range of scientific and applied settings. The purpose of this article is to describe the fundamental stimulation paradigms for steady-state visual evoked potentials and to illustrate these principles through research findings across a range of applications in vision science.
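The core steady-state measurement can be sketched in a few lines: a stimulus flickering at a known "tag" frequency evokes EEG power concentrated at that frequency and its harmonics, which an FFT extracts. The sampling rate, tag frequency, and signal below are assumed for illustration, not taken from the article.

```python
# Minimal SSVEP frequency-tagging sketch on a synthetic EEG signal.
import numpy as np

fs = 500.0                  # sampling rate in Hz (assumed)
tag = 7.5                   # stimulation frequency in Hz (assumed)
t = np.arange(0, 20, 1 / fs)

# Simulated EEG: response at the tag frequency, a 2nd harmonic, plus noise
eeg = (1.0 * np.sin(2 * np.pi * tag * t)
       + 0.4 * np.sin(2 * np.pi * 2 * tag * t)
       + np.random.default_rng(2).normal(0, 1.0, t.size))

spectrum = np.abs(np.fft.rfft(eeg)) / t.size * 2   # amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)

for f in (tag, 2 * tag):
    idx = np.argmin(np.abs(freqs - f))             # nearest frequency bin
    print(f"amplitude at {f:.1f} Hz: {spectrum[idx]:.2f}")
```

Recording for a whole number of stimulus cycles, as here, places the tag frequency exactly on an FFT bin, which is why steady-state responses can be isolated so cleanly from broadband noise.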
Abstract:
Practice can improve performance on visual search tasks; the neural mechanisms underlying such improvements, however, are not clear. Response time typically shortens with practice, but which components of the stimulus-response processing chain facilitate this behavioral change? Improved search performance could result from enhancements in various cognitive processing stages, including (1) sensory processing, (2) attentional allocation, (3) target discrimination, (4) motor-response preparation, and/or (5) response execution. We measured event-related potentials (ERPs) as human participants completed a five-day visual-search protocol in which they reported the orientation of a color popout target within an array of ellipses. We assessed changes in behavioral performance and in ERP components associated with various stages of processing. After practice, response time decreased in all participants (while accuracy remained consistent), and electrophysiological measures revealed modulation of several ERP components. First, amplitudes of the early sensory-evoked N1 component at 150 ms increased bilaterally, indicating enhanced visual sensory processing of the array. Second, the negative-polarity posterior-contralateral component (N2pc, 170-250 ms) was earlier and larger, demonstrating enhanced attentional orienting. Third, the amplitude of the sustained posterior contralateral negativity component (SPCN, 300-400 ms) decreased, indicating facilitated target discrimination. Finally, faster motor-response preparation and execution were observed after practice, as indicated by latency changes in both the stimulus-locked and response-locked lateralized readiness potentials (LRPs). These electrophysiological results delineate the functional plasticity in key mechanisms underlying visual search with high temporal resolution and illustrate how practice influences various cognitive and neural processing stages leading to enhanced behavioral performance.
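As a rough sketch of how a lateralized component such as the N2pc is quantified, the code below averages synthetic epochs, forms the contralateral-minus-ipsilateral difference wave, and measures mean amplitude and peak latency in the 170-250 ms window mentioned above. All data and parameters are simulated assumptions, not the study's recordings.

```python
# Toy ERP component measurement: N2pc-style contralateral-minus-ipsilateral
# difference wave from simulated single-trial epochs.
import numpy as np

fs = 500                                     # sampling rate in Hz (assumed)
t = np.arange(-0.1, 0.5, 1 / fs)             # epoch from -100 to +500 ms
rng = np.random.default_rng(3)
n_trials = 200

# Simulated trials: a negative deflection near 210 ms at electrodes
# contralateral to the target, absent at ipsilateral electrodes
component = -2.0 * np.exp(-((t - 0.21) ** 2) / (2 * 0.02 ** 2))
contra = component + rng.normal(0, 5, (n_trials, t.size))
ipsi = rng.normal(0, 5, (n_trials, t.size))

diff_wave = contra.mean(axis=0) - ipsi.mean(axis=0)   # averaged difference

window = (t >= 0.170) & (t <= 0.250)                  # N2pc window from text
mean_amp = diff_wave[window].mean()
peak_latency = t[window][np.argmin(diff_wave[window])]  # most negative point
print(f"N2pc mean amplitude: {mean_amp:.2f} uV, "
      f"peak latency: {peak_latency * 1000:.0f} ms")
```

The same window-based amplitude and latency logic, applied to the N1, SPCN, and LRP windows given in the abstract, is how practice-related shifts in each processing stage would be quantified.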