19 results for Visual perception
at Duke University
Abstract:
Successful interaction with the world depends on accurate perception of the timing of external events. Neurons at early stages of the primate visual system represent time-varying stimuli with high precision. However, it is unknown whether this temporal fidelity is maintained in the prefrontal cortex, where changes in neuronal activity generally correlate with changes in perception. One reason to suspect that it is not maintained is that humans experience surprisingly large fluctuations in the perception of time. To investigate the neuronal correlates of time perception, we recorded from neurons in the prefrontal cortex and midbrain of monkeys performing a temporal-discrimination task. Visual time intervals were presented at a timescale relevant to natural behavior (<500 ms). At this brief timescale, neuronal adaptation--time-dependent changes in the size of successive responses--occurs. We found that visual activity fluctuated with timing judgments in the prefrontal cortex but not in comparable midbrain areas. Surprisingly, only response strength, not timing, predicted task performance. Intervals perceived as longer were associated with larger visual responses and shorter intervals with smaller responses, matching the dynamics of adaptation. These results suggest that the magnitude of prefrontal activity may be read out to provide temporal information that contributes to judging the passage of time.
Abstract:
Remembering past events, or episodic retrieval, consists of several components. There is evidence that mental imagery plays an important role in retrieval and that the brain regions supporting imagery overlap with those supporting retrieval. An open issue is to what extent these regions support successful vs. unsuccessful imagery and retrieval processes. Previous studies that examined regional overlap between imagery and retrieval used uncontrolled memory conditions, such as autobiographical memory tasks, that cannot distinguish between successful and unsuccessful retrieval. A second issue is that fMRI studies that compared imagery and retrieval have used modality-nonspecific cues that are likely to activate auditory and visual processing regions simultaneously. Thus, it is not clear to what extent identified brain regions support modality-specific or modality-independent imagery and retrieval processes. In the current fMRI study, we addressed this issue by comparing imagery to retrieval under controlled memory conditions in both auditory and visual modalities. We also obtained subjective measures of imagery quality, allowing us to dissociate regions contributing to successful vs. unsuccessful imagery. Results indicated that auditory and visual regions contribute to both imagery and retrieval in a modality-specific fashion. In addition, we identified four sets of brain regions with distinct patterns of activity that contributed to imagery and retrieval in a modality-independent fashion. The first set of regions, including hippocampus, posterior cingulate cortex, medial prefrontal cortex and angular gyrus, showed a pattern common to imagery/retrieval and consistent with successful performance regardless of task. The second set of regions, including dorsal precuneus, anterior cingulate and dorsolateral prefrontal cortex, also showed a pattern common to imagery and retrieval, but consistent with unsuccessful performance during both tasks. Third, left ventrolateral prefrontal cortex showed an interaction between task and performance and was associated with successful imagery but unsuccessful retrieval. Finally, the fourth set of regions, including ventral precuneus, midcingulate cortex and supramarginal gyrus, showed the opposite interaction, supporting unsuccessful imagery but successful retrieval performance. Results are discussed in relation to reconstructive, attentional, semantic memory, and working memory processes. This is the first study to separate the neural correlates of successful and unsuccessful performance for both imagery and retrieval and for both auditory and visual modalities.
Abstract:
When recalling autobiographical memories, individuals often experience visual images associated with the event. These images can be constructed from two different perspectives: first person, in which the event is visualized from the viewpoint experienced at encoding, or third person, in which the event is visualized from an external vantage point. Using a novel technique to measure visual perspective, we examined where the external vantage point is situated in third-person images. Individuals in two studies were asked to recall either 10 or 15 events from their lives and describe the perspectives they experienced. Wide variation in spatial locations was observed within third-person perspectives, with the location of these perspectives relating to the event being recalled. Results suggest remembering from an external viewpoint may be more common than previous studies have demonstrated.
Abstract:
Amnesia typically results from trauma to the medial temporal regions that coordinate activation among the disparate areas of cortex that represent the information that makes up autobiographical memories. We proposed that amnesia should also result from damage to these cortical regions, particularly regions that subserve long-term visual memory [Rubin, D. C., & Greenberg, D. L. (1998). Visual memory-deficit amnesia: A distinct amnesic presentation and etiology. Proceedings of the National Academy of Sciences of the USA, 95, 5413-5416]. We previously found 11 such cases in the literature, and all 11 had amnesia. We now present a detailed investigation of one of these patients. M.S. suffers from long-term visual memory loss along with some semantic deficits; he also manifests a severe retrograde amnesia and moderate anterograde amnesia. The presentation of his amnesia differs from that of the typical medial-temporal or lateral-temporal amnesic; we suggest that his visual deficits may be contributing to his autobiographical amnesia.
Abstract:
We describe a form of amnesia, which we have called visual memory-deficit amnesia, that is caused by damage to areas of the visual system that store visual information. Because it is caused by a deficit in access to stored visual material and not by an impaired ability to encode or retrieve new material, it has the otherwise infrequent properties of a more severe retrograde than anterograde amnesia with no temporal gradient in the retrograde amnesia. Of the 11 cases of long-term visual memory loss found in the literature, all had amnesia extending beyond a loss of visual memory, often including a near total loss of pretraumatic episodic memory. Of the 6 cases in which both the severity of retrograde and anterograde amnesia and the temporal gradient of the retrograde amnesia were noted, 4 had a more severe retrograde amnesia with no temporal gradient and 2 had a less severe retrograde amnesia with a temporal gradient.
Abstract:
Practice can improve performance on visual search tasks; the neural mechanisms underlying such improvements, however, are not clear. Response time typically shortens with practice, but which components of the stimulus-response processing chain facilitate this behavioral change? Improved search performance could result from enhancements in various cognitive processing stages, including (1) sensory processing, (2) attentional allocation, (3) target discrimination, (4) motor-response preparation, and/or (5) response execution. We measured event-related potentials (ERPs) as human participants completed a five-day visual-search protocol in which they reported the orientation of a color popout target within an array of ellipses. We assessed changes in behavioral performance and in ERP components associated with various stages of processing. After practice, response time decreased in all participants (while accuracy remained consistent), and electrophysiological measures revealed modulation of several ERP components. First, amplitudes of the early sensory-evoked N1 component at 150 ms increased bilaterally, indicating enhanced visual sensory processing of the array. Second, the negative-polarity posterior-contralateral component (N2pc, 170-250 ms) was earlier and larger, demonstrating enhanced attentional orienting. Third, the amplitude of the sustained posterior contralateral negativity component (SPCN, 300-400 ms) decreased, indicating facilitated target discrimination. Finally, faster motor-response preparation and execution were observed after practice, as indicated by latency changes in both the stimulus-locked and response-locked lateralized readiness potentials (LRPs). These electrophysiological results delineate the functional plasticity in key mechanisms underlying visual search with high temporal resolution and illustrate how practice influences various cognitive and neural processing stages leading to enhanced behavioral performance.
Abstract:
The ability to quickly detect and respond to visual stimuli in the environment is critical to many human activities. While such perceptual and visual-motor skills are important in a myriad of contexts, considerable variability exists between individuals in these abilities. To better understand the sources of this variability, we assessed perceptual and visual-motor skills in a large sample of 230 healthy individuals via the Nike SPARQ Sensory Station, and compared variability in their behavioral performance to demographic, state, sleep and consumption characteristics. Dimension reduction and regression analyses indicated three underlying factors: Visual-Motor Control, Visual Sensitivity, and Eye Quickness, which accounted for roughly half of the overall population variance in performance on this battery. Inter-individual variability in Visual-Motor Control was correlated with gender and circadian patterns, such that performance on this factor was better for males and for those who had been awake for a longer period of time before assessment. The current findings indicate that abilities involving coordinated hand movements in response to stimuli are subject to greater individual variability, while visual sensitivity and oculomotor control are largely stable across individuals.
Abstract:
Each of our movements activates our own sensory receptors, and therefore keeping track of self-movement is a necessary part of analysing sensory input. One way in which the brain keeps track of self-movement is by monitoring an internal copy, or corollary discharge, of motor commands. This concept could explain why we perceive a stable visual world despite our frequent quick, or saccadic, eye movements: corollary discharge about each saccade would permit the visual system to ignore saccade-induced visual changes. The critical missing link has been the connection between corollary discharge and visual processing. Here we show that such a link is formed by a corollary discharge from the thalamus that targets the frontal cortex. In the thalamus, neurons in the mediodorsal nucleus relay a corollary discharge of saccades from the midbrain superior colliculus to the cortical frontal eye field. In the frontal eye field, neurons use corollary discharge to shift their visual receptive fields spatially before saccades. We tested the hypothesis that these two components, a pathway for corollary discharge and neurons with shifting receptive fields, form a circuit in which the corollary discharge drives the shift. First we showed that the known spatial and temporal properties of the corollary discharge predict the dynamic changes in spatial visual processing of cortical neurons when saccades are made. Then we moved from this correlation to causation by isolating single cortical neurons and showing that their spatial visual processing is impaired when corollary discharge from the thalamus is interrupted. Thus the visual processing of frontal neurons is spatiotemporally matched with, and functionally dependent on, corollary discharge input from the thalamus. These experiments establish the first link between corollary discharge and visual processing, delineate a brain circuit that is well suited for mediating visual stability, and provide a framework for studying corollary discharge in other sensory systems.
Abstract:
Mate-choice copying occurs when animals rely on the mating choices of others to inform their own mating decisions. The proximate mechanisms underlying mate-choice copying remain unknown. To address this question, we tracked the gaze of men and women as they viewed a series of photographs in which a potential mate was pictured beside an opposite-sex partner; the participants then indicated their willingness to engage in a long-term relationship with each potential mate. We found that both men and women expressed more interest in engaging in a relationship with a potential mate if that mate was paired with an attractive partner. Men's and women's attention to partners varied with partner attractiveness, and this gaze attraction influenced their subsequent mate choices. These results highlight the prevalence of non-independent mate choice in humans and implicate social attention and reward circuitry in these decisions.
Abstract:
Standing and walking generate information about friction underfoot. Five experiments examined whether walkers use such perceptual information for prospective control of locomotion. In particular, do walkers integrate information about friction underfoot with visual cues for sloping ground ahead to make adaptive locomotor decisions? Participants stood on low-, medium-, and high-friction surfaces on a flat platform and made perceptual judgments about possibilities for locomotion over upcoming slopes. Perceptual judgments did not match locomotor abilities: Participants tended to overestimate their abilities on low-friction slopes and underestimate them on high-friction slopes (Experiments 1-4). Accuracy improved only for judgments made while participants were in direct contact with the slope (Experiment 5), highlighting the difficulty of incorporating information about friction underfoot into a plan for future actions.
Abstract:
In a series of four studies, we investigated the visual cues that walkers use to predict slippery ground surfaces and tested whether visual information is reliable for specifying low-friction conditions. In Study 1, 91% of participants surveyed responded that they would use shine to identify upcoming slippery ground. Studies 2-4 confirmed participants' reliance on shine to predict slip. Participants viewed ground surfaces varying in gloss, paint color, and viewing distance under indoor and outdoor lighting conditions. Shine and slip ratings and functional walking judgments were related to surface gloss level and to surface coefficient of friction (COF). However, judgments were strongly affected by surface color, viewing distance, and lighting conditions--extraneous factors that do not change the surface COF. Results suggest that, although walkers rely on shine to predict slippery ground, shine is not a reliable visual cue for friction. Poor visual information for friction may underlie the high prevalence of friction-related slips and falls.
Abstract:
Failing to find a tumor in an x-ray scan or a gun in an airport baggage screening can have dire consequences, making it fundamentally important to elucidate the mechanisms that hinder performance in such visual searches. Recent laboratory work has indicated that low target prevalence can lead to disturbingly high miss rates in visual search. Here, however, we demonstrate that misses in low-prevalence searches can be readily abated. When targets are rarely present, observers adapt by responding more quickly, and miss rates are high. Critically, though, these misses are often due to response-execution errors, not perceptual or identification errors: Observers know a target was present, but just respond too quickly. When provided an opportunity to correct their last response, observers can catch their mistakes. Thus, low target prevalence may not be a generalizable cause of high miss rates in visual search.
Abstract:
We investigated the effects of visual input at encoding and retrieval on the phenomenology of memory. In Experiment 1, participants took part in events with and without wearing blindfolds, and later were shown a video of the events. Blindfolding, as well as later viewing of the video, tended to decrease recollection. In Experiment 2, participants were played videos, with and without the visual component, of events involving other people. Events listened to without visual input were recalled with less recollection; adding the visual component later increased recollection. In Experiment 3, participants were provided with progressively more information about events that they had experienced, either in the form of photographs that they had taken of the events or narrative descriptions of those photographs. In comparison with manipulations at encoding, the addition of more visual or narrative cues at recall had similar but smaller effects on recollection.
Abstract:
OBJECTIVE: The authors sought to increase understanding of the brain mechanisms involved in cigarette addiction by identifying neural substrates modulated by visual smoking cues in nicotine-deprived smokers.
METHOD: Event-related functional magnetic resonance imaging (fMRI) was used to detect brain activation after exposure to smoking-related images in a group of nicotine-deprived smokers and a nonsmoking comparison group. Subjects viewed a pseudo-random sequence of smoking images, neutral nonsmoking images, and rare targets (photographs of animals). Subjects pressed a button whenever a rare target appeared.
RESULTS: In smokers, the fMRI signal was greater after exposure to smoking-related images than after exposure to neutral images in mesolimbic dopamine reward circuits known to be activated by addictive drugs (right posterior amygdala, posterior hippocampus, ventral tegmental area, and medial thalamus) as well as in areas related to visuospatial attention (bilateral prefrontal and parietal cortex and right fusiform gyrus). In nonsmokers, no significant differences in fMRI signal following exposure to smoking-related and neutral images were detected. In most regions studied, both subject groups showed greater activation following presentation of rare target images than after exposure to neutral images.
CONCLUSIONS: In nicotine-deprived smokers, both reward and attention circuits were activated by exposure to smoking-related images. Smoking cues are processed like rare targets in that they activate attentional regions. These cues are also processed like addictive drugs in that they activate mesolimbic reward regions.
Abstract:
The naming impairments in Alzheimer's disease (AD) have been attributed to a variety of cognitive processing deficits, including impairments in semantic memory, visual perception, and lexical access. To further understand the underlying biological basis of the naming failures in AD, the present investigation examined the relationship of various classes of naming errors to regional brain measures of cerebral glucose metabolism as measured with 18F-fluoro-2-deoxyglucose (FDG) and positron emission tomography (PET). Errors committed on a visual naming test were categorized according to a cognitive processing schema and then examined in relation to metabolism within specific brain regions. The results revealed an association of semantic errors with glucose metabolism in the frontal and temporal regions. Language access errors, such as circumlocutions, and word blocking nonresponses were associated with decreased metabolism in areas within the left hemisphere. Visuoperceptive errors were related to right inferior parietal metabolic function. The findings suggest that specific brain areas mediate the perceptual, semantic, and lexical processing demands of visual naming and that visual naming problems in dementia are related to dysfunction in specific neural circuits.