11 results for Motion perception (Vision)
at National Center for Biotechnology Information - NCBI
Abstract:
The primate visual system offers unprecedented opportunities for investigating the neural basis of cognition. Even the simplest visual discrimination task requires processing of sensory signals, formation of a decision, and orchestration of a motor response. With our extensive knowledge of the primate visual and oculomotor systems as a base, it is now possible to investigate the neural basis of simple visual decisions that link sensation to action. Here we describe an initial study of neural responses in the lateral intraparietal area (LIP) of the cerebral cortex while alert monkeys discriminated the direction of motion in a visual display. A subset of LIP neurons carried high-level signals that may comprise a neural correlate of the decision process in our task. These signals are neither sensory nor motor in the strictest sense; rather they appear to reflect integration of sensory signals toward a decision appropriate for guiding movement. If this ultimately proves to be the case, several fascinating issues in cognitive neuroscience will be brought under rigorous physiological scrutiny.
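The "integration of sensory signals toward a decision" described above is the intuition behind bounded-accumulation (drift-diffusion) models of choice. Below is a minimal Python sketch of that class of model; every parameter value is illustrative, not fitted to LIP responses.

```python
import random

def accumulate_to_bound(drift=0.1, noise_sd=1.0, bound=20.0,
                        max_steps=10_000, seed=0):
    """Integrate noisy momentary evidence until it hits +bound or -bound.

    The sign of the crossed bound gives the choice; the step count gives
    the decision time. All parameter values are illustrative, not taken
    from the study.
    """
    rng = random.Random(seed)
    x = 0.0
    for t in range(1, max_steps + 1):
        x += drift + rng.gauss(0.0, noise_sd)  # momentary evidence + noise
        if abs(x) >= bound:
            return ("preferred" if x > 0 else "null"), t
    return "undecided", max_steps

# With positive drift (evidence favoring the preferred direction),
# most simulated trials end at the positive bound:
choices = [accumulate_to_bound(seed=s)[0] for s in range(500)]
print("fraction preferred:", choices.count("preferred") / len(choices))
```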
Abstract:
The primate visual motion system performs numerous functions essential for survival in a dynamic visual world. Prominent among these functions is the ability to recover and represent the trajectories of objects in a form that facilitates behavioral responses to those movements. The first step toward this goal, which consists of detecting the displacement of retinal image features, has been studied for many years in both psychophysical and neurobiological experiments. Evidence indicates that achievement of this step is computationally straightforward and occurs at the earliest cortical stage. The second step involves the selective integration of retinal motion signals according to the object of origin. Realization of this step is computationally demanding, as the solution is formally underconstrained. It must, by definition, rely upon retinal cues that are indicative of the spatial relationships within and between objects in the visual scene. Psychophysical experiments have documented this dependence and suggested mechanisms by which it may be achieved. Neurophysiological experiments have provided evidence for a neural substrate that may underlie this selective motion signal integration. Together they paint a coherent portrait of the means by which retinal image motion gives rise to our perceptual experience of moving objects.
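The sense in which the integration step is "formally underconstrained" is the aperture problem: a local detector sees only the velocity component along its preferred axis, one equation in two unknowns. The standard intersection-of-constraints construction (a generic illustration, not the specific mechanism the abstract tests) recovers the 2-D velocity from two such 1-D measurements:

```python
import math

def intersection_of_constraints(n1, s1, n2, s2):
    """Recover a 2-D velocity from two aperture-limited measurements.

    Each detector reports only s_i, the image speed along its unit
    normal n_i, i.e. v . n_i = s_i. One measurement leaves the velocity
    ambiguous; two non-parallel constraints pin it down exactly.
    """
    a, b = n1
    c, d = n2
    det = a * d - b * c
    if abs(det) < 1e-12:
        raise ValueError("parallel constraints: velocity remains ambiguous")
    vx = (s1 * d - s2 * b) / det
    vy = (a * s2 - c * s1) / det
    return vx, vy

# A pattern truly moving at (3, 1), probed through two oriented apertures:
v_true = (3.0, 1.0)
n1 = (1.0, 0.0)                                  # sees horizontal component only
n2 = (math.cos(math.pi / 4), math.sin(math.pi / 4))
s1 = v_true[0] * n1[0] + v_true[1] * n1[1]
s2 = v_true[0] * n2[0] + v_true[1] * n2[1]
print(intersection_of_constraints(n1, s1, n2, s2))  # -> (3.0, 1.0)
```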
Abstract:
The human visual system is able to effortlessly integrate local features to form our rich perception of patterns, despite the fact that visual information is discretely sampled by the retina and cortex. By using a novel perturbation technique, we show that the mechanisms by which features are integrated into coherent percepts are scale-invariant and nonlinear (phase and contrast polarity independent). They appear to operate by assigning position labels or “place tags” to each feature. Specifically, in the first series of experiments, we show that the positional tolerance of these place tags in foveal and peripheral vision is about half the separation of the features, suggesting that the neural mechanisms that bind features into forms are quite robust to topographical jitter. In the second series of experiments, we asked how many stimulus samples are required for pattern identification by human and ideal observers. In human foveal vision, only about half the features are needed for reliable pattern interpolation. In this regard, human vision is quite efficient (ratio of ideal to real ≈ 0.75). Peripheral vision, on the other hand, is rather inefficient, requiring more features, suggesting that the stimulus may be relatively underrepresented at the stage of feature integration.
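The efficiency figure quoted above is a ratio of sample counts at matched performance. A trivial worked example with hypothetical numbers (not the paper's data):

```python
def sampling_efficiency(n_ideal, n_human):
    """Efficiency as the ratio of features an ideal observer needs to the
    number a human needs for the same identification accuracy."""
    return n_ideal / n_human

# Hypothetical counts: an ideal observer identifies the pattern from 6
# samples; a foveal observer needs 8, a peripheral observer needs 12.
print(sampling_efficiency(6, 8))   # 0.75 -- efficient, as in the fovea
print(sampling_efficiency(6, 12))  # 0.5  -- less efficient, as in the periphery
```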
Abstract:
We have studied patient PB, who, after an electric shock that led to vascular insufficiency, became virtually blind, although he retained a capacity to see colors consciously. For our psychophysical studies, we used a simplified version of the Land experiments [Land, E. (1974) Proc. R. Inst. G. B. 47, 23–58] to learn whether color constancy mechanisms are intact in him, which amounts to learning whether he can assign a constant color to a surface in spite of changes in the precise wavelength composition of the light reflected from that surface. We supplemented our psychophysical studies with imaging ones, using functional magnetic resonance, to learn something about the location of areas that are active in his brain when he perceives colors. The psychophysical results suggested that color constancy mechanisms are severely defective in PB and that his color vision is wavelength-based. The imaging results showed that, when he viewed and recognized colors, significant increases in activity were restricted mainly to V1-V2. We conclude that a partly defective color system operating on its own in a severely damaged brain is able to mediate a conscious experience of color in the virtually total absence of other visual abilities.
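Color constancy of the kind probed by the Land experiments can be caricatured by a gray-world / von Kries normalization: raw channel responses track the illuminant (wavelength-based vision, as reported for PB), while responses scaled by the scene average stay constant across lights. A generic sketch, not the mechanism the study tested:

```python
def sensor_response(reflectance, illuminant):
    """Per-channel response: surface reflectance times illuminant power."""
    return [r * e for r, e in zip(reflectance, illuminant)]

def von_kries_correct(responses, scene_mean):
    """Gray-world / von Kries scaling: divide each channel by the scene
    average response. A stand-in for a constancy computation, not PB's."""
    return [resp / m for resp, m in zip(responses, scene_mean)]

surface = [0.8, 0.4, 0.2]  # a reddish surface (per-channel reflectance)
for illum in ([1.0, 1.0, 1.0], [0.5, 1.0, 1.5]):  # white light vs. bluish light
    raw = sensor_response(surface, illum)
    # Scene average response ~ mean reflectance (0.5) times the illuminant:
    mean = [0.5 * e for e in illum]
    print("raw:", [round(x, 2) for x in raw],
          "corrected:", [round(x, 2) for x in von_kries_correct(raw, mean)])
# Raw responses change with the light; corrected ones do not, which is
# exactly the constant color assignment that PB's psychophysics lacked.
```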
Abstract:
In motion standstill, a quickly moving object appears to stand still, and its details are clearly visible. It is proposed that motion standstill can occur when the spatiotemporal resolution of the shape and color systems exceeds that of the motion systems. For moving red-green gratings, the first- and second-order motion systems fail when the grating is isoluminant. The third-order motion system fails when the green/red saturation ratio produces isosalience (equal distinctiveness of red and green). When a variety of high-contrast red-green gratings, with different spatial frequencies and speeds, were made isoluminant and isosalient, the perception of motion standstill was so complete that motion direction judgments were at chance levels. Speed ratings also indicated that, within a narrow range of luminance contrasts and green/red saturation ratios, moving stimuli were perceived as absolutely motionless. The results provide further evidence that isoluminant color motion is perceived only by the third-order motion system, and they have profound implications for the nature of shape and color perception.
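Isoluminance means the red and green bars of the grating have equal luminance, leaving no luminance modulation for the first-order motion system to detect. A toy calculation with hypothetical luminous efficiencies for the two primaries:

```python
def isoluminant_green_gain(red_level, lum_eff_red, lum_eff_green):
    """Green level at which a red bar and a green bar match in luminance,
    nulling the grating's luminance modulation. Efficiency values below
    are illustrative, not measurements from the study."""
    return red_level * lum_eff_red / lum_eff_green

red = 1.0
v_r, v_g = 0.35, 0.60  # hypothetical luminous efficiencies of the primaries
green = isoluminant_green_gain(red, v_r, v_g)
print("green gain:", round(green, 3))
print("luminances:", red * v_r, "vs", round(green * v_g, 3))  # equal -> isoluminant
```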
Abstract:
Deciphering the information that eyes, ears, and other sensory organs transmit to the brain is important for understanding the neural basis of behavior. Recordings from single sensory nerve cells have yielded useful insights, but single neurons generally do not mediate behavior; networks of neurons do. Monitoring the activity of all cells in a neural network of a behaving animal, however, is not yet possible. Taking an alternative approach, we used a realistic cell-based model to compute the ensemble of neural activity generated by one sensory organ, the lateral eye of the horseshoe crab, Limulus polyphemus. We studied how the neural network of this eye encodes natural scenes by presenting to the model movies recorded with a video camera mounted above the eye of an animal that was exploring its underwater habitat. Model predictions were confirmed by simultaneously recording responses from single optic nerve fibers of the same animal. We report here that the eye transmits to the brain robust “neural images” of objects having the size, contrast, and motion of potential mates. The neural code for such objects is not found in ambiguous messages of individual optic nerve fibers but rather in patterns of coherent activity that extend over small ensembles of nerve fibers and are bound together by stimulus motion. Integrative properties of neurons in the first synaptic layer of the brain appear well suited to detecting the patterns of coherent activity. Neural coding by this relatively simple eye helps explain how horseshoe crabs find mates and may lead to a better understanding of how more complex sensory organs process information.
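The Limulus lateral eye is the classic substrate of Hartline-Ratliff lateral inhibition, the kind of interaction a cell-based model of this retina builds on. A minimal steady-state version (illustrative weights, much simpler than the published model) shows the network's characteristic edge enhancement:

```python
def lateral_inhibition(excitation, weight=0.15, reach=3, iters=50):
    """Steady-state Hartline-Ratliff-style network: each unit's response is
    its excitation minus weighted inhibition from nearby responses, found
    by fixed-point iteration. Weights and reach are illustrative."""
    n = len(excitation)
    r = list(excitation)
    for _ in range(iters):
        r = [max(0.0, excitation[i] - weight * sum(
                 r[j] for j in range(max(0, i - reach), min(n, i + reach + 1))
                 if j != i))
             for i in range(n)]
    return r

# A dark-to-bright step in the image: responses dip on the dark side of
# the edge and overshoot on the bright side (Mach-band-like enhancement).
scene = [1.0] * 10 + [3.0] * 10
print([round(x, 2) for x in lateral_inhibition(scene)])
```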
Abstract:
It has been known for more than 40 years that images fade from perception when they are kept at the same position on the retina by abrogating eye movements. Although aspects of this phenomenon were described earlier, the use of close-fitting contact lenses in the 1950s made possible a series of detailed observations on eye movements and visual continuity. In the intervening decades, many investigators have studied the role of image motion on visual perception. Although several controversies remain, it is clear that images deteriorate and in some cases disappear following stabilization; eye movements are, therefore, essential to sustained exoptic vision. The time course of image degradation has generally been reported to be a few seconds to a minute or more, depending upon the conditions. Here we show that images of entoptic vascular shadows can disappear in less than 80 msec. The rapid vanishing of these images implies an active mechanism of image erasure and creation as the basis of normal visual processing.
Abstract:
Binocular disparity, the differential angular separation between pairs of image points in the two eyes, is the well-recognized basis for binocular distance perception. Without denying disparity's role in perceiving depth, we describe two perceptual phenomena, which indicate that a wider view of binocular vision is warranted. First, we show that disparity can play a critical role in two-dimensional perception by determining whether separate image fragments should be grouped as part of a single surface or segregated as parts of separate surfaces. Second, we show that stereoscopic vision is not limited to the registration and interpretation of binocular disparity but that it relies on half-occluded points, visible to one eye and not the other, to determine the layout and transparency of surfaces. Because these half-visible points are coded by neurons carrying eye-of-origin information, we suggest that the perception of these surface properties depends on neural activity available at visual cortical area V1.
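Binocular disparity as defined here, the differential angular separation of image points in the two eyes, follows directly from viewing geometry. A small sketch assuming a typical 6.4 cm interocular distance (an illustrative value):

```python
import math

def subtended_angle(point, iod=0.064):
    """Angle subtended at a point (x, z), in meters, by the two eyes placed
    at +/- iod/2 on the x axis. Nearer points subtend larger angles; the
    difference between two points' values is their relative disparity."""
    x, z = point
    left = math.atan2(x + iod / 2, z)
    right = math.atan2(x - iod / 2, z)
    return left - right

near, far = (0.0, 0.5), (0.0, 2.0)  # points 0.5 m and 2 m straight ahead
rel = subtended_angle(near) - subtended_angle(far)
print("relative disparity (deg):", round(math.degrees(rel), 3))
```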
Abstract:
When the visual (striate) cortex (V1) is damaged in human subjects, cortical blindness results in the contralateral visual half field. Nevertheless, under some experimental conditions, subjects demonstrate a capacity to make visual discriminations in the blind hemifield (blindsight), even though they have no phenomenal experience of seeing. This capacity must, therefore, be mediated by parallel projections to other brain areas. It is also the case that some subjects have conscious residual vision in response to fast moving stimuli or sudden changes in light flux level presented to the blind hemifield, characterized by a contentless kind of awareness, a feeling of something happening, albeit not normal seeing. The relationship between these two modes of discrimination has never been studied systematically. We examine, in the same experiment, both the unconscious discrimination and the conscious visual awareness of moving stimuli in a subject with unilateral damage to V1. The results demonstrate an excellent capacity to discriminate motion direction and orientation in the absence of acknowledged perceptual awareness. Discrimination of the stimulus parameters for acknowledged awareness apparently follows a different functional relationship with respect to stimulus speed, displacement, and contrast. As performance in the two modes can be quantitatively matched, the findings suggest that it should be possible to image brain activity and to identify the active areas involved in the same subject performing the same discrimination task, both with and without conscious awareness, and hence to determine whether any structures contribute uniquely to conscious perception.