123 results for Slant aftereffect
Abstract:
A model of laminar visual cortical dynamics proposes how 3D boundary and surface representations of slanted and curved 3D objects and 2D images arise. The 3D boundary representations emerge from interactions of non-classical horizontal receptive fields with intracortical and intercortical feedback circuits. Such non-classical interactions contextually disambiguate classical receptive field responses to ambiguous visual cues using cells that are sensitive to angles and disparity gradients within cortical areas V1 and V2. These cells are all variants of bipole grouping cells. Model simulations show how horizontal connections can develop selectivity to angles, how slanted surfaces can activate 3D boundary representations that are sensitive to angles and disparity gradients, how 3D filling-in occurs across slanted surfaces, how a 2D Necker cube image can be represented in 3D, and how bistable Necker cube percepts occur. The model also explains data about slant aftereffects and 3D neon color spreading. It shows how habituative transmitters that help to control development also help to trigger bistable 3D percepts and slant aftereffects, and how attention can influence which of these percepts is perceived by propagating along some object boundaries.
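The habituative-transmitter mechanism invoked for the bistable Necker-cube reversals can be illustrated with a toy simulation, a minimal sketch rather than the paper's laminar circuit: two mutually inhibitory grouping units receive the same ambiguous input through depressing transmitter gates, and the slow depletion and recovery of those gates makes dominance alternate. All unit names and parameter values below are illustrative assumptions, not taken from the model.

```python
import numpy as np

# Toy bistability sketch (not the paper's laminar model): two mutually
# inhibitory "grouping" units x[0], x[1] stand in for the two Necker-cube
# interpretations; each receives the ambiguous input I through a habituative
# transmitter gate z that depletes with use and recovers slowly.
dt, T = 0.5, 4000.0
I = 1.0                      # ambiguous input drives both units equally
beta = 2.0                   # strength of mutual inhibition
tau_x = 5.0                  # fast time constant of unit activity
eps, gamma = 0.003, 0.02     # slow transmitter recovery / depletion rates

x = np.array([0.02, 0.0])    # tiny asymmetry breaks the initial tie
z = np.array([1.0, 1.0])     # transmitters start fully accumulated

dominant = []
for _ in range(int(T / dt)):
    inhib = beta * x[::-1]                            # cross-inhibition
    drive = np.maximum(I * z - inhib, 0.0)            # gated, rectified drive
    x = x + dt * (-x + drive) / tau_x
    z = z + dt * (eps * (1.0 - z) - gamma * x * z)    # habituative gate
    dominant.append(int(x[1] > x[0]))

# Each change of the dominant unit corresponds to one perceptual reversal.
print("bistable reversals:", int(np.sum(np.abs(np.diff(dominant)))))
```

With these illustrative settings the winning unit slowly depletes its own transmitter until the suppressed alternative breaks through, which is the sense in which habituative transmitters can trigger bistable percepts and, after prolonged viewing, aftereffects.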
Abstract:
Using a speed-matching task, we measured the speed tuning of the dynamic motion aftereffect (MAE). The results of our first experiment, in which we co-varied dot speed in the adaptation and test stimuli, revealed a speed tuning function. We sought to tease apart what contribution, if any, the test stimulus makes towards the observed speed tuning. This was examined by independently manipulating dot speed in the adaptation and test stimuli, and measuring the effect this had on the perceived speed of the dynamic MAE. The results revealed that the speed tuning of the dynamic MAE is determined, not by the speed of the adaptation stimulus, but by the local motion characteristics of the dynamic test stimulus. The role of the test stimulus in determining the perceived speed of the dynamic MAE was confirmed by showing that, if one uses a test stimulus containing two sources of local speed information, observers report seeing a transparent MAE; this is despite the fact that adaptation is induced using a single-speed stimulus. Thus while the adaptation stimulus necessarily determines perceived direction of the dynamic MAE, its perceived speed is determined by the test stimulus. This dissociation of speed and direction supports the notion that the processing of these two visual attributes may be partially independent.
Abstract:
The processing of motion information by the visual system can be decomposed into two general stages: point-by-point local motion extraction, followed by global motion extraction through the pooling of the local motion signals. The direction aftereffect (DAE) is a well-known phenomenon in which prior adaptation to a unidirectionally moving pattern results in an exaggerated perceived direction difference between the adapted direction and a subsequently viewed stimulus moving in a different direction. The experiments in this paper sought to identify where the adaptation underlying the DAE occurs within the motion processing hierarchy. We found that the DAE exhibits interocular transfer, thus demonstrating that the underlying adapted neural mechanisms are binocularly driven and must, therefore, reside in the visual cortex. The remaining experiments measured the speed tuning of the DAE, and used the derived function to test a number of local and global models of the phenomenon. Our data provide compelling evidence that the DAE is driven by the adaptation of motion-sensitive neurons at the local-processing stage of motion encoding. This is in contrast to earlier research showing that direction repulsion, which can be viewed as a simultaneous-presentation counterpart to the DAE, is a global motion process. This leads us to conclude that the DAE and direction repulsion reflect interactions between motion-sensitive neural mechanisms at different levels of the motion-processing hierarchy.
Abstract:
Neural adaptation and inhibition are pervasive characteristics of the primate brain, and are probably understood better within the context of visual processing than any other sensory modality. These processes are thought to underlie illusions in which one motion affects the perceived direction of another, such as the direction aftereffect (DAE) and direction repulsion. The DAE describes how, following prolonged viewing of motion in one direction, the direction of a subsequently viewed test pattern is misperceived. In the case of direction repulsion, the direction difference between two transparently moving surfaces is over-estimated. Explanations of the DAE appeal to neural adaptation, whilst direction repulsion is accounted for through lateral inhibition. Here we report on a new illusion, the binary DAE, in which superimposed slow and fast dots moving in the same direction are perceived to move in different directions following adaptation to a mixed-speed stimulus. This new phenomenon is essentially a combination of the DAE and direction repulsion. Interestingly, the magnitude of the binary DAE is greater than would be expected simply from a linear combination of the DAE and direction repulsion, suggesting that the mechanisms underlying these two phenomena interact in a non-linear fashion.
Abstract:
It is shown that a side-fed bifilar helix antenna with a single feed can generate a slant 45° linearly polarized omnidirectional toroidal pattern. The antenna has a low profile and does not require a ground plane. The bifilar helix antenna provides slant 45° polarization over a solid angle of almost 4π steradians, compared with a crossed dipole, which generates a tilted 45° linearly polarized pattern only over a solid angle of 1.14π steradians. The computed results are validated by experimental data.
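As a quick check of the coverage figures quoted above (my arithmetic, not the paper's), each solid angle can be expressed as a fraction of the full 4π-steradian sphere:

```python
import math

# Fraction of the full sphere (4*pi sr) covered by each radiation pattern,
# using the solid angles quoted in the abstract.
full_sphere = 4.0 * math.pi
crossed_dipole = 1.14 * math.pi
print(f"crossed dipole: {crossed_dipole / full_sphere:.1%} of the sphere")  # about 28.5%
print("bifilar helix: close to 100% (almost the full 4*pi steradians)")
```

So the crossed dipole covers only about 29% of the sphere with tilted 45° polarization, while the side-fed bifilar helix covers nearly all of it.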
Abstract:
Simple pictures under everyday viewing conditions evoke impressions of surfaces oriented in depth. These impressions have been studied by measuring the slants of perceived surfaces, with probes (rotating arrowheads) designed to respect the distinctive character of depicted scenes. Converging arguments indicated that the perceived orientation of the probes was near theoretical values. A series of experiments showed that subjects formed well-defined impressions of depicted surface orientation. The literature suggests that perceived objects might be 'flattened', but that was not the general rule. Instead, both mean slant and uncertainty fitted models in which slant estimates are derived in a relatively straightforward way from local relations in the picture. Simplifying pictures tended to make orientation estimates less certain, particularly away from the natural anchor points (vertical and horizontal). The shape of the object affected all aspects of the observed-object/percept relationship. Individual differences were large, and suggest that different individuals used different relationships as a basis for their estimates. Overall, data suggest that everyday picture perception is strongly selective and weakly integrative. In particular, depicted slant is estimated by finding a picture feature which will be strongly related to it if the object contains a particular regularity, not by additive integration of evidence from multiple directly and indirectly relevant sources.
Abstract:
When the sensory consequences of an action are systematically altered our brain can recalibrate the mappings between sensory cues and properties of our environment. This recalibration can be driven by both cue conflicts and altered sensory statistics, but neither mechanism offers a way for cues to be calibrated so they provide accurate information about the world, as sensory cues carry no information as to their own accuracy. Here, we explored whether sensory predictions based on internal physical models could be used to accurately calibrate visual cues to 3D surface slant. Human observers played a 3D kinematic game in which they adjusted the slant of a surface so that a moving ball would bounce off the surface and through a target hoop. In one group, the ball's bounce was manipulated so that the surface behaved as if it had a different slant to that signaled by visual cues. With experience of this altered bounce, observers recalibrated their perception of slant so that it was more consistent with the assumed laws of kinematics and physical behavior of the surface. In another group, making the ball spin in a way that could physically explain its altered bounce eliminated this pattern of recalibration. Importantly, both groups adjusted their behavior in the kinematic game in the same way, experienced the same set of slants, and were not presented with low-level cue conflicts that could drive the recalibration. We conclude that observers use predictive kinematic models to accurately calibrate visual cues to 3D properties of the world.
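The kinematic regularity that this task exploits can be made concrete with a small sketch (my reading of the geometry, not the authors' code): for an ideal elastic bounce, the outgoing velocity is the incoming velocity reflected about the surface normal, so the bounce direction is physically determined by the surface's slant. The 2D set-up and the function name below are illustrative assumptions.

```python
import numpy as np

# Ideal elastic bounce off a planar surface in 2D (x horizontal, y vertical):
# the outgoing velocity is the incoming velocity reflected about the surface
# normal, so the bounce direction carries information about the slant.
def bounce(v, slant_deg):
    theta = np.radians(slant_deg)
    n = np.array([-np.sin(theta), np.cos(theta)])   # unit normal of the slanted surface
    return v - 2.0 * np.dot(v, n) * n               # specular reflection of v about n

v_in = np.array([0.0, -1.0])                        # ball falling straight down
for slant in (0.0, 15.0, 30.0):
    vx, vy = bounce(v_in, slant)
    print(f"slant {slant:4.1f} deg -> outgoing velocity ({vx:+.2f}, {vy:+.2f})")
```

Because the mapping from slant to bounce direction is lawful in this way, a mismatch between the visually signaled slant and the slant implied by the observed bounce is exactly the kind of prediction error that could drive the recalibration described above, unless the discrepancy can be explained away, for example by spin.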