11 results for Plaid

in Aston University Research Archive


Relevance: 20.00%

Publisher:

Abstract:

In human vision, the response to luminance contrast at each small region in the image is controlled by a more global process where suppressive signals are pooled over spatial frequency and orientation bands. But what rules govern summation among stimulus components within the suppressive pool? We addressed this question by extending a pedestal plus pattern mask paradigm to use a stimulus with up to three mask components: a vertical 1 c/deg pedestal, plus pattern masks made from either a grating (orientation = -45°) or a plaid (orientation = ±45°), with component spatial frequency of 3 c/deg. The overall contrast of both types of pattern mask was fixed at 20% (i.e., plaid component contrasts were 10%). We found that both of these masks transformed conventional dipper functions (threshold vs. pedestal contrast with no pattern mask) in exactly the same way: The dipper region was raised and shifted to the right, but the dipper handles superimposed. This equivalence of the two pattern masks indicates that contrast summation between the plaid components was perfectly linear prior to the masking stage. Furthermore, the pattern masks did not drive the detecting mechanism above its detection threshold because they did not abolish facilitation by the pedestal (Foley, 1994). Therefore, the pattern masking could not be attributed to within-channel masking, suggesting that linear summation of contrast signals takes place within a suppressive contrast gain pool. We present a quantitative model of the effects and discuss the implications for neurophysiological models of the process. © 2004 ARVO.
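A minimal sketch of a divisive contrast gain control model of this kind may help. All parameter values (`p`, `q`, `z`, `w`, criterion `k`) are illustrative placeholders, not the paper's fitted values; the key property is that the suppressive denominator takes the linear sum of mask-component contrasts, so a 20% grating mask and a 20% plaid mask (two 10% components) are predicted to mask identically:

```python
import math

def response(c_target, c_ped, mask_components, p=2.4, q=2.0, z=10.0, w=1.0):
    # Excitation: target and pedestal drive the same vertical channel
    excitation = (c_target + c_ped) ** p
    # Suppressive gain pool: LINEAR sum of component contrasts, so a 20%
    # grating and a 20% plaid (two 10% components) suppress equally
    pool = c_ped + w * sum(mask_components)
    return excitation / (z + pool ** q)

def threshold(c_ped, mask_components, k=0.3):
    # Smallest target contrast raising the response by criterion k over
    # the no-target baseline (bisection; response is monotonic in target)
    base = response(0.0, c_ped, mask_components)
    lo, hi = 0.0, 100.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if response(mid, c_ped, mask_components) - base >= k:
            hi = mid
        else:
            lo = mid
    return hi
```

With these toy parameters the model reproduces the qualitative pattern described above: grating and plaid masks are equivalent, the dip region is raised by the pattern mask, and pedestal facilitation survives masking.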

Relevance: 10.00%

Publisher:

Abstract:

The pattern of illumination on an undulating surface can be used to infer its 3-D form (shape from shading). But the recovery of shape would be invalid if the shading actually arose from reflectance variation. When a corrugated surface is painted with an albedo texture, the variation in local mean luminance (LM) due to shading is accompanied by a similar modulation in texture amplitude (AM). This is not so for reflectance variation, nor for roughly textured surfaces. We used a haptic matching technique to show that modulations of texture amplitude play a role in the interpretation of shape from shading. Observers were shown plaid stimuli comprising LM and AM combined in-phase (LM+AM) on one oblique and in anti-phase (LM-AM) on the other. Stimuli were presented via a modified ReachIN workstation allowing the co-registration of visual and haptic stimuli. In the first experiment, observers were asked to adjust the phase of a haptic surface, which had the same orientation as the LM+AM combination, until its peak in depth aligned with the visually perceived peak. The resulting alignments were consistent with the use of a lighting-from-above prior. In the second experiment, observers were asked to adjust the amplitude of the haptic surface to match that of the visually perceived surface. Observers chose relatively large amplitude settings when the haptic surface was oriented and phase-aligned with the LM+AM cue. When the haptic surface was aligned with the LM-AM cue, amplitude settings were close to zero. Thus the LM/AM phase relation is a significant visual depth cue, and is used to discriminate between shading and reflectance variations. [Supported by the Engineering and Physical Sciences Research Council, EPSRC].

Relevance: 10.00%

Publisher:

Abstract:

When a textured surface is modulated in depth and illuminated, the level of illumination varies across the surface, producing coarse-scale luminance modulations (LM) and amplitude modulation (AM) of the fine-scale texture. If the surface has an albedo texture (reflectance variation) then the LM and AM components are always in-phase, but if the surface has a relief texture the phase relation between LM and AM varies with the direction and nature of the illuminant. We showed observers sinusoidal luminance and amplitude modulations of a binary noise texture, in various phase relationships, in a paired-comparisons design. In the first experiment, the combinations under test were presented in different temporal intervals. Observers indicated which interval contained the more depthy stimulus. LM and AM in-phase were seen as more depthy than LM alone which was in turn more depthy than LM and AM in anti-phase, but the differences were weak. In the second experiment the combinations under test were presented in a single interval on opposite obliques of a plaid pattern. Observers were asked to indicate the more depthy oblique. Observers produced the same depth rankings as before, but now the effects were more robust and significant. Intermediate LM/AM phase relationships were also tested: phase differences less than 90 deg were seen as more depthy than LM-only, while those greater than 90 deg were seen as less depthy. We conjecture that the visual system construes phase offsets between LM and AM as indicating relief texture and thus perceives these combinations as depthy even when their phase relationship is other than zero. However, when different LM/AM pairs are combined in a plaid, the signals on the obliques are unlikely to indicate corrugations of the same texture, and in this case the out-of-phase pairing is seen as flat. [Supported by the Engineering and Physical Sciences Research Council (EPSRC)].

Relevance: 10.00%

Publisher:

Abstract:

When a textured surface is modulated in depth and illuminated, parts of the surface receive different levels of illumination; the resulting variations in luminance can be used to infer the shape of the depth modulations (shape from shading). The changes in illumination also produce changes in the amplitude of the texture, although local contrast remains constant. We investigated the role of texture amplitude in supporting shape from shading. If a luminance plaid is added to a binary noise texture (LM), shape from shading produces perception of corrugations in two directions. If the amplitude of the noise is also modulated (AM) such that it is in-phase with one of the luminance sinusoids and out-of-phase with the other, the resulting surface is seen as corrugated in only one direction: that supported by the in-phase pairing. We confirmed this subjective report experimentally, using a depth-mapping technique. Further, we asked naïve observers to indicate the direction of corrugations in plaids made up of various combinations of LM and AM. LM+AM was seen as having most depth, then LM-only, then LM-AM, and then AM-only. Our results suggest that while LM is required to see depth from shading, its phase relative to any AM component is also important.
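The LM/AM stimulus construction can be sketched in one dimension. This is a hedged illustration, not the authors' stimulus code: modulation depths, noise contrast, and frequencies are arbitrary placeholders. A binary noise carrier receives a sinusoidal luminance modulation (LM) and an amplitude modulation (AM) of the noise at a relative phase; measuring texture amplitude near LM peaks versus troughs shows the in-phase/anti-phase distinction:

```python
import math, random

def lm_am_row(n=512, f=4, m_lm=0.2, m_am=0.2, phase=0.0, seed=7):
    # One row of binary-noise texture with sinusoidal luminance
    # modulation (LM) plus amplitude modulation (AM) of the noise at
    # relative phase `phase` (0 = in-phase, pi = anti-phase);
    # modulation depths are illustrative, not the paper's values
    rng = random.Random(seed)
    row = []
    for x in range(n):
        s_lm = math.sin(2 * math.pi * f * x / n)
        s_am = math.sin(2 * math.pi * f * x / n + phase)
        noise = rng.choice((-1.0, 1.0))
        row.append(1.0 + m_lm * s_lm + 0.3 * noise * (1.0 + m_am * s_am))
    return row

def amplitude_at_lm_extremes(row, n=512, f=4, m_lm=0.2):
    # Texture amplitude (|deviation from the known luminance profile|)
    # averaged near LM peaks versus near LM troughs
    peak, trough = [], []
    for x in range(n):
        s_lm = math.sin(2 * math.pi * f * x / n)
        dev = abs(row[x] - (1.0 + m_lm * s_lm))
        if s_lm > 0.95:
            peak.append(dev)
        elif s_lm < -0.95:
            trough.append(dev)
    return sum(peak) / len(peak), sum(trough) / len(trough)
```

For the in-phase row, texture amplitude is highest where luminance is highest; setting `phase=math.pi` reverses this, mimicking the anti-phase cue.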

Relevance: 10.00%

Publisher:

Abstract:

How does nearby motion affect the perceived speed of a target region? When a central drifting Gabor patch is surrounded by translating noise, its speed can be misperceived over a fourfold range. Typically, when a surround moves in the same direction, perceived centre speed is reduced; for opposite-direction surrounds it increases. Measuring this illusion for a variety of surround properties reveals that the motion context effects are a saturating function of surround speed (Experiment I) and contrast (Experiment II). Our analyses indicate that the effects are consistent with a subtractive process, rather than with speed being averaged over area. In Experiment III we exploit known properties of the motion system to ask where these surround effects impact. Using 2D plaid stimuli, we find that surround-induced shifts in perceived speed of one plaid component produce substantial shifts in perceived plaid direction. This indicates that surrounds exert their influence early in processing, before pattern motion direction is computed. These findings relate to ongoing investigations of surround suppression for direction discrimination, and are consistent with single-cell findings of direction-tuned suppressive and facilitatory interactions in primary visual cortex (V1).
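The claim that component-speed shifts propagate into pattern direction follows from the standard intersection-of-constraints (IOC) computation for plaids. A small sketch (standard textbook IOC, not the paper's analysis code): each component's normal speed constrains the pattern velocity, and slowing one component of a ±45° plaid rotates the computed pattern direction:

```python
import math

def ioc_direction(theta1_deg, v1, theta2_deg, v2):
    # Intersection of constraints: each component grating with unit
    # normal n_i and normal speed v_i constrains the pattern velocity V
    # via V . n_i = v_i; solve the 2x2 system, return direction in deg
    t1, t2 = math.radians(theta1_deg), math.radians(theta2_deg)
    n1 = (math.cos(t1), math.sin(t1))
    n2 = (math.cos(t2), math.sin(t2))
    det = n1[0] * n2[1] - n1[1] * n2[0]
    vx = (v1 * n2[1] - v2 * n1[1]) / det
    vy = (n1[0] * v2 - n2[0] * v1) / det
    return math.degrees(math.atan2(vy, vx))
```

A symmetric plaid (equal component speeds at ±45°) moves rightward (0°); halving one component's speed, as a same-direction surround might do perceptually, rotates the pattern direction by roughly 18° in this example.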

Relevance: 10.00%

Publisher:

Abstract:

Recent work has revealed multiple pathways for cross-orientation suppression in cat and human vision. In particular, ipsiocular and interocular pathways appear to assert their influence before binocular summation in humans but have different (1) spatial tuning, (2) temporal dependencies, and (3) adaptation after-effects. Here we use mask components that fall outside the excitatory passband of the detecting mechanism to investigate the rules for pooling multiple mask components within these pathways. We measured psychophysical contrast masking functions for vertical 1 cycle/deg sine-wave gratings in the presence of left or right oblique (±45 deg) 3 cycles/deg mask gratings with contrast C%, or a plaid made from their sum, where each component had contrast 0.5C%. Masks and targets were presented to two eyes (binocular), one eye (monoptic), or different eyes (dichoptic). Binocular masking functions superimposed when plotted against C, but in the monoptic and dichoptic conditions, the grating produced slightly more suppression than the plaid when C ≥ 16%. We tested contrast gain control models involving two types of contrast combination on the denominator: (1) spatial pooling of the mask after a local nonlinearity (to calculate either root mean square contrast or energy) and (2) "linear suppression" (Holmes & Meese, 2004, Journal of Vision 4, 1080-1089), involving the linear sum of the mask component contrasts. Monoptic and dichoptic masking were typically better fit by the spatial pooling models, but binocular masking was not: it demanded strict linear summation of the Michelson contrast across mask orientation. Another scheme, in which suppressive pooling followed compressive contrast responses to the mask components (e.g., oriented cortical cells), was ruled out by all of our data. We conclude that the different processes that underlie monoptic and dichoptic masking use the same type of contrast pooling within their respective suppressive fields, but the effects do not sum to predict the binocular case.
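The distinction between the two pooling rules is easy to see numerically. A toy comparison (illustrative only, not the fitted models): a 20% grating versus a 20% plaid (two 10% components) produce identical suppression under linear summation of Michelson contrasts, but the grating produces twice the suppressive drive under energy pooling:

```python
def linear_pool(mask_contrasts):
    # Linear sum of component Michelson contrasts (the rule the
    # binocular data demanded)
    return sum(mask_contrasts)

def energy_pool(mask_contrasts):
    # Contrast energy: square each component before pooling (the rule
    # that better fit monoptic and dichoptic masking)
    return sum(c * c for c in mask_contrasts)

grating = [20.0]      # one oblique component at C = 20%
plaid = [10.0, 10.0]  # two components at 0.5C = 10% each
```

Under the linear rule the two masks are indistinguishable; under the energy rule the grating's pooled drive exceeds the plaid's, predicting the small grating advantage seen monoptically and dichoptically.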

Relevance: 10.00%

Publisher:

Abstract:

To explore spatial interactions between visual mechanisms in the Fourier domain we measured detection thresholds for vertical and horizontal sine-wave gratings (4.4 deg diameter) over a range of spatial frequencies (0.5-23 c/deg) in the presence of grating and plaid masks with component contrasts of 8%, orientations of ±45° and a spatial frequency of 1 c/deg. The mask suppressed the target grating over a range of ±1 octave, and the plaid produced more suppression than the grating, consistent with summation of mask components in a broadly tuned contrast gain pool. At greater differences in spatial frequency (∼3 octaves), the plaid and grating masks both produced about 3 dB of facilitation (they reduced detection thresholds by a factor of about √2). At yet further distances (∼4 octaves) the masks had no effect. The facilitation cannot be attributed to a reduction of uncertainty by the mask because (a) it occurs for mask components that have very different spatial frequencies and orientations from the test and (b) the large stimulus size and central fixation point mean there was no spatial uncertainty that could be reduced. We suggest the results are due to long-range sensory interactions (in the Fourier domain) between mask and test channels. The effects could be due to either direct facilitation or disinhibition. © 2006 Elsevier Ltd. All rights reserved.
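The dB convention used above is worth making explicit. A minimal helper (standard 20·log10 contrast-threshold convention, assumed rather than taken from the paper): a threshold reduction by a factor of √2 corresponds to about -3 dB, i.e. 3 dB of facilitation:

```python
import math

def threshold_change_db(t_masked, t_alone):
    # Contrast threshold change in dB under the 20*log10 convention:
    # negative values are facilitation, positive values are suppression
    return 20 * math.log10(t_masked / t_alone)
```

For example, a masked threshold of 1/√2 times the unmasked threshold gives approximately -3.01 dB.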

Relevance: 10.00%

Publisher:

Abstract:

The human visual system is sensitive to second-order modulations of the local contrast (CM) or amplitude (AM) of a carrier signal. Second-order cues are detected independently of first-order luminance signals; however, it is not clear why vision should benefit from second-order sensitivity. Analysis of the first- and second-order contents of natural images suggests that these cues tend to occur together, but their phase relationship varies. We have shown that in-phase combinations of LM and AM are perceived as a shaded corrugated surface whereas the anti-phase combination can be seen as corrugated when presented alone or as a flat material change when presented in a plaid containing the in-phase cue. We now extend these findings using new stimulus types and a novel haptic matching task. We also introduce a computational model based on initially separate first- and second-order channels that are combined within orientation and subsequently across orientation to produce a shading signal. Contrast gain control allows the LM+AM cue to suppress responses to the LM-AM cue when presented in a plaid. Thus, the model sees LM-AM as flat in these circumstances. We conclude that second-order vision plays a key role in disambiguating the origin of luminance changes within an image. © ARVO.
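The cross-orientation gain control step can be sketched abstractly. This is a hypothetical toy, not the authors' model: the drive values and parameters `z` and `w` are invented for illustration. The shading response on one plaid oblique is divisively suppressed by the drive on the other, so a weaker anti-phase (LM-AM) cue responds when alone but is pushed toward zero when paired with the stronger in-phase (LM+AM) cue:

```python
def shading_response(drive_self, drive_other, z=0.1, w=2.0):
    # Divisive gain control across orientation channels: the shading
    # response on one plaid oblique is suppressed by the drive on the
    # other oblique (z, w and the drive values below are illustrative)
    return drive_self / (z + drive_self + w * drive_other)

LM_PLUS_AM = 1.0   # hypothetical shading drive for the in-phase cue
LM_MINUS_AM = 0.4  # weaker hypothetical drive for the anti-phase cue
```

Presented alone (`drive_other = 0`), the anti-phase cue yields a sizable response (seen as corrugated); within the plaid its response collapses while the in-phase cue's response survives, matching the "flat material change" percept.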

Relevance: 10.00%

Publisher:

Abstract:

When viewing a drifting plaid stimulus, perceived motion alternates over time between coherent pattern motion and a transparent impression of the two component gratings. It is known that changing the intrinsic attributes of such patterns (e.g. speed, orientation and spatial frequency of components) can influence percept predominance. Here, we investigate the contribution of extrinsic factors to perception; specifically contextual motion and eye movements. In the first experiment, the percept most similar to the speed and direction of surround motion increased in dominance, implying a tuned integration process. This shift primarily involved an increase in dominance durations of the consistent percept. The second experiment measured eye movements under similar conditions. Saccades were not associated with perceptual transitions, though blink rate increased around the time of a switch. This indicates that saccades do not cause switches, yet saccades in a congruent direction might help to prolong a percept because i) more saccades were directionally congruent with the currently reported percept than expected by chance, and ii) when observers were asked to make deliberate eye movements along one motion axis, this increased percept reports in that direction. Overall, we find evidence that perception of bistable motion can be modulated by information from spatially adjacent regions, and changes to the retinal image caused by blinks and saccades.

Relevance: 10.00%

Publisher:

Abstract:

Classical studies of area summation measure contrast detection thresholds as a function of grating diameter. Unfortunately, (i) this approach is compromised by retinal inhomogeneity and (ii) it potentially confounds summation of signal with summation of internal noise. The Swiss cheese stimulus of T. S. Meese and R. J. Summers (2007) and the closely related Battenberg stimulus of T. S. Meese (2010) were designed to avoid these problems by keeping target diameter constant and modulating interdigitated checks of first-order carrier contrast within the stimulus region. This approach has revealed a contrast integration process with greater potency than the classical model of spatial probability summation. Here, we used Swiss cheese stimuli to investigate the spatial limits of contrast integration over a range of carrier frequencies (1–16 c/deg) and raised plaid modulator frequencies (0.25–32 cycles/check). Subthreshold summation for interdigitated carrier pairs remained strong (~4 to 6 dB) up to 4 to 8 cycles/check. Our computational analysis of these results implied linear signal combination (following square-law transduction) over either (i) 12 carrier cycles or more or (ii) 1.27 deg or more. Our model has three stages of summation: short-range summation within linear receptive fields, medium-range integration to compute contrast energy for multiple patches of the image, and long-range pooling of the contrast integrators by probability summation. Our analysis legitimizes the inclusion of widespread integration of signal (and noise) within hierarchical image processing models. It also confirms the individual differences in the spatial extent of integration that emerge from our approach.
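The quantitative gap between probability summation and contrast integration can be illustrated with the standard Minkowski-pooling prediction (a textbook formulation, with exponents chosen for illustration; the paper's model is more elaborate): sensitivity improves as the area ratio raised to 1/β, so doubling signal area predicts about 1.5 dB of summation under probability summation (β ≈ 4) but about 3 dB under linear integration following square-law transduction (β = 2):

```python
import math

def summation_db(area_ratio, beta):
    # Predicted threshold improvement in dB when signal area grows by
    # area_ratio under Minkowski pooling with exponent beta: sensitivity
    # scales as area_ratio ** (1 / beta)
    return 20 * math.log10(area_ratio ** (1.0 / beta))
```

The ~4 to 6 dB of subthreshold summation reported above for interdigitated carrier pairs clearly exceeds the ~1.5 dB probability-summation prediction, which is why the results demand a genuine contrast integration process.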

Relevance: 10.00%

Publisher:

Abstract:

To represent the local orientation and energy of a 1-D image signal, many models of early visual processing employ bandpass quadrature filters, formed by combining the original signal with its Hilbert transform. However, representations capable of estimating an image signal's 2-D phase have been largely ignored. Here, we consider 2-D phase representations using a method based upon the Riesz transform. For spatial images there exist two Riesz transformed signals and one original signal from which orientation, phase and energy may be represented as a vector in 3-D signal space. We show that these image properties may be represented by a Singular Value Decomposition (SVD) of the higher-order derivatives of the original and the Riesz transformed signals. We further show that the expected responses of even and odd symmetric filters from the Riesz transform may be represented by a single signal autocorrelation function, which is beneficial in simplifying Bayesian computations for spatial orientation. Importantly, the Riesz transform allows one to weight linearly across orientation using both symmetric and asymmetric filters to account for some perceptual phase distortions observed in image signals - notably one's perception of edge structure within plaid patterns whose component gratings are either equal or unequal in contrast. Finally, exploiting the benefits that arise from the Riesz definition of local energy as a scalar quantity, we demonstrate the utility of Riesz signal representations in estimating the spatial orientation of second-order image signals. We conclude that the Riesz transform may be employed as a general tool for 2-D visual pattern recognition by its virtue of representing phase, orientation and energy as orthogonal signal quantities.
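The opening idea, that a quadrature (even/odd) filter pair yields a phase-invariant energy measure, can be demonstrated in one dimension; the Riesz transform described above generalizes this construction to 2-D. A minimal 1-D sketch (standard quadrature energy, not the paper's Riesz implementation): the inner products of a sinusoid with matched cosine and sine filters vary with signal phase, but their root-sum-of-squares energy does not:

```python
import math

def quadrature_energy(phase, n=256, f=8):
    # Inner products of a unit sinusoid at `phase` with an even (cosine)
    # and odd (sine) filter at the same frequency; the quadrature energy
    # sqrt(even^2 + odd^2), normalized, is independent of phase
    even = odd = 0.0
    for x in range(n):
        t = 2 * math.pi * f * x / n
        signal = math.cos(t + phase)
        even += signal * math.cos(t)
        odd += signal * math.sin(t)
    return math.sqrt(even * even + odd * odd) * 2 / n
```

For any phase the normalized energy equals the sinusoid's amplitude (1.0 here), which is exactly the invariance that makes local energy useful for pattern representation.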