30 results for visuo-spatial perception


Relevance: 40.00%

Publisher:

Abstract:

How are the image statistics of global image contrast computed? We answered this by using a contrast-matching task for checkerboard configurations of ‘battenberg’ micro-patterns where the contrasts and spatial spreads of interdigitated pairs of micro-patterns were adjusted independently. Test stimuli were 20 × 20 arrays with various sized cluster widths, matched to standard patterns of uniform contrast. When one of the interdigitated test patterns had much higher contrast than the other, it alone determined global pattern contrast, as in a max() operation. Crucially, however, the full matching functions had a curious intermediate region where low contrast additions for one pattern to intermediate contrasts of the other caused a paradoxical reduction in perceived global contrast. None of the following models predicted this: RMS, energy, linear sum, max, Legge and Foley. However, a gain control model incorporating wide-field integration and suppression of nonlinear contrast responses predicted the results with no free parameters. This model was derived from experiments on summation of contrast at threshold, and masking and summation effects in dipper functions. Those experiments were also inconsistent with the failed models above. Thus, we conclude that our contrast gain control model (Meese & Summers, 2007) describes a fundamental operation in human contrast vision.
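The candidate pooling rules that the abstract rules out are easy to state computationally. A minimal numpy sketch (with illustrative contrast values, not the authors' stimuli or fitted model) shows why a pure MAX rule ignores low-contrast additions entirely, while an RMS rule predicts they pull global contrast down:

```python
import numpy as np

def rms_contrast(c):
    """Root-mean-square pooling over micro-pattern contrasts."""
    return float(np.sqrt(np.mean(np.square(c))))

def linear_sum(c):
    """Linear summation of contrasts."""
    return float(np.sum(c))

def max_contrast(c):
    """Winner-take-all (MAX) pooling."""
    return float(np.max(c))

# Two interdigitated micro-pattern populations, as in a 'battenberg'
# checkerboard: one at 0.3 contrast, the other at a low 0.05 contrast.
high = np.full(200, 0.3)
mixed = np.concatenate([high, np.full(200, 0.05)])

# MAX ignores the low-contrast additions entirely; RMS predicts they
# reduce global contrast relative to the high-contrast patches alone.
# Neither rule fitted the full matching functions in the study.
print(max_contrast(mixed))                        # 0.3
print(rms_contrast(mixed) < rms_contrast(high))   # True
```

Neither simple statistic reproduces the paradoxical intermediate region; that is what motivates the gain-control model with wide-field integration and suppression.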

Relevance: 30.00%

Publisher:

Abstract:

We used magnetoencephalography (MEG) to examine the nature of oscillatory brain rhythms when passively viewing both illusory and real visual contours. Three stimuli were employed: a Kanizsa triangle; a Kanizsa triangle with a real triangular contour superimposed; and a control figure in which the corner elements used to form the Kanizsa triangle were rotated to negate the formation of illusory contours. The MEG data were analysed using synthetic aperture magnetometry (SAM) to enable the spatial localisation of task-related oscillatory power changes within specific frequency bands, and the time-course of activity within given locations-of-interest was determined by calculating time-frequency plots using a Morlet wavelet transform. In contrast to earlier studies, we did not find increases in gamma activity (> 30 Hz) to illusory shapes, but instead a decrease in 10–30 Hz activity approximately 200 ms after stimulus presentation. The reduction in oscillatory activity was primarily evident within extrastriate areas, including the lateral occipital complex (LOC). Importantly, this same pattern of results was evident for each stimulus type. Our results further highlight the importance of the LOC and a network of posterior brain regions in processing visual contours, be they illusory or real in nature. The similarity of the results for both real and illusory contours, however, leads us to conclude that the broadband (< 30 Hz) decrease in power we observed is more likely to reflect general changes in visual attention than neural computations specific to processing visual contours.

Relevance: 30.00%

Publisher:

Abstract:

Behavioural studies on normal and brain-damaged individuals provide convincing evidence that the perception of objects results in the generation of both visual and motor signals in the brain, irrespective of whether or not there is an intention to act upon the object. In this paper we sought to determine the basis of the motor signals generated by visual objects. By examining how the properties of an object affect an observer's reaction time for judging its orientation, we provide evidence to indicate that directed visual attention is responsible for the automatic generation of motor signals associated with the spatial characteristics of perceived objects.

Relevance: 30.00%

Publisher:

Abstract:

Edge detection is crucial in visual processing. Previous computational and psychophysical models have often used peaks in the gradient or zero-crossings in the 2nd derivative to signal edges. We tested these approaches using a stimulus that has no such features. Its luminance profile was a triangle wave, blurred by a rectangular function. Subjects marked the position and polarity of perceived edges. For all blur widths tested, observers marked edges at or near 3rd derivative maxima, even though these were not 1st derivative maxima or 2nd derivative zero-crossings, at any scale. These results are predicted by a new nonlinear model based on 3rd derivative filtering. As a critical test, we added a ramp of variable slope to the blurred triangle-wave luminance profile. The ramp has no effect on the (linear) 2nd or higher derivatives, but the nonlinear model predicts a shift from seeing two edges to seeing one edge as the ramp gradient increases. Results of two experiments confirmed such a shift, thus supporting the new model. [Supported by the Engineering and Physical Sciences Research Council].
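The stimulus and the key observation can be reproduced numerically. In this sketch (our own discretisation, arbitrary units) the third derivative of a box-blurred triangle wave peaks about half a blur-width away from the triangle apex, at the ends of the blur region, even though the first derivative has no isolated peak there and the second derivative has no sign-crossing:

```python
import numpy as np

# Triangle-wave luminance profile blurred by a rectangular function
x = np.linspace(0, 4 * np.pi, 4000)
tri = (2 / np.pi) * np.arcsin(np.sin(x))   # triangle wave, apexes at pi/2 + k*pi
box = np.ones(200) / 200                   # rectangular blur, width ~0.63
lum = np.convolve(tri, box, mode="same")

d1 = np.gradient(lum, x)                   # trapezoid wave: no isolated peak
d2 = np.gradient(d1, x)                    # rect pulses: no sign-crossing
d3 = np.gradient(d2, x)                    # spikes at the blur-region ends

# strongest |3rd derivative| response (away from convolution borders)
i = 400 + np.argmax(np.abs(d3[400:3600]))
apexes = (np.pi / 2) + np.pi * np.arange(4)
offset = np.min(np.abs(x[i] - apexes))
# the peak sits ~half a blur-width from the nearest apex, not at it
print(0.2 < offset < 0.45)
```

These 3rd-derivative maxima are the locations at which observers marked edges, which is what the linear first- and second-derivative accounts cannot explain.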

Relevance: 30.00%

Publisher:

Abstract:

Previous studies have suggested separate channels for the detection of first-order luminance (LM) and second-order modulations of the local amplitude (AM) of a texture (Schofield and Georgeson, 1999 Vision Research 39 2697 - 2716; Georgeson and Schofield, 2002 Spatial Vision 16 59). It has also been shown that LM and AM mixtures with different phase relationships are easily separated in identification tasks and (informally) appear very different, with the in-phase compound (LM + AM) producing the most realistic depth percept. We investigated the role of these LM and AM components in depth perception. Stimuli consisted of a noise texture background with thin bars formed as local increments or decrements in luminance and/or noise amplitude. These stimuli appear as embossed surfaces with wide and narrow regions. When luminance and amplitude changes have the same sign and magnitude (LM + AM) the overall modulation is consistent with multiplicative shading, but this is not so when the two modulations have opposite sign (LM - AM). Keeping the AM modulation depth fixed at a suprathreshold level, we determined the amount of luminance contrast required for observers to correctly indicate the width (narrow or wide) of raised regions in the display. Performance (compared to the LM-only case) was facilitated by the presence of AM, but, unexpectedly, performance for LM - AM was even better than for LM + AM. Further tests suggested that this improvement in performance is not due to an increase in the detectability of luminance in the compound stimuli. Thus, contrary to previous findings, these results suggest the possibility of interaction between first-order and second-order mechanisms in depth perception.
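The first-order (LM) and second-order (AM) components can be made concrete with a 1-D sketch (our own illustrative construction and parameter values, not the authors' stimuli): luminance and noise amplitude are modulated by the same bar profile, with either matched or opposite sign.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 512
noise = rng.uniform(-1, 1, n)                        # noise carrier texture
x = np.arange(n)
bar = np.exp(-((x - n / 2) ** 2) / (2 * 20.0 ** 2))  # thin raised region

mean_lum, amp, m = 0.5, 0.2, 0.3                     # illustrative parameters
lm = mean_lum * (1 + m * bar)                        # first-order luminance bump

# LM + AM: luminance and amplitude modulated with the same sign
# (consistent with multiplicative shading of a textured surface)
lm_plus_am = lm + amp * (1 + m * bar) * noise
# LM - AM: same luminance bump, but amplitude modulated in antiphase
lm_minus_am = lm + amp * (1 - m * bar) * noise

# the noise envelope is boosted under the bar in LM+AM, reduced in LM-AM
env_plus = np.abs(lm_plus_am - lm)
env_minus = np.abs(lm_minus_am - lm)
print(np.all(env_plus >= env_minus))                 # True
```

Only the LM + AM combination mimics what multiplicative shading of a physical texture would produce, which is why the better performance found for LM - AM is the surprising result.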

Relevance: 30.00%

Publisher:

Abstract:

Following adaptation to an oriented (1-d) signal in central vision, the orientation of subsequently viewed test signals may appear repelled away from or attracted towards the adapting orientation. Small angular differences between the adaptor and test yield 'repulsive' shifts, while large angular differences yield 'attractive' shifts. In peripheral vision, however, both small and large angular differences yield repulsive shifts. To account for these tilt after-effects (TAEs), a cascaded model of orientation estimation that is optimized using hierarchical Bayesian methods is proposed. The model accounts for orientation bias through adaptation-induced losses in information that arise because of signal uncertainties and neural constraints placed upon the propagation of visual information. Repulsive (direct) TAEs arise at early stages of visual processing from adaptation of orientation-selective units with peak sensitivity at the orientation of the adaptor (theta). Attractive (indirect) TAEs result from adaptation of second-stage units with peak sensitivity at theta and theta + 90 degrees, which arise from an efficient stage of linear compression that pools across the responses of the first-stage orientation-selective units. A spatial orientation vector is estimated from the transformed oriented unit responses. The change from attractive to repulsive TAEs in peripheral vision can be explained by the differing harmonic biases resulting from constraints on signal power (in central vision) versus signal uncertainties in orientation (in peripheral vision). The proposed model is consistent with recent work by computational neuroscientists in supposing that visual bias reflects the adjustment of a rational system in the light of uncertain signals and system constraints.

Relevance: 30.00%

Publisher:

Abstract:

Perception of Mach bands may be explained by spatial filtering ('lateral inhibition') that can be approximated by 2nd derivative computation, and several alternative models have been proposed. To distinguish between them, we used a novel set of ‘generalised Gaussian’ images, in which the sharp ramp-plateau junction of the Mach ramp was replaced by smoother transitions. The images ranged from a slightly blurred Mach ramp to a Gaussian edge and beyond, and also included a sine-wave edge. The probability of seeing Mach bands increased with the (relative) sharpness of the junction, but was largely independent of absolute spatial scale. These data did not fit the predictions of MIRAGE, nor 2nd derivative computation at a single fine scale. In experiment 2, observers used a cursor to mark features on the same set of images. Data on perceived position of Mach bands did not support the local energy model. Perceived width of Mach bands was poorly explained by a single-scale edge detection model, despite its previous success with Mach edges (Wallis & Georgeson, 2009, Vision Research, 49, 1886-1893). A more successful model used separate (odd and even) scale-space filtering for edges and bars, local peak detection to find candidate features, and the MAX operator to compare odd- and even-filter response maps (Georgeson, VSS 2006, Journal of Vision 6(6), 191a). Mach bands are seen when there is a local peak in the even-filter (bar) response map, AND that peak value exceeds corresponding responses in the odd-filter (edge) maps.
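The competing odd (edge) and even (bar) filter responses can be sketched with derivative-of-Gaussian operators applied to a Mach ramp. This is our own single-scale illustration (the full model pools over scale-space and normalises responses before the MAX comparison): the even-filter response peaks at the ramp/plateau junctions, where Mach bands are seen, while the odd-filter response peaks within the ramp, where the edge is seen.

```python
import numpy as np

def gauss_deriv(k, sigma, order):
    """Derivative-of-Gaussian kernels: order 1 is odd-symmetric (edge
    detector), order 2 is even-symmetric (bar detector)."""
    g = np.exp(-k**2 / (2 * sigma**2))
    if order == 1:
        return -k / sigma**2 * g
    return (k**2 / sigma**4 - 1 / sigma**2) * g

# Mach ramp: plateau, linear ramp (from x = -200 to 200), plateau
x = np.arange(-500, 500)
lum = np.clip(x / 200.0, -1, 1)

k = np.arange(-100, 101)
odd = np.abs(np.convolve(lum, gauss_deriv(k, 20, 1), mode="same"))
even = np.abs(np.convolve(lum, gauss_deriv(k, 20, 2), mode="same"))

# bar (even) response peaks at a ramp/plateau junction (x = -200 or 200);
# edge (odd) response peaks inside the ramp, away from the junctions
i_even = 150 + np.argmax(even[150:850])
i_odd = 150 + np.argmax(odd[150:850])
print(min(abs(x[i_even] - 200), abs(x[i_even] + 200)) < 15)  # True
print(abs(x[i_odd]) < 150)                                   # True
```

In the model a Mach band is then reported at an even-response peak only when that (normalised) peak value also wins the MAX comparison against the odd-filter maps at the same location.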

Relevance: 30.00%

Publisher:

Abstract:

Masking is said to occur when a mask stimulus interferes with the visibility of a target (test) stimulus. One widely held view of this process supposes interactions between mask and test mechanisms (cross-channel masking), and explicit models (e.g., J. M. Foley, 1994) have proposed that the interactions are inhibitory. Unlike a within-channel model, where masking involves the combination of mask and test stimulus within a single mechanism, this cross-channel inhibitory model predicts that the mask should attenuate the perceived contrast of a test stimulus. Another possibility is that masking is due to an increase in noise, in which case, perception of contrast should be unaffected once the signal exceeds detection threshold. We use circular patches and annuli of sine-wave grating in contrast detection and contrast matching experiments to test these hypotheses and investigate interactions across spatial frequency, orientation, field position, and eye of origin. In both types of experiments we found substantial effects of masking that can occur over a factor of 3 in spatial frequency, 45° in orientation, across different field positions and between different eyes. We found the effects to be greatest at the lowest test spatial frequency we used (0.46 c/deg), and when the mask and test differed in all four dimensions simultaneously. This is surprising in light of previous work where it was concluded that suppression from the surround was strictly monocular (C. Chubb, G. Sperling, & J. A. Solomon, 1989). The results confirm that above detection threshold, cross-channel masking involves contrast suppression and not (purely) mask-induced noise. We conclude that cross-channel masking can be a powerful phenomenon, particularly at low test spatial frequencies and when mask and test are presented to different eyes. © 2004 ARVO.
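The cross-channel inhibitory account can be written down compactly. In this sketch the mask contrast enters only the divisive pool, not the excitatory term, so it suppresses the response to the test, i.e. reduces its perceived contrast. The exponents follow the general form of Foley-type gain-control models; the parameter values are hypothetical, not the paper's fits:

```python
def response(c_test, c_mask, p=2.4, q=2.0, z=0.1, w=0.5):
    """Divisive gain control: excitation raised to power p, divided by a
    pool containing the test, a weighted mask term, and a saturation
    constant z. All parameter values here are hypothetical."""
    return c_test ** p / (z ** q + c_test ** q + w * c_mask ** q)

# a suprathreshold 0.2-contrast test is suppressed by a 0.3-contrast mask
print(response(0.2, 0.3) < response(0.2, 0.0))   # True: perceived contrast falls
print(response(0.2, 0.5) < response(0.2, 0.3))   # True: more mask, more suppression
```

A pure noise-based account, by contrast, would leave this suprathreshold response (and hence matched contrast) unchanged by the mask, which is the distinction the matching experiments exploit.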

Relevance: 30.00%

Publisher:

Abstract:

Gestalt grouping rules imply a process or mechanism for grouping together local features of an object into a perceptual whole. Several psychophysical experiments have been interpreted as evidence for constrained interactions between nearby spatial filter elements and this has led to the hypothesis that element linking might be mediated by these interactions. A common tacit assumption is that these interactions result in response modulation which disturbs a local contrast code. We addressed this possibility by performing contrast discrimination experiments using two-dimensional arrays of multiple Gabor patches arranged either (i) vertically, (ii) in circles (coherent conditions), or (iii) randomly (incoherent condition), as well as for a single Gabor patch. In each condition, contrast increments were applied to either the entire test stimulus (experiment 1) or a single patch whose position was cued (experiment 2). In experiment 3, the texture stimuli were reduced to a single contour by displaying only the central vertical strip. Performance was better for the multiple-patch conditions than for the single-patch condition, but whether the multiple-patch stimulus was coherent or not had no systematic effect on the results in any of the experiments. We conclude that constrained local interactions do not interfere with a local contrast code for our suprathreshold stimuli, suggesting that, in general, this is not the way in which element linking is achieved. The possibility that interactions are involved in enhancing the detectability of contour elements at threshold remains unchallenged by our experiments.

Relevance: 30.00%

Publisher:

Abstract:

When viewing a drifting plaid stimulus, perceived motion alternates over time between coherent pattern motion and a transparent impression of the two component gratings. It is known that changing the intrinsic attributes of such patterns (e.g. speed, orientation and spatial frequency of components) can influence percept predominance. Here, we investigate the contribution of extrinsic factors to perception; specifically contextual motion and eye movements. In the first experiment, the percept most similar to the speed and direction of surround motion increased in dominance, implying a tuned integration process. This shift primarily involved an increase in dominance durations of the consistent percept. The second experiment measured eye movements under similar conditions. Saccades were not associated with perceptual transitions, though blink rate increased around the time of a switch. This indicates that saccades do not cause switches, yet saccades in a congruent direction might help to prolong a percept because i) more saccades were directionally congruent with the currently reported percept than expected by chance, and ii) when observers were asked to make deliberate eye movements along one motion axis, this increased percept reports in that direction. Overall, we find evidence that perception of bistable motion can be modulated by information from spatially adjacent regions, and changes to the retinal image caused by blinks and saccades.

Relevance: 30.00%

Publisher:

Abstract:

To represent the local orientation and energy of a 1-D image signal, many models of early visual processing employ bandpass quadrature filters, formed by combining the original signal with its Hilbert transform. However, representations capable of estimating an image signal's 2-D phase have been largely ignored. Here, we consider 2-D phase representations using a method based upon the Riesz transform. For spatial images there exist two Riesz transformed signals and one original signal from which orientation, phase and energy may be represented as a vector in 3-D signal space. We show that these image properties may be represented by a Singular Value Decomposition (SVD) of the higher-order derivatives of the original and the Riesz transformed signals. We further show that the expected responses of even and odd symmetric filters from the Riesz transform may be represented by a single signal autocorrelation function, which is beneficial in simplifying Bayesian computations for spatial orientation. Importantly, the Riesz transform allows one to weight linearly across orientation using both symmetric and asymmetric filters to account for some perceptual phase distortions observed in image signals - notably one's perception of edge structure within plaid patterns whose component gratings are either equal or unequal in contrast. Finally, exploiting the benefits that arise from the Riesz definition of local energy as a scalar quantity, we demonstrate the utility of Riesz signal representations in estimating the spatial orientation of second-order image signals. We conclude that the Riesz transform may be employed as a general tool for 2-D visual pattern recognition by its virtue of representing phase, orientation and energy as orthogonal signal quantities.
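The core construction is compact in the Fourier domain. A minimal numpy sketch (the standard frequency-domain Riesz definition with transfer functions (-i*u/|w|, -i*v/|w|); the grating test image is our own) computes the two Riesz-transformed signals and shows that the resulting local energy is a scalar, position-invariant quantity:

```python
import numpy as np

def riesz(img):
    """2-D Riesz transform via its frequency-domain transfer functions
    (-i*u/|w|, -i*v/|w|), yielding two transformed signals."""
    rows, cols = img.shape
    u = np.fft.fftfreq(cols)[None, :]
    v = np.fft.fftfreq(rows)[:, None]
    mag = np.hypot(u, v)
    mag[0, 0] = 1.0                       # avoid 0/0 at the DC term
    F = np.fft.fft2(img)
    r1 = np.fft.ifft2(F * (-1j * u / mag)).real
    r2 = np.fft.ifft2(F * (-1j * v / mag)).real
    return r1, r2

# vertical cosine grating, 8 cycles per image
x = np.arange(64)
img = np.tile(np.cos(2 * np.pi * 8 * x / 64), (64, 1))
r1, r2 = riesz(img)

# (img, r1, r2) span a 3-D signal space: for this grating r1 is the
# quadrature (sine) component, r2 vanishes, and local energy is constant
energy = img**2 + r1**2 + r2**2
print(np.allclose(energy, 1.0))           # True: scalar, position-invariant
```

Orientation and phase then fall out as angles of the (img, r1, r2) vector, which is what makes the representation attractive for 2-D pattern analysis.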

Relevance: 30.00%

Publisher:

Abstract:

Motion is an important aspect of face perception that has been largely neglected to date. Many of the established findings are based on studies that use static facial images, which do not reflect the unique temporal dynamics available from seeing a moving face. In the present thesis a set of naturalistic dynamic facial emotional expressions was purposely created and used to investigate the neural structures involved in the perception of dynamic facial expressions of emotion, with both functional Magnetic Resonance Imaging (fMRI) and Magnetoencephalography (MEG). Through fMRI and connectivity analysis, a dynamic face perception network was identified, which is demonstrated to extend the distributed neural system for face perception (Haxby et al., 2000). Measures of effective connectivity between these regions revealed that dynamic facial stimuli were associated with specific increases in connectivity between early visual regions, such as inferior occipital gyri and superior temporal sulci, along with coupling between superior temporal sulci and amygdalae, as well as with inferior frontal gyri. MEG and Synthetic Aperture Magnetometry (SAM) were used to examine the spatiotemporal profile of neurophysiological activity within this dynamic face perception network. SAM analysis revealed a number of regions showing differential activation to dynamic versus static faces in the distributed face network, characterised by decreases in cortical oscillatory power in the beta band, which were spatially coincident with those regions that were previously identified with fMRI. These findings support the presence of a distributed network of cortical regions that mediate the perception of dynamic facial expressions, with the fMRI data providing information on the spatial co-ordinates paralleled by the MEG data, which indicate the temporal dynamics within this network.
This integrated multimodal approach offers both excellent spatial and temporal resolution, thereby providing an opportunity to explore dynamic brain activity and connectivity during face processing.

Relevance: 30.00%

Publisher:

Abstract:

The local image representation produced by early stages of visual analysis is uninformative regarding spatially extensive textures and surfaces. We know little about the cortical algorithm used to combine local information over space, and still less about the area over which it can operate. But such operations are vital to support perception of real-world objects and scenes. Here, we deploy a novel reverse-correlation technique to measure the extent of spatial pooling for target regions of different areas placed either in the central visual field, or more peripherally. Stimuli were large arrays of micropatterns, with their contrasts perturbed individually on an interval-by-interval basis. By comparing trial-by-trial observer responses with the predictions of computational models, we show that substantial regions (up to 13 carrier cycles) of a stimulus can be monitored in parallel by summing contrast over area. This summing strategy is very different from the more widely assumed signal selection strategy (a MAX operation), and suggests that neural mechanisms representing extensive visual textures can be recruited by attention. We also demonstrate that template resolution is much less precise in the parafovea than in the fovea, consistent with recent accounts of crowding. © 2014 The Authors.
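The distinction between the summing strategy and a MAX (signal-selection) strategy can be illustrated with a toy 2AFC ideal-observer simulation. This is our own construction with arbitrary contrast values, not the paper's reverse-correlation analysis: when the target increment covers the whole array, pooling contrast over area outperforms monitoring only the strongest patch.

```python
import numpy as np

rng = np.random.default_rng(7)
n_trials, n_patches = 5000, 16
base, target, sd = 0.2, 0.02, 0.02        # arbitrary illustrative values

# each interval is an array of micropatterns whose contrasts are
# perturbed independently; the target interval carries an increment
# on every patch
a = base + target + rng.normal(0, sd, (n_trials, n_patches))
b = base + rng.normal(0, sd, (n_trials, n_patches))

# SUM observer pools contrast over the whole array; MAX observer
# monitors only the single strongest patch
sum_correct = np.mean(a.sum(axis=1) > b.sum(axis=1))
max_correct = np.mean(a.max(axis=1) > b.max(axis=1))
print(sum_correct > max_correct)          # area summation wins here
```

Trial-by-trial agreement between human responses and each simulated observer is what lets the reverse-correlation technique distinguish the two strategies.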

Relevance: 30.00%

Publisher:

Abstract:

Simple features such as edges are the building blocks of spatial vision, and so I ask: how are visual features and their properties (location, blur and contrast) derived from the responses of spatial filters in early vision; how are these elementary visual signals combined across the two eyes; and when are they not combined? Our psychophysical evidence from blur-matching experiments strongly supports a model in which edges are found at the spatial peaks of response of odd-symmetric receptive fields (gradient operators), and their blur B is given by the spatial scale of the most active operator. This model can explain some surprising aspects of blur perception: edges look sharper when they are low contrast, and when their length is made shorter. Our experiments on binocular fusion of blurred edges show that single vision is maintained for disparities up to about 2.5*B, followed by diplopia or suppression of one edge at larger disparities. Edges of opposite polarity never fuse. Fusion may be served by binocular combination of monocular gradient operators, but that combination - involving binocular summation and interocular suppression - is not completely understood. In particular, linear summation (supported by psychophysical and physiological evidence) predicts that fused edges should look more blurred with increasing disparity (up to 2.5*B), but results surprisingly show that edge blur appears constant across all disparities, whether fused or diplopic. Finally, when edges of very different blur are shown to the left and right eyes fusion may not occur, but perceived blur is not simply given by the sharper edge, nor by the higher contrast. Instead, it is the ratio of contrast to blur that matters: the edge with the steeper gradient dominates perception. The early stages of binocular spatial vision speak the language of luminance gradients.
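The claim that edge blur B is given by the spatial scale of the most active gradient operator can be sketched numerically. Here a Gaussian-blurred step edge is probed by derivative-of-Gaussian operators over a range of scales; the sqrt(sigma) response weighting that makes the peak land at sigma = sigma_b is our normalisation choice for this demo, not necessarily the model's:

```python
import numpy as np
from math import erf

sigma_b = 8.0                              # blur of a step edge (samples)
x = np.arange(-256, 256)
edge = np.array([0.5 * (1 + erf(xi / (sigma_b * np.sqrt(2)))) for xi in x])

def gradient_response(sigma):
    """Response of an odd-symmetric (derivative-of-Gaussian) operator of
    scale sigma at the edge location."""
    k = np.arange(-80, 81)
    g = np.exp(-k**2 / (2 * sigma**2))
    g /= g.sum()                           # unit-area Gaussian
    gd = -k / sigma**2 * g                 # its derivative (odd-symmetric)
    return abs(np.convolve(edge, gd, mode="same")[256])

sigmas = np.arange(2.0, 20.0, 0.5)
# sqrt(sigma) weighting makes the response peak near sigma == sigma_b
weighted = [np.sqrt(s) * gradient_response(s) for s in sigmas]
best = sigmas[int(np.argmax(weighted))]
print(abs(best - sigma_b) <= 2.0)          # most active scale tracks blur
```

Reading blur off the winning scale in this way is what allows a single filter bank to explain blur matching across a wide range of edge widths.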

Relevance: 30.00%

Publisher:

Abstract:

Distributed representations (DR) of cortical channels are pervasive in models of spatio-temporal vision. A central idea that underpins current innovations of DR stems from the extension of 1-D phase into 2-D images. Neurophysiological evidence, however, provides tenuous support for a quadrature representation in the visual cortex, since even-phase visual units are associated with broader orientation tuning than odd-phase visual units (J. Neurophys., 88, 455-463, 2002). We demonstrate that applying the steering theorems to a 2-D definition of phase afforded by the Riesz Transform (IEEE Trans. Sig. Proc., 49, 3136-3144), extended to include a Scale Transform, allows one to smoothly interpolate across 2-D phase and pass from circularly symmetric to orientation-tuned visual units, and from more narrowly tuned odd-symmetric units to even ones. Steering across 2-D phase and scale can be orthogonalized via a linearizing transformation. Using the tilt after-effect as an example, we argue that the effects of visual adaptation are better explained by an orthogonal rather than a channel-specific representation of visual units, because the orthogonal representation explicitly accounts for isotropic and cross-orientation adaptation effects, from which both direct and indirect tilt after-effects can be explained.