910 results for Multiscale Texture
Abstract:
Summarizing topological relations is fundamental to many spatial applications including spatial query optimization. In this paper, we present several novel techniques to effectively construct cell-density-based spatial histograms for range (window) summarizations restricted to the four most important topological relations: contains, contained, overlap, and disjoint. We first present a novel framework to construct a multiscale histogram composed of multiple Euler histograms with the guarantee of exact summarization results for aligned windows in constant time. Then we present an approximate algorithm, with approximation ratio 19/12, to minimize the storage space of such multiscale Euler histograms, although the problem is generally NP-hard. To conform to a limited storage space where only k Euler histograms are allowed, an effective algorithm is presented to construct multiscale histograms that achieve high accuracy. Finally, we present a new approximate algorithm, running in constant time, to query an Euler histogram when exact answers cannot be guaranteed. Our extensive experiments on both synthetic and real-world datasets demonstrate that the approximate multiscale histogram techniques may improve the accuracy of existing techniques by several orders of magnitude while retaining cost efficiency, and that the exact multiscale histogram technique requires only storage space linearly proportional to the number of cells for the real datasets.
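The Euler-histogram idea underlying this abstract can be illustrated with a minimal 2-D sketch (this is an assumption-laden reconstruction of the basic data structure, not the paper's multiscale construction). Counters are kept for grid cells, interior edges, and interior vertices; because the intersection of an aligned rectangle with an aligned query window is itself a rectangle with Euler characteristic 1 (cells − edges + vertices = 1), an alternating sum over the window yields the exact number of objects intersecting it:

```python
class EulerHistogram2D:
    """Minimal 2-D Euler histogram over an nx-by-ny grid of cells.

    Counts objects (axis-aligned, grid-aligned rectangles) intersecting an
    aligned query window exactly, via the Euler characteristic:
    cells - edges + vertices = 1 for every non-empty rectangular intersection.
    """

    def __init__(self, nx, ny):
        self.cells = [[0] * ny for _ in range(nx)]            # unit cells
        self.vedges = [[0] * ny for _ in range(nx - 1)]       # edge between cells (i,j),(i+1,j)
        self.hedges = [[0] * (ny - 1) for _ in range(nx)]     # edge between cells (i,j),(i,j+1)
        self.verts = [[0] * (ny - 1) for _ in range(nx - 1)]  # interior grid vertices

    def insert(self, i1, i2, j1, j2):
        """Insert an object covering cells [i1..i2] x [j1..j2] (inclusive)."""
        for i in range(i1, i2 + 1):
            for j in range(j1, j2 + 1):
                self.cells[i][j] += 1
        for i in range(i1, i2):               # edges interior to the object
            for j in range(j1, j2 + 1):
                self.vedges[i][j] += 1
        for i in range(i1, i2 + 1):
            for j in range(j1, j2):
                self.hedges[i][j] += 1
        for i in range(i1, i2):               # vertices interior to the object
            for j in range(j1, j2):
                self.verts[i][j] += 1

    def count_intersecting(self, a1, a2, b1, b2):
        """Exact number of inserted objects intersecting window [a1..a2] x [b1..b2]."""
        c = sum(self.cells[i][j] for i in range(a1, a2 + 1) for j in range(b1, b2 + 1))
        e = sum(self.vedges[i][j] for i in range(a1, a2) for j in range(b1, b2 + 1))
        e += sum(self.hedges[i][j] for i in range(a1, a2 + 1) for j in range(b1, b2))
        v = sum(self.verts[i][j] for i in range(a1, a2) for j in range(b1, b2))
        return c - e + v                      # alternating (Euler) sum
```

With prefix sums over the three counter arrays, the query sums become constant-time, which is the cost regime the abstract refers to; separating the count into the four individual relations is where the paper's multiscale machinery comes in.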
Abstract:
This Letter addresses image segmentation via a generative model approach. A Bayesian network (BNT) in the space of dyadic wavelet transform coefficients is introduced to model texture images. The model is similar to a hidden Markov model (HMM), but with non-stationary transition conditional probability distributions. It is composed of discrete hidden variables and observable Gaussian outputs for the wavelet coefficients. In particular, the Gabor wavelet transform is considered. The introduced model is compared with the simplest joint Gaussian probabilistic model for Gabor wavelet coefficients on several textures from the Brodatz album [1]. The comparison is based on cross-validation and uses probabilistic model ensembles instead of single models. In addition, the robustness of the models to additive Gaussian noise is investigated. We further study the feasibility of the introduced generative model for image segmentation in the novelty detection framework [2]. Two examples are considered: (i) sea-surface pollution detection from intensity images, and (ii) segmentation of still images with varying illumination across the scene.
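The observable layer of such a model is built on Gabor wavelet coefficients. As a minimal illustration in plain NumPy (not the Letter's model; all parameter values are illustrative), the following computes the response of a complex Gabor filter to an oriented grating; the strong orientation selectivity shown here is what makes these coefficients useful texture features:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Complex 2-D Gabor: isotropic Gaussian envelope times a complex carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    carrier = x * np.cos(theta) + y * np.sin(theta)      # distance along orientation theta
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.exp(2j * np.pi * carrier / wavelength)

def gabor_response(patch, wavelength, theta, sigma):
    """Magnitude of the Gabor coefficient at the patch centre (inner product)."""
    k = gabor_kernel(patch.shape[0], wavelength, theta, sigma)
    return np.abs(np.sum(np.conj(k) * patch))

# Vertical grating (luminance varies along x) with the filter's own wavelength.
size, wavelength, sigma = 33, 8.0, 4.0
y, x = np.mgrid[-16:17, -16:17]
grating = np.cos(2 * np.pi * x / wavelength)

matched = gabor_response(grating, wavelength, theta=0.0, sigma=sigma)
orthogonal = gabor_response(grating, wavelength, theta=np.pi / 2, sigma=sigma)
```

In a model of the kind the Letter describes, the (complex or magnitude) coefficients from a bank of such filters at several scales and orientations would form the Gaussian observables attached to the discrete hidden states.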
Abstract:
The pattern of illumination on an undulating surface can be used to infer its 3-D form (shape from shading). But the recovery of shape would be invalid if the shading actually arose from reflectance variation. When a corrugated surface is painted with an albedo texture, the variation in local mean luminance (LM) due to shading is accompanied by a similar modulation in texture amplitude (AM). This is not so for reflectance variation, nor for roughly textured surfaces. We used a haptic matching technique to show that modulations of texture amplitude play a role in the interpretation of shape from shading. Observers were shown plaid stimuli comprising LM and AM combined in-phase (LM+AM) on one oblique and in anti-phase (LM-AM) on the other. Stimuli were presented via a modified ReachIN workstation allowing the co-registration of visual and haptic stimuli. In the first experiment, observers were asked to adjust the phase of a haptic surface, which had the same orientation as the LM+AM combination, until its peak in depth aligned with the visually perceived peak. The resulting alignments were consistent with the use of a lighting-from-above prior. In the second experiment, observers were asked to adjust the amplitude of the haptic surface to match that of the visually perceived surface. Observers chose relatively large amplitude settings when the haptic surface was oriented and phase-aligned with the LM+AM cue. When the haptic surface was aligned with the LM-AM cue, amplitude settings were close to zero. Thus the LM/AM phase relation is a significant visual depth cue, and is used to discriminate between shading and reflectance variations. [Supported by the Engineering and Physical Sciences Research Council, EPSRC].
Abstract:
When a textured surface is modulated in depth and illuminated, the level of illumination varies across the surface, producing coarse-scale luminance modulations (LM) and amplitude modulation (AM) of the fine-scale texture. If the surface has an albedo texture (reflectance variation) then the LM and AM components are always in-phase, but if the surface has a relief texture the phase relation between LM and AM varies with the direction and nature of the illuminant. We showed observers sinusoidal luminance and amplitude modulations of a binary noise texture, in various phase relationships, in a paired-comparisons design. In the first experiment, the combinations under test were presented in different temporal intervals. Observers indicated which interval contained the more depthy stimulus. LM and AM in-phase were seen as more depthy than LM alone which was in turn more depthy than LM and AM in anti-phase, but the differences were weak. In the second experiment the combinations under test were presented in a single interval on opposite obliques of a plaid pattern. Observers were asked to indicate the more depthy oblique. Observers produced the same depth rankings as before, but now the effects were more robust and significant. Intermediate LM/AM phase relationships were also tested: phase differences less than 90 deg were seen as more depthy than LM-only, while those greater than 90 deg were seen as less depthy. We conjecture that the visual system construes phase offsets between LM and AM as indicating relief texture and thus perceives these combinations as depthy even when their phase relationship is other than zero. However, when different LM/AM pairs are combined in a plaid, the signals on the obliques are unlikely to indicate corrugations of the same texture, and in this case the out-of-phase pairing is seen as flat. [Supported by the Engineering and Physical Sciences Research Council (EPSRC)].
Abstract:
When a textured surface is modulated in depth and illuminated, parts of the surface receive different levels of illumination; the resulting variations in luminance can be used to infer the shape of the depth modulations (shape from shading). The changes in illumination also produce changes in the amplitude of the texture, although local contrast remains constant. We investigated the role of texture amplitude in supporting shape from shading. If a luminance plaid is added to a binary noise texture (LM), shape from shading produces the perception of corrugations in two directions. If the amplitude of the noise is also modulated (AM) such that it is in-phase with one of the luminance sinusoids and out-of-phase with the other, the resulting surface is seen as corrugated in only one direction: that supported by the in-phase pairing. We confirmed this subjective report experimentally, using a depth-mapping technique. Further, we asked naïve observers to indicate the direction of corrugations in plaids made up of various combinations of LM and AM. LM+AM was seen as having the most depth, then LM-only, then LM-AM, and then AM-only. Our results suggest that while LM is required to see depth from shading, its phase relative to any AM component is also important.
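Stimuli of this class are straightforward to synthesise. The sketch below (a hypothetical reconstruction in NumPy with illustrative parameter values, not the experimental code) adds a sinusoidal luminance modulation to a binary noise carrier and modulates the noise amplitude either in phase (LM+AM) or in anti-phase (LM-AM); measuring local mean and local amplitude column by column confirms the intended phase relation:

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 256, 256
noise = rng.choice([-1.0, 1.0], size=(H, W))    # binary noise texture carrier
mod = np.sin(2 * np.pi * np.arange(W) / W)      # one modulation cycle across x

I0, m_lm, m_am, c = 0.5, 0.2, 0.5, 0.1          # mean, LM depth, AM depth, noise contrast

def stimulus(phase_sign):
    """LM plus noise whose amplitude is modulated in (+1) or anti (-1) phase with LM."""
    return I0 * (1 + m_lm * mod) + c * noise * (1 + phase_sign * m_am * mod)

def lm_am_correlation(img):
    """Correlation between the local-mean (LM) and local-amplitude (AM) profiles."""
    mean_profile = img.mean(axis=0)             # luminance profile per column
    amp_profile = img.std(axis=0)               # texture amplitude per column
    return np.corrcoef(mean_profile, amp_profile)[0, 1]

corr_in = lm_am_correlation(stimulus(+1))       # LM+AM: amplitude follows luminance
corr_out = lm_am_correlation(stimulus(-1))      # LM-AM: amplitude opposes luminance
```

Note that local contrast (amplitude divided by local mean) is only approximately constant in this additive construction; the in-phase case is the one consistent with multiplicative shading of an albedo texture.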
Abstract:
Previous studies have suggested separate channels for detection of first-order luminance modulations (LM) and second-order modulations of the local amplitude (AM) of a texture. Mixtures of LM and AM with different phase relationships appear very different: in-phase compounds (LM + AM) look like 3-D corrugated surfaces, while out-of-phase compounds (LM - AM) appear flat and/or transparent. This difference may arise because the in-phase compounds are consistent with multiplicative shading, while the out-of-phase compounds are not. We investigated the role of these modulation components in surface depth perception. We used a textured background with thin bars formed by local changes in luminance and/or texture amplitude. These stimuli appear as embossed surfaces with wide and narrow regions. Keeping the AM modulation depth fixed at a suprathreshold level, we determined the amount of luminance contrast required for observers to correctly indicate the width (narrow or wide) of 'raised' regions in the display. Performance (compared to the LM-only case) was facilitated by the presence of AM, but, unexpectedly, performance for LM - AM was as good as for LM + AM. Thus, these results suggest that there is an interaction between first-order and second-order mechanisms during depth perception based on shading cues, but the phase dependence is not yet understood.
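The multiplicative-shading argument invoked here can be checked numerically: multiplying an albedo texture by a shading profile scales the local mean and the local texture amplitude together, so the resulting LM and AM components are necessarily in phase. A minimal sketch (hypothetical parameter values):

```python
import numpy as np

rng = np.random.default_rng(1)
H, W = 256, 256
reflectance = 0.5 + 0.1 * rng.choice([-1.0, 1.0], size=(H, W))  # albedo texture
shading = 1 + 0.4 * np.sin(2 * np.pi * np.arange(W) / W)        # illumination profile

image = shading * reflectance          # multiplicative shading of the texture

mean_profile = image.mean(axis=0)      # LM: local mean luminance per column
amp_profile = image.std(axis=0)        # AM: local texture amplitude per column

# Shading scales mean and amplitude together, so LM and AM co-vary (in phase).
in_phase = np.corrcoef(mean_profile, amp_profile)[0, 1]
```

An out-of-phase (LM - AM) compound cannot be produced by any such multiplicative shading, which is why its equal facilitation of depth judgements was unexpected.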
Abstract:
Previous studies have suggested separate channels for the detection of first-order luminance (LM) and second-order modulations of the local amplitude (AM) of a texture (Schofield and Georgeson, 1999 Vision Research 39 2697 - 2716; Georgeson and Schofield, 2002 Spatial Vision 16 59). It has also been shown that LM and AM mixtures with different phase relationships are easily separated in identification tasks, and (informally) appear very different with the in-phase compound (LM + AM), producing the most realistic depth percept. We investigated the role of these LM and AM components in depth perception. Stimuli consisted of a noise texture background with thin bars formed as local increments or decrements in luminance and/or noise amplitude. These stimuli appear as embossed surfaces with wide and narrow regions. When luminance and amplitude changes have the same sign and magnitude (LM + AM) the overall modulation is consistent with multiplicative shading, but this is not so when the two modulations have opposite sign (LM - AM). Keeping the AM modulation depth fixed at a suprathreshold level, we determined the amount of luminance contrast required for observers to correctly indicate the width (narrow or wide) of raised regions in the display. Performance (compared to the LM-only case) was facilitated by the presence of AM, but, unexpectedly, performance for LM - AM was even better than for LM + AM. Further tests suggested that this improvement in performance is not due to an increase in the detectability of luminance in the compound stimuli. Thus, contrary to previous findings, these results suggest the possibility of interaction between first-order and second-order mechanisms in depth perception.
Abstract:
We outline a scheme for the way in which early vision may handle information about shading (luminance modulation, LM) and texture (contrast modulation, CM). Previous work on the detection of gratings has found no sub-threshold summation, and no cross-adaptation, between LM and CM patterns. This strongly implied separate channels for the detection of LM and CM structure. However, we now report experiments in which adapting to LM (or CM) gratings creates tilt aftereffects of similar magnitude on both LM and CM test gratings, and reduces the perceived strength (modulation depth) of LM and CM gratings to a similar extent. This transfer of aftereffects between LM and CM might suggest a second stage of processing at which LM and CM information is integrated. The nature of this integration, however, is unclear and several simple predictions are not fulfilled. Firstly, one might expect the integration stage to lose identity information about whether the pattern was LM or CM. We show instead that the identity of barely detectable LM and CM patterns is not lost. Secondly, when LM and CM gratings are combined in-phase or out-of-phase we find no evidence for cancellation, nor for 'phase-blindness'. These results suggest that information about LM and CM is not pooled or merged - shading is not confused with texture variation. We suggest that LM and CM signals are carried by separate channels, but they share a common adaptation mechanism that accounts for the almost complete transfer of perceptual aftereffects.
Abstract:
Edge blur is an important perceptual cue, but how does the visual system encode the degree of blur at edges? Blur could be measured by the width of the luminance gradient profile, peak-to-trough separation in the 2nd derivative profile, or the ratio of 1st-to-3rd derivative magnitudes. In template models, the system would store a set of templates of different sizes and find which one best fits the 'signature' of the edge. The signature could be the luminance profile itself, or one of its spatial derivatives. I tested these possibilities in blur-matching experiments. In a 2AFC staircase procedure, observers adjusted the blur of Gaussian edges (30% contrast) to match the perceived blur of various non-Gaussian test edges. In experiment 1, test stimuli were mixtures of 2 Gaussian edges (eg 10 and 30 min of arc blur) at the same location, while in experiment 2, test stimuli were formed from a blurred edge sharpened to different extents by a compressive transformation. Predictions of the various models were tested against the blur-matching data, but only one model was strongly supported. This was the template model, in which the input signature is the 2nd derivative of the luminance profile, and the templates are applied to this signature at the zero-crossings. The templates are Gaussian derivative receptive fields that covary in width and length to form a self-similar set (ie same shape, different sizes). This naturally predicts that shorter edges should look sharper. As edge length gets shorter, responses of longer templates drop more than shorter ones, and so the response distribution shifts towards shorter (smaller) templates, signalling a sharper edge. The data confirmed this, including the scale-invariance implied by self-similarity, and a good fit was obtained from templates with a length-to-width ratio of about 1. The simultaneous analysis of edge blur and edge location may offer a new solution to the multiscale problem in edge detection.
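One of the candidate blur codes mentioned above, the ratio of 1st-to-3rd derivative magnitudes, can be verified directly: for a Gaussian-blurred step edge, the derivatives at the edge centre satisfy |f'(0)/f'''(0)| = sigma^2, so the blur scale is recoverable from that ratio alone. A numerical sketch (illustrative values, not the experimental stimuli):

```python
import numpy as np
from math import erf, sqrt

sigma_true = 2.0                 # Gaussian blur of the step edge (arbitrary units)
dx = 0.05
x = np.arange(-20.0, 20.0 + dx, dx)

# Gaussian-blurred step edge: the cumulative integral of a Gaussian of width sigma_true.
edge = np.array([0.5 * (1.0 + erf(v / (sigma_true * sqrt(2.0)))) for v in x])

# Successive central differences give the 1st..3rd spatial derivatives.
d1 = np.gradient(edge, dx)
d2 = np.gradient(d1, dx)
d3 = np.gradient(d2, dx)

# At the edge centre (the zero-crossing of the 2nd derivative),
# |f'/f'''| = sigma^2 for a Gaussian-blurred step.
centre = len(x) // 2
sigma_est = np.sqrt(np.abs(d1[centre] / d3[centre]))
```

The same identity underlies why a bank of Gaussian-derivative templates of different sizes, compared at the 2nd-derivative zero-crossing, can signal blur: each template size is tuned to a particular derivative ratio.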