925 results for Shape-from-shading
Abstract:
Luminance changes within a scene are ambiguous; they can indicate reflectance changes, shadows, or shading due to surface undulations. How does vision distinguish between these possibilities? When a surface painted with an albedo texture is shaded, the change in local mean luminance (LM) is accompanied by a similar modulation of the local luminance amplitude (AM) of the texture. This relationship does not necessarily hold for reflectance changes or for shading of a relief texture. Here we concentrate on the role of AM in shape-from-shading. Observers were presented with a noise texture onto which sinusoidal LM and AM signals were superimposed, and were asked to indicate which of two marked locations was closer to them. Shape-from-shading was enhanced when LM and AM co-varied (in-phase), and was disrupted when they were out-of-phase. The perceptual differences between cue types (in-phase vs out-of-phase) were enhanced when the two cues were present at different orientations within a single image. Similar results were found with a haptic matching task. We conclude that vision can use AM to disambiguate luminance changes. LM and AM have a positive relationship for rendered, undulating, albedo textures, and we assess the degree to which this relationship holds in natural images. [Supported by EPSRC grants to AJS and MAG].
Abstract:
The pattern of illumination on an undulating surface can be used to infer its 3-D form (shape-from-shading). But the recovery of shape would be invalid if the luminance changes actually arose from changes in reflectance. So how does vision distinguish variation in illumination from variation in reflectance to avoid illusory depth? When a corrugated surface is painted with an albedo texture, the variation in local mean luminance (LM) due to shading is accompanied by a similar modulation in local luminance amplitude (AM). This is not so for reflectance variation, nor for roughly textured surfaces. We used depth mapping and paired comparison methods to show that modulations of local luminance amplitude play a role in the interpretation of shape-from-shading. The shape-from-shading percept was enhanced when LM and AM co-varied (in-phase) and was disrupted when they were out of phase or (to a lesser degree) when AM was absent. The perceptual differences between cue types (in-phase vs out-of-phase) were enhanced when the two cues were present at different orientations within a single image. Our results suggest that when LM and AM co-vary (in-phase) this indicates that the source of variation is illumination (caused by undulations of the surface), rather than surface reflectance. Hence, the congruence of LM and AM is a cue that supports a shape-from-shading interpretation. © 2006 Elsevier Ltd. All rights reserved.
Abstract:
The pattern of illumination on an undulating surface can be used to infer its 3-D form (shape from shading). But the recovery of shape would be invalid if the shading actually arose from reflectance variation. When a corrugated surface is painted with an albedo texture, the variation in local mean luminance (LM) due to shading is accompanied by a similar modulation in texture amplitude (AM). This is not so for reflectance variation, nor for roughly textured surfaces. We used a haptic matching technique to show that modulations of texture amplitude play a role in the interpretation of shape from shading. Observers were shown plaid stimuli comprising LM and AM combined in-phase (LM+AM) on one oblique and in anti-phase (LM-AM) on the other. Stimuli were presented via a modified ReachIN workstation allowing the co-registration of visual and haptic stimuli. In the first experiment, observers were asked to adjust the phase of a haptic surface, which had the same orientation as the LM+AM combination, until its peak in depth aligned with the visually perceived peak. The resulting alignments were consistent with the use of a lighting-from-above prior. In the second experiment, observers were asked to adjust the amplitude of the haptic surface to match that of the visually perceived surface. Observers chose relatively large amplitude settings when the haptic surface was oriented and phase-aligned with the LM+AM cue. When the haptic surface was aligned with the LM-AM cue, amplitude settings were close to zero. Thus the LM/AM phase relation is a significant visual depth cue, and is used to discriminate between shading and reflectance variations. [Supported by the Engineering and Physical Sciences Research Council, EPSRC].
Abstract:
When a textured surface is modulated in depth and illuminated, the level of illumination varies across the surface, producing coarse-scale luminance modulations (LM) and amplitude modulation (AM) of the fine-scale texture. If the surface has an albedo texture (reflectance variation) then the LM and AM components are always in-phase, but if the surface has a relief texture the phase relation between LM and AM varies with the direction and nature of the illuminant. We showed observers sinusoidal luminance and amplitude modulations of a binary noise texture, in various phase relationships, in a paired-comparisons design. In the first experiment, the combinations under test were presented in different temporal intervals. Observers indicated which interval contained the more depthy stimulus. LM and AM in-phase were seen as more depthy than LM alone which was in turn more depthy than LM and AM in anti-phase, but the differences were weak. In the second experiment the combinations under test were presented in a single interval on opposite obliques of a plaid pattern. Observers were asked to indicate the more depthy oblique. Observers produced the same depth rankings as before, but now the effects were more robust and significant. Intermediate LM/AM phase relationships were also tested: phase differences less than 90 deg were seen as more depthy than LM-only, while those greater than 90 deg were seen as less depthy. We conjecture that the visual system construes phase offsets between LM and AM as indicating relief texture and thus perceives these combinations as depthy even when their phase relationship is other than zero. However, when different LM/AM pairs are combined in a plaid, the signals on the obliques are unlikely to indicate corrugations of the same texture, and in this case the out-of-phase pairing is seen as flat. [Supported by the Engineering and Physical Sciences Research Council (EPSRC)].
Abstract:
When a textured surface is modulated in depth and illuminated, parts of the surface receive different levels of illumination; the resulting variations in luminance can be used to infer the shape of the depth modulations (shape from shading). The changes in illumination also produce changes in the amplitude of the texture, although local contrast remains constant. We investigated the role of texture amplitude in supporting shape from shading. If a luminance plaid is added to a binary noise texture (LM), shape from shading produces perception of corrugations in two directions. If the amplitude of the noise is also modulated (AM) such that it is in-phase with one of the luminance sinusoids and out-of-phase with the other, the resulting surface is seen as corrugated in only one direction: that supported by the in-phase pairing. We confirmed this subjective report experimentally, using a depth-mapping technique. Further, we asked naïve observers to indicate the direction of corrugations in plaids made up of various combinations of LM and AM. LM+AM was seen as having most depth, then LM-only, then LM-AM, and then AM-only. Our results suggest that while LM is required to see depth from shading, its phase relative to any AM component is also important.
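The LM/AM plaid stimuli described in these abstracts can be sketched numerically. A minimal construction follows; the spatial frequency, modulation depths, and contrast are illustrative assumptions, not the values used in the experiments:

```python
import numpy as np

rng = np.random.default_rng(0)
size = 256
y, x = np.mgrid[0:size, 0:size] / size

# Two oblique sinusoidal carriers (4 cycles/image is an illustrative choice)
s1 = np.sin(2 * np.pi * 4 * (x + y))   # one oblique of the plaid
s2 = np.sin(2 * np.pi * 4 * (x - y))   # the other oblique

# Binary albedo texture
noise = rng.choice([-1.0, 1.0], size=(size, size))

m_lm, m_am, contrast = 0.3, 0.5, 0.2   # assumed modulation depths
# LM: the plaid modulates local mean luminance on both obliques.
# AM: texture amplitude co-varies with s1 (in-phase, LM+AM) and
#     counter-varies with s2 (anti-phase, LM-AM).
image = 1.0 + m_lm * (s1 + s2) + contrast * noise * (1.0 + m_am * (s1 - s2))
```

With these signs, only the s1 oblique pairs LM and AM in phase, so the prediction from the abstracts is that corrugations are seen along that oblique only.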
Abstract:
People readily perceive smooth luminance variations as being due to the shading produced by undulations of a 3-D surface (shape-from-shading). In doing so, the visual system must simultaneously estimate the shape of the surface and the nature of the illumination. Remarkably, shape-from-shading operates even when both these properties are unknown and neither can be estimated directly from the image. In such circumstances humans are thought to adopt a default illumination model. A widely held view is that the default illuminant is a point source located above the observer's head. However, some have argued instead that the default illuminant is a diffuse source. We now present evidence that humans may adopt a flexible illumination model that includes both diffuse and point source elements. Our model estimates a direction for the point source and then weights the contribution of this source according to a bias function. For most people the preferred illuminant direction is overhead with a strong diffuse component.
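The flexible illumination model described above can be caricatured as a weighted mixture of a Lambertian point source and a constant diffuse term. In this sketch the weight, ambient level, and light direction are all assumptions for illustration, not fitted values from the study:

```python
import numpy as np

def shading(normals, light_dir, w_point, ambient=0.5):
    """Mixture illumination model: Lambertian point source plus a constant
    diffuse term, weighted by w_point. All parameter values are assumptions."""
    light = np.asarray(light_dir, float)
    light = light / np.linalg.norm(light)
    point = np.clip(normals @ light, 0.0, None)   # n . l, clamped at zero
    return w_point * point + (1.0 - w_point) * ambient

# A flat patch vs. a patch tilted toward an overhead-ish light
normals = np.array([[0.0, 0.0, 1.0],
                    [0.0, 0.6, 0.8]])
vals = shading(normals, light_dir=[0.0, 1.0, 1.0], w_point=0.7)
```

Setting `w_point` near 1 recovers the classical point-source-from-above prior, while `w_point` near 0 gives a purely diffuse default, so one bias parameter spans both accounts.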
Abstract:
This paper presents a method to reconstruct 3D surfaces of silicon wafers from 2D images of printed circuits taken with a scanning electron microscope. Our reconstruction method combines the physical model of the optical acquisition system with prior knowledge about the shapes of the patterns in the circuit; the result is a shape-from-shading technique with a shape prior. The reconstruction of the surface is formulated as an optimization problem with an objective functional that combines a data-fidelity term on the microscopic image with two prior terms on the surface. The data term models the acquisition system through the irradiance equation characteristic of the microscope; the first prior is a smoothness penalty on the reconstructed surface, and the second prior constrains the shape of the surface to agree with the expected shape of the pattern in the circuit. In order to account for the variability of the manufacturing process, this second prior includes a deformation field that allows a nonlinear elastic deformation between the expected pattern and the reconstructed surface. As a result, the minimization problem has two unknowns, and the reconstruction method provides two outputs: 1) a reconstructed surface and 2) a deformation field. The reconstructed surface is derived from the shading observed in the image and the prior knowledge about the pattern in the circuit, while the deformation field produces a mapping between the expected shape and the reconstructed surface that provides a measure of deviation between the circuit design models and the real manufacturing process.
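The objective functional described above can be written schematically; the symbols are our own shorthand (not the paper's notation), with S the surface, u the deformation field, I the SEM image, R(S) the irradiance predicted by the microscope model, and S_0 the expected circuit pattern:

```latex
E(S, u) \;=\; \underbrace{\int_\Omega \big( I(x) - R(S)(x) \big)^2 \, dx}_{\text{data fidelity}}
\;+\; \lambda \underbrace{\int_\Omega \lVert \nabla S(x) \rVert^2 \, dx}_{\text{smoothness}}
\;+\; \mu \underbrace{\int_\Omega \big( S(x) - S_0\!\big(x + u(x)\big) \big)^2 \, dx}_{\text{shape prior}}
```

Minimizing jointly over S and u yields the two reported outputs: the reconstructed surface and the deformation field (u itself would carry an elastic regularizer, omitted in this shorthand).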
Abstract:
When a flash is presented aligned with a moving stimulus, the former is perceived to lag behind the latter (the flash-lag effect). We study whether this mislocalization occurs when a positional judgment is not required, but a veridical spatial relationship between moving and flashed stimuli is needed to perceive a global shape. To do this, we used Glass patterns that are formed by pairs of correlated dots. One dot of each pair was presented moving and, at a given moment, the other dot of each pair was flashed in order to build the Glass pattern. If a flash-lag effect occurs between each pair of dots, we expect the best perception of the global shape to occur when the flashed dots are presented before the moving dots arrive at the position that physically builds the Glass pattern. Contrary to this, we found that the best detection of Glass patterns occurred for the situation of physical alignment. This result is not consistent with a low-level contribution to the flash-lag effect.
Abstract:
This report presents a set of representation methodologies and tools for the purpose of visualizing, analyzing, and designing functional shapes in terms of constraints on motion. The core of the research is an interactive computational environment that provides an explicit visual representation of motion constraints produced by shape interactions, and a series of tools that allow for the manipulation of motion constraints and their underlying shapes for the purpose of design.
Abstract:
Acquiring 3D shape from images is a classic problem in Computer Vision occupying researchers for at least 20 years. Only recently however have these ideas matured enough to provide highly accurate results. We present a complete algorithm to reconstruct 3D objects from images using the stereo correspondence cue. The technique can be described as a pipeline of four basic building blocks: camera calibration, image segmentation, photo-consistency estimation from images, and surface extraction from photo-consistency. In this Chapter we will put more emphasis on the latter two: namely how to extract geometric information from a set of photographs without explicit camera visibility, and how to combine different geometry estimates in an optimal way. © 2010 Springer-Verlag Berlin Heidelberg.
Abstract:
We present a new approach to diffuse reflectance estimation for dynamic scenes. Non-parametric image statistics are used to transfer reflectance properties from a static example set to a dynamic image sequence. The approach allows diffuse reflectance estimation for surface materials with inhomogeneous appearance, such as those which commonly occur with patterned or textured clothing. Material editing is also possible by transferring edited reflectance properties. Material reflectance properties are initially estimated from static images of the subject under multiple directional illuminations using photometric stereo. The estimated reflectance together with the corresponding image under uniform ambient illumination form a prior set of reference material observations. Material reflectance properties are then estimated for video sequences of a moving person captured under uniform ambient illumination by matching the observed local image statistics to the reference observations. Results demonstrate that the transfer of reflectance properties enables estimation of the dynamic surface normals and subsequent relighting combined with material editing. This approach overcomes limitations of previous work on material transfer and relighting of dynamic scenes which was limited to surfaces with regions of homogeneous reflectance. We evaluate our approach for relighting 3D model sequences reconstructed from multiple view video. Comparison to previous model relighting demonstrates improved reproduction of detailed texture and shape dynamics.
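The photometric-stereo step used to build the reference set rests on the standard Lambertian model: per-pixel intensities under k known directional lights satisfy i = L (rho n), so the scaled normal is recovered by least squares. A minimal sketch with simulated data (the light directions, normal, and albedo below are invented for illustration):

```python
import numpy as np

L = np.array([[0.0, 0.0, 1.0],
              [0.7, 0.0, 0.714],
              [0.0, 0.7, 0.714]])           # assumed light directions (rows)

true_n = np.array([0.2, 0.1, 0.97])
true_n /= np.linalg.norm(true_n)
rho = 0.8                                   # diffuse albedo
i = L @ (rho * true_n)                      # simulated noiseless intensities

g, *_ = np.linalg.lstsq(L, i, rcond=None)   # least squares for g = rho * n
albedo = np.linalg.norm(g)                  # |g| gives the albedo
normal = g / albedo                         # direction gives the normal
```

With three or more non-coplanar lights and no shadows this recovers albedo and normal per pixel; the paper then transfers such static estimates to dynamic sequences by matching local image statistics.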
Abstract:
A fast marching level set method is presented for monotonically advancing fronts, which leads to an extremely fast scheme for solving the Eikonal equation. Level set methods are numerical techniques for computing the position of propagating fronts. They rely on an initial value partial differential equation for a propagating level set function and use techniques borrowed from hyperbolic conservation laws. Topological changes, corner and cusp development, and accurate determination of geometric properties such as curvature and normal direction are naturally obtained in this setting. This paper describes a particular case of such methods for interfaces whose speed depends only on local position. The technique works by coupling work on entropy conditions for interface motion, the theory of viscosity solutions for Hamilton-Jacobi equations, and fast adaptive narrow band level set methods. The technique is applicable to a variety of problems, including shape-from-shading problems, lithographic development calculations in microchip manufacturing, and arrival time problems in control theory.
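The monotone-front idea can be illustrated with a first-order fast marching solver for the Eikonal equation |grad T| = 1/F on a regular grid. This is a minimal sketch of the scheme (heap-ordered freezing plus upwind quadratic updates), not an optimized production implementation:

```python
import heapq
import numpy as np

def fast_marching(speed, src):
    """First-order fast marching for |grad T| = 1/speed on a unit grid."""
    ny, nx = speed.shape
    T = np.full((ny, nx), np.inf)
    frozen = np.zeros((ny, nx), bool)
    T[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if frozen[i, j]:
            continue                          # stale heap entry
        frozen[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if not (0 <= a < ny and 0 <= b < nx) or frozen[a, b]:
                continue
            # Smallest neighbour value along each axis (upwind choice)
            tx = min(T[a, b - 1] if b > 0 else np.inf,
                     T[a, b + 1] if b < nx - 1 else np.inf)
            ty = min(T[a - 1, b] if a > 0 else np.inf,
                     T[a + 1, b] if a < ny - 1 else np.inf)
            h = 1.0 / speed[a, b]
            lo, hi = sorted((tx, ty))
            if hi == np.inf or hi - lo >= h:
                new = lo + h                  # one-sided update
            else:                             # two-sided quadratic solve
                new = 0.5 * (lo + hi + np.sqrt(2 * h * h - (hi - lo) ** 2))
            if new < T[a, b]:
                T[a, b] = new
                heapq.heappush(heap, (new, (a, b)))
    return T

T = fast_marching(np.ones((21, 21)), (10, 10))
```

With unit speed, T approximates the distance from the source; a spatially varying speed gives arrival times, which is how the same machinery serves shape-from-shading, lithography, and control-theory arrival problems.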
Abstract:
In this study we examine the spectral and morphometric properties of the four important lunar mare dome fields near Cauchy, Arago, Hortensius, and Milichius. We utilize Clementine UVVIS multispectral data to examine the soil composition of the mare domes while employing telescopic CCD imagery to compute digital elevation maps in order to determine their morphometric properties, especially flank slope, height, and edifice volume. After reviewing previous attempts to determine topographic data for lunar domes, we propose an image-based 3D reconstruction approach which is based on a combination of photoclinometry and shape from shading. Accordingly, we devise a classification scheme for lunar mare domes which is based on a principal component analysis of the determined spectral and morphometric features. For the effusive mare domes of the examined fields we establish four classes, two of which are further divided into two subclasses, respectively, where each class represents distinct combinations of spectral and morphometric dome properties. As a general trend, shallow and steep domes formed out of low-TiO2 basalts are observed in the Hortensius and Milichius dome fields, while the domes near Cauchy and Arago that consist of high-TiO2 basalts are all very shallow. The intrusive domes of our data set cover a wide continuous range of spectral and morphometric quantities, generally characterized by larger diameters and shallower flank slopes than effusive domes. A comparison to effusive and intrusive mare domes in other lunar regions, highland domes, and lunar cones has shown that the examined four mare dome fields display such a richness in spectral properties and 3D dome shape that the established representation remains valid in a more global context. Furthermore, we estimate the physical parameters of dome formation for the examined domes based on a rheologic model.
Each class of effusive domes defined in terms of spectral and morphometric properties is characterized by its specific range of values for lava viscosity, effusion rate, and duration of the effusion process. For our data set we report lava viscosities between about 10^2 and 10^8 Pa s, effusion rates between 25 and 600 m^3 s^-1, and durations of the effusion process between three weeks and 18 years. Lava viscosity decreases with increasing R415/R750 spectral ratio and thus TiO2 content; however, the correlation is not strong, implying an important influence of further parameters like effusion temperature on lava viscosity.
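The photoclinometric part of the dome reconstruction can be illustrated in one dimension under an assumed Lambertian model (this is a toy inversion, not the paper's combined photoclinometry / shape-from-shading pipeline): brightness obeys I = cos(theta_s - alpha), with alpha the surface slope angle and theta_s the illumination angle from the vertical, both in the profile plane. The solar angle and sample spacing below are invented values:

```python
import numpy as np

theta_s = np.deg2rad(30.0)                    # assumed solar angle
dx = 100.0                                    # assumed sample spacing in metres

# Synthesize a gentle dome-like slope profile, render it, then invert.
alpha_true = np.deg2rad(5.0) * np.sin(np.linspace(0.0, np.pi, 200))
I = np.cos(theta_s - alpha_true)              # normalized brightness (I0 = 1)

alpha = theta_s - np.arccos(np.clip(I, -1.0, 1.0))  # invert the shading model
z = np.concatenate([[0.0], np.cumsum(np.tan(alpha) * dx)])  # integrate slopes
```

Integrating the recovered slopes yields the relative elevation profile from which flank slope, height, and edifice volume can then be measured.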
Abstract:
Countershading, the widespread tendency of animals to be darker on the side that receives strongest illumination, has classically been explained as an adaptation for camouflage: obliterating cues to 3D shape and enhancing background matching. However, there have only been two quantitative tests of whether the patterns observed in different species match the optimal shading to obliterate 3D cues, and no tests of whether optimal countershading actually improves concealment or survival. We use a mathematical model of the light field to predict the optimal countershading for concealment that is specific to the light environment and then test this prediction with correspondingly patterned model “caterpillars” exposed to avian predation in the field. We show that the optimal countershading is strongly illumination-dependent. A relatively sharp transition in surface patterning from dark to light is only optimal under direct solar illumination; if there is diffuse illumination from cloudy skies or shade, the pattern provides no advantage over homogeneous background-matching coloration. Conversely, a smoother gradation between dark and light is optimal under cloudy skies or shade. The demonstration of these illumination-dependent effects of different countershading patterns on predation risk strongly supports the comparative evidence showing that the type of countershading varies with light environment.
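A toy version of the light-field argument makes the illumination dependence concrete: on a cylindrical body, the countershading that cancels shading is proportional to 1/irradiance. The irradiance formulas and the ambient floor below are illustrative assumptions, not the authors' model:

```python
import numpy as np

theta = np.linspace(0.0, np.pi, 181)          # angle from the top of the body

eps = 0.05                                    # small ambient floor (assumption)
E_sun = np.maximum(np.cos(theta), eps)        # direct overhead sun
E_sky = (1.0 + np.cos(theta)) / 2.0 + eps     # diffuse overcast sky

def optimal_reflectance(E):
    R = 1.0 / E                               # cancel the shading gradient
    return R / R.max()                        # normalize to [0, 1]

R_sun = optimal_reflectance(E_sun)
R_sky = optimal_reflectance(E_sky)

# Sharpness of the dark-to-light transition = steepest step along the body
sharp_sun = np.max(np.abs(np.diff(R_sun)))
sharp_sky = np.max(np.abs(np.diff(R_sky)))
```

Even in this caricature, direct sun demands a much sharper dark-to-light transition than diffuse sky, matching the illumination dependence reported above.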