998 results for Surface normals
Abstract:
This paper describes a study and analysis of surface normal-based descriptors for 3D object recognition. Specifically, we evaluate the behaviour of these descriptors in the recognition process using virtual models of objects created with CAD software. We then test them in real scenes on synthetic objects produced with a 3D printer from the virtual models. In both cases, the same virtual models are used in the matching process to find similarity; the two experiments differ in the type of views used in the tests. Our analysis evaluates three aspects: the effectiveness of the 3D descriptors depending on the camera viewpoint and the geometric complexity of the model, the runtime of the recognition process, and the success rate in recognizing a view of an object among the models stored in the database.
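Surface normal-based descriptors of the kind evaluated above start from per-point normal estimation. As a minimal sketch (not the paper's code), the following assumes numpy and scipy and estimates a unit normal for each point of a cloud by PCA over its k nearest neighbours; k and the example data are illustrative.

# Minimal sketch: PCA-based normal estimation, the usual first step before
# computing normal-based descriptors. Assumes numpy/scipy; k is illustrative.
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=16):
    """Estimate a unit normal per point from its k nearest neighbours."""
    tree = cKDTree(points)
    normals = np.empty_like(points)
    for i, p in enumerate(points):
        _, idx = tree.query(p, k=k)
        nbrs = points[idx] - points[idx].mean(axis=0)
        # The normal is the direction of least variance of the neighbourhood.
        _, _, vt = np.linalg.svd(nbrs, full_matrices=False)
        normals[i] = vt[-1]
    return normals

# Example: normals of a nearly planar patch should be close to +/- z.
pts = np.random.rand(500, 3) * [1.0, 1.0, 0.01]
print(estimate_normals(pts)[:3])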
Abstract:
We present a method for recovering facial shape from an image of a face and a reference model. The zenith angle of the surface normal is recovered directly from the image intensities. The azimuth angle of the reference model is then combined with the computed zenith angle to obtain a new field of surface normals. After integration of the needle map, the recovered surface shows the facial features of the input image mapped over the reference model. Experiments demonstrate that in the Lambertian case, the surface is recovered with high accuracy. For non-Lambertian cases, experiments suggest potential for face recognition applications.
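The combination step described above can be sketched as follows, under the simplifying assumption of a Lambertian surface lit frontally, so that intensity equals the cosine of the zenith angle; the function name and the given reference azimuth field are ours, not the paper's.

# Sketch of combining zenith angles from image intensities with azimuth angles
# from a reference model into a new needle map (Lambertian, frontal light).
import numpy as np

def combine_normals(intensity, ref_azimuth):
    zenith = np.arccos(np.clip(intensity, 0.0, 1.0))   # theta from brightness
    nx = np.sin(zenith) * np.cos(ref_azimuth)
    ny = np.sin(zenith) * np.sin(ref_azimuth)
    nz = np.cos(zenith)
    return np.stack([nx, ny, nz], axis=-1)

# The resulting needle map would then be integrated (e.g. with a standard
# Frankot-Chellappa-style integrator) to obtain the recovered surface.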
Abstract:
Feature vectors can be anything from simple surface normals to more complex feature descriptors. Feature extraction is important for solving various computer vision problems, e.g. registration, object recognition and scene understanding. Most of these techniques cannot be computed online because of their complexity and the context in which they are applied, making it impossible to compute these features in real time for many points in the scene. In this work, a hardware-based implementation of 3D feature extraction and 3D object recognition is proposed to accelerate these methods and therefore the entire pipeline of RGBD-based computer vision systems where such features are typically used. Using a GPU as a general-purpose processor can achieve considerable speed-ups compared with a CPU implementation. In this work, advantageous results are obtained using the GPU to accelerate the computation of a 3D descriptor based on the calculation of 3D semi-local surface patches of partial views. This allows descriptor computation at several points of a scene in real time. The benefits of the accelerated descriptor have been demonstrated in object recognition tasks. Source code will be made publicly available as a contribution to the open source Point Cloud Library.
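As an illustrative sketch only, the toy descriptor below histograms the angles between a keypoint's normal and its neighbours' normals; it is not the semi-local surface-patch descriptor of the work above, but it shows why such computations parallelize well: each keypoint is processed independently, which is what makes a GPU implementation attractive.

# Toy stand-in for a semi-local, normal-based descriptor: a histogram of the
# angles between the keypoint normal and the normals of its neighbours.
import numpy as np

def toy_patch_descriptor(normals, neighbour_idx, keypoint, bins=8):
    n0 = normals[keypoint]
    nbrs = normals[neighbour_idx[keypoint]]
    cosines = np.clip(nbrs @ n0, -1.0, 1.0)
    hist, _ = np.histogram(np.arccos(cosines), bins=bins, range=(0.0, np.pi))
    return hist / max(hist.sum(), 1)

# Each keypoint only reads its own neighbourhood, so descriptors for many
# keypoints can be computed in parallel (e.g. one GPU thread block per keypoint).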
Abstract:
Flame particles are surface points that remain embedded on, and comove with, a given iso-scalar surface within a flame. Tracking flame particles allows us to study the fate of propagating surface locations uniquely identified throughout their evolution in time. In this work, using Direct Numerical Simulations, we study the finite lifetime of flame particles residing on iso-temperature surfaces of statistically planar H2-air flames interacting with near-isotropic turbulence. We find that individual flame particles, as well as their ensemble, experience a progressively increasing tangential straining rate (K_t) and increasingly negative curvature (κ) near the end of their lifetime, before finally being annihilated. Flame particle tracking under two different turbulent flow conditions shows that this tendency of local flame surfaces to be strained and cusped towards pinch-off from the main surface is a rather generic feature, independent of initial conditions, locations and ambient turbulence intensity levels. The evolution of the alignment between the flame surface normals and the principal components of the local straining rates is also tracked. We find that the surface normals, initially aligned with the most extensive principal strain rate component, rotate near the end of the flame particles' lifetime to enable preferential alignment between the surface tangent and the most extensive principal strain rate component. This could explain the persistently increasing tangential strain rate, the formation of sharp negative curvature, and the eventual detachment.
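For reference, the tangential strain rate and curvature of an iso-scalar surface with unit normal n are commonly defined as below; the notation is ours and may differ from the paper's conventions.

% Common definitions for an iso-scalar surface with unit normal n_i
% (notation ours; sign conventions for curvature vary in the literature):
K_t = \left(\delta_{ij} - n_i n_j\right)\,\frac{\partial u_i}{\partial x_j},
\qquad
\kappa = \nabla \cdot \mathbf{n}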
Abstract:
We introduce a four-pass laser pulse compressor design based on two grating apertures, with two gratings per aperture, that is tolerant to some alignment errors and, importantly, to grating-to-grating period variations. Each half-beam samples each grating in a diamond-shaped compressor that is symmetric about a central bisecting plane. For any given grating, the two half-beams impinge on opposite sides of its surface normal. It is shown that the two split beams have no pointing difference from paired gratings with different periods. Furthermore, no phase shift between half-beams is incurred as long as the planes containing a grating line and the surface normal of each grating of the pair are parallel. For grating pairs satisfying this condition, the grating surfaces need not lie in the same plane, as changes in the gap between the two can compensate to bring the beams back into phase.
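The pointing argument rests on the standard plane-grating diffraction relation, reproduced below in our own notation and sign convention: the incidence angle and the diffraction angle of order m are measured from the grating surface normal, d is the groove period and λ the wavelength.

% Plane-grating diffraction relation (notation and sign convention ours):
\sin\theta_m + \sin\theta_i = \frac{m\lambda}{d}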
Abstract:
This thesis describes research in which genetic programming is used to automatically evolve shape grammars that construct three-dimensional models of possible external building architectures. A completely automated fitness function is used, which evaluates the three-dimensional building models according to different geometric properties such as surface normals, height, building footprint, and more. In order to evaluate the buildings on these different criteria, a multi-objective fitness function is used. The results obtained from the automated system were successful in satisfying the multiple objective criteria as well as in creating interesting and unique designs that a human-aided system might not discover. In this study of evolutionary design, the architectures created are not meant to be fully functional and structurally sound blueprints for constructing a building, but are meant to be inspirational ideas for possible architectural designs. The evolved models are applicable to today's architectural industry as well as to the video game and movie industries. Many new avenues for future work have also been discovered and highlighted.
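As an illustration only, one simple way to aggregate several geometric objectives into a single score is a weighted sum, as below; the property names, scoring rules and weights are our own simplification and not the thesis' actual multi-objective criteria.

# Illustrative weighted-sum aggregation of geometric objectives for an evolved
# building model (a simplification; the thesis uses its own multi-objective
# fitness over properties such as surface normals, height and footprint).
def building_fitness(model, target_height=30.0, weights=(0.4, 0.3, 0.3)):
    # model: a dict of precomputed geometric properties of the 3D building.
    normal_score = min(model["distinct_normal_directions"] / 20.0, 1.0)
    height_score = 1.0 - min(abs(model["height"] - target_height) / target_height, 1.0)
    footprint_score = min(model["footprint_area"] / model["site_area"], 1.0)
    scores = (normal_score, height_score, footprint_score)
    return sum(w * s for w, s in zip(weights, scores))

# Example evaluation of one candidate building.
print(building_fitness({"distinct_normal_directions": 14, "height": 27.0,
                        "footprint_area": 420.0, "site_area": 900.0}))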
Abstract:
The automatic extraction of biometric descriptors of anonymous people is a challenging scenario in camera networks. This task is typically accomplished using visual information. Calibrated RGBD sensors make it possible to extract point cloud information. We present a novel approach to the semantic description and re-identification of people based on individual point cloud information. The proposed approach combines simple geometric features with point cloud features based on surface normals.
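A minimal sketch of this kind of combined descriptor (not the paper's actual features): a few simple geometric measurements of the person's point cloud concatenated with a histogram over the elevation angles of its surface normals. The measurement choices are ours.

# Minimal sketch: simple geometric cues plus a normal-based histogram.
import numpy as np

def person_descriptor(points, normals, bins=10):
    height = points[:, 2].max() - points[:, 2].min()      # body height proxy
    width = points[:, 0].max() - points[:, 0].min()       # body width proxy
    elev = np.arcsin(np.clip(normals[:, 2], -1.0, 1.0))   # normal elevation
    hist, _ = np.histogram(elev, bins=bins,
                           range=(-np.pi / 2, np.pi / 2), density=True)
    return np.concatenate([[height, width], hist])

# Descriptors of two detections can then be compared with, e.g., a Euclidean
# or chi-squared distance for re-identification.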
Abstract:
We present a new approach to diffuse reflectance estimation for dynamic scenes. Non-parametric image statistics are used to transfer reflectance properties from a static example set to a dynamic image sequence. The approach allows diffuse reflectance estimation for surface materials with inhomogeneous appearance, such as those which commonly occur with patterned or textured clothing. Material editing is also possible by transferring edited reflectance properties. Material reflectance properties are initially estimated from static images of the subject under multiple directional illuminations using photometric stereo. The estimated reflectance together with the corresponding image under uniform ambient illumination form a prior set of reference material observations. Material reflectance properties are then estimated for video sequences of a moving person captured under uniform ambient illumination by matching the observed local image statistics to the reference observations. Results demonstrate that the transfer of reflectance properties enables estimation of the dynamic surface normals and subsequent relighting combined with material editing. This approach overcomes limitations of previous work on material transfer and relighting of dynamic scenes which was limited to surfaces with regions of homogeneous reflectance. We evaluate our approach for relighting 3D model sequences reconstructed from multiple view video. Comparison to previous model relighting demonstrates improved reproduction of detailed texture and shape dynamics.
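The initial per-pixel reflectance and normal estimates described above come from photometric stereo. A minimal Lambertian version is sketched below (our simplification; the paper's pipeline further handles inhomogeneous materials and ambient-light capture): with k >= 3 images under known directional lights, the scaled normal at each pixel is the least-squares solution of a linear system, and the albedo is its length.

# Minimal Lambertian photometric stereo: I = L @ (albedo * n) per pixel.
import numpy as np

def photometric_stereo(intensities, light_dirs):
    """intensities: (k, n_pixels) stacked images; light_dirs: (k, 3) unit rows."""
    g, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)  # (3, n_pixels)
    albedo = np.linalg.norm(g, axis=0)
    normals = (g / np.maximum(albedo, 1e-8)).T                    # unit normals
    return albedo, normals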
Abstract:
In this thesis, we develop an adaptive framework for Monte Carlo rendering, and more specifically for Monte Carlo Path Tracing (MCPT) and its derivatives. MCPT is attractive because it can handle a wide variety of light transport effects, such as depth of field, motion blur, indirect illumination, participating media, and others, in an elegant and unified framework. However, MCPT is a sampling-based approach, and is only guaranteed to converge in the limit, as the sampling rate grows to infinity. At finite sampling rates, MCPT renderings are often plagued by noise artifacts that can be visually distracting. The adaptive framework developed in this thesis leverages two core strategies to address noise artifacts in renderings: adaptive sampling and adaptive reconstruction. Adaptive sampling consists of increasing the sampling rate on a per-pixel basis, to ensure that each pixel value is below a predefined error threshold. Adaptive reconstruction leverages the available samples on a per-pixel basis, in an attempt to achieve an optimal trade-off between minimizing the residual noise artifacts and preserving the edges in the image. In our framework, we greedily minimize the relative Mean Squared Error (rMSE) of the rendering by iterating over sampling and reconstruction steps. Given an initial set of samples, the reconstruction step aims at producing the rendering with the lowest rMSE on a per-pixel basis, and the next sampling step then further reduces the rMSE by distributing additional samples according to the magnitude of the residual rMSE of the reconstruction. This iterative approach tightly couples the adaptive sampling and adaptive reconstruction strategies, by ensuring that we only densely sample regions of the image where adaptive reconstruction cannot properly resolve the noise. In a first implementation of our framework, we demonstrate the usefulness of our greedy error minimization using a simple reconstruction scheme leveraging a filterbank of isotropic Gaussian filters. In a second implementation, we integrate a powerful edge-aware filter that can adapt to the anisotropy of the image. Finally, in a third implementation, we leverage auxiliary feature buffers that encode scene information (such as surface normals, position, or texture) to improve the robustness of the reconstruction in the presence of strong noise.
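A schematic sketch of the iterate-sample-reconstruct idea, on a toy 1D "image" whose pixels are noisy estimates of a smooth signal. The error estimate, the two-filter "filterbank" and the sample-distribution rule below are simplified stand-ins for the thesis' rMSE-driven machinery, chosen only to make the loop concrete.

# Toy adaptive sampling + reconstruction loop (schematic, not the thesis' code).
import numpy as np

rng = np.random.default_rng(0)
n_pix, sigma = 256, 0.5
truth = np.sin(np.linspace(0.0, 2.0 * np.pi, n_pix))   # unknown to the renderer

def draw_samples(counts):
    """Sum of the newly drawn noisy samples for each pixel."""
    return np.array([rng.normal(truth[i], sigma, int(c)).sum()
                     for i, c in enumerate(counts)])

counts = np.full(n_pix, 8.0)
sums = draw_samples(counts)

for _ in range(4):
    mean = sums / counts
    var_of_mean = sigma ** 2 / counts                   # noise of the raw mean
    smoothed = np.convolve(mean, np.ones(9) / 9.0, mode="same")
    # Reconstruction: per pixel, keep whichever estimate has the lower
    # estimated MSE (filter noise reduction vs. a simple bias proxy).
    mse_raw = var_of_mean
    mse_smooth = var_of_mean / 9.0 + (smoothed - mean) ** 2
    recon = np.where(mse_smooth < mse_raw, smoothed, mean)
    residual = np.minimum(mse_raw, mse_smooth)
    rmse = residual / (recon ** 2 + 1e-2)               # relative error estimate
    # Adaptive sampling: distribute the next batch proportionally to the rMSE.
    new_counts = np.round(512.0 * rmse / rmse.sum())
    sums += draw_samples(new_counts)
    counts += new_counts

print("mean abs error of reconstruction:", np.abs(recon - truth).mean())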
Abstract:
In this paper, we present a consolidation method that is based on a new representation of 3D point sets. The key idea is to augment each surface point into a deep point by associating it with an inner point that resides on the meso-skeleton, which consists of a mixture of skeletal curves and sheets. The deep points representation results from a joint optimization applied to both ends of the deep points. The optimization objective is to fairly distribute the end points across the surface and the meso-skeleton, such that the deep point orientations agree with the surface normals. The optimization converges when the inner points form a coherent meso-skeleton and the surface points are consolidated, with the missing regions completed. The strength of this new representation stems from the fact that it comprises both local and non-local geometric information. We demonstrate the advantages of the deep points consolidation technique by employing it to consolidate and complete noisy point-sampled geometry with large missing parts.
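The orientation-agreement objective mentioned above can be written, in a much simplified form, as the energy term below: a deep point's orientation (surface point minus inner point) should align with the surface normal. This is only one illustrative term, not the paper's full joint optimization.

# One illustrative energy term: alignment between deep point orientations
# (surface point minus inner point) and the surface normals; zero when aligned.
import numpy as np

def orientation_agreement_energy(surface_pts, inner_pts, normals):
    d = surface_pts - inner_pts
    d = d / np.maximum(np.linalg.norm(d, axis=1, keepdims=True), 1e-8)
    n = normals / np.maximum(np.linalg.norm(normals, axis=1, keepdims=True), 1e-8)
    return float(np.mean(1.0 - np.sum(d * n, axis=1)))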
Abstract:
The alignment of the model amyloid peptide YYKLVFFC is investigated in bulk and at a solid surface using a range of spectroscopic methods employing polarized radiation. The peptide is based on a core sequence of the amyloid beta (Aβ) peptide, KLVFF. The attached tyrosine and cysteine units are exploited to yield information on alignment and on the possible formation of disulfide or dityrosine links. Polarized Raman spectroscopy on aligned stalks provides information on tyrosine orientation, which complements data from linear dichroism (LD) on aqueous solutions subjected to shear in a Couette cell. LD provides a detailed picture of the alignment of peptide strands and aromatic residues and was also used to probe the kinetics of self-assembly. This suggests initial association of phenylalanine residues, followed by subsequent registry of strands and orientation of tyrosine residues. X-ray diffraction (XRD) data from aligned stalks are used to extract orientational order parameters from the 0.48 nm reflection in the cross-beta pattern, from which an orientational distribution function is obtained. X-ray diffraction on solutions subjected to capillary flow confirmed orientation in situ at the level of the cross-beta pattern. The information on fibril and tyrosine orientation from polarized Raman spectroscopy is compared with results from NEXAFS experiments on samples prepared as films on silicon. This indicates that the fibrils are aligned parallel to the surface, with the phenyl ring normals perpendicular to the surface. Possible disulfide bridging leading to peptide dimer formation was excluded by Raman spectroscopy, whereas dityrosine formation was probed by fluorescence experiments and was found not to occur except under alkaline conditions. Congo red binding was found not to influence the cross-beta XRD pattern.
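Orientational order parameters of the kind extracted from the 0.48 nm reflection are commonly quantified by the Hermans parameter computed from the azimuthal intensity distribution I(β) of the reflection; the expressions below are the standard definitions in our own notation, not necessarily the exact estimator used in the paper.

% Hermans orientational order parameter from the azimuthal intensity I(beta):
\langle P_2 \rangle = \frac{3\langle \cos^2\beta \rangle - 1}{2},
\qquad
\langle \cos^2\beta \rangle =
\frac{\int_0^{\pi} I(\beta)\,\cos^2\beta\,\sin\beta\,d\beta}
     {\int_0^{\pi} I(\beta)\,\sin\beta\,d\beta}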
Abstract:
This database (Leemans & Cramer 1991) contains monthly averages of mean temperature, temperature range, precipitation, rain days and sunshine hours for the terrestrial surface of the globe, gridded at 0.5 degree longitude/latitude resolution. All grd-files contain the same 62483 pixels in the same order, with 30' latitude and longitude resolution. The coordinates are in decimal degrees and indicate the SW corner of each pixel. Topography is from ETOPO5 and indicates the modal elevation. The data were generated from a large database using the partial thin-plate splining algorithm (Hutchinson & Bischof 1983). This version is widely used around the globe, notably by all groups participating in the IGBP NPP model intercomparison.
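A minimal sketch for working with the grid convention described above (0.5 degree cells whose stored coordinates are the SW corner); shifting to cell centres, as done here, is our own illustrative convenience rather than part of the database definition.

# Convert a cell's stored SW-corner coordinates to its centre coordinates.
def pixel_centre(sw_lon, sw_lat, resolution=0.5):
    return sw_lon + resolution / 2.0, sw_lat + resolution / 2.0

print(pixel_centre(-0.5, 51.0))   # -> (-0.25, 51.25)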
Abstract:
We present an algorithm and the associated single-view capture methodology to acquire the detailed 3D shape, bends, and wrinkles of deforming surfaces. Moving 3D data has been difficult to obtain with methods that rely on known surface features, structured light, or silhouettes. Multispectral photometric stereo is an attractive alternative because it can recover a dense normal field from an untextured surface. We show how to capture such data, which in turn allows us to demonstrate the strengths and limitations of our simple frame-to-frame registration over time. Experiments were performed on monocular video sequences of untextured cloth and faces with and without white makeup. Subjects were filmed under spatially separated red, green, and blue lights. Our first finding is that the color photometric stereo setup is able to produce smoothly varying per-frame reconstructions with high detail. Second, when these 3D reconstructions are augmented with 2D tracking results, one can both register the surfaces and relax the homogeneous-color restriction of the single-hue subject. Quantitative and qualitative experiments explore both the practicality and the limitations of this simple multispectral capture system.
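The core per-pixel step of multispectral photometric stereo can be sketched as follows (a simplification; the actual system also involves calibration and the frame-to-frame registration described above). With three spectrally separated lights, each pixel's RGB value is approximately a linear function of the surface normal, c = M n, so normals are recovered by inverting a single 3x3 matrix; M here stands for an assumed, pre-calibrated light/albedo matrix.

# Recover a per-pixel normal field from one RGB frame, given a calibrated 3x3
# matrix M mapping unit normals to RGB responses (c = M @ n per pixel).
import numpy as np

def normals_from_rgb(image, M):
    """image: (h, w, 3) RGB frame; M: (3, 3) calibrated light/albedo matrix."""
    g = np.einsum("ij,hwj->hwi", np.linalg.inv(M), image)
    norm = np.maximum(np.linalg.norm(g, axis=-1, keepdims=True), 1e-8)
    return g / norm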