43 results for Paleoenvironmental and paleodietary reconstruction
in the Cambridge University Engineering Department Publications Database
Abstract:
This paper proposes a novel framework to construct a geometric and photometric model of a viewed object that can be used for visualisation in arbitrary pose and illumination. The method is solely based on images and does not require any specialised equipment. We assume that the object has a piecewise smooth surface and that its reflectance can be modelled using a parametric bidirectional reflectance distribution function. Without assuming any prior knowledge of the object, geometry and reflectance have to be estimated simultaneously, and occlusion and shadows have to be treated consistently. We exploit geometric and photometric consistency using the fact that surface orientation and reflectance are local invariants. In a first implementation, we demonstrate the method using a Lambertian object placed on a turntable and illuminated by a number of unknown point light sources. A discrete voxel model is initialised to the visual hull, and voxels identified as inconsistent with the invariants are removed iteratively. The resulting model is used to render images in novel pose and illumination. © 2004 Elsevier B.V. All rights reserved.
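The voxel-removal step above rests on a simple test: the intensities a voxel projects to in each view must be explainable by a single Lambertian albedo under the estimated lights. Below is a minimal NumPy sketch of such a consistency test; the function name, data layout, and threshold are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def lambertian_consistency(intensities, normal, light_dirs, threshold=0.05):
    """Check whether observed intensities at a voxel are consistent with a
    Lambertian surface of a single (unknown) albedo.

    intensities : (K,) observed image intensities of the voxel in K views
    normal      : (3,) hypothesised surface normal at the voxel
    light_dirs  : (K, 3) unit light directions for each observation
    """
    shading = np.clip(light_dirs @ normal, 0.0, None)   # cos(theta) terms
    visible = shading > 1e-3                            # ignore self-shadowed views
    if visible.sum() < 2:
        return True                                     # too few observations to reject
    # Least-squares albedo under I_k ~ albedo * max(n . l_k, 0)
    albedo = (intensities[visible] @ shading[visible]) / (shading[visible] @ shading[visible])
    residual = intensities[visible] - albedo * shading[visible]
    return np.sqrt(np.mean(residual**2)) < threshold    # consistent if the residual is small

# Toy usage: a voxel lit by three distant lights, observed with a little noise
rng = np.random.default_rng(0)
n = np.array([0.0, 0.0, 1.0])
L = np.array([[0, 0, 1], [0.6, 0, 0.8], [0, 0.6, 0.8]], dtype=float)
I = 0.7 * np.clip(L @ n, 0, None) + 0.01 * rng.standard_normal(3)
print(lambertian_consistency(I, n, L))   # True: explainable by one albedo
```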
Fourier analysis and Gabor filtering for texture analysis and local reconstruction of general shapes
Abstract:
Since the pioneering work of Gibson in 1950, Shape-From-Texture has been considered by researchers as a hard problem, mainly due to restrictive assumptions which often limit its applicability. We assume a very general stochastic homogeneity and perspective camera model, for both deterministic and stochastic textures. A multi-scale distortion is efficiently estimated with a previously presented method based on Fourier analysis and Gabor filters. The novel 3D reconstruction method that we propose applies to general shapes, and includes non-developable and extensive surfaces. Our algorithm is accurate, robust and compares favorably to the present state of the art of Shape-From-Texture. Results show its application to non-invasively study shape changes with laid-on textures, while rendering and retexturing of cloth is suggested for future work. © 2009 IEEE.
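The distortion estimation described above builds on Gabor filtering to measure local texture scale and orientation. The NumPy sketch below of a Gabor filter bank applied via the FFT illustrates the kind of multi-scale response map such a method starts from; the kernel parameters and function names are assumptions, not the paper's code.

```python
import numpy as np

def gabor_kernel(ksize, sigma, theta, wavelength, gamma=0.5):
    """Real-valued Gabor kernel tuned to one orientation and spatial frequency."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def filter_bank_responses(image, wavelengths, orientations):
    """Convolve an image with a bank of Gabor filters (via the FFT) and return
    per-pixel response magnitudes; the dominant filter per pixel is a crude proxy
    for the local texture scale/orientation used in distortion estimation."""
    H, W = image.shape
    F = np.fft.rfft2(image)
    responses = []
    for lam in wavelengths:
        for th in orientations:
            k = gabor_kernel(31, sigma=0.5 * lam, theta=th, wavelength=lam)
            kf = np.fft.rfft2(k, s=(H, W))
            responses.append(np.abs(np.fft.irfft2(F * kf, s=(H, W))))
    return np.stack(responses)   # (n_filters, H, W)

# Toy usage: a sinusoidal "texture" whose wavelength should dominate the bank
img = np.sin(2 * np.pi * np.arange(128)[None, :] / 8.0) * np.ones((128, 1))
energy = filter_bank_responses(img, wavelengths=[4, 8, 16], orientations=[0, np.pi / 2])
print(energy.mean(axis=(1, 2)).round(3))   # strongest mean response for wavelength 8, theta 0
```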
Abstract:
Increasing the field of view of a holographic display while maintaining adequate image size is a difficult task. To address this problem, we designed a system that tessellates several sub-holograms into one large hologram at the output. The sub-holograms we generate are similar to kinoforms but are computed without the paraxial approximation. The sub-holograms are loaded onto a single spatial light modulator consecutively and relayed to the appropriate position at the output through a combination of optics and scanning of the reconstruction light. We review the method of computer-generated holography and describe the working principles of our system. Results from our proof-of-concept system show an improved field of view and reconstructed image size. © 2009 IEEE.
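Computing a hologram without the paraxial approximation typically means using the angular spectrum transfer function rather than a Fresnel kernel. The sketch below is a hedged, generic illustration rather than the system described above: it back-propagates a toy target with the angular spectrum method and keeps only the phase to obtain a kinoform-like sub-hologram. All parameter values are placeholders.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, z):
    """Propagate a complex field by distance z using the non-paraxial
    angular spectrum transfer function exp(i*z*sqrt(k^2 - kx^2 - ky^2))."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    k = 2 * np.pi / wavelength
    kz_sq = k**2 - (2 * np.pi * FX)**2 - (2 * np.pi * FY)**2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))              # evanescent components dropped
    H = np.exp(1j * kz * z) * (kz_sq > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Phase-only (kinoform-like) hologram of a toy target intensity:
# back-propagate the target to the SLM plane and keep only the phase.
target = np.zeros((256, 256)); target[96:160, 96:160] = 1.0
wavelength, pitch, z = 532e-9, 8e-6, 0.1              # 532 nm light, 8 um pixels, 10 cm
slm_field = angular_spectrum_propagate(np.sqrt(target), wavelength, pitch, -z)
kinoform = np.exp(1j * np.angle(slm_field))           # phase-only field on the SLM
reconstruction = np.abs(angular_spectrum_propagate(kinoform, wavelength, pitch, z))**2
inside = reconstruction[96:160, 96:160].mean()
print(inside > reconstruction.mean())                 # crude check: energy concentrates in the target
```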
Abstract:
This paper presents a novel technique for reconstructing an outdoor sculpture from an uncalibrated image sequence acquired around it using a hand-held camera. The technique introduced here uses only the silhouettes of the sculpture for both motion estimation and model reconstruction, and no corner detection or matching is necessary. This is very important as most sculptures are composed of smooth textureless surfaces, and hence their silhouettes are very often the only information available from their images. In addition, as opposed to previous works, the proposed technique does not require the camera motion to be perfectly circular (e.g., a turntable sequence). It employs an image rectification step before the motion estimation step to obtain a rough estimate of the camera motion, which is only approximately circular. A refinement process is then applied to obtain the true general motion of the camera. This allows the technique to handle large outdoor sculptures which cannot be rotated on a turntable, making it much more practical and flexible.
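Silhouette-only reconstruction of this kind ultimately intersects the silhouette cones, i.e. it carves away voxels whose projection falls outside any silhouette. Below is a minimal visual-hull carving sketch in NumPy; the data layout and function name are illustrative assumptions, not the method of the paper, which additionally estimates the (approximately circular) camera motion from the silhouettes themselves.

```python
import numpy as np

def carve_visual_hull(voxels, silhouettes, projections):
    """Keep only the voxels whose projection lands inside every silhouette.

    voxels      : (N, 3) candidate voxel centres in world coordinates
    silhouettes : list of (H, W) boolean masks, one per view
    projections : list of (3, 4) camera projection matrices, one per view
    """
    keep = np.ones(len(voxels), dtype=bool)
    homog = np.hstack([voxels, np.ones((len(voxels), 1))])    # (N, 4) homogeneous points
    for mask, P in zip(silhouettes, projections):
        uvw = homog @ P.T                                     # (N, 3) projected points
        u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
        v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
        H, W = mask.shape
        inside_image = (u >= 0) & (u < W) & (v >= 0) & (v < H)
        inside_sil = np.zeros(len(voxels), dtype=bool)
        inside_sil[inside_image] = mask[v[inside_image], u[inside_image]]
        keep &= inside_sil                                    # carve away inconsistent voxels
    return voxels[keep]
```

With accurate camera matrices, the surviving voxels approximate the visual hull, which bounds the true surface from outside.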
Abstract:
Breather stability and longevity in thermally relaxing nonlinear arrays are investigated under the scrutiny of the analysis and tools employed for time series and state reconstruction of a dynamical system. We briefly review the methods used in the analysis and characterize a breather in terms of the results obtained with such methods. Our present work focuses on spontaneously appearing breathers in thermal Fermi-Pasta-Ulam arrays, but we believe that the conclusions are general enough to describe many other related situations; the particular case described in detail is presented as another example of systems where three incommensurable frequencies dominate their chaotic dynamics (reminiscent of the Ruelle-Takens scenario for the appearance of chaotic behavior in nonlinear systems). This characterization may also be of great help for the discovery of breathers in experimental situations where the temporal evolution of a local variable (like the site energy) is the only available/measured data. © 2005 American Institute of Physics.
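The time-series and state-reconstruction tools referred to above typically include delay embedding and spectral analysis of a local observable such as the site energy. The short NumPy sketch below shows both on a synthetic signal with three incommensurate frequencies; it is a generic illustration under assumed parameters, not the analysis pipeline of the paper.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Takens-style delay embedding of a scalar time series x into R^dim."""
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i * tau: i * tau + n] for i in range(dim)], axis=1)

def dominant_frequencies(x, dt, n_peaks=3):
    """Return the n_peaks strongest frequencies in the power spectrum of x."""
    spectrum = np.abs(np.fft.rfft(x - x.mean()))**2
    freqs = np.fft.rfftfreq(len(x), d=dt)
    return freqs[np.argsort(spectrum)[::-1][:n_peaks]]

# Toy usage: a signal with three incommensurate frequencies, as in the breather picture
t = np.arange(0, 200, 0.01)
x = (np.sin(2 * np.pi * 1.0 * t)
     + 0.5 * np.sin(2 * np.pi * np.sqrt(2) * t)
     + 0.3 * np.sin(2 * np.pi * np.e * t))
print(np.sort(dominant_frequencies(x, dt=0.01)))   # roughly 1.0, 1.41, 2.72
embedded = delay_embed(x, dim=3, tau=25)           # points on a torus-like attractor
print(embedded.shape)
```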
Abstract:
Temporal synchronization of multiple video recordings of the same dynamic event is a critical task in many computer vision applications, e.g., novel view synthesis and 3D reconstruction. Typically this information is implied through the time-stamp information embedded in the video streams. User-generated videos shot using consumer-grade equipment do not contain this information; hence, there is a need to temporally synchronize signals using the visual information itself. Previous work in this area has either assumed good-quality data with relatively simple dynamic content or the availability of precise camera geometry. Our first contribution is a synchronization technique which establishes correspondences between feature trajectories across views in a novel way, and specifically targets the kind of complex content found in consumer-generated sports recordings, without assuming precise knowledge of fundamental matrices or homographies. We evaluate performance using a number of real video recordings and show that our method is able to synchronize to within 1 sec, which is significantly better than previous approaches. Our second contribution is a robust and unsupervised view-invariant activity recognition descriptor that exploits recurrence plot theory on spatial tiles. The descriptor is individually shown to characterize activities from different views under occlusions better than state-of-the-art approaches. We combine this descriptor with our proposed synchronization method and show that it can further refine the synchronization index. © 2013 ACM.
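A recurrence plot, on which the proposed descriptor builds, is simply a thresholded matrix of pairwise distances between per-frame features. The sketch below computes one for a toy per-tile feature sequence; the feature choice and threshold are illustrative assumptions, not the descriptor defined in the paper.

```python
import numpy as np

def recurrence_plot(features, eps=None):
    """Binary recurrence matrix R[i, j] = 1 if frames i and j have similar features.

    features : (T, D) per-frame feature vectors (e.g. pooled motion in a spatial tile)
    eps      : distance threshold; defaults to 10% of the maximum pairwise distance
    """
    diff = features[:, None, :] - features[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    if eps is None:
        eps = 0.1 * dist.max()
    return (dist <= eps).astype(np.uint8)

# Toy usage: a periodic "activity" signal produces the diagonal-line texture
# that recurrence-based descriptors summarise.
t = np.arange(200)
feats = np.stack([np.sin(2 * np.pi * t / 25), np.cos(2 * np.pi * t / 25)], axis=1)
R = recurrence_plot(feats)
print(R.shape, R.mean().round(2))   # (200, 200), fraction of recurrent frame pairs
```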
Abstract:
Estimating the fundamental matrix (F), to determine the epipolar geometry between a pair of images or video frames, is a basic step for a wide variety of vision-based functions used in construction operations, such as camera-pair calibration, automatic progress monitoring, and 3D reconstruction. Currently, robust methods (e.g., SIFT + normalized eight-point algorithm + RANSAC) are widely used in the construction community for this purpose. Although they can provide acceptable accuracy, the significant amount of required computational time impedes their adoption in real-time applications, especially video data analysis with many frames per second. Aiming to overcome this limitation, this paper presents and evaluates the accuracy of a solution for finding F that combines two fast and consistent methods: SURF for the selection of a robust set of point correspondences and the normalized eight-point algorithm. This solution is tested extensively on construction-site image pairs including changes in viewpoint, scale, illumination, rotation, and moving objects. The results demonstrate that this method can be used for real-time applications (5 image pairs per second at a resolution of 640 × 480) involving scenes of the built environment.
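A pipeline of this shape (fast feature matching followed by the normalized eight-point algorithm) can be sketched in a few lines of OpenCV. The example below substitutes ORB for SURF, since SURF is patented and only ships with opencv-contrib (cv2.xfeatures2d.SURF_create); it is a hedged illustration of the described combination, not the authors' code, and the image paths are placeholders.

```python
import cv2
import numpy as np

def estimate_fundamental(img1, img2, max_matches=200):
    """Estimate F from two grayscale images: fast keypoint matching followed by
    the eight-point solve (OpenCV normalizes point coordinates internally)."""
    detector = cv2.ORB_create(nfeatures=2000)          # stand-in for SURF
    kp1, des1 = detector.detectAndCompute(img1, None)
    kp2, des2 = detector.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:max_matches]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_8POINT)
    return F, pts1, pts2

# Usage (file names are hypothetical):
# F, pts1, pts2 = estimate_fundamental(cv2.imread("site_a.jpg", 0), cv2.imread("site_b.jpg", 0))
```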
Abstract:
Temporal synchronization of multiple video recordings of the same dynamic event is a critical task in many computer vision applications, e.g., novel view synthesis and 3D reconstruction. Typically this information is implied, since recordings are made using the same timebase, or time-stamp information is embedded in the video streams. Recordings made using consumer-grade equipment do not contain this information; hence, there is a need to temporally synchronize signals using the visual information itself. Previous work in this area has either assumed good-quality data with relatively simple dynamic content or the availability of precise camera geometry. In this paper, we propose a technique which exploits feature trajectories across views in a novel way, and specifically targets the kind of complex content found in consumer-generated sports recordings, without assuming precise knowledge of fundamental matrices or homographies. Our method automatically selects the moving feature points in the two unsynchronized videos whose 2D trajectories can be best related, thereby helping to infer the synchronization index. We evaluate performance using a number of real recordings and show that synchronization can be achieved to within 1 sec, which is better than previous approaches. © 2013 ACM.
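Once trajectories whose 2D motion can be related have been selected, the synchronization index is essentially the lag that best aligns 1-D signals derived from them. Below is a hedged NumPy sketch of such a lag search via normalized cross-correlation; the signal definition, sign convention, and parameters are assumptions, not the paper's method.

```python
import numpy as np

def estimate_offset(sig_a, sig_b, fps, max_lag_s=5.0):
    """Estimate the temporal offset (seconds) between two 1-D signals derived
    from feature trajectories (e.g. per-frame speed of a tracked point).
    Returns the lag such that sig_a shifted by lag*fps frames best matches sig_b."""
    a = (sig_a - sig_a.mean()) / (sig_a.std() + 1e-9)
    b = (sig_b - sig_b.mean()) / (sig_b.std() + 1e-9)
    max_lag = int(max_lag_s * fps)
    lags = np.arange(-max_lag, max_lag + 1)
    scores = []
    for lag in lags:
        if lag >= 0:
            x, y = a[lag:], b[:len(b) - lag]
        else:
            x, y = a[:len(a) + lag], b[-lag:]
        n = min(len(x), len(y))
        scores.append(np.dot(x[:n], y[:n]) / n)   # normalized correlation at this lag
    return lags[int(np.argmax(scores))] / fps

# Toy usage: sig_b runs 12 frames (0.4 s at 30 fps) ahead of sig_a
rng = np.random.default_rng(1)
base = rng.standard_normal(600)
print(estimate_offset(base[:-12], base[12:], fps=30))   # ~0.4
```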