35 results for Cameras
in the Cambridge University Engineering Department Publications Database
2D PIV measurements in the near field of grid turbulence using stitched fields from multiple cameras
Abstract:
We present measurements of grid turbulence using 2D particle image velocimetry taken immediately downstream from the grid at a Reynolds number of Re_M = 16500, where M is the rod spacing. A long field of view of 14M x 4M in the down- and cross-stream directions was achieved by stitching multiple cameras together. Two uniform biplanar grids were selected to have the same M and pressure drop but different rod diameter D and cross-section. A large data set (10^4 vector fields) was obtained to ensure good convergence of second-order statistics. Estimations of the dissipation rate ε of turbulent kinetic energy (TKE) were found to be sensitive to the number of mean-squared velocity gradient terms included, but not to whether the turbulence was assumed to adhere to isotropy or axisymmetry. The resolution dependency of different turbulence statistics was assessed with a procedure that does not rely on the dissipation scale η. The streamwise evolution of the TKE components and ε was found to collapse across grids when the rod diameter was included in the normalisation. We argue that this should be the case between all regular grids when the other relevant dimensionless quantities are matched and the flow has become homogeneous across the stream. Two-point space correlation functions at x/M = 1 show evidence of complex wake interactions which exhibit a strong Reynolds number dependence. However, these changes in initial conditions disappear, indicating rapid cross-stream homogenisation. On the other hand, isotropy was, as expected, not found to be established by x/M = 12 for any case studied. © Springer-Verlag 2012.
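As a worked illustration of the dissipation estimate discussed above (not code from the paper): under full isotropy, ε can be estimated from a single mean-squared velocity gradient as ε = 15ν⟨(∂u/∂x)²⟩, and retaining more of the gradient terms available from 2D PIV changes the estimate, which is the sensitivity the abstract reports. A minimal sketch, with illustrative variable names and grid spacing:

```python
import numpy as np

def dissipation_isotropic(u_fields, dx, nu):
    """Estimate the TKE dissipation rate from 2D PIV snapshots using the
    isotropic surrogate eps = 15 * nu * <(du/dx)^2>.

    u_fields : (n_snapshots, ny, nx) streamwise velocity fields [m/s]
    dx       : vector spacing in the streamwise direction [m]
    nu       : kinematic viscosity [m^2/s]
    """
    # Streamwise gradient of the streamwise velocity, per snapshot.
    dudx = np.gradient(u_fields, dx, axis=2)
    # Average over all snapshots and all points (ensemble + space).
    return 15.0 * nu * np.mean(dudx ** 2)
```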
Abstract:
Model-based optical motion capture systems require knowledge of the position of the markers relative to the underlying skeleton, the lengths of the skeleton's limbs, and which limb each marker is attached to. These model parameters are typically assumed and entered into the system manually, although techniques exist for calculating some of them, such as the position of the markers relative to the skeleton's joints. We present a fully automatic procedure for determining these model parameters. It tracks the 2D positions of the markers on the cameras' image planes and determines which markers lie on each limb before calculating the position of the underlying skeleton. The only assumption is that the skeleton consists of rigid limbs connected with ball joints. The proposed system is demonstrated on a number of real data examples and is shown to calculate good estimates of the model parameters in each. © 2004 Elsevier B.V. All rights reserved.
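A minimal sketch (not the authors' algorithm) of the rigid-limb cue this kind of approach exploits: markers attached to the same rigid limb keep a near-constant mutual distance over the sequence, so clustering markers by the variability of their pairwise distances groups them by limb. The tolerance `rigidity_tol` is an assumed tuning parameter:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def group_markers_into_limbs(tracks, rigidity_tol=5.0):
    """tracks: (n_frames, n_markers, 3) marker positions.
    Returns one integer limb label per marker."""
    # Pairwise marker distances in every frame.
    diffs = tracks[:, :, None, :] - tracks[:, None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)      # (n_frames, n, n)
    # Markers on the same rigid limb -> near-zero distance variability.
    rigidity = dists.std(axis=0)                # (n, n), symmetric
    # Single-linkage clustering on the variability matrix.
    Z = linkage(squareform(rigidity, checks=False), method='single')
    return fcluster(Z, t=rigidity_tol, criterion='distance')
```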
Abstract:
A modular image capture system with close integration to CCD cameras has been developed. The aim is to produce a system capable of integrating CCD sensor, image capture and image processing into a single compact unit. This close integration provides a direct mapping between CCD pixels and digital image pixels. The system has been interfaced to a digital signal processor board for the development and control of image processing tasks. These have included characterization and enhancement of noisy images from an intensified camera and measurement to subpixel resolutions. A highly compact form of the image capture system is in an advanced stage of development. This consists of a single FPGA device and a single VRAM, providing a two-chip image capture system capable of being integrated into a CCD camera. A miniature compact PC has been developed using a novel modular interconnection technique, providing a processing unit in a three-dimensional format highly suited to integration into a CCD camera unit. Work is under way to interface the compact capture system to the PC using this interconnection technique, combining CCD sensor, image capture and image processing into a single compact unit. © 2005 SPIE - The International Society for Optical Engineering.
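The subpixel measurement mentioned above is commonly done with an intensity-weighted centroid around the brightest pixel; the following is an illustrative sketch of that generic technique, not necessarily the method implemented in this system (window size and border handling are simplified):

```python
import numpy as np

def subpixel_peak(img, win=3):
    """Locate the brightest feature to subpixel accuracy via an
    intensity-weighted centroid over a small window (interior peaks only)."""
    r, c = np.unravel_index(np.argmax(img), img.shape)
    half = win // 2
    patch = img[r - half:r + half + 1, c - half:c + half + 1].astype(float)
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    w = patch.sum()
    # Centroid offset within the window, added to the integer peak location.
    return r + (ys * patch).sum() / w, c + (xs * patch).sum() / w
```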
Abstract:
An adaptive lens, which has variable focus and is rapidly controllable with simple low-power electronics, has numerous applications in optical telecommunications devices, 3D display systems, miniature cameras and adaptive optics. The University of Durham is developing a range of adaptive liquid crystal lenses, and here we describe work on the construction of modal liquid crystal lenses. This type of lens was first described by Naumov [1] and further developed by others [2-4]. In this system, a spatially varying and circularly symmetric voltage profile can be generated across a liquid-crystal cell, generating a lens-like refractive index profile. Such devices are simple in design and do not require a pixellated structure. The shape and focussing power of the lens can be controlled by varying the applied electric field and frequency. Results show adaptive lenses operating at optical wavelengths with continuously variable focal lengths from infinity to 70 cm. Switching speeds are of the order of 1 second between focal positions. Manufacturing methods of our adaptive lenses are presented, together with the latest results on the performance of these devices.
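As a back-of-the-envelope illustration (not figures from the paper): a lens of focal length f needs a parabolic optical path difference OPD(r) ≈ r²/(2f) across its aperture, so the refractive-index modulation required across a cell of thickness d is Δn(r) ≈ r²/(2fd). Using the quoted 70 cm shortest focal length with an assumed aperture and cell thickness:

```python
import numpy as np

# Assumed device parameters (illustrative only, not the Durham devices' specs).
aperture_radius = 2.5e-3   # m
cell_thickness  = 20e-6    # m
f = 0.70                   # m, shortest focal length quoted in the abstract

r = np.linspace(0.0, aperture_radius, 100)
opd = r ** 2 / (2.0 * f)           # parabolic optical path difference [m]
delta_n = opd / cell_thickness     # required refractive-index modulation

print(f"peak OPD: {opd[-1] * 1e6:.2f} um, peak delta-n: {delta_n[-1]:.3f}")
# -> a peak delta-n of roughly 0.22, within reach of nematic liquid crystals
```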
Abstract:
This paper addresses the problem of automatically obtaining the object/background segmentation of a rigid 3D object observed in a set of images that have been calibrated for camera pose and intrinsics. Such segmentations can be used to obtain a shape representation of a potentially texture-less object by computing a visual hull. We propose an automatic approach where the object to be segmented is identified by the pose of the cameras instead of user input such as 2D bounding rectangles or brush strokes. The key to our method is a pairwise MRF framework that combines (a) foreground/background appearance models, (b) epipolar constraints and (c) weak stereo correspondence into a single segmentation cost function that can be efficiently solved by graph cuts. The segmentation thus obtained is further improved using silhouette coherency and then used to update the foreground/background appearance models, which are fed into the next graph-cut computation. These two steps are iterated until the segmentation converges. Our method can automatically provide a 3D surface representation even in texture-less scenes where multi-view stereo (MVS) methods might fail. Furthermore, it confers improved performance in images where the object is not readily separable from the background in colour space, an area that previous segmentation approaches have found challenging. © 2011 IEEE.
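To make the optimisation step concrete, here is a generic binary-MRF segmentation solved by graph cuts using the PyMaxflow library; the unary costs stand in for the combined appearance/epipolar/stereo terms of the paper's cost function, which this sketch does not reproduce:

```python
import maxflow  # PyMaxflow

def graphcut_segment(fg_cost, bg_cost, pairwise=1.0):
    """fg_cost, bg_cost: (H, W) per-pixel unary costs for labelling a
    pixel foreground/background. Returns a boolean (H, W) labelling."""
    g = maxflow.Graph[float]()
    nodeids = g.add_grid_nodes(fg_cost.shape)
    # 4-connected Potts smoothness term.
    g.add_grid_edges(nodeids, pairwise)
    # Terminal edges carry the unary costs.
    g.add_grid_tedges(nodeids, fg_cost, bg_cost)
    g.maxflow()
    # Minimum-cut labelling of the grid.
    return g.get_grid_segments(nodeids)
```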
Abstract:
Landslides occur both onshore and offshore; however, little attention has been given to offshore landslides (submarine landslides). The unique characteristics of submarine landslides include large mass movements and long travel distances on very gentle slopes. Submarine landslides have significant impacts and consequences for offshore and coastal facilities. This paper presents data from a series of centrifuge tests simulating submarine landslide flows on a very gentle slope. Experiments were conducted at different gravity levels to understand the scaling laws involved in simulating submarine landslide flows through centrifuge modelling. The slope was instrumented with miniature sensors for measurements of pore pressure beneath the flow. A series of digital cameras was used to capture the flow in flight. The results provide a better understanding of the scaling laws that need to be adopted for centrifuge experiments involving submarine landslide flows and give an insight into the flow mechanisms. © 2010 Taylor & Francis Group, London.
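For context, the textbook centrifuge scaling factors at N times Earth gravity are summarised in the sketch below (standard laws, not results from this paper); which of the competing time scalings governs a submarine landslide flow is exactly what multi-gravity testing helps resolve:

```python
def model_scale_factors(N):
    """Standard centrifuge scaling: model quantity = prototype quantity
    multiplied by the factor below, for a test at N g."""
    return {
        "length":           1 / N,
        "stress":           1.0,
        "strain":           1.0,
        "velocity":         1.0,        # dynamic events
        "time (dynamic)":   1 / N,
        "time (diffusion)": 1 / N ** 2,
    }

for quantity, factor in model_scale_factors(50).items():
    print(f"{quantity:18s}: {factor:g}")
```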
Abstract:
We present a multispectral photometric stereo method for capturing the geometry of deforming surfaces. A novel photometric calibration technique allows calibration of scenes containing multiple piecewise-constant chromaticities. This method estimates per-pixel photometric properties, then uses a RANSAC-based approach to estimate the dominant chromaticities in the scene. A likelihood term is developed linking surface normal, image intensity and photometric properties, which allows the estimation of the number of chromaticities present in a scene to be framed as a model estimation problem. The Bayesian Information Criterion is applied to automatically estimate the number of chromaticities present during calibration. A two-camera stereo system provides low-resolution geometry, allowing the likelihood term to be used in segmenting new images into regions of constant chromaticity. This segmentation is carried out in a Markov Random Field framework and allows the correct photometric properties to be used at each pixel to estimate a dense normal map. Results are shown on several challenging real-world sequences, demonstrating state-of-the-art results using only two cameras and three light sources. Quantitative evaluation is provided against synthetic ground truth data. © 2011 IEEE.
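For reference, the classical Lambertian photometric-stereo solve that underlies normal estimation from a few known light sources is sketched below; this is the textbook building block, not the paper's multispectral pipeline:

```python
import numpy as np

def photometric_stereo(images, lights):
    """images: (m, H, W) intensities under m known lights;
    lights: (m, 3) unit light directions.
    Returns unit normals (H, W, 3) and albedo (H, W)."""
    m, H, W = images.shape
    I = images.reshape(m, -1)                        # (m, n_pixels)
    # Lambertian model I = L @ (albedo * n): least squares per pixel.
    G = np.linalg.lstsq(lights, I, rcond=None)[0]    # (3, n_pixels)
    albedo = np.linalg.norm(G, axis=0)
    normals = (G / np.maximum(albedo, 1e-12)).T.reshape(H, W, 3)
    return normals, albedo.reshape(H, W)
```

With the three light sources mentioned above, `lights` is 3 x 3 and the per-pixel system is exactly determined.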
Abstract:
A number of methods are commonly used today to collect spatial data of infrastructure (time-of-flight, visual triangulation, etc.). However, current practice lacks a solution that is accurate, automatic, and cost-efficient at the same time. This paper presents a videogrammetric framework for acquiring spatial data of infrastructure which holds promise for addressing this limitation. It uses a calibrated set of low-cost, high-resolution video cameras that is progressively traversed around the scene and aims to produce a dense 3D point cloud that is updated in each frame. It allows for progressive reconstruction, as opposed to point-and-shoot followed by point cloud stitching. The feasibility of the framework is studied in this paper. The required steps of the process are presented and the unique challenges of each step are identified. Results specific to each step are also presented.
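To make the reconstruction step concrete, here is a sketch of two-view triangulation from calibrated cameras using OpenCV; it illustrates the kind of per-frame point generation described, not the framework's actual code, and the projection matrices and matched pixel coordinates are assumed given:

```python
import numpy as np
import cv2

def triangulate(P1, P2, pts1, pts2):
    """P1, P2: 3x4 projection matrices (intrinsics @ [R|t]) of two views.
    pts1, pts2: (N, 2) matched pixel coordinates.
    Returns (N, 3) points in the world frame."""
    X_h = cv2.triangulatePoints(P1, P2,
                                pts1.T.astype(np.float64),
                                pts2.T.astype(np.float64))  # 4xN homogeneous
    return (X_h[:3] / X_h[3]).T

# In the framework described, such a step would run each frame as the camera
# set traverses the scene, appending new points to the growing cloud.
```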