Abstract:
This paper presents the characterisation of self-excited oscillations in a kerosene burner. The combustion instability exhibits two different modes and frequencies depending on the air flow rate. Experimental results reveal the influence of the spray in shifting between these two modes. Pressure and heat release fluctuations have been measured simultaneously, and the flame transfer function has been calculated from these measurements. The Mie scattering technique has been used to record spray fluctuations in reacting conditions with a high speed camera. Innovative image processing has enabled us to obtain fluctuations of the Mie scattered light from the spray as a temporal signal acquired simultaneously with pressure fluctuations. This has been used to determine a transfer function relating the image intensity, and hence the spray fluctuations, to changes in air velocity. This function has identified the different roles the spray plays in the two modes of instability. At low air flow rates, the spray responds to an unsteady air flow rate and the time-varying spray characteristics lead to unsteady combustion. At higher air flow rates, effective evaporation means that the spray dynamics are less important, leading to a different flame transfer function and frequency of self-excited oscillation. In conclusion, the combustion instabilities observed are closely related to the fluctuations of the spray motion and evaporation.
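A transfer function like the one described relates two simultaneously sampled fluctuation signals in the frequency domain. As an illustrative sketch only (not the authors' processing chain; the segment length, window, and signal names are assumptions), it can be estimated from segment-averaged cross- and auto-spectra:

```python
import numpy as np

def transfer_function(x, y, fs, nperseg=256):
    """Estimate H(f) = S_xy / S_xx between a reference signal x (e.g. a
    velocity fluctuation) and a response signal y (e.g. a heat-release or
    Mie-intensity fluctuation), by averaging windowed segment spectra."""
    win = np.hanning(nperseg)
    nseg = len(x) // nperseg
    Sxx = np.zeros(nperseg // 2 + 1)
    Sxy = np.zeros(nperseg // 2 + 1, dtype=complex)
    for k in range(nseg):
        xs = x[k * nperseg:(k + 1) * nperseg] * win
        ys = y[k * nperseg:(k + 1) * nperseg] * win
        X = np.fft.rfft(xs)
        Y = np.fft.rfft(ys)
        Sxx += (X * np.conj(X)).real      # auto-spectrum of the reference
        Sxy += Y * np.conj(X)             # cross-spectrum
    f = np.fft.rfftfreq(nperseg, d=1.0 / fs)
    return f, Sxy / Sxx
```

The gain |H(f)| and phase angle of H(f) then characterise how strongly, and with what delay, the response follows the reference at each frequency.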
Abstract:
This paper presents a novel technique for reconstructing an outdoor sculpture from an uncalibrated image sequence acquired around it using a hand-held camera. The technique introduced here uses only the silhouettes of the sculpture for both motion estimation and model reconstruction, and no corner detection or matching is necessary. This is very important as most sculptures are composed of smooth textureless surfaces, and hence their silhouettes are very often the only information available from their images. Moreover, unlike previous works, the proposed technique does not require the camera motion to be perfectly circular (e.g., a turntable sequence). It employs an image rectification step before the motion estimation step to obtain a rough estimate of the camera motion, which is only approximately circular. A refinement process is then applied to obtain the true general motion of the camera. This allows the technique to handle large outdoor sculptures which cannot be rotated on a turntable, making it much more practical and flexible.
Abstract:
A modular image capture system with close integration to CCD cameras has been developed. The aim is to produce a system capable of integrating CCD sensor, image capture and image processing into a single compact unit. This close integration provides a direct mapping between CCD pixels and digital image pixels. The system has been interfaced to a digital signal processor board for the development and control of image processing tasks. These have included characterization and enhancement of noisy images from an intensified camera and measurement to subpixel resolutions. A highly compact form of the image capture system is in an advanced stage of development. This consists of a single FPGA device and a single VRAM providing a two chip image capturing system capable of being integrated into a CCD camera. A miniature compact PC has been developed using a novel modular interconnection technique, providing a processing unit in a three dimensional format highly suited to integration into a CCD camera unit. Work is under way to interface the compact capture system to the PC using this interconnection technique, combining CCD sensor, image capture and image processing into a single compact unit. ©2005 Copyright SPIE - The International Society for Optical Engineering.
Fourier analysis and Gabor filtering for texture analysis and local reconstruction of general shapes
Abstract:
Since the pioneering work of Gibson in 1950, Shape-From-Texture has been considered by researchers as a hard problem, mainly due to restrictive assumptions which often limit its applicability. We assume a very general stochastic homogeneity and perspective camera model, for both deterministic and stochastic textures. A multi-scale distortion is efficiently estimated with a previously presented method based on Fourier analysis and Gabor filters. The novel 3D reconstruction method that we propose applies to general shapes, and includes non-developable and extensive surfaces. Our algorithm is accurate, robust and compares favorably to the present state of the art of Shape-From-Texture. Results show its application to non-invasively study shape changes with laid-on textures, while rendering and retexturing of cloth is suggested for future work. © 2009 IEEE.
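The local spectral measurements behind such distortion estimates are typically made with a bank of Gabor filters. A minimal sketch of the idea (the kernel form is standard; the parameter values and function names are illustrative, not those of the paper):

```python
import numpy as np

def gabor_kernel(freq, theta, sigma, size=31):
    """Complex 2D Gabor kernel: a Gaussian envelope times a plane wave of
    spatial frequency `freq` (cycles/pixel) at orientation `theta`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)        # coordinate along the wave
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.exp(2j * np.pi * freq * xr)

def patch_response(patch, freq, theta, sigma=6.0):
    """Magnitude of the Gabor response at the patch centre: large when the
    patch contains texture energy near (freq, theta), small otherwise."""
    k = gabor_kernel(freq, theta, sigma, size=patch.shape[0])
    return np.abs(np.sum(patch * np.conj(k)))
```

Comparing such responses across scales and positions gives the local frequency shifts from which surface slant, and hence shape, can be inferred.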
Abstract:
The effects of varying corona surface treatment on ink drop impact and spreading on a polymer substrate have been investigated. The surface energy of substrates treated with different levels of corona was determined from static contact angle measurement by the Owens and Wendt method. A drop-on-demand print-head was used to eject 38 μm diameter drops of UV-curable graphics ink travelling at 2.7 m/s onto a flat polymer substrate. The kinematic impact phase was imaged with a high speed camera at 500k frames per second, while the spreading phase was imaged at 20k frames per second. The resultant images were analyzed to track the changes in the drop diameter during the different phases of drop spreading. Further experiments were carried out with white-light interferometry to accurately measure the final diameter of drops which had been printed on different corona treated substrates and UV cured. The results are correlated to characterize the effects of corona treatment on drop impact behavior and final print quality.
Abstract:
The movement of the circular piston in an oscillating piston positive displacement flowmeter is important in understanding the operation of the flowmeter, and the leakage of liquid past the piston plays a key role in the performance of the meter. The clearances between the piston and the chamber are small, typically less than 60 μm. In order to measure this film thickness a fluorescent dye was added to the water passing through the meter, which was illuminated with UV light. Visible light images were captured with a digital camera and analysed to give a measure of the film thickness with an uncertainty of less than 7%. It is known that this method lacks precision unless careful calibration is undertaken. Methods to achieve this are discussed in the paper. The grey level values for a range of film thicknesses were calibrated in situ with six dye concentrations to select the most appropriate one for the range of liquid film thickness. Data obtained for the oscillating piston flowmeter demonstrate the value of the fluorescence technique. The method is useful, inexpensive and straightforward and can be extended to other applications where measurement of liquid film thickness is required. © 2011 IOP Publishing Ltd.
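Once a monotonic grey-level-versus-thickness calibration curve has been measured, inverting it reduces to a lookup. A sketch of that step, with invented calibration numbers for illustration (these are not data from the paper):

```python
import numpy as np

# Hypothetical in-situ calibration at the chosen dye concentration:
# mean grey level recorded for known film thicknesses (micrometres).
cal_thickness = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0])
cal_grey      = np.array([35.0, 62.0, 88.0, 110.0, 128.0, 142.0])

def film_thickness(grey):
    """Invert the monotonic calibration curve by piecewise-linear
    interpolation; accepts a scalar or an array of grey levels."""
    return np.interp(grey, cal_grey, cal_thickness)
```

In practice each image pixel's grey level would be passed through this lookup to produce a film-thickness map.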
Abstract:
This paper addresses the problem of automatically obtaining the object/background segmentation of a rigid 3D object observed in a set of images that have been calibrated for camera pose and intrinsics. Such segmentations can be used to obtain a shape representation of a potentially texture-less object by computing a visual hull. We propose an automatic approach where the object to be segmented is identified by the pose of the cameras instead of user input such as 2D bounding rectangles or brush-strokes. The key to our method is a pairwise MRF framework that combines (a) foreground/background appearance models, (b) epipolar constraints and (c) weak stereo correspondence into a single segmentation cost function that can be efficiently solved by Graph-cuts. The segmentation thus obtained is further improved using silhouette coherency and then used to update the foreground/background appearance models which are fed into the next Graph-cut computation. These two steps are iterated until the segmentation converges. Our method can automatically provide a 3D surface representation even in texture-less scenes where multi-view stereo (MVS) methods might fail. Furthermore, it confers improved performance in images where the object is not readily separable from the background in colour space, an area that previous segmentation approaches have found challenging. © 2011 IEEE.
2D PIV measurements in the near field of grid turbulence using stitched fields from multiple cameras
Abstract:
We present measurements of grid turbulence using 2D particle image velocimetry taken immediately downstream from the grid at a Reynolds number of Re_M = 16500, where M is the rod spacing. A long field of view of 14M × 4M in the down- and cross-stream directions was achieved by stitching multiple cameras together. Two uniform biplanar grids were selected to have the same M and pressure drop but different rod diameter D and cross-section. A large data set (10^4 vector fields) was obtained to ensure good convergence of second-order statistics. Estimations of the dissipation rate ε of turbulent kinetic energy (TKE) were found to be sensitive to the number of mean-squared velocity gradient terms included, and not to whether the turbulence was assumed to adhere to isotropy or axisymmetry. The resolution dependency of different turbulence statistics was assessed with a procedure that does not rely on the dissipation scale η. The streamwise evolution of the TKE components and ε was found to collapse across grids when the rod diameter was included in the normalisation. We argue that this should be the case for all regular grids when the other relevant dimensionless quantities are matched and the flow has become homogeneous across the stream. Two-point space correlation functions at x/M = 1 show evidence of complex wake interactions which exhibit a strong Reynolds number dependence. However, these changes in initial conditions disappear downstream, indicating rapid cross-stream homogenisation. On the other hand, isotropy was, as expected, not found to be established by x/M = 12 for any case studied. © Springer-Verlag 2012.
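When planar PIV provides only a subset of the velocity-gradient terms, ε is often estimated through an isotropic surrogate; one classical single-gradient form is ε ≈ 15ν⟨(∂u/∂x)²⟩. The sketch below is illustrative of that class of estimator, not necessarily the form the paper favours:

```python
import numpy as np

def dissipation_isotropic(u, dx, nu):
    """Single-gradient isotropic surrogate for the TKE dissipation rate:
    eps = 15 * nu * <(du/dx)^2>, with <.> an average over the 2D field.

    u  : 2D array of streamwise velocity, x varying along axis 1
    dx : vector spacing in metres
    nu : kinematic viscosity in m^2/s
    """
    dudx = np.gradient(u, dx, axis=1)      # central differences in x
    return 15.0 * nu * np.mean(dudx**2)
```

Sensitivity studies like the one in the abstract compare such surrogates against estimates built from progressively more of the measured gradient terms.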
Abstract:
Camera motion estimation is one of the most significant steps for structure-from-motion (SFM) with a monocular camera. The normalized 8-point, the 7-point, and the 5-point algorithms are normally adopted to perform the estimation, each of which has distinct performance characteristics. Given the unique needs and challenges associated with civil infrastructure SFM scenarios, selection of the proper algorithm directly impacts the structure reconstruction results. In this paper, a comparison study of the aforementioned algorithms is conducted to identify the most suitable algorithm, in terms of accuracy and reliability, for reconstructing civil infrastructure. The free variables tested are baseline, depth, and motion. A concrete girder bridge was selected as the "test-bed" to reconstruct using an off-the-shelf camera capturing imagery from all possible positions that maximally capture the bridge's features and geometry. The feature points in the images were extracted and matched via the SURF descriptor. Finally, camera motions are estimated based on the corresponding image points by applying the aforementioned algorithms, and the results are evaluated.
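Of the three estimators compared, the normalized 8-point algorithm is the most direct to sketch. The outline below (numpy only, with hypothetical function names) estimates the fundamental matrix from point correspondences; a production pipeline would wrap it in RANSAC and add degeneracy checks:

```python
import numpy as np

def normalize(pts):
    """Hartley normalisation: translate points to their centroid and scale
    so the mean distance from the origin is sqrt(2)."""
    c = pts.mean(axis=0)
    d = np.mean(np.linalg.norm(pts - c, axis=1))
    s = np.sqrt(2) / d
    T = np.array([[s, 0.0, -s * c[0]],
                  [0.0, s, -s * c[1]],
                  [0.0, 0.0, 1.0]])
    ph = np.column_stack([pts, np.ones(len(pts))])
    return (T @ ph.T).T, T

def eight_point(x1, x2):
    """Fundamental matrix F (with x2^T F x1 = 0) from >= 8 correspondences,
    given as Nx2 arrays of matched image points."""
    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    A = np.column_stack([p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
                         p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
                         p1[:, 0],            p1[:, 1],            np.ones(len(p1))])
    _, _, Vt = np.linalg.svd(A)            # null vector of the design matrix
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)            # enforce the rank-2 constraint
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    return T2.T @ F @ T1                   # undo the normalisation
```

Given calibrated intrinsics, the camera motion (rotation and translation direction) is then recovered by decomposing the corresponding essential matrix.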
Abstract:
Automating the model generation process of infrastructure can substantially reduce the modeling time and cost. This paper presents a method to generate a sparse point cloud of an infrastructure scene using a single video camera under practical constraints. It is the first step towards establishing an automatic framework for object-oriented as-built modeling. Motion blur and key frame selection criteria are considered. Structure from motion and bundle adjustment are explored. The method is demonstrated in a case study where the scene of a reinforced concrete bridge is videotaped, reconstructed, and metrically validated. The result indicates the applicability, efficiency, and accuracy of the proposed method.
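Key-frame selection under motion blur can be sketched with a simple sharpness score. A common heuristic (not necessarily the criterion used in this paper) is the variance of the Laplacian, rejecting blurred frames and enforcing a minimum spacing between key frames:

```python
import numpy as np

def sharpness(img):
    """Variance of the 3x3 Laplacian response; low values flag blur."""
    g = img.astype(float)
    lap = (g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:]
           - 4.0 * g[1:-1, 1:-1])
    return lap.var()

def select_key_frames(frames, blur_thresh, min_gap=10):
    """Keep frames whose sharpness exceeds blur_thresh, spaced at least
    min_gap frames apart, and return their indices."""
    keys, last = [], -min_gap
    for i, frame in enumerate(frames):
        if i - last >= min_gap and sharpness(frame) > blur_thresh:
            keys.append(i)
            last = i
    return keys
```

The selected frames would then feed the structure-from-motion and bundle-adjustment stages; the threshold and spacing are tuning parameters chosen per video.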
Abstract:
Most of the existing automated machine vision-based techniques for as-built documentation of civil infrastructure utilize only point features to recover the 3D structure of a scene. However, it is often the case in man-made structures that not enough point features can be reliably detected (e.g. buildings and roofs); this can potentially lead to the failure of these techniques. To address the problem, this paper utilizes the prominence of straight lines in infrastructure scenes. It presents a hybrid approach that benefits from both point and line features. A calibrated stereo set of video cameras is used to collect data. Point and line features are then detected and matched across video frames. Finally, the 3D structure of the scene is recovered by finding the 3D coordinates of the matched features. The proposed approach has been tested on realistic outdoor environments, and preliminary results indicate its capability to deal with a variety of scenes.
Abstract:
Localization of chess-board vertices is a common task in computer vision, underpinning many applications, but relatively little work focusses on designing a specific feature detector that is fast, accurate and robust. In this paper the 'Chess-board Extraction by Subtraction and Summation' (ChESS) feature detector, designed to exclusively respond to chess-board vertices, is presented. The method proposed is robust against noise, poor lighting and poor contrast, requires no prior knowledge of the extent of the chess-board pattern, is computationally very efficient, and provides a strength measure of detected features. Such a detector has significant application both in the key field of camera calibration, as well as in Structured Light 3D reconstruction. Evidence is presented showing its robustness, accuracy, and efficiency in comparison to other commonly used detectors, both under simulation and in experimental 3D reconstruction of flat plate and cylindrical objects.
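The core ring-sampling idea can be illustrated with a simplified response in the spirit of ChESS (this is not the published formulation): sample 16 pixels on a radius-5 ring around a candidate pixel, reward the dark/light flip a vertex produces under a quarter rotation, and penalise the asymmetry an edge produces under a half rotation:

```python
import numpy as np

# 16 sampling offsets (dy, dx) on a radius-5 ring
ANGLES = np.arange(16) * 2.0 * np.pi / 16
RING = np.stack([np.round(5 * np.sin(ANGLES)).astype(int),
                 np.round(5 * np.cos(ANGLES)).astype(int)], axis=1)

def ring_response(img, y, x):
    """Simplified vertex response: high where quarter-rotated ring samples
    differ but diametrically opposed samples agree (a chess-board vertex),
    zero on flat regions, negative on straight edges."""
    s = np.array([img[y + dy, x + dx] for dy, dx in RING], dtype=float)
    quarter = np.abs(s - np.roll(s, 4)).sum()   # dark/light flip at vertices
    oppose = np.abs(s - np.roll(s, 8)).sum()    # point symmetry at vertices
    return quarter - oppose
```

Evaluating this response at every pixel and keeping local maxima above a threshold yields vertex candidates together with a strength measure.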
Abstract:
Optical motion capture systems suffer from marker occlusions resulting in loss of useful information. This paper addresses the problem of real-time joint localisation of legged skeletons in the presence of such missing data. The data is assumed to be labelled 3D marker positions from a motion capture system. An integrated framework is presented which predicts the occluded marker positions using a Variable Turn Model within an Unscented Kalman filter. Inferred information from neighbouring markers is used as observation states; these constraints are efficient, simple, and implementable in real time. This work also takes advantage of the common case that missing markers are still visible to a single camera, by combining predictions with under-determined positions, resulting in more accurate predictions. An Inverse Kinematics technique is then applied to ensure that the bone lengths remain constant over time; the system can thereby maintain a continuous data-flow. The marker and Centre of Rotation (CoR) positions can be calculated with high accuracy even in cases where markers are occluded for a long period of time. Our methodology is tested against some of the most popular methods for marker prediction, and the results confirm that our approach outperforms these methods in estimating both marker and CoR positions. © 2012 Springer-Verlag.
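The predict/update structure behind this kind of occlusion handling can be sketched with a plain linear Kalman filter and a constant-velocity model (the paper itself uses a Variable Turn Model inside an Unscented Kalman filter with neighbour-marker constraints; the frame rate and noise levels below are illustrative assumptions):

```python
import numpy as np

DT = 1.0 / 120.0                                   # assumed capture rate
F = np.block([[np.eye(3), DT * np.eye(3)],
              [np.zeros((3, 3)), np.eye(3)]])      # constant-velocity model
H = np.hstack([np.eye(3), np.zeros((3, 3))])       # positions are observed
Q = 1e-4 * np.eye(6)                               # process noise (assumed)
R = 1e-3 * np.eye(3)                               # measurement noise (assumed)

def kf_step(x, P, z=None):
    """One Kalman filter step for a marker with state [position, velocity].
    Pass z=None while the marker is occluded: the filter then simply
    propagates its prediction, keeping the data-flow continuous."""
    x, P = F @ x, F @ P @ F.T + Q                  # predict
    if z is not None:                              # update when observed
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(6) - K @ H) @ P
    return x, P
```

During an occlusion the predicted positions substitute for the missing marker, which is exactly where combining the prediction with partially observed (single-camera) constraints improves accuracy.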