81 results for Pushbroom camera
Abstract:
High-altitude relight inside a lean-direct-injection gas-turbine combustor is investigated experimentally by high-speed imaging. Realistic operating conditions are simulated in a ground-based test facility, with two conditions being studied: one inside and one outside the combustor ignition loop. The motion of hot gases during the early stages of relight is recorded using a high-speed camera. An algorithm is developed to track the flame movement and breakup, revealing important characteristics of the flame development process, including stabilization timescales, spatial trajectories, and typical velocities of hot gas motion. Although the observed patterns of ignition failure are in broad agreement with results from laboratory-scale studies, other aspects of relight behavior are not reproduced in laboratory experiments employing simplified flow geometries and operating conditions. For example, when the spark discharge occurs, the air velocity below the igniter in a real combustor is much less strongly correlated with ignition outcome than laboratory studies would suggest. Nevertheless, later flame development and stabilization are largely controlled by the cold flowfield, implying that the location of the igniter may, in the first instance, be selected based on the combustor cold flow. Copyright © 2010.
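The abstract does not detail the tracking algorithm, but the quantities it reports (trajectories and typical hot-gas velocities) can be extracted by a simple centroid tracker over thresholded frames. The sketch below is a generic, hypothetical version of such a tracker, not the authors' method; the threshold and pixel scale are assumed inputs.

```python
import numpy as np

def track_hot_gas(frames, threshold, dt, px_to_m=1.0):
    """Track the centroid of the bright (hot-gas) region in each frame.

    frames: sequence of 2D grayscale arrays; threshold: intensity cut-off;
    dt: inter-frame time [s]; px_to_m: pixel size [m/px].
    Returns centroids (N, 2) in metres and the mean centroid speed [m/s].
    """
    centroids = []
    for f in frames:
        mask = f > threshold
        if not mask.any():
            centroids.append((np.nan, np.nan))  # no flame visible
            continue
        ys, xs = np.nonzero(mask)
        centroids.append((xs.mean() * px_to_m, ys.mean() * px_to_m))
    c = np.asarray(centroids)
    steps = np.diff(c, axis=0)
    speeds = np.linalg.norm(steps, axis=1) / dt
    return c, np.nanmean(speeds)
```

In practice the threshold would be set from the camera's noise floor, and a connected-component step would be added once the kernel breaks up into several fragments.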
Abstract:
The effects of varying corona surface treatment on ink drop impact and spreading on a polymer substrate have been investigated. The surface energy of substrates treated with different levels of corona was determined from static contact angle measurement by the Owens and Wendt method. A drop-on-demand print-head was used to eject 38 μm diameter drops of UV-curable graphics ink travelling at 2.7 m/s onto a flat polymer substrate. The kinematic impact phase was imaged with a high speed camera at 500k frames per second, while the spreading phase was imaged at 20k frames per second. The resultant images were analyzed to track the changes in the drop diameter during the different phases of drop spreading. Further experiments were carried out with white-light interferometry to accurately measure the final diameter of drops which had been printed on different corona treated substrates and UV cured. The results are correlated to characterize the effects of corona treatment on drop impact behavior and final print quality.
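The Owens–Wendt method named above splits the solid surface energy into dispersive and polar parts, γ_s = γ_s^d + γ_s^p, and for each probe liquid relates them to the contact angle via γ_l(1 + cos θ)/2 = √(γ_s^d γ_l^d) + √(γ_s^p γ_l^p). With two or more liquids this is a small linear least-squares problem in √γ_s^d and √γ_s^p; a minimal sketch (the liquid data used in any real analysis would come from tabulated values):

```python
import numpy as np

def owens_wendt(liquids, thetas_deg):
    """Estimate solid surface energy (dispersive + polar) by Owens-Wendt.

    liquids: list of (gamma_total, gamma_d, gamma_p) per probe liquid [mN/m];
    thetas_deg: measured static contact angles [degrees], one per liquid.
    Returns (gamma_s_d, gamma_s_p, gamma_s_total) in mN/m.
    """
    A, b = [], []
    for (gl, gld, glp), th in zip(liquids, thetas_deg):
        # sqrt(gld)*x + sqrt(glp)*y = gl*(1+cos th)/2, with x=sqrt(gsd), y=sqrt(gsp)
        A.append([np.sqrt(gld), np.sqrt(glp)])
        b.append(gl * (1.0 + np.cos(np.radians(th))) / 2.0)
    x, y = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)[0]
    gsd, gsp = x ** 2, y ** 2
    return gsd, gsp, gsd + gsp
```

With more than two liquids the same call performs an over-determined fit, which is how the method is usually applied in practice.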
Abstract:
This paper analyzes the forced response of swirl-stabilized lean-premixed flames to high-amplitude acoustic forcing in a laboratory-scale stratified burner operated with CH4 and air at atmospheric pressure. The double-swirler, double-channel annular burner was specially designed to generate high-amplitude acoustic velocity oscillations and a radial equivalence ratio gradient at the inlet of the combustion chamber. Temporal oscillations of equivalence ratio along the axial direction are dissipated over a long distance, and therefore the effects of time-varying fuel/air ratio on the response are not considered in the present investigation. Simultaneous measurements of inlet velocity and heat release rate oscillations were made using a constant temperature anemometer and photomultiplier tubes with narrow-band OH*/CH* interference filters. Time-averaged and phase-synchronized CH* chemiluminescence intensities were measured using an intensified CCD camera. The measurements show that flame stabilization mechanisms vary depending on equivalence ratio gradients for a constant global equivalence ratio (φg=0.60). Under uniformly premixed conditions, an enveloped M-shaped flame is observed. In contrast, under stratified conditions, a dihedral V-flame and a toroidal detached flame develop in the outer stream and inner stream fuel enrichment cases, respectively. The modification of the stabilization mechanism has a significant impact on the nonlinear response of stratified flames to high-amplitude acoustic forcing (u'/U∼0.45 and f=60, 160Hz). Outer stream enrichment tends to improve the flame's stiffness with respect to incident acoustic/vortical disturbances, whereas inner stream stratification tends to enhance the nonlinear flame dynamics, as manifested by the complex interaction between the swirl flame and large-scale coherent vortices with different length scales and shedding points. 
It was found that the behavior of the measured flame describing functions (FDF), which depend on radial fuel stratification, is well correlated with previous measurements of the intensity of self-excited combustion instabilities in the stratified swirl burner. The results presented in this paper provide insight into the impact of nonuniform reactant stoichiometry on combustion instabilities, its effect on flame location and the interaction with unsteady flow structures. © 2011 The Combustion Institute.
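A flame describing function is conventionally the frequency- and amplitude-dependent transfer function FDF(f, |u'|) = (q̂'/q̄)/(û'/ū) between relative heat-release-rate and velocity fluctuations. The sketch below evaluates it at a single forcing frequency by complex demodulation; it is a generic illustration, not the authors' signal-processing chain.

```python
import numpy as np

def describing_function(u, q, f_force, fs):
    """Flame describing function at the forcing frequency.

    u: velocity signal (e.g. from a hot wire); q: heat-release-rate proxy
    (e.g. CH* chemiluminescence); f_force: forcing frequency [Hz];
    fs: sampling rate [Hz]. Returns (gain, phase_rad) of
    (q'/q_mean) / (u'/u_mean) at f_force.
    """
    u, q = np.asarray(u, float), np.asarray(q, float)
    t = np.arange(len(u)) / fs
    # Single-bin Fourier coefficient at the forcing frequency
    ref = np.exp(-2j * np.pi * f_force * t)
    U = np.mean((u - u.mean()) / u.mean() * ref)
    Q = np.mean((q - q.mean()) / q.mean() * ref)
    fdf = Q / U
    return np.abs(fdf), np.angle(fdf)
```

Sampling an integer number of forcing periods (or windowing) keeps spectral leakage out of the single-bin estimate.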
Abstract:
The movement of the circular piston in an oscillating piston positive displacement flowmeter is important in understanding the operation of the flowmeter, and the leakage of liquid past the piston plays a key role in the performance of the meter. The clearances between the piston and the chamber are small, typically less than 60 μm. In order to measure this film thickness a fluorescent dye was added to the water passing through the meter, which was illuminated with UV light. Visible light images were captured with a digital camera and analysed to give a measure of the film thickness with an uncertainty of less than 7%. It is known that this method lacks precision unless careful calibration is undertaken. Methods to achieve this are discussed in the paper. The grey level values for a range of film thicknesses were calibrated in situ with six dye concentrations to select the most appropriate one for the range of liquid film thickness. Data obtained for the oscillating piston flowmeter demonstrate the value of the fluorescence technique. The method is useful, inexpensive and straightforward and can be extended to other applications where measurement of liquid film thickness is required. © 2011 IOP Publishing Ltd.
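The in-situ calibration step described above amounts to building a grey-level-to-thickness lookup. A minimal sketch, assuming the grey level varies monotonically with thickness over the calibrated range (the paper's actual calibration procedure may differ):

```python
import numpy as np

def calibrate_and_measure(cal_thickness_um, cal_grey, image_grey):
    """Convert fluorescence grey levels to film thickness via in-situ calibration.

    cal_thickness_um: known film thicknesses used for calibration [um];
    cal_grey: mean grey level recorded at each calibration thickness;
    image_grey: grey-level array from a measurement image.
    Returns a thickness map [um] by interpolating the calibration curve.
    """
    order = np.argsort(cal_grey)  # np.interp requires ascending x values
    return np.interp(image_grey,
                     np.asarray(cal_grey, float)[order],
                     np.asarray(cal_thickness_um, float)[order])
```

Repeating the calibration for each dye concentration and keeping the one whose curve is steepest over the expected thickness range mirrors the selection step mentioned in the abstract.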
Abstract:
This paper addresses the problem of automatically obtaining the object/background segmentation of a rigid 3D object observed in a set of images that have been calibrated for camera pose and intrinsics. Such segmentations can be used to obtain a shape representation of a potentially texture-less object by computing a visual hull. We propose an automatic approach where the object to be segmented is identified by the pose of the cameras instead of user input such as 2D bounding rectangles or brush-strokes. The key behind our method is a pairwise MRF framework that combines (a) foreground/background appearance models, (b) epipolar constraints and (c) weak stereo correspondence into a single segmentation cost function that can be efficiently solved by Graph-cuts. The segmentation thus obtained is further improved using silhouette coherency and then used to update the foreground/background appearance models which are fed into the next Graph-cut computation. These two steps are iterated until the segmentation converges. Our method can automatically provide a 3D surface representation even in texture-less scenes where MVS methods might fail. Furthermore, it confers improved performance in images where the object is not readily separable from the background in colour space, an area that previous segmentation approaches have found challenging. © 2011 IEEE.
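The MRF/Graph-cut formulation itself is beyond a short snippet, but the visual-hull computation the segmentations feed into is straightforward to sketch: a voxel is kept only if it projects inside every silhouette. The following is a minimal voxel-carving illustration, not the paper's implementation:

```python
import numpy as np

def visual_hull(voxels, cameras, silhouettes):
    """Carve a visual hull: keep voxels projecting inside every silhouette.

    voxels: (N, 3) world points; cameras: list of 3x4 projection matrices;
    silhouettes: list of binary masks (H, W), one per camera.
    Returns a boolean (N,) occupancy vector.
    """
    keep = np.ones(len(voxels), dtype=bool)
    hom = np.hstack([voxels, np.ones((len(voxels), 1))])
    for P, sil in zip(cameras, silhouettes):
        proj = hom @ P.T                   # (N, 3) homogeneous image points
        uv = proj[:, :2] / proj[:, 2:3]    # pixel coordinates
        u = np.round(uv[:, 0]).astype(int)
        v = np.round(uv[:, 1]).astype(int)
        h, w = sil.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        ok = np.zeros(len(voxels), dtype=bool)
        ok[inside] = sil[v[inside], u[inside]]
        keep &= ok                         # must be inside all silhouettes
    return keep
```

Errors in any single silhouette carve away true volume, which is why the paper iterates segmentation with silhouette-coherency feedback.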
Abstract:
We propose a novel model for the spatio-temporal clustering of trajectories based on motion, which applies to challenging street-view video sequences of pedestrians captured by a mobile camera. A key contribution of our work is the introduction of novel probabilistic region trajectories, motivated by the non-repeatability of segmentation of frames in a video sequence. Hierarchical image segments are obtained by using a state-of-the-art hierarchical segmentation algorithm, and connected from adjacent frames in a directed acyclic graph. The region trajectories and measures of confidence are extracted from this graph using a dynamic programming-based optimisation. Our second main contribution is a Bayesian framework with a twofold goal: to learn the optimal, in a maximum likelihood sense, Random Forests classifier of motion patterns based on video features, and construct a unique graph from region trajectories of different frames, lengths and hierarchical levels. Finally, we demonstrate the use of Isomap for effective spatio-temporal clustering of the region trajectories of pedestrians. We support our claims with experimental results on new and existing challenging video sequences. © 2011 IEEE.
Abstract:
An experimental setup and a simple reconstruction method are presented to measure velocity fields inside slightly tapering cylindrical liquid jets traveling through still air. Particle image velocimetry algorithms are used to calculate velocity fields from high speed images of jets of transparent liquid containing seed particles. An inner central plane is illuminated by a laser sheet pointed at the center of the jet and visualized through the jet by a high speed camera. Optical distortions produced by the shape of the jet and the difference between the refractive index of the fluid and the surrounding air are corrected by using a ray tracing method. The effect of the jet speed on the velocity fields is investigated at four jet speeds. The relaxation rate for the velocity profile downstream of the nozzle exit is reasonably consistent with theoretical expectations for the low Reynolds numbers and the fluid used, although the velocity profiles are considerably flatter than expected. © 2012 American Society of Mechanical Engineers.
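The ray-tracing correction mentioned above rests on refraction at the air–liquid interface. The vector form of Snell's law is the core operation of any such tracer; a minimal sketch (the cylindrical-surface geometry and index values of the actual setup are omitted):

```python
import numpy as np

def refract(d, n, eta):
    """Refract a unit ray direction d at a surface with unit normal n
    (pointing against the incoming ray), with eta = n1 / n2 (Snell's law).
    Returns the refracted unit direction, or None on total internal reflection.
    """
    d = np.asarray(d, float)
    n = np.asarray(n, float)
    cos_i = -d @ n
    sin2_t = eta ** 2 * (1.0 - cos_i ** 2)
    if sin2_t > 1.0:
        return None  # total internal reflection
    return eta * d + (eta * cos_i - np.sqrt(1.0 - sin2_t)) * n
```

For the jet, each camera pixel's ray is refracted at the computed cylinder surface point, and the intersection of the refracted ray with the laser sheet gives the corrected in-fluid position.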
Abstract:
The Particle Image Velocimetry (PIV) technique is an image processing tool to obtain instantaneous velocity measurements during an experiment. The basic principle of PIV analysis is to divide the image into small patches and calculate the locations of the individual patches in consecutive images with the help of cross correlation functions. This paper focuses on the application of the PIV analysis in dynamic centrifuge tests on small scale tunnels in loose, dry sand. Digital images were captured during the application of the earthquake loading on tunnel models using a fast digital camera capable of taking digital images at 1000 frames per second at 1 Megapixel resolution. This paper discusses the effectiveness of the existing methods used to conduct PIV analyses on dynamic centrifuge tests. Results indicate that PIV analysis in dynamic testing requires special measures in order to obtain reasonable deformation data. Nevertheless, it was possible to obtain interesting mechanisms regarding the behaviour of the tunnels from PIV analyses. © 2010 Taylor & Francis Group, London.
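The cross-correlation step at the heart of PIV, as described above, is usually computed via FFTs for speed. A minimal integer-pixel sketch (real PIV codes add sub-pixel peak fitting, window overlap, and outlier validation):

```python
import numpy as np

def piv_displacement(patch_a, patch_b):
    """Estimate the integer-pixel displacement of patch_b relative to
    patch_a by locating the peak of their circular cross-correlation,
    computed via FFT. Returns (dy, dx)."""
    a = patch_a - patch_a.mean()           # remove DC so the peak is sharp
    b = patch_b - patch_b.mean()
    corr = np.real(np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    shift = np.array(peak, dtype=float)
    # Wrap shifts larger than half the window to negative displacements
    for i, s in enumerate(shift):
        if s > corr.shape[i] // 2:
            shift[i] -= corr.shape[i]
    return shift
```

Dividing each image into interrogation windows and applying this per window yields the instantaneous velocity field once scaled by the inter-frame time and pixel size.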
Abstract:
This paper tackles the novel challenging problem of 3D object phenotype recognition from a single 2D silhouette. To bridge the large pose (articulation or deformation) and camera viewpoint changes between the gallery images and query image, we propose a novel probabilistic inference algorithm based on 3D shape priors. Our approach combines both generative and discriminative learning. We use latent probabilistic generative models to capture 3D shape and pose variations from a set of 3D mesh models. Based on these 3D shape priors, we generate a large number of projections for different phenotype classes, poses, and camera viewpoints, and implement Random Forests to efficiently solve the shape and pose inference problems. By model selection in terms of the silhouette coherency between the query and the projections of 3D shapes synthesized using the galleries, we achieve the phenotype recognition result as well as a fast approximate 3D reconstruction of the query. To verify the efficacy of the proposed approach, we present new datasets which contain over 500 images of various human and shark phenotypes and motions. The experimental results clearly show the benefits of using the 3D priors in the proposed method over previous 2D-based methods. © 2011 IEEE.
Abstract:
We present a multispectral photometric stereo method for capturing geometry of deforming surfaces. A novel photometric calibration technique allows calibration of scenes containing multiple piecewise constant chromaticities. This method estimates per-pixel photometric properties, then uses a RANSAC-based approach to estimate the dominant chromaticities in the scene. A likelihood term is developed linking surface normal, image intensity and photometric properties, which allows estimating the number of chromaticities present in a scene to be framed as a model estimation problem. The Bayesian Information Criterion is applied to automatically estimate the number of chromaticities present during calibration. A two-camera stereo system provides low resolution geometry, allowing the likelihood term to be used in segmenting new images into regions of constant chromaticity. This segmentation is carried out in a Markov Random Field framework and allows the correct photometric properties to be used at each pixel to estimate a dense normal map. Results are shown on several challenging real-world sequences, demonstrating state-of-the-art results using only two cameras and three light sources. Quantitative evaluation is provided against synthetic ground truth data. © 2011 IEEE.
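Underlying any photometric stereo pipeline is the per-pixel Lambertian solve: with known distant lights, intensity is i_j = ρ(l_j · n). The sketch below shows that core step only; the paper's multispectral calibration, chromaticity segmentation, and MRF machinery are not reproduced here.

```python
import numpy as np

def photometric_stereo(light_dirs, intensities):
    """Recover albedo and unit normal at one pixel from >=3 intensities
    under known distant lights, assuming a Lambertian surface:
    i_j = albedo * dot(l_j, n).

    light_dirs: (K, 3) unit light directions; intensities: (K,) values.
    Returns (albedo, unit_normal).
    """
    g, *_ = np.linalg.lstsq(np.asarray(light_dirs, float),
                            np.asarray(intensities, float), rcond=None)
    albedo = np.linalg.norm(g)   # g = albedo * n, with |n| = 1
    return albedo, g / albedo
```

In the multispectral setting the three "lights" are spectrally distinct and captured in one frame, which is what makes per-frame normals on deforming surfaces possible.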
Abstract:
Estimating the fundamental matrix (F), to determine the epipolar geometry between a pair of images or video frames, is a basic step for a wide variety of vision-based functions used in construction operations, such as camera-pair calibration, automatic progress monitoring, and 3D reconstruction. Currently, robust methods (e.g., SIFT + normalized eight-point algorithm + RANSAC) are widely used in the construction community for this purpose. Although they can provide acceptable accuracy, the significant amount of required computational time impedes their adoption in real-time applications, especially video data analysis with many frames per second. Aiming to overcome this limitation, this paper presents and evaluates the accuracy of a solution to find F by combining the use of two fast and consistent methods: SURF for the selection of a robust set of point correspondences and the normalized eight-point algorithm. This solution is tested extensively on construction site image pairs including changes in viewpoint, scale, illumination, rotation, and moving objects. The results demonstrate that this method can be used for real-time applications (5 image pairs per second at 640 × 480 resolution) involving scenes of the built environment.
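The normalized eight-point algorithm named above can be sketched in a few lines of numpy. This is a minimal, RANSAC-free version following the standard formulation (Hartley normalization, linear solve, rank-2 enforcement); it returns F with unit Frobenius norm:

```python
import numpy as np

def normalized_eight_point(x1, x2):
    """Normalized eight-point estimate of the fundamental matrix F from
    >=8 correspondences satisfying x2^T F x1 = 0.

    x1, x2: (N, 2) pixel coordinates in images 1 and 2.
    """
    def normalize(pts):
        # Translate centroid to origin, scale mean distance to sqrt(2)
        c = pts.mean(axis=0)
        s = np.sqrt(2) / np.sqrt(((pts - c) ** 2).sum(axis=1)).mean()
        T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
        h = np.hstack([pts, np.ones((len(pts), 1))])
        return (T @ h.T).T, T

    p1, T1 = normalize(np.asarray(x1, float))
    p2, T2 = normalize(np.asarray(x2, float))
    # One row of A f = 0 per correspondence, f = F flattened row-major
    A = np.column_stack([
        p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
        p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
        p1[:, 0], p1[:, 1], np.ones(len(p1))])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)
    # Enforce the rank-2 constraint, then undo the normalization
    U, S, Vt = np.linalg.svd(F)
    F = T2.T @ (U @ np.diag([S[0], S[1], 0.0]) @ Vt) @ T1
    return F / np.linalg.norm(F)
```

The SURF matching and any outlier handling sit upstream of this call; feeding it contaminated matches is what motivates RANSAC in the slower baseline the paper compares against.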
Abstract:
Tracking of project related entities such as construction equipment, materials, and personnel is used to calculate productivity, detect travel path conflicts, enhance the safety on the site, and monitor the project. Radio frequency tracking technologies (Wi-Fi, RFID, UWB) and GPS are commonly used for this purpose. However, on large-scale sites, deploying, maintaining and removing such systems can be costly and time-consuming. In addition, privacy issues with personnel tracking often limit the usability of these technologies on construction sites. This paper presents a vision based tracking framework that holds promise to address these limitations. The framework uses videos from a set of two or more static cameras placed on construction sites. In each camera view, the framework identifies and tracks construction entities providing 2D image coordinates across frames. Combining the 2D coordinates based on the installed camera system (the distance between the cameras and their viewing angles), 3D coordinates are calculated at each frame. The results of each step are presented to illustrate the feasibility of the framework.
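The final step, combining per-camera 2D tracks into 3D positions, is classically done by linear (DLT) triangulation once the camera matrices are known. The sketch below shows one common two-view formulation, offered as an illustration of the geometry rather than the framework's specific implementation:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one point seen by two calibrated cameras.

    P1, P2: 3x4 projection matrices; uv1, uv2: pixel coordinates (u, v)
    of the tracked entity in each view. Returns the 3D world point.
    """
    # Each view contributes two rows of A X = 0 (homogeneous X)
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1]])
    X = np.linalg.svd(A)[2][-1]   # null vector of A
    return X[:3] / X[3]
```

Applying this per frame to matched detections of the same entity yields the 3D trajectory the framework reports.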
Abstract:
When tracking resources in large-scale, congested, outdoor construction sites, the cost and time for purchasing, installing and maintaining the position sensors needed to track thousands of materials, and hundreds of equipment and personnel can be significant. To alleviate this problem, a novel vision-based tracking method that allows each sensor (camera) to monitor the position of multiple entities simultaneously has been proposed. This paper presents the full-scale validation experiments for this method. The validation included testing the method under harsh conditions at a variety of mega-project construction sites. The procedure for collecting data from the sites, the testing procedure, metrics, and results are reported. Full-scale validation demonstrates that the novel vision-based tracking method provides a good solution for tracking different entities on a large, congested construction site.
Abstract:
Camera motion estimation is one of the most significant steps for structure-from-motion (SFM) with a monocular camera. The normalized 8-point, the 7-point, and the 5-point algorithms are normally adopted to perform the estimation, each of which has distinct performance characteristics. Given the unique needs and challenges associated with civil infrastructure SFM scenarios, selection of the proper algorithm directly impacts the structure reconstruction results. In this paper, a comparison study of the aforementioned algorithms is conducted to identify the most suitable algorithm, in terms of accuracy and reliability, for reconstructing civil infrastructure. The free variables tested are baseline, depth, and motion. A concrete girder bridge was selected as the "test-bed" to reconstruct using an off-the-shelf camera capturing imagery from all possible positions that maximally capture the bridge's features and geometry. The feature points in the images were extracted and matched via the SURF descriptor. Finally, camera motions were estimated based on the corresponding image points by applying the aforementioned algorithms, and the results were evaluated.
Abstract:
Vision-based object detection has been introduced in construction for recognizing and locating construction entities in on-site camera views. It can provide spatial locations of a large number of entities, which is beneficial in large-scale, congested construction sites. However, even a few false detections prevent its practical application. In resolving this issue, this paper presents a novel hybrid method for locating construction equipment that fuses the function of detection and tracking algorithms. This method detects construction equipment in the video view by taking advantage of entities' motion, shape, and color distribution. Background subtraction, Haar-like features, and eigen-images are used for motion, shape, and color information, respectively. A tracking algorithm then steps in to compensate for false detections, which are identified by catching drastic changes in object size and appearance. The identified false detections are replaced with tracking results. Preliminary experiments show that the combination with tracking has the potential to enhance the detection performance.
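Background subtraction is the motion cue named above. A minimal running-average background model, offered as a generic sketch rather than the paper's specific implementation (the adaptation rate and threshold are assumed values):

```python
import numpy as np

class RunningBackground:
    """Running-average background model: flags pixels that differ strongly
    from the model as foreground (moving equipment), then slowly adapts
    the model where the scene looks static."""

    def __init__(self, first_frame, alpha=0.05, thresh=25.0):
        self.bg = first_frame.astype(float)
        self.alpha = alpha      # adaptation rate (assumed value)
        self.thresh = thresh    # foreground intensity threshold (assumed)

    def apply(self, frame):
        diff = np.abs(frame.astype(float) - self.bg)
        fg = diff > self.thresh
        # Only update the background where no foreground was detected
        self.bg = np.where(fg, self.bg,
                           (1 - self.alpha) * self.bg + self.alpha * frame)
        return fg
```

The resulting foreground mask would then be gated by the shape (Haar-like) and color (eigen-image) cues before a detection is accepted.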