21 results for time-image
in the Cambridge University Engineering Department Publications Database
Abstract:
Image-based (i.e., photo/videogrammetry) and time-of-flight-based (i.e., laser scanning) technologies are typically used to collect spatial data on infrastructure. To help architecture, engineering, and construction (AEC) practitioners make cost-effective choices between these two technologies for their particular settings, this paper attempts to measure the accuracy, quality, time efficiency, and cost of applying image-based and time-of-flight-based technologies to as-built 3D reconstruction of infrastructure. A novel comparison method is proposed, and preliminary experiments are conducted. The results reveal that if the accuracy and quality required for a particular application are not high (i.e., error < 10 cm and completeness rate > 80%), image-based technologies constitute a good alternative to time-of-flight-based technologies and significantly reduce the time and cost needed to collect data on site.
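The abstract quantifies the trade-off with two metrics, geometric error and completeness rate. The sketch below shows one plausible way to compute both against a reference point cloud; the nearest-neighbour formulation, the 10 cm completeness tolerance, and the function name are illustrative assumptions, not the paper's actual procedure.

    import numpy as np
    from scipy.spatial import cKDTree

    def compare_clouds(reconstructed, reference, completeness_tol=0.10):
        """Compare a reconstructed point cloud with a reference one (both (N, 3) arrays
        of XYZ coordinates in metres). Returns (mean_error_m, completeness_rate).
        Illustrative assumption: accuracy is nearest-neighbour distance to the reference;
        completeness is the fraction of reference points covered within completeness_tol."""
        # Accuracy: distance from each reconstructed point to its nearest reference point.
        errors, _ = cKDTree(reference).query(reconstructed)
        # Completeness: fraction of reference points with a reconstructed point nearby.
        gaps, _ = cKDTree(reconstructed).query(reference)
        return errors.mean(), float(np.mean(gaps <= completeness_tol))

Under the abstract's criterion, a reconstruction with mean error below 0.10 m and completeness above 0.80 would fall in the "not high" accuracy/quality band, where the cheaper image-based acquisition may suffice.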
Abstract:
Time-resolved particle image velocimetry (PIV) has been performed inside the nozzle of a commercially available inkjet printhead to obtain the time-dependent velocity waveform. A printhead with a single transparent nozzle 80 μm in orifice diameter was used to eject single droplets at a speed of 5 m/s. An optical microscope was used with an ultra-high-speed camera to capture the motion of particles suspended in a transparent liquid at the center of the nozzle and above the fluid meniscus at a rate of half a million frames per second. Time-resolved velocity fields were obtained from a fluid layer approximately 200 μm thick within the nozzle for a complete jetting cycle. A Lagrangian finite-element numerical model with experimental measurements as inputs was used to predict the meniscus movement. The model predictions showed good agreement with the experimental results. This work provides the first experimental verification of physical models and numerical simulations of flows within a drop-on-demand nozzle. © 2012 Society for Imaging Science and Technology.
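The abstract reports time-resolved velocity fields but not the correlation procedure. The sketch below is a minimal illustration of the standard PIV step, locating the cross-correlation peak between two interrogation windows taken from consecutive frames; the window contents, the FFT-based correlation, and the conversion factors are assumptions for illustration only.

    import numpy as np
    from scipy.signal import fftconvolve

    def window_displacement(win_a, win_b):
        """Estimate the particle displacement (dx, dy) in pixels between two
        interrogation windows by locating the cross-correlation peak."""
        a = win_a - win_a.mean()
        b = win_b - win_b.mean()
        # Cross-correlation computed as convolution with the second window flipped.
        corr = fftconvolve(a, b[::-1, ::-1], mode="full")
        py, px = np.unravel_index(np.argmax(corr), corr.shape)
        # Offset of the peak from the zero-shift position gives the shift of b w.r.t. a.
        dy = (b.shape[0] - 1) - py
        dx = (b.shape[1] - 1) - px
        return dx, dy

Velocity then follows from the displacement, the calibrated pixel pitch, and the inter-frame time (2 μs at the half-million-frames-per-second rate quoted in the abstract).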
Abstract:
We have developed a novel human facial tracking system that operates in real time at video frame rate without requiring any special hardware. The approach is based on Lie algebra and uses three-dimensional feature points on the targeted human face. It is assumed that a rough estimate of the facial model (relative coordinates of the three-dimensional feature points) is known. First, the initial feature positions on the face are determined using a model-fitting technique. Tracking then proceeds through the following sequence: (1) capture the new video frame and render the feature points onto the image plane; (2) search for the new positions of the feature points on the image plane; (3) obtain the Euclidean matrix from the motion vectors and the three-dimensional information for the points; and (4) rotate and translate the feature points using the Euclidean matrix, and render the new points onto the image plane. The key algorithm of this tracker is the estimation of the Euclidean matrix using a least-squares technique based on Lie algebra. The resulting tracker performed very well on the task of tracking a human face.
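Step (3) amounts to a least-squares rigid alignment between the model's 3-D feature points and their newly observed positions. The sketch below solves that same alignment problem with the closed-form Kabsch/SVD solution rather than the paper's Lie-algebra formulation; the function name and the assumption of one-to-one 3-D correspondences are illustrative.

    import numpy as np

    def fit_rigid_transform(model_pts, observed_pts):
        """Least-squares Euclidean transform (R, t) mapping model_pts onto observed_pts.
        Both inputs are (N, 3) arrays of corresponding 3-D feature points.
        Uses the Kabsch/SVD closed form, not the paper's Lie-algebra update."""
        mu_m = model_pts.mean(axis=0)
        mu_o = observed_pts.mean(axis=0)
        H = (model_pts - mu_m).T @ (observed_pts - mu_o)   # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                           # guard against a reflection
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = mu_o - R @ mu_m
        return R, t

In the tracking loop, R and t would then be applied to the model's feature points before re-rendering them onto the image plane for the next frame.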
Abstract:
Calibration of a camera system is a necessary step in any stereo metric process. It relates all cameras to a common coordinate system by measuring the intrinsic and extrinsic parameters of each camera. Currently, manual calibration is the only way to calibrate camera systems in civil engineering operations that require stereo metric processes (photogrammetry, videogrammetry, vision-based asset tracking, etc.). This type of calibration, however, is time-consuming and labor-intensive. Furthermore, in civil engineering operations, camera systems are exposed to open, busy sites. In these conditions, the position of presumably stationary cameras can easily change due to external factors such as wind, vibrations, or an unintentional push or touch from personnel on site. In such cases manual calibration must be repeated. To address this issue, several self-calibration algorithms have been proposed. These algorithms use projective geometry, the absolute conic, the Kruppa equations, and variations of these to achieve calibration. However, most of these methods do not consider all the constraints of a camera system, such as camera intrinsic constraints, scene constraints, camera motion, or varying camera intrinsic properties. This paper presents a novel method that takes all of these constraints into consideration to auto-calibrate cameras using an image alignment algorithm originally intended for vision-based tracking. In this method, image frames taken from the cameras are used to calculate the fundamental matrix, which yields the epipolar constraints, and the intrinsic and extrinsic properties of the cameras are acquired from this calculation. Test results are presented together with recommendations for further improvement.
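For context, the sketch below shows the epipolar-geometry step described at the end of the abstract: estimating the fundamental matrix from matched points in two frames and decomposing it into a relative pose. It assumes known intrinsics K and uses OpenCV as a stand-in; recovering the intrinsics themselves from the fundamental matrix is the harder self-calibration step the paper addresses and is not shown here.

    import cv2
    import numpy as np

    def relative_pose_from_matches(pts1, pts2, K):
        """Estimate epipolar geometry from matched pixel coordinates in two views.
        pts1, pts2 : (N, 2) float32 arrays of corresponding points.
        K          : 3x3 intrinsic matrix (assumed known in this sketch).
        Returns the fundamental matrix plus the relative rotation and translation."""
        # Fundamental matrix with RANSAC to reject mismatches (epipolar constraint).
        F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
        # The essential matrix folds the intrinsics into the epipolar constraint.
        E = K.T @ F @ K
        # Decompose E into the relative camera rotation and unit-scale translation.
        keep = inliers.ravel().astype(bool)
        _, R, t, _ = cv2.recoverPose(E, pts1[keep], pts2[keep], K)
        return F, R, t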
Abstract:
The capability to automatically identify shapes, objects, and materials from image content through direct and indirect methodologies has enabled the development of several civil engineering applications that assist in the design, construction, and maintenance of construction projects. Examples include surface crack detection, assessment of fire-damaged mortar, fatigue evaluation of asphalt mixes, aggregate shape measurements, velocimetry, vehicle detection, pore size distribution in geotextiles, damage detection, and others. This capability is a product of the technological breakthroughs in the area of image and video processing that have allowed for the development of a large number of digital imaging applications in all industries, ranging from well-established medical diagnostic tools (magnetic resonance imaging, spectroscopy, and nuclear medical imaging) to image searching mechanisms (image matching, content-based image retrieval). Content-based image retrieval techniques can also assist in the automated recognition of materials in construction site images and thus enable the development of reliable methods for image classification and retrieval. The amount of original imaging information produced yearly in the construction industry during the last decade has experienced tremendous growth. Digital cameras and image databases are gradually replacing traditional photography, while owners demand complete site photograph logs and engineers store thousands of images for each project to use in a number of construction management tasks. However, construction companies tend to store images without following any standardized indexing protocols, making manual searching and retrieval a tedious and time-consuming effort. Alternatively, material and object identification techniques can be used to develop an automated, content-based construction site image retrieval methodology. These methods can utilize automatic material- or object-based indexing to remove the user from the time-consuming and tedious manual classification process. In this paper, a novel material identification methodology is presented. This method utilizes content-based image retrieval concepts to match known material samples with material clusters within the image content. The results demonstrate the suitability of this methodology for construction site image retrieval purposes and reveal the capability of existing image processing technologies to accurately identify a wealth of materials in construction site images.
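The abstract describes matching known material samples against clusters in the image content but does not specify the signature used. The sketch below is one plausible reading: comparing an HSV colour histogram of a sampled material against sliding patches of a site image; the histogram signature, patch size, and scoring choice are illustrative assumptions rather than the paper's actual method.

    import cv2
    import numpy as np

    def material_score_map(site_image_bgr, sample_bgr, patch=64, step=32):
        """Score image patches against a known material sample using HSV colour
        histograms (a stand-in for whatever content signature the paper uses).
        Returns a 2-D array of correlation scores; high values suggest the patch
        contains the sampled material."""
        def hsv_hist(img):
            hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
            hist = cv2.calcHist([hsv], [0, 1], None, [32, 32], [0, 180, 0, 256])
            return cv2.normalize(hist, hist).flatten()

        sample_hist = hsv_hist(sample_bgr)
        rows = (site_image_bgr.shape[0] - patch) // step + 1
        cols = (site_image_bgr.shape[1] - patch) // step + 1
        scores = np.zeros((rows, cols), dtype=np.float32)
        for i in range(rows):
            for j in range(cols):
                y, x = i * step, j * step
                window = site_image_bgr[y:y + patch, x:x + patch]
                scores[i, j] = cv2.compareHist(sample_hist, hsv_hist(window), cv2.HISTCMP_CORREL)
        return scores

Thresholding the score map (or clustering high-scoring patches) would then give the material regions used to index the image for retrieval.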