33 results for Frames and Locales
Abstract:
This paper describes work performed as part of the U.K. Alvey-sponsored Voice Operated Database Inquiry System (VODIS) project in the area of intelligent dialogue control. The principal aims of the work were to develop a habitable interface for the untrained user; to investigate the degree to which dialogue control can be used to compensate for deficiencies in recognition performance; and to examine the requirements on dialogue control for generating natural speech output. A data-driven methodology is described, based on the use of frames in which dialogue topics are organized hierarchically. The concept of a dynamically adjustable scope is introduced to permit adaptation to recognizer performance, and the use of historical and hierarchical contexts is described to facilitate the construction of contextually relevant output messages. © 1989.
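The hierarchical frame organization with a dynamically adjustable scope can be sketched as follows. This is a minimal illustration of the general idea only; the class, the toy travel-inquiry hierarchy, and the scope mechanism are assumptions for the sketch, not the VODIS implementation.

```python
# Hypothetical sketch: dialogue topics stored as a frame hierarchy, with a
# "scope" parameter that narrows the set of active topics (e.g. when
# recognizer confidence is low, fewer topics compete for interpretation).
class Frame:
    def __init__(self, topic, children=None):
        self.topic = topic
        self.children = children or []

    def topics_within_scope(self, scope):
        """Collect topic names down to `scope` levels below this frame."""
        topics = [self.topic]
        if scope > 0:
            for child in self.children:
                topics += child.topics_within_scope(scope - 1)
        return topics

# Toy hierarchy for a travel-inquiry dialogue (illustrative only).
root = Frame("journey", [
    Frame("origin"),
    Frame("destination"),
    Frame("time", [Frame("date"), Frame("hour")]),
])

# Wide scope activates every topic; a narrower scope keeps only the upper
# levels, constraining what the recognizer must discriminate between.
wide = root.topics_within_scope(2)
narrow = root.topics_within_scope(1)
```

Narrowing the scope here shrinks the active vocabulary, which is one plausible way dialogue control can compensate for weaker recognition performance.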
Abstract:
The effects of varying corona surface treatment on ink drop impact and spreading on a polymer substrate have been investigated. The surface energy of substrates treated with different levels of corona was determined from static contact angle measurements by the Owens and Wendt method. A drop-on-demand print-head was used to eject 38 μm diameter drops of UV-curable graphics ink travelling at 2.7 m/s onto a flat polymer substrate. The kinematic impact phase was imaged with a high-speed camera at 500k frames per second, while the spreading phase was imaged at 20k frames per second. The resultant images were analyzed to track the changes in drop diameter during the different phases of drop spreading. Further experiments were carried out with white-light interferometry to accurately measure the final diameter of drops that had been printed on different corona-treated substrates and UV cured. The results are correlated to characterize the effects of corona treatment on drop impact behavior and final print quality.
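The Owens and Wendt method mentioned above determines substrate surface energy from contact angles of probe liquids with known dispersive/polar components, via γ_L(1 + cos θ) = 2(√(γ_s^d γ_L^d) + √(γ_s^p γ_L^p)), which is linear in √γ_s^d and √γ_s^p. A minimal sketch, assuming standard literature values for water and diiodomethane and made-up example angles (none of these numbers are measurements from the paper):

```python
import numpy as np

# Owens-Wendt two-liquid sketch: solve the linear system
#   gamma_L (1 + cos theta) = 2 (sqrt(gs_d * gl_d) + sqrt(gs_p * gl_p))
# for the unknowns sqrt(gs_d), sqrt(gs_p) of the substrate.
def owens_wendt(liquids, angles_deg):
    A, b = [], []
    for (gl_d, gl_p), theta in zip(liquids, angles_deg):
        gl = gl_d + gl_p
        A.append([2 * np.sqrt(gl_d), 2 * np.sqrt(gl_p)])
        b.append(gl * (1 + np.cos(np.radians(theta))))
    x, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    gs_d, gs_p = x[0] ** 2, x[1] ** 2   # x holds sqrt(gs_d), sqrt(gs_p)
    return gs_d, gs_p, gs_d + gs_p      # components and total, mJ/m^2

# Probe liquids (dispersive, polar) in mJ/m^2: water (21.8, 51.0) and
# diiodomethane (50.8, 0.0); contact angles below are illustrative only.
gs_d, gs_p, gs_total = owens_wendt([(21.8, 51.0), (50.8, 0.0)], [70.0, 40.0])
```

With more than two probe liquids the same least-squares call gives an overdetermined fit, which is how the method is often applied in practice.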
Abstract:
Vision-based tracking can provide the spatial location of construction entities such as equipment, workers, and materials in large-scale, congested construction sites. It tracks entities in video streams by inferring their locations based on the entities' visual features and motion histories. To initiate the process, it is necessary to determine the pixel areas corresponding to the construction entities to be tracked in the following consecutive video frames. To fully automate the process, an automated means of initialization is needed. This paper presents a method for construction worker detection that can automatically recognize and localize construction workers in video frames. The method first finds the foreground areas of moving objects using a background subtraction method. Within these foreground areas, construction workers are recognized based on the histogram of oriented gradients (HOG) and a histogram of HSV colors. HOG features have proven effective for detecting people, and the histogram of HSV colors helps differentiate between pedestrians and construction workers wearing safety vests. Preliminary experiments show that the proposed method has the potential to automate the initialization process of vision-based tracking.
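The two-stage idea, foreground extraction followed by a color cue for safety vests, can be sketched in plain numpy. The simple differencing background model, the "vest hue" band, and all thresholds below are illustrative assumptions, not the paper's actual parameters:

```python
import numpy as np

# Stage 1: background subtraction -- pixels that differ from the
# background model beyond a threshold are marked foreground.
def foreground_mask(frame, background, thresh=30):
    return np.abs(frame.astype(int) - background.astype(int)) > thresh

# Stage 2: check whether enough foreground pixels fall in a (toy)
# high-visibility hue band, as a proxy for a safety vest.
def looks_like_vest(hues, mask, vest_band=(20, 40), min_fraction=0.3):
    fg_hues = hues[mask]
    if fg_hues.size == 0:
        return False
    in_band = (fg_hues >= vest_band[0]) & (fg_hues <= vest_band[1])
    return bool(in_band.mean() >= min_fraction)

# Toy 8x8 "frames": a bright moving patch over a dark static background.
background = np.zeros((8, 8), dtype=np.uint8)
frame = background.copy()
frame[2:5, 2:5] = 200                      # moving object
hues = np.zeros((8, 8), dtype=np.uint8)
hues[2:5, 2:5] = 30                        # orange-ish hue in the patch

mask = foreground_mask(frame, background)
```

In a real pipeline the differencing model would be replaced by an adaptive background subtractor and the hue check by a full HSV histogram compared against trained vest models, per the method described above.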
Abstract:
Most existing automated machine vision-based techniques for as-built documentation of civil infrastructure use only point features to recover the 3D structure of a scene. However, in man-made structures (e.g., buildings and roofs) it is often the case that not enough point features can be reliably detected, which can lead to the failure of these techniques. To address this problem, this paper exploits the prominence of straight lines in infrastructure scenes. It presents a hybrid approach that benefits from both point and line features. A calibrated stereo set of video cameras is used to collect data. Point and line features are then detected and matched across video frames. Finally, the 3D structure of the scene is recovered by finding the 3D coordinates of the matched features. The proposed approach has been tested in realistic outdoor environments, and preliminary results indicate its capability to deal with a variety of scenes.
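The final step, recovering 3D coordinates of features matched across a calibrated stereo pair, is conventionally done by linear (DLT) triangulation; a minimal sketch with toy camera matrices (the paper's actual rig and solver are not specified here):

```python
import numpy as np

# Linear (DLT) triangulation: each pixel observation x of a camera P
# contributes two rows x*P[2]-P[0] and y*P[2]-P[1] to a homogeneous
# system A X = 0; the 3D point is the SVD null vector of A.
def triangulate(P1, P2, x1, x2):
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                    # dehomogenize

# Toy calibrated rig: identity camera and a second camera offset along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Project a known 3D point into both views, then recover it.
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X_rec = triangulate(P1, P2, x1, x2)
```

Matched line features are triangulated analogously, with each image line constraining the 3D line to lie in the plane it back-projects to.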
Abstract:
Vision-based object detection has been introduced in construction for recognizing and locating construction entities in on-site camera views. It can provide the spatial locations of a large number of entities, which is beneficial in large-scale, congested construction sites. However, even a few false detections prevent its practical application. To resolve this issue, this paper presents a novel hybrid method for locating construction equipment that fuses detection and tracking algorithms. The method detects construction equipment in the video view by taking advantage of the entities' motion, shape, and color distribution. Background subtraction, Haar-like features, and eigen-images are used for motion, shape, and color information, respectively. A tracking algorithm steps into the process to compensate for false detections. False detections are identified by catching drastic changes in object size and appearance, and are then replaced with tracking results. Preliminary experiments show that the combination with tracking has the potential to enhance detection performance.
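The fusion logic, flagging a detection whose size or appearance jumps implausibly between frames and substituting the tracker's prediction, can be sketched as follows. The thresholds and the scalar appearance distance are assumptions for illustration, not the paper's values:

```python
# Flag detections whose bounding-box area or appearance distance changes
# drastically relative to the previous frame (thresholds are assumed).
def is_false_detection(prev_box, new_box, appearance_dist,
                       max_scale_change=1.5, max_appearance_dist=0.5):
    prev_area = prev_box[2] * prev_box[3]          # boxes are (x, y, w, h)
    new_area = new_box[2] * new_box[3]
    scale = max(new_area, prev_area) / max(min(new_area, prev_area), 1)
    return scale > max_scale_change or appearance_dist > max_appearance_dist

# Fusion step: a flagged false detection is replaced by the tracker's box.
def fuse(prev_box, detection_box, tracker_box, appearance_dist):
    if is_false_detection(prev_box, detection_box, appearance_dist):
        return tracker_box
    return detection_box

# A detection whose area roughly doubles is rejected; the tracker wins.
prev = (10, 10, 40, 80)
fused = fuse(prev, (12, 11, 60, 110), (11, 10, 41, 82), appearance_dist=0.1)

# A plausible detection (small size change, similar appearance) is kept.
kept = fuse(prev, (11, 10, 42, 81), (11, 10, 41, 82), appearance_dist=0.1)
```

In the full method the appearance distance would come from comparing eigen-image reconstructions or color distributions rather than being supplied directly.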
Abstract:
Videogrammetry is an inexpensive and easy-to-use technology for spatial 3D scene recovery. When it is applied to large-scale civil infrastructure scenes, only a small percentage of the collected video frames are required to achieve robust results. However, choosing the right frames requires careful consideration. Videotaping a built infrastructure scene results in large video files filled with blurry, noisy, or redundant frames. This is due to frame-rate-to-camera-speed ratios that are often higher than necessary; camera and lens imperfections and limitations that result in imaging noise; and occasional jerky motions of the camera that result in motion blur; all of which can significantly affect the performance of the videogrammetric pipeline. To tackle these issues, this paper proposes a novel method for automating the selection of an optimized number of informative, high-quality frames. In the first step, blurred frames are removed using thresholds determined by the minimum level of frame quality required to obtain robust results. Then, an optimum number of key frames are selected from the remaining frames using selection criteria devised by the authors. Experimental results show that the proposed method outperforms existing methods in terms of improved 3D reconstruction results, while maintaining the optimum number of extracted frames needed to generate high-quality 3D point clouds. © 2012 Elsevier Ltd. All rights reserved.
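The two-step selection, discarding blurred frames below a quality threshold and then thinning the remainder, can be sketched with a common sharpness proxy. The variance-of-Laplacian score, the fixed-stride thinning, and all numbers below are assumptions for the sketch; the paper's actual quality measure and key-frame criteria may differ:

```python
import numpy as np

# Sharpness proxy: variance of a discrete Laplacian over the frame
# interior (blurred frames have weak high-frequency content, so low variance).
def sharpness(frame):
    lap = (-4 * frame[1:-1, 1:-1] + frame[:-2, 1:-1] + frame[2:, 1:-1]
           + frame[1:-1, :-2] + frame[1:-1, 2:])
    return lap.var()

# Step 1: drop frames below the quality threshold.
# Step 2: keep every `stride`-th survivor to remove redundancy.
def select_key_frames(frames, quality_thresh, stride):
    sharp = [f for f in frames if sharpness(f) >= quality_thresh]
    return sharp[::stride]

# Toy sequence: sharp checkerboard frames interleaved with flat
# ("blurred") frames that carry no high-frequency detail.
checker = np.indices((8, 8)).sum(axis=0) % 2 * 1.0
flat = np.full((8, 8), 0.5)
frames = [checker, flat, checker, checker, flat, checker]
keys = select_key_frames(frames, quality_thresh=0.1, stride=2)
```

A production selector would replace the fixed stride with content-aware criteria (e.g., sufficient baseline and feature overlap between consecutive key frames), in the spirit of the criteria the authors devise.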