126 results for Stereo Vision


Relevance:

20.00%

Publisher:

Abstract:

A convenient system for the rapid extraction of three-dimensional information from pairs of SEM images has been constructed, eliminating the need for time-consuming photography. Results are produced in a digestible form. Distortions inherent in the SEM record display and in the photographic system are not relevant to the system described; only those arising within the column and stage need be considered.
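
As an illustration of the parallax-to-height conversion such a system automates, here is a minimal sketch using the standard stereo-SEM tilt-pair relation; the function name, parameters, and example values are assumptions for illustration, not taken from the paper.

```python
import math

def height_from_parallax(parallax_px, pixel_size_um, tilt_diff_deg):
    """Relative height (in um) from the parallax measured between a tilt pair
    of SEM images, using the standard stereo-SEM relation
        h = p / (2 * sin(theta / 2)),
    where p is the parallax in specimen-plane units and theta is the total
    tilt difference. Illustrative helper only, not the paper's system."""
    p_um = parallax_px * pixel_size_um            # parallax at the specimen plane
    theta = math.radians(tilt_diff_deg)
    return p_um / (2.0 * math.sin(theta / 2.0))

# Hypothetical example: 12 px parallax, 0.5 um per pixel, 10 degrees total tilt
print(height_from_parallax(12, 0.5, 10.0))
```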

Relevance:

20.00%

Publisher:

Abstract:

Discusses a refinement to the process by which manufacturing strategy is created. Builds on an existing strategy process (Platts, 1990) and adapts it to fit more closely within the dynamic manufacturing vision. The method for creating a manufacturing vision allows a business to do so in a two- to three-week period as part of a 10-12 week manufacturing strategy project. A conceptual model of manufacturing vision has been developed that enables practitioners to explore the factors that influence the potential competitive contribution of manufacturing and to agree on an explicit direction for change. Describes the successful application of the process in six manufacturing organizations and highlights the practical limitations of the approach.

Relevance:

20.00%

Publisher:

Abstract:

This chapter presents a vision-based system for touch-free interaction with a display at a distance. A single camera is fixed on top of the screen and points towards the user. An attention mechanism allows the user to start the interaction and control a screen pointer by moving their hand in a fist pose directed at the camera. On-screen items can be chosen by a selection mechanism. Current sample applications include browsing video collections as well as viewing a gallery of 3D objects, which the user can rotate with their hand motion. We include an up-to-date review of hand tracking methods and comment on the merits and shortcomings of previous approaches. The proposed tracker uses multiple cues (appearance, color, and motion) for robustness. As the space of possible observation models is generally too large for exhaustive online search, we select models that are suitable for the particular tracking task at hand. During a training stage, various off-the-shelf trackers are evaluated. From this data, different methods of fusing them online are investigated, including parallel and cascaded tracker evaluation. For the case of fist tracking, combining a small number of observers in a cascade results in an efficient algorithm that is used in our gesture interface. The system has been on public display at conferences, where over a hundred users have engaged with it. © 2010 Springer-Verlag Berlin Heidelberg.
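
To make the cascaded tracker evaluation concrete, here is a minimal sketch of the early-rejection idea: cheap cues (motion, color) are scored first and the expensive appearance cue only runs for candidates that survive. The observer interface and thresholds are assumptions for illustration, not the authors' implementation.

```python
def cascaded_score(frame, box, observers, thresholds):
    """Score a candidate hand region by running cue observers in order of
    increasing cost (e.g. motion, colour, then appearance) and rejecting
    early when a cheap cue is not confident. Illustrative sketch only."""
    score = 1.0
    for observe, threshold in zip(observers, thresholds):
        confidence = observe(frame, box)   # each observer returns a value in [0, 1]
        if confidence < threshold:         # early rejection skips the expensive cues
            return 0.0
        score *= confidence                # fuse the cues that survived
    return score

# Usage idea: score candidate boxes near the previous fist position, keep the best:
# best_box = max(candidates, key=lambda b: cascaded_score(frame, b, observers, thresholds))
```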

Relevance:

20.00%

Publisher:

Abstract:

In stereo displays, binocular disparity creates a striking impression of depth. However, such displays present focus cues (blur and accommodation) that specify a different depth than disparity, thereby causing a conflict. This conflict causes several problems, including misperception of the 3D layout, difficulty fusing binocular images, and visual fatigue. To address these problems, we developed a display that preserves the advantages of conventional stereo displays while presenting correct or nearly correct focus cues. In our new stereo display, each eye views a display through a lens that switches between four focal distances at a very high rate. The switches are synchronized to the display, so the focal distance and the distance being simulated on the display are consistent or nearly consistent with one another. Focus cues for points in between the four focal planes are simulated by using a depth-weighted blending technique. We describe the design of the new display, discuss the retinal images it forms under various conditions, and describe an experiment that illustrates the effectiveness of the display in maximizing visual performance while minimizing visual fatigue. © 2009 SPIE-IS&T.
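
A minimal sketch of what a depth-weighted blending rule can look like, assuming a point's intensity is split linearly in dioptres between the two nearest focal planes; the plane spacings and the exact weighting used by the display are described in the paper, so the values and names here are illustrative assumptions.

```python
def blend_weights(point_dioptres, plane_dioptres):
    """Split a simulated point's intensity between the two nearest focal
    planes, linearly in dioptres. plane_dioptres must be sorted ascending.
    Returns {plane_index: weight}. Illustrative formulation only."""
    planes = plane_dioptres
    if point_dioptres <= planes[0]:
        return {0: 1.0}
    if point_dioptres >= planes[-1]:
        return {len(planes) - 1: 1.0}
    for i in range(len(planes) - 1):
        lo, hi = planes[i], planes[i + 1]
        if lo <= point_dioptres <= hi:
            w_hi = (point_dioptres - lo) / (hi - lo)
            return {i: 1.0 - w_hi, i + 1: w_hi}

# Hypothetical four-plane configuration; a point midway between the middle planes
# receives half its intensity on each of them.
print(blend_weights(1.05, [0.2, 0.8, 1.3, 1.9]))
```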

Relevance:

20.00%

Publisher:

Abstract:

Vision tracking has significant potential for tracking resources on large-scale, congested construction sites, where a small number of cameras strategically placed around the site could replace hundreds of tracking tags. Correlating the 2D positions obtained by vision tracking from multiple views can provide the 3D position. However, many 2D vision trackers are available in the literature, and little information is available on which is most effective for construction applications. In this paper, a comparative study of several categories of vision trackers is carried out to identify which is most effective in tracking construction resources. Testing parameters for evaluating the categories of trackers are identified, and the benefits and limitations of each category are presented. The most promising trackers are tested using a database of videos of construction operations. The results indicate the effectiveness of each tracker in relation to each parameter of the test and identify the most suitable tracker for developing effective 3D vision tracking of construction resources.
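
For context, a comparative study of this kind reduces to running each candidate tracker over labelled clips and scoring it per test parameter. Below is a minimal sketch of such a harness, using mean intersection-over-union as the accuracy metric and OpenCV's stock trackers as stand-ins; the trackers, parameters, and metrics actually evaluated in the paper may differ.

```python
import cv2

def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ix = max(0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def evaluate_tracker(make_tracker, frames, ground_truth):
    """Mean IoU of one tracker over a labelled clip; the first box initialises it.
    Tracker factory names depend on the OpenCV version/build."""
    tracker = make_tracker()
    tracker.init(frames[0], tuple(ground_truth[0]))
    scores = []
    for frame, gt in zip(frames[1:], ground_truth[1:]):
        found, box = tracker.update(frame)
        scores.append(iou(box, gt) if found else 0.0)
    return sum(scores) / max(len(scores), 1)

# e.g. compare categories on the same clip:
# evaluate_tracker(cv2.TrackerCSRT_create, frames, gt)
# evaluate_tracker(cv2.TrackerKCF_create, frames, gt)
```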

Relevance:

20.00%

Publisher:

Abstract:

Estimating the fundamental matrix (F), which determines the epipolar geometry between a pair of images or video frames, is a basic step for a wide variety of vision-based functions used in construction operations, such as camera-pair calibration, automatic progress monitoring, and 3D reconstruction. Currently, robust methods (e.g., SIFT + normalized eight-point algorithm + RANSAC) are widely used in the construction community for this purpose. Although they can provide acceptable accuracy, the significant amount of computational time they require impedes their adoption in real-time applications, especially the analysis of video data with many frames per second. Aiming to overcome this limitation, this paper presents and evaluates the accuracy of a solution for finding F that combines two fast and consistent methods: SURF for the selection of a robust set of point correspondences and the normalized eight-point algorithm. This solution is tested extensively on pairs of construction site images that include changes in viewpoint, scale, illumination, and rotation, as well as moving objects. The results demonstrate that this method can be used for real-time applications (5 image pairs per second at a resolution of 640 × 480) involving scenes of the built environment.
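
A minimal OpenCV sketch of the evaluated combination (SURF correspondences filtered by Lowe's ratio test, then the normalized eight-point algorithm). SURF lives in the opencv-contrib nonfree module, so its availability is an assumption about the local build, and the thresholds here are illustrative rather than the paper's settings.

```python
import cv2
import numpy as np

def estimate_fundamental(img1, img2, ratio=0.7):
    """Estimate F from SURF correspondences with the normalized eight-point
    algorithm (no RANSAC stage). Requires an opencv-contrib build with the
    nonfree SURF module enabled; sketch for illustration only."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp1, des1 = surf.detectAndCompute(img1, None)
    kp2, des2 = surf.detectAndCompute(img2, None)

    # Lowe's ratio test keeps a consistent set of correspondences up front,
    # which is what allows the costly RANSAC stage to be dropped.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < ratio * n.distance]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
    F, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_8POINT)
    return F
```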

Relevance:

20.00%

Publisher:

Abstract:

Tracking of project-related entities such as construction equipment, materials, and personnel is used to calculate productivity, detect travel path conflicts, enhance safety on the site, and monitor the project. Radio frequency tracking technologies (Wi-Fi, RFID, UWB) and GPS are commonly used for this purpose. However, on large-scale sites, deploying, maintaining, and removing such systems can be costly and time-consuming. In addition, privacy issues with personnel tracking often limit the usability of these technologies on construction sites. This paper presents a vision-based tracking framework that holds promise to address these limitations. The framework uses videos from a set of two or more static cameras placed on construction sites. In each camera view, the framework identifies and tracks construction entities, providing 2D image coordinates across frames. By combining the 2D coordinates with the geometry of the installed camera system (the distance between the cameras and their view angles), 3D coordinates are calculated at each frame. The results of each step are presented to illustrate the feasibility of the framework.
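
A minimal sketch of the 2D-to-3D step described above, assuming the installed camera geometry has been expressed as 3x4 projection matrices so that OpenCV's linear triangulation can be used; the names and exact formulation are illustrative, not the framework's implementation.

```python
import cv2
import numpy as np

def to_3d(P1, P2, xy1, xy2):
    """Triangulate one entity's 3D position from its 2D image coordinates in
    two calibrated views. P1 and P2 are the 3x4 projection matrices derived
    from the installed camera system (positions, view angles, intrinsics)."""
    pt1 = np.array(xy1, dtype=np.float64).reshape(2, 1)
    pt2 = np.array(xy2, dtype=np.float64).reshape(2, 1)
    X_h = cv2.triangulatePoints(P1, P2, pt1, pt2)   # homogeneous 4x1 result
    return (X_h[:3] / X_h[3]).ravel()               # de-homogenise to (X, Y, Z)

# Repeating this per tracked entity and per frame yields the 3D trajectories.
```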

Relevance:

20.00%

Publisher:

Abstract:

When tracking resources on large-scale, congested, outdoor construction sites, the cost and time of purchasing, installing, and maintaining the position sensors needed to track thousands of materials and hundreds of pieces of equipment and personnel can be significant. To alleviate this problem, a novel vision-based tracking method that allows each sensor (camera) to monitor the position of multiple entities simultaneously has been proposed. This paper presents the full-scale validation experiments for this method. The validation included testing the method under harsh conditions at a variety of mega-project construction sites. The procedure for collecting data from the sites, the testing procedure, the metrics, and the results are reported. The full-scale validation demonstrates that the novel vision tracking method provides a good solution for tracking different entities on a large, congested construction site.

Relevance:

20.00%

Publisher:

Abstract:

Manually inspecting concrete surface defects (e.g., cracks and air pockets) is not always reliable and is labor-intensive. To overcome these limitations, automated inspection using image processing techniques has been proposed. However, current work can only detect defects in an image, without the ability to evaluate them. This paper presents a novel approach for automatically assessing the impact of two common surface defects (i.e., air pockets and discoloration). These two defects are first located using the developed detection methods. Their attributes, such as the number of air pockets and the area of discoloration regions, are then retrieved to calculate the defects' visual impact ratios (VIRs). The appropriate threshold values for these VIRs are selected through a manual rating survey. In this way, for a given concrete surface image, its quality in terms of air pockets and discoloration can be automatically measured by judging whether their VIRs are below the threshold values. The method presented in this paper was implemented in C++ and tested on a database of concrete surface images to validate its performance.
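
A minimal sketch of the thresholding logic described above, assuming a VIR is computed as the fraction of the inspected surface occupied by a defect type; the detection of the defect regions and the survey-derived thresholds are the paper's contribution and are not reproduced here.

```python
def visual_impact_ratio(defect_pixel_count, surface_pixel_count):
    """VIR as the share of the inspected surface covered by one defect type.
    Illustrative formulation; the paper's detection methods supply the
    defect regions and a manual rating survey supplies the thresholds."""
    return defect_pixel_count / float(surface_pixel_count)

def surface_acceptable(air_pocket_vir, discoloration_vir,
                       air_pocket_threshold, discoloration_threshold):
    """A surface passes only if both defect ratios stay below their thresholds."""
    return (air_pocket_vir <= air_pocket_threshold and
            discoloration_vir <= discoloration_threshold)
```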

Relevance:

20.00%

Publisher:

Abstract:

Calibration of a camera system is a necessary step in any stereo metric process. It relates all cameras to a common coordinate system by measuring the intrinsic and extrinsic parameters of each camera. Currently, manual calibration of a camera system is the only way to achieve calibration in civil engineering operations that require stereo metric processes (photogrammetry, videogrammetry, vision-based asset tracking, etc.). This type of calibration, however, is time-consuming and labor-intensive. Furthermore, in civil engineering operations, camera systems are exposed to open, busy sites. In these conditions, the position of presumably stationary cameras can easily be changed by external factors such as wind, vibrations, or an unintentional push or touch from personnel on site. In such cases, manual calibration must be repeated. To address this issue, several self-calibration algorithms have been proposed. These algorithms use projective geometry, the absolute conic, and the Kruppa equations, and variations of these, to produce processes that achieve calibration. However, most of these methods do not consider all the constraints of a camera system, such as camera intrinsic constraints, scene constraints, camera motion, or varying camera intrinsic properties. This paper presents a novel method that takes all of these constraints into consideration to auto-calibrate cameras using an image alignment algorithm originally intended for vision-based tracking. In this method, image frames taken from the cameras are used to calculate the fundamental matrix, which gives the epipolar constraints, and the intrinsic and extrinsic properties of the cameras are acquired from this calculation. Test results are presented in this paper with recommendations for further improvement.
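
For context, once F is available together with estimates of the intrinsic matrices, the extrinsic parameters follow through the essential matrix. The sketch below shows this standard epipolar-geometry route with OpenCV; it illustrates that step only, not the paper's auto-calibration algorithm, and the names are assumptions.

```python
import cv2
import numpy as np

def pose_from_fundamental(F, K1, K2, pts1, pts2):
    """Recover the relative rotation and (unit-scale) translation between two
    cameras from the fundamental matrix and intrinsic estimates, via the
    essential matrix E = K2^T F K1. pts1 and pts2 are Nx2 arrays of matched
    image points. Standard route, shown for illustration only."""
    E = K2.T @ F @ K1
    # recoverPose expects points in normalised image coordinates
    pts1_n = cv2.undistortPoints(np.float64(pts1).reshape(-1, 1, 2), K1, None)
    pts2_n = cv2.undistortPoints(np.float64(pts2).reshape(-1, 1, 2), K2, None)
    _, R, t, _ = cv2.recoverPose(E, pts1_n, pts2_n)
    return R, t
```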