997 results for 3D coordinates
Abstract:
The commercial far-range (>10 m) spatial data collection methods for acquiring infrastructure geometric data are not fully automated because of the manual pre- and/or post-processing work they require. The amount of human intervention and, in some cases, the high equipment costs associated with these methods impede their adoption in the majority of infrastructure mapping activities. This paper presents an automated stereo vision-based method, as an inexpensive alternative, for producing a sparse Euclidean 3D point cloud of an infrastructure scene from two video streams captured by a pair of calibrated cameras. In this process, SURF features are automatically detected and matched between each pair of stereo video frames, and the 3D coordinates of the matched feature points are calculated via triangulation. The SURF features detected in two successive video frames are then automatically matched, and the RANSAC algorithm is used to discard mismatches. The quaternion motion estimation method is applied, together with bundle adjustment optimization, to register successive point clouds. The method was tested on a database of infrastructure stereo video streams, and the validity and statistical significance of the results were evaluated by comparing the spatial distances of randomly selected feature points with their corresponding tape measurements.
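The quaternion registration step mentioned in this abstract can be illustrated with Horn's closed-form method for aligning two sets of matched 3D points (the per-frame point clouds after RANSAC filtering). This is a minimal sketch under assumed inputs, not the authors' implementation, and it omits the bundle adjustment refinement:

    import numpy as np

    def register_quaternion(src, dst):
        # Rigid transform (R, t) aligning src to dst with Horn's quaternion method.
        # src, dst: (N, 3) arrays of matched 3D points from successive point clouds.
        c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
        a, b = src - c_src, dst - c_dst
        S = a.T @ b                                   # 3x3 cross-covariance matrix
        Sxx, Sxy, Sxz = S[0]
        Syx, Syy, Syz = S[1]
        Szx, Szy, Szz = S[2]
        N = np.array([
            [Sxx + Syy + Szz, Syz - Szy,        Szx - Sxz,        Sxy - Syx],
            [Syz - Szy,       Sxx - Syy - Szz,  Sxy + Syx,        Szx + Sxz],
            [Szx - Sxz,       Sxy + Syx,       -Sxx + Syy - Szz,  Syz + Szy],
            [Sxy - Syx,       Szx + Sxz,        Syz + Szy,       -Sxx - Syy + Szz]])
        # The optimal rotation is the unit quaternion given by the eigenvector of N
        # with the largest eigenvalue (eigh returns eigenvalues in ascending order).
        w, x, y, z = np.linalg.eigh(N)[1][:, -1]
        R = np.array([
            [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])
        t = c_dst - R @ c_src
        return R, t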
Abstract:
Most existing automated machine vision-based techniques for as-built documentation of civil infrastructure use only point features to recover the 3D structure of a scene. However, it is often the case in man-made structures (e.g., buildings and roofs) that not enough point features can be reliably detected, which can lead to the failure of these techniques. To address this problem, this paper exploits the prominence of straight lines in infrastructure scenes and presents a hybrid approach that benefits from both point and line features. A calibrated stereo pair of video cameras is used to collect data. Point and line features are then detected and matched across video frames, and the 3D structure of the scene is recovered by computing the 3D coordinates of the matched features. The proposed approach has been tested in realistic outdoor environments, and preliminary results indicate its ability to handle a variety of scenes.
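A minimal sketch of extracting both feature types from a single frame is given below. ORB corners and probabilistic Hough line segments stand in for the detectors; the abstract does not name specific algorithms, so these choices are assumptions:

    import cv2
    import numpy as np

    def detect_points_and_lines(frame_bgr):
        # Extract point features and straight-line segments from one video frame.
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        # Point features: ORB is used here as a freely available stand-in detector.
        orb = cv2.ORB_create(nfeatures=2000)
        keypoints, descriptors = orb.detectAndCompute(gray, None)
        # Line features: Canny edges followed by a probabilistic Hough transform.
        edges = cv2.Canny(gray, 50, 150)
        segments = cv2.HoughLinesP(edges, 1, np.pi / 180, 80,
                                   minLineLength=30, maxLineGap=5)
        segments = [] if segments is None else segments[:, 0, :]   # rows of (x1, y1, x2, y2)
        return keypoints, descriptors, segments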
Abstract:
There is a clear need to develop fisheries-independent methods to quantify individual sizes, density, and three-dimensional characteristics of reef fish spawning aggregations for use in population assessments and to provide critical baseline data on the reproductive life history of exploited populations. We designed, constructed, calibrated, and applied an underwater stereo-video system to estimate individual sizes and three-dimensional (3D) positions of Nassau grouper (Epinephelus striatus) at a spawning aggregation site located on a reef promontory on the western edge of Little Cayman Island, Cayman Islands, BWI, on 23 January 2003. The system consists of two free-running camcorders mounted on a meter-long bar and supported by a SCUBA diver. Paired video “stills” were captured, and the nose and tail of individual fish observed in the field of view of both cameras were digitized using image analysis software. Conversion of these two-dimensional screen coordinates to 3D coordinates was achieved through a matrix inversion algorithm and calibration data. Our estimate of mean total length (58.5 cm, n = 29) was in close agreement with estimated lengths from a hydroacoustic survey and from direct measures of fish size using visual census techniques. We discovered a possible bias in length measures using the video method, most likely arising from fish orientations that were not perpendicular to the optical axis of the camera system. We observed 40 individuals occupying a volume of 33.3 m³, a concentration of 1.2 individuals m⁻³, with a mean (SD) nearest-neighbor distance of 70.0 (29.7) cm. We promote the use of roving-diver stereo-videography as a method to assess the size distribution, density, and 3D spatial structure of fish spawning aggregations.
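The screen-to-3D conversion described above can be illustrated with the standard direct linear transformation (DLT), in which each camera contributes two linear equations in the unknown (X, Y, Z) and the stacked system is solved by a (pseudo)inverse. The 11-coefficient form and the variable names below are assumptions; only the general scheme follows the abstract:

    import numpy as np

    def dlt_reconstruct(L1, L2, uv1, uv2):
        # Recover (X, Y, Z) for one digitized point seen in both camera views.
        # L1, L2: 11-term DLT coefficient vectors from calibration; uv1, uv2: (u, v) screen coordinates.
        A, b = [], []
        for L, (u, v) in ((L1, uv1), (L2, uv2)):
            A.append([L[0] - u * L[8], L[1] - u * L[9], L[2] - u * L[10]])
            A.append([L[4] - v * L[8], L[5] - v * L[9], L[6] - v * L[10]])
            b += [u - L[3], v - L[7]]
        xyz, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
        return xyz

    def total_length(L1, L2, nose1, nose2, tail1, tail2):
        # Fish total length = distance between the reconstructed nose and tail points.
        return np.linalg.norm(dlt_reconstruct(L1, L2, nose1, nose2) -
                              dlt_reconstruct(L1, L2, tail1, tail2))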
Abstract:
Tracking of project-related entities such as construction equipment, materials, and personnel is used to calculate productivity, detect travel-path conflicts, enhance safety on the site, and monitor the project. Radio frequency tracking technologies (Wi-Fi, RFID, UWB) and GPS are commonly used for this purpose. However, on large-scale sites, deploying, maintaining, and removing such systems can be costly and time-consuming. In addition, privacy issues with personnel tracking often limit the usability of these technologies on construction sites. This paper presents a vision-based tracking framework that holds promise to address these limitations. The framework uses videos from a set of two or more static cameras placed on construction sites. In each camera view, the framework identifies and tracks construction entities, providing 2D image coordinates across frames. By combining the 2D coordinates with the known camera configuration (the distance between the cameras and their view angles), 3D coordinates are calculated for each frame. The results of each step are presented to illustrate the feasibility of the framework.
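A hedged sketch of the 2D-to-3D step: given each camera's intrinsics and the relative placement of the two fixed cameras (their baseline and orientations), projection matrices are formed and the per-frame 2D track coordinates are triangulated. The OpenCV-based formulation and all names are illustrative assumptions:

    import numpy as np
    import cv2

    def triangulate_tracks(K1, K2, R, t, pts_cam1, pts_cam2):
        # K1, K2: 3x3 intrinsic matrices of the two static cameras.
        # R, t: rotation and translation of camera 2 relative to camera 1
        #       (from the known installation geometry).
        # pts_cam1, pts_cam2: (N, 2) pixel coordinates of the same tracked entity.
        P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])    # camera 1 taken as the origin
        P2 = K2 @ np.hstack([R, t.reshape(3, 1)])
        X_h = cv2.triangulatePoints(P1, P2,
                                    pts_cam1.T.astype(np.float64),
                                    pts_cam2.T.astype(np.float64))
        return (X_h[:3] / X_h[3]).T                            # (N, 3) 3D coordinates per frame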
Abstract:
Vision trackers have been proposed as a promising alternative for tracking at large-scale, congested construction sites. They provide the location of a large number of entities in a camera view across frames. However, vision trackers provide only two-dimensional (2D) pixel coordinates, which are not adequate for construction applications. This paper proposes and validates a method that overcomes this limitation by employing stereo cameras and converting 2D pixel coordinates to three-dimensional (3D) metric coordinates. The proposed method consists of four steps: camera calibration, camera pose estimation, 2D tracking, and triangulation. Given that the method employs fixed, calibrated stereo cameras with a long baseline, appropriate algorithms are selected for each step. Once the first two steps reveal camera system parameters, the third step determines 2D pixel coordinates of entities in subsequent frames. The 2D coordinates are triangulated on the basis of the camera system parameters to obtain 3D coordinates. The methodology presented in this paper has been implemented and tested with data collected from a construction site. The results demonstrate the suitability of this method for on-site tracking purposes.
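The first two steps, camera calibration and camera pose estimation, can be sketched with standard OpenCV calls; the checkerboard target, the surveyed control points, and the variable names are assumptions. The resulting projection matrix then feeds the triangulation step:

    import numpy as np
    import cv2

    def calibrate_and_estimate_pose(obj_pts, img_pts, image_size, control_3d, control_2d):
        # Step 1: intrinsic calibration from several checkerboard views (assumed target).
        # obj_pts / img_pts: lists of 3D board points and the detected 2D corners.
        _, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, image_size, None, None)
        # Step 2: camera pose from surveyed site points and their pixel positions.
        _, rvec, tvec = cv2.solvePnP(control_3d, control_2d, K, dist)
        R, _ = cv2.Rodrigues(rvec)
        P = K @ np.hstack([R, tvec.reshape(3, 1)])   # projection matrix for the triangulation step
        return K, dist, P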
Abstract:
A portable 3D laser scanning system has been designed and built for robot vision. By tilting the charge-coupled device (CCD) plane of the portable 3D scanning system according to the Scheimpflug condition, the depth of view is extended from less than 40 mm to 100 mm. Based on the tilted camera model, the traditional two-step camera calibration method is modified by introducing the tilt angle as a factor. In addition, a segmental calibration approach, i.e., dividing the whole working range into two parts and calibrating each with its own system parameters, is proposed to effectively improve the measurement accuracy of the large depth-of-view 3D laser scanner. In the 3D reconstruction process, different calibration parameters are used to transform 2D coordinates into 3D coordinates depending on the position of the image point in the CCD plane, and a measurement accuracy of 60 μm is obtained experimentally. Finally, an experiment in which the large depth-of-view portable 3D laser scanner, carried by an IRB 4400 industrial robot, scans a lamina demonstrates the effectiveness and high measurement accuracy of the scanning system.
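The segmental calibration idea, selecting a different parameter set depending on where the image point falls on the CCD, might be sketched as follows. The split by image row and the ray/laser-plane parameterization are illustrative assumptions, not the paper's actual model:

    import numpy as np

    def stripe_pixel_to_3d(u, v, calib_near, calib_far, v_split):
        # Each calib_* holds 'K' (3x3 intrinsics) and a laser plane 'n', 'd' with n·x + d = 0,
        # valid for one part of the working range; v_split is the image row dividing the two parts.
        calib = calib_near if v < v_split else calib_far
        ray = np.linalg.inv(calib['K']) @ np.array([u, v, 1.0])   # viewing ray through the pixel
        s = -calib['d'] / (calib['n'] @ ray)                      # intersect the ray with the laser plane
        return s * ray                                            # 3D point in camera coordinates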
Abstract:
The Rainbow 3D camera is a fast method for acquiring 3D information based on spectral analysis. The scene is illuminated with a continuously varying color spectrum, so the image captured by a color CCD camera exhibits a regular color variation, and different colors (wavelengths) define different spatial color planes. By calibrating these color planes and the camera imaging model, the 3D coordinates of every point in the image can be computed.
Abstract:
The Rainbow 3D camera is a fast method for acquiring 3D information based on spectral analysis. The scene is illuminated with a continuously varying color spectrum, so the image captured by a color CCD camera exhibits a regular color variation, and different colors (wavelengths) define different spatial color planes. By calibrating these color planes and the camera imaging model, the 3D coordinates of every point in the image can be computed. This paper focuses on the calibration and color classification techniques needed to implement the method and concludes with experimental results.
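Geometrically, the method amounts to a ray/plane intersection: the measured color (wavelength) of a pixel selects one of the calibrated light planes, and the 3D point is where the camera ray through that pixel meets the plane. The hue-indexed lookup table and linear interpolation below are assumptions used only for illustration:

    import numpy as np

    def rainbow_point(u, v, hue, K, hue_table, plane_table):
        # hue_table: calibrated hues in ascending order; plane_table: matching light planes,
        # each as (n, d) with n·x + d = 0 in camera coordinates.
        i = int(np.clip(np.searchsorted(hue_table, hue), 1, len(hue_table) - 1))
        w = (hue - hue_table[i - 1]) / (hue_table[i] - hue_table[i - 1])
        n = (1 - w) * plane_table[i - 1][0] + w * plane_table[i][0]   # interpolate the color plane
        d = (1 - w) * plane_table[i - 1][1] + w * plane_table[i][1]
        ray = np.linalg.inv(K) @ np.array([u, v, 1.0])                # camera ray through the pixel
        return (-d / (n @ ray)) * ray                                 # intersection with the color plane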
Abstract:
Anomalies in pelvic morphology, trunk posture, and balance control in girls with adolescent idiopathic scoliosis (AIS) have often been studied, but few studies have distinguished these disorders according to the severity of the spinal deformity, and none has assessed pelvic orientation and distortion, postural asymmetry, and standing instability of AIS subjects within a single study. Such a study would help explain the development of the disease and highlight risk factors useful for prognosis. The general objective of this thesis is to identify biomechanical factors related to bone growth, posture, and balance that distinguish moderate from severe AIS. The 3D positions of 14 landmarks on 46 girls with right AIS (moderate and severe) and 28 control subjects were captured to quantify pelvic morphology and posture. A 64-s quiet stance on a force platform was also recorded to assess balance. The parameters retained were the pelvic and trunk orientation angles; the distances between the iliac crest and S1, which measure pelvic distortion; the mean, range, and velocity of the center of pressure (COP) in the mediolateral (ML) and anteroposterior directions; and the mean and range of the free moment. Differences among the three groups (control, moderate AIS, severe AIS) were tested with ANOVAs, and the relationships between the Cobb angle and the pelvic, postural, and balance parameters with correlation coefficients. In addition, multiple regressions expressing the Cobb angle were performed with the pelvic, postural, and balance parameters to determine which class of parameters best predicts the Cobb angle. No ANOVA was significant for pelvic orientation, although differences in pelvic geometry were noted between the two AIS groups: the moderate AIS subjects had a shallower left hemipelvis than the severe AIS and control subjects, whereas the severe AIS subjects had a right hemipelvis about 1.5 cm wider than that of the moderate AIS subjects. A correlation coefficient of -0.54 associates a right pelvic rotation with an increase in iliac crest width. Posturally, the severe AIS subjects showed more pronounced lateral and anterior trunk inclinations and upper-body axial rotation than the moderate AIS subjects. The correlations between the postural parameters, the Cobb angle, and pelvic morphology indicate that postural attitude is associated with pelvic distortion in all anatomical planes, but with the Cobb angle only in the sagittal and horizontal planes. The differences found between the two AIS groups in balance control amount to an increase in the range and velocity of the ML COP. A multiple regression of 0.896 is obtained when the pelvic, postural, and balance parameters are used together, although those related to pelvic distortion are the best correlated with the Cobb angle. This thesis distinguishes the pelvic morphology of severe AIS subjects from that of moderate ones, pointing to the detection of a growth anomaly before the Cobb angle becomes large.
Although the pelvic growth indices are the most strongly correlated with the Cobb angle, it is by considering pelvic morphology, posture, and balance together that the severity of a scoliosis can be precisely determined. Identifying such predictive factors of AIS may facilitate the prognosis of a curvature.
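The balance parameters named above (the mean, range, and velocity of the COP in each direction) follow standard posturography definitions; a minimal sketch computing them from a sampled COP time series is given below, with the definitions assumed rather than taken from the thesis:

    import numpy as np

    def cop_parameters(cop_ml, cop_ap, fs):
        # cop_ml, cop_ap: COP coordinates (cm) over the standing trial; fs: sampling rate (Hz).
        out = {}
        for name, x in (("ML", np.asarray(cop_ml)), ("AP", np.asarray(cop_ap))):
            out[name] = {"mean": x.mean(),
                         "range": x.max() - x.min(),
                         "velocity": np.abs(np.diff(x)).sum() * fs / (len(x) - 1)}  # path length / duration
        return out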
Abstract:
This work proposes a kinematic control scheme, using visual feedback, for a robot arm with five degrees of freedom. Using computer vision techniques, a method was developed to determine the Cartesian 3D position and orientation (pose) of the robot arm from an image of the robot obtained through a camera. A colored triangular label is placed on the robot manipulator's tool, and efficient heuristic rules are used to locate the vertices of that label in the image. The tool pose is obtained from those vertices through numerical methods. A color calibration scheme based on the K-means algorithm was implemented to guarantee the robustness of the vision system under lighting variations. The extrinsic camera parameters are computed from the image of four coplanar points whose Cartesian 3D coordinates, relative to a fixed frame, are known. Two distinct tool poses, initial and final, obtained from the image, are interpolated to generate a desired trajectory in Cartesian space. The error signal in the proposed control scheme is the difference between the desired and actual tool poses. Gains are applied to the error signal, and the resulting signal is mapped into joint increments using the pseudoinverse of the manipulator's Jacobian matrix. These increments are applied to the manipulator joints, moving the tool to the desired pose.
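The control law described at the end of the abstract, a pose error scaled by gains and mapped to joint increments through the pseudoinverse of the manipulator Jacobian, can be sketched in a few lines; the 6-vector pose parameterization and gain vector are assumptions:

    import numpy as np

    def joint_increment(pose_desired, pose_current, jacobian, gains):
        # pose_*: 6-vectors (position + orientation) estimated from the camera image.
        # jacobian: 6xN manipulator Jacobian at the current configuration; gains: 6-vector of gains.
        error = pose_desired - pose_current                 # pose error in Cartesian space
        return np.linalg.pinv(jacobian) @ (gains * error)   # joint increments via the pseudoinverse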
Abstract:
This paper presents a method to recover the 3D geometry of Lambertian surfaces using multiple images taken from the same viewpoint with the scene illuminated from different positions. The approach differs from classical photometric stereo in that it considers light sources at a finite distance from the object and perspective projection in image formation. The proposed model allows a local solution and the recovery of 3D coordinates, in addition to surface orientation. A procedure to calibrate the light sources is also presented, and results of applying the algorithm to synthetic images are shown.
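For contrast, the classical distant-light, orthographic photometric stereo solve, which the abstract's near-light, perspective model generalizes, can be written as a single per-pixel least-squares problem; this sketch is not the paper's method:

    import numpy as np

    def photometric_stereo(I, L):
        # I: (M, H, W) image stack under M lightings; L: (M, 3) unit light directions.
        # Returns per-pixel albedo and unit surface normals (distant-light Lambertian model).
        M, H, W = I.shape
        G = np.linalg.pinv(L) @ I.reshape(M, -1)        # least-squares solve of L·G = I; G = albedo * normal
        albedo = np.linalg.norm(G, axis=0)
        normals = G / np.maximum(albedo, 1e-8)
        return albedo.reshape(H, W), normals.reshape(3, H, W)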
Abstract:
The aim of this work is to evaluate the influence of point measurements in images with subpixel accuracy and their contribution to the calibration of digital cameras. The effect of subpixel measurements on the 3D coordinates of check points in object space is also evaluated. For this purpose, an algorithm allowing subpixel accuracy, based on the Förstner operator, was implemented for the semi-automatic determination of points of interest. Experiments were carried out with a block of images acquired with the DuncanTech MS3100-CIR multispectral camera. The influence of subpixel measurements on the adjustment by the Least Squares Method (LSM) was evaluated by comparing the estimated standard deviations of the parameters in both situations: manual measurement (pixel accuracy) and subpixel estimation. Additionally, the influence of subpixel measurements on the 3D reconstruction was analyzed. Based on the obtained results, i.e., the reduction of the standard deviations of the Interior Orientation Parameters (IOP) and of the relative error of the 3D reconstruction, it was shown that measurements with subpixel accuracy are relevant for photogrammetric tasks in which metric quality is of great importance, such as camera calibration.
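The difference between pixel-level and subpixel point measurement can be illustrated with a standard corner detector followed by subpixel refinement; the Shi-Tomasi detector below is a stand-in for the Förstner operator used in the paper:

    import numpy as np
    import cv2

    def subpixel_points(gray, max_points=200):
        # Detect interest points at pixel precision and refine them to subpixel accuracy.
        corners = cv2.goodFeaturesToTrack(gray, maxCorners=max_points,
                                          qualityLevel=0.01, minDistance=10)
        pixel = corners.reshape(-1, 2).copy()                      # pixel-level locations
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 40, 0.001)
        refined = cv2.cornerSubPix(gray, corners, (5, 5), (-1, -1), criteria)
        return pixel, refined.reshape(-1, 2)                       # subpixel locations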
Abstract:
Graduate Program in Movement Sciences (Ciências da Motricidade) - IBRC
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
In this paper we present a model-based approach for real-time camera pose estimation in industrial scenarios. The line model used for tracking is generated by rendering a polygonal model and extracting contours from the rendered scene. By un-projecting a point on the contour with the depth value stored in the z-buffer, the 3D coordinates of the contour can be calculated. To establish 2D/3D correspondences, the 3D control points on the contour are projected into the image and a perpendicular search for gradient maxima is performed for every point on the contour. Multiple hypotheses of 2D image points corresponding to a 3D control point make the pose estimation robust against ambiguous edges in the image.
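The un-projection step can be illustrated for a pinhole model of the rendering camera: a contour pixel plus the depth read from the z-buffer yields the 3D control point. The conversion from a non-linear OpenGL-style depth-buffer value to metric depth uses the standard near/far-plane formula; whether the authors' renderer required it is an assumption:

    import numpy as np

    def unproject_contour_point(u, v, zbuf_value, K, near, far):
        # K: 3x3 pinhole intrinsics of the virtual camera used for rendering.
        # zbuf_value: non-linear depth in [0, 1]; near, far: clipping planes of that camera.
        z_ndc = 2.0 * zbuf_value - 1.0
        depth = 2.0 * near * far / (far + near - z_ndc * (far - near))   # metric (eye-space) depth
        ray = np.linalg.inv(K) @ np.array([u, v, 1.0])                   # viewing ray through the pixel
        return depth * ray / ray[2]                                      # 3D control point (camera frame)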