3 results for 3D Reconstruction
at Universidade Federal do Rio Grande do Norte (UFRN)
Abstract:
3D reconstruction is the process of obtaining a detailed three-dimensional graphical model that represents a real object or scene. The process uses sequences of images taken from the scene to automatically extract depth information for feature points, which are detected by applying a computational technique to the images that compose the dataset. Using SURF feature points, this work proposes a model for obtaining the depth of the features detected by the system. In the end, the proposed system extracts three important pieces of information from the image dataset: the 3D position of the feature points; the relative rotation and translation matrices between images; and the relation between the baseline of adjacent images and the accuracy error of the reconstructed 3D points.
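As an illustration of the kind of pipeline this abstract describes, the sketch below matches SURF features between two views, recovers the relative rotation and translation through the essential matrix, and triangulates the matched points to obtain their 3D positions. It is a minimal sketch, not the thesis' actual implementation: it assumes OpenCV with the contrib (non-free) module for SURF, a known intrinsic matrix K, and placeholder image paths.

```python
import cv2
import numpy as np

# Assumed camera intrinsics (placeholder values, not from the thesis)
K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])

img1 = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# SURF lives in opencv-contrib (xfeatures2d) and is flagged non-free
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
kp1, des1 = surf.detectAndCompute(img1, None)
kp2, des2 = surf.detectAndCompute(img2, None)

# Match SURF descriptors with Lowe's ratio test
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = [m for m, n in matcher.knnMatch(des1, des2, k=2)
           if m.distance < 0.7 * n.distance]
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Essential matrix -> relative rotation R and translation t (up to scale)
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# Triangulate the inlier correspondences to get 3D feature positions
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
inl = mask.ravel().astype(bool)
pts4d = cv2.triangulatePoints(P1, P2, pts1[inl].T, pts2[inl].T)
pts3d = (pts4d[:3] / pts4d[3]).T  # one 3D point per matched feature
```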
Abstract:
Visual odometry is the process of estimating camera position and orientation based solely on images and on features (projections of visual landmarks present in the scene) extracted from them. With the increasing advance of computer vision algorithms and computer processing power, the subarea known as Structure from Motion (SFM) started to supply mathematical tools for composing localization systems for robotics and Augmented Reality applications, in contrast with its initial purpose of serving inherently offline solutions aimed at 3D reconstruction and image-based modelling. Accordingly, this work proposes a pipeline for obtaining relative position that uses a previously calibrated camera as a positional sensor and is based entirely on models and algorithms from SFM. Techniques usually applied in camera localization systems, such as Kalman filters and particle filters, are not used, making additional information such as probabilistic models for camera state transitions unnecessary. Experiments assessing both the 3D reconstruction quality and the camera positions estimated by the system were performed, in which image sequences captured in realistic scenarios were processed and compared to localization data gathered from a mobile robotic platform.
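A minimal sketch of filterless, SFM-style visual odometry in the spirit of this abstract: relative poses between consecutive frames are chained into a trajectory, with no Kalman or particle filter involved. ORB features stand in for whatever detector the thesis actually uses, the intrinsic matrix K is assumed known (the camera is calibrated beforehand), and monocular translation is recovered only up to scale.

```python
import cv2
import numpy as np

def visual_odometry(frames, K):
    """Chain frame-to-frame relative poses into a camera trajectory.

    frames: list of grayscale images; K: 3x3 intrinsic matrix.
    """
    orb = cv2.ORB_create(2000)  # feature choice is an assumption here
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    R_w, t_w = np.eye(3), np.zeros((3, 1))  # accumulated world pose
    trajectory = [t_w.ravel().copy()]

    prev_kp, prev_des = orb.detectAndCompute(frames[0], None)
    for frame in frames[1:]:
        kp, des = orb.detectAndCompute(frame, None)
        matches = matcher.match(prev_des, des)
        pts1 = np.float32([prev_kp[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp[m.trainIdx].pt for m in matches])

        # Relative pose from the essential matrix, RANSAC for outliers
        E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

        # Accumulate the relative motion; frame/sign conventions vary with
        # recoverPose's definition, and translation is up to scale
        t_w = t_w + R_w @ t
        R_w = R_w @ R
        trajectory.append(t_w.ravel().copy())
        prev_kp, prev_des = kp, des
    return np.array(trajectory)
```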
Abstract:
Registration of point clouds captured by depth sensors is an important task in 3D reconstruction applications based on computer vision. In many applications with strict performance requirements, registration must be executed not only with precision but also at the same frequency at which data is acquired by the sensor. This thesis proposes the use of the pyramidal sparse optical flow algorithm to incrementally register point clouds captured by RGB-D sensors (e.g. Microsoft Kinect) in real time. The accumulated error inherent to the process is subsequently minimized using a marker and pose graph optimization. Experimental results gathered by processing several RGB-D datasets validate the system proposed by this thesis in visual odometry and simultaneous localization and mapping (SLAM) applications.
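To make the incremental registration step concrete, the sketch below tracks features between consecutive RGB frames with pyramidal sparse (Lucas-Kanade) optical flow, back-projects them through the depth images, and aligns the resulting 3D point sets with a closed-form (Kabsch) rigid transform. The Kinect-like intrinsics and the metric-depth assumption are placeholders, and the marker-based pose graph optimization mentioned in the abstract is not shown here.

```python
import cv2
import numpy as np

# Assumed Kinect-like intrinsics (placeholders, not from the thesis)
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

def backproject(pts, depth):
    """Lift 2D pixels to 3D camera-frame points; depth assumed in meters."""
    z = depth[pts[:, 1].astype(int), pts[:, 0].astype(int)]
    x = (pts[:, 0] - CX) * z / FX
    y = (pts[:, 1] - CY) * z / FY
    return np.column_stack([x, y, z])

def register(gray0, depth0, gray1, depth1):
    """Estimate the rigid transform (R, t) between two RGB-D frames."""
    p0 = cv2.goodFeaturesToTrack(gray0, 500, 0.01, 10)
    # Pyramidal sparse optical flow (Lucas-Kanade) tracks p0 into frame 1
    p1, status, _ = cv2.calcOpticalFlowPyrLK(gray0, gray1, p0, None)
    ok = status.ravel() == 1
    q0 = backproject(p0.reshape(-1, 2)[ok], depth0)
    q1 = backproject(p1.reshape(-1, 2)[ok], depth1)

    valid = (q0[:, 2] > 0) & (q1[:, 2] > 0)  # discard missing depth
    q0, q1 = q0[valid], q1[valid]

    # Closed-form rigid alignment (Kabsch): q1 ~ R @ q0 + t
    c0, c1 = q0.mean(axis=0), q1.mean(axis=0)
    H = (q0 - c0).T @ (q1 - c1)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # correct an improper (reflected) rotation
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c1 - R @ c0
    return R, t
```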