7 results for AVT Prosilica GC2450C camera system
at Universidad de Alicante
Abstract:
Nowadays, RGB-D sensors have attracted a great deal of research in computer vision and robotics. These sensors, such as the Kinect, provide 3D data together with color information. However, their working range is limited to less than 10 meters, which makes them unsuitable for some robotics applications, such as outdoor mapping. In these environments, 3D lasers, with working ranges of 20-80 meters, are a better choice. However, 3D lasers do not usually provide color information. A simple 2D camera can be used to add color information to the point cloud, but a calibration process between the camera and the laser must be carried out first. In this paper we present a portable calibration system that calibrates any conventional camera with a 3D laser in order to assign color information to the 3D points obtained. In this way, we can exploit the precision of the laser while simultaneously making use of color information. Unlike other techniques that rely on a three-dimensional body of known dimensions in the calibration process, this system is highly portable because it uses small catadioptrics that can be placed easily in the environment. We use our calibration system in a 3D mapping pipeline, including Simultaneous Localization and Mapping (SLAM), to obtain a colored 3D map that can be used for different tasks. We show that an additional problem arises: the information captured by 2D cameras varies when lighting conditions change, so when we merge 3D point clouds from two different views, several points in a given neighborhood can carry different color information. A new method for color fusion is presented that yields correctly colored maps. The system is tested by applying it to 3D reconstruction.
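As a rough illustration of the colorizing step this abstract describes (not of the catadioptric calibration itself), the following Python sketch projects laser points into a calibrated camera image and attaches the sampled colors. The intrinsics K, distortion coefficients dist, and the camera-laser extrinsics rvec/tvec are assumed to come from a prior calibration; all names and values here are illustrative assumptions, not the authors' code.

import numpy as np
import cv2

def colorize_cloud(points_laser, image, K, dist, rvec, tvec):
    # Transform laser points into the camera frame and keep only the
    # points lying in front of the camera.
    R, _ = cv2.Rodrigues(rvec)
    pts_cam = points_laser @ R.T + tvec.reshape(1, 3)
    pts = points_laser[pts_cam[:, 2] > 0]

    # Project into the image with the calibrated intrinsics/distortion.
    pixels, _ = cv2.projectPoints(pts.astype(np.float64), rvec, tvec, K, dist)
    pixels = pixels.reshape(-1, 2)
    u = np.round(pixels[:, 0]).astype(int)
    v = np.round(pixels[:, 1]).astype(int)

    # Discard projections that fall outside the image bounds.
    h, w = image.shape[:2]
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)

    # Sample colors (OpenCV images are BGR) and attach them to the points.
    rgb = image[v[ok], u[ok]][:, ::-1].astype(np.float64)
    return np.hstack([pts[ok], rgb])  # rows: [x, y, z, r, g, b]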
Abstract:
Paper submitted to the 43rd International Symposium on Robotics (ISR2012), Taipei, Taiwan, Aug. 29-31, 2012.
Abstract:
In this work, we present a multi-camera surveillance system based on self-organizing neural networks for representing events in video. The system processes several tasks in parallel using GPUs (graphics processing units). It addresses multiple vision tasks at various levels, such as segmentation, representation or characterization, and analysis and monitoring of movement. These features allow the system to build a robust representation of the environment and to interpret the behavior of mobile agents in the scene. The vision module must also be integrated into a global system that operates in a complex environment, receiving images from multiple acquisition devices at video rate. To offer relevant information to higher-level systems and to monitor and make decisions in real time, it must satisfy a set of requirements: time constraints, high availability, robustness, high processing speed, and re-configurability. We have built a system able to represent and analyze the motion in video acquired by a multi-camera network and to process multi-source data in parallel on a multi-GPU architecture.
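To give a flavor of how a self-organizing network can represent motion, here is a minimal numpy sketch of a self-organizing map whose nodes adapt toward the positions of foreground (moving) pixels. The toy 1-D topology, learning rate, and CPU-only implementation are illustrative assumptions; the paper's system uses a GPU-parallel architecture.

import numpy as np

def som_update(nodes, samples, lr=0.1, sigma=1.0):
    # One adaptation epoch: pull the winning node and its grid
    # neighbors toward each sample (a foreground pixel coordinate).
    n = len(nodes)
    for s in samples:
        d = np.linalg.norm(nodes - s, axis=1)
        win = np.argmin(d)                         # best-matching unit
        # Gaussian neighborhood over the node index (toy 1-D topology).
        g = np.exp(-((np.arange(n) - win) ** 2) / (2 * sigma ** 2))
        nodes += lr * g[:, None] * (s - nodes)
    return nodes

rng = np.random.default_rng(0)
fg = rng.uniform(0, 100, size=(200, 2))            # stand-in foreground pixels
nodes = rng.uniform(0, 100, size=(10, 2))          # initial node positions
nodes = som_update(nodes, fg)                      # nodes now trace the motion region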
Abstract:
Image-Based Visual Servoing (IBVS) is a vision-based robot control scheme. It uses only the visual information obtained from a camera to guide a robot from any pose to a desired one. However, IBVS requires the estimation of several parameters that cannot be obtained directly from the image. These parameters range from the intrinsic camera parameters (which can be obtained from a prior camera calibration) to the distance along the optical axis between the camera and the visual features, that is, the depth. This paper presents a comparative study of the performance of D-IBVS when the depth is estimated in three different ways using a low-cost RGB-D sensor such as the Kinect. The visual servoing system has been developed on ROS (Robot Operating System), a meta-operating system for robots. The experiments show that computing the depth value for each visual feature improves the system performance.
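The classical IBVS control law for point features makes explicit where the per-feature depth Z enters: it appears in the translational columns of the interaction matrix. The sketch below follows that standard formulation; the gain value and example interface are illustrative assumptions, not the paper's implementation.

import numpy as np

def interaction_matrix(x, y, Z):
    # 2x6 interaction matrix of a normalized image point (x, y) at depth Z.
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, lambda_gain=0.5):
    # Camera twist [vx vy vz wx wy wz] driving the features toward
    # their desired image positions: v = -lambda * pinv(L) * error.
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(desired)).ravel()
    return -lambda_gain * np.linalg.pinv(L) @ e

A poor depth estimate distorts L and hence the commanded velocity, which is why per-feature depth from an RGB-D sensor can improve convergence.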
Abstract:
This study analyzes the repeatability, reproducibility, and accuracy of a new hyperspectral system based on a pushbroom sensor as a means of measuring the spectral features and color of materials and objects. The hyperspectral system consisted of a CCD camera, a spectrograph, and an objective lens. An additional linear moving system allowed mechanical scanning of the complete scene. A uniform overhead luminaire with a daylight configuration was used to irradiate the scene using d:45 geometry. We followed the guidelines of ASTM E2214-08, Standard Practice for Specifying and Verifying the Performance of Color-Measuring Instruments, which defines the standard and latest multidimensional procedures. The results obtained are analyzed in depth and compared to those recently reported by other authors for spectrophotometers and multispectral systems. It can be concluded that hyperspectral systems are reliable and can be used in industry to perform spectral and color readings with high spatial resolution.
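One simple figure of merit when verifying a color-measuring instrument is the mean CIELAB color difference between repeated measurements of the same sample and their average. ASTM E2214-08 specifies a fuller multidimensional procedure; the scalar summary below is an illustrative simplification with made-up readings.

import numpy as np

def mean_delta_e(lab_repeats):
    # lab_repeats: (N, 3) array of CIELAB readings of one sample.
    lab = np.asarray(lab_repeats, dtype=float)
    center = lab.mean(axis=0)
    # Mean Euclidean distance (Delta E*ab) of each reading to the centroid.
    return float(np.mean(np.linalg.norm(lab - center, axis=1)))

readings = [[52.1, 3.4, -1.2], [52.0, 3.5, -1.1], [52.2, 3.3, -1.3]]
print(mean_delta_e(readings))  # smaller values indicate better repeatability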
Abstract:
In this study, a digital CMOS camera was calibrated for use as a non-contact colorimeter for measuring the color of granite artworks. The low chroma values of the granite, which yield similar stimulation of the camera's three color channels, proved to be the most challenging aspect of the task. The appropriate parameters for converting the device-dependent RGB color space into a device-independent color space were established. For this purpose, the color of a large number of Munsell samples (corresponding to the previously defined color gamut of granite) was measured with the digital camera and with a spectrophotometer (the reference instrument). The color data were then compared using the CIELAB color-difference formulae. The best correlation between measurements was obtained when the camera worked at 10 bits and the spectrophotometer measured in SCI mode. Finally, the calibrated instrument was used successfully to measure the color of six commercial varieties of Spanish granite.
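The characterization step this abstract describes amounts to fitting a transform from device-dependent camera RGB to a device-independent space using patches measured by both instruments. The plain affine least-squares map to CIELAB below is an illustrative simplification of such a procedure, not the published model.

import numpy as np

def fit_rgb_to_lab(rgb, lab):
    # Least-squares affine map: [R G B 1] -> [L* a* b*].
    A = np.hstack([rgb, np.ones((len(rgb), 1))])   # (N, 4) design matrix
    M, *_ = np.linalg.lstsq(A, lab, rcond=None)    # (4, 3) coefficient matrix
    return M

def apply_map(M, rgb):
    A = np.hstack([rgb, np.ones((len(rgb), 1))])
    return A @ M

def delta_e76(lab1, lab2):
    # CIE76 color difference, used to score the fit on held-out patches.
    return np.linalg.norm(np.asarray(lab1) - np.asarray(lab2), axis=1)

In practice the training pairs would be the camera's RGB readings of the Munsell samples against the spectrophotometer's CIELAB values, with the fit evaluated on samples held out of the training set.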
Abstract:
This paper presents a method for fast calculation of the egomotion performed by a robot using visual features. The method is part of a complete system for automatic map building and Simultaneous Localization and Mapping (SLAM). It uses optical flow to determine whether the robot has moved. If so, visual features that do not satisfy several criteria (such as intersection, uniqueness, etc.) are discarded, and the egomotion is then calculated. We use a state-of-the-art algorithm (TORO) to rectify the map and solve the SLAM problem. The proposed method is more efficient than other current methods.
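A movement test of the kind described here can be sketched with sparse optical flow: track features between consecutive frames and declare egomotion when the median displacement exceeds a threshold. The feature counts and threshold below are illustrative assumptions, not the paper's parameters.

import numpy as np
import cv2

def robot_moved(prev_gray, curr_gray, thresh_px=1.5):
    # Detect corners in the previous frame to track.
    pts = cv2.goodFeaturesToTrack(prev_gray, 300, 0.01, 7)
    if pts is None:
        return False
    # Track them into the current frame with pyramidal Lucas-Kanade flow.
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    ok = status.ravel() == 1
    if not ok.any():
        return False
    # Median displacement of successfully tracked features, in pixels.
    disp = np.linalg.norm((nxt - pts).reshape(-1, 2)[ok], axis=1)
    return float(np.median(disp)) > thresh_px

The median is a deliberate choice over the mean: it is robust to the few features that sit on independently moving objects rather than the static scene.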