4 results for Calibration data

at Universidad de Alicante


Relevance:

60.00%

Publisher:

Abstract:

Nowadays it is common to study global biodiversity patterns using the predictions generated by different ecological niche models. These models are usually calibrated with data taken from open-access databases (e.g. GBIF). However, despite how easy these data are to download and access, the stored information about the localities where species are present often contains biases and errors. Such problems in the calibration data can drastically alter the models' predictions and thereby mask the real macroecological patterns. The aim of this work is to investigate which methods produce more accurate results when the calibration data include biases, and which produce better results when the calibration data contain errors in addition to biases. To this end, we created a virtual species, projected its distribution onto the Iberian Peninsula, sampled its distribution in a biased way, and calibrated two types of distribution models (Bioclim and Maxent) with samples of different sizes. Our results indicate that when the data are only biased, Bioclim performs better than Maxent. However, Bioclim is extremely sensitive to the presence of errors in the calibration data. In those situations, Maxent behaves much more robustly and its predictions are more accurate.
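The sensitivity of Bioclim to erroneous records follows from its rectilinear-envelope definition: presence is predicted wherever every environmental variable lies between the minimum and maximum observed in the calibration records, so a single misidentified locality can stretch the envelope and inflate the predicted area. The sketch below illustrates this effect under purely hypothetical data (a random environmental grid and made-up samples); it is not the study's actual pipeline or variables.

```python
import numpy as np

def bioclim_envelope(calibration_env):
    """Rectilinear envelope: per-variable min/max across the calibration
    records (rows = presence records, columns = environmental variables)."""
    return calibration_env.min(axis=0), calibration_env.max(axis=0)

def predict(env, lower, upper):
    """A cell is predicted as presence only if every variable falls inside the envelope."""
    return np.all((env >= lower) & (env <= upper), axis=1)

# Hypothetical environmental grid (e.g. temperature, precipitation) and samples.
rng = np.random.default_rng(0)
grid_env = rng.normal(0.0, 1.0, size=(10000, 2))        # candidate cells
biased_sample = rng.normal(0.0, 0.3, size=(50, 2))      # biased but correct records
with_error = np.vstack([biased_sample, [[4.0, 4.0]]])   # one erroneous record added

for name, sample in [("biased only", biased_sample), ("biased + error", with_error)]:
    lo, hi = bioclim_envelope(sample)
    fraction = predict(grid_env, lo, hi).mean()
    print(f"{name}: predicted fraction of cells = {fraction:.3f}")
```

Running the sketch, the single outlying record widens the envelope and markedly increases the fraction of cells predicted as presence, which is the behaviour the abstract describes.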

Relevance:

30.00%

Publisher:

Abstract:

Image Based Visual Servoing (IBVS) is a robotic control scheme based on vision. This scheme uses only the visual information obtained from a camera to guide a robot from any pose to a desired one. However, IBVS requires the estimation of several parameters that cannot be obtained directly from the image. These range from the intrinsic camera parameters (which can be obtained from a previous camera calibration) to the distance along the optical axis between the camera and the visual features, i.e. the depth. This paper presents a comparative study of the performance of D-IBVS when the depth is estimated in three different ways using a low-cost RGB-D sensor such as the Kinect. The visual servoing system has been developed over ROS (Robot Operating System), which is a meta-operating system for robots. The experiments show that computing the depth value for each visual feature improves the system performance.
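For context, the depth enters IBVS through the interaction matrix that relates the camera velocity twist to the motion of the image features. The sketch below shows the classical point-feature formulation in plain numpy, assuming normalized image coordinates and one depth estimate per feature (e.g. read from the Kinect depth map); it is only an illustration of the standard control law, not the paper's ROS implementation.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Classical interaction matrix (image Jacobian) for one normalized
    image point (x, y) at depth Z along the optical axis."""
    return np.array([
        [-1.0/Z,  0.0,    x/Z, x*y,        -(1.0 + x*x),  y],
        [ 0.0,   -1.0/Z,  y/Z, 1.0 + y*y,  -x*y,         -x],
    ])

def ibvs_velocity(features, desired, depths, lam=0.5):
    """Camera velocity twist v = -lambda * L^+ * (s - s*),
    with one depth estimate Z per visual feature."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -lam * np.linalg.pinv(L) @ error

# Hypothetical example with four point features.
s      = [(0.10, 0.05), (-0.12, 0.04), (0.08, -0.11), (-0.09, -0.06)]
s_star = [(0.00, 0.10), (-0.10, 0.10), (0.00, -0.10), (-0.10, -0.10)]
Z      = [1.2, 1.1, 1.3, 1.25]   # per-feature depths from the RGB-D sensor
print(ibvs_velocity(s, s_star, Z))
```

Because each row block of the matrix scales with 1/Z, the accuracy of the per-feature depth directly affects the computed velocity, which is why the choice of depth estimation method matters for performance.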

Relevance:

30.00%

Publisher:

Abstract:

Nowadays, RGB-D sensors are the focus of a great deal of research in computer vision and robotics. These sensors, such as the Kinect, provide 3D data together with color information. However, their working range is limited to less than 10 meters, making them useless in some robotics applications, such as outdoor mapping. In these environments, 3D lasers, which work at ranges of 20-80 meters, are better suited, but they do not usually provide color information. A simple 2D camera can be used to add color information to the point cloud, but a calibration process between camera and laser must be carried out. In this paper we present a portable calibration system to calibrate any traditional camera with a 3D laser in order to assign color information to the 3D points obtained. Thus, we can exploit the precision of the laser while simultaneously making use of color information. Unlike other techniques that rely on a three-dimensional body of known dimensions in the calibration process, this system is highly portable because it uses small catadioptrics that can be placed easily in the environment. We use our calibration system in a 3D mapping system, including Simultaneous Localization and Mapping (SLAM), in order to obtain a 3D colored map that can be used in different tasks. We show that an additional problem arises: the information captured by 2D cameras changes when lighting conditions change, so when we merge 3D point clouds from two different views, points in a given neighborhood may have different color information. A new method for color fusion is presented, which yields correctly colored maps. The system will be tested by applying it to 3D reconstruction.
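As an illustration of the coloring step that the calibration enables, the sketch below projects each laser point into the image using estimated intrinsics K and a laser-to-camera extrinsic transform (R, t), and samples the pixel color at the projected location. The function and variable names are hypothetical and this is not the authors' implementation; it only shows the standard pinhole projection under the assumption that the calibration has already been performed.

```python
import numpy as np

def colorize_point_cloud(points_laser, image, K, R, t):
    """Assign a color to each 3D laser point by projecting it into a
    calibrated 2D camera image.  K is the 3x3 intrinsic matrix; (R, t)
    is the laser-to-camera extrinsic transform from the calibration."""
    colors = np.zeros((len(points_laser), 3), dtype=np.uint8)
    valid = np.zeros(len(points_laser), dtype=bool)
    h, w = image.shape[:2]
    for i, p in enumerate(points_laser):
        pc = R @ p + t                        # laser frame -> camera frame
        if pc[2] <= 0:                        # point behind the camera
            continue
        uh, vh, s = K @ pc                    # homogeneous pixel coordinates
        u, v = int(round(uh / s)), int(round(vh / s))
        if 0 <= u < w and 0 <= v < h:
            colors[i] = image[v, u]           # sample the pixel color
            valid[i] = True
    return colors, valid
```

Points that fall outside the camera frustum keep no color, which is one reason a fusion step is needed when several views with different lighting are merged into a single map.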

Relevance:

30.00%

Publisher:

Abstract:

Paper submitted to the 43rd International Symposium on Robotics (ISR2012), Taipei, Taiwan, Aug. 29-31, 2012.