12 results for accelerometri magnetometri scanner 3D Kinect

at Universidad de Alicante


Relevance:

30.00%

Publisher:

Abstract:

Paper submitted to the 43rd International Symposium on Robotics (ISR), Taipei, Taiwan, August 29-31, 2012.

Relevance:

30.00%

Publisher:

Abstract:

Several recent works deal with 3D data in mobile robotics problems, e.g., mapping. The data come from many kinds of sensors (time-of-flight cameras, Kinect or 3D lasers) that provide huge amounts of unorganized 3D data. In this paper we detail an efficient approach to building complete 3D models using a soft computing method, the Growing Neural Gas (GNG). Because neural models deal easily with noise, imprecision, uncertainty and partial data, GNG provides better results than other approaches. The resulting GNG is then applied to a sequence. We present a comprehensive study of GNG parameters to ensure the best result at the lowest time cost. From this GNG structure, we propose to compute planar patches, obtaining a fast method to estimate the movement performed by a mobile robot by means of a 3D model registration algorithm. Final 3D mapping results are also shown.
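As an illustration of the soft computing method this abstract names, the following is a minimal Growing Neural Gas sketch: it adapts a small graph of nodes to an unorganized 3D point cloud, the representation the mapping pipeline builds on. All parameter names and values here are illustrative defaults, not the paper's tuned configuration.

```python
import numpy as np

def gng(points, max_nodes=50, lam=100, eps_b=0.05, eps_n=0.006,
        a_max=50, alpha=0.5, d=0.995, n_iter=5000, seed=0):
    """Minimal Growing Neural Gas fitting a graph to a 3D point cloud."""
    rng = np.random.default_rng(seed)
    # start with two nodes sampled from the cloud
    nodes = [points[rng.integers(len(points))].astype(float) for _ in range(2)]
    error = [0.0, 0.0]
    edges = {}  # (i, j) with i < j -> edge age

    def key(i, j):
        return (min(i, j), max(i, j))

    for t in range(1, n_iter + 1):
        x = points[rng.integers(len(points))]
        # find the two nearest nodes to the input sample
        dists = [np.sum((x - w) ** 2) for w in nodes]
        s1, s2 = np.argsort(dists)[:2]
        error[s1] += dists[s1]
        # move the winner and its topological neighbours toward x
        nodes[s1] += eps_b * (x - nodes[s1])
        for (i, j) in list(edges):
            if s1 in (i, j):
                n = j if i == s1 else i
                nodes[n] += eps_n * (x - nodes[n])
                edges[(i, j)] += 1  # age the edges incident to the winner
        edges[key(s1, s2)] = 0      # connect / refresh the winner pair
        # drop edges that got too old
        edges = {e: a for e, a in edges.items() if a <= a_max}
        # insert a new node every lam steps, between the worst node and
        # its worst neighbour (standard GNG growth rule)
        if t % lam == 0 and len(nodes) < max_nodes:
            q = int(np.argmax(error))
            nbrs = [j if i == q else i for (i, j) in edges if q in (i, j)]
            if nbrs:
                f = max(nbrs, key=lambda n: error[n])
                nodes.append((nodes[q] + nodes[f]) / 2.0)
                r = len(nodes) - 1
                error[q] *= alpha
                error[f] *= alpha
                error.append(error[q])
                del edges[key(q, f)]
                edges[key(q, r)] = 0
                edges[key(f, r)] = 0
        error = [e * d for e in error]  # global error decay
    return np.array(nodes), list(edges)
```

The returned node positions and edge list form the reduced, topology-preserving model from which planar patches can then be extracted.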

Relevance:

30.00%

Publisher:

Abstract:

Nowadays, the use of RGB-D sensors has attracted a lot of research in computer vision and robotics. These kinds of sensors, like Kinect, allow 3D data to be obtained together with color information. However, their working range is limited to less than 10 meters, making them unsuitable for some robotics applications, such as outdoor mapping. In these environments, 3D lasers, working in ranges of 20-80 meters, are better suited. However, 3D lasers do not usually provide color information. A simple 2D camera can be used to add color information to the point cloud, but a calibration process between the camera and the laser must be performed first. In this paper we present a portable calibration system to calibrate any traditional camera with a 3D laser in order to assign color information to the 3D points obtained. Thus, we can exploit laser precision while simultaneously making use of color information. Unlike other techniques that use a three-dimensional body of known dimensions in the calibration process, this system is highly portable because it uses small catadioptrics that can be placed in the environment in a simple manner. We use our calibration system in a 3D mapping pipeline, including Simultaneous Localization and Mapping (SLAM), in order to obtain a colored 3D map that can be used in different tasks. We show that an additional problem arises: 2D camera information changes when lighting conditions change, so when we merge 3D point clouds from two different views, several points in a given neighborhood could carry different color information. A new method for color fusion is presented, producing correctly colored maps. The system is tested by applying it to 3D reconstruction.
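Once the camera-laser calibration this paper computes is available, assigning color to laser points reduces to a pinhole projection. A minimal sketch (the function name and interface are illustrative; the catadioptric calibration procedure itself is not reproduced here):

```python
import numpy as np

def colorize_points(points, image, K, R, t):
    """Assign RGB from a calibrated 2D camera to 3D laser points.

    points: (N, 3) laser points in the laser frame.
    K: 3x3 camera intrinsics; R, t: laser-to-camera extrinsics
    (the quantities a camera-laser calibration provides).
    Returns an (M, 6) array [x, y, z, r, g, b] for the points that
    project inside the image with positive depth.
    """
    cam = points @ R.T + t            # transform into the camera frame
    in_front = cam[:, 2] > 0          # keep points ahead of the camera
    cam, pts = cam[in_front], points[in_front]
    uv = cam @ K.T                    # pinhole projection
    uv = uv[:, :2] / uv[:, 2:3]       # perspective division
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    h, w = image.shape[:2]
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    colors = image[v[ok], u[ok]]      # sample pixel colors
    return np.hstack([pts[ok], colors.astype(float)])
```

Points outside the camera frustum simply keep no color, which is why the paper's color fusion step matters when several overlapping views are merged.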

Relevance:

30.00%

Publisher:

Abstract:

Paper submitted to the 43rd International Symposium on Robotics (ISR2012), Taipei, Taiwan, Aug. 29-31, 2012.

Relevance:

30.00%

Publisher:

Abstract:

New low-cost sensors and open, free libraries for 3D image processing are making important advances possible in robot vision applications, such as three-dimensional object recognition, semantic mapping, navigation and localization of robots, and human detection and/or gesture recognition for human-machine interaction. In this paper, a novel method for recognizing and tracking the fingers of a human hand is presented. The method is based on point clouds from range images captured by an RGB-D sensor. It works in real time and does not require visual markers, camera calibration or previous knowledge of the environment. Moreover, it works successfully even when multiple objects appear in the scene or when the ambient lighting changes. Furthermore, the method was designed to serve as a human interface for remotely controlling domestic or industrial devices. In this paper, the method was tested by operating a robotic hand. First, the human hand was recognized and the fingers were detected. Second, the movement of the fingers was analysed and mapped so that it could be imitated by a robotic hand.
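The paper does not publish its code, but a typical first step for marker-free hand detection from range images is isolating the region nearest to the sensor in the depth map. A minimal sketch of such a depth-band segmentation (the band width and the assumption that the hand is the closest object are illustrative, not necessarily the authors' exact pipeline):

```python
import numpy as np

def segment_nearest_region(depth, band=0.10):
    """Keep pixels within `band` meters of the closest valid depth.

    Assumes the hand is the object nearest to the RGB-D sensor and
    that invalid pixels are encoded as 0 (the Kinect convention).
    Returns a boolean mask over the depth image.
    """
    valid = depth > 0
    if not valid.any():
        return np.zeros_like(valid)   # nothing measured in this frame
    near = depth[valid].min()         # closest measured surface
    return valid & (depth < near + band)
```

The resulting mask would then feed the recognition and finger-detection stages the abstract describes.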

Relevance:

30.00%

Publisher:

Abstract:

The use of 3D data in mobile robotics provides valuable information about the robot’s environment. Traditionally, stereo cameras have been used as a low-cost 3D sensor, but their lack of precision and of texture on some surfaces suggests that other 3D sensors may be more suitable. In this work, we examine the use of two sensors: an infrared SR4000 camera and a Kinect camera. We combine the 3D data obtained by these cameras with features extracted from the 2D images they acquire, applying a Growing Neural Gas (GNG) network to the 3D data. The goal is to obtain a robust egomotion technique; the GNG network is used to reduce the sensor error. To calculate the egomotion, we test two 3D registration methods: one based on the iterative closest point (ICP) algorithm and another that employs random sample consensus (RANSAC). Finally, a simultaneous localization and mapping method is applied to the complete sequence to reduce the global error. The error from each sensor and the mapping results of the proposed method are examined.
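The first of the two registration methods mentioned, point-to-point ICP, can be sketched in a few lines: at each step, match every source point to its nearest destination point, then solve the best rigid transform in closed form via SVD. This is the generic Besl-McKay scheme, not the paper's specific implementation.

```python
import numpy as np

def icp(src, dst, iters=20):
    """Minimal point-to-point ICP. Returns (R, t) aligning src to dst."""
    R, t = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        # nearest-neighbour correspondences (brute force, fine for small clouds)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        nn = dst[d2.argmin(axis=1)]
        # closed-form rigid alignment of the matched pairs (Kabsch / SVD)
        mu_s, mu_d = cur.mean(0), nn.mean(0)
        H = (cur - mu_s).T @ (nn - mu_d)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
        Ri = Vt.T @ D @ U.T
        ti = mu_d - Ri @ mu_s
        cur = cur @ Ri.T + ti
        # compose the incremental transform into the running estimate
        R, t = Ri @ R, Ri @ t + ti
    return R, t
```

ICP converges to the nearest local minimum, which is why the paper pairs it with GNG-based noise reduction and a final SLAM pass over the whole sequence.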

Relevance:

30.00%

Publisher:

Abstract:

The use of 3D data in mobile robotics applications provides valuable information about the robot’s environment, but the huge amount of 3D information is usually unmanageable given a robot's storage and computing capabilities. Data compression is therefore necessary to store and manage this information while preserving as much of it as possible. In this paper, we propose a 3D lossy compression system based on plane extraction, which represents the points of each scene plane as a Delaunay triangulation plus a set of point/area information. The compression system can be tuned to achieve different compression or accuracy ratios. It also supports a color segmentation stage to preserve the original scene color information and provide a realistic scene reconstruction. The design of the method allows fast scene reconstruction, useful for further visualization or processing tasks.
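The plane extraction such a compressor is built on can be sketched with a standard RANSAC plane fit (a generic illustration; the abstract does not specify the exact extraction algorithm used):

```python
import numpy as np

def ransac_plane(points, iters=200, thresh=0.01, seed=0):
    """Illustrative RANSAC plane fit. Returns the plane (n, d) with
    n . p = d and the boolean inlier mask over the input points."""
    rng = np.random.default_rng(seed)
    best_n, best_d = None, None
    best_mask = np.zeros(len(points), bool)
    for _ in range(iters):
        # hypothesize a plane from three random points
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue                  # degenerate (collinear) sample
        n /= norm
        d = n @ p1
        # score by counting points within `thresh` of the plane
        mask = np.abs(points @ n - d) < thresh
        if mask.sum() > best_mask.sum():
            best_n, best_d, best_mask = n, d, mask
    return best_n, best_d, best_mask
```

Each extracted plane's inliers could then be replaced by a Delaunay triangulation of their 2D projection onto the plane, which is where the lossy size reduction comes from.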

Relevance:

30.00%

Publisher:

Abstract:

The complete characterization of rock masses implies acquiring information on both the materials that compose the rock mass and the discontinuities that divide the outcrop. Recent advances in remote sensing techniques, such as Light Detection and Ranging (LiDAR), allow the accurate and dense acquisition of 3D information that can be used for the characterization of discontinuities. This work presents a novel methodology for calculating the normal spacing of persistent and non-persistent discontinuity sets using 3D point cloud datasets, considering the three-dimensional relationships between clusters. The approach requires that the 3D dataset has been previously classified: discontinuity sets have been extracted, every point is labeled with its corresponding discontinuity set, and every exposed planar surface is analytically calculated. Then, for each discontinuity set, the method calculates the normal spacing between an exposed plane and its nearest neighbour, considering their 3D spatial relationship. This link between planes is obtained by finding, for every point, the nearest point belonging to the same discontinuity set, which identifies its nearest plane. This allows the normal spacing to be calculated for every plane; finally, the normal spacing of the set is computed as the mean of all these values. The methodology is validated through three case studies using synthetic data and 3D laser scanning datasets. The first case illustrates the fundamentals and performance of the proposed methodology. The second and third case studies correspond to two rock slopes for which datasets were acquired using a 3D laser scanner. The second case study shows that the results obtained from the traditional and the proposed approaches are reasonably similar.
Nevertheless, a discrepancy between the two approaches was found when the exposed planes belonging to a discontinuity set were hard to identify and when the pairing of planes was difficult to establish during the fieldwork campaign. The third case study also shows that when the number of identified exposed planes is high, the normal spacing calculated with the proposed approach is smaller than that obtained with the traditional approach.
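The spacing computation described above can be sketched compactly. This simplified version pairs planes by their centroid offsets along the set normal, rather than through the nearest-point-member search the paper uses, so it is an illustration of the idea rather than the published method:

```python
import numpy as np

def normal_spacing(centroids, set_normal):
    """Simplified normal-spacing estimate for one discontinuity set.

    centroids: (N, 3) centroids of the exposed planes (clusters)
    already assigned to the set (N >= 2); set_normal: the set's normal.
    Each plane's distance to its nearest neighbour is measured along
    the set normal, and the set spacing is the mean of those values.
    """
    n = np.asarray(set_normal, float)
    n = n / np.linalg.norm(n)
    # signed offset of each plane along the set normal
    offsets = np.asarray(centroids, float) @ n
    spacings = []
    for i, oi in enumerate(offsets):
        others = np.delete(offsets, i)
        spacings.append(np.abs(others - oi).min())  # nearest plane along n
    return float(np.mean(spacings))
```

For three parallel planes at offsets 0, 1 and 2 along the normal, each plane's nearest neighbour lies one unit away, so the set spacing is 1.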

Relevance:

30.00%

Publisher:

Abstract:

In this project, we propose the implementation of a 3D object recognition system optimized to operate under demanding time constraints. The system must be robust so that objects can be recognized properly in poor lighting conditions and in cluttered scenes with significant levels of occlusion. An important requirement must be met: the system must exhibit reasonable performance running on a low-power mobile GPU computing platform (NVIDIA Jetson TK1), so that it can be integrated into mobile robotics systems, ambient intelligence or ambient assisted living applications. The acquisition system is based on the color and depth (RGB-D) data streams provided by low-cost 3D sensors such as Microsoft Kinect or PrimeSense Carmine. The range of algorithms and applications to be implemented and integrated is quite broad, spanning acquisition, outlier removal and filtering of the input data, segmentation and characterization of regions of interest in the scene, and the object recognition and pose estimation itself. Furthermore, in order to validate the proposed system, we will create a 3D object dataset. It will be composed of a set of 3D models, reconstructed from common household objects, as well as a handful of test scenes in which those objects appear. The scenes will be characterized by different levels of occlusion, diverse distances from the elements to the sensor, and variations in the pose of the target objects. The creation of this dataset implies the additional development of 3D data acquisition and 3D object reconstruction applications. The resulting system has many possible applications, ranging from mobile robot navigation and semantic scene labeling to human-computer interaction (HCI) systems based on visual information.

Relevance:

30.00%

Publisher:

Abstract:

This work studies the use of 3D point clouds, i.e., sets of points in a Cartesian reference system in R3, for the identification and characterization of the discontinuities exposed in a rock mass, and their application to the field of Rock Mechanics. The point clouds used were acquired by three techniques: synthetic generation, 3D laser scanning, and the digital photogrammetry technique Structure From Motion (SfM). The approach is oriented toward the extraction and characterization of discontinuity sets and their application to the quality assessment of a rock slope by means of the Slope Mass Rating (SMR) geomechanical classification. The content is divided into three blocks: (1) methodology for discontinuity extraction and classification of the 3D point cloud; (2) analysis of normal spacings in 3D point clouds; and (3) analysis of the geomechanical quality assessment of rock slopes by means of the SMR classification from 3D point clouds. The first research line consists of studying 3D point clouds with the aim of extracting and characterizing the planar discontinuities present on the surface of a rock mass. First, information on existing methodologies and the availability of programs for their study was compiled. This motivated the decision to investigate and design a novel classification process, documenting all the steps of its implementation and even offering the programmed code to the scientific community under a GNU GPL license. In this way, a novel methodology was designed and a software tool was programmed that analyses 3D point clouds semi-automatically, allowing the user to interact with the classification process. This software is called Discontinuity Set Extractor (DSE). The method was validated using synthetic point clouds and point clouds acquired with a 3D laser scanner.
First, the code analyses the point cloud by performing a coplanarity test for each point and its nearest neighbours, and then computes the normal vector of the surface at the studied point. Second, the poles of the normal vectors computed in the previous step are plotted on a stereographic projection. The density of the poles is then computed, along with the highest-density poles, or principal poles. These indicate the most represented surface orientations and therefore the discontinuity sets. Third, each point is assigned to a set depending on the angle between the point's normal vector and that of the set. At this stage the user can visualize the point cloud classified into the determined discontinuity sets to validate the intermediate result. Fourth, a cluster analysis is performed to group the points of each set into planes (clusters). Clusters without a sufficient number of points are then filtered out, and the equation of each plane is determined. Finally, the classification results are exported to a text file for analysis and representation in other programs. The second research line consists of studying the spacing between planar discontinuities exposed in rock masses from 3D point clouds. A methodology was developed to compute spacings from previously classified 3D point clouds in order to determine the spatial relationships between the planes of each set and calculate the normal spacing. The novel foundation of the proposed method is to determine the normal spacing of a set based on the same principles used in the field, but without the spatial restrictions, unsafe conditions and difficulties inherent to that process.
Two aspects of the discontinuities were considered: finite or infinite persistence, the former being the most novel aspect of this work. The development and application of the method to several case studies made it possible to determine its scope of application. Validation was carried out with synthetic point clouds and point clouds acquired with a 3D laser scanner. The third research line consists of analysing the application of the information obtained from 3D point clouds to the quality assessment of a rock slope by means of the SMR geomechanical classification. The analysis focused on the influence of using orientations determined from different sources of information (field data and remote acquisition techniques) on the determination of the adjustment factors and the value of the SMR index. The results of this analysis show that the use of widely accepted information sources and techniques can produce changes in the quality assessment of a rock slope of up to one geomechanical class (i.e., 20 units). Likewise, the analyses carried out confirmed the validity of the SMR index for mapping unstable zones of a slope. The methods and software developed represent an important scientific advance in the use of 3D point clouds for: (1) the study and characterization of rock mass discontinuities, and (2) their application to the quality assessment of rock slopes by means of geomechanical classifications. Moreover, the conclusions obtained and the means and methods employed in this doctoral thesis can be verified and reused by other researchers, as they are available on the author's website under a GNU GPL license.
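The first stage described above (a coplanarity test and per-point normal estimation) is commonly implemented as a local PCA. A minimal sketch (the neighbourhood size and the brute-force neighbour search are illustrative; DSE's actual implementation may differ):

```python
import numpy as np

def point_normals(points, k=12):
    """Per-point normal estimation by local PCA: fit a plane to each
    point's k nearest neighbours and take the eigenvector of the
    smallest eigenvalue of the local covariance as the surface normal.
    Also returns a flatness score (~0 when the patch is coplanar)."""
    # brute-force pairwise distances; fine for small demonstration clouds
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    knn = d2.argsort(axis=1)[:, :k]           # k nearest (includes self)
    normals = np.empty_like(points)
    flatness = np.empty(len(points))
    for i, idx in enumerate(knn):
        nb = points[idx] - points[idx].mean(0)
        w, v = np.linalg.eigh(nb.T @ nb)      # eigenvalues ascending
        normals[i] = v[:, 0]                  # smallest-variance direction
        flatness[i] = w[0] / max(w.sum(), 1e-12)
    return normals, flatness
```

The resulting normals are what get plotted as poles on the stereographic projection, and the flatness score plays the role of the coplanarity test.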

Relevance:

30.00%

Publisher:

Abstract:

Rock mass classification systems are widely used tools for assessing the stability of rock slopes. Their calculation requires the prior quantification of several parameters during conventional fieldwork campaigns, such as the orientation of the discontinuity sets, the main properties of the existing discontinuities and the geomechanical characterization of the intact rock mass, which can be time-consuming and often risky. Conversely, the use of relatively new remote sensing data for modelling the rock mass surface by means of 3D point clouds is changing current investigation strategies in different rock slope engineering applications. In this paper, the main practical issues affecting the application of the Slope Mass Rating (SMR) to the characterization of rock slopes from 3D point clouds are reviewed, using three case studies from an end-user point of view. To this end, the SMR adjustment factors, calculated from different sources of information and processes using different software packages, are compared with those calculated using conventional fieldwork data. In the presented analysis, special attention is paid to the differences between the SMR indexes derived from the 3D point cloud and conventional fieldwork approaches, the main factors that determine the quality of the data, and some recognized practical issues. Finally, the reliability of the Slope Mass Rating for the characterization of rock slopes is highlighted.
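For reference, the SMR index compared in this paper follows Romana's formulation, which combines the basic Rock Mass Rating with four adjustment factors (stated here from the standard definition; the paper's case-study values are not reproduced):

```python
def smr(rmr_basic, f1, f2, f3, f4):
    """Slope Mass Rating (Romana): SMR = RMR_b + (F1 * F2 * F3) + F4.

    F1 (discontinuity/slope parallelism), F2 (discontinuity dip) and
    F3 (dip relationship) adjust for the geometry of the discontinuity
    relative to the slope; F4 accounts for the excavation method.
    F3 is non-positive, so the geometric term penalizes the rating.
    """
    return rmr_basic + f1 * f2 * f3 + f4
```

The adjustment factors F1-F3 are exactly the quantities the paper derives from 3D point cloud orientations versus fieldwork measurements; a one-class change in SMR corresponds to 20 units of this index.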

Relevance:

30.00%

Publisher:

Abstract:

Nowadays, the new generation of computers provides performance high enough to build computationally expensive computer vision applications for mobile robotics. Building a map of the environment is a common robot task and an essential requirement for robots to move through their environments. Traditionally, mobile robots have used a combination of sensors based on different technologies: lasers, sonars and contact sensors have typically appeared in mobile robotic architectures. Color cameras, however, are an important sensor because we want robots to use the same information humans use to sense and move through different environments. Color cameras are cheap and flexible, but a lot of work is needed to give robots sufficient visual understanding of scenes. Computer vision algorithms are computationally complex, but nowadays robots have access to different and powerful architectures that can be used for mobile robotics purposes. The advent of low-cost RGB-D sensors like Microsoft Kinect, which provide 3D colored point clouds at high frame rates, has made computer vision even more relevant to the mobile robotics field. The combination of visual and 3D data allows systems to use both computer vision and 3D processing and therefore to be aware of more details of the surrounding environment. The research described in this thesis was motivated by the need for scene mapping. Being aware of the surrounding environment is a key feature in many mobile robotics applications, from simple robotic navigation to complex surveillance applications. In addition, acquiring a 3D model of a scene is useful in many areas, such as video game scene modeling, where well-known places are reconstructed and added to game systems, or advertising, where, once the 3D model of a room is available, the system can add furniture pieces using augmented reality techniques.
In this thesis we perform an experimental study of state-of-the-art registration methods to find which one best fits our scene mapping purposes. Different methods are tested and analyzed on scenes with different distributions of visual and geometric appearance. In addition, this thesis proposes two methods for 3D data compression and representation of 3D maps. Our 3D representation proposal is based on the Growing Neural Gas (GNG) method. This Self-Organizing Map (SOM) has been successfully used for clustering, pattern recognition and topology representation of various kinds of data. Until now, Self-Organizing Maps have been computed primarily offline, and their application to 3D data has mainly focused on noise-free models without considering time constraints. Self-organizing neural models have the ability to provide a good representation of the input space. In particular, the Growing Neural Gas (GNG) is a suitable model because of its flexibility, rapid adaptation and excellent quality of representation. However, this type of learning is time-consuming, especially for high-dimensional input data. Since real applications often work under time constraints, it is necessary to adapt the learning process so that it completes within a predefined time. This thesis proposes a hardware implementation leveraging the computing power of modern GPUs, taking advantage of the paradigm known as General-Purpose Computing on Graphics Processing Units (GPGPU). Our proposed geometric 3D compression method seeks to reduce the 3D information using plane detection as the basic structure for compressing the data. This is because our target environments are man-made, so many points belong to planar surfaces. Our method achieves good compression results in such man-made scenarios. The detected and compressed planes can also be used in other applications, such as surface reconstruction or plane-based registration algorithms.
Finally, we have also demonstrated the value of GPU technologies by obtaining a high-performance implementation of a common CAD/CAM technique called Virtual Digitizing.