867 results for Collision avoidance, Human robot cooperation, Mobile robot sensor placement
Abstract:
This final-year project is part of a larger project. It involves the construction of an autonomous mobile robot capable of receiving information from its surrounding environment and of collecting stationary balls from a tennis court.
Abstract:
In this paper we present a fast and precise method to estimate the planar motion of a lidar from consecutive range scans. For every scanned point we formulate the range flow constraint equation in terms of the sensor velocity, and minimize a robust function of the resulting geometric constraints to obtain the motion estimate. In contrast to traditional approaches, this method does not search for correspondences but performs dense scan alignment based on the scan gradients, in the fashion of dense 3D visual odometry. The minimization problem is solved in a coarse-to-fine scheme to cope with large displacements, and a smoothing filter based on the covariance of the estimate is employed to handle uncertainty in under-constrained scenarios (e.g. corridors). Simulated and real experiments have been performed to compare our approach with two prominent scan matchers and with wheel odometry. Quantitative and qualitative results demonstrate the superior performance of our approach, which, together with its very low computational cost (0.9 milliseconds on a single CPU core), makes it suitable for robotic applications that require planar odometry. To this end, we also provide the code so that the robotics community can benefit from it.
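To make the range flow formulation concrete, the following minimal sketch sets up one linear constraint per scanned point and solves for the planar sensor velocity (vx, vy, w) by ordinary least squares. It is only an illustrative, single-resolution version under simplifying assumptions (static scene, small motion, no robust weighting, no coarse-to-fine refinement, no covariance-based filtering); the function name and the use of NumPy are ours, not part of the published code.

```python
import numpy as np

def planar_velocity_from_scans(r0, r1, angles, dt):
    """Illustrative least-squares solve of the range flow constraint.

    r0, r1 : consecutive range scans (same length), angles : scan angles [rad],
    dt     : time between scans. Returns (vx, vy, w) in the sensor frame.
    """
    r = 0.5 * (r0 + r1)                  # range at the temporal midpoint
    r_t = (r1 - r0) / dt                 # temporal range derivative
    r_a = np.gradient(r, angles)         # range gradient w.r.t. scan angle

    x, y = r * np.cos(angles), r * np.sin(angles)

    # For a static scene the apparent point velocity in the sensor frame is
    #   xdot = -vx + w*y,  ydot = -vy - w*x,
    # and the range flow constraint  r_t + r_a*alphadot - rdot = 0 becomes,
    # after substituting rdot and alphadot, one linear equation per point.
    A = np.column_stack((x / r + r_a * y / r**2,
                         y / r - r_a * x / r**2,
                         -r_a))
    b = -r_t

    valid = np.isfinite(b) & np.isfinite(A).all(axis=1) & (r > 0)
    sol, *_ = np.linalg.lstsq(A[valid], b[valid], rcond=None)
    return sol                           # [vx, vy, w]
```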
Abstract:
Most approaches to stereo visual odometry reconstruct the motion based on the tracking of point features along a sequence of images. However, in low-textured scenes it is often difficult to find a large set of point features, or they may be poorly distributed over the image, so that the behavior of these algorithms deteriorates. This paper proposes a probabilistic approach to stereo visual odometry based on the combination of both point and line segment features that works robustly in a wide variety of scenarios. The camera motion is recovered through non-linear minimization of the projection errors of both point and line segment features. In order to combine the two types of features effectively, their associated errors are weighted according to their covariance matrices, computed by propagating Gaussian error distributions from the sensor measurements. The method is, of course, computationally more expensive than using only one type of feature, but it can still run in real time on a standard computer and provides interesting advantages, including a straightforward integration into any probabilistic framework commonly employed in mobile robotics.
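As an illustration of how the covariance weighting can work, the sketch below stacks point and line-segment reprojection residuals into a single Gauss-Newton step, whitening each residual by its inverse covariance. The function name and the assumed shapes of the residuals and Jacobians are hypothetical; this is not the paper's implementation, only the weighting idea.

```python
import numpy as np

def weighted_gauss_newton_step(residuals, jacobians, covariances):
    """One Gauss-Newton step over a mixed set of point and line features.

    residuals   : list of (m_i,) reprojection error vectors
    jacobians   : list of (m_i, 6) Jacobians w.r.t. the 6-DoF camera motion
    covariances : list of (m_i, m_i) covariance matrices of each residual
    Returns the 6-vector motion update minimizing the weighted squared error.
    """
    H = np.zeros((6, 6))   # approximate Hessian: sum_i J_i^T W_i J_i
    g = np.zeros(6)        # gradient:            sum_i J_i^T W_i r_i
    for r, J, S in zip(residuals, jacobians, covariances):
        W = np.linalg.inv(S)          # information matrix (inverse covariance)
        H += J.T @ W @ J
        g += J.T @ W @ r
    return -np.linalg.solve(H, g)     # motion increment (e.g. a twist vector)
```

In an actual pipeline this step would be iterated, recomputing the residuals and Jacobians from the point and line parameterizations after every update.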
Abstract:
The technological and information transformations that society has been experiencing, especially over the last decade, are producing exponential growth of data in every sphere of society. The data generated in these different spheres are primary elements of information that, by themselves, are irrelevant as support for decision making. For these data to be useful in any decision process, they must be converted into information, that is, into a set of processed data with meaning, which helps to create knowledge. These processes of transforming data into information comprise several phases, such as locating the information sources, capture, analysis and measurement. This technological and social change has increased the number of information sources, so that any person, company or organization can generate information that may be relevant to the business of companies or governments. Locating these sources, identifying relevant information within each source and storing the information they generate, which may come in different formats, is the first step of the whole process described above, and it must be carried out correctly, since the remaining phases depend on the sources and data collected. To identify relevant information in the sources, so-called search robots (crawlers) have been created, which automatically examine an information source, locating and collecting data that may be of interest. In this work, a knowledge robot is designed and implemented, together with online information capture systems for hypertext sources and social networks.
Abstract:
The 21st Latin American Congress on Entrepreneurship (Congreso Latinoamericano sobre Espíritu Empresarial) was held at Universidad Icesi on April 6, 7 and 8, 2011.
Abstract:
A combined Short-Term Learning (STL) and Long-Term Learning (LTL) approach to solving mobile robot navigation problems is presented and tested in both real and simulated environments. The LTL consists of rapid simulations that use a Genetic Algorithm to derive diverse sets of behaviours. These sets are then transferred to an idiotypic Artificial Immune System (AIS), which forms the STL phase, and the system is said to be seeded. The combined LTL-STL approach is compared with using STL only, and with using a hand-designed controller. In addition, the STL phase is tested when the idiotypic mechanism is turned off. The results provide substantial evidence that the best option is the seeded idiotypic system, i.e. the architecture that merges LTL with an idiotypic AIS for the STL. They also show that structurally different environments can be used for the two phases without compromising transferability.
Abstract:
This thesis addresses the problem of the navigability of mobile robots on irregular terrain, which presents varying slopes and a variety of obstacles. This topic is currently an active line of research aimed at developing new robots and, in addition, at developing navigation strategies that are efficient and carry minimal risk of disabling the robot. First, the mobile robot Lázaro was developed to navigate this type of terrain; it has an articulated arm with a wheel as its end effector. This wheel allows the arm to maintain an additional contact point with the ground, which can help the robot compensate for situations of instability and overcome some of the obstacles that may appear in these environments. Subsequently, three quantitative measures were developed to evaluate the navigability of any mobile robot travelling over irregular terrain: a stability index, which evaluates the propensity to tip over; a steering index, which evaluates the robot's ability to steer and follow a given trajectory; and, finally, a slip index, which evaluates the robot's propensity to slide downhill when moving on inclined surfaces. Finally, a set of maneuvers that Lázaro can execute was defined, aimed at guaranteeing navigation when the robot moves on inclined surfaces or must overcome obstacles such as steps, ramps or ditches. All of the strategies designed are based on using the arm as an additional tool that the robot has to improve its navigability.
Abstract:
Nowadays, the new generation of computers provides high performance that makes it possible to build computationally expensive computer vision applications for mobile robotics. Building a map of the environment is a common task for a robot and an essential part of allowing robots to move through these environments. Traditionally, mobile robots have used a combination of several sensors from different technologies. Lasers, sonars and contact sensors have typically been used in mobile robotic architectures; however, color cameras are an important sensor because we want robots to use the same information that humans use to sense and move through different environments. Color cameras are cheap and flexible, but a lot of work needs to be done to give robots enough visual understanding of scenes. Computer vision algorithms are computationally complex, but nowadays robots have access to different and powerful architectures that can be used for mobile robotics purposes. The advent of low-cost RGB-D sensors like Microsoft Kinect, which provide 3D colored point clouds at high frame rates, has made computer vision even more relevant in the mobile robotics field. The combination of visual and 3D data allows systems to use both computer vision and 3D processing and therefore to be aware of more details of the surrounding environment. The research described in this thesis was motivated by the need for scene mapping. Being aware of the surrounding environment is a key feature in many mobile robotics applications, from simple robotic navigation to complex surveillance applications. In addition, acquiring a 3D model of a scene is useful in many areas, such as video game scene modeling, where well-known places are reconstructed and added to game systems, or advertising, where once the 3D model of a room is available the system can add furniture pieces using augmented reality techniques. In this thesis we perform an experimental study of state-of-the-art registration methods to find which one best fits our scene mapping purposes. Different methods are tested and analyzed on scenes with different distributions of visual and geometric appearance. In addition, this thesis proposes two methods for 3D data compression and representation of 3D maps. Our 3D representation proposal is based on the Growing Neural Gas (GNG) method. This Self-Organizing Map (SOM) has been successfully used for clustering, pattern recognition and topology representation of various kinds of data. Until now, Self-Organizing Maps have been computed primarily offline, and their application to 3D data has mainly focused on noise-free models without considering time constraints. Self-organizing neural models have the ability to provide a good representation of the input space. In particular, the Growing Neural Gas (GNG) is a suitable model because of its flexibility, rapid adaptation and excellent quality of representation. However, this type of learning is time consuming, especially for high-dimensional input data. Since real applications often work under time constraints, it is necessary to adapt the learning process so that it completes within a predefined time. This thesis proposes a hardware implementation that leverages the computing power of modern GPUs, taking advantage of the paradigm known as General-Purpose Computing on Graphics Processing Units (GPGPU). Our proposed geometric 3D compression method seeks to reduce the 3D information using plane detection as the basic structure for compressing the data. This is because our target environments are man-made and therefore contain many points that belong to planar surfaces. Our proposed method achieves good compression results in such man-made scenarios. The detected and compressed planes can also be used in other applications, such as surface reconstruction or plane-based registration algorithms. Finally, we have also demonstrated the benefits of GPU technologies by obtaining a high-performance implementation of a common CAD/CAM technique called virtual digitizing.
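The plane-based compression idea can be illustrated with a simple RANSAC-style plane fit: points that fall on a detected plane can be replaced by the plane parameters plus their 2D coordinates within the plane, which is where the savings in man-made scenes come from. The sketch below extracts only one dominant plane and is our own minimal example under assumed parameter names and thresholds; the thesis implementation additionally involves GNG-based representation and GPU acceleration.

```python
import numpy as np

def dominant_plane(points, n_iters=200, inlier_tol=0.02, rng=None):
    """RANSAC-style search for the plane supported by the most points.

    points : (N, 3) array of 3D points (e.g. from an RGB-D sensor).
    Returns (normal, d, inlier_mask) for the plane defined by n.x + d = 0.
    """
    rng = np.random.default_rng(rng)
    best_mask, best_plane = None, None
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:          # degenerate (collinear) sample
            continue
        n = n / np.linalg.norm(n)
        d = -n @ p0
        mask = np.abs(points @ n + d) < inlier_tol
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_plane = mask, (n, d)
    return (*best_plane, best_mask)

# Inlier points can then be stored as 4 plane parameters plus per-point 2D
# in-plane coordinates instead of full 3D coordinates, reducing the data size.
```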
Abstract:
The development of robots has shown itself to be a very complex interdisciplinary research field. The predominant procedure for these developments in recent decades has been based on the assumption that each robot is a fully personalized project, with hardware and software technologies embedded directly in robot parts with no level of abstraction. Although this methodology has brought countless benefits to robotics research, it has, on the other hand, imposed major drawbacks: (i) the difficulty of reusing hardware and software parts in new robots or new versions; (ii) the difficulty of comparing the performance of different robot parts; and (iii) the difficulty of adapting development needs, at both the hardware and software levels, to the expertise of local groups. Large advances might be achieved, for example, if physical parts of a robot could be reused in a different robot constructed with other technologies by another researcher or group. This paper proposes a framework for robots, TORP (The Open Robot Project), that aims to put forward a standardization of all dimensions (electrical, mechanical and computational) of a shared robot development model. This architecture is based on the dissociation between the robot and its parts, and between the robot parts and their technologies. In this paper, the first specification for a TORP family and the first humanoid robot constructed following the TORP specification set are presented, as well as the advances proposed for their improvement.
Abstract:
Previous work has shown that robot navigation systems that employ an architecture based upon the idiotypic network theory of the immune system have an advantage over control techniques that rely on reinforcement learning only. This is thought to be a result of intelligent behaviour selection on the part of the idiotypic robot. In this paper an attempt is made to imitate idiotypic dynamics by creating controllers that use reinforcement with a number of different probabilistic schemes to select robot behaviour. The aims are to show that the idiotypic system is not merely performing some kind of periodic random behaviour selection, and to try to gain further insight into the processes that govern the idiotypic mechanism. Trials are carried out using simulated Pioneer robots that undertake navigation exercises. Results show that a scheme that boosts the probability of selecting highly-ranked alternative behaviours to 50% during stall conditions comes closest to achieving the properties of the idiotypic system, but remains unable to match it in terms of all-round performance.
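The best-performing alternative scheme lends itself to a short procedural sketch: behaviours are normally chosen greedily from their reinforcement scores, but during a stall the probability of picking a highly-ranked alternative is raised to 50%. The sketch below is only our reading of that description, with hypothetical names and an assumed number of alternatives, not the authors' controller.

```python
import random

def select_behaviour(scores, stalled, boost=0.5):
    """Pick a behaviour from reinforcement scores.

    scores  : dict mapping behaviour name -> reinforcement score
    stalled : True if the robot is currently in a stall condition
    boost   : probability of choosing a highly-ranked alternative
              (rather than the current best) while stalled.
    """
    ranked = sorted(scores, key=scores.get, reverse=True)
    best, alternatives = ranked[0], ranked[1:3]   # top alternatives to the best
    if stalled and alternatives and random.random() < boost:
        return random.choice(alternatives)        # try to escape the stall
    return best                                   # otherwise exploit the best
```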
Abstract:
This final-year project deals with the implementation of a kinematic simulator of an industrial manipulator robot, aimed at teaching programming principles and developed with the mathematical software tool MATLAB. The simulator's main requirements are to emulate the programming features found in robot-level languages and to be easily accessible to engineering students. The simulator will also be able to define the objects that make up the physical environment surrounding the robot, in order to simulate the kinematic interaction of the manipulator arm with that environment. To this end, a study of robot-level languages is first carried out, in this specific case V+, in order to compile a catalogue of relevant functions and structures, in particular data structures, robot functions, etc. From this, the specifications that the kinematic simulator must meet are drawn up. Finally, a set of learning-oriented exercises is carried out on the simulator and its user manuals are prepared.
Abstract:
Tactile sensing is an important aspect of robotic systems and enables safe, dexterous robot-environment interaction. The design and implementation of tactile sensors on robots has been a topic of research over the past 30 years, and current challenges include mechanically flexible “sensing skins”, high dynamic range (DR) sensing (i.e., high force range and fine force resolution), multi-axis sensing, and integration between the sensors and the robot. This dissertation focuses on addressing some of these challenges through a novel manufacturing process that incorporates conductive and dielectric elastomers in a reusable, multi-length-scale mold, and new sensor designs for multi-axis sensing that improve force range without sacrificing resolution. A single taxel was integrated into a 1-degree-of-freedom robotic gripper for closed-loop slip detection. Manufacturing involved casting a composite silicone rubber, polydimethylsiloxane (PDMS) filled with conductive particles such as carbon nanotubes, into a mold to produce microscale flexible features on the order of tens of microns. Molds were produced via microfabrication of silicon wafers, but were limited in sensing area and were costly. An improved technique was developed that produced molds of acrylic using a computer numerical controlled (CNC) milling machine. This maintained the ability to produce microscale features and increased the sensing area while reducing costs. New sensing skins had features as small as 20 microns over an area as large as a human hand. Sensor architectures capable of sensing both shear and normal forces with high dynamic range were produced. Using this architecture, two sensing modalities were developed: a capacitive approach and a contact-resistive approach. The capacitive approach demonstrated better dynamic range, while the contact-resistive approach used simpler circuitry. Using the contact-resistive approach, normal force range and resolution were 8,000 mN and 1,000 mN, respectively, and shear force range and resolution were 450 mN and 100 mN, respectively. Using the capacitive approach, normal force range and resolution were 10,000 mN and 100 mN, respectively, and shear force range and resolution were 1,500 mN and 50 mN, respectively.
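For reference, dynamic range here can be read as the ratio of force range to force resolution; the short snippet below reproduces that arithmetic from the figures quoted above (the variable names are ours).

```python
# Dynamic range as range/resolution for the two readout modalities reported above (mN).
specs_mN = {
    "contact resistive, normal": (8000, 1000),
    "contact resistive, shear":  (450, 100),
    "capacitive, normal":        (10000, 100),
    "capacitive, shear":         (1500, 50),
}
for name, (force_range, resolution) in specs_mN.items():
    print(f"{name}: dynamic range ~ {force_range / resolution:g}:1")
# e.g. the capacitive normal-force channel spans ~100:1, versus ~8:1 contact-resistive.
```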