904 results for Simultaneous Localization
Abstract:
Simultaneous localization and mapping (SLAM) is a very important problem in mobile robotics. Many solutions have been proposed during the last two decades; nevertheless, few studies have considered the use of multiple sensors simultaneously. The proposed solution is based on combining several data sources with the aid of an Extended Kalman Filter (EKF). Two approaches are proposed. The first is to run the ordinary EKF SLAM algorithm for each data source separately in parallel and then, at the end of each step, fuse the results into one solution. The second is to use multiple data sources simultaneously in a single filter. A comparison of the computational complexity of the two methods is also presented; the first method is almost four times faster than the second.
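A minimal sketch of the first approach's per-step merge, assuming both filters report the same state vector and are combined by information-weighted averaging (the abstract does not specify the fusion rule; all names and values below are illustrative):

```cpp
#include <Eigen/Dense>
#include <iostream>
#include <utility>

// Fuse two Gaussian state estimates (x1, P1) and (x2, P2) of the same state by
// information-weighted averaging: P = (P1^-1 + P2^-1)^-1, x = P (P1^-1 x1 + P2^-1 x2).
// Illustrative fusion rule only; the paper's exact rule is not given in the abstract.
std::pair<Eigen::VectorXd, Eigen::MatrixXd> fuseEstimates(
    const Eigen::VectorXd& x1, const Eigen::MatrixXd& P1,
    const Eigen::VectorXd& x2, const Eigen::MatrixXd& P2) {
  const Eigen::MatrixXd I1 = P1.inverse();        // information matrix of filter 1
  const Eigen::MatrixXd I2 = P2.inverse();        // information matrix of filter 2
  const Eigen::MatrixXd P  = (I1 + I2).inverse(); // fused covariance
  const Eigen::VectorXd x  = P * (I1 * x1 + I2 * x2); // fused mean
  return {x, P};
}

int main() {
  // Two hypothetical 3-DOF pose estimates (x, y, heading) from two parallel EKF SLAM instances.
  Eigen::VectorXd x1(3), x2(3);
  x1 << 1.02, 2.10, 0.15;
  x2 << 0.98, 2.05, 0.17;
  Eigen::MatrixXd P1 = Eigen::MatrixXd::Identity(3, 3) * 0.04;  // tighter estimate
  Eigen::MatrixXd P2 = Eigen::MatrixXd::Identity(3, 3) * 0.09;  // looser estimate
  auto [x, P] = fuseEstimates(x1, P1, x2, P2);
  std::cout << "fused pose:\n" << x << "\n";
  return 0;
}
```

Information-weighted averaging treats the two estimates as independent; when both filters are driven by the same odometry, a more conservative rule such as covariance intersection avoids overconfident fused covariances.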
Abstract:
This paper illustrates a method for finding useful visual landmarks for performing simultaneous localization and mapping (SLAM). The method is based loosely on biological principles, using layers of filtering and pooling to create learned templates that correspond to different views of the environment. Rather than using a set of landmarks and reporting range and bearing to each landmark, this system maps views to poses. The challenge is to build a system that reports the same view for small changes in robot pose but different views for larger changes in pose. The method has been developed to interface with the RatSLAM system, a biologically inspired method of SLAM. The paper describes the method of learning and recalling visual landmarks in detail, and shows the performance of the visual system in real robot tests.
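A rough sketch of the view-template idea: pool the incoming image into a coarse grid, compare it against stored templates, and either recall the best match or learn a new one. The pooling, distance metric, threshold and pose handling below are illustrative assumptions, not the paper's actual pipeline:

```cpp
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

// A pooled view template: a small grid of mean intensities plus the pose it was learned at.
struct ViewTemplate {
  std::vector<double> cells;   // pooled intensities, row-major
  double x, y, theta;          // robot pose when the view was learned (illustrative)
};

// Sum of absolute differences between two pooled views of equal size.
double viewDistance(const std::vector<double>& a, const std::vector<double>& b) {
  double d = 0.0;
  for (std::size_t i = 0; i < a.size(); ++i) d += std::abs(a[i] - b[i]);
  return d;
}

// Return the index of the matching template, or learn a new one if nothing is close enough.
// 'threshold' controls the trade-off described in the abstract: small pose changes should
// re-activate the same template, larger changes should recall or create a different one.
std::size_t recallOrLearn(std::vector<ViewTemplate>& memory,
                          const std::vector<double>& pooledView,
                          double x, double y, double theta,
                          double threshold) {
  std::size_t best = 0;
  double bestDist = std::numeric_limits<double>::max();
  for (std::size_t i = 0; i < memory.size(); ++i) {
    const double d = viewDistance(memory[i].cells, pooledView);
    if (d < bestDist) { bestDist = d; best = i; }
  }
  if (!memory.empty() && bestDist < threshold) return best;  // recall existing view
  memory.push_back({pooledView, x, y, theta});               // learn a new view
  return memory.size() - 1;
}
```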
Abstract:
Probabilistic robotics, most often applied to the problem of simultaneous localisation and mapping (SLAM), requires measures of uncertainty to accompany observations of the environment. This paper describes how uncertainty can be characterised for a vision system that locates coloured landmarks in a typical laboratory environment. The paper describes a model of the uncertainty in segmentation, the internal camera model and the mounting of the camera on the robot. It explains the implementation of the system on a laboratory robot, and provides experimental results that show the coherence of the uncertainty model.
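One common way to attach uncertainty to such a landmark observation, shown only as an illustrative sketch rather than the paper's model, is to propagate the variance of the segmented centroid through the camera model with a first-order approximation:

```cpp
#include <cmath>
#include <iostream>

// Bearing of a landmark from its image column under a pinhole model:
//   bearing = atan((u - cx) / f)
// First-order propagation of the pixel variance through this mapping:
//   var(bearing) ~= (d bearing / d u)^2 * var(u)
struct BearingEstimate {
  double bearing;    // radians
  double variance;   // radians^2
};

BearingEstimate bearingFromPixel(double u, double varU, double f, double cx) {
  const double du = u - cx;
  const double bearing = std::atan2(du, f);
  const double dBearing_dU = f / (f * f + du * du);  // derivative of atan(du/f) w.r.t. u
  return {bearing, dBearing_dU * dBearing_dU * varU};
}

int main() {
  // Hypothetical values: centroid at column 400 with +/- 2 px segmentation noise,
  // focal length 500 px, principal point at column 320.
  const BearingEstimate b = bearingFromPixel(400.0, 4.0, 500.0, 320.0);
  std::cout << "bearing = " << b.bearing << " rad, variance = " << b.variance << "\n";
  return 0;
}
```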
Abstract:
This paper presents the implementation of a modified particle filter for vision-based simultaneous localization and mapping of an autonomous robot in a structured indoor environment. Through this method, artificial landmarks such as multi-coloured cylinders can be tracked with a camera mounted on the robot, and the position of the robot can be estimated at the same time. Experimental results in simulation and in real environments show that this approach has advantages over the extended Kalman filter with ambiguous data association and various levels of odometric noise.
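A bare-bones sketch of one predict/update/resample cycle of a particle filter for landmark-based localization, under an assumed range-only measurement model; the paper's modifications and its handling of ambiguous data association are not reproduced here:

```cpp
#include <cmath>
#include <cstddef>
#include <random>
#include <utility>
#include <vector>

struct Particle { double x, y, theta, weight; };

// One predict/update/resample cycle of a plain particle filter for robot localization.
// (lx, ly) is a known map landmark and 'measuredRange' its observed distance.
void particleFilterStep(std::vector<Particle>& particles,
                        double dTrans, double dRot,          // odometry increment
                        double lx, double ly,                // landmark position
                        double measuredRange, double rangeSigma,
                        std::mt19937& rng) {
  std::normal_distribution<double> transNoise(0.0, 0.02), rotNoise(0.0, 0.01);

  // Predict: apply noisy odometry to every particle.
  for (auto& p : particles) {
    p.theta += dRot + rotNoise(rng);
    const double trans = dTrans + transNoise(rng);
    p.x += trans * std::cos(p.theta);
    p.y += trans * std::sin(p.theta);
  }

  // Update: weight each particle by the Gaussian likelihood of the range measurement.
  double sum = 0.0;
  for (auto& p : particles) {
    const double expected = std::hypot(lx - p.x, ly - p.y);
    const double err = measuredRange - expected;
    p.weight = std::exp(-0.5 * err * err / (rangeSigma * rangeSigma));
    sum += p.weight;
  }
  for (auto& p : particles) p.weight /= sum;

  // Resample: draw a new particle set proportionally to the weights (multinomial resampling).
  std::vector<double> weights;
  for (const auto& p : particles) weights.push_back(p.weight);
  std::discrete_distribution<std::size_t> pick(weights.begin(), weights.end());
  std::vector<Particle> resampled;
  resampled.reserve(particles.size());
  for (std::size_t i = 0; i < particles.size(); ++i) resampled.push_back(particles[pick(rng)]);
  particles = std::move(resampled);
}
```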
Abstract:
The work presents a new approach to the problem of simultaneous localization and mapping - SLAM - inspired by computational models of the hippocampus of rodents. The rodent hippocampus has been extensively studied with respect to navigation tasks, and displays many of the properties of a desirable SLAM solution. RatSLAM is an implementation of a hippocampal model that can perform SLAM in real time on a real robot. It uses a competitive attractor network to integrate odometric information with landmark sensing to form a consistent representation of the environment. Experimental results show that RatSLAM can operate with ambiguous landmark information and recover from both minor and major path integration errors.
Abstract:
Simultaneous Localization and Mapping (SLAM) is a procedure used to determine the location of a mobile vehicle in an unknown environment while constructing a map of that environment at the same time. Mobile platforms that make use of SLAM algorithms have industrial applications in autonomous maintenance, such as the inspection of flaws and defects in oil pipelines and storage tanks. A typical SLAM system consists of four main components, namely, experimental setup (data gathering), vehicle pose estimation, feature extraction, and filtering. Feature extraction is the process of identifying significant features of the unknown environment such as corners, edges, walls, and interior features. In this work, an original feature extraction algorithm specific to distance measurements obtained through SONAR sensor data is presented. This algorithm has been constructed by combining the SONAR Salient Feature Extraction Algorithm and the Triangulation Hough Based Fusion with point-in-polygon detection. The reconstructed maps obtained through simulations and experimental data with the fusion algorithm are compared to the maps obtained with existing feature extraction algorithms. Based on the results obtained, it is suggested that the proposed algorithm can be employed as an option for data obtained from SONAR sensors in environments where other forms of sensing are not viable. The feature extraction algorithm fusion requires the vehicle pose estimate as an input, which is obtained from a vehicle pose estimation model. For the vehicle pose estimation, the author uses sensor integration to estimate the pose of the mobile vehicle, and different sensor combinations are studied (e.g., encoder only, gyroscope only, or encoder and gyroscope). The different sensor fusion techniques for the pose estimation are experimentally studied and compared. The vehicle pose estimation model that produces the least error is used to generate inputs for the feature extraction algorithm fusion. In the experimental studies, two different environmental configurations are used, one without interior features and another one with two interior features. Numerical and experimental findings are discussed. Finally, the SLAM algorithm is implemented along with the algorithms for feature extraction and vehicle pose estimation. Three different cases are experimentally studied, with the floor of the environment intentionally altered to induce slipping. Results obtained for implementations with and without SLAM are compared and discussed. The present work represents a step towards the realization of autonomous inspection platforms for performing concurrent localization and mapping in harsh environments.
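The point-in-polygon detection named above is a standard geometric test; a generic ray-casting implementation is sketched below (how the thesis applies it to SONAR features is not described in the abstract):

```cpp
#include <cstddef>
#include <vector>

struct Point2D { double x, y; };

// Ray-casting test: count how many polygon edges a horizontal ray from 'p' crosses.
// An odd count means the point lies inside the polygon. Vertices may be listed in
// either winding order; the polygon is assumed simple (non self-intersecting).
bool pointInPolygon(const Point2D& p, const std::vector<Point2D>& poly) {
  bool inside = false;
  const std::size_t n = poly.size();
  for (std::size_t i = 0, j = n - 1; i < n; j = i++) {
    const bool crossesY = (poly[i].y > p.y) != (poly[j].y > p.y);
    if (crossesY) {
      // x coordinate where the edge (j, i) crosses the ray's height p.y
      const double xAtY = poly[j].x + (poly[i].x - poly[j].x) *
                          (p.y - poly[j].y) / (poly[i].y - poly[j].y);
      if (p.x < xAtY) inside = !inside;
    }
  }
  return inside;
}
```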
Abstract:
The role of catecholamines in the control of the GnRH pulse generator is unclear, as studies have relied on the use of peripheral or intracerebroventricular injections, which lack specificity in relation to the anatomical site of action. Direct brain site infusions have been used; however, these are limited by the ability to accurately target small brain regions. One such area of interest in the control of GnRH is the median eminence and arcuate nucleus within the medial basal hypothalamus. Here we describe a method of stereotaxically targeting this area in a large animal (sheep) and an infusion system to deliver drugs into unrestrained conscious animals. To test our technique we infused the dopamine agonist quinpirole, or vehicle, into the medial basal hypothalamus of ovariectomised ewes. Quinpirole significantly suppressed LH pulsatility only in animals with injectors located close to the lateral median eminence. This in vivo result supports the hypothesis that dopamine inhibits GnRH secretion by presynaptic inhibition in the lateral median eminence. In addition, infusion of quinpirole into the medial basal hypothalamus suppressed prolactin secretion, providing in vivo evidence consistent with the hypothesis that there are stimulatory autoreceptors on tubero-infundibular dopamine neurons. (C) 1997 Elsevier Science B.V.
Abstract:
This dissertation presents a solution to the problem of three-dimensional modelling of underground galleries. The work applies techniques from mobile robotics to obtain an autonomous mobile modelling system capable of operating in unstructured environments without access to global positioning systems, namely GPS. A mobile, autonomous modelling system can be highly advantageous, as it constitutes a fast and simple way of monitoring the structures and of creating virtual representations of the galleries with a high level of detail. The modelling system moves through the interior of the tunnels to collect sensory information about the geometry of the structure. Organizing these data into a coherent model requires exact knowledge of the path travelled by the system, so the localization problem of the sensor platform has to be solved. The formulation of an autonomous localization system must overcome obstacles that are particularly pronounced in underground environments, such as structural monotony and the aforementioned absence of global positioning systems. In this context, the concept of SLAM (Simultaneous Localization and Mapping) was adopted to determine the localization of the sensor platform in six degrees of freedom. Following the traditional approach, the core of the SLAM algorithm is the Extended Kalman Filter (EKF). The proposed system incorporates advanced state-of-the-art methods, namely the Inverse Depth Parametrization and the 1-Point RANSAC outlier rejection method. The most important contribution of the proposed method to the advancement of the state of the art is the fusion of visual information with inertial information. The localization algorithm was tested on real data acquired inside a road tunnel. The results show that, by fusing inertial measurements with visual information, we avoid the scale-factor degeneration phenomenon that is common in localization applications based on purely monocular systems. We also show that correcting an inertial localization system with visual information is effective, since it suppresses the trajectory drift that characterizes dead-reckoning systems. Based on the estimated localization, the modelling algorithm organizes the acquired geometric data in three-dimensional space, producing a point-cloud model that is subsequently converted into a triangular mesh, yielding a more realistic representation of the original scene.
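The Inverse Depth Parametrization mentioned above encodes each feature by the camera position at first observation, the azimuth and elevation of the observation ray, and the inverse of the depth along that ray. A small sketch of recovering the Euclidean point from that encoding, using the common convention of Civera et al. (not code from the dissertation):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Inverse-depth feature: first-observation camera position (x0, y0, z0),
// azimuth theta, elevation phi, and inverse depth rho = 1 / depth.
struct InverseDepthFeature { double x0, y0, z0, theta, phi, rho; };

// Directional vector of the observation ray (one common axis convention;
// other conventions are equally valid).
Vec3 rayDirection(double theta, double phi) {
  return { std::cos(phi) * std::sin(theta),
          -std::sin(phi),
           std::cos(phi) * std::cos(theta) };
}

// Euclidean position of the feature: p = origin + (1 / rho) * m(theta, phi).
// Distant points map to small rho values, which keeps the representation well
// behaved inside the EKF; that is the reason it is favoured in monocular SLAM.
Vec3 toEuclidean(const InverseDepthFeature& f) {
  const Vec3 m = rayDirection(f.theta, f.phi);
  return { f.x0 + m.x / f.rho, f.y0 + m.y / f.rho, f.z0 + m.z / f.rho };
}
```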
Abstract:
Navigation and interpretation of the surrounding environment by autonomous vehicles in unstructured environments remains a major challenge today. Sebastian Thrun describes in [Thr02] the mapping problem in robotic systems as that of acquiring a spatial model of the robot's surroundings. In this context, the integration of sensor systems into robotic platforms that allow the construction of maps of the surrounding world is extremely important. The information extracted from these data can be interpreted and applied to localization, navigation and object manipulation tasks. Until very recently, most robotic systems performing mapping or Simultaneous Localization And Mapping (SLAM) tasks used devices such as laser rangefinders and stereo cameras. Besides being expensive, these devices provide only two-dimensional information, collected as 2D cross-sections in the case of rangefinders. The paradigm of this type of technology changed considerably with the arrival on the market of RGB-D cameras, such as the one developed by PrimeSense, and the subsequent launch of the Kinect by Microsoft for the Xbox 360 at the end of 2010. The quality of the depth sensor, given its low cost and its real-time acquisition capability, is remarkable, and made the sensor instantly popular among researchers and enthusiasts. This technological advance gave rise to several tools for development and human interaction with this type of sensor, such as the Point Cloud Library (PCL) [RC11]. This tool aims to provide support for all the common building blocks that a 3D application needs, with particular emphasis on the processing of n-dimensional point clouds acquired from RGB-D cameras, as well as laser scanners, time-of-flight cameras or stereo cameras. In this context, this dissertation evaluates and compares some of the modules and methods of the PCL library for solving problems inherent to the construction and interpretation of maps in unstructured indoor environments, using data from the Kinect. Based on this evaluation, a system architecture is proposed that systematizes the registration of point clouds, corresponding to partial views of the world, into a consistent global model. The results of the evaluation of the PCL library attest to its suitability for solving the proposed problems. Proof of this suitability is given by the practical results obtained with the implementation of the proposed system architecture, which shows interesting performance as well as good prospects for integrating this type of concept and technology into robotic platforms developed within projects of the Laboratório de Sistemas Autónomos (LSA).
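A minimal sketch of pairwise registration with PCL's ICP module, the kind of building block evaluated here; file names and parameter values are placeholders:

```cpp
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/registration/icp.h>
#include <iostream>

int main() {
  pcl::PointCloud<pcl::PointXYZ>::Ptr source(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::PointCloud<pcl::PointXYZ>::Ptr target(new pcl::PointCloud<pcl::PointXYZ>);

  // Two partial views of the scene captured with the Kinect (placeholder file names).
  if (pcl::io::loadPCDFile<pcl::PointXYZ>("view_a.pcd", *source) < 0 ||
      pcl::io::loadPCDFile<pcl::PointXYZ>("view_b.pcd", *target) < 0) {
    std::cerr << "could not load input clouds\n";
    return 1;
  }

  // Align the source cloud to the target cloud with Iterative Closest Point.
  pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
  icp.setInputSource(source);
  icp.setInputTarget(target);
  icp.setMaximumIterations(50);            // illustrative values
  icp.setMaxCorrespondenceDistance(0.05);  // 5 cm
  pcl::PointCloud<pcl::PointXYZ> aligned;
  icp.align(aligned);

  if (icp.hasConverged()) {
    // The rigid transform that maps the source view into the target's frame;
    // chaining such transforms registers successive views into one global model.
    std::cout << "fitness: " << icp.getFitnessScore() << "\n"
              << icp.getFinalTransformation() << "\n";
  }
  return 0;
}
```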
Abstract:
Robotica 2012: 12th International Conference on Autonomous Robot Systems and Competitions, April 11, 2012, Guimarães, Portugal
Abstract:
Underground scenarios are among the most challenging environments for accurate and precise 3D mapping, where hostile conditions such as the absence of Global Positioning Systems, extreme lighting variations and geometrically smooth surfaces may be expected. So far, the state-of-the-art methods in underground modelling remain restricted to environments in which pronounced geometric features are abundant. This limitation is a consequence of the scan matching algorithms used to solve the localization and registration problems. This paper contributes to the expansion of the modelling capabilities to structures characterized by uniform geometry and smooth surfaces, as is the case of road and train tunnels. To achieve that, we combine state-of-the-art techniques from mobile robotics and propose a method for 6DOF platform positioning in such scenarios, which is later used for environment modelling. A visual monocular Simultaneous Localization and Mapping (MonoSLAM) approach based on the Extended Kalman Filter (EKF), complemented by the introduction of inertial measurements in the prediction step, allows our system to localize itself over long distances using exclusively sensors carried on board a mobile platform. By feeding the Extended Kalman Filter with inertial data we were able to overcome the major problem of MonoSLAM implementations, known as scale-factor ambiguity. Despite extreme lighting variations, reliable visual features were extracted with the SIFT algorithm and inserted directly into the EKF mechanism according to the Inverse Depth Parametrization. Through 1-Point RANSAC (Random Sample Consensus), wrong frame-to-frame feature matches were rejected. The developed method was tested on a dataset acquired inside a road tunnel, and the navigation results were compared with a ground truth obtained by post-processing a high-grade Inertial Navigation System and L1/L2 RTK-GPS measurements acquired outside the tunnel. Results from the localization strategy are presented and analyzed.
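A schematic sketch of an inertially driven EKF prediction step, reduced to a one-dimensional position/velocity state propagated with an accelerometer reading; the paper's filter estimates the full 6DOF pose plus map features, so this only illustrates why metric inertial input resolves the monocular scale:

```cpp
#include <Eigen/Dense>

// Simplified EKF prediction with an inertial input: state s = [position, velocity],
// propagated over dt with a measured acceleration 'a'. Feeding metric inertial data into
// the prediction step is what anchors the scale that a purely monocular filter cannot observe.
void predictWithImu(Eigen::Vector2d& state, Eigen::Matrix2d& P,
                    double a, double dt, double accelNoiseVar) {
  Eigen::Matrix2d F;                      // state transition Jacobian
  F << 1.0, dt,
       0.0, 1.0;
  Eigen::Vector2d B(0.5 * dt * dt, dt);   // how acceleration enters the state

  state = F * state + B * a;                                    // propagate the mean
  const Eigen::Matrix2d Q = B * B.transpose() * accelNoiseVar;  // process noise from accel noise
  P = F * P * F.transpose() + Q;                                // propagate the covariance
}
```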
Abstract:
Master's dissertation in Industrial Electronics and Computers Engineering (specialization in Robotics)
Abstract:
In this communication we review the results obtained with the confocal laser scanning microscope to characterize the interaction of epimastigote and trypomastigote forms of Trypanosoma cruzi and tachyzoites of Toxoplasma gondii with host cells. Early events of the interaction process were studied by the simultaneous localization of sites of protein phosphorylation, revealed by immunocytochemistry, and sites of actin assembly, revealed by the use of labeled phalloidin. The results obtained show that proteins localized at the interaction sites are phosphorylated. The process of formation of the parasitophorous vacuole was monitored by labeling the host cell surface with fluorescent probes for lipids (PKH26), proteins (DTAF) and sialic acid (FITC-thiosemicarbazide) before interaction with the parasites. Evidence was obtained indicating transfer of components of the host cell surface to the parasite surface at the beginning of the interaction process. We also analyzed the distribution of cytoskeletal structures (microtubules and microfilaments, visualized with specific antibodies), mitochondria (visualized with rhodamine 123), the Golgi complex (visualized with C6-NBD-ceramide) and the endoplasmic reticulum (visualized with anti-reticulin antibodies and DiOC6) during the evolution of intracellular parasitism. The results obtained show that some, but not all, structures change their position during the evolution of intracellular parasitism.
Abstract:
The emergence of depth sensors has made it possible to track not only monocular cues but also the actual depth values of the environment. This is especially useful in augmented reality solutions, where the position and orientation (pose) of the observer need to be accurately determined. It allows virtual objects to be placed in the user's view through, for example, the screen of a tablet or augmented reality glasses (e.g. Google Glass). Although early 3D sensors have been physically quite large, their size is decreasing, and eventually a 3D sensor could be embedded, for example, in augmented reality glasses. The wider subject area considered in this review is 3D SLAM methods, which take advantage of the 3D information provided by modern RGB-D sensors, such as the Microsoft Kinect. A review of SLAM (Simultaneous Localization and Mapping) and 3D tracking in augmented reality is therefore a timely subject. We also try to identify the limitations and possibilities of different tracking methods, and how they should be improved in order to allow efficient integration of these methods into the augmented reality solutions of the future.
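As a small illustration of what such RGB-D sensors provide, the sketch below back-projects a depth pixel into a 3D camera-frame point with the pinhole model; the intrinsic values are placeholders in the ballpark of a Kinect-class sensor, not calibration constants:

```cpp
struct Point3D { double x, y, z; };

// Pinhole intrinsics: focal lengths (px) and principal point (px).
struct Intrinsics { double fx, fy, cx, cy; };

// Back-project one depth pixel (u, v) with depth z (metres) into camera coordinates.
// These per-pixel 3D points are the raw material that 3D SLAM and dense tracking
// methods align from frame to frame to recover the sensor pose.
Point3D backProject(double u, double v, double z, const Intrinsics& K) {
  return { (u - K.cx) * z / K.fx,
           (v - K.cy) * z / K.fy,
           z };
}

// Example intrinsics roughly in the range of a Kinect-class depth camera (placeholder values).
constexpr Intrinsics kKinectLike{575.8, 575.8, 319.5, 239.5};
```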
Abstract:
This thesis addresses the navigation problem for autonomous underwater vehicles operating in structured man-made environments such as harbours, channels, marine platforms and other similar scenarios. With an accurate position estimate in these environments, the capabilities of underwater vehicles increase considerably, opening the door to autonomous operation. Maintenance, inspection and surveillance of marine installations are some examples of possible applications. The main contributions of this thesis are, on the one hand, the development of different localization systems for situations in which a prior map of the environment is available and, on the other, the development of a new solution to the Simultaneous Localization and Mapping (SLAM) problem, whose goal is to have an autonomous vehicle build a map of the unknown environment that surrounds it while, at the same time, using that map to determine its own position. A mechanically scanned imaging sonar was chosen as the main sensor for this work, both for its relatively low cost and for its ability to produce a detailed representation of the environment. On the other hand, the particularities of its operation and, especially, the low rate at which measurements are produced are the main drawbacks that had to be addressed in the proposed localization strategies. The solutions adopted for these problems constitute another contribution of this thesis. The development of autonomous vehicles and their use as experimental platforms is another important aspect of this work. Experiments carried out both in the laboratory and in real application scenarios provided the data needed to test and evaluate the different localization systems proposed.