992 results for Visual Localisation


Relevance: 100.00%

Abstract:

This paper presents a vision-based method of vehicle localisation that has been developed and tested on a large forklift-type robotic vehicle which operates in a mainly outdoor industrial setting. The localiser uses a sparse 3D edge map of the environment and a particle filter to estimate the pose of the vehicle. The vehicle operates in dynamic and non-uniform outdoor lighting conditions, an issue addressed by using knowledge of the scene to intelligently adjust the camera exposure and hence improve the quality of the information in the image. Results from the industrial vehicle are shown and compared to a laser-based localiser which acts as ground truth. An improved likelihood metric, using a per-edge calculation, is presented and is shown to be 40% more accurate in estimating rotation. Visual localisation results from the vehicle driving an arbitrary 1.5 km path during a bright sunny period show an average position error of 0.44 m and rotation error of 0.62°.
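
The particle-filter pose estimator described in this abstract can be sketched in miniature. The motion model, noise levels and the range-to-landmark observation below are illustrative assumptions for a 2D toy problem, not the paper's actual edge-map likelihood:

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(particles, v, w, dt, noise=(0.05, 0.02)):
    """Propagate (x, y, theta) particles with a simple unicycle motion model."""
    n = len(particles)
    v_n = v + rng.normal(0, noise[0], n)
    w_n = w + rng.normal(0, noise[1], n)
    particles[:, 0] += v_n * dt * np.cos(particles[:, 2])
    particles[:, 1] += v_n * dt * np.sin(particles[:, 2])
    particles[:, 2] += w_n * dt
    return particles

def update(particles, weights, observed_range, landmark, sigma=0.2):
    """Reweight particles by the likelihood of a range observation to a known landmark."""
    expected = np.hypot(particles[:, 0] - landmark[0], particles[:, 1] - landmark[1])
    weights *= np.exp(-0.5 * ((expected - observed_range) / sigma) ** 2)
    weights += 1e-300                      # avoid all-zero weights
    return weights / weights.sum()

def resample(particles, weights):
    """Resample to concentrate particles on likely poses."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx].copy(), np.full(len(particles), 1.0 / len(particles))

# Toy run: the true pose drifts forward; the particle cloud tracks it.
particles = rng.normal([0, 0, 0], [0.5, 0.5, 0.1], size=(500, 3))
weights = np.full(500, 1.0 / 500)
landmark = (5.0, 0.0)
true_pose = np.array([0.0, 0.0, 0.0])
for _ in range(20):
    true_pose[0] += 0.1
    particles = predict(particles, v=1.0, w=0.0, dt=0.1)
    z = np.hypot(true_pose[0] - landmark[0], true_pose[1] - landmark[1])
    weights = update(particles, weights, z, landmark)
    particles, weights = resample(particles, weights)
estimate = particles.mean(axis=0)
```

The paper's system replaces the toy range likelihood with a per-edge comparison between the projected 3D edge map and edges detected in the camera image.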

Relevance: 100.00%

Abstract:

Localisation of an AUV is challenging, and a range of inspection applications require relatively accurate positioning information with respect to submerged structures. We have developed a vision-based localisation method that uses a 3D model of the structure to be inspected. The system comprises a monocular vision system, a spotlight and a low-cost IMU. Previous methods that attempt to solve the problem in a similar way try to factor out the effects of lighting. Effects such as shading on curved surfaces or specular reflections are heavily dependent on the light direction and are difficult to deal with using existing techniques. The novelty of our method is that we explicitly model the light source. Results are shown from an implementation on a small AUV in clear water at night.
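
The key idea of explicitly modelling the light source can be illustrated with simple Lambertian shading. The albedo and ambient values below are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

def predicted_intensity(normal, light_dir, albedo=0.8, ambient=0.1):
    """Lambertian shading: the image intensity predicted for a surface patch
    given an explicit model of the spotlight direction. Comparing such
    predictions against observed intensities can score pose hypotheses."""
    n = np.asarray(normal, dtype=float)
    l = np.asarray(light_dir, dtype=float)
    n /= np.linalg.norm(n)
    l /= np.linalg.norm(l)
    return ambient + albedo * max(0.0, float(n @ l))

# A patch facing the light is bright; one at a grazing angle receives
# only the ambient term -- exactly the lighting-dependent effect the
# abstract says existing techniques struggle to factor out.
facing  = predicted_intensity([0, 0, 1], [0, 0, 1])   # 0.9
grazing = predicted_intensity([1, 0, 0], [0, 0, 1])   # 0.1
```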

Relevance: 70.00%

Abstract:

Shadows are common in outdoor environments. These typically strong visual features cause considerable change in the appearance of a place and therefore confound vision-based localisation approaches. In this paper we describe how to convert a colour image of the scene to a greyscale invariant image in which pixel values are a function of the underlying material properties rather than the lighting. We summarise the theory of shadow-invariant images and discuss the modelling and calibration issues that are important for non-ideal off-the-shelf colour cameras. We evaluate the technique with a commonly used robotic camera and an autonomous car operating in an outdoor environment, and show that it can outperform ordinary greyscale images for the task of visual localisation.
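
One common way to build such an invariant image is a projection in log-chromaticity space: under an approximately Planckian illuminant, the lighting component moves pixel values along a line whose direction is a camera calibration property. The calibration angle `alpha` and the pixel values below are illustrative assumptions, not the paper's calibrated figures:

```python
import numpy as np

def invariant_image(rgb, alpha=0.45):
    """Map an RGB image to a greyscale illumination-invariant image via a
    log-chromaticity projection. `alpha` encodes the calibrated invariant
    direction for a specific camera; the value here is illustrative only."""
    rgb = rgb.astype(np.float64) + 1e-6          # avoid log(0)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.5 + np.log(g) - alpha * np.log(b) - (1.0 - alpha) * np.log(r)

# A surface patch lit directly vs. in shadow: the raw intensities differ
# greatly, but the invariant values stay close (for these toy pixels).
lit    = np.array([[[200.0, 150.0, 100.0]]])
shadow = np.array([[[100.0,  80.0,  60.0]]])   # dimmer and bluer
inv_lit = invariant_image(lit)[0, 0]
inv_shadow = invariant_image(shadow)[0, 0]
```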

Relevance: 60.00%

Abstract:

Purpose: Unilateral loss of the posterior visual cortex causes cortical blindness contralateral to the lesion, known as homonymous hemianopia (HH). It is notably accompanied by visual exploration deficits in the blind hemifield due to impaired oculomotor strategies, which have been the target of compensation therapies. However, this loss of vision can be accompanied by unconscious visual perception, called blindsight. Our hypothesis proposes that blindsight is mediated by the extrastriate retino-collicular pathway, recruiting the superior colliculus (SC), a multisensory structure. Our programme aims to evaluate the impact of multisensory (audiovisual) training on the unconscious visual performance of people with hemianopia and on their oculomotor strategies. We thereby attempt to demonstrate the involvement of the SC in the blindsight phenomenon and the relevance of multisensory compensation as a rehabilitation therapy. Method: Our participant, ML, who has a right HH, completed audiovisual integration training over a period of 10 days. We assessed visual performance in localisation and detection, as well as oculomotor strategies, through three main comparisons: (1) between the normal and blind hemifields; (2) between the visual condition and the audiovisual conditions; (3) between the pre-training, post-training and 3-month post-training sessions. Results: We demonstrated that (1) saccade and fixation characteristics are impaired in the blind hemifield; (2) saccadic strategies differ according to eccentricity and stimulation condition; (3) long-term saccadic adaptation is possible in the blind hemifield when the appropriate frame of reference is considered; (4) the improvement in eye movements is related to blindsight.
Conclusion(s): Multisensory training leads to improved visual performance for unperceived targets, in both localisation and detection, possibly induced by the development of oculomotor performance.


Relevance: 40.00%

Abstract:

Recently, we introduced a new 'GLM-beamformer' technique for MEG analysis that enables accurate localisation of both phase-locked and non-phase-locked neuromagnetic effects, and their representation as statistical parametric maps (SPMs). This provides a useful framework for comparing the full range of MEG responses with fMRI BOLD results. This paper reports a 'proof of principle' study using a simple visual paradigm (static checkerboard). Each of the five subjects underwent both the MEG and fMRI paradigms. We demonstrate, for the first time, the presence of a sustained (DC) field in the visual cortex, and its co-localisation with the visual BOLD response. The GLM-beamformer analysis method is also used to investigate the main non-phase-locked oscillatory effects: an event-related desynchronisation (ERD) in the alpha band (8-13 Hz) and an event-related synchronisation (ERS) in the gamma band (55-70 Hz). We show, using SPMs and virtual electrode traces, the spatio-temporal covariance of these effects with the visual BOLD response. Comparisons between MEG and fMRI data sets generally focus on the relationship between the BOLD response and the transient evoked response. Here, we show that the stationary field and changes in oscillatory power are also important contributors to the BOLD response, and should be included in future studies on the relationship between neuronal activation and the haemodynamic response.
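
The GLM part of the idea, regressing non-phase-locked band power against the stimulus time course, can be illustrated on synthetic data. Everything below (block timing, alpha frequency, noise level) is a made-up toy, not the study's actual pipeline, and the beamformer source-reconstruction step is omitted entirely:

```python
import numpy as np

rng = np.random.default_rng(2)
fs, t_total = 100, 20.0
t = np.arange(0, t_total, 1 / fs)
stim = ((t % 4) < 2).astype(float)            # 2 s on / 2 s off boxcar blocks

# Synthetic alpha-band signal whose amplitude drops during stimulation (ERD):
# the effect is in the power envelope, not phase-locked to stimulus onset.
amp = 1.0 - 0.5 * stim
signal = amp * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(len(t))
power = signal ** 2                            # crude instantaneous power

# GLM: design matrix with an intercept and the boxcar regressor.
X = np.column_stack([np.ones_like(t), stim])
beta, *_ = np.linalg.lstsq(X, power, rcond=None)
# beta[1] < 0 indicates an event-related desynchronisation.
```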

Relevance: 30.00%

Abstract:

Competent navigation in an environment is a major requirement for an autonomous mobile robot to accomplish its mission. Nowadays, many successful systems for navigating a mobile robot use an internal map which represents the environment in a detailed geometric manner. However, building, maintaining and using such environment maps for navigation is difficult because of perceptual aliasing and measurement noise. Moreover, geometric maps require the processing of huge amounts of data, which is computationally expensive. This thesis addresses the problem of vision-based topological mapping and localisation for mobile robot navigation. Topological maps are concise and graphical representations of environments that are scalable and amenable to symbolic manipulation. Thus, they are well-suited for basic robot navigation applications, and also provide a representational basis for the procedural and semantic information needed for higher-level robotic tasks. In order to make vision-based topological navigation suitable for inexpensive mass-market mobile robots, we propose to characterise key places of the environment by their visual appearance through colour histograms. The approach is based on the observation that colour histograms change slowly as the field of vision sweeps the scene while a robot moves through an environment. Hence, a place represents a region of the environment rather than a single position. We demonstrate in experiments on an indoor data set that a topological map in which places are characterised by visual appearance, augmented with metric clues, provides sufficient information to perform continuous metric localisation that is robust to the kidnapped-robot problem. Many topological mapping methods build a topological map by clustering visual observations into places.
However, due to perceptual aliasing, observations from different places may be mapped to the same place representative in the topological map. A main contribution of this thesis is a novel approach for dealing with the perceptual aliasing problem in topological mapping. We propose to incorporate neighbourhood relations to disambiguate places which are otherwise indistinguishable. We present a constraint-based stochastic local search method which integrates the place-disambiguation approach in order to induce a topological map. Experiments show that the proposed method is capable of mapping environments with a high degree of perceptual aliasing, and that a small map is found quickly. Moreover, the method of using neighbourhood information for place disambiguation is integrated into a framework for topological off-line simultaneous localisation and mapping which does not require an initial categorisation of visual observations. Experiments on an indoor data set demonstrate the suitability of our method to reliably localise the robot while building a topological map.
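
The core place-recognition step, matching colour histograms of the current view against stored place signatures, can be sketched as follows. The images, bin count and similarity measure (histogram intersection) are illustrative choices, not necessarily those of the thesis:

```python
import numpy as np

def colour_histogram(image, bins=8):
    """Concatenated per-channel colour histogram, normalised to sum to 1."""
    hists = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(np.float64)
    return h / h.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical colour distributions."""
    return np.minimum(h1, h2).sum()

# Two synthetic "places" with different colour statistics; a noisy second
# view of place A should still be recognised as place A.
rng = np.random.default_rng(1)
place_a = rng.integers(100, 200, size=(32, 32, 3))   # brighter scene
place_b = rng.integers(0, 100, size=(32, 32, 3))     # darker scene
query = np.clip(place_a + rng.integers(-10, 10, size=(32, 32, 3)), 0, 255)

db = [colour_histogram(place_a), colour_histogram(place_b)]
hq = colour_histogram(query)
best = max(range(len(db)), key=lambda i: histogram_intersection(hq, db[i]))
```

The slow drift of histograms as the camera sweeps a scene is what lets one signature stand for a whole region rather than a single position.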

Relevance: 30.00%

Abstract:

To navigate successfully in a novel environment a robot needs to be able to Simultaneously Localize And Map (SLAM) its surroundings. The most successful solutions to this problem so far have involved probabilistic algorithms, but there has been much promising work involving systems based on the workings of the part of the rodent brain known as the hippocampus. In this paper we present a biologically plausible system called RatSLAM that uses competitive attractor networks to carry out SLAM in a probabilistic manner. The system can effectively perform parameter self-calibration and SLAM in one dimension. Tests in two-dimensional environments revealed the inability of the RatSLAM system to maintain multiple pose hypotheses in the face of ambiguous visual input. These results support recent rat experiments suggesting that current competitive attractor models are not a complete solution to the hippocampal modelling problem.

Relevance: 30.00%

Abstract:

Recovering position from sensor information is an important problem in mobile robotics, known as localisation. Localisation requires a map or some other description of the environment to provide the robot with a context in which to interpret sensor data. The mobile robot system under discussion uses an artificial neural representation of position. Building a geometrical map of the environment with a single camera and artificial neural networks is difficult; instead, it is simpler to learn position as a function of the visual input. Usually when learning images, an intermediate representation is employed. An appropriate starting point for biologically plausible image representation is the complex cells of the visual cortex, which have invariance properties that appear useful for localisation. The effectiveness for localisation of two different complex cell models is evaluated. Finally, the ability of a simple neural network with single-shot learning to recognise these representations and localise a robot is examined.
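
The invariance property of complex cells that makes them attractive here can be illustrated with the classical energy model: squaring and summing the responses of an even/odd Gabor quadrature pair cancels stimulus phase, so a small image shift barely changes the output. The filter parameters below are illustrative, not those of the models evaluated in the paper:

```python
import numpy as np

def gabor_pair(size=21, wavelength=8.0, sigma=4.0, theta=0.0):
    """Even/odd Gabor quadrature pair oriented at `theta`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    even = env * np.cos(2 * np.pi * xr / wavelength)
    odd  = env * np.sin(2 * np.pi * xr / wavelength)
    return even, odd

def complex_cell_response(patch, even, odd):
    """Energy model: squared quadrature responses summed, so the phase
    (position) of the stimulus within the receptive field cancels."""
    return (patch * even).sum() ** 2 + (patch * odd).sum() ** 2

# A vertical grating and a shifted copy drive the model cell almost equally.
size = 21
cols = np.arange(size)
grating = np.tile(np.sin(2 * np.pi * cols / 8.0), (size, 1))
shifted = np.tile(np.sin(2 * np.pi * (cols + 2) / 8.0), (size, 1))
even, odd = gabor_pair()
r1 = complex_cell_response(grating, even, odd)
r2 = complex_cell_response(shifted, even, odd)
```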

Relevance: 30.00%

Abstract:

This paper describes the current state of RatSLAM, a Simultaneous Localisation and Mapping (SLAM) system based on models of the rodent hippocampus. RatSLAM uses a competitive attractor network to fuse visual and odometry information. Energy packets in the network represent pose hypotheses, which are updated by odometry and can be enhanced or inhibited by visual input. This paper shows the effectiveness of the system in real robot tests in unmodified indoor environments using a learning vision system. Results are shown for two test environments: a large corridor loop and the complete floor of an office building.
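
The attractor dynamics described above can be sketched on a 1-D ring of pose cells: local excitation and global inhibition maintain an energy packet, odometry shifts it, and visual input injects activity at a recognised pose. The network size, weights and injection strength below are toy assumptions, not RatSLAM's actual parameters:

```python
import numpy as np

N = 60                                            # pose cells on a 1-D ring
idx = np.arange(N)
dist = np.abs(idx[:, None] - idx[None, :])
dist = np.minimum(dist, N - dist)                 # wrap-around distance
w_exc = np.exp(-dist**2 / (2 * 2.0**2))           # local excitatory weights

def attractor_step(cells, inhibition=0.3):
    """One update: local excitation, global inhibition, normalisation."""
    cells = w_exc @ cells
    cells = np.maximum(cells - inhibition * cells.sum() / len(cells), 0.0)
    return cells / cells.sum()

def path_integrate(cells, shift):
    """Odometry shifts the energy packet around the ring."""
    return np.roll(cells, shift)

def visual_inject(cells, pose, strength=2.0):
    """Visual input enhances activity at the recognised pose."""
    cells = cells.copy()
    cells[pose] += strength * cells.sum()
    return cells / cells.sum()

cells = np.zeros(N)
cells[10] = 1.0                                   # initial pose hypothesis
for _ in range(5):                                # odometry moves the packet
    cells = attractor_step(path_integrate(cells, 1))
for _ in range(5):                                # vision snaps it to cell 17
    cells = attractor_step(visual_inject(cells, 17))
estimate = int(np.argmax(cells))
```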

Relevance: 30.00%

Abstract:

The Simultaneous Localisation And Mapping (SLAM) problem is one of the major challenges in mobile robotics. Probabilistic techniques using high-end range-finding devices are well established in the field, but recent work has investigated vision-only approaches. We present an alternative approach to the leading existing techniques, which extracts approximate rotational and translational velocity information from a vehicle-mounted consumer camera, without tracking landmarks. When coupled with an existing SLAM system, the vision module is able to map a 45 metre long indoor loop and a 1.6 km long outdoor road loop, without any parameter or system adjustment between tests. The work serves as a promising pilot study into ground-based vision-only SLAM, with minimal geometric interpretation of the environment.
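
One landmark-free way to extract rotational velocity, in the spirit of this abstract, is to collapse each frame to a 1-D intensity profile and find the horizontal shift that best aligns consecutive profiles; under roughly pure rotation, the pixel shift is proportional to yaw rate. This is a sketch of the general idea, not necessarily the paper's exact method:

```python
import numpy as np

def profile(frame):
    """Collapse a greyscale frame to a 1-D column-intensity profile."""
    return frame.mean(axis=0)

def best_shift(prev, curr, max_shift=20):
    """Horizontal offset (in pixels) minimising the mean absolute profile
    difference over the overlapping region."""
    n = len(prev)
    best, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        a = prev[max(0, s):n + min(0, s)]
        b = curr[max(0, -s):n + min(0, -s)]
        err = np.abs(a - b).mean()
        if err < best_err:
            best, best_err = s, err
    return best

rng = np.random.default_rng(3)
frame1 = rng.random((48, 100))
frame2 = np.roll(frame1, -3, axis=1)   # camera yawed: columns shifted left by 3
shift = best_shift(profile(frame1), profile(frame2))
```

Multiplying the recovered pixel shift by the camera's degrees-per-pixel gives an approximate rotational velocity; a similar profile comparison along other axes can yield a rough translational cue.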

Relevance: 30.00%

Abstract:

Performing reliable localisation and navigation within highly unstructured underwater coral reef environments is a difficult task at the best of times. Typical research and commercial underwater vehicles use expensive acoustic positioning and sonar systems which require significant external infrastructure to operate effectively. This paper is focused on the development of a robust vision-based motion estimation technique using low-cost sensors for performing real-time autonomous and untethered environmental monitoring tasks in the Great Barrier Reef without the use of acoustic positioning. The technique is experimentally shown to provide accurate odometry and terrain profile information suitable for input into the vehicle controller to perform a range of environmental monitoring tasks.


Relevance: 30.00%

Abstract:

Probabilistic robotics, most often applied to the problem of simultaneous localisation and mapping (SLAM), requires measures of uncertainty to accompany observations of the environment. This paper describes how uncertainty can be characterised for a vision system that locates coloured landmarks in a typical laboratory environment. The paper describes a model of the uncertainty in segmentation, the internal camera model and the mounting of the camera on the robot. It explains the implementation of the system on a laboratory robot, and provides experimental results that show the coherence of the uncertainty model.