1000 results for "odometria visual" (visual odometry)


Relevance:

100.00%

Publisher:

Abstract:

Visual Odometry is the process of estimating camera position and orientation based solely on images and on features (projections of visual landmarks present in the scene) extracted from them. With the increasing advance of computer vision algorithms and computer processing power, the subarea known as Structure from Motion (SfM) began to supply mathematical tools for composing localization systems for robotics and Augmented Reality applications, in contrast with its initial purpose of serving inherently offline solutions aimed at 3D reconstruction and image-based modelling. Accordingly, this work proposes a pipeline for obtaining relative position that uses a previously calibrated camera as a positional sensor and is based entirely on models and algorithms from SfM. Techniques usually applied in camera localization systems, such as Kalman filters and particle filters, are not used, making additional information such as probabilistic models for the camera state transition unnecessary. Experiments assessing both the 3D reconstruction quality and the camera position estimated by the system were performed, in which image sequences captured in realistic scenarios were processed and compared to localization data gathered from a mobile robotic platform.
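
The core two-view step that such an SfM-based pipeline relies on can be sketched with OpenCV as below. This is a minimal illustration, not the thesis pipeline itself: the feature matching, the intrinsic matrix K and the RANSAC threshold are assumed, and the recovered translation is only defined up to scale.

```python
import cv2

def relative_pose(pts_prev, pts_curr, K):
    # pts_prev, pts_curr: Nx2 float arrays of matched pixel coordinates
    # between two consecutive frames; K: 3x3 intrinsic matrix.
    E, inliers = cv2.findEssentialMat(pts_prev, pts_curr, K,
                                      method=cv2.RANSAC, threshold=1.0)
    # Decompose E and keep the (R, t) that places the points in front of
    # both cameras (cheirality check performed inside recoverPose).
    _, R, t, _ = cv2.recoverPose(E, pts_prev, pts_curr, K, mask=inliers)
    return R, t   # t has unit norm: monocular scale is unobservable

# Chaining the per-frame transforms T_k = [[R, t], [0, 1]] yields the
# camera trajectory up to a global scale factor.
```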

Relevance:

60.00%

Publisher:

Abstract:

Autonomous quadcopter piloting systems are currently being developed to navigate in outdoor spaces, where the GPS signal can be used to define navigation waypoints and modes such as position and altitude hold and return-to-home, among others. However, autonomous navigation in enclosed spaces, without a global positioning system available inside a room, remains a challenging problem with no closed solution. Most solutions are based on expensive sensors, such as LIDAR, or on external positioning systems (e.g. Vicon, Optitrack). Some of these solutions offload the processing of sensor data and of the most demanding algorithms to computing systems external to the vehicle, which also removes the full autonomy intended for a vehicle with these characteristics. The objective of this thesis is therefore the preparation of a small unmanned aerial system, namely a quadcopter, integrating different modules that allow simultaneous localization and mapping in indoor, GPS-denied spaces, using an RGB-D camera together with the quadcopter's other internal and external sensors, integrated into a system that processes vision-based positioning and which is intended, in the near future, to perform motion planning for navigation. The result of this work was an integrated architecture for the analysis of localization, mapping and navigation modules, based on cheap, open hardware and state-of-the-art open-source frameworks. It was also possible to partially test some localization modules, under certain test conditions and certain algorithm parameters. The mapping capability of the framework was also tested and approved. The resulting framework is ready for navigation, requiring only some adjustments and tests.

Relevance:

60.00%

Publisher:

Abstract:

Registration of point clouds captured by depth sensors is an important task in 3D reconstruction applications based on computer vision. In many applications with strict performance requirements, the registration should be executed not only with precision, but also at the same frequency at which data is acquired by the sensor. This thesis proposes the use of the pyramidal sparse optical flow algorithm to incrementally register point clouds captured by RGB-D sensors (e.g. Microsoft Kinect) in real time. The accumulated error inherent to the process is subsequently minimized by using a marker and pose graph optimization. Experimental results gathered by processing several RGB-D datasets validate the system proposed by this thesis in visual odometry and simultaneous localization and mapping (SLAM) applications.
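
The pyramidal sparse optical flow step mentioned here can be sketched with OpenCV's Lucas-Kanade tracker. This is an illustration of the tracking stage only, under assumed window and pyramid parameters, not the thesis code.

```python
import cv2

def track_features(gray_prev, gray_curr, pts_prev):
    # Pyramidal sparse Lucas-Kanade optical flow over a 3-level pyramid;
    # pts_prev is an Nx1x2 float32 array of corner locations.
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(
        gray_prev, gray_curr, pts_prev, None,
        winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    return pts_prev[good], pts_curr[good]

# Corners are typically detected once with cv2.goodFeaturesToTrack and then
# tracked frame to frame; the surviving 2-D matches are back-projected with
# the depth image into 3-D, and a rigid transform between the two point
# clouds closes the incremental registration step.
```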

Relevance:

30.00%

Publisher:

Abstract:

The perception systems of today's autonomous robots are quite complex. The information from the various sensors, located on different parts of the robot, needs to be related to the robot's or the world's reference frame. For that, knowledge of the pose (position and rotation) between the sensors' reference frames and the robot's reference frame is a critical factor for its performance. The process of calibrating these positions and rotations is called extrinsic parameter calibration. This dissertation proposes the development of an autonomous calibration method for robots with directional cameras, such as the robots of the ISePorto team. The proposed solution consists of acquiring vision, gyroscope and odometry data during a maneuver performed by the robot around a target with a known pattern. This information is then jointly processed through an Extended Kalman Filter (EKF), which estimates the parameters needed to relate the robot's sensors to its reference frame. This solution was evaluated through several tests, and the results obtained were very similar to those of the previously used manual method, with a significant gain in speed and consistency.
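
A generic EKF predict/update skeleton of the kind described is sketched below. The state vector, motion model, measurement models and Jacobians are placeholders; the dissertation's actual formulation for fusing vision, gyroscope and odometry is not reproduced here.

```python
import numpy as np

class EKF:
    def __init__(self, x0, P0):
        self.x = x0          # state, e.g. camera-to-body translation + rotation
        self.P = P0          # state covariance

    def predict(self, f, F, Q):
        # f: motion model, F: its Jacobian at self.x, Q: process noise
        self.x = f(self.x)
        self.P = F @ self.P @ F.T + Q

    def update(self, z, h, H, R):
        # z: measurement (e.g. known target seen by the camera, gyro, odometry)
        # h: measurement model, H: its Jacobian, R: measurement noise
        y = z - h(self.x)                       # innovation
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ H) @ self.P
```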

Relevance:

30.00%

Publisher:

Abstract:

SANTANA, André M.; SANTIAGO, Gutemberg S.; MEDEIROS, Adelardo A. D. Real-Time Visual SLAM Using Pre-Existing Floor Lines as Landmarks and a Single Camera. In: CONGRESSO BRASILEIRO DE AUTOMÁTICA, 2008, Juiz de Fora, MG. Anais... Juiz de Fora: CBA, 2008.

Relevance:

30.00%

Publisher:

Abstract:

This work presents a cooperative navigation system for a humanoid robot and a wheeled robot using visual information, aiming to navigate the non-instrumented humanoid robot with information obtained from the instrumented wheeled robot. Although the humanoid has no sensors for its own navigation, it can be remotely controlled by infrared signals. Thus, the wheeled robot can control the humanoid by positioning itself behind it and, through visual information, finding it and navigating it. The location of the wheeled robot is obtained by merging information from odometry and landmark detection using the Extended Kalman Filter. The landmarks are visually detected, and their features are extracted by image processing. Parameters obtained by image processing are used directly in the Extended Kalman Filter. Thus, while the wheeled robot locates and navigates the humanoid, it also simultaneously calculates its own location and maps the environment (SLAM). Navigation is done through heuristic algorithms based on the errors between the actual and desired pose of each robot. The main contribution of this work is the implementation of a cooperative navigation system for two robots based on visual information, which can be extended to other robotic applications, such as the ability to control robots without interfering with their hardware or attaching communication devices.
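
A toy sketch of the error-based heuristic navigation idea follows: compute the distance and heading errors between the current and desired poses and map them to discrete commands. The command names and thresholds are invented for illustration and are not the paper's.

```python
import math

def heuristic_command(x, y, theta, x_goal, y_goal):
    # Pose error between current pose (x, y, theta) and the desired position.
    dx, dy = x_goal - x, y_goal - y
    distance = math.hypot(dx, dy)
    heading_error = math.atan2(dy, dx) - theta
    # Wrap the heading error to [-pi, pi].
    heading_error = math.atan2(math.sin(heading_error), math.cos(heading_error))
    if distance < 0.05:            # hypothetical 5 cm tolerance
        return "STOP"
    if heading_error > 0.2:        # hypothetical ~11 degree deadband
        return "TURN_LEFT"
    if heading_error < -0.2:
        return "TURN_RIGHT"
    return "FORWARD"
```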

Relevance:

30.00%

Publisher:

Abstract:

The development and refinement of techniques for simultaneous localization and mapping (SLAM) by an autonomous mobile robot, and for building local 3-D maps from a sequence of images, are widely studied in scientific circles. This work presents a monocular visual SLAM technique based on the extended Kalman filter, which extracts features from a sequence of images using the SURF descriptor (Speeded Up Robust Features) and determines which features can be used as landmarks by a technique based on delayed initialization from 3-D straight lines. For this, only the coordinates of the features found in the image and the intrinsic and extrinsic camera parameters are available. It is possible to determine the position of the landmarks only when depth information is available. Tests showed that, during the route, the mobile robot detects the presence of features in the images and, through the proposed technique for delayed initialization of landmarks, adds new landmarks to the state vector of the extended Kalman filter (EKF) after estimating the depth of the features. With the estimated positions of the landmarks, it was possible to estimate the updated position of the robot at each step, obtaining good results that demonstrate the effectiveness of the monocular visual SLAM system proposed in this work.
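
The delayed-initialization idea can be sketched as follows: a feature is only promoted to an EKF landmark once it has been observed from two sufficiently separated camera poses, at which point its 3-D position can be triangulated. This is an OpenCV-based illustration under assumed inputs, not the paper's exact formulation.

```python
import numpy as np
import cv2

def triangulate_landmark(K, R1, t1, R2, t2, uv1, uv2):
    # K: 3x3 intrinsics; (R1, t1), (R2, t2): the two camera poses at which
    # the feature was observed; uv1, uv2: its pixel coordinates (length-2).
    P1 = K @ np.hstack((R1, t1.reshape(3, 1)))   # 3x4 projection matrices
    P2 = K @ np.hstack((R2, t2.reshape(3, 1)))
    X_h = cv2.triangulatePoints(P1, P2,
                                np.asarray(uv1, float).reshape(2, 1),
                                np.asarray(uv2, float).reshape(2, 1))
    return (X_h[:3] / X_h[3]).ravel()            # homogeneous -> Euclidean

# Only after this depth estimate exists is the 3-D point appended to the
# EKF state vector, which is the "delayed" part of the initialization.
```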

Relevance:

20.00%

Publisher:

Abstract:

Diabetic retinopathy (DR) is a complication of diabetes that can lead to blindness if not readily discovered. Automated screening algorithms have the potential to improve the identification of patients who need further medical attention. However, the identification of lesions must be accurate to be useful for clinical application. The bag-of-visual-words (BoVW) algorithm employs a maximum-margin classifier in a flexible framework that is able to detect the most common DR-related lesions, such as microaneurysms, cotton-wool spots and hard exudates. BoVW makes it possible to bypass the need for pre- and post-processing of the retinographic images, as well as the need for specific ad hoc techniques for the identification of each type of lesion. An extensive evaluation of the BoVW model was performed using three large retinography datasets (DR1, DR2 and Messidor) with different resolutions, collected by different healthcare personnel. The results demonstrate that the BoVW classification approach can identify different lesions within an image without having to use a different algorithm for each lesion, reducing processing time and providing a more flexible diagnostic system. Our BoVW scheme is based on sparse low-level feature detection with a Speeded-Up Robust Features (SURF) local descriptor, and on mid-level features based on semi-soft coding with max pooling. The best BoVW representation for retinal image classification achieved an area under the receiver operating characteristic curve (AUC-ROC) of 97.8% (exudates) and 93.5% (red lesions) under a cross-dataset validation protocol. To assess the accuracy of detecting cases that require referral within one year, the sparse extraction technique associated with semi-soft coding and max pooling obtained an AUC of 94.2 ± 2.0%, outperforming current methods. These results indicate that, for retinal image classification tasks in clinical practice, BoVW equals and, in some instances, surpasses the results obtained using dense detection (widely believed to be the best choice in many vision problems) for the low-level descriptors.
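
A simplified BoVW pipeline of this kind is sketched below: sparse local descriptors, a k-means visual codebook, a pooled per-image representation, and a max-margin classifier. SIFT and hard assignment stand in for the paper's SURF descriptor and semi-soft coding (SURF requires a non-free OpenCV build); max pooling is kept. The variable names and parameters are illustrative assumptions.

```python
import numpy as np
import cv2
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def extract_descriptors(image_gray):
    # SIFT stands in for SURF here; descriptors are 128-D float32 vectors.
    sift = cv2.SIFT_create()
    _, desc = sift.detectAndCompute(image_gray, None)
    return desc if desc is not None else np.empty((0, 128), np.float32)

def build_codebook(descriptor_sets, k=500):
    # Visual vocabulary: k-means over all training descriptors.
    return KMeans(n_clusters=k, n_init=4).fit(np.vstack(descriptor_sets))

def encode(desc, codebook):
    # Hard assignment + max pooling: a visual word is "on" if any descriptor
    # in the image is assigned to it (the paper uses semi-soft coding).
    code = np.zeros(codebook.n_clusters)
    if len(desc):
        code[np.unique(codebook.predict(desc))] = 1.0
    return code

# Hypothetical usage with grayscale training images and binary labels:
# descs = [extract_descriptors(im) for im in train_images]
# codebook = build_codebook(descs)
# X = np.vstack([encode(d, codebook) for d in descs])
# clf = LinearSVC().fit(X, labels)   # max-margin classifier
```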

Relevance:

20.00%

Publisher:

Abstract:

The arboreal ant Odontomachus hastatus nests among the roots of epiphytic bromeliads in the sandy forest of Cardoso Island (Brazil). Crepuscular and nocturnal foragers travel up to 8 m to search for arthropod prey in the canopy, where silhouettes of leaves and branches potentially provide directional information. We investigated the relevance of visual cues (canopy, horizon patterns) during navigation in O. hastatus. Laboratory experiments using a captive ant colony and a round foraging arena revealed that an artificial canopy pattern above the ants and horizon visual marks are effective orientation cues for homing O. hastatus. On the other hand, foragers that were only given a three-dimensional landmark (a cylinder) or chemical marks were unable to home correctly. Navigation by visual cues in O. hastatus is consistent with that of other diurnal arboreal ants. Nocturnal luminosity (moon, stars) is apparently sufficient to produce contrasting silhouettes of the canopy and surrounding vegetation, thus providing orientation cues. In contrast to the plain floor of the round arena, chemical cues may be important for marking bifurcated arboreal routes. This experimental demonstration of the use of visual cues by a predominantly nocturnal arboreal ant provides important information for comparative studies on the evolution of spatial orientation behavior in ants. This article is part of a Special Issue entitled: Neotropical Behaviour.

Relevance:

20.00%

Publisher:

Abstract:

The goal of this cross-sectional observational study was to quantify the pattern-shift visual evoked potentials (VEP) and the thickness as well as the volume of retinal layers using optical coherence tomography (OCT) across a cohort of Parkinson's disease (PD) patients and age-matched controls. Forty-three PD patients and 38 controls were enrolled. All participants underwent a detailed neurological and ophthalmologic evaluation. Idiopathic PD cases were included. Cases with glaucoma or increased intra-ocular pressure were excluded. Patients were assessed by VEP and high-resolution Fourier-domain OCT, which quantified the inner and outer thicknesses of the retinal layers. VEP latencies and the thicknesses of the retinal layers were the main outcome measures. The mean ages, with standard deviation (SD), of the PD patients and controls were 63.1 (7.5) and 62.4 (7.2) years, respectively. The patients were predominantly in the initial Hoehn-Yahr (HY) disease stages (34.8% in stage 1 or 1.5, and 55.8% in stage 2). The VEP latencies and the thicknesses as well as the volumes of the retinal inner and outer layers of the groups were similar. A negative correlation between retinal thickness and age was noted in both groups. The thickness of the retinal nerve fibre layer (RNFL) was 102.7 μm in PD patients vs. 104.2 μm in controls. The thicknesses of the retinal layers, VEP, and RNFL of PD patients were similar to those of the controls. Despite the use of a representative cohort of PD patients and high-resolution OCT in this study, further studies are required to establish the validity of using OCT and VEP measurements as anatomic and functional biomarkers for the evaluation of retinal and visual pathways in PD patients.

Relevance:

20.00%

Publisher:

Abstract:

The authors conducted a systematic literature review on physical activity interventions for children and youth with visual impairment (VI). Five databases were searched to identify studies involving the population of interest and physical activity practices. After evaluating 2,495 records, the authors found 18 original full-text studies published in English that they considered eligible. They identified 8 structured exercise-training studies that yielded an overall positive effect on physical-fitness and motor-skill outcomes. Five leisure-time-physical-activity and 5 instructional-strategy interventions were also found, with promising proposals to engage and instruct children and youth with VI to lead an active lifestyle. However, the current research on physical activity interventions for children and youth with VI is still limited by an absence of high-quality research designs, low sample sizes, use of nonvalidated outcome measures, and lack of generalizability, which need to be addressed in future studies.

Relevance:

20.00%

Publisher:

Abstract:

The purpose of this study is to introduce a method to evaluate visual functions in infants in the first three months of life. An adaptation of the Guide for the Assessment of Visual Ability in Infants (Gagliardo, 1997) was used. The instrument was a ring with a string. A pilot study was implemented with 33 infants, selected according to the following criteria: neonates well enough to go home within two days of birth; 1 to 3 months of chronological age; monthly evaluation with no absence; subjects living in the Campinas/SP metropolitan area. In the first month we observed: visual fixation (93.9%); eye contact (90.9%); horizontal tracking (72.7%); inspects surroundings (97.0%). In the third month we observed: inspects own hands (42.4%) and increased movements of the arms (36.4%). This method allowed the evaluation of visual functions in infants according to chronological age. Alterations in this function will facilitate immediate referral to medical services for diagnosis.

Relevance:

20.00%

Publisher:

Abstract:

The effects of ionic strength on ions in aqueous solutions are quite relevant, especially for biochemical systems in which proteins and amino acids are involved. The teaching of this topic, and more specifically of the Debye-Hückel limiting law, is central in undergraduate chemistry courses. In this work, we present a description of an experimental procedure based on the color change of aqueous solutions of bromocresol green (BCG) driven by the addition of electrolyte. The contribution of the charge product (|z+z-|) to the Debye-Hückel limiting law is demonstrated when the effects of NaCl and Na2SO4 on the color of BCG solutions are compared.
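
For reference, the limiting law discussed here can be written in its standard form, with A approximately 0.51 kg^(1/2) mol^(-1/2) for water at 25 °C:

```latex
% Debye-Hückel limiting law for the mean ionic activity coefficient,
% with I the ionic strength on the molality scale.
\log_{10}\gamma_{\pm} = -A\,\lvert z_{+} z_{-}\rvert\,\sqrt{I},
\qquad
I = \frac{1}{2}\sum_{i} m_{i} z_{i}^{2}
```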