Abstract:
Purpose: We term the visual field position from which the pupil appears most nearly circular the pupillary circular axis (PCAx). The aim was to determine and compare the horizontal and vertical coordinates of the PCAx and the optical axis, using pupil shape and refraction information from the horizontal meridian of the visual field only. Method: The PCAx was determined from the changes, with visual field angle, in the ellipticity and orientation of pupil images out to ±90° from fixation along the horizontal meridian for the right eyes of 30 people. This axis was compared with the optical axis determined from the changes in the astigmatic components of the refractions for field angles out to ±35° in the same meridian. Results: The mean estimated horizontal and vertical field coordinates of the PCAx were (−5.3 ± 1.9°, −3.2 ± 1.5°), compared with (−4.8 ± 5.1°, −1.5 ± 3.4°) for the optical axis. The vertical coordinates of the two axes were marginally significantly different (p = 0.03), but there was no significant correlation between them. Only the horizontal coordinate of the PCAx was significantly related to refraction in the group. Conclusion: On average, the PCAx is displaced from the line of sight by about the same angle as the optical axis, but there is more inter-subject variation in the position of the optical axis. When modelling the optical performance of the eye, it appears reasonable to assume that the pupil is circular when viewed along the line of sight.
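The estimation idea in this abstract can be sketched as follows. This is a hypothetical illustration, not the authors' procedure: under a simple model, the apparent minor/major axis ratio of the pupil falls off roughly as the cosine of the angle between the viewing direction and the PCAx, so the PCAx offset can be recovered by fitting that model to aspect-ratio measurements across the horizontal field. The function name, cosine model, and grid-search fit are all assumptions for illustration.

```python
import numpy as np

def fit_pcax_offset(angles_deg, aspect_ratios):
    """Grid-search the field-angle offset theta0 (degrees) that best
    explains measured pupil minor/major axis ratios under the assumed
    model: ratio ~ |cos(theta - theta0)|."""
    candidates = np.linspace(-20, 20, 4001)  # 0.01 deg grid
    best, best_err = 0.0, np.inf
    for t0 in candidates:
        model = np.abs(np.cos(np.radians(angles_deg - t0)))
        err = np.sum((model - aspect_ratios) ** 2)
        if err < best_err:
            best, best_err = t0, err
    return best

# Synthetic measurements with the axis displaced -5.3 deg (the group-mean
# horizontal coordinate reported above), sampled out to +/-90 deg from
# fixation as in the study.
true_offset = -5.3
angles = np.arange(-90, 91, 10)
ratios = np.abs(np.cos(np.radians(angles - true_offset)))
print(round(fit_pcax_offset(angles, ratios), 1))  # recovers -5.3
```

With noisy real ellipse fits one would replace the grid search with a nonlinear least-squares fit, but the recoverability of the offset is the same.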
Abstract:
This paper presents an online, unsupervised training algorithm enabling vision-based place recognition across a wide range of changing environmental conditions, such as those caused by weather, seasons, and day-night cycles. The technique applies principal component analysis to distinguish between aspects of a location's appearance that are condition-dependent and those that are condition-invariant. Removing the dimensions associated with environmental conditions produces condition-invariant images that can be used by appearance-based place recognition methods. This approach has a unique benefit: it requires training images from only one type of environmental condition, unlike existing data-driven methods that require training images with labelled frame correspondences from two or more environmental conditions. The method is applied to two benchmark variable-condition datasets. Performance is equivalent or superior to the current state of the art despite the reduced training requirements, and is demonstrated to generalise to previously unseen locations.
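The core PCA step can be sketched in a few lines. This is a minimal illustration of the idea, assuming (since the abstract does not specify) that images are flattened to vectors, that the leading principal components of a single-condition training set capture condition-dependent appearance, and that projecting them out yields the condition-invariant representation; the function name and the choice of how many components to remove are hypothetical.

```python
import numpy as np

def condition_invariant(images, n_remove=2):
    """images: (n_samples, n_pixels) array of flattened images.
    Returns the images with the first n_remove principal components
    (assumed condition-dependent) projected out."""
    mean = images.mean(axis=0)
    X = images - mean
    # SVD of the centred data: rows of Vt are principal directions.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    top = Vt[:n_remove]                 # assumed condition-dependent subspace
    X_inv = X - (X @ top.T) @ top       # project that subspace out
    return X_inv + mean

# Toy usage: 10 "images" of 64 pixels each.
rng = np.random.default_rng(0)
imgs = rng.random((10, 64))
out = condition_invariant(imgs, n_remove=2)
print(out.shape)  # (10, 64)
```

The resulting vectors can then be compared directly by any appearance-based matcher, since the removed subspace no longer contributes to inter-image distances.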
Abstract:
Recently, Convolutional Neural Networks (CNNs) have been shown to achieve state-of-the-art performance on various classification tasks. In this paper, we present for the first time a place recognition technique based on CNN models, combining the powerful features learnt by CNNs with a spatial and sequential filter. Applying the system to a 70 km benchmark place recognition dataset, we achieve a 75% increase in recall at 100% precision, significantly outperforming all previous state-of-the-art techniques. We also conduct a comprehensive comparison of the utility of features from all 21 layers for place recognition, both on the benchmark dataset and on a second dataset with more significant viewpoint changes.
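The matching stage described here can be sketched independently of the CNN itself. In this hedged illustration, per-image feature vectors (assumed to come from some CNN layer) for a reference and a query traverse are compared by cosine similarity, and a simple sequential filter scores each candidate match by the mean similarity along a short aligned sequence rather than a single frame; the function names and the diagonal-sequence scoring rule are assumptions standing in for the paper's filter.

```python
import numpy as np

def cosine_sim_matrix(ref, qry):
    """Cosine similarity between every query and reference feature vector.
    ref: (n_ref, d), qry: (n_query, d) -> (n_query, n_ref)."""
    ref = ref / np.linalg.norm(ref, axis=1, keepdims=True)
    qry = qry / np.linalg.norm(qry, axis=1, keepdims=True)
    return qry @ ref.T

def sequence_match(sim, seq_len=3):
    """For each query frame, pick the reference frame whose surrounding
    diagonal sequence of length seq_len has the highest mean similarity."""
    nq, nr = sim.shape
    matches = np.zeros(nq, dtype=int)
    for q in range(nq):
        best, best_score = 0, -np.inf
        for r in range(nr):
            score, count = 0.0, 0
            for k in range(-(seq_len // 2), seq_len // 2 + 1):
                if 0 <= q + k < nq and 0 <= r + k < nr:
                    score += sim[q + k, r + k]
                    count += 1
            score /= count
            if score > best_score:
                best, best_score = r, score
        matches[q] = best
    return matches

# Toy example: query features are noisy copies of the reference features,
# so the sequential filter should match query frame i to reference frame i.
rng = np.random.default_rng(1)
ref = rng.random((20, 128))
qry = ref + 0.05 * rng.normal(size=ref.shape)
print(sequence_match(cosine_sim_matrix(ref, qry)))
```

Averaging over a short sequence is what suppresses single-frame false positives, which is how such systems reach high recall at 100% precision.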