Abstract:
Localisation of an AUV is challenging, and a range of inspection applications require relatively accurate positioning information with respect to submerged structures. We have developed a vision-based localisation method that uses a 3D model of the structure to be inspected. The system comprises a monocular vision system, a spotlight and a low-cost IMU. Previous methods that attempt to solve the problem in a similar way try to factor out the effects of lighting. Effects such as shading on curved surfaces or specular reflections are heavily dependent on the light direction and are difficult to deal with using existing techniques. The novelty of our method is that we explicitly model the light source. Results are shown for an implementation on a small AUV in clear water at night.
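Explicitly modelling the light source can be illustrated, in its simplest form, with Lambertian shading, where predicted brightness depends on the angle between the surface normal and the spotlight direction. The sketch below is illustrative only (made-up albedo and directions), not the authors' implementation:

```python
import math

def predicted_intensity(normal, light_dir, albedo=0.8):
    # Lambert's law: brightness is proportional to the cosine of the
    # angle between the surface normal and the light direction.
    dot = sum(n * l for n, l in zip(normal, light_dir))
    return albedo * max(0.0, dot)

# A surface facing the light is bright; an oblique one is dimmer.
facing = predicted_intensity([0.0, 0.0, 1.0], [0.0, 0.0, 1.0])
oblique = predicted_intensity([0.0, math.sin(1.0), math.cos(1.0)],
                              [0.0, 0.0, 1.0])
print(facing, oblique)  # 0.8 and a smaller value
```

Comparing such model-predicted intensities against the camera image is one way shading on curved surfaces becomes information rather than noise.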
Abstract:
Probabilistic robotics, most often applied to the problem of simultaneous localisation and mapping (SLAM), requires measures of uncertainty to accompany observations of the environment. This paper describes how uncertainty can be characterised for a vision system that locates coloured landmarks in a typical laboratory environment. The paper describes a model of the uncertainty in segmentation, the internal camera model and the mounting of the camera on the robot. It explains the implementation of the system on a laboratory robot, and provides experimental results that show the coherence of the uncertainty model.
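One common way to characterise such uncertainty (not necessarily this paper's exact formulation) is first-order propagation of the segmentation's pixel variance through a pinhole camera model to a bearing variance. The focal length, principal point and pixel variance below are illustrative assumptions:

```python
import math

f, cx = 500.0, 320.0      # focal length and principal point (pixels), assumed
u, var_u = 400.0, 4.0     # landmark centroid column and its variance, assumed

# Bearing to the landmark from the image column.
bearing = math.atan((u - cx) / f)

# First-order (Jacobian) propagation: d(bearing)/du = f / (f^2 + (u-cx)^2)
J = f / (f * f + (u - cx) ** 2)
var_bearing = J * J * var_u
print(bearing, var_bearing)
```

The same Jacobian pattern extends to the camera-mounting transform, so each source of uncertainty contributes a term to the final observation covariance.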
Abstract:
The cascading appearance-based (CAB) feature extraction technique has established itself as the state of the art in extracting dynamic visual speech features for speech recognition. In this paper, we focus on investigating the effectiveness of this technique for the related speaker verification application. By investigating the speaker verification ability of each stage of the cascade, we demonstrate that the same steps taken to reduce static speaker and environmental information for the visual speech recognition application also provide similar improvements for visual speaker recognition. A further study compares synchronous HMM (SHMM) based fusion of CAB visual features and traditional perceptual linear predictive (PLP) acoustic features, showing that the higher complexity inherent in the SHMM approach does not appear to provide any improvement in the final audio-visual speaker verification system over simpler utterance-level score fusion.
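Utterance-level score fusion, the simpler alternative mentioned above, can be sketched as a weighted combination of per-utterance audio and visual verification scores followed by a threshold. The weight and threshold here are illustrative assumptions, not values from the paper:

```python
def fuse(audio_score, visual_score, w=0.7):
    # Weighted sum of the two modality scores; w is an assumed weight
    # favouring the (usually stronger) acoustic modality.
    return w * audio_score + (1.0 - w) * visual_score

# Accept the claimed identity if the fused score clears a threshold.
accept = fuse(2.1, 1.4) > 1.5
print(accept)
```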
Abstract:
Thomas Young (1773-1829) carried out major pioneering work in many different subjects. In 1800 he gave the Bakerian Lecture of the Royal Society on the topic of the “mechanism of the eye”: this was published in the following year (Young, 1801). Young used his own design of optometer to measure refraction and accommodation, and discovered his own astigmatism. He considered the different possible origins of accommodation and confirmed that it was due to change in shape of the lens rather than to change in shape of the cornea or an increase in axial length. However, the paper also dealt with many other aspects of visual and ophthalmic optics, such as biometric parameters, peripheral refraction, longitudinal chromatic aberration, depth-of-focus and instrument myopia. These aspects of the paper have previously received little attention. We now give detailed consideration to these and other less-familiar features of Young’s work and conclude that his studies remain relevant to many of the topics which currently engage visual scientists.
Abstract:
Purpose: Flickering stimuli increase the metabolic demand of the retina, making them a sensitive perimetric stimulus for the early onset of retinal disease. We determine whether flickering stimuli are a sensitive indicator of vision deficits resulting from acute, mild systemic hypoxia when compared to standard static perimetry. Methods: Static and flicker visual perimetry were performed in 14 healthy young participants while breathing 12% oxygen (hypoxia) under photopic illumination. The hypoxia visual field data were compared with the field data measured during normoxia. Absolute sensitivities (in dB) were analysed in seven concentric rings at 1°, 3°, 6°, 10°, 15°, 22° and 30° eccentricities, and the mean defect (MD) and pattern defect (PD) were calculated. Preliminary data are reported for mesopic light levels. Results: Under photopic illumination, flicker and static visual field sensitivities at all eccentricities were not significantly different between the hypoxia and normoxia conditions. The mean defect and pattern defect were not significantly different for either test between the two oxygenation conditions. Conclusion: Although flicker stimulation increases cellular metabolism, flicker photopic visual field impairment is not detected during mild hypoxia. These findings contrast with electrophysiological flicker tests in young participants that show impairment at photopic illumination during the same levels of mild hypoxia. Potential mechanisms contributing to the difference between the visual field and electrophysiological flicker tests, including variability in perimetric data, neuronal adaptation and vascular autoregulation, are considered. The data have implications for the use of visual perimetry in the detection of ischaemic/hypoxic retinal disorders under photopic and mesopic light levels.
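The mean defect is conventionally the average pointwise deviation of measured sensitivities from normal reference sensitivities. A minimal sketch with illustrative values (not study data) is:

```python
# Normal reference and measured sensitivities (dB) at a few test
# locations; the numbers are illustrative only.
normal =   [32, 31, 30, 29, 28]
measured = [31, 31, 28, 29, 27]

# Mean defect: average loss relative to the reference.
md = sum(n - m for n, m in zip(normal, measured)) / len(normal)
print(md)  # 0.8
```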
Abstract:
Of the numerous factors that play a role in fatal pedestrian collisions, the time of day, day of the week, and time of year can be significant determinants. More than 60% of all pedestrian collisions in 2007 occurred at night, despite the presumed decrease in both pedestrian and automobile exposure during the night. Although this trend is partially explained by factors such as fatigue and alcohol consumption, prior analysis of the Fatality Analysis Reporting System database suggests that pedestrian fatalities increase as light decreases, after controlling for other factors. This study applies graphical cross-tabulation, a novel visual assessment approach, to explore the relationships among collision variables. The results reveal that twilight and the first hour of darkness typically see the greatest frequency of fatal pedestrian collisions. These hours are not necessarily the riskiest on a per-mile-travelled basis, however, because pedestrian volumes are often still high. Additional analysis is needed to quantify the extent to which pedestrian exposure (walking/crossing activity) in these time periods plays a role in pedestrian crash involvement. Weekly patterns of fatal pedestrian collisions vary by time of year due to the seasonal changes in sunset time. In December, collisions are concentrated around twilight and the first hour of darkness throughout the week, while in June, collisions are most heavily concentrated around twilight and the first hours of darkness on Friday and Saturday. Friday and Saturday nights in June may be the most dangerous times for pedestrians. Knowing when pedestrian risk is highest is critically important for formulating effective mitigation strategies and for efficiently investing safety funds. This applied visual approach is a helpful tool for researchers intending to communicate with policy-makers and to identify relationships that can then be tested with more sophisticated statistical tools.
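In spirit, cross-tabulation of this kind rests on counting collisions in (day-of-week, hour-of-day) cells and inspecting where the counts peak. A minimal sketch on synthetic records (not FARS data) is:

```python
from collections import Counter

# Synthetic collision records: (day of week, hour of day). Illustrative
# values only, chosen to place a peak in the early evening.
records = [
    ("Fri", 18), ("Fri", 19), ("Sat", 19),
    ("Sat", 22), ("Mon", 8), ("Fri", 19),
]

# Cross-tabulate: count collisions per (day, hour) cell.
table = Counter(records)

# The most frequent cell is the peak of the cross-tabulation.
peak, count = table.most_common(1)[0]
print(peak, count)  # ('Fri', 19) 2
```

A graphical cross-tabulation renders such a table as a plot so that the temporal concentration of collisions is visible at a glance.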
Abstract:
This paper proposes a generic decoupled image-based control scheme for cameras obeying the unified projection model. The scheme is based on the spherical projection model. Invariants to rotational motion are computed from this projection and used to control the translational degrees of freedom. Importantly, we form invariants which decrease the sensitivity of the interaction matrix to object depth variation. Finally, the proposed results are validated with experiments using a classical perspective camera as well as a fisheye camera mounted on a 6-DOF robotic platform.
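The rotation invariance that motivates the spherical projection can be illustrated with a small sketch: angles between the spherical projections of scene points are unchanged by a pure camera rotation. Point coordinates and the rotation angle below are illustrative:

```python
import math

def project_to_sphere(p):
    # Central projection of a 3D point onto the unit sphere.
    n = math.sqrt(sum(c * c for c in p))
    return [c / n for c in p]

def angle_between(a, b):
    # Angle between two unit vectors, clamped for numerical safety.
    dot = sum(x * y for x, y in zip(a, b))
    return math.acos(max(-1.0, min(1.0, dot)))

def rotate_z(p, theta):
    # Rotate a point about the optical axis (a pure camera rotation).
    x, y, z = p
    return [x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta), z]

P1 = [0.3, -0.1, 2.0]   # illustrative scene points in the camera frame
P2 = [-0.4, 0.2, 1.5]

before = angle_between(project_to_sphere(P1), project_to_sphere(P2))
after = angle_between(project_to_sphere(rotate_z(P1, 0.7)),
                      project_to_sphere(rotate_z(P2, 0.7)))
print(abs(before - after) < 1e-12)  # True
```

Because such quantities do not respond to rotation, they can be dedicated to controlling the translational degrees of freedom, which is the decoupling the paper exploits.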
Abstract:
We present a novel method for integrating GPS position estimates with position and attitude estimates derived from visual odometry using a scheme similar to a classic loosely-coupled GPS/INS integration. Under such an arrangement, we derive the error dynamics of the system and develop a Kalman Filter for estimating the errors in position and attitude. Using a control-based approach to observability, we show that the errors in both position and attitude (including yaw) are fully observable when there is a component of acceleration perpendicular to the velocity vector in the navigation frame. Numerical simulations are performed to confirm the observability analysis.
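A full loosely-coupled filter of this kind estimates position and attitude errors jointly; the one-dimensional sketch below shows only the core idea of filtering the error state between drifting odometry and absolute GPS fixes. All noise levels and the motion profile are made-up assumptions, not values from the paper:

```python
import random

random.seed(0)
true_pos, odo_pos = 0.0, 0.0
err_est, err_var = 0.0, 1.0      # error-state estimate and its variance
Q, R = 0.01, 0.25                # process noise / GPS measurement noise

for _ in range(100):
    true_pos += 1.0                              # true motion, 1 m per step
    odo_pos += 1.0 + random.gauss(0.05, 0.1)     # biased, noisy odometry

    err_var += Q                                 # predict: error random walk

    gps = true_pos + random.gauss(0.0, 0.5)      # absolute position fix
    z = odo_pos - gps                            # observed odometry error
    K = err_var / (err_var + R)                  # Kalman gain
    err_est += K * (z - err_est)                 # update the error estimate
    err_var *= 1.0 - K

# Subtracting the estimated error corrects the drifting odometry.
corrected = odo_pos - err_est
print(abs(corrected - true_pos) < abs(odo_pos - true_pos))
```

The paper's observability result concerns the full 3D problem, where yaw error only becomes observable under acceleration perpendicular to the velocity; this scalar sketch has no attitude state and illustrates only the error-state fusion.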