215 results for Flys Visual-system


Relevance:

30.00%

Publisher:

Abstract:

RatSLAM is a navigation system based on the neural processes underlying navigation in the rodent brain, capable of operating with low resolution monocular image data. Seminal experiments using RatSLAM include mapping an entire suburb with a web camera and a long term robot delivery trial. This paper describes OpenRatSLAM, an open-source version of RatSLAM with bindings to the Robot Operating System framework to leverage advantages such as robot and sensor abstraction, networking, data playback, and visualization. OpenRatSLAM comprises connected ROS nodes to represent RatSLAM’s pose cells, experience map, and local view cells, as well as a fourth node that provides visual odometry estimates. The nodes are described with reference to the RatSLAM model and salient details of the ROS implementation such as topics, messages, parameters, class diagrams, sequence diagrams, and parameter tuning strategies. The performance of the system is demonstrated on three publicly available open-source datasets.
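For readers wanting a feel for how the ROS bindings are consumed, here is a minimal rospy sketch that listens to visual odometry estimates of the kind the fourth node feeds into the pose cell network. The topic name and message type are illustrative assumptions, not the documented OpenRatSLAM interface, which depends on the launch configuration.

```python
#!/usr/bin/env python
# Minimal rospy sketch: listen to visual odometry estimates of the kind
# OpenRatSLAM's odometry node provides to the pose cell network.
# Topic name ("/odom") and message type are assumptions for illustration.
import rospy
from nav_msgs.msg import Odometry

def on_odom(msg):
    p = msg.pose.pose.position
    rospy.loginfo("visual odometry estimate: x=%.2f y=%.2f", p.x, p.y)

if __name__ == "__main__":
    rospy.init_node("ratslam_odom_listener")
    rospy.Subscriber("/odom", Odometry, on_odom)
    rospy.spin()
```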

Relevance:

30.00%

Publisher:

Abstract:

Many state-of-the-art vision-based Simultaneous Localisation And Mapping (SLAM) and place recognition systems compute the salience of visual features in their environment. As computing salience can be problematic in radically changing environments, new low-resolution, feature-less systems such as SeqSLAM have been introduced, all of which consider the whole image. In this paper, we implement a supervised classifier system (UCS) to learn the salience of image regions for place recognition by feature-less systems. On the challenging real-world Eynsham dataset, SeqSLAM benefits only slightly from the results of training, as it already appears to filter less useful regions of a panoramic image. However, when recognition is limited to specific image regions, performance improves by more than an order of magnitude by utilising the learnt image region saliency. We then investigate whether the region salience generated from the Eynsham dataset generalizes to another car-based dataset using a perspective camera. The results suggest the general applicability of an image region salience mask for optimizing route-based navigation applications.
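As a rough illustration of how a learnt region salience mask could be applied by a feature-less, whole-image matcher, the sketch below weights a SeqSLAM-style sum-of-absolute-differences comparison by a per-pixel mask. The image sizes, mask values and weighting scheme are illustrative assumptions, not the UCS classifier or the matching pipeline used in the paper.

```python
import numpy as np

def weighted_sad(img_a, img_b, salience_mask):
    """Sum-of-absolute-differences between two low-resolution images,
    weighted by a learnt region salience mask (values in [0, 1]).
    All arrays share the same shape."""
    diff = np.abs(img_a.astype(float) - img_b.astype(float))
    return np.sum(diff * salience_mask) / (np.sum(salience_mask) + 1e-9)

# Illustrative usage with random data standing in for real frames.
rng = np.random.default_rng(0)
query, ref = rng.random((16, 64)), rng.random((16, 64))
mask = np.ones((16, 64))
mask[:, :16] = 0.1   # e.g. down-weight a region learnt to be uninformative
print(weighted_sad(query, ref, mask))
```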

Relevance:

30.00%

Publisher:

Abstract:

In the modern connected world, pervasive computing has become reality. Thanks to the ubiquity of mobile computing devices and emerging cloud-based services, users stay permanently connected to their data. This introduces a slew of new security challenges, including the problem of multi-device key management and single-sign-on architectures. One solution to this problem is the use of secure side-channels for authentication, including the visual channel as a proof of vicinity. However, existing approaches often assume confidentiality of the visual channel, or provide only insufficient means of mitigating a man-in-the-middle attack. In this work, we introduce QR-Auth, a two-step, 2D-barcode-based authentication scheme for mobile devices which aims specifically at key management and key sharing across devices in a pervasive environment. It requires minimal user interaction and therefore provides better usability than most existing schemes, without compromising its security. We show how our approach fits in existing authorization delegation and one-time-password generation schemes, and that it is resilient to man-in-the-middle attacks.
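The abstract mentions fitting into existing one-time-password generation schemes; as background, the sketch below shows standard HOTP (RFC 4226) token generation, the kind of scheme such a design could delegate to. It is not the QR-Auth protocol itself, and the shared secret is assumed to have been provisioned out of band (e.g. via a scanned 2D barcode).

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Standard HOTP (RFC 4226): HMAC-SHA1 over the counter,
    dynamic truncation, then reduction to the requested number of digits."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 4226 test vector: secret "12345678901234567890", counter 0 -> "755224".
print(hotp(b"12345678901234567890", counter=0))
```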

Relevance:

30.00%

Publisher:

Abstract:

Driver distraction has recently been defined by Regan as "the diversion of attention away from activities critical for safe driving toward a competing activity, which may result in insufficient or no attention to activities critical for safe driving" (Regan, Hallett & Gordon, 2011, p. 1780). One source of distraction is in-vehicle devices, even though they might provide other benefits, e.g. navigation systems. Eco-driving systems are currently growing rapidly in popularity. These systems send messages to drivers so that driving performance can be improved in terms of fuel efficiency. However, there remain unanswered questions about whether eco-driving systems endanger drivers by distracting them. In this research, the CARRS-Q advanced driving simulator was used to provide safety for participants while simulating real-world driving. The distraction effects of tasks involving three different in-vehicle systems were investigated: changing a CD, entering a five-digit number as part of a navigation task, and responding to an eco-driving task. Driving in these scenarios was compared with driving in the absence of these distractions, and while drivers engaged in critical manoeuvres. To account for practice effects, the same scenarios were repeated on a second day. The three in-vehicle systems were not exact facsimiles of any particular existing system, but were designed to have characteristics similar to those of available systems. In general, the results show that drivers' mental workloads are significantly higher in the navigation and CD changing scenarios than in the two other scenarios, which implies that these two tasks impose more visual/manual and cognitive demands. However, the increase in mental workload in the eco-driving scenario remained marginally significant (p ≈ .05) across manoeuvres. Similarly, event detection tasks show that drivers miss significantly more events in the navigation and CD changing scenarios than in both the baseline and eco-driving scenarios across manoeuvres. Analysis of the practice effect shows that the baseline and navigation scenarios imposed significantly less demand on the second day. However, the number of missed events across manoeuvres confirmed that drivers detected significantly more events on the second day in all scenarios. Distraction was also examined separately for five groups of manoeuvres (straight, lane changing, overtaking, braking for intersections and braking for roundabouts), in two locations for each condition. Repeated measures mixed ANOVA results show that reading an eco-driving message can potentially impair driving performance. When comparing the three in-vehicle distractions tested, attending to an eco-driving message is similar in effect to the CD changing task. The navigation task degraded driver performance much more than these other sources of distraction. In lane changing manoeuvres, drivers missed more responses when reading eco-driving messages at the first location. However, drivers' event detection deteriorated less at the second lane changing location. In baseline manoeuvres (driving straight), participants' mean minimum speed degraded more in the CD changing scenario. Drivers' lateral position shifted more in the CD changing and navigation tasks than in the eco-driving and baseline scenarios, indicating that these two tasks were more visually distracting.
Participants were better at event detection in baseline manoeuvres than in other manoeuvres. When approaching an intersection, the navigation task caused participants to miss more events, whereas eco-driving messages seemed to leave drivers less distracted. The eco-driving message scenario was also significantly less distracting than the navigation system scenario (fewer missed responses) when participants commenced braking for roundabouts. To sum up, although the two other in-vehicle tasks are more distracting than the eco-driving task, the results indicate that even reading a simple message while driving could lead to missing an important event, especially when executing critical manoeuvres. This suggests that in-vehicle eco-driving systems have the potential to contribute to increased crash risk through distraction. However, there is some evidence of a practice effect, which suggests that future research should focus on performance with habitual rather than novel tasks. It is recommended that eco-driving messages be delivered to drivers off-line when possible.

Relevance:

30.00%

Publisher:

Abstract:

Vision-based SLAM is mostly a solved problem, provided that clear, sharp images can be obtained. However, in outdoor environments a number of factors such as rough terrain, high speeds and hardware limitations can result in these conditions not being met. High-speed transit on rough terrain can lead to image blur and under- or over-exposure, problems that cannot easily be dealt with using low-cost hardware. Furthermore, there has recently been growing interest in lifelong autonomy for robots, which in outdoor environments brings the challenge of dealing with a moving sun and a lack of constant artificial lighting. In this paper, we present a lightweight approach to visual localization and visual odometry that addresses the challenges posed by perceptual change and low-cost cameras. The approach combines low-resolution imagery with the RatSLAM algorithm. We test the system using a cheap consumer camera mounted on a small vehicle in a mixed urban and vegetated environment, at times ranging from dawn to dusk and in conditions ranging from sunny weather to rain. We first show that the system is able to provide reliable mapping and recall over the course of the day and incrementally incorporate new visual scenes from different times into an existing map. We then restrict the system to learning visual scenes at only one time of day, and show that the system is still able to localize and map at other times of day. The results demonstrate the viability of the approach in situations where image quality is poor and environmental or hardware factors preclude the use of visual features.
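A minimal sketch of the kind of low-resolution, normalised whole-image comparison that underlies RatSLAM-style local view matching is given below; the template size and normalisation are illustrative assumptions rather than the exact processing used in the paper.

```python
import numpy as np

def to_template(image, size=(8, 12)):
    """Downsample an image to a tiny template by block averaging, then
    zero-mean / unit-variance normalise to reduce illumination effects."""
    h, w = size
    rows = np.array_split(image.astype(float), h, axis=0)
    cells = [np.array_split(r, w, axis=1) for r in rows]
    t = np.array([[c.mean() for c in row] for row in cells])
    return (t - t.mean()) / (t.std() + 1e-9)

def template_distance(a, b):
    """Mean absolute difference between two normalised templates."""
    return np.mean(np.abs(a - b))

# Illustrative usage with random frames standing in for camera images.
rng = np.random.default_rng(1)
frame, stored = rng.random((120, 160)), rng.random((120, 160))
print(template_distance(to_template(frame), to_template(stored)))
```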

Relevance:

30.00%

Publisher:

Abstract:

This paper presents practical vision-based collision avoidance for objects approximating a single point feature. Using a spherical camera model, a visual predictive control scheme guides the aircraft around the object along a conical spiral trajectory. Visibility, state and control constraints are considered explicitly in the controller design by combining image and vehicle dynamics in the process model, and solving the nonlinear optimization problem over the resulting state space. Importantly, range is not required. Instead, the principles of conical spiral motion are used to design an objective function that simultaneously guides the aircraft along the avoidance trajectory, whilst providing an indication of the appropriate point at which to stop the spiral behaviour. Our approach is aimed at providing a potential solution to the See and Avoid problem for unmanned aircraft and is demonstrated through a series of simulations.
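To make the geometry concrete, the sketch below generates waypoints on a conical spiral about an axis toward the object. The radius gain, length and number of turns are arbitrary illustrative parameters; this is not the visual predictive controller or objective function described in the paper.

```python
import numpy as np

def conical_spiral(length=50.0, gain=0.15, turns=2.0, n=100):
    """Waypoints on a conical spiral: progress along an axis while the
    lateral offset grows linearly with distance, tracing a spiral on a cone."""
    t = np.linspace(0.0, 1.0, n)
    x = t * length                       # progress along the spiral axis
    r = gain * x                         # radius grows linearly -> a cone
    ang = 2.0 * np.pi * turns * t        # angular position around the axis
    return np.stack([x, r * np.cos(ang), r * np.sin(ang)], axis=1)

waypoints = conical_spiral()
print(waypoints[:3])   # first few (x, y, z) waypoints
```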

Relevance:

30.00%

Publisher:

Abstract:

The performance of visual speech recognition (VSR) systems is significantly influenced by the accuracy of the visual front-end. Current state-of-the-art VSR systems use off-the-shelf face detectors such as Viola-Jones (VJ), which have limited reliability under changes in illumination and head pose. For a VSR system to perform well under these conditions, an accurate visual front-end is required. This is an important problem to be solved in many practical implementations of audio-visual speech recognition systems, for example in automotive environments for an efficient human-vehicle computer interface. In this paper, we re-examine the current state-of-the-art in VSR by comparing off-the-shelf face detectors with the recently developed Fourier Lucas-Kanade (FLK) image alignment technique. A variety of image alignment and visual speech recognition experiments are performed on a clean dataset as well as on a challenging automotive audio-visual speech dataset. Our results indicate that the FLK image alignment technique can significantly outperform off-the-shelf face detectors, but requires frequent fine-tuning.
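For context, the sketch below shows the kind of off-the-shelf Viola-Jones detection (via OpenCV's bundled Haar cascade) that the paper compares against; the parameters are typical defaults and unrelated to the FLK front-end.

```python
import cv2

# Off-the-shelf Viola-Jones detector from OpenCV's bundled Haar cascades.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(bgr_image):
    """Return (x, y, w, h) face rectangles found in a BGR image."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    # Typical detector parameters; tuning is needed for in-car illumination.
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Usage: faces = detect_faces(cv2.imread("frame.png"))
```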

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we present SMART (Sequence Matching Across Route Traversals): a vision-based place recognition system that uses whole-image matching techniques and odometry information to improve the precision-recall performance, latency and general applicability of the SeqSLAM algorithm. We evaluate the system's performance on challenging day and night journeys over several kilometres at widely varying vehicle velocities from 0 to 60 km/h, compare performance to the current state-of-the-art SeqSLAM algorithm, and provide parameter studies that evaluate the effectiveness of each system component. Using 30-metre sequences, SMART achieves place recognition performance of 81% recall at 100% precision, outperforming SeqSLAM, and is robust to significant degradations in odometry.
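A minimal sketch of sequence-based scoring in the spirit of SeqSLAM/SMART is given below: whole-image difference costs are summed along a short aligned run of frames. The fixed one-to-one alignment is a simplifying assumption standing in for the odometry-based alignment the paper describes.

```python
import numpy as np

def sequence_score(diff_matrix, q_end, r_end, seq_len=10):
    """Sum of frame differences along an aligned sequence ending at
    (query frame q_end, reference frame r_end). diff_matrix[i, j] is the
    whole-image difference between query frame i and reference frame j."""
    q = np.arange(q_end - seq_len + 1, q_end + 1)
    r = np.arange(r_end - seq_len + 1, r_end + 1)
    return diff_matrix[q, r].sum()

# Illustrative usage: find the reference frame whose trailing sequence
# best matches the sequence ending at query frame 150.
rng = np.random.default_rng(2)
D = rng.random((200, 200))
best_ref = min(range(9, 200), key=lambda j: sequence_score(D, 150, j))
print(best_ref)
```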

Relevance:

30.00%

Publisher:

Abstract:

The ability to automate forced landings in an emergency such as engine failure is essential to improving the safety of Unmanned Aerial Vehicles operating in General Aviation airspace. Using active vision to detect safe landing zones below the aircraft vastly improves the reliability and safety of such systems by gathering up-to-the-minute information about the ground environment. This paper presents the Site Detection System, a methodology utilising a downward-facing camera to analyse the ground environment in both 2D and 3D, detect safe landing sites and characterise them according to size, shape, slope and nearby obstacles. A methodology is presented showing the fusion of landing site detection from 2D imagery with a coarse Digital Elevation Map and dense 3D reconstructions using INS-aided Structure-from-Motion to improve accuracy. Results are presented from an experimental flight showing the precision/recall of landing sites in comparison to a hand-classified ground truth, and improved performance with the integration of 3D analysis from visual Structure-from-Motion.
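As an illustration of one of the characterisations mentioned (slope), the sketch below fits a plane to a patch of elevation data by least squares and reports its slope; the grid spacing, patch size and data are illustrative assumptions, not the Site Detection System's actual processing.

```python
import numpy as np

def patch_slope_deg(dem_patch, cell_size=1.0):
    """Fit a plane z = a*x + b*y + c to a DEM patch (metres) and return
    the slope of that plane in degrees."""
    h, w = dem_patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel() * cell_size,
                         ys.ravel() * cell_size,
                         np.ones(h * w)])
    (a, b, _), *_ = np.linalg.lstsq(A, dem_patch.ravel(), rcond=None)
    return np.degrees(np.arctan(np.hypot(a, b)))

flat = np.zeros((10, 10))
ramp = np.tile(np.arange(10.0) * 0.2, (10, 1))    # 0.2 m rise per metre
print(patch_slope_deg(flat), patch_slope_deg(ramp))  # ~0 deg, ~11.3 deg
```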

Relevance:

30.00%

Publisher:

Abstract:

Purpose: Changes in pupil size and shape are relevant for peripheral imagery because they affect aberrations and how much light enters and/or exits the eye. The purpose of this study is to model the pattern of pupil shape across the complete horizontal visual field and to show how the pattern is influenced by refractive error. Methods: Right eyes of thirty participants were dilated with 1% cyclopentolate and images were captured using a modified COAS-HD aberrometer alignment camera along the horizontal visual field to ±90°. A two-lens relay system enabled fixation at targets mounted on a wall 3 m from the eye. Participants placed their heads on a rotatable chin rest and eye rotations were kept to less than 30°. Best-fit elliptical dimensions of pupils were determined. Ratios of minimum to maximum axis diameters were plotted against visual field angle φ. Results: Participants' data were well fitted by cosine functions, with maxima at −1° to −9° in the temporal visual field and widths 9% to 15% greater than predicted by the cosine of the field angle. Mean functions were 0.99 cos[(φ + 5.3)/1.121], R² = 0.99, for the whole group and 0.99 cos[(φ + 6.2)/1.126], R² = 0.99, for the 13 emmetropes. The function peak became less temporal, and the width became smaller, with increasing myopia. Conclusion: Off-axis pupil shape changes are well described by a cosine function which is both decentred by a few degrees and flatter by about 12% than the cosine of the viewing angle, with minor influences of refraction.
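To illustrate how the decentred, flattened cosine behaves, the sketch below evaluates the group mean function above at a few horizontal field angles (φ in degrees); the parameter names are ours, while the values are taken from the abstract.

```python
import numpy as np

def pupil_axis_ratio(phi_deg, peak=0.99, decentre=5.3, flatten=1.121):
    """Group mean fit from the abstract: ratio of minimum to maximum
    pupil axis diameters as a function of horizontal field angle (degrees)."""
    return peak * np.cos(np.radians((phi_deg + decentre) / flatten))

for phi in (-90, -45, 0, 45, 90):
    print(phi, round(float(pupil_axis_ratio(phi)), 3))
```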

Relevance:

30.00%

Publisher:

Abstract:

An on-road study was conducted to evaluate the effect of a complementary tactile navigation signal on driving behaviour and eye movements for drivers with hearing loss (HL) compared to drivers with normal hearing (NH). Thirty-two participants (16 HL and 16 NH) performed two preprogrammed navigation tasks. In one, participants received only visual information, while the other also included a vibration in the seat to guide them in the correct direction. SMI glasses were used for eye tracking, recording the point of gaze within the scene. Analysis was performed on predefined regions. A questionnaire examined participants' experience of the navigation systems. Hearing loss was associated with lower speed, higher satisfaction with the tactile signal and more glances in the rear-view mirror. Additionally, tactile support led to less time spent viewing the navigation display.

Relevance:

30.00%

Publisher:

Abstract:

In this paper we present a novel place recognition algorithm inspired by recent discoveries in human visual neuroscience. The algorithm combines intolerant but fast low-resolution whole-image matching with highly tolerant, sub-image patch matching processes. The approach does not require prior training and works on single images (although we use a cohort normalization score to exploit temporal frame information), alleviating the need for either a velocity signal or an image sequence and differentiating it from current state-of-the-art methods. We demonstrate the algorithm on the challenging Alderley sunny day – rainy night dataset, which has previously been solved only by integrating over 320-frame-long image sequences. The system is able to achieve 21.24% recall at 100% precision, matching drastically different day and night-time images of places while successfully rejecting match hypotheses between highly aliased images of different places. The results provide a new benchmark for single-image, condition-invariant place recognition.
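As a small illustration of the cohort normalization mentioned in passing, the sketch below expresses the best match distance relative to the distribution of competing candidates; the z-score form is an illustrative choice, not necessarily the exact statistic used in the paper.

```python
import numpy as np

def cohort_normalised_score(distances):
    """Normalise the best (smallest) match distance against the cohort of
    competing candidates; more negative means a more distinctive match."""
    d = np.asarray(distances, dtype=float)
    best = d.min()
    cohort = np.delete(d, d.argmin())
    return (best - cohort.mean()) / (cohort.std() + 1e-9)

print(cohort_normalised_score([0.9, 0.85, 0.2, 0.88, 0.91]))   # distinctive match
print(cohort_normalised_score([0.9, 0.85, 0.84, 0.88, 0.91]))  # ambiguous match
```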

Relevance:

30.00%

Publisher:

Abstract:

Long-term autonomy in robotics requires perception systems that are resilient to unusual but realistic conditions that will eventually occur during extended missions. For example, unmanned ground vehicles (UGVs) need to be capable of operating safely in adverse and low-visibility conditions, such as at night or in the presence of smoke. The key to a resilient UGV perception system lies in the use of multiple sensor modalities, e.g., operating at different frequencies of the electromagnetic spectrum, to compensate for the limitations of a single sensor type. In this paper, visual and infrared imaging are combined in a Visual-SLAM algorithm to achieve localization. We propose to evaluate the quality of data provided by each sensor modality prior to data combination. This evaluation is used to discard low-quality data, i.e., data most likely to induce large localization errors. In this way, perceptual failures are anticipated and mitigated. An extensive experimental evaluation is conducted on data sets collected with a UGV in a range of environments and adverse conditions, including the presence of smoke (obstructing the visual camera), fire, extreme heat (saturating the infrared camera), low-light conditions (dusk), and at night with sudden variations of artificial light. A total of 240 trajectory estimates are obtained using five different variations of data sources and data combination strategies in the localization method. In particular, the proposed approach for selective data combination is compared to methods using a single sensor type or combining both modalities without preselection. We show that the proposed framework allows for camera-based localization resilient to a large range of low-visibility conditions.
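A minimal sketch of the idea of scoring each modality's current frame before deciding what to pass to localization is shown below; the brightness and contrast heuristics are crude stand-ins for the quality evaluation actually proposed in the paper.

```python
import numpy as np

def frame_quality(gray):
    """Crude per-frame quality score: penalise frames that are nearly
    saturated or nearly black, and reward contrast."""
    g = gray.astype(float) / 255.0
    saturation_penalty = np.mean((g > 0.98) | (g < 0.02))
    return g.std() * (1.0 - saturation_penalty)

def select_frames(visual_gray, infrared_gray, threshold=0.05):
    """Keep only the modalities whose current frame passes the quality gate."""
    frames = {"visual": visual_gray, "infrared": infrared_gray}
    return {k: f for k, f in frames.items() if frame_quality(f) >= threshold}

# Illustrative usage: a washed-out visual frame (e.g. smoke) vs. usable infrared.
rng = np.random.default_rng(3)
smoke = np.full((48, 64), 250, dtype=np.uint8)
ir = (rng.random((48, 64)) * 255).astype(np.uint8)
print(list(select_frames(smoke, ir)))   # -> ['infrared']
```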

Relevance:

30.00%

Publisher:

Abstract:

This work aims to contribute to the reliability and integrity of the perceptual systems of unmanned ground vehicles (UGVs). A method is proposed to evaluate the quality of sensor data prior to its use in a perception system, using a quality metric applied to heterogeneous sensor data such as visual and infrared camera images. The concept is illustrated specifically with sensor data that is evaluated prior to its use in a standard SIFT feature extraction and matching technique. The method is then evaluated using various experimental data sets that were collected from a UGV in challenging environmental conditions, represented by the presence of airborne dust and smoke. In the first series of experiments, a motionless vehicle observes a 'reference' scene; the method is then extended to the case of a moving vehicle by compensating for its motion. This paper shows that it is possible to anticipate the degradation of a perception algorithm by evaluating the input data prior to any actual execution of the algorithm.
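As an illustration of gating a feature pipeline on input quality, the sketch below uses a Laplacian-variance sharpness score to decide whether SIFT extraction is attempted at all; the metric and threshold are illustrative assumptions, not the quality metric proposed in this work.

```python
import cv2

def sharpness(gray):
    """Variance of the Laplacian: a common proxy for blur and low contrast."""
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def maybe_extract_sift(gray, min_sharpness=50.0):
    """Skip feature extraction when the frame is predicted to be unusable."""
    if sharpness(gray) < min_sharpness:
        return None                       # flag the frame as low quality
    sift = cv2.SIFT_create()              # requires OpenCV >= 4.4
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    return keypoints, descriptors

# Usage: maybe_extract_sift(cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE))
```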

Relevance:

30.00%

Publisher:

Abstract:

This paper describes a novel obstacle detection system for autonomous robots in agricultural field environments that uses a novelty detector to inform stereo matching. Stereo vision alone erroneously detects obstacles in environments with ambiguous appearance and ground plane, such as broad-acre crop fields with harvested crop residue. The novelty detector estimates the probability density in image descriptor space and incorporates image-space positional understanding to identify potential regions for obstacle detection using dense stereo matching. The results demonstrate that the system is able to detect obstacles typical of a farm by day and night. The system was successfully used as the sole means of obstacle detection for an autonomous robot performing a long-term, two-hour coverage task, travelling 8.5 km.
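A minimal sketch of the novelty-detection idea is given below: a density model is fitted to descriptors of obstacle-free field appearance, and low-density (novel) patches are flagged as candidates for the more expensive dense stereo check. The descriptors, density model and threshold are illustrative stand-ins, not the detector used in the paper.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(4)

# Descriptors of obstacle-free field appearance (random stand-ins here).
normal_descriptors = rng.normal(0.0, 1.0, size=(500, 4))
kde = KernelDensity(bandwidth=1.0).fit(normal_descriptors)

def novelty_mask(patch_descriptors, log_density_threshold=-15.0):
    """True where a patch looks unlike the learnt field appearance, i.e. a
    candidate region to hand to dense stereo matching for confirmation."""
    return kde.score_samples(patch_descriptors) < log_density_threshold

candidates = np.vstack([rng.normal(0.0, 1.0, (5, 4)),    # familiar-looking patches
                        rng.normal(6.0, 1.0, (2, 4))])   # novel-looking patches
print(novelty_mask(candidates))
```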