214 results for Vision, Monocular.
Abstract:
This paper describes the development and experimental evaluation of a novel vision-based Autonomous Surface Vehicle designed to perform coordinated docking manoeuvres with a target, such as an Autonomous Underwater Vehicle, on the water's surface. The system architecture integrates two small processor units: the first performs vehicle control and implements a virtual-force obstacle avoidance and docking strategy, while the second performs vision-based target segmentation and tracking. Furthermore, the architecture utilises wireless sensor network technology, allowing the vehicle to be observed by, and even integrated within, an ad-hoc sensor network. The system performance is demonstrated through real-world experiments.
Abstract:
Hot metal carriers (HMCs) are large forklift-type vehicles used to move molten metal in aluminum smelters. This paper reports on field experiments that demonstrate that HMCs can operate autonomously and in particular can use vision as a primary sensor to locate the load of aluminum. We present our complete system but focus on the vision system elements and also detail experiments demonstrating reliable operation of the materials handling task. Two key experiments are described, lasting 2 and 5 h, in which the HMC traveled 15 km in total and handled the load 80 times.
Abstract:
We propose a method for learning specific object representations that can be applied (and reused) in visual detection and identification tasks. A machine learning technique called Cartesian Genetic Programming (CGP) is used to create these models from a series of images. Our research investigates how manipulation actions might allow for the development of better visual models and therefore better robot vision. This paper describes how visual object representations can be learned and improved by performing object manipulation actions, such as poke, push, and pick-up, with a humanoid robot. The improvement can be measured and allows the robot to select and perform the 'right' action, i.e. the action with the best possible improvement of the detector.
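Cartesian Genetic Programming encodes a program as a feed-forward graph of function nodes whose inputs index the program inputs or earlier nodes. The abstract does not give the encoding used by the authors; the sketch below is a minimal illustrative evaluator in which the genome format, function set, and indices are all hypothetical.

```python
# Minimal CGP evaluation sketch: a genome is a list of (function, in1, in2)
# nodes appended after the program inputs; the last node is the output.
FUNCS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
}

def evaluate_cgp(genome, inputs):
    """Evaluate a feed-forward CGP genome on the given input tuple."""
    values = list(inputs)  # node 0..k-1 are the program inputs
    for fname, i1, i2 in genome:
        # Each node may only reference inputs or previously computed nodes.
        values.append(FUNCS[fname](values[i1], values[i2]))
    return values[-1]

# Example: inputs (2, 3); node 2 = 2 + 3 = 5; node 3 = 5 * 2 = 10.
result = evaluate_cgp([("add", 0, 1), ("mul", 2, 0)], (2, 3))
```

In CGP-based image processing, the function set would instead contain image operators (filters, thresholds, morphology), but the evaluation mechanism is the same feed-forward graph traversal.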
Abstract:
Purpose To investigate the frequency of convergence and accommodation anomalies in an optometric clinical setting in Mashhad, Iran, and to determine the tests with the highest accuracy in diagnosing these anomalies. Methods Of the 261 patients who attended the optometric clinics of Mashhad University of Medical Sciences during a one-month period, 83 were included in the study based on the inclusion criteria. Near point of convergence (NPC), near and distance heterophoria, monocular and binocular accommodative facility (MAF and BAF, respectively), lag of accommodation, positive and negative fusional vergences (PFV and NFV, respectively), AC/A ratio, relative accommodation, and amplitude of accommodation (AA) were measured to diagnose the convergence and accommodation anomalies. The results were also compared between symptomatic and asymptomatic patients. The accuracy of these tests was explored using sensitivity (S), specificity (Sp), and positive and negative likelihood ratios (LR+, LR−). Results The mean age of the patients was 21.3 ± 3.5 years, and 14.5% of them had specific binocular and accommodative symptoms. Convergence and accommodative anomalies were found in 19.3% of the patients; accommodative excess (4.8%) and convergence insufficiency (3.6%) were the most common accommodative and convergence disorders, respectively. Symptomatic patients showed lower values for BAF (p = .003), MAF (p = .001), and AA (p = .001) compared with asymptomatic patients. Moreover, BAF (S = 75%, Sp = 62%) and MAF (S = 62%, Sp = 89%) were the most accurate tests for detecting accommodative and convergence disorders in terms of both sensitivity and specificity. Conclusions Convergence and accommodative anomalies are the most common binocular disorders in optometric patients. Including tests of monocular and binocular accommodative facility in routine eye examinations as accurate tests to diagnose these anomalies requires further investigation.
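The accuracy measures quoted in the abstract (S, Sp, LR+, LR−) follow directly from a standard 2×2 diagnostic table. A minimal sketch; the counts used in the example are hypothetical, not the study's data.

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Compute standard diagnostic accuracy measures from a 2x2 table.

    tp/fp/fn/tn: true positives, false positives, false negatives,
    true negatives against the reference diagnosis.
    """
    sensitivity = tp / (tp + fn)               # S: true-positive rate
    specificity = tn / (tn + fp)               # Sp: true-negative rate
    lr_pos = sensitivity / (1 - specificity)   # LR+: how much a positive result raises the odds
    lr_neg = (1 - sensitivity) / specificity   # LR-: how much a negative result lowers the odds
    return sensitivity, specificity, lr_pos, lr_neg

# Hypothetical counts chosen to reproduce S = 75%, Sp = 62% (the BAF figures above).
s, sp, lr_pos, lr_neg = diagnostic_accuracy(tp=75, fp=38, fn=25, tn=62)
```

A likelihood ratio above roughly 10 (LR+) or below roughly 0.1 (LR−) is conventionally taken as strong diagnostic evidence, which puts the reported figures in context.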
Abstract:
The mining industry is highly suitable for the application of robotics and automation technology since the work is both arduous and dangerous. Visual servoing is a means of integrating noncontact visual sensing with machine control to augment or replace operator-based control. This article describes two of our current mining automation projects in order to demonstrate some, perhaps unusual, applications of visual servoing, and also to illustrate some very real problems with robust computer vision.
Abstract:
The International Journal of Robotics Research (IJRR) has a long history of publishing the state-of-the-art in the field of robotic vision. This is the fourth special issue devoted to the topic. Previous special issues were published in 2012 (Volume 31, No. 4), 2010 (Volume 29, Nos 2–3) and 2007 (Volume 26, No. 7, jointly with the International Journal of Computer Vision). Closely related was the special issue on Visual Servoing published in IJRR in 2003 (Volume 22, Nos 10–11). These issues nicely summarize the highlights and progress of the past 12 years of research devoted to the use of visual perception for robotics.
Abstract:
Over the past several decades there has been a sharp increase in the number of studies focused on the relationship between vision and driving. The intensified attention to this topic has most likely been stimulated by the lack of an evidence basis for determining vision standards for driving licensure and a poor understanding about how vision impairment impacts driver safety and performance. Clinicians depend on the literature on vision and driving to advise visually impaired patients appropriately about driving fitness. Policy makers also depend on the scientific literature in order to develop guidelines that are evidence-based and are thus fair to persons who are visually impaired. Thus it is important for clinicians and policy makers alike to understand how various study designs and measurement methods should be interpreted so that the conclusions and recommendations they make are not overly broad, too narrowly constrained, or even misguided. We offer a methodological framework to guide interpretations of studies on vision and driving that can also serve as a heuristic for researchers in the area. Here, we discuss research designs and general measurement methods for the study of vision as they relate to driver safety, driver performance, and driver-centered (self-reported) outcomes.
Abstract:
Falls are the leading cause of injury-related morbidity and mortality among older adults. In addition to the resulting physical injury and potential disability after a fall, there are also important psychological consequences, including depression, anxiety, activity restriction, and fear of falling. Fear of falling affects 20 to 43% of community-dwelling older adults and is not limited to those who have previously experienced a fall. About half of older adults who experience fear of falling subsequently restrict their physical and everyday activities, which can lead to functional decline, depression, increased falls risk, and reduced quality of life. Although there is clear evidence that older adults with visual impairment have higher falls risk, only a limited number of studies have investigated fear of falling in older adults with visual impairment and the findings have been mixed. Recent studies suggest increased levels of fear of falling among older adults with various eye conditions, including glaucoma and age-related macular degeneration, whereas other studies have failed to find differences. Interventions, which are still in their infancy in the general population, are also largely unexplored in those with visual impairment. The major aims of this review were to provide an overview of the literature on fear of falling, its measurement, and risk factors among older populations, with specific focus on older adults with visual impairment, and to identify directions for future research in this area.
Abstract:
Purpose There have been only a limited number of studies examining the accommodative response that occurs when the two eyes are provided with disparate accommodative stimuli, and the results from these studies to date have been equivocal. In this study, we therefore aimed to examine the capacity of the visual system to aniso-accommodate by objectively measuring the interocular difference in the accommodation response between fellow dominant and non-dominant eyes under controlled monocular and binocular viewing conditions during short-term exposure to aniso-accommodative stimuli. Methods The accommodative response of each eye of sixteen young isometropic adults (mean age 22 ± 2 years) with normal binocular vision was measured using an open-field autorefractor during a range of testing conditions: monocularly (accommodative demands ranging from 1.32 to 4.55 D) and binocularly while altering the accommodation demand for each eye (aniso-accommodative stimuli ranging from 0.24 to 2.05 D). Results Under monocular viewing conditions, the dominant and non-dominant eyes displayed a highly symmetric accommodative response; mean interocular difference in spherical equivalent 0.01 ± 0.06 D (relative) and 0.22 ± 0.06 D (absolute) (p > 0.05). During binocular viewing, the dominant eye displayed a greater accommodative response (0.11 ± 0.34 D relative and 0.24 ± 0.26 D absolute) irrespective of whether the demand of the dominant or non-dominant eye was altered (p = 0.01). Astigmatic power vectors J0 and J45 did not vary between eyes or with increasing accommodation demands under monocular or binocular viewing conditions (p > 0.05). Conclusion The dominant and non-dominant eyes of young isometropic individuals display a similar consensual lag of accommodation under both monocular and binocular viewing conditions, with the dominant eye showing a small but significantly greater (by 0.12 to 0.25 D) accommodative response. Evidence of short-term aniso-accommodation in response to asymmetric accommodation demands was not observed.
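The spherical equivalent and the astigmatic power vectors J0 and J45 reported above are standard transformations of a sphero-cylindrical refraction (the Thibos power-vector form). A minimal sketch of the conversion; the sample refraction in the example is hypothetical.

```python
import math

def power_vector(sphere, cylinder, axis_deg):
    """Convert a sphero-cylindrical refraction (S, C, axis) to
    Thibos power-vector components (M, J0, J45), all in dioptres."""
    m = sphere + cylinder / 2.0  # M: spherical equivalent
    j0 = -(cylinder / 2.0) * math.cos(math.radians(2 * axis_deg))   # 0/90 deg astigmatism
    j45 = -(cylinder / 2.0) * math.sin(math.radians(2 * axis_deg))  # oblique astigmatism
    return m, j0, j45

# Hypothetical refraction: -1.00 / -0.50 x 90
m, j0, j45 = power_vector(-1.0, -0.5, 90)
```

The interocular differences in the abstract are then simply the per-component differences between the two eyes' (M, J0, J45) vectors.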
Abstract:
The mining industry presents us with a number of ideal applications for sensor-based machine control because of the unstructured environment that exists within each mine. The aim of the research presented here is to increase the productivity of existing large compliant mining machines by retrofitting them with enhanced sensing and control technology. The current research focuses on the automatic control of the swing motion cycle of a dragline and on an automated roof bolting system. We have achieved:
* closed-loop swing control of a one-tenth scale model dragline;
* single degree-of-freedom closed-loop visual control of an electro-hydraulic manipulator in the lab, developed from standard components.
Abstract:
This paper details the design and performance assessment of a unique collision avoidance decision and control strategy for autonomous vision-based See and Avoid systems. The general approach revolves around re-positioning a collision object in the image using image-based visual servoing, without estimating range or time to collision. The decision strategy thus involves determining where to move the collision object, to induce a safe avoidance manoeuvre, and when to cease the avoidance behaviour. These tasks are accomplished by exploiting human navigation models, spiral motion properties, expected image feature uncertainty and the rules of the air. The result is a simple threshold-based system that can be tuned and statistically evaluated by extending performance assessment techniques derived for alerting systems. Our results demonstrate how autonomous vision-only See and Avoid systems may be designed under realistic problem constraints, and then evaluated in a manner consistent with aviation expectations.
Abstract:
This paper presents a visual SLAM method for temporary satellite dropout navigation, here applied on fixed-wing aircraft. It is designed for flight altitudes beyond typical stereo ranges, but within the range of distance measurement sensors. The proposed visual SLAM method consists of a common localization step with monocular camera resectioning, and a mapping step which incorporates radar altimeter data for absolute scale estimation. As a result, neither the map nor the estimated flight path suffers from scale drift. The method does not require simplifications like known landmarks and it is thus suitable for unknown and nearly arbitrary terrain. The method is tested with sensor datasets from a manned Cessna 172 aircraft. With 5% absolute scale error from radar measurements causing approximately 2-6% accumulation error over the flown distance, stable positioning is achieved over several minutes of flight time. The main limitations are flight altitudes above the radar range of 750 m, where the monocular method will suffer from scale drift, and, depending on the flight speed, flights below 50 m, where image processing gets difficult with a downwards-looking camera due to the high optical flow rates and the low image overlap.
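The paper's exact scale estimator is not given in the abstract. One common way to recover the absolute scale of an up-to-scale monocular map from an altimeter is a least-squares ratio fit between the metric radar ranges and the corresponding SLAM-frame altitudes; the sketch below assumes that approach, and the function names are illustrative.

```python
def absolute_scale(radar_altitudes, slam_altitudes):
    """Estimate the metric scale of an up-to-scale monocular map by a
    least-squares fit through the origin: scale * slam ~= radar."""
    num = sum(r * s for r, s in zip(radar_altitudes, slam_altitudes))
    den = sum(s * s for s in slam_altitudes)
    return num / den

def rescale_path(path_xyz, scale):
    """Apply the recovered scale to an up-to-scale trajectory."""
    return [(scale * x, scale * y, scale * z) for x, y, z in path_xyz]

# Example: radar reads 100 m and 200 m where the SLAM frame says 1 and 2 units,
# so the metric scale is 100 m per SLAM unit.
scale = absolute_scale([100.0, 200.0], [1.0, 2.0])
```

Fitting over many paired measurements, rather than a single ratio, averages down the radar noise, which is consistent with the abstract's point that a 5% radar scale error bounds the accumulated position error.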
Abstract:
There is limited research on the driving performance and safety of bioptic drivers, and even less regarding the driving skills that are most challenging for those learning to drive with bioptic telescopes. This research consisted of case studies of five trainee bioptic drivers whose driving skills were compared with those of a group of licensed bioptic drivers (n = 23) while they drove along city, suburban, and controlled-access highways in an instrumented dual-brake vehicle. A certified driver rehabilitation specialist was positioned in the front passenger seat to monitor safety, and two backseat evaluators independently rated driving using a standardized scoring system. Other aspects of performance were assessed through vehicle instrumentation and video recordings. Results demonstrate that while sign recognition, lane keeping, steering steadiness, gap judgments and speed choices were significantly worse in trainees, some driving behaviors and skills, including pedestrian detection and traffic light recognition, were not significantly different to those of the licensed drivers. These data provide useful insights into the skill challenges encountered by a small sample of trainee bioptic drivers which, while not generalizable because of the small sample size, provide valuable insights beyond those of previous studies and can be used as a basis to guide training strategies.
Abstract:
Although robotics research has seen advances over the last decades, robots are still not in widespread use outside industrial applications. Yet a range of proposed scenarios have robots working together, helping and coexisting with humans in daily life. In all of these scenarios, a clear need to deal with a more unstructured, changing environment arises. I herein present a system that aims to overcome the limitations of highly complex robotic systems, in terms of autonomy and adaptation. The main focus of research is to investigate the use of visual feedback for improving the reaching and grasping capabilities of complex robots. To facilitate this, a combined integration of computer vision and machine learning techniques is employed. From a robot vision point of view, the combination of domain knowledge from both image processing and machine learning techniques can expand the capabilities of robots. I present a novel framework called Cartesian Genetic Programming for Image Processing (CGP-IP). CGP-IP can be trained to detect objects in the incoming camera streams and has been successfully demonstrated on many different problem domains. The approach requires only a small training set (it was tested with 5 to 10 images per experiment) and is fast, scalable, and robust. Additionally, it can generate human-readable programs that can be further customized and tuned. While CGP-IP is a supervised-learning technique, I show an integration on the iCub that allows for the autonomous learning of object detection and identification. Finally, this dissertation includes two proof-of-concepts that integrate the motion and action sides. First, reactive reaching and grasping is shown. It allows the robot to avoid obstacles detected in the visual stream while reaching for the intended target object. Furthermore, the integration enables us to use the robot in non-static environments, i.e. the reaching is adapted on-the-fly from the visual feedback received, e.g. when an obstacle is moved into the trajectory. The second integration highlights the capabilities of these frameworks by improving the visual detection through object manipulation actions.