Abstract:
Purpose: Students often read for long periods and prolonged reading practice may be important for developing reading skills. For students with low vision, reading at a close working distance imposes high demands on their near visual functions, which might make it difficult to sustain prolonged reading. The aim of this study was to investigate the performance of a prolonged reading task by students with low vision. Method: Forty students with low vision, aged from eight to 20 years and without any intellectual, reading or learning disability, were recruited through the Paediatric Low Vision Clinic, Buranda, Queensland. Following a preliminary vision examination, reading performance measures, namely critical print size (CPS), maximum oral reading rate (MORR) and near text visual acuity, were recorded using the Bailey-Lovie text reading charts before and after a 30-minute prolonged reading task. Results: The mean age of the participants was 13.03 ± 3 years. The distance and near visual acuities ranged from -0.1 to 1.24 logMAR and from 0.0 to 1.60 logMAR, respectively. The mean working distance of the participants was 11.2 ± 5.8 cm. Most of the participants (65 per cent) in this study were able to complete the prolonged reading task. Overall, there was no significant change in CPS, MORR or near text visual acuity following the prolonged reading task (p > 0.05). MORR was significantly correlated with age and near text visual acuity (p < 0.05). Conclusions: In this study, students with low vision were able to maintain their reading performance over a 30-minute prolonged reading task. Overall, there was no significant increase or decrease in reading performance following a prolonged reading task performed at their habitual close working distances, but there were wide individual variations within the group.
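As a rough aside on why such close working distances are demanding: the near (dioptric) demand is simply the reciprocal of the working distance in metres. A minimal Python sketch using the mean working distance reported above; the 25 cm comparison distance is an assumption for illustration, not a value from the study:

```python
# Dioptric (near) demand is the reciprocal of the working distance in metres.
# The 25 cm comparison distance is an assumed "typical" near distance.
def dioptric_demand(working_distance_m: float) -> float:
    """Return the near demand in dioptres for a given working distance."""
    return 1.0 / working_distance_m

mean_wd = 0.112      # mean working distance from the abstract (11.2 cm)
typical_wd = 0.25    # commonly cited near working distance (assumption)

print(f"{dioptric_demand(mean_wd):.1f} D at 11.2 cm")   # ~8.9 D
print(f"{dioptric_demand(typical_wd):.1f} D at 25 cm")  # 4.0 D
```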
Abstract:
Competent navigation in an environment is a major requirement for an autonomous mobile robot to accomplish its mission. Nowadays, many successful systems for navigating a mobile robot use an internal map which represents the environment in a detailed geometric manner. However, building, maintaining and using such environment maps for navigation is difficult because of perceptual aliasing and measurement noise. Moreover, geometric maps require the processing of huge amounts of data which is computationally expensive. This thesis addresses the problem of vision-based topological mapping and localisation for mobile robot navigation. Topological maps are concise and graphical representations of environments that are scalable and amenable to symbolic manipulation. Thus, they are well-suited for basic robot navigation applications, and also provide a representational basis for the procedural and semantic information needed for higher-level robotic tasks. In order to make vision-based topological navigation suitable for inexpensive mobile robots for the mass market, we propose to characterise key places of the environment based on their visual appearance through colour histograms. The approach for representing places using visual appearance is based on the fact that colour histograms change slowly as the field of vision sweeps the scene when a robot moves through an environment. Hence, a place represents a region of the environment rather than a single position. We demonstrate, in experiments using an indoor data set, that a topological map in which places are characterised using visual appearance augmented with metric clues provides sufficient information to perform continuous metric localisation which is robust to the kidnapped robot problem. Many topological mapping methods build a topological map by clustering visual observations to places. However, due to perceptual aliasing, observations from different places may be mapped to the same place representative in the topological map. A main contribution of this thesis is a novel approach for dealing with the perceptual aliasing problem in topological mapping. We propose to incorporate neighbourhood relations for disambiguating places which otherwise are indistinguishable. We present a constraint-based stochastic local search method which integrates the approach for place disambiguation in order to induce a topological map. Experiments show that the proposed method is capable of mapping environments with a high degree of perceptual aliasing, and that a small map is found quickly. Moreover, the method of using neighbourhood information for place disambiguation is integrated into a framework for topological off-line simultaneous localisation and mapping which does not require an initial categorisation of visual observations. Experiments on an indoor data set demonstrate the suitability of our method to reliably localise the robot while building a topological map.
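To make the appearance-based place representation concrete, here is a minimal Python sketch of characterising an image by a normalised colour histogram and comparing two frames by histogram intersection. The bin count and the same-place threshold are illustrative assumptions, not values from the thesis:

```python
import numpy as np

def colour_histogram(image: np.ndarray, bins: int = 8) -> np.ndarray:
    """Normalised joint RGB histogram of an HxWx3 uint8 image."""
    hist, _ = np.histogramdd(
        image.reshape(-1, 3), bins=(bins, bins, bins), range=[(0, 256)] * 3
    )
    return hist.flatten() / hist.sum()

def histogram_intersection(h1: np.ndarray, h2: np.ndarray) -> float:
    """Returns ~1.0 for identical histograms, ~0.0 for disjoint ones."""
    return float(np.minimum(h1, h2).sum())

# Consecutive frames from the same place should score near 1.0, since the
# histogram changes slowly as the camera sweeps the scene; a drop below a
# tuned threshold suggests a transition to a new place.
SAME_PLACE_THRESHOLD = 0.8  # illustrative value, not from the thesis
```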
Abstract:
We investigated the relative importance of vision and proprioception in estimating target and hand locations in a dynamic environment. Subjects performed a position estimation task in which a target moved horizontally on a screen at a constant velocity and then disappeared. They were asked to estimate the position of the invisible target under two conditions: passively observing and manually tracking. The tracking trials included three visual conditions with a cursor representing the hand position: always visible, disappearing simultaneously with target disappearance, and always invisible. The target’s invisible displacement was systematically underestimated during passive observation. In active conditions, tracking with the visible cursor significantly decreased the extent of underestimation. Tracking of the invisible target became much more accurate under this condition and was not affected by cursor disappearance. In a second experiment, subjects were asked to judge the position of their unseen hand instead of the target during tracking movements. Invisible hand displacements were also underestimated when compared with the actual displacement. Continuous or brief presentation of the cursor reduced the extent of underestimation. These results suggest that vision–proprioception interactions are critical for representing exact target–hand spatial relationships, and that such sensorimotor representation of hand kinematics serves a cognitive function in predicting target position. We propose a hypothesis that the central nervous system can utilize information derived from proprioception and/or efference copy for sensorimotor prediction of dynamic target and hand positions, but that effective use of this information for conscious estimation requires that it be presented in a form that corresponds to that used for the estimations.
Abstract:
The research described in this paper is directed toward increasing productivity of draglines through automation. In particular, it focuses on the swing-to-dump, dump, and return-to-dig phases of the dragline operational cycle by developing a swing automation system. In typical operation the dragline boom can be in motion for up to 80% of the total cycle time. This provides considerable scope for improving cycle time through automated or partially automated boom motion control. This paper describes machine vision-based sensor technology and control algorithms under development to solve the problem of continuous real-time bucket location and control. Incorporation of this capability into existing dragline control systems will then enable true automation of dragline swing and dump operations.
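As an illustration of how a vision-derived swing angle could drive partially automated boom motion, the following is a hypothetical proportional slew-rate controller in Python. The function name, gain and rate limit are assumptions for illustration, not the paper's control algorithm:

```python
# Hypothetical proportional swing controller: the vision system supplies the
# current swing angle of the bucket; the controller commands a rate-limited
# slew toward the dump target. Gains and limits are illustrative only.
def swing_rate_command(measured_angle_rad: float,
                       target_angle_rad: float,
                       kp: float = 0.5,
                       max_rate_rad_s: float = 0.15) -> float:
    """Return a saturated slew-rate command toward the target angle."""
    error = target_angle_rad - measured_angle_rad
    rate = kp * error
    return max(-max_rate_rad_s, min(max_rate_rad_s, rate))
```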
Abstract:
This paper, which serves as an introduction to the mini-symposium on Real-Time Vision, Tracking and Control, provides a broad sketch of visual servoing, the application of real-time vision, tracking and control for robot guidance. It outlines the basic theoretical approaches to the problem, describes a typical architecture, and discusses major milestones, applications and the significant vision sub-problems that must be solved.
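For readers new to the area, the textbook image-based visual servoing law surveyed in such introductions maps the image-feature error to a commanded camera velocity through the pseudo-inverse of the interaction matrix. A minimal Python sketch; building the interaction matrix for point features is assumed done elsewhere:

```python
import numpy as np

def ibvs_velocity(L: np.ndarray, s: np.ndarray, s_star: np.ndarray,
                  gain: float = 0.5) -> np.ndarray:
    """Classical image-based visual servoing law: v = -lambda * L^+ (s - s*).

    L      : interaction (image Jacobian) matrix, shape (2N, 6) for N points
    s      : current image-plane feature vector, shape (2N,)
    s_star : desired feature vector, shape (2N,)
    Returns a 6-vector camera velocity screw (vx, vy, vz, wx, wy, wz).
    """
    error = s - s_star
    return -gain * np.linalg.pinv(L) @ error
```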
Abstract:
Machine vision represents a particularly attractive solution for sensing and detecting potential collision-course targets due to the relatively low cost, size, weight, and power requirements of the sensors involved (as opposed to radar). This paper describes the development and evaluation of a vision-based collision detection algorithm suitable for fixed-wing aerial robotics. The system was evaluated using highly realistic vision data of the moments leading up to a collision. Based on the collected data, our detection approaches were able to detect targets at distances ranging from 400m to about 900m. These distances (with some assumptions about closing speeds and aircraft trajectories) translate to an advanced warning of between 8-10 seconds ahead of impact, which approaches the 12.5 second response time recommended for human pilots. We make use of the enormous potential of graphics processing units to achieve processing rates of 30Hz (for images of size 1024-by-768). Integration into the final platform is currently under way.
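The advanced-warning figures follow from simple arithmetic: time-to-impact is detection range divided by closing speed. A small Python illustration, where the closing speeds are assumptions chosen to reproduce the 8-10 second range quoted above:

```python
# Back-of-envelope warning time: time-to-impact = detection range / closing speed.
# The closing speeds below are illustrative assumptions, not from the paper.
def warning_time_s(detection_range_m: float, closing_speed_ms: float) -> float:
    """Seconds of warning for a target detected at a given range."""
    return detection_range_m / closing_speed_ms

print(warning_time_s(400.0, 50.0))  # 8.0 s at ~100 kt combined closing speed
print(warning_time_s(900.0, 90.0))  # 10.0 s at ~175 kt combined closing speed
```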
Abstract:
This paper describes the real time global vision system for the robot soccer team the RoboRoos. It has a highly optimised pipeline that includes thresholding, segmenting, colour normalising, object recognition and perspective and lens correction. It has a fast ‘paint’ colour calibration system that can calibrate in any face of the YUV or HSI cube. It also autonomously selects both an appropriate camera gain and colour gains, using robot regions across the field, to achieve colour uniformity. Camera geometry calibration is performed automatically from selection of keypoints on the field. The system achieves a position accuracy of better than 15mm over a 4m × 5.5m field, and orientation accuracy to within 1°. It processes 614 × 480 pixels at 60Hz on a 2.0GHz Pentium 4 microprocessor.
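A common way to run the thresholding/classification front end of such a pipeline at frame rate is a precomputed colour lookup table. The sketch below is a generic Python illustration of that pattern, not the RoboRoos implementation; the quantisation and class labels are assumptions:

```python
import numpy as np

# Labels (assumed for illustration): 0=background, 1=ball, 2=team, 3=opponent.
# One label per quantised YUV cell; filled offline from "painted" calibration
# samples, then applied per pixel at frame rate.
lut = np.zeros((64, 64, 64), dtype=np.uint8)

def classify(yuv_image: np.ndarray) -> np.ndarray:
    """Map an HxWx3 uint8 YUV image to an HxW label image via the LUT."""
    q = yuv_image >> 2  # quantise each 0-255 channel to 0-63
    return lut[q[..., 0], q[..., 1], q[..., 2]]
```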
Abstract:
The Simultaneous Localisation And Mapping (SLAM) problem is one of the major challenges in mobile robotics. Probabilistic techniques using high-end range finding devices are well established in the field, but recent work has investigated vision-only approaches. We present an alternative approach to the leading existing techniques, which extracts approximate rotational and translational velocity information from a vehicle-mounted consumer camera, without tracking landmarks. When coupled with an existing SLAM system, the vision module is able to map a 45 metre long indoor loop and a 1.6 km long outdoor road loop, without any parameter or system adjustment between tests. The work serves as a promising pilot study into ground-based vision-only SLAM, with minimal geometric interpretation of the environment.
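One landmark-free way to extract approximate rotational and translational cues from a single camera, in the spirit of (though not necessarily identical to) the approach above, is to collapse each frame to a one-dimensional intensity profile and find the horizontal shift that best aligns consecutive profiles. A minimal Python sketch:

```python
import numpy as np

def scanline_profile(image: np.ndarray) -> np.ndarray:
    """Collapse a greyscale HxW image into a 1-D column-intensity profile."""
    return image.mean(axis=0)

def rotation_shift(prev: np.ndarray, curr: np.ndarray, max_shift: int = 40):
    """Best horizontal shift (pixels) aligning two profiles.

    The pixel shift is roughly proportional to the yaw rotation between
    frames; the residual difference after alignment gives a crude
    translational-speed cue.
    """
    best_shift, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        a = prev[max(0, s):len(prev) + min(0, s)]
        b = curr[max(0, -s):len(curr) + min(0, -s)]
        err = np.mean(np.abs(a - b))
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift, best_err
```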
Abstract:
Conventional cameras have limited dynamic range, and as a result vision-based robots cannot effectively view an environment made up of both sunny outdoor areas and darker indoor areas. This paper presents an approach to extend the effective dynamic range of a camera, achieved by changing the exposure level of the camera in real-time to form a sequence of images which collectively cover a wide range of radiance. Individual control algorithms for each image have been developed to maximize the viewable area across the sequence. Spatial discrepancies between images, caused by the moving robot, are reduced by a real-time image registration process. The sequence is then combined by merging color and contour information. By integrating these techniques it becomes possible to operate a vision-based robot in wide radiance range scenes.
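The following is a minimal Python sketch of fusing an exposure-bracketed greyscale sequence into a single wide-radiance image. The hat-shaped pixel weighting is a common illustrative choice and stands in for the paper's colour-and-contour merging:

```python
import numpy as np

def merge_exposures(images: list[np.ndarray], exposures: list[float]) -> np.ndarray:
    """Fuse a bracketed greyscale sequence into one radiance-like image.

    images    : HxW uint8 frames taken at different exposure levels
    exposures : matching relative exposure times
    Pixels near saturation or cut-off get low weight via a simple hat
    weighting (an illustrative stand-in for the paper's merging step).
    """
    acc = np.zeros(images[0].shape, dtype=np.float64)
    wsum = np.full(images[0].shape, 1e-6)
    for img, t in zip(images, exposures):
        f = img.astype(np.float64)
        w = 1.0 - np.abs(f - 127.5) / 127.5  # hat weight, peak at mid-grey
        acc += w * (f / t)                   # normalise by exposure time
        wsum += w
    return acc / wsum
```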
Abstract:
DIRECTOR’S OVERVIEW by Professor Mark Pearcy

This report for 2009 is the first full year report for MERF. The development of our activities in 2009 has been remarkable and is testament to the commitment of the staff to the vision of MERF as a premier training and research facility. From the beginnings in 2003, when a need was identified for the provision of specialist research and training facilities to enable close collaboration between researchers and clinicians, to the realisation of the vision in 2009, it has been an amazing journey. However, we have learnt that there is much more that can be achieved and the emphasis will be on working with the university, government and external partners to realise the full potential of MERF by further development of the Facility.

In 2009 we conducted 28 workshops in the Anatomical and Surgical Skills Laboratory providing training for surgeons in the latest techniques. This was an excellent achievement for the first full year as our reputation for delivering first class facilities and support grows. The highlight, perhaps, was a course run via our video link by a surgeon in the USA directing the participants in MERF. In addition, we have continued to run a small number of workshops in the operating theatre and this promises to be an avenue of growing interest.

Final approval was granted for the QUT Body Bequest Program late in 2009 following the granting of an Anatomical Accepting Licence. This will enable us to expand our capabilities by providing better material for the workshops. The QUT Body Bequest Program will be launched early in 2010.

The Biological Research Facility (BRF) conducted over 270 procedures in 2009. This is a wonderful achievement considering fewer than 40 were performed in 2008. The staff of the BRF worked very hard to improve the state of the old animal house and this resulted in approval for expanded use by the ethics committees of both QUT and the University of Queensland.

An external agency conducted an Occupational Health and Safety Audit of MERF in 2009. While there were a number of small issues that require attention, the auditor congratulated the staff of MERF on achieving a good result, particularly for such an early stage in the development of MERF.

The journey from commissioning of MERF in 2008 to the full implementation of its activities in 2009 has demonstrated the potential of this facility, and 2010 will be an exciting year as its activities are recognised and further expanded and building development is pursued.
Abstract:
In this column, Dr. Peter Corke of CSIRO, Australia, gives us a description of MATLAB Toolboxes he has developed. He has been passionately developing tools to enable students and teachers to better understand the theoretical concepts behind classical robotics and computer vision through easy and intuitive simulation and visualization. The results of this labor of love have been packaged as MATLAB Toolboxes: the Robotics Toolbox and the Vision Toolbox. –Daniela Rus, RAS Education Cochair
Abstract:
We present a technique for high-dynamic range stereo for outdoor mobile robot applications. Stereo pairs are captured at a number of different exposures (exposure bracketing), and combined by projecting the 3D points into a common coordinate frame, and building a 3D occupancy map. We present experimental results for static scenes with constant and dynamic lighting as well as outdoor operation with variable and high-contrast lighting conditions.
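A minimal Python sketch of the fusion step described above: 3D points recovered from the stereo pairs at each exposure are transformed into a common world frame and accumulated into an occupancy count grid (shown here in 2D for brevity). Grid size and resolution are illustrative assumptions:

```python
import numpy as np

def accumulate_occupancy(point_sets: list[np.ndarray],
                         poses: list[np.ndarray],
                         cell_size: float = 0.1,
                         grid_dim: int = 200) -> np.ndarray:
    """Fuse 3D points from several exposures into one occupancy count grid.

    point_sets : Nx3 point arrays, one per exposure (camera frame)
    poses      : matching 4x4 camera-to-world transforms
    The grid is centred on the world origin; sizes are illustrative.
    """
    grid = np.zeros((grid_dim, grid_dim), dtype=np.int32)
    half = grid_dim // 2
    for pts, T in zip(point_sets, poses):
        homo = np.hstack([pts, np.ones((len(pts), 1))])
        world = (T @ homo.T).T[:, :3]  # project into common coordinate frame
        ij = np.floor(world[:, :2] / cell_size).astype(int) + half
        ok = (ij >= 0).all(axis=1) & (ij < grid_dim).all(axis=1)
        np.add.at(grid, (ij[ok, 0], ij[ok, 1]), 1)  # count hits per cell
    return grid
```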