994 results for Vision Tests.


Relevance: 20.00%

Abstract:

Purpose: To evaluate the on-road driving performance of persons with homonymous hemianopia or quadrantanopia in comparison to age-matched controls with normal visual fields. Methods: Participants were 22 hemianopes and eight quadrantanopes (mean age 53 years) and 30 persons with normal visual fields (mean age 52 years); all were either current drivers or aiming to resume driving. All participants completed a battery of vision tests (ETDRS visual acuity, Pelli-Robson letter contrast sensitivity, Humphrey visual fields), cognitive tests (Trails A and B, Mini Mental State Examination, Digit Symbol Substitution) and an on-road driving assessment. Driving performance was assessed in a dual-brake vehicle with safety monitored by a certified driving rehabilitation specialist. Backseat evaluators masked to the clinical characteristics of participants independently rated driving performance along a 22.7 kilometre route involving urban and interstate driving. Results: Seventy-three per cent of the hemianopes, 88 per cent of the quadrantanopes and all of the drivers with normal fields received safe driving ratings. Those hemianopic and quadrantanopic drivers rated as unsafe tended to have problems with maintaining appropriate lane position, steering steadiness and gap judgment compared to controls. Unsafe driving was associated with slower visual processing speed and impairments in contrast sensitivity, visual field sensitivity and executive function. Conclusions: Our findings suggest that some drivers with hemianopia or quadrantanopia are capable of driving as safely as persons of the same age with normal visual fields. This finding has important implications for the assessment of fitness to drive in this population.

Relevance: 20.00%

Abstract:

Competent navigation in an environment is a major requirement for an autonomous mobile robot to accomplish its mission. Nowadays, many successful systems for navigating a mobile robot use an internal map which represents the environment in a detailed geometric manner. However, building, maintaining and using such environment maps for navigation is difficult because of perceptual aliasing and measurement noise. Moreover, geometric maps require the processing of huge amounts of data, which is computationally expensive. This thesis addresses the problem of vision-based topological mapping and localisation for mobile robot navigation. Topological maps are concise, graphical representations of environments that are scalable and amenable to symbolic manipulation. Thus, they are well suited to basic robot navigation applications, and also provide a representational basis for the procedural and semantic information needed for higher-level robotic tasks. In order to make vision-based topological navigation suitable for inexpensive, mass-market mobile robots, we propose to characterise key places of the environment by their visual appearance, through colour histograms. This appearance-based representation of places rests on the fact that colour histograms change slowly as the field of vision sweeps the scene while a robot moves through an environment. Hence, a place represents a region of the environment rather than a single position. We demonstrate, in experiments on an indoor data set, that a topological map in which places are characterised by visual appearance augmented with metric clues provides sufficient information to perform continuous metric localisation which is robust to the kidnapped-robot problem. Many topological mapping methods build a topological map by clustering visual observations to places.
However, due to perceptual aliasing, observations from different places may be mapped to the same place representative in the topological map. A main contribution of this thesis is a novel approach for dealing with the perceptual aliasing problem in topological mapping: we propose to incorporate neighbourhood relations to disambiguate places which are otherwise indistinguishable. We present a constraint-based stochastic local search method which integrates this place disambiguation approach in order to induce a topological map. Experiments show that the proposed method is capable of mapping environments with a high degree of perceptual aliasing, and that a small map is found quickly. Moreover, the method of using neighbourhood information for place disambiguation is integrated into a framework for topological off-line simultaneous localisation and mapping which does not require an initial categorisation of visual observations. Experiments on an indoor data set demonstrate the suitability of our method for reliably localising the robot while building a topological map.
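As a minimal illustration of the appearance-based idea above, the following Python/NumPy sketch characterises places by normalised colour histograms and compares them with an L1 distance. The function names, bin count, and synthetic "places" are illustrative assumptions, not material from the thesis.

```python
import numpy as np

def colour_histogram(image, bins=8):
    """Normalised joint RGB histogram of an HxWx3 uint8 image."""
    hist, _ = np.histogramdd(
        image.reshape(-1, 3).astype(float),
        bins=(bins, bins, bins),
        range=((0, 256), (0, 256), (0, 256)))
    return hist.ravel() / hist.sum()

def histogram_distance(h1, h2):
    """L1 distance between normalised histograms: 0 = identical, 2 = disjoint."""
    return float(np.abs(h1 - h2).sum())

rng = np.random.default_rng(0)
place_a = rng.integers(0, 128, (60, 80, 3), dtype=np.uint8)    # darker scene
place_a_swept = np.roll(place_a, 5, axis=1)                    # small camera sweep
place_b = rng.integers(128, 256, (60, 80, 3), dtype=np.uint8)  # brighter scene

d_same = histogram_distance(colour_histogram(place_a),
                            colour_histogram(place_a_swept))
d_diff = histogram_distance(colour_histogram(place_a),
                            colour_histogram(place_b))
# d_same stays near 0 under the sweep, while d_diff is large,
# so histogram distance can score place similarity.
```

Because the histogram discards pixel positions, a modest viewpoint change barely affects it, which is exactly why a "place" here denotes a region rather than a single pose.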

Relevance: 20.00%

Abstract:

Typical high strength steels (HSS) offer exceptionally high strengths with improved weldability, making them attractive in modern steel construction. However, owing to a lack of understanding of their behaviour, most current steel design standards are limited to conventional low strength steels (LSS, i.e. fy ≤ 450 MPa). This paper presents the details of full-scale experimental tests on short beams fabricated from BISPLATE80 HSS material (nominal fy = 690 MPa). The slenderness ratios of the plate elements in the test specimens were chosen in a range near the current yield limits (AS4100-1998, etc.). The experimental studies presented in this paper provide a better understanding of the structural behaviour of HSS members subject to local instabilities. Comparisons are also presented against the design predictions of the current steel standards (AS4100-1998). The study has led to a series of proposals for the proper assessment of plate slenderness limits for structural members made of representative HSS materials, and supports the inclusion of provisions for typical HSS materials in future versions of steel design specifications for buildings and bridges. This paper also presents a distribution model of longitudinal residual stresses in typical HSS I-sections.
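The plate slenderness check that the yield-limit discussion rests on can be sketched as follows. The formula λe = (b/t)·√(fy/250) follows the AS4100 convention, but the limit λey = 16.0 used below is an illustrative placeholder: actual limits depend on element type and fabrication and must be read from the standard (e.g. AS4100 Table 5.2).

```python
import math

def element_slenderness(b, t, fy):
    """AS4100-style plate element slenderness: lambda_e = (b/t) * sqrt(fy/250)."""
    return (b / t) * math.sqrt(fy / 250.0)

def satisfies_yield_limit(b, t, fy, lambda_ey):
    """Compare a plate element against a yield slenderness limit lambda_ey
    (the limit itself must be taken from the relevant standard)."""
    return element_slenderness(b, t, fy) <= lambda_ey

# Same 100 mm x 10 mm flange outstand in HSS vs conventional steel,
# checked against an illustrative limit of 16.0:
lam_hss = element_slenderness(b=100.0, t=10.0, fy=690.0)             # ~16.6
hss_ok = satisfies_yield_limit(100.0, 10.0, 690.0, lambda_ey=16.0)   # exceeds limit
lss_ok = satisfies_yield_limit(100.0, 10.0, 250.0, lambda_ey=16.0)   # within limit
```

The sketch shows the underlying motivation of the paper: a geometry that is acceptable in 250 MPa steel can exceed the same limit at fy = 690 MPa, because slenderness scales with √fy, so limits derived for LSS need re-examination for HSS.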

Relevance: 20.00%

Abstract:

We investigated the relative importance of vision and proprioception in estimating target and hand locations in a dynamic environment. Subjects performed a position estimation task in which a target moved horizontally on a screen at a constant velocity and then disappeared. They were asked to estimate the position of the invisible target under two conditions: passively observing and manually tracking. The tracking trials included three visual conditions with a cursor representing the hand position: always visible, disappearing simultaneously with target disappearance, and always invisible. The target’s invisible displacement was systematically underestimated during passive observation. In active conditions, tracking with the visible cursor significantly decreased the extent of underestimation. Tracking of the invisible target became much more accurate under this condition and was not affected by cursor disappearance. In a second experiment, subjects were asked to judge the position of their unseen hand instead of the target during tracking movements. Invisible hand displacements were also underestimated when compared with the actual displacement. Continuous or brief presentation of the cursor reduced the extent of underestimation. These results suggest that vision–proprioception interactions are critical for representing exact target–hand spatial relationships, and that such sensorimotor representation of hand kinematics serves a cognitive function in predicting target position. We propose a hypothesis that the central nervous system can utilize information derived from proprioception and/or efference copy for sensorimotor prediction of dynamic target and hand positions, but that effective use of this information for conscious estimation requires that it be presented in a form that corresponds to that used for the estimations.

Relevance: 20.00%

Abstract:

Cold-formed steel members can be assembled in various combinations to provide cost-efficient and safe light gauge floor systems for buildings. Such light gauge steel framing (LSF) systems are widely accepted in industrial and commercial building construction, an example application being floor-ceiling systems. Light gauge steel floor-ceiling systems must be designed to serve as fire compartment boundaries and provide adequate fire resistance. Fire-rated floor-ceiling assemblies formed with new materials and construction methodologies are increasingly used in buildings, yet limited research has been undertaken in the past, so a thorough understanding of their fire resistance behaviour is not available. Recently a new composite floor-ceiling system was developed to provide a higher fire rating under standard fire conditions, but its increased fire rating could not be determined using currently available design methods. A research project was therefore carried out to investigate its structural and fire resistance behaviour under standard fire conditions. In this project, full-scale experimental tests of the new LSF floor system based on a composite ceiling unit were undertaken using a gas furnace at the Queensland University of Technology. Both the conventional and the new steel floor-ceiling systems were tested under structural and fire loads. The full-scale fire tests provided a good understanding of the fire behaviour of LSF floor-ceiling systems and confirmed the superior performance of the new composite system. This paper presents the details and results of this research into the structural and fire behaviour of light gauge steel floor systems protected by the new composite panel.

Relevance: 20.00%

Abstract:

The research described in this paper is directed toward increasing the productivity of draglines through automation. In particular, it focuses on the swing-to-dump, dump, and return-to-dig phases of the dragline operational cycle by developing a swing automation system. In typical operation the dragline boom can be in motion for up to 80% of the total cycle time, which provides considerable scope for improving cycle time through automated or partially automated boom motion control. This paper describes machine-vision-based sensor technology and control algorithms under development to solve the problem of continuous, real-time bucket location and control. Incorporating this capability into existing dragline control systems will then enable true automation of dragline swing and dump operations.

Relevance: 20.00%

Abstract:

This paper, which serves as an introduction to the mini-symposium on Real-Time Vision, Tracking and Control, provides a broad sketch of visual servoing, the application of real-time vision, tracking and control for robot guidance. It outlines the basic theoretical approaches to the problem, describes a typical architecture, and discusses major milestones, applications and the significant vision sub-problems that must be solved.
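As a hedged sketch of the basic theory such a tutorial outlines, the classic image-based visual servoing law v = −λ·L⁺·(s − s*) for a single point feature can be written as follows. The interaction matrix is the standard point-feature form; the constant-depth simulation step is a simplifying assumption for illustration only.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of a point feature at
    normalised image coordinates (x, y) and depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(s, s_star, Z, lam=0.5):
    """Classic IBVS control law: v = -lam * pinv(L) @ (s - s*)."""
    L = interaction_matrix(s[0], s[1], Z)
    return -lam * np.linalg.pinv(L) @ (s - s_star)

# Drive a feature from (0.2, -0.1) to the image centre; depth fixed at 1 m.
s, s_star, Z, dt = np.array([0.2, -0.1]), np.zeros(2), 1.0, 0.1
for _ in range(100):
    v = ibvs_velocity(s, s_star, Z)                      # camera velocity screw
    s = s + interaction_matrix(s[0], s[1], Z) @ v * dt   # feature motion s_dot = L v
# The image error decays towards zero (exponential convergence).
```

With a single point the 2×6 interaction matrix has full row rank, so L·L⁺ = I and the closed-loop error decays exactly geometrically per Euler step, which is the textbook behaviour the tutorial's theory section describes.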

Relevance: 20.00%

Abstract:

Machine vision represents a particularly attractive solution for sensing and detecting potential collision-course targets due to the relatively low cost, size, weight, and power requirements of the sensors involved (as opposed to radar). This paper describes the development and evaluation of a vision-based collision detection algorithm suitable for fixed-wing aerial robotics. The system was evaluated using highly realistic vision data of the moments leading up to a collision. Based on the collected data, our detection approaches were able to detect targets at distances ranging from 400 m to about 900 m. These distances (with some assumptions about closing speeds and aircraft trajectories) translate to an advanced warning of between 8 and 10 seconds ahead of impact, which approaches the 12.5-second response time recommended for human pilots. We make use of the enormous potential of graphics processing units to achieve processing rates of 30 Hz for images of size 1024-by-768 pixels. Integration into the final platform is currently under way.

Relevance: 20.00%

Abstract:

Machine vision represents a particularly attractive solution for sensing and detecting potential collision-course targets due to the relatively low cost, size, weight, and power requirements of vision sensors (as opposed to radar and TCAS). This paper describes the development and evaluation of a real-time vision-based collision detection system suitable for fixed-wing aerial robotics. Using two fixed-wing UAVs to recreate various collision-course scenarios, we were able to capture highly realistic vision (from an onboard camera perspective) of the moments leading up to a collision. This type of image data is extremely scarce and was invaluable in evaluating the detection performance of two candidate target detection approaches. Based on the collected data, our detection approaches were able to detect targets at distances ranging from 400 m to about 900 m. These distances (with some assumptions about closing speeds and aircraft trajectories) translate to an advanced warning of between 8 and 10 seconds ahead of impact, which approaches the 12.5-second response time recommended for human pilots. We overcame the challenge of achieving real-time computational speeds by exploiting the parallel processing architectures of graphics processing units found on commercial off-the-shelf graphics devices. Our chosen GPU device, suitable for integration onto UAV platforms, can be expected to handle real-time processing of 1024-by-768-pixel image frames at approximately 30 Hz. Flight trials using manned Cessna aircraft, in which all processing is performed onboard, will be conducted in the near future, followed by further experiments with fully autonomous UAV platforms.
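One common way to detect small, distant targets against a slowly varying sky is a close-minus-open (CMO) morphological filter; the sketch below is offered as an illustrative example of this class of detector, not necessarily the candidate approaches the paper evaluated. Smooth background cancels to roughly zero while point-like targets smaller than the structuring element respond strongly.

```python
import numpy as np

def _window_filter(img, r, func):
    """Apply func (np.max or np.min) over (2r+1)x(2r+1) windows, edge-padded."""
    padded = np.pad(img, r, mode='edge')
    h, w = img.shape
    windows = [padded[dy:dy + h, dx:dx + w]
               for dy in range(2 * r + 1) for dx in range(2 * r + 1)]
    return func(np.stack(windows), axis=0)

def close_minus_open(img, r=2):
    """Grey-scale closing minus opening: the smooth background cancels,
    while targets smaller than the window produce a strong response."""
    closing = _window_filter(_window_filter(img, r, np.max), r, np.min)
    opening = _window_filter(_window_filter(img, r, np.min), r, np.max)
    return closing - opening

# Smooth sky-like brightness ramp with a dim 2x2-pixel target embedded.
cols = np.linspace(100.0, 160.0, 64)
sky = np.tile(cols, (64, 1))
sky[30:32, 40:42] -= 40.0
response = close_minus_open(sky)
peak = np.unravel_index(np.argmax(response), response.shape)  # lands on the target
```

On the synthetic frame the filter output is zero over the interior of the ramp and peaks at the target's 40-unit contrast, illustrating why such filters are attractive as the front end of a real-time detection pipeline.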

Relevance: 20.00%

Abstract:

This paper describes the current state of RatSLAM, a Simultaneous Localisation and Mapping (SLAM) system based on models of the rodent hippocampus. RatSLAM uses a competitive attractor network to fuse visual and odometry information. Energy packets in the network represent pose hypotheses, which are updated by odometry and can be enhanced or inhibited by visual input. This paper shows the effectiveness of the system in real robot tests in unmodified indoor environments using a learning vision system. Results are shown for two test environments: a large corridor loop and the complete floor of an office building.
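A toy one-dimensional version of the competitive attractor dynamics described above might look like the following sketch; the kernel, inhibition constant, and cell count are illustrative assumptions, not RatSLAM's actual parameters.

```python
import numpy as np

def attractor_step(activity, kernel, inhibition=0.002):
    """One competitive attractor update: local excitation (circular
    convolution with a unit-sum kernel), global inhibition, normalisation."""
    n, k = len(activity), len(kernel) // 2
    excited = np.zeros(n)
    for i in range(n):
        for j, w in enumerate(kernel):
            excited[(i + j - k) % n] += activity[i] * w
    excited = np.maximum(excited - inhibition, 0.0)   # global inhibition
    return excited / excited.sum()                    # competitive normalisation

kernel = np.array([0.05, 0.25, 0.4, 0.25, 0.05])      # local excitation weights
activity = np.full(100, 0.01)                         # ring of 100 pose cells
activity[20] += 0.1                                   # energy injected at pose 20
for _ in range(20):
    activity = attractor_step(activity, kernel)
# A single activity packet survives, centred on the injected pose.
```

Injected energy (from vision or odometry) seeds a bump; excitation holds it together while inhibition and normalisation starve the background, so the surviving packet encodes the dominant pose hypothesis, mirroring the role the paper assigns to energy packets.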

Relevance: 20.00%

Abstract:

This paper describes the real-time global vision system for the robot soccer team the RoboRoos. It has a highly optimised pipeline that includes thresholding, segmenting, colour normalising, object recognition, and perspective and lens correction. It has a fast ‘paint’ colour calibration system that can calibrate in any face of the YUV or HSI cube. It also autonomously selects both an appropriate camera gain and colour gains, from robot regions across the field, to achieve colour uniformity. Camera geometry calibration is performed automatically from the selection of keypoints on the field. The system achieves a position accuracy of better than 15 mm over a 4 m × 5.5 m field, and orientation accuracy to within 1°. It processes 614 × 480 pixels at 60 Hz on a 2.0 GHz Pentium 4 microprocessor.
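The thresholding stage of such a pipeline is commonly implemented with a precomputed colour lookup table so that classifying a pixel costs a single indexed read. The sketch below shows the idea; the class boxes and labels are invented examples, not the RoboRoos calibration.

```python
import numpy as np

# Illustrative YUV threshold boxes per colour class (invented values).
COLOUR_CLASSES = {
    1: ((0, 255), (0, 110), (140, 255)),    # class 1, e.g. an orange ball
    2: ((0, 255), (140, 255), (0, 110)),    # class 2, e.g. a blue marker
}

def build_lookup(classes):
    """Precompute a full 256^3 class lookup table (about 16 MB); label 0
    means background. Overlapping boxes resolve by insertion order."""
    lut = np.zeros((256, 256, 256), dtype=np.uint8)
    for label, ((y0, y1), (u0, u1), (v0, v1)) in classes.items():
        lut[y0:y1 + 1, u0:u1 + 1, v0:v1 + 1] = label
    return lut

def classify(image_yuv, lut):
    """Threshold an HxWx3 uint8 YUV image into class labels, vectorised."""
    return lut[image_yuv[..., 0], image_yuv[..., 1], image_yuv[..., 2]]

lut = build_lookup(COLOUR_CLASSES)
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[1, 2] = (200, 80, 200)      # one pixel inside the class-1 box
labels = classify(img, lut)
```

Building the table once per calibration moves all threshold logic out of the per-frame loop, which is what makes 60 Hz full-frame processing feasible on modest hardware.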

Relevance: 20.00%

Abstract:

The Simultaneous Localisation And Mapping (SLAM) problem is one of the major challenges in mobile robotics. Probabilistic techniques using high-end range-finding devices are well established in the field, but recent work has investigated vision-only approaches. We present an alternative approach to the leading existing techniques, which extracts approximate rotational and translational velocity information from a vehicle-mounted consumer camera, without tracking landmarks. When coupled with an existing SLAM system, the vision module is able to map a 45 metre long indoor loop and a 1.6 km long outdoor road loop, without any parameter or system adjustment between tests. The work serves as a promising pilot study into ground-based vision-only SLAM, with minimal geometric interpretation of the environment.

Relevance: 20.00%

Abstract:

Simultaneous Localization And Mapping (SLAM) is one of the major challenges in mobile robotics. Probabilistic techniques using high-end range-finding devices are well established in the field, but recent work has investigated vision-only approaches. This paper presents a method for generating approximate rotational and translational velocity information from a single vehicle-mounted consumer camera, without the computationally expensive process of tracking landmarks. The method is tested by employing it to provide the odometric and visual information for the RatSLAM system while mapping a complex suburban road network. RatSLAM generates a coherent map of the environment during an 18 km long trip through suburban traffic at speeds of up to 60 km/h. This result demonstrates the potential of ground-based vision-only SLAM using low-cost sensing and computational hardware.
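A minimal sketch of landmark-free rotation estimation in this spirit compares one-dimensional column-intensity profiles of consecutive frames; this scanline-profile approach is an assumed illustration, and the exact method in the paper may differ.

```python
import numpy as np

def scanline_profile(image):
    """Collapse a greyscale frame to a 1D profile by averaging each column."""
    return image.mean(axis=0)

def estimate_shift(prev_profile, profile, max_shift=20):
    """Find the pixel shift (proportional to yaw) that best aligns two
    profiles, by minimising mean absolute difference over the overlap."""
    n = len(profile)
    best_shift, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            err = np.mean(np.abs(profile[s:] - prev_profile[:n - s]))
        else:
            err = np.mean(np.abs(profile[:n + s] - prev_profile[-s:]))
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift

rng = np.random.default_rng(1)
frame = rng.random((48, 200))
rotated = np.roll(frame, 7, axis=1)    # pure yaw appears as a column shift
shift = estimate_shift(scanline_profile(frame), scanline_profile(rotated))
```

The recovered shift scales with yaw rate given the camera's horizontal field of view, and the residual profile difference can similarly serve as a rough cue for translational speed, so no landmark tracking is required.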

Relevance: 20.00%

Abstract:

Conventional cameras have limited dynamic range, and as a result vision-based robots cannot effectively view an environment made up of both sunny outdoor areas and darker indoor areas. This paper presents an approach to extending the effective dynamic range of a camera by changing its exposure level in real time to form a sequence of images which collectively cover a wide range of radiance. Individual control algorithms for each image have been developed to maximize the viewable area across the sequence. Spatial discrepancies between images, caused by the moving robot, are reduced by a real-time image registration process. The sequence is then combined by merging color and contour information. By integrating these techniques it becomes possible to operate a vision-based robot in scenes with a wide radiance range.
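The merging step can be sketched as a per-pixel weighted average that favours well-exposed pixels across the exposure sequence. The Gaussian weighting and σ = 0.2 below are illustrative assumptions, not the paper's actual merging method, and the frames are assumed already registered.

```python
import numpy as np

def well_exposedness(image, sigma=0.2):
    """Per-pixel weight favouring mid-range intensities (peak at 0.5)."""
    return np.exp(-((image - 0.5) ** 2) / (2.0 * sigma ** 2))

def merge_exposures(images):
    """Fuse differently exposed frames (floats in 0..1, already registered)
    by a per-pixel weighted average of the well-exposed values."""
    stack = np.stack(images)
    weights = well_exposedness(stack) + 1e-6   # avoid division by zero
    return (weights * stack).sum(axis=0) / weights.sum(axis=0)

# A scene with a dark indoor half and a bright outdoor half, captured twice:
# a short exposure (indoor too dark) and a long one (outdoor saturated).
under = np.concatenate([np.full((8, 8), 0.05), np.full((8, 8), 0.55)], axis=1)
over = np.concatenate([np.full((8, 8), 0.45), np.full((8, 8), 0.98)], axis=1)
fused = merge_exposures([under, over])
# Each half of the fused image is dominated by its better-exposed capture.
```

In the toy scene the indoor half of the fused frame lands near the long exposure's 0.45 and the outdoor half near the short exposure's 0.55, which is the effect the paper pursues: one usable frame spanning both radiance regimes.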