389 results for Asynchronous vision sensor
Abstract:
Next-generation autonomous underwater vehicles (AUVs) will be required to robustly identify underwater targets for tasks such as inspection, localization, and docking. Given their often unstructured operating environments, vision offers enormous potential for underwater navigation over more traditional methods; however, unreliable target segmentation often plagues these systems. This paper addresses robust vision-based target recognition by presenting a novel scale- and rotationally invariant target design and recognition routine based on self-similar landmarks that enables robust target pose estimation with respect to a single camera. These algorithms are applied to an AUV with controllers developed for vision-based docking with the target. Experimental results show that the system performs exceptionally well on limited processing power and demonstrate how the combined vision and controller system enables robust target identification and docking in a variety of operating conditions.
Abstract:
Purpose. To compare the on-road driving performance of visually impaired drivers using bioptic telescopes with age-matched controls. Methods. Participants included 23 persons (mean age = 33 ± 12 years) with visual acuity of 20/63 to 20/200 who were legally licensed to drive through a state bioptic driving program, and 23 visually normal age-matched controls (mean age = 33 ± 12 years). On-road driving was assessed in an instrumented dual-brake vehicle along 14.6 miles of city, suburban, and controlled-access highways. Two backseat evaluators independently rated driving performance using a standardized scoring system. Vehicle control was assessed through vehicle instrumentation and video recordings used to evaluate head movements, lane-keeping, pedestrian detection, and frequency of bioptic telescope use. Results. Ninety-six percent (22/23) of bioptic drivers and 100% (23/23) of controls were rated as safe to drive by the evaluators. There were no group differences for pedestrian detection, or ratings for scanning, speed, gap judgments, braking, indicator use, or obeying signs/signals. Bioptic drivers received worse ratings than controls for lane position and steering steadiness and had lower rates of correct sign and traffic signal recognition. Bioptic drivers made significantly more right head movements, drove more often over the right-hand lane marking, and exhibited more sudden braking than controls. Conclusions. Drivers with central vision loss who are licensed to drive through a bioptic driving program can display proficient on-road driving skills. This raises questions regarding the validity of denying such drivers a license without the opportunity to train with a bioptic telescope and undergo on-road evaluation.
Abstract:
Intravitreal injections of GABA antagonists, dopamine agonists and brief periods of normal vision have been shown separately to inhibit form-deprivation myopia (FDM). Our study had three aims: (i) establish whether GABAergic agents modify the myopia protective effect of normal vision, (ii) investigate the receptor sub-type specificity of any observed effect, and (iii) consider an interaction with the dopamine (DA) system. Prior to the period of normal vision GABAergic agents were applied either (i) individually, (ii) in combination with other GABAergic agents (an agonist with an antagonist), or (iii) in combination with DA agonists and antagonists. Water injections were given to groups not receiving drug treatments so that all experimental eyes received intravitreal injections. As shown previously, constant form-deprivation resulted in high myopia and when diffusers were removed for 2 h per day the period of normal vision greatly reduced the FDM that developed. GABA agonists inhibited the protective effect of normal vision whereas antagonists had the opposite effect. GABAA/C agonists and D2 DA antagonists when used in combination were additive in suppressing the protective effect of normal vision. A D2 DA agonist restored some of the protective effect of normal vision that was inhibited by a GABA agonist (muscimol). The protective effect of normal vision against form-deprivation is modifiable by both the GABAergic and DAergic pathways.
Abstract:
Purpose: To determine visual performance in water, including the influence of pupil size. Method: The water environment was simulated by placing a goggle filled with saline in front of eyes, with apertures placed at the front of the goggle. Correction factors were determined for the different magnification under this condition to estimate vision in water. Experiments were conducted on letter visual acuity (7 participants), grating resolution (8 participants), and grating contrast sensitivity (1 participant). Results: For letter acuity, mean loss in vision in water, compared to corrected vision in air, varied between 1.1 log minutes of arc resolution (logMAR) for a 1mm aperture to 2.2 logMAR for a 7mm aperture. The vision in minutes of arc was described well by a linear relationship with pupil size. For grating acuity, mean loss varied between 1.1 logMAR for a 2mm aperture to 1.2 logMAR for a 6mm aperture. Contrast sensitivity for a 2mm aperture deteriorated as spatial frequency increased, with 2 log unit loss by 3 cycles/degree. Superimposed on this deterioration were depressions (notches) in sensitivity, with the first three notches occurring at 0.45, 0.8 and 1.3 cycles/degree and with estimates for water of 0.39, 0.70 and 1.13 cycles/degree. Conclusion: Vision in water is poor. It becomes worse as pupil size increases, but the effects are much more marked for letter targets than for grating targets.
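The abstract reports that letter-acuity loss in water runs from 1.1 logMAR at a 1 mm aperture to 2.2 logMAR at a 7 mm aperture, and that vision expressed in minutes of arc scales linearly with pupil size. A minimal sketch of that relationship, interpolating between the two reported endpoints (the function names and the linear interpolation between exactly these endpoints are our illustrative assumptions):

```python
def logmar_to_mar(logmar):
    """Convert a logMAR value to minutes of arc of resolution (MAR)."""
    return 10.0 ** logmar

# Endpoints reported in the abstract: 1.1 logMAR loss at a 1 mm aperture,
# 2.2 logMAR loss at a 7 mm aperture (letter acuity, water vs. air).
MAR_1MM = logmar_to_mar(1.1)   # ~12.6 arcmin
MAR_7MM = logmar_to_mar(2.2)   # ~158.5 arcmin

def letter_mar_loss(pupil_mm):
    """Interpolate the loss linearly in MAR (minutes of arc), per the
    reported linear relationship between MAR and pupil size."""
    slope = (MAR_7MM - MAR_1MM) / (7.0 - 1.0)
    return MAR_1MM + slope * (pupil_mm - 1.0)
```

Note that a 1.1 logMAR difference is already more than a tenfold loss of resolution, which is why the linear fit is done in minutes of arc rather than in logMAR.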
Abstract:
The process of translating research into policy and practice is not well understood. This paper uses a case study approach to interpret an example of translation with respect to theoretical approaches identified in the literature. The case study concerns research into "biological motion" or "biomotion": when lights are placed on the movable joints of the body and the person moves in a dark setting, the human form is recognized immediately and accurately although only the lights can be seen. QUT was successful in gaining Australian Research Council funding, with the support of the predecessors of the Queensland Department of Transport and Main Roads (TMR), to research the biomotion effect in road worker clothing using reflective tape rather than lights, and this resulted in the incorporation of biomotion marking into AS/NZS 4602.1 2011. The most promising approach to understanding the success of this translation, SWOV's "knowledge utilisation approach", provided some insights but was more descriptive than predictive, offering "necessary but not sufficient" conditions for translation. In particular, the supportive efforts of TMR staff engaged in the review and promulgation of national standards were critical in this case. A model of the conclusions is presented. The experiences gained in this case should provide insights into the processes involved in effectively translating research into practice.
Abstract:
Novel computer vision techniques have been developed for automatic monitoring of crowded environments such as airports, railway stations and shopping malls. Using video feeds from multiple cameras, the techniques enable crowd counting, crowd flow monitoring, queue monitoring and abnormal event detection. The outcome of the research is useful for surveillance applications and for obtaining operational metrics to improve business efficiency.
Abstract:
Machine vision is emerging as a viable sensing approach for mid-air collision avoidance (particularly for small to medium aircraft such as unmanned aerial vehicles). In this paper, using relative entropy rate concepts, we propose and investigate a new change detection approach that uses hidden Markov model filters to sequentially detect aircraft manoeuvres from morphologically processed image sequences. Experiments using simulated and airborne image sequences illustrate the performance of our proposed algorithm in comparison to other sequential change detection approaches applied to this application.
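The core of the approach above is a hidden Markov model filter that sequentially updates the posterior probability of each flight mode from processed image observations. A minimal two-mode sketch of such a filter (the transition/emission probabilities, the binary image feature, and the 0.5 decision threshold are illustrative assumptions, not the paper's values):

```python
def hmm_filter(obs, trans, emit, prior):
    """Recursive HMM filter: after each observation, return the normalised
    posterior over hidden states via a predict-then-update recursion."""
    post = list(prior)
    history = []
    for y in obs:
        # Predict: propagate the posterior through the transition matrix.
        pred = [sum(post[i] * trans[i][j] for i in range(len(post)))
                for j in range(len(post))]
        # Update: weight by the observation likelihood, then normalise.
        upd = [pred[j] * emit[j][y] for j in range(len(pred))]
        z = sum(upd)
        post = [u / z for u in upd]
        history.append(post)
    return history

# Two hidden modes: 0 = straight flight, 1 = manoeuvring (hypothetical).
trans = [[0.95, 0.05], [0.10, 0.90]]
# Binary morphological image feature y: P(y | mode).
emit = [[0.8, 0.2], [0.3, 0.7]]
prior = [0.9, 0.1]

obs = [0, 0, 1, 1, 1, 1]            # the feature starts firing mid-sequence
posteriors = hmm_filter(obs, trans, emit, prior)
manoeuvre_flag = posteriors[-1][1] > 0.5   # declare a manoeuvre detection
```

The paper's relative-entropy-rate analysis concerns how quickly such a filter's posterior separates the two modes; the sketch shows only the basic filtering recursion.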
Abstract:
In this paper we present a method for autonomously tuning the threshold between learning and recognizing a place in the world, based on both how the rodent brain is thought to process and calibrate multisensory data and the pivoting movement behaviour that rodents perform in doing so. The approach makes no assumptions about the number and type of sensors, the robot platform, or the environment, relying only on the ability of a robot to perform two revolutions on the spot. In addition, it self-assesses the quality of the tuning process in order to identify situations in which tuning may have failed. We demonstrate the autonomous movement-driven threshold tuning on a Pioneer 3DX robot in eight locations spread over an office environment and a building car park, and then evaluate the mapping capability of the system on journeys through these environments. The system is able to pick a place recognition threshold that enables successful environment mapping in six of the eight locations while also autonomously flagging the tuning failure in the remaining two locations. We discuss how the method, in combination with parallel work on autonomous weighting of individual sensors, moves the parameter-dependent RatSLAM system significantly closer to sensor, platform and environment agnostic operation.
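One way to picture threshold tuning from two on-the-spot revolutions: views from the second revolution should match the same-heading views from the first (same place) and mismatch different-heading views, so a threshold can be placed between the two score distributions, and overlap signals tuning failure. This toy rule and all numbers are our illustration, not the paper's algorithm:

```python
def tune_threshold(match_scores, mismatch_scores):
    """Place the recognition threshold midway between the worst same-heading
    match and the best different-heading mismatch gathered over two
    revolutions; return None to flag tuning failure when they overlap."""
    lo = max(mismatch_scores)   # strongest spurious similarity
    hi = min(match_scores)      # weakest genuine similarity
    if hi <= lo:
        return None             # distributions overlap: self-assessed failure
    return (lo + hi) / 2.0

# Hypothetical similarity scores from one location:
match = [0.82, 0.79, 0.85, 0.80]      # same-heading view pairs
mismatch = [0.31, 0.44, 0.38, 0.52]   # different-heading view pairs
thr = tune_threshold(match, mismatch)
```

The self-assessment falls out naturally: when the two distributions cannot be separated, no threshold is returned, mirroring the two flagged failure locations in the abstract.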
Abstract:
This paper presents a pose estimation approach that is resilient to typical sensor failure and suitable for low cost agricultural robots. Guiding large agricultural machinery with highly accurate GPS/INS systems has become standard practice, however these systems are inappropriate for smaller, lower-cost robots. Our positioning system estimates pose by fusing data from a low-cost global positioning sensor, low-cost inertial sensors and a new technique for vision-based row tracking. The results first demonstrate that our positioning system will accurately guide a robot to perform a coverage task across a 6 hectare field. The results then demonstrate that our vision-based row tracking algorithm improves the performance of the positioning system despite long periods of precision correction signal dropout and intermittent dropouts of the entire GPS sensor.
Abstract:
This paper introduces an improved line tracker using IMU and vision data for visual servoing tasks. We utilize an Image Jacobian which relates the motion of a line feature to the corresponding camera movements. These camera motions are estimated using an IMU. We demonstrate the impact of the proposed method in challenging environments: maximum angular rate of ~160°/s and acceleration of ~6 m/s², and in cluttered outdoor scenes. Simulation and a quantitative tracking performance comparison with the Visual Servoing Platform (ViSP) are also presented.
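A concrete instance of using IMU rates to predict line-feature motion: for a pure rotation about the optical axis, every image line rotates by the camera's roll angle, so a gyro reading lets the tracker pre-position its search window before the next frame. The sign convention, function name, and frame interval below are our assumptions; the full Image Jacobian also carries translation and out-of-plane rotation terms omitted here:

```python
import math

def predict_line_angle(theta, omega_z, dt):
    """Predict the image-plane orientation of a tracked line after the
    camera rolls at omega_z rad/s about its optical axis for dt seconds.
    Line angles are equivalent modulo pi."""
    return (theta - omega_z * dt) % math.pi

# Gyro reports a 160 deg/s roll over one ~30 fps frame interval (33 ms):
omega = math.radians(160.0)
theta0 = math.radians(30.0)
theta1 = predict_line_angle(theta0, omega, 0.033)   # ~24.72 deg
```

At ~160°/s a line sweeps more than 5° between frames, which is why a vision-only tracker with a fixed search window loses lock while the IMU-predicted one does not.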
Abstract:
The use of Wireless Sensor Networks (WSNs) for Structural Health Monitoring (SHM) has become a promising approach due to advantages such as low cost and fast, flexible deployment. However, inherent technical issues such as data synchronization error and data loss have prevented these distinct systems from being extensively used. Recently, several SHM-oriented WSNs have been proposed and are believed to overcome a large number of technical uncertainties. Nevertheless, there is limited research examining the effects of uncertainties in generic WSN platforms and verifying the capability of SHM-oriented WSNs, particularly in demanding SHM applications such as modal analysis and damage identification of real civil structures. This article first reviews the major technical uncertainties of both generic and SHM-oriented WSN platforms and the efforts of the SHM research community to cope with them. Then, the effects of the most inherent WSN uncertainty on the first level of a common Output-only Modal-based Damage Identification (OMDI) approach are intensively investigated. Experimental accelerations collected by a wired sensory system on a benchmark civil structure are initially used as clean data before being contaminated with different levels of data pollutants to simulate practical uncertainties in both WSN platforms. Statistical analyses are comprehensively employed to uncover the distribution pattern of the uncertainty influence on the OMDI approach. The results show that uncertainties of generic WSNs can have a serious impact on Level 1 OMDI methods utilizing mode shapes. They also prove that SHM-oriented WSNs can substantially lessen this impact and recover true structural information without resorting to costly computational solutions.
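The contamination step described above, taking clean wired-sensor accelerations and injecting WSN-style pollutants, can be sketched for two representative uncertainties: a constant synchronization offset and random sample loss. The abstract does not specify the study's pollutant models, so this is purely illustrative:

```python
import random

def contaminate(signal, sync_offset, loss_rate, seed=0):
    """Simulate two generic-WSN uncertainties on a clean sample sequence:
    shift the whole record by `sync_offset` samples (zero-padded at the end)
    to mimic a synchronization error, then drop each sample independently
    with probability `loss_rate` (lost samples become None)."""
    rng = random.Random(seed)   # seeded for reproducible pollution levels
    shifted = signal[sync_offset:] + [0.0] * sync_offset
    return [None if rng.random() < loss_rate else s for s in shifted]
```

Running the same damage-identification pipeline on many such contaminated copies at increasing `loss_rate` and `sync_offset` is what allows the statistical analysis of how each uncertainty degrades mode-shape-based results.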
Abstract:
For industrial wireless sensor networks, maintaining the routing path for a high packet delivery ratio is one of the key objectives in network operations. It is important both to provide a high data delivery rate at the sink node and to guarantee timely delivery of data packets to the sink node. Most proactive routing protocols for sensor networks are based on simple periodic updates to distribute the routing information. A faulty link causes packet loss and retransmission at the source until periodic route update packets are issued and the link is identified as broken. We propose a new proactive route maintenance process in which the periodic update is backed up by a secondary layer of local updates repeating at shorter periods for timely discovery of broken links. The proposed route maintenance scheme improves the reliability of the network by decreasing the packet loss caused by delayed identification of broken links. We show by simulation that the proposed mechanism outperforms existing popular routing protocols (AODV, AOMDV and DSDV) in terms of end-to-end delay, routing overhead and packet reception ratio.
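The benefit of the secondary update layer is easy to quantify: a broken link stays undetected until the next scheduled update, so shortening the local update period bounds the blind-retransmission window. The periods and failure time below are illustrative numbers, not values from the abstract:

```python
import math

def detection_delay(fail_time, period):
    """Delay until the next periodic route update (at t = period, 2*period,
    ...) reveals a link that failed at fail_time."""
    return math.ceil(fail_time / period) * period - fail_time

# Hypothetical schedule: global route updates every 30 s; the proposed
# secondary layer of local updates repeats every 5 s. A link fails at t = 7 s.
global_only = detection_delay(7.0, 30.0)   # 23 s of lost/retransmitted packets
with_local = detection_delay(7.0, 5.0)     # broken link discovered after 3 s
```

In the worst case the blind window equals one full update period, which is exactly the quantity the shorter local period shrinks.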
Abstract:
This paper proposes an experimental study of quality metrics that can be applied to visual and infrared images acquired from cameras onboard an unmanned ground vehicle (UGV). The relevance of existing metrics in this context is discussed and a novel metric is introduced. Selected metrics are evaluated on data collected by a UGV in clear and challenging environmental conditions, represented in this paper by the presence of airborne dust or smoke. An example of application is given with monocular SLAM estimating the pose of the UGV while smoke is present in the environment. It is shown that the proposed novel quality metric can be used to anticipate situations where the quality of the pose estimate will be significantly degraded due to the input image data. This leads to decisions of advantageously switching between data sources (e.g. using infrared images instead of visual images).
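As a toy stand-in for the kind of image quality metric discussed above (the paper's actual metric is not given in the abstract), mean absolute horizontal gradient drops when airborne dust or smoke washes out scene contrast, and thresholding such a score could trigger the switch to the infrared stream:

```python
def quality_metric(img):
    """Toy quality score for a grayscale image given as a list of pixel
    rows: the mean absolute horizontal gradient, which falls as contrast
    is lost. Illustrative only; not the metric proposed in the paper."""
    total, n = 0.0, 0
    for row in img:
        for a, b in zip(row, row[1:]):
            total += abs(b - a)
            n += 1
    return total / n

clear = [[0, 200, 0, 200], [200, 0, 200, 0]]      # high-contrast scene
smoky = [[90, 110, 95, 105], [100, 95, 105, 98]]  # contrast washed out
# quality_metric(clear) is far larger than quality_metric(smoky); a score
# below a chosen threshold would anticipate degraded SLAM pose estimates.
```

The key property mirrored here is that the score can be computed on the input image alone, before the pose estimate degrades, which is what makes anticipatory source-switching possible.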
Abstract:
Camera-laser calibration is necessary for many robotics and computer vision applications. However, existing calibration toolboxes still require laborious effort from the operator in order to achieve reliable and accurate results. This paper proposes algorithms that augment two existing trusted calibration methods with automatic extraction of the calibration object from the sensor data. The result is a complete procedure that allows for automatic camera-laser calibration. The first stage of the procedure is automatic camera calibration, which is useful in its own right for many applications. The chessboard extraction algorithm it provides is shown to outperform openly available techniques. The second stage completes the procedure by providing automatic camera-laser calibration. The procedure has been verified by extensive experimental tests, with the proposed algorithms providing a major reduction in the time required from an operator in comparison to manual methods.