814 results for Vision Chip
Abstract:
The problem of estimating pseudobearing rate information of an airborne target based on measurements from a vision sensor is considered. Novel image speed and heading angle estimators are presented that exploit image morphology, hidden Markov model (HMM) filtering, and relative entropy rate (RER) concepts to allow pseudobearing rate information to be determined before (or whilst) the target track is being estimated from vision information.
Abstract:
Executive Summary: This project has commenced an exploration of learning and information experiences in the QUT Cube. Understanding learning in this environment has the potential to inform current implementations and future project development. In this report, we present early findings from the first phase of an investigation into what makes learning possible in the context of a giant interactive multimedia display such as the QUT Cube, an award-winning configuration that hosts several projects.
Abstract:
Next-generation autonomous underwater vehicles (AUVs) will be required to robustly identify underwater targets for tasks such as inspection, localization, and docking. Given their often unstructured operating environments, vision offers enormous potential in underwater navigation over more traditional methods; however, reliable target segmentation often plagues these systems. This paper addresses robust vision-based target recognition by presenting a novel scale and rotationally invariant target design and recognition routine based on self-similar landmarks that enables robust target pose estimation with respect to a single camera. These algorithms are applied to an AUV with controllers developed for vision-based docking with the target. Experimental results show that the system performs exceptionally on limited processing power and demonstrates how the combined vision and controller system enables robust target identification and docking in a variety of operating conditions.
Abstract:
Purpose. To compare the on-road driving performance of visually impaired drivers using bioptic telescopes with age-matched controls. Methods. Participants included 23 persons (mean age = 33 ± 12 years) with visual acuity of 20/63 to 20/200 who were legally licensed to drive through a state bioptic driving program, and 23 visually normal age-matched controls (mean age = 33 ± 12 years). On-road driving was assessed in an instrumented dual-brake vehicle along 14.6 miles of city, suburban, and controlled-access highways. Two backseat evaluators independently rated driving performance using a standardized scoring system. Vehicle control was assessed through vehicle instrumentation and video recordings used to evaluate head movements, lane-keeping, pedestrian detection, and frequency of bioptic telescope use. Results. Ninety-six percent (22/23) of bioptic drivers and 100% (23/23) of controls were rated as safe to drive by the evaluators. There were no group differences for pedestrian detection, or ratings for scanning, speed, gap judgments, braking, indicator use, or obeying signs/signals. Bioptic drivers received worse ratings than controls for lane position and steering steadiness and had lower rates of correct sign and traffic signal recognition. Bioptic drivers made significantly more right head movements, drove more often over the right-hand lane marking, and exhibited more sudden braking than controls. Conclusions. Drivers with central vision loss who are licensed to drive through a bioptic driving program can display proficient on-road driving skills. This raises questions regarding the validity of denying such drivers a license without the opportunity to train with a bioptic telescope and undergo on-road evaluation.
Abstract:
Intravitreal injections of GABA antagonists, dopamine agonists and brief periods of normal vision have been shown separately to inhibit form-deprivation myopia (FDM). Our study had three aims: (i) establish whether GABAergic agents modify the myopia protective effect of normal vision, (ii) investigate the receptor sub-type specificity of any observed effect, and (iii) consider an interaction with the dopamine (DA) system. Prior to the period of normal vision GABAergic agents were applied either (i) individually, (ii) in combination with other GABAergic agents (an agonist with an antagonist), or (iii) in combination with DA agonists and antagonists. Water injections were given to groups not receiving drug treatments so that all experimental eyes received intravitreal injections. As shown previously, constant form-deprivation resulted in high myopia and when diffusers were removed for 2 h per day the period of normal vision greatly reduced the FDM that developed. GABA agonists inhibited the protective effect of normal vision whereas antagonists had the opposite effect. GABAA/C agonists and D2 DA antagonists when used in combination were additive in suppressing the protective effect of normal vision. A D2 DA agonist restored some of the protective effect of normal vision that was inhibited by a GABA agonist (muscimol). The protective effect of normal vision against form-deprivation is modifiable by both the GABAergic and DAergic pathways.
Abstract:
The purpose of this study is to determine visual performance in water, including the influence of pupil size. The water environment was simulated by placing a goggle filled with saline in front of the eyes, with apertures placed at the front of the goggle. Correction factors were determined for the different magnification under this condition to estimate vision in water. Experiments were conducted on letter visual acuity (7 participants), grating resolution (8 participants), and grating contrast sensitivity (1 participant). For letter acuity, mean loss in vision in water, compared to corrected vision in air, varied from 1.1 log minutes of arc resolution (logMAR) for a 1 mm aperture to 2.2 logMAR for a 7 mm aperture. The vision in minutes of arc was described well by a linear relationship with pupil size. For grating acuity, mean loss varied from 1.1 logMAR for a 2 mm aperture to 1.2 logMAR for a 6 mm aperture. Contrast sensitivity for a 2 mm aperture deteriorated as spatial frequency increased, with a 2 log unit loss by 3 cycles/degree. Superimposed on this deterioration were depressions (notches) in sensitivity, with the first three notches occurring at 0.45, 0.8 and 1.3 cycles/degree, with estimates for water of 0.39, 0.70 and 1.13 cycles/degree. In conclusion, vision in water is poor. It becomes worse as pupil size increases, but the effects are much more marked for letter targets than for grating targets.
Abstract:
Purpose: To determine visual performance in water, including the influence of pupil size. Method: The water environment was simulated by placing a goggle filled with saline in front of the eyes, with apertures placed at the front of the goggle. Correction factors were determined for the different magnification under this condition to estimate vision in water. Experiments were conducted on letter visual acuity (7 participants), grating resolution (8 participants), and grating contrast sensitivity (1 participant). Results: For letter acuity, mean loss in vision in water, compared to corrected vision in air, varied from 1.1 log minutes of arc resolution (logMAR) for a 1 mm aperture to 2.2 logMAR for a 7 mm aperture. The vision in minutes of arc was described well by a linear relationship with pupil size. For grating acuity, mean loss varied from 1.1 logMAR for a 2 mm aperture to 1.2 logMAR for a 6 mm aperture. Contrast sensitivity for a 2 mm aperture deteriorated as spatial frequency increased, with a 2 log unit loss by 3 cycles/degree. Superimposed on this deterioration were depressions (notches) in sensitivity, with the first three notches occurring at 0.45, 0.8 and 1.3 cycles/degree and with estimates for water of 0.39, 0.70 and 1.13 cycles/degree. Conclusion: Vision in water is poor. It becomes worse as pupil size increases, but the effects are much more marked for letter targets than for grating targets.
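The reported linear relationship between minute-of-arc resolution and pupil size can be illustrated with a straight-line fit through the two letter-acuity endpoints given in the abstract above. This is only a sketch: the intermediate values are interpolations of ours, not measured data, and expressing the logMAR loss as a MAR multiplier is our simplification.

```python
# Illustrative linear fit of minute-of-arc resolution (MAR) loss versus
# pupil size, using only the two letter-acuity endpoints reported above
# (losses of 1.1 logMAR at 1 mm and 2.2 logMAR at 7 mm). Intermediate
# values are interpolated, not measured.

mar_1mm = 10 ** 1.1   # loss expressed as a MAR multiplier at 1 mm
mar_7mm = 10 ** 2.2   # loss expressed as a MAR multiplier at 7 mm

slope = (mar_7mm - mar_1mm) / (7.0 - 1.0)
intercept = mar_1mm - slope * 1.0

def predicted_mar(pupil_mm):
    """MAR loss predicted by the linear model for a given pupil size."""
    return slope * pupil_mm + intercept

mid = predicted_mar(4.0)  # interpolated loss for a hypothetical 4 mm pupil
```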
Abstract:
The process of translating research into policy and practice is not well understood. This paper uses a case study approach to interpret an example of translation with respect to theoretical approaches identified in the literature. The case study concerns research into “biological motion” or “biomotion”: when lights are placed on the movable joints of the body and the person moves in a dark setting, there is immediate and accurate recognition of the human form although only the lights can be seen. QUT was successful in gaining Australian Research Council funding, with the support of the predecessors of the Queensland Department of Transport and Main Roads (TMR), to research the biomotion effect in road worker clothing using reflective tape rather than lights, and this resulted in the incorporation of biomotion marking into AS/NZS 4602.1 2011. The most promising approach to understanding the success of this translation, SWOV’s “knowledge utilisation approach”, provided some insights but was more descriptive than predictive and provided “necessary but not sufficient” conditions for translation. In particular, the supportive efforts of TMR staff engaged in the review and promulgation of national standards were critical in this case. A model of the conclusions is presented. The experiences gained in this case should provide insights into the processes involved in effectively translating research into practice.
Abstract:
Novel computer vision techniques have been developed for automatic monitoring of crowded environments such as airports, railway stations and shopping malls. Using video feeds from multiple cameras, the techniques enable crowd counting, crowd flow monitoring, queue monitoring and abnormal event detection. The outcome of the research is useful for surveillance applications and for obtaining operational metrics to improve business efficiency.
Abstract:
Machine vision is emerging as a viable sensing approach for mid-air collision avoidance (particularly for small to medium aircraft such as unmanned aerial vehicles). In this paper, using relative entropy rate concepts, we propose and investigate a new change detection approach that uses hidden Markov model filters to sequentially detect aircraft manoeuvres from morphologically processed image sequences. Experiments using simulated and airborne image sequences illustrate the performance of our proposed algorithm in comparison to other sequential change detection approaches applied to this application.
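The sequential detection idea named in the abstract above can be sketched with a minimal two-mode HMM filter: a Bayesian recursion whose posterior mode probabilities shift when the observation statistics change (e.g. from straight-line flight to a manoeuvre). The transition matrix, emission probabilities, and observation sequence here are illustrative placeholders, not values from the paper.

```python
import numpy as np

# Two-mode HMM filter sketch for sequential manoeuvre detection,
# assuming a discretised observation model (all numbers are assumptions).
A = np.array([[0.95, 0.05],   # mode 0: straight-line flight
              [0.05, 0.95]])  # mode 1: manoeuvring
B = np.array([[0.8, 0.2],     # P(observation | mode 0)
              [0.3, 0.7]])    # P(observation | mode 1)

def hmm_filter(observations, prior=np.array([0.5, 0.5])):
    """Return the posterior mode probabilities after each observation."""
    posterior = prior
    history = []
    for y in observations:
        predicted = A.T @ posterior          # Chapman-Kolmogorov prediction
        unnormalised = B[:, y] * predicted   # Bayes correction
        posterior = unnormalised / unnormalised.sum()
        history.append(posterior)
    return np.array(history)

# A run of mode-1-like observations drives the posterior towards mode 1.
probs = hmm_filter([0, 0, 1, 1, 1, 1])
```

In the paper's setting the observations would come from morphologically processed image sequences, and a relative-entropy-rate statistic would compare candidate models; the recursion above is only the filtering core.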
Abstract:
This paper introduces an improved line tracker using IMU and vision data for visual servoing tasks. We utilize an Image Jacobian which relates the motion of a line feature to corresponding camera movements. These camera motions are estimated using an IMU. We demonstrate the impact of the proposed method in challenging environments: maximum angular rates of ~160°/s, accelerations of ~6 m/s², and cluttered outdoor scenes. Simulation results and a quantitative tracking performance comparison with the Visual Servoing Platform (ViSP) are also presented.
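The core idea above — predicting feature motion in the image from IMU-estimated camera motion via an image Jacobian — can be sketched with the classical point-feature interaction matrix from the visual servoing literature. The paper itself uses line features; the point-feature simplification and all numeric values below are our assumptions.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Classical image Jacobian (interaction matrix) for a normalised
    image point (x, y) at depth Z, mapping the camera velocity twist
    [vx, vy, vz, wx, wy, wz] to feature velocity: s_dot = L @ v."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

# IMU-style camera velocity estimate (illustrative): pure yaw rotation
# of 0.5 rad/s about the camera's y axis.
v_cam = np.array([0.0, 0.0, 0.0, 0.0, 0.5, 0.0])
L = interaction_matrix(x=0.0, y=0.0, Z=2.0)
s_dot = L @ v_cam   # predicted feature drift, usable to pre-shift a tracker
```

Feeding this predicted drift to the tracker is what lets it keep lock during fast rotations that would otherwise move the feature outside the search window.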
Abstract:
This thesis developed a method for real-time and handheld 3D temperature mapping using a combination of off-the-shelf devices and efficient computer algorithms. It contributes a new sensing and data processing framework to the science of 3D thermography, unlocking its potential for application areas such as building energy auditing and industrial monitoring. New techniques for the precise calibration of multi-sensor configurations were developed, along with several algorithms that ensure both accurate and comprehensive surface temperature estimates can be made for rich 3D models as they are generated by a non-expert user.
Abstract:
This paper describes the development of a novel vision-based autonomous surface vehicle with the purpose of performing coordinated docking manoeuvres with a target, such as an autonomous underwater vehicle, at the water's surface. The system architecture integrates two small processor units; the first performs vehicle control and implements a virtual-force-based docking strategy, with the second performing vision-based target segmentation and tracking. Furthermore, the architecture utilises wireless sensor network technology, allowing the vehicle to be observed by, and even integrated within, an ad-hoc sensor network. Simulated and experimental results are presented demonstrating the autonomous vision-based docking strategy on a proof-of-concept vehicle.
Abstract:
We present a pole inspection system for outdoor environments comprising a high-speed camera on a vertical take-off and landing (VTOL) aerial platform. The pole inspection task requires a vehicle to fly close to a structure while maintaining a fixed stand-off distance from it. However, typical GPS errors make GPS-based navigation unsuitable for this task. When flying outdoors a vehicle is also affected by aerodynamic disturbances such as wind gusts, so the onboard controller must be robust to these disturbances in order to maintain the stand-off distance. Two problems must therefore be addressed: fast and accurate state estimation without GPS, and the design of a robust controller. We resolve these problems by a) performing visual + inertial relative state estimation and b) using a robust line tracker and a nested controller design. Our state estimation exploits high-speed camera images (100 Hz) and 70 Hz IMU data fused in an Extended Kalman Filter (EKF). We demonstrate results from outdoor experiments for pole-relative hovering, and pole circumnavigation where the operator provides only yaw commands. Lastly, we show results for image-based 3D reconstruction and texture mapping of a pole to demonstrate its usefulness for inspection tasks.
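The visual + inertial EKF fusion named above can be sketched in a deliberately reduced 1D form: IMU accelerations drive the prediction step and camera-derived positions drive the correction step. The state model, noise covariances, and measurements below are illustrative assumptions, not the paper's actual filter.

```python
import numpy as np

dt = 1.0 / 70.0                       # IMU period (70 Hz, as in the abstract)
F = np.array([[1.0, dt], [0.0, 1.0]]) # state transition for [position, velocity]
G = np.array([0.5 * dt**2, dt])       # how acceleration enters the state
H = np.array([[1.0, 0.0]])            # camera measures position only
Q = 1e-4 * np.eye(2)                  # process noise (assumed)
R = np.array([[1e-2]])                # camera measurement noise (assumed)

def predict(x, P, accel):
    """IMU-driven prediction step."""
    x = F @ x + G * accel
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z):
    """Camera-driven correction step."""
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros(2), np.eye(2)
for _ in range(20):                    # hover: zero acceleration, camera sees ~0
    x, P = predict(x, P, accel=0.0)
    x, P = update(x, P, z=np.array([0.0]))
```

In the real system the state is the full pole-relative pose and the two sensor streams arrive at their own rates, but the predict/correct split shown here is the same.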
Abstract:
In this paper, we introduce a vision called Smart Material Interfaces (SMIs), which takes advantage of the latest generation of engineered materials that have a special property described as "smart": they are capable of changing their physical properties, such as shape, size and color, and can be controlled using certain stimuli (light, potential difference, temperature and so on). We describe SMIs in relation to Tangible User Interfaces (TUIs) to convey their usefulness and a better understanding of SMIs.