978 results for Color vision
Abstract:
This paper introduces a high-speed, 100 Hz, vision-based state estimator suitable for quadrotor control in close-quarters manoeuvring applications. We describe the hardware and algorithms for estimating the state of the quadrotor. Experimental results for the position, velocity and yaw angle estimators are presented and compared with motion capture data. A quantitative performance comparison with state-of-the-art results is also presented.
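As a rough illustration of what a fixed-rate visual state estimator does, here is a minimal Python sketch; the constant-velocity model and the gains are assumptions for illustration, not the paper's design.

```python
import numpy as np

# Minimal sketch of a 100 Hz fusion loop of the kind such an estimator
# runs; the constant-velocity model and the gains are illustrative
# assumptions, not the paper's design.
DT = 0.01                 # 100 Hz update period (s)
K_POS, K_VEL = 0.4, 2.0   # hypothetical correction gains

def predict(pos, vel):
    """Propagate the constant-velocity motion model one step."""
    return pos + vel * DT, vel

def correct(pos, vel, pos_meas):
    """Blend a vision-derived position fix into the state."""
    innov = pos_meas - pos
    return pos + K_POS * innov, vel + K_VEL * innov

pos = vel = np.zeros(3)
for pos_meas in 0.01 * np.random.randn(100, 3):   # stand-in vision fixes
    pos, vel = predict(pos, vel)
    pos, vel = correct(pos, vel, pos_meas)
```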
Abstract:
The future emergence of many types of airborne vehicles and unpiloted aircraft in the national airspace means collision avoidance is of primary concern in an uncooperative airspace environment. The ability to replicate a pilot's see-and-avoid capability using cameras coupled with vision-based avoidance control is an important part of an overall collision avoidance strategy. Unfortunately, without range information, collision avoidance has no direct way to guarantee a level of safety. Collision scenario flight tests with two aircraft and a monocular camera threat detection and tracking system were used to study the accuracy of image-derived angle measurements. The effect of image-derived angle errors on reactive vision-based avoidance performance was then studied by simulation. The results show that whilst large angle measurement errors can significantly affect minimum ranging characteristics across a variety of initial conditions and closing speeds, the minimum range is always bounded and a collision never occurs.
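The effect of bearing noise on minimum range can be probed with a toy simulation like the following sketch; the geometry, speeds, gain and noise levels are illustrative assumptions, not the flight-test configuration.

```python
import numpy as np

# Toy 2D head-on encounter: the avoiding aircraft turns away from the
# measured (noisy) bearing to the threat. All parameters are
# illustrative assumptions, not the flight-test setup.
def min_range(angle_noise_deg, dt=0.1, steps=500):
    own, hdg, v_own = np.zeros(2), 0.0, 30.0
    threat, v_threat = np.array([1500.0, 10.0]), np.array([-30.0, 0.0])
    closest = np.inf
    for _ in range(steps):
        rel = threat - own
        closest = min(closest, float(np.linalg.norm(rel)))
        bearing = np.arctan2(rel[1], rel[0]) - hdg
        bearing += np.deg2rad(angle_noise_deg) * np.random.randn()
        hdg -= 0.2 * np.sign(bearing) * dt          # turn away from threat
        own = own + v_own * np.array([np.cos(hdg), np.sin(hdg)]) * dt
        threat = threat + v_threat * dt
    return closest

print(min_range(0.0), min_range(10.0))   # clean vs noisy bearings
```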
Abstract:
Purpose: Photoreceptor interactions reduce the temporal bandwidth of the visual system under mesopic illumination, but the dynamics of these interactions are not clear. This study investigated cone-cone and rod-cone interactions when the rod (R) and three cone (L, M, S) photoreceptor classes contribute to vision via shared post-receptoral pathways. Methods: A four-primary photostimulator independently controlled photoreceptor activity in human observers. To determine the temporal dynamics of receptoral (L, S, R) and post-receptoral (LMS, LMSR, +L-M) pathways (5 Td, 7° eccentricity) in Experiment 1, ON-pathway sensitivity was assayed with an incremental probe (25 ms) presented relative to the onset of an incremental sawtooth conditioning pulse (1000 ms). To define the post-receptoral pathways mediating the rod stimulus, Experiment 2 matched the color appearance of increased rod activation (30% contrast, 25-1000 ms; constant cone excitation) with cone stimuli (variable L+M, L/L+M, S/L+M; constant rod excitation). Results: Cone-cone interactions with luminance stimuli (LMS, LMSR, L-cone) reduced Weber contrast sensitivity by 13%, and the time course of adaptation was 23.7 ± 1 ms (μ ± SE). With chromatic stimuli (+L-M, S), cone pathway sensitivity was also reduced and recovery was slower (+L-M: 8%, 2.9 ± 0.1 ms; S: 38%, 1.5 ± 0.3 ms). Threshold patterns at ON-conditioning pulse onset were monophasic for luminance stimuli and biphasic for chromatic stimuli. Rod-rod interactions increased sensitivity (19%) with a recovery time of 0.7 ± 0.2 ms. Compared to cone-cone interactions, rod-cone interactions with luminance stimuli reduced sensitivity to a lesser degree (5%) with faster recovery (42.9 ± 0.7 ms). Rod-cone interactions were absent with chromatic stimuli. Experiment 2 showed that rod activation generated luminance (L+M) signals at all durations, and chromatic signals (L/L+M, S/L+M) for durations >75 ms. Conclusions: The temporal dynamics of cone-cone interactions are consistent with contrast sensitivity loss in the MC pathway for luminance stimuli and chromatically opponent responses in the PC and KC pathways with chromatic stimuli. Rod-cone interactions limit contrast sensitivity loss during dynamic illumination changes and increase the speed of mesopic light adaptation. The change in the relative weighting of the temporal rod signal within the major post-receptoral pathways modifies the sensitivity and dynamics of photoreceptor interactions.
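For reference, the Weber contrast underlying these sensitivity figures is the standard definition (the notation here is mine, not the authors'):

```latex
% Weber contrast of a probe of luminance L on a background L_b, and
% sensitivity as the reciprocal of contrast at threshold:
C_W = \frac{L - L_b}{L_b}, \qquad S = \frac{1}{C_{W,\text{threshold}}}
```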
Abstract:
This case study report describes the stages involved in the translation of research on night-time visibility into standards for the safety clothing worn by roadworkers. Vision research demonstrates that when lights are placed on the moveable joints of the body and the person moves in a dark setting, the phenomenon known as "biological motion" or "biomotion" occurs, enabling rapid and accurate recognition of the human form even though only the lights can be seen. QUT was successful in gaining funding from the Australian Research Council for a Linkage grant, with the support of the predecessors of the Queensland Department of Transport and Main Roads (TMR), to research the biomotion effect in on-road settings using materials that feature in roadworker clothing. Although positive results were gained, the process of translating the research results into policy, practices and standards relied strongly on the supportive efforts of TMR staff engaged in the review and promulgation of national standards. The ultimate result was the incorporation of biomotion marking into AS/NZS 4602.1:2011. The experiences gained in this case provide insights into the processes involved in translating research into practice.
Abstract:
This work presents a collision avoidance approach based on omnidirectional cameras that does not require the estimation of range between two platforms to resolve a collision encounter. Our method achieves minimum separation between the two vehicles involved by maximising the view angle given by the omnidirectional sensor. Only visual information is used to achieve avoidance under a bearing-only visual servoing approach. We provide a theoretical problem formulation, as well as results from real flights using small quadrotors.
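A minimal sketch of a bearing-only steering law in this spirit follows; the gain and the unicycle kinematics are assumptions for illustration, not the paper's controller.

```python
import numpy as np

# Bearing-only avoidance sketch: command a turn that pushes the view
# angle between our heading and the line of sight to the other vehicle
# toward +/- pi, increasing separation without ever estimating range.
# The gain K is a hypothetical value.
K = 0.8

def heading_rate(own_heading, bearing_to_other):
    """Turn rate that drives the view angle toward its maximum."""
    view = np.arctan2(np.sin(bearing_to_other - own_heading),
                      np.cos(bearing_to_other - own_heading))
    return -K * np.sign(view) * (np.pi - abs(view))
```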
Abstract:
This paper presents a mapping and navigation system for a mobile robot, which uses vision as its sole sensor modality. The system enables the robot to navigate autonomously, plan paths and avoid obstacles using a vision-based topometric map of its environment. The map consists of a globally consistent pose-graph with a local 3D point cloud attached to each of its nodes. These point clouds are used for direction-independent loop closure and to dynamically generate 2D metric maps for locally optimal path planning. Using this locally semi-continuous metric space, the robot performs shortest path planning instead of following the nodes of the graph, as is done with most other vision-only navigation approaches. The system exploits the local accuracy of visual odometry in creating local metric maps, and uses pose-graph SLAM, visual appearance-based place recognition and point cloud registration to create the topometric map. The ability of the framework to sustain vision-only navigation is validated experimentally, and the system is provided as open-source software.
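The data structure described, a pose-graph node carrying a local point cloud, plus its projection to a 2D grid for local planning, can be sketched as below; the names and the grid scheme are mine, not the released software's API.

```python
from dataclasses import dataclass, field
import numpy as np

# Sketch of a topometric map node: a global pose, a local 3D point
# cloud, and graph edges to neighbouring nodes.
@dataclass
class MapNode:
    pose: np.ndarray                              # 4x4 global pose (SE3)
    cloud: np.ndarray                             # Nx3 local point cloud
    edges: list = field(default_factory=list)     # neighbouring node ids

def local_metric_map(node: MapNode, cell: float = 0.05) -> np.ndarray:
    """Project the node's point cloud to a 2D occupancy grid for
    locally optimal path planning."""
    xy = node.cloud[:, :2]
    ij = np.floor((xy - xy.min(axis=0)) / cell).astype(int)
    grid = np.zeros(ij.max(axis=0) + 1, dtype=bool)
    grid[ij[:, 0], ij[:, 1]] = True
    return grid
```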
Abstract:
Monitoring and estimation of marine populations is of paramount importance for the conservation and management of sea species. Regular surveys are used for this purpose, often followed by a manual counting process. This paper proposes an algorithm for the automatic detection of dugongs in imagery taken during aerial surveys. Our algorithm exploits the fact that dugongs are rare in most images; we therefore determine regions of interest partially based on color rarity. This simple observation makes the system robust to changes in illumination. We also show that by applying the extended-maxima transform to red-ratio images, submerged dugongs with very fuzzy edges can be detected. The performance figures obtained here are promising in terms of the degree of confidence in the detection of marine species, but more importantly our approach represents a significant step towards automating this type of survey.
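A minimal sketch of this detection idea follows, using skimage's h_maxima as a stand-in for MATLAB's imextendedmax; the threshold h and the candidate-extraction details are my assumptions, not the paper's pipeline.

```python
import numpy as np
from skimage.measure import label, regionprops
from skimage.morphology import h_maxima

# Red-ratio image: an illumination-tolerant cue that makes reddish
# dugong blobs stand out; the extended-maxima transform then recovers
# fuzzy-edged submerged animals. h=0.05 is an illustrative threshold.
def detect_candidates(rgb, h=0.05):
    rgb = rgb.astype(float) + 1e-6
    red_ratio = rgb[..., 0] / rgb.sum(axis=-1)   # normalised red channel
    maxima = h_maxima(red_ratio, h)              # extended-maxima transform
    return [region.centroid for region in regionprops(label(maxima))]
```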
Abstract:
In this paper we use the SeqSLAM algorithm to address the question of how little visual information, and of what quality, is needed to localize along a familiar route. We conduct a comprehensive investigation of place recognition performance on seven datasets while varying image resolution (primarily 1 to 512 pixel images), pixel bit depth, field of view, motion blur, image compression and matching sequence length. The results confirm that place recognition using single images or short image sequences is poor, but improves to match or exceed current benchmarks as the matching sequence length increases. We then present place recognition results from two experiments where low-quality imagery is directly caused by sensor limitations: in one, place recognition is achieved along an unlit mountain road using noisy, long-exposure blurred images, and in the other, two single-pixel light sensors are used to localize in an indoor environment. We also show failure modes caused by pose variance and sequence aliasing, and discuss ways in which they may be overcome. By showing how place recognition along a route is feasible even with severely degraded image sequences, we hope to provoke a re-examination of how we develop and test future localization and mapping systems.
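The sequence-matching core of SeqSLAM, reduced to its simplest form, can be sketched as below; patch normalisation and the velocity search of the full algorithm are omitted, and the parameters are illustrative.

```python
import numpy as np

# Slide a short query sequence of tiny images along the reference route
# and pick the offset with the smallest summed absolute difference.
# Single images match poorly; longer sequences disambiguate.
def match_route(ref, query):
    """ref: (R, H, W) route images; query: (Q, H, W) with Q << R.
    Returns the reference index where the query sequence aligns best."""
    R, Q = len(ref), len(query)
    scores = [np.abs(ref[i:i + Q] - query).sum() for i in range(R - Q + 1)]
    return int(np.argmin(scores))

ref = np.random.rand(200, 8, 8)                         # synthetic route
query = ref[120:130] + 0.1 * np.random.rand(10, 8, 8)   # degraded revisit
print(match_route(ref, query))                          # ~120
```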
Abstract:
How can we reach out to institutions, artists and audiences with sometimes radically different agendas to encourage them to see, participate in and support the development of new practices and programs in the performing arts? In this paper, based on a plenary panel at PSi#18 Performance Culture Industry at the University of Leeds, Clarissa Ruiz (Colombia), Anuradha Kapur (India) and Sheena Wrigley (England), together with interlocutor Bree Hadley (Australia), speak about their work as policy-makers, managers and producers in the performing arts in Europe, Asia and America over the past several decades. Acknowledged trailblazers in their fields, Ruiz, Kapur and Wrigley all have a commitment to creating vital, viable and sustainable performing arts ecologies. Each has extensive experience in performance, politics, and the challenging process of managing histories, visions, stakeholders, and sometimes scarce resources to generate lasting benefits for the various communities they have worked for, with and within. Their work cultivating new initiatives, programs and policy has made them expert at brokering relationships in and between private, public and political spheres to elevate the status of and support for the performing arts as a socially and economically beneficial activity everyone can participate in. Each gives examples from their own practice to provide insight into how to negotiate the interests of artistic, government, corporate, community and education partners, and the interests of audiences, to create aesthetic, cultural and/or economic value. Together, their views offer a compelling set of perspectives on the changing meanings of the 'value of the arts' and the effects this has had on the artists that make and the arts organisations that produce and present work in a range of different regional, national and cross-national contexts.
Abstract:
In this paper, we present a monocular vision-based autonomous navigation system for Micro Aerial Vehicles (MAVs) in GPS-denied environments. The major drawback of monocular systems is that the depth scale of the scene cannot be determined without prior knowledge or other sensors. To address this problem, we minimize a cost function consisting of a drift-free altitude measurement and an up-to-scale position estimate obtained using the visual sensor. We evaluate the scale estimator, state estimator and controller performance by comparing with ground truth data acquired using a motion capture system. All resources including source code, tutorial documentation and system models are available online.
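The scale-estimation idea can be sketched as a linear least-squares fit mapping the up-to-scale visual altitude onto the drift-free metric altitude; this closed form stands in for, and is not identical to, the paper's cost-function minimisation.

```python
import numpy as np

# Solve for the scale s (and bias b) such that
# z_altimeter ~ s * z_visual + b, by linear least squares.
def estimate_scale(z_visual, z_altimeter):
    A = np.column_stack([z_visual, np.ones_like(z_visual)])
    (s, b), *_ = np.linalg.lstsq(A, z_altimeter, rcond=None)
    return s, b

z_vis = np.array([0.10, 0.22, 0.35, 0.47])   # up-to-scale visual altitude
z_alt = 2.1 * z_vis + 0.3                    # synthetic metric altitude
print(estimate_scale(z_vis, z_alt))          # recovers (2.1, 0.3)
```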
Abstract:
The problem of estimating pseudobearing rate information of an airborne target based on measurements from a vision sensor is considered. Novel image speed and heading angle estimators are presented that exploit image morphology, hidden Markov model (HMM) filtering, and relative entropy rate (RER) concepts to allow pseudobearing rate information to be determined before (or whilst) the target track is being estimated from vision information.
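A minimal HMM forward filter over discretised bearing bins, the kind of machinery such estimators build on, is sketched below; the drift transition model and the observation likelihood are placeholders, not the paper's models.

```python
import numpy as np

# Transition matrix: the bearing bin mostly stays put, with a small
# probability of drifting to either neighbour (a slow random walk).
N = 90                                       # number of bearing bins
T = (0.90 * np.eye(N)
     + 0.05 * np.roll(np.eye(N), 1, axis=1)
     + 0.05 * np.roll(np.eye(N), -1, axis=1))

def forward_step(belief, likelihood):
    """One HMM filtering update: predict with T, reweight by the
    per-bin observation likelihood, renormalise."""
    belief = (T.T @ belief) * likelihood
    return belief / belief.sum()

belief = np.full(N, 1.0 / N)                 # uniform prior over bearings
obs = np.exp(-0.5 * ((np.arange(N) - 40) / 3.0) ** 2)   # fake likelihood
belief = forward_step(belief, obs)
```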
Abstract:
Executive Summary
This project has commenced an exploration of learning and information experiences in the QUT Cube. Understanding learning in this environment has the potential to inform current implementations and future project development. In this report, we present early findings from the first phase of an investigation into what makes learning possible in the context of a giant interactive multi-media display such as the QUT Cube, which is an award-winning configuration that hosts several projects.
Abstract:
Next-generation autonomous underwater vehicles (AUVs) will be required to robustly identify underwater targets for tasks such as inspection, localization, and docking. Given their often unstructured operating environments, vision offers enormous potential for underwater navigation over more traditional methods; however, unreliable target segmentation often plagues these systems. This paper addresses robust vision-based target recognition by presenting a novel scale- and rotationally invariant target design and recognition routine based on self-similar landmarks that enables robust target pose estimation with respect to a single camera. These algorithms are applied to an AUV with controllers developed for vision-based docking with the target. Experimental results show that the system performs exceptionally well on limited processing power and demonstrate how the combined vision and controller system enables robust target identification and docking in a variety of operating conditions.
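Why a self-similar landmark yields scale-invariant detection can be shown in a few lines: a log-periodic profile p(x) satisfies p(2x) = p(x), so its appearance is unchanged when the camera range halves or doubles. The profile below is illustrative, not the paper's actual target design.

```python
import numpy as np

# A log-periodic profile with similarity ratio s = 0.5: scaling x by any
# power of two shifts the phase by a whole period, leaving p unchanged.
def profile(x, s=0.5):
    return np.sin(2 * np.pi * np.log(x) / np.log(s))

x = np.linspace(0.01, 1.0, 500)
assert np.allclose(profile(x), profile(2 * x))   # invariant to a 2x zoom
```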
Abstract:
Purpose. To compare the on-road driving performance of visually impaired drivers using bioptic telescopes with age-matched controls. Methods. Participants included 23 persons (mean age = 33 ± 12 years) with visual acuity of 20/63 to 20/200 who were legally licensed to drive through a state bioptic driving program, and 23 visually normal age-matched controls (mean age = 33 ± 12 years). On-road driving was assessed in an instrumented dual-brake vehicle along 14.6 miles of city, suburban, and controlled-access highways. Two backseat evaluators independently rated driving performance using a standardized scoring system. Vehicle control was assessed through vehicle instrumentation, and video recordings were used to evaluate head movements, lane-keeping, pedestrian detection, and frequency of bioptic telescope use. Results. Ninety-six percent (22/23) of bioptic drivers and 100% (23/23) of controls were rated as safe to drive by the evaluators. There were no group differences for pedestrian detection, or for ratings of scanning, speed, gap judgments, braking, indicator use, or obeying signs/signals. Bioptic drivers received worse ratings than controls for lane position and steering steadiness and had lower rates of correct sign and traffic signal recognition. Bioptic drivers made significantly more right head movements, drove more often over the right-hand lane marking, and exhibited more sudden braking than controls. Conclusions. Drivers with central vision loss who are licensed to drive through a bioptic driving program can display proficient on-road driving skills. This raises questions regarding the validity of denying such drivers a license without the opportunity to train with a bioptic telescope and undergo on-road evaluation.