184 results for Vision Disparity
Abstract:
This paper discusses the findings of a research study that used semi-structured interviews to explore the views of primary school principals on inclusive education in New South Wales, Australia. Content analysis of the transcript data indicates that principals’ attitudes towards inclusive education and their success in engineering inclusive practices within their school are significantly affected by their own conception of what “inclusion” means, as well as the characteristics of the school community, and the attitudes and capacity of staff. In what follows, we present two parallel conversations that arose from the interview data to illustrate the main conceptual divisions existing between our participants’ conceptions of inclusion. First, we discuss the act of “being inclusive” which was perceived mainly as an issue of culture and pedagogy. Second, we consider the mechanics of “including,” which reflected a more instrumentalist position based on perceptions of individual student deficit, the level of support they may require and the amount of funding they can attract.
Abstract:
This paper introduces a high-speed, 100 Hz, vision-based state estimator that is suitable for quadrotor control in close-quarters manoeuvring applications. We describe the hardware and algorithms for estimating the state of the quadrotor. Experimental results for the position, velocity and yaw angle estimators are presented and compared with motion capture data. A quantitative performance comparison with state-of-the-art results is also presented.
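Illustrative sketch (not the paper's algorithm): a constant-velocity Kalman filter fed by 100 Hz vision-derived position fixes, showing the kind of fusion such a state estimator performs. All noise values and variable names below are assumptions.

```python
import numpy as np

# Minimal constant-velocity Kalman filter along one axis, updated at 100 Hz
# from vision-derived position fixes. Noise values are illustrative only.
dt = 0.01                                # 100 Hz update period
F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition for [position, velocity]
H = np.array([[1.0, 0.0]])               # we observe position only
Q = np.diag([1e-4, 1e-2])                # process noise (assumed)
R = np.array([[1e-3]])                   # vision measurement noise (assumed)

x = np.zeros((2, 1))                     # initial state [p, v]
P = np.eye(2)                            # initial covariance

def step(x, P, z):
    """One predict/update cycle given a vision position measurement z (metres)."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = np.array([[z]]) - H @ x          # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

for z in [0.00, 0.01, 0.021, 0.029, 0.041]:   # synthetic position fixes
    x, P = step(x, P, z)
print("estimated position %.3f m, velocity %.3f m/s" % (x[0, 0], x[1, 0]))
```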
Abstract:
The future emergence of many types of airborne vehicles and unpiloted aircraft in the national airspace means that collision avoidance is of primary concern in an uncooperative airspace environment. The ability to replicate a pilot's see-and-avoid capability using cameras coupled with vision-based avoidance control is an important part of an overall collision avoidance strategy. Unfortunately, without range information, collision avoidance has no direct way to guarantee a level of safety. Collision scenario flight tests with two aircraft and a monocular-camera threat detection and tracking system were used to study the accuracy of image-derived angle measurements. The effect of image-derived angle errors on reactive vision-based avoidance performance was then studied by simulation. The results show that whilst large angle measurement errors can significantly affect minimum ranging characteristics across a variety of initial conditions and closing speeds, the minimum range is always bounded and a collision never occurs.
Abstract:
This case study report describes the stages involved in the translation of research on night-time visibility into standards for the safety clothing worn by roadworkers. Vision research demonstrates that when lights are placed on the moveable joints of the body and the person moves in a dark setting, the phenomenon known as "biological motion" or "biomotion" occurs, enabling rapid and accurate recognition of the human form even though only the lights can be seen. QUT was successful in gaining Australian Research Council Linkage grant funding, with the support of the predecessors of the Queensland Department of Transport and Main Roads (TMR), to research the biomotion effect in on-road settings using materials that feature in roadworker clothing. Although positive results were gained, the process of translating the research results into policy, practices and standards relied strongly on the supportive efforts of TMR staff engaged in the review and promulgation of national standards. The ultimate result was the incorporation of biomotion marking into AS/NZS 4602.1:2011. The experiences gained in this case provide insights into the processes involved in translating research into practice.
Abstract:
This work presents a collision avoidance approach based on omnidirectional cameras that does not require the estimation of range between two platforms to resolve a collision encounter. Our method achieves minimum separation between the two vehicles involved by maximising the view angle given by the omnidirectional sensor. Only visual information is used to achieve avoidance under a bearing-only visual servoing approach. We provide the theoretical problem formulation, as well as results from real flights using small quadrotors.
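Illustrative sketch (not the authors' controller): a toy bearing-only avoidance law that commands a yaw rate pushing the measured bearing to the other vehicle away from dead ahead, i.e. increasing the view angle. The gain, sign convention and saturation are assumptions.

```python
import math

def avoidance_yaw_rate(bearing_rad, k=1.0, max_rate=1.0):
    """Toy bearing-only avoidance law.  bearing_rad is the angle to the other
    vehicle in our body frame (positive to the left, zero dead ahead).  We
    command a yaw rate that drives |bearing| towards pi, i.e. rotates the
    intruder out of the forward view.  Gain and saturation are assumptions."""
    # Yawing right (negative rate) increases the bearing of a target that is
    # ahead-left; yawing left decreases it.  So steer opposite to the bearing,
    # scaled by how far the intruder still is from directly behind.
    error = math.copysign(math.pi - abs(bearing_rad), -bearing_rad)
    return max(-max_rate, min(max_rate, k * error))

print(avoidance_yaw_rate(0.1))    # intruder just left of ahead   -> saturated right turn
print(avoidance_yaw_rate(-2.5))   # intruder already behind-right -> small left correction
```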
Abstract:
This paper presents a mapping and navigation system for a mobile robot, which uses vision as its sole sensor modality. The system enables the robot to navigate autonomously, plan paths and avoid obstacles using a vision-based topometric map of its environment. The map consists of a globally-consistent pose-graph with a local 3D point cloud attached to each of its nodes. These point clouds are used for direction-independent loop closure and to dynamically generate 2D metric maps for locally optimal path planning. Using this locally semi-continuous metric space, the robot performs shortest path planning instead of following the nodes of the graph, as is done with most other vision-only navigation approaches. The system exploits the local accuracy of visual odometry in creating local metric maps, and uses pose graph SLAM, visual appearance-based place recognition and point cloud registration to create the topometric map. The ability of the framework to sustain vision-only navigation is validated experimentally, and the system is provided as open-source software.
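Illustrative sketch (not the released software's API): a minimal topometric map structure, i.e. a pose graph whose nodes carry local point clouds, with Dijkstra shortest-path planning over the graph edges. The metric shortcuts through the local clouds described in the abstract are omitted.

```python
import heapq
import numpy as np

class TopometricMap:
    """Toy topometric map: a pose graph whose nodes each carry a local 3D
    point cloud.  Illustrative only; not the API of the released system."""

    def __init__(self):
        self.poses = {}        # node id -> 4x4 pose (globally consistent)
        self.clouds = {}       # node id -> Nx3 local point cloud
        self.edges = {}        # node id -> list of (neighbour id, cost)

    def add_node(self, nid, pose, cloud):
        self.poses[nid] = pose
        self.clouds[nid] = cloud
        self.edges.setdefault(nid, [])

    def add_edge(self, a, b, cost):
        self.edges[a].append((b, cost))
        self.edges[b].append((a, cost))

    def shortest_path(self, start, goal):
        """Dijkstra over the pose graph edges."""
        dist, prev, pq = {start: 0.0}, {}, [(0.0, start)]
        while pq:
            d, u = heapq.heappop(pq)
            if u == goal:
                break
            if d > dist.get(u, float("inf")):
                continue
            for v, w in self.edges[u]:
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(pq, (nd, v))
        path = [goal]
        while path[-1] != start:
            path.append(prev[path[-1]])
        return list(reversed(path))

m = TopometricMap()
for i in range(4):
    m.add_node(i, np.eye(4), np.zeros((0, 3)))
m.add_edge(0, 1, 1.0); m.add_edge(1, 2, 1.0); m.add_edge(2, 3, 1.0); m.add_edge(0, 3, 5.0)
print(m.shortest_path(0, 3))   # -> [0, 1, 2, 3]
```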
Abstract:
In this paper we use the SeqSLAM algorithm to address the question of how little visual information, and of what quality, is needed to localize along a familiar route. We conduct a comprehensive investigation of place recognition performance on seven datasets while varying image resolution (primarily 1 to 512 pixel images), pixel bit depth, field of view, motion blur, image compression and matching sequence length. Results confirm that place recognition using single images or short image sequences is poor, but improves to match or exceed current benchmarks as the matching sequence length increases. We then present place recognition results from two experiments where low-quality imagery is directly caused by sensor limitations; in one, place recognition is achieved along an unlit mountain road using noisy, long-exposure blurred images, and in the other, two single-pixel light sensors are used to localize in an indoor environment. We also show failure modes caused by pose variance and sequence aliasing, and discuss ways in which they may be overcome. By showing how place recognition along a route is feasible even with severely degraded image sequences, we hope to provoke a re-examination of how we develop and test future localization and mapping systems.
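Illustrative sketch (not the full SeqSLAM algorithm): the core sequence-matching idea, in which tiny thumbnail images are compared in a difference matrix and a query sequence is matched by summing costs along a constant-velocity diagonal. Local contrast enhancement and velocity search are omitted, and all names are assumptions.

```python
import numpy as np

def match_sequence(ref_imgs, query_imgs):
    """Toy sequence matcher in the spirit of SeqSLAM: images are tiny
    thumbnails, and a query *sequence* is matched by summing image
    differences along a constant-velocity diagonal of the difference
    matrix.  Contrast enhancement and velocity search are omitted."""
    ref = np.array([i.ravel() for i in ref_imgs], dtype=float)
    qry = np.array([i.ravel() for i in query_imgs], dtype=float)
    # Difference matrix D[i, j] = mean absolute difference(ref i, query j).
    D = np.abs(ref[:, None, :] - qry[None, :, :]).mean(axis=2)
    ds = len(query_imgs)                      # matching sequence length
    # Score each reference start index by the cost of the aligned diagonal.
    scores = [D[s:s + ds, :].diagonal().sum()
              for s in range(len(ref_imgs) - ds + 1)]
    return int(np.argmin(scores))             # best-matching start index

rng = np.random.default_rng(0)
route = rng.random((50, 8, 8))                # 50 reference "places", 8x8 pixels
query = route[20:25] + 0.05 * rng.random((5, 8, 8))   # revisit places 20..24, noisy
print(match_sequence(route, query))           # expected: 20
```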
Abstract:
How can we reach out to institutions, artists and audiences with sometimes radically different agendas to encourage them to see, participate in and support the development of new practices and programs in the performing arts? In this paper, based on a plenary panel at PSi#18 Performance Culture Industry at the University of Leeds, Clarissa Ruiz (Colombia), Anuradha Kapur (India) and Sheena Wrigley (England), together with interlocutor Bree Hadley (Australia), speak about their work as policy-makers, managers and producers in the performing arts in Europe, Asia and America over the past several decades. Acknowledged trailblazers in their fields, Ruiz, Kapur and Wrigley all have a commitment to creating vital, viable and sustainable performing arts ecologies. Each has extensive experience in performance, politics, and the challenging process of managing histories, visions, stakeholders, and sometimes scarce resources to generate lasting benefits for the various communities they have worked for, with and within. Their work cultivating new initiatives, programs or policy has made them expert at brokering relationships in and between private, public and political spheres to elevate the status of, and support for, the performing arts as a socially and economically beneficial activity everyone can participate in. Each gives examples from their own practice to provide insight into how to negotiate the interests of artistic, government, corporate, community and education partners, and the interests of audiences, to create aesthetic, cultural and/or economic value. Together, their views offer a compelling set of perspectives on the changing meanings of the 'value of the arts' and the effects this has had on the artists that make, and the arts organisations that produce and present, work in a range of different regional, national and cross-national contexts.
Abstract:
In this paper, we present a monocular vision-based autonomous navigation system for Micro Aerial Vehicles (MAVs) in GPS-denied environments. The major drawback of monocular systems is that the depth scale of the scene cannot be determined without prior knowledge or other sensors. To address this problem, we minimize a cost function consisting of a drift-free altitude measurement and an up-to-scale position estimate obtained using the visual sensor. We evaluate the scale estimator, state estimator and controller performance by comparing with ground truth data acquired using a motion capture system. All resources, including source code, tutorial documentation and system models, are available online.
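Illustrative sketch (the paper's estimator may be formulated differently): a closed-form least-squares scale that maps an up-to-scale visual altitude onto a drift-free metric altitude measurement, the kind of cost being minimised in the abstract.

```python
import numpy as np

def estimate_scale(altitude, visual_up_to_scale):
    """Least-squares scale that best maps the up-to-scale visual altitude
    onto the drift-free metric altitude measurement:
        lambda* = argmin_lambda sum_t (z_alt[t] - lambda * z_vis[t])**2
                = sum(z_alt * z_vis) / sum(z_vis**2)
    Illustrative only; not necessarily the paper's formulation."""
    z_alt = np.asarray(altitude, dtype=float)
    z_vis = np.asarray(visual_up_to_scale, dtype=float)
    return float(np.dot(z_alt, z_vis) / np.dot(z_vis, z_vis))

# Synthetic example: the visual estimate reports altitude in arbitrary units
# that are actually 0.4 m per unit, plus noise; the altimeter gives metres.
true_scale = 0.4
z_vis = np.linspace(1.0, 10.0, 50)
z_alt = true_scale * z_vis + np.random.default_rng(1).normal(0, 0.02, 50)
print(round(estimate_scale(z_alt, z_vis), 3))   # close to 0.4
```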
Abstract:
AIM: Zhi Zhu Wan (ZZW) is a classical Chinese medical formulation used for the treatment of functional dyspepsia attributed to Spleen-deficiency Syndrome. ZZW contains Atractylodes Rhizome and Fructus Citrus Immaturus, the latter originating from either Citrus aurantium L. (BZZW) or Citrus sinensis Osbeck (RZZW). The present study was designed to elucidate disparities in the clinical efficacy of the two ZZW varieties based on the pharmacokinetics of naringenin and hesperetin. METHOD: After oral administration of the ZZWs, blood samples were collected from healthy volunteers at designated time points. Naringenin and hesperetin were detected in plasma by RP-HPLC, and pharmacokinetic parameters were derived using model-independent methods with WinNonlin. RESULTS: After oral administration of BZZW, both naringenin and hesperetin were detected in plasma and showed similar pharmacokinetic parameters: Ka was 0.384 ± 0.165 and 0.401 ± 0.159, T(1/2(ke)) (h) was 5.491 ± 3.926 and 5.824 ± 3.067, and AUC (mg/L·h) was 34.886 ± 22.199 and 39.407 ± 19.535 for naringenin and hesperetin, respectively. In the case of RZZW, however, only hesperetin was found in plasma, and its pharmacokinetic properties differed from those in BZZW: T(max) for hesperetin in RZZW was about 8.515 h, its C(max) was much larger than that of BZZW, and it was eliminated more slowly, with a much larger AUC value. CONCLUSION: The distinct therapeutic orientations of the ZZW formulations prepared with different Fructus Citrus Immaturus could be elucidated from the pharmacokinetic parameters of their constituents after oral administration.
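Illustrative sketch (synthetic data; not the study's WinNonlin analysis): the basic model-independent parameters cited above, Cmax, Tmax and AUC by the linear trapezoidal rule, computed from a plasma concentration-time profile.

```python
import numpy as np

def noncompartmental(t, c):
    """Basic model-independent (non-compartmental) parameters from a plasma
    concentration-time profile: Cmax, Tmax and AUC(0-t) by the linear
    trapezoidal rule.  Synthetic illustration only."""
    t = np.asarray(t, dtype=float)
    c = np.asarray(c, dtype=float)
    auc = np.trapz(c, t)                    # linear trapezoidal AUC(0-t), mg/L·h
    return {"Cmax": float(c.max()),         # mg/L
            "Tmax": float(t[c.argmax()]),   # h
            "AUC":  float(auc)}

# Hypothetical sampling times (h) and plasma concentrations (mg/L).
times = [0, 0.5, 1, 2, 4, 8, 12, 24]
conc  = [0.0, 0.8, 1.5, 2.1, 1.6, 0.9, 0.5, 0.1]
print(noncompartmental(times, conc))
```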
Abstract:
The problem of estimating pseudobearing rate information of an airborne target based on measurements from a vision sensor is considered. Novel image speed and heading angle estimators are presented that exploit image morphology, hidden Markov model (HMM) filtering, and relative entropy rate (RER) concepts to allow pseudobearing rate information to be determined before (or whilst) the target track is being estimated from vision information.
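Illustrative sketch (not the paper's HMM/RER formulation): a discrete HMM (grid) filter over quantised target bearing, from which an image-plane bearing rate is read off as the change in the posterior mean between frames. The bin count, transition model and detection noise are assumptions.

```python
import numpy as np

def hmm_filter_step(belief, A, likelihood):
    """One forward step of a discrete HMM (grid) filter: predict the target's
    quantised bearing with transition matrix A, then update with the current
    measurement likelihood.  Toy illustration only."""
    predicted = A.T @ belief
    posterior = predicted * likelihood
    return posterior / posterior.sum()

n = 90                                       # bearing bins across the image
bins = np.arange(n)

# Transition model: the target's bearing stays put or drifts by one bin.
A = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i, i + 1):
        if 0 <= j < n:
            A[i, j] = 1.0
A /= A.sum(axis=1, keepdims=True)

def detection_likelihood(detected_bin, sigma=2.0):
    return np.exp(-0.5 * ((bins - detected_bin) / sigma) ** 2)

belief = detection_likelihood(40)            # initialise on the first detection
belief /= belief.sum()
prev_mean = float(bins @ belief)
for detected_bin in [41, 41, 42, 43]:        # noisy detections drifting right
    belief = hmm_filter_step(belief, A, detection_likelihood(detected_bin))
    mean = float(bins @ belief)
    print("bearing rate estimate: %+.2f bins/frame" % (mean - prev_mean))
    prev_mean = mean
```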
Abstract:
Executive Summary: This project has commenced an exploration of learning and information experiences in the QUT Cube. Understanding learning in this environment has the potential to inform current implementations and future project development. In this report, we present early findings from the first phase of an investigation into what makes learning possible in the context of a giant interactive multimedia display such as the QUT Cube, an award-winning configuration that hosts several projects.
Abstract:
Next-generation autonomous underwater vehicles (AUVs) will be required to robustly identify underwater targets for tasks such as inspection, localization, and docking. Given their often unstructured operating environments, vision offers enormous potential for underwater navigation over more traditional methods; however, unreliable target segmentation often plagues these systems. This paper addresses robust vision-based target recognition by presenting a novel scale- and rotation-invariant target design and a recognition routine based on self-similar landmarks that enables robust target pose estimation with respect to a single camera. These algorithms are applied to an AUV with controllers developed for vision-based docking with the target. Experimental results show that the system performs exceptionally well on limited processing power and demonstrate how the combined vision and controller system enables robust target identification and docking in a variety of operating conditions.
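Illustrative sketch (not the paper's self-similar landmark routine): the standard pinhole relation that any single-camera range and bearing estimate of a known-size target ultimately rests on. All numbers below are hypothetical.

```python
import math

def range_and_bearing(target_width_m, width_px, centre_px, fx, cx):
    """Range and bearing to a planar target of known physical width seen by a
    calibrated pinhole camera:
        range   ~ fx * W / w_px
        bearing = atan2(u - cx, fx)
    This is only the generic projection relation, not the paper's detection
    and pose routine."""
    rng = fx * target_width_m / width_px
    bearing = math.atan2(centre_px - cx, fx)
    return rng, bearing

# Hypothetical numbers: a 0.5 m wide target imaged 100 px wide, centred 80 px
# right of the principal point, with a focal length of 800 px.
print(range_and_bearing(0.5, 100.0, 480.0, fx=800.0, cx=400.0))   # ~ (4.0 m, 0.10 rad)
```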
Abstract:
Purpose. To compare the on-road driving performance of visually impaired drivers using bioptic telescopes with that of age-matched controls. Methods. Participants included 23 persons (mean age = 33 ± 12 years) with visual acuity of 20/63 to 20/200 who were legally licensed to drive through a state bioptic driving program, and 23 visually normal age-matched controls (mean age = 33 ± 12 years). On-road driving was assessed in an instrumented dual-brake vehicle along 14.6 miles of city, suburban, and controlled-access highways. Two backseat evaluators independently rated driving performance using a standardized scoring system. Vehicle control was assessed through vehicle instrumentation, and video recordings were used to evaluate head movements, lane-keeping, pedestrian detection, and frequency of bioptic telescope use. Results. Ninety-six percent (22/23) of bioptic drivers and 100% (23/23) of controls were rated as safe to drive by the evaluators. There were no group differences for pedestrian detection, or for ratings of scanning, speed, gap judgments, braking, indicator use, or obeying signs/signals. Bioptic drivers received worse ratings than controls for lane position and steering steadiness, and had lower rates of correct sign and traffic signal recognition. Bioptic drivers made significantly more right head movements, drove more often over the right-hand lane marking, and exhibited more sudden braking than controls. Conclusions. Drivers with central vision loss who are licensed to drive through a bioptic driving program can display proficient on-road driving skills. This raises questions regarding the validity of denying such drivers a license without the opportunity to train with a bioptic telescope and undergo an on-road evaluation.