263 results for Robotic vision
Abstract:
Butterflies and primates are interesting for comparative color vision studies, because both have evolved middle- (M) and long-wavelength- (L) sensitive photopigments with overlapping absorbance spectrum maxima (lambda(max) values). Although positive selection is important for the maintenance of spectral variation within the primate pigments, it remains an open question whether it contributes similarly to the diversification of butterfly pigments. To examine this issue, we performed epimicrospectrophotometry on the eyes of five Limenitis butterfly species and found a 31-nm range of variation in the lambda(max) values of the L-sensitive photopigments (514-545 nm). We cloned partial Limenitis L opsin gene sequences and found a significant excess of replacement substitutions relative to polymorphisms among species. Mapping of these L photopigment lambda(max) values onto a phylogeny revealed two instances within Lepidoptera of convergently evolved L photopigment lineages whose lambda(max) values were blue-shifted. A codon-based maximum-likelihood analysis indicated that, associated with the two blue spectral shifts, four amino acid sites (Ile17Met, Ala64Ser, Asn70Ser, and Ser137Ala) have evolved substitutions in parallel and exhibit significant d(N)/d(S) >1. Homology modeling of the full-length Limenitis arthemis astyanax L opsin placed all four substitutions within the chromophore-binding pocket. Strikingly, the Ser137Ala substitution is in the same position as a site that in primates is responsible for a 5- to 7-nm blue spectral shift. Our data show that some of the same amino acid sites are under positive selection in the photopigments of both butterflies and primates, spanning an evolutionary distance >500 million years.
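The d(N)/d(S) statistic reported above can be illustrated with a crude counting version. Note that the paper uses a codon-based maximum-likelihood model, not this shortcut; the function and the numbers in the usage line are illustrative assumptions only:

```python
def dn_ds(n_repl, n_repl_sites, n_syn, n_syn_sites):
    """Crude dN/dS ratio: replacement (nonsynonymous) substitutions per
    replacement site, divided by synonymous substitutions per synonymous
    site. A ratio > 1 is the signature of positive selection discussed in
    the abstract. Illustrative counting version only."""
    dn = n_repl / n_repl_sites  # nonsynonymous substitution rate
    ds = n_syn / n_syn_sites    # synonymous substitution rate
    return dn / ds

# Hypothetical counts: 10 replacement changes over 100 replacement sites,
# 2 synonymous changes over 50 synonymous sites -> dN/dS = 2.5 (> 1)
ratio = dn_ds(10, 100, 2, 50)
```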
Abstract:
A fundamental problem faced by stereo vision algorithms is that of determining correspondences between the two images which comprise a stereo pair. This paper presents work towards the development of a new matching algorithm based on the rank transform. This algorithm makes use of both area-based and edge-based information, and is therefore referred to as a hybrid algorithm. In addition, this algorithm uses a number of matching constraints, including the novel rank constraint. Results obtained using a number of test pairs show that the matching algorithm is capable of removing a significant proportion of invalid matches. The accuracy of matching in the vicinity of edges is also improved.
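A minimal sketch of the rank transform that underlies this style of matching: each pixel is replaced by the count of neighbourhood pixels darker than it, and matching costs are then computed on the transformed images. Window sizes, border handling and the paper's hybrid edge/area machinery and rank constraint are not reproduced here; this is an assumed simplification:

```python
import numpy as np

def rank_transform(img, radius=1):
    """Rank transform: replace each pixel with the number of pixels in its
    (clipped) neighbourhood whose intensity is strictly lower than the
    centre pixel. Robust to radiometric differences between the cameras."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.int32)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            out[y, x] = np.sum(img[y0:y1, x0:x1] < img[y, x])
    return out

def match_cost(left_rank, right_rank, x_l, x_r, y, half=1):
    """Area-based SAD cost between rank-transformed patches at a candidate
    correspondence (x_l in the left image, x_r in the right)."""
    pl = left_rank[y - half:y + half + 1, x_l - half:x_l + half + 1]
    pr = right_rank[y - half:y + half + 1, x_r - half:x_r + half + 1]
    return int(np.abs(pl - pr).sum())
```

A candidate disparity is then chosen by minimising `match_cost` along the same scanline of the right image.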
Abstract:
The aim of children's vision screenings is to detect visual problems that are common in this age category through valid and reliable tests. Nevertheless, the cost effectiveness of paediatric vision screenings, the nature of the tests included in the screening batteries and the ideal screening age have been the cause of much debate in Australia and worldwide. Therefore, the purpose of this review is to report on the current practice of children's vision screenings in Australia and other countries, as well as to evaluate the evidence for and against the provision of such screenings. This was undertaken through a detailed investigation of peer-reviewed publications on this topic. The current review demonstrates that there is no agreed vision screening protocol for children in Australia. This appears to be a result of the lack of strong evidence supporting the benefit of such screenings. While amblyopia, strabismus and, to a lesser extent, refractive error are targeted by many screening programs during pre-school and at school entry, there is less agreement regarding the value of screening for other visual conditions, such as binocular vision disorders, ocular health problems and refractive errors that are less likely to reduce distance visual acuity. In addition, in Australia, little agreement exists on the frequency and coverage of screening programs between states and territories, and the screening programs that are offered are ad hoc and poorly documented. Australian children stand to benefit from improved cohesion and communication between jurisdictions and health professionals to enable an equitable provision of validated vision screening services that have the best chance of early detection and intervention for a range of paediatric visual problems.
Abstract:
The Cross-Entropy (CE) method is an efficient technique for the estimation of rare-event probabilities and for combinatorial optimization. This work presents a novel application of CE to the optimization of a soft-computing controller. A fuzzy controller was designed to command an unmanned aerial system (UAS) in a collision-avoidance task. The only sensor used to accomplish this task was a forward-facing camera. CE is used to reach a near-optimal controller by modifying the scaling factors of the controller inputs. The optimization was carried out using the ROS-Gazebo simulation system. To evaluate the optimization, a large number of tests were then performed with a real quadcopter.
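The CE optimization loop itself is compact. Below is a generic sketch for minimising a black-box objective: sample candidates from a Gaussian, keep an elite fraction, refit the Gaussian to the elites, repeat. The population size, elite fraction and Gaussian parameterisation are assumed defaults, and the toy quadratic objective in the test stands in for the simulated collision-avoidance cost the paper evaluates in ROS-Gazebo:

```python
import numpy as np

def cross_entropy_optimize(objective, dim, iters=50, pop=50,
                           elite_frac=0.2, seed=0):
    """Cross-Entropy method for continuous minimisation. In the paper's
    setting, the parameter vector would hold the scaling factors of the
    fuzzy controller inputs (an assumption; details are not given here)."""
    rng = np.random.default_rng(seed)
    mu = np.zeros(dim)            # initial sampling mean
    sigma = np.full(dim, 2.0)     # initial sampling spread
    n_elite = max(1, int(pop * elite_frac))
    for _ in range(iters):
        samples = rng.normal(mu, sigma, size=(pop, dim))
        scores = np.array([objective(s) for s in samples])
        elites = samples[np.argsort(scores)[:n_elite]]  # best candidates
        mu = elites.mean(axis=0)                        # refit mean
        sigma = elites.std(axis=0) + 1e-6               # refit spread
    return mu
```

The small additive constant on `sigma` prevents premature collapse of the sampling distribution.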
Abstract:
This paper discusses the findings of a research study that used semi-structured interviews to explore the views of primary school principals on inclusive education in New South Wales, Australia. Content analysis of the transcript data indicates that principals’ attitudes towards inclusive education and their success in engineering inclusive practices within their school are significantly affected by their own conception of what “inclusion” means, as well as the characteristics of the school community, and the attitudes and capacity of staff. In what follows, we present two parallel conversations that arose from the interview data to illustrate the main conceptual divisions existing between our participants’ conceptions of inclusion. First, we discuss the act of “being inclusive” which was perceived mainly as an issue of culture and pedagogy. Second, we consider the mechanics of “including,” which reflected a more instrumentalist position based on perceptions of individual student deficit, the level of support they may require and the amount of funding they can attract.
Abstract:
This paper introduces a high-speed, 100 Hz, vision-based state estimator that is suitable for quadrotor control in close-quarters manoeuvring applications. We describe the hardware and algorithms for estimating the state of the quadrotor. Experimental results for the position, velocity and yaw angle estimators are presented and compared with motion capture data. A quantitative performance comparison with state-of-the-art systems is also presented.
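The abstract gives no estimator internals, so as a generic illustration of how velocity can be derived from high-rate (100 Hz) position fixes, here is an alpha-beta filter sketch. The filter structure and gains are assumptions for illustration, not the paper's design:

```python
def alpha_beta_filter(measurements, dt, alpha=0.5, beta=0.1):
    """Alpha-beta filter: smooth noisy 1-D position measurements and
    estimate velocity. Returns a list of (position, velocity) estimates,
    one per measurement."""
    x, v = measurements[0], 0.0
    states = []
    for z in measurements:
        x_pred = x + v * dt          # constant-velocity prediction
        r = z - x_pred               # measurement residual
        x = x_pred + alpha * r       # position correction
        v = v + (beta / dt) * r      # velocity correction
        states.append((x, v))
    return states
```

At 100 Hz, `dt = 0.01`; the filter converges to the true velocity on a constant-velocity track within a fraction of a second.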
Abstract:
The future emergence of many types of airborne vehicles and unpiloted aircraft in the national airspace means collision avoidance is of primary concern in an uncooperative airspace environment. The ability to replicate a pilot's see-and-avoid capability using cameras coupled with vision-based avoidance control is an important part of an overall collision avoidance strategy. Unfortunately, without range information, collision avoidance has no direct way to guarantee a level of safety. Collision-scenario flight tests with two aircraft and a monocular-camera threat detection and tracking system were used to study the accuracy of image-derived angle measurements. The effect of image-derived angle errors on reactive vision-based avoidance performance was then studied in simulation. The results show that whilst large angle measurement errors can significantly affect minimum ranging characteristics across a variety of initial conditions and closing speeds, the minimum range is always bounded and a collision never occurs.
Abstract:
This case study report describes the stages involved in the translation of research on night-time visibility into standards for the safety clothing worn by roadworkers. Vision research demonstrates that when lights are placed on the moveable joints of the body and the person moves in a dark setting, the phenomenon known as “biological motion or biomotion” occurs, enabling rapid and accurate recognition of the human form although only the lights can be seen. QUT was successful in gaining funding from the Australian Research Council for a Linkage grant due to the support of the predecessors of the Queensland Department of Transport and Main Roads (TMR) to research the biomotion effect in on-road settings using materials that feature in roadworker clothing. Although positive results were gained, the process of translating the research results into policy, practices and standards relied strongly on the supportive efforts of TMR staff engaged in the review and promulgation of national standards. The ultimate result was the incorporation of biomotion marking into AS/NZS 4602.1 2011. The experiences gained in this case provide insights into the processes involved in translating research into practice.
Abstract:
This work presents a collision avoidance approach based on omnidirectional cameras that does not require the estimation of range between two platforms to resolve a collision encounter. Our method achieves minimum separation between the two vehicles involved by maximising the view-angle given by the omnidirectional sensor. Only visual information is used to achieve avoidance, under a bearing-only visual servoing approach. We provide the theoretical problem formulation, as well as results from real flights using small quadrotors.
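A toy sketch of what a bearing-only reactive command can look like: steer away from the intruder, most aggressively when it sits dead ahead, fading to zero as the view-angle approaches 90 degrees. The gain, the linear shaping and the sign convention are assumptions for illustration; the paper formulates avoidance as visual servoing that maximises the view-angle, not this exact law:

```python
import math

def avoidance_command(bearing_rad, k=1.0):
    """Reactive bearing-only avoidance sketch. `bearing_rad` is the
    intruder's bearing relative to the heading (positive = right).
    Returns a turn-rate command that pushes the intruder further out of
    the forward field of view, using only angle (no range)."""
    side = 1.0 if bearing_rad >= 0.0 else -1.0       # side the intruder is on
    urgency = max(0.0, math.pi / 2 - abs(bearing_rad))  # worst when dead ahead
    return -side * k * urgency                        # steer to the other side
```

Because only the bearing appears in the law, no range estimate is needed, matching the premise of the abstract.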
Abstract:
Stereo-based visual odometry algorithms are heavily dependent on an accurate calibration of the rigidly fixed stereo pair. Even small shifts in the rigid transform between the cameras can impact on feature matching and 3D scene triangulation, adversely affecting pose estimates and applications dependent on long-term autonomy. In many field-based scenarios where vibration, knocks and pressure change affect a robotic vehicle, maintaining an accurate stereo calibration cannot be guaranteed over long periods. This paper presents a novel method of recalibrating overlapping stereo camera rigs from online visual data while simultaneously providing an up-to-date and up-to-scale pose estimate. The proposed technique implements a novel form of partitioned bundle adjustment that explicitly includes the homogeneous transform between a stereo camera pair to generate an optimal calibration. Pose estimates are computed in parallel to the calibration, providing online recalibration which seamlessly integrates into a stereo visual odometry framework. We present results demonstrating accurate performance of the algorithm on both simulated scenarios and real data gathered from a wide-baseline stereo pair on a ground vehicle traversing urban roads.
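As a toy illustration of what online recalibration of a single stereo parameter looks like: the paper jointly optimises the full 6-DoF inter-camera transform inside a partitioned bundle adjustment, whereas the closed-form baseline refit below is a deliberately simplified stand-in under assumed pinhole geometry:

```python
import numpy as np

def refit_baseline(depths, disparities, focal_px):
    """Re-estimate the stereo baseline b from scene points whose depth Z
    is known from another source (e.g. the up-to-scale map), using the
    rectified pinhole relation Z = f * b / d, i.e. b = Z * d / f,
    averaged over all points. Illustrative only; a real system would
    solve for the whole inter-camera transform."""
    depths = np.asarray(depths, dtype=float)
    disparities = np.asarray(disparities, dtype=float)
    return float(np.mean(depths * disparities / focal_px))
```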
Abstract:
This study presents a segmentation pipeline that fuses colour and depth information to automatically separate objects of interest in video sequences captured from a quadcopter. Many approaches assume that cameras are static with known position, a condition which cannot be preserved in most outdoor robotic applications. In this study, the authors compute depth information and camera positions from a monocular video sequence using structure from motion and use this information as an additional cue to colour for accurate segmentation. The authors model the problem similarly to standard segmentation routines as a Markov random field and perform the segmentation using graph cuts optimisation. Manual intervention is minimised and is only required to determine pixel seeds in the first frame which are then automatically reprojected into the remaining frames of the sequence. The authors also describe an automated method to adjust the relative weights for colour and depth according to their discriminative properties in each frame. Experimental results are presented for two video sequences captured using a quadcopter. The quality of the segmentation is compared to a ground truth and other state-of-the-art methods with consistently accurate results.
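The cue-fusion idea can be sketched independently of the graph-cut solver: weight each cue by how discriminative it is in the current frame, then form the per-pixel data term as a weighted sum. The separation measure and the weighting scheme below are assumed proxies for the paper's automated weight adjustment, and the pairwise smoothness term and graph-cut optimisation are omitted:

```python
import numpy as np

def adaptive_weights(colour_sep, depth_sep):
    """Weight each cue by its discriminative power in the frame, e.g. the
    separation between foreground and background score distributions,
    normalised to sum to one (assumed proxy, not the paper's measure)."""
    total = colour_sep + depth_sep
    return colour_sep / total, depth_sep / total

def combined_unary(colour_cost, depth_cost, w_colour, w_depth):
    """Per-pixel unary energy for the MRF: weighted sum of the colour and
    depth data terms. This is the input a graph-cut solver would minimise
    together with a smoothness term."""
    return w_colour * np.asarray(colour_cost) + w_depth * np.asarray(depth_cost)
```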
Abstract:
In this paper we propose a method to generate a large-scale and accurate dense 3D semantic map of street scenes. A dense 3D semantic model of the environment can significantly improve a number of robotic applications such as autonomous driving, navigation or localisation. Instead of using offline-trained classifiers for semantic segmentation, our approach employs a data-driven, nonparametric method to parse scenes which easily scales to a large environment and generalises to different scenes. We use stereo image pairs collected from cameras mounted on a moving car to produce dense depth maps which are combined into a global 3D reconstruction using camera poses from stereo visual odometry. Simultaneously, 2D automatic semantic segmentation using a nonparametric scene parsing method is fused into the 3D model. Furthermore, the resultant 3D semantic model is improved by taking moving objects in the scene into consideration. We demonstrate our method on the publicly available KITTI dataset and evaluate the performance against manually generated ground truth.
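Fusing per-frame 2D labels into a 3D model can be reduced, in its simplest form, to a majority vote per 3D cell over all frames observing it. The voxel/label representation below is an assumed simplification of the paper's fusion, shown only to make the idea concrete:

```python
from collections import Counter, defaultdict

def fuse_semantic_labels(observations):
    """Fuse reprojected 2D semantic labels into the 3D model by majority
    vote per voxel. `observations` is an iterable of (voxel_id, label)
    pairs accumulated across frames; returns voxel_id -> winning label."""
    votes = defaultdict(Counter)
    for voxel, label in observations:
        votes[voxel][label] += 1
    return {v: c.most_common(1)[0][0] for v, c in votes.items()}
```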
Abstract:
This paper presents a mapping and navigation system for a mobile robot, which uses vision as its sole sensor modality. The system enables the robot to navigate autonomously, plan paths and avoid obstacles using a vision-based topometric map of its environment. The map consists of a globally consistent pose-graph with a local 3D point cloud attached to each of its nodes. These point clouds are used for direction-independent loop closure and to dynamically generate 2D metric maps for locally optimal path planning. Using this locally semi-continuous metric space, the robot performs shortest-path planning instead of following the nodes of the graph, as is done in most other vision-only navigation approaches. The system exploits the local accuracy of visual odometry in creating local metric maps, and uses pose-graph SLAM, visual appearance-based place recognition and point cloud registration to create the topometric map. The ability of the framework to sustain vision-only navigation is validated experimentally, and the system is provided as open-source software.
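The node-following baseline that the abstract contrasts itself with is shortest-path search over the pose-graph. A standard Dijkstra sketch over such a graph (node names and edge costs here are hypothetical) makes that baseline concrete; the paper's contribution is planning in the locally generated metric maps instead:

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra over a pose-graph. `graph` maps node -> list of
    (neighbour, edge_cost) pairs; returns (path, total_cost)."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    visited = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue
        visited.add(u)
        if u == goal:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # walk the predecessor chain back from the goal
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]
```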