940 results for Vision-based


Relevance: 100.00%

Abstract:

To navigate successfully in a previously unexplored environment, a mobile robot must be able to accurately estimate the spatial relationships of the objects of interest. A Simultaneous Localization and Mapping (SLAM) system employs its sensors to incrementally build a map of its surroundings while simultaneously localizing itself in that map. The aim of this research project is to develop a SLAM system suitable for self-propelled household lawnmowers. The proposed bearing-only SLAM system requires only an omnidirectional camera and some inexpensive landmarks. The main advantage of an omnidirectional camera is its panoramic view of all the landmarks in the scene. Placing landmarks in a lawn to define the working domain is much easier and more flexible than installing the perimeter wire required by existing autonomous lawnmowers. The common approach of existing bearing-only SLAM methods relies on a motion model for predicting the robot's pose and a sensor model for updating it. In the motion model, error in the estimated object positions accumulates, mainly due to wheel slippage, so accurately quantifying the uncertainty of object positions is a fundamental requirement. In bearing-only SLAM, the Probability Density Function (PDF) of a landmark's position should be uniform along the observed bearing. Existing methods that approximate the PDF with a Gaussian estimate do not satisfy this uniformity requirement. This thesis introduces both geometric and probabilistic methods to address these problems. The main novel contributions of this thesis are:

1. A bearing-only SLAM method that does not require odometry. The proposed method relies solely on the sensor model (landmark bearings only) without a motion model (odometry), so the uncertainty of the estimated landmark positions depends on the vision error alone, rather than on combined odometry and vision errors.

2. The transformation of the spatial uncertainty of objects. The thesis introduces a novel method for translating the spatial uncertainty of objects estimated in a moving frame attached to the robot into the global frame attached to the static landmarks in the environment.

3. The characterization of an improved PDF for representing landmark position in bearing-only SLAM. The proposed PDF is expressed in polar coordinates, with the marginal probability on range constrained to be uniform. Compared to a PDF estimated from a mixture of Gaussians, the PDF developed here has far fewer parameters and can be easily adopted in a probabilistic framework, such as a particle filter.

The main advantages of the proposed bearing-only SLAM system are its lower production cost and flexibility of use. The system can also be adopted in other domestic robots, such as vacuum cleaners or robotic toys, when the terrain is essentially 2D.
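As a minimal illustration of contribution 3, the sketch below samples landmark-position particles whose range marginal is uniform along an observed bearing, as the proposed PDF requires; the range bounds and bearing-noise level are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def sample_landmark_particles(robot_xy, bearing_obs, n=1000,
                              sigma_bearing=np.radians(1.0),
                              r_min=0.5, r_max=20.0, rng=None):
    """Particles for one bearing-only landmark observation: range drawn
    uniformly (uniform marginal on range), bearing perturbed by Gaussian
    vision noise.  All noise parameters here are assumed, for illustration."""
    rng = np.random.default_rng() if rng is None else rng
    r = rng.uniform(r_min, r_max, n)                         # uniform range
    theta = bearing_obs + rng.normal(0.0, sigma_bearing, n)  # vision noise
    offsets = np.column_stack((r * np.cos(theta), r * np.sin(theta)))
    return robot_xy + offsets

# Landmark observed at a bearing of 30 degrees from a robot at the origin.
particles = sample_landmark_particles(np.array([0.0, 0.0]), np.radians(30.0))
```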

Relevance: 100.00%

Abstract:

This paper describes the development and preliminary experimental evaluation of a vision-based docking system that allows an Autonomous Underwater Vehicle (AUV) to identify and attach itself to a set of uniquely identifiable targets. These targets, docking poles, are detected using Haar rectangular features and rotation of integral images. A non-holonomic controller allows the Starbug AUV to orient itself with respect to the target whilst maintaining visual contact during the manoeuvre. Experimental results show the proposed vision system is capable of robustly identifying a pair of docking poles simultaneously in a variety of orientations and lighting conditions. Experiments in an outdoor pool show that this vision system enables the AUV to dock autonomously from a distance of up to 4 m in relatively low visibility.
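The detection step rests on integral images, which let rectangular Haar features be evaluated in constant time. A minimal sketch of that machinery (omitting the rotated integral images the paper also employs):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero-padded border:
    ii[y, x] = sum of img[:y, :x]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, y, x, h, w):
    """Sum of the h-by-w rectangle with top-left corner (y, x), in O(1)."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_two_rect(ii, y, x, h, w):
    """Two-rectangle Haar feature: left half-sum minus right half-sum."""
    half = w // 2
    return rect_sum(ii, y, x, h, half) - rect_sum(ii, y, x + half, h, half)

img = np.arange(16.0).reshape(4, 4)
ii = integral_image(img)
assert rect_sum(ii, 1, 1, 2, 2) == img[1:3, 1:3].sum()
```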

Relevance: 100.00%

Abstract:

Purpose: Computer vision has been widely used in the inspection of electronic components. This paper proposes a computer vision system for the automatic detection, localisation, and segmentation of solder joints on Printed Circuit Boards (PCBs) under different illumination conditions.

Design/methodology/approach: An illumination normalisation approach is applied to each image; it effectively and efficiently eliminates the effect of uneven illumination while keeping the properties of the processed image the same as those of the corresponding image under normal lighting conditions. Consequently, the special lighting and instrumentation needed to detect solder joints can be reduced. The normalised images are insensitive to illumination variations and are used in the subsequent solder joint detection stages. In the segmentation approach, the PCB image is transformed from the RGB color space to the YIQ color space for effective detection of solder joints against the background.

Findings: The segmentation results show that the proposed approach significantly improves performance for images under varying illumination conditions.

Research limitations/implications: This paper proposes a front-end system for the automatic detection, localisation, and segmentation of solder joint defects. Further research is required to complete the full system, including the classification of solder joint defects.

Practical implications: The methodology presented in this paper can be an effective way to reduce cost and improve quality in PCB production in the manufacturing industry.

Originality/value: This research proposes the automatic location, identification and segmentation of solder joints under different illumination conditions.
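The RGB-to-YIQ transformation mentioned in the segmentation approach is a fixed linear map; a minimal sketch using the standard NTSC matrix (the rest of the paper's pipeline is not reproduced here):

```python
import numpy as np

# Standard NTSC RGB -> YIQ matrix; luminance (Y) is separated from the
# chrominance channels (I, Q) used to pick out solder regions.
RGB2YIQ = np.array([[0.299,  0.587,  0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523,  0.312]])

def rgb_to_yiq(rgb):
    """Convert an (H, W, 3) RGB image with values in [0, 1] to YIQ."""
    return rgb @ RGB2YIQ.T
```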

Relevance: 100.00%

Abstract:

Competent navigation in an environment is a major requirement for an autonomous mobile robot to accomplish its mission. Many successful systems for navigating a mobile robot use an internal map which represents the environment in a detailed geometric manner. However, building, maintaining and using such environment maps for navigation is difficult because of perceptual aliasing and measurement noise. Moreover, geometric maps require the processing of huge amounts of data, which is computationally expensive.

This thesis addresses the problem of vision-based topological mapping and localisation for mobile robot navigation. Topological maps are concise, graphical representations of environments that are scalable and amenable to symbolic manipulation. Thus, they are well suited to basic robot navigation applications, and also provide a representational basis for the procedural and semantic information needed for higher-level robotic tasks. To make vision-based topological navigation suitable for inexpensive, mass-market mobile robots, we propose to characterise key places of the environment by their visual appearance through colour histograms. This appearance-based representation of places rests on the fact that colour histograms change slowly as the field of view sweeps the scene when a robot moves through an environment. Hence, a place represents a region of the environment rather than a single position. We demonstrate, in experiments using an indoor data set, that a topological map in which places are characterised by visual appearance augmented with metric clues provides sufficient information to perform continuous metric localisation which is robust to the kidnapped robot problem.

Many topological mapping methods build a topological map by clustering visual observations into places. However, due to perceptual aliasing, observations from different places may be mapped to the same place representative in the topological map. A main contribution of this thesis is a novel approach for dealing with the perceptual aliasing problem in topological mapping: we propose to incorporate neighbourhood relations to disambiguate places which are otherwise indistinguishable. We present a constraint-based stochastic local search method which integrates this place-disambiguation approach in order to induce a topological map. Experiments show that the proposed method is capable of mapping environments with a high degree of perceptual aliasing, and that a small map is found quickly. Moreover, the method of using neighbourhood information for place disambiguation is integrated into a framework for topological off-line simultaneous localisation and mapping which does not require an initial categorisation of visual observations. Experiments on an indoor data set demonstrate the suitability of our method for reliably localising the robot while building a topological map.
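A minimal sketch of the appearance representation described above: places stored as normalised colour histograms and a new view matched by histogram intersection. The bin count, similarity measure and threshold are illustrative assumptions, not the thesis's actual parameters.

```python
import numpy as np

def colour_histogram(img, bins=8):
    """Joint RGB histogram (bins**3 cells) of a uint8 image, normalised."""
    h, _ = np.histogramdd(img.reshape(-1, 3).astype(float),
                          bins=(bins,) * 3, range=((0, 256),) * 3)
    return (h / h.sum()).ravel()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; values near 1 suggest the same place."""
    return np.minimum(h1, h2).sum()

def match_place(img, place_histograms, threshold=0.8):
    """Return the index of the best-matching stored place, or None if the
    view does not resemble any known place well enough."""
    h = colour_histogram(img)
    sims = [histogram_intersection(h, p) for p in place_histograms]
    best = int(np.argmax(sims))
    return best if sims[best] >= threshold else None
```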

Relevance: 100.00%

Abstract:

Machine vision represents a particularly attractive solution for sensing and detecting potential collision-course targets due to the relatively low cost, size, weight, and power requirements of the sensors involved (as opposed to radar). This paper describes the development and evaluation of a vision-based collision detection algorithm suitable for fixed-wing aerial robotics. The system was evaluated using highly realistic vision data of the moments leading up to a collision. Based on the collected data, our detection approaches were able to detect targets at distances ranging from 400 m to about 900 m. These distances (with some assumptions about closing speeds and aircraft trajectories) translate to an advance warning of between 8 and 10 seconds ahead of impact, which approaches the 12.5-second response time recommended for human pilots. We make use of the enormous potential of graphics processing units to achieve processing rates of 30 Hz for images of size 1024-by-768. Integration into the final platform is currently under way.

Relevance: 100.00%

Abstract:

Machine vision represents a particularly attractive solution for sensing and detecting potential collision-course targets due to the relatively low cost, size, weight, and power requirements of vision sensors (as opposed to radar and TCAS). This paper describes the development and evaluation of a real-time vision-based collision detection system suitable for fixed-wing aerial robotics. Using two fixed-wing UAVs to recreate various collision-course scenarios, we were able to capture highly realistic vision (from an onboard camera perspective) of the moments leading up to a collision. This type of image data is extremely scarce and was invaluable in evaluating the detection performance of two candidate target detection approaches. Based on the collected data, our detection approaches were able to detect targets at distances ranging from 400 m to about 900 m. These distances (with some assumptions about closing speeds and aircraft trajectories) translate to an advance warning of between 8 and 10 seconds ahead of impact, which approaches the 12.5-second response time recommended for human pilots. We overcame the challenge of achieving real-time computational speeds by exploiting the parallel processing architectures of graphics processing units found on commercial off-the-shelf graphics devices. Our chosen GPU device, suitable for integration onto UAV platforms, can be expected to handle real-time processing of 1024-by-768 pixel image frames at a rate of approximately 30 Hz. Flight trials using a manned Cessna aircraft, with all processing performed onboard, will be conducted in the near future, followed by further experiments with fully autonomous UAV platforms.
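The warning-time figures follow from simple arithmetic, detection range divided by closing speed; the closing speeds below are assumptions chosen to reproduce the quoted 8-10 second range, not values from the paper.

```python
# Advance warning = detection range / closing speed.
for range_m, closing_ms in ((400.0, 50.0), (900.0, 100.0)):
    print(f"{range_m:.0f} m at {closing_ms:.0f} m/s "
          f"-> {range_m / closing_ms:.1f} s warning")
```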

Relevance: 100.00%

Abstract:

If mobile robots are to perform useful tasks in the real world, they will require a catalog of fundamental navigation competencies and a means to select between them. In this paper we describe our work on strongly vision-based competencies: road following, person or vehicle following, and pose and position stabilization. Results are presented from experiments on an outdoor autonomous tractor, a car-like vehicle.

Relevance: 100.00%

Abstract:

This article describes an open-source toolbox for machine vision called the Machine Vision Toolbox (MVT). MVT includes more than 60 functions, covering image file reading and writing, acquisition, display, filtering, blob, point and line feature extraction, mathematical morphology, homographies, visual Jacobians, camera calibration, and color space conversion. MVT can be used for research into machine vision but is also versatile enough to be usable for real-time work and even control. MVT, combined with MATLAB and a modern workstation computer, is a useful and convenient environment for the investigation of machine vision algorithms. The article illustrates the use of a subset of toolbox functions for some typical problems and describes MVT operations including the simulation of a complete image-based visual servo system.

Relevance: 100.00%

Abstract:

Ensuring the long-term viability of reef environments requires monitoring of many aspects of these ecosystems. However, the sheer size of these unstructured environments (for example, Australia's Great Barrier Reef) poses a number of challenges for current monitoring platforms, which are typically remotely operated and require significant resources and infrastructure. Therefore, a primary objective of the CSIRO robotic reef monitoring project is to develop and deploy a large number of AUV teams to perform broadscale reef surveying. To achieve this, the platforms must be cheap, possibly even disposable. This paper presents the results of a preliminary investigation into the performance of a low-cost sensor suite and associated processing techniques for vision- and inertial-based navigation within a highly unstructured reef environment.

Relevance: 100.00%

Abstract:

Uninhabited aerial vehicles (UAVs) are a cutting-edge technology at the forefront of aviation/aerospace research and development worldwide. Many consider their current military and defence applications just a token of their enormous potential. Unlocking and fully exploiting this potential will see UAVs in a multitude of civilian applications, routinely operating alongside piloted aircraft. The key to realising the full potential of UAVs lies in addressing a host of regulatory, public relations, and technological challenges never encountered before. Aircraft collision avoidance is considered one of the most important issues to be addressed, given its safety-critical nature. The collision avoidance problem can be roughly organised into three areas: 1) sense; 2) detect; and 3) avoid. Sensing is concerned with obtaining accurate and reliable information about other aircraft in the air; detection involves identifying potential collision threats based on available information; avoidance deals with the formulation and execution of appropriate manoeuvres to maintain safe separation.

This thesis tackles the detection aspect of collision avoidance via the development of a target detection algorithm capable of real-time operation onboard a UAV platform. One of the key challenges of the detection problem is the need to provide early warning, which translates to detecting potential threats whilst they are still far away, when their presence is likely to be obscured and hidden by noise. Another important consideration is the choice of sensors to capture target information, which has implications for the design and practical implementation of the detection algorithm. The main contributions of the thesis are: 1) the proposal of a dim-target detection algorithm combining image morphology and hidden Markov model (HMM) filtering approaches; 2) the novel use of relative entropy rate (RER) concepts for HMM filter design; 3) the characterisation of algorithm detection performance based on simulated data as well as real in-flight target image data; and 4) the demonstration of the proposed algorithm's capacity for real-time target detection. We also consider the extension of HMM filtering techniques and the application of RER concepts to target heading angle estimation.

In this thesis we propose a computer-vision based detection solution, due to the commercial-off-the-shelf (COTS) availability of camera hardware and its relatively low cost, power, and size requirements. The proposed target detection algorithm adopts a two-stage processing paradigm that begins with an image enhancement pre-processing stage, followed by a track-before-detect (TBD) temporal processing stage that has been shown to be effective in dim-target detection. We compare the performance of two candidate morphological filters for the image pre-processing stage, and propose a multiple hidden Markov model (MHMM) filter for the TBD temporal processing stage. The role of the morphological pre-processing stage is to exploit the spatial features of potential collision threats, while the MHMM filter serves to exploit their temporal characteristics or dynamics. The problem of optimising our proposed MHMM filter is examined in detail. Our investigation has produced a novel design process for the MHMM filter that exploits information theory and entropy-related concepts: the filter design process is posed as a mini-max optimisation problem based on a joint RER cost criterion.

We prove that this joint RER cost criterion bounds the conditional mean estimate (CME) performance of our MHMM filter, which in turn establishes a strong theoretical basis connecting our filter design process to filter performance. Through this connection we can intelligently compare and optimise candidate filter models at the design stage, rather than resorting to time-consuming Monte Carlo simulations to gauge the relative performance of candidate designs. Moreover, the underlying entropy concepts are not constrained to any particular model type, which suggests that the RER concepts established here may be generalised to provide a useful design criterion for multiple-model filtering approaches outside the class of HMM filters.

In this thesis we also evaluate the performance of our proposed target detection algorithm under realistic operating conditions, and give consideration to the practical deployment of the detection algorithm onboard a UAV platform. Two fixed-wing UAVs were engaged to recreate various collision-course scenarios and capture highly realistic vision (from an onboard camera perspective) of the moments leading up to a collision. Based on this collected data, our proposed detection approach was able to detect targets out to distances ranging from about 400 m to 900 m. These distances (with some assumptions about closing speeds and aircraft trajectories) translate to an advance warning ahead of impact that approaches the 12.5-second response time recommended for human pilots. Furthermore, readily available graphics processing unit (GPU) based hardware is exploited for its parallel computing capabilities to demonstrate the practical feasibility of the proposed target detection algorithm. A prototype hardware-in-the-loop system has been found capable of achieving data processing rates sufficient for real-time operation, with scope for further performance improvement through code optimisation.

Overall, our proposed image-based target detection algorithm offers UAVs a cost-effective real-time target detection capability that is a step forward in addressing the collision avoidance issue, currently one of the most significant obstacles preventing widespread civilian applications of uninhabited aircraft. We also highlight that the algorithm development process has led to the discovery of a powerful multiple-HMM filtering approach and a novel RER-based multiple-filter design process. The utility of our multiple-HMM filtering approach and RER concepts, however, extends beyond the target detection problem, as demonstrated by our application of HMM filters and RER concepts to a heading angle estimation problem.
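A common morphological pre-processing choice for dim-target enhancement of the kind described above is a close-minus-open (CMO) filter; the sketch below is one plausible candidate with an assumed structuring-element size, and is not necessarily either of the two filters the thesis compares.

```python
import numpy as np
from scipy.ndimage import grey_closing, grey_opening

def close_minus_open(img, size=5):
    """Greyscale close-minus-open: responds to small bright and dark
    features while suppressing smooth background, a standard way of
    enhancing dim point targets before temporal (TBD) filtering."""
    se = (size, size)   # assumed square structuring element
    return grey_closing(img, size=se) - grey_opening(img, size=se)
```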

Relevance: 100.00%

Abstract:

Inspection of solder joints has been a critical process in the electronics manufacturing industry for reducing manufacturing cost, improving yield, and ensuring product quality and reliability. The solder joint inspection problem is more challenging than many other visual inspection tasks because of the variability in the appearance of solder joints. Although many research works and various techniques have been developed to classify defects in solder joints, these methods require complex illumination systems for image acquisition and complicated classification algorithms. An important stage of the analysis is selecting the right classification method. Better inspection technologies are needed to fill the gap between available inspection capabilities and industry requirements. This dissertation aims to provide a solution that can overcome some of the limitations of current inspection techniques.

This research proposes a two-stage automatic solder joint classification system. The "front-end" inspection system includes illumination normalisation, localisation and segmentation. The illumination normalisation approach effectively and efficiently eliminates the effect of uneven illumination while preserving the properties of the processed image. The "back-end" inspection involves the classification of solder joints using Log-Gabor filters and classifier fusion. Five levels of solder quality, with respect to the amount of solder paste, have been defined. The Log-Gabor filter has been demonstrated to achieve high recognition rates and is resistant to misalignment; further testing demonstrates its advantage over both the Discrete Wavelet Transform and the Discrete Cosine Transform. Classifier score fusion is analysed for improving the recognition rate. Experimental results demonstrate that the proposed system improves performance and robustness in terms of classification rates. The proposed system does not need any special illumination: images are acquired with an ordinary digital camera, since the choice of suitable features overcomes the problems introduced by a simple illumination setup. The system proposed in this research can be incorporated into the development of an automated, non-contact, non-destructive and low-cost solder joint quality inspection system.
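A minimal sketch of a radial Log-Gabor filter of the kind used in the back-end stage, applied in the frequency domain; the centre frequency and bandwidth ratio are typical illustrative values, not the dissertation's.

```python
import numpy as np

def log_gabor_radial(shape, f0=0.1, sigma_ratio=0.55):
    """Radial log-Gabor transfer function
    G(f) = exp(-log(f / f0)**2 / (2 * log(sigma_ratio)**2))
    on a centred spatial-frequency grid; it has no DC component."""
    rows, cols = shape
    u = np.fft.fftshift(np.fft.fftfreq(cols))
    v = np.fft.fftshift(np.fft.fftfreq(rows))
    radius = np.sqrt(u[None, :] ** 2 + v[:, None] ** 2)
    radius[rows // 2, cols // 2] = 1.0        # avoid log(0) at DC
    g = np.exp(-np.log(radius / f0) ** 2 / (2 * np.log(sigma_ratio) ** 2))
    g[rows // 2, cols // 2] = 0.0             # zero DC response
    return g

def log_gabor_response(img, g):
    """Filter in the frequency domain; the magnitude response is a
    feature image that could feed a classifier."""
    F = np.fft.fftshift(np.fft.fft2(img))
    return np.abs(np.fft.ifft2(np.fft.ifftshift(F * g)))
```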

Relevance: 100.00%

Abstract:

This thesis addresses the problem of detecting and describing the same scene points in different wide-angle images taken by the same camera at different viewpoints. This is a core competency of many vision-based localisation tasks, including visual odometry and visual place recognition. Wide-angle cameras have a large field of view that can exceed a full hemisphere, and the images they produce contain severe radial distortion. Compared to traditional narrow-field-of-view perspective cameras, more accurate estimates of camera egomotion can be found using images obtained with wide-angle cameras. The ability to accurately estimate camera egomotion is a fundamental primitive of visual odometry, and this is one of the reasons for the increased popularity of wide-angle cameras for this task. Their large field of view also enables them to capture images of the same regions of a scene from very different viewpoints, which makes them well suited to visual place recognition. However, the ability to estimate camera egomotion and recognise the same scene in two different images depends on the ability to reliably detect and describe the same scene points, or 'keypoints', in the images. Most algorithms used for this purpose are designed almost exclusively for perspective images, and applying them directly to wide-angle images is problematic as no account is taken of the image distortion.

The primary contribution of this thesis is the development of two novel keypoint detectors, and a method of keypoint description, designed for wide-angle images. Both reformulate the Scale-Invariant Feature Transform (SIFT) as an image processing operation on the sphere. As the image captured by any central-projection wide-angle camera can be mapped to the sphere, applying these variants to an image on the sphere enables keypoints to be detected in a manner that is invariant to image distortion. Each variant must find the scale-space representation of an image on the sphere, and they differ in the approaches used to do this. Extensive experiments using real and synthetically generated wide-angle images validate the two new keypoint detectors and the method of keypoint description. The better of the two new keypoint detectors is applied to vision-based localisation tasks, including visual odometry and visual place recognition, using outdoor wide-angle image sequences. As part of this work, the effect of keypoint coordinate selection on the accuracy of egomotion estimates using the Direct Linear Transform (DLT) is investigated, and a simple weighting scheme is proposed which attempts to account for the uncertainty of keypoint positions during detection. A word reliability metric is also developed for use within a visual 'bag of words' approach to place recognition.
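The mapping to the sphere that underpins both detectors can be illustrated with the simplest case, an equidistant fisheye model (r = f·θ); the model choice and camera parameters here are assumptions, as the thesis treats general central-projection wide-angle cameras.

```python
import numpy as np

def fisheye_to_sphere(u, v, cx, cy, f):
    """Map pixel coordinates of an equidistant fisheye image to unit
    vectors on the sphere (camera looking along +z)."""
    x, y = u - cx, v - cy
    r = np.hypot(x, y)
    theta = r / f               # angle from the optical axis (r = f * theta)
    phi = np.arctan2(y, x)      # azimuth around the optical axis
    return np.stack([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)], axis=-1)
```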

Relevance: 100.00%

Abstract:

This paper proposes a novel application of Skid-to-Turn maneuvers for fixed-wing Unmanned Aerial Vehicles (UAVs) inspecting locally linear infrastructure. Fixed-wing UAVs, following the design of manned aircraft, commonly employ Bank-to-Turn maneuvers to change heading and thus direction of travel. Whilst effective, banking the aircraft during the inspection of ground-based features hinders data collection, with body-fixed sensors angled away from the direction of turn and a panning motion induced through roll rate that can reduce data quality. By adopting Skid-to-Turn maneuvers, the aircraft can change heading whilst maintaining wings-level flight, allowing body-fixed sensors to maintain a downward-facing orientation. An Image-Based Visual Servo controller is developed to directly control the position of features as captured by onboard inspection sensors. This improves on the indirect approach taken by other tracking controllers, where a course over ground directly above the feature is assumed to capture it centred in the field of view. Performance of the proposed controller is compared against that of a Bank-to-Turn tracking controller driven by GPS-derived cross-track error, in a simulation environment developed to replicate the field of view of a body-fixed camera.
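For context, the classical image-based visual servo law drives the image-feature error to zero through the pseudo-inverse of the interaction matrix; the sketch below is the textbook formulation for point features, not the paper's specific Skid-to-Turn controller.

```python
import numpy as np

def point_interaction_matrix(x, y, Z):
    """Interaction matrix of a normalised image point (x, y) at depth Z,
    relating feature velocity to 6-DOF camera velocity."""
    return np.array([[-1 / Z, 0, x / Z, x * y, -(1 + x ** 2), y],
                     [0, -1 / Z, y / Z, 1 + y ** 2, -x * y, -x]])

def ibvs_velocity(s, s_star, L, gain=0.5):
    """Classical IBVS law v = -lambda * pinv(L) @ (s - s*)."""
    return -gain * np.linalg.pinv(L) @ (s - s_star)

# One point feature at (0.1, -0.05), depth 2 m, driven to the image centre.
L = point_interaction_matrix(0.1, -0.05, 2.0)
v = ibvs_velocity(np.array([0.1, -0.05]), np.zeros(2), L)  # 6-DOF command
```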

Relevance: 100.00%

Abstract:

Machine vision represents a particularly attractive solution for sensing and detecting potential collision-course targets due to the relatively low cost, size, weight, and power requirements of the sensors involved. This paper describes the development of detection algorithms and the evaluation of a real-time, flight-ready hardware implementation of a vision-based collision detection system suitable for fixed-wing small/medium-size UAS. In particular, this paper demonstrates the use of a Hidden Markov model filter to track and estimate the elevation (β) and bearing (α) of the target, compares several candidate graphics processing hardware choices, and proposes an image-based visual servoing approach to achieve collision avoidance.
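A minimal sketch of the filtering recursion behind the tracking step: a normalised HMM (forward) filter over a grid of quantised target states such as (β, α) cells. The toy matrices are assumptions for illustration only.

```python
import numpy as np

def hmm_filter_step(prior, A, likelihood):
    """One HMM forward-filter step: predict with the transition matrix A
    (A[i, j] = P(next = j | current = i)), update with the per-state
    measurement likelihood, then normalise."""
    predicted = A.T @ prior
    posterior = likelihood * predicted
    return posterior / posterior.sum()

# Toy example with 4 states (e.g. coarse elevation/bearing cells).
A = np.full((4, 4), 0.05) + 0.8 * np.eye(4)  # target mostly stays put
A /= A.sum(axis=1, keepdims=True)
p = np.full(4, 0.25)                         # uniform prior
y = np.array([0.1, 0.2, 0.9, 0.1])           # likelihood from one frame
p = hmm_filter_step(p, A, y)
```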

Relevance: 100.00%

Abstract:

This paper presents the development of a low-cost sensor platform for use in ground-based visual pose estimation and scene mapping tasks. We seek to develop a technical solution, using low-cost vision hardware, that allows robot position to be estimated accurately for SLAM tasks. We present results from the application of a vision-based pose estimation technique that simultaneously determines camera poses and scene structure. The results are generated from a dataset gathered while traversing a local road at the St Lucia campus of the University of Queensland. We show the accuracy of the pose estimation over a 1.6 km trajectory in relation to GPS ground truth.