883 results for vision-based place recognition


Relevance: 100.00%

Abstract:

The use of unmanned aerial vehicles (UAVs) for remote sensing tasks (e.g., agriculture, search and rescue) is increasing. The ability of a UAV to autonomously find a target and perform on-board decision making, such as descending to a new altitude or landing next to a target, is a desired capability. Computer-vision functionality allows the UAV to follow a designated flight plan, detect an object of interest, and change its planned path. In this paper we describe a low-cost, open-source system in which all image processing is performed on board the UAV using a Raspberry Pi 2 single-board computer interfaced with a camera. The Raspberry Pi and the autopilot are physically connected over a serial link and communicate via MAVProxy. The Raspberry Pi continuously monitors the flight path in real time through a USB camera module, and the algorithm checks whether the target has been captured. If the target is detected, its position in the frame is expressed in Cartesian coordinates and converted into estimated GPS coordinates. In parallel, the autopilot receives the target's approximate GPS location and decides how to guide the UAV to the new location. The system also has potential uses in precision agriculture, such as detecting plant pests and disease outbreaks, which cause severe financial damage to crop yields if not caught early. Results show that the algorithm detects 99% of objects of interest and that the UAV is capable of navigating and making decisions on board.
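As a rough illustration of the frame-to-GPS step described above, here is a minimal sketch that converts a detected pixel position into an estimated GPS coordinate. It assumes a nadir-pointing camera over flat ground with a known altitude, horizontal field of view, and heading; the function name, parameters, and the flat-ground model are our assumptions, not details taken from the paper.

```python
import math

def pixel_to_gps(px, py, frame_w, frame_h, fov_deg, altitude_m,
                 uav_lat, uav_lon, heading_deg):
    """Estimate the GPS position of a target detected at pixel (px, py).

    Toy model: nadir-pointing camera over flat ground, with the ground
    footprint derived from altitude and the horizontal field of view.
    """
    # Metres of ground covered by the full image width.
    ground_w = 2.0 * altitude_m * math.tan(math.radians(fov_deg) / 2.0)
    m_per_px = ground_w / frame_w

    # Offset of the target from the image centre, in metres.
    dx = (px - frame_w / 2.0) * m_per_px   # right of centre
    dy = (frame_h / 2.0 - py) * m_per_px   # ahead of centre

    # Rotate the camera-frame offset into north/east using the UAV heading.
    hdg = math.radians(heading_deg)
    north = dy * math.cos(hdg) - dx * math.sin(hdg)
    east = dy * math.sin(hdg) + dx * math.cos(hdg)

    # Small-offset approximation: metres to degrees of latitude/longitude.
    lat = uav_lat + north / 111_320.0
    lon = uav_lon + east / (111_320.0 * math.cos(math.radians(uav_lat)))
    return lat, lon
```

The resulting estimate can then be passed to the autopilot (e.g., as a guided-mode waypoint through MAVProxy) for the on-board decision making the abstract describes.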

Relevance: 100.00%

Abstract:

Detect and Avoid (DAA) technology is widely acknowledged as a critical enabler for unsegregated Remotely Piloted Aircraft (RPA) operations, particularly Beyond Visual Line of Sight (BVLOS). Image-based DAA in the visible spectrum is a promising technological option for addressing the challenges DAA presents. Two impediments to progress for this approach are the scarcity of video footage available for training and testing algorithms, and the lack of testing regimes and specifications that facilitate repeatable, statistically valid performance assessment. This paper makes three key contributions towards addressing these impediments. First, we detail our progress towards the creation of a large hybrid collision and near-collision encounter database. Second, we explore the suitability of techniques employed by the biometric research community (Speaker Verification and Language Identification) for DAA performance optimisation and assessment. These techniques include Detection Error Trade-off (DET) curves, Equal Error Rates (EER), and the Detection Cost Function (DCF). Finally, the hybrid database and the speech-based techniques are combined and employed in the assessment of a contemporary, image-based DAA system that includes stabilisation, morphological filtering, and a Hidden Markov Model (HMM) temporal filter.
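For readers less familiar with these biometric evaluation tools, here is a minimal sketch of how DET points, the EER, and a detection cost can be computed from raw detector scores (higher score = more target-like). The cost weights below are illustrative placeholders, not values from the paper.

```python
import numpy as np

def det_points(target_scores, nontarget_scores):
    """Sweep a decision threshold; return (false-alarm rate, miss rate) pairs."""
    thresholds = np.sort(np.concatenate([target_scores, nontarget_scores]))
    far = np.array([(nontarget_scores >= t).mean() for t in thresholds])
    mr = np.array([(target_scores < t).mean() for t in thresholds])
    return far, mr

def equal_error_rate(target_scores, nontarget_scores):
    """EER: the operating point where false-alarm and miss rates coincide."""
    far, mr = det_points(target_scores, nontarget_scores)
    i = np.argmin(np.abs(far - mr))
    return (far[i] + mr[i]) / 2.0

def detection_cost(far, mr, c_miss=10.0, c_fa=1.0, p_target=0.01):
    """NIST-style detection cost function at one operating point
    (the weights here are placeholders, not the paper's values)."""
    return c_miss * p_target * mr + c_fa * (1.0 - p_target) * far
```

Plotting miss rate against false-alarm rate over all thresholds gives the DET curve the paper uses for performance assessment.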

Relevance: 100.00%

Abstract:

Modern smartphones often come with significant computational power and an integrated digital camera, making them an ideal platform for intelligent assistants. This work is restricted to retail environments, where users could be provided with, for example, navigational instructions to desired products or information about special offers in their close proximity. This kind of application usually requires information about the user's current location in the domain environment, which in our case is a retail store. We propose a vision-based positioning approach that recognizes the products the user's mobile phone camera is currently pointing at. The products are tied to locations within the store, which lets us locate the user by pointing the phone's camera at a group of products. The first step of our method is to extract meaningful features from digital images. We use the Scale-Invariant Feature Transform (SIFT) algorithm, which extracts features that are highly distinctive in the sense that they can be correctly matched against a large database of features from many images. We collect a comprehensive set of images from all meaningful locations within our domain and extract the SIFT features from each of these images. As the SIFT features are of high dimensionality, and comparing individual features is thus infeasible, we apply the Bags of Keypoints method, which creates a generic representation, a visual category, from all features extracted from images taken at a specific location. The category of an unseen image can be deduced by extracting the corresponding SIFT features and choosing the category that best fits them. We have applied the proposed method in a Finnish supermarket. We treat grocery shelves as categories, which is a sufficient level of accuracy to help users navigate or to provide useful information about nearby products. We achieve 40% accuracy, which is quite low for commercial applications but significantly outperforms the random-guess baseline. Our results suggest that the classification accuracy could be increased with a deeper analysis of the domain and by combining existing positioning methods with ours.
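A minimal sketch of the pipeline this abstract describes (SIFT features, a k-means visual vocabulary, bag-of-keypoints histograms, and a per-location classifier) might look as follows. The vocabulary size, the linear SVM (standing in for whatever classifier the authors used), and all names are our assumptions.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

sift = cv2.SIFT_create()

def descriptors(image_path):
    """Extract SIFT descriptors from one grayscale image."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, desc = sift.detectAndCompute(img, None)
    return desc

def build_vocabulary(train_paths, k=200):
    """Cluster all training descriptors into k visual words."""
    all_desc = np.vstack([descriptors(p) for p in train_paths])
    return KMeans(n_clusters=k, n_init=4).fit(all_desc)

def bag_of_keypoints(path, vocab):
    """Normalised histogram of visual-word occurrences for one image."""
    words = vocab.predict(descriptors(path))
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / hist.sum()

# Hypothetical usage: train_paths/train_labels map shelf photos to shelves.
# vocab = build_vocabulary(train_paths)
# X = np.array([bag_of_keypoints(p, vocab) for p in train_paths])
# clf = LinearSVC().fit(X, train_labels)
# location = clf.predict([bag_of_keypoints("query.jpg", vocab)])[0]
```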

Relevance: 100.00%

Abstract:

This paper is concerned with grasping biological cells in an aqueous medium with miniature grippers that can also help estimate forces using vision-based displacement measurement and computation. We present the design, fabrication, and testing of three single-piece, compliant miniature grippers with parallel and angular jaw motions. Two grippers were designed using experience and intuition, while the third was designed using topology optimization with implicit manufacturing constraints. The grippers were fabricated from spring steel and polydimethylsiloxane (PDMS) using different manufacturing techniques. The grippers also serve as force sensors: toward this end, we present a vision-based force-sensing technique that solves Cauchy's problem in elasticity using an improved algorithm. We validated this technique at the macroscale, where an independent method to estimate the force was available. In this study, the gripper was used to hold a yeast ball and a zebrafish egg cell, each less than 1 mm in diameter. The forces involved were estimated to be about 30 and 10 mN for the yeast ball and the zebrafish egg cell, respectively.
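The paper's force computation solves Cauchy's problem in elasticity, which is beyond a short sketch, but the vision-based displacement measurement it builds on can be illustrated with plain template matching between a reference frame and a deformed frame. This is a toy stand-in, with every name and parameter assumed, not the authors' algorithm.

```python
import cv2
import numpy as np

def jaw_displacement(ref_frame, cur_frame, template_box, m_per_px):
    """Track a patch on the gripper jaw between two grayscale frames via
    normalised cross-correlation; return its displacement in metres."""
    x, y, w, h = template_box
    template = ref_frame[y:y + h, x:x + w]
    result = cv2.matchTemplate(cur_frame, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (nx, ny) = cv2.minMaxLoc(result)   # best-match location
    return np.hypot(nx - x, ny - y) * m_per_px
```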

Relevance: 100.00%

Abstract:

A micro-newton static force sensor is presented here as a packaged product. The sensor, which is based on the mechanics of deformable objects, consists of a compliant mechanism that amplifies the displacement caused by the force to be measured. The output displacement, captured using a digital microscope and analyzed using image processing techniques, is used to calculate the force from a precalibrated force-displacement curve. Images are scanned in real time at 15 frames per second and sampled at around half the scanning frequency. The sensor was built, packaged, calibrated, and tested. It has simulated and measured stiffness values of 2.60 N/m and 2.57 N/m, respectively. The smallest force it can reliably measure in the presence of noise is about 2 µN, over a range of 1.4 mN. The off-the-shelf digital microscope aside, all of its components are purely mechanical; they are inexpensive and can be easily made using simple machines. Another highlight of the sensor is that its movable and delicate components are easily replaceable. The sensor can be used in an aqueous environment because it does not use electric, magnetic, thermal, or any other fields. Currently, it can only measure static forces, or forces that vary at less than 1 Hz, because its response time and bandwidth are limited by the speed of imaging with a camera. With the universal serial bus (USB) connection of its digital microscope, a custom-developed graphical user interface (GUI), and related software, the sensor is fully developed as a readily usable product.
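Since the mechanism is reported as near-linear (about 2.57 N/m measured stiffness), the force lookup from the precalibrated curve reduces to simple interpolation. The calibration points below are illustrative, not the paper's measured values.

```python
import numpy as np

# Illustrative precalibrated force-displacement curve, assuming the
# near-linear behaviour implied by the measured 2.57 N/m stiffness.
calib_disp_m = np.linspace(0.0, 5.5e-4, 12)   # output displacement (m)
calib_force_n = 2.57 * calib_disp_m           # corresponding force (N)

def force_from_displacement(disp_m):
    """Convert a displacement measured from the microscope images
    into a force via the precalibrated curve."""
    return float(np.interp(disp_m, calib_disp_m, calib_force_n))

# A 2 uN force corresponds to roughly 0.78 um of output displacement.
print(force_from_displacement(7.8e-7))   # ~2e-6 N
```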

Relevance: 100.00%

Abstract:

This chapter presents a vision-based system for touch-free interaction with a display at a distance. A single camera is fixed on top of the screen and points towards the user. An attention mechanism allows the user to start the interaction and control a screen pointer by moving their hand in a fist pose directed at the camera. On-screen items can be chosen via a selection mechanism. Current sample applications include browsing video collections as well as viewing a gallery of 3D objects, which the user can rotate with their hand motion. We include an up-to-date review of hand-tracking methods and comment on the merits and shortcomings of previous approaches. The proposed tracker uses multiple cues (appearance, color, and motion) for robustness. As the space of possible observation models is generally too large for exhaustive online search, we select models that are suitable for the particular tracking task at hand. During a training stage, various off-the-shelf trackers are evaluated. From these data, different methods of fusing them online are investigated, including parallel and cascaded tracker evaluation. For the case of fist tracking, combining a small number of observers in a cascade results in an efficient algorithm that is used in our gesture interface. The system has been on public display at conferences, where over a hundred users have engaged with it. © 2010 Springer-Verlag Berlin Heidelberg.
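The cascaded tracker evaluation mentioned above can be sketched in a few lines: observers are ordered from cheap cues to expensive ones, and evaluation stops at the first confident response. The interfaces and the confidence threshold here are our assumptions.

```python
def cascaded_track(frame, observers, threshold=0.5):
    """Evaluate observers cheapest-first; stop at the first confident one.

    Each observer maps a frame to (bounding_box, confidence). Ordering
    cheap cues (e.g., color) before expensive ones (e.g., appearance
    templates) keeps the average per-frame cost low.
    """
    for observer in observers:
        box, confidence = observer(frame)
        if confidence >= threshold:
            return box, confidence
    return None, 0.0   # hand lost; fall back to (re)detection
```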

Relevance: 100.00%

Abstract:

Estimating the fundamental matrix (F) to determine the epipolar geometry between a pair of images or video frames is a basic step for a wide variety of vision-based functions used in construction operations, such as camera-pair calibration, automatic progress monitoring, and 3D reconstruction. Currently, robust methods (e.g., SIFT + normalized eight-point algorithm + RANSAC) are widely used in the construction community for this purpose. Although they provide acceptable accuracy, their significant computational cost impedes their adoption in real-time applications, especially the analysis of video data with many frames per second. To overcome this limitation, this paper presents and evaluates the accuracy of a solution that finds F by combining two fast and consistent methods: SURF for the selection of a robust set of point correspondences, and the normalized eight-point algorithm. The solution is tested extensively on construction-site image pairs featuring changes in viewpoint, scale, illumination, and rotation, as well as moving objects. The results demonstrate that the method can be used for real-time applications (five image pairs per second at a resolution of 640 × 480) involving scenes of the built environment.
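A sketch of this SURF + normalized eight-point combination with OpenCV follows. SURF lives in the contrib/nonfree modules, so it needs an OpenCV build with those enabled; the Hessian threshold and Lowe ratio below are our assumptions, and OpenCV's FM_8POINT variant applies point normalisation internally.

```python
import cv2
import numpy as np

def fundamental_matrix(img1, img2, hessian=400, ratio=0.7):
    """Estimate F for a grayscale image pair: SURF correspondences
    followed by the (normalised) eight-point algorithm."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian)
    k1, d1 = surf.detectAndCompute(img1, None)
    k2, d2 = surf.detectAndCompute(img2, None)

    # Lowe ratio test keeps only distinctive correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(d1, d2, k=2)
            if m.distance < ratio * n.distance]

    pts1 = np.float32([k1[m.queryIdx].pt for m in good])
    pts2 = np.float32([k2[m.trainIdx].pt for m in good])
    F, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_8POINT)
    return F
```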

Relevance: 100.00%

Abstract:

Manual inspection of concrete surface defects (e.g., cracks and air pockets) is labor-intensive and not always reliable. Automated inspection using image processing techniques has been proposed to overcome these limitations, but existing work can only detect defects in an image, not evaluate them. This paper presents a novel approach for automatically assessing the impact of two common surface defects: air pockets and discoloration. The two defects are first located using the developed detection methods. Their attributes, such as the number of air pockets and the area of discoloration regions, are then retrieved to calculate the defects' visual impact ratios (VIRs). Appropriate threshold values for these VIRs are selected through a manual rating survey. This way, for a given concrete surface image, quality in terms of air pockets and discoloration can be measured automatically by judging whether the VIRs fall below the threshold values. The method was implemented in C++ and validated on a database of concrete surface images.
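The VIR-based acceptance test reduces to a per-defect ratio check against its surveyed threshold. A minimal sketch, with placeholder thresholds since the paper derives its values from a manual rating survey:

```python
def visual_impact_ratio(defect_pixels, surface_pixels):
    """VIR: fraction of the inspected surface occupied by one defect type."""
    return defect_pixels / float(surface_pixels)

def passes_inspection(air_pocket_px, discolor_px, surface_px,
                      vir_air_max=0.002, vir_disc_max=0.01):
    """Accept the surface only if both VIRs fall below their thresholds
    (threshold values here are placeholders, not the surveyed ones)."""
    return (visual_impact_ratio(air_pocket_px, surface_px) <= vir_air_max and
            visual_impact_ratio(discolor_px, surface_px) <= vir_disc_max)
```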