944 results for Scale invariant feature transform
Abstract:
This paper proposes a parallel hardware architecture for image feature detection based on the Scale Invariant Feature Transform algorithm, applied to the Simultaneous Localization And Mapping problem. The work also proposes specific hardware optimizations considered fundamental to embedding such a robotic control system on a chip. The proposed architecture is completely stand-alone; it reads the input data directly from a CMOS image sensor and provides the results via a field-programmable gate array coupled to an embedded processor. The results may either be used directly in an on-chip application or accessed through an Ethernet connection. The system is able to detect features at up to 30 frames per second (320 x 240 pixels) and has accuracy similar to a PC-based implementation. The achieved system performance is at least one order of magnitude better than a PC-based solution, a result achieved by investigating the impact of several hardware-oriented optimizations on performance, area, and accuracy.
Abstract:
Wide-angle images exhibit significant distortion for which existing scale-space detectors such as the scale-invariant feature transform (SIFT) are inappropriate. The scale-space images required for feature detection are correctly obtained through the convolution of the image, mapped to the sphere, with the spherical Gaussian. A new visual keypoint detector based on this principle is developed, and several computational approaches to the convolution are investigated in both the spatial and frequency domains. In particular, a close approximation is developed that has computation time comparable to conventional SIFT but with improved matching performance. Results are presented for monocular wide-angle outdoor image sequences obtained using fisheye and equiangular catadioptric cameras. We evaluate the overall matching performance (recall versus 1-precision) of these methods compared to conventional SIFT. We also demonstrate the use of the technique for variable frame-rate visual odometry and its application to place recognition.
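For illustration, a minimal spatial-domain sketch of such an approximation follows, in Python. It assumes an equiangular fisheye model and blends a few fixed-sigma Gaussian blurs per pixel; the `focal` parameter and the cos(theta) bandwidth model are illustrative assumptions, not the paper's exact method:

# Sketch: approximate a spherical Gaussian blur of a fisheye image by
# blending fixed-sigma blurs according to a per-pixel equivalent sigma.
# Equiangular model (theta = r / focal) assumed; illustrative only.
import numpy as np
from scipy.ndimage import gaussian_filter

def approx_spherical_blur(img, focal, sigma_sphere):
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    y, x = np.mgrid[0:h, 0:w]
    theta = np.hypot(y - cy, x - cx) / focal       # angle from optical axis
    # Assumed model: the image-plane sigma equivalent to a fixed angular
    # sigma on the sphere shrinks away from the centre, roughly as cos(theta).
    sigma_img = sigma_sphere * focal * np.clip(np.cos(theta), 0.1, 1.0)
    # Blur at a few fixed sigmas, then blend per pixel between levels.
    levels = np.linspace(sigma_img.min(), sigma_img.max() + 1e-9, 4)
    stack = np.stack([gaussian_filter(img.astype(np.float64), s) for s in levels])
    idx = np.clip(np.searchsorted(levels, sigma_img) - 1, 0, len(levels) - 2)
    lo, hi = levels[idx], levels[idx + 1]
    t = np.clip((sigma_img - lo) / np.maximum(hi - lo, 1e-9), 0.0, 1.0)
    return (1 - t) * stack[idx, y, x] + t * stack[idx + 1, y, x]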
Abstract:
Watermarking aims to hide particular information in some carrier without changing the visual cognition of the carrier itself. Local features are good candidates for addressing the watermark synchronization error caused by geometric distortions and have attracted great attention in content-based image watermarking. This paper presents a novel feature point-based image watermarking scheme robust to geometric distortions. The scale invariant feature transform (SIFT) is first adopted to extract feature points and to generate a disk for each feature point that is invariant to translation and scaling. For each disk, orientation alignment is then performed to achieve rotation invariance. Finally, the watermark is embedded in the middle-frequency discrete Fourier transform (DFT) coefficients of each disk to improve robustness against common image processing operations. Extensive experimental results and comparisons with representative image watermarking methods confirm the excellent performance of the proposed method in robustness against various geometric distortions as well as common image processing operations.
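As a rough illustration of the embedding step, the Python sketch below quantizes mid-frequency DFT magnitudes of a patch around each SIFT keypoint. Quantization-index modulation is an assumed embedding rule, and `delta` and the ring radii are placeholders; the abstract does not specify the paper's exact rule:

# Sketch: embed one watermark bit per SIFT keypoint by quantizing
# mid-frequency DFT magnitudes of the surrounding patch (QIM assumed).
import cv2
import numpy as np

def embed_bit(gray, kp, bit, delta=8.0):
    r = int(round(3 * kp.size))                       # patch radius ~ keypoint scale
    x, y = int(round(kp.pt[0])), int(round(kp.pt[1]))
    if r < 8 or y - r < 0 or x - r < 0 or y + r > gray.shape[0] or x + r > gray.shape[1]:
        return gray                                   # skip keypoints near borders
    patch = gray[y - r:y + r, x - r:x + r].astype(np.float64)
    F = np.fft.fftshift(np.fft.fft2(patch))
    n = patch.shape[0]
    u, v = np.mgrid[0:n, 0:n] - n // 2
    ring = (np.hypot(u, v) > n / 8) & (np.hypot(u, v) < n / 4)  # mid band
    mag, phase = np.abs(F), np.angle(F)
    q = np.round(mag[ring] / delta)                   # snap magnitudes to bins
    q += (q % 2 != bit)                               # force even/odd parity = bit
    mag[ring] = q * delta
    out = gray.astype(np.float64).copy()
    out[y - r:y + r, x - r:x + r] = np.real(
        np.fft.ifft2(np.fft.ifftshift(mag * np.exp(1j * phase))))
    return np.clip(out, 0, 255).astype(np.uint8)

cover = cv2.imread("cover.png", cv2.IMREAD_GRAYSCALE)
for kp in cv2.SIFT_create(nfeatures=20).detect(cover, None):
    cover = embed_bit(cover, kp, bit=1)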
Abstract:
This thesis addresses the problem of detecting and describing the same scene points in different wide-angle images taken by the same camera at different viewpoints. This is a core competency of many vision-based localisation tasks, including visual odometry and visual place recognition. Wide-angle cameras have a large field of view that can exceed a full hemisphere, and the images they produce contain severe radial distortion. Compared to traditional narrow-field-of-view perspective cameras, more accurate estimates of camera egomotion can be found using the images obtained with wide-angle cameras. The ability to accurately estimate camera egomotion is a fundamental primitive of visual odometry, and this is one of the reasons for the increasing popularity of wide-angle cameras for this task. Their large field of view also enables them to capture images of the same regions of a scene from very different viewpoints, which makes them well suited to visual place recognition. However, the ability to estimate camera egomotion and to recognise the same scene in two different images depends on the ability to reliably detect and describe the same scene points, or 'keypoints', in the images. Most algorithms used for this purpose are designed almost exclusively for perspective images. Applying algorithms designed for perspective images directly to wide-angle images is problematic, as no account is made of the image distortion. The primary contribution of this thesis is the development of two novel keypoint detectors, and a method of keypoint description, designed for wide-angle images. Both reformulate the Scale-Invariant Feature Transform (SIFT) as an image processing operation on the sphere. As the image captured by any central-projection wide-angle camera can be mapped to the sphere, applying these variants to an image on the sphere enables keypoints to be detected in a manner that is invariant to image distortion. Each of the variants is required to find the scale-space representation of an image on the sphere, and they differ in the approaches they use to do this. Extensive experiments using real and synthetically generated wide-angle images validate the two new keypoint detectors and the method of keypoint description. The better of the two new keypoint detectors is applied to vision-based localisation tasks, including visual odometry and visual place recognition, using outdoor wide-angle image sequences. As part of this work, the effect of keypoint coordinate selection on the accuracy of egomotion estimates obtained with the Direct Linear Transform (DLT) is investigated, and a simple weighting scheme is proposed that attempts to account for the uncertainty of keypoint positions during detection. A word reliability metric is also developed for use within a visual 'bag of words' approach to place recognition.
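As a small illustration of the mapping the thesis relies on, the Python sketch below lifts pixel coordinates from a central fisheye camera to rays on the unit sphere. The equiangular model and the `focal`/principal-point values are placeholder assumptions; a real camera requires calibration:

# Sketch: lift fisheye pixel coordinates to unit-sphere rays under an
# equiangular model (theta = r / focal).
import numpy as np

def pixels_to_sphere(pts, focal, cx, cy):
    d = pts - np.array([cx, cy])                 # offsets from principal point
    theta = np.linalg.norm(d, axis=1) / focal    # angle from the optical axis
    phi = np.arctan2(d[:, 1], d[:, 0])           # azimuth in the image plane
    return np.column_stack([np.sin(theta) * np.cos(phi),
                            np.sin(theta) * np.sin(phi),
                            np.cos(theta)])

rays = pixels_to_sphere(np.array([[320.0, 240.0], [10.0, 10.0]]),
                        focal=160.0, cx=320.0, cy=240.0)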
Abstract:
Modern smartphones often come with a significant amount of computational power and an integrated digital camera, making them an ideal platform for intelligent assistants. This work is restricted to retail environments, where users could be provided with, for example, navigational instructions to desired products or information about special offers in their close proximity. Such applications usually require information about the user's current location in the domain environment, which in our case corresponds to a retail store. We propose a vision-based positioning approach that recognizes the products the user's mobile phone camera is currently pointing at. The products are related to locations within the store, which enables us to locate the user by pointing the mobile phone's camera at a group of products. The first step of our method is to extract meaningful features from digital images. We use the Scale-Invariant Feature Transform (SIFT) algorithm, which extracts features that are highly distinctive in the sense that they can be correctly matched against a large database of features from many images. We collect a comprehensive set of images from all meaningful locations within our domain and extract the SIFT features from each of these images. As the SIFT features are of high dimensionality, and comparing individual features is thus infeasible, we apply the Bags of Keypoints method, which creates a generic representation, a visual category, from all features extracted from images taken at a specific location. A category for an unseen image can be deduced by extracting the corresponding SIFT features and choosing the category that best fits the extracted features. We have applied the proposed method in a Finnish supermarket. We consider grocery shelves as categories, which is a sufficient level of accuracy to help users navigate or to provide useful information about nearby products. We achieve 40% accuracy, which is quite low for commercial applications but significantly outperforms the random-guess baseline. Our results suggest that the classification accuracy could be increased by a deeper analysis of the domain and by combining existing positioning methods with ours.
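A minimal Python sketch of this pipeline follows, using OpenCV's SIFT and a k-means vocabulary; the file names, the number of clusters, and the nearest-centroid classifier are placeholder choices, not the authors' exact configuration:

# Sketch: bag-of-keypoints location classification.
# SIFT descriptors -> k-means vocabulary -> per-image histograms ->
# nearest category histogram.
import cv2
import numpy as np
from sklearn.cluster import KMeans

sift = cv2.SIFT_create()
K = 50                                             # vocabulary size (placeholder)

def descriptors(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, des = sift.detectAndCompute(img, None)
    return des if des is not None else np.empty((0, 128), np.float32)

train = {"shelf_a": ["a1.jpg", "a2.jpg"], "shelf_b": ["b1.jpg", "b2.jpg"]}
vocab = KMeans(n_clusters=K, n_init=10).fit(
    np.vstack([descriptors(p) for ps in train.values() for p in ps]))

def bow_hist(des):
    h = np.bincount(vocab.predict(des), minlength=K).astype(np.float64)
    return h / max(h.sum(), 1.0)

category_hist = {c: np.mean([bow_hist(descriptors(p)) for p in ps], axis=0)
                 for c, ps in train.items()}

def classify(path):
    h = bow_hist(descriptors(path))
    return min(category_hist, key=lambda c: np.linalg.norm(category_hist[c] - h))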
Abstract:
Moving shadow detection and removal from the extracted foreground regions of video frames aims to limit the risk of moving shadows being mistaken for parts of moving objects. This operation thus enhances the accuracy of detection and classification of moving objects. With similar reasoning, the present paper proposes an efficient method for discriminating between moving object and moving shadow regions in a video sequence, with no human intervention. It also requires little computational effort and works effectively under dynamic traffic conditions on highways and streets, with and without marking lines. Further, we have used scale-invariant feature transform-based features for the classification of moving vehicles (with and without shadow regions), which enhances the effectiveness of the proposed method. The potential of the method is tested with various data sets collected from different road traffic scenarios, and its superiority is compared with existing methods.
Abstract:
In this dissertation, the SIFT (Scale Invariant Feature Transform) technique was used for the recognition of images of the eye area (periorbital region). A classification of the images into subgroups within the database was implemented, using statistical information derived from the invariant patterns produced by the SIFT technique. A categorized search of the database was carried out, instead of comparing a given query pattern against every pattern stored in the database. A statistical approach was applied to these patterns by generating the covariance matrix of the produced patterns, which was then used for categorization based on a hybrid neural network. The neural network classifies and categorizes the image database, creating a search topology. The hybrid neural network achieved a correct classification rate of 76.3%, and an auxiliary algorithm determines a search hierarchy in which, whenever a misclassification occurs, the search proceeds through the most probable groups.
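A minimal Python sketch of the covariance representation follows; a plain nearest-neighbour comparison stands in here for the dissertation's hybrid neural network, and the Frobenius distance is an illustrative choice:

# Sketch: summarise an image's SIFT descriptors by their 128 x 128
# covariance matrix and compare images by Frobenius distance.
import cv2
import numpy as np

sift = cv2.SIFT_create()

def covariance_descriptor(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, des = sift.detectAndCompute(img, None)
    return np.cov(des, rowvar=False)              # rows = keypoints

gallery = {"subject_1": covariance_descriptor("s1_eye.png"),
           "subject_2": covariance_descriptor("s2_eye.png")}
probe = covariance_descriptor("probe_eye.png")
best = min(gallery, key=lambda k: np.linalg.norm(gallery[k] - probe, "fro"))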
Abstract:
This dissertation presents an improvement to the Hybrid Three-Dimensional Imaging System (SITH), which is used to obtain a three-dimensional surface of the relief of a given region from two consecutive aerial photographs of it. Photogrammetry is the science and technology used to obtain reliable information from images acquired by sensors. The improvement to SITH consists of automating point extraction from the pairs of stereoscopic images obtained by metric aerial cameras, using the Scale Invariant Feature Transform (SIFT) technique, and of applying cubic spline interpolation to smooth the resulting three-dimensional surfaces, providing a clearer visualization of the details of the studied area and supporting landslide prevention in at-risk locations through adequate urban planning. Computational results show that incorporating these methods into the SITH program produced good results.
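A minimal Python sketch of the two steps described above follows: SIFT matching across the stereo pair, then a cubic smoothing spline over the scattered samples. The `elevation` values here are a stand-in for heights triangulated photogrammetrically from the matches:

# Sketch: SIFT matching across an aerial stereo pair, then cubic
# bivariate spline smoothing of scattered (x, y, height) samples.
import cv2
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

sift = cv2.SIFT_create()
left = cv2.imread("aerial_left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("aerial_right.png", cv2.IMREAD_GRAYSCALE)
kp1, des1 = sift.detectAndCompute(left, None)
kp2, des2 = sift.detectAndCompute(right, None)

# Lowe's ratio test keeps only distinctive matches.
good = [m for m, n in cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]

pts = np.float64([kp1[m.queryIdx].pt for m in good])
disparity = np.float64([kp1[m.queryIdx].pt[0] - kp2[m.trainIdx].pt[0]
                        for m in good])
elevation = 1.0 / np.maximum(np.abs(disparity), 1e-3)   # placeholder heights

# Cubic (kx = ky = 3) smoothing spline over the scattered samples.
spline = SmoothBivariateSpline(pts[:, 0], pts[:, 1], elevation, kx=3, ky=3)
surface = spline(np.linspace(pts[:, 0].min(), pts[:, 0].max(), 100),
                 np.linspace(pts[:, 1].min(), pts[:, 1].max(), 100))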
Abstract:
This work studies the feasibility of a parallel implementation of the scale invariant feature transform (SIFT) algorithm for iris identification. The code was implemented using the compute unified device architecture (CUDA) parallel computing platform and the OpenGL shading language (GLSL). The algorithm was tested on three eye and iris databases: the noisy visible wavelength iris image database (UBIRIS), Michal-Libor, and CASIA. Tests were carried out to determine the processing time required to verify the presence or absence of an individual in a database, to measure the efficiency of the search algorithms implemented in GLSL and CUDA, and to find calibration values that improve the placement and distribution of keypoints in the region of interest (the iris) and the robustness of the final program.
Abstract:
A scale invariant feature transform (SIFT) based mean shift algorithm is presented for object tracking in real scenarios. SIFT features are used to establish correspondences between regions of interest across frames. Meanwhile, mean shift is applied to conduct a similarity search via color histograms. The probability distributions from these two measurements are evaluated in an expectation-maximization scheme so as to achieve maximum likelihood estimation of similar regions. This mutual support mechanism can maintain consistent tracking performance even if one of the two measurements becomes unstable. Experimental work demonstrates that the proposed mean shift/SIFT strategy improves the tracking performance of the classical mean shift and SIFT tracking algorithms in complicated real scenarios.
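A simplified Python sketch of combining the two cues follows: mean shift tracks a colour histogram, and SIFT matching re-seeds the window when the histogram support weakens. This is a plain fallback rule, not the paper's expectation-maximization fusion; the window, thresholds, and file name are placeholders:

# Sketch: colour-histogram mean shift with a SIFT-based re-seeding
# fallback (simplified stand-in for the paper's EM fusion).
import cv2
import numpy as np

cap = cv2.VideoCapture("video.mp4")
ok, frame = cap.read()
x, y, w, h = 200, 150, 80, 80                     # initial window (placeholder)
roi = frame[y:y + h, x:x + w]
hist = cv2.calcHist([cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)], [0], None,
                    [32], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher(cv2.NORM_L2)
kp0, des0 = sift.detectAndCompute(cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY), None)
crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
window = (x, y, w, h)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    prob = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    _, window = cv2.meanShift(prob, window, crit)
    wx, wy, ww, wh = window
    if prob[wy:wy + wh, wx:wx + ww].mean() < 20:  # colour cue unstable
        kp, des = sift.detectAndCompute(
            cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), None)
        good = [m for m, n in matcher.knnMatch(des0, des, k=2)
                if m.distance < 0.75 * n.distance]
        if good:                                  # re-seed at matched centre
            cx, cy = np.float64([kp[m.trainIdx].pt for m in good]).mean(axis=0)
            window = (int(cx - w / 2), int(cy - h / 2), w, h)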
Abstract:
In this paper, a novel motion-tracking scheme using scale-invariant features is proposed for automatic cell motility analysis in gray-scale microscopic videos, particularly for live-cell tracking in low-contrast differential interference contrast (DIC) microscopy. In the proposed approach, scale-invariant feature transform (SIFT) points around live cells in the microscopic image are detected, and a structure locality preservation (SLP) scheme using the Laplacian eigenmap is proposed to track the SIFT feature points along successive frames of low-contrast DIC videos. Experiments on low-contrast DIC microscopic videos of various live-cell lines show that, in comparison with principal component analysis (PCA) based SIFT tracking, the proposed Laplacian-SIFT can significantly reduce the error rate of SIFT feature tracking. With this enhancement, further experimental results demonstrate that the proposed scheme is a robust and accurate approach to the challenge of live-cell tracking in DIC microscopy.
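As a rough Python sketch of the idea, the snippet below embeds the SIFT descriptors of two consecutive frames with a Laplacian eigenmap (scikit-learn's SpectralEmbedding) and matches points in the low-dimensional space. It is a simplified stand-in for the paper's SLP scheme; the embedding dimension and neighbourhood size are placeholders:

# Sketch: joint Laplacian-eigenmap embedding of SIFT descriptors from
# two frames, then nearest-neighbour matching in the embedded space.
import cv2
import numpy as np
from sklearn.manifold import SpectralEmbedding

sift = cv2.SIFT_create()
f1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
f2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
kp1, des1 = sift.detectAndCompute(f1, None)
kp2, des2 = sift.detectAndCompute(f2, None)

# Embed both frames' descriptors together so they share coordinates.
coords = SpectralEmbedding(n_components=8, n_neighbors=10).fit_transform(
    np.vstack([des1, des2]).astype(np.float64))
c1, c2 = coords[:len(des1)], coords[len(des1):]

# For each point in frame 1, pick its nearest neighbour in frame 2.
matches = [(i, int(np.argmin(np.linalg.norm(c2 - c1[i], axis=1))))
           for i in range(len(c1))]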
Abstract:
Ear recognition, as a biometric, has several advantages. In particular, ears can be measured remotely and are relatively static in size and structure for each individual. Unfortunately, at present, good recognition rates require controlled conditions. For commercial use, these systems need to be much more robust. In particular, ears have to be recognized from different angles (poses), under different lighting conditions, and with different cameras. It must also be possible to distinguish ears from background clutter and identify them when partly occluded by hair, hats, or other objects. The purpose of this paper is to suggest how progress toward such robustness might be achieved through a technique that improves ear registration. The approach focuses on 2-D images, treating the ear as a planar surface that is registered to a gallery using a homography transform calculated from scale-invariant feature transform (SIFT) feature matches. The feature matches reduce the gallery size and enable a precise ranking using a simple 2-D distance algorithm. Analysis on a range of data sets demonstrates the technique to be robust to background clutter, viewing angles of up to +/- 13 degrees, and up to 18% occlusion. In addition, recognition remains accurate with masked ear images as small as 20 x 35 pixels.
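A minimal Python sketch of the registration step follows, using OpenCV's SIFT, ratio-test matching, and a RANSAC homography; the threshold values and file names are illustrative:

# Sketch: register a probe ear to a gallery ear with a homography
# estimated from SIFT matches, then score by a simple 2-D distance.
import cv2
import numpy as np

sift = cv2.SIFT_create()
gallery = cv2.imread("gallery_ear.png", cv2.IMREAD_GRAYSCALE)
probe = cv2.imread("probe_ear.png", cv2.IMREAD_GRAYSCALE)
kp1, des1 = sift.detectAndCompute(gallery, None)
kp2, des2 = sift.detectAndCompute(probe, None)

good = [m for m, n in cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]
src = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)

# Homography mapping probe coordinates onto the gallery image (RANSAC).
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
registered = cv2.warpPerspective(probe, H, gallery.shape[::-1])
score = np.mean(np.abs(registered.astype(np.float64) - gallery))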
Abstract:
Shoeprint evidence collected from crime scenes can play an important role in forensic investigations. Usually, the analysis of shoeprints is carried out manually, based on human expertise and knowledge. As well as being error prone, such a manual process can be time consuming, affecting the usability and suitability of shoeprint evidence in a court of law. An automatic system for the classification and retrieval of shoeprints therefore has the potential to be a valuable tool. This paper presents a solution for the automatic retrieval of shoeprints that is considerably more robust than existing solutions in the presence of geometric distortions such as scale, rotation, and noise distortions. It addresses the issue of classifying partial shoeprints in the presence of rotation, scale, and noise distortions and relies on the use of two local point-of-interest detectors whose matching scores are combined. In this work, multiscale Harris and Hessian detectors are used to select corners and blob-like structures in a scale-space representation for scale invariance, while the Scale Invariant Feature Transform (SIFT) descriptor is employed to achieve rotation invariance. The proposed technique combines the matching scores of the two detectors at the score level. Our evaluation has shown that it outperforms both individual detectors in most of our extended experiments when retrieving partial shoeprints with geometric distortions, and is clearly better than similar work published in the literature. We also demonstrate improved performance in the face of wear and tear. In fact, whilst the proposed work outperforms similar algorithms in the literature, it is shown that good retrieval performance is not constrained by acquiring a full print from a scene of crime, as a partial print can still be used to attain retrieval results comparable to those obtained with the full print. This gives crime investigators more flexibility in choosing the parts of a print to search for in a database of footwear.
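A minimal Python sketch of score-level fusion follows. OpenCV's Harris corners (goodFeaturesToTrack) and SimpleBlobDetector stand in for the paper's multiscale Harris and Hessian detectors; both keypoint sets are described with SIFT and the normalised match rates are summed:

# Sketch: describe two detectors' keypoints with SIFT, match each set
# separately, and fuse the normalised match scores by summation.
import cv2
import numpy as np

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher(cv2.NORM_L2)

def corner_kps(img):                              # Harris-corner stand-in
    pts = cv2.goodFeaturesToTrack(img, 200, 0.01, 5, useHarrisDetector=True)
    return [cv2.KeyPoint(float(x), float(y), 8) for x, y in pts.reshape(-1, 2)]

def blob_kps(img):                                # blob (Hessian-like) stand-in
    return list(cv2.SimpleBlobDetector_create().detect(img))

def match_score(query, ref, detect):
    _, d1 = sift.compute(query, detect(query))    # SIFT descriptors at keypoints
    _, d2 = sift.compute(ref, detect(ref))
    good = [m for m, n in matcher.knnMatch(d1, d2, k=2)
            if m.distance < 0.75 * n.distance]
    return len(good) / max(len(d1), 1)            # normalised match rate

def fused_score(query, ref):                      # score-level fusion
    return match_score(query, ref, corner_kps) + match_score(query, ref, blob_kps)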