118 results for SIFT keypoints
Abstract:
The present review articulates the syntheses and properties of industrially important disulfide and tetrasulfide polymers. Diselenide and ditelluride polymers are also reviewed, for the first time, so that a comprehensive view of the polymers containing group VIA elements can be obtained. The latter two classes are attracting considerable current attention due to their semiconducting properties. Emphasis has been placed on sifting through the developments of roughly the last ten years to capture the latest advances in these rapidly developing polymers. We have also attempted to bring to the fore several contradictory results, such as those concerning the crystallinity of ditelluride polymers, to clear the mist in such reports. We hope that this review will help those working in the field to assess the progress achieved in this area and that it may also provide useful orientation for those who wish to become involved.
Abstract:
P bodies are 100-300 nm organelles involved in mRNA silencing and degradation. A total of 60 human proteins have been reported to localize to P bodies. Several human SNPs contribute to complex diseases by altering the structure and function of proteins; SNPs also alter transcription-factor binding, splicing and miRNA regulatory sites. Owing to the essential functions of P bodies in mRNA regulation, we explored computationally the functional significance of SNPs in 7 P body components: XRN1, DCP2, EDC3, CPEB1, GEMIN5, STAU1 and TRIM71. Computational analyses of non-synonymous SNPs of these components were carried out using well-established, publicly available software programs: SIFT, followed by PolyPhen, PANTHER, MutPred, I-Mutant-2.0 and PhosSNP 1.0. The functional significance of noncoding SNPs in the regulatory regions was analysed using FastSNP. Using the miRSNP database, we explored the role of SNPs that alter miRNA binding sites in the above-mentioned genes. Our in silico studies have identified various deleterious SNPs; this cataloguing is essential and provides first-hand information for further analysis by in vitro and in vivo methods, towards a better understanding of the maintenance, assembly and function of P bodies in both health and disease. (C) 2013 Elsevier B.V. All rights reserved.
Abstract:
Representing images and videos in the form of compact codes has emerged as an important research interest in the vision community, in the context of web-scale image/video search. The recently proposed Vector of Locally Aggregated Descriptors (VLAD) has been shown to outperform existing retrieval techniques while giving the desired compact representation. VLAD aggregates the local features of an image in the feature space. In this paper, we propose to represent the local features extracted from an image as sparse codes over an over-complete dictionary obtained by the K-SVD dictionary-training algorithm. The proposed VLAD aggregates the residuals in the space of these sparse codes to obtain a compact representation for the image. Experiments are performed on the `Holidays' database using SIFT features, and the performance of the proposed method is compared with the original VLAD. A 4% improvement in mean average precision (mAP) indicates the better retrieval performance of the proposed sparse-coding-based VLAD.
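The residual-aggregation step that VLAD performs can be sketched as follows. This is a minimal illustration of standard VLAD over a codebook of centroids, not the sparse-coding variant proposed in the abstract, and the codebook here would in practice come from clustering (or, in the proposed method, K-SVD training) rather than being arbitrary.

```python
import numpy as np

def vlad(descriptors, codebook):
    """Aggregate local descriptors into a VLAD vector.

    descriptors: (n, d) local features (e.g. SIFT); codebook: (k, d) centroids.
    Returns a flattened, L2-normalized (k*d,) vector.
    """
    # Hard-assign each descriptor to its nearest codeword.
    dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    assign = dists.argmin(axis=1)
    k, d = codebook.shape
    v = np.zeros((k, d))
    for i in range(k):
        members = descriptors[assign == i]
        if len(members):
            # Accumulate residuals to the assigned centroid.
            v[i] = (members - codebook[i]).sum(axis=0)
    v = v.ravel()
    # Signed square-root followed by L2 normalization, as is common for VLAD.
    v = np.sign(v) * np.sqrt(np.abs(v))
    n = np.linalg.norm(v)
    return v / n if n > 0 else v
```

The resulting k*d vector is what gets compared (typically by dot product) at search time; the paper's contribution replaces the hard assignment with sparse codes over the K-SVD dictionary.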
Abstract:
This paper describes an efficient vision-based global topological localization approach that uses a coarse-to-fine strategy. The Orientation Adjacency Coherence Histogram (OACH), a novel image feature, is proposed to improve coarse localization. The coarse localization results are taken as inputs to the fine localization, which is carried out by matching Harris-Laplace interest points characterized by the SIFT descriptor. Computation of OACHs and interest points is efficient because these features are computed in an integrated process. We have implemented and tested the localization system in real environments. The experimental results demonstrate that our approach is efficient and reliable in both indoor and outdoor environments. © 2006 IEEE.
Abstract:
This paper presents a novel coarse-to-fine global localization approach inspired by object recognition and text retrieval techniques. Harris-Laplace interest points characterized by SIFT descriptors are used as natural landmarks. These descriptors are indexed into two databases: an inverted index and a location database. The inverted index is built on a visual vocabulary learned from the feature descriptors. In the location database, each location is directly represented by a set of scale-invariant descriptors. The localization process consists of two stages: coarse localization and fine localization. Coarse localization from the inverted index is fast but not accurate enough, whereas localization from the location database using a voting algorithm is relatively slow but more accurate. The combination of the coarse and fine stages makes fast and reliable localization possible. In addition, if necessary, the localization result can be verified by the epipolar geometry between the representative view in the database and the view to be localized. Experimental results show that our approach is efficient and reliable. ©2005 IEEE.
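The coarse stage described above, voting for locations through an inverted index of visual words, can be sketched as below. The word ids stand in for descriptors already quantized against the learned vocabulary, and the plain vote count is a simplified stand-in for whatever scoring the paper uses.

```python
from collections import defaultdict

def build_inverted_index(location_words):
    """location_words: dict mapping location id -> iterable of visual-word ids.
    Returns a dict mapping word id -> set of locations containing that word."""
    index = defaultdict(set)
    for loc, words in location_words.items():
        for w in words:
            index[w].add(loc)
    return index

def coarse_localize(query_words, index, top=3):
    """Rank locations by how many visual words they share with the query."""
    votes = defaultdict(int)
    for w in query_words:
        for loc in index.get(w, ()):
            votes[loc] += 1
    return sorted(votes, key=votes.get, reverse=True)[:top]
```

The top-ranked candidates would then be handed to the fine stage, where the raw scale-invariant descriptors stored per location are matched directly.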
Abstract:
This paper presents a novel approach using combined features to retrieve images containing specific objects, scenes or buildings. The content of an image is characterized by two kinds of features: Harris-Laplace interest points described by the SIFT descriptor, and edges described by the edge color histogram. Edges and corners contain most of the information necessary for image retrieval. Feature detection in this work is an integrated process: edges are detected directly from the Harris function; Harris interest points are detected at several scales; and Harris-Laplace interest points are found using the Laplace function. The combination of edges and interest points brings efficient feature detection and a high recognition ratio to the image retrieval system. Experimental results show that the system performs well. © 2005 IEEE.
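The edge color histogram half of the representation can be sketched as below. Finite-difference gradients stand in for the paper's Harris-function-based edge detection, and the bin count and edge threshold are illustrative choices, not values from the paper.

```python
import numpy as np

def edge_color_histogram(rgb, bins=8, thresh=0.2):
    """Color histogram restricted to edge pixels.

    rgb: (h, w, 3) image with values in [0, 1]. Returns a normalized
    (bins**3,) histogram of quantized colors at edge locations."""
    gray = rgb.mean(axis=2)
    # Gradient magnitude; a crude stand-in for Harris-based edge detection.
    gy, gx = np.gradient(gray)
    mag = np.hypot(gx, gy)
    edges = mag > thresh * mag.max()
    # Quantize each channel into `bins` levels and histogram the edge pixels.
    q = np.clip((rgb[edges] * bins).astype(int), 0, bins - 1)
    idx = (q[:, 0] * bins + q[:, 1]) * bins + q[:, 2]
    hist = np.bincount(idx, minlength=bins ** 3).astype(float)
    return hist / hist.sum() if hist.sum() else hist
```

Two such histograms can be compared with any standard measure (intersection, chi-square) and combined with interest-point matching scores for retrieval.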
Abstract:
215 p.
Abstract:
Over the years, Nigeria has witnessed different governments with different policy measures. Against the negative consequences of past policies, the structural adjustment programme (SAP) was initiated in 1986. Its aim was to effectively alter and restructure the consumption patterns of the economy, as well as to eliminate price distortions and the heavy dependence on oil and on imports of consumer goods and services. Within the period of implementation, there has been a decreasing trend in yearly fish catch landings and sizes, but the reverse in shrimping. There is also a gradual shift from fishing to shrimping, judging from the vessels purchased, with an 83.3% increase in shrimpers from 1985 to 1989. Decreasing fish catch sizes and quantities, aggravated by the present high cost of fishing and coupled with the favourable export market for Nigerian shrimp, tend to influence the shift. This economic situation is the result of the supply measures of SAP through the devaluation of the Naira. There is also an over-concentration of vessels in inshore waters, as the majority of the vessels are old and low-powered and hence incapable of fishing in the deep sea. The Rotterdam price paid for automotive gas oil (AGO) by the fishing industries is observed to be discriminatory and unhealthy for the growth of the industry, as it is exceedingly high and unstable, thus affecting planning of fishing operations. Fuel alone takes 43% of the total cost of operation. The overall consequence is that fishing days are lost and overhead costs are therefore higher. It was concluded that, for healthy growth and sustainable resources of the marine fishery under the structural adjustment programme, licensing of new fishing vessels should be stopped immediately and the demand side of SAP should be employed by subsidizing high-powered fishing vessels which can operate effectively in the deep sea.
Abstract:
This thesis proposed a methodology for detecting landslide-susceptible areas from aerial images, culminating in the development of a computational tool, called SASD/T, to test the methodology. To justify this research, a survey was carried out of the natural disasters in Brazilian history related to landslides and of the methodologies used for detecting and analysing landslide-susceptible areas. Preliminary studies of 3D visualization and of concepts related to 3D mapping were conducted. Stereoscopy was implemented to visualize the selected region in three dimensions. Elevations were obtained through parallax, from the homologous points found by the SIFT algorithm. The experiments were performed with images of the city of Nova Friburgo. The initial experiment showed that the result obtained using SIFT together with the proposed filter was quite significant when compared with the results of Fernandes (2008) and Carmo (2010), owing to the number of homologous points found and to the surface generated. To detect landslide-susceptible locations, information such as elevation, slope, aspect and curvature was extracted from the stereo pairs and, together with variables supplied by the user, provided an analysis of how susceptible a given area is to landslides. The proposed methodology can be extended to the assessment and prediction of landslide risk in any other region, since it allows interaction with the user, who specifies the characteristics, items and weights required for the analysis in question.
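The elevation-from-parallax step can be illustrated with the standard differential-parallax relation from photogrammetry; this is the textbook formula, not necessarily the exact computation implemented in SASD/T, and the example numbers are purely illustrative.

```python
def height_from_parallax(flying_height, base_parallax, dp):
    """Standard differential-parallax relation: h = H * dp / (p_a + dp).

    flying_height: H, flying height above the reference plane (same unit
    as the result); base_parallax: p_a, absolute stereoscopic parallax at
    that plane; dp: parallax difference measured between homologous points
    (e.g. those matched by SIFT), in the same unit as p_a."""
    return flying_height * dp / (base_parallax + dp)
```

For instance, with a 1000 m flying height, a 90-unit base parallax and a measured parallax difference of 10 units, the relation gives a height of 100 m above the reference plane.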
Abstract:
Discriminating phases that are practically indistinguishable under a reflected-light optical microscope or a scanning electron microscope (SEM) is one of the classic problems of ore microscopy. To solve this problem, the technique of co-localized microscopy has recently been employed, which combines two microscopy modalities: optical microscopy and scanning electron microscopy. The aim of the technique is to provide a multimodal microscopy image, making it possible to identify, in mineral samples, phases that would not be distinguishable with a single modality, thus overcoming the individual limitations of the two systems. The registration method hitherto available in the literature for fusing optical and SEM images is a laborious procedure, highly dependent on operator interaction, since it involves calibrating the system against a standard grid in every image-acquisition routine. For this reason the existing technique is impractical. This work proposes a methodology to automate the registration of optical and SEM images, so as to improve and simplify the use of co-localized microscopy. The proposed method can be subdivided into two procedures: obtaining the transformation, and registering the images using this transformation. Obtaining the transformation involves, first, pre-processing the image pairs so as to perform a coarse registration between the images of each pair. Next, homologous points are obtained in the optical and SEM images. For this, two methods were used: the first based on the SIFT algorithm, and the second defined by scanning for the maximum value of the correlation coefficient. In the following step the transformation is computed.
Two distinct approaches were employed: the local weighted mean (LWM) and weighted least squares with orthogonal polynomials (MQPPO). The LWM takes as input the so-called pseudo-homologous points: points that are forcibly distributed in a regular pattern over the reference image and that reveal, in the image to be registered, the relative local displacements between the images. These pseudo-homologous points can be obtained either by SIFT or by the correlation-coefficient method. The MQPPO, on the other hand, takes a set of points with their natural distribution. The analysis of the resulting registrations used as a metric the correlation between the registered images. It was observed that the proposed SIFT-LWM and SIFT-Correlation variants produced results slightly superior to those of the method using the standard grid and LWM. Thus, besides drastically reducing operator intervention, the proposal also yielded more accurate results. On the other hand, the method based on the transformation provided by weighted least squares with orthogonal polynomials showed results inferior to those produced by the method that uses the standard grid.
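The correlation-coefficient search for homologous points can be sketched as an exhaustive normalized cross-correlation scan; this is a generic illustration of the technique, not the thesis implementation, and real images would restrict the search to a window around the coarse registration.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation coefficient between equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_match(template, image):
    """Scan `image` for the window maximizing NCC with `template`.

    Returns the (row, col) of the best window and its correlation value."""
    th, tw = template.shape
    ih, iw = image.shape
    best, pos = -2.0, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            c = ncc(template, image[y:y + th, x:x + tw])
            if c > best:
                best, pos = c, (y, x)
    return pos, best
```

A patch around a point in the reference (e.g. optical) image serves as the template, and the best-correlated window in the other (e.g. SEM) image yields its pseudo-homologous point.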
Abstract:
Estimating the fundamental matrix (F), to determine the epipolar geometry between a pair of images or video frames, is a basic step for a wide variety of vision-based functions used in construction operations, such as camera-pair calibration, automatic progress monitoring, and 3D reconstruction. Currently, robust methods (e.g., SIFT + normalized eight-point algorithm + RANSAC) are widely used in the construction community for this purpose. Although they provide acceptable accuracy, the significant computational time required impedes their adoption in real-time applications, especially the analysis of video data with many frames per second. Aiming to overcome this limitation, this paper presents and evaluates the accuracy of a solution that finds F by combining two fast and consistent methods: SURF for the selection of a robust set of point correspondences, and the normalized eight-point algorithm. This solution is tested extensively on construction-site image pairs including changes in viewpoint, scale, illumination, rotation, and moving objects. The results demonstrate that this method can be used for real-time applications (5 image pairs per second at a resolution of 640 × 480) involving scenes of the built environment.
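The normalized eight-point algorithm at the core of both pipelines can be sketched as follows: a minimal numpy version with Hartley normalization and the rank-2 constraint, with the SURF/SIFT matching stage (and, in the robust pipeline, RANSAC) omitted.

```python
import numpy as np

def normalize_points(pts):
    """Hartley normalization: center on the centroid, scale the mean
    distance to sqrt(2). Returns homogeneous points and the 3x3 transform."""
    c = pts.mean(axis=0)
    s = np.sqrt(2) / np.linalg.norm(pts - c, axis=1).mean()
    T = np.array([[s, 0, -s * c[0]],
                  [0, s, -s * c[1]],
                  [0, 0, 1.0]])
    ph = np.hstack([pts, np.ones((len(pts), 1))]) @ T.T
    return ph, T

def eight_point(x1, x2):
    """Normalized eight-point estimate of F from (n >= 8, 2) matched points,
    so that x2_h^T F x1_h ≈ 0 for each correspondence."""
    p1, T1 = normalize_points(x1)
    p2, T2 = normalize_points(x2)
    # Each correspondence contributes one row of the linear system A f = 0.
    A = np.column_stack([p2[:, 0:1] * p1, p2[:, 1:2] * p1, p1])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce rank 2 by zeroing the smallest singular value.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    # Undo the normalization.
    return T2.T @ F @ T1
```

With clean correspondences this recovers F (up to scale) essentially exactly; robustness to mismatches is what RANSAC, or the paper's reliance on consistent SURF matches, adds on top.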
Abstract:
Manually inspecting bridges is a time-consuming and costly task. There are over 600,000 bridges in the US, and not all of them can be inspected and maintained within the specified time frame, as some state DOTs cannot afford the necessary costs and manpower. This paper presents a novel method that can detect bridge concrete columns from visual data, towards the eventual creation of an automated bridge condition assessment system. The method employs SIFT feature detection and matching to find overlapping areas among images. Affine transformation matrices are then calculated to combine images containing different segments of one column into a single image. Following that, the bridge columns are detected by identifying the boundaries in the stitched image and classifying the material within each boundary. Preliminary test results using real bridge images indicate that most columns in stitched images can be correctly detected, demonstrating the viability of this research.
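The affine-transformation estimation used to combine column segments can be sketched as a least-squares fit to matched point pairs; this is a minimal version, with the SIFT matching, outlier rejection, and image warping/blending steps omitted.

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2x3 affine transform M mapping src -> dst.

    src, dst: (n >= 3, 2) arrays of matched point coordinates (e.g. from
    SIFT matching between overlapping images). Solves A p = b where each
    correspondence contributes two rows, one per output coordinate."""
    n = len(src)
    A = np.zeros((2 * n, 6))
    b = dst.ravel()                 # [x0', y0', x1', y1', ...]
    A[0::2, 0:2] = src              # rows for the x' equations
    A[0::2, 2] = 1.0
    A[1::2, 3:5] = src              # rows for the y' equations
    A[1::2, 5] = 1.0
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params.reshape(2, 3)
```

Once M is known, one segment image is warped into the other's frame and the two are composited into the single stitched image in which the column boundary is then detected.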
Abstract:
Images and videos in low-budget cartoon production usually lack fur effects on animal characters. To process the animal characters in existing images and videos and add realistic fur effects to them, a fur stylization algorithm, cartoon fur texturing, is proposed. It synthesizes fur textures for cartoon animal characters and substitutes them in, and comprises two parts: an image application and a video application. For image substitution, the structure of the target region to be stylized is analysed to obtain a triangular mesh covering the target region; fur texels are then generated and mapped onto the mesh, and realistic fur effects are produced by rendering the texels. For video substitution, keyframes are extracted from the video and the corresponding cartoon fur textures are generated with the image algorithm; the substitution results on the keyframes then guide the substitution over the whole video. To track the target regions of the keyframes as they change over time, the SIFT algorithm is used to match feature points along the time axis; to synthesize the cartoon fur texture quickly, a GPU-based ray-marching algorithm accelerates the volume rendering of the fur texels. Experimental results show that the proposed algorithm can successfully add realistic fur effects to the animal characters of existing images and videos.
Abstract:
Watermarking aims to hide information in a carrier without changing the visual perception of the carrier itself. Local features are good candidates for addressing the watermark synchronization error caused by geometric distortions and have attracted great attention for content-based image watermarking. This paper presents a novel feature-point-based image watermarking scheme robust to geometric distortions. The scale invariant feature transform (SIFT) is first adopted to extract feature points and to generate for each feature point a disk that is invariant to translation and scaling. For each disk, orientation alignment is then performed to achieve rotation invariance. Finally, the watermark is embedded in middle-frequency discrete Fourier transform (DFT) coefficients of each disk to improve the robustness against common image processing operations. Extensive experimental results and comparisons with representative image watermarking methods confirm the excellent performance of the proposed method in robustness against various geometric distortions as well as common image processing operations.
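The middle-frequency DFT embedding step can be illustrated with a toy single-bit scheme on one image patch. The coefficient position, quantization step, and magnitude-quantization (QIM-style) encoding below are illustrative stand-ins, not the embedding rule from the paper, which operates on SIFT-aligned disks.

```python
import numpy as np

def embed_bit(patch, bit, pos=(3, 5), q=50.0):
    """Hide one bit by quantizing the magnitude of one mid-frequency
    DFT coefficient (toy scheme; `pos` and `q` are arbitrary choices)."""
    F = np.fft.fft2(patch.astype(float))
    y, x = pos
    m = np.abs(F[y, x])
    # Snap the magnitude into a quantization cell whose parity encodes the bit.
    k = int(np.floor(m / q))
    if k % 2 != bit:
        k += 1
    target = (k + 0.5) * q
    F[y, x] *= target / m if m > 0 else 1.0
    # Mirror to the conjugate-symmetric coefficient so the patch stays real.
    F[-y % F.shape[0], -x % F.shape[1]] = np.conj(F[y, x])
    return np.real(np.fft.ifft2(F))

def extract_bit(patch, pos=(3, 5), q=50.0):
    """Recover the bit from the parity of the quantized magnitude."""
    F = np.fft.fft2(patch.astype(float))
    return int(np.floor(np.abs(F[pos]) / q)) % 2
```

Because only a magnitude is modulated, the scheme survives operations that roughly preserve that magnitude; the paper gains its geometric robustness separately, from the translation-, scale- and rotation-normalized disks in which embedding takes place.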
Abstract:
Image matching is an important research area in computer vision, with significant application value in both civilian and military domains. Against the background of our laboratory's national-defence key pre-research project on automatic target recognition, this thesis applies image matching to aircraft positioning and navigation. The workflow is as follows: reconnaissance is first used to acquire images of the terrain the aircraft will fly over (reference images), which are stored in the onboard computer; then, when the aircraft carrying the corresponding sensor flies over the predetermined area, it photographs the local terrain (real-time images); matching the real-time image against the reference image in the onboard computer determines the aircraft's exact current position, completing the positioning and navigation function. Because of the complexity and diversity of imaging the same scene with the same or different sensors (imaging devices) and under different conditions (weather, illumination, camera position and angle, etc.), traditional correlation-based matching methods are inherently inadequate for overcoming these difficulties and cannot do the job. This thesis therefore adopts image matching based on local invariant features. Because local invariant features describe images more flexibly and handle image complexity and occlusion effectively, matching methods based on them perform well under large viewpoint changes, background changes, and in target scene recognition. Local-invariant-feature matching typically comprises three steps: (1) extract relevant image regions with a region detection operator; (2) construct suitable feature descriptors for the regions; (3) choose a feature similarity measure to match the region features. This thesis studies the maximally stable extremal region (MSER) method in detail and improves upon it as follows: (1) the image is smoothed and sampled with a Gaussian kernel to build a Gaussian scale space; (2) within this scale space, the MSER detection operator extracts all affine-covariant regions of the image at different scales; (3) since the regions are irregular, they are fitted with affine-invariant ellipses and normalized, after which the regions differ only by rotation; (4) SIFT descriptors are computed for the image regions, yielding a set of 128-dimensional feature vectors for all regions; (5) Euclidean distance measures the similarity between features, and the ratio of the nearest to the second-nearest neighbour serves as the matching criterion. The main contribution of this thesis is the introduction of a Gaussian scale space into the MSER algorithm, which greatly improves MSER's performance under image scale change, affine transformation and image blur. Building the Gaussian scale space enlarges the range over which the MSER detection operator operates, improving the performance of the modified algorithm. Chapter 4 presents four groups of experiments: scale change, affine transformation, image blur and large viewpoint change. Statistics of the numbers of correct and incorrect matches demonstrate that the improved method outperforms the MSER algorithm. Analysis of algorithmic complexity shows that, although the improved algorithm introduces a Gaussian scale space, its complexity does not increase: it remains O(n log log n), the same as MSER.
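Step (5) above, matching by Euclidean distance with the nearest/second-nearest ratio criterion (Lowe's ratio test), can be sketched as a brute-force illustration; the ratio threshold is a common choice, not necessarily the one used in the thesis.

```python
import numpy as np

def ratio_match(desc1, desc2, ratio=0.8):
    """Match two descriptor sets with the nearest/second-nearest ratio test.

    desc1, desc2: (n, d) arrays (e.g. 128-D SIFT vectors). Accepts a match
    only if the nearest neighbour is markedly closer than the second
    nearest; returns a list of accepted (i, j) index pairs."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        order = np.argsort(dists)
        if len(order) >= 2 and dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches
```

The test discards ambiguous features whose best match is barely better than the runner-up, which is what makes ratio-filtered correspondences usable for the real-time-to-reference image alignment described above.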