859 results for 3D feature extraction
Abstract:
Methods for computing optimal discriminant vectors are studied. Based on block-matrix theory and optimization theory, a concise representation of the between-class scatter matrix and the total scatter matrix is derived under certain conditions, and a new algorithm for computing optimal discriminant vectors is proposed; its main advantage is a markedly reduced computational cost. Numerical experiments on the ORL face database verify these claims. The results show that, although the recognition rate depends nonlinearly on the block dimension, a high recognition rate can be obtained by choosing an appropriate block dimension. The concise representation of the between-class and total scatter matrices applies to any pattern recognition problem that uses the Fisher discriminant criterion.
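The abstract does not spell out the block-matrix derivation, but the Fisher criterion it accelerates is standard. A minimal sketch in Python, assuming the usual definitions of the between-class scatter S_b and total scatter S_t; the small regularization constant is an illustrative safeguard, not part of the paper:

```python
import numpy as np
from scipy.linalg import eigh

def fisher_discriminant_vectors(X, y, n_vectors):
    """Standard Fisher discriminant vectors from the between-class
    scatter S_b and total scatter S_t (the matrices the paper
    re-expresses in block form)."""
    mean_all = X.mean(axis=0)
    S_b = np.zeros((X.shape[1], X.shape[1]))
    for c in np.unique(y):
        Xc = X[y == c]
        d = (Xc.mean(axis=0) - mean_all)[:, None]
        S_b += len(Xc) * (d @ d.T)
    X0 = X - mean_all
    S_t = X0.T @ X0
    # Optimal discriminant vectors maximize w^T S_b w / w^T S_t w:
    # solve the generalized eigenproblem S_b w = lambda S_t w.
    vals, vecs = eigh(S_b, S_t + 1e-8 * np.eye(S_t.shape[0]))
    return vecs[:, np.argsort(vals)[::-1][:n_vectors]]
```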
Abstract:
Starting from the characteristics of the characters printed on a special class of documents, train tickets, the stroke complexity index and peripheral area coding are combined as the features for coarse classification. The C-means clustering algorithm performs the pre-classification, and a classification feature library (a classification dictionary) is then generated. The expected classification performance is achieved, with a correct classification rate of 95%.
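A minimal sketch of the pre-classification stage; the paper's stroke complexity index and peripheral area coding are not specified, so the feature matrix below is a random stand-in for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical feature matrix: each row would pair a character's
# stroke complexity index with its peripheral area codes (values
# invented here; the paper's exact encodings are not given).
features = np.random.rand(1000, 5)

# C-means (k-means) pre-classification into coarse groups.
kmeans = KMeans(n_clusters=20, n_init=10, random_state=0).fit(features)

# The "classification dictionary": one centroid per coarse class.
# At recognition time a character is routed to its nearest centroid
# before fine classification.
classification_dictionary = kmeans.cluster_centers_
coarse_class = kmeans.predict(features[:1])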
Abstract:
Fast image feature extraction is an important research topic in image processing and computer vision. In video-based automatic target recognition and tracking against complex backgrounds, fast and accurate extraction of target features is the key to high-probability automatic target recognition: it simplifies the representation of candidate targets and makes it possible to recognize targets of interest quickly and accurately. Fast feature extraction is also widely used in robot vision, for example in visual odometry and autonomous visual navigation. Motivated by the practical requirements of video surveillance and robot vision, this thesis studies the topic. The main contributions are as follows:
1) To eliminate the effect of video noise on feature point extraction, a new salt-and-pepper noise removal algorithm based on image statistics is proposed. The algorithm removes noise while preserving image detail (edges, corners, and so on) and has low computational complexity (see the sketch after this list).
2) An illumination-constancy algorithm based on quadtrees and color transfer theory is proposed. Given two images of the same scene under different lighting, it transfers the luminance statistics of one image to the other, so that the target image acquires luminance statistics similar to those of the reference image. After this processing the two images share a similar illumination background, which aids subsequent feature point detection.
3) A fast LBP-based corner extraction algorithm is proposed. Compared with the popular Harris and SUSAN detectors, it offers lower complexity, better real-time performance, and invariance to gray-scale stretching and to rotation.
4) Starting from the lens imaging model, an exact depth-of-field formula is derived, compared in depth with the traditional formula, and its advantages are discussed from a computer vision perspective.
5) On the basis of the above algorithms and experiments, an automatic forest smoke and fire recognition software system was developed, in which the proposed algorithms have been applied successfully. The system is in practical use, providing 24-hour automatic forest fire monitoring and early warning, and its operation further validates the effectiveness of the proposed algorithms. The final chapter summarizes the thesis and outlines future work.
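As referenced in contribution 1), here is a minimal sketch of a detail-preserving salt-and-pepper filter. The thesis's statistics-based decision rule is not given in the abstract, so the noise test below (pixels at the exact extreme gray levels) is a common simplification, not the author's method:

```python
import numpy as np

def remove_salt_pepper(img, window=3):
    """Generic detail-preserving salt-and-pepper filter: only pixels at
    the extreme gray levels are treated as noise and replaced by the
    median of the non-noise pixels in their neighborhood, so edges and
    corners formed by ordinary pixels are left untouched."""
    out = img.astype(np.float64).copy()
    noisy = (img == 0) | (img == 255)
    r = window // 2
    padded = np.pad(img, r, mode='reflect')
    pad_noisy = np.pad(noisy, r, mode='reflect')
    for y, x in zip(*np.nonzero(noisy)):
        patch = padded[y:y + window, x:x + window]
        good = patch[~pad_noisy[y:y + window, x:x + window]]
        out[y, x] = np.median(good) if good.size else np.median(patch)
    return out.astype(img.dtype)
```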
Abstract:
Parts mating is a basic task of assembly robots. This paper presents a vision-guidance method that performs the task with binocular stereo vision. By adopting human-computer interaction and drawing on human judgment, the method improves the accuracy and reliability of image feature extraction and matching and provides the motion parameters for parts mating intuitively and accurately, overcoming the computational complexity and poor robustness of fully automatic vision. It is well suited to robot teleoperation. Experiments show that human-in-the-loop parts mating under stereo vision guidance is entirely feasible.
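The geometry behind such stereo guidance is ordinary binocular triangulation. A minimal sketch, assuming a rectified camera pair with focal length in pixels and baseline in meters; the matched point pair would be supplied interactively by the operator:

```python
def triangulate_depth(x_left, x_right, focal_px, baseline_m):
    """Depth of a matched point from a rectified stereo pair:
    Z = f * B / d, where d is the horizontal disparity in pixels."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("matched point must have positive disparity")
    return focal_px * baseline_m / disparity

# e.g. f = 800 px, baseline = 0.12 m, disparity = 16 px -> Z = 6.0 m
depth = triangulate_depth(512.0, 496.0, 800.0, 0.12)
```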
Abstract:
This paper describes a simple vision-controlled laboratory assembly system, covering the system architecture, robot control, 2D image feature extraction, automatic object recognition, position and orientation determination, system calibration, and the execution of a block-stacking assembly task. The experimental system uses the first teach-and-playback robot developed in China, built at our institute.
Abstract:
The system presented in this paper is designed mainly for automatic recognition, positioning, and orientation of parts on an automated conveyor belt. It has been validated repeatedly on more than thirty kinds of watch parts with very good results.
Abstract:
Combining adaptive wavelet-transform filtering with wavelet-threshold denoising, this paper proposes a two-layer denoising algorithm for vibration signals of faulty gearboxes. The filtering proceeds in two layers: the first layer applies an adaptive wavelet-transform filter, and the second applies classical wavelet-threshold denoising to the signal a second time. Finally, the denoised fault signal is decomposed with wavelet packets, and the wavelet packet band energies are extracted as the fault feature vector.
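A minimal sketch of the second filtering layer and the feature extraction step using PyWavelets; the adaptive first-layer filter is not described in the abstract, and the wavelet, level, and universal threshold below are conventional choices rather than the paper's settings:

```python
import numpy as np
import pywt

def denoise_and_band_energy(signal, wavelet='db4', level=3):
    """Wavelet-threshold denoising followed by wavelet packet
    band-energy feature extraction (the classical second layer;
    the paper's adaptive first layer is not sketched here)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Universal threshold estimated from the finest detail band.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft')
                            for c in coeffs[1:]]
    denoised = pywt.waverec(coeffs, wavelet)

    # Wavelet packet decomposition; the energy in each terminal
    # frequency band forms the fault feature vector.
    wp = pywt.WaveletPacket(denoised, wavelet, maxlevel=level)
    bands = wp.get_level(level, order='freq')
    energy = np.array([np.sum(node.data ** 2) for node in bands])
    return energy / energy.sum()  # normalized feature vector
```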
Abstract:
Independent component analysis (ICA) is an effective method for face feature extraction. Taking advantage of the symmetry of face samples, this paper applies symmetric ICA to extract face features. To improve ICA's ability to characterize the face feature space, a genetic algorithm is used to select an optimal subset of face features. Simulation experiments show that the recognition rate of the proposed method is clearly better than that of plain ICA.
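A minimal sketch of the symmetric-ICA idea using scikit-learn's FastICA; enforcing symmetry by augmenting the training set with mirrored faces is an assumption standing in for the paper's formulation, and the genetic feature-selection stage is omitted:

```python
import numpy as np
from sklearn.decomposition import FastICA

def symmetric_ica_features(faces, n_components=50):
    """Augment the training set with horizontally mirrored faces so
    the learned independent components respect facial symmetry, then
    project the original faces onto them."""
    n, h, w = faces.shape
    mirrored = faces[:, :, ::-1]
    X = np.concatenate([faces, mirrored]).reshape(2 * n, h * w)
    ica = FastICA(n_components=n_components, random_state=0).fit(X)
    return ica.transform(faces.reshape(n, h * w))
```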
Abstract:
The modeling formula based on the seismic wavelet can simulate zero-phase and mixed-phase wavelets well, and approximates maximum-phase and minimum-phase wavelets in a certain sense. With a suitable modification term added so that the required conditions are met, the modeled wavelet can be used as a wavelet function. On the basis of the modified Morlet wavelet, a derivative wavelet function has been derived. Used as a basic wavelet, it is suitable for high-resolution frequency-division processing and instantaneous feature extraction, in accordance with how each constructed wavelet expands the signal in the time and scale domains. An application example proves the effectiveness and reasonableness of the method. Based on an analysis of SVD (Singular Value Decomposition) filtering, and by taking this wavelet as the basic wavelet and combining SVD filtering with the wavelet transform, a new multi-dimension, multi-space de-noising method is proposed; its implementation is discussed in detail. Theoretical analysis and modeling show that the method has a strong capacity for removing noise while preserving the attributes of the effective wave, making it a good tool when the S/N ratio is poor. To emphasize the high-frequency information of the reflection events of important layers while still accounting for other frequency bands, deconvolution filters fall short, and filters built from the Fourier transform also have problems. This paper therefore puts forward a method of frequency-division processing of seismic data by wavelet transform and reconstruction. In ordinary resolution-improvement processing, the deconvolution operator has poor local character, which limits the operator's frequency response; the wavelet function, by contrast, has very good local character. Frequency-division processing in the wavelet domain also yields quite good high-resolution data, but it takes more time than deconvolution. On the basis of the frequency-division method in the wavelet domain, a new technique is put forward, which involves 1) designing filter operators in the wavelet transform equivalent to the deconvolution operator in the time and frequency domains, 2) obtaining a derivative wavelet function suitable for high-resolution seismic data processing, and 3) processing high-resolution seismic data by deconvolution in the time domain. When instantaneous attribute signals are produced with the Hilbert transform, the transform is very sensitive to high-frequency random noise; even weak high-frequency noise in the seismic signal can leave the obtained instantaneous attributes submerged in noise. A method for obtaining instantaneous attributes of seismic signals in the wavelet domain is put forward, which derives them directly from the real part (the seismic signal itself) and the imaginary part (its Hilbert transform) of the wavelet transform. The method performs frequency division and noise removal at the same time.
Moreover, weak events whose frequency is lower than that of the high-frequency random noise are retained in the obtained instantaneous attributes and can be seen in the instantaneous attribute sections (instantaneous frequency, instantaneous phase, and instantaneous amplitude). Impedance inversion is one of the tools of oil reservoir description. One of its methods is generalized linear inversion, which offers high precision but is sensitive to noise in the seismic data, leading to erroneous results. When describing a reservoir in an important geological layer, in order to bring out the geological characteristics of that layer, not only the high-frequency impedance (for studying thin sand layers) but also the impedance in other frequency bands is needed, a goal that is difficult for conventional impedance inversion methods. Since the wavelet transform is very good at de-noising and frequency-division processing, this paper puts forward an impedance inversion method based on it: impedance inversion in frequency division by wavelet transform and reconstruction. Methods of time-frequency analysis based on the wavelet transform are also given. Finally, the above methods are applied to a real oil field, the Sansan oil field.
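For reference, the classical Hilbert-transform route to instantaneous attributes, which the abstract identifies as noise-sensitive, can be sketched in a few lines with SciPy:

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_attributes(trace, dt):
    """Classical instantaneous attributes from the analytic signal
    (the noise-sensitive baseline the paper improves on by working
    in the wavelet domain instead)."""
    analytic = hilbert(trace)                      # trace + i * H(trace)
    amplitude = np.abs(analytic)                   # instantaneous amplitude
    phase = np.unwrap(np.angle(analytic))          # instantaneous phase (rad)
    frequency = np.diff(phase) / (2 * np.pi * dt)  # instantaneous frequency (Hz)
    return amplitude, phase, frequency
```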
Abstract:
Research on the form processing of Chinese characters (CC) has mainly examined the effects of various form properties, and in most cases the studies were conducted after lexical processing was complete. The few that addressed the early phases of visual perception focused on feature extraction during character recognition. Until now, no one has proposed studying form processing in the early phases of visual perception of CC. We hold that because form processing occurs in these early phases, it should be studied prelexically. Moreover, visual perception of a CC is a course during which the character becomes clear gradually, so the effects of form properties should not be an absolute, all-or-none phenomenon. In this study we used four methods to investigate early-phase form processing systematically: tachistoscopic repetition, gradually increasing presentation time, gradually enlarging the visual angle, and non-tachistoscopic searching and naming. Under various degraded viewing conditions, the instantaneous course of early-phase processing was slowed and prolonged, exposing its growth course to observation. We captured the characteristics of early-phase form processing by analyzing reaction speed and recognition accuracy. As visual angle and presentation time increased, clarity improved, letting us relate the effects of form properties to improving visual clarity. The results were as follows: ① in the early phases of visual perception of CC, all kinds of form properties showed effects. ② the size of these effects diminished as viewing conditions improved; we proposed the concept of a character's spatial transparency, together with an algorithm for it, to explain the effects, and discussed a model to help understand why they shrink as viewing conditions improve. ③ the early phases of visual perception of CC are not the locus of the frequency effect.
Abstract:
Liu, Yonghuai. Improving ICP with Easy Implementation for Free Form Surface Matching. Pattern Recognition, vol. 37, no. 2, pp. 211-226, 2004.
Abstract:
In the first part of this paper we reviewed the fingerprint classification literature from two different perspectives: feature extraction and classifier learning. Aiming to answer the question of which of the reviewed methods would perform best in a real implementation, we ended up with a discussion that showed the difficulty of answering it: no previous comparison exists in the literature, comparisons among papers are made with different experimental frameworks, and published methods are hard to implement owing to the lack of detail in their descriptions and parameters and the fact that no source code is shared. For this reason, in this paper we carry out a deep experimental study following the proposed double perspective. To do so, we have carefully implemented some of the most relevant feature extraction methods according to the explanations found in the corresponding papers, and we have tested their performance with different classifiers, including the specific proposals made by the original authors. Our aim is to develop an objective experimental study in a common framework, something not done before, which can serve as a baseline for future works on the topic. In this way we test not only the methods' quality but also their reusability by other researchers, and we can indicate which proposals merit consideration for future developments. Furthermore, we show that combining different feature extraction models in an ensemble can lead to superior performance, significantly improving on the results obtained by individual models.
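A minimal sketch of the ensemble combination, assuming one feature matrix per feature extraction model and non-negative integer class labels; SVC is an illustrative classifier choice, not necessarily one the paper evaluates:

```python
import numpy as np
from sklearn.svm import SVC

def ensemble_predict(feature_sets_train, y_train, feature_sets_test):
    """Train one classifier per feature extraction model and combine
    the predictions by majority vote across models."""
    votes = []
    for Xtr, Xte in zip(feature_sets_train, feature_sets_test):
        clf = SVC().fit(Xtr, y_train)
        votes.append(clf.predict(Xte))
    stacked = np.vstack(votes)  # (n_models, n_samples)
    # Majority vote per sample (labels assumed to be small ints).
    return np.array([np.bincount(col).argmax() for col in stacked.T])
```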
Abstract:
Gemstone Team CHIP
Abstract:
A tree-based dictionary learning model is developed for joint analysis of imagery and associated text. The dictionary learning may be applied directly to the imagery from patches, or to general feature vectors extracted from patches or superpixels (using any existing method for image feature extraction). Each image is associated with a path through the tree (from root to a leaf), and each of the multiple patches in a given image is associated with one node in that path. Nodes near the tree root are shared between multiple paths, representing image characteristics that are common among different types of images. Moving toward the leaves, nodes become specialized, representing details in image classes. If available, words (text) are also jointly modeled, with a path-dependent probability over words. The tree structure is inferred via a nested Dirichlet process, and a retrospective stick-breaking sampler is used to infer the tree depth and width.
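A truncated stick-breaking construction conveys the flavor of the weights involved; this is a simplified stand-in for the nested, retrospective sampler the paper actually uses to infer tree depth and width:

```python
import numpy as np

def truncated_stick_breaking(alpha, max_atoms, rng=None):
    """Truncated stick-breaking construction of Dirichlet-process
    weights: each Beta(1, alpha) draw breaks off a fraction of the
    stick that remains after the previous breaks."""
    if rng is None:
        rng = np.random.default_rng()
    betas = rng.beta(1.0, alpha, size=max_atoms)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
    weights = betas * remaining
    return weights  # sums to < 1; the tail mass is truncated

# e.g. alpha = 2.0 spreads mass over more atoms than alpha = 0.5
w = truncated_stick_breaking(2.0, max_atoms=20)
```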
Abstract:
This paper introduces a new technique for palmprint recognition based on Fisher Linear Discriminant Analysis (FLDA) and a Gabor filter bank. The method convolves a palmprint image with a bank of Gabor filters at different scales and rotations for robust palmprint feature extraction; once these features are extracted, FLDA is applied for dimensionality reduction and class separability. Since the palmprint features derive from the principal lines, wrinkles, and texture of the palm area, the palm region used for feature extraction must be selected carefully to enhance recognition accuracy. To address this, an improved region-of-interest (ROI) extraction algorithm is introduced that efficiently extracts the whole palm area while ignoring undesirable parts such as the fingers and the background. Experiments show that the proposed method yields attractive performance, as evidenced by an Equal Error Rate (EER) of 0.03%.
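A minimal sketch of the Gabor-bank plus FLDA pipeline with OpenCV and scikit-learn; the kernel sizes, pooling scheme, and filter parameters below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np
import cv2
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def gabor_fld_features(roi_images, labels, scales=(5, 9, 13),
                       n_orientations=6):
    """Filter each palm ROI with a bank of Gabor kernels at several
    scales and orientations, pool the responses into a feature
    vector, then reduce dimensionality with LDA."""
    feats = []
    for img in roi_images:
        responses = []
        for ksize in scales:
            for i in range(n_orientations):
                theta = i * np.pi / n_orientations
                kern = cv2.getGaborKernel((ksize, ksize),
                                          sigma=ksize / 3.0, theta=theta,
                                          lambd=ksize / 2.0, gamma=0.5,
                                          psi=0.0)
                resp = cv2.filter2D(img, cv2.CV_64F, kern)
                # Simple pooling: mean and std of each filter response.
                responses += [resp.mean(), resp.std()]
        feats.append(responses)
    lda = LinearDiscriminantAnalysis().fit(np.array(feats), labels)
    return lda.transform(np.array(feats)), lda
```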