869 results for Relevance Feature Extraction


Relevance:

80.00%

Publisher:

Abstract:

The system presented in this paper is designed for the automatic recognition, positioning, and orientation of parts on an automated conveyor belt. The system has been repeatedly validated on more than thirty kinds of watch and clock parts, with very good results.

Relevance:

80.00%

Publisher:

Abstract:

Combining adaptive wavelet-transform filtering with wavelet threshold denoising, this paper proposes a two-layer filtering algorithm for denoising gearbox fault vibration signals. The filtering process has two layers: the first layer applies an adaptive wavelet-transform filter, and the second applies classical wavelet threshold denoising to the signal a second time. Finally, the denoised fault signal is decomposed with wavelet packets, and the wavelet-packet band energies are extracted as the fault feature vector.
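As a rough illustration of the ideas in this abstract (not the authors' algorithm; the adaptive first filtering layer is omitted), the sketch below soft-thresholds the detail coefficients of a single-level Haar transform, as in the second (wavelet-threshold) layer, and extracts wavelet-packet-style band energies as a feature vector. All function names and the threshold rule are assumptions.

```python
import numpy as np

def haar_dwt(x):
    # Single-level Haar wavelet transform: approximation and detail coefficients.
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_idwt(a, d):
    # Inverse single-level Haar transform.
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft_threshold(d, sigma_factor=2.0):
    # Second layer: classical wavelet soft-thresholding of detail coefficients,
    # with a robust noise estimate (median absolute deviation).
    thr = sigma_factor * np.median(np.abs(d)) / 0.6745
    return np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)

def threshold_denoise(x):
    # Wavelet-threshold layer only; an adaptive pre-filter would precede this.
    a, d = haar_dwt(np.asarray(x, float))
    return haar_idwt(a, soft_threshold(d))

def band_energies(x, levels=2):
    # Wavelet-packet-style feature vector: recursively split every band into
    # low/high halves and return the energy of each leaf band.
    bands = [np.asarray(x, float)]
    for _ in range(levels):
        bands = [half for b in bands for half in haar_dwt(b)]
    return np.array([np.sum(b**2) for b in bands])
```

Because the Haar transform is orthogonal, the band energies sum to the total signal energy, which makes them a convenient normalized fault feature.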

Relevance:

80.00%

Publisher:

Abstract:

Independent component analysis (ICA) is an effective method for facial feature extraction. Taking the symmetry of face samples into account, this paper applies symmetric independent component analysis to extract features from face samples. To improve ICA's ability to represent the facial feature space, a genetic algorithm is used to select and optimize the feature space, yielding an optimal subset of facial features. Simulation experiments show that the recognition rate of the proposed method is clearly better than that of plain ICA.
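The genetic-algorithm feature selection step can be sketched as a generic GA over binary feature masks; this is not the authors' implementation, and `fitness` (e.g. the recognition rate of a classifier restricted to the selected ICA components) is a hypothetical callback.

```python
import random

def ga_select(n_features, fitness, pop_size=20, generations=30, p_mut=0.05, seed=0):
    """Genetic-algorithm feature selection: each individual is a bit mask over
    the feature space; `fitness` scores a mask (higher is better)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        elite = scored[: pop_size // 2]            # keep the best half (elitism)
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_features)     # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if rng.random() < p_mut else g for g in child]
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)
```

In the setting of the abstract, the returned mask would pick out the optimal subset of symmetric-ICA components for recognition.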

Relevance:

80.00%

Publisher:

Abstract:

The modeling formula based on the seismic wavelet can simulate zero-phase and mixed-phase wavelets well, and approximates maximum-phase and minimum-phase wavelets in a certain sense. With a suitable modification term added so that the required conditions are met, the modeled wavelet can serve as a wavelet function. On the basis of the modified Morlet wavelet, a derivative wavelet function has been derived. As a basic wavelet, it can be used for high-resolution frequency-division processing and instantaneous feature extraction, in accordance with how each constructed wavelet expands the signal in the time and scale domains. An application example demonstrates the effectiveness and reasonableness of the method. Based on an analysis of the SVD (Singular Value Decomposition) filter, and by taking this wavelet as the basic wavelet and combining the SVD filter with the wavelet transform, a new denoising method is proposed that operates across multiple dimensions and multiple spaces. Its implementation is discussed in detail. Theoretical analysis and modeling show that the method has a strong capacity for removing noise while preserving the attributes of the effective wave, making it a good tool when the signal-to-noise ratio is poor. When processing seismic data, it is desirable to emphasize the high-frequency information of the reflection events of important layers while still taking other frequency bands into account; this goal is difficult to achieve with a deconvolution filter, and a Fourier-transform filter also has problems realizing it. This paper therefore puts forward a method of frequency-division seismic data processing based on wavelet transform and reconstruction. In ordinary resolution-improvement processing, the deconvolution operator has poor local characteristics, which degrades the operator's frequency response, whereas the wavelet function used in the wavelet transform has very good local characteristics.
Frequency-division processing in the wavelet domain also yields high-resolution data, but it takes more time than deconvolution does. On the basis of the frequency-division processing method in the wavelet domain, a new technique is put forward which involves 1) designing filter operators in the wavelet transform that are equivalent to the deconvolution operator in the time and frequency domains, 2) obtaining a derivative wavelet function suitable for high-resolution seismic data processing, and 3) processing high-resolution seismic data by the deconvolution method in the time domain. When instantaneous characteristic signals are produced with the Hilbert transform, the transform is very sensitive to high-frequency random noise: even weak high-frequency noise in a seismic signal can leave the obtained instantaneous characteristics submerged in noise. A method is therefore put forward for obtaining the instantaneous characteristics of seismic signals in the wavelet domain, directly from the characteristics of both the real part (the seismic signal itself) and the imaginary part (its Hilbert transform) of the wavelet transform. The method performs frequency division and noise removal at the same time. Moreover, weak waves whose frequency is lower than that of the high-frequency random noise are retained in the resulting instantaneous characteristics, and can be seen in instantaneous characteristic sections (instantaneous frequency, instantaneous phase, and instantaneous amplitude). Impedance inversion is one of the tools for describing an oil reservoir. One impedance-inversion method is Generalized Linear Inversion, which has high inversion precision but is sensitive to noise in the seismic data, so erroneous results may be obtained.
When describing an oil reservoir within an important geological layer, emphasizing the layer's geological characteristics requires not only high-frequency impedance for studying thin sand layers but impedance in other frequency bands as well, a goal that is difficult for some impedance-inversion methods to achieve. Since the wavelet transform is very good at denoising and frequency-division processing, this paper puts forward a wavelet-based impedance-inversion method: impedance inversion in frequency divisions obtained from wavelet transform and reconstruction. Methods of time-frequency analysis based on the wavelet transform are also given. Finally, the above methods are applied to a real oil field, the Sansan oil field.
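The instantaneous-attribute idea above rests on pairing a signal with its Hilbert transform to form an analytic (complex) signal. A minimal numpy illustration of that underlying construction, using the FFT rather than a wavelet transform (so this is the classical analytic-signal recipe, not the paper's wavelet-domain method):

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT: zero the negative frequencies and double
    the positive ones, so the imaginary part is the Hilbert transform of x."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def instantaneous_attributes(x, dt=1.0):
    # Amplitude (envelope), phase, and frequency from the analytic signal.
    z = analytic_signal(x)
    amp = np.abs(z)
    phase = np.unwrap(np.angle(z))
    freq = np.gradient(phase) / (2 * np.pi * dt)
    return amp, phase, freq
```

For a pure cosine the envelope is constant and the instantaneous frequency recovers the carrier frequency, which is the sanity check usually applied before running such attributes on seismic traces.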

Relevance:

80.00%

Publisher:

Abstract:

Research on the form processing of Chinese characters (CC) has mainly examined the effects of various form properties, and in most cases was conducted after lexical processing was complete. The few studies concerning the early phases of visual perception focused on feature extraction in character recognition. Until now, no one has proposed studying form processing in the early phases of visual perception of Chinese characters. We hold that because form processing occurs in these early phases, it should be studied prelexically. Moreover, visual perception of a Chinese character is a course in which the character becomes clear gradually, so the effects of form properties should not be absolute, all-or-none phenomena. In this study we adopted four methods to investigate early-phase form processing systematically: tachistoscopic repetition, gradually increasing presentation time, gradually enlarging the visual angle, and non-tachistoscopic searching and naming. Under degraded visual conditions, the instantaneous course of early-phase processing was slowed down and prolonged, opening its time course to observation. We captured the characteristics of early-phase form processing by analyzing reaction speed and recognition accuracy. As visual angle and presentation time increased, clarity improved, allowing us to relate the effects of form properties to the improvement in visual clarity. The results were as follows: (1) in the early phases of visual perception of Chinese characters, all kinds of form properties had effects; (2) the magnitude of these effects decreased as viewing conditions improved. We proposed the concept of a character's spatial transparency, together with an algorithm for it, to explain these effects of form properties.
Furthermore, a model was discussed to help explain why the magnitude of the effects changed as viewing conditions improved. (3) The early phases of visual perception of Chinese characters are not the locus of the frequency effect.

Relevance:

80.00%

Publisher:

Abstract:

Liu, Yonghuai. Improving ICP with Easy Implementation for Free Form Surface Matching. Pattern Recognition, vol. 37, no. 2, pp. 211-226, 2004.

Relevance:

80.00%

Publisher:

Abstract:

In the first part of this paper we reviewed the fingerprint classification literature from two different perspectives: feature extraction and classifier learning. Aiming to answer the question of which of the reviewed methods would perform best in a real implementation, we ended up with a discussion that showed how difficult this question is to answer: no previous comparison exists in the literature, comparisons across papers use different experimental frameworks, and published methods are hard to implement because their descriptions lack details and parameters and no source code is shared. For this reason, in this paper we carry out a deep experimental study following the proposed double perspective. To do so, we have carefully implemented some of the most relevant feature extraction methods according to the explanations in the corresponding papers, and we have tested their performance with different classifiers, including the specific proposals made by the original authors. Our aim is to develop an objective experimental study in a common framework, which has not been done before and which can serve as a baseline for future work on the topic. In this way we test not only the quality of the methods but also their reusability by other researchers, and we are able to indicate which proposals could be considered for future developments. Furthermore, we show that combining different feature extraction models in an ensemble can lead to superior performance, significantly improving on the results obtained by the individual models.
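The ensemble combination mentioned at the end can be sketched in its simplest form as a majority vote over classifiers trained on different feature extraction models. The `predict` interface and the `ConstModel` stand-in are hypothetical; the paper's actual combination scheme may differ.

```python
from collections import Counter

class ConstModel:
    """Stand-in for a classifier trained on one feature extraction model."""
    def __init__(self, label):
        self.label = label
    def predict(self, x):
        return self.label

def ensemble_predict(models, x):
    # Majority vote over the individual classifiers' predictions.
    votes = [m.predict(x) for m in models]
    return Counter(votes).most_common(1)[0][0]
```

With models built on different feature extractors, disagreements between them are resolved by the vote, which is what allows the ensemble to beat each individual model.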

Relevance:

80.00%

Publisher:

Abstract:

A tree-based dictionary learning model is developed for joint analysis of imagery and associated text. The dictionary learning may be applied directly to image patches, or to general feature vectors extracted from patches or superpixels (using any existing method for image feature extraction). Each image is associated with a path through the tree (from root to leaf), and each of the multiple patches in a given image is associated with one node on that path. Nodes near the tree root are shared between multiple paths, representing image characteristics common to different types of images. Moving toward the leaves, nodes become specialized, representing details of particular image classes. If available, words (text) are also jointly modeled, with a path-dependent probability over words. The tree structure is inferred via a nested Dirichlet process, and a retrospective stick-breaking sampler is used to infer the tree's depth and width.

Relevance:

80.00%

Publisher:

Abstract:

This paper introduces a new technique for palmprint recognition based on Fisher Linear Discriminant Analysis (FLDA) and a Gabor filter bank. The method involves convolving a palmprint image with a bank of Gabor filters at different scales and rotations for robust palmprint feature extraction. Once these features are extracted, FLDA is applied for dimensionality reduction and class separability. Since the palmprint features are derived from the principal lines, wrinkles, and texture of the palm area, this fact should be considered carefully when selecting the palm region for feature extraction in order to enhance recognition accuracy. To address this problem, an improved region of interest (ROI) extraction algorithm is introduced, which efficiently extracts the whole palm area while ignoring undesirable parts such as the fingers and the background. Experiments have shown that the proposed method yields attractive performance, as evidenced by an Equal Error Rate (EER) of 0.03%.
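A minimal sketch of building such a filter bank: a real Gabor kernel is a sinusoidal carrier under a Gaussian envelope, generated at several scales (`sigma`) and orientations (`theta`). The specific parameter values here are illustrative, not the paper's.

```python
import numpy as np

def gabor_kernel(size, sigma, theta, lam):
    """Real Gabor kernel: carrier of wavelength `lam`, oriented at `theta`,
    under an isotropic Gaussian envelope of width `sigma`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)       # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))  # Gaussian envelope
    return env * np.cos(2 * np.pi * xr / lam)        # modulated carrier

def gabor_bank(size=15, sigmas=(2.0, 4.0),
               thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4), lam=8.0):
    # One kernel per (scale, orientation) pair.
    return [gabor_kernel(size, s, t, lam) for s in sigmas for t in thetas]
```

Each ROI would be convolved with every kernel in the bank, and the filter responses concatenated before FLDA reduces their dimensionality.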

Relevance:

80.00%

Publisher:

Abstract:

The imaging properties of a phase conjugating lens, operating in the far-field zone of the imaged source and augmented with scatterers positioned in the source's near-field region, are theoretically studied in this paper. The phase conjugating lens consists of a double-sided 2D assembly of straight wire elements, individually interconnected through phase conjugation operators. The scattering elements are straight wire segments loaded with lumped impedance loads at their centers. We analytically and numerically analyze all stages of the imaging process: i) evanescent-to-propagating spectrum conversion; ii) the focusing properties of an infinite or finite-sized phase conjugating lens; iii) source reconstruction upon propagating-to-evanescent spectrum conversion. We show that the achievable resolution depends critically on the separation distance between the imaged source and the scattering arrangement, as well as on the topology of the scatterers used. Imaged focal widths as narrow as one-seventh of a wavelength are demonstrated. The results indicate that such an arrangement is a potentially practical means of realising, with conventional materials, devices for fine feature extraction by electromagnetic lensing at distances remote from the source objects under investigation.

Relevance:

80.00%

Publisher:

Abstract:

Recent renewed interest in computational writer identification has resulted in an increased number of publications, but its application to historical musicology has so far been limited. One of the obstacles seems to be that the clarity of the scanned images available for computational analysis is often insufficient. In this paper, the use of the Hinge feature is proposed to avoid segmentation and staff-line removal while still extracting features effectively from low-quality scans. The use of an autoencoder in Hinge feature space is suggested as an alternative to staff-line removal by image processing, and their performance is compared. The experiment shows an accuracy of 87% on a dataset containing samples from 84 writers, and the superiority of our segmentation- and staff-line-removal-free approach. A practical analysis of Bach's autograph manuscript of the Well-Tempered Clavier II (Additional MS. 35021 in the British Library, London) is also presented, demonstrating the broad applicability of our approach.

Relevance:

80.00%

Publisher:

Abstract:

This paper presents a new approach to speech enhancement from single-channel measurements involving both noise and channel distortion (i.e., convolutional noise), and demonstrates its applications for robust speech recognition and for improving noisy speech quality. The approach is based on finding longest matching segments (LMS) from a corpus of clean, wideband speech. The approach adds three novel developments to our previous LMS research. First, we address the problem of channel distortion as well as additive noise. Second, we present an improved method for modeling noise for speech estimation. Third, we present an iterative algorithm which updates the noise and channel estimates of the corpus data model. In experiments using speech recognition as a test with the Aurora 4 database, the use of our enhancement approach as a preprocessor for feature extraction significantly improved the performance of a baseline recognition system. In another comparison against conventional enhancement algorithms, both the PESQ and the segmental SNR ratings of the LMS algorithm were superior to the other methods for noisy speech enhancement.
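The core LMS idea can be sketched as a search for the longest contiguous run of corpus frames that stays close to the aligned query frames. This is a simplified greedy illustration, not the paper's algorithm; `dist` (a frame-level distance) and `tau` (a match threshold) are hypothetical parameters.

```python
def longest_matching_segment(query, corpus, dist, tau):
    """Return (length, start) of the longest contiguous segment in `corpus`
    whose frames each lie within distance `tau` of the aligned `query` frames."""
    best = (0, 0)
    for s in range(len(corpus)):
        length = 0
        while (length < len(query) and s + length < len(corpus)
               and dist(query[length], corpus[s + length]) <= tau):
            length += 1
        if length > best[0]:
            best = (length, s)
    return best
```

In an enhancement pipeline, the matched clean-corpus segment would then replace or regularize the corresponding noisy segment before feature extraction.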

Relevance:

80.00%

Publisher:

Abstract:

This paper presents a new approach to single-channel speech enhancement involving both noise and channel distortion (i.e., convolutional noise). The approach is based on finding longest matching segments (LMS) from a corpus of clean, wideband speech. The approach adds three novel developments to our previous LMS research. First, we address the problem of channel distortion as well as additive noise. Second, we present an improved method for modeling noise. Third, we present an iterative algorithm for improved speech estimates. In experiments using speech recognition as a test with the Aurora 4 database, the use of our enhancement approach as a preprocessor for feature extraction significantly improved the performance of a baseline recognition system. In another comparison against conventional enhancement algorithms, both the PESQ and the segmental SNR ratings of the LMS algorithm were superior to the other methods for noisy speech enhancement.

Index Terms: corpus-based speech model, longest matching segment, speech enhancement, speech recognition

Relevance:

80.00%

Publisher:

Abstract:

Although visual surveillance has emerged as an effective technology for public security, privacy has become an issue of great concern in the transmission and distribution of surveillance videos. For example, personal facial images should not be browsed without permission. To cope with this issue, face image scrambling has emerged as a simple solution for privacy-related applications. Consequently, online facial biometric verification needs to be carried out in the scrambled domain, bringing a new challenge to face classification. In this paper, we investigate face verification in the scrambled domain and propose a novel scheme to handle this challenge. In our proposed method, to make feature extraction from scrambled face images robust, a biased random subspace sampling scheme is applied to construct fuzzy decision trees from randomly selected features, and a fuzzy forest decision using fuzzy memberships is then obtained by combining all the fuzzy tree decisions. In our experiment, we first estimated the optimal parameters for the construction of the random forest, and then applied the optimized model to benchmark tests on three publicly available face datasets. The experimental results show that our proposed scheme copes robustly with the challenging tests in the scrambled domain and achieves improved accuracy over all tests, making it a promising candidate for emerging privacy-related facial biometric applications.
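The biased random subspace sampling step can be sketched as weighted sampling of feature indices without replacement, where each tree in the forest gets a subspace drawn with probability proportional to a per-feature weight. The weights (e.g. robustness scores of features under scrambling) and all names here are hypothetical illustrations, not the paper's definitions.

```python
import random

def biased_subspace(n_features, k, bias_weights, seed=0):
    """Draw `k` distinct feature indices, each chosen with probability
    proportional to its entry in `bias_weights` (weighted, no replacement)."""
    rng = random.Random(seed)
    pool = list(range(n_features))
    w = list(bias_weights)
    chosen = []
    for _ in range(k):
        r = rng.random() * sum(w)
        acc = 0.0
        for i, wi in enumerate(w):   # roulette-wheel selection
            acc += wi
            if r <= acc:
                chosen.append(pool.pop(i))
                w.pop(i)
                break
    return chosen
```

One such call per tree yields the biased subspaces on which the fuzzy decision trees would be grown before their decisions are fused.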