899 results for vector filtering
Abstract:
To meet the special noise-removal requirements of wrapped phase fields, a vector filtering method for preprocessing wrapped phase fields is proposed. The filtering idea, based on power spectral density and a vector transformation, is analyzed theoretically. Simulation experiments study the influence of the filter template size on the filtering performance, compare the vector filtering method with two-pass mean filtering under the same template, and analyze the power spectral density distributions of the two methods. The results show that, compared with two-pass mean filtering, the vector filter keeps the phase map clear while reducing noise; its power spectral density distribution shows no obvious change, and the loss of spectral information after preprocessing is far smaller than with two-pass mean filtering. The vector filtering method improves the signal-to-noise ratio while better preserving the completeness of the information.
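The idea above, in one standard form, is to map the wrapped phase to a (cos, sin) vector field, mean-filter each component over a template window, and recombine with the arctangent, which avoids the artifacts that direct averaging produces at phase jumps. A minimal sketch (template size and noise level are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def vector_filter_wrapped_phase(phase, template=3):
    """Filter a wrapped phase field by averaging its (cos, sin) vector
    representation over a template x template window, then recombining
    with arctan2 so the result stays correctly wrapped."""
    c, s = np.cos(phase), np.sin(phase)
    k = template // 2
    cp = np.pad(c, k, mode="edge")
    sp = np.pad(s, k, mode="edge")
    cf = np.zeros_like(c)
    sf = np.zeros_like(s)
    for i in range(phase.shape[0]):
        for j in range(phase.shape[1]):
            cf[i, j] = cp[i:i + template, j:j + template].mean()
            sf[i, j] = sp[i:i + template, j:j + template].mean()
    return np.arctan2(sf, cf)

# a noisy wrapped ramp: the filtered field remains in (-pi, pi]
rng = np.random.default_rng(0)
ramp = np.angle(np.exp(1j * (np.linspace(0, 6 * np.pi, 32)[None, :]
                             + rng.normal(0, 0.3, (32, 32)))))
filtered = vector_filter_wrapped_phase(ramp, template=3)
```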
Abstract:
Four-dimensional variational data assimilation (4D-Var) combines the information from a time sequence of observations with the model dynamics and a background state to produce an analysis. In this paper, a new mathematical insight into the behaviour of 4D-Var is gained from an extension of concepts that are used to assess the qualitative information content of observations in satellite retrievals. It is shown that the 4D-Var analysis increments can be written as a linear combination of the singular vectors of a matrix which is a function of both the observational and the forecast model systems. This formulation is used to consider the filtering and interpolating aspects of 4D-Var using idealized case-studies based on a simple model of baroclinic instability. The results of the 4D-Var case-studies exhibit the reconstruction of the state in unobserved regions as a consequence of the interpolation of observations through time. The results also exhibit the filtering of components with small spatial scales that correspond to noise, and the filtering of structures in unobserved regions. The singular vector perspective gives a very clear view of this filtering and interpolating by the 4D-Var algorithm and shows that the appropriate specification of the a priori statistics is vital to extract the largest possible amount of useful information from the observations. Copyright © 2005 Royal Meteorological Society
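The singular-vector view of the analysis increment can be illustrated with a small linear example. This is a generic sketch under assumed dimensions and covariances, with a static observation operator H (in 4D-Var, H would stack the observation operator composed with the tangent-linear model over the window); it expresses the increment through the singular vectors of the scaled operator R^{-1/2} H B^{1/2}, with directions of small singular value, i.e. poorly observed structures, damped:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 5                        # state and observation dimensions (assumed)
B = 0.5 * np.eye(n)                # background error covariance (assumed)
R = 0.1 * np.eye(m)                # observation error covariance (assumed)
H = rng.normal(size=(m, n))        # hypothetical linear observation operator
d = rng.normal(size=m)             # innovation vector y - H(x_b)

Bh = np.linalg.cholesky(B)                    # a square root of B
Rih = np.linalg.inv(np.linalg.cholesky(R))    # R^{-1/2}
U, lam, Vt = np.linalg.svd(Rih @ H @ Bh, full_matrices=False)

# increment as a filtered sum over singular vectors:
# each mode is weighted by lam_i / (1 + lam_i^2)
coeffs = lam / (1 + lam**2) * (U.T @ (Rih @ d))
dx_svd = Bh @ (Vt.T @ coeffs)

# cross-check against the standard analysis-increment formula
dx_ref = Bh @ Bh.T @ H.T @ np.linalg.solve(H @ Bh @ Bh.T @ H.T + R, d)
```

The weighting makes the filtering explicit: modes with large singular values (well observed relative to the prior) are drawn mostly from the data, while weakly observed modes are suppressed toward the background.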
Abstract:
An information filtering (IF) system monitors an incoming document stream to find the documents that match the information needs specified by user profiles. Learning to use user profiles effectively is one of the most challenging tasks in developing an IF system. With document selection criteria better defined from the users' needs, filtering large streams of information can be more efficient and effective. To learn user profiles, term-based approaches have been widely used in the IF community because of their simplicity and directness, and they are relatively well established. However, these approaches have problems when dealing with polysemy and synonymy, which often lead to information overload. Recently, pattern-based approaches (or Pattern Taxonomy Models (PTM) [160]) have been proposed for IF by the data mining community. These approaches are better at capturing semantic information and have shown encouraging results for improving the effectiveness of IF systems. On the other hand, pattern discovery from large data streams is not computationally efficient, and these approaches also have to deal with low-frequency pattern issues. The measures used by data mining techniques to learn the profile (for example, "support" and "confidence") have turned out to be unsuitable for filtering and can lead to a mismatch problem. This thesis uses rough set-based (term-based) reasoning and pattern mining as a unified framework for information filtering to overcome the aforementioned problems. The system consists of two stages: topic filtering and pattern mining. The topic-filtering stage is intended to minimize information overload by filtering out the most likely irrelevant information based on the user profiles. A novel user-profile learning method and a theoretical model of threshold setting have been developed using rough set decision theory.
The second stage (pattern mining) aims at solving the problem of information mismatch. This stage is precision-oriented: a new document-ranking function has been derived by exploiting the patterns in the pattern taxonomy, and the most likely relevant documents are assigned higher scores. Because relatively few documents are left after the first stage, the computational cost is markedly reduced; at the same time, pattern discovery yields more accurate results. The overall performance of the system was improved significantly. The new two-stage information filtering model has been evaluated by extensive experiments based on well-known IR benchmarking processes, using the latest version of the Reuters dataset, namely the Reuters Corpus Volume 1 (RCV1). The performance of the new two-stage model was compared with both term-based and data mining-based IF models. The results demonstrate that the proposed information filtering system significantly outperforms the other IF systems, such as the traditional Rocchio IF model and the state-of-the-art term-based models, including BM25, Support Vector Machines (SVM), and the Pattern Taxonomy Model (PTM).
Abstract:
In correlation filtering we attempt to remove the component of the aeromagnetic field that is closely related to the topography. The magnetization vector is assumed to be spatially variable, but it can be successively estimated under the additional assumption that the magnetic component due to topography is uncorrelated with the magnetic signal of deeper origin. The correlation filtering was tested on a synthetic example; the filtered field compares very well with the known signal of deeper origin. We have also applied the method to real data from the south Indian shield. It is demonstrated that the performance of correlation filtering is superior in situations where the direction of magnetization is variable, for example, where the remanent magnetization is dominant.
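The uncorrelatedness assumption suggests a least-squares removal of the topography-related component: fit the observed field against the topography-predicted field and subtract the fitted part. A toy sketch (with an assumed constant magnetization scaling rather than the paper's spatially variable estimate; all signals are synthetic):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
topo_effect = np.sin(np.linspace(0, 20, n))       # field predicted from topography
deep_signal = 2.0 * np.cos(np.linspace(0, 3, n))  # signal of deeper origin
observed = 1.7 * topo_effect + deep_signal + rng.normal(0, 0.05, n)

# least-squares estimate of the (here constant) magnetization scaling,
# valid because the deep signal is assumed uncorrelated with topography
scale = np.dot(observed, topo_effect) / np.dot(topo_effect, topo_effect)
filtered = observed - scale * topo_effect          # topography part removed
```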
Abstract:
This paper considers the design and analysis of a filter at the receiver of a source coding system to mitigate the excess distortion caused by channel errors. The index output by the source encoder is sent over a fading discrete binary symmetric channel and the possibly incorrect received index is mapped to the corresponding codeword by a Vector Quantization (VQ) decoder at the receiver. The output of the VQ decoder is then processed by a receive filter to obtain an estimate of the source instantiation. The distortion performance is analyzed for weighted mean square error (WMSE) and the optimum receive filter that minimizes the expected distortion is derived for two different cases of fading. It is shown that the performance of the system with the receive filter is strictly better than that of a conventional VQ and the difference becomes more significant as the number of bits transmitted increases. Theoretical expressions for an upper and lower bound on the WMSE performance of the system with the receive filter and a Rayleigh flat fading channel are derived. The design of a receive filter in the presence of channel mismatch is also studied and it is shown that a minimax solution is the one obtained by designing the receive filter for the worst possible channel. Simulation results are presented to validate the theoretical expressions and illustrate the benefits of receive filtering.
Abstract:
In this paper we present an information filtering agent called the sharable instructable information filtering agent (SIIFA). It adopts the approach of sharable instructable agents. SIIFA provides comprehensible and flexible interaction to represent and filter the documents. The representation scheme in SIIFA is personalized; it can be shared, either fully or partly, among the users of the stream without revealing their interests, and can be easily edited. SIIFA is evaluated on the comp.ai.neural-nets Usenet newsgroup documents and compared with the vector space method.
Abstract:
We address the classical problem of delta feature computation, and interpret the operation involved in terms of Savitzky-Golay (SG) filtering. Features such as the mel-frequency cepstral coefficients (MFCCs), obtained from short-time spectra of the speech signal, are commonly used in speech recognition tasks. In order to incorporate the dynamics of speech, auxiliary delta and delta-delta features, which are computed as temporal derivatives of the original features, are used. Typically, the delta features are computed in a smooth fashion using local least-squares (LS) polynomial fitting on each feature vector component trajectory. In the light of the original work of Savitzky and Golay, and a recent article by Schafer in IEEE Signal Processing Magazine, we interpret the dynamic feature vector computation for arbitrary derivative orders as SG filtering with a fixed impulse response. This filtering equivalence brings significantly lower latency with no loss in accuracy, as validated by results on a TIMIT phoneme recognition task. The SG filters involved in dynamic parameter computation can also be viewed as modulation filters, as proposed by Hermansky.
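The equivalence can be made concrete: the familiar least-squares delta formula d_t = Σ_{n=1..N} n·(c_{t+n} − c_{t−n}) / (2 Σ_n n²) is exactly a fixed FIR filter whose taps are the first-derivative SG coefficients for a linear fit. A minimal sketch:

```python
import numpy as np

def delta_sg(feats, N=2):
    """feats: (T, D) feature trajectory; returns (T, D) delta features
    by applying the fixed SG first-derivative filter to each component."""
    denom = 2 * sum(k * k for k in range(1, N + 1))
    taps = np.arange(-N, N + 1) / denom            # SG derivative taps
    padded = np.pad(feats, ((N, N), (0, 0)), mode="edge")
    out = np.zeros_like(feats, dtype=float)
    for t in range(feats.shape[0]):
        # d_t = sum_n n*(c_{t+n} - c_{t-n}) / denom, as one inner product
        out[t] = taps @ padded[t:t + 2 * N + 1]
    return out

# on a linear trajectory the interior delta recovers the slope exactly
feats = (3.0 * np.arange(10))[:, None]
deltas = delta_sg(feats)
```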
Abstract:
Medical image segmentation finds application in computer-aided diagnosis, computer-guided surgery, measuring tissue volumes, and locating tumors and pathologies. One approach to segmentation is to use active contours or snakes. Active contours start from an initialization (often manually specified) and are guided by image-dependent forces to the object boundary. Snakes may also be guided by gradient vector fields associated with an image. The first main result in this direction is that of Xu and Prince, who proposed the notion of gradient vector flow (GVF), which is computed iteratively. We propose a new formalism to compute the vector flow based on the notion of bilateral filtering of the gradient field associated with the edge map; we refer to it as the bilateral vector flow (BVF). The range kernel definition that we employ is different from the one employed in the standard Gaussian bilateral filter. The advantage of the BVF formalism is that smooth gradient vector flow fields with enhanced edge information can be computed noniteratively. The quality of image segmentation turned out to be on par with that obtained using the GVF, and in some cases better.
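A rough sketch of the idea, bilateral filtering applied to the edge-map gradient field, is given below. Note that the BVF paper uses a different range kernel; a standard Gaussian range kernel on the vector difference stands in here as an assumption, and all parameter values are illustrative:

```python
import numpy as np

def bilateral_vector_flow(edge_map, half=2, sigma_s=1.5, sigma_r=0.2):
    """Noniterative vector flow: bilateral-filter the gradient field of
    the edge map with a spatial Gaussian and a range kernel on vectors."""
    gy, gx = np.gradient(edge_map.astype(float))
    F = np.stack([gx, gy], axis=-1)                  # gradient vector field
    H, W = edge_map.shape
    out = np.zeros_like(F)
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    w_s = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))    # spatial kernel
    Fp = np.pad(F, ((half, half), (half, half), (0, 0)), mode="edge")
    for i in range(H):
        for j in range(W):
            win = Fp[i:i + 2 * half + 1, j:j + 2 * half + 1]
            diff = ((win - F[i, j]) ** 2).sum(-1)
            w = w_s * np.exp(-diff / (2 * sigma_r**2))   # range kernel
            out[i, j] = (w[..., None] * win).sum((0, 1)) / w.sum()
    return out
```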
Abstract:
This study considers linear filtering methods for minimising the end-to-end average distortion of a fixed-rate source quantisation system. For the source encoder, both scalar and vector quantisation are considered. The codebook index output by the encoder is sent over a noisy discrete memoryless channel whose statistics could be unknown at the transmitter. At the receiver, the code vector corresponding to the received index is passed through a linear receive filter, whose output is an estimate of the source instantiation. Under this setup, an approximate expression for the average weighted mean-square error (WMSE) between the source instantiation and the reconstructed vector at the receiver is derived using high-resolution quantisation theory. Also, a closed-form expression for the linear receive filter that minimises the approximate average WMSE is derived. The generality of the framework developed is further demonstrated by theoretically analysing the performance of other adaptation techniques that can be employed when the channel statistics are also available at the transmitter, such as joint transmit-receive linear filtering and codebook scaling. Monte Carlo simulation results validate the theoretical expressions, and illustrate the improvement in the average distortion that can be obtained using linear filtering techniques.
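The benefit of a linear receive filter can be illustrated with a toy one-bit quantiser over a binary symmetric channel. This is an assumed setup, not the paper's derivation; the filter below is the sample linear-MMSE (Wiener) solution, which shrinks the received codeword toward zero to account for channel errors:

```python
import numpy as np

rng = np.random.default_rng(3)
p = 0.2                                   # channel crossover probability
x = rng.normal(size=100_000)              # Gaussian source samples
levels = np.array([-0.7979, 0.7979])      # 1-bit Lloyd-Max levels for N(0,1)
idx = (x > 0).astype(int)                 # encoder: sign bit
flip = rng.random(x.size) < p             # binary symmetric channel errors
rx = levels[idx ^ flip]                   # codeword decoded from received index

# linear MMSE receive filter (scalar here): w = E[x c] / E[c^2]
w = np.mean(x * rx) / np.mean(rx * rx)
mse_filtered = np.mean((x - w * rx) ** 2)
mse_plain = np.mean((x - rx) ** 2)        # conventional VQ decoding
```

In this scalar case the optimal gain works out to roughly (1 - 2p), so the noisier the channel, the more the filter attenuates the decoded codeword.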
Abstract:
N-gram analysis is an approach that investigates the structure of a program using bytes, characters, or text strings. A key issue with N-gram analysis is feature selection amidst the explosion of features that occurs when N is increased. The experiments within this paper represent programs as operational code (opcode) density histograms gained through dynamic analysis. A support vector machine is used to create a reference model, which is used to evaluate two methods of feature reduction: 'area of intersect' and 'subspace analysis using eigenvectors.' The findings show that the relationships between features are complex and that simple statistical filtering is not a viable approach; eigenvector subspace analysis, however, produces a suitable filter.
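'Subspace analysis using eigenvectors' is essentially principal component analysis of the opcode-density histograms: project onto the leading eigenvectors of the feature covariance before classification. A minimal sketch on synthetic data (the paper's histograms come from dynamic analysis; the dimensions here are assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
n_programs, n_opcodes = 200, 50
X = rng.random((n_programs, n_opcodes))
X /= X.sum(axis=1, keepdims=True)          # density histograms sum to 1

Xc = X - X.mean(axis=0)                    # centre the data
cov = np.cov(Xc, rowvar=False)             # opcode covariance matrix
evals, evecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
top = evecs[:, ::-1][:, :5]                # 5 leading eigenvectors
Z = Xc @ top                               # reduced features for the SVM
```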
Abstract:
Nowadays, vector sensors, which measure both acoustic pressure and particle velocity, are becoming available in underwater acoustic systems, normally configured as vector sensor arrays (VSA). The spatial filtering capabilities of a VSA can be used, with advantage over traditional pressure-only hydrophone arrays, for estimating acoustic field directionality as well as arrival times and spectral content, which could open up the possibility of its use in the estimation of bottom properties. An additional motivation for this work is to test the possibility of using high-frequency probe signals (say, above 2 kHz) to reduce the size and cost of current sub-bottom profilers and geoacoustic inversion methods. This work studies the bottom-related structure of the VSA-acquired signals with regard to the emitted signal waveform, frequency band, and source-receiver geometry in order to estimate bottom properties, especially bottom reflection coefficient characteristics. Such a system was used during the Makai 2005 experiment, off Kauai Island, Hawaii (USA), to receive precoded signals in a broad frequency band from 8 up to 14 kHz. The agreement between the observed and the modelled acoustic data is discussed and preliminary results on bottom reflection estimation are presented.
Abstract:
This paper presents a controller design scheme for a priori unknown non-linear dynamical processes that are identified via an operating point neurofuzzy system from process data. Based on a neurofuzzy design and model construction algorithm (NeuDec) for a non-linear dynamical process, a neurofuzzy state-space model of controllable form is initially constructed. The control scheme based on closed-loop pole assignment is then utilized to ensure the time invariance and linearization of the state equations so that the system stability can be guaranteed under some mild assumptions, even in the presence of modelling error. The proposed approach requires a known state vector for the application of pole assignment state feedback. For this purpose, a generalized Kalman filtering algorithm with coloured noise is developed on the basis of the neurofuzzy state-space model to obtain an optimal state vector estimation. The derived controller is applied in typical output tracking problems by minimizing the tracking error. Simulation examples are included to demonstrate the operation and effectiveness of the new approach.
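The state estimation underlying the pole-assignment feedback can be illustrated with the standard linear Kalman predict/update cycle; the paper's generalized filter additionally handles coloured noise on the neurofuzzy state-space model, which is not shown in this white-noise sketch:

```python
import numpy as np

def kalman_step(x, P, z, A, H, Q, R):
    """One predict/update cycle of a linear Kalman filter."""
    # predict: propagate state and covariance through the model
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # update: correct the prediction with measurement z
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)            # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# usage: the estimate converges toward a repeatedly observed value
x, P = np.zeros(1), np.eye(1)
for _ in range(50):
    x, P = kalman_step(x, P, np.array([1.0]),
                       A=np.eye(1), H=np.eye(1),
                       Q=0.01 * np.eye(1), R=np.eye(1))
```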
Abstract:
One challenge in data assimilation (DA) methods is how the error covariance of the model state is computed. Ensemble methods have been proposed for producing error covariance estimates, as the error is propagated in time using the non-linear model. Variational methods, on the other hand, use the concepts of control theory, whereby the state estimate is optimized from both the background and the measurements. Numerical optimization schemes are applied which solve the problems of memory storage and huge matrix inversions needed by classical Kalman filter methods. The Variational Ensemble Kalman Filter (VEnKF), a method inspired by the Variational Kalman Filter (VKF), enjoys the benefits of both ensemble methods and variational methods. It avoids the filter inbreeding problems which emerge when the ensemble spread underestimates the true error covariance; in VEnKF this is tackled by resampling the ensemble every time measurements are available. One advantage of VEnKF over VKF is that it needs neither tangent linear code nor adjoint code. In this thesis, VEnKF has been applied to a two-dimensional shallow water model simulating a dam-break experiment. The model is a public code with water height measurements recorded at seven stations along the mid-line of the 21.2 m long, 1.4 m wide flume. Because the data were too sparse to assimilate the 30 171-dimensional model state vector, we chose to interpolate the data both in time and in space. The results of the assimilation were compared with those of a pure simulation. We have found that the results revealed by the VEnKF were more realistic, without the numerical artifacts present in the pure simulation. Creating a wrapper code for a model and a DA scheme can be challenging, especially when the two were designed independently or are poorly documented. In this thesis we have presented a non-intrusive approach to coupling the model and a DA scheme: an external program is used to send and receive information between the model and the DA procedure using files.
The advantage of this method is that the model code changes needed are minimal, only a few lines which facilitate input and output. Apart from being simple to couple, the approach can be employed even if the two were written in different programming languages, because the communication is not through code. The non-intrusive approach accommodates parallel computing by simply telling the control program to wait until all the processes have ended before the DA procedure is invoked. It is worth mentioning the overhead introduced by the approach, as at every assimilation cycle both the model and the DA procedure have to be initialized. Nonetheless, the method can be an ideal approach for a benchmark platform in testing DA methods. The non-intrusive VEnKF has been applied to a multi-purpose hydrodynamic model, COHERENS, to assimilate Total Suspended Matter (TSM) in Lake Säkylän Pyhäjärvi. The lake has an area of 154 km² with an average depth of 5.4 m. Turbidity and chlorophyll-a concentrations from MERIS satellite images for 7 days between May 16 and July 6, 2009 were available. The effect of the organic matter has been computationally eliminated to obtain TSM data. Because of the computational demands of both COHERENS and VEnKF, we chose to use a 1 km grid resolution. The results of the VEnKF have been compared with the measurements recorded at an automatic station located in the north-western part of the lake; however, due to TSM data sparsity in both time and space, a good match could not be obtained. The use of multiple automatic stations with real-time data is important to elude the time-sparsity problem; with DA, this will help, for instance, in better understanding environmental hazard variables. We have found that using a very high ensemble size does not necessarily improve the results, because there is a limit beyond which additional ensemble members add very little to the performance.
The successful implementation of the non-intrusive VEnKF and the ensemble size limit for performance lead to an emerging area, Reduced Order Modeling (ROM). To save computational resources, ROM avoids running the full-blown model. When ROM is applied with the non-intrusive DA approach, it might result in a cheaper algorithm that relaxes the computational challenges existing in the fields of modelling and DA.
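The ensemble analysis step that VEnKF builds on can be sketched generically. The following is a stochastic (perturbed-observation) EnKF update on toy data, not the VEnKF implementation itself: the forecast-error covariance is estimated from the ensemble, and each member is updated against an independently perturbed copy of the observation so the analysis spread stays statistically consistent:

```python
import numpy as np

def enkf_analysis(E, y, H, R, rng):
    """E: (n, N) forecast ensemble; y: (m,) observation; returns the
    analysis ensemble after a stochastic EnKF update."""
    n, N = E.shape
    A = E - E.mean(axis=1, keepdims=True)        # ensemble anomalies
    Pf = A @ A.T / (N - 1)                       # sample forecast covariance
    S = H @ Pf @ H.T + R
    K = Pf @ H.T @ np.linalg.inv(S)              # Kalman gain
    # perturbed observations, one per member
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, N).T
    return E + K @ (Y - H @ E)

rng = np.random.default_rng(5)
E = rng.normal(size=(2, 500))            # toy 2-state forecast ensemble
y = np.array([1.0, 1.0])                 # observation of both states
H, R = np.eye(2), 0.1 * np.eye(2)
Ea = enkf_analysis(E, y, H, R, rng)      # analysis pulled toward y
```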