36 results for Smoothing
Abstract:
Neural data are inevitably contaminated by noise. When such noisy data are subjected to statistical analysis, misleading conclusions can be reached. Here we attempt to address this problem by applying a state-space smoothing method, based on the combined use of Kalman filter theory and the Expectation–Maximization algorithm, to denoise two datasets of local field potentials recorded from monkeys performing a visuomotor task. For the first dataset, we found that the analysis of high gamma band (60–90 Hz) neural activity in the prefrontal cortex is highly susceptible to the effect of noise, and that denoising leads to markedly improved, physiologically interpretable results. For the second dataset, Granger causality between primary motor and primary somatosensory cortices was not consistent across two monkeys, and the effect of noise was suspected. After denoising, the discrepancy between the two subjects was significantly reduced.
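The abstract does not spell out the implementation, but the kind of state-space smoothing it describes is commonly realized as a Kalman filter forward pass followed by a Rauch-Tung-Striebel backward pass. Below is a minimal sketch for a scalar linear-Gaussian model; the parameters A, Q and R are illustrative placeholders that, in the approach described, would be estimated from the data by the Expectation-Maximization algorithm.

```python
import numpy as np

def rts_smooth(y, A=1.0, Q=0.01, R=1.0, x0=0.0, P0=1.0):
    """Kalman filter forward pass + Rauch-Tung-Striebel backward pass
    for a scalar linear-Gaussian state-space model:
        x_t = A x_{t-1} + w_t,  w_t ~ N(0, Q)
        y_t = x_t + v_t,        v_t ~ N(0, R)
    """
    n = len(y)
    xf = np.zeros(n); Pf = np.zeros(n)   # filtered mean / variance
    xp = np.zeros(n); Pp = np.zeros(n)   # one-step predictions
    x, P = x0, P0
    for t in range(n):
        # predict
        x, P = A * x, A * P * A + Q
        xp[t], Pp[t] = x, P
        # update with observation y[t]
        K = P / (P + R)
        x = x + K * (y[t] - x)
        P = (1.0 - K) * P
        xf[t], Pf[t] = x, P
    # backward (smoothing) pass
    xs = xf.copy()
    for t in range(n - 2, -1, -1):
        C = Pf[t] * A / Pp[t + 1]
        xs[t] = xf[t] + C * (xs[t + 1] - xp[t + 1])
    return xs

# illustrative use on a noisy sinusoid standing in for an LFP trace
t = np.linspace(0, 1, 500)
noisy = np.sin(2 * np.pi * 5 * t) + 0.5 * np.random.randn(t.size)
denoised = rts_smooth(noisy, A=0.99, Q=0.05, R=0.25)
```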
Abstract:
In this paper, we solve the distributed parameter fixed point smoothing problem by formulating it as an extended linear filtering problem and show that these results coincide with those obtained in the literature using the forward innovations method.
Abstract:
The effect of using a spatially smoothed forward-backward covariance matrix on the performance of weighted eigen-based state-space methods/ESPRIT and weighted MUSIC for direction-of-arrival (DOA) estimation is analyzed. Expressions for the mean-squared error in the estimates of the signal zeros and the DOA estimates, along with some general properties of the estimates and optimal weighting matrices, are derived. A key result is that optimally weighted MUSIC and weighted state-space methods/ESPRIT have identical asymptotic performance. Moreover, by properly choosing the number of subarrays, the performance of unweighted state-space methods can be significantly improved. It is also shown that the mean-squared error in the DOA estimates is independent of the exact distribution of the source amplitudes. This results in a unified framework for dealing with both DOA estimation using a uniformly spaced linear sensor array and time-series frequency estimation problems.
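For reference, here is a minimal sketch of how a forward-backward spatially smoothed covariance matrix is commonly formed from subarray blocks of the full array covariance; the subarray length L and the use of NumPy are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def fb_spatial_smooth(R, L):
    """Forward-backward spatially smoothed covariance of an M x M
    array covariance matrix R, using subarrays of length L.
    Returns an L x L smoothed covariance matrix."""
    M = R.shape[0]
    K = M - L + 1                      # number of forward subarrays
    J = np.fliplr(np.eye(L))           # exchange (reversal) matrix
    Rs = np.zeros((L, L), dtype=complex)
    for k in range(K):
        Rk = R[k:k + L, k:k + L]       # k-th forward subarray block
        Rs += Rk + J @ Rk.conj() @ J   # add its backward counterpart
    return Rs / (2 * K)
```

Choosing L trades aperture (larger L) against the number of subarrays K = M - L + 1 available for decorrelation, which is the trade-off the paper's choice of the number of subarrays addresses.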
Abstract:
By using the strain smoothing technique proposed by Chen et al. (Comput. Mech. 2000; 25: 137-156) for meshless methods in the context of the finite element method (FEM), Liu et al. (Comput. Mech. 2007; 39(6): 859-877) developed the Smoothed FEM (SFEM). Although the SFEM is not yet well understood mathematically, numerical experiments point to potentially useful features of this particularly simple modification of the FEM. To date, the SFEM has only been investigated for bilinear and Wachspress approximations and is limited to linear reproducing conditions. The goal of this paper is to extend strain smoothing to higher-order elements and to investigate numerically under which conditions strain smoothing is beneficial to the accuracy and convergence of enriched finite element approximations. We focus on three widely used enrichment schemes, namely: (a) weak discontinuities; (b) strong discontinuities; (c) near-tip linear elastic fracture mechanics functions. The main conclusion is that strain smoothing in enriched approximations is only beneficial when the enrichment functions are polynomial (cases (a) and (b)); non-polynomial enrichment of type (c) leads to methods inferior to the standard enriched FEM (e.g. XFEM). Copyright (C) 2011 John Wiley & Sons, Ltd.
Abstract:
The paper analyses the effect of spatial smoothing on the performance of the MUSIC algorithm. In particular, an attempt is made to bring out two effects of the smoothing: (i) reduction of the effective correlation between the impinging signals and (ii) reduction of the noise perturbations due to finite data. For the case of a two-source scenario with widely spaced sources, simplified expressions for the improvement due to smoothing have been obtained, which provide more insight into the impact of smoothing. In particular, these expressions yield a pessimistic estimate of the minimum source correlation beyond which smoothing is beneficial. Computer simulations are used to demonstrate the usefulness of the analytical results.
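As a companion to this abstract, here is a minimal sketch of the standard narrowband MUSIC pseudospectrum computed from a (possibly spatially smoothed) covariance matrix of a uniform linear array; the half-wavelength spacing, function names and grid of candidate angles are illustrative assumptions.

```python
import numpy as np

def music_spectrum(R, n_sources, angles_deg, d=0.5):
    """Standard MUSIC pseudospectrum for a uniform linear array.
    R         : L x L (smoothed) covariance matrix
    n_sources : assumed number of impinging signals
    angles_deg: candidate DOAs in degrees
    d         : element spacing in wavelengths (half wavelength here)."""
    L = R.shape[0]
    # eigendecomposition; noise subspace = eigenvectors belonging to the
    # L - n_sources smallest eigenvalues (eigh sorts them in ascending order)
    w, V = np.linalg.eigh(R)
    En = V[:, : L - n_sources]
    m = np.arange(L)
    spectrum = []
    for theta in np.deg2rad(angles_deg):
        a = np.exp(1j * 2 * np.pi * d * m * np.sin(theta))  # steering vector
        denom = np.linalg.norm(En.conj().T @ a) ** 2
        spectrum.append(1.0 / denom)
    return np.array(spectrum)
```

DOA estimates are read off as the peaks of the returned pseudospectrum; feeding in a spatially smoothed covariance (as in the sketch after the earlier abstract) is what restores those peaks when the sources are correlated.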
Abstract:
A statistical performance analysis of the ESPRIT, root-MUSIC, and minimum-norm methods for direction estimation under finite-data perturbations, using the modified spatially smoothed covariance matrix, is developed. Expressions for the mean-squared error in the direction estimates are derived within a common framework. The analysis shows that the modified smoothed covariance matrix improves the performance of these methods when the sources are fully correlated. The performance also remains good when the number of subarrays is large, unlike in the case of the conventionally smoothed covariance matrix. However, the performance for uncorrelated sources deteriorates because of an artificial correlation introduced by the modified smoothing. The theoretical expressions are validated using extensive simulations. (C) 1999 Elsevier Science B.V. All rights reserved.
Abstract:
Medical image segmentation finds application in computer-aided diagnosis, computer-guided surgery, measuring tissue volumes, and locating tumors and pathologies. One approach to segmentation is to use active contours or snakes. Active contours start from an initialization (often manually specified) and are guided by image-dependent forces to the object boundary. Snakes may also be guided by gradient vector fields associated with an image. The first main result in this direction is that of Xu and Prince, who proposed the notion of gradient vector flow (GVF), which is computed iteratively. We propose a new formalism to compute the vector flow based on the notion of bilateral filtering of the gradient field associated with the edge map - we refer to it as the bilateral vector flow (BVF). The range kernel definition that we employ is different from the one employed in the standard Gaussian bilateral filter. The advantage of the BVF formalism is that smooth gradient vector flow fields with enhanced edge information can be computed noniteratively. The quality of image segmentation turned out to be on par with that obtained using the GVF and, in some cases, better.
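The paper's bilateral vector flow construction is not detailed in the abstract; as a point of reference, the following is a minimal sketch of the baseline it is compared against, the iterative gradient vector flow of Xu and Prince, written with the usual explicit update. The regularization weight mu, step size and iteration count are illustrative choices, not values from the paper.

```python
import numpy as np

def gvf(edge_map, mu=0.2, dt=0.25, n_iter=200):
    """Gradient vector flow (Xu & Prince style) of a 2-D edge map.
    Explicit iteration:  u <- u + dt * (mu * laplacian(u) - (u - fx) * mag)
    with mag = fx^2 + fy^2.  Returns the flow components (u, v)."""
    fy, fx = np.gradient(edge_map.astype(float))   # gradients along rows, cols
    mag = fx ** 2 + fy ** 2
    u, v = fx.copy(), fy.copy()

    def laplacian(a):
        # 5-point Laplacian with wrap-around borders (adequate for a sketch)
        return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
                np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

    for _ in range(n_iter):
        u += dt * (mu * laplacian(u) - (u - fx) * mag)
        v += dt * (mu * laplacian(v) - (v - fy) * mag)
    return u, v
```

The iteration count is the cost the BVF formalism avoids, since the abstract's flow field is computed noniteratively.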
Abstract:
Images obtained through fluorescence microscopy at low numerical aperture (NA) are noisy and have poor resolution. Images of specimens such as F-actin filaments obtained using confocal or widefield fluorescence microscopes contain directional information, and it is important that an image smoothing or filtering technique preserve this directionality. F-actin filaments are widely studied in pathology because abnormalities in actin dynamics play a key role in the diagnosis of cancer, cardiac diseases, vascular diseases, myofibrillar myopathies, neurological disorders, etc. We develop the directional bilateral filter as a means of filtering out the noise in the image without significantly altering the directionality of the F-actin filaments. The bilateral filter is anisotropic to start with, but we add an additional degree of anisotropy by employing an oriented domain kernel for smoothing. The orientation is locally adapted using a structure tensor, and the parameters of the bilateral filter are optimized within the framework of statistical risk minimization. We show that the directional bilateral filter has better denoising performance than the traditional Gaussian bilateral filter and other denoising techniques such as SURE-LET, non-local means, and guided image filtering at various noise levels in terms of peak signal-to-noise ratio (PSNR). We also show quantitative improvements in low-NA images of F-actin filaments. (C) 2015 Author(s). All article content, except where otherwise noted, is licensed under a Creative Commons Attribution 3.0 Unported License.
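The directional bilateral filter described above adapts an oriented domain kernel via a structure tensor; the exact construction is not given in the abstract. As a hedged reference point, below is a minimal sketch of the standard isotropic Gaussian bilateral filter that it extends; the sigma values assume intensities normalized to [0, 1] and are illustrative.

```python
import numpy as np

def bilateral_filter(img, sigma_s=2.0, sigma_r=0.1, radius=5):
    """Standard (isotropic) Gaussian bilateral filter: the domain kernel
    weights pixels by spatial distance, the range kernel by intensity
    difference.  The directional variant in the paper replaces the
    isotropic domain kernel with an oriented, structure-tensor-adapted one."""
    H, W = img.shape
    out = np.zeros_like(img, dtype=float)
    ax = np.arange(-radius, radius + 1)
    dx, dy = np.meshgrid(ax, ax)
    domain = np.exp(-(dx ** 2 + dy ** 2) / (2 * sigma_s ** 2))
    padded = np.pad(img.astype(float), radius, mode='reflect')
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            range_k = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma_r ** 2))
            w = domain * range_k
            out[i, j] = np.sum(w * patch) / np.sum(w)
    return out
```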
Abstract:
Analytical expressions for the corrections to duality are obtained for nonsingular potentials, and are found to be small numerically. An alternative consistent way of energy smoothing, developed by Strutinsky, is elucidated. This may be of use even when potential models are not valid.
Abstract:
This paper presents two algorithms for smoothing and feature extraction for fingerprint classification. Deutsch's thinning algorithm (2) for rectangular arrays is used to thin the digitized (binary) fingerprint. A simple algorithm is also suggested for classifying the fingerprints. Experimental results obtained using these algorithms are presented.
Abstract:
Cricket is one of the most popular games in the Asian subcontinent, and its popularity is increasing every day. The replacement of the cricket ball during a match is always an uncomfortable situation for teams, umpires and even supporters. At present, the replacement is based solely on the judgement, experience and expertise of the umpires, which is subjective, controversial and debatable. In this paper, we attempt a new approach to quantify the number of impacts, or impact factor, of a 4-piece leather ball used in international one-day and test cricket matches. This gives a more objective and scientific basis/criteria for the replacement of the ball. Here, we use the well-known and widely used Thermal Infra-Red (TIR) imaging technique to capture the dynamics of the thermal profile of the cricket ball, which has been heated for about 15 seconds. The idea behind this approach is the simple observation that an old ball (a ball with a number of impacts) has a different thermal signature/profile compared to that of a new ball. This could be due to changes in the surface profile and internal structure, minor de-shaping, opening of the seam, etc. The TIR video and its frames, which are inherently noisy, are restored using a Hebbian learning based FIR filter, which performs optimal smoothing in relatively few iterations. We focus on the hottest region of the ball, i.e., the inner core, and track its thermal profile dynamics. Finally, we use a multi-layer perceptron (MLP) model to quantify the impact factor with fairly good accuracy.
Abstract:
In positron emission tomography (PET), image reconstruction is a demanding problem. Since PET image reconstruction is an ill-posed inverse problem, new methodologies need to be developed. Although previous studies show that incorporating spatial and median priors improves image quality, artifacts such as over-smoothing and streaking are evident in the reconstructed image. In this work, we use a simple yet powerful technique to tackle the PET image reconstruction problem. The proposed technique is based on the integration of a Bayesian approach with a finite impulse response (FIR) filter. A FIR filter is designed whose coefficients are determined from a surface diffusion model. The resulting reconstructed image is iteratively filtered and fed back to obtain the new estimate. Experiments are performed on a simulated PET system. The results show that the proposed approach is better than the recently proposed MRP algorithm in terms of image quality and normalized mean square error.
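The abstract describes interleaving reconstruction with a FIR filter whose coefficients come from a surface diffusion model, but the filter design itself is not given. The sketch below therefore uses standard MLEM updates with a generic small smoothing kernel fed back between iterations as a stand-in; the system matrix A, the kernel values and the assumption of a square image are all illustrative, not the paper's design.

```python
import numpy as np
from scipy.ndimage import convolve

def mlem_with_smoothing(A, y, n_iter=20, kernel=None):
    """MLEM reconstruction with an inter-iteration FIR smoothing step.
    A      : (n_detectors x n_pixels) system matrix
    y      : measured sinogram counts
    kernel : small 2-D FIR kernel applied to each estimate (a generic
             stand-in for surface-diffusion-derived coefficients)."""
    n_pix = A.shape[1]
    side = int(np.sqrt(n_pix))          # image assumed square
    if kernel is None:
        kernel = np.array([[0.05, 0.1, 0.05],
                           [0.10, 0.4, 0.10],
                           [0.05, 0.1, 0.05]])
    x = np.ones(n_pix)
    sens = A.sum(axis=0) + 1e-12        # sensitivity image
    for _ in range(n_iter):
        proj = A @ x + 1e-12
        x = x / sens * (A.T @ (y / proj))    # standard MLEM update
        # feed the smoothed estimate back into the next iteration
        x = convolve(x.reshape(side, side), kernel, mode='nearest').ravel()
    return x.reshape(side, side)
```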
Abstract:
Neural networks find application in many image denoising tasks because of inherent characteristics such as nonlinear mapping and self-adaptiveness. The design of filters largely depends on a priori knowledge about the type of noise; as a result, standard filters are application- and image-specific. Widely used filtering algorithms reduce noisy artifacts by smoothing; however, this operation normally smooths the edges as well. On the other hand, sharpening filters enhance the high-frequency details, making the image non-smooth. In this study, an integrated general approach is proposed to design a finite impulse response filter based on a principal component neural network (PCNN) for image filtering, optimized in terms of both visual inspection and an error metric. The algorithm exploits the inter-pixel correlation by iteratively updating the filter coefficients using the PCNN, and performs optimal smoothing of the noisy image while preserving high- and low-frequency features. Evaluation results show that the proposed filter is robust under various noise distributions. Further, the number of unknown parameters is very small, and most of these parameters are obtained adaptively from the processed image.
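The abstract does not reproduce the PCNN update rule. As an indication of the kind of principal-component neural learning involved, here is a minimal sketch of the classical single-neuron Oja rule applied to image patches; the patch size, learning rate and helper functions are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def oja_principal_component(patches, lr=0.01, n_epochs=10):
    """Single-neuron Oja rule: w <- w + lr * y * (x - y * w), with y = w . x.
    Converges to the leading principal component of the patch ensemble,
    which can then serve as a data-adaptive FIR smoothing kernel."""
    rng = np.random.default_rng(0)
    w = rng.standard_normal(patches.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_epochs):
        for x in patches:
            y = w @ x
            w += lr * y * (x - y * w)
    return w / np.linalg.norm(w)

def extract_patches(img, k=5):
    """Collect all k x k patches (flattened, zero-mean) from a 2-D image."""
    H, W = img.shape
    ps = [img[i:i + k, j:j + k].ravel()
          for i in range(H - k + 1) for j in range(W - k + 1)]
    ps = np.array(ps, dtype=float)
    return ps - ps.mean(axis=1, keepdims=True)
```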
Abstract:
Denoising of medical images in the wavelet domain has potential application in transmission technologies such as teleradiology. This technique becomes all the more attractive when we consider progressive transmission in a teleradiology system. The transmitted images are corrupted mainly by noisy channels. In this paper, we present a new real-time image denoising scheme based on limited restoration of bit-planes of wavelet coefficients. The proposed scheme exploits a fundamental property of the wavelet transform - its ability to analyze the image at different resolution levels and the edge information associated with each sub-band. The desired bit-rate control is achieved by applying the restoration to a limited number of bit-planes, subject to optimal smoothing. The proposed method adapts itself to the preference of the medical expert; a single parameter can be used to balance the preservation of (expert-dependent) relevant details against the degree of noise reduction. The scheme relies on the fact that noise commonly manifests itself as a fine-grained structure in the image, and the wavelet transform allows the restoration strategy to adapt to the directional features of edges. The proposed approach shows promising results, in terms of error reduction, when compared with the unrestored case. It is also able to adapt to situations where the noise level in the image varies and to the changing requirements of medical experts. The applicability of the proposed approach has implications for the restoration of medical images in teleradiology systems. The proposed scheme is computationally efficient.
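The bit-plane restoration scheme itself is not specified in the abstract. As a related, standard baseline (not the authors' method), the sketch below performs plain soft-thresholding of wavelet detail sub-bands with PyWavelets, with a single threshold playing the role of the expert-tunable parameter; the wavelet and decomposition level are assumptions.

```python
import numpy as np
import pywt

def wavelet_soft_threshold(img, wavelet='db2', level=2, thresh=20.0):
    """Baseline wavelet-domain denoising: decompose, soft-threshold the
    detail sub-bands, reconstruct.  A single threshold balances detail
    preservation against noise reduction, as described in the abstract."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    new_coeffs = [coeffs[0]]                     # keep approximation band
    for details in coeffs[1:]:
        new_coeffs.append(tuple(pywt.threshold(d, thresh, mode='soft')
                                for d in details))
    return pywt.waverec2(new_coeffs, wavelet)
```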
Abstract:
We propose the F-norm of the cross-correlation part of the array covariance matrix as a measure of the correlation between the impinging signals, and study the performance of different decorrelation methods in the broadband case using this measure. We first show that the dimensionality of the composite signal subspace, defined as the number of significant eigenvectors of the source sample covariance matrix, collapses in the presence of multipath, and that spatial smoothing recovers this dimensionality. Using an upper bound on the proposed measure, we then study the decorrelation of broadband signals with spatial smoothing and the effect of the spacing and directions of the sources on the rate of decorrelation with progressive smoothing. Next, we introduce a weighted smoothing method based on Toeplitz-block-Toeplitz (TBT) structuring of the data covariance matrix, which decorrelates the signals much faster than spatial smoothing. Computer simulations are included to demonstrate the performance of the two methods.
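The paper defines its measure on the array covariance matrix; the sketch below illustrates the same idea on the effective source covariance seen after forward spatial smoothing (the classical Shan-Wax-Kailath picture), reporting the F-norm of the cross-correlation part relative to the diagonal part. The angles, spacing and function names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def effective_source_cov(Rs, thetas_deg, n_sub, d=0.5):
    """Effective source covariance after forward spatial smoothing with
    n_sub subarrays: the k-th subarray sees D^k Rs D^-k, where
    D = diag(exp(-j*2*pi*d*sin(theta_i)))."""
    D = np.diag(np.exp(-1j * 2 * np.pi * d * np.sin(np.deg2rad(thetas_deg))))
    R = np.zeros_like(Rs, dtype=complex)
    Dk = np.eye(Rs.shape[0], dtype=complex)
    for _ in range(n_sub):
        R += Dk @ Rs @ Dk.conj().T
        Dk = Dk @ D
    return R / n_sub

def correlation_measure(R):
    """F-norm of the cross-correlation (off-diagonal) part of a source
    covariance matrix, normalized by the F-norm of its diagonal part."""
    off = R - np.diag(np.diag(R))
    return np.linalg.norm(off) / np.linalg.norm(np.diag(R))

# two fully coherent sources at 10 and 30 degrees: the measure drops as the
# number of subarrays used for smoothing grows
Rs = np.array([[1.0, 1.0], [1.0, 1.0]], dtype=complex)
for K in (1, 2, 4, 8):
    print(K, correlation_measure(effective_source_cov(Rs, [10, 30], K)))
```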