103 results for Gaussian convolution


Relevance:

10.00%

Publisher:

Abstract:

In this work the G_A^0 distribution is assumed as the universal model for amplitude Synthetic Aperture Radar (SAR) imagery data under the Multiplicative Model. The observed data are therefore assumed to obey a G_A^0(alpha, gamma, n) law, where the parameter n is related to the speckle noise, and (alpha, gamma) are related to the ground truth, giving information about the background. Thus, maps generated by the estimation of (alpha, gamma) at each coordinate can be used as the input for classification methods. Maximum likelihood estimators are derived and used to form estimated parameter maps. This estimation can be hampered by the presence of corner reflectors, man-made objects used to calibrate SAR images that produce large return values. In order to alleviate this contamination, robust (M) estimators are also derived for the universal model. Gaussian maximum likelihood classification is used to obtain maps using hard-to-deal-with simulated data, and the superiority of robust estimation is quantitatively assessed.
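A minimal sketch of the contamination effect described above. The G_A^0 density itself is not reproduced here; a gamma-distributed background stands in for the speckled amplitudes, a few huge "corner reflector" returns are injected, and a Gaussian-model ML location estimate (the sample mean) is contrasted with a simple robust M-estimate (the median):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in: speckle-like background amplitudes (not the
# actual G_A^0 law), contaminated by a few very large corner-reflector
# returns used to calibrate SAR images.
background = rng.gamma(shape=4.0, scale=1.0, size=500)
corners = np.full(5, 200.0)                 # calibration targets: huge returns
sample = np.concatenate([background, corners])

# ML estimate of location under a Gaussian model: the sample mean,
# which is pulled far off by the contamination.
ml_estimate = sample.mean()

# A simple robust M-estimator: the median (Huber-type M-estimators
# behave similarly for this purpose).
robust_estimate = np.median(sample)

print(ml_estimate, robust_estimate)
```

The robust estimate stays near the background value while the mean is dragged upward, which is the qualitative point the paper quantifies for the full G_A^0 model.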


A novel framework for multimodal semantic-associative collateral image labelling, aiming at associating image regions with textual keywords, is described. Both the primary image and the collateral textual modalities are exploited in a cooperative and complementary fashion. The collateral content- and context-based knowledge is used to bias the mapping from the low-level region-based visual primitives to the high-level visual concepts defined in a visual vocabulary. We introduce the notion of collateral context, which is represented as a co-occurrence matrix of the visual keywords. A collaborative mapping scheme is devised using statistical methods such as the Gaussian distribution or Euclidean distance, together with a collateral content- and context-driven inference mechanism. Finally, we use Self-Organising Maps to examine the classification and retrieval effectiveness of the proposed high-level image feature vector model, which is constructed from the image labelling results.
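The collateral context above is a co-occurrence matrix over the visual vocabulary. A minimal sketch, with a hypothetical vocabulary and per-image keyword annotations (the names are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical visual vocabulary and per-image keyword annotations.
vocab = ["sky", "water", "sand", "tree"]
index = {w: i for i, w in enumerate(vocab)}
images = [
    ["sky", "water"],
    ["sky", "water", "sand"],
    ["sky", "tree"],
]

# Collateral context: C[i, j] counts how often keywords i and j are
# annotated on the same image.
C = np.zeros((len(vocab), len(vocab)), dtype=int)
for keywords in images:
    for a in keywords:
        for b in keywords:
            if a != b:
                C[index[a], index[b]] += 1

print(C)
```

The matrix is symmetric by construction, and its rows can then bias the region-to-keyword mapping toward keywords that tend to co-occur.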


This work compares classification results for lactose, mandelic acid and dl-mandelic acid, obtained on the basis of their respective THz transients. The performance of three different pre-processing algorithms applied to the time-domain signatures obtained using a THz-transient spectrometer is contrasted by evaluating the classifier performance. A range of amplitudes of zero-mean white Gaussian noise is used to artificially degrade the signal-to-noise ratio of the time-domain signatures and generate the data sets presented to the classifier for both learning and validation purposes. This gradual degradation of the interferograms by increasing the noise level is equivalent to performing measurements with a reduced integration time. Three signal-processing algorithms were adopted for the evaluation of the complex insertion loss function of the samples under study: (a) standard evaluation by ratioing the sample with the background spectra; (b) a subspace identification algorithm; and (c) a novel wavelet-packet identification procedure. Within-class and between-class dispersion metrics are adopted for the three data sets. A discrimination metric evaluates how well the three classes can be distinguished within the frequency range 0.1–1.0 THz using the above algorithms.
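A sketch of within-class and between-class dispersion on toy feature vectors (stand-ins for the three compound signatures). The discrimination metric shown, a simple between/within scatter ratio, is one plausible form; the paper's exact metric may differ:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy feature vectors for three classes (illustrative stand-ins for the
# lactose, mandelic acid and dl-mandelic acid signatures).
classes = [rng.normal(loc=m, scale=0.5, size=(40, 3))
           for m in ([0, 0, 0], [3, 0, 0], [0, 3, 0])]

overall_mean = np.vstack(classes).mean(axis=0)

# Within-class dispersion: summed scatter about each class mean.
S_w = sum(((X - X.mean(axis=0)) ** 2).sum() for X in classes)

# Between-class dispersion: scatter of class means about the overall mean.
S_b = sum(len(X) * ((X.mean(axis=0) - overall_mean) ** 2).sum()
          for X in classes)

# Larger values mean the three classes are easier to distinguish.
discrimination = S_b / S_w
print(discrimination)
```

Degrading the signatures with additive noise inflates S_w and drives this ratio down, mirroring the SNR sweep in the paper.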


In this paper, we present a feature selection approach based on Gabor wavelet features and boosting for face verification. By convolution with a group of Gabor wavelets, the original images are transformed into vectors of Gabor wavelet features. Then, for each individual, a small set of significant features is selected by the boosting algorithm from the large set of Gabor wavelet features. The experimental results show that the approach successfully selects meaningful and explainable features for face verification. The experiments also suggest that common characteristics such as the eyes, nose and mouth may not be as important as unique characteristics when the training set is small; when the training set is large, both the unique and the common characteristics are important.
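A minimal sketch of the feature-extraction step: build a small bank of Gabor wavelets (one common real-valued parameterisation; the paper's exact kernel family and the boosting-based selection are not reproduced) and convolve an image patch with each:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a 2-D Gabor wavelet: Gaussian envelope times an
    oriented cosine carrier (one common parameterisation)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

# A small bank of orientations; in the paper a much larger bank over
# scales, orientations and positions feeds the boosting selector.
bank = [gabor_kernel(15, wavelength=6.0, theta=t, sigma=4.0)
        for t in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)]

rng = np.random.default_rng(2)
patch = rng.random((15, 15))                # stand-in image patch
features = np.array([(patch * k).sum() for k in bank])  # one response each
print(features.shape)
```

Each (position, kernel) pair yields one Gabor feature; boosting then picks the few responses that best separate one person from the rest.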


The identification and visualization of clusters formed by motor unit action potentials (MUAPs) is an essential step in investigations seeking to explain the control of the neuromuscular system. This work introduces the generative topographic mapping (GTM), a novel machine learning tool, for the clustering of MUAPs, and extends the GTM technique to provide a way of visualizing MUAPs. The performance of GTM was compared to that of three other clustering methods: the self-organizing map (SOM), a Gaussian mixture model (GMM), and the neural-gas network (NGN). The results, based on the study of experimental MUAPs, showed that the success rates of both GTM and SOM exceeded those of GMM and NGN, and that GTM may in practice be used as a principled alternative to the SOM in the study of MUAPs. A visualization tool, which we call the GTM grid, was devised for visualizing MUAPs lying in a high-dimensional space. The visualization provided by the GTM grid was compared to that obtained from principal component analysis (PCA). (c) 2005 Elsevier Ireland Ltd. All rights reserved.
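A sketch of the PCA baseline the GTM grid is compared against: project high-dimensional waveform vectors onto their two leading principal components, computed here via the SVD (the data are random stand-ins for MUAP traces, not the paper's recordings):

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for MUAP waveforms: 60 traces of 100 samples each.
X = rng.normal(size=(60, 100))

# PCA via SVD of the mean-centred data: the rows of Vt are the
# principal axes, and projecting onto the first two gives a 2-D map.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
coords = Xc @ Vt[:2].T          # (60, 2) map coordinates

explained = (s[:2] ** 2).sum() / (s ** 2).sum()
print(coords.shape, explained)
```

The GTM grid plays the same role (a 2-D map of the waveforms) but is derived from a constrained probabilistic mixture rather than a linear projection.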


A greedy technique is proposed to construct parsimonious kernel classifiers using the orthogonal forward selection method and boosting, based on the Fisher ratio for class separability measure. Unlike most kernel classification methods, which restrict kernel means to the training input data and use a fixed common variance for all the kernel terms, the proposed technique can tune both the mean vector and the diagonal covariance matrix of each individual kernel by incrementally maximizing the Fisher ratio of class separability. An efficient weighted optimization method based on boosting is developed to append kernels one by one in an orthogonal forward selection procedure. Experimental results obtained using this construction technique demonstrate that it offers a viable alternative to the existing state-of-the-art kernel modeling methods for constructing sparse Gaussian radial basis function network classifiers that generalize well.
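A much-simplified sketch of the greedy idea: candidate Gaussian RBF kernels are centred on training points (rather than tuned), and at each step the candidate whose activations give the largest Fisher ratio is appended. The boosting weights and the orthogonalisation of the paper are omitted here:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two-class toy data.
X = np.vstack([rng.normal(loc=-2.0, size=(30, 1)),
               rng.normal(loc=+2.0, size=(30, 1))])
y = np.array([0] * 30 + [1] * 30)

def fisher_ratio(f, y):
    """Class-separability of a single feature column f."""
    f0, f1 = f[y == 0], f[y == 1]
    return (f0.mean() - f1.mean()) ** 2 / (f0.var() + f1.var() + 1e-12)

def rbf_column(X, centre, width):
    return np.exp(-((X - centre) ** 2).sum(axis=1) / (2 * width ** 2))

# Greedy forward selection: append the candidate kernel with the
# largest Fisher ratio, one at a time.
candidates = list(range(len(X)))
selected = []
for _ in range(3):
    best = max(candidates,
               key=lambda i: fisher_ratio(rbf_column(X, X[i], 1.0), y))
    selected.append(best)
    candidates.remove(best)

print(selected)
```

The paper's contribution is precisely what this sketch leaves out: tuning each kernel's mean and diagonal covariance, and weighting the search with boosting inside an orthogonal forward selection.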


A quadratic programming optimization procedure for designing asymmetric apodization windows tailored to the shape of time-domain sample waveforms recorded using a terahertz transient spectrometer is proposed. By artificially degrading the waveforms, the performance of the designed window in both the time and the frequency domains is compared with that of conventional rectangular, triangular (Mertz), and Hamming windows. Examples of window optimization assuming Gaussian functions as the building elements of the apodization window are provided. The formulation is sufficiently general to accommodate other basis functions. (C) 2007 Optical Society of America
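A sketch of the window construction: an asymmetric apodization window assembled as a weighted sum of Gaussian basis functions, alongside the conventional windows it is compared against. The centres, widths and weights here are illustrative placeholders, not the paper's QP-optimised solution:

```python
import numpy as np

# Asymmetric apodization window as a sum of Gaussian basis functions.
t = np.linspace(0.0, 1.0, 512)
centres = np.array([0.15, 0.30, 0.55])      # illustrative, not optimised
widths = np.array([0.08, 0.12, 0.20])
weights = np.array([1.0, 0.7, 0.3])

basis = np.exp(-0.5 * ((t[:, None] - centres) / widths) ** 2)
window = basis @ weights
window /= window.max()                      # normalise to unit peak

# Conventional symmetric windows for comparison.
hamming = np.hamming(512)
triangular = np.bartlett(512)
print(window.shape)
```

In the paper the weights are found by quadratic programming so that the window tracks the shape of the recorded time-domain waveform; the basis-function formulation is what makes the window asymmetric-capable.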


A beamforming algorithm is introduced based on a general objective function that approximates the bit error rate for wireless systems with binary phase shift keying and quadrature phase shift keying modulation schemes. The proposed minimum approximate bit error rate (ABER) beamforming approach does not rely on the Gaussian assumption for the channel noise, and is therefore also applicable when the channel noise is non-Gaussian. The simulation results show that the proposed minimum-ABER solution improves on the standard minimum mean square error beamforming solution in terms of a smaller achievable system bit error rate.
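A sketch of the style of objective minimised by minimum-ABER designs: a kernel-smoothed BER estimate for BPSK, where each decision variable contributes a Gaussian tail probability Q(b·y/ρ). The single-user model below is illustrative; the paper optimises this kind of objective over the beamformer weights:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)

# Toy BPSK model: "beamformer output" is the symbol plus noise.
bits = rng.choice([-1.0, 1.0], size=2000)
received = bits + 0.3 * rng.normal(size=2000)

def approximate_ber(outputs, bits, rho=0.1):
    """Kernel-smoothed BER: average of Q(b_i * y_i / rho), with Q the
    Gaussian tail function norm.sf and rho the kernel width."""
    return norm.sf(bits * outputs / rho).mean()

aber = approximate_ber(received, bits)
hard_ber = np.mean(np.sign(received) != bits)   # actual error rate
print(aber, hard_ber)
```

Unlike the hard error count, the smoothed objective is differentiable in the beamformer weights, which is what makes gradient-based minimisation possible; and because it is built from the empirical samples, it does not require the noise to be Gaussian.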


A novel framework referred to as collaterally confirmed labelling (CCL) is proposed, aiming at localising the visual semantics to regions of interest in images with textual keywords. Both the primary image and the collateral textual modalities are exploited in a mutually co-referencing and complementary fashion. The collateral content- and context-based knowledge is used to bias the mapping from the low-level region-based visual primitives to the high-level visual concepts defined in a visual vocabulary. We introduce the notion of collateral context, which is represented as a co-occurrence matrix of the visual keywords. A collaborative mapping scheme is devised using statistical methods such as the Gaussian distribution or Euclidean distance, together with a collateral content- and context-driven inference mechanism. We introduce a novel high-level visual content descriptor that is devised for performing semantic-based image classification and retrieval. The proposed image feature vector model is fundamentally underpinned by the CCL framework. Two different high-level image feature vector models are developed based on the CCL labelling results, for the purposes of image data clustering and retrieval, respectively. A subset of the Corel image collection has been used to evaluate our proposed method. The experimental results to date indicate that the proposed semantic-based visual content descriptors outperform both traditional visual and textual image feature models. (C) 2007 Elsevier B.V. All rights reserved.
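A minimal sketch of the Euclidean-distance variant of the region-to-keyword mapping: each visual keyword is represented by a prototype feature vector, and a region is assigned to the nearest prototype. The vocabulary, prototypes and feature values below are hypothetical placeholders:

```python
import numpy as np

# Hypothetical visual vocabulary with prototype feature vectors.
prototypes = {
    "sky": np.array([0.1, 0.8]),
    "grass": np.array([0.3, 0.2]),
    "water": np.array([0.2, 0.6]),
}

def label_region(region_feature):
    """Map a low-level region feature to the nearest visual keyword
    by Euclidean distance."""
    return min(prototypes,
               key=lambda w: np.linalg.norm(region_feature - prototypes[w]))

label = label_region(np.array([0.15, 0.75]))
print(label)
```

In the full CCL framework this distance-based score is not used alone: it is biased by the collateral keywords of the image and by the co-occurrence (context) matrix before a label is confirmed.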


Cloud radar and lidar can be used to evaluate the skill of numerical weather prediction models in forecasting the timing and placement of clouds, but care must be taken in choosing the appropriate metric of skill to use due to the non-Gaussian nature of cloud-fraction distributions. We compare the properties of a number of different verification measures and conclude that, of the existing measures, the Log of Odds Ratio is the most suitable for cloud fraction. We also propose a new measure, the Symmetric Extreme Dependency Score, which has very attractive properties, being equitable (for large samples), difficult to hedge and independent of the frequency of occurrence of the quantity being verified. We then use data from five European ground-based sites and seven forecast models, processed using the ‘Cloudnet’ analysis system, to investigate the dependence of forecast skill on cloud-fraction threshold (for binary skill scores), height, horizontal scale and (for the Met Office and German Weather Service models) forecast lead time. The models are found to be least skilful at predicting the timing and placement of boundary-layer clouds and most skilful at predicting mid-level clouds, although in the latter case they tend to underestimate mean cloud fraction when cloud is present. It is found that skill decreases approximately inverse-exponentially with forecast lead time, enabling a forecast ‘half-life’ to be estimated. When considering the skill of instantaneous model snapshots, we find typical values ranging between 2.5 and 4.5 days. Copyright © 2009 Royal Meteorological Society
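Two of the quantities above are easy to make concrete: the Log of Odds Ratio computed from a 2×2 contingency table of a binary cloud forecast, and a half-life obtained by fitting the inverse-exponential decay S(t) = S0·exp(−t/τ). The table counts and the synthetic skill curve are illustrative, not the Cloudnet values:

```python
import numpy as np

# 2x2 contingency table for a binary cloud-fraction forecast:
# a = hits, b = false alarms, c = misses, d = correct rejections.
a, b, c, d = 850, 120, 90, 2940

# Log of Odds Ratio, the existing measure found most suitable.
log_odds_ratio = np.log((a * d) / (b * c))

# Inverse-exponential decay of skill with lead time t:
# S(t) = S0 * exp(-t / tau), with half-life tau * ln 2.
lead_days = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
skill = 0.8 * np.exp(-lead_days / 5.0)          # synthetic skill curve
slope = np.polyfit(lead_days, np.log(skill), 1)[0]
half_life = -np.log(2.0) / slope

print(log_odds_ratio, half_life)
```

For the synthetic curve the fit recovers a half-life of 5·ln 2 ≈ 3.47 days, inside the 2.5–4.5 day range the paper reports for real models.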


Rainfall can be modeled as a spatially correlated random field superimposed on a background mean value; therefore, geostatistical methods are appropriate for the analysis of rain gauge data. Nevertheless, there are certain typical features of these data that must be taken into account to produce useful results, including the generally non-Gaussian mixed distribution, the inhomogeneity and low density of observations, and the temporal and spatial variability of spatial correlation patterns. Many studies show that rigorous geostatistical analysis performs better than other available interpolation techniques for rain gauge data. Important elements are the use of climatological variograms and the appropriate treatment of rainy and nonrainy areas. Benefits of geostatistical analysis for rainfall include ease of estimating areal averages, estimation of uncertainties, and the possibility of using secondary information (e.g., topography). Geostatistical analysis also facilitates the generation of ensembles of rainfall fields that are consistent with a given set of observations, allowing for a more realistic exploration of errors and their propagation in downstream models, such as those used for agricultural or hydrological forecasting. This article provides a review of geostatistical methods used for kriging, exemplified where appropriate by daily rain gauge data from Ethiopia.
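A minimal ordinary-kriging sketch for rain-gauge interpolation. The exponential variogram, its sill and range, and the four gauge values are all illustrative assumptions; the real analyses in the article use climatological variograms and handle the rainy/nonrainy mixture:

```python
import numpy as np

def variogram(h, sill=1.0, rng_=50.0):
    """Assumed exponential variogram model."""
    return sill * (1.0 - np.exp(-h / rng_))

gauges = np.array([[0.0, 0.0], [40.0, 0.0], [0.0, 40.0], [40.0, 40.0]])
rain = np.array([2.0, 8.0, 4.0, 6.0])       # mm at each gauge
target = np.array([10.0, 10.0])             # interpolation point

n = len(gauges)
# Ordinary kriging system: augment the variogram matrix with the
# unbiasedness constraint (weights sum to one, Lagrange multiplier mu).
H = np.linalg.norm(gauges[:, None] - gauges[None, :], axis=-1)
A = np.zeros((n + 1, n + 1))
A[:n, :n] = variogram(H)
A[:n, n] = 1.0
A[n, :n] = 1.0
b = np.ones(n + 1)
b[:n] = variogram(np.linalg.norm(gauges - target, axis=-1))

sol = np.linalg.solve(A, b)
weights = sol[:n]
estimate = weights @ rain
print(estimate, weights.sum())
```

The same solved system also yields the kriging variance, which is what gives geostatistical interpolation its built-in uncertainty estimate.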


This paper addresses the statistical mechanics of ideal polymer chains next to a hard wall. The principal quantity of interest, from which all monomer densities can be calculated, is the partition function, G_N(z), for a chain of N discrete monomers with one end fixed a distance z from the wall. It is well accepted that in the limit of infinite N, G_N(z) satisfies the diffusion equation with the Dirichlet boundary condition, G_N(0) = 0, unless the wall possesses a sufficient attraction, in which case the Robin boundary condition, G_N(0) = -x G_N'(0), applies with a positive coefficient, x. Here we investigate the leading N^(-1/2) correction, ΔG_N(z). Prior to the adsorption threshold, ΔG_N(z) is found to involve two distinct parts: a Gaussian correction (for z ≲ aN^(1/2)) with a model-dependent amplitude, A, and a proximal-layer correction (for z ≲ a) described by a model-dependent function, B(z).
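For orientation, the leading-order (N → ∞) statement can be written out explicitly. Taking a as the monomer size and assuming the uniform bulk normalisation G_0(z) = 1 (an assumption for illustration), the Dirichlet problem and its method-of-images solution read:

```latex
\frac{\partial G_N(z)}{\partial N} = \frac{a^2}{6}\,\frac{\partial^2 G_N(z)}{\partial z^2},
\qquad G_N(0) = 0,
\qquad\Longrightarrow\qquad
G_N(z) = \operatorname{erf}\!\left( z \sqrt{\frac{3}{2 N a^2}} \right).
```

The correction studied in the paper sits on top of this error-function profile, with the Gaussian part modifying it on the scale z ~ aN^(1/2) and the proximal-layer part on the scale z ~ a.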


We show that for any sample size, any size of the test, and any weights matrix outside a small class of exceptions, there exists a positive measure set of regression spaces such that the power of the Cliff-Ord test vanishes as the autocorrelation increases in a spatial error model. This result extends to the tests that define the Gaussian power envelope of all invariant tests for residual spatial autocorrelation. In most cases, the regression spaces in which the problem occurs depend on the size of the test, but there also exist regression spaces in which the power vanishes regardless of the size. A characterization of such particularly hostile regression spaces is provided.
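For context, the Cliff-Ord test is built on a Moran-type statistic applied to regression residuals. A sketch with a row-standardised rook-contiguity weights matrix on a small grid (a typical choice of W; the statistic is shown without its standardisation to a z-score):

```python
import numpy as np

rng = np.random.default_rng(6)

# Rook-contiguity weights on a 4x4 grid, row-standardised.
n_side = 4
n = n_side * n_side
W = np.zeros((n, n))
for i in range(n_side):
    for j in range(n_side):
        k = i * n_side + j
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ii, jj = i + di, j + dj
            if 0 <= ii < n_side and 0 <= jj < n_side:
                W[k, ii * n_side + jj] = 1.0
W /= W.sum(axis=1, keepdims=True)

# OLS residuals from a regression of y on an intercept and one regressor.
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = rng.normal(size=n)
e = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]

# Moran-type statistic underlying the Cliff-Ord test for residual
# spatial autocorrelation.
I = (n / W.sum()) * (e @ W @ e) / (e @ e)
print(I)
```

The paper's point is about this statistic's power: for certain regression spaces (the column span of X), the test's ability to detect strong spatial autocorrelation collapses even as the autocorrelation grows.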


Peak picking is a key early step in mass spectrometry (MS) data analysis. We compare three commonly used approaches to peak picking and discuss their merits by means of statistical analysis. The methods investigated encompass a signal-to-noise-ratio criterion, the continuous wavelet transform, and a correlation-based approach using a Gaussian template. The functionality of the three methods is illustrated and discussed in a practical context using a mass spectral data set created with MALDI-TOF technology. Sensitivity and specificity are investigated using a manually defined reference set of peaks. As an additional criterion, the robustness of the three methods is assessed by a perturbation analysis and illustrated using ROC curves.
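A simple version of the first approach, a signal-to-noise-ratio criterion, on a synthetic MALDI-like spectrum: estimate the noise level robustly, then keep local maxima exceeding a multiple of it (the threshold and minimum separation here are illustrative choices):

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(7)

# Synthetic spectrum: three Gaussian peaks over white noise.
mz = np.linspace(0.0, 1.0, 2000)
spectrum = sum(h * np.exp(-0.5 * ((mz - c) / 0.005) ** 2)
               for h, c in [(10.0, 0.2), (6.0, 0.5), (8.0, 0.8)])
spectrum += 0.3 * rng.normal(size=mz.size)

# Robust noise estimate via the median absolute deviation, then keep
# local maxima above 5x the noise level, at least 50 samples apart.
noise = np.median(np.abs(spectrum - np.median(spectrum))) / 0.6745
peaks, _ = find_peaks(spectrum, height=5.0 * noise, distance=50)
print(len(peaks), mz[peaks])
```

The CWT and Gaussian-template methods compared in the paper differ mainly in how they suppress noise before the maxima are taken, which is what the perturbation analysis probes.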


This paper first points out the important fact that the rectangle formula for discretizing continuous convolution, which has been widely used in conventional digital deconvolution algorithms, can result in a zero-time error. An improved digital deconvolution equation is then suggested, which is equivalent to the trapezoid formula for discretizing continuous convolution and overcomes the disadvantage of the conventional equation satisfactorily. Finally, a computer simulation is given, confirming the theoretical result.
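A sketch of the forward (convolution) side of the comparison, assuming the simple example h(t) = e^(−t) with a unit-step input, for which y(t) = 1 − e^(−t) analytically. The rectangle rule weights every sample equally and is wrong at t = 0 (the zero-time error); the trapezoid rule halves the two end-point weights and removes it:

```python
import numpy as np

T = 0.01
t = np.arange(0.0, 2.0, T)
h = np.exp(-t)                      # example impulse response
x = np.ones_like(t)                 # example input (unit step)

def conv_rectangle(h, x, T):
    """Rectangle-rule discretization of y(t) = integral of h(tau) x(t - tau)."""
    return T * np.convolve(h, x)[: len(h)]

def conv_trapezoid(h, x, T):
    """Trapezoid rule: rectangle sum minus half the end-point terms."""
    y = conv_rectangle(h, x, T)
    n = np.arange(len(h))
    return y - 0.5 * T * (h[0] * x[n] + h[n] * x[0])

exact = 1.0 - np.exp(-t)            # analytic y(t) for this h and x
err_rect = np.abs(conv_rectangle(h, x, T) - exact).max()
err_trap = np.abs(conv_trapezoid(h, x, T) - exact).max()
print(err_rect, err_trap)
```

The rectangle result is nonzero at t = 0 (error of order T there) while the trapezoid result is exactly zero at t = 0 and has a much smaller error everywhere; inverting the trapezoid relation sample by sample is what yields the improved deconvolution equation.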