899 results for two-Gaussian mixture model


Relevance:

100.00%

Publisher:

Abstract:

This paper proposes a novel image denoising technique based on the normal inverse Gaussian (NIG) density model and an extended non-negative sparse coding (NNSC) algorithm proposed by us. The algorithm converges to feature basis vectors that are localized and oriented in both the spatial and frequency domains. We demonstrate that the NIG density fits non-negative sparse data very well. In the denoising process, a NIG-based maximum a posteriori (MAP) estimator is applied to an image corrupted by additive Gaussian noise, and the noise is reduced successfully. This shrinkage technique, also referred to as NNSC shrinkage, is self-adaptive to the statistical properties of the image data. The method is evaluated using the normalized signal-to-noise ratio (SNR). Experimental results show that the NNSC shrinkage approach is efficient and effective for denoising. We also compare it with standard sparse coding shrinkage, wavelet-based shrinkage and the Wiener filter; the simulation results show that our method outperforms all three.
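
The paper's NIG-based MAP shrinkage rule has a closed form specific to the NIG density that is not reproduced here. As a minimal sketch of the general idea, the snippet below applies a Laplacian-prior MAP shrinkage (soft-thresholding) to noisy sparse coefficients; like the NNSC shrinkage it pulls small, noise-dominated coefficients toward zero while keeping large ones. All data and parameter values are synthetic.

```python
import numpy as np

def soft_threshold(coeffs, noise_sigma, prior_scale):
    """MAP shrinkage of sparse coefficients under additive Gaussian noise.

    This is the Laplacian-prior (soft-thresholding) rule, used here as a
    simpler stand-in for the paper's NIG-based shrinkage: small, likely
    noise-dominated coefficients are pulled to zero, large ones are kept.
    """
    threshold = noise_sigma ** 2 / prior_scale
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - threshold, 0.0)

# Toy example: a sparse coefficient vector corrupted by additive Gaussian noise.
rng = np.random.default_rng(0)
clean = np.zeros(100)
clean[rng.choice(100, size=5, replace=False)] = rng.normal(0.0, 5.0, 5)
noisy = clean + rng.normal(0.0, 0.5, 100)
denoised = soft_threshold(noisy, noise_sigma=0.5, prior_scale=1.0)

def snr_db(reference, estimate):
    return 10 * np.log10(np.sum(reference ** 2) / np.sum((estimate - reference) ** 2))

print(f"SNR before: {snr_db(clean, noisy):.1f} dB, after: {snr_db(clean, denoised):.1f} dB")
```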

Relevance:

100.00%

Publisher:

Abstract:

Logistic regression and Gaussian mixture model (GMM) classifiers have been trained to estimate the probability of acute myocardial infarction (AMI) in patients from the concentrations of a panel of cardiac markers. The panel consists of two new markers, fatty acid binding protein (FABP) and glycogen phosphorylase BB (GPBB), in addition to the traditional cardiac troponin I (cTnI), creatine kinase MB (CKMB) and myoglobin. The effect of preprocessing the marker concentrations with principal component analysis (PCA) and Fisher discriminant analysis (FDA) was also investigated. The need for classifiers to give an accurate estimate of the probability of AMI is argued, and three categories of performance measure are described: discriminatory ability, sharpness and reliability. Numerical performance measures for each category are given and applied. The optimum classifier, based solely on the samples taken on admission, was the logistic regression classifier with FDA preprocessing, which gave an accuracy of 0.85 (95% confidence interval: 0.78-0.91) and a normalised Brier score of 0.89. When samples taken both on admission and at a further time, 1-6 h later, were included, the performance increased significantly, showing that logistic regression classifiers can indeed use the information from the five cardiac markers to estimate the probability of AMI accurately and reliably. © Springer-Verlag London Limited 2008.
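
A minimal sketch of the classifier combination described above, FDA preprocessing followed by logistic regression with accuracy and Brier score as performance measures, using scikit-learn on synthetic stand-in data (the study used measured concentrations of FABP, GPBB, cTnI, CKMB and myoglobin; the normalisation of the Brier score is not reproduced here):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, brier_score_loss
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for the five marker concentrations.
rng = np.random.default_rng(1)
n = 400
y = rng.integers(0, 2, n)                       # 1 = AMI, 0 = non-AMI (toy labels)
X = rng.lognormal(mean=0.0, sigma=1.0, size=(n, 5)) * (1.0 + 1.5 * y[:, None])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# FDA (linear discriminant) projection followed by logistic regression,
# mirroring the preprocessing + classifier combination described above.
model = make_pipeline(LinearDiscriminantAnalysis(n_components=1), LogisticRegression())
model.fit(X_tr, y_tr)

p_ami = model.predict_proba(X_te)[:, 1]         # estimated probability of AMI
print("accuracy:", accuracy_score(y_te, (p_ami > 0.5).astype(int)))
print("Brier score:", brier_score_loss(y_te, p_ami))
```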

Relevance:

100.00%

Publisher:

Abstract:

In this paper, the fractional Fourier transform (FrFT) is applied to the spectral bands of a two-component mixture containing oxfendazole and oxyclozanide to provide multicomponent quantitative prediction of the related substances. To this end, the moduli of the FrFT spectral bands are processed with the continuous Mexican hat family of wavelets, a combination denoted MEXH-CWT-MOFrFT. Four modulus sets are obtained for the FrFT order parameter a ranging from 0.6 to 0.9 in order to compare their effects on the spectral and quantitative resolution. Four linear regression plots for each substance were obtained by measuring the MEXH-CWT-MOFrFT amplitudes resulting from applying the MEXH family to the modulus of the FrFT. The new combined tool is validated by analysing artificial samples of the related drugs and is applied to the quality control of commercial veterinary samples.
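
A hedged sketch of the MEXH-CWT-MOFrFT pipeline: take the modulus of a (fractional) Fourier transform of a spectral band, apply the continuous Mexican hat wavelet transform, and build a calibration line from the resulting amplitudes. NumPy and SciPy ship no FrFT routine, so the ordinary FFT (the a = 1 special case of the FrFT) stands in for it here, and the spectra, scales and read-out point are illustrative.

```python
import numpy as np
import pywt  # PyWavelets, for the continuous Mexican hat ("mexh") wavelet

def mexh_cwt_of_frft_modulus(spectrum, scales):
    """Mexican hat CWT of the modulus of a (fractional) Fourier transform.

    np.fft.fft is used as the a = 1 special case of the FrFT; a dedicated
    FrFT implementation would be substituted here to sweep a from 0.6 to 0.9.
    """
    modulus = np.abs(np.fft.fft(spectrum))
    coeffs, _ = pywt.cwt(modulus, scales, "mexh")
    return coeffs

# Toy calibration: CWT amplitude at a fixed scale/point vs. analyte concentration.
wavelengths = np.linspace(200, 400, 256)
concentrations = np.array([2.0, 4.0, 6.0, 8.0])               # arbitrary units
amplitudes = []
for c in concentrations:
    band = c * np.exp(-((wavelengths - 300.0) / 20.0) ** 2)   # synthetic spectral band
    coeffs = mexh_cwt_of_frft_modulus(band, scales=np.arange(1, 32))
    amplitudes.append(coeffs[10, 5])                          # amplitude at a chosen scale/point

slope, intercept = np.polyfit(concentrations, amplitudes, 1)
print(f"calibration line: amplitude = {slope:.3f} * c + {intercept:.3f}")
```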

Relevance:

100.00%

Publisher:

Abstract:

We contribute a quantitative and systematic model to capture etch non-uniformity in deep reactive ion etch of microelectromechanical systems (MEMS) devices. Deep reactive ion etch is commonly used in MEMS fabrication where high-aspect ratio features are to be produced in silicon. It is typical for many supposedly identical devices, perhaps of diameter 10 mm, to be etched simultaneously into one silicon wafer of diameter 150 mm. Etch non-uniformity depends on uneven distributions of ion and neutral species at the wafer level, and on local consumption of those species at the device, or die, level. An ion–neutral synergism model is constructed from data obtained from etching several layouts of differing pattern opening densities. Such a model is used to predict wafer-level variation with an r.m.s. error below 3%. This model is combined with a die-level model, which we have reported previously, on a MEMS layout. The two-level model is shown to enable prediction of both within-die and wafer-scale etch rate variation for arbitrary wafer loadings.
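
The abstract does not give the model equations. A common way to write an ion–neutral synergism rate law, used here purely as an illustrative sketch with made-up constants and flux profiles (not the paper's fitted model), is the "series-limited" form in which the etch rate saturates when either species is scarce:

```python
import numpy as np

def etch_rate(ion_flux, neutral_flux, k_ion=1.0, k_neutral=1.0):
    """Ion-neutral synergism etch rate in a 'series-limited' form.

    A common way to express the synergism is
        1/R = 1/(k_ion * J_ion) + 1/(k_neutral * J_neutral),
    so the rate is limited by whichever species is scarce. Constants and
    flux profiles here are illustrative, not fitted to the paper's data.
    """
    return 1.0 / (1.0 / (k_ion * ion_flux) + 1.0 / (k_neutral * neutral_flux))

# Wafer-level sketch: neutral flux depleted by pattern loading while the ion
# flux stays roughly uniform, giving a radial etch-rate non-uniformity.
r = np.linspace(0, 75, 151)                  # radius on a 150 mm wafer, in mm
ion_flux = np.ones_like(r)
neutral_flux = 0.6 + 0.4 * (r / 75.0) ** 2   # illustrative depletion profile
rate = etch_rate(ion_flux, neutral_flux)
nonuniformity = 100 * (rate.max() - rate.min()) / rate.mean()
print(f"predicted wafer-level etch non-uniformity: {nonuniformity:.1f}%")
```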

Relevance:

100.00%

Publisher:

Abstract:

Abstract taken from the publication

Relevance:

100.00%

Publisher:

Abstract:

Population size estimation with discrete or nonparametric mixture models is considered, and reliable ways of constructing the nonparametric mixture model estimator are reviewed and put into perspective. The maximum likelihood estimator of the mixing distribution is constructed with the EM algorithm for any number of components, up to the global nonparametric maximum likelihood bound. In addition, the estimators of Chao and Zelterman are considered, together with some generalisations of Zelterman's estimator. All computations are done with CAMCR, software developed specifically for population size estimation with mixture models. Several examples and data sets are discussed and the estimators illustrated. Problems with the mixture model-based estimators are highlighted.
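
The Chao and Zelterman estimators mentioned above have simple closed forms based on the frequencies f1 and f2 of units observed exactly once and exactly twice. The sketch below implements both on synthetic zero-truncated counts; CAMCR itself and the nonparametric EM fit are not reproduced.

```python
import numpy as np
from collections import Counter

def chao_estimate(counts):
    """Chao's lower-bound estimator: N = n + f1^2 / (2 * f2)."""
    f = Counter(counts)
    n = len(counts)
    return n + f[1] ** 2 / (2 * f[2])

def zelterman_estimate(counts):
    """Zelterman's estimator: lambda = 2 * f2 / f1, N = n / (1 - exp(-lambda))."""
    f = Counter(counts)
    n = len(counts)
    lam = 2 * f[2] / f[1]
    return n / (1 - np.exp(-lam))

# Zero-truncated counts: how often each *observed* unit was identified.
counts = [1] * 60 + [2] * 25 + [3] * 10 + [4] * 5
print("observed units:", len(counts))
print("Chao estimate:", round(chao_estimate(counts)))
print("Zelterman estimate:", round(zelterman_estimate(counts)))
```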

Relevance:

100.00%

Publisher:

Abstract:

Estimation of a population size by means of capture-recapture techniques is an important problem occurring in many areas of the life and social sciences. We consider the frequencies-of-frequencies situation, where a count variable summarises how often a unit has been identified in the target population of interest. The distribution of this count variable is zero-truncated, since zero identifications do not occur in the sample. As an application we consider the surveillance of scrapie in Great Britain; in this case study, holdings with scrapie that are not identified (zero counts) do not enter the surveillance database. The count variable of interest is the number of scrapie cases per holding. A common model for count distributions is the Poisson distribution and, to adjust for potential heterogeneity, a discrete mixture of Poisson distributions is used. Mixtures of Poissons usually provide an excellent fit, as will be demonstrated in the application of interest. However, as has recently been demonstrated, mixtures also suffer from the so-called boundary problem, resulting in overestimation of the population size. It is suggested here to select the mixture model on the basis of the Bayesian information criterion. This strategy is further refined by employing a bagging procedure that leads to a series of estimates of the population size; using the median of this series, highly influential size estimates are avoided. Limited simulation studies show that the procedure leads to estimates with remarkably small bias.
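
A simplified sketch of the estimation and bagging steps, assuming a one-component (plain zero-truncated Poisson) model in place of the BIC-selected Poisson mixture: fit the model, turn it into a Horvitz-Thompson-style population-size estimate, then refit on bootstrap resamples and take the median. The counts are synthetic, not the scrapie surveillance data.

```python
import numpy as np

def fit_zt_poisson(counts, iters=200):
    """MLE of lambda for a zero-truncated Poisson via fixed-point iteration.

    The MLE solves lambda / (1 - exp(-lambda)) = mean(counts); iterating
    lambda <- mean(counts) * (1 - exp(-lambda)) converges to that root
    whenever the observed mean exceeds 1.
    """
    m = np.mean(counts)
    lam = m
    for _ in range(iters):
        lam = m * (1.0 - np.exp(-lam))
    return lam

def population_estimate(counts):
    """Horvitz-Thompson style estimate N = n / (1 - p0) under the fitted model."""
    lam = fit_zt_poisson(counts)
    p0 = np.exp(-lam)
    return len(counts) / (1.0 - p0)

def bagged_estimate(counts, n_boot=500, seed=0):
    """Bagging step described above: refit on bootstrap resamples, take the median."""
    rng = np.random.default_rng(seed)
    counts = np.asarray(counts)
    estimates = [population_estimate(rng.choice(counts, size=len(counts), replace=True))
                 for _ in range(n_boot)]
    return np.median(estimates)

# Zero-truncated counts, e.g. scrapie cases per identified holding (synthetic).
counts = [1] * 80 + [2] * 12 + [3] * 5 + [4] * 3
print("plug-in estimate :", round(population_estimate(counts)))
print("bagged median    :", round(bagged_estimate(counts)))
```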

Relevance:

100.00%

Publisher:

Abstract:

It is known that the empirical orthogonal function method is unable to detect possible nonlinear structure in climate data. Here, isometric feature mapping (Isomap), a tool for nonlinear dimensionality reduction, is applied to 1958–2001 ERA-40 sea-level pressure anomalies to study the nonlinearity of the Asian summer monsoon intraseasonal variability. Using the leading two Isomap time series, the probability density function is shown to be bimodal. A bivariate Gaussian mixture model is then applied to identify the monsoon phases, the resulting regimes representing enhanced and suppressed phases, respectively. The relationship with the large-scale seasonal mean monsoon indicates that the frequency of monsoon regime occurrence is significantly perturbed in agreement with conceptual ideas, with a preference for enhanced convection on intraseasonal time scales during large-scale strong monsoons. Trend analysis suggests a shift in the concentration of monsoon convection, with less emphasis on South Asia and more on the East China Sea.
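
A minimal sketch of the analysis chain with scikit-learn: an Isomap embedding of a high-dimensional field into two components, followed by a Gaussian mixture fitted in that plane. The two mixture components (one per regime) and the synthetic data standing in for the ERA-40 sea-level pressure anomalies are assumptions of this sketch.

```python
import numpy as np
from sklearn.manifold import Isomap
from sklearn.mixture import GaussianMixture

# Synthetic stand-in for gridded sea-level pressure anomalies: two "regimes"
# embedded in a higher-dimensional space (the study uses ERA-40 fields).
rng = np.random.default_rng(2)
regime = rng.integers(0, 2, 600)
latent = rng.normal(0, 1, (600, 2)) + 4.0 * regime[:, None]      # bimodal 2-D structure
X = latent @ rng.normal(0, 1, (2, 20)) + 0.1 * rng.normal(0, 1, (600, 20))

# Nonlinear dimensionality reduction to the two leading Isomap components.
embedding = Isomap(n_components=2).fit_transform(X)

# Bivariate Gaussian mixture with two components to identify the two phases.
gmm = GaussianMixture(n_components=2, random_state=0).fit(embedding)
labels = gmm.predict(embedding)
print("component means:\n", gmm.means_)
print("occupancy of each regime:", np.bincount(labels) / len(labels))
```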