44 results for gaussian mixture model
Abstract:
This paper investigates sub-integer implementations of the adaptive Gaussian mixture model (GMM) for background/foreground segmentation, allowing the method to be deployed on low-cost, low-power processors that lack a Floating Point Unit (FPU). We propose two novel integer computer arithmetic techniques to update the Gaussian parameters. Specifically, the mean value and the variance of each Gaussian are updated by a redefined and generalised "round" operation that emulates the original updating rules for a large set of learning rates. Weights are represented by counters that are updated following stochastic rules to allow a wider range of learning rates, and the weight trend is approximated by a line or a staircase. We demonstrate that the memory footprint and computational cost of the GMM are significantly reduced without significantly affecting the performance of background/foreground segmentation.
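To make the integer update idea concrete, here is a minimal sketch (an illustration under assumed conventions, not the authors' exact rule): the floating-point update mu <- mu + alpha*(x - mu) is emulated with integer-only arithmetic, using a power-of-two learning rate and a stochastic generalisation of rounding so the expected step matches the original rule.

```python
import random

def update_mean_int(mu, x, shift):
    """Integer emulation of mu <- mu + alpha * (x - mu) with alpha = 2**-shift.

    The fraction lost by the shift is compensated by probabilistic rounding,
    so the expected update matches the floating-point rule. Illustrative only;
    the paper's exact generalised "round" operation is not reproduced here.
    """
    diff = x - mu                                   # signed integer difference
    step = diff >> shift if diff >= 0 else -((-diff) >> shift)
    remainder = abs(diff) - (abs(step) << shift)    # discarded fraction, in LSBs
    if random.random() < remainder / (1 << shift):  # stochastic round-up
        step += 1 if diff >= 0 else -1
    return mu + step

# toy usage: track a pixel value of 200 starting from 100 with alpha = 1/16
mu = 100
for _ in range(50):
    mu = update_mean_int(mu, 200, shift=4)
print(mu)  # drifts towards 200 using integer-only arithmetic
```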
Abstract:
This paper proposes an optimisation of the adaptive Gaussian mixture background model that allows the deployment of the method on processors with low memory capacity. The effect of the granularity of the Gaussian mean value and variance in an integer-based implementation is investigated, and novel updating rules for the mixture weights are described. Based on the proposed framework, an implementation for a very low-power micro-controller is presented. Results show that the proposed method operates in real time on the micro-controller and has similar performance to the original model. © 2012 Springer-Verlag.
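For context, a rough memory-footprint calculation shows why the granularity of the stored parameters matters on a low-memory micro-controller. The bit widths below are purely illustrative assumptions, not figures from the paper.

```python
def gmm_footprint_bytes(width, height, k, bits_mean, bits_var, bits_weight):
    """Approximate storage for a per-pixel mixture of k Gaussians.

    Illustrative only: the bit widths are assumptions, not the paper's values.
    """
    bits_per_pixel = k * (bits_mean + bits_var + bits_weight)
    return width * height * bits_per_pixel // 8

# QVGA frame, 3 Gaussians per pixel
print(gmm_footprint_bytes(320, 240, 3, bits_mean=8, bits_var=8, bits_weight=4))    # compact integer model
print(gmm_footprint_bytes(320, 240, 3, bits_mean=32, bits_var=32, bits_weight=32)) # 32-bit float baseline
```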
Abstract:
This paper proposes a novel image denoising technique based on the normal inverse Gaussian (NIG) density model and an extended non-negative sparse coding (NNSC) algorithm that we proposed previously. The algorithm converges to feature basis vectors that exhibit locality and orientation in both the spatial and frequency domains. We demonstrate that the NIG density provides a very good fit to the non-negative sparse data. In the denoising process, the noise is reduced by applying a NIG-based maximum a posteriori (MAP) estimator to an image corrupted by additive Gaussian noise. This shrinkage technique, also referred to as the NNSC shrinkage technique, is self-adaptive to the statistical properties of the image data. The denoising method is evaluated using the normalized signal-to-noise ratio (SNR). Experimental results show that the NNSC shrinkage approach is efficient and effective in denoising. We also compare the NNSC shrinkage method with standard sparse coding shrinkage, wavelet-based shrinkage, and the Wiener filter; the simulation results show that our method outperforms all three.
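The shrinkage pipeline can be sketched as follows. The soft-shrinkage nonlinearity below is a generic stand-in for the NIG-based MAP shrinkage function, which is not reproduced here, and the basis is assumed to have been learned beforehand (e.g. by NNSC).

```python
import numpy as np

def shrink(coeffs, threshold):
    """Generic soft-shrinkage stand-in for the NIG-based MAP shrinkage rule."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - threshold, 0.0)

def denoise(noisy_patches, basis, threshold):
    """Project noisy patches onto a learned basis (rows are atoms), shrink the
    coefficients to suppress noise, and reconstruct the patches."""
    coeffs = noisy_patches @ np.linalg.pinv(basis)   # least-squares analysis step
    coeffs = shrink(coeffs, threshold)               # coefficient shrinkage
    return coeffs @ basis                            # synthesis step

def snr_db(clean, estimate):
    """Signal-to-noise ratio of the reconstruction, in dB."""
    noise = clean - estimate
    return 10 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))
```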
Abstract:
Logistic regression and Gaussian mixture model (GMM) classifiers have been trained to estimate the probability of acute myocardial infarction (AMI) in patients based upon the concentrations of a panel of cardiac markers. The panel consists of two new markers, fatty acid binding protein (FABP) and glycogen phosphorylase BB (GPBB), in addition to the traditional cardiac troponin I (cTnI), creatine kinase MB (CKMB) and myoglobin. The effect of using principal component analysis (PCA) and Fisher discriminant analysis (FDA) to preprocess the marker concentrations was also investigated. The need for classifiers to give an accurate estimate of the probability of AMI is argued and three categories of performance measure are described, namely discriminatory ability, sharpness, and reliability. Numerical performance measures for each category are given and applied. The optimum classifier, based solely upon the samples taken on admission, was the logistic regression classifier using FDA preprocessing. This gave an accuracy of 0.85 (95% confidence interval: 0.78-0.91) and a normalised Brier score of 0.89. When samples at both admission and a further time, 1-6 h later, were included, the performance increased significantly, showing that logistic regression classifiers can indeed use the information from the five cardiac markers to accurately and reliably estimate the probability of AMI. © Springer-Verlag London Limited 2008.
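A minimal sketch of this kind of pipeline with scikit-learn: Fisher discriminant analysis (LDA) as preprocessing, logistic regression to output a probability of AMI, and a Brier score to assess reliability. The random feature matrix is only a placeholder for the real marker concentrations.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, brier_score_loss
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# X: marker concentrations (cTnI, CKMB, myoglobin, FABP, GPBB); y: AMI label.
# Random data stands in for the real admission samples.
rng = np.random.default_rng(0)
X = rng.lognormal(size=(400, 5))
y = rng.integers(0, 2, size=400)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# FDA (LDA) preprocessing followed by logistic regression, as in the abstract
clf = make_pipeline(LinearDiscriminantAnalysis(n_components=1), LogisticRegression())
clf.fit(X_tr, y_tr)

p = clf.predict_proba(X_te)[:, 1]                 # estimated probability of AMI
print("accuracy:", accuracy_score(y_te, p > 0.5))
print("Brier score:", brier_score_loss(y_te, p))  # reliability of the probabilities
```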
Abstract:
This paper investigated using lip movements as a behavioural biometric for person authentication. The system was trained, evaluated and tested using the XM2VTS dataset, following the Lausanne Protocol configuration II. Features were selected from the DCT coefficients of the greyscale lip image. This paper investigated the number of DCT coefficients selected, the selection process, and static and dynamic feature combinations. Using a Gaussian Mixture Model-Universal Background Model framework, an Equal Error Rate of 2.20% was achieved during evaluation; on an unseen test set, a False Acceptance Rate of 1.7% and a False Rejection Rate of 3.0% were achieved. This compares favourably with face authentication results on the same dataset whilst not being susceptible to spoofing attacks.
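The verification step of a GMM-UBM system can be sketched as follows with scikit-learn. This is a simplified illustration: the client model is trained directly on enrolment data, whereas a full GMM-UBM system would MAP-adapt it from the UBM, and the random arrays stand in for real DCT lip features.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# rows are frames, columns are DCT coefficients (random stand-in data)
rng = np.random.default_rng(1)
background = rng.normal(size=(5000, 20))            # many speakers -> UBM
client     = rng.normal(0.3, 1.0, size=(500, 20))   # enrolment data for one client
trial      = rng.normal(0.3, 1.0, size=(200, 20))   # test access attempt

ubm = GaussianMixture(n_components=64, covariance_type="diag", random_state=0).fit(background)

# Simplification: a real GMM-UBM system would MAP-adapt this model from the UBM.
client_gmm = GaussianMixture(n_components=32, covariance_type="diag", random_state=0).fit(client)

# average log-likelihood ratio; accept if above a threshold tuned at the EER
score = client_gmm.score(trial) - ubm.score(trial)
print("LLR score:", score)
```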
Abstract:
We address the problem of non-linearity in 2D shape modelling of a particular articulated object: the human body. This issue is partially resolved by applying a different Point Distribution Model (PDM) depending on the viewpoint. The remaining non-linearity is handled by using Gaussian Mixture Models (GMMs). A dynamic-based clustering is proposed and carried out in the Pose Eigenspace. A fundamental question when clustering is to determine the optimal number of clusters; from our point of view, the main aspect to be evaluated is the mean Gaussianity. This partitioning is then used to fit a GMM to each of the view-based PDMs, derived from a database of Silhouettes and Skeletons. Dynamic correspondences are then obtained between the Gaussian models of the four mixtures. Finally, we compare this approach with two other methods we previously developed to cope with non-linearity: a Nearest Neighbor (NN) classifier and Independent Component Analysis (ICA).
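As an illustration of fitting a GMM in a pose eigenspace and comparing candidate numbers of clusters, the sketch below uses the BIC as a stand-in selection criterion, since the mean-Gaussianity measure is not detailed in the abstract; the random shape vectors are placeholders for real silhouette/skeleton data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
shapes = rng.normal(size=(1000, 2 * 30))   # e.g. 30 (x, y) landmarks per silhouette

# project the shape vectors into a low-dimensional pose eigenspace
pose_eigenspace = PCA(n_components=10).fit_transform(shapes)

# compare candidate cluster counts; BIC stands in for the mean-Gaussianity criterion
for k in range(1, 8):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(pose_eigenspace)
    print(k, gmm.bic(pose_eigenspace))
```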
Abstract:
Automatic gender classification has many security and commercial applications. Various modalities have been investigated for gender classification, with face-based classification being the most popular. In some real-world scenarios the face may be partially occluded. In these circumstances, a classification based on individual parts of the face, known as local features, must be adopted. We investigate gender classification using lip movements. We show for the first time that important gender-specific information can be obtained from the way in which a person moves their lips during speech. Furthermore, our study indicates that the lip dynamics during speech provide greater gender-discriminative information than lip appearance alone. We also show that the lip dynamics and appearance contain complementary gender information, such that a model which captures both traits gives the highest overall classification result. We use Discrete Cosine Transform-based features and Gaussian Mixture Modelling to model lip appearance and dynamics, and employ the XM2VTS database for our experiments. Our experiments show that a model which captures lip dynamics along with appearance can improve gender classification rates by 16-21% compared to models of lip appearance only.
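One way to combine appearance and dynamics is sketched below: 2-D DCT coefficients of each lip frame model appearance, their frame-to-frame deltas model dynamics, and a GMM per gender scores the sequence. The feature configuration and the random "videos" are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np
from scipy.fft import dct
from sklearn.mixture import GaussianMixture

def lip_features(frames, n_coeffs=30):
    """2-D DCT appearance features plus first-order deltas (dynamics).
    `frames` is a (n_frames, h, w) array of greyscale lip images."""
    coeffs = dct(dct(frames, axis=1, norm="ortho"), axis=2, norm="ortho")
    appearance = coeffs.reshape(len(frames), -1)[:, :n_coeffs]   # low-order coefficients
    deltas = np.vstack([np.zeros((1, n_coeffs)), np.diff(appearance, axis=0)])
    return np.hstack([appearance, deltas])

def classify(frames, gmm_male, gmm_female):
    """Label a lip sequence by the higher average log-likelihood."""
    feats = lip_features(frames)
    return "male" if gmm_male.score(feats) > gmm_female.score(feats) else "female"

# toy training: random sequences stand in for real XM2VTS lip videos
rng = np.random.default_rng(3)
gmm_male = GaussianMixture(8, covariance_type="diag", random_state=0).fit(
    lip_features(rng.normal(size=(200, 32, 32))))
gmm_female = GaussianMixture(8, covariance_type="diag", random_state=0).fit(
    lip_features(rng.normal(0.2, 1.0, size=(200, 32, 32))))
print(classify(rng.normal(size=(50, 32, 32)), gmm_male, gmm_female))
```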
Abstract:
Due to the variability of wind power, it is imperative to forecast wind generation accurately and in a timely manner to enhance the flexibility and reliability of real-time power system operation and control. Special events such as ramps and spikes are hard to predict with traditional methods that use only recently measured data. In this paper, a new Gaussian Process model with hybrid training data, taken from both recent local measurements and a historical dataset, is proposed and applied to make short-term predictions from 10 minutes to one hour ahead. A key idea is that historical data with similar patterns are properly selected and embedded in the Gaussian Process model to make predictions. The results of the proposed algorithms are compared to those of the standard Gaussian Process model and the persistence model. It is shown that the proposed method reduces not only magnitude error but also phase error.
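The hybrid-training idea can be sketched as follows: the GP training set pools the most recent window pairs with historical windows whose pattern is closest to the current one. Euclidean distance is used here as an assumed similarity measure, and the toy series stands in for real wind-power data; the paper's exact selection rule is not reproduced.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def hybrid_training_set(recent, history, window=6, n_similar=20):
    """Pool the latest (window, next value) pair with the n_similar historical
    windows most similar to the current pattern (Euclidean distance)."""
    pattern = recent[-window:]
    X, y = [recent[-window - 1:-1]], [recent[-1]]
    candidates = []
    for t in range(window, len(history)):
        seg = history[t - window:t]
        candidates.append((np.linalg.norm(seg - pattern), seg, history[t]))
    for _, seg, target in sorted(candidates, key=lambda c: c[0])[:n_similar]:
        X.append(seg)
        y.append(target)
    return np.array(X), np.array(y)

rng = np.random.default_rng(4)
series = np.abs(np.sin(np.arange(2000) / 30) + 0.1 * rng.normal(size=2000))  # toy wind power
recent, history = series[-50:], series[:-50]

X, y = hybrid_training_set(recent, history)
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True).fit(X, y)
pred, std = gp.predict(recent[-6:].reshape(1, -1), return_std=True)
print("next-step forecast:", pred[0], "+/-", std[0])
```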
Abstract:
Generative algorithms for random graphs have yielded insights into the structure and evolution of real-world networks. Most networks exhibit a well-known set of properties, such as heavy-tailed degree distributions, clustering and community formation. Usually, random graph models consider only structural information, but many real-world networks also have labelled vertices and weighted edges. In this paper, we present a generative model for random graphs with discrete vertex labels and numeric edge weights. The weights are represented as a set of Beta Mixture Models (BMMs) with an arbitrary number of mixtures, which are learned from real-world networks. We propose a Bayesian Variational Inference (VI) approach, which yields an accurate estimation while keeping computation times tractable. We compare our approach to state-of-the-art random labelled graph generators and an earlier approach based on Gaussian Mixture Models (GMMs). Our results allow us to draw conclusions about the contribution of vertex labels and edge weights to graph structure.
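To give a feel for the generative idea, the sketch below draws discrete vertex labels from a categorical distribution and samples each edge weight from a two-component Beta mixture selected by the endpoint label pair. The mixture parameters and label probabilities are illustrative stand-ins for values that would be learned from a real network; this is not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p_edge, labels = 50, 0.1, ["A", "B", "C"]
vertex_label = rng.choice(labels, size=n, p=[0.5, 0.3, 0.2])

def sample_weight(la, lb):
    """Draw an edge weight in (0, 1) from a two-component Beta mixture whose
    parameters depend on whether the endpoint labels match (illustrative values)."""
    mix = [0.7, 0.3] if la == lb else [0.4, 0.6]
    params = [(2, 8), (8, 2)]                 # (alpha, beta) of each component
    comp = rng.choice(2, p=mix)
    return rng.beta(*params[comp])

edges = [(i, j, sample_weight(vertex_label[i], vertex_label[j]))
         for i in range(n) for j in range(i + 1, n) if rng.random() < p_edge]
print(len(edges), "edges; first few:", edges[:3])
```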
Abstract:
Due to the variability and stochastic nature of wind power systems, accurate wind power forecasting plays an important role in developing reliable and economic power system operation and control strategies. As wind variability is stochastic, Gaussian Process regression has recently been introduced to capture the randomness of wind energy. However, the disadvantages of Gaussian Process regression include its computational complexity and its inability to adapt to time-varying time-series systems. A variant Gaussian Process for time series forecasting is introduced in this study to address these issues. The new method is shown to reduce computational complexity and increase prediction accuracy. It is further proved that the forecasting result converges as the number of available data points approaches infinity. In addition, a teaching-learning-based optimization (TLBO) method is used to train the model and to accelerate the learning rate. The proposed modelling and optimization method is applied to forecast both the wind power generation of Ireland and that of a single wind farm, demonstrating the effectiveness of the proposed method.
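For reference, a minimal sketch of the teaching-learning-based optimisation loop, applied here to a generic objective; hooking it up to the Gaussian Process training objective is only indicated in the docstring, not reproduced.

```python
import numpy as np

def tlbo(objective, bounds, pop_size=20, iterations=100, seed=0):
    """Minimal teaching-learning-based optimisation (TLBO).

    `objective` maps a parameter vector to a scalar cost (e.g. the negative
    log marginal likelihood of a Gaussian Process); `bounds` is a list of
    (low, high) pairs, one per parameter. Illustrative implementation.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
    cost = np.array([objective(x) for x in pop])

    for _ in range(iterations):
        # teacher phase: move every learner towards the best solution
        teacher = pop[np.argmin(cost)]
        tf = rng.integers(1, 3)                      # teaching factor in {1, 2}
        mean = pop.mean(axis=0)
        for i in range(pop_size):
            cand = np.clip(pop[i] + rng.random(len(bounds)) * (teacher - tf * mean), lo, hi)
            c = objective(cand)
            if c < cost[i]:
                pop[i], cost[i] = cand, c
        # learner phase: pairwise interaction between random learners
        for i in range(pop_size):
            j = rng.integers(pop_size)
            direction = pop[i] - pop[j] if cost[i] < cost[j] else pop[j] - pop[i]
            cand = np.clip(pop[i] + rng.random(len(bounds)) * direction, lo, hi)
            c = objective(cand)
            if c < cost[i]:
                pop[i], cost[i] = cand, c
    return pop[np.argmin(cost)], cost.min()

# toy usage: minimise a simple quadratic in place of the GP training objective
best, best_cost = tlbo(lambda x: np.sum((x - 1.5) ** 2), bounds=[(-5, 5)] * 3)
print(best, best_cost)
```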
Abstract:
The features of two popular models used to describe the observed response characteristics of typical oxygen optical sensors based on luminescence quenching are examined critically. The models are the 'two-site' and 'Gaussian distribution in natural lifetime, tau0' models. These models are used to characterise the response features of typical optical oxygen sensors, features which include downward-curving Stern-Volmer plots and increasingly non-first-order luminescence decay kinetics with increasing partial pressure of oxygen, pO2. Neither model appears able to unite these features, let alone the observed disparate array of response features exhibited by the myriad optical oxygen sensors reported in the literature, and still maintain any level of physical plausibility. A model based on a Gaussian distribution in quenching rate constant, kq, is developed and, although flawed by a limited breadth in distribution, rho, does produce Stern-Volmer plots which would cover the range in curvature seen with real optical oxygen sensors. A new 'log-Gaussian distribution in tau0 or kq' model is introduced which has the advantage over a Gaussian distribution model of placing no limitation on the value of rho. Work on the 'log-Gaussian distribution in tau0' model reveals that the Stern-Volmer quenching plots would show little curvature, even at large rho values, and the luminescence decays would become increasingly first order with increasing pO2. In fact, with real optical oxygen sensors, the opposite is observed and thus the model appears of little value. In contrast, a 'log-Gaussian distribution in kq' model does produce the trends observed with real optical oxygen sensors, although it is technically restricted to sensors in which the kinetics of luminescence decay are good first order in the absence of oxygen. The latter model gives a good fit to the major response features of sensors which show this behaviour, most notably the [Ru(dpp)3]2+(Ph4B-)2-in-cellulose optical oxygen sensors. The scope of a log-Gaussian model for further expansion and, therefore, application to optical oxygen sensors, by combining a log-Gaussian distribution in kq with one in tau0, is briefly discussed.
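The behaviour attributed to a log-Gaussian distribution in kq can be illustrated numerically. The sketch assumes that each sub-population quenches independently, so the observed intensity is the distribution-weighted sum of individual Stern-Volmer terms; all parameter values are arbitrary, not fitted sensor data.

```python
import numpy as np

def stern_volmer_ratio(pO2, Ksv_median=0.05, rho=1.5, n=5000, seed=0):
    """I0/I for a log-Gaussian distribution of quenching constants Ksv = kq * tau0.

    Each sub-population obeys its own Stern-Volmer law, and the observed
    intensity is the average over the distribution (illustrative values only).
    """
    rng = np.random.default_rng(seed)
    Ksv = Ksv_median * np.exp(rho * rng.normal(size=n))   # log-Gaussian spread of width rho
    I_rel = np.mean(1.0 / (1.0 + Ksv * pO2))              # fractional intensity I/I0
    return 1.0 / I_rel

for pO2 in [0, 20, 50, 100, 160]:                         # pO2 in Torr (illustrative)
    print(pO2, round(stern_volmer_ratio(pO2), 2))
# a broad distribution (large rho) yields the downward-curved Stern-Volmer plot
# described above, whereas rho -> 0 recovers a straight line
```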
Abstract:
The spatial coherence of a nanosecond pulsed germanium collisionally excited x-ray laser is measured experimentally for three target configurations. The diagnostic is based on Young's slit interference fringes with a dispersing element to resolve the 23.2- and 23.6-nm spectral lines. Target configurations include a double-slab target, known as the injector, and geometries in which the injector image is image-relayed to seed either an additional single-slab target or a second double-slab target. A special feature of this study is the observation of the change in the apparent source size with angle of refraction across the diverging laser beam. Source sizes derived with a Gaussian source model decrease from 44 μm for the injector target by a variable factor of as much as 2, depending on the target configuration, for beams leaving the additional amplifiers after strong refraction in the plasma. © 1998 Optical Society of America [S0740-3224(98)00810-8].
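For orientation, the sketch below shows one way a source size can be related to Young's-slit fringe visibility, assuming a Gaussian source intensity profile and the far-field van Cittert-Zernike relation. The numerical inputs are illustrative, not values reported in the paper.

```python
import numpy as np

def gaussian_source_size(visibility, wavelength, slit_separation, distance):
    """RMS source size from fringe visibility, assuming a Gaussian source and
    V = exp(-2 * (pi * sigma * d / (lambda * z))**2) (van Cittert-Zernike).
    Illustrative analysis, not the paper's exact procedure."""
    return (wavelength * distance / (np.pi * slit_separation)) * np.sqrt(-np.log(visibility) / 2)

# example numbers (illustrative): 23.6 nm line, 100 um slit separation, 1 m to the slits
sigma = gaussian_source_size(visibility=0.5, wavelength=23.6e-9,
                             slit_separation=100e-6, distance=1.0)
print(f"rms source size ~ {sigma * 1e6:.1f} micrometres")
```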
Abstract:
Thermal management using microchannel heat sinks as a means of improving performance in miniaturized electronic devices has recently become of interest to researchers and industry. One of the current challenges is to design heat sinks with uniform flow distribution. A number of experimental studies have been conducted to seek appropriate designs for microchannel heat sinks; however, pursuing this goal experimentally can be an expensive endeavor. The present work investigates the effect of cross-links on adiabatic two-phase flow in an array of parallel channels. It is carried out using the three-dimensional mixture model in the computational fluid dynamics software FLUENT 6.3. A straight-channel model and two cross-linked channel models were simulated. The cross-links were located at 1/3 and 2/3 of the channel length, and their widths were one and two times the channel width. All test models had 45 parallel rectangular channels with a hydraulic diameter of 1.59 mm. The results showed that the trend of flow distribution agrees with experimental results. A new design incorporating cross-links was proposed, and the results showed a significant improvement of up to 55% in flow distribution compared with the standard straight-channel configuration, without a pressure-drop penalty. The effects of cross-links on flow distribution, flow structure, and pressure drop are also discussed.
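A simple way to quantify how uniformly flow is distributed across the 45 channels when comparing the straight and cross-linked designs is the relative standard deviation of the per-channel flow rates. This metric and the toy numbers below are illustrative assumptions, not necessarily what the study used.

```python
import numpy as np

def flow_maldistribution(channel_flows):
    """Relative standard deviation of per-channel flow rates;
    lower values mean a more uniform distribution (illustrative metric)."""
    q = np.asarray(channel_flows, dtype=float)
    return np.std(q) / np.mean(q)

rng = np.random.default_rng(6)
straight     = rng.normal(1.0, 0.30, size=45)   # toy per-channel flows, straight channels
cross_linked = rng.normal(1.0, 0.12, size=45)   # toy flows with cross-links added

improvement = 1 - flow_maldistribution(cross_linked) / flow_maldistribution(straight)
print(f"uniformity improvement: {improvement:.0%}")
```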