17 results for mixture model
Abstract:
This paper investigates sub-integer implementations of the adaptive Gaussian mixture model (GMM) for background/foreground segmentation, allowing the method to be deployed on low-cost/low-power processors that lack a Floating Point Unit (FPU). We propose two novel integer arithmetic techniques for updating the Gaussian parameters. Specifically, the mean value and the variance of each Gaussian are updated by a redefined and generalised "round" operation that emulates the original updating rules for a large set of learning rates. Weights are represented by counters updated according to stochastic rules, which allows a wider range of learning rates; the weight trend is approximated by a line or a staircase. We demonstrate that the memory footprint and computational cost of the GMM are significantly reduced, without significantly affecting the performance of background/foreground segmentation.
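A minimal sketch of the kind of rounding-based integer update described above: the power-of-two learning rate, the stochastic carry, and all names are our assumptions for illustration, not the paper's exact scheme.

```python
import random

def int_round_update(mu_int, x_int, inv_alpha=64):
    """One integer-only update of a Gaussian mean, emulating
    mu <- mu + alpha * (x - mu) with alpha = 1 / inv_alpha.

    A generalised "round": the integer quotient is rounded up with
    probability proportional to the remainder, so the expected update
    matches the floating-point rule while using only integer ops.
    """
    delta = x_int - mu_int
    step, rem = divmod(abs(delta), inv_alpha)
    if random.randrange(inv_alpha) < rem:  # stochastic carry of the remainder
        step += 1
    return mu_int + step if delta >= 0 else mu_int - step
```

In expectation the returned mean moves by alpha * (x - mu), so the update tracks the original rule even for learning rates too small to represent as a fixed integer step.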
Abstract:
Logistic regression and Gaussian mixture model (GMM) classifiers have been trained to estimate the probability of acute myocardial infarction (AMI) in patients based upon the concentrations of a panel of cardiac markers. The panel consists of two new markers, fatty acid binding protein (FABP) and glycogen phosphorylase BB (GPBB), in addition to the traditional cardiac troponin I (cTnI), creatine kinase MB (CKMB) and myoglobin. The effect of using principal component analysis (PCA) and Fisher discriminant analysis (FDA) to preprocess the marker concentrations was also investigated. The need for classifiers to give an accurate estimate of the probability of AMI is argued, and three categories of performance measure are described, namely discriminatory ability, sharpness, and reliability. Numerical performance measures for each category are given and applied. The optimum classifier, based solely upon the samples taken on admission, was the logistic regression classifier with FDA preprocessing. This gave an accuracy of 0.85 (95% confidence interval: 0.78-0.91) and a normalised Brier score of 0.89. When samples at both admission and a further time, 1-6 h later, were included, the performance increased significantly, showing that logistic regression classifiers can indeed use the information from the five cardiac markers to accurately and reliably estimate the probability of AMI. © Springer-Verlag London Limited 2008.
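The winning combination above, Fisher discriminant preprocessing followed by logistic regression, is easy to prototype; a sketch with scikit-learn, where the data arrays are placeholders and the pipeline is our reading of the setup, not the authors' code:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression

# Rows: patients; columns: marker concentrations
# (FABP, GPBB, cTnI, CKMB, myoglobin). Placeholder data.
X = np.random.rand(200, 5)
y = np.random.randint(0, 2, 200)  # 1 = AMI, 0 = no AMI

# Fisher discriminant projection, then logistic regression, so the
# model outputs a probability of AMI rather than just a class label.
clf = make_pipeline(LinearDiscriminantAnalysis(n_components=1),
                    LogisticRegression())
clf.fit(X, y)
p_ami = clf.predict_proba(X)[:, 1]  # estimated probability of AMI
```

A probability output is what allows the sharpness and reliability measures discussed in the abstract to be computed, e.g. via the Brier score.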
Abstract:
Thermal management using microchannel heat sinks, as a way of improving the performance of miniaturized electronic devices, has recently attracted interest from researchers and industry. One of the current challenges is to design heat sinks with uniform flow distribution, and a number of experimental studies have sought appropriate microchannel heat sink designs. However, pursuing this goal experimentally can be expensive. The present work investigates the effect of cross-links on adiabatic two-phase flow in an array of parallel channels, using the three-dimensional mixture model in the computational fluid dynamics software FLUENT 6.3. A straight channel and two cross-linked channel models were simulated. The cross-links were located at 1/3 and 2/3 of the channel length, with widths of one and two times the channel width. All test models had 45 parallel rectangular channels with a hydraulic diameter of 1.59 mm. The predicted trend of flow distribution agrees with experimental results. A new design incorporating cross-links was proposed, and the results showed a significant improvement of up to 55% in flow distribution compared with the standard straight channel configuration, without a pressure drop penalty. The effects of cross-links on flow distribution, flow structure, and pressure drop are also discussed.
Abstract:
This paper proposes a discrete mixture model which assigns individuals, up to a probability, to either a class of random utility (RU) maximizers or a class of random regret (RR) minimizers, on the basis of their sequence of observed choices. Our proposed model advances the state of the art of RU-RR mixture models by (i) adding and simultaneously estimating a membership model which predicts the probability of belonging to the RU or RR class; (ii) adding a layer of random taste heterogeneity within each behavioural class; and (iii) deriving a welfare measure associated with the RU-RR mixture model that is consistent with referendum voting, the appropriate provision mechanism for such local public goods. The context of our empirical application is a stated choice experiment concerning traffic calming schemes. We find that the random parameter RU-RR mixture model not only outperforms its fixed coefficient counterpart in terms of fit, as expected, but also in terms of the plausibility of the membership determinants of behavioural class. In line with psychological theories of regret, we find that, compared with respondents who are familiar with the choice context (i.e. the traffic calming scheme), unfamiliar respondents are more likely to be regret minimizers than utility maximizers. © 2014 Elsevier Ltd.
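The latent-class structure of points (i) and (ii) can be written compactly; the notation below is ours, not the paper's:

```latex
% Individual n's choice-sequence likelihood: a membership model
% \pi(z_n) (e.g. a logit in covariates z_n such as familiarity)
% mixes the RU and RR likelihoods, while class-specific tastes
% \beta_n and \gamma_n are random, giving within-class heterogeneity.
P(y_n \mid z_n) \;=\; \pi(z_n)\, \mathbb{E}_{\beta_n}\!\left[ P_{\mathrm{RU}}(y_n \mid \beta_n) \right]
\;+\; \bigl(1 - \pi(z_n)\bigr)\, \mathbb{E}_{\gamma_n}\!\left[ P_{\mathrm{RR}}(y_n \mid \gamma_n) \right]
```

Estimating \pi jointly with the class-specific parameters is what allows covariates such as familiarity with the scheme to explain class membership.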
Abstract:
BACKGROUND: Methylation-induced silencing of promoter CpG islands in tumor suppressor genes plays an important role in human carcinogenesis. In colorectal cancer, the CpG island methylator phenotype (CIMP) is defined as widespread and elevated levels of DNA methylation and CIMP+ tumors have distinctive clinicopathological and molecular features. In contrast, the existence of a comparable CIMP subtype in gastric cancer (GC) has not been clearly established. To further investigate this issue, in the present study we performed comprehensive DNA methylation profiling of a well-characterised series of primary GC.
METHODS: The methylation status of 1,421 autosomal CpG sites located within 768 cancer-related genes was investigated using the Illumina GoldenGate Methylation Panel I assay on DNA extracted from 60 gastric tumors and matched tumor-adjacent gastric tissue pairs. Methylation data were analysed using a recursively partitioned mixture model (a generic mixture-clustering sketch follows this abstract) and investigated for associations with clinicopathological and molecular features including age, Helicobacter pylori status, tumor site, patient survival, microsatellite instability, and BRAF and KRAS mutations.
RESULTS: A total of 147 genes were differentially methylated between tumor and matched tumor-adjacent gastric tissue, with HOXA5 and hedgehog signalling being the top-ranked gene and signalling pathway, respectively. Unsupervised clustering of the methylation data revealed six subgroups under two main clusters, referred to as L (low methylation; 28% of cases) and H (high methylation; 72%). Female patients were over-represented in the H tumor group compared with the L group (36% vs 6%; P = 0.024); however, no other significant differences in clinicopathological or molecular features were apparent. CpG sites that were hypermethylated in group H were more frequently located in CpG islands and marked for polycomb occupancy.
CONCLUSIONS: High-throughput methylation analysis implicates genes involved in embryonic development and hedgehog signalling in gastric tumorigenesis. GC comprises two major methylation subtypes, with the highly methylated group showing some features consistent with a CpG island methylator phenotype.
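The recursively partitioned mixture model (RPMM) used in the methods is a specialised beta-mixture procedure; as a loose illustration of the underlying idea only (model-based clustering of methylation values), here is a sketch using a plain Gaussian mixture on logit-transformed beta-values. The dimensions match the study, but the data and the two-component choice are placeholders, not the RPMM algorithm itself.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Methylation beta-values in (0, 1): 60 tumors x 1,421 CpG sites,
# as in the study; random placeholders stand in for real data.
beta = np.clip(np.random.rand(60, 1421), 1e-3, 1 - 1e-3)
m_values = np.log2(beta / (1 - beta))  # logit-like M-value transform

# Two-component mixture, loosely mirroring the L (low) and H (high)
# methylation clusters reported in the results.
gmm = GaussianMixture(n_components=2, covariance_type="diag",
                      random_state=0).fit(m_values)
labels = gmm.predict(m_values)  # cluster assignment per tumor
```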
Abstract:
This paper investigated the use of lip movements as a behavioural biometric for person authentication. The system was trained, evaluated and tested on the XM2VTS dataset, following Configuration II of the Lausanne Protocol. Features were selected from the DCT coefficients of the greyscale lip image, and the paper investigated the number of DCT coefficients selected, the selection process, and static and dynamic feature combinations. Using a Gaussian Mixture Model - Universal Background Model framework, an Equal Error Rate of 2.20% was achieved during evaluation, and on an unseen test set a False Acceptance Rate of 1.7% and a False Rejection Rate of 3.0% were achieved. This compares favourably with face authentication results on the same dataset whilst not being susceptible to spoofing attacks.
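In a GMM-UBM framework, a background model is trained on pooled data from many subjects and then adapted to each client; below is a sketch of the mean-only MAP adaptation step commonly used in such systems, with placeholder data and an illustrative relevance factor. This is the standard recipe, not necessarily the paper's exact configuration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Placeholder lip-feature vectors (e.g. selected DCT coefficients).
background = np.random.rand(5000, 20)  # pooled data from many subjects
client = np.random.rand(300, 20)       # enrolment data for one client

# Universal Background Model.
ubm = GaussianMixture(n_components=64, covariance_type="diag",
                      random_state=0).fit(background)

# MAP adaptation of the component means towards the client data.
r = 16.0                                 # relevance factor (illustrative)
resp = ubm.predict_proba(client)         # responsibilities per frame
n_k = resp.sum(axis=0)                   # soft counts per component
x_k = resp.T @ client / np.maximum(n_k, 1e-9)[:, None]
w = (n_k / (n_k + r))[:, None]
client_means = w * x_k + (1 - w) * ubm.means_

# Verification then scores a test clip by the log-likelihood ratio
# between the client-adapted model and the UBM.
```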
Abstract:
An RVE-based stochastic numerical model is used to calculate the permeability of randomly generated porous media at different values of the fiber volume fraction, for the case of transverse flow in a unidirectional ply. Analysis of the numerical results shows that the permeability is not normally distributed. To offer a new perspective on this topic, the permeability data are fitted using both a mixture model and a unimodal distribution. Our findings suggest that the permeability is fitted well by a mixture model based on the lognormal and power-law distributions. In the case of a unimodal distribution, maximum-likelihood estimation (MLE) shows that the generalized extreme value (GEV) distribution provides the best fit. Finally, an expression for the permeability as a function of the fiber volume fraction, based on the GEV distribution, is discussed in light of these results.
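Fitting and comparing such candidate distributions is straightforward with SciPy; a sketch in which the permeability sample is a synthetic placeholder and the log-likelihood comparison stands in for the paper's fuller model-selection analysis:

```python
import numpy as np
from scipy import stats

# Placeholder permeability sample; real values would come from the
# RVE-based stochastic simulations.
k = np.random.lognormal(mean=-20.0, sigma=0.4, size=500)

# Maximum-likelihood fit of the generalized extreme value distribution.
shape, loc, scale = stats.genextreme.fit(k)

# Compare unimodal candidates by log-likelihood (or AIC).
ll_gev = stats.genextreme.logpdf(k, shape, loc=loc, scale=scale).sum()
ll_lognorm = stats.lognorm.logpdf(k, *stats.lognorm.fit(k)).sum()
print(ll_gev, ll_lognorm)  # larger log-likelihood = better unimodal fit
```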
Abstract:
This paper summarises our studies on robust speech recognition based on a new statistical approach, the probabilistic union model. We consider speech recognition when part of the acoustic features may be corrupted by noise. The union model bases the recognition on the clean part of the features, thereby reducing the effect of the noise on recognition. In this respect, the union model is similar to the missing-feature method; however, the two methods reach this end by different routes. The missing-feature method usually requires the identity of the noisy data for noise removal, while the union model combines the local features based on the union of random events, reducing the model's dependence on information about the noise. We previously investigated applications of the union model to speech recognition involving unknown partial corruption in frequency bands, in time duration, and in feature streams. Additionally, a combination of the union model with conventional noise-reduction techniques was studied as a means of dealing with a mixture of known or trainable noise and unknown, unexpected noise. This paper provides a unified review of each of these applications, in the context of dealing with unknown partial feature corruption, giving the appropriate theory and implementation algorithms, along with an experimental evaluation.
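In its simplest (order-1) form, as we understand the method, the union model replaces the usual product of stream likelihoods with a sum of products in which each term leaves one stream out, so a single corrupted stream can spoil at most one summand; the notation is ours:

```latex
% N feature streams x_1, ..., x_N scored against model \lambda.
% Conventional (all-clean) scoring: P \propto \prod_{j=1}^{N} p(x_j \mid \lambda).
% Order-1 probabilistic union:
P(x_1, \dots, x_N \mid \lambda) \;\propto\; \sum_{i=1}^{N} \; \prod_{j \neq i} p(x_j \mid \lambda)
```

Higher union orders drop more streams per term, trading robustness against more simultaneous corruption for reduced discrimination when all streams are clean.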
Abstract:
This study explores the use of artificial neural networks (ANN) to predict the rheological and mechanical properties of underwater concrete (UWC) mixtures and to evaluate the sensitivity of such properties to variations in mixture ingredients. Artificial neural networks mimic the structure and operation of biological neurons and have a unique capacity for self-learning, mapping, and functional approximation. Details of the development of the proposed neural network model, its architecture, training, and validation are presented. A database of 175 UWC mixtures from nine different studies was compiled to train and test the ANN model. The data are arranged in a patterned format: each pattern contains an input vector of the mixture variables influencing the behavior of UWC mixtures (that is, cement, silica fume, fly ash, slag, water, coarse and fine aggregates, and chemical admixtures) and a corresponding output vector containing the rheological or mechanical property to be modeled. Results show that the ANN model is not only capable of accurately predicting the slump, slump-flow, washout resistance, and compressive strength of the underwater concrete mixtures used in the training process, but can also predict these properties for new mixtures designed within the practical range of the input parameters, with absolute errors of 4.6, 10.6, 10.6, and 4.4%, respectively.
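The input/output pattern described above maps directly onto a standard feed-forward regressor; a sketch with scikit-learn in which the data, the single hidden layer, and the target property are placeholders rather than the paper's architecture:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor

# Input vector per mixture: cement, silica fume, fly ash, slag, water,
# coarse aggregate, fine aggregate, chemical admixtures (placeholders).
X = np.random.rand(175, 8)
y = np.random.rand(175)  # one target property, e.g. slump

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,),
                                   max_iter=5000, random_state=0))
model.fit(X, y)
slump_pred = model.predict(X[:5])  # predictions for new mixtures
```

In practice one such network would be trained per property (slump, slump-flow, washout resistance, compressive strength), or a multi-output network used.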
Abstract:
This paper investigates the performance of the tests proposed by Hadri and by Hadri and Larsson for testing for stationarity in heterogeneous panel data under model misspecification. The panel tests are based on the well-known KPSS test (cf. Kwiatkowski et al.), which considers two models: stationarity around a deterministic level and stationarity around a deterministic trend. As far as we know, there is no study of the statistical properties of the test when the wrong model is used. We also consider the simultaneous presence of both types of model in a panel. We employ two asymptotic frameworks: joint asymptotics, where T, N -> infinity simultaneously, and fixed-T asymptotics, where N is allowed to grow indefinitely. We use Monte Carlo experiments to investigate the effects of misspecification for the sample sizes usually used in practice. The results indicate that tests derived under fixed-T asymptotics suffer smaller size distortions than those derived under joint asymptotics, particularly for panels with relatively small T and large N (micro-panels). We also find that choosing a deterministic trend when a deterministic level is true does not significantly affect the properties of the test, but choosing a deterministic level when a deterministic trend is true leads to extreme over-rejection. Therefore, when unsure which model generated the data, we suggest using the model with a trend. We also propose a new statistic for testing for stationarity in mixed panel data where the mixture is known. The performance of this new test is very good in both the T-asymptotic and fixed-T cases. The statistic for T asymptotics is slightly undersized when T is very small (
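A toy Monte Carlo in the spirit of these experiments, using the single-series KPSS test from statsmodels to show the over-rejection that occurs when a level model is applied to trend-stationary data; the sample sizes and trend slope are illustrative:

```python
import numpy as np
from statsmodels.tsa.stattools import kpss

rng = np.random.default_rng(0)
T, reps = 100, 200
reject_level = reject_trend = 0
for _ in range(reps):
    # Series that is stationary around a deterministic trend.
    y = 0.5 * np.arange(T) + rng.standard_normal(T)
    reject_level += kpss(y, regression="c", nlags="auto")[1] < 0.05
    reject_trend += kpss(y, regression="ct", nlags="auto")[1] < 0.05

# Expect heavy over-rejection under the misspecified level model ("c")
# and roughly nominal size under the correct trend model ("ct").
print(reject_level / reps, reject_trend / reps)
```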
Abstract:
Silicone elastomer systems have previously been shown to offer potential for the sustained release of protein therapeutics. However, the large amounts of release-enhancing solid excipients generally required to achieve therapeutically effective release rates from these otherwise hydrophobic polymer systems can detrimentally affect the viscosity of the precure silicone elastomer mixture and its curing characteristics. The increase in viscosity necessitates higher operating pressures in manufacture, resulting in higher shear stresses that are often detrimental to the structural integrity of the incorporated protein. The addition of liquid silicones increases the initial tan delta value and the tan delta values in the early stages of curing by increasing the liquid character (G'') of the silicone elastomer system and reducing its elastic character (G'), thereby reducing the shear stress placed on the formulation during manufacture and minimizing the potential for protein degradation. However, SEM analysis demonstrated that if the liquid character of the silicone elastomer is too high, the formulation will be unable to fill the mold during manufacture. This study demonstrates that incorporating liquid hydroxy-terminated polydimethylsiloxanes into addition-cure silicone elastomer-covered rod formulations can both effectively lower the viscosity of the precured silicone elastomer and enhance the release rate of the model therapeutic protein, bovine serum albumin. (C) 2011 Wiley Periodicals, Inc. J Appl Polym Sci, 2011
Abstract:
The ammonia oxidation reaction on a supported polycrystalline platinum catalyst was investigated in an aluminum-based microreactor. An extensive set of reactions was included in the chemical reactor modeling to facilitate the construction of a kinetic model capable of satisfactory predictions over a wide range of conditions (NH3 partial pressure, 0.01-0.12 atm; O2 partial pressure, 0.10-0.88 atm; temperature, 523-673 K; contact time, 0.3-0.7 ms). The elementary surface reactions used in developing the mechanism were chosen based on literature data concerning ammonia oxidation on a Pt catalyst. Parameter estimates for the kinetic model were obtained by multi-response least-squares regression under the isothermal plug-flow reactor approximation. To evaluate the model, the behavior of a microstructured reactor was simulated by means of a complete Navier-Stokes model accounting for the reactions on the catalyst surface and the effect of temperature on the physico-chemical properties of the reacting mixture. In this way, the effects of catalytic wall temperature non-uniformity and of a boundary layer on the ammonia conversion and selectivity were examined. After further optimization of the appropriate kinetic parameters, the calculated selectivities and product yields agree very well with the values measured in the microreactor. (C) 2002 Elsevier Science B.V. All rights reserved.
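The plug-flow approximation used for the parameter estimation reduces to integrating species balances along the residence time; a toy single-reaction version follows (the real mechanism has many elementary surface steps, and the rate constant here is hypothetical):

```python
import numpy as np
from scipy.integrate import solve_ivp

k_app = 5000.0  # 1/s, hypothetical apparent first-order rate constant

def dcdtau(tau, c):
    """Isothermal plug-flow balance: d[NH3]/dtau = -r, with a toy
    first-order rate standing in for the fitted surface mechanism."""
    return [-k_app * c[0]]

c0 = [0.05]        # inlet NH3 concentration (arbitrary units)
tau_end = 0.5e-3   # contact time ~0.5 ms, within the abstract's range
sol = solve_ivp(dcdtau, (0.0, tau_end), c0, rtol=1e-8)
conversion = 1.0 - sol.y[0, -1] / c0[0]
print(f"NH3 conversion: {conversion:.2%}")
```

The multi-response fit in the paper would repeat such integrations for each response (conversion, selectivities) while adjusting the kinetic parameters to minimise the residuals.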
Abstract:
This paper proposes an optimisation of the adaptive Gaussian mixture background model that allows the method to be deployed on processors with low memory capacity. The effect of the granularity of the Gaussian mean value and variance in an integer-based implementation is investigated, and novel updating rules for the mixture weights are described. Based on the proposed framework, an implementation for a very low power consumption micro-controller is presented. Results show that the proposed method operates in real time on the micro-controller and has performance similar to that of the original model. © 2012 Springer-Verlag.
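A companion idea to the integer mean/variance updates (see the first abstract above) is to hold each mixture weight in a small counter and update it stochastically, which realises the line/staircase approximation of the weight trend with integer state only; a sketch in which the counter width, drift probability, and all names are illustrative assumptions:

```python
import random

def update_weight_counters(counters, matched, inv_alpha=64, cap=255):
    """Counter-based stand-in for w_k <- (1 - alpha) * w_k + alpha * M_k.

    With probability 1/inv_alpha the matched component's 8-bit counter
    is incremented and the others are decremented, so weights drift up
    for components that keep matching and down otherwise, tracing a
    line-like (staircase) approximation of the weight trend.
    """
    if random.randrange(inv_alpha) == 0:
        for k in range(len(counters)):
            if k == matched:
                counters[k] = min(counters[k] + 1, cap)
            elif counters[k] > 0:
                counters[k] -= 1
    return counters
```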
Abstract:
One of the most widely used techniques in computer vision for foreground detection is to model each background pixel as a Mixture of Gaussians (MoG). While this is effective for a static camera with a fixed or slowly varying background, it fails to handle fast, dynamic movement in the background. In this paper, we propose a generalised framework, called region-based MoG (RMoG), that takes neighbouring pixels into consideration when generating the model of the observed scene. The model equations are derived from Expectation-Maximisation theory for batch mode, and stochastic approximation is used for online updates. We evaluate our region-based approach on ten sequences containing dynamic backgrounds and show that it provides a performance improvement over the traditional single-pixel MoG. For equal feature and region sizes, the effect of increasing the learning rate is to reduce both true and false positives. Comparison with four state-of-the-art approaches shows that RMoG outperforms the others in reducing false positives whilst still maintaining reasonable foreground definition. Lastly, using the ChangeDetection (CDNet 2014) benchmark, we evaluated RMoG against numerous surveillance scenes and found it to be amongst the leading performers for dynamic background scenes, whilst providing comparable performance for other commonly occurring surveillance scenes.
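For reference, the per-pixel baseline that RMoG generalises is the classic online MoG update; a compact sketch of that baseline in its standard textbook form, with a simplified update rate (RMoG's neighbouring-pixel terms are deliberately omitted):

```python
import numpy as np

def mog_pixel_update(x, means, variances, weights, alpha=0.01, tdev=2.5):
    """One online update of a single pixel's Mixture of Gaussians.

    x: new grey-level observation; the arrays hold one entry per
    component. alpha is the learning rate; tdev is the match
    threshold in standard deviations.
    """
    d = np.abs(x - means)
    k = int(np.argmin(d))
    if d[k] > tdev * np.sqrt(variances[k]):
        # No component matches: replace the weakest with a new one.
        w = int(np.argmin(weights))
        means[w], variances[w], weights[w] = x, 30.0 ** 2, alpha
    else:
        diff = x - means[k]
        means[k] += alpha * diff                  # simplified rho = alpha
        variances[k] += alpha * (diff ** 2 - variances[k])
        match = np.zeros_like(weights)
        match[k] = 1.0
        weights[:] = (1 - alpha) * weights + alpha * match
    weights /= weights.sum()
    return means, variances, weights
```

A region-based variant replaces the single observation x with contributions from a neighbourhood of pixels, which is what lets RMoG absorb dynamic background motion that defeats the per-pixel model.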