44 results for gaussian mixture model
Abstract:
In this paper, we propose a multi-camera application capable of processing high-resolution images and extracting features based on color patterns on graphics processing units (GPUs). The goal is to work in real time under the uncontrolled environment of a sport event such as a football match. Since football players exhibit diverse and complex color patterns, a Gaussian Mixture Model (GMM) is applied as the segmentation paradigm in order to analyse live sport images and video. Optimization techniques have also been applied to the C++ implementation using profiling tools focused on high performance. Time-consuming tasks were implemented on NVIDIA's CUDA platform, and later restructured and enhanced, speeding up the whole process significantly. Our resulting code is around 4-11 times faster on a low-cost GPU than a highly optimized C++ version on a central processing unit (CPU) over the same data. Real-time performance has been obtained, processing up to 64 frames per second. An important conclusion derived from our study is that the application scales with the number of cores on the GPU. © 2011 Springer-Verlag.
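A minimal sketch of the GMM-based colour segmentation idea, not the authors' CUDA implementation: pixel colours are fitted with a Gaussian mixture and each pixel is labelled by its most likely component. The number of components and the random stand-in frame are assumptions for illustration only.

```python
# Minimal sketch of GMM-based colour segmentation (CPU, scikit-learn);
# the paper's actual pipeline is a custom, optimized CUDA implementation.
import numpy as np
from sklearn.mixture import GaussianMixture

def segment_by_colour(image, n_components=5):
    """Assign each pixel to its most likely colour cluster.

    image: H x W x 3 array of RGB values.
    n_components: assumed number of colour patterns (hypothetical choice).
    """
    h, w, _ = image.shape
    pixels = image.reshape(-1, 3).astype(np.float64)
    gmm = GaussianMixture(n_components=n_components, covariance_type="full",
                          random_state=0).fit(pixels)
    labels = gmm.predict(pixels)          # per-pixel component index
    return labels.reshape(h, w)

# Random data standing in for a video frame
frame = np.random.randint(0, 256, size=(120, 160, 3), dtype=np.uint8)
mask = segment_by_colour(frame)
print(mask.shape, mask.max())
```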
Abstract:
This paper proposes a discrete mixture model which assigns individuals, up to a probability, to either a class of random utility (RU) maximizers or a class of random regret (RR) minimizers, on the basis of their sequence of observed choices. Our proposed model advances the state of the art of RU-RR mixture models by (i) adding and simultaneously estimating a membership model which predicts the probability of belonging to the RU or RR class; (ii) adding a layer of random taste heterogeneity within each behavioural class; and (iii) deriving a welfare measure associated with the RU-RR mixture model and consistent with referendum voting, which is the appropriate provision mechanism for such local public goods. The context of our empirical application is a stated choice experiment concerning traffic calming schemes. We find that the random parameter RU-RR mixture model not only outperforms its fixed-coefficient counterpart in terms of fit, as expected, but also in terms of the plausibility of the membership determinants of behavioural class. In line with psychological theories of regret, we find that, compared to respondents who are familiar with the choice context (i.e. the traffic calming scheme), unfamiliar respondents are more likely to be regret minimizers than utility maximizers. © 2014 Elsevier Ltd.
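As a hedged sketch of the two-class structure described above (the notation is illustrative, not the authors'), the probability of individual n's observed choice sequence under the mixture can be written as

$$ P(y_n) = \pi_n \, P(y_n \mid \mathrm{RU}) + (1 - \pi_n)\, P(y_n \mid \mathrm{RR}), $$

where $\pi_n$ is the class-membership probability delivered by the membership model (e.g. a logit in respondent characteristics), and each class-conditional probability integrates the choice probabilities over the random taste parameters within that class.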
Abstract:
BACKGROUND: Methylation-induced silencing of promoter CpG islands in tumor suppressor genes plays an important role in human carcinogenesis. In colorectal cancer, the CpG island methylator phenotype (CIMP) is defined as widespread and elevated levels of DNA methylation, and CIMP+ tumors have distinctive clinicopathological and molecular features. In contrast, the existence of a comparable CIMP subtype in gastric cancer (GC) has not been clearly established. To further investigate this issue, in the present study we performed comprehensive DNA methylation profiling of a well-characterised series of primary GC.
METHODS: The methylation status of 1,421 autosomal CpG sites located within 768 cancer-related genes was investigated using the Illumina GoldenGate Methylation Panel I assay on DNA extracted from 60 gastric tumors and matched tumor-adjacent gastric tissue pairs. Methylation data were analysed using a recursively partitioned mixture model and investigated for associations with clinicopathological and molecular features, including age, Helicobacter pylori status, tumor site, patient survival, microsatellite instability and BRAF and KRAS mutations.
RESULTS: A total of 147 genes were differentially methylated between tumor and matched tumor-adjacent gastric tissue, with HOXA5 and hedgehog signalling being the top-ranked gene and signalling pathway, respectively. Unsupervised clustering of the methylation data revealed the existence of 6 subgroups under two main clusters, referred to as L (low methylation; 28% of cases) and H (high methylation; 72%). Female patients were over-represented in the H tumor group compared to the L group (36% vs 6%; P = 0.024); however, no other significant differences in clinicopathological or molecular features were apparent. CpG sites that were hypermethylated in group H were more frequently located in CpG islands and marked for polycomb occupancy.
CONCLUSIONS: High-throughput methylation analysis implicates genes involved in embryonic development and hedgehog signalling in gastric tumorigenesis. GC comprises two major methylation subtypes, with the highly methylated group showing some features consistent with a CpG island methylator phenotype.
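A simplified analogue of the unsupervised clustering step described in the METHODS: here a two-component Gaussian mixture is fitted to per-sample methylation profiles, whereas the study itself used a recursively partitioned mixture model. The simulated beta values are placeholders, not the GoldenGate data.

```python
# Simplified analogue of clustering tumors by methylation profile; the study
# used a recursively partitioned mixture model, not this Gaussian mixture.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
beta_values = rng.beta(0.5, 0.5, size=(60, 1421))   # 60 tumors x 1,421 CpG sites (simulated)

gmm = GaussianMixture(n_components=2, covariance_type="diag", random_state=0)
groups = gmm.fit_predict(beta_values)                # two clusters, analogous to "L" and "H"
print(np.bincount(groups))                           # cluster sizes
```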
Abstract:
An RVE-based stochastic numerical model is used to calculate the permeability of randomly generated porous media at different values of the fiber volume fraction for the case of transverse flow in a unidirectional ply. Analysis of the numerical results shows that the permeability is not normally distributed. With the aim of offering a new understanding of this particular topic, the permeability data are fitted using both a mixture model and a unimodal distribution. Our findings suggest that permeability can be fitted well using a mixture model based on the lognormal and power-law distributions. In the case of a unimodal distribution, it is found, using maximum-likelihood estimation (MLE), that the generalized extreme value (GEV) distribution provides the best fit. Finally, an expression for the permeability as a function of the fiber volume fraction based on the GEV distribution is discussed in light of the previous results.
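A hedged sketch of the unimodal-fit step described above: a generalized extreme value (GEV) distribution is fitted by maximum likelihood and compared to a lognormal fit. The synthetic samples are placeholders for the RVE permeability results.

```python
# MLE fit of a GEV distribution to permeability samples, compared with a
# lognormal fit; the data below are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
permeability = rng.lognormal(mean=-20.0, sigma=0.3, size=500)   # placeholder values

shape, loc, scale = stats.genextreme.fit(permeability)           # MLE fit of the GEV
print(f"GEV shape={shape:.3f}, loc={loc:.3e}, scale={scale:.3e}")

# Compare candidate unimodal fits by log-likelihood (higher is better)
ll_gev = np.sum(stats.genextreme.logpdf(permeability, shape, loc, scale))
ll_lognorm = np.sum(stats.lognorm.logpdf(permeability, *stats.lognorm.fit(permeability)))
print(f"log-likelihood GEV={ll_gev:.1f}, lognormal={ll_lognorm:.1f}")
```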
Abstract:
This paper provides a summary of our studies on robust speech recognition based on a new statistical approach, the probabilistic union model. We consider speech recognition in which part of the acoustic features may be corrupted by noise. The union model is a method for basing the recognition on the clean part of the features, thereby reducing the effect of the noise on recognition. In this respect, the union model is similar to the missing feature method; however, the two methods achieve this end through different routes. The missing feature method usually requires the identity of the noisy data for noise removal, whereas the union model combines the local features based on the union of random events, thereby reducing the model's dependence on information about the noise. We previously investigated applications of the union model to speech recognition involving unknown partial corruption in frequency bands, in time duration, and in feature streams. Additionally, a combination of the union model with conventional noise-reduction techniques was studied as a means of dealing with a mixture of known or trainable noise and unknown, unexpected noise. In this paper, a unified review of each of these applications is provided, in the context of dealing with unknown partial feature corruption, giving the appropriate theory and implementation algorithms, along with an experimental evaluation.
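A hedged illustration of the union-of-events idea underlying the union model; the full model covers N feature streams and a chosen order, and the two-stream, independence case shown here is only the simplest instance:

$$ P(o_1 \cup o_2) = P(o_1) + P(o_2) - P(o_1)\,P(o_2), $$

so the combined score remains dominated by whichever stream is clean, which is how recognition can be based on the uncorrupted part of the features without identifying the noisy part explicitly.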
Abstract:
This paper exposes the strengths and weaknesses of the recently proposed velocity-based local model (LM) network. The global dynamics of the velocity-based blended representation are directly related to the dynamics of the underlying local models, an important property in the design of local controller networks. Furthermore, the sub-models are continuous-time and linear, providing continuity with established linear theory and methods. This is not true for the conventional LM framework, where the global dynamics are only weakly related to the affine sub-models. In this paper, a velocity-based multiple model network is identified for a highly nonlinear dynamical system. The results show excellent dynamical modelling performance, highlighting the value of the velocity-based approach for the design and analysis of LM-based control. Three important practical issues are also addressed, relating to the blending of the velocity-based local models, the use of normalised Gaussian basis functions and the requirement for an input derivative.
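A minimal sketch of how local models can be blended with normalised Gaussian basis functions, one of the practical issues mentioned above. The centres, width and local parameters are illustrative placeholders, not the network identified in the paper.

```python
# Blending local (here scalar, linear) models with normalised Gaussian basis
# functions; all numerical values are assumed for illustration.
import numpy as np

centres = np.array([-1.0, 0.0, 1.0])        # operating-point centres (assumed)
width = 0.5                                  # common basis-function width (assumed)
local_gains = np.array([2.0, 0.5, -1.0])     # per-model parameters (assumed)

def normalised_weights(x):
    """Gaussian validity functions, normalised so they sum to one."""
    g = np.exp(-0.5 * ((x - centres) / width) ** 2)
    return g / g.sum()

def blended_output(x, u):
    """Blend the local model outputs according to the normalised weights."""
    w = normalised_weights(x)
    return float(w @ (local_gains * u))

print(blended_output(0.3, u=1.0))
```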
Abstract:
The extension of the bootstrap filter to the multiple model target tracking problem is considered. Bayesian bootstrap filtering is a very powerful technique since it represents probability densities by sets of random samples and is therefore not restricted to linear, Gaussian systems, making it ideal for the multiple model problem, where very complex densities can be generated.
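A hedged sketch of a basic bootstrap (particle) filter on a scalar nonlinear, non-Gaussian state-space model. The dynamics, measurement model and measurements are assumed for illustration and are not the multiple-model tracking formulation discussed in the abstract.

```python
# Bootstrap particle filter: propagate particles, weight by the measurement
# likelihood, then resample; the model below is purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
N = 1000                                                    # number of particles

def propagate(x):
    return 0.9 * x + rng.normal(0.0, 1.0, size=x.shape)     # assumed dynamics

def likelihood(y, x):
    return np.exp(-0.5 * (y - x**2 / 20.0) ** 2)            # assumed measurement model

particles = rng.normal(0.0, 1.0, size=N)                    # initial samples
for y in [0.1, 0.4, 0.2]:                                   # placeholder measurements
    particles = propagate(particles)                         # predict
    weights = likelihood(y, particles)
    weights /= weights.sum()                                 # normalise weights
    idx = rng.choice(N, size=N, p=weights)                   # resample (bootstrap step)
    particles = particles[idx]
    print(f"state estimate: {particles.mean():.3f}")
```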
Abstract:
This study explores using artificial neural networks to predict the rheological and mechanical properties of underwater concrete (UWC) mixtures and to evaluate the sensitivity of such properties to variations in mixture ingredients. Artificial neural networks (ANN) mimic the structure and operation of biological neurons and have the unique ability of self-learning, mapping, and functional approximation. Details of the development of the proposed neural network model, its architecture, training, and validation are presented in this study. A database incorporating 175 UWC mixtures from nine different studies was developed to train and test the ANN model. The data are arranged in a patterned format. Each pattern contains an input vector that includes quantity values of the mixture variables influencing the behavior of UWC mixtures (that is, cement, silica fume, fly ash, slag, water, coarse and fine aggregates, and chemical admixtures) and a corresponding output vector that includes the rheological or mechanical property to be modeled. Results show that the ANN model thus developed is not only capable of accurately predicting the slump, slump-flow, washout resistance, and compressive strength of underwater concrete mixtures used in the training process, but it can also effectively predict the aforementioned properties for new mixtures designed within the practical range of the input parameters used in the training process with an absolute error of 4.6, 10.6, 10.6, and 4.4%, respectively.
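A simplified sketch of the mapping described above, from mixture proportions to a single concrete property, using a small feed-forward network. The random data stand in for the 175-mixture database; the architecture and target are assumptions, not the authors' model.

```python
# Toy ANN regression from eight mixture ingredients to one property
# (e.g. compressive strength); data and architecture are placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((175, 8))   # cement, silica fume, fly ash, slag, water, coarse/fine agg., admixture
y = X @ rng.random(8) + rng.normal(0, 0.05, 175)    # stand-in for the measured property

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
model.fit(X[:150], y[:150])                          # train on part of the database
print("held-out R^2:", model.score(X[150:], y[150:]))
```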
Abstract:
This paper investigates the performance of the tests proposed by Hadri and by Hadri and Larsson for testing for stationarity in heterogeneous panel data under model misspecification. The panel tests are based on the well-known KPSS test (cf. Kwiatkowski et al.), which considers two models: stationarity around a deterministic level and stationarity around a deterministic trend. There is no study, as far as we know, of the statistical properties of the test when the wrong model is used. We also consider the case of the simultaneous presence of the two types of models in a panel. We employ two asymptotic frameworks: joint asymptotics, where T and N tend to infinity simultaneously, and fixed-T asymptotics, where T is fixed and N is allowed to grow indefinitely. We use Monte Carlo experiments to investigate the effects of misspecification for the sample sizes usually used in practice. The results indicate that the assumption that T is fixed, rather than asymptotic, leads to tests with smaller size distortions than the tests derived under joint asymptotics, particularly for panels with relatively small T and large N (micro-panels). We also find that choosing a deterministic trend when a deterministic level is true does not significantly affect the properties of the test, but choosing a deterministic level when a deterministic trend is true leads to extreme over-rejections. Therefore, when unsure about which model has generated the data, it is suggested to use the model with a trend. We also propose a new statistic for testing for stationarity in mixed panel data where the mixture is known. The performance of this new test is very good for both the T asymptotic and T fixed cases. The statistic for T asymptotic is slightly undersized when T is very small (
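A hedged illustration of the level-versus-trend specification choice underlying these KPSS-type tests, using the univariate KPSS test from statsmodels on a simulated trend-stationary series; the panel versions discussed in the abstract are not reproduced here.

```python
# KPSS test under the two specifications: stationarity around a level ("c")
# and around a deterministic trend ("ct"); the series is simulated.
import numpy as np
from statsmodels.tsa.stattools import kpss

rng = np.random.default_rng(0)
t = np.arange(200)
y = 0.05 * t + rng.normal(0, 1, size=200)            # stationary around a deterministic trend

stat_level, p_level, *_ = kpss(y, regression="c")    # misspecified: level stationarity
stat_trend, p_trend, *_ = kpss(y, regression="ct")   # correctly specified: trend stationarity
print(f"level spec: stat={stat_level:.3f}, p={p_level:.3f}")   # tends to over-reject
print(f"trend spec: stat={stat_trend:.3f}, p={p_trend:.3f}")
```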
Abstract:
The stochastic nature of oil price fluctuations is investigated over a twelve-year period, using data from an existing database (USA Energy Information Administration database, available online). We evaluate the scaling exponents of the fluctuations by employing different statistical analysis methods, namely rescaled range analysis (R/S), scaled windowed variance analysis (SWV) and the generalized Hurst exponent (GH) method. Relying on the scaling exponents obtained, we apply a rescaling procedure to investigate the complex characteristics of the probability density functions (PDFs) dominating the oil price fluctuations. It is found that the PDFs exhibit scale invariance and in fact collapse onto a single curve when increments are measured over microscales (typically less than 30 days). The time evolution of the distributions is well fitted by a Lévy-type stable distribution. The relevance of a Lévy distribution is made plausible by a simple model of nonlinear transfer. Our results also exhibit a degree of multifractality, as the PDFs change and converge toward a Gaussian distribution at the macroscales.
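A hedged sketch of one of the scaling estimators named above, rescaled range (R/S) analysis; the white-noise increments below stand in for the oil price returns, and the window sizes are arbitrary choices.

```python
# Rescaled range (R/S) estimate of the Hurst exponent from the slope of
# log(R/S) against log(window size); data are simulated placeholders.
import numpy as np

def rs_hurst(increments, window_sizes):
    """Estimate the Hurst exponent of an increment series via R/S analysis."""
    rs_values = []
    for n in window_sizes:
        rs_per_window = []
        for start in range(0, len(increments) - n + 1, n):
            chunk = increments[start:start + n]
            dev = np.cumsum(chunk - chunk.mean())
            r = dev.max() - dev.min()                # range of cumulative deviations
            s = chunk.std(ddof=1)                    # standard deviation of the window
            if s > 0:
                rs_per_window.append(r / s)
        rs_values.append(np.mean(rs_per_window))
    slope, _ = np.polyfit(np.log(window_sizes), np.log(rs_values), 1)
    return slope

rng = np.random.default_rng(0)
increments = rng.normal(size=4096)                   # uncorrelated increments: H ~ 0.5
print("Hurst estimate:", round(rs_hurst(increments, [16, 32, 64, 128, 256]), 3))
```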
Abstract:
Silicone elastomer systems have previously been shown to offer potential for the sustained release of protein therapeutics. However, the general requirement to incorporate large amounts of release-enhancing solid excipients to achieve therapeutically effective release rates from these otherwise hydrophobic polymer systems can detrimentally affect the viscosity of the pre-cure silicone elastomer mixture and its curing characteristics. The increase in viscosity necessitates the use of higher operating pressures in manufacture, resulting in higher shear stresses that are often detrimental to the structural integrity of the incorporated protein. The addition of liquid silicones increases the initial tan delta value and the tan delta values in the early stages of curing by increasing the liquid character (G'') of the silicone elastomer system and reducing its elastic character (G'), thereby reducing the shear stress placed on the formulation during manufacture and minimizing the potential for protein degradation. However, SEM analysis has demonstrated that if the liquid character of the silicone elastomer is too high, the formulation will be unable to fill the mold during manufacture. This study demonstrates that incorporation of liquid hydroxy-terminated polydimethylsiloxanes into addition-cure silicone elastomer-covered rod formulations can both effectively lower the viscosity of the pre-cured silicone elastomer and enhance the release rate of the model therapeutic protein, bovine serum albumin. (C) 2011 Wiley Periodicals, Inc. J Appl Polym Sci, 2011
Abstract:
It is convenient and effective to solve nonlinear problems with a model that has a linear-in-the-parameters (LITP) structure. However, the nonlinear parameters (e.g. the width of a Gaussian function) of each model term need to be pre-determined, either from expert experience or through exhaustive search. An alternative approach is to optimize them by a gradient-based technique (e.g. Newton's method). Unfortunately, all of these methods still require considerable computation. Recently, the extreme learning machine (ELM) has shown its advantages in terms of fast learning from data, but the sparsity of the constructed model cannot be guaranteed. This paper proposes a novel algorithm for the automatic construction of a nonlinear system model based on the extreme learning machine. This is achieved by effectively integrating the ELM and leave-one-out (LOO) cross-validation with our two-stage stepwise construction procedure [1]. The main objective is to improve the compactness and generalization capability of the model constructed by the ELM method. Numerical analysis shows that the proposed algorithm involves only about half of the computation of the orthogonal least squares (OLS) based method. Simulation examples are included to confirm the efficacy and superiority of the proposed technique.
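A minimal sketch of the basic extreme learning machine underlying the approach above: random, fixed hidden-layer weights with output weights solved by least squares. The LOO cross-validation and two-stage stepwise construction discussed in the abstract are not reproduced; the toy target function and sizes are assumptions.

```python
# Basic ELM: random hidden layer (fixed), sigmoid activations, output weights
# by least squares; the model-selection layer from the paper is omitted.
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=50):
    W = rng.normal(size=(X.shape[1], n_hidden))     # random input weights (fixed)
    b = rng.normal(size=n_hidden)                   # random biases (fixed)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))          # hidden-layer outputs
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)    # output weights by least squares
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(np.pi * X[:, 0]) * X[:, 1]               # toy nonlinear target
W, b, beta = elm_fit(X, y)
print("training RMSE:", np.sqrt(np.mean((elm_predict(X, W, b, beta) - y) ** 2)))
```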