973 results for Mixture-models


Relevance: 60.00%

Abstract:

Although NSSI engagement is a growing public health concern, little research has documented the developmental precursors to NSSI in longitudinal studies using youth samples. This study aimed to expand upon previous research on groups of NSSI engagement in a population-based sample of youth using multi-wave data. Moreover, this study examined whether chronic peer and romantic stress, the serotonin transporter gene (5-HTTLPR), parenting behaviors, and negative attributional style predicted NSSI group membership, along with the role of sex and grade. Participants were 549 youth beginning in the 3rd, 6th, and 9th grades at the baseline assessment. NSSI was assessed across 7 waves of data. Chronic peer and romantic stress, 5-HTTLPR, parenting behaviors, and negative attributional style were assessed at baseline. Growth mixture models conducted to test the latent trajectories of NSSI did not converge. Three NSSI groups were therefore manually created according to classifications that were determined a priori. NSSI groups included: no NSSI (85.1%), episodic NSSI (8.5%), and repeated NSSI (6.4%). Chronic peer and romantic stress, sex, and grade differentiated the no NSSI vs. repeated NSSI groups and the episodic NSSI vs. repeated NSSI groups. Specifically, higher levels of stress, being female, and being in higher grades related to repeated NSSI. 5-HTTLPR differentiated the no NSSI vs. repeated NSSI groups, such that carrying the short allele of 5-HTTLPR related to repeated NSSI. Exploratory analyses revealed that the relationship between attributional style and NSSI group was moderated by grade. This study suggests that chronic interpersonal peer and romantic stress is an important factor placing youth at greater risk for repeatedly engaging in NSSI.
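As a concrete illustration of an a priori grouping rule of this kind, the sketch below assigns youths to the three groups from binary NSSI reports across 7 waves; the wave-count cutoffs are hypothetical stand-ins, not the study's actual criteria.

```python
# Illustrative only: the cutoffs (0 waves = "no NSSI", 1-2 waves =
# "episodic", 3+ waves = "repeated") are hypothetical stand-ins for the
# study's a priori classification rules.

def classify_nssi(waves):
    """Assign a youth to an NSSI group from 7 waves of yes/no reports."""
    n_waves_with_nssi = sum(1 for w in waves if w)
    if n_waves_with_nssi == 0:
        return "no NSSI"
    elif n_waves_with_nssi <= 2:
        return "episodic NSSI"
    return "repeated NSSI"

# Toy cohort: one youth per group
cohort = [[0, 0, 0, 0, 0, 0, 0],
          [0, 1, 0, 0, 0, 0, 0],
          [1, 1, 0, 1, 1, 0, 1]]
print([classify_nssi(y) for y in cohort])
```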

Relevance: 60.00%

Abstract:

Using a sample of 339 university graduates from the University of Alicante (Spain) three years after completion of their studies, we studied the relationships between general intelligence (GI), personality traits, emotional intelligence (EI), academic performance, and occupational attainment and compared the results of conventional regression analysis with the results obtained from applying regression mixture models. The results reveal the influence of unobserved population heterogeneity (latent class) on the relationship between predictors and criteria and the improvement in the prediction obtained from applying regression mixture models compared to applying a conventional regression model.

Relevance: 60.00%

Abstract:

Normal mixture models are often used to cluster continuous data. However, conventional approaches for fitting these models will have problems in producing nonsingular estimates of the component-covariance matrices when the dimension of the observations is large relative to the number of observations. In this case, methods such as principal components analysis (PCA) and the mixture of factor analyzers model can be adopted to avoid these estimation problems. We examine these approaches applied to the Cabernet wine data set of Ashenfelter (1999), considering the clustering of both the wines and the judges, and comparing our results with another analysis. The mixture of factor analyzers model proves particularly effective in clustering the wines, accurately classifying many of the wines by location.
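A minimal numeric illustration of the singularity problem described above (toy numbers, not the wine data): with n = 2 observations in p = 3 dimensions, the sample covariance matrix has rank at most n - 1, so it is singular and cannot be inverted when evaluating a normal mixture density.

```python
# With n = 2 observations in p = 3 dimensions, the sample covariance has
# rank at most 1, so its determinant is 0 and it cannot be inverted.

def sample_covariance(data):
    n, p = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(p)]
    return [[sum((row[i] - means[i]) * (row[j] - means[j]) for row in data) / (n - 1)
             for j in range(p)] for i in range(p)]

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

S = sample_covariance([[1.0, 2.0, 3.0], [2.0, 4.0, 1.0]])
print(det3(S))  # 0.0: the covariance estimate is singular
```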

Relevance: 60.00%

Abstract:

Finite mixture models are being increasingly used to model the distributions of a wide variety of random phenomena. While normal mixture models are often used to cluster data sets of continuous multivariate data, a more robust clustering can be obtained by considering the t mixture model-based approach. Mixtures of factor analyzers enable model-based density estimation to be undertaken for high-dimensional data where the number of observations n is very large relative to their dimension p. As the approach using the multivariate normal family of distributions is sensitive to outliers, it is more robust to adopt the multivariate t family for the component error and factor distributions. The computational aspects associated with robustness and high dimensionality in these approaches to cluster analysis are discussed and illustrated.
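The downweighting mechanism behind this robustness can be sketched as follows: in the EM fit of a t component, each observation receives a weight u = (nu + p) / (nu + d2), where d2 is its squared Mahalanobis distance, so points far in the tails contribute little to the location and scale updates. The numbers below are illustrative only.

```python
# EM weight for a univariate t component: u = (nu + p) / (nu + d2).
# Outliers (large Mahalanobis distance) get small weights, which is why
# the t family is less sensitive to them than the normal family.

def t_em_weight(x, mu, sigma2, nu, p=1):
    d2 = (x - mu) ** 2 / sigma2          # squared Mahalanobis distance, p = 1
    return (nu + p) / (nu + d2)

inlier = t_em_weight(0.5, mu=0.0, sigma2=1.0, nu=4.0)   # near the centre
outlier = t_em_weight(8.0, mu=0.0, sigma2=1.0, nu=4.0)  # far in the tail
print(inlier, outlier)  # the outlier's weight is far smaller
```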

Relevance: 60.00%

Abstract:

Principal component analysis (PCA) is one of the most popular techniques for processing, compressing and visualising data, although its effectiveness is limited by its global linearity. While nonlinear variants of PCA have been proposed, an alternative paradigm is to capture data complexity by a combination of local linear PCA projections. However, conventional PCA does not correspond to a probability density, and so there is no unique way to combine PCA models. Previous attempts to formulate mixture models for PCA have therefore to some extent been ad hoc. In this paper, PCA is formulated within a maximum-likelihood framework, based on a specific form of Gaussian latent variable model. This leads to a well-defined mixture model for probabilistic principal component analysers, whose parameters can be determined using an EM algorithm. We discuss the advantages of this model in the context of clustering, density modelling and local dimensionality reduction, and we demonstrate its application to image compression and handwritten digit recognition.
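The central point, that PPCA (unlike conventional PCA) defines a proper probability density, can be sketched in a toy case: with a one-dimensional latent space in two-dimensional data, the model covariance is C = W W^T + sigma^2 I, and observations are scored under a Gaussian with that covariance. W and sigma^2 below are arbitrary illustrative values, not fitted parameters.

```python
# PPCA defines a proper density: observations are scored under
# N(0, C) with C = W W^T + sigma^2 I. Here d = 2, latent dim q = 1.
import math

W = [2.0, 1.0]          # loading vector (illustrative)
sigma2 = 0.5            # isotropic noise variance (illustrative)
C = [[W[0] * W[0] + sigma2, W[0] * W[1]],
     [W[1] * W[0],          W[1] * W[1] + sigma2]]

def log_density(x):
    """Zero-mean bivariate Gaussian log density with covariance C."""
    det = C[0][0] * C[1][1] - C[0][1] * C[1][0]
    inv = [[ C[1][1] / det, -C[0][1] / det],
           [-C[1][0] / det,  C[0][0] / det]]
    quad = sum(x[i] * inv[i][j] * x[j] for i in range(2) for j in range(2))
    return -math.log(2 * math.pi) - 0.5 * math.log(det) - 0.5 * quad

print(log_density([0.0, 0.0]))  # higher than for points far from the mean
```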

Relevance: 60.00%

Abstract:

This paper presents a novel approach to water pollution detection from remotely sensed low-platform mounted visible band camera images. We examine the feasibility of unsupervised segmentation for slick (oily spills on the water surface) region labelling. Adaptive and non-adaptive filtering are combined with density modelling of the obtained textural features. Particular effort is concentrated on textural feature extraction from raw intensity images using filter banks, and on adaptive feature extraction from the obtained output coefficients. Segmentation in the extracted feature space is achieved using Gaussian mixture models (GMM).
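A hedged sketch of the segmentation step, not the authors' exact pipeline: a two-component one-dimensional Gaussian mixture fitted by EM to scalar texture features, after which each feature would be labelled by its most probable component (water vs. slick). The synthetic feature values are illustrative.

```python
# Two-component 1-D GMM fitted by EM; component labels would then
# segment features into "water" vs "slick". Data below are synthetic.
import math, random

def normal_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def fit_gmm_1d(xs, iters=50):
    mu = [min(xs), max(xs)]           # crude initialisation
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibilities of each component for each point
        resp = []
        for x in xs:
            p = [pi[k] * normal_pdf(x, mu[k], var[k]) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: update weights, means, variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, xs)) / nk + 1e-6
    return mu, var, pi

random.seed(0)
features = [random.gauss(0.2, 0.05) for _ in range(200)] \
         + [random.gauss(0.8, 0.05) for _ in range(200)]
mu, var, pi = fit_gmm_1d(features)
print(sorted(mu))  # component means should lie near 0.2 and 0.8
```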

Relevance: 60.00%

Abstract:

Investigations into the modelling techniques that depict the transport of discrete phases (gas bubbles or solid particles) and model biochemical reactions in a bubble column reactor are discussed here. The mixture model was used to calculate gas-liquid, solid-liquid and gas-liquid-solid interactions. Multiphase flow is a difficult phenomenon to capture, particularly in bubble columns where the major driving force is caused by the injection of gas bubbles. The gas bubbles cause a large density difference to occur that results in transient multi-dimensional fluid motion. Standard design procedures do not account for the transient motion, due to the simplifying assumptions of steady plug flow. Computational fluid dynamics (CFD) can assist in expanding the understanding of complex flows in bubble columns by characterising the flow phenomena for many geometrical configurations. Therefore, CFD has a role in the education of chemical and biochemical engineers, providing examples of flow phenomena that many engineers may not experience, even through experimentation. The performance of the mixture model was investigated for three domains (plane, rectangular and cylindrical) and three flow models (laminar, k-ε turbulence and the Reynolds stresses). This investigation raised many questions about how gas-liquid interactions are captured numerically. To answer some of these questions the analogy between thermal convection in a cavity and gas-liquid flow in bubble columns was invoked. This involved modelling the buoyant motion of air in a narrow cavity for a number of turbulence schemes. The difference in density was caused by a temperature gradient that acted across the width of the cavity. Multiple vortices were obtained when the Reynolds stresses were utilised with the addition of a basic flow profile after each time step.
To implement the three-phase models an alternative mixture model was developed and compared against a commercially available mixture model for three turbulence schemes. The scheme where just the Reynolds stresses model was employed, predicted the transient motion of the fluids quite well for both mixture models. Solid-liquid and then alternative formulations of gas-liquid-solid model were compared against one another. The alternative form of the mixture model was found to perform particularly well for both gas and solid phase transport when calculating two and three-phase flow. The improvement in the solutions obtained was a result of the inclusion of the Reynolds stresses model and differences in the mixture models employed. The differences between the alternative mixture models were found in the volume fraction equation (flux and deviatoric stress tensor terms) and the viscosity formulation for the mixture phase.
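For reference, a standard mixture-model formulation (in the spirit of drift-flux models) can be sketched as below; the thesis' alternative formulation differs in the volume-fraction flux and deviatoric stress terms and in the mixture viscosity, so these equations are indicative only.

```latex
% Mixture density and velocity from phase volume fractions \alpha_k:
\rho_m = \sum_k \alpha_k \rho_k, \qquad
\mathbf{u}_m = \frac{1}{\rho_m} \sum_k \alpha_k \rho_k \mathbf{u}_k

% Continuity for the mixture:
\frac{\partial \rho_m}{\partial t} + \nabla \cdot (\rho_m \mathbf{u}_m) = 0

% Volume-fraction transport with drift velocity \mathbf{u}_{Mk}
% (the velocity of phase k relative to the mixture):
\frac{\partial (\alpha_k \rho_k)}{\partial t}
  + \nabla \cdot (\alpha_k \rho_k \mathbf{u}_m)
  = -\nabla \cdot (\alpha_k \rho_k \mathbf{u}_{Mk})
```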

Relevance: 60.00%

Abstract:

The main objective of the project is to enhance the already effective health-monitoring system (HUMS) for helicopters by analysing structural vibrations to recognise different flight conditions directly from sensor information. The goal of this paper is to develop a new method to select those sensors and frequency bands that are best for detecting changes in flight conditions. We projected frequency information to a 2-dimensional space in order to visualise flight-condition transitions using the Generative Topographic Mapping (GTM) and a variant which supports simultaneous feature selection. We created an objective measure of the separation between different flight conditions in the visualisation space by calculating the Kullback-Leibler (KL) divergence between Gaussian mixture models (GMMs) fitted to each class: the higher the KL-divergence, the better the interclass separation. To find the optimal combination of sensors, they were considered in pairs, triples and groups of four sensors. The sensor triples provided the best result in terms of KL-divergence. We also found that the use of a variational training algorithm for the GMMs gave more reliable results.
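The KL-based separation measure has a closed form only between single Gaussians; between full GMMs it has no closed form and is typically approximated, for example by Monte Carlo sampling. The univariate closed form is sketched below with illustrative numbers.

```python
# Closed-form KL divergence between two univariate Gaussians; identical
# distributions give 0, well-separated ones give large values.
import math

def kl_gauss(mu0, var0, mu1, var1):
    """KL( N(mu0, var0) || N(mu1, var1) ) for univariate Gaussians."""
    return 0.5 * (math.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

same = kl_gauss(0.0, 1.0, 0.0, 1.0)     # identical distributions -> 0
shifted = kl_gauss(0.0, 1.0, 3.0, 1.0)  # separated means -> 4.5
print(same, shifted)
```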

Relevance: 60.00%

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even after the huge increases in n seen in many fields. The tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n = all" is therefore of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is the design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms and for characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
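The reduced rank tensor factorization mentioned above can be made concrete in a toy two-variable case: a latent class (PARAFAC-type) model writes the joint pmf as p(y1, y2) = sum_h pi_h psi1[h][y1] psi2[h][y2]. The parameter values below are arbitrary illustrations.

```python
# A rank-2 latent class factorization of a 2x2 joint pmf; because each
# factor is a valid probability vector, the resulting table is a pmf.

pi = [0.6, 0.4]                       # latent class weights
psi1 = [[0.7, 0.3], [0.2, 0.8]]       # p(y1 | class h)
psi2 = [[0.5, 0.5], [0.9, 0.1]]       # p(y2 | class h)

joint = [[sum(pi[h] * psi1[h][a] * psi2[h][b] for h in range(2))
          for b in range(2)] for a in range(2)]

total = sum(sum(row) for row in joint)
print(joint, total)  # entries are positive and sum to 1
```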

Latent class models for the joint distribution of multivariate categorical, such as the PARAFAC decomposition, data play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel, yet comparatively little attention has been paid to convergence and estimation error in the resulting approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.

Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.

Relevance: 60.00%

Abstract:

Brain injury due to lack of oxygen or impaired blood flow around the time of birth may cause long-term neurological dysfunction or death in severe cases. Treatments need to be initiated as soon as possible and tailored according to the nature of the injury to achieve the best outcomes. The electroencephalogram (EEG) currently provides the best insight into neurological activity. However, its interpretation presents a formidable challenge for neurophysiologists. Moreover, such expertise is not widely available, particularly around the clock, in a typical busy Neonatal Intensive Care Unit (NICU). Therefore, an automated computerized system for detecting and grading the severity of brain injuries could be of great help for medical staff to diagnose and then initiate on-time treatments. In this study, automated systems for detection of neonatal seizures and grading the severity of Hypoxic-Ischemic Encephalopathy (HIE) using EEG and Heart Rate (HR) signals are presented. It is well known that there is a great deal of contextual and temporal information present in the EEG and HR signals if they are examined at a longer time scale. The systems developed in the past exploited this information either at a very early stage of the system, without any intelligent block, or at a very late stage, where the presence of such information is much reduced. This work has particularly focused on the development of a system that can incorporate the contextual information at the middle (classifier) level. This is achieved by using dynamic classifiers that are able to process sequences of feature vectors rather than only one feature vector at a time.

Relevance: 60.00%

Abstract:

The objective of this study was to gain an understanding of the effects of population heterogeneity, missing data, and causal relationships on parameter estimates from statistical models when analyzing change in medication use. From a public health perspective, two timely topics were addressed: the use and effects of statins in populations in primary prevention of cardiovascular disease and polypharmacy in older population. Growth mixture models were applied to characterize the accumulation of cardiovascular and diabetes medications among apparently healthy population of statin initiators. The causal effect of statin adherence on the incidence of acute cardiovascular events was estimated using marginal structural models in comparison with discrete-time hazards models. The impact of missing data on the growth estimates of evolution of polypharmacy was examined comparing statistical models under different assumptions for missing data mechanism. The data came from Finnish administrative registers and from the population-based Geriatric Multidisciplinary Strategy for the Good Care of the Elderly study conducted in Kuopio, Finland, during 2004–07. Five distinct patterns of accumulating medications emerged among the population of apparently healthy statin initiators during two years after statin initiation. Proper accounting for time-varying dependencies between adherence to statins and confounders using marginal structural models produced comparable estimation results with those from a discrete-time hazards model. Missing data mechanism was shown to be a key component when estimating the evolution of polypharmacy among older persons. In conclusion, population heterogeneity, missing data and causal relationships are important aspects in longitudinal studies that associate with the study question and should be critically assessed when performing statistical analyses. Analyses should be supplemented with sensitivity analyses towards model assumptions.
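The marginal structural model machinery rests on stabilized inverse probability of treatment weights, sw = P(A = a) / P(A = a | L), which remove the dependence of treatment (here, statin adherence) on time-varying confounders. A toy calculation is sketched below; the probabilities are hypothetical, not estimates from the study.

```python
# Stabilized IPT weight for one subject-period. All probabilities here
# are hypothetical illustrations, not values estimated in the study.

def stabilized_weight(a, l, p_a, p_a_given_l):
    """sw = P(A = a) / P(A = a | L = l) for observed treatment a."""
    return p_a[a] / p_a_given_l[(a, l)]

p_a = {1: 0.5, 0: 0.5}                               # marginal P(adherent)
p_a_given_l = {(1, "high_risk"): 0.8, (0, "high_risk"): 0.2,
               (1, "low_risk"): 0.3, (0, "low_risk"): 0.7}

w = stabilized_weight(1, "high_risk", p_a, p_a_given_l)
print(w)  # 0.625: adherent high-risk subjects are downweighted
```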

Relevance: 60.00%

Abstract:

Evaluation of human kinematic performance is essential in rehabilitation and skill assessment. These services are in high demand wherever improvements due to exercise need to be regularly assessed. In some industries there is also a need to evaluate employee capabilities quantitatively for accident compensation and insurance purposes. In particular, these assessments are preferably based on quantifiable measures in a standardized form, ensuring accuracy, reliability, ease of use and anywhere, anytime information for the clinician. Therefore, it is necessary to have an efficient mechanism for evaluation and assessment of human kinematic movements, as current motion matching and recognition algorithms fall short of the characteristically strict specifications required in numerous health care applications. In this paper, we propose a summative approach using a double integral to define a closeness between two trajectories typically generated by human movement. This approach can be considered as a spatial scoring mechanism in the evaluation of human kinematic performance as well as in movement recognition applications. Several experiments based on computer simulations as well as real data were set up to examine the performance of the proposed approach as a scoring mechanism for the evaluation of human kinematic performance. The results demonstrated better characterization of movement assessment and motion recognition ability, with a recognition rate of 86.19%, than currently used methods such as Gaussian mixture models and pose normalization employed in motion recognition tasks. Finally, we use the scoring mechanism to analyze the proximity in human kinematic performance.
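One plausible discretization of a double-integral closeness score, assuming both trajectories are sampled on a common grid, is sketched below; the authors' exact functional is not specified here, so treat this as an illustrative sketch only: D(f, g) = (1/n^2) sum_s sum_t |f(s) - g(t)|.

```python
# Discretized double-integral closeness between two sampled trajectories.
# Lower values mean the trajectories are closer over all parameter pairs.

def closeness(f, g):
    n = len(f)
    return sum(abs(fs - gt) for fs in f for gt in g) / (n * n)

grid = [i / 99 for i in range(100)]
f = [x ** 2 for x in grid]             # reference movement
g_same = list(f)                       # identical repetition
g_off = [x ** 2 + 0.5 for x in grid]   # consistently offset movement

print(closeness(f, g_same), closeness(f, g_off))
```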

Relevance: 60.00%

Abstract:

In fragmented landscapes, a species' dispersal ability and response to habitat condition are key determinants of persistence. To understand the relative importance of dispersal and condition for survival of Nephrurus stellatus (Gekkonidae) in southern Australia, we surveyed 92 woodland remnants three times. This gecko favours early post-fire succession conditions so may be at risk of extinction in the long-unburnt agricultural landscape. Using N-mixture models, we compared the influence of four measures of isolation, patch area and two habitat variables on the abundance and occurrence of N. stellatus, while taking into account detection probability. Patch occupancy was high, despite the long-term absence of fire from most remnants. Distance to the nearest occupied site was the most informative measure of patch isolation, exhibiting a negative relationship with occupancy. Distance to a nearby conservation park had little influence, suggesting that mainland-island metapopulation dynamics are not important. Abundance and occurrence were positively related to %-cover of spinifex (Triodia), indicating that niche-related factors may also contribute to spatial dynamics. Patterns of patch occupancy imply that N. stellatus has a sequence of spatial dynamics across an isolation gradient, with patchy populations and source-sink dynamics when patches are within 300 m, metapopulations at intermediate isolation, and declining populations when patches are separated by >1-2 km. Considering the conservation needs of the community, habitat condition and connectivity may need to be improved before fire can be reintroduced to the landscape. We speculate that fire may interact with habitat degradation and isolation, increasing the risk of local extinctions.
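The N-mixture likelihood used for this kind of abundance estimation can be sketched as follows: repeated counts y_t at a site are Binomial(N, p) given latent abundance N ~ Poisson(lambda), and N is marginalised out by summing up to a large upper bound K. The lambda and p values below are illustrative, not the fitted values from the gecko surveys.

```python
# Site likelihood for the N-mixture model (Royle-type): latent abundance
# N ~ Poisson(lam), repeated counts y_t ~ Binomial(N, p), N summed out.
import math

def nmix_likelihood(counts, lam, p, K=100):
    total = 0.0
    for N in range(max(counts), K + 1):
        pois = math.exp(-lam) * lam ** N / math.factorial(N)
        binom = 1.0
        for y in counts:
            binom *= math.comb(N, y) * p ** y * (1 - p) ** (N - y)
        total += pois * binom
    return total

# Three survey visits to one site, illustrative parameters
lik = nmix_likelihood([3, 2, 4], lam=5.0, p=0.6)
print(lik)  # a probability mass, strictly between 0 and 1
```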

Relevance: 40.00%

Abstract:

The experimental solubilities of the mixture of nitrophenol (m- and p-) isomers were determined at 308, 318 and 328 K over a pressure range of 10-17.55 MPa. Compared to the binary solubilities, the ternary solubilities of m-nitrophenol increased at 308, 318 and 328 K. The ternary solubilities of p-nitrophenol increased at 308 K, while the ternary solubilities decreased at lower pressures and increased at higher pressure at 318 and 328 K. The solubilities of the solid mixtures in supercritical carbon dioxide (SCCO2) were correlated with solution models by incorporating the non-idealities using activity coefficient based models. The Wilson and NRTL activity coefficient models were applied to determine the nature of the interactions between the molecules. The equation developed by using the NRTL model has three parameters and correlates mixture solubilities of solid solutes in terms of temperature and cosolute composition. The equation derived from the Wilson model contains five parameters and correlates solubilities in terms of temperature, density and cosolute composition. These two new equations developed in this work were used to correlate the solubilities of 25 binary solid mixtures including the current data. The average AARDs of the model equations derived using the NRTL and Wilson models for the solid mixtures were found to be 7% and 4%, respectively.
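For reference, the binary NRTL activity coefficient that such correlations build on can be sketched as below; the interaction parameters tau12, tau21 and alpha are hypothetical, not the values fitted in this work.

```python
# Binary NRTL activity coefficient for component 1. The parameters are
# hypothetical illustrations, not the fitted values from this study.
import math

def nrtl_gamma1(x1, tau12, tau21, alpha=0.3):
    """Activity coefficient of component 1 in a binary mixture (NRTL)."""
    x2 = 1.0 - x1
    G12 = math.exp(-alpha * tau12)
    G21 = math.exp(-alpha * tau21)
    ln_g1 = x2 ** 2 * (tau21 * (G21 / (x1 + x2 * G21)) ** 2
                       + tau12 * G12 / (x2 + x1 * G12) ** 2)
    return math.exp(ln_g1)

print(nrtl_gamma1(1.0, 0.5, 0.8))  # pure component 1: gamma = 1
print(nrtl_gamma1(0.1, 0.5, 0.8))  # dilute component 1: gamma > 1
```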