14 results for finite mixture distribution

in University of Queensland eSpace - Australia


Relevance: 100.00%

Abstract:

A two-component mixture regression model that allows simultaneously for heterogeneity and dependency among observations is proposed. By specifying random effects explicitly in the linear predictor of the mixture probability and the mixture components, parameter estimation is achieved by maximising the corresponding best linear unbiased prediction type log-likelihood. Approximate residual maximum likelihood estimates are obtained via an EM algorithm in the manner of generalised linear mixed models (GLMMs). The method can be extended to a g-component mixture regression model with component densities from the exponential family, leading to the development of the class of finite mixture GLMMs. For illustration, the method is applied to analyse neonatal length of stay (LOS). It is shown that identification of pertinent factors that influence hospital LOS can provide important information for health care planning and resource allocation. (C) 2002 Elsevier Science B.V. All rights reserved.
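
As a rough illustration of the mixture-regression idea (without the random effects and the approximate residual maximum likelihood estimation that are this paper's contribution), the sketch below runs a plain EM algorithm for a two-component Poisson mixture regression on simulated data; all names, starting values and parameter choices are illustrative.

```python
# Minimal EM sketch for a two-component Poisson mixture regression on simulated
# data. Illustrative only: it omits the random effects and the REML-type
# estimation described in the abstract.
import numpy as np
import statsmodels.api as sm
from scipy.stats import poisson

rng = np.random.default_rng(0)

# Simulate two latent subgroups with different regression coefficients.
n = 500
X = sm.add_constant(rng.normal(size=(n, 1)))          # design matrix [1, x]
z = rng.random(n) < 0.3                               # latent component labels
beta = np.where(z[:, None], [1.5, 0.8], [0.2, -0.5])  # component-specific coefficients
y = rng.poisson(np.exp(np.sum(X * beta, axis=1)))

# EM: the E-step gives posterior component probabilities, the M-step refits
# weighted Poisson GLMs and updates the mixing proportion.
pi, b1, b2 = 0.5, np.zeros(2), np.array([1.0, 0.0])
for _ in range(50):
    logf1 = poisson.logpmf(y, np.exp(X @ b1))
    logf2 = poisson.logpmf(y, np.exp(X @ b2))
    w = 1.0 / (1.0 + np.exp(np.log(1 - pi) + logf1 - np.log(pi) - logf2))
    b1 = sm.GLM(y, X, family=sm.families.Poisson(), freq_weights=1 - w).fit().params
    b2 = sm.GLM(y, X, family=sm.families.Poisson(), freq_weights=w).fit().params
    pi = w.mean()

print("estimated mixing proportion:", round(pi, 3))
print("estimated coefficients:", b1.round(2), b2.round(2))
```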

Relevance: 100.00%

Abstract:

The modelling of inpatient length of stay (LOS) has important implications in health care studies. Finite mixture distributions are usually used to model the heterogeneous LOS distribution, due to a certain proportion of patients sustaining a longer stay. However, because the morbidity data are collected from hospitals, observations clustered within the same hospital are often correlated. The generalized linear mixed model approach is adopted to accommodate the inherent correlation via unobservable random effects. An EM algorithm is developed to obtain residual maximum quasi-likelihood estimation. The proposed hierarchical mixture regression approach enables the identification and assessment of factors influencing the long-stay proportion and the LOS for the long-stay patient subgroup. A neonatal LOS data set is used for illustration. (C) 2003 Elsevier Science Ltd. All rights reserved.
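
As a toy counterpart to the short-stay/long-stay decomposition described above (ignoring covariates and the hospital-level random effects), one can fit a two-component mixture to log LOS and read off the estimated long-stay proportion; the data below are simulated purely for illustration.

```python
# Toy two-component mixture on log length of stay (LOS): one component for the
# bulk of short stays and one for the long-stay subgroup. The paper's model
# additionally includes covariates and hospital random effects.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
los = np.concatenate([rng.lognormal(1.0, 0.4, 800),   # simulated short stays (days)
                      rng.lognormal(2.5, 0.5, 200)])  # simulated long stays (days)

gm = GaussianMixture(n_components=2, random_state=0).fit(np.log(los)[:, None])
long_idx = int(np.argmax(gm.means_.ravel()))          # component with the larger mean
print("estimated long-stay proportion:", round(float(gm.weights_[long_idx]), 3))
print("mean LOS (days) by component:", np.exp(gm.means_.ravel()).round(1))
```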

Relevance: 90.00%

Abstract:

Cluster analysis via a finite mixture model approach is considered. With this approach to clustering, the data can be partitioned into a specified number of clusters g by first fitting a mixture model with g components. An outright clustering of the data is then obtained by assigning an observation to the component to which it has the highest estimated posterior probability of belonging; that is, the ith cluster consists of those observations assigned to the ith component (i = 1,..., g). The focus is on the use of mixtures of normal components for the cluster analysis of data that can be regarded as being continuous. But attention is also given to the case of mixed data, where the observations consist of both continuous and discrete variables.
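
A minimal sketch of this clustering rule, using a g-component normal mixture fitted with scikit-learn on simulated continuous data (the data and the choice of g are illustrative):

```python
# Fit a g-component normal mixture and assign each observation to the component
# with the highest estimated posterior probability of membership.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])

g = 2
gm = GaussianMixture(n_components=g, random_state=0).fit(X)
tau = gm.predict_proba(X)        # n x g matrix of estimated posterior probabilities
labels = tau.argmax(axis=1)      # outright clustering: cluster i = component i
print("cluster sizes:", np.bincount(labels))
```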

Relevance: 80.00%

Abstract:

Finite mixture models are being increasingly used to model the distributions of a wide variety of random phenomena. While normal mixture models are often used to cluster data sets of continuous multivariate data, a more robust clustering can be obtained by considering the t mixture model-based approach. Mixtures of factor analyzers enable model-based density estimation to be undertaken for high-dimensional data where the number of observations n is not very large relative to their dimension p. As the approach using the multivariate normal family of distributions is sensitive to outliers, it is more robust to adopt the multivariate t family for the component error and factor distributions. The computational aspects associated with robustness and high dimensionality in these approaches to cluster analysis are discussed and illustrated.
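
The robustness of the t approach comes from the EM weights, which downweight observations that are far from a component's centre; the sketch below illustrates that weighting for a single multivariate t component, with the degrees of freedom held fixed at an assumed value (in practice they are estimated along with the other parameters).

```python
# In the EM fitting of a multivariate t component, each observation receives a
# weight u = (p + nu) / (nu + squared Mahalanobis distance), so gross outliers
# contribute little to the location and scatter updates.
import numpy as np

rng = np.random.default_rng(3)
p, nu = 2, 4.0                                  # dimension and assumed degrees of freedom
X = rng.normal(size=(200, p))
X[:5] += 10.0                                   # a few gross outliers

mu, cov = X.mean(axis=0), np.cov(X, rowvar=False)
d2 = np.einsum('ij,jk,ik->i', X - mu, np.linalg.inv(cov), X - mu)  # Mahalanobis^2
u = (p + nu) / (nu + d2)                        # t-component weights

print("median weight of regular points:", round(float(np.median(u[5:])), 2))
print("weights of the outliers:", u[:5].round(3))   # much smaller than one
```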

Relevance: 30.00%

Abstract:

We consider the problem of assessing the number of clusters in a limited number of tissue samples containing gene expressions for possibly several thousands of genes. It is proposed to use a normal mixture model-based approach to the clustering of the tissue samples. One advantage of this approach is that the question on the number of clusters in the data can be formulated in terms of a test on the smallest number of components in the mixture model compatible with the data. This test can be carried out on the basis of the likelihood ratio test statistic, using resampling to assess its null distribution. The effectiveness of this approach is demonstrated on simulated data and on some microarray datasets, as considered previously in the bioinformatics literature. (C) 2004 Elsevier Inc. All rights reserved.
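
A sketch of the resampling idea with normal mixtures fitted in scikit-learn: compute -2 log(LR) for g versus g+1 components and calibrate it by parametric bootstrap from the fitted g-component model (the data, the value of g and the replication count are illustrative).

```python
# Resampling-based likelihood ratio test for the smallest number of mixture
# components: H0 of g normal components versus H1 of g + 1 components.
import numpy as np
from sklearn.mixture import GaussianMixture

def lrts(X, g):
    """-2 log likelihood ratio statistic for g versus g + 1 components."""
    l0 = GaussianMixture(g, n_init=5, random_state=0).fit(X).score(X) * len(X)
    l1 = GaussianMixture(g + 1, n_init=5, random_state=0).fit(X).score(X) * len(X)
    return 2 * (l1 - l0)

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, (60, 3)), rng.normal(3, 1, (60, 3))])

g = 1
observed = lrts(X, g)
null_fit = GaussianMixture(g, random_state=0).fit(X)
boot = []
for _ in range(99):                         # bootstrap replications under H0
    Xb, _ = null_fit.sample(len(X))         # resample from the fitted g-component model
    boot.append(lrts(Xb, g))
p_value = (1 + sum(stat >= observed for stat in boot)) / (1 + len(boot))
print("-2 log LR =", round(observed, 1), ", bootstrap p-value =", round(p_value, 3))
```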

Relevance: 30.00%

Abstract:

Superplastic bulging is the most successful industrial application of superplastic forming (SPF), but the non-uniform wall thickness distribution of parts formed by it is a common technical problem yet to be overcome. Based on a rigid-viscoplastic finite element program developed by the authors for simulating the sheet superplastic forming process, combined with the prediction of microstructure variations (such as grain growth and cavity growth), a simple and efficient preform design method is proposed and applied to the design of a preform mould for manufacturing parts with uniform wall thickness. Examples of formed parts are presented here to demonstrate that the technology can be used to improve the uniformity of wall thickness to meet practical requirements. (C) 2004 Elsevier B.V. All rights reserved.

Relevance: 30.00%

Abstract:

A new approach based on nonlocal density functional theory is proposed to determine the pore size distribution (PSD) of activated carbons and the energetic heterogeneity of the pore walls. The energetic heterogeneity is modeled with an energy distribution function (EDF) describing the distribution of the solid-fluid potential well depth (this distribution is a Dirac delta function for an energetically homogeneous surface). The approach allows simultaneous determination of the PSD (assuming slit-shaped pores) and the EDF from nitrogen or argon isotherms at their respective boiling points, using a set of local isotherms calculated for a range of pore widths and solid-fluid potential well depths. It is found that the structure of the pore wall surface differs significantly from that of graphitized carbon black. This could be attributed to defects in the crystalline structure of the surface, active oxide centers, the finite size of the pore walls (in either wall thickness or pore length), and so forth. These factors depend on the precursor and on the carbonization and activation process, and hence provide a fingerprint for each adsorbent. The approach allows very accurate correlation of the experimental adsorption isotherm and leads to PSDs that are simpler and more realistic than those obtained with the original nonlocal density functional theory.
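
The numerical core of such approaches is an adsorption integral equation: the experimental isotherm is expressed as a non-negative combination of precomputed local (single-pore) isotherms, and the recovered weights give the PSD (with a two-dimensional kernel they also give the EDF). The sketch below uses synthetic Langmuir-like local isotherms in place of an NLDFT kernel, so every function and value in it is illustrative.

```python
# Recover a pore size distribution f(w) from an isotherm by solving
#   N_exp(p) ~ sum_w f(w) * N_local(p, w),  f(w) >= 0,
# with non-negative least squares. The kernel here is synthetic; a real analysis
# would use NLDFT local isotherms (and often adds regularisation/smoothing).
import numpy as np
from scipy.optimize import nnls

p = np.logspace(-5, 0, 60)                      # relative pressures
widths = np.linspace(0.4, 3.0, 30)              # pore widths, nm (illustrative)

def local_isotherm(p, w):
    """Synthetic single-pore isotherm: narrower pores fill at lower pressure."""
    b = 10.0 ** (4.0 / w)                       # toy affinity parameter
    return b * p / (1 + b * p)

K = np.column_stack([local_isotherm(p, w) for w in widths])   # kernel matrix

# "Experimental" isotherm generated from a known bimodal PSD plus noise.
f_true = np.exp(-(widths - 0.8) ** 2 / 0.02) + 0.5 * np.exp(-(widths - 2.0) ** 2 / 0.1)
n_exp = K @ f_true + np.random.default_rng(5).normal(0, 0.01, len(p))

f_est, _ = nnls(K, n_exp)                       # non-negative PSD weights
print("largest recovered weight at pore width (nm):", round(float(widths[f_est.argmax()]), 2))
```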

Relevance: 30.00%

Abstract:

A hydrogel intervertebral disc (IVD) model consisting of an inner nucleus core and an outer anulus ring was manufactured from 30 and 35% by weight poly(vinyl alcohol) hydrogel (PVA-H) concentrations and subjected to axial compression between saturated porous endplates at 200 N for 11 h 30 min. Repeat experiments (n = 4) on different samples (N = 2) show good reproducibility of fluid loss and axial deformation. An axisymmetric nonlinear poroelastic finite element model with variable permeability was developed using commercial finite element software to compare axial deformation and predicted fluid loss with the experimental data. The FE predictions indicate differential fluid loss similar to that of biological IVDs, with the nucleus losing more water than the anulus, and there is overall good agreement between the experimental and finite element predicted fluid loss. The stress distribution pattern shows important similarities with the biological IVD, including stress transference from the nucleus to the anulus upon sustained loading, and renders the model suitable for use in future studies to better understand the role of fluid and stress in biological IVDs. (C) 2005 Springer Science + Business Media, Inc.
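
As a much simpler analogue of fluid expulsion under sustained load (not the paper's axisymmetric, variable-permeability poroelastic FE model), the sketch below integrates the one-dimensional consolidation equation between drained endplates with explicit finite differences; all material values are invented for illustration only.

```python
# 1-D consolidation toy model of sustained compression between porous plates:
# excess pore pressure u obeys du/dt = c_v * d2u/dz2 and drains at the two
# boundaries, so fluid is progressively expelled under constant load.
import numpy as np

c_v = 1e-10           # consolidation coefficient, m^2/s (illustrative value)
h = 0.01              # specimen height, m
nz, dt = 51, 1.0      # grid points and time step, s
dz = h / (nz - 1)
assert c_v * dt / dz**2 < 0.5, "explicit scheme stability limit"

u = np.ones(nz)       # normalised excess pore pressure just after load application
u[0] = u[-1] = 0.0    # drained (saturated porous) endplates
for _ in range(int(11.5 * 3600 / dt)):               # 11 h 30 min of loading
    u[1:-1] += c_v * dt / dz**2 * (u[2:] - 2 * u[1:-1] + u[:-2])

print("fraction of excess pore pressure dissipated:", round(1 - float(u[1:-1].mean()), 2))
```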

Relevance: 30.00%

Abstract:

The adsorption of simple Lennard-Jones fluids in a carbon slit pore of finite length was studied with Canonical Ensemble (NVT) and Gibbs Ensemble Monte Carlo simulations (GEMC). The Canonical Ensemble was a collection of cubic simulation boxes in which a finite pore resides, while the Gibbs Ensemble comprised only the pore space of the finite pore. Argon was used as a model for Lennard-Jones fluids, while the adsorbent was modelled as a finite carbon slit pore whose two walls were composed of three graphene layers with carbon atoms arranged in a hexagonal pattern. The Lennard-Jones (LJ) 12-6 potential model was used to compute the interaction energy between two fluid particles, and also between a fluid particle and a carbon atom. Argon adsorption isotherms were obtained at 87.3 K for pore widths of 1.0, 1.5 and 2.0 nm using both the Canonical and Gibbs Ensembles. These results were compared with isotherms obtained for the corresponding infinite pores using the Grand Canonical Ensemble. The effects of the number of cycles necessary to reach equilibrium, the initial allocation of particles, the displacement step and the simulation box size were investigated in particular for the Monte Carlo simulation with the Canonical Ensemble. Of these parameters, the displacement step had the most significant effect on the performance of the Monte Carlo simulation. The simulation box size was also important, especially at low pressures, at which the size must be sufficiently large to have a statistically acceptable number of particles in the bulk phase. Finally, it was found that the Canonical Ensemble and the Gibbs Ensemble both yielded the same isotherm (within statistical error); however, the computation time for GEMC was shorter than that for the Canonical Ensemble simulation. On the other hand, the Canonical Ensemble approach describes the proper interface between the reservoir and the adsorbed phase (and hence the meniscus).
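
For orientation, here is a stripped-down canonical-ensemble (NVT) Metropolis simulation of a Lennard-Jones fluid in a slit pore. It deliberately simplifies the setup in the abstract: the walls are represented by the structureless Steele 10-4-3 potential rather than explicit graphene layers, the run is far too short for production results, and the parameter values are merely typical argon/carbon numbers.

```python
# Minimal NVT Metropolis Monte Carlo of a Lennard-Jones fluid confined in a slit
# pore with structureless Steele 10-4-3 walls (a simplification of the explicit
# three-layer graphene walls described in the abstract). Illustrative values only.
import numpy as np

rng = np.random.default_rng(6)
kB = 1.380649e-23
eps_ff, sig_ff = 119.8 * kB, 0.3405e-9    # fluid-fluid LJ parameters (argon)
eps_sf, sig_sf = 57.9 * kB, 0.3395e-9     # solid-fluid LJ parameters
rho_s, delta = 114e27, 0.335e-9           # carbon atom density (m^-3), layer spacing (m)
T, H, L = 87.3, 1.0e-9, 3.0e-9            # temperature (K), pore width (m), lateral box (m)
N, n_moves, max_disp = 30, 20000, 0.03e-9

def steele(z):
    """Steele 10-4-3 potential of one wall at perpendicular distance z."""
    return (2 * np.pi * rho_s * eps_sf * sig_sf**2 * delta *
            (0.4 * (sig_sf / z)**10 - (sig_sf / z)**4
             - sig_sf**4 / (3 * delta * (z + 0.61 * delta)**3)))

def energy_of(i, pos):
    """Energy of particle i: both walls plus pairwise LJ with periodic x-y images."""
    u = steele(pos[i, 2]) + steele(H - pos[i, 2])
    d = pos[:, :2] - pos[i, :2]
    d -= L * np.round(d / L)                          # minimum image in x and y
    r2 = np.sum(d**2, axis=1) + (pos[:, 2] - pos[i, 2])**2
    r2[i] = np.inf                                    # exclude self-interaction
    inv6 = (sig_ff**2 / r2)**3
    return u + np.sum(4 * eps_ff * (inv6**2 - inv6))

pos = np.column_stack([rng.random((N, 2)) * L,
                       0.3e-9 + rng.random(N) * (H - 0.6e-9)])
accepted = 0
for _ in range(n_moves):
    i = rng.integers(N)
    old, e_old = pos[i].copy(), energy_of(i, pos)
    pos[i, :2] = (pos[i, :2] + rng.uniform(-max_disp, max_disp, 2)) % L
    pos[i, 2] = np.clip(pos[i, 2] + rng.uniform(-max_disp, max_disp), 0.05e-9, H - 0.05e-9)
    de = energy_of(i, pos) - e_old
    if rng.random() < np.exp(min(0.0, -de / (kB * T))):
        accepted += 1                                 # accept the trial displacement
    else:
        pos[i] = old                                  # reject and restore
print("acceptance ratio:", round(accepted / n_moves, 2))
```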

Relevance: 30.00%

Abstract:

The application of nonlocal density functional theory (NLDFT) to determine the pore size distribution (PSD) of activated carbons using a nongraphitized carbon black, instead of graphitized thermal carbon black, as a reference system is explored. We show that in this case nitrogen and argon adsorption isotherms in activated carbons are precisely correlated by the theory, and such an excellent correlation would never be possible if the pore wall surface were assumed to be identical to that of graphitized carbon black. This suggests that the pore wall surfaces of activated carbons are closer to those of amorphous solids because of defects in the crystalline lattice, finite pore length, the presence of active centers, and so on. Application of the NLDFT adapted to amorphous solids resulted in a quantitative description of N2 and Ar adsorption isotherms on nongraphitized carbon black BP280 at their respective boiling points. In the present paper we determined solid-fluid potentials from experimental adsorption isotherms on nongraphitized carbon black and subsequently used those potentials to model adsorption in slit pores and generate a corresponding set of local isotherms, which we used to determine the PSD functions of different activated carbons. (c) 2005 Elsevier Ltd. All rights reserved.

Relevance: 30.00%

Abstract:

Motivation: An important problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. We provide a straightforward and easily implemented method for estimating the posterior probability that an individual gene is null. The problem can be expressed in a two-component mixture framework, using an empirical Bayes approach. Current methods of implementing this approach either have limitations due to the minimal assumptions made or, with more specific assumptions, are computationally intensive. Results: By converting the value of the test statistic used to test the significance of each gene to a z-score, we propose a simple two-component normal mixture that adequately models the distribution of this score. The usefulness of our approach is demonstrated on three real datasets.
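
A sketch of the z-score mixture idea on simulated data: fit a two-component normal mixture to the z-scores and take the posterior probability of the component nearest zero as the gene-level null probability (here the null component is left free; variants of the method fix it at N(0, 1)).

```python
# Two-component normal mixture on gene z-scores; the posterior probability of
# the near-zero component estimates the probability that a gene is null.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)
z = np.concatenate([rng.normal(0, 1, 1800),            # simulated null genes
                    rng.normal(2.5, 1, 200)])          # simulated differentially expressed genes

gm = GaussianMixture(n_components=2, random_state=0).fit(z[:, None])
null_comp = int(np.argmin(np.abs(gm.means_.ravel())))  # component closest to zero mean
tau0 = gm.predict_proba(z[:, None])[:, null_comp]      # posterior probability a gene is null
print("estimated null proportion:", round(float(gm.weights_[null_comp]), 2))
print("genes called non-null at tau0 < 0.1:", int(np.sum(tau0 < 0.1)))
```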

Relevance: 30.00%

Abstract:

We consider the problem of robust performance analysis of linear discrete time-varying systems on a bounded time interval. The system is represented in state-space form. It is driven by a random input disturbance with an imprecisely known probability distribution; this distributional uncertainty is described in terms of entropy. The worst-case performance of the system is quantified by its a-anisotropic norm. Computing the anisotropic norm reduces to solving a set of difference Riccati and Lyapunov equations together with an equation of special form.
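
One ingredient of such computations is the forward difference Lyapunov recursion for the state covariance of the time-varying system over the bounded interval; a small sketch is given below (the associated difference Riccati equations and the special equation that fixes the anisotropy level are not shown, and the system matrices are random placeholders).

```python
# Difference Lyapunov recursion for x_{k+1} = A_k x_k + B_k w_k on a bounded
# interval: P_{k+1} = A_k P_k A_k^T + B_k B_k^T propagates the state covariance
# driven by unit-variance white noise. This is only one building block of the
# anisotropic-norm computation described in the abstract.
import numpy as np

rng = np.random.default_rng(8)
T, n, m = 20, 3, 1
A = [np.eye(n) + 0.1 * rng.normal(size=(n, n)) for _ in range(T)]  # time-varying dynamics
B = [rng.normal(size=(n, m)) for _ in range(T)]

P = np.zeros((n, n))                        # zero initial state covariance
for k in range(T):
    P = A[k] @ P @ A[k].T + B[k] @ B[k].T   # Lyapunov difference equation
print("terminal state covariance trace:", round(float(np.trace(P)), 2))
```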

Relevance: 30.00%

Abstract:

Many instances of differential diffusion, i.e., different species having different turbulent diffusion coefficients in the same flow, can be explained as a finite mixing length effect. That is, in a simple mixing length scenario, the turbulent diffusion coefficient has the form K = w_m l_m [1 + O((l_m/L_c)^2)], where w_m is the mixing velocity, l_m the mixing length and L_c the overall distribution scale for a particular species. The first term represents the familiar gradient diffusion, while the second term becomes important when l_m/L_c is finite. This second term shows that different species will have different diffusion coefficients if they have different overall distribution scales. Such different values of L_c may come about through different boundary conditions and different intrinsic properties (molecular diffusivity, settling velocity, etc.) for different species. For momentum transfer in turbulent oscillatory boundary layers the second term is imaginary and explains observed phase leads of shear stresses ahead of velocity gradients.
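
A small worked example of the reconstructed formula, taking the correction term as simply (l_m/L_c)^2 (the abstract specifies only its order): two species sharing the same mixing velocity and mixing length but with different overall distribution scales end up with different effective diffusion coefficients.

```python
# Finite-mixing-length illustration: K = w_m * l_m * (1 + (l_m / L_c)**2),
# with the correction term taken literally for the purpose of the example.
w_m, l_m = 0.05, 0.02        # mixing velocity (m/s) and mixing length (m), illustrative
for name, L_c in [("well-mixed species", 1.0), ("near-bed species", 0.05)]:
    K = w_m * l_m * (1 + (l_m / L_c) ** 2)
    print(f"{name}: L_c = {L_c} m, K = {K:.6f} m^2/s")
```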