31 results for two-Gaussian mixture model


Relevance: 100.00%

Abstract:

Quantitatively predicting mass transport rates for chemical mixtures in porous materials is important in applications of materials such as adsorbents, membranes, and catalysts. Because directly assessing mixture transport experimentally is challenging, theoretical models that can predict mixture diffusion coefficients using only single-component information would have many uses. One such model was proposed by Skoulidas, Sholl, and Krishna (Langmuir, 2003, 19, 7977), and applications of this model to a variety of chemical mixtures in nanoporous materials have yielded promising results. In this paper, the accuracy of this model for predicting mixture diffusion coefficients in materials that exhibit a heterogeneous distribution of local binding energies is examined. To examine this issue, single-component and binary mixture diffusion coefficients are computed using kinetic Monte Carlo simulations of a two-dimensional lattice model over a wide range of lattice occupancies and compositions. The approach suggested by Skoulidas, Sholl, and Krishna is found to be accurate in situations where the spatial distribution of binding site energies is relatively homogeneous, but considerably less accurate for strongly heterogeneous energy distributions.
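
A minimal sketch of the kind of lattice calculation described above: kinetic Monte Carlo for site-exclusion hopping on a periodic 2-D lattice, with a self-diffusion coefficient estimated from the mean-squared displacement. All parameters are illustrative, hop rates are uniform (the homogeneous limit), and the paper's heterogeneous binding-energy distributions and binary mixtures are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def lattice_kmc(L=32, n_particles=100, n_steps=50_000, nu=1.0):
    """Site-exclusion hopping on an L x L periodic square lattice.
    Returns D = MSD / (4 t) in lattice units (two dimensions).
    Binding-site heterogeneity would enter through site-dependent hop
    rates; here every site has the same attempt rate nu."""
    occ = np.zeros((L, L), dtype=bool)
    sites = rng.choice(L * L, size=n_particles, replace=False)
    pos = np.column_stack(np.unravel_index(sites, (L, L)))
    occ[pos[:, 0], pos[:, 1]] = True
    start = pos.astype(float)          # initial positions, for the MSD
    unwrapped = pos.astype(float)      # displacements without periodic wrapping
    moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])
    t = 0.0
    for _ in range(n_steps):
        t += 1.0 / (n_particles * nu)  # mean KMC waiting time per event
        i = rng.integers(n_particles)
        step = moves[rng.integers(4)]
        new = (pos[i] + step) % L
        if not occ[new[0], new[1]]:    # hop succeeds only onto an empty site
            occ[tuple(pos[i])] = False
            occ[tuple(new)] = True
            pos[i] = new
            unwrapped[i] += step
    msd = np.mean(np.sum((unwrapped - start) ** 2, axis=1))
    return msd / (4.0 * t)

print(lattice_kmc())
```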

Relevance: 100.00%

Abstract:

An important and common problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. As this problem concerns the selection of significant genes from a large pool of candidate genes, it needs to be carried out within the framework of multiple hypothesis testing. In this paper, we focus on the use of mixture models to handle the multiplicity issue. With this approach, a measure of the local FDR (false discovery rate) is provided for each gene. An attractive feature of the mixture model approach is that it provides a framework for the estimation of the prior probability that a gene is not differentially expressed, and this probability can subsequently be used in forming a decision rule. The rule can also be formed to take the false negative rate into account. We apply this approach to a well-known publicly available data set on breast cancer, and discuss our findings with reference to other approaches.
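
A sketch of the mixture-model machinery the abstract refers to, assuming a fixed theoretical null N(0,1) for gene-wise z-scores and a single Gaussian alternative fitted by EM; the local FDR for each gene is then pi0*f0(z)/f(z). This illustrates the idea, not the paper's exact formulation.

```python
import numpy as np
from scipy.stats import norm

def local_fdr(z, n_iter=200):
    """Fit f(z) = pi0*N(0,1) + (1-pi0)*N(mu, sigma^2) by EM and return the
    local FDR pi0*f0(z)/f(z) per gene. Holding the null fixed at N(0,1)
    is an assumption made for this sketch."""
    pi0, mu, sigma = 0.9, 2.0, 1.0
    for _ in range(n_iter):
        f0 = norm.pdf(z, 0.0, 1.0)
        f1 = norm.pdf(z, mu, sigma)
        tau = (1 - pi0) * f1 / (pi0 * f0 + (1 - pi0) * f1)  # E-step
        pi0 = 1.0 - tau.mean()                              # M-step updates
        mu = np.sum(tau * z) / np.sum(tau)
        sigma = np.sqrt(np.sum(tau * (z - mu) ** 2) / np.sum(tau))
    f0 = norm.pdf(z, 0.0, 1.0)
    f = pi0 * f0 + (1 - pi0) * norm.pdf(z, mu, sigma)
    return pi0 * f0 / f

rng = np.random.default_rng(1)
z = np.concatenate([rng.normal(0, 1, 900), rng.normal(3, 1, 100)])
print((local_fdr(z) < 0.2).sum(), "genes flagged at local FDR < 0.2")
```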

Relevance: 100.00%

Abstract:

This paper considers a model-based approach to the clustering of tissue samples on the basis of a very large number of genes from microarray experiments. It is a nonstandard problem in parametric cluster analysis because the dimension of the feature space (the number of genes) is typically much greater than the number of tissues. Frequently in practice, clinical data are also available on the cases from which the tissue samples were obtained. Here we investigate how to use the clinical data in conjunction with the microarray gene expression data to cluster the tissue samples. We propose two mixture model-based approaches in which the number of components in the mixture model corresponds to the number of clusters to be imposed on the tissue samples. One approach specifies the components of the mixture model to be the conditional distributions of the microarray data given the clinical data, with the mixing proportions also conditioned on the latter. The other takes the components of the mixture model to represent the joint distributions of the clinical and microarray data. The approaches are demonstrated on the breast cancer data studied by van't Veer et al. (2002).
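
A sketch of the second ("joint distribution") approach, using scikit-learn's EM-fitted Gaussian mixture on clinical and expression features stacked together. The data, dimensions and diagonal-covariance choice are placeholders; in practice the gene dimension must first be reduced, since the number of genes far exceeds the number of tissues.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
X_expr = rng.normal(size=(60, 10))   # tissues x (reduced) gene features, placeholder
X_clin = rng.normal(size=(60, 3))    # tissues x clinical variables, placeholder

X_joint = np.hstack([X_clin, X_expr])        # model clinical + expression jointly
gmm = GaussianMixture(n_components=2,        # components = clusters of tissues
                      covariance_type="diag", random_state=0)
labels = gmm.fit_predict(X_joint)            # cluster label per tissue sample
print(np.bincount(labels))
```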

Relevance: 100.00%

Abstract:

Time-course experiments with microarrays are often used to study dynamic biological systems and genetic regulatory networks (GRNs), which model how genes influence each other in the cell-level development of organisms. Inference for GRNs provides important insights into fundamental biological processes such as growth and is useful in disease diagnosis and genomic drug design. Because of the experimental design, multilevel data hierarchies are often present in time-course gene expression data. Most existing methods, however, ignore the dependency of the expression measurements over time and the correlation among gene expression profiles. Such independence assumptions are at odds with the regulatory interactions under study, can overlook important subject effects, and can lead to spurious inference about regulatory networks or mechanisms. In this paper, a multilevel mixed-effects model is adopted to incorporate data hierarchies in the analysis of time-course data, with temporal and subject effects both treated as random. The method starts by clustering genes, fitting the mixture model within the multilevel random-effects framework using the expectation-maximization (EM) algorithm. The network of regulatory interactions is then determined by searching for regulatory control elements (activators and inhibitors) shared by the clusters of co-expressed genes, based on a time-lagged correlation coefficient measure. The method is applied to two real time-course datasets from the budding yeast (Saccharomyces cerevisiae) genome. It is shown that the proposed method yields clusters of cell-cycle-regulated genes that are supported by existing gene function annotations, and hence enables inference on regulatory interactions in the genetic network.
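
The regulatory search step rests on a time-lagged correlation measure; below is a sketch of such a measure between a candidate regulator profile and a (cluster mean) target profile, with the lag chosen to maximise the absolute correlation. The function names and lag window are illustrative, not the paper's exact statistic.

```python
import numpy as np

def lagged_corr(x, y, lag):
    """Pearson correlation between regulator x and target y shifted `lag`
    time points later; a positive peak suggests activation, a negative
    one inhibition."""
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    return np.corrcoef(x, y)[0, 1]

def best_lag(x, y, max_lag=3):
    return max(range(max_lag + 1), key=lambda k: abs(lagged_corr(x, y, k)))

x = np.sin(np.linspace(0, 4 * np.pi, 24))   # candidate regulator profile
y = np.roll(x, 2)                           # target lagging by two time points
print(best_lag(x, y))                       # -> 2
```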

Relevance: 100.00%

Abstract:

Recently, methods for computing D-optimal designs for population pharmacokinetic studies have become available, but few publications have prospectively evaluated the benefits of D-optimality in population or single-subject settings. This study compared a population optimal design with an empirical design for estimating the base pharmacokinetic model for enoxaparin in a stratified, randomized setting. The population pharmacokinetic D-optimal design for enoxaparin was estimated using the PFIM function (MATLAB version 6.0.0.88). The optimal design was based on a one-compartment model with lognormal between-subject variability and proportional residual variability, and consisted of a single design with three sampling windows (0-30 min, 1.5-5 h and 11-12 h post-dose) for all patients. The empirical design consisted of three sample time windows per patient from a total of nine windows that collectively covered the entire dose interval; each patient was assigned one blood sample from each of three different windows. Windows for blood sampling times were likewise provided for the optimal design. Ninety-six patients currently receiving enoxaparin therapy were recruited into the study and randomly assigned to either the optimal or the empirical sampling design, stratified for body mass index. The exact times of blood samples and doses were recorded. Analysis was undertaken using NONMEM (version 5). The empirical design supported a one-compartment linear model with additive residual error, while the optimal design supported a two-compartment linear model with additive residual error, as did the model derived from the full data set. A posterior predictive check was performed in which the models arising from the empirical and optimal designs were used to predict into the full data set. This revealed that the model derived from the optimal design was superior to the empirical-design model in terms of precision and was similar to the model developed from the full dataset. This study suggests that optimal design techniques may be useful even when the optimized design was based on a model that was misspecified in terms of the structural and statistical models, and when the implementation of the optimally designed study deviated from the nominal design.
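
A toy version of what a D-optimal design computation does: score candidate sets of sampling times by the log-determinant of a Fisher information matrix under a one-compartment bolus model with proportional error. This is a fixed-effects stand-in for the population (mixed-effects) information that PFIM actually computes; the model, parameter values and times are assumptions.

```python
import numpy as np

def conc(t, CL, V, dose=100.0):
    # one-compartment IV bolus: C(t) = (dose / V) * exp(-(CL / V) * t)
    return (dose / V) * np.exp(-(CL / V) * t)

def log_det_fim(times, theta=(2.0, 10.0), cv=0.2, h=1e-5):
    """log|F| for sampling times under proportional residual error."""
    times = np.asarray(times, float)
    theta = np.asarray(theta, float)
    J = np.empty((times.size, theta.size))
    for j in range(theta.size):                # central-difference sensitivities
        up, dn = theta.copy(), theta.copy()
        up[j] += h
        dn[j] -= h
        J[:, j] = (conc(times, *up) - conc(times, *dn)) / (2 * h)
    w = 1.0 / (cv * conc(times, *theta)) ** 2  # proportional-error weights
    F = J.T @ (w[:, None] * J)
    return np.linalg.slogdet(F)[1]

# higher log|F| = more informative design
print(log_det_fim([0.25, 3.0, 11.5]), log_det_fim([1.0, 2.0, 3.0]))
```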

Relevance: 100.00%

Abstract:

The authors report the results of two studies that model the antecedents of goal congruence in retail-service settings. Drawing on extant research, they propose that goal congruence is related to employees' perceptions of morale, leadership support, fairness in reward allocation, and empowerment, and they hypothesize and test direct and indirect relationships between these constructs and goal congruence. Results of structural equation modeling suggest an important mediating role for morale and interesting areas of variation across retail and service settings.

Relevance: 100.00%

Abstract:

Aims: (1) to quantify the random and predictable components of variability for aminoglycoside clearance and volume of distribution; (2) to investigate models for predicting aminoglycoside clearance in patients with low serum creatinine concentrations; (3) to evaluate the predictive performance of initial dosing strategies for achieving an aminoglycoside target concentration. Methods: Aminoglycoside demographic, dosing and concentration data were collected from 697 adult patients (>= 20 years old) as part of standard clinical care using a target concentration intervention approach for dose individualization. It was assumed that aminoglycoside clearance had a renal and a nonrenal component, with the renal component being linearly related to predicted creatinine clearance. Results: A two-compartment pharmacokinetic model best described the aminoglycoside data. The addition of weight, age, sex and serum creatinine as covariates reduced the random component of between-subject variability (BSVR) in clearance (CL) from 94% to 36% of population parameter variability (PPV). The final pharmacokinetic parameter estimates for the model with the best predictive performance were: CL 4.7 l/h/70 kg; intercompartmental clearance (CLic) 1 l/h/70 kg; volume of the central compartment (V1) 19.5 l/70 kg; volume of the peripheral compartment (V2) 11.2 l/70 kg. Conclusions: Using a fixed dose of aminoglycoside will place 35% of typical patients within 80-125% of the required dose; covariate-guided prediction increases this to up to 61%. However, because random within-subject variability (WSVR) in clearance was shown to be less than safe and effective variability (SEV), target concentration intervention can potentially achieve safe and effective doses in 90% of patients.
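
A sketch of how a covariate-guided clearance prediction of this kind is typically assembled: allometric weight scaling to the 70-kg standard, plus a renal component proportional to predicted creatinine clearance. The abstract reports only the standardised CL of 4.7 l/h/70 kg; the renal fraction and the CrCL normalisation below are assumptions for illustration.

```python
def predicted_cl(weight_kg, crcl_l_h, cl_std=4.7, f_renal=0.8):
    """Illustrative covariate model for aminoglycoside clearance (l/h).
    f_renal and the 6 l/h (~100 ml/min) CrCL reference are assumed."""
    size = (weight_kg / 70.0) ** 0.75          # allometric scaling for clearance
    return cl_std * size * (f_renal * crcl_l_h / 6.0 + (1.0 - f_renal))

def initial_dose_mg(weight_kg, crcl_l_h, target_auc=70.0):
    # target-concentration dosing: dose per interval = target AUC * CL
    # (target_auc in mg*h/l is a placeholder value)
    return predicted_cl(weight_kg, crcl_l_h) * target_auc

print(initial_dose_mg(80.0, 5.0))
```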

Relevance: 100.00%

Abstract:

A novel class of nonlinear, visco-elastic rheologies was recently developed by Mühlhaus et al. (2002a, b). The theory was originally developed for the simulation of large-deformation processes, including folding and kinking in multi-layered visco-elastic rock. The orientation of the layer surfaces, or of the slip planes in the context of crystallographic slip, is determined by the normal vector, the so-called director, of these surfaces. Here the model (Mühlhaus et al., 2002a, b) is generalized to include thermal effects, and it is shown that in 2-D steady states the director is given by the gradient of the flow potential. The model is applied to anisotropic simple shear in which the directors are initially parallel to the shear direction, and the relative effects of textural hardening and thermal softening are demonstrated. We then turn to natural convection and compare the time evolution and approximately steady states of isotropic and anisotropic convection at a Rayleigh number Ra = 5.64 x 10^5 for aspect ratios of the experimental domain of 1 and 2, respectively. The isotropic case has a simple steady-state solution, whereas in the orthotropic convection model patterns evolve continuously in the core of the convection cell, so that only a near-steady condition is possible. This near-steady state shows well-aligned boundary layers, and the number of convection cells that develop appears to be reduced in the orthotropic case. At the moderate Rayleigh numbers explored here, we found only minor influences of the change from aspect ratio one to two in the model domain.
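
The statement that in 2-D steady states the director is given by the gradient of the flow potential can be illustrated numerically: below, a toy single-cell flow potential is differentiated on a grid and normalised to give a unit director field. The potential chosen is purely illustrative.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 101)
z = np.linspace(0.0, 1.0, 101)
X, Z = np.meshgrid(x, z, indexing="ij")
psi = np.sin(np.pi * X) * np.sin(np.pi * Z)   # toy single-cell flow potential

dpsi_dx, dpsi_dz = np.gradient(psi, x, z)     # grad(psi) on the grid
mag = np.hypot(dpsi_dx, dpsi_dz) + 1e-12      # avoid division by zero at stagnation points
n_x, n_z = dpsi_dx / mag, dpsi_dz / mag       # unit director field
print(n_x[50, 25], n_z[50, 25])
```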

Relevance: 100.00%

Abstract:

In this paper, we studied vapor-liquid equilibria (VLE) and the adsorption of ethylene on graphitized thermal carbon black and in slit pores whose walls are composed of graphene layers. Simple models based on a one-center Lennard-Jones (LJ) potential and a two-center united-atom (UA)-LJ potential are investigated to assess the impact of the choice of potential model on the description of VLE and adsorption behavior. We used Monte Carlo simulation with grand canonical Monte Carlo (GCMC) and Gibbs ensemble Monte Carlo techniques. The one-center potential model cannot adequately describe the VLE over the practical range of temperature from the triple point to the critical point. The two-center potential model (Wick et al., J. Phys. Chem. B 2000, 104, 8008-8016), on the other hand, performs well in describing the VLE (saturated vapor and liquid densities and vapor pressure) over this wide temperature range. This UA-LJ model is then used to study the adsorption of ethylene on graphitized thermal carbon black and in slit pores. Agreement between the GCMC simulation results and the experimental data on graphitized thermal carbon black at moderate temperatures is excellent, demonstrating the potential of the GCMC method and that the proper choice of potential model is essential for investigating adsorption. For slit pores of various sizes, we found that the behavior of ethylene exhibits a number of features that are not manifested in studies of spherical LJ particles. In particular, the singlet density distribution as a function of distance across the pore and of the angle between the molecular axis and the z direction provides rich information about how molecules arrange themselves as the pore width is varied. This arrangement was found to be very sensitive to the pore width.
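
A sketch of the two-center united-atom picture: each ethylene is two CH2 sites a fixed bond length apart, and the molecule-molecule energy is the sum of four site-site LJ terms. The parameter values below are placeholders, not the Wick et al. values.

```python
import numpy as np

EPS_K = 85.0    # site epsilon / k_B in K (assumed)
SIGMA = 3.7     # site sigma in angstrom (assumed)
BOND = 1.33     # CH2-CH2 separation in angstrom

def lj(r):
    s6 = (SIGMA / r) ** 6
    return 4.0 * EPS_K * (s6 ** 2 - s6)

def sites(center, axis):
    u = axis / np.linalg.norm(axis)
    return [center - 0.5 * BOND * u, center + 0.5 * BOND * u]

def pair_energy(c1, a1, c2, a2):
    """Sum of the four CH2-CH2 LJ interactions between two molecules (in K)."""
    return sum(lj(np.linalg.norm(s1 - s2))
               for s1 in sites(c1, a1) for s2 in sites(c2, a2))

# two parallel molecules 4 angstrom apart
print(pair_energy(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                  np.array([4.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])))
```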

Relevance: 100.00%

Abstract:

Numerical simulations of turbulence-driven flow in a dense medium cyclone with magnetite medium have been conducted using Fluent. The predicted air core shape and diameter were found to be close to the experimental results measured by gamma-ray tomography. It is possible that the large eddy simulation (LES) turbulence model combined with the mixture multi-phase model can be used to predict the air/slurry interface accurately, although LES may need a finer grid. Multi-phase simulations (air/water/medium) show appropriate medium segregation effects but over-predict the level of segregation compared with that measured by gamma-ray tomography, in particular the medium concentrations near the wall. We further investigated the prediction of axial segregation of magnetite using the LES turbulence model together with the multi-phase mixture model and viscosity corrections according to the feed particle loading factor. The addition of lift forces and the viscosity correction improved the predictions, especially near the wall; predicted density profiles are very close to the gamma-ray tomography data, showing a clear density drop near the wall. The effect of the size distribution of the magnetite was studied in full. Interestingly, the ultra-fine magnetite sizes (2 and 7 um) are distributed uniformly throughout the cyclone, and as the size of the magnetite increases, more segregation occurs close to the wall. The cut density (d50) of the magnetite segregation is 32 um, which is expected with a superfine magnetite feed size distribution. At higher feed densities the agreement between the correlations of Dungilson (1999) and Wood (1990) and the CFD is reasonably good, but the overflow density is lower than the model predictions. It is believed that excessive underflow volumetric flow rates are responsible for the under-prediction of the overflow density.
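
The abstract does not state which viscosity correction was applied for the feed particle loading; a common generic choice is the Krieger-Dougherty relative viscosity, sketched here for a water-based medium.

```python
def slurry_viscosity(mu_carrier, phi, phi_max=0.62, intrinsic=2.5):
    """Krieger-Dougherty correction: viscosity grows with solids volume
    fraction phi and diverges as phi -> phi_max. Illustrative only."""
    return mu_carrier * (1.0 - phi / phi_max) ** (-intrinsic * phi_max)

print(slurry_viscosity(1.0e-3, 0.20))   # Pa.s at 20 vol% solids
```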

Relevance: 100.00%

Abstract:

Aim: To develop an appropriate dosing strategy for continuous intravenous infusion (CII) of enoxaparin by minimizing the percentage of steady-state anti-Xa concentrations (Css) outside the therapeutic range of 0.5-1.2 IU/ml. Methods: A nonlinear mixed-effects model was developed with NONMEM for 48 adult patients who received CII of enoxaparin with infusion durations ranging from 8 to 894 h at rates between 100 and 1600 IU/h. Three hundred and sixty-three anti-Xa concentration measurements were available from patients who received CII; these were combined with 309 anti-Xa concentrations from 35 patients who received subcutaneous enoxaparin. The effects of age, body size, height, sex, creatinine clearance (CrCL) and patient location [intensive care unit (ICU) or general medical unit] on pharmacokinetic parameters were evaluated. Monte Carlo simulations were used to (i) evaluate covariate effects on Css and (ii) compare the impact of different infusion rates on predicted Css. The best dose was selected on the basis of the highest probability that the achieved Css would lie within the therapeutic range. Results: A two-compartment linear model, with additive and proportional residual error for general medical unit patients and only a proportional error for ICU patients, provided the best description of the data. CrCL and weight were found to significantly affect clearance and the volume of distribution of the central compartment, respectively. Simulations suggested that the best doses for ICU patients were 50 IU/kg per 12 h (4.2 IU/kg/h) if CrCL < 30 ml/min; 60 IU/kg per 12 h (5.0 IU/kg/h) if CrCL was 30-50 ml/min; and 70 IU/kg per 12 h (5.8 IU/kg/h) if CrCL > 50 ml/min. The best doses for general medical unit patients were 60 IU/kg per 12 h (5.0 IU/kg/h) if CrCL < 30 ml/min; 70 IU/kg per 12 h (5.8 IU/kg/h) if CrCL was 30-50 ml/min; and 100 IU/kg per 12 h (8.3 IU/kg/h) if CrCL > 50 ml/min. These doses gave the lowest and most nearly equal probabilities of being above or below the therapeutic range, and the highest probability that the achieved Css would lie within it. Conclusion: The dose of enoxaparin should be individualized to the patient's renal function and weight. There is some evidence to support slightly lower doses of CII enoxaparin for patients in the ICU setting.
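
The recommended rates can be written directly as a lookup. The band edges at 30 and 50 ml/min are handled as <30, 30-50 and >50 to match the abstract; the treatment of the exact boundary values is an assumption.

```python
def cii_rate(crcl_ml_min, icu=False):
    """Best CII enoxaparin rate in IU/kg/h from the reported simulations."""
    if crcl_ml_min < 30:
        return 4.2 if icu else 5.0
    if crcl_ml_min <= 50:
        return 5.0 if icu else 5.8
    return 5.8 if icu else 8.3

def infusion_rate_iu_h(weight_kg, crcl_ml_min, icu=False):
    return weight_kg * cii_rate(crcl_ml_min, icu)

print(infusion_rate_iu_h(80, 45, icu=True))   # 80-kg ICU patient, CrCL 45 -> 400 IU/h
```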

Relevance: 100.00%

Abstract:

Early work has shown that variation in the grain yield of rice cultivars grown under water stress is associated with plant water status, mainly the maintenance of high leaf water potential (LWP) at the flowering and grain-filling stages, and considerable variation in LWP among rice varieties has been recorded. The present work was designed to investigate genotypic consistency in water potential within the plant and under canopy manipulation intended to vary plant water requirement. In a glasshouse experiment with six rice genotypes, a consistent water potential gradient from stem base to leaf tip was observed, and leaf tip water potential was found to be the minimum LWP recorded at any time of stress. Genotypes with similar canopy size could maintain different levels of LWP under stress conditions. In a field experiment with four selected lines, four canopy sizes and two canopy mixture treatments were imposed prior to control, mild and severe water stress conditions. Line differences in LWP and relative water content (RWC) were expressed under both mild and severe stress, regardless of canopy size, tiller number and whether the lines were mixed with another line with a different capacity to maintain LWP. Although there were some differences among canopy size treatments in radiation interception under the three water conditions, canopy manipulation (plant size) within a line did not affect the expression of LWP, and hence genotypic variation in LWP was maintained. Under both glasshouse and field conditions, lines that maintained high LWP had larger xylem diameters and stem areas than those with low LWP. The results indicate that the size of the vascular bundles could influence the maintenance of plant water relations under water deficit.

Relevance: 100.00%

Abstract:

Adsorbents were prepared from coal fly ash treated by a solid-state fusion method using NaOH, and amorphous aluminosilicate geopolymers were found to form. These fly-ash-derived inorganic polymers were assessed as potential adsorbents for the removal of basic dyes, methylene blue and crystal violet, from aqueous solution. The adsorption capacity of the synthesised adsorbents was found to depend on the preparation conditions, such as the NaOH:fly-ash ratio and the fusion temperature, with the optimum being a 1.2:1 weight ratio of NaOH:fly ash at 250-350 degrees C. The synthesised materials exhibit much higher adsorption capacity than fly ash itself or natural zeolite. The adsorption isotherms can be fitted by the Langmuir and Freundlich models, with the two-site Langmuir model producing the best results. The fly-ash-derived geopolymeric adsorbents show higher adsorption capacity for crystal violet than for methylene blue, and the adsorption temperature influences the adsorption capacity. Kinetic studies show that the adsorption process follows pseudo-second-order kinetics.
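
A sketch of the isotherm comparison described: fit the Langmuir, Freundlich and two-site Langmuir models to equilibrium data and compare residuals. The data points below are placeholders, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, qm, b):
    return qm * b * c / (1.0 + b * c)

def freundlich(c, kf, n):
    return kf * c ** (1.0 / n)

def langmuir_two_site(c, qm1, b1, qm2, b2):
    # two independent site populations
    return langmuir(c, qm1, b1) + langmuir(c, qm2, b2)

ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])  # mg/l, placeholder
qe = np.array([8.0, 14.0, 22.0, 30.0, 36.0, 40.0])   # mg/g, placeholder

for model, p0 in [(langmuir, [45.0, 0.05]),
                  (freundlich, [4.0, 3.0]),
                  (langmuir_two_site, [25.0, 0.1, 25.0, 0.01])]:
    popt, _ = curve_fit(model, ce, qe, p0=p0, maxfev=20000)
    sse = float(np.sum((qe - model(ce, *popt)) ** 2))
    print(model.__name__, np.round(popt, 3), "SSE =", round(sse, 3))
```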

Relevance: 100.00%

Abstract:

All muscle contractions are dependent on the functioning of motor units. In diseases such as amyotrophic lateral sclerosis (ALS), progressive loss of motor units leads to gradual paralysis. A major difficulty in the search for a treatment for these diseases has been the lack of a reliable measure of disease progression. One possible measure would be an estimate of the number of surviving motor units. Despite over 30 years of motor unit number estimation (MUNE), all proposed methods have been met with practical and theoretical objections. Our aim is to develop a method of MUNE that overcomes these objections. We record the compound muscle action potential (CMAP) from a selected muscle in response to a graded electrical stimulation applied to the nerve. As the stimulus increases, the threshold of each motor unit is exceeded and the size of the CMAP increases until a maximum response is obtained. However, the threshold potential required to excite an axon is not a precise value but fluctuates over a small range, leading to probabilistic activation of motor units in response to a given stimulus. When the threshold ranges of motor units overlap, there may be alternation, where the number of motor units that fire in response to a given stimulus is variable; increments in the value of the CMAP then correspond to the firing of different combinations of motor units. At a fixed stimulus, variability in the CMAP, measured as variance, can be used to conduct MUNE by the "statistical" or "Poisson" method. However, this method relies on the assumptions that the number of motor units firing probabilistically follows a Poisson distribution and that all single motor unit action potentials (MUAPs) have a fixed, identical size. These assumptions are not necessarily correct. We propose a Bayesian statistical methodology for analyzing electrophysiological data to provide an estimate of motor unit numbers. Our method of MUNE incorporates the variability of the threshold, the variability between and within single MUAPs, and baseline variability. The model not only gives the most probable number of motor units but also provides information about both the population of units and individual units. We use Markov chain Monte Carlo to obtain information about the characteristics of individual motor units and about the population of motor units, and the Bayesian information criterion for MUNE. We test our method on three subjects. It provides a reproducible estimate for a patient with stable but severe ALS; in a serial study, it demonstrates a decline in motor unit numbers in a patient with rapidly advancing disease; and, for the last patient, it shows the capacity to estimate a larger number of motor units.
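
The "statistical"/"Poisson" method the abstract criticises is easy to state: if the probabilistically firing unit count is Poisson and every MUAP has one fixed size s, then across repeated sweeps var = s * mean, so s = var/mean and MUNE = max CMAP / s. A sketch, to make those assumptions concrete:

```python
import numpy as np

def poisson_mune(cmap_sweeps, max_cmap):
    """Classical Poisson-method estimate at a fixed submaximal stimulus.
    Relies on the very assumptions the Bayesian approach relaxes:
    Poisson firing counts and identical, fixed MUAP sizes."""
    m, v = np.mean(cmap_sweeps), np.var(cmap_sweeps)
    s = v / m                                  # implied single-MUAP size
    return max_cmap / s

rng = np.random.default_rng(2)
sweeps = rng.poisson(8, size=200) * 0.05   # 200 sweeps, true MUAP size 0.05 mV
print(poisson_mune(sweeps, max_cmap=5.0))  # ~ 5.0 / 0.05 = 100 units
```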

Relevance: 100.00%

Abstract:

Ecological regions are increasingly used as a spatial unit for planning and environmental management. It is important to define these regions in a scientifically defensible way, to justify decisions made on the basis that they are representative of broad environmental assets. This paper describes a methodology and tool for identifying cohesive bioregions. The methodology applies an elicitation process to obtain geographical descriptions of bioregions, each of which is transformed into a Normal density estimate on the environmental variables within that region. This prior information is balanced against a data classification of environmental datasets, using a Bayesian statistical modelling approach, to objectively map ecological regions. The method is called model-based clustering because it fits a Normal mixture model to the clusters associated with regions; it also addresses the uncertainty in environmental datasets that arises from overlapping clusters.
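
A maximum-likelihood sketch of the balance described: elicited regional descriptions, summarised as Normal means on the environmental variables, seed an EM-fitted Normal mixture over gridded data. scikit-learn's GaussianMixture stands in for the paper's fully Bayesian treatment; all numbers are placeholders.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

elicited_means = np.array([[22.0, 1200.0],   # e.g. mean temperature, rainfall
                           [27.0,  400.0],   # per elicited bioregion
                           [18.0, 2000.0]])

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(m, [1.5, 150.0], size=(200, 2))
               for m in elicited_means])     # placeholder environmental grid data

gmm = GaussianMixture(n_components=3, means_init=elicited_means, random_state=0)
region = gmm.fit_predict(X)                  # bioregion label per grid cell
print(np.round(gmm.means_, 1))
```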