905 results for Mixture components


Relevance: 100.00%

Abstract:

Objectives: This study examined the retention of solvents within experimental HEMA/solvent primers under two conditions for solvent evaporation: from a free surface or from a dentine surface. Methods: Experimental primers were prepared by mixing 35% HEMA with 65% water, methanol, ethanol or acetone (v/v). Aliquots of each primer (50 μl) were placed in glass wells or applied to the surface of acid-etched dentine cubes (2 mm × 2 mm × 2 mm) (n = 5). For both conditions (i.e. free surface or dentine cubes), the change in primer mass due to solvent evaporation was measured gravimetrically for 10 min at 51% RH and 21 °C. The rate of solvent evaporation was calculated as the loss of primer mass (%) over time. Data were analysed by two-way ANOVA and Student-Newman-Keuls tests (p < 0.05). Results: Solvent retention (%) and evaporation rate (%/min) differed significantly depending on the solvent present in the primer and the evaporation condition (free surface or dentine cubes) (p < 0.05). Under both conditions, the greatest amount of retained solvent was observed for the HEMA/water primer. The rate of solvent evaporation for the HEMA/acetone primer was roughly 2- to 10-times higher than for the HEMA/water primer, depending on whether evaporation occurred from a free surface or from dentine cubes, respectively. The rate of solvent evaporation varied with time and was generally highest at the earliest periods. Conclusions: The rate of solvent evaporation and the amount of solvent retained within HEMA/solvent primers were influenced by the type of solvent and the evaporation condition. (C) 2009 Elsevier Ltd. All rights reserved.
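The per-interval evaporation rate described above (% mass loss per minute) can be sketched numerically. The readings below are illustrative stand-ins, not the study's data:

```python
# Hypothetical gravimetric readings: cumulative primer mass loss (%) at
# increasing times. Values are illustrative, not the study's measurements.
times_min = [0, 1, 2, 5, 10]
mass_loss_pct = [0.0, 12.0, 18.0, 24.0, 27.0]

def evaporation_rates(times, losses):
    """Mean evaporation rate (% / min) over each successive interval."""
    return [(l2 - l1) / (t2 - t1)
            for (t1, l1), (t2, l2) in zip(zip(times, losses),
                                          zip(times[1:], losses[1:]))]

rates = evaporation_rates(times_min, mass_loss_pct)
# With these numbers the rate declines over time, matching the observation
# that evaporation is fastest at the earliest periods.
```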

Relevance: 100.00%

Abstract:

The increasing worldwide demand for electricity drives the development of clean and renewable energy resources. In the field of portable power devices, size and weight are not the only important considerations; the fuel and its storage are also critical issues. In this respect, direct methanol (MeOH) fuel cells (DMFCs) play an important role, as they can offer high power and energy density, low emissions, ambient operating conditions, and fast, convenient refuelling.

Relevance: 70.00%

Abstract:

Estimation of the number of mixture components (k) is an unsolved problem. Available methods for estimating k include bootstrapping the likelihood ratio test statistic and optimizing a variety of validity functionals such as AIC, BIC/MDL, and ICOMP. We investigate minimization of the distance between the fitted mixture model and the true density as a method for estimating k. The distances considered are the Kullback-Leibler (KL) and L2 distances, which we estimate using cross validation. A reliable estimate of k is obtained by voting among B estimates of k corresponding to B cross-validation estimates of distance. This estimation method with the KL distance is very similar to the Monte Carlo cross-validated likelihood methods discussed by Smyth (2000). Focusing on univariate normal mixtures, we present simulation studies that compare the cross-validated distance method with AIC, BIC/MDL, and ICOMP. We also apply the cross-validation distance approach, along with AIC, BIC/MDL, and ICOMP, to data from an osteoporosis drug trial in order to find groups that respond differentially to treatment.
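A minimal sketch of the voting idea: held-out log-likelihood serves as the cross-validation proxy for small KL distance, each fold votes for its best k, and the majority vote is returned. scikit-learn's `GaussianMixture` stands in for the fitted mixture, and the data and fold count are illustrative assumptions, not the paper's setup:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
# Synthetic univariate two-component normal mixture (illustrative data).
X = np.concatenate([rng.normal(-2.0, 1.0, 200),
                    rng.normal(3.0, 1.0, 200)]).reshape(-1, 1)

def vote_for_k(X, k_max=5, B=5):
    """Each of B folds votes for the k maximising held-out log-likelihood
    (a cross-validation proxy for small KL distance to the true density)."""
    votes = []
    for train_idx, test_idx in KFold(n_splits=B, shuffle=True,
                                     random_state=0).split(X):
        scores = [GaussianMixture(n_components=k, random_state=0)
                  .fit(X[train_idx]).score(X[test_idx])
                  for k in range(1, k_max + 1)]
        votes.append(1 + int(np.argmax(scores)))
    # Majority vote over the B per-fold estimates of k.
    return max(set(votes), key=votes.count)

k_hat = vote_for_k(X)
```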

Relevance: 70.00%

Abstract:

A two-component mixture regression model that allows simultaneously for heterogeneity and dependency among observations is proposed. By specifying random effects explicitly in the linear predictor of the mixture probability and the mixture components, parameter estimation is achieved by maximising the corresponding best linear unbiased prediction type log-likelihood. Approximate residual maximum likelihood estimates are obtained via an EM algorithm in the manner of generalised linear mixed models (GLMMs). The method can be extended to a g-component mixture regression model with component densities from the exponential family, leading to the development of the class of finite mixture GLMMs. For illustration, the method is applied to analyse neonatal length of stay (LOS). It is shown that identification of pertinent factors that influence hospital LOS can provide important information for health care planning and resource allocation. (C) 2002 Elsevier Science B.V. All rights reserved.
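The EM cycle for a two-component mixture regression can be sketched in a fixed-effects simplification; the paper's random effects and the REML correction are omitted here, and all data and starting values are synthetic, illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic observations drawn from two regression lines plus noise.
n = 300
x = rng.uniform(0.0, 10.0, n)
z = rng.random(n) < 0.5
y = np.where(z, 1.0 + 2.0 * x, 8.0 - 1.0 * x) + rng.normal(0.0, 0.5, n)

# EM for a two-component mixture of linear regressions (fixed effects
# only; the proposed model adds random effects to these predictors).
X = np.column_stack([np.ones(n), x])
beta = np.array([[0.0, 1.0], [5.0, -0.5]])   # initial intercept/slope pairs
sigma, pi = 1.0, 0.5
for _ in range(100):
    # E-step: responsibilities from (unnormalised) normal densities.
    resid = y[:, None] - X @ beta.T
    dens = np.exp(-0.5 * (resid / sigma) ** 2) / sigma
    w = dens * np.array([pi, 1.0 - pi])
    w /= w.sum(axis=1, keepdims=True)
    # M-step: weighted least squares per component, then mixing weight
    # and common residual scale.
    for k in range(2):
        Wk = w[:, k]
        beta[k] = np.linalg.solve(X.T @ (Wk[:, None] * X), X.T @ (Wk * y))
    pi = w[:, 0].mean()
    sigma = np.sqrt((w * (y[:, None] - X @ beta.T) ** 2).sum() / n)
```

With well-separated lines the fitted slopes should recover the generating values of 2 and -1.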

Relevance: 70.00%

Abstract:

Understanding how aquatic species grow is fundamental in fisheries because stock assessment often relies on growth dependent statistical models. Length-frequency-based methods become important when more applicable data for growth model estimation are either not available or very expensive. In this article, we develop a new framework for growth estimation from length-frequency data using a generalized von Bertalanffy growth model (VBGM) framework that allows for time-dependent covariates to be incorporated. A finite mixture of normal distributions is used to model the length-frequency cohorts of each month with the means constrained to follow a VBGM. The variances of the finite mixture components are constrained to be a function of mean length, reducing the number of parameters and allowing for an estimate of the variance at any length. To optimize the likelihood, we use a minorization–maximization (MM) algorithm with a Nelder–Mead sub-step. This work was motivated by the decline in catches of the blue swimmer crab (BSC) (Portunus armatus) off the east coast of Queensland, Australia. We test the method with a simulation study and then apply it to the BSC fishery data.
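The two constraints described above (component means following a VBGM, component variances a function of mean length) can be sketched directly; the parameter values and the linear form for the standard deviation are illustrative assumptions, not the fitted BSC estimates:

```python
import numpy as np

def vbgm_mean(t, L_inf, K, t0):
    """von Bertalanffy mean length at age t: L_inf * (1 - exp(-K (t - t0)))."""
    return L_inf * (1.0 - np.exp(-K * (t - t0)))

def component_sd(mu, a=0.5, b=0.05):
    """Component s.d. constrained to be a function of mean length.
    A linear form a + b * mu is assumed here purely for illustration."""
    return a + b * mu

# Illustrative ages (months) and growth parameters, not fitted values.
ages = np.array([3.0, 6.0, 12.0, 24.0])
mu = vbgm_mean(ages, L_inf=180.0, K=0.12, t0=0.0)
sd = component_sd(mu)
```

Tying each monthly cohort's mixture-component mean and variance to these two functions is what reduces the parameter count relative to a free normal mixture per month.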

Relevance: 60.00%

Abstract:

Real-world learning tasks often involve high-dimensional data sets with complex patterns of missing features. In this paper we review the problem of learning from incomplete data from two statistical perspectives---the likelihood-based and the Bayesian. The goal is two-fold: to place current neural network approaches to missing data within a statistical framework, and to describe a set of algorithms, derived from the likelihood-based framework, that handle clustering, classification, and function approximation from incomplete data in a principled and efficient manner. These algorithms are based on mixture modeling and make two distinct appeals to the Expectation-Maximization (EM) principle (Dempster, Laird, and Rubin 1977)---both for the estimation of mixture components and for coping with the missing data.
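The core E-step computation behind these algorithms is filling in missing coordinates with their conditional expectation under the current Gaussian estimate; in a mixture, the same formula is applied per component. A single-Gaussian sketch with illustrative numbers:

```python
import numpy as np

def conditional_fill(x, mu, Sigma):
    """E-step imputation: replace missing entries (NaN) with their
    conditional mean given the observed entries under N(mu, Sigma):
    E[x_m | x_o] = mu_m + S_mo S_oo^{-1} (x_o - mu_o)."""
    miss = np.isnan(x)
    if not miss.any():
        return x.copy()
    obs = ~miss
    filled = x.copy()
    S_oo = Sigma[np.ix_(obs, obs)]
    S_mo = Sigma[np.ix_(miss, obs)]
    filled[miss] = mu[miss] + S_mo @ np.linalg.solve(S_oo, x[obs] - mu[obs])
    return filled

mu = np.array([0.0, 0.0])
Sigma = np.array([[1.0, 0.8], [0.8, 1.0]])
x = np.array([2.0, np.nan])
x_hat = conditional_fill(x, mu, Sigma)
# With covariance 0.8 and unit variances, the missing coordinate is
# imputed as 0.8 * 2.0 = 1.6.
```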

Relevance: 60.00%

Abstract:

We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIMs). Learning is treated as a maximum likelihood problem; in particular, we present an Expectation-Maximization (EM) algorithm for adjusting the parameters of the architecture. We also develop an on-line learning algorithm in which the parameters are updated incrementally. Comparative simulation results are presented in the robot dynamics domain.
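The forward pass of such an architecture, flattened to a single level for brevity, can be sketched as a softmax gate blending linear experts; the weights below are arbitrary illustrative numbers:

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def moe_predict(x, gate_W, expert_W):
    """One-level mixture of linear experts: both the gate (mixture
    coefficients) and the experts (mixture components) are linear models.
    x: (d,) input; gate_W, expert_W: (K, d) weight matrices."""
    g = softmax(gate_W @ x)      # mixing coefficients, sum to 1
    preds = expert_W @ x         # one scalar prediction per expert
    return g @ preds, g

x = np.array([1.0, 2.0])
gate_W = np.array([[1.0, 0.0], [0.0, 1.0]])
expert_W = np.array([[2.0, 0.0], [0.0, 3.0]])
y, g = moe_predict(x, gate_W, expert_W)
```

In the hierarchical version, each expert is itself another gated mixture, giving the tree structure; EM alternates between crediting experts via the gate posteriors and refitting the GLIMs with those credits as weights.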

Relevance: 60.00%

Abstract:

The BTEX (benzene, toluene, ethylbenzene and xylene) mixture is an environmental pollutant with a high potential to contaminate water resources, especially groundwater. Bioremediation by microorganisms has often been used as a tool for removing BTEX from contaminated sites. Biological assays are useful for evaluating the efficiency of bioremediation processes and for identifying the toxicity of the original contaminants, as well as the effects on test organisms of any metabolites formed during biodegradation. In this study, we evaluated the genotoxic and mutagenic potential of five different BTEX concentrations in rat hepatoma tissue culture (HTC) cells, using comet and micronucleus assays, before and after biodegradation. A mutagenic effect was observed for the highest concentration tested and for its respective non-biodegraded concentration. Genotoxicity was significant for all non-biodegraded concentrations and not significant for the biodegraded ones. According to our results, BTEX is mutagenic at concentrations close to its water solubility, and genotoxic even at lower concentrations, in contrast to some results reported for the mixture components when tested individually. Our results suggest a synergistic effect for the mixture and indicate that biodegradation is a safe and efficient methodology to apply at BTEX-contaminated sites. © 2012 Elsevier Ltd.

Relevance: 60.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 60.00%

Abstract:

Historical information is always relevant for clinical trial design. Additionally, if incorporated in the analysis of a new trial, historical data allow the number of subjects to be reduced. This decreases costs and trial duration, facilitates recruitment, and may be more ethical. Yet, under prior-data conflict, an overly optimistic use of historical data may be inappropriate. We address this challenge by deriving a Bayesian meta-analytic-predictive prior from historical data, which is then combined with the new data. This prospective approach is equivalent to a meta-analytic-combined analysis of historical and new data if parameters are exchangeable across trials. The prospective Bayesian version requires a good approximation of the meta-analytic-predictive prior, which is not available analytically. We propose two- or three-component mixtures of standard priors, which allow for good approximations and, for the one-parameter exponential family, straightforward posterior calculations. Moreover, since one of the mixture components is usually vague, mixture priors will often be heavy-tailed and therefore robust. Further robustness and a more rapid reaction to prior-data conflicts can be achieved by adding an extra weakly-informative mixture component. Use of historical prior information is particularly attractive for adaptive trials, as the randomization ratio can then be changed in case of prior-data conflict. Both frequentist operating characteristics and posterior summaries for various data scenarios show that these designs have desirable properties. We illustrate the methodology for a phase II proof-of-concept trial with historical controls from four studies. Robust meta-analytic-predictive priors alleviate prior-data conflicts; they should encourage better and more frequent use of historical data in clinical trials.
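The "straightforward posterior calculations" for a conjugate mixture prior can be sketched for the binomial case: each Beta component updates conjugately, and the mixture weights are rescaled by each component's marginal likelihood. The prior weights, Beta parameters, and data below are illustrative assumptions, not the paper's phase II example:

```python
import math

def log_betafn(a, b):
    """log of the Beta function B(a, b)."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def update_beta_mixture(weights, params, r, n):
    """Posterior of a mixture-of-Beta prior after r successes in n
    Binomial trials: Beta(a, b) -> Beta(a + r, b + n - r), with each
    weight rescaled by the component's marginal likelihood (the common
    binomial coefficient cancels in the normalisation)."""
    new_params, log_marg = [], []
    for a, b in params:
        new_params.append((a + r, b + n - r))
        log_marg.append(log_betafn(a + r, b + n - r) - log_betafn(a, b))
    unnorm = [w * math.exp(lm) for w, lm in zip(weights, log_marg)]
    total = sum(unnorm)
    return [u / total for u in unnorm], new_params

# Robust prior: an informative component near 0.25 plus a vague Beta(1, 1).
prior_w = [0.8, 0.2]
prior_p = [(10.0, 30.0), (1.0, 1.0)]
post_w, post_p = update_beta_mixture(prior_w, prior_p, r=18, n=20)
# Data at 18/20 conflict with the informative component, so posterior
# weight shifts heavily toward the vague component - the robustness
# mechanism the abstract describes.
```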

Relevance: 60.00%

Abstract:

We analyse how the Generative Topographic Mapping (GTM) can be modified to cope with missing values in the training data. Our approach is based on an Expectation-Maximisation (EM) method which estimates the parameters of the mixture components and at the same time deals with the missing values. We incorporate this algorithm into a hierarchical GTM. We verify the method on a toy data set (using a single GTM) and a realistic data set (using a hierarchical GTM). The results show our algorithm can help to construct informative visualisation plots, even when some of the training points are corrupted with missing values.

Relevance: 60.00%

Abstract:

Diffusion-ordered NMR spectroscopy ("DOSY") is a useful tool for the identification of mixture components. In its basic form it relies on simple differences in hydrodynamic radius to distinguish between different species. This can be very effective where species have significantly different molecular sizes, but generally fails for isomeric species. The use of surfactant co-solutes can allow isomeric species to be distinguished by virtue of their different degrees of interaction with micelles or reversed micelles. The use of micelle-assisted DOSY to resolve the NMR spectra of isomers is illustrated for the case of the three dihydroxybenzenes (catechol, resorcinol, and hydroquinone) in aqueous solution containing sodium dodecyl sulfate micelles, and in chloroform solution containing AOT reversed micelles. © 2009 American Chemical Society.
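The "hydrodynamic radius" dependence that basic DOSY exploits is commonly modelled by the Stokes-Einstein relation, D = kB·T / (6πηr). A sketch with illustrative radii and viscosity (the specific values are assumptions, not measurements from this work):

```python
import math

def stokes_einstein_D(T_kelvin, eta_pa_s, r_m):
    """Stokes-Einstein diffusion coefficient (m^2/s) for a sphere of
    hydrodynamic radius r_m in a solvent of viscosity eta_pa_s."""
    kB = 1.380649e-23  # Boltzmann constant, J/K
    return kB * T_kelvin / (6.0 * math.pi * eta_pa_s * r_m)

# Illustrative comparison: a free small molecule (r ~ 0.35 nm) vs. a
# micelle-bound one (effective r ~ 2 nm) in water at 25 C (eta ~ 0.89 mPa s).
D_free = stokes_einstein_D(298.15, 8.9e-4, 0.35e-9)
D_micelle = stokes_einstein_D(298.15, 8.9e-4, 2.0e-9)
```

The several-fold drop in D on binding to a micelle is what lets otherwise size-identical isomers be separated in the diffusion dimension when they partition into micelles to different degrees.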

Relevance: 40.00%

Abstract:

Motivation: The clustering of gene profiles across some experimental conditions of interest contributes significantly to the elucidation of unknown gene function, the validation of gene discoveries and the interpretation of biological processes. However, this clustering problem is not straightforward, as the profiles of the genes are not all independently distributed and the expression levels may have been obtained from an experimental design involving replicated arrays. Ignoring the dependence between the gene profiles and the structure of the replicated data can result in important sources of variability in the experiments being overlooked in the analysis, with the consequent possibility of misleading inferences being made. We propose a random-effects model that provides a unified approach to the clustering of genes with correlated expression levels measured in a wide variety of experimental situations. Our model is an extension of the normal mixture model that accounts for the correlations between the gene profiles and enables covariate information to be incorporated into the clustering process. Hence the model is applicable to longitudinal studies with or without replication, for example, time-course experiments by using time as a covariate, and to cross-sectional experiments by using categorical covariates to represent the different experimental classes. Results: We show that our random-effects model can be fitted by maximum likelihood via the EM algorithm, for which the E (expectation) and M (maximization) steps can be implemented in closed form. Hence our model can be fitted deterministically, without the need for time-consuming Monte Carlo approximations. The effectiveness of our model-based procedure for the clustering of correlated gene profiles is demonstrated on three real datasets, representing typical microarray experimental designs, covering time-course, repeated-measurement and cross-sectional data. In these examples, relevant clusters of the genes are obtained and are supported by existing gene-function annotation. A synthetic dataset is also considered.

Relevance: 30.00%

Abstract:

Gene clustering is a useful exploratory technique for grouping together genes with similar expression levels under distinct cell cycle phases or distinct conditions. It helps the biologist to identify potentially meaningful relationships between genes. In this study, we propose a clustering method based on multivariate normal mixture models, where the number of clusters is predicted via sequential hypothesis tests: at each step, the method considers a mixture model of m components (m = 2 in the first step) and tests whether it should in fact be m - 1. If the hypothesis is rejected, m is increased and a new test is carried out. The method continues (increasing m) until the hypothesis is accepted. The theoretical core of the method is the full Bayesian significance test, an intuitive Bayesian approach which requires neither a model-complexity penalty nor positive probabilities for sharp hypotheses. Numerical experiments were based on a cDNA microarray dataset consisting of expression levels of 205 genes belonging to four functional categories, for 10 distinct strains of Saccharomyces cerevisiae. To analyze the method's sensitivity to data dimension, we performed principal components analysis on the original dataset and predicted the number of classes using 2 to 10 principal components. Compared to Mclust (model-based clustering), our method shows more consistent results.

Relevance: 30.00%

Abstract:

Normal mixture models are being increasingly used to model the distributions of a wide variety of random phenomena and to cluster sets of continuous multivariate data. However, for a set of data containing a group or groups of observations with longer-than-normal tails or atypical observations, the use of normal components may unduly affect the fit of the mixture model. In this paper, we consider a more robust approach by modelling the data with a mixture of t distributions. The use of the ECM algorithm to fit this t mixture model is described, and examples of its use are given in the context of clustering multivariate data in the presence of atypical observations in the form of background noise.
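The robustness mechanism can be sketched via the E-step of such a fit: alongside the usual component responsibilities, each point receives a gamma-scaling weight u = (ν + 1) / (ν + d²) that shrinks for outlying points, limiting their influence on the M-step. A univariate sketch with illustrative parameters (the paper treats the multivariate case):

```python
import numpy as np
from scipy import stats

def t_mixture_responsibilities(y, pis, mus, sigmas, nu):
    """E-step quantities for a univariate t mixture: posterior component
    probabilities tau under t densities, and the precision weights u
    that downweight atypical observations."""
    dens = np.array([p * stats.t.pdf(y, df=nu, loc=m, scale=s)
                     for p, m, s in zip(pis, mus, sigmas)])   # (K, n)
    tau = dens / dens.sum(axis=0, keepdims=True)
    # Squared standardised distance of each point from each component.
    d2 = ((y[None, :] - np.array(mus)[:, None])
          / np.array(sigmas)[:, None]) ** 2
    u = (nu + 1.0) / (nu + d2)   # E[u | y, component k]
    return tau, u

y = np.array([-2.0, 0.0, 2.0, 50.0])  # last point is a gross outlier
tau, u = t_mixture_responsibilities(y, [0.5, 0.5], [-1.0, 1.0],
                                    [1.0, 1.0], nu=4.0)
# The outlier's u weights are tiny, so it barely moves the component
# means in the subsequent M-step - unlike in a normal mixture.
```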