985 results for Mixture method


Relevance:

30.00%

Publisher:

Abstract:

A two-component mixture regression model that allows simultaneously for heterogeneity and dependency among observations is proposed. By specifying random effects explicitly in the linear predictor of the mixture probability and the mixture components, parameter estimation is achieved by maximising the corresponding best linear unbiased prediction (BLUP) type log-likelihood. Approximate residual maximum likelihood estimates are obtained via an EM algorithm in the manner of generalised linear mixed models (GLMMs). The method extends to a g-component mixture regression model with component densities from the exponential family, leading to the class of finite mixture GLMMs. For illustration, the method is applied to analyse neonatal length of stay (LOS). It is shown that identifying the pertinent factors that influence hospital LOS can provide important information for health care planning and resource allocation.
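
As a minimal illustration of the EM idea behind such models, the sketch below fits a plain two-component Gaussian mixture of linear regressions by maximum likelihood. It deliberately omits the random effects and the REML/BLUP-type estimation described in the abstract, and all function and variable names are hypothetical.

```python
import numpy as np

def em_mixture_regression(X, y, n_iter=200, tol=1e-8, seed=0):
    """Fit a two-component Gaussian mixture of linear regressions by EM.

    Deliberately simplified: no random effects, a single mixing proportion
    shared by all observations, and plain maximum likelihood rather than
    the REML/BLUP-type estimation described in the abstract.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    r = rng.uniform(0.0, 1.0, size=n)          # responsibilities P(component 1 | obs)
    beta = [np.zeros(p), np.zeros(p)]
    sigma2 = [np.var(y), np.var(y)]
    pi1, ll_old = 0.5, -np.inf
    for _ in range(n_iter):
        # M-step: weighted least squares and weighted variance per component
        for k, w in enumerate((r, 1.0 - r)):
            Xw = X * w[:, None]
            beta[k] = np.linalg.solve(Xw.T @ X, Xw.T @ y)
            resid = y - X @ beta[k]
            sigma2[k] = np.sum(w * resid**2) / np.sum(w)
        pi1 = r.mean()
        # E-step: responsibilities from the weighted component densities
        dens = []
        for k, pk in enumerate((pi1, 1.0 - pi1)):
            resid = y - X @ beta[k]
            dens.append(pk * np.exp(-0.5 * resid**2 / sigma2[k])
                        / np.sqrt(2.0 * np.pi * sigma2[k]))
        total = dens[0] + dens[1]
        r = dens[0] / total
        ll = np.sum(np.log(total))              # observed-data log-likelihood
        if abs(ll - ll_old) < tol:
            break
        ll_old = ll
    return pi1, beta, sigma2, ll
```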

Relevance:

30.00%

Publisher:

Abstract:

Adsorption of pure nitrogen, argon, acetone, chloroform and an acetone-chloroform mixture on graphitized thermal carbon black is considered at sub-critical conditions by means of the molecular layer structure theory (MLST). In the present version of the MLST, the adsorbed fluid is treated as a sequence of 2D molecular layers whose Helmholtz free energies are obtained directly from analysis of the experimental adsorption isotherms of the pure components. The interaction between nearest layers is accounted for within a mean-field approximation. This approach allows quantitative correlation of the experimental nitrogen and argon adsorption isotherms both in the monolayer region and in the multilayer range up to 10 molecular layers. In the case of acetone and chloroform the approach also yields excellent quantitative correlation of the adsorption isotherms, whereas molecular approaches such as non-local density functional theory (NLDFT) fail to describe them. We extend the method to calculate the Helmholtz free energy of an adsorbed mixture using a simple mixing rule, which allows mixture adsorption isotherms to be predicted from pure-component adsorption isotherms. The approach, which accounts for the difference in composition between molecular layers, is tested against experimental data for acetone-chloroform mixture (a non-ideal mixture) adsorption on graphitized thermal carbon black at 50 degrees C.

Relevance:

30.00%

Publisher:

Abstract:

Motivation: An important problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. We provide a straightforward and easily implemented method for estimating the posterior probability that an individual gene is null. The problem can be expressed in a two-component mixture framework, using an empirical Bayes approach. Current implementations of this approach either suffer limitations because of the minimal assumptions made, or, under more specific assumptions, are computationally intensive. Results: By converting the value of the test statistic used to assess the significance of each gene to a z-score, we propose a simple two-component normal mixture that adequately models the distribution of this score. The usefulness of the approach is demonstrated on three real datasets.
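
A minimal sketch of the kind of empirical Bayes calculation described here is given below: the z-scores are modelled as a two-component normal mixture with a standard normal null component, fitted by EM, and the fitted densities give the posterior probability that each gene is null. The standard-normal null and the function name are assumptions for illustration, not necessarily the paper's exact parameterisation.

```python
import numpy as np
from scipy.stats import norm

def posterior_null_prob(z, n_iter=500):
    """Empirical-Bayes posterior probability that each gene is null.

    Assumes a two-component normal mixture for the z-scores: a N(0, 1)
    null component and a N(mu1, sigma1^2) non-null component, fitted by EM.
    """
    z = np.asarray(z, float)
    pi0, mu1, sigma1 = 0.9, np.mean(z), 2.0 * np.std(z)   # crude starting values
    for _ in range(n_iter):
        f0 = pi0 * norm.pdf(z, 0.0, 1.0)
        f1 = (1.0 - pi0) * norm.pdf(z, mu1, sigma1)
        tau0 = f0 / (f0 + f1)                  # E-step: P(null | z_i)
        pi0 = tau0.mean()                      # M-step: mixture proportion
        w = 1.0 - tau0
        mu1 = np.sum(w * z) / np.sum(w)        # non-null mean
        sigma1 = np.sqrt(np.sum(w * (z - mu1) ** 2) / np.sum(w))
    return tau0
```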

Relevance:

30.00%

Publisher:

Abstract:

Solutions of fructose, maltodextrin (DE 5), and their mixtures at ratios of 20:80, 40:60, 50:50, 60:40, and 80:20 were gelled with 1% agar-agar and dried under convective-conductive conditions. The thin slabs were maintained at isothermal drying conditions of 30 and 50 degrees C. Yamamoto's simplified method, based on the regular regime approach, was used to calculate the effective moisture diffusivity. Both the drying rates and the moisture diffusivity exhibited strong concentration dependence, and this dependence was stronger for fructose and fructose-rich solutions. The moisture diffusivity and drying rates of the mixture solutions were both enhanced by the plasticizing effect of fructose on maltodextrin, which is explained through free-volume theory.

Relevance:

30.00%

Publisher:

Abstract:

Mixture Density Networks (MDNs) are a well-established method for modelling conditional probability densities, and are useful for complex multi-valued functions where conventional regression methods (such as MLPs) fail. In this paper we extend earlier work on a regularisation method for a special case of MDNs to the general case, using evidence-based regularisation, and we show how the Hessian of the MDN error function can be evaluated using R-propagation. The method is tested on two data sets and compared with early stopping.
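
For readers unfamiliar with MDNs, the sketch below shows the core computation: a small network maps each input to the mixing coefficients, means and widths of a Gaussian mixture over the target, and training minimises the resulting negative log-likelihood. This is a generic illustration, not the Netlab implementation; the parameter packing and names are assumed for the example.

```python
import numpy as np

def mdn_neg_log_likelihood(x, t, params, n_kernels=3):
    """Negative log-likelihood of a minimal Mixture Density Network.

    A single-hidden-layer network maps each input x_i to the parameters of
    a mixture of spherical Gaussians over the scalar target t_i.  `params`
    is an illustrative (W1, b1, W2, b2) tuple, not the Netlab layout.
    """
    W1, b1, W2, b2 = params
    h = np.tanh(x @ W1 + b1)                     # hidden activations
    out = h @ W2 + b2                            # raw mixture parameters
    K = n_kernels
    logits, mu, log_sigma = out[:, :K], out[:, K:2*K], out[:, 2*K:3*K]
    logits = logits - logits.max(axis=1, keepdims=True)
    pi = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # mixing coefficients
    sigma = np.exp(log_sigma)                    # positive kernel widths
    # Gaussian kernel densities for each component, evaluated at the targets
    phi = np.exp(-0.5 * ((t[:, None] - mu) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
    return -np.sum(np.log(np.sum(pi * phi, axis=1) + 1e-300))
```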

Relevance:

30.00%

Publisher:

Abstract:

Training Mixture Density Network (MDN) configurations within the NETLAB framework is slow because of the way the error function and its gradient are computed. By optimising the computation of these functions so that gradient information is computed in parameter space, training time is decreased by at least a factor of sixty for the example given. The reduced training time widens the range of problems to which MDNs can be practically applied, making the MDN framework an attractive method for the applied problem solver.

Relevance:

30.00%

Publisher:

Abstract:

Mixture Density Networks are a principled method for modelling conditional probability density functions which are non-Gaussian. This is achieved by modelling the conditional distribution for each pattern with a Gaussian mixture model whose parameters are generated by a neural network. This thesis presents a novel method for introducing regularisation in this context for the special case where the means and variances of the spherical Gaussian kernels in the mixtures are fixed to predetermined values. Guidelines for initialising these parameters are given, and it is shown how to apply the evidence framework to mixture density networks to achieve regularisation. This also provides an objective stopping criterion that can replace the 'early stopping' methods used previously. If the neural network used is an RBF network with fixed centres, this opens up new opportunities for improved initialisation of the network weights, which are exploited to start training relatively close to the optimum. The new method is demonstrated on two data sets: a simple synthetic data set, and a real-life data set of satellite scatterometer measurements used to infer the wind speed and wind direction near the ocean surface. For both data sets the regularisation method performs well in comparison with earlier published results. Ideas on how the constraint on the kernels may be relaxed to allow fully adaptable kernels are also presented.
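
The evidence framework mentioned here re-estimates the weight-decay coefficient from the curvature of the data term. A generic sketch of that re-estimation step, not the thesis's MDN-specific derivation, might look as follows; the function and argument names are illustrative.

```python
import numpy as np

def update_alpha(hessian_data, w, alpha, n_iter=10):
    """Re-estimate a weight-decay coefficient with the evidence framework.

    Generic MacKay-style re-estimation: gamma counts the effective number
    of well-determined parameters from the eigenvalues of the unregularised
    data Hessian, and alpha is set to gamma / (2 * E_W) with
    E_W = 0.5 * ||w||^2.  The thesis applies this idea to MDNs with fixed
    spherical kernels; the details there differ.
    """
    lam = np.linalg.eigvalsh(hessian_data)    # eigenvalues of the data term's Hessian
    lam = np.clip(lam, 0.0, None)             # guard against small negative eigenvalues
    e_w = 0.5 * float(w @ w)
    for _ in range(n_iter):
        gamma = np.sum(lam / (lam + alpha))   # effective number of parameters
        alpha = gamma / (2.0 * e_w)
    return alpha, gamma
```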

Relevance:

30.00%

Publisher:

Abstract:

The invention relates to a liquid bio-fuel mixture, and uses thereof in the generation of electrical power, mechanical power and/or heat. The liquid bio-fuel mixture is macroscopically single phase, and comprises a liquid condensate product of biomass fast pyrolysis, a bio-diesel component and an ethanol component.

Relevance:

30.00%

Publisher:

Abstract:

A new creep test, the Partial Triaxial Test (PTT), was developed to study the permanent deformation properties of asphalt mixtures. The PTT uses two identical platens whose diameters are smaller than that of the cylindrical asphalt mixture specimen: a base platen centrally placed under the specimen and a loading platen centrally placed on its top surface. A repeated compressive load is applied to the loading platen while the vertical deformation of the asphalt mixture is recorded. Triaxial repeated-load permanent deformation tests (TRTs) and PTTs were conducted on AC20 and SMA13 asphalt mixtures at 40°C and 60°C to provide the parameters of the creep constitutive relations for ABAQUS finite element models (FEMs) built to simulate laboratory wheel tracking tests. Laboratory wheel tracking tests were also conducted on AC20 and SMA13 mixtures at 40°C and 60°C, and the rutting depths calculated from the FEMs were compared with the measured rutting depths. Results indicated that the PTT is able to characterize the permanent deformation of asphalt mixtures in the laboratory. The rutting depth calculated using parameters estimated from the PTT results matched the measured rutting more closely than that calculated using parameters estimated from the TRT results, mainly because the PTT better simulates the changing confinement conditions of the asphalt mixture in the laboratory wheel tracking test.

Relevance:

30.00%

Publisher:

Abstract:

In nonlinear and stochastic control problems, learning an efficient feed-forward controller is not amenable to conventional neurocontrol methods. For these approaches, estimating and then incorporating uncertainty in the controller and feed-forward models can produce more robust control results. Here, we introduce a novel inversion-based neurocontroller for solving control problems involving uncertain nonlinear systems that can also handle multi-valued systems. The approach uses recent developments in neural networks, especially in the context of modelling statistical distributions, which are applied to forward and inverse plant models. Provided that certain conditions are met, an estimate of the intrinsic uncertainty in the outputs of a neural network can be obtained from the statistical properties of the network; more generally, multi-component distributions can be modelled by the mixture density network. Importance sampling from these distributions then yields a novel robust inverse control approach, providing a structured and principled way to constrain the complexity of the search space for the ideal control law. The methodology circumvents the dynamic programming problem by using the predicted neural network uncertainty to localise the possible control solutions to consider. A nonlinear multi-variable system with different delays between the input-output pairs is used to demonstrate the successful application of the control algorithm. The proposed method is suitable for redundant control systems and allows us to model strongly non-Gaussian distributions of the control signal as well as processes with hysteresis.
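
A minimal sketch of the sampling step described above: an inverse-model mixture proposes candidate controls, a forward model scores them against the desired target, and the best candidate is kept. The function names and the scalar-control setting are assumptions for illustration only.

```python
import numpy as np

def select_control(pi, mu, sigma, forward_model, target, n_samples=200, seed=0):
    """Pick a control input by sampling an MDN-style inverse model.

    The inverse model supplies a Gaussian mixture (pi, mu, sigma) over
    candidate scalar controls; candidates are drawn from it, scored with a
    forward plant model against the desired target, and the lowest-cost
    candidate is returned.  `forward_model` is a hypothetical callable
    mapping a control value to a predicted plant output.
    """
    rng = np.random.default_rng(seed)
    pi, mu, sigma = map(np.asarray, (pi, mu, sigma))
    comps = rng.choice(len(pi), size=n_samples, p=pi)   # sampled mixture components
    u = rng.normal(mu[comps], sigma[comps])             # candidate control values
    cost = np.array([(forward_model(ui) - target) ** 2 for ui in u])
    return u[np.argmin(cost)]
```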

Relevance:

30.00%

Publisher:

Abstract:

A Ni-Mg-Al-Ca catalyst was prepared by a co-precipitation method for hydrogen production from polymeric materials. The catalyst was designed both for the steam cracking of hydrocarbons and for the in situ absorption of CO2 via enhancement of the water-gas shift reaction. The influence of the Ca content of the catalyst and of the catalyst calcination temperature on the pyrolysis-gasification of a wood sawdust/polypropylene mixture was investigated. The highest hydrogen yield, 39.6 mol H2/g Ni with an H2/CO ratio of 1.90, was obtained with the Ca-containing catalyst of molar ratio Ni:Mg:Al:Ca = 1:1:1:4, calcined at 500°C. In addition, thermogravimetric and morphology analyses of the reacted catalysts revealed that introducing Ca into the Ni-Mg-Al catalyst prevented the deposition of filamentous carbon on the catalyst surface. Furthermore, all metals remained well dispersed in the catalyst after the pyrolysis-gasification process, with NiO particles of 20-30 nm observed after gasification and no significant aggregation.

Relevance:

30.00%

Publisher:

Abstract:

This research provides a novel approach for determining the water content and higher heating value of pyrolysis oil. Pyrolysis oil from Napier grass was used in this study. Water content was determined with pH adjustment using a Karl Fischer titration unit; an equation for the actual water in the oil was developed and used, and the results were compared with the traditional Karl Fischer method. The oil was found to contain between 42 and 64% moisture under the same pyrolysis conditions, depending on the properties of the Napier grass prior to pyrolysis. The higher heating value of the pyrolysis oil was determined using an oil-diesel mixture; 20 to 25 wt% of oil in the mixture gave optimal and stable results. A new model was developed for evaluating the higher heating value of dry pyrolysis oil, which was found to lie between 19 and 26 MJ/kg. The developed protocols and equations may serve as a reliable alternative means for establishing the actual water content and higher heating value of pyrolysis oil.
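
The abstract does not reproduce the new model, but a simple mass-weighted linear mixing rule is a common first approximation for backing the oil's heating value out of an oil-diesel blend measurement. The sketch below uses that assumed rule, ignores the water correction the paper's model presumably handles, and the default diesel value is a typical literature figure rather than a number from the study.

```python
def hhv_from_blend(hhv_blend_mj_kg, w_oil, hhv_diesel_mj_kg=45.6):
    """Back out the oil's higher heating value from a blend measurement.

    Assumes a mass-weighted linear mixing rule,
        HHV_blend = w_oil * HHV_oil + (1 - w_oil) * HHV_diesel,
    which is not necessarily the model developed in the paper.
    """
    return (hhv_blend_mj_kg - (1.0 - w_oil) * hhv_diesel_mj_kg) / w_oil
```

Under those assumptions, a hypothetical blend containing 25 wt% oil and measuring 40.3 MJ/kg would imply roughly 24 MJ/kg for the oil, which falls inside the reported 19 to 26 MJ/kg range.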

Relevance:

30.00%

Publisher:

Abstract:

The cyclic phosphazene trimers [N3P3(OC6H5)5OC5H4N·Ti(Cp)2Cl][PF6] (3), [N3P3(OC6H4CH2CN·Ti(Cp)2Cl)6][PF6]6 (4), [N3P3(OC6H4-But)5(OC6H4CH2CN·Ti(Cp)2Cl)][PF6] (5), [N3P3(OC6H5)5C6H4CH2CN·Ru(Cp)(PPh3)2][PF6] (6), [N3P3(OC6H5)5C6H4CH2CN·Fe(Cp)(dppe)][PF6] (7) and N3P3(OC6H5)5OC5H4N·W(CO)5 (8) were prepared and characterized. As models, the simple compounds [HOC5H5N·Ti(Cp)2Cl]PF6 (1) and [HOC6H4CH2CN·Ti(Cp)2Cl]PF6 (2) were also prepared and characterized. Pyrolysis of the organometallic cyclic trimers in air yields metallic nanostructured materials which, according to transmission and scanning electron microscopy (TEM/SEM), energy-dispersive X-ray microanalysis (EDX) and IR data, can be formulated as a metal oxide, a metal pyrophosphate or, in some cases, a mixture of the two, depending on the nature and quantity of the metal, the characteristics of the organic spacer, and the auxiliary substituent attached to the phosphazene ring. Atomic force microscopy (AFM) data indicate the formation of small island and striate nanostructures. A plausible formation mechanism, which involves the formation of a cyclomatrix, is proposed, and the pyrolysis of the organometallic cyclic phosphazene polymer is discussed as a new and general method for obtaining metallic nanostructured materials.

Relevance:

30.00%

Publisher:

Abstract:

A large eddy simulation is performed to study the deflagration-to-detonation transition phenomenon in an obstructed channel containing a premixed stoichiometric hydrogen–air mixture. The two-dimensional filtered reactive Navier–Stokes equations are solved using the artificially thickened flame (ATF) approach to model sub-grid-scale combustion. To include the effect of induction time, a 27-step detailed mechanism is used together with an in situ adaptive tabulation (ISAT) method that reduces the computational cost of the detailed chemistry. The results show that in the slow flame propagation regime, flame–vortex interaction and the resulting flame folding and wrinkling are the main mechanisms for the increase of the flame surface area and hence for flame acceleration. At high speed, the major mechanisms driving flame propagation are repeated reflected shock–flame interactions and the resulting baroclinic vorticity; these interactions intensify the rate of heat release and maintain the turbulence and flame speed at a high level. During flame acceleration the turbulent flame is seen to enter the 'thickened reaction zones' regime, so a chemistry-based combustion model with detailed chemical kinetics is necessary to properly capture the salient features of fast deflagration propagation.