958 results for Enthalpy of mixture


Relevance:

80.00%

Publisher:

Abstract:

This study addresses the use of mixture models to analyze behavioural and cognitive-ability data measured at several time points during children's development. The estimation of mixtures of multivariate normal distributions using the EM algorithm is explained in detail. This algorithm greatly simplifies the computations, because it allows the parameters of each group to be estimated separately, thereby making it easier to model the covariance of observations over time. This last point is often neglected in mixture analyses. The study examines the consequences of misspecifying the covariance structure on the estimation of the number of groups forming a mixture. The main consequence is an overestimation of the number of groups, that is, groups are estimated that do not exist. In particular, assuming independence of observations over time when they were in fact correlated resulted in the estimation of several groups that did not exist. This overestimation of the number of groups also leads to overparameterization, that is, more parameters are used than are necessary to model the data. Finally, mixture models were fitted to behavioural and cognitive-ability data, first assuming a covariance structure and then assuming independence. In most cases, adding a covariance structure results in fewer estimated groups and in results that are simpler and clearer to interpret.
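To make the covariance point concrete, here is a minimal sketch (using scikit-learn's GaussianMixture, not the thesis's own code) in which hypothetical AR(1)-like repeated measures from a single true group are fitted under an independence assumption ("diag") and under a full covariance structure, with BIC selecting the number of groups:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical longitudinal data: ONE true group measured at 4 time
# points, with observations strongly correlated across time (AR(1)-like).
T = 4
cov = 0.9 ** np.abs(np.subtract.outer(np.arange(T), np.arange(T)))
X = rng.multivariate_normal(np.zeros(T), cov, size=500)

for cov_type in ("diag", "full"):  # "diag" assumes independence over time
    bic = {k: GaussianMixture(n_components=k, covariance_type=cov_type,
                              random_state=0).fit(X).bic(X)
           for k in range(1, 6)}
    print(cov_type, "-> BIC-selected number of groups:", min(bic, key=bic.get))
# The misspecified "diag" fit tends to select spurious extra groups,
# while "full" typically recovers the single true group.
```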

Relevance:

80.00%

Publisher:

Abstract:

The objective of this thesis is to present various applications of the distributed conditional computation research program. We hope that these applications, together with the theory presented here, will lead to a general solution of the artificial intelligence problem, in particular with respect to the need for efficiency. The vision of distributed conditional computation is to speed up the evaluation and training of deep models, which is quite different from the usual objective of improving their capacity for generalization and optimization. The work presented here is closely related to mixture-of-experts models. In Chapter 2, we present a new deep learning algorithm that uses a simple form of reinforcement learning on a neural-network-based decision tree model. We demonstrate the need for a balancing constraint to keep the distribution of examples across experts uniform and to prevent monopolies. To make the computation efficient, training and evaluation are constrained to be sparse by using a router that samples experts from a multinomial distribution given an example. In Chapter 3, we present a new deep model consisting of a sparse representation divided into expert segments. A neural-network-based language model is built from the sparse transformations between these segments. The block-sparse operation is implemented for use on graphics cards, and its speed is compared with two dense operations of the same calibre to demonstrate the real computational gain that can be obtained. A deep model using sparse operations controlled by a router distinct from the experts is trained on a one-billion-word dataset. A new data partitioning algorithm is applied to a set of words to make the output layer of a language model hierarchical, making it much more efficient. The work presented in this thesis is central to the vision of distributed conditional computation put forward by Yoshua Bengio. It attempts to apply research on mixtures of experts to deep models in order to improve their speed as well as their optimization capacity. We believe that the theory and experiments of this thesis are an important step on the road to distributed conditional computation, because it frames the problem well, especially with regard to the competitiveness of expert systems.
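A minimal numpy sketch of the sparse routing idea described above: a router samples experts from a multinomial (softmax) gate, and a simple load penalty stands in for the balancing constraint. All names, the penalty form and the sizes are illustrative assumptions, not the thesis's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def route(x, W, k=1, balance_coef=0.1, expert_load=None):
    """Sample k experts per example from a softmax gating distribution.

    A load penalty subtracted from the gate logits discourages expert
    monopolies; its form here is illustrative, not the thesis's exact
    balancing constraint.
    """
    logits = x @ W                                   # (batch, n_experts)
    if expert_load is not None:                      # penalize busy experts
        logits = logits - balance_coef * expert_load
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    # Sparsity: only the experts sampled from the multinomial gate
    # are evaluated for each example.
    chosen = np.array([rng.choice(probs.shape[1], size=k, replace=False, p=p)
                       for p in probs])
    return probs, chosen

x = rng.normal(size=(8, 16))          # batch of 8 examples, 16 features
W = rng.normal(size=(16, 4))          # gating weights for 4 experts
probs, chosen = route(x, W, expert_load=np.zeros(4))
print(chosen.ravel())                 # the only experts that run per example
```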

Relevance:

80.00%

Publisher:

Abstract:

Real-world learning tasks often involve high-dimensional data sets with complex patterns of missing features. In this paper we review the problem of learning from incomplete data from two statistical perspectives---the likelihood-based and the Bayesian. The goal is two-fold: to place current neural network approaches to missing data within a statistical framework, and to describe a set of algorithms, derived from the likelihood-based framework, that handle clustering, classification, and function approximation from incomplete data in a principled and efficient manner. These algorithms are based on mixture modeling and make two distinct appeals to the Expectation-Maximization (EM) principle (Dempster, Laird, and Rubin 1977)---both for the estimation of mixture components and for coping with the missing data.
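As a minimal sketch of the likelihood-based idea, the E-step below computes mixture responsibilities from the marginal density of each example's observed coordinates only (NaNs mark missing features); a full EM would additionally fill in conditional expectations of the missing coordinates in the M-step. The data and helper names are illustrative:

```python
import numpy as np
from scipy.stats import multivariate_normal

def responsibilities(X, weights, means, covs):
    """E-step of a Gaussian mixture with missing features (NaNs).

    Each example's responsibility uses the marginal density of its
    observed coordinates, a property of the multivariate normal.
    """
    n, K = len(X), len(weights)
    R = np.zeros((n, K))
    for i, x in enumerate(X):
        obs = ~np.isnan(x)
        for k in range(K):
            R[i, k] = weights[k] * multivariate_normal.pdf(
                x[obs], means[k][obs], covs[k][np.ix_(obs, obs)])
        R[i] /= R[i].sum()
    return R

X = np.array([[1.0, 2.0], [np.nan, 0.5], [3.0, np.nan]])
w = np.array([0.5, 0.5])
mu = [np.zeros(2), np.ones(2)]
S = [np.eye(2), np.eye(2)]
print(responsibilities(X, w, mu, S))
```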

Relevance:

80.00%

Publisher:

Abstract:

Modeling and predicting co-occurrences of events is a fundamental problem of unsupervised learning. In this contribution we develop a statistical framework for analyzing co-occurrence data in a general setting where elementary observations are joint occurrences of pairs of abstract objects from two finite sets. The main challenge for statistical models in this context is to overcome the inherent data sparseness and to estimate the probabilities for pairs which were rarely observed or even unobserved in a given sample set. Moreover, it is often of considerable interest to extract grouping structure or to find a hierarchical data organization. A novel family of mixture models is proposed which explains the observed data by a finite number of shared aspects or clusters. This provides a common framework for statistical inference and structure discovery and also includes several recently proposed models as special cases. Adopting the maximum likelihood principle, EM algorithms are derived to fit the model parameters. We develop improved versions of EM which largely avoid overfitting problems and overcome the inherent locality of EM-based optimization. Among the broad variety of possible applications, e.g., in information retrieval, natural language processing, data mining, and computer vision, we have chosen document retrieval, the statistical analysis of noun/adjective co-occurrence and the unsupervised segmentation of textured images to test and evaluate the proposed algorithms.
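The aspect model described above can be fitted with a short EM loop. The sketch below (synthetic count matrix, illustrative names) implements the basic maximum-likelihood updates for P(i, j) = sum_k P(k) P(i|k) P(j|k), without the improved anti-overfitting variants the paper develops:

```python
import numpy as np

def aspect_model_em(N, K=2, iters=100, seed=0):
    """EM for a simple aspect model of co-occurrence counts N[i, j]."""
    rng = np.random.default_rng(seed)
    I, J = N.shape
    Pk = np.full(K, 1.0 / K)
    Pik = rng.dirichlet(np.ones(I), size=K).T   # P(i|k), shape (I, K)
    Pjk = rng.dirichlet(np.ones(J), size=K).T   # P(j|k), shape (J, K)
    for _ in range(iters):
        # E-step: posterior P(k | i, j) for every pair.
        post = Pk * Pik[:, None, :] * Pjk[None, :, :]     # (I, J, K)
        post /= post.sum(axis=2, keepdims=True)
        # M-step: re-estimate parameters from expected counts.
        Nk = (N[:, :, None] * post).sum(axis=(0, 1))
        Pik = (N[:, :, None] * post).sum(axis=1) / Nk
        Pjk = (N[:, :, None] * post).sum(axis=0) / Nk
        Pk = Nk / N.sum()
    return Pk, Pik, Pjk

N = np.array([[10, 8, 0], [9, 7, 1], [0, 1, 12]], dtype=float)
Pk, Pik, Pjk = aspect_model_em(N)
print(np.round(Pjk, 2))   # aspect-conditional distributions over columns
```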

Relevance:

80.00%

Publisher:

Abstract:

Optimum experimental designs depend on the design criterion, the model and the design region. The talk will consider the design of experiments for regression models in which there is a single response with the explanatory variables lying in a simplex. One example is experiments on various compositions of glass such as those considered by Martin, Bursnall, and Stillman (2001). Because of the highly symmetric nature of the simplex, the class of models that are of interest, typically Scheffé polynomials (Scheffé 1958), are rather different from those of standard regression analysis. The optimum designs are also rather different, inheriting a high degree of symmetry from the models. In the talk I hope to discuss a variety of models for such experiments. Then I will discuss constrained mixture experiments, when not all of the simplex is available for experimentation. Other important aspects include mixture experiments with extra non-mixture factors and the blocking of mixture experiments. Much of the material is in Chapter 16 of Atkinson, Donev, and Tobias (2007). If time and my research allow, I hope to finish with a few comments on design when the responses, rather than the explanatory variables, lie in a simplex.
References:
Atkinson, A. C., A. N. Donev, and R. D. Tobias (2007). Optimum Experimental Designs, with SAS. Oxford: Oxford University Press.
Martin, R. J., M. C. Bursnall, and E. C. Stillman (2001). Further results on optimal and efficient designs for constrained mixture experiments. In A. C. Atkinson, B. Bogacka, and A. Zhigljavsky (Eds.), Optimal Design 2000, pp. 225–239. Dordrecht: Kluwer.
Scheffé, H. (1958). Experiments with mixtures. Journal of the Royal Statistical Society, Ser. B 20, 344–360.
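As an illustration of the model class, the sketch below generates a {q, m} simplex-lattice design and builds the model matrix for a second-order Scheffé canonical polynomial; it is a generic construction, not the talk's optimum-design computations:

```python
import itertools
import numpy as np

def simplex_lattice(q=3, m=2):
    """Points of a {q, m} simplex-lattice design: all compositions whose
    proportions are multiples of 1/m and sum to 1."""
    pts = [np.array(c) / m
           for c in itertools.product(range(m + 1), repeat=q)
           if sum(c) == m]
    return np.array(pts)

def scheffe_quadratic(X):
    """Model matrix for the second-order Scheffé canonical polynomial:
    linear terms x_i plus all cross-products x_i * x_j (no intercept)."""
    cross = [X[:, i] * X[:, j]
             for i, j in itertools.combinations(range(X.shape[1]), 2)]
    return np.column_stack([X] + cross)

X = simplex_lattice(q=3, m=2)          # 6 design points for 3 components
F = scheffe_quadratic(X)
print(X)
print("model matrix shape:", F.shape)  # (6, 6): a saturated quadratic design
```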

Relevance:

80.00%

Publisher:

Abstract:

In membrane distillation in a conventional membrane module, the enthalpies of vaporisation and condensation are supplied and removed by changes in the temperatures of the feed and permeate streams, respectively. Less than 5% of the feed can be distilled in a single pass, because the potential changes in the enthalpies of the liquid streams are much smaller than the enthalpy of vaporisation. Furthermore, the driving force for mass transfer reduces as the feed stream temperature and vapour pressure fall during distillation. These restrictions can be avoided if the enthalpy of vaporisation is uncoupled from the heat capacities of the feed and permeate streams. A specified distillation can then be effected continuously in a single module. Calculations are presented which estimate the performance of a flat plate unit in which the enthalpy of distillation is supplied and removed by the condensing and boiling of thermal fluids in separate circuits, and the imposed temperature difference is independent of position. Because the mass flux through the membrane is dependent on vapour pressure, membrane distillation is suited to applications with a high membrane temperature. The maximum mass flux in the proposed module geometry is predicted to be 30 kg/m2 per h at atmospheric pressure when the membrane temperature is 65°C. Operation at higher membrane temperatures is predicted to raise the mass flux, for example to 85 kg/m2 per h at a membrane temperature of 100°C. This would require pressurisation to 20 bar to prevent boiling at the heating plate of the feed channel. Pre-pressurisation of the membrane pores and control of the dissolved gas concentrations in the feed and the recycled permeate should be investigated as a means to achieve high temperature membrane distillation without pore penetration and wetting.
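A rough sketch of the flux-versus-temperature argument: water vapour pressure from the Antoine equation drives a flux through an assumed membrane permeability coefficient Bm, which, like the 10 °C transmembrane difference, is a hypothetical value rather than a parameter from the paper:

```python
def p_vap_water(T_C):
    """Water vapour pressure (Pa) from the Antoine equation
    (coefficients valid roughly over 1-100 degC)."""
    A, B, C = 8.07131, 1730.63, 233.426        # mmHg / degC form
    return 133.322 * 10 ** (A - B / (C + T_C))

def md_flux(T_feed, T_perm, Bm=3e-7):
    """Rough membrane-distillation mass flux estimate (kg/m^2/s),
    assuming flux proportional to the vapour-pressure difference.
    Bm is a hypothetical permeability, not taken from the paper."""
    return Bm * (p_vap_water(T_feed) - p_vap_water(T_perm))

# Illustrates why flux rises steeply with membrane temperature:
for T in (65, 100):
    J = md_flux(T, T - 10)                     # 10 degC across the membrane
    print(f"T = {T} degC: flux ~ {J * 3600:.0f} kg/m2 per h")
```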

Relevance:

80.00%

Publisher:

Abstract:

Solution calorimetry offers a reproducible technique for measuring the enthalpy of solution (ΔsolH) of a solute dissolving into a solvent. The ΔsolH of two solutes, propranolol HCl and mannitol, were determined in simulated intestinal fluid (SIF) solutions designed to model the fed and fasted states within the gut, and in Hanks’ balanced salt solution (HBSS) of varying pH. The bile salt and lipid within the SIF solutions formed mixed micelles. Both solutes exhibited endothermic reactions in all solvents. The ΔsolH for propranolol HCl in the SIF solutions differed from those in the HBSS and was lower in the fed state than the fasted state SIF solution, revealing an interaction between propranolol and the micellar phase in both SIF solutions. In contrast, for mannitol the ΔsolH was constant in all solutions, indicating minimal interaction between mannitol and the micellar phases of the SIF solutions. In this study, solution calorimetry proved to be a simple method for measuring the enthalpy associated with the dissolution of model drugs in complex biological media such as SIF solutions. In addition, the derived power–time curves allowed the time taken for the powdered solutes to form solutions to be estimated.
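A minimal sketch of the measurement idea: integrating a power-time curve gives the total heat and hence ΔsolH per mole, while the cumulative heat gives a dissolution-time estimate. The trace shape and sample size below are made-up illustrative values:

```python
import numpy as np

t = np.linspace(0, 600, 601)                          # time (s)
power = 5e-3 * (np.exp(-t / 120) - np.exp(-t / 10))   # heat flow (W), synthetic

q = np.trapz(power, t)                   # total heat evolved (J)
n = 0.25e-3                              # moles of solute dissolved (assumed)
dsol_H = q / n / 1000                    # enthalpy of solution (kJ/mol)
print(f"q = {q:.3f} J, dsolH = {dsol_H:.1f} kJ/mol")

# Time-to-dissolution estimate: when 95% of the total heat has evolved.
cum = np.cumsum(power) * (t[1] - t[0])
t95 = t[np.searchsorted(cum, 0.95 * q)]
print(f"~95% dissolved after {t95:.0f} s")
```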

Relevance:

80.00%

Publisher:

Abstract:

Probabilistic topic models have become a standard in modern machine learning, with wide applications in organizing and summarizing ‘documents’ in high-dimensional data such as images, videos, texts, gene expression data, and so on. Representing data by the dimensional reduction of mixture proportions extracted from topic models is not only richer in semantics than a bag-of-words interpretation, but also more informative for classification tasks. This paper describes the Topic Model Kernel (TMK), a high-dimensional mapping for Support Vector Machine classification of data generated from probabilistic topic models. The applicability of our proposed kernel is demonstrated in several classification tasks on real-world datasets. We outperform existing kernels on distributional features and give comparative results on non-probabilistic data types.
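The abstract does not spell out the kernel's functional form, so the sketch below uses a plausible stand-in: an exponentiated Jensen-Shannon divergence between topic proportions, plugged into a precomputed-kernel SVM. The Dirichlet-drawn "documents" are hypothetical:

```python
import numpy as np
from sklearn.svm import SVC

def js_kernel(P, Q, gamma=1.0):
    """Kernel on topic proportions: exp(-gamma * JS divergence).

    A plausible stand-in for TMK; the abstract does not give the
    exact form here.
    """
    A, B = P[:, None, :], Q[None, :, :]
    M = 0.5 * (A + B)
    js = 0.5 * np.sum(A * np.log(A / M), axis=-1) \
       + 0.5 * np.sum(B * np.log(B / M), axis=-1)
    return np.exp(-gamma * js)

rng = np.random.default_rng(0)
# Hypothetical topic proportions for two classes of 'documents'.
X = np.vstack([rng.dirichlet([8, 1, 1], size=30),
               rng.dirichlet([1, 1, 8], size=30)])
y = np.repeat([0, 1], 30)

K = js_kernel(X, X)
clf = SVC(kernel="precomputed").fit(K, y)
print("training accuracy:", clf.score(K, y))
```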

Relevance:

80.00%

Publisher:

Abstract:

Probabilistic topic models have become a standard in modern machine learning for dealing with a wide range of applications. Representing data by the dimensional reduction of mixture proportions extracted from topic models is not only richer in semantic interpretation, but can also be informative for classification tasks. In this paper, we describe the Topic Model Kernel (TMK), a topic-based kernel for Support Vector Machine classification of data processed by probabilistic topic models. The applicability of our proposed kernel is demonstrated in several classification tasks with real-world datasets. TMK outperforms existing kernels on distributional features and gives comparative results on non-probabilistic data types.

Relevance:

80.00%

Publisher:

Abstract:

Cerium oxide has high potential for use in removing pollutants after combustion, in removing organic matter from waste water, and in fuel-cell technology. Nickel oxide is an attractive material owing to its excellent chemical stability and its optical, electrical and magnetic properties. In this work, CeO2-NiO systems with metal:citric acid molar ratios of 1:1 (I), 1:2 (II) and 1:3 (III) were synthesized using the Pechini method. TG/DTG and DTA techniques were used to monitor the degradation of the organic matter up to the formation of the oxide. By thermogravimetric analysis, applying the dynamic method proposed by Coats-Redfern, it was possible to study the thermal decomposition reactions in order to propose the possible mechanism by which the reaction takes place, as well as to determine kinetic parameters such as the activation energy, Ea, the pre-exponential factor and the activation parameters. It was observed that both variables exert a significant influence on the formation of the polymeric precursor complex. The model that best fitted the experimental data in the dynamic mode was R3, which consists of nuclear growth in which the nuclei formed grow towards a continuous reaction interface, assuming spherical symmetry (order 2/3). The activation enthalpy values of the system showed that the reaction in the transition state is exothermic. The composition variables, together with the calcination temperature, were studied by different techniques such as XRD, IR and SEM. A microstructural study was also conducted by the Rietveld method; the calculation routine was developed to run in the FullProf Suite program package and analyzed with a pseudo-Voigt function. It was found that the metal:citric acid molar ratio in the CeO2-NiO systems (I), (II) and (III) has a strong influence on the microstructural properties, crystallite size and lattice microstrain, and can be used to control these properties.
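A minimal sketch of the Coats-Redfern analysis with the R3 (contracting-sphere) model named above; the conversion data are synthetic, generated from illustrative Ea and A values so the linear fit can be checked against them:

```python
import numpy as np

R = 8.314                      # gas constant (J/mol/K)
beta = 10 / 60                 # heating rate: 10 K/min expressed in K/s

# Synthetic conversion data generated from the Coats-Redfern form;
# Ea_true and A_true are illustrative values, not the thesis results.
T = np.linspace(600, 740, 40)                      # temperature (K)
Ea_true, A_true = 120e3, 1e6
g_true = A_true * R * T**2 / (beta * Ea_true) * np.exp(-Ea_true / (R * T))
alpha = 1 - (1 - g_true) ** 3          # invert the R3 model g(a) = 1-(1-a)^(1/3)

# Coats-Redfern plot for the R3 model:
# ln[g(alpha)/T^2] = ln[A*R/(beta*Ea)] - Ea/(R*T), linear in 1/T.
g_alpha = 1 - (1 - alpha) ** (1 / 3)
slope, intercept = np.polyfit(1 / T, np.log(g_alpha / T**2), 1)
Ea = -slope * R
A = np.exp(intercept) * beta * Ea / R
print(f"Ea ~ {Ea / 1e3:.0f} kJ/mol, A ~ {A:.1e} 1/s")   # recovers 120, 1e6
```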

Relevance:

80.00%

Publisher:

Abstract:

Rio Grande do Norte State stands out as a major producer of structural clay products in the Brazilian northeast. The Assu Valley roofing-tile production, obtained from the illitic ball clays that abound in the region, is notable among them. The formulation of ceramics using design of experiments with mixtures has been applied by researchers as an important aid in reducing the number of experiments needed for optimization. In this context, the objective of this work is to evaluate the effects of formulation, temperature and heating rate on the physical-mechanical properties of the red ceramic body used for roofing-tile fabrication in the Assu Valley, using the design of mixture experiments. Four clay samples used in two ceramic plants of the region were used as raw material and characterized by X-ray diffraction, chemical composition, differential thermal analysis (DTA), thermogravimetric analysis (TGA), particle-size distribution analysis and plasticity techniques. Initial mixtures were then defined, and specimens were prepared by uniaxial pressing at 25 MPa before firing at 850, 950 and 1050 ºC in a laboratory furnace, with heating rates of 5, 10 and 15 ºC/min. The following technological properties were evaluated: linear firing shrinkage, water absorption and flexural strength. Results show that a temperature of 1050 ºC and a heating rate of 5 ºC/min was the best condition, as it gave significant results for all physical-mechanical properties. The model was validated by producing three new formulations with mass fractions different from those of the initial mixtures, fired at 1050 ºC with a heating rate of 5 ºC/min. Considering formulation, temperature and heating rate as variables of the equations, another model was suggested; from the application of the design of experiments with mixtures it was possible to obtain a best formulation, whose experimental error is the smallest relative to the other formulations.
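A minimal sketch of the mixture-design idea: fitting an intercept-free linear Scheffé model to one response over a three-component simplex and using it to screen a candidate formulation. The design points and response values are hypothetical, not the thesis data:

```python
import numpy as np

# Hypothetical {3, 2} simplex-lattice design over three clays and a
# made-up response (say, flexural strength after firing).
X = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0],
              [0.5, 0.5, 0.0], [0.5, 0.0, 0.5], [0.0, 0.5, 0.5]])
y = np.array([6.1, 4.8, 5.5, 6.0, 6.4, 5.0])   # strength (MPa)

# Scheffé mixture models have no intercept: y = sum_i b_i * x_i.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("component coefficients:", np.round(coef, 2))

# Screen a candidate formulation on the simplex before firing it:
x_new = np.array([0.4, 0.3, 0.3])
print("predicted strength:", round(float(x_new @ coef), 2), "MPa")
```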

Relevance:

80.00%

Publisher:

Abstract:

The behavior of fluid flow in oil fields is influenced by different factors and has a big impact on the recovery of hydrocarbons. There is a need to evaluate and adapt current technology to the reality of reservoirs worldwide, not only in exploration (reservoir discovery) but also in the development of those already discovered but not yet produced. In situ combustion (ISC) is a suitable technique for the recovery of hydrocarbons, although it remains complex to implement. The main objective of this research was to study the application of ISC as an enhanced oil recovery technique through a parametric analysis of the process using vertical wells in a semi-synthetic reservoir with the characteristics of the Brazilian northeast, in order to determine which parameters could influence the process and to verify the technical and economic viability of the method in the oil industry. For this analysis, a commercial reservoir simulator for thermal processes was used: the Steam, Thermal and Advanced Processes Reservoir Simulator (STARS) from the Computer Modelling Group (CMG). Through numerical analysis, this study seeks results that help improve the interpretation and comprehension of the main problems related to the ISC method, which are not yet mastered. The results showed that the effect of the thermal ISC process on oil recovery is very important, with production rates and cumulative production positively influenced by the application of the method. Applying the method improves oil mobility as a result of the heating produced when the combustion front forms inside the reservoir. Among all the analyzed parameters, the activation energy had the greatest influence: the lower the activation energy, the larger the fraction of recovered oil, owing to the increase in the speed of the chemical reactions. It was also verified that the higher the enthalpy of reaction, the larger the fraction of recovered oil, due to the larger amount of energy released inside the system, aiding the ISC. The reservoir parameters porosity and permeability showed a smaller influence on the ISC. Among the operational parameters analyzed, the injection rate had the strongest influence on the ISC method: the higher the injection rate, the better the result obtained, mainly owing to the maintenance of the combustion front. Regarding oxygen concentration, an increase in this parameter translates into a larger fraction of recovered oil, because the larger quantity of fuel burned helps the advance and maintenance of the combustion front for a longer period of time. Concerning the economic analysis, the ISC method proved economically feasible when evaluated through the net present value (NPV), considering the injection rates: the higher the injection rate, the higher the financial return of the final project.
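The activation-energy finding follows directly from Arrhenius kinetics. The sketch below shows how strongly the rate constant responds to Ea at a fixed front temperature; the A, T and Ea values are illustrative, not parameters of the STARS model:

```python
import numpy as np

R = 8.314  # gas constant (J/mol/K)

def arrhenius(A, Ea, T):
    """Reaction-rate constant k = A * exp(-Ea / (R * T))."""
    return A * np.exp(-Ea / (R * T))

A = 1e10                       # pre-exponential factor (1/s), assumed
T = 600.0                      # combustion-front temperature (K), assumed
for Ea_kJ in (80, 100, 120):
    k = arrhenius(A, Ea_kJ * 1e3, T)
    print(f"Ea = {Ea_kJ} kJ/mol -> k = {k:.3e} 1/s")
# Each 20 kJ/mol drop in Ea multiplies k by exp(20e3 / (R * 600)) ~ 55x,
# which is why lower activation energies raise the recovered-oil fraction.
```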

Relevance:

80.00%

Publisher:

Abstract:

The use of biodiesel has become important owing to its renewable character and to the reduction of environmental impacts during fuel burning. These benefits hold only if the fuel shows good performance, chemical stability and compatibility with engines. Biodiesel is a good fuel for diesel engines because of its lubricity. The aim of this study was therefore to verify the physical-chemical properties of biodiesel and their correlation with possible elastomer damage after biodiesel is used as fuel in an injection system. The methodology was divided into three steps: synthesis of biodiesels by transesterification of three vegetable oils (soybean, palm and sunflower) and their physical-chemical characterization (viscosity, oxidative stability, flash point, acidity, humidity and density); a pressurized compatibility test between elastomers (NBR and VITON) and biodiesel; and, finally, analysis of biodiesel lubricity by a ball-on-plate tribological test (HFRR). The effect of mixtures of biodiesel and diesel in different concentrations was also evaluated. The results showed that VITON had better compatibility with all biodiesel blends than NBR; however, when VITON was in contact with sunflower biodiesel and its blends, the degree of swelling was more strongly affected owing to the humidity of the biodiesel. For the other biodiesels and their blends, this elastomer kept its mechanical properties constant. The best tribological performance was observed for blends with high biodiesel concentration; the lowest friction coefficient was obtained when palm biodiesel was used. The main wear mechanisms observed during the HFRR tests were abrasive and oxidative wear.

Relevance:

80.00%

Publisher:

Abstract:

Nonionic surfactants in aqueous solution have the property of separating into two phases: a dilute phase, with a low concentration of surfactant, and a surfactant-rich phase called the coacervate. The application of this kind of surfactant in extraction processes from aqueous solutions has been increasing over time, which makes knowledge of the thermodynamic properties of these surfactants necessary. In this study, the cloud points of polyethoxylated surfactants from the nonylphenol polyethoxylated family (ethoxylation degrees 9.5, 10, 11, 12 and 13), the octylphenol polyethoxylated family (10 and 11) and polyethoxylated lauryl alcohols (6, 7, 8 and 9) were determined, varying the degree of ethoxylation. The cloud point was determined by observing the turbidity of the solution heated at a ramp of 0.1 °C/minute; for the pressure studies a high-pressure cell (maximum 300 bar) was used. The Flory-Huggins, UNIQUAC and NRTL models were fitted to the experimental data to describe the cloud-point curves, and the influence of NaCl concentration and pressure on the cloud point was studied. The pressure parameter is important for oil-recovery processes in which surfactant solutions are used at high pressures, while the effect of NaCl makes it possible to obtain cloud points at temperatures closer to room temperature, allowing use in processes without temperature control. The numerical method used to fit the parameters was the Levenberg-Marquardt algorithm. For the Flory-Huggins model, the fitted parameters were the enthalpy of mixing, the entropy of mixing and the aggregation number. For the UNIQUAC and NRTL models, interaction parameters aij were fitted using a quadratic dependence on temperature. The obtained parameters fitted the experimental data well (RMSD < 0.3%). The results showed that both the ethoxylation degree and the pressure raise the cloud point, whereas NaCl lowers it.
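A minimal sketch of the parameter-fitting step using the Levenberg-Marquardt algorithm (scipy's method="lm"); the cloud-point data and the simple empirical curve are illustrative stand-ins for the Flory-Huggins/UNIQUAC/NRTL parameterizations used in the thesis:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical cloud-point measurements for one surfactant.
c = np.array([0.5, 1.0, 2.0, 4.0, 8.0])             # surfactant wt%
T_cloud = np.array([64.1, 62.0, 60.3, 59.2, 58.9])  # cloud point (degC)

def model(p, c):
    """Simple empirical cloud-point curve (illustrative form only)."""
    a, b, d = p
    return a + b * np.log(c) + d * c

def residuals(p):
    return model(p, c) - T_cloud

# method="lm" selects the Levenberg-Marquardt algorithm.
fit = least_squares(residuals, x0=[60.0, -2.0, 0.1], method="lm")
rmsd = np.sqrt(np.mean(fit.fun ** 2))
print("parameters:", np.round(fit.x, 3), f"RMSD = {rmsd:.3f} degC")
```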

Relevance:

80.00%

Publisher:

Abstract:

Dynamic light scattering was used to monitor relaxation processes in chitosan solutions at concentrations within the semi-dilute and concentrated regimes, with the Kohlrausch-Williams-Watts (KWW) equation being successfully fitted to the intensity correlation function data. The dependence of the KWW equation parameters on chitosan concentration indicated that an increase in concentration from the semi-dilute to the concentrated regime narrowed the distribution of relaxation rates; the temperature dependence indicated that the relaxation process can be described as an energy-activated process, whose parameters were a function of the interaction between chitosan chains (enthalpy of activation) and of the rigidity of chitosan conformations (pre-exponential factor).
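A minimal sketch of fitting the KWW stretched exponential to correlation data with scipy; the synthetic trace and the tau/beta values are illustrative, not the paper's results:

```python
import numpy as np
from scipy.optimize import curve_fit

def kww(t, tau, beta):
    """Kohlrausch-Williams-Watts stretched exponential g(t) = exp(-(t/tau)^beta)."""
    return np.exp(-(t / tau) ** beta)

# Synthetic correlation-function data standing in for a DLS trace.
t = np.logspace(-5, 0, 60)                 # lag time (s)
rng = np.random.default_rng(1)
g = kww(t, tau=1e-3, beta=0.7) + rng.normal(0, 0.01, t.size)

popt, _ = curve_fit(kww, t, g, p0=[1e-2, 1.0])
tau_fit, beta_fit = popt
print(f"tau = {tau_fit:.2e} s, beta = {beta_fit:.2f}")
# beta closer to 1 signals a narrower distribution of relaxation rates,
# matching the trend reported above with increasing concentration.
```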