981 results for Mixture modelling


Relevance: 30.00%

Abstract:

In this paper, a framework is described for the modelling of granular material using Computational Fluid Dynamics (CFD). This is achieved by implementing within the continuum theory constitutive relations that are derived in a granular dynamics framework and parametrise the particle interactions occurring at the micro-scale. The simulation of a process commonly encountered in industrial bulk solids handling plants is presented: the filling of a flat-bottomed bin with a binary material mixture through pneumatic conveying, the emptying of the bin in core flow mode, and the pneumatic conveying of the material discharged from the bin. The results demonstrate the capability of the numerical model to represent key granular processes (i.e. segregation and degradation), whose prediction is of great importance in the process engineering industry.

Relevance: 30.00%

Abstract:

The present work uses the discrete element method (DEM) to describe assemblies of particulate bulk materials. Working numerical descriptions of entire processes using this scheme are infeasible because of the very large number of elements (10^12 or more in a moderately sized industrial silo). However, it is possible to capture much of the essential bulk mechanics through selective DEM on important regions of an assembly, and then to use this information in continuum numerical descriptions of particulate processes. The continuum numerical model uses population balances of the various components in bulk solid mixtures. It depends on constitutive relationships for the internal transfer, creation and/or destruction of components within the mixture. In this paper we show how such relationships can be generated for two important flow phenomena: segregation, whereby particles differing in some important property (often size) separate into discrete phases, and degradation, whereby particles break into sub-elements through impact with each other or through shearing. We perform DEM simulations under a range of representative conditions, extracting the important parameters for the transfer, creation and/or destruction of particles in certain classes within the assembly over time. Continuum predictions of segregation and degradation using this scheme are currently being validated against bulk experimental data and are beginning to be used in schemes to improve the design and operation of bulk solids process plant.
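A population-balance continuum model of this kind tracks how mass moves between particle classes over time. The following is a minimal sketch, not the authors' model, of a first-order breakage (degradation) update for the mass fractions of a few size classes; the function name, rate values and redistribution matrix are purely illustrative.

```python
import numpy as np

# Minimal sketch (not the authors' model) of a population-balance update for
# the mass fractions of particle size classes, with a simple first-order
# breakage (degradation) term. All names and rate values are illustrative.

def degrade(fractions, breakage_rate, redistribution, dt):
    """Advance mass fractions of size classes over one time step.

    fractions      : array (n,) of mass fractions per size class (sums to 1)
    breakage_rate  : array (n,) of first-order breakage rates per class [1/s]
    redistribution : array (n, n); redistribution[i, j] is the fraction of
                     broken mass from class j that lands in class i
    """
    loss = breakage_rate * fractions          # mass leaving each class by breakage
    gain = redistribution @ loss              # mass arriving from broken coarser classes
    new = fractions + dt * (gain - loss)
    return new / new.sum()                    # renormalise mass fractions

# Example: three size classes (coarse, medium, fine); coarse breaks into finer ones.
f0 = np.array([0.5, 0.3, 0.2])
rates = np.array([0.02, 0.01, 0.0])
R = np.array([[0.0, 0.0, 0.0],
              [0.7, 0.0, 0.0],
              [0.3, 1.0, 0.0]])
print(degrade(f0, rates, R, dt=1.0))
```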

Relevance: 30.00%

Abstract:

In this paper, a Computational Fluid Dynamics framework is presented for the modelling of key processes involving granular material (i.e. segregation, degradation and caking). Appropriate physical models and sophisticated algorithms have been developed for the correct representation of the different material components in a granular mixture. The various processes that arise from the micromechanical properties of the different mixture species can be characterised and parametrised in a DEM/experimental framework, thus enabling the continuum theory to account correctly for the micromechanical properties of a granular system. The present study establishes the link between the micromechanics and the continuum theory and demonstrates the model's capabilities in simulations of processes which involve granular materials in complex geometries and are of great importance to the process engineering industry.

Relevance: 30.00%

Abstract:

In this paper, the application of a continuum model to the discharge of multi-component granular mixtures in core flow mode is presented. The full model description is given, including the constitutive models for the segregation mechanism, and the interactions between particles at the microscopic level are parametrised in order to predict the development of stagnant zone boundaries during core flow discharge. Finally, the model is applied to a real industrial problem and predictions are made for the segregation patterns that develop during mixture discharge in core flow mode.

Relevance: 30.00%

Abstract:

We address the problem of non-linearity in 2D shape modelling of a particular articulated object: the human body. This issue is partially resolved by applying a different Point Distribution Model (PDM) depending on the viewpoint. The remaining non-linearity is handled using Gaussian Mixture Models (GMM). A dynamics-based clustering is proposed and carried out in the pose eigenspace. A fundamental question when clustering is how to determine the optimal number of clusters; in our view, the main aspect to be evaluated is the mean Gaussianity. The resulting partitioning is then used to fit a GMM to each of the view-based PDMs, derived from a database of silhouettes and skeletons. Dynamic correspondences are then obtained between the Gaussian components of the four mixtures. Finally, we compare this approach with two other methods we previously developed to cope with non-linearity: a Nearest Neighbour (NN) classifier and Independent Component Analysis (ICA).
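As a rough illustration of the view-based GMM step, the sketch below fits Gaussian mixtures of increasing size to stand-in pose-space shape parameters and picks a number of components. The paper selects the partition by evaluating mean Gaussianity; BIC is used here only as a generic stand-in criterion, and the data are synthetic placeholders.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Fit GMMs of increasing size to stand-in pose/PDM coefficients and keep the
# one with the best BIC. The data and the model-selection criterion are
# illustrative, not the paper's.

rng = np.random.default_rng(0)
pose_params = np.vstack([rng.normal(loc=m, scale=0.3, size=(200, 5))
                         for m in (-2.0, 0.0, 2.0)])   # stand-in shape parameters

best_gmm, best_score = None, np.inf
for k in range(1, 7):
    gmm = GaussianMixture(n_components=k, covariance_type="full",
                          random_state=0).fit(pose_params)
    score = gmm.bic(pose_params)
    if score < best_score:
        best_gmm, best_score = gmm, score

print("selected components:", best_gmm.n_components)
labels = best_gmm.predict(pose_params)    # cluster assignment of each shape sample
```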

Relevance: 30.00%

Abstract:

Mixture of Gaussians (MoG) modelling [13] is a popular approach to background subtraction in video sequences. Although the algorithm shows good empirical performance, it lacks theoretical justification. In this paper, we justify it from an online stochastic expectation maximization (EM) viewpoint and extend it to a general framework of regularized online classification EM for MoG with guaranteed convergence. By choosing a particular regularization function, the l1 norm, we derive a new set of updating equations for l1-regularized online MoG. It is shown empirically that l1-regularized online MoG converges faster than the original online MoG.
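For context, the sketch below shows a standard online MoG update for a single grayscale pixel, in the spirit of the classic background-subtraction algorithm the paper builds on. It does not implement the paper's l1-regularized updating equations, and all parameter values (K, alpha, thresholds) are illustrative.

```python
import numpy as np

# Standard online MoG update for one pixel: match the observation to a
# component, update that component, renormalise weights, and classify the
# pixel as background if the matched component ranks among the most stable.

K, alpha, match_thresh = 3, 0.01, 2.5
weights = np.full(K, 1.0 / K)
means = np.array([50.0, 120.0, 200.0])
variances = np.full(K, 400.0)

def update_pixel(x):
    """Update the pixel's mixture with observation x; return True if background."""
    dist = np.abs(x - means) / np.sqrt(variances)
    if dist.min() < match_thresh:
        k = int(np.argmin(dist))                 # matched component
        rho = alpha                              # simplified per-component learning rate
        means[k] += rho * (x - means[k])
        variances[k] += rho * ((x - means[k]) ** 2 - variances[k])
    else:
        k = int(np.argmin(weights))              # replace the least probable component
        means[k], variances[k] = x, 400.0
    weights[:] = (1 - alpha) * weights
    weights[k] += alpha
    weights[:] = weights / weights.sum()
    # components ranked by weight/std; the top-ranked ones model the background
    background = np.argsort(-weights / np.sqrt(variances))[:2]
    return k in background

print(update_pixel(118.0))
```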

Relevance: 30.00%

Abstract:

The main objective of the study presented in this paper was to investigate the feasibility of using support vector machines (SVM) for the prediction of the fresh properties of self-compacting concrete (SCC). The radial basis function (RBF) and polynomial kernels were used to predict these properties as a function of the content of the mix components. The fresh properties were assessed with the slump flow, T50, T60, V-funnel time, Orimet time and blocking ratio (L-box); the retention of these properties was also measured at 30 and 60 min after the first water addition. The water dosage varied from 188 to 208 L/m3, the dosage of superplasticiser (SP) from 3.8 to 5.8 kg/m3, and the volume of coarse aggregates from 220 to 360 L/m3. In total, twenty mixes with different compositions were used to measure the fresh-state properties. The RBF kernel was more accurate than the polynomial kernel, with a root mean square error (RMSE) of 26.9 (correlation coefficient R2 = 0.974) for slump flow, an RMSE of 0.55 (R2 = 0.910) for T50 (s), an RMSE of 1.71 (R2 = 0.812) for T60 (s), an RMSE of 0.1517 (R2 = 0.990) for V-funnel time, an RMSE of 3.99 (R2 = 0.976) for Orimet time, and an RMSE of 0.042 (R2 = 0.988) for the L-box ratio. A sensitivity analysis was performed to evaluate the effects of the dosages of cement and limestone powder, the water content, the volumes of coarse aggregate and sand, the dosage of SP and the testing time on the predicted test responses. The analysis indicates that the proposed SVM RBF model can achieve high precision, providing an alternative method for predicting the fresh properties of SCC.
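A hedged sketch of the general approach, not the paper's exact model: an RBF-kernel support vector regression fitted to mix-design features to predict one fresh property. The feature set and the synthetic data below are placeholders.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# RBF-kernel SVR for one fresh property (e.g. slump flow) as a function of mix
# proportions. The features, response and hyperparameters are illustrative.

rng = np.random.default_rng(1)
# columns: water [L/m3], SP [kg/m3], coarse aggregate [L/m3], testing time [min]
X = np.column_stack([rng.uniform(188, 208, 60),
                     rng.uniform(3.8, 5.8, 60),
                     rng.uniform(220, 360, 60),
                     rng.choice([0, 30, 60], 60)])
y = (500 + 3.0 * (X[:, 0] - 198) + 20.0 * (X[:, 1] - 4.8)
     - 0.2 * X[:, 3] + rng.normal(0, 10, 60))        # stand-in slump-flow response [mm]

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=5.0))
scores = cross_val_score(model, X, y, cv=5, scoring="neg_root_mean_squared_error")
print("cross-validated RMSE:", -scores.mean())
```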

Relevance: 30.00%

Abstract:

Thesis presented as a partial requirement for obtaining the degree of Doctor in Statistics and Information Management from the Instituto Superior de Estatística e Gestão de Informação of the Universidade Nova de Lisboa.

Relevance: 30.00%

Abstract:

The algorithmic approach to data modelling has developed rapidly in recent years; in particular, methods based on data mining and machine learning have been used in a growing number of applications. These methods follow a data-driven methodology, aiming at providing the best possible generalization and predictive ability rather than concentrating on the properties of the data model. One of the most successful groups of such methods is known as Support Vector algorithms. Following the fruitful developments in applying Support Vector algorithms to spatial data, this paper introduces a new extension of the traditional support vector regression (SVR) algorithm. This extension allows for the simultaneous modelling of environmental data at several spatial scales. The joint influence of environmental processes presenting different patterns at different scales is learned automatically from data, providing the optimum mixture of short- and large-scale models. The method is adaptive to the spatial scale of the data. With this advantage, it can provide an efficient means to model local anomalies that typically arise in the early phase of an environmental emergency. However, the proposed approach still requires some prior knowledge of the possible existence of such short-scale patterns, which is a possible limitation of the method for its implementation in early warning systems. The purpose of this paper is to present the multi-scale SVR model and to illustrate its use with an application to the mapping of Cs137 activity from measurements taken in the region of Briansk following the Chernobyl accident.
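One simple way to express the idea of mixing spatial scales in kernel methods is to use a weighted sum of RBF kernels with different length-scales. The sketch below does this with a precomputed kernel in a standard SVR; it is not the authors' exact multi-scale SVR formulation, and the weights, length-scales and data are made up.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel

# Two-scale kernel: a weighted sum of a large-scale and a short-scale RBF
# kernel, plugged into SVR via the precomputed-kernel interface.

def two_scale_kernel(A, B, gamma_large=0.1, gamma_short=10.0, w=0.7):
    return (w * rbf_kernel(A, B, gamma=gamma_large)
            + (1 - w) * rbf_kernel(A, B, gamma=gamma_short))

rng = np.random.default_rng(2)
coords = rng.uniform(0, 10, size=(200, 2))                  # stand-in sampling locations
values = np.sin(coords[:, 0]) + 0.2 * rng.normal(size=200)  # stand-in measurements

svr = SVR(kernel="precomputed", C=10.0, epsilon=0.05)
svr.fit(two_scale_kernel(coords, coords), values)

query = rng.uniform(0, 10, size=(5, 2))
pred = svr.predict(two_scale_kernel(query, coords))   # kernel between new and training points
print(pred)
```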

Relevance: 30.00%

Abstract:

Comets have been spectacular objects in the night sky since the dawn of mankind. Because of their dramatic apparitions and enigmatic behaviour, followed by coincidental calamities, they were regarded as notorious and called 'bad omens'. Through the systematic study of these objects, the modern scientific community came to understand that they are part of our solar system. Comets are believed to be remnant bodies from the end of the solar system's formation and to preserve the material of the solar nebula; hence, they are considered among the most pristine objects, able to provide information about the conditions in the solar nebula. They are small bodies of our solar system, typically about a kilometer to a few tens of kilometers in size, orbiting the Sun in highly elliptical orbits. The solid body of a comet is the nucleus, a conglomerated mixture of water ice, dust and other frozen gases. When the cometary nucleus approaches the Sun in its orbit, the ices sublimate and produce a gaseous envelope around the nucleus called the coma. The gravity of the cometary nucleus is very small and hence cannot influence the motion of gases in the coma; although the nucleus is only a few kilometers in size, it can produce a transient, extensive and expanding atmosphere several orders of magnitude larger. By ejecting gas and dust into space, comets become the most active members of the solar system. Solar radiation and the solar wind influence the motion of dust and ions, producing dust and ion tails, respectively. Comets have been observed in different spectral regions with rocket-, ground- and space-borne instruments. The observed emission intensities are used to quantify the chemical abundances of different species in comets, and the study of the various physical and chemical processes that govern these emissions is essential before estimating chemical abundances in the coma. Cameron band emission of the CO molecule has been used to derive the CO2 abundance in comets, based on the assumption that photodissociation of CO2 is the main source of these emissions. Similarly, the visible emissions of atomic oxygen have been used to probe H2O in the cometary coma; the observed ratio of the green line ([OI] 5577 Å) to the red doublet ([OI] 6300 and 6364 Å) has been used to confirm H2O as the parent species of these emissions. In this thesis, a model is developed to understand the photochemistry of these emissions and applied to several comets. The model-calculated emission intensities are compared with observations made by space-borne instruments such as the International Ultraviolet Explorer (IUE) and the Hubble Space Telescope (HST), as well as by various ground-based telescopes.

Relevance: 30.00%

Abstract:

The contribution investigates the problem of estimating the size of a population, also known as the missing cases problem. Suppose a registration system aims to identify all cases having a certain characteristic, such as a specific disease (cancer, heart disease, ...), a disease-related condition (HIV, heroin use, ...) or a specific behavior (driving a car without a license). Every case in such a registration system has a certain notification history, in that it might have been identified several times (at least once), which can be understood as a particular capture-recapture situation. Cases that have never been listed on any occasion are typically left out, and it is this frequency one wants to estimate. In this paper, modelling concentrates on the counting distribution, i.e. the distribution of the variable that counts how often a given case has been identified by the registration system. Besides very simple models such as the binomial or Poisson distribution, finite (nonparametric) mixtures of these are considered, providing rather flexible modelling tools. Estimation is done by maximum likelihood by means of the EM algorithm. A case study on heroin users in Bangkok in the year 2001 completes the contribution.
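The simplest instance of this approach is a zero-truncated Poisson model for the counting distribution, which already yields an estimate of the never-identified cases; the paper goes further with finite Poisson mixtures fitted via EM. The sketch below uses made-up count data.

```python
import numpy as np
from scipy.optimize import brentq

# Zero-truncated Poisson estimate of the missing cases: fit lambda from the
# observed counting distribution, then scale up by the estimated probability
# of never being identified. The count data are illustrative.

counts = np.repeat([1, 2, 3, 4], [1500, 400, 90, 10])   # identifications per observed case
n_obs, mean_count = counts.size, counts.mean()

# For a zero-truncated Poisson the MLE satisfies lam / (1 - exp(-lam)) = mean.
lam = brentq(lambda l: l / (1 - np.exp(-l)) - mean_count, 1e-8, 50.0)

p0 = np.exp(-lam)                      # probability of never being identified
f0 = n_obs * p0 / (1 - p0)             # estimated number of missing cases
print(f"lambda = {lam:.3f}, estimated missing cases = {f0:.0f}, "
      f"total population = {n_obs + f0:.0f}")
```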

Relevance: 30.00%

Abstract:

The rate at which a given site in a gene sequence alignment evolves over time may vary. This phenomenon, known as heterotachy, can bias or distort phylogenetic trees inferred from models of sequence evolution that assume rates of evolution are constant. Here, we describe a phylogenetic mixture model designed to accommodate heterotachy. The method sums the likelihood of the data at each site over more than one set of branch lengths on the same tree topology. A branch-length set that is best for one site may differ from the branch-length set that is best for some other site, thereby allowing different sites to have different rates of change throughout the tree. Because rate variation may not be present in all branches, we use a reversible-jump Markov chain Monte Carlo algorithm to identify those branches in which reliable amounts of heterotachy occur. We implement the method in combination with our 'pattern-heterogeneity' mixture model, applying it to simulated data and five published datasets. We find that complex evolutionary signals of heterotachy are routinely present over and above variation in the rate or pattern of evolution across sites, and that the reversible-jump method requires far fewer parameters than conventional mixture models to describe them, while serving to identify the regions of the tree in which heterotachy is most pronounced. The reversible-jump procedure also removes the need for a posteriori tests of 'significance', such as the Akaike or Bayesian information criterion tests, or Bayes factors. Heterotachy has important consequences for the correct reconstruction of phylogenies as well as for tests of hypotheses that rely on accurate branch-length information, including molecular clocks, analyses of the tempo and mode of evolution, comparative studies and ancestral state reconstruction. The model is available from the authors' website, and can be used for the analysis of both nucleotide and morphological data.
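In notation chosen here (not necessarily the paper's), the core computation of summing the site likelihood over several branch-length sets on a fixed topology can be written as:

```latex
% Site likelihood summed over K branch-length sets b_1,...,b_K on topology \tau.
% The symbols (K, w_k, b_k, \theta, D_i) are chosen for this sketch.
\[
  L_i \;=\; \sum_{k=1}^{K} w_k \,\Pr\!\left(D_i \mid \tau, b_k, \theta\right),
  \qquad \sum_{k=1}^{K} w_k = 1,
  \qquad \log L \;=\; \sum_{i} \log L_i ,
\]
```

where D_i is the alignment data at site i, w_k are the mixture weights and theta collects the substitution-model parameters; the reversible-jump MCMC then decides, branch by branch, how much of this extra branch-length structure the data support.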

Relevance: 30.00%

Abstract:

A connection is established between a fuzzy neural network model and the mixture of experts network (MEN) modelling approach. Based on this linkage, two new neuro-fuzzy MEN construction algorithms are proposed to overcome the curse of dimensionality that is inherent in the majority of associative memory networks and other rule-based systems. The first construction algorithm employs a function selection manager module in an MEN system. The second is based on a new parallel learning algorithm in which each model rule is trained independently, and for which the parameter convergence property of the new learning method is established. As with the first approach, an expert selection criterion is utilised in this algorithm. The two construction methods are equally effective in overcoming the curse of dimensionality by reducing the dimensionality of the regression vector, but the latter has the additional computational advantage of parallel processing. The proposed algorithms are analysed for effectiveness, followed by numerical examples that illustrate their efficacy on some difficult data-based modelling problems.
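As a reference point, the sketch below computes the output of a generic mixture-of-experts network: a softmax gate blends the predictions of local linear experts. This is the standard MEN form rather than either of the paper's neuro-fuzzy construction algorithms, and all parameter values are random placeholders.

```python
import numpy as np

# Generic mixture-of-experts prediction: softmax gating over K linear experts.
# Parameter arrays are placeholders, not trained models.

def men_predict(x, expert_weights, expert_biases, gate_weights, gate_biases):
    """x: (d,) input; experts and gate are linear maps defined by the arrays."""
    expert_out = expert_weights @ x + expert_biases      # (K,) expert predictions
    gate_logits = gate_weights @ x + gate_biases          # (K,) gating scores
    gate = np.exp(gate_logits - gate_logits.max())
    gate /= gate.sum()                                     # softmax gating weights
    return float(gate @ expert_out)                        # gated blend of experts

rng = np.random.default_rng(3)
K, d = 3, 4
y = men_predict(rng.normal(size=d),
                rng.normal(size=(K, d)), rng.normal(size=K),
                rng.normal(size=(K, d)), rng.normal(size=K))
print(y)
```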

Relevance: 30.00%

Abstract:

The study developed in this thesis focuses on the numerical modelling of the propagation phase of fast landslides using the Smoothed Particle Hydrodynamics (SPH) meshless method, which has the great advantage of handling large-deformation problems while avoiding the expensive remeshing operations required by mesh-based methods such as the Finite Element Method. Special attention is given to the role played by rheology and pore water pressure during these natural hazards. The mathematical framework is based on the v - pw Biot-Zienkiewicz formulation, which describes the behaviour of the mixture of solid particles and pore water in a saturated medium in terms of soil skeleton velocity and pore water pressure. The governing equations are the mass balance equation for the pore water phase, the momentum balance equations for the pore water phase and for the mixture, the constitutive equation, and a kinematic equation. Because of their shape and geometry, landslides have depths that are small in comparison with their length and width, so the mathematical model can be simplified by depth-integrating the equations, switching from a 3D to a 2D model that offers an excellent combination of accuracy, simplicity and computational cost. The proposed model differs from previous depth-integrated models by including a sub-model that provides information on the pore water pressure profile at each computational step of the landslide's propagation. The evolution of the pore pressure profiles is solved numerically with an explicit 1D Finite Difference scheme at each SPH node. This approach accounts for variations of pore water pressure due to changes of height, vertical consolidation or changes of total stress. Concerning the constitutive behaviour, one of the main difficulties in modelling fast landslides is simulating, with a single constitutive or rheological model, the transition from the triggering phase, where the material behaves like a solid, to the propagation phase, where it behaves like a fluid. In this thesis, a new rheological model based on the Perzyna viscoplastic model is proposed, with viscoplasticity regarded as the key to bridging the triggering and propagation phases within the same constitutive model. To validate the mathematical model and the numerical approach, benchmark problems with analytical solutions and laboratory experiments are reproduced. Finally, the model is applied to real cases, with particular attention paid to the 1966 Aberfan flowslide, showing that the model successfully simulates this kind of natural hazard.
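As an illustration of the per-node pore pressure sub-model, the sketch below advances a vertical pore pressure profile with an explicit 1D finite-difference scheme for the consolidation equation dp/dt = c_v d2p/dz2, with a drained free surface and an impervious base. It is a stand-in under assumed boundary conditions and made-up parameters, not the thesis implementation.

```python
import numpy as np

# Explicit 1D finite-difference update of a vertical pore water pressure
# profile (consolidation equation), as a stand-in for the per-SPH-node
# sub-model described above. Boundary conditions, c_v and the grid are
# illustrative assumptions.

def consolidate(p, c_v, dz, dt, steps):
    """p: (n,) pore pressure at equally spaced points from base (0) to surface (-1)."""
    assert c_v * dt / dz**2 <= 0.5, "explicit scheme stability limit"
    for _ in range(steps):
        lap = np.empty_like(p)
        lap[1:-1] = (p[2:] - 2 * p[1:-1] + p[:-2]) / dz**2
        lap[0] = 2 * (p[1] - p[0]) / dz**2       # impervious base: dp/dz = 0
        lap[-1] = 0.0
        p = p + dt * c_v * lap
        p[-1] = 0.0                               # drained free surface
    return p

depth, n = 2.0, 21                                # 2 m deep column, 21 grid points
z = np.linspace(0.0, depth, n)
p0 = 9.81 * 1000.0 * (depth - z)                  # initial hydrostatic excess pressure [Pa]
p = consolidate(p0, c_v=1e-4, dz=z[1] - z[0], dt=0.02, steps=500)
print(p.round(1))
```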