868 results for process modelling
Abstract:
The development of a new process model of cement grinding in two-stage mills is discussed. The new model has been used to simulate cement grinding and to predict mill performance in open- and closed-circuit configurations. The model treats the two-compartment mill as perfectly mixed slices in series. The breakage rate function is determined offline by a back-calculation technique using drop weight and abrasion tests.
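The slices-in-series idea can be sketched for a single coarse size class: each perfectly mixed slice behaves as a CSTR with first-order breakage, so the coarse fraction is attenuated slice by slice. The rate and residence-time values below are hypothetical, chosen only to illustrate the structure, not parameters from the paper.

```python
# Minimal sketch of a mill modelled as perfectly mixed slices in series,
# with first-order breakage of a single coarse size class.
# The breakage rate k and residence time tau are hypothetical values.

def coarse_out(feed_coarse, k, tau, n_slices):
    """Steady-state coarse fraction leaving n perfectly mixed slices in
    series, each with first-order breakage rate k and residence time tau."""
    x = feed_coarse
    for _ in range(n_slices):
        # CSTR balance for first-order disappearance: out = in / (1 + k*tau)
        x = x / (1.0 + k * tau)
    return x

# For the same total residence time, more slices (more plug-flow-like
# transport) break more of the coarse material than one fully mixed tank.
single = coarse_out(1.0, k=0.5, tau=10.0, n_slices=1)
sliced = coarse_out(1.0, k=0.5, tau=2.0, n_slices=5)
print(single, sliced)
```

This is why the choice of the number of slices matters when fitting the breakage rate function by back-calculation: the same product size distribution implies different rates under different mixing assumptions.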
Abstract:
The birth, death and catastrophe process is an extension of the birth-death process that incorporates the possibility of reductions in population of arbitrary size. We will consider a general form of this model in which the transition rates are allowed to depend on the current population size in an arbitrary manner. The linear case, where the transition rates are proportional to current population size, has been studied extensively. In particular, extinction probabilities, the expected time to extinction, and the distribution of the population size conditional on nonextinction (the quasi-stationary distribution) have all been evaluated explicitly. However, whilst these characteristics are of interest in the modelling and management of populations, processes with linear rate coefficients represent only a very limited class of models. We address this limitation by allowing for a wider range of catastrophic events. Despite this generalisation, explicit expressions can still be found for the expected extinction times.
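A birth, death and catastrophe process with state-dependent rates can be simulated directly with a Gillespie-style algorithm, which also gives a Monte Carlo check on expected extinction times. The rate functions and the uniform catastrophe size distribution below are hypothetical illustrations, not the forms analysed in the paper.

```python
import random

# Illustrative Gillespie-style simulation of a birth, death and catastrophe
# process. The rates and the catastrophe size distribution are hypothetical.

def simulate_extinction_time(n0, birth, death, cat_rate, max_time=1e6, rng=None):
    """Simulate until extinction (n == 0) and return the extinction time.
    birth(n), death(n), cat_rate(n) give the current transition rates;
    a catastrophe removes a uniformly chosen number of 1..n individuals."""
    rng = rng or random.Random()
    n, t = n0, 0.0
    while n > 0 and t < max_time:
        b, d, c = birth(n), death(n), cat_rate(n)
        total = b + d + c
        t += rng.expovariate(total)          # waiting time to next event
        u = rng.uniform(0.0, total)
        if u < b:
            n += 1                           # birth
        elif u < b + d:
            n -= 1                           # single death
        else:
            n -= rng.randint(1, n)           # catastrophe of arbitrary size
    return t

# Subcritical linear rates plus a small catastrophe rate: extinction is certain.
rng = random.Random(1)
t_ext = simulate_extinction_time(
    20, birth=lambda n: 0.5 * n, death=lambda n: 1.0 * n,
    cat_rate=lambda n: 0.1, rng=rng)
print(t_ext)
```

Averaging `t_ext` over many seeded runs would give a Monte Carlo estimate to compare against the explicit expected-extinction-time expressions the paper derives.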
Abstract:
Modelling and optimization of the power draw of large SAG/AG mills is important because of the large power draw which modern mills require (5-10 MW). Grinding is the single biggest cost within the entire process of mineral extraction. Traditionally, mill power draw has been modelled using empirical models. Although these models are reliable, they cannot model mills and operating conditions outside the model database boundaries. Also, because of their static nature, such models cannot determine the impact of changing conditions within the mill on the power draw. Despite advances in computing power, discrete element method (DEM) modelling of large mills with many thousands of particles can be a time-consuming task. The speed of computation is determined principally by two parameters: the number of particles involved and the material properties. The computational time step is set by the size of the smallest particle present in the model and by the material properties (stiffness). With small particles the computational time step will be short, whilst with large particles it will be larger. Hence, from the point of view of the time required for modelling (which usually corresponds to the time required for 3-4 mill revolutions), it is advantageous that the smallest particles in the model are not unnecessarily small. The objective of this work is to compare the net power draw of a mill whose charge is characterised by different size distributions, while keeping the mass of the charge and the mill speed constant. (C) 2004 Elsevier Ltd. All rights reserved.
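The dependence of the DEM time step on the smallest particle and on stiffness can be made concrete with the Rayleigh critical time step, a commonly used estimate in DEM practice (the abstract does not state which criterion the authors use, so this is an assumption):

```python
import math

# Rayleigh critical time step: an estimate of the largest stable DEM time
# step, based on surface-wave propagation across the smallest particle.
# The rock-like property values below are illustrative.

def rayleigh_time_step(radius, density, shear_modulus, poisson):
    """Critical time step for the smallest particle of the given radius,
    density, shear modulus and Poisson ratio."""
    return (math.pi * radius * math.sqrt(density / shear_modulus)
            / (0.1631 * poisson + 0.8766))

rho, G, nu = 2700.0, 1.0e9, 0.25
dt_small = rayleigh_time_step(0.005, rho, G, nu)   # 5 mm radius
dt_large = rayleigh_time_step(0.050, rho, G, nu)   # 50 mm radius
# The step scales linearly with the smallest radius and inversely with
# the square root of stiffness, so small fines dominate the runtime.
print(dt_small, dt_large)
```

This shows directly why removing unnecessarily small particles from the charge model shortens the wall-clock time needed to simulate 3-4 mill revolutions.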
Abstract:
Nucleation is the first stage in any granulation process where binder liquid first comes into contact with the powder. This paper investigates the nucleation process where binder liquid is added to a fine powder with a spray nozzle. The dimensionless spray flux approach of Hapgood et al. (Powder Technol. 141 (2004) 20) is extended to account for nonuniform spray patterns and allow for overlap of nuclei granules rather than spray drops. A dimensionless nuclei distribution function which describes the effects of the design and operating parameters of the nucleation process (binder spray characteristics, the nucleation area ratio between droplets and nuclei and the powder bed velocity) on the fractional surface area coverage of nuclei on a moving powder bed is developed. From this starting point, a Monte Carlo nucleation model that simulates full nuclei size distributions as a function of the design and operating parameters that were implemented in the dimensionless nuclei distribution function is developed. The nucleation model was then used to investigate the effects of the design and operating parameters on the formed nuclei size distributions and to correlate these effects to changes of the dimensionless nuclei distribution function. Model simulations also showed that it is possible to predict nuclei size distributions beyond the drop controlled nucleation regime in Hapgood's nucleation regime map. Qualitative comparison of model simulations and experimental nucleation data showed similar shapes of the nuclei size distributions. In its current form, the nucleation model can replace the nucleation term in one-dimensional population balance models describing wet granulation processes. Implementation of more sophisticated nucleation kinetics can make the model applicable to multi-dimensional population balance models.
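The overlap-of-nuclei idea can be illustrated with a much-simplified one-dimensional Monte Carlo: nuclei are placed as intervals on a moving powder strip and overlapping intervals coalesce into larger nuclei. The geometry and all parameter values are hypothetical; the paper's model works with full 2-D spray patterns and drop-to-nucleus area ratios.

```python
import random

# 1-D Monte Carlo sketch of nucleation with nuclei overlap.
# All dimensions are hypothetical illustration values.

def simulate_nuclei(n_drops, strip_len, nucleus_len, rng):
    """Place n_drops nuclei of fixed length at random positions and merge
    overlaps; return the sorted list of merged nucleus sizes."""
    starts = sorted(rng.uniform(0.0, strip_len - nucleus_len)
                    for _ in range(n_drops))
    merged = []
    for s in starts:
        e = s + nucleus_len
        if merged and s <= merged[-1][1]:        # overlaps previous nucleus
            merged[-1][1] = max(merged[-1][1], e)
        else:
            merged.append([s, e])
    return sorted(e - s for s, e in merged)

rng = random.Random(7)
sparse = simulate_nuclei(n_drops=200, strip_len=1000.0, nucleus_len=4.0, rng=rng)
# A higher spray flux (more drops on the same area) causes more
# coalescence: fewer, but larger, nuclei.
dense = simulate_nuclei(n_drops=800, strip_len=1000.0, nucleus_len=4.0, rng=rng)
print(len(sparse), len(dense))
```

Even this toy version reproduces the qualitative trend behind the dimensionless spray flux: as surface coverage rises, the nuclei size distribution broadens and shifts away from the drop-controlled regime.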
Abstract:
Minimal representations are known to have no redundant elements, and are therefore of great importance. Based on the notions of performance and size indices and measures for process systems, the paper proposes conditions for a process model to be minimal in a set of functionally equivalent models with respect to a size norm. Generalized versions of known procedures to obtain minimal process models for a given modelling goal, model reduction based on sensitivity analysis and incremental model building, are proposed and discussed. The notions and procedures are illustrated and compared on a simple example, a nonlinear fermentation process with different modelling goals, and on a case study of heat exchanger modelling. (C) 2004 Elsevier Ltd. All rights reserved.
Abstract:
Solids concentration and particle size distribution gradually change in the vertical dimension of industrial flotation cells, subject primarily to the flotation cell size and design and the cell operating conditions. As entrainment is a two-step process and involves only the suspended solids in the top pulp region near the pulp-froth interface, the solids suspension characteristics have a significant impact on the overall entrainment. In this paper, a classification function is proposed to describe the state of solids suspension in flotation cells, similar to the definition of degree of entrainment for classification in the froth phase found in the literature. A mathematical model for solids suspension is also developed, in which the classification function is expressed as an exponential function of the particle size. Experimental data collected from three different Outokumpu tank flotation cells in three different concentrators are well fitted by the proposed exponential model. Under the prevailing experimental conditions, it was found that the solids content in the top region was relatively independent of cell operating conditions such as froth height and air rate but dependent on the cell size. Moreover, the results obtained from the total solids tend to be similar to those from a particular gangue mineral and hence may be applied to all minerals in entrainment calculation. (C) 2004 Elsevier Ltd. All rights reserved.
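The exponential classification function described, of the form CF(d) = exp(-a·d), can be fitted by a log-linear least-squares regression through the origin. The data and the parameter value below are synthetic illustrations, not the Outokumpu cell measurements.

```python
import math

# Fit CF(d) = exp(-a * d) to classification data by regressing
# ln(CF) on d through the origin (ordinary least squares).
# Synthetic noise-free data; a = 0.02 per micron is an illustrative value.

def fit_exponential(diams, cf_values):
    """Least-squares estimate of a in CF(d) = exp(-a*d)."""
    num = sum(d * math.log(cf) for d, cf in zip(diams, cf_values))
    den = sum(d * d for d in diams)
    return -num / den

diams = [10.0, 25.0, 50.0, 75.0, 106.0]       # particle sizes (microns)
cf = [math.exp(-0.02 * d) for d in diams]      # synthetic suspension data
a_hat = fit_exponential(diams, cf)
print(a_hat)  # recovers 0.02 exactly on noise-free data
```

In practice the fitted parameter would vary with cell size, consistent with the finding that the top-region solids content depends mainly on cell size rather than froth height or air rate.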
Abstract:
On-site wastewater treatment and dispersal systems (OWTS) are used in non-sewered populated areas in Australia to treat and dispose of household wastewater. The most common OWTS in Australia is the septic tank-soil absorption system (SAS) - which relies on the soil to treat and disperse effluent. The mechanisms governing purification and hydraulic performance of a SAS are complex and have been shown to be highly influenced by the biological zone (biomat) which develops on the soil surface within the trench or bed. Studies suggest that removal mechanisms in the biomat zone, primarily adsorption and filtering, are important processes in the overall purification abilities of a SAS. There is growing concern that poorly functioning OWTS are impacting upon the environment, although to date, only a few investigations have been able to demonstrate pollution of waterways by on-site systems. In this paper we review some key hydrological and biogeochemical mechanisms in SAS, and the processes leading to hydraulic failure. The nutrient and pathogen removal efficiencies in soil absorption systems are also reviewed, and a critical discussion of the evidence of failure and environmental and public health impacts arising from SAS operation is presented. Future research areas identified from the review include the interactions between hydraulic and treatment mechanisms, and the biomat and sub-biomat zone gas composition and its role in effluent treatment.
Abstract:
Background and Aims The morphogenesis and architecture of a rice plant, Oryza sativa, are critical factors in the yield equation, but they are not well studied because of the lack of appropriate tools for 3D measurement. The architecture of rice plants is characterized by a large number of tillers and leaves. The aims of this study were to specify rice plant architecture and to find appropriate functions to represent the 3D growth across all growth stages. Methods A japonica type rice, 'Namaga', was grown in pots under outdoor conditions. A 3D digitizer was used to measure the rice plant structure at intervals from the young seedling stage to maturity. The L-system formalism was applied to create '3D virtual rice' plants, incorporating models of phenological development and leaf emergence period as a function of temperature and photoperiod, which were used to determine the timing of tiller emergence. Key Results The relationships between the nodal positions and leaf lengths, leaf angles and tiller angles were analysed and used to determine growth functions for the models. The '3D virtual rice' reproduces the structural development of isolated plants and provides a good estimation of the tillering process and of the accumulation of leaves. Conclusions The results indicated that the '3D virtual rice' has the potential to demonstrate the differences in structure and development between cultivars and under different environmental conditions. Future work, necessary to reflect both cultivar and environmental effects on model performance and to link with physiological models, is proposed in the discussion.
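The L-system formalism used here is parallel string rewriting. A toy sketch shows the mechanism; the production rule below (an apex A produces a leaf L, a bracketed tiller bud T, and a new apex) is purely illustrative, not the study's actual rule set.

```python
# Toy L-system: parallel string rewriting, the formalism behind the
# '3D virtual rice' model. The rule below is illustrative only.

def lsystem(axiom, rules, steps):
    """Apply parallel string-rewriting rules for a number of steps;
    symbols without a rule are copied unchanged."""
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

rules = {"A": "L[T]A"}          # apex -> leaf, tiller bud (branch), apex
print(lsystem("A", rules, 3))   # -> "L[T]L[T]L[T]A"
```

In a full functional-structural model, each symbol carries parameters (node position, leaf length, angles) and the rewriting schedule is driven by the phenology and leaf-emergence models of temperature and photoperiod.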
Abstract:
This paper presents a new method for producing a functional-structural plant model that simulates response to different growth conditions, yet does not require detailed knowledge of underlying physiology. The example used to present this method is the modelling of the mountain birch tree. This new functional-structural modelling approach is based on linking an L-system representation of the dynamic structure of the plant with a canonical mathematical model of plant function. Growth indicated by the canonical model is allocated to the structural model according to probabilistic growth rules, such as rules for the placement and length of new shoots, which were derived from an analysis of architectural data. The main advantage of the approach is that it is relatively simple compared to the prevalent process-based functional-structural plant models and does not require a detailed understanding of underlying physiological processes, yet it is able to capture important aspects of plant function and adaptability, unlike simple empirical models. This approach, combining canonical modelling, architectural analysis and L-systems, thus fills the important role of providing an intermediate level of abstraction between the two extremes of deeply mechanistic process-based modelling and purely empirical modelling. We also investigated the relative importance of various aspects of this integrated modelling approach by analysing the sensitivity of the standard birch model to a number of variations in its parameters, functions and algorithms. The results show that using light as the sole factor determining the structural location of new growth gives satisfactory results. Including the influence of additional regulating factors made little difference to global characteristics of the emergent architecture. Changing the form of the probability functions and using alternative methods for choosing the sites of new growth also had little effect. (c) 2004 Elsevier B.V. All rights reserved.
Abstract:
We demonstrate a portable process for developing a triple bottom line model to measure the knowledge production performance of individual research centres. For the first time, this study also empirically illustrates how a fully units-invariant model of Data Envelopment Analysis (DEA) can be used to measure the relative efficiency of research centres by capturing the interaction amongst a common set of multiple inputs and outputs. This study is particularly timely given the increasing transparency required by governments and industries that fund research activities. The process highlights the links between organisational objectives, desired outcomes and outputs while the emerging performance model represents an executive managerial view. This study brings consistency to current measures that often rely on ratios and univariate analyses that are not otherwise conducive to relative performance analysis.
Abstract:
Let (Φ(t)), t ∈ ℝ+, be a Harris ergodic continuous-time Markov process on a general state space, with invariant probability measure π. We investigate the rates of convergence of the transition function P^t(x, ·) to π; specifically, we find conditions under which r(t)‖P^t(x, ·) − π‖ → 0 as t → ∞, for suitable subgeometric rate functions r(t), where ‖·‖ denotes the usual total variation norm for a signed measure. We derive sufficient conditions for the convergence to hold, in terms of the existence of suitable points on which the first hitting time moments are bounded. In particular, for stochastically ordered Markov processes, explicit bounds on subgeometric rates of convergence are obtained. These results are illustrated in several examples.
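In standard notation, the convergence statement studied can be sketched as follows (the precise drift and hitting-time conditions are those given in the paper; only the notation is reconstructed here):

```latex
% Subgeometric ergodicity: for a suitable rate function r(t),
r(t)\,\bigl\| P^t(x,\cdot) - \pi \bigr\|_{TV} \;\longrightarrow\; 0
\quad \text{as } t \to \infty,
\qquad\text{where}\quad
\|\mu\|_{TV} \;=\; \sup_{|g|\le 1} \Bigl|\, \int \mu(\mathrm{d}y)\, g(y) \Bigr|
```

Here "subgeometric" means r(t) grows to infinity more slowly than any geometric rate, e.g. polynomial rates r(t) = t^α.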
Abstract:
The aim of the study presented was to implement a process model to simulate the dynamic behaviour of a pilot-scale process for anaerobic two-stage digestion of sewage sludge. The model implementation was initiated to support experimental investigations of the anaerobic two-stage digestion process. The model concept, implemented in the simulation software package MATLAB(TM)/Simulink(R), is a derivative of the IWA Anaerobic Digestion Model No. 1 (ADM1) that has been developed by the IWA task group for mathematical modelling of anaerobic processes. In the present study the original model concept has been adapted and applied to replicate a two-stage digestion process. Testing procedures, including balance checks and 'benchmarking' tests, were carried out to verify the accuracy of the implementation. These combined measures ensured a faultless model implementation without numerical inconsistencies. Parameters for both the thermophilic and the mesophilic process stages have been estimated successfully using data from lab-scale experiments described in the literature. Due to the high number of parameters in the structured model, it was necessary to develop a customised procedure that limited the range of parameters to be estimated. The accuracy of the optimised parameter sets has been assessed against experimental data from pilot-scale experiments. Under these conditions, the model predicted the dynamic behaviour of a two-stage digestion process in pilot scale reasonably well. (C) 2004 Elsevier Ltd. All rights reserved.
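The two-stage structure can be sketched, very crudely, as two CSTRs in series with first-order substrate degradation. ADM1 itself tracks dozens of state variables and biochemical processes; the rates, residence times and feed concentration below are hypothetical round numbers used only to show the model structure.

```python
# Grossly simplified two-stage anaerobic digester: two CSTRs in series
# with first-order substrate degradation, integrated with forward Euler.
# All parameter values are hypothetical illustrations.

def simulate(s_in, k1, k2, tau1, tau2, dt, t_end):
    """Return substrate concentrations (s1, s2) in the first (thermophilic)
    and second (mesophilic) stages after integrating to t_end."""
    s1 = s2 = s_in
    t = 0.0
    while t < t_end:
        ds1 = (s_in - s1) / tau1 - k1 * s1     # stage 1 mass balance
        ds2 = (s1 - s2) / tau2 - k2 * s2       # stage 2 is fed by stage 1
        s1 += dt * ds1
        s2 += dt * ds2
        t += dt
    return s1, s2

s1, s2 = simulate(s_in=10.0, k1=0.5, k2=0.2, tau1=2.0, tau2=10.0,
                  dt=0.01, t_end=200.0)
print(s1, s2)  # the second stage polishes what the first stage leaves
```

At steady state the balances reduce to s1 = s_in/(1 + k1·τ1) and s2 = s1/(1 + k2·τ2), which is a useful sanity check of the kind the "balance checks" in the study serve at much larger scale.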
Abstract:
Evolutionary algorithms perform optimization using a population of sample solution points. An interesting development has been to view population-based optimization as the process of evolving an explicit, probabilistic model of the search space. This paper investigates a formal basis for continuous, population-based optimization in terms of a stochastic gradient descent on the Kullback-Leibler divergence between the model probability density and the objective function, represented as an unknown density of assumed form. This leads to an update rule that is related and compared with previous theoretical work, a continuous version of the population-based incremental learning algorithm, and the generalized mean shift clustering framework. Experimental results are presented that demonstrate the dynamics of the new algorithm on a set of simple test problems.
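The flavour of such model-based updates can be sketched with a Gaussian search density whose mean follows a fitness-weighted average of the sampled population, a mean-shift-like step. This is an illustration of the general idea only, not the paper's exact KL-gradient update rule; all parameter values are arbitrary.

```python
import math
import random

# Population-based optimization with an explicit probabilistic model:
# a Gaussian search density whose mean is nudged toward the
# fitness-weighted mean of each sampled population.
# Sketch only; not the paper's exact update rule.

def optimize(f, mu0, sigma, pop, steps, lr, rng):
    mu = mu0
    for _ in range(steps):
        xs = [rng.gauss(mu, sigma) for _ in range(pop)]   # sample population
        ws = [f(x) for x in xs]                  # objective as unnormalised density
        total = sum(ws)
        target = sum(w * x for w, x in zip(ws, xs)) / total
        mu += lr * (target - mu)                 # stochastic step toward weighted mean
    return mu

# Maximise a Gaussian-shaped objective centred at x = 3.
rng = random.Random(0)
f = lambda x: math.exp(-((x - 3.0) ** 2))
mu = optimize(f, mu0=0.0, sigma=1.0, pop=50, steps=200, lr=0.2, rng=rng)
print(mu)  # approaches 3
```

The update can be read as moving the model density toward the objective treated as a target density, which is the perspective the stochastic-gradient-on-KL-divergence formulation makes precise.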
Abstract:
A one-dimensional computational model of pilling of a fibre assembly has been created. The model follows a set of individual fibres, as free ends and loops appear as fuzz and are progressively withdrawn from the body of the assembly, and entangle to form pills, which eventually break off or are pulled out. The time dependence of the computation is given by ticks, which correspond to cycles of a wear and laundering process. The movement of the fibres is treated as a reptation process. A set of standard values is used as inputs to the computation. Predictions are given of the change, over a number of cycles, in the mass of fuzz, the mass of pills, and the mass removed from the assembly. Changes in the standard values allow sensitivity studies to be carried out.
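The tick-based bookkeeping can be sketched as a simple per-fibre state machine: each fibre is in the body, has been withdrawn as fuzz, has entangled into a pill, or has been removed, and each tick (one wear/laundering cycle) moves fibres between states with fixed probabilities. All probabilities are hypothetical, not the paper's standard values.

```python
import random

# Tick-based fibre-state sketch: BODY -> FUZZ -> PILL -> GONE.
# Transition probabilities per tick are hypothetical illustrations.

BODY, FUZZ, PILL, GONE = range(4)

def tick(states, p, rng):
    """Advance every fibre by one wear/laundering cycle, in place."""
    for i, s in enumerate(states):
        r = rng.random()
        if s == BODY and r < p["withdraw"]:
            states[i] = FUZZ                 # fibre end withdrawn as fuzz
        elif s == FUZZ and r < p["entangle"]:
            states[i] = PILL                 # fuzz entangles into a pill
        elif s == PILL and r < p["break_off"]:
            states[i] = GONE                 # pill breaks off / is pulled out

rng = random.Random(42)
states = [BODY] * 1000
p = {"withdraw": 0.05, "entangle": 0.10, "break_off": 0.02}
for _ in range(50):                          # 50 wear/laundering cycles
    tick(states, p, rng)
print([states.count(s) for s in (BODY, FUZZ, PILL, GONE)])
```

Multiplying the counts in each state by per-fibre mass gives the fuzz, pill, and removed-mass curves versus cycles; varying the probabilities reproduces the kind of sensitivity study the paper describes.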
Abstract:
The patterns of rock comminution within tumbling mills, as well as the nature of the forces involved, are of significant practical importance. Discrete element modelling (DEM) has been used to analyse the pattern of specific energy applied to rock, in terms of its spatial distribution within a pilot AG/SAG mill. We also analysed in some detail the nature of the forces which may result in rock comminution. In order to examine the distribution of energy applied within the mill, the DEM models were compared with measured particle mass losses in small-scale AG and SAG mill experiments. The intensity of contact stresses was estimated using the Hertz theory of elastic contacts. The results indicate that in the case of the AG mill, the highest intensity stresses and strains are likely to occur deep within the charge, close to its base. This effect is probably more pronounced for large AG mills. In the SAG mill case, the impacts of the steel balls on the surface of the charge are likely to be the most potent. In both cases, the spatial pattern of medium-to-high energy collisions is affected by the rotational speed of the mill. Based on an assumed damage threshold for rock, in terms of specific energy introduced per single collision, the spatial pattern of productive collisions within each charge was estimated and compared with rates of mass loss. We also investigated the nature of the comminution process within the AG vs. the SAG mill, in order to explain the observed differences in energy utilisation efficiency between the two types of milling. All experiments were performed using a laboratory-scale mill of 1.19 m diameter and 0.31 m length, equipped with 14 square-section lifters of height 40 mm. (C) 2006 Elsevier Ltd. All rights reserved.
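The Hertz elastic-contact estimate used to gauge contact stress intensity can be sketched with the standard Hertz results for two spheres; the material properties and overlaps below are illustrative values, not those of the pilot mill charge.

```python
import math

# Standard Hertz elastic-contact estimates for two spheres in normal
# contact. Property and overlap values are illustrative only.

def hertz_contact(r1, r2, e_star, overlap):
    """Return (normal force, peak contact pressure) for two elastic
    spheres with effective modulus e_star and normal overlap delta."""
    r_eff = r1 * r2 / (r1 + r2)                      # effective radius
    force = (4.0 / 3.0) * e_star * math.sqrt(r_eff) * overlap ** 1.5
    a = math.sqrt(r_eff * overlap)                   # contact circle radius
    p_max = 3.0 * force / (2.0 * math.pi * a * a)    # peak Hertz pressure
    return force, p_max

f1, p1 = hertz_contact(0.05, 0.05, 5.0e9, 1.0e-4)    # two 50 mm rocks
f2, p2 = hertz_contact(0.05, 0.05, 5.0e9, 2.0e-4)    # doubled overlap
print(f1, p1)
print(f2 / f1)  # force scales as overlap^(3/2), so the ratio is 2**1.5
```

Applying such estimates to each DEM contact converts the recorded overlaps into stress intensities, which is how collisions can be classified against an assumed damage threshold for rock.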