883 results for mathematical modelling
Abstract:
This paper presents an overview of depth-averaged modelling of fast catastrophic landslides in which the coupling of the solid skeleton and the pore fluid (air and water) is important. The first goal is to show how Biot-Zienkiewicz models can be applied to develop depth-integrated, coupled models. The second objective is to consider the link that can be established between rheological and constitutive models. Perzyna's viscoplasticity can be considered a general framework within which rheological models such as Bingham and cohesive-frictional fluids can be derived. Among the several alternative numerical methods, we focus here on SPH, which has not been widely applied by engineers to model landslide propagation. We propose an improvement based on combining finite-difference meshes associated with SPH nodes to describe the pore-pressure evolution inside the landslide mass. We devote a section to analysing the performance of the models, considering three sets of tests and examples that allow us to assess the models' performance and limitations: (i) problems having an analytical solution, (ii) small-scale laboratory tests, and (iii) real cases for which we have had access to reliable information.
Abstract:
Numerical modelling methodologies are important because of their application to engineering and scientific problems in which analytical mathematical expressions cannot be obtained to model the underlying processes. When the only available information is a set of experimental values for the variables that determine the state of the system, the modelling problem is equivalent to determining the hyper-surface that best fits the data. This paper presents a methodology based on the Galerkin formulation of the finite element method to obtain representations of relationships, defined a priori, between a set of variables: y = z(x1, x2, ..., xd). These representations are generated from the values of the variables in the experimental data. The piecewise approximation is an element of a Sobolev space and has derivatives defined in a generalised sense within this space. Using this approach requires inverting a linear system whose structure allows a fast solver algorithm. The algorithm can be used in a variety of fields, making it a multidisciplinary tool. The validity of the methodology is studied on two real applications: a problem in hydrodynamics and an engineering problem involving fluids, heat and transport in an energy generation plant. The predictive capacity of the methodology is also tested using a cross-validation method.
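As a rough illustration of this idea, the sketch below fits scattered data with a piecewise-linear finite-element (hat-function) basis by least squares in one dimension. The node count, test function and noise level are invented assumptions, and the actual paper works in a Sobolev-space Galerkin setting in d dimensions; this only shows why the resulting linear system is structured (here, tridiagonal) and therefore cheap to solve.

```python
import numpy as np

# Minimal 1-D sketch: fit scattered data with a piecewise-linear
# finite-element (hat-function) basis by least squares.
def fit_fem_1d(x, y, n_nodes=11):
    nodes = np.linspace(x.min(), x.max(), n_nodes)
    h = nodes[1] - nodes[0]
    # Basis matrix: B[i, j] = hat_j(x_i), each hat supported on two elements
    B = np.maximum(0.0, 1.0 - np.abs(x[:, None] - nodes[None, :]) / h)
    # Normal equations B^T B c = B^T y; B^T B is tridiagonal because only
    # neighbouring hats overlap, which is what permits a fast solver.
    coeffs = np.linalg.solve(B.T @ B + 1e-9 * np.eye(n_nodes), B.T @ y)
    return nodes, coeffs

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + 0.05 * rng.normal(size=200)
nodes, c = fit_fem_1d(x, y)   # c[j] approximates the fitted surface at nodes[j]
```

A production version would assemble the tridiagonal system directly and solve it with the Thomas algorithm instead of forming the dense normal equations.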
Abstract:
A mathematical model that describes the operation of a sequential leach bed process for anaerobic digestion of the organic fraction of municipal solid waste (MSW) is developed and validated. This model assumes that ultimate mineralisation of the organic component of the waste occurs in three steps, namely solubilisation of particulate matter, fermentation to volatile organic acids (modelled as acetic acid) along with liberation of carbon dioxide and hydrogen, and methanogenesis from acetate and hydrogen. The model incorporates the ionic equilibrium equations arising from the dissolution of carbon dioxide, the generation of alkalinity from the breakdown of solids and the dissociation of acetic acid. Rather than a charge balance, a mass balance on the hydronium and hydroxide ions is used to calculate pH. The flow of liquid through the bed is modelled as occurring through two zones: a permeable zone with high flushing rates and a second, more stagnant zone. Some of the kinetic parameters for the biological processes were obtained from batch MSW digestion experiments. The parameters for the flow model were obtained from residence time distribution studies conducted using tritium as a tracer. The model was validated using data from leach bed digestion experiments in which a leachate volume equal to 10% of the fresh waste bed volume was sequenced. The model was then tested, without altering any kinetic or flow parameters, by varying the volume of leachate sequenced between the beds. Simulations for sequencing/recirculating 5 and 30% of the bed volume are presented and compared with experimental results. (C) 2002 Elsevier Science B.V. All rights reserved.
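The three-step degradation chain in the abstract can be caricatured as a two-reaction ODE system. The first-order kinetics and the rate constants below are invented for illustration; the actual model uses Monod-type kinetics, ionic equilibria and a two-zone flow model.

```python
# Highly reduced sketch: particulate solids P solubilise/ferment to acids A,
# which are consumed by methanogenesis to give CH4.  First-order kinetics and
# all rate constants are hypothetical.
k_sol, k_meth = 0.1, 0.5           # 1/day, hypothetical rate constants
dt, days = 0.01, 60.0
P, A, CH4 = 100.0, 0.0, 0.0        # g COD, hypothetical initial bed state
for _ in range(int(days / dt)):    # forward-Euler time stepping
    r1 = k_sol * P                 # solubilisation + fermentation
    r2 = k_meth * A                # methanogenesis
    P += -r1 * dt
    A += (r1 - r2) * dt
    CH4 += r2 * dt
# COD is conserved: P + A + CH4 stays at 100 throughout the run
```

With these constants, solubilisation is rate-limiting, so acids never accumulate; reproducing the pH dynamics of the paper would additionally require the ionic equilibrium and mass-balance-on-ions equations.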
Abstract:
Modelling of froth transportation, as part of modelling of froth recovery, provides a scale-up procedure for flotation cell design. It can also assist in improving the control of flotation operations. Mathematical models of froth velocity on the surface and of froth residence time distribution in a cylindrical tank flotation cell are proposed, based on a mass balance on the air entering the froth. The models take into account factors such as cell size, concentrate launder configuration, use of a froth crowder, cell operating conditions including froth height and air rate, and bubble bursting on the surface. (C) 2004 Elsevier Ltd. All rights reserved.
Abstract:
Cluster analysis via a finite mixture model approach is considered. With this approach to clustering, the data can be partitioned into a specified number of clusters g by first fitting a mixture model with g components. An outright clustering of the data is then obtained by assigning an observation to the component to which it has the highest estimated posterior probability of belonging; that is, the ith cluster consists of those observations assigned to the ith component (i = 1,..., g). The focus is on the use of mixtures of normal components for the cluster analysis of data that can be regarded as being continuous. But attention is also given to the case of mixed data, where the observations consist of both continuous and discrete variables.
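The clustering recipe described above can be sketched in a few lines: fit a g-component normal mixture by EM, then assign each observation to the component with the highest posterior probability. The 1-D data, initialisation and iteration count below are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

# Minimal 1-D sketch of model-based clustering via a normal mixture.
def em_gmm_1d(x, g=2, n_iter=200):
    mu = np.quantile(x, np.linspace(0.25, 0.75, g))   # crude initialisation
    var = np.full(g, x.var())
    pi = np.full(g, 1.0 / g)
    for _ in range(n_iter):
        # E-step: posterior probability of each component for each point
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        post = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances
        nk = post.sum(axis=0)
        pi, mu = nk / len(x), (post * x[:, None]).sum(axis=0) / nk
        var = (post * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return mu, var, pi, post.argmax(axis=1)   # hard clustering via max posterior

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(6.0, 1.0, 300)])
mu, var, pi, labels = em_gmm_1d(x)
```

Handling the mixed continuous/discrete case mentioned in the abstract requires different component densities (e.g. location models), which this sketch does not attempt.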
Abstract:
In this work we discuss the effects of white and coloured noise perturbations on the parameters of a mathematical model of bacteriophage infection introduced by Beretta and Kuang in [Math. Biosc. 149 (1998) 57]. We numerically simulate the strong solutions of the resulting systems of stochastic ordinary differential equations (SDEs), with attention to the global error, by means of numerical methods of both Euler-Taylor expansion and stochastic Runge-Kutta type. (C) 2003 IMACS. Published by Elsevier B.V. All rights reserved.
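The simplest Euler-Taylor scheme of the kind referred to above is Euler-Maruyama. The sketch below applies it to a scalar test SDE with a known exact solution, dX = mu*X dt + sigma*X dW, so the strong (pathwise) error can be measured; the bacteriophage system of the paper is replaced by this toy equation and all parameter values are illustrative.

```python
import numpy as np

# Euler-Maruyama on geometric Brownian motion, with the exact solution
# computed along the same Brownian path to measure the strong error.
def euler_maruyama(mu, sigma, x0, T, n, rng):
    dt = T / n
    dW = rng.normal(0.0, np.sqrt(dt), n)   # Brownian increments
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        x[k + 1] = x[k] + mu * x[k] * dt + sigma * x[k] * dW[k]
    return x, dW

rng = np.random.default_rng(0)
mu, sigma, x0, T, n = 0.5, 0.2, 1.0, 1.0, 10000
x, dW = euler_maruyama(mu, sigma, x0, T, n, rng)
t = np.linspace(0.0, T, n + 1)
W = np.concatenate([[0.0], np.cumsum(dW)])
exact = x0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * W)
strong_err = float(np.abs(x - exact).max())   # pathwise (strong) error
```

Euler-Maruyama has strong order 1/2; the stochastic Runge-Kutta methods mentioned in the abstract trade more work per step for a higher strong order.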
Abstract:
Experimental aerodynamic studies of the flows around new aerocapture spacecraft configurations are currently being conducted in the superorbital expansion tubes at The University of Queensland. Short-duration flows at speeds of 10-13 km/s are produced in the expansion tube facility and are then applied to the model spacecraft. Although high-temperature effects, such as molecular dissociation, have long been part of the computational modelling of expansion tube flows for speeds below 10 km/s, radiation may now be a significant mechanism of energy transfer within the shock layer on the model. This paper studies the coupling of radiation energy transport for an optically thin gas to the flow dynamics in order to obtain accurate predictions of thermal loads on the spacecraft. The results show that the effect of radiation on the flowfields of subscale models for expansion tube experiments can be assessed by measurements of total heat transfer and radiative heat transfer.
Abstract:
Granulation is one of the fundamental operations in particulate processing and has a very ancient history and widespread use. Much fundamental particle science has occurred in the last two decades to help understand the underlying phenomena. Yet, until recently the development of granulation systems was mostly based on popular practice. The use of process systems approaches to the integrated understanding of these operations is providing improved insight into the complex nature of the processes. Improved mathematical representations, new solution techniques and the application of the models to industrial processes are yielding better designs, improved optimisation and tighter control of these systems. The parallel development of advanced instrumentation and the use of inferential approaches provide real-time access to system parameters necessary for improvements in operation. The use of advanced models to help develop real-time plant diagnostic systems provides further evidence of the utility of process system approaches to granulation processes. This paper highlights some of those aspects of granulation. (c) 2005 Elsevier Ltd. All rights reserved.
Abstract:
Solids concentration and particle size distribution change gradually in the vertical dimension of industrial flotation cells, subject primarily to the flotation cell size and design and the cell operating conditions. As entrainment is a two-step process and involves only the suspended solids in the top pulp region near the pulp-froth interface, the solids suspension characteristics have a significant impact on the overall entrainment. In this paper, a classification function is proposed to describe the state of solids suspension in flotation cells, similar to the definition of the degree of entrainment for classification in the froth phase found in the literature. A mathematical model for solids suspension is also developed, in which the classification function is expressed as an exponential function of particle size. Experimental data collected from three different Outokumpu tank flotation cells in three different concentrators are well fitted by the proposed exponential model. Under the prevailing experimental conditions, it was found that the solids content in the top region was relatively independent of cell operating conditions such as froth height and air rate, but dependent on the cell size. Moreover, the results obtained from the total solids tend to be similar to those from a particular gangue mineral and hence may be applied to all minerals in entrainment calculations. (C) 2004 Elsevier Ltd. All rights reserved.
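An exponential classification function of the kind described above is straightforward to fit. The sketch below assumes the form C(d) = exp(-d/d0) and recovers d0 by log-linear least squares; the functional form follows the abstract, but the particle sizes, the value of d0 and the noise level are invented for illustration.

```python
import numpy as np

# Fit C(d) = exp(-d / d0) to (size, suspension-degree) data.
def fit_exponential_classification(d, C):
    # log C = -d / d0, so the slope of log C against d is -1/d0
    slope = np.polyfit(d, np.log(C), 1)[0]
    return -1.0 / slope

d = np.array([10.0, 20.0, 40.0, 75.0, 106.0, 150.0])   # particle size, um
rng = np.random.default_rng(0)
d0_true = 60.0
C = np.exp(-d / d0_true) * np.exp(rng.normal(0.0, 0.02, d.size))  # synthetic data
d0_hat = fit_exponential_classification(d, C)
```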
Abstract:
Functional-structural plant models that include detailed mechanistic representation of underlying physiological processes can be expensive to construct and the resulting models can also be extremely complicated. On the other hand, purely empirical models are not able to simulate plant adaptability and response to different conditions. In this paper, we present an intermediate approach to modelling plant function that can simulate plant response without requiring detailed knowledge of underlying physiology. Plant function is modelled using a 'canonical' modelling approach, which uses compartment models with flux functions of a standard mathematical form, while plant structure is modelled using L-systems. Two modelling examples are used to demonstrate that canonical modelling can be used in conjunction with L-systems to create functional-structural plant models where function is represented either in an accurate and descriptive way, or in a more mechanistic and explanatory way. We conclude that canonical modelling provides a useful, flexible and relatively simple approach to modelling plant function at an intermediate level of abstraction.
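A minimal illustration of the L-system formalism used for the structural part of such models is shown below. The rewriting rule is a textbook bracketed-branching example, not the plant model from the paper; coupling to a canonical compartment model would additionally drive when and where the rule is applied.

```python
# An L-system rewrites every symbol of the current string in parallel
# according to a set of production rules.
def lsystem(axiom, rules, n):
    s = axiom
    for _ in range(n):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# 'A' is an apex; '[' and ']' bracket a lateral branch; '+' is a turn
rules = {"A": "A[+A]A"}
derivation = lsystem("A", rules, 2)   # two derivation steps
```

Interpreting the bracketed string with a 3-D turtle yields the branching geometry; the bracket symbols push and pop the turtle state.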
Abstract:
This paper presents a new method for producing a functional-structural plant model that simulates response to different growth conditions, yet does not require detailed knowledge of underlying physiology. The example used to present this method is the modelling of the mountain birch tree. This new functional-structural modelling approach is based on linking an L-system representation of the dynamic structure of the plant with a canonical mathematical model of plant function. Growth indicated by the canonical model is allocated to the structural model according to probabilistic growth rules, such as rules for the placement and length of new shoots, which were derived from an analysis of architectural data. The main advantage of the approach is that it is relatively simple compared to the prevalent process-based functional-structural plant models and does not require a detailed understanding of underlying physiological processes, yet it is able to capture important aspects of plant function and adaptability, unlike simple empirical models. This approach, combining canonical modelling, architectural analysis and L-systems, thus fills the important role of providing an intermediate level of abstraction between the two extremes of deeply mechanistic process-based modelling and purely empirical modelling. We also investigated the relative importance of various aspects of this integrated modelling approach by analysing the sensitivity of the standard birch model to a number of variations in its parameters, functions and algorithms. The results show that using light as the sole factor determining the structural location of new growth gives satisfactory results. Including the influence of additional regulating factors made little difference to global characteristics of the emergent architecture. Changing the form of the probability functions and using alternative methods for choosing the sites of new growth also had little effect. (c) 2004 Elsevier B.V. All rights reserved.
Abstract:
Birds have four spectrally distinct types of single cones that they use for colour vision. It is often desirable to be able to model the spectral sensitivities of the different cone types, which vary considerably between species. However, although there are several mathematical models available for describing the spectral absorption of visual pigments, there is no model describing the spectral absorption of the coloured oil droplets found in three of the four single cone types. In this paper, we describe such a model and illustrate its use in estimating the spectral sensitivities of single cones. Furthermore, we show that the spectral locations of the wavelengths of maximum absorbance (lambda(max)) of the short- (SWS), medium- (MWS) and long- (LWS) wavelength-sensitive visual pigments and the cut-off wavelengths (lambda(cut)) of their respective C-, Y- and R-type oil droplets can be predicted from the lambda(max) of the ultraviolet- (UVS)/violet- (VS) sensitive visual pigment.
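The basic idea, that a cone's sensitivity is the product of the visual pigment's absorptance and the oil droplet's long-pass transmittance, can be sketched as follows. The Gaussian pigment curve, the logistic cut-off filter and every parameter value below are invented stand-ins, not the templates derived in the paper.

```python
import numpy as np

wl = np.arange(300, 701, dtype=float)            # wavelength, nm

def droplet_transmittance(wl, lam_cut, slope=0.05):
    # Hypothetical smooth long-pass filter rising around lam_cut
    return 1.0 / (1.0 + np.exp(-slope * (wl - lam_cut)))

def pigment_absorptance(wl, lam_max, width=40.0):
    # Gaussian stand-in for a visual-pigment absorbance template
    return np.exp(-0.5 * ((wl - lam_max) / width) ** 2)

# e.g. an LWS-like pigment screened by an R-type-like droplet
sensitivity = pigment_absorptance(wl, 565.0) * droplet_transmittance(wl, 515.0)
peak = float(wl[np.argmax(sensitivity)])
```

Even in this caricature, the droplet narrows the sensitivity curve and shifts its peak to longer wavelengths than the pigment's lambda(max), which is the qualitative effect the paper's model quantifies.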
Abstract:
In cell lifespan studies the exponential nature of cell survival curves is often interpreted as showing that the rate of death is independent of the age of the cells within the population. Here we present an alternative model where cells that die are replaced, and the age and lifespan of the population pool are monitored until a steady state is reached. In our model, newly generated individual cells are given a determined lifespan drawn from a number of known distributions, including the lognormal, which is frequently found in nature. For lognormal lifespans, the analytic steady-state survival curve obtained can be well fitted by a single or double exponential, depending on the mean and standard deviation. Thus, experimental evidence for exponential lifespans of one and/or two populations cannot be taken as definitive evidence for time and age independence of cell survival. A related model for a dividing population in steady state is also developed. We propose that the common adoption of age-independent, constant rates of change in biological modelling may be responsible for significant errors, both of interpretation and of mathematical deduction. We suggest that additional mathematical and experimental methods must be used to resolve the relationship between time and behavioural changes by cells that are predominantly unsynchronized.
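The replacement model described above can be simulated directly: each independent "slot" holds a cell with a lognormally distributed lifespan and is refilled at death, and after a long burn-in the residual lifetimes of the current pool are recorded. Renewal theory predicts their mean is E[L^2]/(2 E[L]); all parameters below are illustrative, and the paper's analytic survival-curve treatment goes well beyond this check.

```python
import numpy as np

def residual_lifetimes(n_slots=10000, t_obs=20.0, m=0.0, s=0.5, seed=0):
    rng = np.random.default_rng(seed)
    residuals = np.empty(n_slots)
    for i in range(n_slots):
        t = 0.0
        while True:
            life = rng.lognormal(m, s)
            if t + life > t_obs:               # cell alive at observation time
                residuals[i] = t + life - t_obs
                break
            t += life                          # cell died and was replaced
    return residuals

res = residual_lifetimes()
m, s = 0.0, 0.5
mean_L = np.exp(m + s**2 / 2)                  # E[L] for the lognormal
mean_L2 = np.exp(2 * m + 2 * s**2)             # E[L^2]
theory = mean_L2 / (2 * mean_L)                # steady-state mean residual life
```

The empirical survival curve of `res` is what the abstract notes can masquerade as a single or double exponential despite the underlying lifespans being sharply age-dependent.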
Abstract:
An innovative method for modelling biological processes under anaerobic conditions is presented and discussed. The method is based on titrimetric and off-gas measurements. Titrimetric data is recorded as the addition rate of hydroxyl ions or protons that is required to maintain pH in a bioreactor at a constant level. An off-gas analysis arrangement measures, among other things, the transfer rate of carbon dioxide. The integration of these signals results in a continuous signal which is solely related to the biological reactions. When coupled with a mathematical model of the biological reactions, the signal allows a detailed characterisation of these reactions, which would otherwise be difficult to achieve. Two applications of the method to the enhanced biological phosphorus removal processes are presented and discussed to demonstrate the principle and effectiveness of the method.
Abstract:
Count data with excess zeros relative to a Poisson distribution are common in many biomedical applications. A popular approach to the analysis of such data is to use a zero-inflated Poisson (ZIP) regression model. Often, because of the hierarchical study design or the data collection procedure, zero-inflation and lack of independence may occur simultaneously, which renders the standard ZIP model inadequate. To account for the preponderance of zero counts and the inherent correlation of observations, a class of multi-level ZIP regression models with random effects is presented. Model fitting is facilitated using an expectation-maximization algorithm, whereas variance components are estimated via residual maximum likelihood estimating equations. A score test for zero-inflation is also presented. The multi-level ZIP model is then generalized to cope with a more complex correlation structure. Application to the analysis of correlated count data from a longitudinal infant feeding study illustrates the usefulness of the approach.
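A hedged single-level sketch of the ZIP mechanism: counts are zero with probability pi and Poisson(lam) otherwise, and the two parameters are recovered here with simple method-of-moments estimators. The paper's multi-level model instead uses EM and residual maximum likelihood with random effects; all values below are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)
pi_true, lam_true, n = 0.3, 2.5, 50000
structural_zero = rng.random(n) < pi_true          # the zero-inflation part
y = np.where(structural_zero, 0, rng.poisson(lam_true, n))

m, s2 = y.mean(), y.var()
# For ZIP: E[Y] = (1-pi)*lam and E[Y^2] = (1-pi)*(lam + lam^2), so
lam_hat = (s2 + m**2 - m) / m      # moment estimator of the Poisson mean
pi_hat = 1.0 - m / lam_hat         # moment estimator of the zero-inflation
```

These closed-form estimators are only a sanity check; with covariates, the regression coefficients for both the Poisson mean and the inflation probability are what the EM algorithm in the paper estimates.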