146 results for computational modelling
Abstract:
Carbon monoxide, the chief killer in fires, and other species are modelled for a series of enclosure fires. The conditions emulate building fires where CO is formed in the rich, turbulent, nonpremixed flame and is transported frozen to lean mixtures by the ceiling jet which is cooled by radiation and dilution. Conditional moment closure modelling is used and computational domain minimisation criteria are developed which reduce the computational cost of this method. The predictions give good agreement for CO and other species in the lean, quenched-gas stream, holding promise that this method may provide a practical means of modelling real, three-dimensional fire situations. (c) 2005 The Combustion Institute. Published by Elsevier Inc. All rights reserved.
Abstract:
Ab initio calculations have been performed to determine the energetics of oxygen atoms adsorbed onto graphene planes and the possible reaction path extracting carbon atoms in the form of carbon monoxide. From the energetics it is confirmed that this reaction path will not significantly contribute to the gasification of well ordered carbonaceous chars. Modelling results which explore this limit are presented. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
Dynamic spatial analysis addresses computational aspects of space–time processing. This paper describes the development of a spatial analysis tool and modelling framework that together offer a solution for simulating landscape processes. A better approach to integrating landscape spatial analysis with Geographical Information Systems is advocated in this paper. Enhancements include special spatial operators and map algebra language constructs to handle dispersal and advective flows over landscape surfaces. These functional components of landscape modelling are developed in a modular way and are linked together in a modelling framework that performs dynamic simulation. The concepts and modelling framework are demonstrated using a hydrological modelling example. The approach provides a modelling environment for scientists and land resource managers to write and to visualize spatial process models with ease.
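A map-algebra dispersal operator of the kind described can be illustrated with a minimal sketch. The 4-neighbour kernel, the `disperse` name, and the uniform dispersal rate below are illustrative assumptions, not the paper's actual operators:

```python
def disperse(grid, rate=0.25):
    """One map-algebra style dispersal step on a 2-D grid: each cell sends
    a fraction `rate` of its value, split equally four ways, to its
    orthogonal neighbours. Edge cells disperse only to in-grid neighbours,
    so total mass is conserved."""
    rows, cols = len(grid), len(grid[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            nbrs = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            nbrs = [(i, j) for i, j in nbrs if 0 <= i < rows and 0 <= j < cols]
            share = rate * grid[r][c] / 4.0          # per-neighbour outflow
            out[r][c] += grid[r][c] - share * len(nbrs)
            for i, j in nbrs:
                out[i][j] += share
    return out
```

Applied repeatedly, this spreads a point source outward; an advective flow operator would be built the same way but with direction-weighted shares derived from a surface gradient.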
Abstract:
Cluster analysis via a finite mixture model approach is considered. With this approach to clustering, the data can be partitioned into a specified number of clusters g by first fitting a mixture model with g components. An outright clustering of the data is then obtained by assigning an observation to the component to which it has the highest estimated posterior probability of belonging; that is, the ith cluster consists of those observations assigned to the ith component (i = 1,..., g). The focus is on the use of mixtures of normal components for the cluster analysis of data that can be regarded as being continuous. But attention is also given to the case of mixed data, where the observations consist of both continuous and discrete variables.
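The posterior-assignment rule described above can be made concrete with a small sketch. The EM routine below is an illustrative, stdlib-only 1-D normal-mixture implementation (the names `fit_gmm_1d` and `outright_clustering` are mine), not the software behind the paper:

```python
import math

def fit_gmm_1d(x, g=2, iters=50):
    """Fit a g-component 1-D normal mixture by EM; return the weights,
    means, variances and the n-by-g posterior probabilities."""
    n = len(x)
    lo, hi = min(x), max(x)
    mu = [lo + (k + 0.5) * (hi - lo) / g for k in range(g)]  # spread initial means
    mean = sum(x) / n
    var = [sum((xi - mean) ** 2 for xi in x) / n] * g
    w = [1.0 / g] * g
    post = [[1.0 / g] * g for _ in range(n)]
    for _ in range(iters):
        # E-step: posterior probability that each point came from each component
        for i, xi in enumerate(x):
            dens = [w[k] / math.sqrt(2 * math.pi * var[k])
                    * math.exp(-(xi - mu[k]) ** 2 / (2 * var[k])) for k in range(g)]
            s = sum(dens)
            post[i] = [d / s for d in dens]
        # M-step: re-estimate weights, means and variances from the posteriors
        for k in range(g):
            nk = sum(post[i][k] for i in range(n))
            w[k] = nk / n
            mu[k] = sum(post[i][k] * x[i] for i in range(n)) / nk
            var[k] = sum(post[i][k] * (x[i] - mu[k]) ** 2 for i in range(n)) / nk + 1e-12
    return w, mu, var, post

def outright_clustering(post):
    """Outright clustering: assign each observation to the component
    with the highest estimated posterior probability."""
    return [max(range(len(p)), key=lambda k: p[k]) for p in post]
```

On two well-separated groups the fitted means land near the group centres and the posterior assignment recovers the partition.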
Abstract:
Modelling and optimization of the power draw of large SAG/AG mills is important due to the large power draw which modern mills require (5-10 MW). The cost of grinding is the single biggest cost within the entire process of mineral extraction. Traditionally, modelling of the mill power draw has been done using empirical models. Although these models are reliable, they cannot model mills and operating conditions which are not within the model database boundaries. Also, due to their static nature, the impact of changing conditions within the mill on the power draw cannot be determined using such models. Despite advances in computing power, discrete element method (DEM) modelling of large mills with many thousands of particles can be a time-consuming task. The speed of computation is determined principally by two parameters: the number of particles involved and the material properties. The computational time step is determined by the size of the smallest particle present in the model and the material properties (stiffness). In the case of small particles the computational time step will be short, whilst in the case of large particles it will be longer. Hence, from the point of view of the time required for modelling (which usually corresponds to the time required for 3-4 mill revolutions), it is advantageous that the smallest particles in the model are not made unnecessarily small. The objective of this work is to compare the net power draw of the mill whose charge is characterised by different size distributions, while preserving constant charge mass and mill speed. (C) 2004 Elsevier Ltd. All rights reserved.
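The dependence of the DEM time step on the smallest particle and the stiffness can be sketched with the common linear-spring estimate, dt = safety × sqrt(m_min / k). The safety factor and function name below are illustrative assumptions (Rayleigh and Hertzian variants differ in the prefactor), not the paper's formulation:

```python
import math

def dem_time_step(radii_m, density_kg_m3, stiffness_N_m, safety=0.2):
    """Critical explicit-DEM time step from the smallest particle in the
    charge: dt = safety * sqrt(m_min / k), with m_min the mass of the
    smallest (spherical) particle and k the contact stiffness."""
    r_min = min(radii_m)
    m_min = density_kg_m3 * (4.0 / 3.0) * math.pi * r_min ** 3
    return safety * math.sqrt(m_min / stiffness_N_m)
```

Because m scales with r^3, dt scales with r^1.5: halving the smallest particle radius cuts the admissible time step by roughly a factor of 2.8, which is why over-resolving the fines is so costly.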
Abstract:
Simplicity in design and minimal floor space requirements render the hydrocyclone the preferred classifier in mineral processing plants. Empirical models have been developed for design and process optimisation but due to the complexity of the flow behaviour in the hydrocyclone these do not provide information on the internal separation mechanisms. To study the interaction of design variables, the flow behaviour needs to be considered, especially when modelling the new three-product cyclone. Computational fluid dynamics (CFD) was used to model the three-product cyclone, in particular the influence of the dual vortex finder arrangement on flow behaviour. From experimental work performed on the UG2 platinum ore, significant differences in the classification performance of the three-product cyclone were noticed with variations in the inner vortex finder length. Because of this, simulations were performed for a range of inner vortex finder lengths. Simulations were also conducted on a conventional hydrocyclone of the same size to enable a direct comparison of the flow behaviour between the two cyclone designs. Significantly, high velocities were observed for the three-product cyclone with an inner vortex finder extended deep into the conical section of the cyclone. CFD studies revealed that in the three-product cyclone, a cylindrical shaped air-core is observed similar to conventional hydrocyclones. A constant diameter air-core was observed throughout the inner vortex finder length, while no air-core was present in the annulus. (c) 2006 Elsevier Ltd. All rights reserved.
Abstract:
Count data with excess zeros relative to a Poisson distribution are common in many biomedical applications. A popular approach to the analysis of such data is to use a zero-inflated Poisson (ZIP) regression model. Often, because of the hierarchical study design or the data collection procedure, zero-inflation and lack of independence may occur simultaneously, which render the standard ZIP model inadequate. To account for the preponderance of zero counts and the inherent correlation of observations, a class of multi-level ZIP regression model with random effects is presented. Model fitting is facilitated using an expectation-maximization algorithm, whereas variance components are estimated via residual maximum likelihood estimating equations. A score test for zero-inflation is also presented. The multi-level ZIP model is then generalized to cope with a more complex correlation structure. Application to the analysis of correlated count data from a longitudinal infant feeding study illustrates the usefulness of the approach.
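The ZIP distribution underlying the model can be sketched directly: a point mass of probability pi at zero mixed with a Poisson(lambda) component, so P(Y=0) = pi + (1-pi)e^(-lambda). The code below is the standard ZIP mass function and log-likelihood only, not the paper's multi-level estimator with random effects:

```python
import math

def zip_pmf(y, lam, pi):
    """Zero-inflated Poisson pmf: pi is the probability of a structural
    zero, lam the mean of the Poisson component."""
    pois = math.exp(-lam) * lam ** y / math.factorial(y)
    if y == 0:
        return pi + (1.0 - pi) * pois   # structural zeros inflate P(Y=0)
    return (1.0 - pi) * pois

def zip_loglik(counts, lam, pi):
    """Independence log-likelihood of observed counts under ZIP(lam, pi)."""
    return sum(math.log(zip_pmf(y, lam, pi)) for y in counts)
```

With pi = 0 the pmf reduces to a plain Poisson; maximising `zip_loglik` over (lam, pi) gives the standard (non-hierarchical) ZIP fit that the multi-level model extends.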
Abstract:
The planning and management of water resources in the Pioneer Valley, north-eastern Australia requires a tool for assessing the impact of groundwater and stream abstractions on water supply reliabilities and environmental flows in Sandy Creek (the main surface water system studied). Consequently, a fully coupled stream-aquifer model has been constructed using the code MODHMS, calibrated to near-stream observations of watertable behaviour and multiple components of gauged stream flow. This model has been tested using other methods of estimation, including stream depletion analysis and radon isotope tracer sampling. The coarseness of spatial discretisation, which is required for practical reasons of computational efficiency, limits the model's capacity to simulate small-scale processes (e.g., near-stream groundwater pumping, bank storage effects), and alternative approaches are required to complement the model's range of applicability. Model predictions of groundwater influx to Sandy Creek are compared with baseflow estimates from three different hydrograph separation techniques, which were found to be unable to reflect the dynamics of Sandy Creek stream-aquifer interactions. The model was also used to infer changes in the water balance of the system caused by historical land use change. This led to constraints on the recharge distribution which can be implemented to improve model calibration performance. (c) 2006 Elsevier B.V. All rights reserved.
Abstract:
First-year undergraduate engineering students' understanding of the units of factors and terms in first-order ordinary differential equations used in modelling contexts was investigated using diagnostic quiz questions. Few students appeared to realize that the units of each term in such equations must be the same, or if they did, nevertheless failed to apply that knowledge when needed. In addition, few students were able to determine the units of a proportionality factor in a simple equation. These results indicate that lecturers of modelling courses cannot take this foundational knowledge for granted and should explicitly include it in instruction.
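The unit-consistency rule the quiz probed can be checked mechanically. The sketch below represents units as exponent tuples over base dimensions and works through a Newton-cooling equation, dT/dt = -k(T - T_env); the representation and the example equation are my own illustration, not material from the study:

```python
# Units as exponent tuples over the base dimensions (kg, m, s, K).
KELVIN  = (0, 0, 0, 1)
SECOND  = (0, 0, 1, 0)
DIMLESS = (0, 0, 0, 0)

def mul(u, v):
    """Units of a product: exponents add."""
    return tuple(a + b for a, b in zip(u, v))

def div(u, v):
    """Units of a quotient: exponents subtract."""
    return tuple(a - b for a, b in zip(u, v))

def consistent(*terms):
    """Every term of an equation must carry the same units."""
    return all(t == terms[0] for t in terms)

# dT/dt = -k (T - T_env): the left side has units K/s, so matching
# units forces the proportionality factor k to carry units 1/s.
dT_dt   = div(KELVIN, SECOND)
k_units = div(DIMLESS, SECOND)       # 1/s, solved for by matching terms
rhs     = mul(k_units, KELVIN)
```

This is exactly the two skills the quiz tested: confirming that `dT_dt` and `rhs` agree, and solving for the units of the factor `k`.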
Abstract:
We demonstrate an end-to-end computational model of the HEG shock tunnel as a way to extract more precise test flow conditions and as a way of getting predictions of new operating conditions. For a selection of established operating conditions, the L1d program was used to simulate the one-dimensional gas-dynamic processes within the whole of the facility. The program reproduces the compression tube performance reliably and, with the inclusion of a loss factor near the upstream-end of the compression tube, it provides a good estimate of the equilibrium pressure in the shock-reflection region over the set of six standard operating conditions for HEG.
Abstract:
This paper provides a computational framework, based on Defeasible Logic, to capture some aspects of institutional agency. Our background is the Kanger-Lindahl-Pörn account of organised interaction, which describes this interaction within a multi-modal logical setting. This work focuses in particular on the notion of the counts-as link and on those of attempt and of personal and direct action to realise states of affairs. We show how standard Defeasible Logic can be extended to represent these concepts: the resulting system preserves some basic properties commonly attributed to them. In addition, the framework enjoys nice computational properties, as it turns out that the extension of any theory can be computed in time linear in the size of the theory itself.
Abstract:
A simplified model for anisotropic mantle convection based on a novel class of rheologies, originally developed for folding instabilities in multilayered rock (MÜHLHAUS et al., 2002), is extended through the introduction of a thermal anisotropy dependent on the local layering. To examine the effect of the thermal anisotropy on the evolution of mantle material, a parallel implementation of this model was undertaken using the Escript modelling toolkit and the Finley finite-element computational kernel (DAVIES et al., 2004). For the cases studied, there appears to be little if any effect. For comparative purposes, the effects of anisotropic shear viscosity and the introduced thermal anisotropy are also presented. These results contribute to the characterization of viscous anisotropic mantle convection subject to variation in thermal conductivities and shear viscosities.