985 results for hybrid modelling
Abstract:
Crop modelling has evolved over the last 30 or so years in concert with advances in crop physiology, crop ecology and computing technology. Having reached a respectable degree of acceptance, it is appropriate to review briefly the course of developments in crop modelling and to project what the major contributions of crop modelling might be in the future. Two major opportunities are envisioned for increased modelling activity. One is a continuing central, heuristic role in supporting scientific investigation, facilitating decision making by crop managers, and aiding education. Heuristic activities will also extend to broader system-level issues in the environmental and ecological aspects of crop production. The second opportunity is as a prime contributor to understanding and advancing the genetic regulation of plant performance and plant improvement. Physiological dissection and modelling of traits provide an avenue by which crop modelling could contribute to integrating molecular genetic technologies into crop improvement. Crown Copyright (C) 2002 Published by Elsevier Science B.V. All rights reserved.
Abstract:
We present the first mathematical model of the transmission dynamics of Schistosoma japonicum. The work extends Barbour's classic model of schistosome transmission. It allows for the mammalian host heterogeneity characteristic of the S. japonicum life cycle, and solves the problem of under-specification of Barbour's model by using Chinese data we are collecting on human-bovine transmission in the Poyang Lake area of Jiangxi Province, China. The model predicts that in the lake/marshland areas of the Yangtze River basin: (1) twice-yearly mass chemotherapy of humans is little better than once-yearly mass chemotherapy in reducing human prevalence. Depending on the heterogeneity of prevalence within the population, targeted treatment of high-prevalence groups, with lower overall coverage, can be more effective than mass treatment with higher overall coverage. Treatment confers a short-term benefit only, with prevalence rising to endemic levels once chemotherapy programs are stopped; (2) depending on the relative contributions of bovines and humans, bovine treatment can benefit humans almost as much as human treatment. Like human treatment, bovine treatment confers a short-term benefit. A combination of human and bovine treatment will dramatically reduce human prevalence and maintain the reduction for a longer period than treatment of a single host, although human prevalence rises once treatment ceases; (3) assuming 75% coverage of bovines, a bovine vaccine which acts on worm fecundity must have about 75% efficacy to reduce the reproduction rate below one and ensure mid-term reduction and long-term elimination of the parasite.
Such a vaccination program should be accompanied by an initial period of human treatment to instigate a short-term reduction in prevalence, following which the reduction is enhanced by vaccine effects; (4) if the bovine vaccine is only 45% efficacious (the level of current prototype vaccines) it will lower the endemic prevalence, but will not result in elimination. If it is accompanied by an initial period of human treatment and by a 45% improvement in human sanitation or a 30% reduction in contaminated water contact by humans, elimination is then possible. (C) 2002 Elsevier Science B.V. All rights reserved.
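As a rough illustration of the Barbour-style two-host structure this abstract describes, the sketch below couples human and bovine prevalences to a snail reservoir and applies a one-off mass-treatment pulse. All parameter values are made up for illustration; they are not the fitted Poyang Lake values, and the real model is considerably richer.

```python
import numpy as np

def simulate(T=2000.0, dt=0.1, treat_at=800.0, treat_frac=0.8):
    # Barbour-style two-host prevalence model (illustrative parameters only).
    lam = np.array([0.02, 0.03])   # snail-to-host transmission (human, bovine)
    g = np.array([0.005, 0.01])    # loss of infection in each host
    b = np.array([0.01, 0.04])     # host-to-snail transmission
    mu = 0.02                      # infected-snail loss rate
    y = np.array([0.3, 0.3])       # host prevalences
    z = 0.05                       # snail prevalence
    hist = []
    for step in range(int(T / dt)):
        t = step * dt
        if abs(t - treat_at) < dt / 2:
            y = y * (1 - treat_frac)       # one-off mass chemotherapy
        dy = lam * z * (1 - y) - g * y     # host infection dynamics
        dz = b.dot(y) * (1 - z) - mu * z   # snail infection dynamics
        y = y + dt * dy
        z = z + dt * dz
        hist.append((t, y[0]))
    return hist

hist = simulate()
pre = [p for t, p in hist if 700 < t < 800]    # endemic level before treatment
post = [p for t, p in hist if t > 1900]        # long after treatment stops
```

Consistent with prediction (1), human prevalence in this toy version returns to its endemic level once the treatment pulse has passed.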
Abstract:
Activated sludge flocculation was modelled using population balances. The model followed the dynamics of activated sludge flocculation, providing a good approximation of the change in mean floc size with time. Increasing the average velocity gradient decreased the final floc size. The breakage rate coefficient and collision efficiency also varied with the average velocity gradient. A power law relationship was found for the increase in breakage rate coefficient with increasing average velocity gradient. Further investigation will be conducted to determine the relationship between the collision efficiency and particle size to provide a better approximation of dynamic changes in the floc size distribution during flocculation. (C) 2002 Published by Elsevier Science B.V.
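The power-law relationship reported here between breakage rate coefficient and average velocity gradient can be recovered by linear regression in log-log space. The numbers below are synthetic stand-ins, not the study's measurements:

```python
import numpy as np

# Hypothetical (G, breakage-rate-coefficient) pairs; the abstract reports a
# power law k_b = a * G**n but not these specific values.
G = np.array([20.0, 50.0, 100.0, 200.0, 400.0])   # average velocity gradient, 1/s
kb = 1.5e-4 * G**0.8                               # synthetic, noise-free data

# Fit k_b = a * G**n by straight-line regression in log-log space.
n, log_a = np.polyfit(np.log(G), np.log(kb), 1)
a = np.exp(log_a)
```

With real data the fitted exponent `n` characterises how sensitively floc breakage responds to mixing intensity.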
Abstract:
A technique based on laser light diffraction is shown to be successful in collecting on-line experimental data. Time series of floc size distributions (FSD) under different shear rates (G) and calcium additions were collected. The steady-state mass mean diameter decreased with increasing shear rate G and increased when calcium additions exceeded 8 mg/l. A so-called population balance model (PBM) was used to describe the experimental data. This kind of model describes both aggregation and breakage through birth and death terms. A discretised PBM was used since analytical solutions of the integro-partial differential equations do not exist. Despite the complexity of the model, only two parameters need to be estimated: the aggregation rate and the breakage rate. The model seems, however, to lack flexibility, and the description of the FSD in time is not accurate.
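A minimal sketch of the discretised population balance idea, with birth and death terms for both aggregation and breakage. The constant aggregation kernel and halving breakage rule are illustrative simplifications; the study estimates its two rate parameters from the measured FSD time series.

```python
import numpy as np

def pbm_step(N, dt, k_agg, k_br):
    """One explicit Euler step of a discretised population balance.
    N[i] = number of flocs containing i+1 primary particles.
    Constant aggregation kernel, binary breakage into (near-)halves;
    mass in aggregates beyond the largest class is lost (finite grid)."""
    n = len(N)
    dN = np.zeros(n)
    # aggregation: classes i and j combine (death of i and j, birth of i+j)
    for i in range(n):
        for j in range(n):
            rate = k_agg * N[i] * N[j]
            dN[i] -= rate                    # death term
            if i + j + 1 < n:
                dN[i + j + 1] += 0.5 * rate  # birth term; 0.5 avoids double counting
    # breakage: a floc of i+1 particles splits into two fragments
    for i in range(1, n):
        rate = k_br * N[i]
        dN[i] -= rate                        # death term
        half = (i + 1) // 2
        dN[half - 1] += rate                 # first fragment (half particles)
        dN[i - half] += rate                 # second fragment (i+1-half particles)
    return N + dt * dN
```

Both mechanisms conserve total primary-particle mass within the grid, which is a quick sanity check on any such discretisation.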
Abstract:
We focus on mixtures of factor analyzers from the perspective of a method for model-based density estimation from high-dimensional data, and hence for the clustering of such data. This approach enables a normal mixture model to be fitted to a sample of n data points of dimension p, where p is large relative to n. The number of free parameters is controlled through the dimension of the latent factor space. By working in this reduced space, the approach allows a model for each component-covariance matrix with complexity lying between that of the isotropic and full covariance structure models. We illustrate the use of mixtures of factor analyzers in a practical example that considers the clustering of cell lines on the basis of gene expressions from microarray experiments. (C) 2002 Elsevier Science B.V. All rights reserved.
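To make the parameter-count argument concrete: a q-factor analyzer models each component covariance as Lambda @ Lambda.T + Psi (Psi diagonal), which needs far fewer free parameters than a full covariance when p is large. An illustrative calculation (not from the paper):

```python
def cov_params(p, q):
    """Free parameters per component-covariance matrix of dimension p:
    full covariance, isotropic (spherical), and a q-factor analyzer."""
    full = p * (p + 1) // 2
    isotropic = 1
    # q*p loadings, minus q*(q-1)/2 for rotational invariance, plus p uniquenesses
    factor = p * q - q * (q - 1) // 2 + p
    return full, isotropic, factor

# e.g. microarray-scale dimensions: p = 1000 genes, q = 5 latent factors
full, iso, fa = cov_params(p=1000, q=5)
```

The factor-analytic structure sits between the two extremes, which is what makes fitting feasible when p exceeds n.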
Abstract:
We compare Bayesian methodology utilizing the freeware package BUGS (Bayesian Inference Using Gibbs Sampling) with the traditional structural equation modelling approach based on another freeware package, Mx. Dichotomous and ordinal (three-category) twin data were simulated according to different additive genetic and common environment models for phenotypic variation. Practical issues are discussed in using Gibbs sampling as implemented by BUGS to fit subject-specific Bayesian generalized linear models, where the components of variation may be estimated directly. The simulation study (based on 2000 twin pairs) indicated that there is a consistent advantage in using the Bayesian method to detect a correct model under certain specifications of additive genetic and common environmental effects. For binary data, both methods had difficulty detecting the correct model when the additive genetic effect was low (between 10 and 20%) or moderate (between 20 and 40%). Furthermore, neither method could adequately detect a correct model that included a modest common environmental effect (20%), even when the additive genetic effect was large (50%). Power was significantly improved with ordinal data for most scenarios, except for the case of low heritability under a true ACE model. We illustrate and compare both methods using data from 1239 twin pairs over the age of 50 years, who were registered with the Australian National Health and Medical Research Council Twin Registry (ATR) and presented symptoms associated with osteoarthritis occurring in joints of the hand.
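The core idea of estimating variance components directly by Gibbs sampling can be sketched without BUGS. The toy model below is a Gaussian random-intercept model for paired data (y_ij = mu + b_i + e_ij), not the ACE model itself, with standard conjugate conditionals and vague inverse-gamma priors:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate twin-pair style data: y_ij = mu + b_i + e_ij (toy example).
m, n = 500, 2
b_true = rng.normal(0.0, 1.0, m)                 # pair effects, variance 1
y = b_true[:, None] + rng.normal(0.0, 1.0, (m, n))

mu, sb2, se2 = 0.0, 1.0, 1.0
draws = []
for it in range(400):
    # pair effects b_i | rest: normal conditional
    prec = 1.0 / sb2 + n / se2
    mean = (y - mu).sum(axis=1) / se2 / prec
    b = rng.normal(mean, np.sqrt(1.0 / prec))
    # grand mean mu | rest
    mu = rng.normal((y - b[:, None]).mean(), np.sqrt(se2 / (m * n)))
    # variance components | rest: inverse-gamma conditionals
    sb2 = 1.0 / rng.gamma(0.01 + m / 2, 1.0 / (0.01 + (b ** 2).sum() / 2))
    resid = y - mu - b[:, None]
    se2 = 1.0 / rng.gamma(0.01 + m * n / 2, 1.0 / (0.01 + (resid ** 2).sum() / 2))
    if it >= 200:                                # discard burn-in
        draws.append((sb2, se2))

sb2_hat, se2_hat = np.mean(draws, axis=0)
```

Each variance component is sampled directly from its full conditional, which is the same mechanism BUGS automates for the richer twin models in the study.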
Abstract:
Conceptual modelling is an activity undertaken during information systems development work to build a representation of selected semantics about some real-world domain. Ontological theories have been developed to account for the structure and behavior of the real world in general. In this paper, I discuss why ontological theories can be used to inform conceptual modelling research, practice, and pedagogy. I provide examples from my research to illustrate how a particular ontological theory has enabled me to improve my understanding of certain conceptual modelling practices and grammars. I also describe how some colleagues and I have used this theory to generate several counter-intuitive, sometimes surprising predictions about widely advocated conceptual modelling practices - predictions that were subsequently supported in the empirical research we undertook. Finally, I discuss several possibilities and pitfalls I perceive to be associated with using ontological theories to underpin research on conceptual modelling.
Abstract:
This paper proposes a template for modelling complex datasets that integrates traditional statistical modelling approaches with more recent advances in statistics and modelling through an exploratory framework. Our approach builds on the well-known and long-standing traditional idea of 'good practice in statistics' by establishing a comprehensive framework for modelling that focuses on exploration, prediction, interpretation and reliability assessment, a relatively new idea that allows individual assessment of predictions. The integrated framework we present comprises two stages. The first involves the use of exploratory methods to help visually understand the data and identify a parsimonious set of explanatory variables. The second encompasses a two-step modelling process, where the use of non-parametric methods such as decision trees and generalized additive models is promoted to identify important variables and their modelling relationship with the response before a final predictive model is considered. We focus on fitting the predictive model using parametric, non-parametric and Bayesian approaches. This paper is motivated by a medical problem where interest focuses on developing a risk stratification system for morbidity of 1,710 cardiac patients given a suite of demographic, clinical and preoperative variables. Although the methods we use are applied specifically to this case study, these methods can be applied across any field, irrespective of the type of response.
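The two-stage pattern (exploratory screening, then a parsimonious final model) can be sketched very simply. The data, the correlation-based screen, and the least-squares final fit below are hypothetical simplifications of the framework, not the paper's actual procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a clinical dataset: 10 candidate predictors,
# only x0 and x3 actually drive the response (hypothetical example).
X = rng.normal(size=(400, 10))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(0.0, 0.5, 400)

# Stage 1 (exploration): screen variables by |correlation| with the response.
corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(10)])
keep = np.argsort(corr)[::-1][:2]          # parsimonious subset of predictors

# Stage 2 (prediction): fit the final parametric model on that subset.
A = np.column_stack([np.ones(len(y)), X[:, keep]])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
```

In the paper the screening step uses richer non-parametric tools (decision trees, generalized additive models), but the division of labour is the same: identify the important variables first, then commit to a final predictive model.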
Abstract:
Colour pattern variation is a striking and widespread phenomenon. Differential predation risk between individuals is often invoked to explain colour variation, but empirical support for this hypothesis is equivocal. We investigated differential conspicuousness and predation risk in two species of Australian rock dragons, Ctenophorus decresii and C. vadnappa. To humans, the coloration of males of these species varies between 'bright' and 'dull'. Visual modelling based on objective colour measurements and the spectral sensitivities of avian visual pigments showed that dragon colour variants are differentially conspicuous to the visual system of avian predators when viewed against the natural background. We conducted field experiments to test for differential predation risk, using plaster models of 'bright' and 'dull' males. 'Bright' models were attacked significantly more often than 'dull' models suggesting that differential conspicuousness translates to differential predation risk in the wild. We also examined the influence of natural geographical range on predation risk. Results from 22 localities suggest that predation rates vary according to whether predators are familiar with the prey species. This study is among the first to demonstrate both differential conspicuousness and differential predation risk in the wild using an experimental protocol. (C) 2003 Published by Elsevier Ltd on behalf of The Association for the Study of Animal Behaviour.
Abstract:
Crushing and grinding are the most energy intensive part of the mineral recovery process. A major part of rock size reduction occurs in tumbling mills. Empirical models for the power draw of tumbling mills do not consider the effect of lifters. Discrete element modelling was used to investigate the effect of lifter condition on the power draw of tumbling mills. Results obtained with the PFC3D code show that lifter condition has a significant influence on the power draw and on the mode of energy consumption in the mill. Relatively high lifters will consume less power than low lifters, under otherwise identical conditions. The fraction of the power that is consumed as friction increases as the height of the lifters decreases. This results in less power being available for the high-intensity comminution caused by impacts. The fraction of the power used to overcome frictional resistance is determined by the material's coefficient of friction. Based on the modelled results, it appears that the effective coefficient of friction for the in situ mill is close to 0.1. (C) 2003 Elsevier Science Ltd. All rights reserved.
Abstract:
The PFC3D (particle flow code) that models the movement and interaction of particles by DEM techniques was employed to simulate particle movement and to calculate the velocity and energy distribution of collisions in two types of impact crusher: the Canica vertical shaft crusher and the BJD horizontal shaft swing hammer mill. The distribution of collision energies was then converted into a product size distribution for a particular ore type using JKMRC impact breakage test data. Experimental data from the Canica VSI crusher treating quarry rock and the BJD hammer mill treating coal were used to verify the DEM simulation results. With the DEM procedures validated, a detailed simulation study was conducted to investigate the effects of machine design and operational conditions on the velocity and energy distributions of collisions inside the milling chamber and on particle breakage behaviour. (C) 2003 Elsevier Ltd. All rights reserved.
Abstract:
A new model to predict the extent of crushing around a blasthole is presented. The model is based on the back-analysis of a comprehensive experimental program that included the direct measurement of the zone of crushing from 92 blasting tests on concrete blocks using two commercial explosives. The concrete blocks varied from low to medium to high strength and measured 1.5 m in length, 1.0 m in width and 1.1 m in height. A dimensionless parameter called the crushing zone index (CZI) is introduced. This index measures the crushing potential of a charged blasthole and is a function of the borehole pressure, the unconfined compressive strength of the rock material, the dynamic Young's modulus and Poisson's ratio. It is shown that the radius of crushing is a function of the CZI and the blasthole radius. A good correlation between the new model and measured results was obtained. A number of previously proposed models could not approximate the conditions measured in the experimental work, and there are noted discrepancies between the different approaches reviewed, particularly for smaller diameter holes and low strength rock conditions. The new model has been verified with full scale tests reported in the literature. Results from this validation and model evaluations show its applicability to production blasting. (C) 2003 Elsevier Science Ltd. All rights reserved.
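A sketch of how a CZI-style relation might be evaluated in practice. The dimensionless grouping of the four inputs and the fitted constants `c1`, `c2` below are placeholders chosen for illustration; the paper's actual functional form and back-analysed coefficients should be taken from the publication itself.

```python
def crushing_radius(r0, Pb, sigma_c, Ed, nu_d, c1=0.8, c2=0.2):
    """Crushing-zone radius from a CZI-style relation (illustrative form).
    r0      : blasthole radius (m)
    Pb      : borehole pressure (Pa)
    sigma_c : unconfined compressive strength (Pa)
    Ed      : dynamic Young's modulus (Pa)
    nu_d    : dynamic Poisson's ratio
    c1, c2  : placeholder fitted constants, NOT the paper's values."""
    K = Ed / (1.0 + nu_d)               # dynamic stiffness term
    czi = Pb**3 / (K * sigma_c**2)      # dimensionless crushing zone index
    return c1 * r0 * czi**c2            # crushing radius scales with r0 and CZI

# Hypothetical inputs: 50 mm radius hole, 2 GPa borehole pressure,
# 50 MPa concrete, 30 GPa dynamic modulus, Poisson's ratio 0.25.
rc = crushing_radius(0.05, 2.0e9, 50.0e6, 30.0e9, 0.25)
```

Because the CZI is dimensionless, the predicted crushing radius scales linearly with blasthole radius, matching the abstract's statement that the radius of crushing depends on both the CZI and the hole radius.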
Abstract:
The power required to operate large mills is typically 5-10 MW. Hence, optimisation of power consumption will have a significant impact on overall economic performance and environmental impact. Power draw modelling results using the discrete element code PFC3D have been compared with results derived from the widely used empirical model of Morrell. This is achieved by calculating the power draw for a range of operating conditions at constant mill size and fill factor using the two modelling approaches. The discrete element modelling results show that, apart from density, selection of the appropriate material damping ratio is critical for the accuracy of modelling of the mill power draw. The relative insensitivity of the power draw to the material stiffness allows selection of moderate stiffness values, which result in acceptable computation time. The results obtained confirm that modelling the power draw for a vertical slice of the mill, of thickness 20% of the mill length, is a reliable substitute for modelling the full mill. The power draw predictions from PFC3D show good agreement with those obtained using the empirical model. Due to its inherent flexibility, power draw modelling using PFC3D appears to be a viable and attractive alternative to empirical models where the necessary code and computer power are available.
Abstract:
This article presents Monte Carlo techniques for estimating network reliability. For highly reliable networks, techniques based on graph evolution models provide very good performance. However, they are known to have significant simulation cost. An existing hybrid scheme (based on partitioning the time space) is available to speed up the simulations; however, there are difficulties with optimizing the important parameter associated with this scheme. To overcome these difficulties, a new hybrid scheme (based on partitioning the edge set) is proposed in this article. The proposed scheme shows orders-of-magnitude performance improvements over existing techniques for certain classes of networks. It also provides reliability bounds with little overhead.
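For context, the baseline that such hybrid schemes improve upon is the crude Monte Carlo estimator: sample edge failures, check connectivity, average. A minimal sketch (the bridge network used here is a standard textbook example, not from the article):

```python
import random

def crude_mc_reliability(nodes, edges, p_fail, s, t, trials=20000, seed=7):
    """Crude Monte Carlo estimate of two-terminal reliability: the
    probability that s and t remain connected when each edge fails
    independently with probability p_fail.  For highly reliable networks
    this naive estimator needs many trials, which is exactly the cost
    the article's hybrid schemes are designed to reduce."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        up = [e for e in edges if rng.random() > p_fail]  # surviving edges
        adj = {v: [] for v in nodes}
        for u, v in up:
            adj[u].append(v)
            adj[v].append(u)
        seen, stack = {s}, [s]                            # DFS from s
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        hits += t in seen
    return hits / trials

# classic 5-edge bridge network between nodes 1 and 4
nodes = [1, 2, 3, 4]
edges = [(1, 2), (1, 3), (2, 3), (2, 4), (3, 4)]
est = crude_mc_reliability(nodes, edges, p_fail=0.1, s=1, t=4)
```

For this network the exact two-terminal reliability can be computed by conditioning on the bridge edge, giving roughly 0.978 at a 0.1 failure probability, which the estimate should approach as trials grow.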