100 results for Simulation and modelling
Abstract:
Crop modelling has evolved over the last 30 or so years in concert with advances in crop physiology, crop ecology and computing technology. Now that crop modelling has reached a respectable degree of acceptance, it is appropriate to review briefly the course of developments in the field and to project what the major contributions of crop modelling might be in the future. Two major opportunities are envisioned for increased modelling activity. One is a continuing central, heuristic role in supporting scientific investigation, facilitating decision making by crop managers, and aiding education. Heuristic activities will also extend to the broader system-level issues of environmental and ecological aspects of crop production. The second opportunity is as a prime contributor to understanding and advancing the genetic regulation of plant performance and plant improvement. Physiological dissection and modelling of traits provide an avenue by which crop modelling could contribute to enhancing the integration of molecular genetic technologies in crop improvement.
Abstract:
Dynamic spatial analysis addresses the computational aspects of space–time processing. This paper describes the development of a spatial analysis tool and modelling framework that together offer a solution for simulating landscape processes, and advocates a better approach to integrating landscape spatial analysis with Geographical Information Systems. Enhancements include special spatial operators and map algebra language constructs that handle dispersal and advective flows over landscape surfaces. These functional components of landscape modelling are developed in a modular way and linked together in a modelling framework that performs dynamic simulation. The concepts and modelling framework are demonstrated using a hydrological modelling example. The approach provides a modelling environment in which scientists and land resource managers can write and visualize spatial process models with ease.
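For a concrete sense of the kind of map-algebra dispersal operator described above, the following minimal Python sketch spreads a quantity over a raster surface one time step at a time. The function name, kernel weights and grid are illustrative assumptions, not the paper's actual language constructs.

```python
import numpy as np

def disperse(surface: np.ndarray, retain: float = 0.6) -> np.ndarray:
    """One map-algebra-style dispersal step on a raster surface.

    A fraction `retain` of each cell's load stays in place; the rest is
    split equally among the four orthogonal neighbours (edges reflect,
    so total mass is conserved).
    """
    padded = np.pad(surface, 1, mode="edge")
    share = (1.0 - retain) / 4.0
    neighbours = (padded[:-2, 1:-1] + padded[2:, 1:-1]
                  + padded[1:-1, :-2] + padded[1:-1, 2:])
    return retain * surface + share * neighbours

grid = np.zeros((5, 5))
grid[2, 2] = 100.0          # a point source
for _ in range(3):          # three dispersal time steps
    grid = disperse(grid)
print(grid.round(2))
```

In a full framework such an operator would be composed with advective-flow operators inside a simulation loop; here it simply demonstrates how a dispersal construct maps onto array operations.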
Abstract:
New tools derived from advances in molecular biology have not been widely adopted in plant breeding for complex traits because of the inability to connect information at the gene level to the phenotype in a manner that is useful for selection. In this study, we explored whether physiological dissection and integrative modelling of complex traits could link phenotype complexity to underlying genetic systems in a way that enhanced the power of molecular breeding strategies. A crop and breeding system simulation study on sorghum was used, involving variation in four key adaptive traits (phenology, osmotic adjustment, transpiration efficiency, and stay-green) and a broad range of production environments in north-eastern Australia. The full matrix of simulated phenotypes, which consisted of 547 location-season combinations and 4235 genotypic expression states, was analysed for genetic and environmental effects. The analysis was conducted in stages assuming gradually increased understanding of gene-to-phenotype relationships, such as would arise from physiological dissection and modelling. It was found that environmental characterisation and physiological knowledge helped to explain and unravel gene and environment context dependencies in the data. Based on the analyses of gene effects, a range of marker-assisted selection breeding strategies was simulated. It was shown that including knowledge resulting from trait physiology and modelling generated an enhanced rate of yield advance over cycles of selection. This occurred because the knowledge associated with component trait physiology, and its extrapolation to the target population of environments by modelling, removed confounding effects associated with environment and gene context dependencies for the markers used. Developing and implementing this gene-to-phenotype capability in crop improvement requires enhanced attention to phenotyping, ecophysiological modelling, and validation studies to test the stability of candidate genetic regions.
Abstract:
New tools derived from advances in molecular biology have not been widely adopted in plant breeding because of the inability to connect information at the gene level to the phenotype in a manner that is useful for selection. We explore whether a crop growth and development modelling framework can link phenotype complexity to underlying genetic systems in a way that strengthens molecular breeding strategies. We use gene-to-phenotype simulation studies on sorghum to consider the value to marker-assisted selection of intrinsically stable QTLs that might be generated by physiological dissection of complex traits. The consequences for grain yield of genetic variation in four key adaptive traits (phenology, osmotic adjustment, transpiration efficiency, and stay-green) were simulated for a diverse set of environments by placing the known extent of genetic variation in the context of the physiological determinants framework of a crop growth and development model. It was assumed that the three to five genes associated with each trait had two alleles per locus acting in an additive manner. The effects on average simulated yield, generated by differing combinations of positive alleles for the traits incorporated, varied with environment type. The full matrix of simulated phenotypes, which consisted of 547 location-season combinations and 4235 genotypic expression states, was analysed for genetic and environmental effects. The analysis was conducted in stages with gradually increased understanding of gene-to-phenotype relationships, such as would arise from physiological dissection and modelling. It was found that environmental characterisation and physiological knowledge helped to explain and unravel gene and environment context dependencies. We simulated a marker-assisted selection (MAS) breeding strategy based on the analyses of gene effects. When marker scores were allocated based on the contribution of gene effects to yield in a single environment, the rate of yield gain over all environments with breeding cycle diverged widely depending on the environment chosen for the QTL analysis. Knowledge resulting from trait physiology and modelling would overcome this dependency by identifying stable QTLs, and the improved predictive power would increase the utility of the QTLs in MAS. Developing and implementing this gene-to-phenotype capability in crop improvement requires enhanced attention to phenotyping, ecophysiological modelling, and validation studies to test the stability of candidate QTLs.
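As an illustration of the additive gene-to-phenotype assumption described above (three to five genes per trait, two alleles per locus, additive action), here is a minimal Python sketch. The trait names match the abstract, but the locus counts per trait, the trait ranges, and the equal effect sizes are illustrative assumptions, not the study's calibrated values.

```python
from itertools import product

# Hypothetical locus counts per trait (the study used three to five
# genes per trait; these particular numbers are assumptions).
TRAIT_LOCI = {"phenology": 4, "osmotic_adjustment": 3,
              "transpiration_efficiency": 3, "stay_green": 4}

def trait_value(alleles, low=0.0, high=1.0):
    """Map a tuple of 0/1 allele states to a trait value.

    Each '+' allele adds an equal increment: purely additive action,
    no dominance or epistasis at the trait level.
    """
    return low + (high - low) * sum(alleles) / len(alleles)

# Enumerate every genotypic expression state for one trait.
for genotype in product((0, 1), repeat=TRAIT_LOCI["stay_green"]):
    print(genotype, round(trait_value(genotype), 2))
```

In the full study, trait values like these feed into a crop growth model that converts them into simulated yield for each environment; the context dependencies arise from that nonlinear mapping, not from the additive genetics itself.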
Abstract:
The two-node tandem Jackson network serves as a convenient reference model for the analysis and testing of different methodologies and techniques in rare event simulation. In this paper we consider a new approach to efficiently estimate the probability that the content of the second buffer exceeds some high level L before it becomes empty, starting from a given state. The approach is based on a Markov additive process representation of the buffer processes, leading to an exponential change of measure to be used in an importance sampling procedure. Unlike changes of measure proposed and studied in the recent literature, the one derived here is a function of the content of the first buffer. We prove that when the first buffer is finite, this method yields asymptotically efficient simulation for any set of arrival and service rates. In fact, the relative error is bounded independently of the level L, a new result that has not been established for any other known method. When the first buffer is infinite, we propose a natural extension of the exponential change of measure for the finite buffer case. In this case, the relative error is shown to be bounded (independently of L) only when the second server is the bottleneck, a result that is known to hold for some other methods derived through large deviations analysis. When the first server is the bottleneck, experimental results using our method suggest that the relative error grows at most linearly in L.
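The state-dependent change of measure derived in the paper is specific to the tandem network, but the underlying mechanism of importance sampling with an exponential change of measure can be illustrated on the simpler single-queue analogue. The Python sketch below estimates the overflow probability of an M/M/1 queue by swapping the arrival and service rates (the classical tilt); it is not the paper's buffer-dependent measure, and all parameter values are illustrative.

```python
import random

def overflow_prob_is(lam=0.3, mu=1.0, level=20, n=100_000, seed=1):
    """Importance-sampling estimate of P(M/M/1 queue length hits `level`
    before 0, starting from 1), using the classical exponential change
    of measure: arrival and service rates are swapped, and every step
    carries a likelihood-ratio factor."""
    rng = random.Random(seed)
    p_tilt = mu / (lam + mu)              # tilted P(next event = arrival)
    lr_up, lr_down = lam / mu, mu / lam   # per-step likelihood ratios
    total = 0.0
    for _ in range(n):
        x, w = 1, 1.0
        while 0 < x < level:
            if rng.random() < p_tilt:
                x, w = x + 1, w * lr_up   # arrival under tilted measure
            else:
                x, w = x - 1, w * lr_down # departure under tilted measure
        if x == level:
            total += w
    return total / n

lam, mu, level = 0.3, 1.0, 20
exact = (mu / lam - 1.0) / ((mu / lam) ** level - 1.0)  # gambler's ruin
print(overflow_prob_is(lam, mu, level), "vs exact", exact)
```

Under the tilted measure the queue drifts upward, so overflow paths are common; the accumulated likelihood ratio rescales each one back to its true probability, which is what keeps the relative error under control.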
Abstract:
The splitting method is a simulation technique for the estimation of very small probabilities. In this technique, the sample paths are split into multiple copies, at various stages in the simulation. Of vital importance to the efficiency of the method is the Importance Function (IF). This function governs the placement of the thresholds or surfaces at which the paths are split. We derive a characterisation of the optimal IF and show that for multi-dimensional models the natural choice for the IF is usually not optimal. We also show how nearly optimal splitting surfaces can be derived or simulated using reverse time analysis. Our numerical experiments illustrate that by using the optimal IF, one can obtain a significant improvement in simulation efficiency.
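As a rough illustration of path splitting with the natural (identity) importance function, the following Python sketch estimates the probability that a biased random walk reaches a high level before hitting zero. The fixed-effort variant, the equally spaced thresholds and all parameters are illustrative choices, not the paper's optimal surfaces.

```python
import random

def splitting_estimate(p_up=0.3, level=12, n_levels=4, per_stage=200,
                       seed=2):
    """Multilevel splitting sketch: estimate P(walk hits `level` before 0,
    starting at 1) as a product of stage-wise conditional probabilities,
    resampling (splitting) the survivors at each threshold."""
    rng = random.Random(seed)
    thresholds = [level * (k + 1) // n_levels for k in range(n_levels)]
    states = [1] * per_stage
    prob = 1.0
    for thr in thresholds:
        hits = []
        for x in states:
            while 0 < x < thr:          # run until death (0) or threshold
                x += 1 if rng.random() < p_up else -1
            if x >= thr:
                hits.append(x)
        if not hits:
            return 0.0                   # all paths died this stage
        prob *= len(hits) / len(states)
        states = [rng.choice(hits) for _ in range(per_stage)]  # split
    return prob

p = 0.3
exact = ((1 - p) / p - 1) / (((1 - p) / p) ** 12 - 1)  # gambler's ruin
print(splitting_estimate(), "vs exact", exact)
```

Here the importance function is just the walk's position, which is adequate in one dimension; the paper's point is that in multi-dimensional models this natural choice is usually suboptimal and better splitting surfaces can be found via reverse time analysis.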
Abstract:
This article presents Monte Carlo techniques for estimating network reliability. For highly reliable networks, techniques based on graph evolution models perform very well, but they are known to have significant simulation cost. An existing hybrid scheme (based on partitioning the time space) is available to speed up the simulations; however, there are difficulties in optimizing the important parameter associated with this scheme. To overcome these difficulties, a new hybrid scheme (based on partitioning the edge set) is proposed in this article. The proposed scheme shows orders-of-magnitude performance improvements over existing techniques for certain classes of networks. It also provides reliability bounds with little overhead.
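For contrast with the specialised schemes discussed above, a crude Monte Carlo estimator of two-terminal network unreliability looks like the Python sketch below; the small bridge network and failure probability are illustrative. Its variance per unit work blows up as links become very reliable, which is precisely what motivates the evolution-model and hybrid techniques.

```python
import random

EDGES = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]  # a small bridge network

def connected(up_edges, source=0, sink=3):
    """Depth-first search over the operational edges only."""
    adj = {}
    for u, v in up_edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    seen, stack = {source}, [source]
    while stack:
        for w in adj.get(stack.pop(), []):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return sink in seen

def crude_unreliability(p_fail=0.05, n=200_000, seed=3):
    """Sample edge states independently and count disconnections."""
    rng = random.Random(seed)
    fails = sum(
        not connected([e for e in EDGES if rng.random() > p_fail])
        for _ in range(n))
    return fails / n

print(crude_unreliability())
```

With p_fail this small, most samples are connected and carry no information about the rare disconnection event, so the relative error of the crude estimator degrades exactly where reliability matters most.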
Abstract:
We present a novel method, called the transform likelihood ratio (TLR) method, for estimating rare event probabilities with heavy-tailed distributions. Via a simple transformation (change of variables) technique, the TLR method reduces the original rare event probability estimation with heavy-tailed distributions to an equivalent one with light-tailed distributions. Once this transformation has been established, we estimate the rare event probability via importance sampling, using either the classical exponential change of measure or the standard likelihood ratio change of measure. In the latter case the importance sampling distribution is chosen from the same parametric family as the transformed distribution. We estimate the optimal parameter vector of the importance sampling distribution using the cross-entropy method. We prove the polynomial complexity of the TLR method for certain heavy-tailed models and demonstrate numerically its high efficiency for various heavy-tailed models previously thought to be intractable. We also show that the TLR method can be viewed as a universal tool in the sense that it not only provides a unified view of heavy-tailed simulation but can also be used efficiently in simulation with light-tailed distributions. We present extensive simulation results that support the efficiency of the TLR method.
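A minimal sketch of the TLR idea, assuming Pareto-distributed inputs: writing a Pareto variable as X = exp(Y/alpha) with Y exponential turns the heavy-tailed problem into a light-tailed one in Y, which can then be importance-sampled with a standard tilt. In the Python sketch below the tilting parameter is picked by hand rather than tuned by the cross-entropy method, and all numerical values are illustrative.

```python
import math
import random

def tlr_tail_prob(alpha=1.5, gamma=1000.0, n_terms=5, theta=0.4,
                  n=200_000, seed=4):
    """TLR-style sketch: estimate P(sum of n_terms Pareto(alpha)
    variables > gamma).

    Each Pareto variable (P(X > x) = x**-alpha, x >= 1) is written as
    X = exp(Y / alpha) with Y ~ Exp(1), so the heavy-tailed problem
    becomes light-tailed in Y.  We then draw Y from Exp(theta) with
    theta < 1 to push the sum toward the rare region, correcting with
    the likelihood ratio."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        w, s = 1.0, 0.0
        for _ in range(n_terms):
            y = rng.expovariate(theta)                   # proposal Exp(theta)
            w *= math.exp(-(1.0 - theta) * y) / theta    # likelihood ratio
            s += math.exp(y / alpha)                     # back to Pareto scale
        if s > gamma:
            total += w
    return total / n

print(tlr_tail_prob())
```

The change of variables does the heavy lifting: once the randomness lives in an exponential variable, the classical light-tailed importance sampling machinery applies unchanged.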
Abstract:
A simulation-based modelling approach is used to examine the effects of stratified seed dispersal (representing the distribution of the majority of dispersal around the maternal parent together with rare long-distance dispersal) on the genetic structure of maternally inherited genomes and the colonization rate of expanding plant populations. The model is parameterized to approximate postglacial oak colonization in the UK, but is relevant to any plant population that exhibits stratified seed dispersal. The modelling approach considers the colonization of individual plants over a large area (three 500 km × 10 km rolled transects are used to approximate a 500 km × 300 km area). Our approach shows how the interaction of plant population dynamics with stratified dispersal can result in a spatially patchy haplotype structure. We show that while both colonization speeds and the resulting genetic structure are influenced by the characteristics of the dispersal kernel, they are robust to changes in the periodicity of long-distance events, provided the average number of long-distance dispersal events remains constant. We also consider the effects of additional physical and environmental mechanisms on plant colonization. Results show significant changes in genetic structure when the initial colonization of different haplotypes is staggered over time and when a barrier to colonization is introduced. Environmental influences on survivorship and fecundity affect both the genetic structure and the speed of colonization. The importance of these mechanisms in relation to the postglacial spread and genetic structure of oak in the UK is discussed.
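A stratified dispersal kernel of the kind used above can be sampled as a simple two-component mixture. The Python sketch below is a minimal illustration; the mixing probability and scale parameters are assumptions, not the oak-colonisation calibration used in the paper.

```python
import random

def dispersal_distance(rng, p_long=0.01, local_scale=50.0,
                       long_scale=5000.0):
    """Draw a seed dispersal distance (metres) from a stratified kernel:
    mostly short exponential dispersal around the maternal parent, with
    rare long-distance jumps."""
    if rng.random() < p_long:
        return rng.expovariate(1.0 / long_scale)   # rare long-distance event
    return rng.expovariate(1.0 / local_scale)      # routine local dispersal

rng = random.Random(5)
sample = sorted(dispersal_distance(rng) for _ in range(10_000))
print("median:", round(sample[5000]), "m;  99.9th pct:",
      round(sample[9990]), "m")
```

The heavy upper tail produced by the rare component is what lets isolated founder colonies establish far ahead of the main front, which in turn drives the patchy haplotype structure the paper reports.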
Abstract:
Process optimisation and optimal control of batch and continuous drum granulation processes are studied in this paper. The main focus of the research is: (i) construction of optimisation- and control-relevant population balance models through the incorporation of moisture content, drum rotation rate and bed depth into the coalescence kernels; (ii) investigation of optimal operational conditions using constrained optimisation techniques; (iii) development of optimal control algorithms based on discretized population balance equations; and (iv) comprehensive simulation studies on optimal control of both batch and continuous granulation processes. The objective of steady state optimisation is to minimise the recycle rate with minimum cost for continuous processes. The drum rotation rate, bed depth (material charge), and moisture content of solids are identified as practical decision (design) parameters for system optimisation. The objective for the optimal control of batch granulation processes is to maximize the mass of product-sized particles with minimum time and binder consumption. The objective for the optimal control of the continuous process is to drive the process from one steady state to another in minimum time with minimum binder consumption, which is also known as the state-driving problem. It has been known for some time that the binder spray rate is the most effective control (manipulative) variable. Although other possible manipulative variables, such as feed flow-rate and additional powder flow-rate, have been investigated in the complete research project, only the single-input problem with the binder spray rate as the manipulative variable is addressed in this paper to demonstrate the methodology. Simulation results show that the proposed models are suitable for control and optimisation studies, and that the optimisation algorithms connected with either steady state or dynamic models successfully determine optimal operational conditions and dynamic trajectories with good convergence properties.
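To make the discretized population balance concrete, here is a minimal Python sketch of a coalescence-only population balance stepped forward by Euler integration with a constant kernel. The paper's kernels depend on moisture content, drum rotation rate and bed depth, so the constant beta here is a stand-in assumption, and the truncated size grid leaks a little mass out the top.

```python
import numpy as np

def pbe_step(n, beta, dt):
    """One forward-Euler step of a discretised population balance with a
    constant coalescence kernel `beta`.  n[i] is the number density of
    granules made of i+1 primary particles."""
    m = len(n)
    birth = np.zeros(m)
    for i in range(m):
        for j in range(i):                    # sizes (j+1) + (i-j) = i+1
            birth[i] += 0.5 * beta * n[j] * n[i - j - 1]
    death = beta * n * n.sum()                # loss by coalescing with anything
    return n + dt * (birth - death)

n = np.zeros(30)
n[0] = 1.0                                    # start with monodisperse fines
for _ in range(200):
    n = pbe_step(n, beta=0.5, dt=0.05)
print("mean granule size:",
      round((np.arange(1, 31) * n).sum() / n.sum(), 2))
```

In an optimal control setting, a manipulated variable such as the binder spray rate would enter through beta at each step, and the optimiser would shape its trajectory to steer the size distribution toward product-sized granules.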
Abstract:
We model nongraphitized carbon black surfaces and investigate adsorption of argon on these surfaces using grand canonical Monte Carlo simulation. In this model, the nongraphitized surface is modeled as a stack of graphene layers with some carbon atoms of the top graphene layer randomly removed. The percentage of surface carbon atoms removed and the effective size of the defects created by the removal are the key parameters characterizing the nongraphitized surface. The patterns of the adsorption isotherm and isosteric heat are studied as a function of these surface parameters as well as pressure and temperature. It is shown that the adsorption isotherm exhibits steplike behavior on a perfect graphite surface and becomes smoother on nongraphitized surfaces. Regarding the isosteric heat versus loading, for graphitized thermal carbon black we observe an increase in heat at submonolayer coverage, then a sharp decline as the second layer begins to form, beyond which the heat increases slightly. By contrast, the isosteric heat versus loading for a highly nongraphitized surface shows a general decline with loading, which is due to the energetic heterogeneity of the surface. Only when the fluid-fluid interaction is greater than the surface energetic factor do we see a minimum followed by a maximum in the isosteric heat versus loading. These simulation results for the isosteric heat agree well with experimental results on the graphitization of Spheron 6 (Polley, M. H.; Schaeffer, W. D.; Smith, W. R. J. Phys. Chem. 1953, 57, 469; Beebe, R. A.; Young, D. M. J. Phys. Chem. 1954, 58, 93). Adsorption isotherms and isosteric heats in pores whose walls have defects are also studied by simulation, and the patterns of the isotherm and isosteric heat could be used to identify the fingerprint of the surface.
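A minimal sketch of the grand canonical Monte Carlo machinery, reduced to the ideal-gas limit where the insertion/deletion acceptance rules can be checked against the exact answer ⟨N⟩ = zV. The paper's simulations add argon-argon and argon-carbon interaction energies to these acceptance probabilities; everything numeric below is illustrative.

```python
import random

def gcmc_ideal_gas(z=2.0, volume=10.0, steps=200_000, seed=6):
    """Minimal grand canonical Monte Carlo: insertion/deletion moves for
    an ideal gas at activity z in volume V.  With no interactions the
    Metropolis acceptance rules reduce to min(1, zV/(N+1)) for insertion
    and min(1, N/(zV)) for deletion, giving a Poisson distribution with
    mean zV at equilibrium."""
    rng = random.Random(seed)
    n, total = 0, 0
    for _ in range(steps):
        if rng.random() < 0.5:                           # attempt insertion
            if rng.random() < min(1.0, z * volume / (n + 1)):
                n += 1
        elif n > 0 and rng.random() < min(1.0, n / (z * volume)):
            n -= 1                                       # attempt deletion
        total += n
    return total / steps

print(gcmc_ideal_gas())   # should approach z * volume = 20
```

In the full simulation, each trial insertion or deletion also evaluates the change in potential energy, and a Boltzmann factor exp(-dU/kT) multiplies these acceptance probabilities; the surface defects enter through the solid-fluid part of that energy.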
Abstract:
Many populations have a negative impact on their habitat, or upon other species in the environment, if their numbers become too large. For this reason they are often managed using some form of control. The objective is to keep numbers at a sustainable level, while ensuring survival of the population. Here we present models that allow population management programs to be assessed. Two common control regimes are considered: reduction and suppression. Under the suppression regime the population is maintained close to a particular threshold through near-continuous control, while under the reduction regime control begins once the population reaches a certain threshold and continues until it falls below a lower pre-defined level. We discuss how best to choose the control parameters, and we provide tools that allow population managers to select reduction levels and control rates. Additional tools are provided to assess the effect of different control regimes, in terms of population persistence and cost. In particular we consider the effects of each regime on the probability of extinction and the expected time to extinction, and compare the control methods in terms of the expected total cost of each regime over the life of the population. The usefulness of our results is illustrated with reference to the control of a koala population inhabiting Kangaroo Island, Australia.
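The reduction regime described above can be prototyped as a continuous-time birth-death simulation with hysteresis between the two thresholds. In the Python sketch below, all rates and thresholds are illustrative assumptions rather than the koala case-study values.

```python
import random

def simulate_reduction(birth=0.30, death=0.25, cull=0.20,
                       upper=120, lower=80, n0=100,
                       horizon=2_000.0, seed=7):
    """Birth-death sketch of the 'reduction' regime: per-capita culling
    at rate `cull` switches on when the population reaches `upper` and
    stays on until it falls below `lower`.  Returns the final population
    size and the total culling effort expended (a proxy for cost)."""
    rng = random.Random(seed)
    n, t, controlling, effort = n0, 0.0, False, 0.0
    while 0 < n and t < horizon:
        controlling = n >= upper or (controlling and n >= lower)
        extra = cull if controlling else 0.0
        rate = n * (birth + death + extra)       # total event rate
        dt = rng.expovariate(rate)               # time to next event
        t += dt
        if controlling:
            effort += n * cull * dt              # accumulated control cost
        # Next event is a birth with probability birth / (birth+death+extra).
        n += 1 if rng.random() < birth / (birth + death + extra) else -1
    return n, round(effort, 1)

print(simulate_reduction())
```

Running many replicates of such a simulation, and recording extinctions, extinction times and accumulated effort, is one way to compare regimes on the persistence and cost criteria the abstract describes; a suppression regime would simply hold the control on whenever the population sits near its single threshold.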
Abstract:
Methods of analysing and optimising flotation circuits have improved significantly over the last 15 years. Mineral flotation is now generally better understood through major advances in measuring and modelling the sub-processes within the flotation system. JKSimFloat V6 is a user-friendly Windows-based software package incorporating simulation, mass balancing, and, currently under development, liberation data viewing and model fitting. This paper presents an overview of the development of the program up to its current status, along with the plans established for its future. The application of the simulator at various operations is also discussed, with emphasis on the use of the program in improving flotation circuit performance.