966 results for simulation models


Relevance: 60.00%

Abstract:

The use of maize simulation models to determine the optimum plant population for rainfed environments allows plant populations to be evaluated over multiple years and locations at a lower cost than traditional field experimentation. However, the APSIM maize model that has been used to conduct some of these 'virtual' experiments assumes that the maximum rate of soil water extraction by the crop root system is constant across plant populations. This untested assumption may cause grain yield to be overestimated at lower plant populations. A field experiment was conducted to determine whether maximum rates of water extraction vary with plant population, and the maximum rate of soil water extraction was estimated for three plant populations (2.4, 3.5 and 5.5 plants m⁻²) under water-limited conditions. Maximum soil water extraction rates in the field experiment decreased linearly with plant population, and no difference was detected between plant populations for the crop lower limit of soil water extraction. Re-analysis of previous maize simulation experiments demonstrated that the use of inappropriately high extraction-rate parameters at low plant populations inflated predictions of grain yield and could lead to erroneous plant-population recommendations. The results demonstrate the importance of validating crop simulation models across the range of intended treatments. (C) 2013 Elsevier B.V. All rights reserved.
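The finding can be sketched as a parameterisation rule: instead of holding the maximum extraction-rate parameter (analogous to APSIM's per-layer "kl") constant, let it decrease linearly below a reference population. This is a hypothetical illustration, not APSIM code; kl_ref, pop_ref and slope are made-up values, not the field estimates.

```python
# Hypothetical sketch: scale a maximum water-extraction-rate parameter
# linearly with plant population instead of keeping it constant.
# kl_ref, pop_ref and slope are illustrative, not field-derived.

def kl_for_population(plants_per_m2, kl_ref=0.08, pop_ref=5.5, slope=0.01):
    """Reduce the extraction-rate parameter linearly below a reference
    population; never let it go negative."""
    return max(kl_ref - slope * (pop_ref - plants_per_m2), 0.0)

# The three populations tested in the field experiment:
for pop in (2.4, 3.5, 5.5):
    print(pop, round(kl_for_population(pop), 4))
```

Using the constant high-population value for all populations would overstate extraction (and hence yield) at 2.4 plants m⁻², which is the bias the re-analysis identified.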

Relevance: 60.00%

Abstract:

Incursions of plant pests and diseases pose serious threats to food security, agricultural productivity and the natural environment. One of the challenges in confidently delimiting and eradicating incursions is how to choose from an arsenal of surveillance and quarantine approaches in order to best control multiple dispersal pathways. Anthropogenic spread (propagules carried on humans or transported on produce or equipment) can be controlled with quarantine measures, which in turn can vary in intensity. In contrast, environmental spread processes are more difficult to control, but often have a temporal signal (e.g. seasonality) which can introduce both challenges and opportunities for surveillance and control. This leads to complex decisions regarding when, where and how to search. Recent modelling investigations of surveillance performance have optimised the output of simulation models, and found that a risk-weighted randomised search can perform close to optimally. However, exactly how quarantine and surveillance strategies should change to reflect different dispersal modes remains largely unaddressed. Here we develop a spatial simulation model of a plant fungal-pathogen incursion into an agricultural region, and its subsequent surveillance and control. We include structural differences in dispersal via the interplay of biological, environmental and anthropogenic connectivity between host sites (farms). Our objective was to gain broad insights into the relative roles played by different spread modes in propagating an invasion, and how incorporating knowledge of these spread risks may improve approaches to quarantine restrictions and surveillance. We find that broad heuristic rules for quarantine restrictions fail to contain the pathogen due to residual connectivity between sites, but surveillance measures enable early detection and successfully lead to suppression of the pathogen in all farms. 
Alternative surveillance strategies attain similar levels of performance by incorporating environmental or anthropogenic dispersal risk into the prioritisation of sites. Our model provides the basis for developing essential insights into the effectiveness of different surveillance and quarantine decisions for fungal pathogen control. Parameterised for authentic settings, it will aid our understanding of how the extent and resolution of interventions should reflect the spatial structure of dispersal processes.
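The risk-weighted randomised search the abstract refers to can be sketched very simply: farms are sampled for surveillance visits with probability proportional to a dispersal-risk weight rather than uniformly. The farm names and risk scores below are hypothetical, and the weighting scheme is a generic illustration rather than the paper's parameterisation.

```python
import random

# Toy sketch of a risk-weighted randomised surveillance search:
# sampling probability is proportional to a (hypothetical) risk weight.

def choose_survey_sites(risk_weights, n_visits, seed=42):
    """risk_weights: dict farm_id -> relative dispersal risk."""
    rng = random.Random(seed)
    farms = list(risk_weights)
    weights = [risk_weights[f] for f in farms]
    return rng.choices(farms, weights=weights, k=n_visits)

# Hypothetical scores combining environmental and anthropogenic spread risk:
risks = {"farm_A": 0.7, "farm_B": 0.2, "farm_C": 0.1}
visits = choose_survey_sites(risks, n_visits=100)
print({f: visits.count(f) for f in risks})
```

High-risk farms receive most of the visits while low-risk farms are still checked occasionally, which is what lets such a scheme approach the performance of an optimised allocation.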

Relevance: 60.00%

Abstract:

While environmental variation is a ubiquitous phenomenon in the natural world that has long been appreciated by the scientific community, recent changes in global climatic conditions have begun to raise awareness of the economic, political and sociological ramifications of global climate change. Climate warming has already resulted in documented changes in ecosystem functioning, with direct repercussions on ecosystem services. While predicting the influence of ecosystem changes on vital ecosystem services can be extremely difficult, knowledge of the organisation of ecological interactions within natural communities can help us better understand climate-driven changes in ecosystems. The role of environmental variation as an agent mediating population extinctions is likely to become increasingly important in the future. In previous studies, population extinction risk in stochastic environmental conditions has been tied to an interaction between population density dependence and the temporal autocorrelation of environmental fluctuations. When populations interact with each other, forming ecological communities, the response of such species assemblages to environmental stochasticity can depend, for example, on trophic structure in the food web and on the similarity of species-specific responses to environmental conditions. The results presented in this thesis indicate that variation in the correlation structure between species-specific environmental responses (environmental correlation) can have important qualitative and quantitative effects on community persistence and biomass stability in autocorrelated (coloured) environments. In addition, reddened environmental stochasticity and ecological drift processes (such as demographic stochasticity and dispersal limitation) have important implications for patterns in species' relative abundances and for community dynamics over time and space.
Our understanding of patterns in biodiversity at local and global scales can be enhanced by considering the relevance of different drift processes for community organisation and dynamics. Although the results laid out in this thesis are based on mathematical simulation models, they can be valuable in planning effective empirical studies as well as in interpreting existing empirical results. Most of the metrics considered here are directly applicable to empirical data.
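The "coloured" (autocorrelated) environmental noise at the heart of these results is commonly generated as a first-order autoregressive, AR(1), process; that standard construction is sketched below. The value kappa = 0 gives white noise and kappa near 1 gives reddened noise; the parameter values are illustrative, not those of the thesis.

```python
import math
import random

# Minimal AR(1) sketch of coloured environmental noise:
# x_t = kappa * x_{t-1} + sqrt(1 - kappa^2) * sd * eps_t.

def environmental_noise(n, kappa, sd=1.0, seed=1):
    rng = random.Random(seed)
    scale = sd * math.sqrt(1.0 - kappa ** 2)  # keeps stationary variance at sd**2
    x, series = 0.0, []
    for _ in range(n):
        x = kappa * x + scale * rng.gauss(0.0, 1.0)
        series.append(x)
    return series

def lag1_autocorr(xs):
    """Sample lag-1 autocorrelation, the usual measure of noise colour."""
    m = sum(xs) / len(xs)
    num = sum((a - m) * (b - m) for a, b in zip(xs, xs[1:]))
    den = sum((a - m) ** 2 for a in xs)
    return num / den

white = environmental_noise(2000, kappa=0.0)
red = environmental_noise(2000, kappa=0.8)
print(round(lag1_autocorr(white), 2), round(lag1_autocorr(red), 2))
```

Feeding such series into species-specific growth rates, with a chosen between-species correlation, is the usual route to studying how environmental correlation and noise colour affect community persistence.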

Relevance: 60.00%

Abstract:

A diffusion/replacement model for new consumer durables, designed to be used as a long-term forecasting tool, is developed. The model simulates new demand as well as replacement demand over time. The model, called DEMSIM, is built upon a counteractive adoption model specifying the basic forces affecting the adoption behaviour of individual consumers. These forces are the promoting forces and the resisting forces. The promoting forces are further divided into internal and external influences. These influences are operationalized within a multi-segmental diffusion model that generates the adoption behaviour of the consumers in each segment as an expected value. This diffusion model is combined with a replacement model built upon the same segmental structure, which in turn generates the expected replacement behaviour in each segment. To use DEMSIM as a forecasting tool in the early stages of a diffusion process, estimates of the model parameters are needed as soon as possible after product launch. However, traditional statistical techniques are not very helpful in estimating such parameters early in a diffusion process. To enable early parameter calibration, an optimization algorithm is developed by which the main parameters of the diffusion model can be estimated on the basis of very few sales observations. The optimization is carried out in iterative simulation runs. Empirical validations using the optimization algorithm reveal that the diffusion model performs well in early long-term sales forecasts, especially with respect to the timing of future sales peaks.
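The interplay of external and internal promoting influences is commonly captured by a Bass-style diffusion equation; the sketch below uses that generic form (external coefficient p, word-of-mouth coefficient q) with a fixed service life triggering replacement demand. The parameter values are hypothetical, and the one-for-one replacement rule is a deliberate simplification of DEMSIM's segmental treatment.

```python
# Bass-style sketch of first-purchase diffusion plus lagged replacement.
# p = external influence, q = internal (word-of-mouth) influence,
# m = market potential, life = assumed service life in periods.
# All parameter values are illustrative, not DEMSIM estimates.

def simulate_demand(m=1000.0, p=0.03, q=0.38, life=8, periods=25):
    cumulative, first, replacement = 0.0, [], []
    for t in range(periods):
        adopters = (p + q * cumulative / m) * (m - cumulative)
        cumulative += adopters
        first.append(adopters)
        # Units bought `life` periods ago come back as replacement demand:
        replacement.append(first[t - life] if t >= life else 0.0)
    return first, replacement

first, repl = simulate_demand()
peak = max(range(len(first)), key=first.__getitem__)
print(peak, round(first[peak], 1))
```

First-purchase sales rise to an interior peak and then decline as the market saturates, while replacement demand echoes the first-purchase curve with a lag; total demand is the sum of the two streams.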

Relevance: 60.00%

Abstract:

In this study we analyze how the ion concentrations in forest soil solution are determined by hydrological and biogeochemical processes. A dynamic model, ACIDIC, was developed, including processes common to dynamic soil acidification models. The model treats up to eight interacting layers and simulates soil hydrology, transpiration, root water and nutrient uptake, cation exchange, dissolution and reactions of Al hydroxides in solution, and the formation of carbonic acid and its dissociation products. It also allows the simultaneous use of preferential and matrix flow paths, enabling the throughfall water to enter the deeper soil layers in macropores without first reacting with the upper layers. Three different combinations of routing the throughfall water via macro- and micropores through the soil profile are presented. The large vertical gradient in the observed total charge was simulated successfully. According to the simulations, the gradient is mostly caused by differences in the intensity of water uptake, sulfate adsorption and organic anion retention at the various depths. The temporal variations in Ca and Mg concentrations were simulated fairly well in all soil layers. For H+, Al and K there was much more variation in the observed than in the simulated concentrations. Flow in macropores is a possible explanation for the apparent disequilibrium of the cation exchange for H+ and K, as the solution H+ and K concentrations have large vertical gradients in the soil. The amount of exchangeable H+ increased in the O and E horizons and decreased in the Bs1 and Bs2 horizons, the net change in the whole soil profile being a decrease. A large part of the decrease of exchangeable H+ in the illuvial B horizon was caused by sulfate adsorption. The model produces soil water amounts and solution ion concentrations that are comparable to the measured values, and it can be used in both hydrological and chemical studies of soils.
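The preferential-flow idea can be sketched in a few lines: a fraction of the throughfall is routed down macropores, reaching the deepest layer without reacting with the layers above, while the rest percolates through the soil matrix layer by layer. The horizon names match the abstract; the bypass fraction and amounts are illustrative, and this is a hedged simplification of ACIDIC's actual routing options, not its code.

```python
# Hedged sketch of macropore bypass flow: macropore water contacts only
# the deepest layer; matrix water contacts every layer in order.
# The 30% bypass fraction is an illustrative assumption.

def route_throughfall(amount_mm, layers, macropore_fraction=0.3):
    """Return the water (mm) that chemically contacts each soil layer."""
    matrix = amount_mm * (1.0 - macropore_fraction)
    contact = {layer: matrix for layer in layers}      # matrix flow wets all layers
    contact[layers[-1]] += amount_mm * macropore_fraction  # bypass flow arrives deep
    return contact

print(route_throughfall(10.0, ["O", "E", "Bs1", "Bs2"]))
```

Because bypass water skips exchange reactions in the O and E horizons, solution chemistry deeper in the profile can depart from what layer-by-layer equilibrium would predict, which is the mechanism invoked for the apparent H+ and K disequilibrium.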

Relevance: 60.00%

Abstract:

We provide a comparative performance evaluation of packet queuing and link admission strategies for low-speed wide area network links (e.g. 9600 bps, 64 kbps) that interconnect relatively high-speed, connectionless local area networks (e.g. 10 Mbps). In particular, we are concerned with the problem of providing differential quality of service to inter-LAN remote terminal and file transfer sessions, and throughput fairness between inter-LAN file transfer sessions. We use analytical and simulation models to study a variety of strategies. Our work also serves to address the performance comparison of connectionless vs. connection-oriented interconnection of CLNS LANs. When provision of priority at the physical transmission level is not feasible, we show, for low-speed WAN links (e.g. 9600 bps), the superiority of connection-oriented interconnection of connectionless LANs, with segregation of traffic streams with different QoS requirements into different window flow controlled connections. Such an implementation can easily be obtained by transporting IP packets over an X.25 WAN. For 64 kbps WAN links, there is a drop in file transfer throughputs, owing to connection overheads, but the other advantages are retained. The same solution also helps to provide throughput fairness between inter-LAN file transfer sessions. We also provide a corroboration of some of our modelling results with results from an experimental test-bed.
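A back-of-the-envelope calculation (not the paper's queueing model) shows why traffic segregation matters so much at 9600 bps: a small interactive packet stuck behind a FIFO backlog of bulk file-transfer packets waits many seconds, whereas with any form of priority it waits at most for the packet currently in service. The packet sizes below are illustrative assumptions.

```python
# Illustrative delay arithmetic for a slow WAN link.
# Assumed sizes: 80-byte remote-terminal packet, 1024-byte bulk packets.

LINK_BPS = 9600

def service_time(nbytes):
    return nbytes * 8 / LINK_BPS        # seconds to clock the packet out

def fifo_delay(backlog_bytes, pkt_bytes):
    """Interactive packet queued behind a FIFO backlog."""
    return service_time(backlog_bytes) + service_time(pkt_bytes)

def priority_delay(pkt_bytes, in_service_bytes):
    """With priority it waits only for the packet already in service."""
    return service_time(in_service_bytes) + service_time(pkt_bytes)

print(round(fifo_delay(10 * 1024, 80), 2))   # ten bulk packets queued ahead
print(round(priority_delay(80, 1024), 2))    # interactive packet jumps the queue
```

Segregating the classes into separate window flow-controlled connections achieves a similar effect without physical-level priority, because the bulk session's window bounds how much of its traffic can be in flight on the link at once.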

Relevance: 60.00%

Abstract:

Sugganahalli, a rural vernacular community in a warm-humid region of South India, is in transition towards adopting modern construction practices. Vernacular building elements like rubble walls and mud roofs are giving way to burnt brick walls and reinforced cement concrete (RCC)/tin roofs. Over 60% of the Indian population is rural, so the implications of such transitions for thermal comfort and energy in buildings are crucial to understand. Vernacular architecture evolves by adopting local resources and passive solar designs in response to the local climate. This paper investigates the effectiveness of passive solar elements on indoor thermal comfort, adopting modern climate-responsive design strategies. Dynamic simulation models validated by measured data have also been adopted to determine the impact of the transition from vernacular to modern material configurations. Age-old traditional design considerations were found to concur with modern understanding of bio-climatic response and climate-responsiveness. Modern transitions were found to increase average indoor temperatures by more than 7 °C. Such transformations tend to shift the indoor conditions to a psychrometric zone that is likely to require active air-conditioning. Also, the surveyed thermal sensation votes were found to lie outside the extended thermal comfort boundary for hot developing countries provided by Givoni in the bio-climatic chart.

Relevance: 60.00%

Abstract:

The sensitivity of combustion phasing and combustion descriptors to ignition timing, load and mixture quality is addressed for a multi-cylinder natural gas engine fuelled with bio-derived, H₂- and CO-rich syngas. While the descriptors for conventional fuels are well established and in use for closed-loop engine control, the presence of H₂ in syngas potentially alters the mixture properties and hence the combustion phasing, necessitating the current study. The ability of the descriptors to predict abnormal combustion, hitherto missing in the literature, is also addressed. Results from experiments using multi-cylinder engines and numerical studies using zero-dimensional, Wiebe-function-based simulation models are reported. For syngas with 20% H₂ and CO and 2% CH₄ (producer gas), an ignition retard of 5 ± 1° was required compared to the natural gas ignition timing to achieve the peak load of 72.8 kWe. It is found that, for syngas, whose flammability limits are 0.42-1.93, optimal engine operation was at an equivalence ratio of 1.12. The same methodology is extended to a two-cylinder engine to address the influence of syngas composition, especially the H₂ fraction (varying from 13% to 37%), on the combustion phasing. The study confirms the utility of pressure-trace-derived combustion descriptors, except for the first derivative of the pressure trace, in describing the MBT operating condition of the engine when fuelled with an alternative fuel. Both experiments and analysis suggest most of the combustion descriptors are independent of engine load and mixture quality. A near-linear relationship with ignition angle is observed. The general trends of the combustion descriptors for syngas-fuelled operation are similar to those of conventional fuels; the differences in sensitivity of the descriptors for syngas-fuelled engine operation require re-calibration of the control logic for MBT conditions. Copyright (C) 2014, Hydrogen Energy Publications, LLC. Published by Elsevier Ltd. All rights reserved.
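The zero-dimensional simulation approach named in the abstract rests on the standard Wiebe mass-fraction-burned curve, sketched below. The efficiency factor a, form factor m, start of combustion and burn duration are typical textbook values, not the paper's calibration for syngas.

```python
import math

# Standard Wiebe mass-fraction-burned curve:
#   MFB(theta) = 1 - exp(-a * x**(m+1)),  x = (theta - theta_start) / duration.
# Parameter values here are generic textbook choices.

def wiebe_mfb(theta, theta_start=-10.0, duration=50.0, a=5.0, m=2.0):
    """Mass fraction burned at crank angle theta (degrees after TDC)."""
    if theta < theta_start:
        return 0.0
    x = min((theta - theta_start) / duration, 1.0)
    return 1.0 - math.exp(-a * x ** (m + 1))

# A combustion-phasing descriptor such as CA50 (crank angle of 50% mass
# burned) can be read straight off the curve:
angles = [t / 10.0 for t in range(-200, 601)]
ca50 = min(angles, key=lambda t: abs(wiebe_mfb(t) - 0.5))
print(round(ca50, 1))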

Relevance: 60.00%

Abstract:

The advent of nanotechnology has necessitated a better understanding of how changes in material microstructure at the atomic level affect the macroscopic properties that control performance. Such a challenge has uncovered many phenomena that were not previously understood and had been taken for granted. Among them are the basic foundations of dislocation theories, which are now known to be inadequate. Simplifying assumptions invoked at the macroscale may not be applicable at the micro- and/or nanoscale. There are implications of scaling hierarchy associated with the inhomogeneity and non-equilibrium of physical systems. What is taken to be homogeneous and in equilibrium at the macroscale may not be so when the physical size of the material is reduced to microns. These fundamental issues cannot be dispensed with at will for the sake of convenience, because they could alter the outcome of predictions. Even more unsatisfying is the lack of consistency in modeling physical systems. This could translate to an inability to identify the relevant manufacturing parameters, rendering the end product impractical because of high cost. Advanced composite and ceramic materials are cases in point. Potential pitfalls for applying models at both the atomic and continuum levels are discussed. No attempt is made to unravel the truth of nature, be it particulates, a smooth continuum or a combination of both. The present trend of development in scaling tends to seek different characteristic lengths of material microstructures, with or without the influence of time effects. Much will be learned from atomistic simulation models showing how results can differ as boundary conditions and scales are changed. Quantum mechanics, continuum and cosmological models provide evidence that no general approach is in sight. Of immediate interest is perhaps the establishment of greater precision in terminology, so as to better communicate results involving multiscale physical events.

Relevance: 60.00%

Abstract:

Classical fracture mechanics is based on the premise that small-scale features can be averaged to give a larger-scale property, such that the assumption of material homogeneity holds. Involvement of the material microstructure, however, necessitates different characteristic lengths for describing different geometric features. Macroscopic parameters cannot be freely exchanged with those at the microscopic scale level; such a practice could cause misinterpretation of test data. Ambiguities arising from the lack of a more precise range of limitations for the definitions of physical parameters are discussed in connection with material length scales. Physical events overlooked between the macroscopic and microscopic scales could be the link needed to bridge the gap. The classical models for the creation of free surfaces in liquids and solids are oversimplified: they consider only the translational motion of individual atoms, while the movements of groups or clusters of molecules also deserve attention. Multiscale cracking behavior also requires the distinction of material damage involving at least two different scales in a single simulation. In this connection, special attention should be given to the use of asymptotic solutions in contrast to full-field solutions when applying fracture criteria; the former may leave out detailed features that would otherwise be included by the latter. Illustrations are provided for predicting the crack initiation sites of piezoceramics. No definite conclusions can be drawn from atomistic simulation models, such as those used in molecular dynamics, until the non-equilibrium boundary conditions can be better understood. The specification of strain rates and temperatures should be synchronized as the specimen size is reduced to microns. Many of the results obtained at the atomic scale should first be identified with those at the mesoscale before they are assumed to be connected with macroscopic observations. Hopefully, "mesofracture mechanics" can serve as the link to bring macrofracture mechanics closer to microfracture mechanics.

Relevance: 60.00%

Abstract:

How to regulate phytoplankton growth in water supply reservoirs has occupied managers and strategists for some fifty years now, and mathematical models have always featured in their design and operational constraints. In recent years, rather more sophisticated simulation models have begun to be available; ideally, these purport to provide the manager with improved forecasting of plankton blooms and the likely species, and the sort of decision support that might permit management choices to be selected with increased confidence. This account describes the adaptation and application of one such model, PROTECH (Phytoplankton RespOnses To Environmental CHange), to the problems of plankton growth in reservoirs. The article assumes no background knowledge of the main algal types; neither does it attempt to catalogue the problems that their abundance may cause in lakes and reservoirs.

Relevance: 60.00%

Abstract:

This article outlines the outcome of work that set out to provide one of the specified integral contributions to the overarching objectives of the EU-sponsored LIFE98 project described in this volume. Among others, these included a requirement to marry automatic monitoring and dynamic modelling approaches in the interests of securing better management of water quality in lakes and reservoirs. The particular task given to us was to devise the elements of an active management strategy for the Queen Elizabeth II Reservoir. This is one of the larger reservoirs supplying the population of the London area: after purification and disinfection, its water goes directly to the distribution network and to the consumers. The quality of the water in the reservoir is of primary concern, for the greater the content of biogenic materials, including phytoplankton, the more prolonged the purification and the more expensive the treatment. Whatever good phytoplankton may do by way of oxygenation and oxidative purification, it is eventually relegated to an impurity that has to be removed from the final product. Indeed, it has been estimated that the cost of removing algae and microorganisms from water represents about one quarter of its price at the tap. In chemically fertile waters, such as those typifying the resources of the Thames Valley, there is thus a powerful and ongoing incentive to minimise plankton growth in storage reservoirs. Indeed, the Thames Water company and its predecessor undertakings have a long and impressive history of confronting and quantifying the fundamentals of phytoplankton growth in their reservoirs, and of developing strategies for operation and design to combat them. The work described here follows in this tradition. However, the use of the model PROTECH-D to investigate present phytoplankton growth patterns in the Queen Elizabeth II Reservoir questioned the interpretation of some of the recent observations. On the other hand, it has reinforced the theories underpinning the original design of this and of the Thames Valley storage reservoirs constructed subsequently. The authors recount these experiences as an example of how simulation models can hone the theoretical base and its application to the practical problems of supplying water of good quality at economic cost, before the engineering is initiated.

Relevance: 60.00%

Abstract:

Some results of a line of research explored by the author in recent years concerning the small-scale fisheries of Mexico are discussed. Clarity of goals for fisheries management is stressed as a departure point before taking any step towards model building. Age-structured simulation models require input data and parameters such as growth rates, natural mortality, ages at first capture and at maturity, longevity, the longest possible series of catch records, and estimates of numbers caught per age group. The link between each cohort and the next can then be established by means of the Ricker or Beverton-Holt stock-recruitment models. Simulation experiments can then be carried out by changing fishing mortality. Whenever data on profits, costs and catch are available, these can also be analyzed. The use of simulation models is examined, with emphasis on the benefits derived from their use for fisheries management.
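The two stock-recruitment links named in the abstract have simple closed forms, sketched below. The parameter values are illustrative, not estimates for any Mexican fishery.

```python
import math

# The two classical stock-recruitment curves, in minimal form.
# alpha, beta and k are illustrative parameters.

def ricker(spawners, alpha=2.0, beta=0.001):
    """Ricker: recruitment peaks and then declines at high stock sizes
    (overcompensation)."""
    return alpha * spawners * math.exp(-beta * spawners)

def beverton_holt(spawners, alpha=2.0, k=1000.0):
    """Beverton-Holt: recruitment rises monotonically and saturates."""
    return alpha * spawners / (1.0 + spawners / k)

for s in (100, 1000, 5000):
    print(s, round(ricker(s), 1), round(beverton_holt(s), 1))
```

In an age-structured simulation, one of these curves converts the spawning stock left by each cohort into the recruits that seed the next, after which fishing mortality can be varied experimentally.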

Relevance: 60.00%

Abstract:

A discussion is presented of the two approaches, holism and reductionism, in the study of the environmental sciences, with reference to various projects presently being conducted by ICLARM and its collaborators using the holistic approach. Schematic representations are given of ICLARM's FISHBASE, of the ECOPATH II model of the Peruvian upwelling ecosystem and the submodels which may be incorporated in large simulation models of the upwelling system, and of material flows in a rice-fish/shrimp integrated farming system of the Mekong Delta, Vietnam.