883 results for Simulation and modelling
Abstract:
Self-contained Non-Equilibrium Molecular Dynamics (NEMD) simulations using Lennard-Jones potentials were performed to identify the origin and mechanisms of atomic-scale interfacial behavior between sliding metals. The mixing sequence and velocity profiles were compared via MD simulations for three cases: self-mated, similar, and hard-soft crystal pairs. The results showed shear instability, atomic-scale mixing, and generation of eddies at the sliding interface. Vorticity at the interface suggests that atomic flow during sliding is similar to fluid flow under Kelvin-Helmholtz instability, and this is supported by velocity profiles from the simulations. The initial step-function velocity profile spreads during sliding; however, the profile changes little at later stages of the simulation and eventually stops spreading. The steady-state friction coefficient was monitored as a function of sliding velocity. Frictional behavior can be explained on the basis of plastic deformation and adiabatic effects. The growth kinetics of the mixing layer were also investigated.
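For readers unfamiliar with the pair potential underlying such NEMD runs, the sketch below evaluates the Lennard-Jones energy and force between two atoms. The parameter values and reduced units are illustrative assumptions, not those used in the study.

```python
import numpy as np

def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair energy and scalar force at separation r.

    epsilon and sigma are illustrative reduced-unit values, not the
    parameters used in the paper's metal-metal simulations.
    """
    sr6 = (sigma / r) ** 6
    sr12 = sr6 ** 2
    energy = 4.0 * epsilon * (sr12 - sr6)
    # F = -dU/dr; a positive value means the atoms repel each other
    force = 24.0 * epsilon * (2.0 * sr12 - sr6) / r
    return energy, force

if __name__ == "__main__":
    for r in np.linspace(0.95, 2.5, 5):
        u, f = lennard_jones(r)
        print(f"r = {r:.2f}  U = {u:+.3f}  F = {f:+.3f}")
```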
Abstract:
In wheat, tillering and water-soluble carbohydrates (WSCs) in the stem are potential traits for adaptation to different environments and are of interest as targets for selective breeding. This study investigated the observation that a high stem WSC concentration (WSCc) is often related to low tillering. The proposition tested was that stem WSC accumulation is plant-density dependent and could be an emergent property of tillering, whether driven by genotype or by environment. A small subset of recombinant inbred lines (RILs) contrasting for tillering was grown at different plant densities or on different sowing dates in multiple field experiments. Both tillering and WSCc were highly influenced by the environment, with a smaller, distinct genotypic component; the genotype × environment range covered 350-750 stems m(-2) and 25-210 mg g(-1) WSCc. Stem WSCc was inversely related to stem number m(-2), but genotypic rankings for stem WSCc persisted when RILs were compared at similar stem density. Low-tillering, high-WSCc RILs had similar leaf area index, larger individual leaves, and stems with larger internode cross-section and wall area when compared with high-tillering, low-WSCc RILs. The maximum number of stems per plant was positively associated with growth and relative growth rate per plant, tillering rate and duration, and also, in some treatments, with leaf appearance rate and final leaf number. A common threshold of the red:far-red ratio (0.39-0.44; standard error of the difference 0.055) coincided with the maximum stem number per plant across genotypes and plant densities, and could be effectively used in crop simulation modelling as a 'cut-off' rule for tillering. The relationship between tillering, WSCc, and their component traits, as well as the possible implications for crop simulation and breeding, is discussed.
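As an illustration of how such a red:far-red 'cut-off' rule could be encoded in a crop simulation, the hypothetical function below stops tiller initiation once the canopy red:far-red ratio drops below a threshold. The default threshold, the daily loop, and the one-tiller-per-step increment are illustrative assumptions, not code from the study.

```python
def tillering_allowed(red_far_red_ratio, threshold=0.40):
    """Illustrative 'cut-off' rule: tiller initiation continues only while
    the canopy red:far-red ratio stays above the threshold (the paper
    reports a common threshold of roughly 0.39-0.44)."""
    return red_far_red_ratio > threshold

# Hypothetical daily loop in a crop model: stop adding tillers once the
# deepening canopy pushes the red:far-red ratio below the cut-off.
canopy_rfr = [0.72, 0.61, 0.52, 0.45, 0.41, 0.38, 0.35]
tillers = 1
for rfr in canopy_rfr:
    if tillering_allowed(rfr):
        tillers += 1  # one new tiller per step, purely for illustration
print(f"Final tiller number per plant: {tillers}")
```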
Abstract:
Inter-annual rainfall variability is a major challenge to sustainable and productive grazing management on rangelands. In Australia, rainfall variability is particularly pronounced, and failure to manage appropriately leads to major economic loss and environmental degradation. Recommended strategies to manage sustainably include stocking at long-term carrying capacity (LTCC) or varying stock numbers with forage availability. These strategies are conceptually simple but difficult to implement, given the scale and spatial heterogeneity of grazing properties and the uncertainty of the climate. This paper presents learnings and insights from northern Australia gained from research and modelling on managing for rainfall variability. A method to objectively estimate LTCC in large, heterogeneous paddocks is discussed, and guidelines and tools to tactically adjust stocking rates are presented. The possible use of seasonal climate forecasts (SCF) in management is also considered. Results from a 13-year grazing trial in Queensland show that constant stocking at LTCC was far more profitable and largely maintained land condition compared with heavy stocking (HSR). Variable stocking (VAR) with or without the use of SCF was marginally more profitable, but income variability was greater and land condition poorer than constant stocking at LTCC. Two commercial-scale trials in the Northern Territory with breeder cows highlighted the practical difficulties of variable stocking and provided evidence that heavier pasture utilisation rates depress reproductive performance. Simulation modelling across a range of regions in northern Australia also showed a decline in resource condition and profitability under heavy stocking rates. Modelling further suggested that the relative value of variable v. constant stocking depends on stocking rate and land condition. Importantly, variable stocking may allow slightly higher stocking rates without pasture degradation. Enterprise-level simulations run for breeder herds nevertheless show that poor economic performance can occur under constant stocking and even under variable stocking in some circumstances. Modelling and research results both suggest that a form of constrained flexible stocking should be applied to manage for climate variability. Active adaptive management and research will be required as future climate changes make managing for rainfall variability increasingly challenging.
Abstract:
Agricultural systems models worldwide are increasingly being used to explore options and solutions for the food security, climate change adaptation and mitigation and carbon trading problem domains. APSIM (Agricultural Production Systems sIMulator) is one such model that continues to be applied and adapted to this challenging research agenda. From its inception twenty years ago, APSIM has evolved into a framework containing many of the key models required to explore changes in agricultural landscapes with capability ranging from simulation of gene expression through to multi-field farms and beyond. Keating et al. (2003) described many of the fundamental attributes of APSIM in detail. Much has changed in the last decade, and the APSIM community has been exploring novel scientific domains and utilising software developments in social media, web and mobile applications to provide simulation tools adapted to new demands. This paper updates the earlier work by Keating et al. (2003) and chronicles the changing external challenges and opportunities being placed on APSIM during the last decade. It also explores and discusses how APSIM has been evolving to a “next generation” framework with improved features and capabilities that allow its use in many diverse topics.
Abstract:
Stochastic volatility models are of fundamental importance to the pricing of derivatives. One of the most commonly used models of stochastic volatility is the Heston model, in which the price and volatility of an asset evolve as a pair of coupled stochastic differential equations. The computation of asset prices and volatilities involves the simulation of many sample trajectories with conditioning. The problem is treated using the method of particle filtering. While the simulation of a shower of particles is computationally expensive, each particle behaves independently, making such simulations ideal for massively parallel heterogeneous computing platforms. In this paper, we present our portable OpenCL implementation of the Heston model and discuss its performance and efficiency characteristics on a range of architectures including Intel CPUs, NVIDIA GPUs, and Intel Many Integrated Core (MIC) accelerators.
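To make the model concrete, here is a minimal NumPy sketch of a full-truncation Euler discretisation of the Heston price/variance SDE pair over independent sample paths. The parameter values are illustrative, and this plain-Python sketch only stands in for the idea; it does not reproduce the paper's particle-filtering or OpenCL implementation.

```python
import numpy as np

def simulate_heston(s0=100.0, v0=0.04, mu=0.05, kappa=2.0, theta=0.04,
                    xi=0.3, rho=-0.7, T=1.0, steps=252, n_paths=10000,
                    seed=0):
    """Full-truncation Euler scheme for the Heston model.

    Illustrative parameters; each path evolves independently, which is the
    property that makes such simulations well suited to parallel hardware.
    """
    rng = np.random.default_rng(seed)
    dt = T / steps
    s = np.full(n_paths, s0)
    v = np.full(n_paths, v0)
    for _ in range(steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho ** 2) * rng.standard_normal(n_paths)
        v_pos = np.maximum(v, 0.0)  # full truncation keeps variance usable
        s *= np.exp((mu - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1)
        v += kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos * dt) * z2
    return s, v

if __name__ == "__main__":
    prices, variances = simulate_heston()
    print(f"mean terminal price: {prices.mean():.2f}")
    print(f"mean terminal variance: {variances.mean():.4f}")
```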
Abstract:
Modelling city traffic involves capturing all the dynamics that exist in real traffic. Probabilistic models and queuing theory have been used for mathematical representation of the traffic system. This paper proposes the concept of modelling the traffic system using bond graphs, wherein traffic flow is based on energy conservation. The proposed modelling approach uses switched junctions to model complex traffic networks. The paper presents the modelling, simulation, and experimental validation aspects.
Abstract:
To establish itself within the host, Mycobacterium tuberculosis (Mtb) has evolved various means of attacking the host system. One such crucial strategy is the exploitation of the host's iron resources. Obtaining and maintaining the required concentration of iron becomes a matter of contest between the host and the pathogen, both trying to achieve this through complex molecular networks. The extent of complexity makes it important to obtain a systems perspective of the interplay between the host and the pathogen with respect to iron homeostasis. We have reconstructed a systems model comprising 92 components and 85 protein-protein or protein-metabolite interactions, captured as a set of 194 rules. Apart from the interactions, these rules also account for protein synthesis and decay, RBC circulation, and bacterial production and death rates. We have used a rule-based modelling approach, Kappa, to simulate the system separately under infection and non-infection conditions. Various perturbations, including knock-outs and dual perturbations, were also carried out to monitor the behavioral change of important proteins and metabolites. From this, key components as well as the required controlling factors in the model that are critical for maintaining iron homeostasis were identified. The model is able to re-establish the importance of the iron-dependent regulator (IdeR) in Mtb and transferrin (Tf) in the host. Perturbations in which iron storage is increased appear to enhance nutritional immunity, and the analysis indicates how they can be harmful for the host. Instead, decreasing the rate of iron uptake by Tf may prove to be helpful. Simulation and perturbation studies help in identifying Tf as a possible drug target. Regulating the mycobactin (myB) concentration was also identified as a possible strategy to control bacterial growth. The simulations thus provide significant insight into iron homeostasis and also help in identifying possible drug targets for tuberculosis.
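The paper's Kappa model is far too large to reproduce here, but the toy Gillespie-style sketch below illustrates the general idea of stochastically firing rules that move iron between host and pathogen pools. The two "rules" (transferrin sequestering free iron, mycobactin capturing transferrin-bound iron), their rate constants, and the initial counts are purely hypothetical and are not taken from the study.

```python
import random

# Hypothetical species counts: free iron, transferrin-bound iron,
# and mycobactin-bound iron (acquired by the pathogen).
state = {"Fe_free": 500, "Fe_Tf": 0, "Fe_myB": 0}

# Two illustrative rules with made-up rate constants, standing in for the
# paper's 194 Kappa rules: (name, rate, propensity count, state change).
rules = [
    ("Tf_sequesters_Fe", 0.05, lambda s: s["Fe_free"],
     {"Fe_free": -1, "Fe_Tf": +1}),
    ("myB_captures_Fe", 0.01, lambda s: s["Fe_Tf"],
     {"Fe_Tf": -1, "Fe_myB": +1}),
]

random.seed(1)
t, t_end = 0.0, 50.0
while t < t_end:
    propensities = [k * count(state) for _, k, count, _ in rules]
    total = sum(propensities)
    if total == 0:
        break
    t += random.expovariate(total)        # time to the next rule firing
    r = random.uniform(0.0, total)        # choose a rule proportionally
    for (_, _, _, change), a in zip(rules, propensities):
        if r < a:
            for species, delta in change.items():
                state[species] += delta
            break
        r -= a

print(state)
```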
Abstract:
Structural Health Monitoring (SHM) systems require integration of non-destructive technologies into structural design and operational processes. Modeling and simulation of complex NDE inspection processes are important aspects in the development and deployment of SHM technologies. Ray tracing techniques are vital simulation tools to visualize the wave path inside a material. These techniques also help in optimizing the location of transducers and their orientation with respect to the zone of interrogation, increasing the chances of detecting and identifying a flaw in that zone. While current state-of-the-art techniques such as ray tracing based on geometric principles help in such visualization, other information, such as signal losses due to the spherical or cylindrical shape of the wave front, is rarely taken into consideration. The problem becomes more complicated in the case of dispersive guided wave propagation and near-field defect scattering. We review the existing models and tools to perform ultrasonic NDE simulation in structural components. As an initial step, we develop a ray-tracing approach in which phase and spectral information are preserved. This enables one to study wave scattering beyond simple time-of-flight calculation of rays. Challenges in terms of theory and modelling of defects of various kinds are discussed. Additional considerations, such as signal decay and the physics of scattering, are reviewed, and the challenges involved in realistic computational implementation are discussed. Potential application of this approach to SHM system design is highlighted; by applying it to complex structural components such as airframe structures, SHM is shown to provide additional value in terms of lighter weight and/or enhanced longevity, resulting from an extension of the damage-tolerance design principle without compromising safety and reliability.
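As a minimal illustration of a phase-preserving ray calculation, the sketch below propagates a single ray from a transducer to a point scatterer and back, tracking time of flight, accumulated phase at one frequency, and a 1/r spherical-spreading amplitude loss. The geometry, wave speed, and frequency are illustrative assumptions; dispersion, mode conversion, and guided-wave effects discussed in the abstract are deliberately ignored.

```python
import numpy as np

def ray_echo(source, scatterer, wave_speed=5900.0, freq=5.0e6):
    """Time of flight, phase, and spherical-spreading amplitude for a
    source -> scatterer -> source ray path (single frequency, bulk wave).

    wave_speed and freq are illustrative (a steel-like longitudinal speed
    in m/s and a 5 MHz probe), not values from the paper.
    """
    source = np.asarray(source, dtype=float)
    scatterer = np.asarray(scatterer, dtype=float)
    path_length = 2.0 * np.linalg.norm(scatterer - source)  # out and back
    tof = path_length / wave_speed
    phase = 2.0 * np.pi * freq * tof                  # accumulated phase
    amplitude = 1.0 / max(path_length, 1e-9)          # 1/r spreading loss
    return tof, phase % (2.0 * np.pi), amplitude

if __name__ == "__main__":
    tof, phase, amp = ray_echo(source=(0.0, 0.0, 0.0),
                               scatterer=(0.03, 0.01, 0.0))
    print(f"time of flight: {tof * 1e6:.2f} us, phase: {phase:.2f} rad, "
          f"relative amplitude: {amp:.1f}")
```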
Abstract:
The pulsed liquid fluidized bed was studied using numerical simulation and experimental methods. The area-averaged two-fluid model (TFM) was used to simulate the pulsed fluidization. The bed expansion and collapse processes were simulated first, and the phenomena obtained from the calculation were consistent with our previous experiments and observations. For the pulsed fluidization, the variations of bed height, particle velocity, and particle concentration distribution were obtained and analyzed. Experiments were carried out to validate the simulation results. The pressure variation with time at different locations was measured using pressure transducers and compared with the simulated results. The variations of bed height and particle concentration distribution were recorded using a digital video camera. Overall, the experimental results were consistent with the simulations.
Abstract:
A full two-fluid model of reacting gas-particle flows with an algebraic unified second-order moment (AUSM) turbulence-chemistry model is used to simulate Beijing coal combustion and NOx formation. The sub-models are the k-epsilon-kp two-phase turbulence model, the EBU-Arrhenius volatile and CO combustion model, the six-flux radiation model, the coal devolatilization model, and the char combustion model. The blocking effect on NOx formation is discussed. In addition, chemical equilibrium analysis is used to predict NOx concentration at different temperatures. Results of the CFD simulation and chemical equilibrium analysis show that optimizing air dynamic parameters can delay NOx formation and decrease NOx emission, but this is effective only within a restricted range. To decrease NOx emission to near zero, re-burning or other chemical methods must be used.
Abstract:
Climate change is expected to have a significant impact on the future thermal performance of buildings. Building simulation and sensitivity analysis can be employed to predict these impacts, guiding interventions to adapt buildings to future conditions. This article explores the use of simulation to study the impact of climate change on a theoretical office building in the UK, employing a probabilistic approach. The work studies (1) appropriate performance metrics and underlying modelling assumptions, (2) the sensitivity of computational results, to identify key design parameters, and (3) the impact of zonal resolution. The conclusions highlight the importance of assumptions about electricity conversion factors, the proper management of internal heat gains, and the need to use an appropriately detailed zonal resolution. © 2010 Elsevier B.V. All rights reserved.
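To illustrate the kind of probabilistic sensitivity analysis the abstract refers to, the sketch below samples a few building design parameters, evaluates a deliberately simplified annual-energy proxy, and ranks the parameters by correlation with the output. The parameter names, ranges, and the energy expression are invented for illustration and bear no relation to the study's actual model or results.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000

# Hypothetical design parameters sampled from uniform ranges (illustrative only).
params = {
    "wall_u_value":   rng.uniform(0.15, 0.45, n),   # W/m2K
    "glazing_ratio":  rng.uniform(0.20, 0.60, n),   # window-to-wall ratio
    "internal_gains": rng.uniform(15.0, 35.0, n),   # W/m2
    "infiltration":   rng.uniform(0.2, 1.0, n),     # air changes per hour
}

# Deliberately simplified annual-energy proxy (kWh/m2), standing in for a
# full building simulation; the coefficients are made up.
energy = (120.0 * params["wall_u_value"]
          + 40.0 * params["glazing_ratio"]
          - 0.8 * params["internal_gains"]
          + 25.0 * params["infiltration"]
          + rng.normal(0.0, 2.0, n))                # unexplained variation

# Rank parameters by absolute correlation with the output, a common
# first-pass sensitivity measure in probabilistic performance studies.
for name, values in sorted(
        params.items(),
        key=lambda kv: -abs(np.corrcoef(kv[1], energy)[0, 1])):
    r = np.corrcoef(values, energy)[0, 1]
    print(f"{name:15s} correlation with energy use: {r:+.2f}")
```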
Abstract:
Computer modelling approaches have significant potential to enable decision-making about various aspects of responsive manufacturing. In order to understand the system prior to the selection of any responsiveness strategy, multiple process segments of organisations need to be modelled. The article presents a novel systematic approach for creating coherent sets of unified enterprise, simulation and other supporting models that collectively facilitate responsiveness. In this approach, enterprise models are used to explicitly define relatively enduring relationships between (i) production planning and control (PPC) processes that implement a particular strategy and (ii) process-oriented elements of production systems that are work-loaded by the PPC processes. Coherent simulation models can in part be derived from the enterprise models, so that they computationally execute production system behaviours. In this way, time-based performance outcomes can be simulated, so that the impacts of alternative PPC strategies on planning and controlling historical or forecast patterns of workflow through (current and possible future) production system models can be analysed. The article describes the unified modelling approach conceived and its application in a case study with a furniture-industry small and medium-sized enterprise (SME). Copyright © 2010 Inderscience Enterprises Ltd.