196 results for dynamics simulation
Abstract:
This paper presents a new approach to the LU decomposition method for the simulation of stationary and ergodic random fields. The approach overcomes the size limitations of LU and is suitable for simulations of any size. The proposed approach can facilitate fast updating of generated realizations with new data, when appropriate, without repeating the full simulation process. Based on a novel column partitioning of the L matrix, expressed in terms of successive conditional covariance matrices, the approach presented here demonstrates that LU simulation is equivalent to the successive solution of kriging residual estimates plus random terms. Consequently, it can be used for the LU decomposition of matrices of any size. The simulation approach is termed conditional simulation by successive residuals because, at each step, a small set (group) of random variables is simulated with an LU decomposition of an updated conditional covariance matrix of residuals. The simulated group is then used to estimate residuals without the need to solve large systems of equations.
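To make the successive-residuals idea concrete, here is a minimal Python sketch (not the paper's implementation): groups of nodes are simulated in sequence, each as a simple-kriging mean given everything simulated so far plus a random term drawn with a small Cholesky (LU) factor of the conditional covariance. For brevity the sketch re-solves a full kriging system at every group, which is precisely the cost the paper's column partitioning avoids; it only illustrates the stated equivalence between LU simulation and kriging residual estimates plus random terms.

```python
import numpy as np

def successive_group_simulation(C, group_size, rng=None):
    """Simulate y ~ N(0, C) group by group.

    Mathematically equivalent to a full LU (Cholesky) simulation, but only
    ever factorizes group_size x group_size conditional covariance blocks.
    """
    rng = np.random.default_rng(rng)
    n = C.shape[0]
    y = np.zeros(n)
    done = []                                       # indices already simulated
    for start in range(0, n, group_size):
        grp = list(range(start, min(start + group_size, n)))
        if done:
            # simple kriging of the group from previously simulated values
            C_dd = C[np.ix_(done, done)]
            C_dg = C[np.ix_(done, grp)]
            w = np.linalg.solve(C_dd, C_dg)         # kriging weights
            mean = w.T @ y[done]                    # residual estimate
            C_cond = C[np.ix_(grp, grp)] - C_dg.T @ w
        else:
            mean = np.zeros(len(grp))
            C_cond = C[np.ix_(grp, grp)]
        L = np.linalg.cholesky(C_cond)              # small LU/Cholesky factor
        y[grp] = mean + L @ rng.standard_normal(len(grp))
        done += grp
    return y

# e.g. an exponential covariance on a 1-D grid of 200 nodes
d = np.abs(np.subtract.outer(np.arange(200.0), np.arange(200.0)))
print(successive_group_simulation(np.exp(-d / 30.0), group_size=20)[:5])
```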
Abstract:
Predicting plant leaf area production is required for modelling carbon balance and tiller dynamics in plant canopies. Plant leaf area production can be studied using a framework based on intercepted radiation, radiation use efficiency (RUE) and leaf area ratio (LAR, the ratio of leaf area to net above-ground biomass). The objective of this study was to test this framework for predicting leaf area production of sorghum during vegetative development by examining the stability of the contributing components over a large range of plant density. Four densities, varying from 2 to 16 plants m(-2), were implemented in a field experiment. Plants were either allowed to tiller or were maintained as uniculm by systematic tiller removal. In all cases, intercepted radiation was recorded daily, and leaf area and shoot dry matter partitioning were quantified weekly at the individual culm level. Up to anthesis, a unique relationship applied between the fraction of intercepted radiation and leaf area index, and between shoot dry weight accumulation and the amount of intercepted radiation, regardless of plant density. Partitioning of shoot assimilate between leaf, stem and head was also common across treatments up to anthesis, at both plant and culm levels. The relationships of specific leaf area (SLA) and LAR of tillering plants with thermal time (TT) from emergence did not change with plant density. In contrast, the SLA of uniculm plants was appreciably lower under low-density conditions at any given TT from emergence. This was interpreted as a consequence of an assimilate surplus arising from the inability of the plant to compensate by increasing the leaf area a culm could produce. It is argued that the stability of the extinction coefficient, RUE and plant LAR of tillering plants observed in these conditions provides a reliable way to predict leaf area production regardless of plant density. Crown Copyright (C) 2002 Published by Elsevier Science B.V. All rights reserved.
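A rough illustration of the framework's chain (fraction of radiation intercepted via an extinction coefficient, biomass via RUE, leaf area via LAR) as a toy daily loop; the parameter values are placeholders, not the fitted sorghum values from this study.

```python
import math

# Daily leaf-area production sketch: biomass gain = intercepted radiation * RUE,
# leaf area = LAR * above-ground biomass (the paper's framework).
# Parameter values are illustrative placeholders, not fitted sorghum values.
K = 0.4        # canopy extinction coefficient (assumed)
RUE = 1.25     # g biomass per MJ intercepted radiation (assumed)
LAR = 0.008    # m2 leaf area per g above-ground biomass (assumed)

def simulate_leaf_area(days, daily_radiation_mj, density, la0=0.001):
    """Return per-plant leaf area (m2) after `days` of growth."""
    leaf_area, biomass = la0, la0 / LAR
    for _ in range(days):
        lai = leaf_area * density                     # leaf area index
        f = 1.0 - math.exp(-K * lai)                  # fraction intercepted
        biomass += RUE * daily_radiation_mj * f / density  # per-plant gain
        leaf_area = LAR * biomass                     # framework: LA = LAR * W
    return leaf_area

for d in (2, 16):                                     # plants m-2, as in the study
    print(d, "plants m-2:", round(simulate_leaf_area(60, 20.0, d), 3), "m2/plant")
```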
Abstract:
The Agricultural Production Systems sIMulator, APSIM, is a cropping system modelling environment that simulates the dynamics of soil-plant-management interactions within a single crop or a cropping system. Adaptation of previously developed crop models has resulted in multiple crop modules in APSIM, which have low scientific transparency and code efficiency. A generic crop model template (GCROP) has been developed to capture unifying physiological principles across crops (plant types) and to provide modular and efficient code for crop modelling. It comprises a standard crop interface to the APSIM engine, a generic crop model structure, a crop process library, and well-structured crop parameter files. The process library contains the major science underpinning the crop models and incorporates generic routines, based on physiological principles, for growth and development processes that are common across crops. It allows APSIM to simulate different crops using the same set of computer code. The generic model structure and parameter files provide an easy way to test, modify, exchange and compare modelling approaches at the process level without necessitating changes in the code. The standard interface generalises the model inputs and outputs, and utilises a standard protocol to communicate with other APSIM modules through the APSIM engine. The crop template serves as a convenient means to test new insights and compare approaches to component modelling, while maintaining a focus on predictive capability. This paper describes and discusses the scientific basis, the design, implementation and future development of the crop template in APSIM. On this basis, we argue that the combination of good software engineering with sound crop science can enhance the rate of advance in crop modelling. Crown Copyright (C) 2002 Published by Elsevier Science B.V. All rights reserved.
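A schematic of the template idea, in plain Python rather than APSIM's actual code: a generic crop engine draws growth routines from a process library, selected by entries in a parameter file, so modelling approaches can be swapped without code changes. All names here are illustrative, not APSIM's real interfaces.

```python
# Minimal sketch of the crop-template idea: one generic engine, a process
# library of interchangeable routines, and per-crop parameter sets that
# select routines without touching code. Names are illustrative.

def rue_biomass(state, p):
    """Biomass gain from intercepted radiation (RUE approach)."""
    return p["rue"] * state["intercepted_mj"]

def te_biomass(state, p):
    """Biomass gain from water use (transpiration-efficiency approach)."""
    return p["te"] * state["transpiration_mm"]

PROCESS_LIBRARY = {"rue": rue_biomass, "te": te_biomass}

class GenericCrop:
    def __init__(self, params):
        self.params = params
        self.growth = PROCESS_LIBRARY[params["biomass_routine"]]
        self.biomass = 0.0

    def step(self, state):                 # called once per simulated day
        self.biomass += self.growth(state, self.params)

# "Parameter file" entries pick the routine; the engine code is unchanged.
sorghum = GenericCrop({"biomass_routine": "rue", "rue": 1.25})
sorghum.step({"intercepted_mj": 8.0})
print(sorghum.biomass)
```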
Abstract:
The Load-Unload Response Ratio (LURR) method is an intermediate-term earthquake prediction approach that has shown considerable promise. It involves calculating the ratio of a specified energy release measure during loading and unloading, where loading and unloading periods are determined from the earth-tide-induced perturbations in the Coulomb failure stress on optimally oriented faults. In the lead-up to large earthquakes, high LURR values are frequently observed a few months or years prior to the event. These signals may have a similar origin to the observed accelerating seismic moment release (AMR) prior to many large earthquakes, or may be due to critical sensitivity of the crust when a large earthquake is imminent. As a first step towards studying the underlying physical mechanism for the LURR observations, numerical studies are conducted using the particle-based lattice solid model (LSM) to determine whether LURR observations can be reproduced. The model is initialized as a heterogeneous 2-D block made up of random-sized particles bonded by elastic-brittle links. The system is subjected to uniaxial compression from rigid driving plates on the upper and lower edges of the model. Experiments are conducted using both strain and stress control to load the plates. A sinusoidal stress perturbation is added to the gradual compressional loading to simulate loading and unloading cycles, and LURR is calculated. The results reproduce signals similar to those observed in earthquake prediction practice, with a high LURR value followed by a sudden drop prior to macroscopic failure of the sample. The results suggest that LURR provides a good predictor for catastrophic failure in elastic-brittle systems and motivate further research to study the underlying physical mechanisms and statistical properties of high LURR values. The results provide encouragement for earthquake prediction research and for the use of advanced simulation models to probe the physics of earthquakes.
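A minimal sketch of the LURR calculation, under assumed conventions: loading wherever the perturbing Coulomb failure stress is increasing, and the energy-release measure summed over a trailing window. The catalogue and stress function below are synthetic, not the paper's data.

```python
import numpy as np

def lurr(times, event_energy, cfs, window):
    """Load-Unload Response Ratio over a trailing window.

    times, event_energy : arrays of event times and energy-release measures
    cfs(t)              : tide-induced Coulomb failure stress at time t
    Loading is taken as periods where d(CFS)/dt > 0.
    """
    dt = 1e-3
    loading = np.array([(cfs(t + dt) - cfs(t - dt)) > 0 for t in times])
    recent = times >= times[-1] - window
    e_load = event_energy[recent & loading].sum()
    e_unload = event_energy[recent & ~loading].sum()
    return e_load / e_unload if e_unload > 0 else np.inf

# Toy example: sinusoidal tidal CFS and a synthetic event catalogue (assumed).
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 100.0, 500))
e = rng.exponential(1.0, size=t.size)
print(lurr(t, e, cfs=lambda x: np.sin(2 * np.pi * x), window=50.0))
```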
Abstract:
A decision theory framework can be a powerful technique for deriving optimal management decisions for endangered species. We built a spatially realistic stochastic metapopulation model for the Mount Lofty Ranges Southern Emu-wren (Stipiturus malachurus intermedius), a critically endangered Australian bird. Using discrete-time Markov chains to describe the dynamics of a metapopulation and stochastic dynamic programming (SDP) to find optimal solutions, we evaluated the following management decisions: enlarging existing patches, linking patches via corridors, and creating a new patch. This is the first application of SDP to optimal landscape reconstruction and one of the few times that landscape reconstruction dynamics have been integrated with population dynamics. SDP is a powerful tool that has advantages over standard Monte Carlo simulation methods because it can give the exact optimal strategy for every landscape configuration (combination of patch areas and presence of corridors) and pattern of metapopulation occupancy, as well as a trajectory of strategies. It is useful when a sequence of management actions can be performed over a given time horizon, as is the case for many endangered species recovery programs, where only fixed amounts of resources are available in each time step. However, it is generally limited by computational constraints to rather small networks of patches. The model shows that optimal metapopulation management decisions depend greatly on the current state of the metapopulation, and there is no strategy that is universally the best. The extinction probability over 30 yr under the optimal state-dependent management actions is 50-80% better than no management, whereas the best fixed state-independent sets of strategies are only 30% better than no management. This highlights the advantages of using a decision theory tool to investigate conservation strategies for metapopulations. It is clear from these results that the sequence of management actions is critical, and this can only be effectively derived from stochastic dynamic programming. The model illustrates the underlying difficulty in determining simple rules of thumb for the sequence of management actions for a metapopulation. This use of a decision theory framework extends the capacity of population viability analysis (PVA) to manage threatened species.
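The flavour of the SDP can be shown with a tiny backward-induction sketch: states are patch-occupancy vectors, actions change transition probabilities, and the value function is the probability of persisting to the horizon. The transition numbers and action effects are invented for illustration and are far simpler than the paper's spatially realistic model.

```python
import itertools

PATCHES = 3
ACTIONS = {"none": 0.0, "enlarge": 0.05, "corridor": 0.04}  # risk cuts (assumed)
HORIZON = 30

def transition(occ, action):
    """Yield (next_state, probability) pairs for one year."""
    p_ext = max(0.2 - ACTIONS[action], 0.01)    # per-patch extinction prob
    p_col = 0.15                                # per-patch colonization prob
    per_patch = [((1 - p_ext, p_ext) if o else (p_col, 1 - p_col)) for o in occ]
    for nxt in itertools.product((1, 0), repeat=PATCHES):
        prob = 1.0
        for (p_occ, p_emp), s in zip(per_patch, nxt):
            prob *= p_occ if s else p_emp
        yield nxt, prob

# Terminal value: 1 if the metapopulation is extant at the horizon.
V = {s: (0.0 if s == (0,) * PATCHES else 1.0)
     for s in itertools.product((0, 1), repeat=PATCHES)}
policy = {}
for _ in range(HORIZON):                        # backward induction
    newV = {}
    for s in V:
        if s == (0,) * PATCHES:
            newV[s] = 0.0                       # extinction is absorbing
            continue
        best = max(ACTIONS, key=lambda a: sum(p * V[n] for n, p in transition(s, a)))
        policy[s] = best
        newV[s] = sum(p * V[n] for n, p in transition(s, best))
    V = newV
print(policy[(1, 0, 0)], round(V[(1, 0, 0)], 3))  # state-dependent optimum
```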
Abstract:
Predictions of flow patterns in a 600-mm scale model SAG mill made using four classes of discrete element method (DEM) models are compared to experimental photographs. The accuracy of the various models is assessed using quantitative data on shoulder, toe and vortex center positions taken from ensembles of both experimental and simulation results. These detailed comparisons reveal the strengths and weaknesses of the various models for simulating mills and allow the effect of different modelling assumptions to be quantitatively evaluated. In particular, very close agreement is demonstrated between the full 3D model (including the end wall effects) and the experiments. It is also demonstrated that the traditional two-dimensional circular particle DEM model under-predicts the shoulder, toe and vortex center positions by around 10 degrees, as well as under-predicting the power draw. The effect of particle shape and the dimensionality of the model are also assessed, with particle shape predominantly affecting the shoulder position while the dimensionality of the model affects mainly the toe position. Crown Copyright (C) 2003 Published by Elsevier Science B.V. All rights reserved.
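For concreteness, one simple way to extract shoulder and toe angles from a simulated (or digitized experimental) charge is to take the angular extremes of the particles nearest the shell. The angle convention and shell-band width below are assumptions for illustration, not the paper's measurement procedure.

```python
import numpy as np

def shoulder_and_toe(xy, mill_radius, shell_band=0.05):
    """Shoulder/toe angles (degrees) from particle centre positions.

    Angles measured counter-clockwise from the 3 o'clock axis (assumed
    convention); only particles in the outermost shell band are used.
    """
    r = np.linalg.norm(xy, axis=1)
    shell = xy[r > (1.0 - shell_band) * mill_radius]     # outermost layer
    ang = np.degrees(np.arctan2(shell[:, 1], shell[:, 0]))
    return ang.max(), ang.min()          # shoulder (highest), toe (lowest)

# Toy charge: an arc of particles hugging the shell of a 600-mm (0.3 m radius) mill.
rng = np.random.default_rng(1)
theta = rng.uniform(np.radians(-140), np.radians(50), 400)
rad = rng.uniform(0.9, 1.0, 400) * 0.3
xy = np.column_stack([rad * np.cos(theta), rad * np.sin(theta)])
print(shoulder_and_toe(xy, 0.3))
```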
Abstract:
Subcycling, or the use of different timesteps at different nodes, can be an effective way of improving the computational efficiency of explicit transient dynamic structural solutions. The method that has been most widely adopted uses a nodal partition, extending the central difference method, in which small-timestep updates are performed by interpolating on the displacement at neighbouring large-timestep nodes. This approach leads to narrow bands of unstable timesteps, or statistical stability. It can also be in error due to lack of momentum conservation on the timestep interface. The author has previously proposed energy-conserving algorithms that avoid the first problem of statistical stability. However, these sacrifice accuracy to achieve stability. An approach to conserving momentum on an element interface by adding partial velocities is considered here. Applied to extend the central difference method, this approach is simple and has accuracy advantages. The method can be programmed by summing impulses of internal forces, evaluated using local element timesteps, in order to predict a velocity change at a node. However, it is still only statistically stable, so an adaptive timestep size is needed to monitor accuracy and to be adjusted if necessary. By replacing the central difference method with the explicit generalized alpha method, it is possible to gain stability by dissipating the high-frequency response that leads to stability problems. However, coding the algorithm is less elegant, as the response depends on previous partial accelerations. Extension to implicit integration is shown to be impractical due to the neglect of remote effects of internal forces acting across a timestep interface. (C) 2002 Elsevier Science B.V. All rights reserved.
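A one-dimensional sketch of the impulse-summation idea: the small-timestep element is subcycled, its internal-force impulses are accumulated over the substeps, and the interface node's velocity is then updated by the summed impulse. The spring-chain problem and parameters are illustrative, and no stability monitoring or adaptive timestep control is included.

```python
import numpy as np

k, mass = 100.0, 1.0
x = np.array([0.0, 0.1, 0.0])     # three nodes; node 1 is the interface
v = np.zeros(3)
DT, M = 1e-3, 4                   # major timestep and subcycling ratio

def f_A(x):                        # internal force of element A (nodes 0-1)
    s = k * (x[1] - x[0])
    return np.array([s, -s, 0.0])

def f_B(x):                        # internal force of element B (nodes 1-2)
    s = k * (x[2] - x[1])
    return np.array([0.0, s, -s])

for _ in range(1000):
    impulse = f_A(x) * DT                  # element A: one major step
    for _ in range(M):                     # element B: M substeps of DT / M
        fb = f_B(x)
        impulse += fb * (DT / M)           # accumulate interface impulse
        v[2] += fb[2] / mass * (DT / M)    # advance the small-timestep node
        x[2] += v[2] * (DT / M)
    v[:2] += impulse[:2] / mass            # interface sees the summed impulse
    x[:2] += v[:2] * DT                    # central-difference style update
print(x.round(4), v.round(4))
```

Because the interface velocity change is computed from the summed impulses of both elements, momentum transferred across the interface is accounted for exactly, which is the property the partial-velocity approach is after.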
Abstract:
We detected and mapped a dynamically spreading wave of gray matter loss in the brains of patients with Alzheimer's disease (AD). The loss pattern was visualized in four dimensions as it spread over time from temporal and limbic cortices into frontal and occipital brain regions, sparing sensorimotor cortices. The shifting deficits were asymmetric (left hemisphere > right hemisphere) and correlated with progressively declining cognitive status (p < 0.0006). Frontal regions, spared early in the disease, showed pervasive deficits later (>15% loss). The maps distinguished different phases of AD and differentiated AD from normal aging. Local gray matter loss rates (5.3 +/- 2.3% per year in AD vs. 0.9 +/- 0.9% per year in controls) were faster in the left hemisphere (p < 0.029) than the right. Transient barriers to disease progression appeared at limbic/frontal boundaries. This degenerative sequence, observed in vivo as it developed, provides the first quantitative, dynamic visualization of cortical atrophic rates in normal elderly populations and in those with dementia.
Abstract:
Transmembrane proteins of the p24 family are abundant, oligomeric proteins predominantly found in cis-Golgi membranes. They are not easily studied in vivo and their functions are controversial. We found that p25 can be targeted to the plasma membrane after inactivation of its canonical KKXX motif (KK to SS, p25SS), and that p25SS causes the co-transport of other p24 proteins beyond the Golgi complex, indicating that wild-type p25 plays a crucial role in retaining p24 proteins in cis-Golgi membranes. We then made use of these observations to study the intrinsic properties of these proteins when present in a different membrane context. At the cell surface, the p25SS mutant segregates away from both the transferrin receptor and markers of lipid rafts, which are enriched in cholesterol and glycosphingolipids. This suggests that p25SS localizes to, or contributes to forming, specialized membrane domains, presumably corresponding to oligomers of p25SS and other p24 proteins. Once at the cell surface, p25SS is endocytosed, together with other p24 proteins, and eventually accumulates in late endosomes, where it remains confined to well-defined membrane regions visible by electron microscopy. We find that this p25SS accumulation causes a concomitant accumulation of cholesterol in late endosomes and an inhibition of their motility - two processes that are functionally linked. Yet the p25SS-rich regions themselves seem to exclude not only Lamp1 but also the accumulated cholesterol. One may envision that p25SS accumulation, by excluding cholesterol from oligomers, eventually overloads neighboring late endosomal membranes with cholesterol beyond their capacity (see Discussion). In any case, our data show that p25, and presumably other p24 proteins, are endowed with the intrinsic capacity to form highly specialized domains that control membrane composition and dynamics. We propose that p25 and other p24 proteins control the fidelity of membrane transport by maintaining cholesterol-poor membranes in the Golgi complex.
Abstract:
Recent work by Siegelmann has shown that the computational power of recurrent neural networks matches that of Turing Machines. One important implication is that complex language classes (infinite languages with embedded clauses) can be represented in neural networks. Proofs are based on a fractal encoding of states to simulate the memory and operations of stacks. In the present work, it is shown that similar stack-like dynamics can be learned in recurrent neural networks from simple sequence prediction tasks. Two main types of network solutions are found and described qualitatively as dynamical systems: damped oscillation and entangled spiraling around fixed points. The potential and limitations of each solution type are established in terms of generalization on two different context-free languages. Both solution types constitute novel stack implementations - generally in line with Siegelmann's theoretical work - which supply insights into how embedded structures of languages can be handled in analog hardware.
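The fractal stack encoding behind such proofs can be stated in a few lines: the stack is a single number whose base-4 digits (1 or 3) are the stack symbols, so push, top, and pop become an affine map, a threshold test, and an inverse affine map, which is what makes a neural realization possible. A sketch using exact arithmetic follows; this digit convention is one common choice in the Siegelmann-Sontag construction, not necessarily the one used in the paper's networks.

```python
from fractions import Fraction

# Fractal stack: contents live in the base-4 expansion of s, digits 1 and 3.
# Exact fractions avoid the floating-point erosion that would corrupt deep
# stack contents in a finite-precision simulation.

def push(s, bit):
    return s / 4 + Fraction(2 * bit + 1, 4)   # prepend digit 1 (bit=0) or 3 (bit=1)

def top(s):
    return 1 if s >= Fraction(1, 2) else 0    # leading digit 3 iff s >= 1/2

def pop(s):
    return 4 * s - (2 * top(s) + 1)           # strip the leading digit

s = Fraction(0)                 # empty stack encodes as exactly 0
for b in (1, 0, 0, 1):
    s = push(s, b)
out = []
while s > 0:
    out.append(top(s))
    s = pop(s)
print(out)                      # LIFO order: last pushed bit pops first
```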
Stability and simulation-based design of steel scaffolding without using the effective length method
Abstract:
For dynamic simulations to be credible, verification of the computer code must be an integral part of the modelling process. This two-part paper describes a novel approach to verification through program testing and debugging. In Part 1, a methodology is presented for detecting and isolating coding errors using back-to-back testing. Residuals are generated by comparing the outputs of two independent implementations in response to identical inputs. The key feature of the methodology is that a specially modified observer is created using one of the implementations, so as to impose an error-dependent structure on these residuals. Each error can be associated with a fixed and known subspace, permitting errors to be isolated to specific equations in the code. It is shown that the geometric properties extend to multiple errors in either one of the two implementations. Copyright (C) 2003 John Wiley & Sons, Ltd.
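The core of back-to-back testing is easy to show in miniature: run two independently coded implementations of the same model on identical inputs and inspect the residual between their outputs. The toy model and the deliberately injected bug below are illustrative; the paper's modified observer, which structures the residuals so that each error maps to a known subspace, is not reproduced here.

```python
import numpy as np

def model_ref(u, dt=0.01, a=-0.5, b=1.0):
    """Reference implementation of a first-order model x' = a*x + b*u."""
    x, ys = 0.0, []
    for uk in u:
        x += dt * (a * x + b * uk)
        ys.append(x)
    return np.array(ys)

def model_test(u, dt=0.01, a=-0.5, b=1.0):
    """Second, 'independent' implementation with a deliberate coding error."""
    x, ys = 0.0, []
    for uk in u:
        x += dt * (a * x + b * uk * uk)   # BUG: input squared instead of u
        ys.append(x)
    return np.array(ys)

u = np.sin(np.linspace(0.0, 10.0, 1000))          # identical input to both
residual = model_test(u) - model_ref(u)
print("max |residual| =", float(np.abs(residual).max()))  # nonzero -> coding error
```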
Abstract:
In Part 1 of this paper a methodology for back-to-back testing of simulation software was described. Residuals with error-dependent geometric properties were generated. A set of potential coding errors was enumerated, along with a corresponding set of feature matrices, which describe the geometric properties imposed on the residuals by each of the errors. In this part of the paper, an algorithm is developed to isolate the coding errors present by analysing the residuals. A set of errors is isolated when the subspace spanned by their combined feature matrices corresponds to that of the residuals. Individual feature matrices are compared to the residuals and classified as 'definite', 'possible' or 'impossible'. The status of 'possible' errors is resolved using a dynamic subset testing algorithm. To demonstrate and validate the testing methodology presented in Part 1 and the isolation algorithm presented in Part 2, a case study is presented using a model for biological wastewater treatment. Both single and simultaneous errors that are deliberately introduced into the simulation code are correctly detected and isolated. Copyright (C) 2003 John Wiley & Sons, Ltd.
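A toy version of the subspace test: project the residual onto each candidate error's feature matrix and classify the error as 'definite', 'possible' or 'impossible' according to how much of the residual that subspace explains. The matrices and tolerance are illustrative, and the dynamic subset-testing step for resolving 'possible' errors is omitted.

```python
import numpy as np

def classify(residual, feature, tol=1e-6):
    """Classify one candidate error by projecting the residual onto its
    feature matrix (columns span the directions that error can excite)."""
    coeffs, *_ = np.linalg.lstsq(feature, residual, rcond=None)
    explained = np.linalg.norm(feature @ coeffs)
    leftover = np.linalg.norm(residual - feature @ coeffs)
    if explained < tol:
        return "impossible"            # residual orthogonal to the subspace
    return "definite" if leftover < tol else "possible"

F1 = np.array([[1.0], [0.0], [0.0]])   # error 1 excites only direction 1
F2 = np.array([[0.0], [1.0], [1.0]])   # error 2 excites a different subspace
r = np.array([0.0, 2.0, 2.0])          # observed residual
print(classify(r, F1), classify(r, F2))  # -> impossible definite
```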
Abstract:
In a 2-yr multiple-site field study conducted in western Nebraska during 1999 and 2000, optimum dryland corn (Zea mays L.) population varied from less than 1.7 to more than 5.6 plants m(-2), depending largely on available water resources. The objective of this study was to use a modeling approach to investigate corn population recommendations for a wide range of seasonal variation. A corn growth simulation model (APSIM-maize) was coupled to long-term sequences of historical climatic data from western Nebraska to provide probabilistic estimates of dryland yield for a range of corn populations. Simulated populations ranged from 2 to 5 plants m(-2). Simulations began with one of three levels of available soil water at planting, either 80, 160, or 240 mm in the surface 1.5 m of a loam soil. Gross margins were maximized at 3 plants m(-2) when starting available water was 160 or 240 mm, and the expected probability of a financial loss at this population was reduced from about 10% at 160 mm to 0% at 240 mm. When starting available water was 80 mm, average gross margins were less than $15 ha(-1), and risk of financial loss exceeded 40%. Median yields were greatest when starting available soil water was 240 mm. However, perhaps the greater benefit of additional soil water at planting was reduction in the risk of making a financial loss. Dryland corn growers in western Nebraska are advised to use a population of 3 plants m(-2) as a base recommendation.
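Schematically, the probabilistic recommendation comes from running the yield model over many historical seasons for each population and summarizing gross margin and downside risk. The toy yield response, prices, and costs below are invented stand-ins for APSIM-maize and the study's economics, included only to show the shape of the analysis.

```python
import numpy as np

rng = np.random.default_rng(7)
seasons = rng.normal(250.0, 80.0, 100)        # seasonal water supply, mm (synthetic)

def yield_t_ha(population, water_mm):
    """Toy water-limited yield response with a density penalty when dry."""
    potential = 1.2 * population              # t/ha ceiling rises with density
    supported = max(water_mm - 40.0 * population, 0.0) / 50.0
    return min(potential, supported)

PRICE, COST_BASE, COST_SEED = 90.0, 150.0, 12.0  # $/t, $/ha, $/ha per plant m-2
for pop in (2, 3, 4, 5):                         # plants m-2, as simulated
    margins = np.array([PRICE * yield_t_ha(pop, w) - COST_BASE - COST_SEED * pop
                        for w in seasons])
    print(pop, "plants m-2: mean margin", round(margins.mean(), 1),
          f"| loss risk {np.mean(margins < 0):.0%}")
```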