99 results for Computational simulation
Abstract:
What interactions are sufficient to simulate arbitrary quantum dynamics in a composite quantum system? Dodd [Phys. Rev. A 65, 040301(R) (2002)] provided a partial solution to this problem in the form of an efficient algorithm to simulate any desired two-body Hamiltonian evolution using any fixed two-body entangling N-qubit Hamiltonian, and local unitaries. We extend this result to the case where the component systems are qudits, that is, have D dimensions. As a consequence we explain how universal quantum computation can be performed with any fixed two-body entangling N-qudit Hamiltonian, and local unitaries.
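A minimal sketch of the core identity behind such constructions (NumPy/SciPy; the choice of ZZ as the fixed interaction, the Hadamard conjugation, and the slice count are my illustrative choices, not the paper's algorithm): local unitaries conjugate the fixed entangling Hamiltonian into a different two-body term, and Trotter slicing then approximates evolution under their sum using only the fixed interaction plus local rotations.

```python
# Hedged sketch: simulate exp(-i(ZZ+XX)t) using only the fixed Hamiltonian ZZ
# and local Hadamards, via first-order Trotterization.
import numpy as np
from scipy.linalg import expm

Z = np.diag([1.0, -1.0])
H2 = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)  # Hadamard

ZZ = np.kron(Z, Z)
HH = np.kron(H2, H2)
XX = HH @ ZZ @ HH.conj().T           # local conjugation turns ZZ into XX
H_target = ZZ + XX

t, n = 1.0, 200                      # total time, number of Trotter slices
U_slice = expm(-1j * ZZ * t / n)     # only the fixed interaction is exponentiated
U_trotter = np.linalg.matrix_power(
    U_slice @ (HH @ U_slice @ HH.conj().T), n)

U_exact = expm(-1j * H_target * t)
print("Trotter error:", np.linalg.norm(U_trotter - U_exact))
```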
Abstract:
In computer simulations of smooth dynamical systems, the original phase space is replaced by machine arithmetic, which is a finite set. The resulting spatially discretized dynamical systems do not inherit all functional properties of the original systems, such as surjectivity and existence of absolutely continuous invariant measures. This can lead to computational collapse to fixed points or short cycles. The paper studies loss of such properties in spatial discretizations of dynamical systems induced by unimodal mappings of the unit interval. The problem reduces to studying set-valued negative semitrajectories of the discretized system. As the grid is refined, the asymptotic behavior of the cardinality structure of the semitrajectories follows probabilistic laws corresponding to a branching process. The transition probabilities of this process are explicitly calculated. These results are illustrated by the example of the discretized logistic mapping.
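A minimal sketch of the phenomenon (my own toy discretization, not the paper's construction): round the logistic map x → 4x(1 − x) onto a uniform grid of machine states and count how few periodic states absorb every trajectory, i.e., the computational collapse onto fixed points and short cycles.

```python
# Hedged sketch: spatial discretization of the logistic map on a grid of
# machine states; every trajectory collapses onto a small periodic set.
import numpy as np

def count_periodic_states(N):
    grid = np.arange(N + 1) / N                              # N + 1 machine states in [0, 1]
    f = np.rint(4.0 * grid * (1.0 - grid) * N).astype(int)   # discretized logistic map
    periodic = set()
    for start in range(N + 1):                               # follow every state to its cycle
        seen = {}
        i, step = start, 0
        while i not in seen:
            seen[i] = step
            i, step = f[i], step + 1
        # states visited at or after the first cycle entry are periodic
        periodic.update(s for s, when in seen.items() if when >= seen[i])
    return len(periodic)

for N in (100, 1000, 10000):
    print(f"grid {N + 1:>6}: all trajectories collapse onto "
          f"{count_periodic_states(N)} periodic states")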
Abstract:
Developments in computer and three-dimensional (3D) digitiser technologies have made it possible to keep track of the broad range of data required to simulate an insect moving around or over the highly heterogeneous habitat of a plant's surface. Properties of plant parts vary within a complex canopy architecture, and insect damage can induce further changes that affect an animal's movements, development and likelihood of survival. Models of plant architectural development based on Lindenmayer systems (L-systems) serve as dynamic platforms for simulation of insect movement, providing an explicit model of the developing 3D structure of a plant as well as allowing physiological processes associated with plant growth and responses to damage to be described and simulated. Simple examples of the use of the L-system formalism to model insect movement, operating at different spatial scales, from insects foraging on an individual plant to insects flying around plants in a field, are presented. Such models can be used to explore questions about the consequences of changes in environmental architecture and configuration on host finding, exploitation and its population consequences. In effect this model is a 'virtual ecosystem' laboratory to address local as well as landscape-level questions pertinent to plant-insect interactions, taking plant architecture into account.
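For readers unfamiliar with the formalism, a minimal bracketed L-system sketch (the rules below are illustrative, not those of the paper): each derivation step rewrites the string, growing the branching structure a simulated insect could traverse.

```python
# Hedged sketch: a bracketed L-system for a branching plant. 'A' is an apex
# that branches, 'F' an internode that elongates; '[' and ']' delimit branches.
rules = {"A": "F[+A][-A]FA", "F": "FF"}

def derive(axiom, steps):
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(ch, ch) for ch in s)  # parallel rewriting
    return s

plant = derive("A", 4)
print("modules:", len(plant), "branch points:", plant.count("["))
```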
Abstract:
Models of plant architecture allow us to explore how genotype × environment interactions affect the development of plant phenotypes. Such models generate masses of data organised in complex hierarchies. This paper presents a generic system for creating and automatically populating a relational database from data generated by the widely used L-system approach to modelling plant morphogenesis. Techniques from compiler technology are applied to generate attributes (new fields) in the database, to simplify query development for the recursively-structured branching relationship. Use of biological terminology in an interactive query builder contributes towards making the system biologist-friendly.
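A hedged sketch of the kind of query such a system must support (the schema below is hypothetical, invented for illustration): plant modules stored as parent-child rows, with the recursive branching relationship traversed by a recursive CTE, which is exactly the query pattern that generated attributes can simplify.

```python
# Hedged sketch: a hypothetical module table and a recursive CTE over the
# branching relationship, using SQLite.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE module (
    id INTEGER PRIMARY KEY,
    parent_id INTEGER REFERENCES module(id),
    organ TEXT,            -- e.g. 'internode', 'leaf', 'apex'
    length_mm REAL
);
INSERT INTO module VALUES
    (1, NULL, 'internode', 12.0),
    (2, 1, 'internode', 8.0),
    (3, 2, 'leaf', 30.0),
    (4, 1, 'apex', 1.5);
""")

# All modules on the branch rooted at module 1, with their depth.
rows = con.execute("""
WITH RECURSIVE branch(id, organ, depth) AS (
    SELECT id, organ, 0 FROM module WHERE id = 1
    UNION ALL
    SELECT m.id, m.organ, b.depth + 1
    FROM module m JOIN branch b ON m.parent_id = b.id
)
SELECT id, organ, depth FROM branch
""").fetchall()
print(rows)
```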
Abstract:
CULTURE is an Artificial Life simulation that aims to provide primary school children with opportunities to become actively engaged in the high-order thinking processes of problem solving and critical thinking. A preliminary evaluation of CULTURE has found that it offers the freedom for children to take part in process-oriented learning experiences. Through providing children with opportunities to make inferences, validate results, explain discoveries and analyse situations, CULTURE encourages the development of high-order thinking skills. The evaluation found that CULTURE allows users to autonomously explore the important scientific concepts of life and living, and energy and change within a software environment that children find enjoyable and easy to use.
Abstract:
This paper presents results on the simulation of the solid state sintering of copper wires using Monte Carlo techniques based on elements of lattice theory and cellular automata. The initial structure is superimposed onto a triangular, two-dimensional lattice, where each lattice site corresponds to either an atom or vacancy. The number of vacancies varies with the simulation temperature, while a cluster of vacancies is a pore. To simulate sintering, lattice sites are picked at random and reoriented in terms of an atomistic model governing mass transport. The probability that an atom has sufficient energy to jump to a vacant lattice site is related to the jump frequency, and hence the diffusion coefficient, while the probability that an atomic jump will be accepted is related to the change in energy of the system as a result of the jump, as determined by the change in the number of nearest neighbours. The jump frequency is also used to relate model time, measured in Monte Carlo Steps, to the actual sintering time. The model incorporates bulk, grain boundary and surface diffusion terms and includes vacancy annihilation on the grain boundaries. The predictions of the model were found to be consistent with experimental data, both in terms of the microstructural evolution and in terms of the sintering time.
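A hedged sketch of the acceptance rule described above (a square lattice is used for brevity where the paper uses a triangular one; stiffness and temperature values are illustrative): atom-vacancy exchanges are accepted Metropolis-style, with the energy change given by the change in the number of nearest-neighbour bonds.

```python
# Hedged sketch: Metropolis vacancy-exchange on a square lattice; energy
# counts atom-atom bonds, so a jump is accepted with prob. min(1, exp(-dE/kT)).
import numpy as np

rng = np.random.default_rng(0)
N, kT, steps = 40, 0.6, 100_000
NBRS = ((1, 0), (-1, 0), (0, 1), (0, -1))
lat = np.ones((N, N), dtype=int)                  # 1 = atom, 0 = vacancy
lat[18:22, 18:22] = 0                             # a small pore

def local_bonds(a, sites):
    """Atom-atom bonds incident to `sites`, each bond counted once."""
    seen, total = set(), 0
    for (i, j) in sites:
        for di, dj in NBRS:
            k, l = (i + di) % N, (j + dj) % N
            bond = frozenset({(i, j), (k, l)})
            if bond not in seen:
                seen.add(bond)
                total += a[i, j] * a[k, l]
    return total

for _ in range(steps):
    i, j = rng.integers(N, size=2)
    di, dj = NBRS[rng.integers(4)]
    k, l = (i + di) % N, (j + dj) % N
    if lat[i, j] == lat[k, l]:
        continue                                  # only atom-vacancy jumps change E
    before = local_bonds(lat, [(i, j), (k, l)])
    lat[i, j], lat[k, l] = lat[k, l], lat[i, j]   # trial jump
    after = local_bonds(lat, [(i, j), (k, l)])
    dE = float(before - after)                    # losing bonds raises energy (J = 1)
    if dE > 0 and rng.random() >= np.exp(-dE / kT):
        lat[i, j], lat[k, l] = lat[k, l], lat[i, j]  # reject: undo the jump

print("remaining vacancies:", (lat == 0).sum())
```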
Abstract:
The effect of heat treatment on the structure of an Australian semi-anthracite char was studied in detail in the 850-1150 °C temperature range using XRD, HRTEM, and electrical resistivity techniques. It was found that the carbon crystallite size in the char does not change significantly during heat treatment in the temperature range studied, for both the raw coal and its ash-free derivative obtained by acid treatment. However, the fraction of the organized carbon in the raw coal chars, determined by XRD, increased with increase of heat treatment time and temperature, while that for the ash-free coal chars remained almost unchanged. This suggests the occurrence of catalytic ordering during heat treatment, supported by the observation that the electrical resistivity of the raw coal chars decreased with heat treatment, while that of the ash-free coal chars did not vary significantly. Further confirmatory evidence was provided by high resolution transmission electron micrographs depicting well-organized carbon layers surrounding iron particles. It is also found that the fraction of organized carbon does not reach unity, but attains an apparent equilibrium value that increases with increase in temperature, providing an apparent heat of ordering of 71.7 kJ mol⁻¹ in the temperature range studied. Good temperature-independent correlation was found between the electrical resistivity and the organized carbon fraction, indicating that electrical resistivity is indeed structure sensitive. Good correlation was also found between the electrical resistivity and the reactivity of coal char. All these results strongly suggest that the thermal deactivation is the result of a crystallite-perfecting process, which is effectively catalyzed by the inorganic matter in the coal char. Based on kinetic interpretation of the data it is concluded that the process is diffusion controlled, most likely involving transport of iron in the inter-crystallite nanospaces in the temperature range studied. The activation energy of this transport process is found to be very low, at about 11.8 kJ mol⁻¹, which is corroborated by model-free correlation of the temporal variation of organized carbon fraction as well as electrical resistivity data using the superposition method, and is suggestive of surface transport of iron.
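A short worked check of why 11.8 kJ mol⁻¹ counts as "very low" (simple Arrhenius arithmetic over the study's temperature range, not the paper's kinetic model): the rate ratio across the whole 300 °C window is only about 1.3.

```python
# Hedged sketch: Arrhenius ratio k2/k1 = exp(-Ea/R * (1/T2 - 1/T1)) for the
# reported activation energy over the studied 850-1150 C range.
import numpy as np

Ea, R = 11.8e3, 8.314                      # J/mol, J/(mol K)
T1, T2 = 850 + 273.15, 1150 + 273.15       # K
ratio = np.exp(-Ea / R * (1 / T2 - 1 / T1))
print(f"k(1150 C)/k(850 C) = {ratio:.2f}")  # ~1.3: nearly temperature-insensitive
```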
Abstract:
Multi-environment trials (METs) used to evaluate breeding lines vary in the number of years that they sample. We used a cropping systems model to simulate the target population of environments (TPE) for 6 locations over 108 years for 54 'near-isolines' of sorghum in north-eastern Australia. For a single reference genotype, each of 547 trials was clustered into 1 of 3 'drought environment types' (DETs) based on a seasonal water stress index. Within sequential METs of 2 years' duration, the frequencies of these drought patterns often differed substantially from those derived for the entire TPE. This was reflected in variation in the mean yield of the reference genotype. For the TPE and for 2-year METs, restricted maximum likelihood methods were used to estimate components of genotypic and genotype by environment variance. These also varied substantially, although not in direct correlation with frequency of occurrence of different DETs over a 2-year period. Combined analysis over different numbers of seasons demonstrated the expected improvement in the correlation between MET estimates of genotype performance and the overall genotype averages as the number of seasons in the MET was increased.
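A hedged sketch of the sampling effect in the final sentence (synthetic genotype-by-environment data, not the cropping systems model or REML analysis of the paper): as more seasons enter the MET, genotype means correlate more strongly with the full-TPE averages.

```python
# Hedged sketch: correlation between MET genotype means and TPE averages
# improves as the number of sampled seasons grows.
import numpy as np

rng = np.random.default_rng(1)
n_gen, n_env = 54, 108                       # genotypes, environments in the TPE
g = rng.normal(0, 1.0, n_gen)                # genotypic main effects
gxe = rng.normal(0, 2.0, (n_gen, n_env))     # genotype-by-environment deviations
yields = g[:, None] + gxe                    # simulated yields over the TPE
tpe_mean = yields.mean(axis=1)               # "true" average performance

for n_seasons in (1, 2, 4, 8, 16):
    envs = rng.choice(n_env, n_seasons, replace=False)
    met_mean = yields[:, envs].mean(axis=1)
    r = np.corrcoef(met_mean, tpe_mean)[0, 1]
    print(f"{n_seasons:2d} seasons: r = {r:.2f}")
```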
Abstract:
This paper presents a new approach to the LU decomposition method for the simulation of stationary and ergodic random fields. The approach overcomes the size limitations of LU and is suitable for any size simulation. The proposed approach can facilitate fast updating of generated realizations with new data, when appropriate, without repeating the full simulation process. Based on a novel column partitioning of the L matrix, expressed in terms of successive conditional covariance matrices, the approach presented here demonstrates that LU simulation is equivalent to the successive solution of kriging residual estimates plus random terms. Consequently, it can be used for the LU decomposition of matrices of any size. The simulation approach is termed conditional simulation by successive residuals because, at each step, a small set (group) of random variables is simulated with an LU decomposition of the updated conditional covariance matrix of the residuals. The simulated group is then used to estimate residuals without the need to solve large systems of equations.
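A hedged numerical sketch of the stated equivalence (a tiny two-group example with an exponential covariance of my choosing, not the paper's partitioning scheme): simulating the second group as its kriging estimate from the first plus a residual drawn with the small Cholesky factor of the conditional covariance reproduces the full-size LU simulation exactly.

```python
# Hedged sketch: group-sequential ("successive residuals") simulation equals
# one big Cholesky/LU simulation of the whole field.
import numpy as np

rng = np.random.default_rng(2)
n, k = 8, 3                                          # total size, size of group 1
x = np.arange(n, dtype=float)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 3.0)   # an exponential covariance
w = rng.normal(size=n)                               # shared random numbers

L = np.linalg.cholesky(C)
z_full = L @ w                                       # classical full-size simulation

C11, C12 = C[:k, :k], C[:k, k:]
C21, C22 = C[k:, :k], C[k:, k:]
z1 = np.linalg.cholesky(C11) @ w[:k]                 # simulate group 1
krig = C21 @ np.linalg.solve(C11, z1)                # kriging estimate of group 2
S = C22 - C21 @ np.linalg.solve(C11, C12)            # conditional (residual) covariance
z2 = krig + np.linalg.cholesky(S) @ w[k:]            # add the simulated residual

print(np.allclose(np.concatenate([z1, z2]), z_full))  # True
```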
Abstract:
Numerical modeling of the eddy currents induced in the human body by the pulsed field gradients in MRI presents a difficult computational problem. It requires an efficient and accurate computational method for high spatial resolution analyses with a relatively low input frequency. In this article, a new technique is described which allows the finite difference time domain (FDTD) method to be efficiently applied over a very large frequency range, including low frequencies. This is not the case in conventional FDTD-based methods. A method of implementing streamline gradients in FDTD is presented, as well as comparative analyses which show that the correct source injection in the FDTD simulation plays a crucial role in obtaining accurate solutions. In particular, making use of the derivative of the input source waveform is shown to provide distinct benefits in accuracy over direct source injection. In the method, no alterations to the properties of either the source or the transmission media are required. The method is essentially frequency independent and the source injection method has been verified against examples with analytical solutions. Results are presented showing the spatial distribution of gradient-induced electric fields and eddy currents in a complete body model.
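A hedged one-dimensional illustration of the injection idea (a normalized 1-D Yee scheme of my own, far simpler than the article's 3-D body-model solver): a soft source is added to the E field, and the injected waveform is the time derivative of the desired pulse, here a Gaussian.

```python
# Hedged sketch: 1-D FDTD with a soft source driven by the derivative of a
# Gaussian pulse (normalized units, Courant number S = 0.5, PEC ends).
import numpy as np

nx, nt, S = 400, 900, 0.5
Ez = np.zeros(nx)
Hy = np.zeros(nx - 1)
t0, spread = 60.0, 15.0
probe = []

def d_gauss(q):                           # time derivative of a Gaussian pulse
    return -2.0 * (q - t0) / spread**2 * np.exp(-((q - t0) / spread) ** 2)

for q in range(nt):
    Hy += S * (Ez[1:] - Ez[:-1])          # update H from the curl of E
    Ez[1:-1] += S * (Hy[1:] - Hy[:-1])    # update E from the curl of H
    Ez[nx // 2] += d_gauss(q)             # soft (additive) source injection
    probe.append(Ez[3 * nx // 4])

print(f"peak field at probe: {max(probe):.3f}")
```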
Abstract:
In this paper we refer to the gene-to-phenotype modeling challenge as the GP problem. Integrating information across levels of organization within a genotype-environment system is a major challenge in computational biology. However, resolving the GP problem is a fundamental requirement if we are to understand and predict phenotypes given knowledge of the genome and model dynamic properties of biological systems. Organisms are consequences of this integration, which is a major property of biological systems and underlies the responses we observe. We discuss the E(NK) model as a framework for investigation of the GP problem and the prediction of system properties at different levels of organization. We apply this quantitative framework to an investigation of the processes involved in genetic improvement of plants for agriculture. In our analysis, N genes determine the genetic variation for a set of traits that are responsible for plant adaptation to E environment-types within a target population of environments. The N genes can interact in epistatic NK gene-networks through the way that they influence plant growth and development processes within a dynamic crop growth model. We use a sorghum crop growth model, available within the APSIM agricultural production systems simulation model, to integrate the gene-environment interactions that occur during growth and development and to predict genotype-to-phenotype relationships for a given E(NK) model. Directional selection is then applied to the population of genotypes, based on their predicted phenotypes, to simulate the dynamic aspects of genetic improvement by a plant-breeding program. The outcomes of the simulated breeding are evaluated across cycles of selection in terms of the changes in allele frequencies for the N genes and the genotypic and phenotypic values of the populations of genotypes.
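A hedged toy version of the selection loop just described (a bare NK-flavoured fitness with additive and pairwise epistatic effects standing in for APSIM; all effect sizes and population settings are invented): phenotypes are predicted from interacting genes, truncation selection picks the best, and allele frequencies shift cycle by cycle.

```python
# Hedged sketch: directional selection on a toy epistatic gene-to-phenotype map,
# tracking allele-frequency change across breeding cycles.
import numpy as np

rng = np.random.default_rng(3)
N, pop, cycles, keep = 12, 500, 8, 0.2
add = rng.normal(0, 1, N)                    # additive gene effects
epi = np.triu(rng.normal(0, 0.5, (N, N)), 1) # pairwise epistatic effects

def phenotype(G):                            # G: (pop, N) 0/1 allele matrix
    return (G @ add
            + np.einsum("pi,ij,pj->p", G, epi, G)
            + rng.normal(0, 1, len(G)))      # environmental noise

freq = np.full(N, 0.5)
for cycle in range(cycles):
    G = (rng.random((pop, N)) < freq).astype(float)
    top = np.argsort(phenotype(G))[-int(keep * pop):]   # truncation selection
    freq = G[top].mean(axis=0)               # allele frequencies after selection
    print(f"cycle {cycle}: mean frequency of favourable alleles "
          f"{freq[add > 0].mean():.2f}")
```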
Abstract:
The Load-Unload Response Ratio (LURR) method is an intermediate-term earthquake prediction approach that has shown considerable promise. It involves calculating the ratio of a specified energy release measure during loading and unloading, where the loading and unloading periods are determined from the earth-tide-induced perturbations in the Coulomb failure stress on optimally oriented faults. In the lead-up to large earthquakes, high LURR values are frequently observed a few months or years prior to the event. These signals may have a similar origin to the observed accelerating seismic moment release (AMR) prior to many large earthquakes or may be due to critical sensitivity of the crust when a large earthquake is imminent. As a first step towards studying the underlying physical mechanism for the LURR observations, numerical studies are conducted using the particle-based lattice solid model (LSM) to determine whether LURR observations can be reproduced. The model is initialized as a heterogeneous 2-D block made up of random-sized particles bonded by elastic-brittle links. The system is subjected to uniaxial compression from rigid driving plates on the upper and lower edges of the model. Experiments are conducted using both strain and stress control to load the plates. A sinusoidal stress perturbation is added to the gradual compressional loading to simulate loading and unloading cycles and LURR is calculated. The results reproduce signals similar to those observed in earthquake prediction practice with a high LURR value followed by a sudden drop prior to macroscopic failure of the sample. The results suggest that LURR provides a good predictor for catastrophic failure in elastic-brittle systems and motivate further research to study the underlying physical mechanisms and statistical properties of high LURR values. The results provide encouragement for earthquake prediction research and the use of advanced simulation models to probe the physics of earthquakes.
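A hedged sketch of the ratio itself (synthetic energy-release data against a sinusoidal perturbation, not the LSM experiments): loading means the perturbing stress is increasing, and LURR is the loading-to-unloading energy ratio.

```python
# Hedged sketch: computing a Load-Unload Response Ratio from a sinusoidal
# stress perturbation and a synthetic energy-release series.
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0, 20 * 2 * np.pi, 5000)
stress = np.sin(t)                          # tidal-like Coulomb stress perturbation
loading = np.gradient(stress, t) > 0        # loading = stress increasing

energy = rng.exponential(1.0, t.size)       # background energy release
energy[loading] *= 1.5                      # a critical system responds more to loading

lurr = energy[loading].sum() / energy[~loading].sum()
print(f"LURR = {lurr:.2f}")                 # values well above 1 signal criticality
```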
Abstract:
The particle-based Lattice Solid Model (LSM) was developed to provide a basis to study the physics of rocks and the nonlinear dynamics of earthquakes (MORA and PLACE, 1994; PLACE and MORA, 1999). A new modular and flexible LSM approach has been developed that allows different microphysics to be easily included in or removed from the model. The approach provides a virtual laboratory where numerical experiments can easily be set up and all measurable quantities visualised. The proposed approach provides a means to simulate complex phenomena such as fracturing or localisation processes, and enables the effect of different micro-physics on macroscopic behaviour to be studied. The initial 2-D model is extended to allow three-dimensional simulations to be performed and particles of different sizes to be specified. Numerical bi-axial compression experiments under different confining pressure are used to calibrate the model. By tuning the different microscopic parameters (such as coefficient of friction, microscopic strength and distribution of grain sizes), the macroscopic strength of the material can be adjusted to agree with laboratory experiments, and the orientation of fractures is consistent with the theoretical value predicted from the Mohr-Coulomb diagram. Simulations indicate that 3-D numerical models have different macroscopic properties from their 2-D counterparts and, hence, the model must be recalibrated for 3-D simulations. These numerical experiments illustrate that the new approach is capable of simulating typical rock fracture behaviour. The new model provides a basis to investigate nucleation, rupture and slip pulse propagation in complex fault zones without the previous model limitations of a regular low-level surface geometry and of being restricted to two dimensions.
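A hedged caricature of how macroscopic strength emerges from microscopic bond strengths (an equal-load-sharing fibre-bundle model under strain control, standing in for the LSM's elastic-brittle links; the Weibull strength distribution is my choice): the stress-strain curve peaks and then fails as bonds break.

```python
# Hedged sketch: elastic-brittle links in parallel; macroscopic strength
# follows from the distribution of microscopic breaking strains.
import numpy as np

rng = np.random.default_rng(5)
n_bonds, k = 10_000, 1.0                       # bond count, bond stiffness
strength = rng.weibull(3.0, n_bonds)           # microscopic breaking strains

strains = np.linspace(0, 2.0, 400)
stress = [k * eps * (strength > eps).mean() for eps in strains]  # surviving bonds carry load
peak = int(np.argmax(stress))
print(f"macroscopic strength {stress[peak]:.3f} at strain {strains[peak]:.2f}")
```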
Abstract:
In order to understand the earthquake nucleation process, we need to understand the effective frictional behavior of faults with complex geometry and fault gouge zones. One important aspect of this is the interaction between the friction law governing the behavior of the fault on the microscopic level and the resulting macroscopic behavior of the fault zone. Numerical simulations offer a possibility to investigate the behavior of faults on many different scales and thus provide a means to gain insight into fault zone dynamics on scales which are not accessible to laboratory experiments. Numerical experiments have been performed to investigate the influence of the geometric configuration of faults with a rate- and state-dependent friction at the particle contacts on the effective frictional behavior of these faults. The numerical experiments are designed to be similar to laboratory experiments by DIETERICH and KILGORE (1994) in which a slide-hold-slide cycle was performed between two blocks of material and the resulting peak friction was plotted vs. holding time. Simulations with a flat fault without a fault gouge have been performed to verify the implementation. These have shown close agreement with comparable laboratory experiments. The simulations performed with a fault containing fault gouge have demonstrated a strong dependence of the critical slip distance D_c on the roughness of the fault surfaces and are in qualitative agreement with laboratory experiments.
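A hedged sketch of the slide-hold-slide signature these experiments measure (the Dieterich ageing law with illustrative parameter values, not the paper's contact-level implementation): the state variable grows during the hold, so peak friction on re-slide rises roughly logarithmically with hold time.

```python
# Hedged sketch: ageing-law healing in a slide-hold-slide test. During the
# hold (V = 0) the state variable obeys dtheta/dt = 1, so theta grows with
# hold time and the re-slide peak friction gains b*ln(1 + t_hold/theta_ss).
import numpy as np

mu0, b, Dc, V = 0.6, 0.015, 1e-5, 1e-6     # illustrative values; Dc in m, V in m/s
theta_ss = Dc / V                          # steady-state state variable, in s

for t_hold in (1, 10, 100, 1000, 10000):   # hold times, s
    d_mu = b * np.log(1 + t_hold / theta_ss)
    print(f"hold {t_hold:>6} s: peak friction = {mu0 + d_mu:.3f}")
```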