996 results for Simulation outputs
Abstract:
Nitrous oxide (N2O) is produced primarily by the microbially mediated nitrification and denitrification processes in soils. It is influenced by a suite of climate (i.e. temperature and rainfall) and soil (physical and chemical) variables, by interacting soil and plant nitrogen (N) transformations (either competing for or supplying substrates), and by land management practices. It is not surprising that N2O emissions are highly variable both spatially and temporally. Computer simulation models, which can integrate all of these variables, are required for the complex task of providing quantitative determinations of N2O emissions. Numerous simulation models have been developed to predict N2O production. Each model has its own philosophy in constructing simulation components, as well as its own performance strengths. The models range from those that attempt to comprehensively simulate all soil processes to more empirical approaches requiring minimal input data. These N2O simulation models can be classified into three categories: laboratory, field and regional/global levels. Process-based field-scale N2O simulation models, which simulate whole agroecosystems and can be used to develop N2O mitigation measures, are the most widely used. The current challenge is how to scale up the relatively robust field-scale models to catchment, regional and national scales. This paper reviews the development history, main construction components, strengths, limitations and applications of the N2O emission models published in the literature. All three scale levels are considered, and the current knowledge gaps and challenges in modelling N2O emissions from soils are discussed.
Abstract:
An electrified railway system includes complex interconnections and interactions of several subsystems. Computer simulation is the only viable means for system evaluation and analysis. This paper discusses the difficulties and requirements of effective simulation models for this specialized industrial application, and the development of a general-purpose multi-train simulator.
Abstract:
This paper discusses a new paradigm of real-time simulation of power systems in which physical equipment can be interfaced with a real-time digital simulator. In this scheme, one part of a power system is simulated by a real-time simulator, while the other part is implemented as a physical system. The only interface between the physical system and the computer-based simulator is a data-acquisition system. The physical system is driven by a voltage-source converter (VSC) that mimics the power system simulated in the real-time simulator. In this paper, the VSC operates in a voltage-control mode to track the point-of-common-coupling voltage signal supplied by the digital simulator. Splitting a network into two parts in this way and running a real-time simulation with a physical system in parallel is called a power network in loop here. This opens up the possibility of studying the interconnection of one or several distributed generators to a complex power network. The proposed implementation is verified through simulation studies using PSCAD/EMTDC and through hardware implementation on a TMS320F2812 DSP.
Abstract:
Electrostatic discharge is the sudden and brief electric current that flashes between two objects at different voltages. It is a serious issue in applications ranging from solid-state electronics to spectacular and dangerous lightning strikes (arc flashes). The research herein presents work on the experimental simulation and measurement of the energy in an electrostatic discharge. The energy released in these discharges has been linked to ignitions and burning in a number of documented disasters and can be enormously hazardous in many other industrial scenarios. Simulations of electrostatic discharges were designed to the specifications of IEC standards. Measurement is typically based on the residual voltage/charge on the discharge capacitor, whereas this research examines the voltage and current in the actual spark in order to obtain a more precise comparative measurement of the energy dissipated.
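The spark-energy measurement described in the abstract amounts to integrating the product of the spark's voltage and current waveforms over time. A minimal Python sketch of that calculation, using entirely hypothetical waveforms (the amplitudes and decay constants are illustrative, not IEC test values):

```python
import numpy as np

def spark_energy(t, v, i):
    """Energy dissipated in the spark: E = integral of v(t)*i(t) dt,
    evaluated here with the trapezoidal rule on sampled waveforms."""
    p = np.asarray(v) * np.asarray(i)               # instantaneous power (W)
    return float(np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(t)))

# Hypothetical sampled waveforms: an exponentially decaying spark
t = np.linspace(0.0, 1e-6, 1000)                    # 1 microsecond window (s)
v = 1000.0 * np.exp(-t / 2e-7)                      # spark voltage (V)
i = 10.0 * np.exp(-t / 2e-7)                        # spark current (A)
E = spark_energy(t, v, i)                           # energy in joules
```

With these illustrative waveforms the analytic answer is close to 1 mJ, which the trapezoidal estimate reproduces to well under 1%.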
Abstract:
The Streaming SIMD Extensions (SSE) are a special feature embedded in the Intel Pentium III and IV classes of microprocessors, enabling the execution of SIMD-type operations to exploit data parallelism. This article presents how the computation performance of a railway network simulator can be improved by means of SSE. Voltage and current at various points of the supply system to an electrified railway line are crucial for design, daily operation and planning. With computer simulation, their time-variations can be obtained by solving a matrix equation whose size mainly depends upon the number of trains present in the system. A large coefficient matrix, resulting from a congested railway line, inevitably leads to heavier computational demand and hence jeopardizes the simulation speed. With the special architectural features of the latest processors on PC platforms, significant speed-up in computation can be achieved.
Abstract:
With the advances in computer hardware and software development techniques over the past 25 years, digital computer simulation of train movement and traction systems has been widely adopted as a standard computer-aided engineering tool [1] during the design and development stages of existing and new railway systems. Simulators of different approaches and scales are used extensively to investigate various kinds of system studies. Simulation has proven to be the cheapest means of carrying out performance prediction and system behaviour characterisation. When computers were first used to study railway systems, they were mainly employed to perform repetitive but time-consuming computational tasks, such as matrix manipulations for power network solutions and exhaustive searches for optimal braking trajectories. With only simple high-level programming languages available at the time, full advantage of the computing hardware could not be taken. Hence, structured simulations of the whole railway system were not very common; most applications focused on isolated parts of the railway system, and it is more appropriate to regard them as mechanised calculations rather than simulations. A railway system, however, consists of a number of subsystems, such as train movement, power supply and traction drives, which inevitably contain many complexities and diversities. These subsystems interact frequently with each other while the trains are moving, and they have their own special features in different railway systems. To further complicate the simulation requirements, constraints such as track geometry, speed restrictions and friction have to be considered, not to mention possible non-linearities and uncertainties in the system.
In order to provide a comprehensive and accurate account of system behaviour through simulation, a large amount of data has to be organised systematically to ensure easy access and efficient representation, and the interactions and relationships among the subsystems should be defined explicitly. These requirements call for sophisticated and effective simulation models for each component of the system. The software development techniques available nowadays allow the evolution of such simulation models. Not only can the applicability of the simulators be largely enhanced by advanced software design; maintainability and modularity, for easy understanding and further development, and portability across hardware platforms are also encouraged. The objective of this paper is to review the development of a number of approaches to simulation models. Attention is given, in particular, to models for train movement, power supply systems and traction drives. These models have been successfully used to resolve various 'what-if' issues effectively in a wide range of applications, such as speed profiles, energy consumption and run times.
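As an illustration of the train-movement component reviewed above, a point-mass model can be integrated in time to yield run times, speed profiles and energy consumption. The sketch below is a generic single-train example with hypothetical mass, tractive-effort and Davis resistance coefficients; it is not a model from any of the reviewed simulators:

```python
def simulate_run(distance_m, mass_kg=200e3, f_max=150e3, p_max=2.0e6,
                 r0=2000.0, r1=30.0, r2=5.0, dt=0.5):
    """Euler integration of a point-mass train: dv/dt = (F - R(v)) / m.
    R(v) = r0 + r1*v + r2*v**2 is a Davis-type running resistance;
    all coefficient values here are hypothetical placeholders."""
    t = x = v = 0.0
    energy_j = 0.0
    while x < distance_m:
        f = min(f_max, p_max / max(v, 0.1))     # tractive effort, power-limited
        resistance = r0 + r1 * v + r2 * v * v   # running resistance (N)
        acc = (f - resistance) / mass_kg
        energy_j += f * v * dt                  # traction energy at the wheel (J)
        v = max(v + acc * dt, 0.0)
        x += v * dt
        t += dt
    return t, v, energy_j

# run time (s), final speed (m/s) and energy (J) over a 2 km inter-station run
run_time_s, final_speed_ms, energy_j = simulate_run(2000.0)
```

The same loop structure extends naturally to coasting and braking phases and to speed-restriction constraints, which is where real train-movement simulators add their complexity.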
Abstract:
The Streaming SIMD Extensions (SSE) are a special feature available in the Intel Pentium III and P4 classes of microprocessors. As the name implies, SSE enables the execution of SIMD (Single Instruction Multiple Data) operations upon 32-bit floating-point data; therefore, the performance of floating-point algorithms can be improved. In electrified railway system simulation, the computation involves solving a huge set of simultaneous linear equations, which represents the electrical characteristics of the railway network at a particular time-step, and a fast solution of the equations is desirable in order to simulate the system in real time. In this paper, we present how SSE is applied to the railway network simulation.
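The same data-parallel idea can be illustrated at a higher level: numerical libraries such as NumPy delegate linear solves to BLAS/LAPACK routines that use SIMD instructions internally. A small sketch of the per-time-step nodal solve G·v = i (the 4-node conductance matrix below is hypothetical and far smaller than a real railway network):

```python
import numpy as np

# Hypothetical 4-node DC network at one simulation time-step:
# conductance matrix G (siemens) and nodal current injections i (amperes).
G = np.array([[ 2.0, -1.0,  0.0,  0.0],
              [-1.0,  3.0, -1.0,  0.0],
              [ 0.0, -1.0,  3.0, -1.0],
              [ 0.0,  0.0, -1.0,  2.0]])
i = np.array([1.0, 0.0, 0.0, 1.0])

v = np.linalg.solve(G, i)                 # nodal voltages via LAPACK
residual = float(np.max(np.abs(G @ v - i)))   # should be ~machine epsilon
```

In a simulator this solve is repeated at every time-step with G rebuilt as trains move, so accelerating exactly this kernel, whether with hand-coded SSE or a SIMD-enabled library, dominates the achievable simulation speed.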
Abstract:
Computer simulation is a versatile and commonly used tool for the design and evaluation of systems with different degrees of complexity. Power distribution systems and electric railway networks are areas to which computer simulation is heavily applied. A dominant factor in evaluating the performance of a software simulator is its processing time, especially in the case of real-time simulation. Parallel processing provides a viable means of reducing the computing time and is therefore suitable for building real-time simulators. In this paper, we present different issues related to solving the power distribution system with parallel computing based on a multiple-CPU server, concentrating in particular on the speedup performance of such an approach.
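A first-order view of the speedup obtainable from such a multiple-CPU server is given by Amdahl's law, which bounds the gain by the serial fraction of the solution process. A small sketch (the 90% parallel fraction is illustrative, not a figure from the paper):

```python
def amdahl_speedup(parallel_fraction, n_cpus):
    """Ideal speedup when only `parallel_fraction` of the work parallelises
    perfectly across n_cpus (Amdahl's law); communication costs ignored."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cpus)

speedup_4cpu = amdahl_speedup(0.9, 4)    # about 3.08x for a 90% parallel solve
```

Even with many CPUs the speedup saturates at 1/(1 - parallel_fraction), which is why measured speedup curves of real-time simulators flatten out as processors are added.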
Abstract:
Training designed to support and strengthen higher-order mental abilities now often involves immersion in virtual reality, where dangerous real-world scenarios can be safely replicated. However, despite the growing popularity of advanced training simulations, methods for evaluating their use rely heavily on subjective measures or analysis of final outcomes. Without dynamic, objective performance measures, the impact of training on cognitive skills and the trainee's ability to transfer newly acquired skills to the real world remain unknown. The relationship between affective intensity and cognitive learning provides a potential new approach to ensuring that the cognitive processes which occur prior to final outcomes, such as problem-solving and decision-making, are adequately evaluated. This paper describes the technical aspects of pilot work recently undertaken to develop a new measurement tool designed to objectively track individual affect levels during simulation-based training.
Abstract:
We assess the performance of an exponential integrator for advancing stiff, semidiscrete formulations of the unsaturated Richards equation in time. The scheme is of second order and explicit in nature, but requires the action of the matrix function φ(A), where φ(z) = [exp(z) - 1]/z, on a suitably defined vector v at each time step. When the matrix A is large and sparse, φ(A)v can be approximated by Krylov subspace methods that require only matrix-vector products with A. We prove that, despite the use of this approximation, the scheme remains second order. Furthermore, we provide a practical variable-stepsize implementation of the integrator by deriving an estimate of the local error that requires only a single additional function evaluation. Numerical experiments performed on two-dimensional test problems demonstrate that this implementation outperforms second-order, variable-stepsize implementations of the backward differentiation formulae.
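For a small dense matrix the action of φ can be formed directly, which makes the structure of such a scheme easy to see. The sketch below implements one exponential step of the form u_{n+1} = u_n + h·φ(hA)(A·u_n + g(u_n)); it is only a dense-matrix stand-in for the paper's method, which instead approximates φ(A)v with Krylov subspace methods for large sparse A:

```python
import numpy as np
from scipy.linalg import expm

def phi(M):
    """phi(M) = M^{-1}(exp(M) - I), valid for nonsingular M.
    Dense small-matrix stand-in for a Krylov approximation of phi(A)v."""
    return np.linalg.solve(M, expm(M) - np.eye(M.shape[0]))

def exp_euler_step(u, A, g, h):
    """One exponential (Rosenbrock-Euler type) step for u' = A u + g(u):
    u_{n+1} = u_n + h * phi(h A) (A u + g(u)).  Exact when g == 0."""
    return u + h * phi(h * A) @ (A @ u + g(u))
```

For the purely linear problem u' = A u the step reproduces expm(hA)·u exactly, since h·φ(hA)·A = expm(hA) - I; the interesting error analysis only begins once g is nonlinear.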
Abstract:
Gaze and movement behaviors of association football goalkeepers were compared under two video simulation conditions (i.e., verbal and joystick movement responses) and three in situ conditions (i.e., verbal, simplified body movement, and interceptive response). The results showed that the goalkeepers spent more time fixating on information from the penalty kick taker's movements than ball location for all perceptual judgment conditions involving limited movement (i.e., verbal responses, joystick movement, and simplified body movement). In contrast, an equivalent amount of time was spent fixating on the penalty taker's relative motions and the ball location for the in situ interception condition, which required the goalkeepers to attempt to make penalty saves. The data suggest that gaze and movement behaviors function differently, depending on the experimental task constraints selected for empirical investigations. These findings highlight the need for research on perceptual-motor behaviors to be conducted in representative experimental conditions to allow appropriate generalization of conclusions to performance environments.
Abstract:
A Simulink Matlab control system of a heavy vehicle (HV) suspension has been developed. The aim of the exercise presented in this paper was to produce a working model of a HV suspension that could be used for future research. A working computer model is easier and cheaper to re-configure than a HV axle group installed on a truck; it presents less risk should something go wrong, and it allows more scope for variation and sensitivity analysis before embarking on further "real-world" testing. Empirical data recorded as the input and output signals of a HV suspension were used to develop the parameters for computer simulation of a linear time-invariant system described by a second-order differential equation (i.e. a "2nd-order" system). Using the empirical data as an input to the computer model allowed validation of its output against the empirical data. The errors ranged from less than 1% to approximately 3% for any parameter, when comparing like-for-like inputs and outputs. The model is presented along with the results of the validation. This model will be used in future research in the QUT/Main Roads project Heavy vehicle suspensions – testing and analysis, particularly for a theoretical model of a multi-axle HV suspension with varying values of dynamic load sharing. Allowance will need to be made for the errors noted when using the computer models in this future work.
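A second-order LTI model of this kind can be sketched directly in a few lines. The parameters below (mass, damping, stiffness) are hypothetical placeholders, not the values identified from the QUT empirical data, and the integrator is a simple semi-implicit Euler rather than the Simulink solver:

```python
import numpy as np

def simulate_suspension(u, dt, m=4500.0, c=20e3, k=400e3):
    """Second-order LTI model m*x'' + c*x' + k*x = u(t), integrated with
    semi-implicit Euler.  m (kg), c (N.s/m), k (N/m) are illustrative only."""
    x = v = 0.0
    out = []
    for f in u:
        a = (f - c * v - k * x) / m     # acceleration from force balance
        v += a * dt                     # update velocity first (semi-implicit)
        x += v * dt                     # then displacement
        out.append(x)
    return np.array(out)

# Step response: a constant 10 kN load should settle near F/k = 0.025 m
dt = 1e-3
u = np.full(5000, 10e3)                 # 5 s of constant input force (N)
x = simulate_suspension(u, dt)
```

Feeding recorded input signals through such a model and comparing its output with the recorded response is exactly the validation exercise the abstract describes.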
Abstract:
Freeways are divided roadways designed to facilitate the uninterrupted movement of motor vehicles. However, many freeways now experience demand flows in excess of capacity, leading to recurrent congestion. The Highway Capacity Manual (TRB, 1994) uses empirical macroscopic relationships between speed, flow and density to quantify freeway operations and performance. Capacity may be predicted as the maximum uncongested flow achievable. Although they are effective tools for design and analysis, macroscopic models lack an understanding of the nature of the processes taking place in the system. Szwed and Smith (1972, 1974) and Makigami and Matsuo (1990) have shown that microscopic modelling is also applicable to freeway operations. Such models facilitate an understanding of the processes whilst providing for the assessment of performance, through measures of capacity and delay. However, these models are limited to only a few circumstances. The aim of this study was to produce more comprehensive and practical microscopic models. These models were required to accurately portray the mechanisms of freeway operations at the specific locations under consideration. The models needed to be calibrated using data acquired at these locations, and their output needed to be validated with data acquired at these sites, so that the outputs would be truly descriptive of the performance of the facility. A theoretical basis needed to underlie the form of these models, rather than the empiricism of the macroscopic models currently used. Finally, the models needed to be adaptable to variable operating conditions, so that they could be applied, where possible, to other similar systems and facilities. It was not possible to produce a stand-alone model applicable to all facilities and locations in this single study; however, the scene has been set for the application of the models to a much broader range of operating conditions.
Opportunities for further development of the models were identified, and procedures provided for the calibration and validation of the models to a wide range of conditions. The models developed do, however, have limitations in their applicability. Only uncongested operations were studied and represented. Driver behaviour in Brisbane was applied to the models; different mechanisms are likely in other locations due to variability in road rules and driving cultures. Not all manoeuvres evident were modelled; some unusual manoeuvres were considered unwarranted to model. However, the models developed contain the principal processes of freeway operations, merging and lane changing. Gap acceptance theory was applied to these critical operations to assess freeway performance. Gap acceptance theory was found to be applicable to merging; however, the major stream, the kerb-lane traffic, exercises only a limited priority over the minor stream, the on-ramp traffic. Theory was established to account for this activity. Kerb-lane drivers were also found to change to the median lane where possible, to assist coincident mergers. The net limited-priority model accounts for this by predicting a reduced major-stream flow rate, which excludes lane changers. Cowan's M3 model was calibrated for both streams. On-ramp and total upstream flow are required as input. Relationships between the proportion of headways greater than 1 s and flow differed between on-ramps fed by signalised intersections and those fed by unsignalised intersections. Constant-departure on-ramp metering was also modelled. Minimum follow-on times of 1 to 1.2 s were calibrated. Critical gaps were shown to lie between the minimum follow-on time, and the sum of the minimum follow-on time and the 1 s minimum headway. Limited-priority capacity and other boundary relationships were established by Troutbeck (1995).
The minimum average minor-stream delay and the corresponding proportion of drivers delayed were quantified theoretically in this study. A simulation model was constructed to predict intermediate minor- and major-stream delays across all minor and major stream flows. Pseudo-empirical relationships were established to predict average delays. Major-stream average delays are limited to 0.5 s, insignificant compared with minor-stream delay, which reaches infinity at capacity. Minor-stream delays were shown to be smaller when unsignalised intersections, rather than signalised intersections, are located upstream of on-ramps, and smaller still when ramp metering is installed. Smaller delays correspond to improved merge-area performance. A more tangible performance measure, the distribution of distances required to merge, was established by including design speeds. This distribution can be measured to validate the model. Merging probabilities can be predicted for given taper lengths, a most useful performance measure. This model was also shown to be applicable to lane changing. Tolerable limits to merging probabilities require calibration; from these, practical capacities can be estimated. Further calibration is required of traffic inputs, critical gap and minimum follow-on time, for both merging and lane changing. A general relationship to predict the proportion of drivers delayed requires development. These models can then be used to complement existing macroscopic models to assess performance, and to provide further insight into the nature of operations.
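The headway and gap-acceptance machinery referred to above can be sketched compactly: sample major-stream headways from Cowan's M3 model, then count how many minor-stream vehicles each headway can absorb given a critical gap and a follow-on time. All parameter values below (alpha, delta, t_c, t_f) are illustrative defaults, not the calibrated Brisbane values:

```python
import math
import random

def m3_headways(n, q, alpha=0.75, delta=1.0, seed=1):
    """Sample n major-stream headways (s) from Cowan's M3 model: a proportion
    alpha of vehicles are free, with shifted-exponential headways above the
    minimum headway delta; the remainder travel bunched at exactly delta.
    q is the major-stream flow (veh/s); alpha and delta here are illustrative."""
    rng = random.Random(seed)
    lam = alpha * q / (1.0 - delta * q)      # decay rate of the free headways
    return [delta - math.log(rng.random()) / lam if rng.random() < alpha
            else delta
            for _ in range(n)]

def merges_per_headway(headways, t_c=3.0, t_f=1.1):
    """Gap acceptance: within a major-stream headway h, the first minor-stream
    vehicle needs a critical gap t_c and each follower a follow-on time t_f."""
    accepted = sum(1 + int((h - t_c) // t_f) for h in headways if h >= t_c)
    return accepted / len(headways)
```

Averaging merges_per_headway over sampled headways, and multiplying by the major-stream flow, gives a simulation estimate of minor-stream capacity of the kind the study compares against limited-priority theory.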
Abstract:
A Geant4-based simulation tool has been developed to perform Monte Carlo modelling of a 6 MV Varian iX Clinac. The computer-aided design interface of Geant4 was used to accurately model the LINAC components, including the Millennium multi-leaf collimators (MLCs). The simulation tool was verified via simulation of standard commissioning dosimetry data acquired with an ionisation chamber in a water phantom. Verification of the MLC model was achieved by simulation of leaf leakage measurements performed using Gafchromic film in a solid water phantom. An absolute dose calibration capability was added by including a virtual monitor chamber in the simulation. Furthermore, a DICOM-RT interface was integrated with the application to allow the simulation of radiotherapy treatment plans. The ability of the simulation tool to accurately model leaf movements and doses at each control point was verified by simulation of a widely used intensity-modulated radiation therapy (IMRT) quality assurance (QA) technique, the chair test.
Abstract:
This paper presents a material model to simulate load-induced cracking in reinforced concrete (RC) elements in the ABAQUS finite element package. Two numerical material models are used and combined to simulate the complete stress-strain behaviour of concrete under compression and tension, including damage properties. Both numerical techniques used in the present material model are capable of developing the stress-strain curves, including strain-softening regimes, using only the ultimate compressive strength of concrete, which is easily and practically obtainable for many existing RC structures or those to be built. Therefore, the method proposed in this paper is valuable for assessing existing RC structures in the absence of more detailed test results. The numerical models are slightly modified from the original versions to be compatible with the damaged plasticity model used in ABAQUS. The model is validated using different experimental results for RC beam elements presented in the literature. The results indicate good agreement with load vs. displacement curves and observed crack patterns.
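A simple stand-in for the kind of compression curve that can be built from the ultimate strength alone is the classic Hognestad-type parabola with a linear softening branch. This is not the specific pair of numerical models combined in the paper; the peak and ultimate strains used below are typical textbook assumptions:

```python
def concrete_stress(eps, fc=30.0, eps0=0.002, eps_u=0.0038):
    """Hognestad-type compression curve built from the ultimate compressive
    strength fc (MPa) alone; eps0 and eps_u are typical assumed strains.
    Ascending parabola to the peak, then linear softening to 0.85*fc at eps_u."""
    if eps <= eps0:
        r = eps / eps0
        return fc * (2.0 * r - r * r)        # parabolic ascending branch
    # linear softening branch beyond the peak, floored at zero stress
    return max(fc * (1.0 - 0.15 * (eps - eps0) / (eps_u - eps0)), 0.0)
```

Sampling this function over a strain range gives the tabulated stress-strain pairs that a damaged-plasticity material definition in a finite element package typically requires as input.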