Abstract:
This chapter is a tutorial that teaches you how to design extended finite state machine (EFSM) test models for a system that you want to test. EFSM models are more powerful and expressive than simple finite state machine (FSM) models, and are one of the most commonly used styles of models for model-based testing, especially for embedded systems. There are many languages and notations in use for writing EFSM models, but in this tutorial we write our EFSM models in the familiar Java programming language. To generate tests from these EFSM models we use ModelJUnit, which is an open-source tool that supports several stochastic test generation algorithms, and we also show how to write your own model-based testing tool. We show how EFSM models can be used for unit testing and system testing of embedded systems, and for offline testing as well as online testing.
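To make the modelling style concrete, below is a minimal, self-contained sketch in plain Java of an EFSM (an extended state variable plus a guard method paired with each action) together with a tiny random test generator, in the spirit of the chapter's "write your own model-based testing tool" theme. The DrinkMachine model and its actions are illustrative inventions, not examples from the chapter; ModelJUnit models follow essentially the same guard/action pattern.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

/** A toy EFSM: a drink machine whose extended state is the inserted balance. */
class DrinkMachine {
    private int balance = 0;                 // extended state variable

    public String getState() {               // abstract state, used for reporting
        return balance == 0 ? "EMPTY" : "PAID(" + balance + ")";
    }

    public void reset() { balance = 0; }

    // Each action has a guard saying when it is enabled.
    public boolean insertCoinGuard() { return balance < 200; }
    public void insertCoin()         { balance += 100; }

    public boolean vendGuard()       { return balance >= 100; }
    public void vend()               { balance -= 100; }
}

/** A minimal random test generator: repeatedly fire a randomly chosen enabled action. */
public class RandomTester {
    public static void main(String[] args) {
        DrinkMachine model = new DrinkMachine();
        Random rng = new Random(42);
        model.reset();
        for (int step = 0; step < 20; step++) {
            List<Runnable> enabled = new ArrayList<>();
            List<String> names = new ArrayList<>();
            if (model.insertCoinGuard()) { enabled.add(model::insertCoin); names.add("insertCoin"); }
            if (model.vendGuard())       { enabled.add(model::vend);       names.add("vend"); }
            int pick = rng.nextInt(enabled.size());
            enabled.get(pick).run();      // in online testing, also drive the system under test here
            System.out.println(names.get(pick) + " -> " + model.getState());
        }
    }
}
```

In an online-testing setting, the marked line would also invoke the corresponding operation on the system under test and check its response against the model's prediction; in offline testing, the generated action sequence would be saved as a test script.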
Abstract:
Adolescent idiopathic scoliosis (AIS) is a three-dimensional spinal deformity involving side-to-side curvature of the spine in the coronal plane and axial rotation of the vertebrae in the transverse plane. For patients with a severe or rapidly progressing deformity, corrective instrumented fusion surgery is performed. The wide choice of implants and the large variability between patients make it difficult for surgeons to choose optimal treatment strategies. This paper describes the patient-specific finite element modelling techniques employed and the results of preliminary analyses predicting the surgical outcomes for a series of AIS patients. It highlights the importance not only of patient-specific anatomy and material parameters, but also of patient-specific data on the clinical and physiological loading conditions experienced by the patient undergoing corrective scoliosis surgery.
Abstract:
In this work, the thermal expansion properties of carbon nanotube (CNT)-reinforced nanocomposites with CNT contents ranging from 1 to 15 wt% were evaluated using a multi-scale numerical approach, in which the effects of two parameters, temperature and CNT content, were investigated extensively. For all CNT contents, the results clearly revealed that within a wide low-temperature range (30 °C to 62 °C) thermal contraction is observed, while thermal expansion occurs in the high-temperature range (62 °C to 120 °C). It was found that, at any specified CNT content, the thermal expansion properties vary with temperature: as temperature increases, the thermal expansion rate increases linearly. However, at a specified temperature, the absolute value of the thermal expansion rate decreases nonlinearly as the CNT content increases. Moreover, the results provided by the present multi-scale numerical model were in good agreement with those obtained from the corresponding theoretical analyses and experimental measurements in this work, which indicates that this multi-scale numerical approach provides a powerful tool for evaluating the thermal expansion properties of any type of CNT/polymer nanocomposite, and therefore promotes the understanding of the thermal behaviors of CNT/polymer nanocomposites for applications in temperature sensors, nanoelectronic devices, etc.
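For reference, the "thermal expansion rate" studied in work of this kind is usually the coefficient of thermal expansion (CTE); the standard definition below is included for clarity and is not quoted from the paper.

```latex
% Linear coefficient of thermal expansion (standard definition):
% the fractional change in length L per unit change in temperature T.
\[
  \alpha(T) \;=\; \frac{1}{L}\,\frac{\mathrm{d}L}{\mathrm{d}T},
  \qquad
  \alpha < 0 \;\Rightarrow\; \text{thermal contraction},
  \quad
  \alpha > 0 \;\Rightarrow\; \text{thermal expansion}.
\]
```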
Abstract:
Management of groundwater systems requires realistic conceptual hydrogeological models, as a framework for numerical simulation modelling, but also to aid system understanding and to communicate that understanding to stakeholders and the broader community. To help overcome these challenges we developed GVS (Groundwater Visualisation System), a stand-alone desktop software package that uses interactive 3D visualisation and animation techniques. The goal was a user-friendly groundwater management tool that could support a range of existing real-world and pre-processed data, both surface and subsurface, including geology and various types of temporal hydrological information. GVS allows these data to be integrated into a single conceptual hydrogeological model. In addition, 3D geological models produced externally using other software packages can readily be imported into GVS models, as can outputs of simulations (e.g. piezometric surfaces) produced by software such as MODFLOW or FEFLOW. Boreholes can be integrated, showing any down-hole data and properties, including screen information, intersected geology, water level data and water chemistry. Animation is used to display spatial and temporal changes, with time-series data such as rainfall, standing water levels and electrical conductivity used to display dynamic processes. Time and space variations can be presented using a range of contouring and colour mapping techniques, in addition to interactive plots of time-series parameters. Other types of data, for example demographics and cultural information, can also be readily incorporated. The GVS software can execute on a standard Windows or Linux-based PC with a minimum of 2 GB RAM, and the model output is easy and inexpensive to distribute, by download or via USB/DVD/CD. Example models are described here for three groundwater systems in Queensland, northeastern Australia: two unconfined alluvial groundwater systems with intensive irrigation, the Lockyer Valley and the upper Condamine Valley, and the Surat Basin, a large sedimentary basin of confined artesian aquifers. The latter example required more detail in the hydrostratigraphy, correlation of formations with drillholes and visualisation of simulated piezometric surfaces. Both alluvial-system GVS models were developed during drought conditions to support government strategies to implement groundwater management. The Surat Basin model was industry-sponsored research, for coal seam gas groundwater management and community information and consultation. The “virtual” groundwater systems in these 3D GVS models can be interactively interrogated via standard functions, plus the production of 2D cross-sections, data selection from the 3D scene, and back-end database and plot displays. A unique feature is that GVS allows investigation of time-series data across different display modes, both 2D and 3D. GVS has been used successfully as a tool to enhance community/stakeholder understanding and knowledge of groundwater systems, and is of value for training and educational purposes. Completed projects confirm that GVS provides powerful support for management and decision making, and serves as a tool for interpreting groundwater system hydrological processes. A highly effective visualisation output is the production of short videos (e.g. 2–5 min) based on sequences of camera ‘fly-throughs’ and screen images. Further work involves developing support for multi-screen displays and touch-screen technologies, distributed rendering, and gestural interaction systems.
To highlight the visualisation and animation capabilities of the GVS software, links to related multimedia hosted on online sites are included in the references.
Abstract:
Crowds of non-combatants play a large and increasingly recognized role in modern military operations, and often create substantial difficulties for the combatant forces involved. However, realistic models of crowds are essentially absent from current military simulations. To address this problem we are developing a crowd simulation capable of generating crowds of non-combatant civilians that exhibit a variety of realistic individual and group behaviours at differing levels of fidelity. The crowd simulation is interoperable with existing military simulations via a standard distributed simulation architecture. Commercial game technology is utilized in the crowd simulation to model both urban terrain and the physical behaviours of the human characters that make up the crowd. The objective of this paper is to present the process involved in designing and developing a simulation that integrates commercially available game technology with current military simulations in order to generate realistic and believable crowd behaviour.
Abstract:
A numerical study is presented to examine the fingering instability of a gravity-driven thin liquid film flowing down the outer wall of a vertical cylinder. The lubrication approximation is employed to derive an evolution equation for the height of the film, which depends on a single parameter, the dimensionless cylinder radius. This equation is identified as a special case of the equation that describes thin film flow down an inclined plane. Fully three-dimensional simulations of the film depict a fingering pattern at the advancing contact line. We find the number of fingers observed in our simulations to be in excellent agreement with experimental observations and a linear stability analysis reported recently by Smolka & SeGall (Phys Fluids 23, 092103 (2011)). As the radius of the cylinder decreases, the modes of perturbation have an increased growth rate, so increasing cylinder curvature acts, in part, to encourage the contact-line instability. In direct competition with this behaviour, a decrease in cylinder radius means that fewer fingers are able to form around the circumference of the cylinder. Indeed, for a sufficiently small radius, a transition is observed, at which point the contact line is stable to transverse perturbations of all wavenumbers. In this regime, free surface instabilities lead to the development of wave patterns in the axial direction, and the flow features become perfectly analogous to the two-dimensional flow of a thin film down an inverted plane, as studied by Lin & Kondic (Phys Fluids 22, 052105 (2010)). Finally, we simulate the flow of a single drop down the outside of the cylinder. Our results show that for drops of low volume, the cylinder curvature has the effect of increasing drop speed and hence promoting the phenomenon of pearling. On the other hand, drops with much larger volume evolve into single long rivulets with a similar shape to a finger formed in the aforementioned simulations.
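For context on the "special case" remark, the dimensionless lubrication equation for a gravity-driven film on a vertical plane takes the familiar form below; the additional curvature terms involving the dimensionless cylinder radius are specific to the paper and are not reproduced here.

```latex
% Dimensionless lubrication (thin-film) equation for a gravity-driven film
% of thickness h(x,y,t) on a vertical plane, with x directed down the slope:
% the fourth-order term is capillary (surface tension), the last is gravity.
\[
  \frac{\partial h}{\partial t}
  + \nabla \cdot \left( h^{3}\, \nabla \nabla^{2} h \right)
  + \frac{\partial}{\partial x}\!\left( h^{3} \right) = 0 .
\]
```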
Abstract:
In the modern built environment, building construction and demolition consume large amounts of energy and emit greenhouse gases, owing to widely used conventional construction materials such as reinforced and composite concrete. These materials consume large amounts of natural resources and possess high embodied energy. More energy is required to recycle or reuse such materials at the cessation of use. It is therefore very important to use recyclable or reusable new materials in building construction, in order to conserve natural resources and reduce the energy and emissions associated with conventional materials. Advancements in materials technology have resulted in the introduction of new composite and hybrid materials in infrastructure construction as alternatives to conventional materials. This research project has developed a lightweight, prefabricatable Hybrid Composite Floor Plate System (HCFPS) as an alternative to conventional floor systems, with desirable properties: it is easy to construct, economical, demountable, recyclable and reusable. Component materials of HCFPS include a central Polyurethane (PU) core, outer layers of Glass-fiber Reinforced Cement (GRC) and steel laminates at tensile regions. This research explored the structural adequacy and performance characteristics of hybridised GRC, PU and steel laminate for the development of HCFPS. Performance characteristics of HCFPS were investigated using Finite Element (FE) simulations supported by experimental testing. Parametric studies were conducted to develop the HCFPS to satisfy static performance requirements, using sectional configurations, spans, loading and material properties as the parameters. The dynamic response of HCFPS floors was investigated through parametric studies using material properties, walking frequency and damping as the parameters. Research findings show that HCFPS can be used in office and residential buildings and provides acceptable static and dynamic performance. Design guidelines were developed for this new floor system. HCFPS is easy to construct and economical compared to conventional floor systems because it is lightweight and can be prefabricated. The floor system can also be demounted and reused, or recycled at the cessation of use, owing to its component materials.
Abstract:
A procedure for the evaluation of multiple scattering contributions is described for deep inelastic neutron scattering (DINS) studies using an inverse-geometry time-of-flight spectrometer. The accuracy of the Monte Carlo code DINSMS, which is used to calculate the multiple scattering, is tested by comparison with analytic expressions and with experimental data collected from polythene, polycrystalline graphite and tin samples. It is shown that the Monte Carlo code gives an accurate representation of the measured data and can therefore be used to reliably correct DINS data.
Abstract:
Introduction: The accurate identification of tissue electron densities is of great importance for Monte Carlo (MC) dose calculations. When converting patient CT data into a voxelised format suitable for MC simulations, however, it is common to simplify the assignment of electron densities so that the complex tissues existing in the human body are categorized into a few basic types. This study examines the effects that the assignment of tissue types and the calculation of densities can have on the results of MC simulations, for the particular case of a Siemens Sensation 4 CT scanner located in a radiotherapy centre where QA measurements are routinely made using 11 tissue types (plus air). Methods: DOSXYZnrc phantoms are generated from CT data, using the CTCREATE user code, with the relationship between Hounsfield units (HU) and density determined via linear interpolation between a series of specified points on the ‘CT-density ramp’ (see Figure 1(a)). Tissue types are assigned according to HU ranges. Each voxel in the DOSXYZnrc phantom therefore has an electron density (electrons/cm3) defined by the product of the mass density (from the HU conversion) and the intrinsic electron density (electrons/gram) (from the material assignment), in that voxel. In this study, we consider the problems of density conversion and material identification separately: the CT-density ramp is simplified by decreasing the number of points which define it from 12 down to 8, 3 and 2; and the material-type assignment is varied by defining the materials which comprise our test phantom (a Supertech head) as two tissues and bone, two plastics and bone, water only and (as an extreme case) lead only. The effect of these parameters on radiological thickness maps derived from simulated portal images is investigated. Results & Discussion: Increasing the degree of simplification of the CT-density ramp results in an increasing effect on the resulting radiological thickness calculated for the Supertech head phantom. For instance, defining the CT-density ramp using 8 points, instead of 12, results in a maximum radiological thickness change of 0.2 cm, whereas defining the CT-density ramp using only 2 points results in a maximum radiological thickness change of 11.2 cm. Changing the definition of the materials comprising the phantom between water, plastic and tissue results in millimetre-scale changes to the resulting radiological thickness. When the entire phantom is defined as lead, this alteration changes the calculated radiological thickness by a maximum of 9.7 cm. Evidently, the simplification of the CT-density ramp has a greater effect on the resulting radiological thickness map than does the alteration of the assignment of tissue types. Conclusions: It is possible to alter the definitions of the tissue types comprising the phantom (or patient) without substantially altering the results of simulated portal images. However, these images are very sensitive to the accurate identification of the HU-density relationship. When converting data from a patient’s CT into an MC simulation phantom, therefore, all possible care should be taken to accurately reproduce the conversion between HU and mass density, for the specific CT scanner used. Acknowledgements: This work is funded by the NHMRC, through a project grant, and supported by the Queensland University of Technology (QUT) and the Royal Brisbane and Women's Hospital (RBWH), Brisbane, Australia.
The authors are grateful to the staff of the RBWH, especially Darren Cassidy, for assistance in obtaining the phantom CT data used in this study. The authors also wish to thank Cathy Hargrave, of QUT, for assistance in formatting the CT data, using the Pinnacle TPS. Computational resources and services used in this work were provided by the HPC and Research Support Group, QUT, Brisbane, Australia.
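The HU-to-density conversion described in the Methods above is a piecewise-linear lookup between points on the CT-density ramp, with each voxel's electron density given by the product of its mass density and the assigned material's intrinsic electron density. The sketch below illustrates that two-step calculation; the ramp points and densities used are invented placeholders, since, as the Conclusions stress, the ramp must be measured for the specific scanner.

```java
/**
 * Illustrative HU-to-density conversion for building an MC phantom.
 * The ramp points and densities below are made-up examples, NOT values
 * from the study: each CT scanner needs its own measured ramp.
 */
public class CtDensityRamp {
    // Hounsfield-unit breakpoints and the mass densities (g/cm^3) at each one.
    private static final double[] HU  = { -1000, -100,    0,  100, 1600, 3000 };
    private static final double[] RHO = {  0.001, 0.95, 1.00, 1.07, 1.85, 2.80 };

    /** Piecewise-linear interpolation of mass density from a HU value. */
    public static double massDensity(double hu) {
        if (hu <= HU[0]) return RHO[0];
        if (hu >= HU[HU.length - 1]) return RHO[RHO.length - 1];
        int i = 1;
        while (HU[i] < hu) i++;
        double t = (hu - HU[i - 1]) / (HU[i] - HU[i - 1]);
        return RHO[i - 1] + t * (RHO[i] - RHO[i - 1]);
    }

    /** Electron density (electrons/cm^3) = mass density x intrinsic electron density. */
    public static double electronDensity(double hu, double electronsPerGram) {
        return massDensity(hu) * electronsPerGram;
    }

    public static void main(String[] args) {
        // Water has roughly 3.34e23 electrons per gram.
        System.out.println(electronDensity(0.0, 3.34e23));
    }
}
```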
Abstract:
Introduction: Recent advances in the planning and delivery of radiotherapy treatments have resulted in improvements in the accuracy and precision with which therapeutic radiation can be administered. As the complexity of the treatments increases it becomes more difficult to predict the dose distribution in the patient accurately. Monte Carlo (MC) methods have the potential to improve the accuracy of the dose calculations and are increasingly being recognised as the ‘gold standard’ for predicting dose deposition in the patient [1]. This project has three main aims: 1. To develop tools that enable the transfer of treatment plan information from the treatment planning system (TPS) to a MC dose calculation engine. 2. To develop tools for comparing the 3D dose distributions calculated by the TPS and the MC dose engine. 3. To investigate the radiobiological significance of any errors between the TPS patient dose distribution and the MC dose distribution in terms of Tumour Control Probability (TCP) and Normal Tissue Complication Probabilities (NTCP). The work presented here addresses the first two aims. Methods: (1a) Plan Importing: A database of commissioned accelerator models (Elekta Precise and Varian 2100CD) has been developed for treatment simulations in the MC system (EGSnrc/BEAMnrc). Beam descriptions can be exported from the TPS using the widespread DICOM framework, and the resultant files are parsed with the assistance of a software library (PixelMed Java DICOM Toolkit). The information in these files (such as the monitor units, the jaw positions and gantry orientation) is used to construct a plan-specific accelerator model which allows an accurate simulation of the patient treatment field. (1b) Dose Simulation: The calculation of a dose distribution requires patient CT images which are prepared for the MC simulation using a tool (CTCREATE) packaged with the system. Beam simulation results are converted to absolute dose per MU using calibration factors recorded during the commissioning process and treatment simulation. These distributions are combined according to the MU meter settings stored in the exported plan to produce an accurate description of the prescribed dose to the patient. (2) Dose Comparison: TPS dose calculations can be obtained using either a DICOM export or by direct retrieval of binary dose files from the file system. Dose difference, gamma evaluation and normalised dose difference algorithms [2] were employed for the comparison of the TPS dose distribution and the MC dose distribution. These implementations are independent of spatial resolution and able to interpolate for comparisons. Results and Discussion: The tools successfully produced Monte Carlo input files for a variety of plans exported from the Eclipse (Varian Medical Systems) and Pinnacle (Philips Medical Systems) planning systems: ranging in complexity from a single uniform square field to a five-field step-and-shoot IMRT treatment. The simulation of collimated beams has been verified geometrically, and validation of dose distributions in a simple body phantom (QUASAR) will follow. The developed dose comparison algorithms have also been tested with controlled dose distribution changes. Conclusion: The capability of the developed code to independently process treatment plans has been demonstrated.
A number of limitations exist: only static fields are currently supported (dynamic wedges and dynamic IMRT will require further development), and the process has not been tested for planning systems other than Eclipse and Pinnacle. The tools will be used to independently assess the accuracy of the current treatment planning system dose calculation algorithms for complex treatment deliveries such as IMRT in treatment sites where patient inhomogeneities are expected to be significant. Acknowledgements: Computational resources and services used in this work were provided by the HPC and Research Support Group, Queensland University of Technology, Brisbane, Australia. Pinnacle dose parsing made possible with the help of Paul Reich, North Coast Cancer Institute, North Coast, New South Wales.
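Of the three comparison metrics mentioned above, the gamma evaluation (commonly attributed to Low et al.) is the least obvious to implement; the sketch below shows the core of the calculation in one dimension, with the dose-difference and distance-to-agreement criteria as parameters. It is a deliberately simplified illustration that works on a common grid without interpolation, unlike the resolution-independent implementations described in the abstract.

```java
/**
 * Minimal 1D gamma evaluation for comparing two dose profiles sampled on
 * the same grid. Simplified illustration only: no interpolation between
 * samples, unlike the tools described above.
 */
public class GammaEval {
    /**
     * @param ref     reference dose profile (e.g. TPS)
     * @param eval    evaluated dose profile (e.g. Monte Carlo)
     * @param spacing grid spacing in mm
     * @param dd      dose-difference criterion (same units as dose, e.g. 3% of Dmax)
     * @param dta     distance-to-agreement criterion in mm (e.g. 3 mm)
     * @return gamma value at each reference point; gamma <= 1 is a pass
     */
    public static double[] gamma(double[] ref, double[] eval,
                                 double spacing, double dd, double dta) {
        double[] g = new double[ref.length];
        for (int i = 0; i < ref.length; i++) {
            double best = Double.MAX_VALUE;
            for (int j = 0; j < eval.length; j++) {
                double dist = (j - i) * spacing;        // spatial offset
                double dose = eval[j] - ref[i];         // dose offset
                double gij = Math.sqrt((dist * dist) / (dta * dta)
                                     + (dose * dose) / (dd * dd));
                best = Math.min(best, gij);             // minimise over evaluated points
            }
            g[i] = best;
        }
        return g;
    }
}
```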
Abstract:
Invasion waves of cells play an important role in development, disease and repair. Standard discrete models of such processes typically involve simulating cell motility, cell proliferation and cell-to-cell crowding effects in a lattice-based framework. The continuum-limit description is often given by a reaction–diffusion equation that is related to the Fisher–Kolmogorov equation. One limitation of a standard lattice-based approach is that real cells move and proliferate in continuous space and are not restricted to a predefined lattice structure. We present a lattice-free model of cell motility and proliferation, with cell-to-cell crowding effects, and we use the model to replicate invasion wave-type behaviour. The continuum-limit description of the discrete model is a reaction–diffusion equation with a proliferation term that differs from that of lattice-based models. Comparing lattice-based and lattice-free simulations indicates that both models lead to invasion fronts that are similar at the leading edge, where the cell density is low. Conversely, the two models make different predictions in the high-density region of the domain, well behind the leading edge. We analyse the continuum-limit descriptions of the lattice-based and lattice-free models to show that both give rise to invasion wave-type solutions that move with the same speed but have very different shapes. We explore the significance of these differences by calibrating the parameters in the standard Fisher–Kolmogorov equation using data from the lattice-free model. We conclude that estimating parameters using this kind of standard procedure can produce misleading results.
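For reference, the standard Fisher–Kolmogorov equation referred to above has the form below, with diffusivity D and proliferation rate λ; the modified proliferation term produced by the lattice-free model is specific to the paper and is not reproduced here.

```latex
% Standard Fisher--Kolmogorov equation for a scaled cell density u(x,t):
% linear diffusion (diffusivity D) plus logistic proliferation (rate lambda).
\[
  \frac{\partial u}{\partial t}
  = D\,\frac{\partial^{2} u}{\partial x^{2}}
  + \lambda\, u \left( 1 - u \right),
\]
% which admits travelling-wave (invasion-front) solutions with minimum speed
\[
  c_{\min} = 2\sqrt{\lambda D}.
\]
```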
Abstract:
We define a pair-correlation function that can be used to characterize spatiotemporal patterning in experimental images and in snapshots from discrete simulations. Unlike previous pair-correlation functions, those developed here depend on the location and size of objects. The pair-correlation function can be used to indicate complete spatial randomness, aggregation or segregation over a range of length scales, and it quantifies spatial structures such as the shape, size and distribution of clusters. Comparing pair-correlation data for various experimental and simulation images illustrates the potential of these functions as a summary statistic for calibrating discrete models of various physical processes.
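As a concrete sketch of the general idea (though not of the authors' size-dependent definition), the code below computes a distance-binned pair correlation for point objects, normalised by the pair counts expected under complete spatial randomness, so that values above one indicate aggregation and values below one indicate segregation at that length scale. Edge effects are ignored for brevity.

```java
import java.util.Random;

/**
 * Simple distance-binned pair-correlation function for points in an L x L
 * square, normalised by the pair counts expected under complete spatial
 * randomness (CSR). Edge effects are neglected, so bins near r ~ L are
 * approximate. g ~ 1: random; g > 1: aggregation; g < 1: segregation.
 */
public class PairCorrelation {
    public static double[] pairCorrelation(double[][] pts, double L,
                                           int bins, double rMax) {
        double dr = rMax / bins;
        double[] counts = new double[bins];
        int n = pts.length;
        for (int i = 0; i < n; i++) {
            for (int j = i + 1; j < n; j++) {
                double dx = pts[i][0] - pts[j][0];
                double dy = pts[i][1] - pts[j][1];
                double r = Math.sqrt(dx * dx + dy * dy);
                if (r < rMax) counts[(int) (r / dr)]++;   // bin the pair separation
            }
        }
        double[] g = new double[bins];
        double totalPairs = 0.5 * n * (n - 1);
        for (int k = 0; k < bins; k++) {
            double rLo = k * dr, rHi = rLo + dr;
            // Expected CSR pair count: annulus area over domain area (no edge correction).
            double expected = totalPairs * Math.PI * (rHi * rHi - rLo * rLo) / (L * L);
            g[k] = counts[k] / expected;
        }
        return g;
    }

    public static void main(String[] args) {
        Random rng = new Random(1);
        double[][] pts = new double[500][2];
        for (double[] p : pts) { p[0] = rng.nextDouble(); p[1] = rng.nextDouble(); }
        for (double v : pairCorrelation(pts, 1.0, 10, 0.5)) System.out.printf("%.3f%n", v);
    }
}
```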
Abstract:
The first fiber Bragg grating (FBG) accelerometer using direct transverse forces is demonstrated, by fixing the FBG at its two ends and placing a transversely moving inertial object at its middle. It is very sensitive because a lightly stretched FBG is more sensitive to transverse forces than to axial forces. Its resonant frequency and static sensitivity are analyzed using the classic spring-mass theory, under the assumption that the axial force changes little. The experiments show that the theory can be modified for cases where this assumption does not hold: the resonant frequency can be corrected using an experimentally obtained linear relationship, and the static sensitivity using an alternative method proposed here. Over-range protection and low cross-axial sensitivity are achieved by limiting the movement of the FBG, and both were validated experimentally. Sensitivities of 1.333 and 0.634 nm/g were experimentally achieved with 5.29 and 2.83 gram inertial objects, respectively, at 10 Hz from 0.1 to 0.4 g (g = 9.8 m/s²), and their resonant frequencies were around 25 Hz. The theoretical static sensitivities and resonant frequencies found via these modifications are 1.188 nm/g and 26.81 Hz for the 5.29 gram device, and 0.784 nm/g and 29.04 Hz for the 2.83 gram device, respectively.
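The "classic spring-mass theory" invoked above gives the textbook relations below, included for clarity rather than quoted from the paper; they explain why the heavier inertial object yields the higher sensitivity but the lower resonant frequency.

```latex
% Classic spring-mass resonance for an inertial mass m on an effective
% stiffness k (here set largely by the pretension of the stretched FBG):
\[
  f_{r} = \frac{1}{2\pi}\sqrt{\frac{k}{m}},
  \qquad
  \text{static displacement per unit acceleration} = \frac{m}{k}.
\]
% A larger mass m therefore raises the displacement-based sensitivity
% while lowering the resonant frequency, matching the two prototypes above.
```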