194 results for Systems Simulation
at Queensland University of Technology - ePrints Archive
Abstract:
With the advances in computer hardware and software development techniques in the past 25 years, digital computer simulation of train movement and traction systems has been widely adopted as a standard computer-aided engineering tool [1] during the design and development stages of existing and new railway systems. Simulators of different approaches and scales are used extensively to carry out various kinds of system studies. Simulation has now proven to be the cheapest means of performance prediction and system behaviour characterisation. When computers were first used to study railway systems, they were mainly employed to perform repetitive but time-consuming computational tasks, such as matrix manipulations for power network solutions and exhaustive searches for optimal braking trajectories. With only simple high-level programming languages available at the time, full advantage of the computing hardware could not be taken. Hence, structured simulations of the whole railway system were not very common. Most applications focused on isolated parts of the railway system, and it is more appropriate to regard them as primarily mechanised calculations rather than simulations. However, a railway system consists of a number of subsystems, such as train movement, power supply and traction drives, which inevitably contain many complexities and diversities. These subsystems interact frequently with each other while the trains are moving, and they have their own special features in different railway systems. To further complicate the simulation requirements, constraints such as track geometry, speed restrictions and friction have to be considered, not to mention possible non-linearities and uncertainties in the system.
In order to provide a comprehensive and accurate account of system behaviour through simulation, a large amount of data has to be organised systematically to ensure easy access and efficient representation, and the interactions and relationships among the subsystems should be defined explicitly. These requirements call for sophisticated and effective simulation models for each component of the system. The software development techniques available nowadays allow the evolution of such simulation models. Not only can the applicability of the simulators be greatly enhanced by advanced software design; maintainability and modularity for easy understanding and further development, and portability across various hardware platforms, are also encouraged. The objective of this paper is to review the development of a number of approaches to simulation models. Attention is given, in particular, to models for train movement, power supply systems and traction drives. These models have been successfully used to resolve a wide range of 'what-if' issues effectively in applications such as speed profiles, energy consumption and run times.
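At their core, train-movement models of the kind surveyed here integrate Newton's second law for a point-mass train against tractive effort and running resistance. A minimal illustrative sketch, with entirely hypothetical coefficients (the Davis-type resistance constants, train mass and tractive effort below are not taken from the paper):

```python
# Illustrative point-mass train movement simulation (all coefficients hypothetical).
def simulate_run(distance_m, mass_kg, tractive_n, dt=0.1):
    """Euler-integrate one train accelerating from rest at full tractive effort.

    Running resistance follows a Davis-type law R(v) = a + b*v + c*v**2;
    gradients, speed restrictions and braking are deliberately omitted.
    Returns (run time [s], final speed [m/s], traction energy [J])."""
    a, b, c = 2000.0, 30.0, 4.0          # hypothetical resistance coefficients
    x = v = t = energy_j = 0.0
    while x < distance_m:
        accel = (tractive_n - (a + b * v + c * v * v)) / mass_kg
        v = max(0.0, v + accel * dt)     # no reversing
        x += v * dt
        energy_j += tractive_n * v * dt  # power = force * speed
        t += dt
    return t, v, energy_j

run_time, final_speed, energy = simulate_run(2000.0, 200e3, 150e3)
```

A full simulator would add gradient and curvature terms from the track geometry, coasting and braking phases, and the coupling to the power supply and traction-drive models discussed above.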
Abstract:
The impact of simulation methods for social research in the Information Systems (IS) research field remains low. A concern is that our field is inadequately leveraging the unique strengths of simulation methods. Although this low impact is frequently attributed to methodological complexity, we offer an alternative explanation – the poor construction of research value. We argue that a more intuitive value construction, better connected to the knowledge base, will facilitate increased value and broader appreciation. A meta-analysis of studies published in IS journals over the last decade evidences this low impact. To facilitate value construction, we synthesize four common types of simulation research contribution: Analyzer, Tester, Descriptor, and Theorizer. To illustrate, we employ the proposed typology to describe how each type of value is structured in simulation research and connect each type to instances from the IS literature, thereby making these value types and their construction visible and readily accessible to the general IS community.
Abstract:
Simulation is widely used as a tool for analyzing business processes but is mostly focused on examining abstract steady-state situations. Such analyses are helpful for the initial design of a business process but are less suitable for operational decision making and continuous improvement. Here we describe a simulation system for operational decision support in the context of workflow management. To do this we exploit not only the workflow's design, but also logged data describing the system's observed historic behavior and information extracted about the current state of the workflow. Making use of actual data capturing the current state and historic information allows our simulations to accurately predict potential near-future behaviors under different scenarios. The approach is supported by a practical toolset which combines and extends the workflow management system YAWL and the process mining framework ProM.
Abstract:
To investigate the effects of adopting a pull system in assembly lines, in contrast to a push system, the simulation software "ARENA" is used as a tool to produce numerical results for both systems. Simulation scenarios are created to evaluate the effects of changing attributes of the assembly system, the influential factors being the type of manufacturing system (push versus pull) and the variation of demand. The pull system also introduces an additional attribute, the amount of buffer storage. This paper provides an analysis based on a previous case study; the process times and workflow are taken from "Optimising and simulating the assembly line balancing problem in a motorcycle manufacturing company: a case study" [2]. The pull system mechanism is implemented to improve the system in terms of the amount of Work-In-Process (WIP), the total time products spend in the system, and the size of the finished-product inventory, while retaining the same throughput.
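The push-versus-pull contrast can be illustrated outside ARENA with a toy time-stepped model of a two-station line; the processing probabilities, buffer capacity and step count below are hypothetical and not taken from the case study [2]:

```python
import random

def simulate_line(pull, buffer_cap=5, steps=10_000, seed=42):
    """Toy time-stepped two-station assembly line (hypothetical parameters).

    Station 1 finishes a part with probability 0.6 per step and pushes it into
    a buffer; station 2 consumes a buffered part with probability 0.5 per step.
    In pull mode, station 1 is blocked while the buffer holds buffer_cap parts
    (a kanban-style WIP limit); in push mode the buffer is unbounded.
    Returns (finished parts, mean WIP in the buffer)."""
    rng = random.Random(seed)
    buffer = finished = wip_total = 0
    for _ in range(steps):
        if rng.random() < 0.6 and not (pull and buffer >= buffer_cap):
            buffer += 1
        if buffer > 0 and rng.random() < 0.5:
            buffer -= 1
            finished += 1
        wip_total += buffer
    return finished, wip_total / steps
```

In this sketch the slower second station is the bottleneck, so capping the buffer sharply reduces mean WIP while leaving throughput roughly unchanged, which is the qualitative effect the paper measures.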
Abstract:
IEEE 802.11p is the new standard for Inter-Vehicular Communications (IVC) using the 5.9 GHz frequency band, as part of the DSRC framework; it will enable applications based on Cooperative Systems. Simulation is widely used to estimate or verify the potential benefits of such cooperative applications, notably in terms of safety for drivers. We have developed a performance model for 802.11p that can be used by simulations of cooperative applications (e.g. collision avoidance) without requiring intricate models of the whole IVC stack. Instead, it provides a straightforward yet realistic model of IVC performance. Our model uses data from extensive field trials to infer the correlation between speed, distance and performance metrics such as maximum range, latency and frame loss. We then improve this model to limit the number of profiles that have to be generated when there are more than a few emitter-receiver pairs in a given location. Our model generates realistic performance figures for rural or suburban environments among small groups of IVC-equipped vehicles and roadside units.
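The profile-based idea can be sketched as a lookup from distance band to performance figures; the band edges, loss probabilities and latencies below are invented placeholders, not the values inferred from the paper's field trials:

```python
import bisect
import random

# Hypothetical distance-band profiles for an 802.11p link; these placeholder
# numbers are NOT the values measured in the paper's field trials.
DISTANCE_EDGES_M = [100.0, 300.0, 600.0]      # band upper edges in metres
FRAME_LOSS       = [0.02, 0.10, 0.35, 1.00]   # loss probability per band
LATENCY_MS       = [1.0, 1.5, 3.0, float("inf")]

def link_performance(distance_m):
    """Return (frame loss probability, latency in ms) for a given distance."""
    band = bisect.bisect(DISTANCE_EDGES_M, distance_m)
    return FRAME_LOSS[band], LATENCY_MS[band]

def frame_delivered(distance_m, rng=random):
    """Draw a single frame delivery outcome from the profile."""
    loss, _ = link_performance(distance_m)
    return rng.random() >= loss
```

An application-level simulator can call `frame_delivered` per message instead of simulating the whole radio stack, which is the trade-off the paper argues for; the full model additionally conditions the profiles on speed.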
Abstract:
The University of Queensland (UQ) has extensive laboratory facilities associated with each course in the undergraduate electrical engineering program. The laboratories include machines and drives, power systems simulation, power electronics and intelligent equipment diagnostics. A number of postgraduate coursework programs are available at UQ, and the courses associated with these programs also use laboratories. The machine laboratory is currently being renovated with i-lab style web-based experimental facilities that can be accessed remotely. Senior-level courses use independent projects based on the laboratory facilities, and this has been found to be very useful for improving students' learning skills. Laboratory experiments are always an integral part of a course. Most of the experiments are conducted in groups of 2-3 students, while thesis projects in the BE and major projects in the ME are always individual work. Assessment is carried out in class on performance as well as on the report and analysis.
Abstract:
Computer simulation is a versatile and commonly used tool for the design and evaluation of systems with different degrees of complexity. Power distribution systems and electric railway networks are areas to which computer simulation is being heavily applied. A dominant factor in evaluating the performance of a software simulator is its processing time, especially in the case of real-time simulation. Parallel processing provides a viable means of reducing the computing time and is therefore suitable for building real-time simulators. In this paper, we present different issues related to solving the power distribution system with parallel computing based on a multiple-CPU server, concentrating in particular on the speedup performance of such an approach.
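Speedup on a multiple-CPU server is conventionally bounded by Amdahl's law; a tiny helper makes the limit concrete (the 90% parallel fraction used below is purely illustrative, not a figure from the paper):

```python
def amdahl_speedup(parallel_fraction, n_cpus):
    """Ideal speedup when only parallel_fraction of the work parallelises
    perfectly across n_cpus; the remaining fraction stays serial."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cpus)
```

For example, a power-network solve that is 90% parallelisable gains at most a factor of ten regardless of how many CPUs the server offers, which is why the paper's speedup measurements matter for real-time feasibility.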
Abstract:
Different international plant protection organisations advocate different schemes for conducting pest risk assessments. Most of these schemes use a structured questionnaire in which experts are asked to score several items using an ordinal scale. The scores are then combined using a range of procedures, such as simple arithmetic means, weighted averages, multiplication of scores, and cumulative sums. The most useful schemes will correctly identify harmful pests and screen out those that are not. As the quality of a pest risk assessment can depend on the characteristics of the scoring system used by the risk assessors (i.e., on the number of points of the scale and on the method used for combining the component scores), it is important to assess and compare the performance of different scoring systems. In this article, we propose a new method for assessing scoring systems. Its principle is to simulate virtual data using a stochastic model and then to estimate sensitivity and specificity values for different scoring systems from these data. The interest of our approach is illustrated in a case study in which several scoring systems were compared. Data for this analysis were generated using a probabilistic model describing the pest introduction process. The generated data were then used to simulate the outcome of scoring systems and to assess the accuracy of the decisions about positive and negative introduction. The results showed that ordinal scales with at most 5 or 6 points were sufficient and that multiplication-based scoring systems performed better than their sum-based counterparts. The proposed method could be used in the future to assess a great diversity of scoring systems.
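The simulate-then-evaluate idea can be sketched as follows. The toy generative model below (harmful pests merely score in the top half of the scale more often) stands in for the paper's pest-introduction model, and all probabilities and thresholds are hypothetical:

```python
import math
import random

def assess(combine, threshold, n_pests=4000, n_items=4, scale=5, seed=7):
    """Estimate sensitivity and specificity of one scoring system on virtual
    data. Each virtual pest is truly harmful or not; item scores for harmful
    pests land in the top half of the 1..scale ordinal range more often.
    combine selects the aggregation rule: 'sum' or 'product'."""
    rng = random.Random(seed)
    tp = fn = tn = fp = 0
    for _ in range(n_pests):
        harmful = rng.random() < 0.5
        p_high = 0.7 if harmful else 0.3          # chance an item scores high
        scores = [rng.randint(scale // 2 + 1, scale) if rng.random() < p_high
                  else rng.randint(1, scale // 2)
                  for _ in range(n_items)]
        agg = sum(scores) if combine == "sum" else math.prod(scores)
        flagged = agg >= threshold
        tp += harmful and flagged
        fn += harmful and not flagged
        fp += (not harmful) and flagged
        tn += (not harmful) and not flagged
    return tp / (tp + fn), tn / (tn + fp)         # sensitivity, specificity
```

Running `assess` over a grid of scale sizes, aggregation rules and thresholds is the shape of the comparison the article performs with its more realistic introduction model.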
Abstract:
On the road, near-collision events (also called close calls or near-miss incidents) largely outnumber actual crashes, yet most of them can never be recorded by current traffic data collection technologies or crash analysis tools. The analysis of near-collision data is an important step in the process of reducing the crash rate. Several studies have investigated near collisions; to our knowledge, this is the first study that uses the functionalities provided by cooperative vehicles to collect near-miss information. We use the VISSIM traffic simulator and a custom C++ engine to simulate cooperative vehicles and their ability to detect near-collision events. Our results showed that, within a simple simulated environment, adequate information on near-collision events can be collected using the functionalities of cooperative perception systems. The relationship between the ratio of detected events and the ratio of equipped vehicles was shown to closely follow a square law, and the largest source of non-detection was packet loss rather than packet delays or GPS imprecision.
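The square law has a simple combinatorial reading: a near collision involves two vehicles, and cooperative detection requires both to be equipped, so the detected fraction scales roughly as the equipped fraction squared. A Monte-Carlo sketch under that independence assumption (deliberately ignoring the packet loss and GPS error that the study shows also matter):

```python
import random

def detection_ratio(equip_ratio, n_events=50_000, seed=3):
    """Fraction of simulated near-collision events detected when each of the
    two involved vehicles is independently equipped with probability
    equip_ratio. Packet loss and GPS error are ignored in this sketch."""
    rng = random.Random(seed)
    detected = sum(1 for _ in range(n_events)
                   if rng.random() < equip_ratio and rng.random() < equip_ratio)
    return detected / n_events
```

At a 50% equipment rate this yields roughly 25% of events detected, i.e. the equipped ratio squared, matching the shape of the relationship the simulations report.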