940 results for simulation model
Abstract:
The finding that Pareto distributions adequately model Internet packet interarrival times has motivated methods for evaluating steady-state performance measures of Pareto/D/1/k queues. Some limited analytical derivations for such queue models have been proposed in the literature, but their solutions are often mathematically challenging. To overcome these limitations, simulation tools that can deal with general queueing systems must be developed. Despite certain limitations, simulation algorithms provide a mechanism to obtain insight and good numerical approximations to queue parameters. In this work, we give an overview of some of these methods and compare them with our simulation approach, which is suited to solving queues with Generalized-Pareto interarrival time distributions. The paper discusses the properties and use of the Pareto distribution. We propose a real-time trace simulation model for estimating the steady-state probability (showing the tail-raising effect), loss probability and delay of the Pareto/D/1/k queue, and compare it with the M/D/1/k queue. The background on Internet traffic helps to carry out the evaluation correctly. This model can be used to study long-tailed queueing systems. We close the paper with some general comments and thoughts about future work.
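As a hedged illustration of this kind of estimation (not the authors' implementation), the following Python sketch estimates the loss probability of a Pareto/D/1/k-style queue by drawing Generalized-Pareto interarrival times via inverse-transform sampling; all parameter values are illustrative assumptions.

```python
import numpy as np

def gpd_interarrivals(n, shape, scale, rng):
    """Generalized-Pareto interarrival times via inverse-transform sampling."""
    u = rng.random(n)
    if abs(shape) < 1e-12:               # shape -> 0 reduces to the exponential case
        return -scale * np.log1p(-u)
    return scale / shape * ((1.0 - u) ** (-shape) - 1.0)

def pareto_d_1_k_loss(n_arrivals, shape, scale, service, k, seed=0):
    """Estimate loss probability of a GPD/D/1/k queue (single server, FIFO, capacity k)."""
    rng = np.random.default_rng(seed)
    arrivals = np.cumsum(gpd_interarrivals(n_arrivals, shape, scale, rng))
    departures = []                      # departure times of customers currently in system
    lost = 0
    for t in arrivals:
        departures = [d for d in departures if d > t]   # purge completed customers
        if len(departures) >= k:         # system (queue + server) full: arrival is lost
            lost += 1
            continue
        start = max(t, departures[-1]) if departures else t
        departures.append(start + service)
    return lost / n_arrivals

# Illustrative run: heavy-tailed interarrivals, unit service time, system capacity 10
print(pareto_d_1_k_loss(200_000, shape=0.4, scale=0.7, service=1.0, k=10))
```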
Abstract:
A new mesoscale simulation model for solids dissolution based on a computationally efficient and versatile digital modelling approach (DigiDiss) is considered and validated against analytical solutions and published experimental data for simple geometries. As the digital model is specifically designed to handle irregular shapes and complex multi-component structures, its use is explored for single crystals (sugars) and clusters. Single crystals and a cluster were first scanned using X-ray microtomography to obtain a digital version of their structures. The digitised particles and clusters were then used as structural input to the digital simulation. The same particles were subsequently dissolved in water, and the dissolution process was recorded by a video camera and analysed, yielding the overall dissolution times and images of particle size and shape during dissolution. The results demonstrate the ability of the simulation method to reproduce experimental behaviour based on known chemical and diffusion properties of the constituent phases. The paper discusses how further refinements of the modelling approach will need to include other important effects, such as complex disintegration effects (particle ejection, uncertainties in chemical properties). The digital modelling approach is well suited to future implementation with high-speed computation using hybrid conventional (CPU) and graphics processor (GPU) systems.
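For intuition about the "analytical solutions for simple geometries" baseline only (this is not the DigiDiss voxel model), a minimal shrinking-sphere dissolution sketch with a Noyes-Whitney-type surface flux is given below; all property values are assumed placeholders rather than measured data.

```python
import numpy as np

def dissolve_sphere(r0, rho, k_m, c_s, v_bath, dt=0.01, t_max=600.0):
    """Shrinking-sphere dissolution with a Noyes-Whitney-type surface flux.

    r0     initial particle radius [m]
    rho    solid density [kg/m^3]
    k_m    mass-transfer coefficient [m/s]
    c_s    solubility [kg/m^3]
    v_bath volume of the well-stirred dissolution medium [m^3]
    Returns (time array, radius array).
    """
    r, c_bulk, dissolved, t = r0, 0.0, 0.0, 0.0
    ts, rs = [0.0], [r0]
    while r > 0 and t < t_max:
        area = 4.0 * np.pi * r ** 2
        flux = k_m * max(c_s - c_bulk, 0.0)        # kg / (m^2 s)
        dm = flux * area * dt                      # mass dissolved this step
        r = max(r - dm / (rho * area), 0.0)        # radius shrinkage
        dissolved += dm
        c_bulk = dissolved / v_bath                # bulk concentration builds up
        t += dt
        ts.append(t); rs.append(r)
    return np.array(ts), np.array(rs)

# Illustrative sugar-like parameters (assumed, not measured values)
t, r = dissolve_sphere(r0=0.5e-3, rho=1590.0, k_m=2e-5, c_s=800.0, v_bath=2e-4)
print(f"dissolution time ~ {t[-1]:.1f} s")
```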
Abstract:
A novel simulation model for pyrolysis processes of lignocellulosic biomass in Aspen Plus® was presented at the BC&E 2013. Based on kinetic reaction mechanisms, the simulation calculates product compositions and yields depending on reactor conditions (temperature, residence time, flue gas flow rate) and feedstock composition (biochemical composition, atomic composition, ash and alkali metal content). The simulation model was found to show good agreement with existing publications. In order to further verify the model, our own pyrolysis experiments are performed in a 1 kg/h continuously fed fluidized bed fast pyrolysis reactor. Two types of biomass with different characteristics are processed in order to evaluate the influence of the feedstock composition on the yields of the pyrolysis products and their composition. One woody and one straw-like feedstock are used because of their different characteristics. Furthermore, the temperature response of yields and product compositions is evaluated by varying the reactor temperature between 450 and 550 °C for one of the feedstocks. The yields of the pyrolysis products (gas, oil, char) are determined and their detailed composition is analysed. The experimental runs are reproduced with the corresponding reactor conditions in the Aspen Plus model, and the results are compared with the experimental findings.
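As a rough sketch of the kinetic-mechanism idea (not the Aspen Plus® model itself), a lumped competing-reaction scheme, biomass -> gas / oil / char, with assumed Arrhenius parameters can be integrated to show how product yields shift with reactor temperature; the rate constants below are illustrative only.

```python
import numpy as np

R = 8.314  # J/(mol K)

# Assumed lumped Arrhenius parameters (A [1/s], Ea [J/mol]); illustrative only
CHANNELS = {"gas": (1.4e4, 8.86e4), "oil": (4.1e6, 1.127e5), "char": (7.4e3, 1.065e5)}

def pyrolysis_yields(temp_c, residence_s=2.0, dt=1e-3):
    """Integrate first-order competing reactions biomass -> gas/oil/char (Euler steps)."""
    T = temp_c + 273.15
    k = {p: A * np.exp(-Ea / (R * T)) for p, (A, Ea) in CHANNELS.items()}
    biomass, products = 1.0, {p: 0.0 for p in CHANNELS}
    for _ in range(int(residence_s / dt)):
        for p, kp in k.items():
            dx = kp * biomass * dt
            products[p] += dx
            biomass -= dx
    products["unconverted"] = biomass
    return products

for temp in (450, 500, 550):
    print(temp, {p: round(y, 3) for p, y in pyrolysis_yields(temp).items()})
```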
Abstract:
Our aim was to approach an important and readily investigable phenomenon, connected to a relatively simple but real field situation, in such a way that the results of field observations could be directly compared with the predictions of a simulation model-system using a simple mathematical apparatus, and simultaneously to obtain a hypothesis-system that creates the theoretical opportunity for a later experimental series of studies. As the phenomenon of study, we chose the seasonal coenological changes of an aquatic and semiaquatic Heteroptera community. Based on the observed data, we developed an ecological model-system that is suitable for generating realistic patterns closely resembling the observed temporal patterns, with the help of which predictions can be given for alternative climatic situations not experienced before (e.g. climate change), and which can furthermore simulate experimental circumstances. The stable coenological state-plane, constructed on the principle of indirect ordination, is suitable for the unified handling of monitoring and simulation data series, and also allows their comparison. On the state-plane, deviations between empirical and model-generated data can be observed and analysed that might otherwise remain hidden.
Abstract:
This research is based on the premises that teams can be designed to optimize their performance, and that appropriate team coordination is a significant factor in team outcome performance. Contingency theory argues that the effectiveness of a team depends on the right fit of the team design factors to the particular job at hand. Therefore, organizations need computational tools capable of predicting the performance of different team configurations. This research created an agent-based model of teams called the Team Coordination Model (TCM). The TCM estimates the coordination load and performance of a team based on its composition, coordination mechanisms, and the job's structural characteristics. The TCM can be used to determine the team design characteristics that most likely lead the team to achieve optimal performance. The TCM is implemented as an agent-based discrete-event simulation application built using Java and the Cybele Pro agent architecture. The model implements the effect of individual team design factors on team processes, but the resulting performance emerges from the behavior of the agents. These team member agents use decision making and explicit and implicit mechanisms to coordinate the job. The model validation included comparing the TCM's results with statistics from a real team and with the results predicted by the team performance literature. An illustrative 2^(6-1) fractional factorial experimental design demonstrates the application of the simulation model to the design of a team. The results from the ANOVA analysis have been used to recommend the combination of levels of the experimental factors that minimizes the completion time for a team that runs sailboat races. The main contribution of this research to the team modeling literature is a model capable of simulating teams working in complex job environments. The TCM implements a stochastic job structure model capable of capturing some of the complexity not captured by current models. In a stochastic job structure, the tasks required to complete the job change while the team executes the job. This research proposes three new types of dependencies between tasks required to model a job as a stochastic structure: conditional sequential, single-conditional sequential, and merge dependencies.
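To make the experimental-design step concrete, a 2^(6-1) fractional factorial design can be generated from five base factors plus one aliased factor (defining relation I = ABCDEF, so F = ABCDE); the generic factor names below are hypothetical placeholders, not the study's actual team design factors.

```python
from itertools import product

# Hypothetical factor labels; the study's actual factors are not listed here
factors = ["A", "B", "C", "D", "E", "F"]

def fractional_factorial_2_6_1():
    """Generate the 32 runs of a 2^(6-1) design with defining relation I = ABCDEF."""
    runs = []
    for levels in product((-1, 1), repeat=5):        # full factorial in A..E
        a, b, c, d, e = levels
        f = a * b * c * d * e                        # alias F = ABCDE
        runs.append(dict(zip(factors, (a, b, c, d, e, f))))
    return runs

design = fractional_factorial_2_6_1()
print(len(design), "runs; first run:", design[0])
```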
Abstract:
The goal of mangrove restoration projects should be to improve the community structure and ecosystem function of degraded coastal landscapes. This requires the ability to forecast how mangrove structure and function will respond to prescribed changes in site conditions, including hydrology, topography, and geophysical energies. There are global, regional, and local factors that can explain gradients of regulators (e.g., salinity, sulfides), resources (nutrients, light, water), and hydroperiod (frequency and duration of flooding) that collectively account for stressors resulting in diverse patterns of mangrove properties across a variety of environmental settings. Simulation models of hydrology, nutrient biogeochemistry, and vegetation dynamics have been developed to forecast patterns in mangroves in the Florida Coastal Everglades. These models provide insight into mangrove responses to specific restoration alternatives, testing causal mechanisms of system degradation. We propose that these models can also assist in selecting performance measures for monitoring programs that evaluate project effectiveness. This selection process in turn improves model development and calibration for forecasting mangrove response to restoration alternatives. Hydrologic performance measures include soil regulators, particularly soil salinity, surface topography of the mangrove landscape, and hydroperiod, including both the frequency and duration of flooding. Estuarine performance measures should include bay salinity, tidal amplitude, and conditions of freshwater discharge (reflected in the salinity value). The most important performance measures from the mangrove biogeochemistry model include soil resources (bulk density, total nitrogen, and phosphorus) and soil accretion. Mangrove ecology performance measures should include forest dimension analysis (transects and/or plots), sapling recruitment, leaf area index, and faunal relationships. Estuarine ecology performance measures should include the habitat function of mangroves, which can be evaluated with the growth rate of key species, habitat suitability analysis, isotope abundance of indicator species, and bird censuses. The list of performance measures can be modified according to the model output that is used to define the scientific goals during the restoration planning process, reflecting the specific goals of the project.
Abstract:
Increased pressure to control costs and increased competition have prompted health care managers to look for tools to operate their institutions effectively. This research sought a framework for the development of a Simulation-Based Decision Support System (SB-DSS) to evaluate operating policies. A prototype of this SB-DSS was developed. It incorporates a simulation model that uses real or simulated data. Emergency room (ER) decisions have been categorized and, for each one, an implementation plan has been devised. Several issues of integrating heterogeneous tools have been addressed. The prototype revealed that simulation can truly be used in this environment in a timely fashion because the simulation model has been complemented with a series of decision-making routines. These routines use a hierarchical approach to organize the various scenarios under which the model may run and to partially reconfigure the ARENA model at run time. Hence, the SB-DSS tailors its responses to each node in the hierarchy.
Abstract:
The optimization of the timing parameters of traffic signals provides for efficient operation of traffic along a signalized transportation system. Optimization tools with macroscopic simulation models have been used to determine optimal timing plans. These plans have, in some cases, been evaluated and fine-tuned using microscopic simulation tools. A number of studies show inconsistencies between the results of optimization tools based on macroscopic simulation and the results obtained from microscopic simulation, and no attempts have been made to determine the reasons behind these inconsistencies. This research investigates whether adjusting the parameters of macroscopic simulation models to correspond to the calibrated microscopic simulation model parameters can reduce these inconsistencies. The adjusted parameters include platoon dispersion model parameters, saturation flow rates, and cruise speeds. The results from this work show that adjusting cruise speeds and saturation flow rates can significantly improve the optimization/macroscopic simulation results, as assessed by microscopic simulation models.
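One of the macroscopic parameters mentioned above, platoon dispersion, is commonly modelled with Robertson's recursive formula; the sketch below (using the usual TRANSYT-style default coefficients as assumptions) shows how an upstream flow profile is smoothed before it reaches a downstream point.

```python
def platoon_dispersion(upstream, travel_steps, alpha=0.35, beta=0.8):
    """Robertson's platoon dispersion model.

    upstream     list of flows per time step at the upstream stop line
    travel_steps cruise travel time to the downstream point, in time steps
    alpha, beta  dispersion coefficients (TRANSYT-style defaults, assumed here)
    """
    t = max(int(round(beta * travel_steps)), 1)    # lagged travel time
    F = 1.0 / (1.0 + alpha * beta * travel_steps)  # smoothing factor
    downstream = [0.0] * (len(upstream) + t)
    for i, q in enumerate(upstream):
        downstream[i + t] = F * q + (1.0 - F) * downstream[i + t - 1]
    return downstream

# A tight platoon released from an upstream signal, observed 10 steps downstream
platoon = [0, 0, 12, 12, 12, 12, 0, 0, 0, 0]
print([round(q, 2) for q in platoon_dispersion(platoon, travel_steps=10)])
```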
Abstract:
A novel surrogate model is proposed in lieu of Computational Fluid Dynamics (CFD) solvers for fast nonlinear aerodynamic and aeroelastic modeling. A nonlinear function is identified on selected interpolation points by a discrete empirical interpolation method (DEIM). The flow field is then reconstructed using a least-squares approximation of the flow modes extracted by proper orthogonal decomposition (POD). The aeroelastic reduced-order model (ROM) is completed by introducing a nonlinear mapping function between displacements and the DEIM points. The proposed model is used to predict the aerodynamic forces due to forced motions of a NACA 0012 airfoil undergoing a prescribed pitching oscillation. To investigate aeroelastic problems at transonic conditions, pitch/plunge airfoil and cropped delta wing aeroelastic models are built using linear structural models. The presence of shock waves triggers the appearance of limit cycle oscillations (LCO), which the model is able to predict. For all cases tested, the new ROM replicates the nonlinear aerodynamic forces and structural displacements and reconstructs the complete flow field with sufficient accuracy at a fraction of the cost of the full-order CFD model.
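A minimal numerical sketch of the two ingredients named above, POD modes from snapshots and DEIM interpolation-point selection, is given below; it is a generic illustration with synthetic snapshot data, not the authors' ROM.

```python
import numpy as np

def pod_modes(snapshots, n_modes):
    """POD basis from a snapshot matrix (columns = flow-field snapshots)."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :n_modes]

def deim_points(U):
    """Greedy DEIM selection of interpolation indices for a basis U (n x m)."""
    n, m = U.shape
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for l in range(1, m):
        # Coefficients that reproduce the new mode at the points selected so far
        c = np.linalg.solve(U[np.ix_(idx, range(l))], U[idx, l])
        r = U[:, l] - U[:, :l] @ c          # residual of the new mode
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)

# Synthetic snapshots standing in for CFD solutions (illustrative only)
x = np.linspace(0.0, 1.0, 400)
snaps = np.array([np.sin(2 * np.pi * k * x) * np.exp(-k * x) for k in range(1, 21)]).T
basis = pod_modes(snaps, n_modes=6)
print("DEIM points:", deim_points(basis))
```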
Abstract:
An emergency lowering system for use in safety critical crane applications is discussed. The system is used to safely lower the payload of a crane in case of an electric blackout. The system is based on a backup power source, which is used to operate the crane while the regular supply is not available. The system enables both horizontal and vertical movements of the crane. Two different configurations for building the system are described, one with an uninterruptible power source (UPS) or a diesel generator connected in parallel to the crane’s power supply and one with a customized energy storage connected to the intermediate DC-link in the crane. In order to be able to size the backup power source, the power required during emergency lowering needs to be understood. A simulation model is used to study and optimize the power used during emergency lowering. The simulation model and optimizations are verified in a test hoist. Simulation results are presented with non-optimized and optimized controls for two example applications: a paper roll crane and a steel mill ladle crane. The optimizations are found to significantly reduce the required power for the crane movements during emergency lowering.
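As a back-of-the-envelope complement (not the paper's simulation model or optimization), the steady-state electrical power for simultaneous lowering and traversing can be estimated from payload mass, speeds and drive efficiency; all figures below are assumed example values.

```python
G = 9.81  # m/s^2

def emergency_lowering_power(mass_kg, lower_speed, travel_speed,
                             roll_coeff=0.01, drive_eff=0.85, regenerative=False):
    """Rough electrical power demand while lowering and traversing a payload.

    mass_kg       payload plus hook mass [kg]
    lower_speed   lowering speed [m/s]
    travel_speed  horizontal (trolley/bridge) speed [m/s]
    roll_coeff    rolling-resistance coefficient for horizontal travel
    drive_eff     combined motor and drive efficiency
    regenerative  if True, lowering feeds power back; otherwise it is dissipated
    """
    hoist_mech = mass_kg * G * lower_speed                  # power released by the load
    hoist_elec = -hoist_mech * drive_eff if regenerative else 0.0
    travel_elec = roll_coeff * mass_kg * G * travel_speed / drive_eff
    return hoist_elec + travel_elec

# Example: 25 t ladle, 0.1 m/s lowering, 0.5 m/s traverse (assumed values)
print(f"{emergency_lowering_power(25_000, 0.1, 0.5):.0f} W without regeneration")
print(f"{emergency_lowering_power(25_000, 0.1, 0.5, regenerative=True):.0f} W with regeneration")
```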
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
The purpose of this report is to present the Crossdock Door Assignment Problem, which involves assigning destinations to the outbound dock doors of crossdock centres such that the travel distance of material handling equipment is minimized. We propose a twofold solution: simulation, and optimization of the simulation model (simulation optimization). The novel aspect of our solution approach is that we intend to use simulation to derive a more realistic objective function and Memetic algorithms to find an optimal solution. The main advantage of Memetic algorithms is that they combine a local search with Genetic Algorithms. The Crossdock Door Assignment Problem is a new application domain for Memetic Algorithms, and it is as yet unknown how they will perform.
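Because neither the encoding nor the operators are specified in the abstract, the following is only an assumed minimal memetic-algorithm sketch for the door-assignment idea: a permutation maps destinations to outbound doors, fitness is flow-weighted travel distance from a fixed inbound area, and a 2-swap local search refines each offspring.

```python
import random

def travel_cost(perm, flows, dist):
    """Flow-weighted distance: destination d is served from door perm[d]."""
    return sum(flows[d] * dist[perm[d]] for d in range(len(perm)))

def local_search(perm, flows, dist):
    """Greedy 2-swap improvement (the 'memetic' local refinement)."""
    best, improved = travel_cost(perm, flows, dist), True
    while improved:
        improved = False
        for i in range(len(perm)):
            for j in range(i + 1, len(perm)):
                perm[i], perm[j] = perm[j], perm[i]
                cost = travel_cost(perm, flows, dist)
                if cost < best:
                    best, improved = cost, True
                else:
                    perm[i], perm[j] = perm[j], perm[i]   # undo non-improving swap
    return perm

def memetic(flows, dist, pop_size=30, generations=50, seed=0):
    random.seed(seed)
    n = len(flows)
    pop = [local_search(random.sample(range(n), n), flows, dist) for _ in range(pop_size)]
    for _ in range(generations):
        a, b = random.sample(pop, 2)                           # parents
        cut = random.randrange(1, n)
        child = a[:cut] + [d for d in b if d not in a[:cut]]   # order crossover
        i, j = random.sample(range(n), 2)
        child[i], child[j] = child[j], child[i]                # swap mutation
        child = local_search(child, flows, dist)
        worst = max(range(pop_size), key=lambda k: travel_cost(pop[k], flows, dist))
        if travel_cost(child, flows, dist) < travel_cost(pop[worst], flows, dist):
            pop[worst] = child                                 # steady-state replacement
    return min(pop, key=lambda p: travel_cost(p, flows, dist))

# Illustrative instance: 8 destinations, 8 outbound doors at increasing distances
flows = [40, 25, 60, 10, 35, 5, 50, 20]
dist = [d * 10 for d in range(8)]
best = memetic(flows, dist)
print(best, travel_cost(best, flows, dist))
```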
Abstract:
When designing systems that are complex, dynamic and stochastic in nature, simulation is generally recognised as one of the best design support technologies, and a valuable aid in the strategic and tactical decision making process. A simulation model consists of a set of rules that define how a system changes over time, given its current state. Unlike analytical models, a simulation model is not solved but is run, and the changes of system states can be observed at any point in time. This provides an insight into system dynamics rather than just predicting the output of a system based on specific inputs. Simulation is not a decision making tool but a decision support tool, allowing better informed decisions to be made. Due to the complexity of the real world, a simulation model can only be an approximation of the target system. The essence of the art of simulation modelling is abstraction and simplification. Only those characteristics that are important for the study and analysis of the target system should be included in the simulation model. The purpose of simulation is either to better understand the operation of a target system, or to make predictions about a target system's performance. It can be viewed as an artificial white room which allows one to gain insight, but also to test new theories and practices without disrupting the daily routine of the focal organisation. What you can expect to gain from a simulation study is well summarised by FIRMA (2000): if the theory that has been framed about the target system holds, and if this theory has been adequately translated into a computer model, then the model allows you to answer questions such as: · Which kind of behaviour can be expected under arbitrarily given parameter combinations and initial conditions? · Which kind of behaviour will a given target system display in the future? · Which state will the target system reach in the future? The required accuracy of the simulation model very much depends on the type of question one is trying to answer. To respond to the first question, the simulation model needs to be an explanatory model, which requires less data accuracy. In comparison, the simulation model required to answer the latter two questions has to be predictive in nature and therefore needs highly accurate input data to achieve credible outputs. Such predictions involve showing trends, rather than giving precise and absolute predictions of the target system's performance. The numerical results of a simulation experiment on their own are most often not very useful and need to be rigorously analysed with statistical methods. These results then need to be considered in the context of the real system and interpreted in a qualitative way to make meaningful recommendations or compile best practice guidelines. One needs a good working knowledge of the behaviour of the real system to be able to fully exploit the understanding gained from simulation experiments. The goal of this chapter is to introduce the newcomer to what we think is a valuable addition to the toolset of analysts and decision makers. We give a summary of the information we have gathered from the literature and of the experience we have gained first hand during the last five years, while obtaining a better understanding of this exciting technology. We hope that this will help you to avoid some pitfalls that we unwittingly encountered.
Section 2 is an introduction to the different types of simulation used in Operational Research and Management Science with a clear focus on agent-based simulation. In Section 3 we outline the theoretical background of multi-agent systems and their elements to prepare you for Section 4 where we discuss how to develop a multi-agent simulation model. Section 5 outlines a simple example of a multi-agent system. Section 6 provides a collection of resources for further studies and finally in Section 7 we will conclude the chapter with a short summary.
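As a foretaste of the kind of model discussed in Sections 4 and 5 (a generic toy, not the chapter's own example), the skeleton below shows the typical agent-based structure: agents with local state and a behavioural step rule, and a scheduler that activates them in discrete time; the "opinion drift" rule is an assumed placeholder behaviour.

```python
import random

class Agent:
    """An agent with a local state and a simple behavioural rule."""
    def __init__(self, name, opinion):
        self.name = name
        self.opinion = opinion          # local state: a value in [0, 1]

    def step(self, neighbours):
        # Behavioural rule: drift slightly towards a randomly met neighbour
        other = random.choice(neighbours)
        self.opinion += 0.1 * (other.opinion - self.opinion)

class Model:
    """Scheduler: activates every agent once per time step, in random order."""
    def __init__(self, n_agents):
        self.agents = [Agent(i, random.random()) for i in range(n_agents)]

    def run(self, steps):
        for t in range(steps):
            for agent in random.sample(self.agents, len(self.agents)):
                others = [a for a in self.agents if a is not agent]
                agent.step(others)
            spread = max(a.opinion for a in self.agents) - min(a.opinion for a in self.agents)
            print(f"t={t:2d}  opinion spread={spread:.3f}")

random.seed(42)
Model(n_agents=20).run(steps=10)
```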
Abstract:
Automotive producers are aiming to make their order fulfilment processes more flexible. Opening the pipeline of planned products for dynamic allocation to dealers/customers is a significant step towards greater flexibility, but the behaviour of such Virtual-Build-To-Order systems is complex to predict and their performance varies significantly as product variety levels change. This study investigates the potential for intelligent control of the pipeline feed, taking into account the current status of inventory (level and mix), the volume and mix of unsold products in the planning pipeline, and the demand profile. Five ‘intelligent’ methods for selecting the next product to be planned into the production pipeline are analysed using a discrete event simulation model and compared to an unintelligent random feed. The methods are tested under two conditions: first, when customers must be fulfilled with the exact product they request, and second, when customers accept a compromise in specification in exchange for a shorter waiting time. The two forms of customer behaviour have a substantial impact on the performance of the methods, and there are also significant differences between the methods themselves. When the producer has an accurate model of customer demand, methods that attempt to harmonise the mix in the system with the demand distribution are superior.
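To illustrate what a mix-harmonising feed rule might look like (the paper's five methods are not specified in the abstract, so this is an assumed sketch), the next product planned into the pipeline can be chosen as the variant whose share in stock plus unsold pipeline falls furthest below its demand share, with a random feed shown for comparison.

```python
import random
from collections import Counter

def random_feed(variants, stock, pipeline, demand_share):
    """Unintelligent baseline: feed the pipeline with a uniformly random variant."""
    return random.choice(variants)

def harmonising_feed(variants, stock, pipeline, demand_share):
    """Pick the variant whose current system share lags its demand share the most."""
    system = Counter(stock) + Counter(pipeline)       # unsold units in the whole system
    total = sum(system.values()) or 1
    deficit = {v: demand_share[v] - system[v] / total for v in variants}
    return max(variants, key=deficit.get)

# Illustrative state: three variants, skewed demand, stock currently biased to 'A'
variants = ["A", "B", "C"]
demand_share = {"A": 0.2, "B": 0.5, "C": 0.3}
stock = ["A", "A", "A", "B"]
pipeline = ["A", "C"]
print("random feed:     ", random_feed(variants, stock, pipeline, demand_share))
print("harmonising feed:", harmonising_feed(variants, stock, pipeline, demand_share))
```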