Abstract:
This thesis describes the design and implementation of a new dynamic simulator called DASP, a computer program package written in standard Fortran 77 for the dynamic analysis and simulation of chemical plants. Its main uses include the investigation of a plant's response to disturbances, the determination of the optimal ranges and sensitivities of controller settings, and the simulation of the startup and shutdown of chemical plants. The design and structure of the program, and a number of features incorporated into it, combine to make DASP an effective tool for dynamic simulation. It is an equation-oriented dynamic simulator, but the model equations describing the user's problem are generated from an in-built model equation library. A combination of the structuring of the model subroutines, the concept of a unit module, and the use of the connection matrix of the problem given by the user has been exploited to achieve this objective. The Executive program has a structure similar to that of a CSSL-type simulator. DASP solves a system of differential equations coupled to nonlinear algebraic equations using an advanced mixed equation solver. The strategy used in formulating the model equations makes it possible to obtain the steady-state solution of the problem using the same model equations. DASP can handle state and time events in an efficient way, including modification of the flowsheet. It is highly portable, as has been demonstrated by running it on a number of computers with only trivial modifications, and it runs on a microcomputer with 640 kByte of memory. It is a semi-interactive program: the bulk of the input data is given in pre-prepared data files, while communication with the user takes place via an interactive terminal. Using the features built into the package, the user can view or modify the values of any input data, variables and parameters in the model, and modify the structure of the flowsheet of the problem, during a simulation session. The program has been demonstrated and verified using a number of example problems.
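The coupled structure that a mixed equation solver of this kind addresses, a semi-explicit differential-algebraic system dx/dt = f(x, y) with 0 = g(x, y), can be illustrated in miniature: resolve the algebraic unknowns inside each evaluation of the differential right-hand side. The Python sketch below uses a hypothetical two-variable toy problem; DASP itself is Fortran 77 and its solver is considerably more sophisticated.

    # Semi-explicit DAE toy problem: dx/dt = f(x, y), 0 = g(x, y).
    # The algebraic variable y is resolved by a root solve inside the
    # ODE right-hand side. Illustrative only; not DASP's actual method.
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import brentq

    def g(y, x):
        # Nonlinear algebraic constraint (hypothetical equilibrium relation).
        return y**3 + y - x

    def rhs(t, x):
        y = brentq(g, -10.0, 10.0, args=(x[0],))  # resolve 0 = g(y, x)
        return [-0.5 * x[0] + y]                  # differential part f(x, y)

    sol = solve_ivp(rhs, (0.0, 10.0), [1.0], rtol=1e-8)
    print(sol.t[-1], sol.y[0, -1])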
Abstract:
In series I and II of this study ([Chua et al., 2010a] and [Chua et al., 2010b]), we discussed the time scales of granule–granule collision, droplet–granule collision and droplet spreading in Fluidized Bed Melt Granulation (FBMG). In this third part, we consider the rate at which the binder solidifies. A simple analytical solution, based on the classical formulation for conduction across a semi-infinite slab, was used to obtain a generalized equation for the binder solidification time. A multi-physics simulation package (Comsol) was used to predict the binder solidification time for various operating conditions usually considered in FBMG. The simulation results were validated against experimental temperature data obtained with a high-speed infrared camera during solidification of ‘macroscopic’ (mm scale) droplets. For the range of microscopic droplet sizes and operating conditions considered for an FBMG process, the binder solidification time was found to fall approximately between 10⁻³ and 10⁻¹ s. This is the slowest of the four major FBMG microscopic events discussed in this series, the other three being granule–granule collision, granule–droplet collision and droplet spreading.
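As a hedged illustration of the kind of closed-form estimate involved, the classical one-phase Stefan (Neumann) solution for a semi-infinite medium, which is not necessarily the exact generalized equation derived in the paper, gives a solidification front advancing as s(t) = 2λ√(αt), so a binder layer of thickness δ solidifies in roughly

    \[ t_s \approx \frac{\delta^2}{4\lambda^2\alpha}, \qquad \sqrt{\pi}\,\lambda\, e^{\lambda^2} \operatorname{erf}(\lambda) = \mathrm{Ste} = \frac{c_p\,(T_m - T_w)}{L}, \]

where α is the thermal diffusivity of the solidified binder, Ste the Stefan number, c_p the specific heat, T_m the binder melting temperature, T_w the surface temperature and L the latent heat of solidification.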
Abstract:
The high capital cost of robots prohibits their economic application. One method of making their application more economic is to increase their operating speed. This can be done in a number of ways, e.g. redesigning the robot geometry, improving the actuators and improving the control system design. In this thesis the control system design is considered. The literature review identifies two aspects of robot control system design that have not been addressed in any great detail by previous researchers: how significant are the coupling terms in the dynamic equations of the robot, and what is the effect of the coupling terms on the performance of a number of typical independent axis control schemes? The work in this thesis addresses these two questions in detail. A program was designed to automatically calculate the path and trajectory, and to calculate the significance of the coupling terms, in an example application of a robot manipulator tracking a part on a moving conveyor. The inertial and velocity coupling terms were shown to be significant when the manipulator was considered to be directly driven. A simulation of the robot manipulator following the planned trajectory was established in order to assess the performance of the independent axis control strategies. The inertial coupling was shown to reinforce the control torque at the corner points of the trajectory, where there was an abrupt demand in acceleration in each axis but of opposite sign. This reduced the tracking error; however, the effect was not controllable. A second effect was due to the velocity coupling terms: at high trajectory speeds it was shown, by means of a root locus analysis, that the velocity coupling terms caused the system to become unstable.
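To make the coupling terms concrete, the sketch below evaluates the textbook rigid-body dynamics of a planar two-link arm, M(q)q̈ + C(q, q̇)q̇ + g(q) = τ, and separates out the inertial (off-diagonal mass matrix) and velocity (Coriolis/centrifugal) coupling contributions. The link parameters are invented, and this generic model stands in for, rather than reproduces, the manipulator studied in the thesis.

    # Planar two-link arm: M(q) qdd + C(q, qd) qd + g(q) = tau.
    # Standard textbook model with hypothetical parameters, used only to
    # show the relative size of the coupling torques.
    import numpy as np

    m1, m2 = 10.0, 5.0              # link masses [kg] (assumed)
    l1, lc1, lc2 = 0.5, 0.25, 0.2   # link 1 length, mass-centre offsets [m]
    I1, I2 = 0.4, 0.2               # link inertias [kg m^2] (assumed)
    grav = 9.81

    def coupling_torques(q, qd, qdd):
        c2, s2 = np.cos(q[1]), np.sin(q[1])
        m12 = m2 * (lc2**2 + l1 * lc2 * c2) + I2
        M = np.array([[m1*lc1**2 + m2*(l1**2 + lc2**2 + 2*l1*lc2*c2) + I1 + I2, m12],
                      [m12, m2*lc2**2 + I2]])
        h = -m2 * l1 * lc2 * s2
        C = np.array([[h*qd[1], h*(qd[0] + qd[1])],
                      [-h*qd[0], 0.0]])
        gvec = np.array([(m1*lc1 + m2*l1)*grav*np.cos(q[0])
                         + m2*lc2*grav*np.cos(q[0] + q[1]),
                         m2*lc2*grav*np.cos(q[0] + q[1])])
        tau_total = M @ qdd + C @ qd + gvec
        tau_inertial = (M - np.diag(np.diag(M))) @ qdd  # off-diagonal inertia
        tau_velocity = C @ qd                           # Coriolis/centrifugal
        return tau_total, tau_inertial, tau_velocity

    tau, tau_in, tau_vel = coupling_torques(
        np.array([0.3, 0.8]), np.array([2.0, -2.0]), np.array([5.0, -5.0]))
    print(tau, tau_in, tau_vel)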
Abstract:
The state of the art in productivity measurement and analysis shows a gap between simple methods having little relevance in practice and sophisticated mathematical theory which is unwieldy for strategic and tactical planning purposes, particularly at company level. This thesis extends the method of productivity measurement and analysis based on the concept of added value, appropriate to those companies in which the materials, bought-in parts and services change substantially and a number of plants and inter-related units are involved in providing components for final assembly. Reviews and comparisons of productivity measurement dealing with alternative indices and their problems have been made, and appropriate solutions put forward for productivity analysis in general and the added value method in particular. Based on this concept and method, three kinds of computerised models have been developed: two deterministic, called sensitivity analysis and deterministic appraisal, and a third, stochastic, called risk simulation. These cope with the planning of productivity and productivity growth with reference to the changes in their component variables, ranging from a single value to a class interval of values of a productivity distribution. The models are designed to be flexible and can be adjusted according to the available computer capacity, the expected accuracy and the presentation of the output. The stochastic model is based on the assumptions of statistical independence between individual variables and of normality in their probability distributions. The component variables have been forecast using polynomials of degree four. This model was tested by comparing its behaviour with that of a mathematical model using real historical data from British Leyland, and the results were satisfactory within acceptable levels of accuracy. Modifications to the model and its statistical treatment have been made as required. The results of applying these measurement and planning models to the British motor vehicle manufacturing companies are presented and discussed.
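The stochastic risk-simulation idea admits a brief illustration: sample the (assumed independent, normal) component variables and build up the distribution of an added-value productivity index. The sketch below uses one common index definition, added value over labour cost, and invented figures; the thesis's actual variable set and index definition differ in detail.

    # Monte Carlo sketch of an added-value productivity risk simulation.
    # Component variables are assumed independent and normal; all figures
    # are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    sales       = rng.normal(120.0, 8.0, n)  # GBP m (assumed)
    bought_in   = rng.normal(70.0, 6.0, n)   # materials, parts, services (assumed)
    labour_cost = rng.normal(30.0, 2.0, n)   # GBP m (assumed)

    added_value  = sales - bought_in
    productivity = added_value / labour_cost  # one common added-value index

    print("mean index:", productivity.mean())
    print("5th-95th percentiles:", np.percentile(productivity, [5, 95]))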
Abstract:
An investigation is carried out into the design of a small local computer network for eventual implementation on the University of Aston campus. Microprocessors are investigated as a possible choice for use as a node controller, for reasons of cost and reliability. Since the network will be local, high-speed lines of megabit order are proposed. After an introduction to several well-known networks, various aspects of networks are discussed, including packet switching, the functions of a node and host-node protocol. Chapter three develops the network philosophy with an introduction to microprocessors. Various organisations of microprocessors into multicomputer and multiprocessor systems are discussed, together with methods of achieving reliable computing. Chapter four presents the simulation model and its implementation as a computer program. The major modelling effort is to study the behaviour of messages queueing for access to the network and the message delay experienced on the network. Use is made of spectral analysis to determine the sampling frequency, while Exponentially Weighted Moving Averages are used for data smoothing.
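The exponentially weighted moving average used for smoothing has the simple recursive form s_t = αx_t + (1 − α)s_{t−1}; a minimal sketch with a hypothetical smoothing constant and delay samples:

    # Exponentially Weighted Moving Average over message-delay samples.
    # alpha and the data are assumed purely for illustration.
    def ewma(samples, alpha=0.2):
        smoothed, s = [], samples[0]
        for x in samples:
            s = alpha * x + (1 - alpha) * s
            smoothed.append(s)
        return smoothed

    delays_ms = [12, 15, 11, 40, 14, 13, 35, 12]  # hypothetical delays
    print(ewma(delays_ms))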
Abstract:
Predicting future need for water resources has traditionally been, at best, a crude mixture of art and science. This has prevented the evaluation of water need from being carried out in either a consistent or a comprehensive manner. This inconsistent and somewhat arbitrary approach to water resources planning led to well-publicised premature developments in the 1970s and 1980s, but privatisation of the Water Industry, including the creation of the Office of Water Services and the National Rivers Authority in 1989, turned the tide of resource planning to the point where funding of schemes and their justification by the Regulators could no longer be assumed. Furthermore, considerable areas of uncertainty were beginning to enter the debate and complicate the assessment. It was also no longer appropriate to consider that contingencies would continue to lie solely on the demand side of the equation. An inability to calculate the balance between supply and demand may mean an inability to meet standards of service or, arguably worse, an excessive provision of water resources and excessive costs to customers. The United Kingdom Water Industry Research Limited (UKWIR) Headroom project in 1998 provided a simple methodology for the calculation of planning margins. This methodology, although well received, was not, however, accepted by the Regulators as a tool sufficient to promote resource development. This thesis begins by considering the history of water resource planning in the UK, moving on to discuss events following privatisation of the water industry post-1985. The mid section of the research forms the bulk of the original work and provides a scoping exercise which reveals a catalogue of uncertainties prevalent within the supply-demand balance. Each of these uncertainties is considered in terms of materiality, scope, and whether it can be quantified within a risk analysis package. Many of the areas of uncertainty identified would merit further research. A workable, yet robust, methodology for evaluating the balance between water resources and water demands by using a spreadsheet-based risk analysis package is presented. The technique involves statistical sampling and simulation, such that samples are taken from input distributions on both the supply and demand sides of the equation and the imbalance between supply and demand is calculated in the form of an output distribution. The percentiles of the output distribution represent different standards of service to the customer. The model allows dependencies between distributions to be considered, improved estimates of uncertainties to be assessed, and the impact of uncertain solutions to any imbalance to be calculated directly. The method is considered a significant leap forward in the field of water resource planning.
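The core of the approach can be sketched in a few lines: draw samples from supply-side and demand-side input distributions, form the output distribution of the imbalance, and read percentiles off it as standards of service. All distributions and figures below are invented, and for simplicity the inputs are sampled independently, whereas the thesis's model also handles dependencies.

    # Monte Carlo sketch of a supply-demand balance. Percentiles of the
    # imbalance distribution map to standards of service. All figures
    # are hypothetical.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200_000

    deployable_output = rng.normal(500.0, 25.0, n)          # Ml/d, supply (assumed)
    outage            = rng.triangular(5.0, 15.0, 40.0, n)  # Ml/d (assumed)
    demand            = rng.normal(460.0, 30.0, n)          # Ml/d, demand (assumed)

    imbalance = (deployable_output - outage) - demand       # headroom if positive

    for p in (1, 5, 50):
        print(f"P{p:02d} imbalance: {np.percentile(imbalance, p):6.1f} Ml/d")
    print("probability of deficit:", (imbalance < 0.0).mean())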
Abstract:
SPOT simulation imagery was acquired for a test site in the Forest of Dean in Gloucestershire, U.K. These data were qualitatively and quantitatively evaluated for their potential application in forest resource mapping and management. A variety of techniques are described for enhancing the image with the aim of providing species-level discrimination within the forest. Visual interpretation of the imagery was more successful than automated classification. The heterogeneity within the forest classes, and in particular between the forest and urban classes, resulted in poor discrimination using traditional ‘per-pixel’ automated methods of classification. Different means of assessing classification accuracy are proposed. Two techniques for measuring textural variation were investigated in an attempt to improve classification accuracy. The first of these, a sequential segmentation method, was found to be beneficial. The second, a parallel segmentation method, resulted in little improvement, though this may be related to a combination of the resolution and the size of the texture extraction area. The effect on classification accuracy of combining the SPOT simulation imagery with other data types is investigated. A grid cell encoding technique was selected as most appropriate for storing digitised topographic (elevation, slope) and ground truth data. Topographic data were shown to improve species-level classification, though with sixteen classes overall accuracies were consistently below 50%. Neither sub-division into age groups nor the incorporation of principal components and a band ratio significantly improved classification accuracy. It is concluded that SPOT imagery will not permit species-level classification within forested areas as diverse as the Forest of Dean. The imagery will be most useful as part of a multi-stage sampling scheme. The use of texture analysis is highly recommended for extracting the maximum information content from the data. Incorporation of the imagery into a GIS will both aid discrimination and provide a useful management tool.
Abstract:
Atomistic Molecular Dynamics provides powerful and flexible tools for the prediction and analysis of molecular and macromolecular systems. Specifically, it provides a means by which we can measure theoretically that which cannot be measured experimentally: the dynamic time-evolution of complex systems comprising atoms and molecules. It is particularly suitable for the simulation and analysis of the otherwise inaccessible details of MHC-peptide interaction and, on a larger scale, the simulation of the immune synapse. Progress has been relatively tentative, yet the emergence of truly high-performance computing and the development of coarse-grained simulation now offer us the hope of accurately predicting thermodynamic parameters and of simulating not merely a handful of proteins but systems of thousands of protein molecules, over longer timescales, together with the cellular-scale structures they form. We exemplify this within the context of immunoinformatics.
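At its core, an MD engine advances positions and velocities with a symplectic integrator such as velocity Verlet. The sketch below integrates a one-dimensional Lennard-Jones dimer with assumed reduced-unit parameters; production biomolecular MD adds force fields, neighbour lists, thermostats and periodic boundaries on top of this same loop.

    # Velocity-Verlet integration of a 1D Lennard-Jones dimer.
    # Toy reduced-unit parameters, assumed for illustration.
    eps, sigma, mass, dt = 1.0, 1.0, 1.0, 0.005

    def force(r):
        # F = -dV/dr for V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6).
        sr6 = (sigma / r)**6
        return 24.0 * eps * (2.0 * sr6**2 - sr6) / r

    r, v = 1.3, 0.0  # separation and relative velocity (assumed start)
    f = force(r)
    for step in range(1000):
        r += v * dt + 0.5 * (f / mass) * dt**2
        f_new = force(r)
        v += 0.5 * (f + f_new) / mass * dt
        f = f_new
    print(r, v)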
Abstract:
This thesis addresses the kineto-elastodynamic analysis of a four-bar mechanism running at high speed, where all links are assumed to be flexible. First, the mechanism, at static configurations, is considered as a structure. Two methods are used to model the system, namely the finite element method (FEM) and the dynamic stiffness method. The natural frequencies and mode shapes at different positions are calculated by both methods and compared. The FEM is then used to model the mechanism running at high speed. The governing equations of motion are derived using Hamilton's principle. The equations obtained are a set of stiff ordinary differential equations with periodic coefficients. A model is developed whereby the FEM and the dynamic stiffness method are used conjointly to provide high-precision results with only one element per link. The principal concern of the mechanism designer is the behaviour of the mechanism at steady state. Few algorithms have been developed to deliver the steady-state solution without resorting to costly time-marching simulation. In this study two algorithms are developed to overcome the limitations of the existing algorithms, and their superiority is demonstrated. The notion of critical speeds is clarified and a distinction is drawn between "critical speeds", where stresses are at a local maximum, and "unstable bands", where the mechanism deflections will grow boundlessly. Floquet theory is used to assess the stability of the system. A simple method to locate the critical speeds is derived. It is shown that the critical speeds of the mechanism coincide with the local maxima of the eigenvalues of the transition matrix with respect to the rotational speed of the mechanism.
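The Floquet procedure referred to above reduces to computing the transition (monodromy) matrix of ẋ = A(t)x with A(t + T) = A(t) by integrating over one period from unit initial conditions, then examining the eigenvalue magnitudes. The sketch below uses an assumed Mathieu-type coefficient matrix as a stand-in for the mechanism's periodic equations.

    # Floquet stability check: build the transition matrix column by
    # column over one period, then test whether all multipliers lie
    # inside the unit circle. A(t) is an assumed Mathieu-type example.
    import numpy as np
    from scipy.integrate import solve_ivp

    T = 2.0 * np.pi  # period of the coefficients (assumed)

    def A(t):
        delta, eps_mod, damping = 2.0, 0.3, 0.02  # assumed values
        return np.array([[0.0, 1.0],
                         [-(delta + eps_mod * np.cos(t)), -damping]])

    def rhs(t, x):
        return A(t) @ x

    n = 2
    Phi = np.zeros((n, n))
    for j in range(n):
        x0 = np.zeros(n); x0[j] = 1.0
        sol = solve_ivp(rhs, (0.0, T), x0, rtol=1e-10, atol=1e-12)
        Phi[:, j] = sol.y[:, -1]

    multipliers = np.linalg.eigvals(Phi)
    print("Floquet multipliers:", multipliers)
    print("stable:", bool(np.all(np.abs(multipliers) < 1.0)))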
Abstract:
The conventional design of forming rolls depends heavily on the individual skill of roll designers, which is based on intuition and knowledge gained from previous work. Roll design is normally a trial-and-error procedure; however, with the progress of computer technology, CAD/CAM systems for the cold roll-forming industry have been developed. Generally, however, these CAD systems can only provide a flower pattern based on the knowledge obtained from previously successful flower patterns. In the production of ERW (Electric Resistance Welded) tube and pipe, the need for a theoretical simulation of the roll-forming process, which can not only predict the occurrence of edge buckling but also determine the optimum forming conditions, has been recognised. A new simulation system named "CADFORM" has been devised that can carry out a consistent forming simulation for this tube-making process. The CADFORM system applies an elastic-plastic stress-strain analysis and evaluates edge buckling using a simplified model of the forming process; the results can also be visualised graphically. The calculated longitudinal strain is obtained by considering the deformation of the lateral elements and takes into account the reduction in strain due to the fin-pass roll. These calculated strains correspond well with the experimental results. Using the calculated strains, the stresses in the strip can be estimated. The addition of the fin-pass roll reduction significantly reduces the longitudinal compressive stress and therefore effectively suppresses edge buckling. If the calculated longitudinal stress is controlled, by altering the forming flower pattern so that it does not exceed the buckling stress of the material, then the occurrence of edge buckling can be avoided. CADFORM thus predicts the occurrence of edge buckling of the strip in tube-making and uses this information to suggest an appropriate flower pattern and forming conditions that will suppress it.
Abstract:
Biomass-To-Liquid (BTL) is one of the most promising low-carbon processes available to support the expanding transportation sector. This multi-step process produces hydrocarbon fuels from biomass, the so-called “second generation biofuels” that, unlike first generation biofuels, can make use of a wider range of biomass feedstocks than just plant oils and sugar/starch components. A BTL process based on gasification has yet to be commercialized. This work focuses on the techno-economic feasibility of nine BTL plants. The scope was limited to hydrocarbon products, as these can be readily incorporated and integrated into conventional markets and supply chains. The evaluated BTL systems were based on pressurised oxygen gasification of wood biomass or bio-oil, and they were characterised by different fuel synthesis processes including Fischer-Tropsch synthesis, the Methanol to Gasoline (MTG) process and the Topsoe Integrated Gasoline (TIGAS) synthesis. This was the first time that these three fuel synthesis technologies were compared in a single, consistent evaluation. The selected process concepts were modelled using the process simulation software IPSEpro to determine mass balances, energy balances and product distributions. For each BTL concept, a cost model was developed in MS Excel to estimate capital, operating and production costs. An uncertainty analysis, based on the Monte Carlo statistical method, was also carried out to examine how uncertainty in the input parameters of the cost model could affect its output (i.e. production cost). This was the first time that an uncertainty analysis was included in a published techno-economic assessment study of BTL systems. It was found that bio-oil gasification cannot currently compete with solid biomass gasification, due to the lower efficiencies and higher costs associated with the additional thermal conversion step of fast pyrolysis. Fischer-Tropsch synthesis was the most promising fuel synthesis technology for commercial production of liquid hydrocarbon fuels, since it achieved higher efficiencies and lower costs than TIGAS and MTG. None of the BTL systems were competitive with conventional fossil fuel plants. However, if the government tax take were reduced by approximately 33%, or a subsidy of £55/t dry biomass were available, transport biofuels could be competitive with conventional fuels. Large-scale biofuel production may be possible in the long term through subsidies, fuel price rises and legislation.
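The Monte Carlo treatment of cost uncertainty admits a compact illustration: propagate distributions on the cost model's inputs through to a distribution of production cost. Every parameter value and range below is an invented stand-in, not a figure from the study.

    # Monte Carlo propagation of input uncertainty through a simple
    # levelised production-cost relation. All figures are hypothetical.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000

    capex      = rng.triangular(250e6, 300e6, 400e6, n)  # GBP (assumed)
    feed_price = rng.triangular(40.0, 55.0, 80.0, n)     # GBP/t dry biomass (assumed)
    feed_rate  = 2000.0 * 330 * 0.9                      # t/yr, 90% availability (assumed)
    fuel_out   = rng.normal(150e3, 10e3, n)              # t fuel/yr (assumed)
    crf        = 0.13                                    # capital recovery factor (assumed)

    annual_cost = crf * capex + 0.04 * capex + feed_price * feed_rate
    cost_per_t  = annual_cost / fuel_out                 # GBP per tonne of fuel

    print("median production cost:", np.percentile(cost_per_t, 50))
    print("10th-90th percentiles:", np.percentile(cost_per_t, [10, 90]))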
Abstract:
This study presents some quantitative evidence, from a number of simulation experiments, on the accuracy of the productivity growth estimates derived from growth accounting (GA) and frontier-based methods (namely data envelopment analysis-, corrected ordinary least squares-, and stochastic frontier analysis-based Malmquist indices) under various conditions. These include the presence of technical inefficiency, measurement error, misspecification of the production function (for the GA and parametric approaches) and increased input and price volatility from one period to the next. The study finds that the frontier-based methods usually outperform GA, but the overall performance varies by experiment. Parametric approaches generally perform best when there is no functional form misspecification, but their accuracy diminishes greatly otherwise. The results also show that the deterministic approaches perform adequately even under conditions of (modest) measurement error; when measurement error becomes larger, the accuracy of all approaches (including the stochastic approaches) deteriorates rapidly, to the point that their estimates could be considered unreliable for policy purposes.
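For reference, the growth-accounting benchmark against which the frontier methods are compared takes the standard residual form (written here in its textbook two-input version, not necessarily the paper's exact specification):

    \[ \widehat{\mathrm{TFP}} \;=\; \frac{\dot{Y}}{Y} \;-\; s_K \frac{\dot{K}}{K} \;-\; s_L \frac{\dot{L}}{L}, \]

where Y is output, K and L are the capital and labour inputs, and s_K and s_L are their cost (or revenue) shares; productivity growth is whatever output growth the share-weighted input growth cannot explain.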
Abstract:
Simulation is an effective method for improving supply chain performance. However, there is limited advice available to assist practitioners in selecting the most appropriate method for a given problem, and much of the advice that does exist relies on custom and practice rather than rigorous conceptual or empirical analysis. An analysis of the different modelling techniques applied in the supply chain domain was conducted, and the three main approaches to simulation in use were identified: System Dynamics (SD), Discrete Event Simulation (DES) and Agent Based Modelling (ABM). This research has examined these approaches in two stages. First, a first-principles analysis was carried out to challenge the received wisdom about their strengths and weaknesses, and a series of propositions was developed from this initial analysis. The second stage used the case study approach to test these propositions and to provide further empirical evidence to support their comparison. The contributions of this research are both to knowledge and to practice. In terms of knowledge, this research is the first holistic cross-paradigm comparison of the three main approaches in the supply chain domain. The case studies involved building ‘back-to-back’ models of the same supply chain problem using SD and a discrete approach (either DES or ABM). This has led to contributions concerning the limitations of applying SD to operational problem types. SD has also been found to carry risks when applied to strategic and policy problems. Discrete methods have been found to have potential for exploring strategic problem types, and it has been shown that discrete simulation methods can model material and information feedback successfully. Further insights have been gained into the relationship between modelling purpose and modelling approach. In terms of practice, the findings have been summarised in the form of a framework linking modelling purpose, problem characteristics and simulation approach.
Abstract:
Computational Fluid Dynamics (CFD) has found great acceptance among the engineering community as a tool for the research and design of processes that are practically difficult or expensive to study experimentally. One of these processes is biomass gasification in a Circulating Fluidized Bed (CFB). Biomass gasification is the thermo-chemical conversion of biomass at high temperature and a controlled oxygen amount into fuel gas, also sometimes referred to as syngas. A circulating fluidized bed is a type of reactor in which it is possible to maintain a stable and continuous circulation of solids in a gas-solid system. The main objectives of this thesis are fourfold: (i) to develop a three-dimensional predictive model of biomass gasification in a CFB riser using advanced CFD; (ii) to experimentally validate the developed hydrodynamic model using conventional and advanced measuring techniques; (iii) to study the complex hydrodynamics, heat transfer and reaction kinetics through modelling and simulation; and (iv) to study the CFB gasifier performance through parametric analysis and identify the optimum operating conditions that maximise product gas quality. Two different and complementary experimental techniques were used to validate the hydrodynamic model, namely pressure measurement and particle tracking. Pressure measurement is a very common and widely used technique in fluidized bed studies, while particle tracking using PEPT, which was originally developed for medical imaging, is a relatively new technique in the engineering field; it is relatively expensive and only available at a few research centres around the world. This study started with a simple poly-dispersed single solid phase and then moved to binary solid phases. The single solid phase was used for the primary validations and for eliminating unnecessary options and steps in building the hydrodynamic model. The outcomes from the primary validations were then applied to the secondary validations of the binary mixture, to avoid time-consuming computations. Studies on binary solid mixture hydrodynamics are rarely reported in the literature. In this study the binary solid mixture was modelled and validated using experimental data from both techniques mentioned above, and good agreement was achieved with both. Following the general gasification steps, the developed model has been separated into three main gasification stages: drying; devolatilization and tar cracking; and partial combustion and gasification. Drying was modelled as a mass transfer from the solid phase to the gas phase. The devolatilization and tar cracking model consists of two steps: the devolatilization of the biomass, modelled as a single reaction generating the biomass gases from the volatile materials, and tar cracking, also modelled as a single reaction generating gases with fixed mass fractions. The first reaction was classified as heterogeneous, the second as homogeneous. The partial combustion and gasification model consisted of carbon combustion reactions and carbon and gas-phase reactions. Partial combustion was considered for C, CO, H2 and CH4. The carbon gasification reactions used in this study are the Boudouard reaction with CO2, the reaction with H2O, and the methanation (methane-forming) reaction to generate methane. The other gas-phase reactions considered in this study are the water gas shift reaction, which is modelled as a reversible reaction, and the methane steam reforming reaction. The developed gasification model was validated using a variety of experimental data from the literature and over a wide range of operating conditions. Good agreement was observed, confirming that the model predicts biomass gasification in a CFB with good accuracy. The developed model has been successfully used to carry out sensitivity and parametric analyses. The sensitivity analysis covered the effect of including various combustion reactions and the effect of radiation on the gasification reactions. The developed model was also used to carry out parametric analysis by changing the following gasifier operating conditions: fuel/air ratio; biomass flow rate; sand (heat carrier) temperature; sand flow rate; sand and biomass particle sizes; gasifying agent (pure air or pure steam); pyrolysis model used; and steam/biomass ratio. Finally, based on these parametric and sensitivity analyses, a final model was recommended for the simulation of biomass gasification in a CFB riser.
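For reference, the reactions named above have these standard stoichiometries (conventional forms; the thesis's rate expressions are not reproduced here):

    Boudouard:               C + CO2   -> 2 CO
    Steam gasification:      C + H2O   -> CO + H2
    Methanation:             C + 2 H2  -> CH4
    Water gas shift:         CO + H2O <-> CO2 + H2
    Methane steam reforming: CH4 + H2O -> CO + 3 H2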
Abstract:
The aim of this research was to investigate the molecular interactions occurring in the formulation of non-ionic surfactant based vesicles composed of monopalmitoyl glycerol (MPG), cholesterol (Chol) and dicetyl phosphate (DCP). In the formulation of these vesicles, the thermodynamic attributes and surfactant interactions were investigated by molecular dynamics, Langmuir monolayer studies, differential scanning calorimetry (DSC), hot stage microscopy and thermogravimetric analysis (TGA). Initially the melting points of the components, individually and combined at a 5:4:1 MPG:Chol:DCP weight ratio, were investigated; the results show that temperatures lower (90 °C) than previously reported (120-140 °C) could be adopted to produce molten surfactants for the production of niosomes. This was advantageous for surfactant stability: whilst TGA studies show that the individual components were stable above 200 °C, the 5:4:1 MPG:Chol:DCP mixture showed ∼2% surfactant degradation at 140 °C, compared with 0.01% measured at 90 °C. Niosomes formed at this lower temperature offered characteristics comparable to vesicles prepared using the higher temperatures commonly reported in the literature. In the formation of niosome vesicles, cholesterol also played a key role. Langmuir monolayer studies demonstrated that intercalation of cholesterol in the monolayer did not occur in the MPG:Chol:DCP (5:4:1 weight ratio) mixture. This suggests cholesterol may support bilayer assembly, with molecular simulation studies also demonstrating that vesicles cannot be built without the addition of cholesterol, and that higher concentrations of cholesterol (5:4:1 vs 5:2:1 MPG:Chol:DCP) decrease the time required for niosome assembly.