882 results for Dynamic Modelling And Simulation
Abstract:
This paper presents an approach to the optimal design of a fully regenerative dynamic dynamometer using genetic algorithms. The proposed dynamometer system includes an energy storage mechanism that adaptively absorbs the energy variations accompanying dynamometer transients. This minimises the power electronics requirement at the mains power supply grid, which then only has to compensate for losses. The overall dynamometer is a complex dynamic system, and its design is a multi-objective problem that calls for advanced optimisation techniques such as genetic algorithms. A design and simulation case study of the dynamometer system indicates that the genetic-algorithm-based approach is able to locate the best available solution with respect to both system performance and computational cost.
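As a rough illustration of the kind of search procedure described, the sketch below is a minimal real-coded genetic algorithm that maximises a scalar fitness, which could stand in for a weighted combination of competing dynamometer design objectives. The function names, parameter values, tournament/blend/Gaussian operators and the weighted-sum treatment of objectives are all illustrative assumptions, not the authors' formulation.

```python
import random

def genetic_optimise(fitness, bounds, pop_size=40, generations=200,
                     crossover_rate=0.8, mutation_rate=0.1):
    """Minimal real-coded GA sketch: tournament selection, blend crossover,
    Gaussian mutation. `fitness` maps a design vector (e.g. storage size,
    converter rating) to a scalar score to be maximised, assumed here to be
    a weighted sum of the competing objectives."""
    def random_individual():
        return [random.uniform(lo, hi) for lo, hi in bounds]

    def clip(x):
        return [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]

    def tournament(pop):
        return max(random.sample(pop, 3), key=fitness)

    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness, reverse=True)[:2]   # keep the best two
        children = []
        while len(children) < pop_size - len(elite):
            a, b = tournament(pop), tournament(pop)
            child = [(x + y) / 2 if random.random() < crossover_rate else x
                     for x, y in zip(a, b)]
            child = [v + random.gauss(0, 0.1 * (hi - lo))
                     if random.random() < mutation_rate else v
                     for v, (lo, hi) in zip(child, bounds)]
            children.append(clip(child))
        pop = elite + children
    return max(pop, key=fitness)
```

A plausible use would be a two-element design vector (say, storage capacity and converter rating) with a fitness that rewards transient tracking and penalises grid power demand, but any such mapping is outside what the abstract specifies.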
Abstract:
The amplification of demand variation up a supply chain, widely termed 'the Bullwhip Effect', is disruptive, costly and something that supply chain management generally seeks to minimise. It was originally attributed to poor system design: deficiencies in policies, organisational structure and delays in material and information flow all lead to sub-optimal reorder point calculation. It has since been attributed to exogenous random factors such as uncertainties in demand, supply and distribution lead time, but these causes are not exclusive, as subsequent academic and operational studies have shown that orders and/or inventories can exhibit significant variability even when customer demand and lead time are deterministic. This widening range of possible causes of dynamic behaviour indicates that our understanding of the phenomenon is far from complete. One possible, yet previously unexplored, factor that may influence dynamic behaviour in supply chains is the application and operation of supply chain performance measures. Organisations monitoring and responding to their adopted key performance metrics will make operational changes, and this action may influence the level of dynamics within the supply chain, possibly degrading the performance of the very system the measures were intended to assess. To explore this, a plausible abstraction of the operational responses to the Supply Chain Council's SCOR® (Supply Chain Operations Reference) model was incorporated into a classic Beer Game distribution representation, using the dynamic discrete event simulation software Simul8. During the simulation the five SCOR Supply Chain Performance Attributes (Reliability, Responsiveness, Flexibility, Cost and Utilisation) were continuously monitored and compared with established targets. Operational adjustments to the reorder point, transportation modes and production capacity (where appropriate) were made for three independent supply chain roles, and the degree of dynamic behaviour in the supply chain was measured using the ratio of the standard deviation of upstream demand to the standard deviation of downstream demand. Factors employed to build the detailed model include variable retail demand, order transmission, transportation delays, production delays, capacity constraints, demand multipliers and demand averaging periods. Five dimensions of supply chain performance were monitored independently in three autonomous supply chain roles and operational settings adjusted accordingly. The uniqueness of this research stems from the application of the five SCOR performance attributes with modelled operational responses in a dynamic discrete event simulation model. The project makes its primary contribution to knowledge by measuring the impact, on supply chain dynamics, of applying a representative performance measurement system.
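The amplification measure referred to above, the ratio of the standard deviation of the orders a role sends upstream to the standard deviation of the demand it receives from downstream, is straightforward to compute. The sketch below is illustrative only and is not taken from the thesis's Simul8 model; the example series are hypothetical.

```python
import statistics

def bullwhip_ratio(upstream_orders, downstream_demand):
    """Demand amplification measure: standard deviation of the orders placed
    upstream divided by the standard deviation of the demand received from
    downstream. Values above 1 indicate amplification."""
    return statistics.stdev(upstream_orders) / statistics.stdev(downstream_demand)

# Hypothetical weekly series for one supply chain role:
demand = [100, 102, 98, 101, 99, 103, 97, 100]
orders = [100, 110, 85, 108, 92, 115, 80, 105]
print(bullwhip_ratio(orders, demand))   # > 1, i.e. the role amplifies variation
```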
Abstract:
Product design and sourcing decisions are among the most difficult and important of all decisions facing multinational manufacturing companies, yet the associated decision support and evaluation systems tend to be myopic in nature. Design for manufacture and assembly techniques, for example, generally focus on manufacturing capability and ignore capacity, although both should be considered. Similarly, most modelling and evaluation tools available to examine the performance of various solution and improvement techniques have a narrower scope than desired. A unique collaboration, funded by the US National Science Foundation, between researchers in the USA and the UK currently addresses these problems. This paper describes a technique known as Design For the Existing Environment (DFEE) and a holistic evaluation system based on enterprise simulation that was used to demonstrate the business benefits of DFEE in a simple product development and manufacturing case study. A project that will extend these techniques to evaluate global product sourcing strategies is described, along with the practical difficulties of building an enterprise simulation of the scale and detail required.
Abstract:
A new surface analysis technique has been developed which has a number of benefits compared to conventional Low Energy Ion Scattering Spectrometry (LEISS). A major potential advantage, arising from the absence of charge exchange complications, is the possibility of quantification. The instrumentation that has been developed also offers the possibility of unique studies concerning the interaction between low energy ions and atoms and solid surfaces. From these studies it may also be possible, in principle, to generate sensitivity factors to quantify LEISS data. The instrumentation, referred to as a Time-of-Flight Fast Atom Scattering Spectrometer (ToF-FASS), has been developed to investigate these conjectures in practice. The development involved a number of modifications to an existing instrument; it allowed samples to be bombarded with a monoenergetic pulsed beam of either atoms or ions, and provided the capability to analyse the spectra of scattered atoms and ions separately. In addition, a system was designed and constructed to allow the incident, exit and azimuthal angles of the particle beam to be varied independently. The key development was that of a pulsed, mass-filtered atom source, which was developed by a cyclic process of design, modelling and experimentation. Although it was possible to demonstrate the unique capabilities of the instrument, problems relating to surface contamination prevented the measurement of the neutralisation probabilities. However, these problems appear to be technical rather than scientific in nature, and could be readily resolved given the appropriate resources. Experimental spectra obtained from a number of samples demonstrate some fundamental differences between the scattered ion and neutral spectra. For practical non-ordered surfaces the ToF spectra are more complex than their LEISS counterparts. This is particularly true for helium scattering, where it appears, in the absence of detailed computer simulation, that quantitative analysis is limited to ordered surfaces. Despite this limitation the ToF-FASS instrument opens the way for quantitative analysis of the 'true' surface region for a wider range of surface materials.
Abstract:
The use of digital communication systems is increasing very rapidly. This is due to the lower system implementation cost compared to analogue transmission and, at the same time, the ease with which several types of data source (data, digitised speech and video, etc.) can be mixed. The emergence of packet broadcast techniques as an efficient type of multiplexing, especially with the use of contention random multiple access protocols, has led to the widespread application of these distributed access protocols in local area networks (LANs) and to their further extension to radio and mobile radio communication applications. In this research, a modified version of the distributed access contention protocol which uses the packet broadcast switching technique is proposed. Carrier sense multiple access with collision avoidance (CSMA/CA) is found to be the most appropriate protocol, with the ability to satisfy equally the operational requirements of local area networks and of radio and mobile radio applications. The suggested version of the protocol is designed so that all desirable features of its precedents are maintained, while the shortcomings are eliminated and additional features are added to strengthen its ability to work with radio and mobile radio channels. Operational performance evaluation of the protocol has been carried out for the two variants, non-persistent and slotted non-persistent, through mathematical and simulation modelling of the protocol. The results obtained from the two modelling procedures validate the accuracy of both methods, and the protocol compares favourably with its precedent, CSMA/CD (with collision detection). A further extension of the protocol operation has been suggested for multichannel systems. Two multichannel systems based on the CSMA/CA protocol for medium access are therefore proposed: the dynamic multichannel system, which is based on two types of channel selection, random choice (RC) and idle choice (IC), and the sequential multichannel system. The latter has been proposed in order to suppress the effect of the hidden terminal, which always represents a major problem in the use of contention random multiple access protocols over radio and mobile radio channels. Verification of their operational performance has been carried out using mathematical modelling for the dynamic system and simulation modelling for the sequential system. Both systems are found to improve system operation and fault tolerance when compared to single channel operation.
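For orientation only, the sketch below is a heavily simplified Monte Carlo model of slotted non-persistent access to a single shared channel. It omits the collision-avoidance machinery, propagation delay, capture effects and the multichannel extensions discussed above, and every parameter value is an assumption rather than something reported in the abstract.

```python
import random

def slotted_nonpersistent_csma(num_stations=20, arrival_prob=0.02,
                               num_slots=100_000, max_backoff=16):
    """Toy slotted non-persistent model: a backlogged station attempts at a
    slot boundary only if it is not waiting out a random backoff; exactly one
    attempt in a slot succeeds, two or more collide and all back off."""
    waiting = [0] * num_stations          # remaining backoff slots per station
    has_packet = [False] * num_stations
    successes = collisions = 0

    for _ in range(num_slots):
        for s in range(num_stations):                 # new packet arrivals
            if not has_packet[s] and random.random() < arrival_prob:
                has_packet[s] = True

        ready = [s for s in range(num_stations)
                 if has_packet[s] and waiting[s] == 0]
        for s in range(num_stations):                 # count down backoffs
            if waiting[s] > 0:
                waiting[s] -= 1

        if len(ready) == 1:
            successes += 1
            has_packet[ready[0]] = False
        elif len(ready) > 1:
            collisions += 1
            for s in ready:
                waiting[s] = random.randint(1, max_backoff)

    return successes / num_slots, collisions / num_slots

throughput, collision_rate = slotted_nonpersistent_csma()
```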
Abstract:
Using current software engineering technology, the robustness required for safety-critical software is not assurable. However, different approaches are possible which can help to assure software robustness to some extent. To achieve highly reliable software, methods should be adopted which avoid introducing faults (fault avoidance); testing should then be carried out to identify any faults which persist (error removal); and finally, techniques should be used which allow any undetected faults to be tolerated (fault tolerance). The verification of correctness in system design specification and performance analysis of the model are the basic issues in concurrent systems. In this context, modelling distributed concurrent software is one of the most important activities in the software life cycle, and communication analysis is a primary consideration in achieving reliability and safety. By and large, fault avoidance requires human analysis, which is error prone; by reducing human involvement in the tedious aspects of modelling and analysis of the software, it is hoped that fewer faults will persist into its implementation in the real-time environment. The Occam language supports concurrent programming and is a language in which interprocess interaction takes place by communication. This may lead to deadlock due to communication failure. Proper systematic methods must be adopted in the design of concurrent software for distributed computing systems if the communication structure is to be free of pathologies such as deadlock. The objective of this thesis is to provide a design environment which ensures that processes are free from deadlock. A software tool was designed and used to facilitate the production of fault-tolerant software for distributed concurrent systems. Where Occam is used as a design language, state space methods such as Petri nets can be used in analysis and simulation to determine the dynamic behaviour of the software, and to identify structures which may be prone to deadlock so that they may be eliminated from the design before the program is ever run. This design software tool consists of two parts. One takes an input program and translates it into a mathematical model (a Petri net), which is used for modelling and analysis of the concurrent software. The second part is the Petri-net simulator, which takes the translated program as its input and runs a simulation to generate the reachability tree. The tree identifies 'deadlock potential', which the user can explore further. Finally, the software tool has been applied to a number of Occam programs. Two examples are used to show how the tool works in the early design phase for fault prevention before the program is ever run.
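To make the reachability-tree idea concrete, here is a minimal sketch of breadth-first reachability analysis for a place/transition net, reporting markings in which no transition is enabled, i.e. the kind of 'deadlock potential' such a tool flags. The net encoding and the tiny example are illustrative assumptions, not the thesis's translation of Occam programs.

```python
from collections import deque

def enabled(marking, pre):
    """A transition is enabled if every input place holds enough tokens."""
    return all(marking[p] >= w for p, w in pre.items())

def fire(marking, pre, post):
    m = list(marking)
    for p, w in pre.items():
        m[p] -= w
    for p, w in post.items():
        m[p] += w
    return tuple(m)

def reachable_deadlocks(initial, transitions):
    """Breadth-first construction of the reachability set (assumes a bounded
    net); markings with no enabled transition are returned as potential
    deadlocks."""
    seen, queue, deadlocks = {initial}, deque([initial]), []
    while queue:
        m = queue.popleft()
        fired_any = False
        for pre, post in transitions:
            if enabled(m, pre):
                fired_any = True
                m2 = fire(m, pre, post)
                if m2 not in seen:
                    seen.add(m2)
                    queue.append(m2)
        if not fired_any:
            deadlocks.append(m)
    return seen, deadlocks

# Tiny example: one token in p0 can move to p1 or p2, after which nothing fires.
markings, dead = reachable_deadlocks((1, 0, 0),
                                     [({0: 1}, {1: 1}), ({0: 1}, {2: 1})])
```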
Abstract:
The objective of this work has been to investigate the principle of combined bioreaction and separation in a simulated counter-current chromatographic bioreactor-separator (SCCR-S) system. The SCCR-S system consisted of twelve 5.4 cm i.d. x 75 cm long columns packed with calcium-charged cross-linked polystyrene resin. Three bioreactions were studied, namely the saccharification of modified starch to maltose and dextrin using the enzyme maltogenase, the hydrolysis of lactose to galactose and glucose in the presence of the enzyme lactase, and the biosynthesis of dextran from sucrose using the enzyme dextransucrase. Combined bioreaction and separation was successfully carried out in the SCCR-S system for the saccharification of modified starch to maltose and dextrin. The effects of the operating parameters (switch time, eluent flowrate, feed concentration and enzyme activity) on the performance of the SCCR-S system were investigated. By using an eluent of dilute enzyme solution, starch conversions of up to 60% were achieved using less enzyme than the theoretical amount required by a conventional bioreactor to produce the same amount of maltose over the same time period. Comparing the SCCR-S system to a continuous rotating annular chromatograph (CRAC) for the saccharification of modified starch showed that the SCCR-S system required only 34.6-47.3% of the amount of enzyme required by the CRAC. The SCCR-S system was operated in batch and continuous modes as a bioreactor-separator for the hydrolysis of lactose to galactose and glucose. By operating the system in the continuous mode, the operating parameters were further investigated. During these experiments the eluent was deionised water and the enzyme was introduced into the system through the same port as the feed. The galactose produced was retarded and moved with the stationary phase, to be purged as the galactose-rich product (GalRP), while the glucose moved with the mobile phase and was collected as the glucose-rich product (GRP). By operating at lactose feed concentrations of up to 30% w/v, complete conversions were achieved using only 48% of the theoretical amount of enzyme required by a conventional bioreactor to hydrolyse the same amount of lactose over the same time period. The main operating parameters affecting the performance of the SCCR-S system operating in the batch mode were investigated and the results compared with those of the continuous operation of the SCCR-S system. During the biosynthesis of dextran in the SCCR-S system, a method of on-line regeneration of the resin was required to operate the system continuously. Complete conversion was achieved at sucrose feed concentrations of 5% w/v, with fructose-rich products (FRP) of up to 100% purity obtained. The dextran-rich products were contaminated by small amounts of glucose and levan formed during the bioreaction. Mathematical modelling and computer simulation of the SCCR-S system operating in the continuous mode for the hydrolysis of lactose has been carried out.
Abstract:
The aim of this work has been to investigate the behaviour of a continuous rotating annular chromatograph (CRAC) under a combined biochemical reaction and separation duty. Two biochemical reactions have been employed, namely the inversion of sucrose to glucose and fructose in the presence of the enzyme invertase, and the saccharification of liquefied starch to maltose and dextrin using the enzyme maltogenase. Simultaneous biochemical reaction and separation has been successfully carried out for the first time in a CRAC by inverting sucrose to fructose and glucose using the enzyme invertase and continuously collecting pure fractions of glucose and fructose from the base of the column. The CRAC was made of two concentric cylinders forming an annulus 140 cm long by 1.2 cm wide, giving an annular space of 14.5 dm3. The ion exchange resin used was an industrial-grade calcium-form Dowex 50W-X4 with a mean diameter of 150 microns. The mobile phase used was deionised, deaerated water containing the appropriate enzyme. The annular column was slowly rotated at speeds of up to 240°/h while the sucrose substrate was fed continuously through a stationary feed pipe to the top of the resin bed. A systematic investigation of the factors affecting the performance of the CRAC under simultaneous biochemical reaction and separation conditions was carried out by employing a factorial experimental procedure. The main factors affecting the performance of the system were found to be the feed rate, feed concentration and eluent rate. Results from the experiments indicated that complete conversion could be achieved for feed concentrations of up to 50% w/v sucrose and at feed throughputs of up to 17.2 kg sucrose per m3 of resin per hour. The second enzymic reaction, the saccharification of liquefied starch to maltose using the enzyme maltogenase, has also been successfully carried out in a CRAC. Results from experiments using soluble potato starch showed that conversions of up to 79% were obtained for a feed concentration of 15.5% w/v at a feed flowrate of 400 cm3/h. The maltose product obtained was over 95% pure. Mathematical modelling and computer simulation of the sucrose inversion system has been carried out. A finite difference method was used to solve the partial differential equations, and the simulation results showed good agreement with the experimental results obtained.
Abstract:
The objective of this work has been to study the behaviour and performance of a batch chromatographic column under simultaneous bioreaction and separation conditions for several carbohydrate feedstocks. Four bioreactions were chosen, namely the hydrolysis of sucrose to glucose and fructose using the enzyme invertase, the hydrolysis of inulin to fructose and glucose using inulinase, the hydrolysis of lactose to glucose and galactose using lactase, and the isomerization of glucose to fructose using glucose isomerase. The chromatographic columns employed were jacketed glass columns ranging from 1 m to 2 m long, with internal diameters ranging from 0.97 cm to 1.97 cm. The stationary phase used was a cation exchange resin (PUROLITE PCR-833) in the Ca2+ form for the hydrolysis reactions and the Mg2+ form for the isomerization reaction. The mobile phase was a dilute enzyme solution which was continuously pumped through the chromatographic bed, and the substrate was injected at the top of the bed as a pulse. The effects of the following parameters were investigated: pulse size (the amount of substrate solution introduced into the system, expressed as a percentage of the total empty column volume, %TECV), pulse concentration, eluent flowrate and the enzyme activity of the eluent. For the sucrose-invertase system, complete conversions of substrate were achieved for pulse sizes and pulse concentrations of up to 20% TECV and 60% w/v, respectively. Products with purities above 90% were obtained. The enzyme consumption was 45% of the amount theoretically required to produce the same amount of product as in a conventional batch reactor, and a throughput of 27 kg sucrose/m3 resin/h was achieved. A systematic investigation of the factors affecting the performance of the batch chromatographic bioreactor-separator was carried out by employing a factorial experimental procedure. The main factors affecting the performance of the system were the flowrate and enzyme activity. For the inulin-inulinase system, total conversions were also obtained for pulse sizes of up to 20% TECV and a pulse concentration of 10% w/v. Fructose-rich fractions with 100% purity, representing up to 99.4% of the total fructose generated, were obtained with an enzyme consumption of 32% of the amount theoretically required to produce the same amount of product in a conventional batch reactor. The hydrolysis of lactose by lactase was studied in the glass columns and also in an SCCR-S unit adapted for batch operation, in co-operation with Dr. Shieh, a fellow researcher in the Chemical Engineering and Applied Chemistry Department at Aston University. By operating at lactose feed concentrations of up to 30% w/v, complete conversions were obtained and the purities of the products generated were above 90%. An enzyme consumption of 48% of the amount theoretically required to produce the same amount of product in a conventional batch reactor was achieved. When working with the glucose-glucose isomerase system, which is a reversible reaction, the separation obtained with the stationary phase conditioned in the magnesium form was very poor, although the conversion obtained was comparable with those of conventional batch reactors. By working with a mixed pulse of enzyme and substrate, up to 82.5% of the fructose generated was recovered at 100% purity. The mathematical modelling and computer simulation of the batch chromatographic bioreaction-separation has been performed on a personal computer. A finite difference method was used to solve the partial differential equations, and the simulation results showed good agreement with the experimental results.
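As an indication of the kind of numerical scheme involved, the following sketch applies an explicit finite-difference method to a one-dimensional advection-dispersion equation with a first-order reaction sink for the liquid-phase concentration along a column. The equation form, the boundary treatment and every parameter value are illustrative assumptions, not the model equations or data used in this work.

```python
import numpy as np

def simulate_column(L=1.0, nx=200, u=1e-3, D=1e-6, k=5e-4,
                    c_feed=1.0, t_end=2000.0):
    """Explicit finite-difference sketch of dc/dt = -u dc/dx + D d2c/dx2 - k c
    for substrate concentration along a chromatographic bed (all values are
    illustrative: u = interstitial velocity, D = dispersion, k = reaction)."""
    dx = L / nx
    # conservative time step for explicit upwind advection + central diffusion
    dt = 0.5 * min(dx / u, dx * dx / (2 * D), 1.0 / k)
    c = np.zeros(nx)
    for _ in range(int(t_end / dt)):
        c_up = np.empty_like(c)
        c_up[0] = c_feed                      # inlet boundary (feed concentration)
        c_up[1:] = c[:-1]                     # upwind neighbour
        lap = np.zeros_like(c)
        lap[1:-1] = c[2:] - 2 * c[1:-1] + c[:-2]
        lap[-1] = c[-2] - c[-1]               # zero-gradient outlet
        c = c + dt * (-u * (c - c_up) / dx + D * lap / dx**2 - k * c)
    return c                                  # concentration profile at t_end

outlet_profile = simulate_column()
```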
Abstract:
The study utilized the advanced technology provided by automated perimeters to investigate the hypothesis that patients with retinitis pigmentosa behave atypically over the dynamic range, and to concurrently determine the influence of extraneous factors on the format of the normal perimetric sensitivity profile. The perimetric processing of some patients with retinitis pigmentosa was considered to be abnormal in the temporal and/or spatial domain. The standard size III stimulus saturated the central regions and was thus ineffective in detecting early depressions in sensitivity in these areas. When stimulus size was scaled in inverse proportion to the square root of ganglion cell receptive field density (M-scaled), isosensitive profiles did not result, although cortical representation was theoretically equivalent across the visual field. It was conjectured that this was due to variations in ganglion cell characteristics with increasing peripheral angle, most notably spatial summation. It was concluded that the development of perimetric routines incorporating stimulus sizes adjusted in proportion to the coverage factor of retinal ganglion cells would enhance the diagnostic capacity of perimetry. Good general and local correspondence was found between perimetric sensitivity and the available retinal cell counts. Intraocular light scatter, arising both from simulations and from media opacities, depressed perimetric sensitivity. Attenuation was greater centrally for the smaller LED stimuli, whereas the reverse was true for the larger projected stimuli. Prior perimetric experience and pupil size also demonstrated eccentricity-dependent effects on sensitivity. Practice improved perimetric sensitivity for projected stimuli at eccentricities greater than or equal to 30°, particularly in the superior region. An increase in pupil size for LED stimuli enhanced sensitivity at eccentricities greater than 10°. Conversely, microfluctuations in the accommodative response during perimetric examination and the correction of peripheral refractive error had no significant influence on perimetric sensitivity.
Abstract:
In analysing manufacturing systems, for either design or operational reasons, failure to account for the potentially significant dynamics could produce invalid results. There are many analysis techniques that can be used; however, simulation is unique in its ability to assess detailed, dynamic behaviour. The use of simulation to analyse manufacturing systems would therefore seem appropriate, if not essential. Many simulation software products are available, but their ease of use and scope of application vary greatly. This is illustrated at one extreme by simulators, which offer rapid but limited application, and at the other by simulation languages, which are extremely flexible but tedious to code. Given that a typical manufacturing engineer does not possess in-depth programming and simulation skills, the use of simulators over simulation languages would seem the more appropriate choice. Whilst simulators offer ease of use, their limited functionality may preclude their use in many applications. The construction of current simulators makes it difficult to amend or extend the functionality of the system to meet new challenges. Some simulators could even become obsolete as users demand modelling functionality that reflects the latest manufacturing system design and operation concepts. This thesis examines the deficiencies in current simulation tools and considers whether they can be overcome by the application of object-oriented principles. Object-oriented techniques have gained in popularity in recent years and are seen as having the potential to overcome many of the problems traditionally associated with software construction. There are a number of key concepts that are exploited in the work described in this thesis: the use of object-oriented techniques to act as a framework for abstracting engineering concepts into a simulation tool, and the ability to reuse and extend object-oriented software. It is argued that current object-oriented simulation tools are deficient and that, in designing such tools, object-oriented techniques should be used not just for the creation of individual simulation objects but for the creation of the complete software. This results in the ability to construct an easy-to-use simulator that is not limited by its initial functionality. The thesis presents the design of an object-oriented, data-driven simulator which can be freely extended. Discussion and work are focused on discrete parts manufacture. The system developed retains the ease of use typical of data-driven simulators whilst removing any limitation on its potential range of applications. Reference is made to additions made to the simulator by other developers not involved in the original software development. Particular emphasis is put on the requirements of the manufacturing engineer and the need for the engineer to carry out dynamic evaluations.
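In the spirit of the design argument above, the sketch below shows how an event-list engine and the modelling objects can both be written as ordinary classes, so that functionality is extended by subclassing rather than by modifying the engine. The class names, behaviour and parameters are hypothetical illustrations, not the simulator developed in the thesis.

```python
import heapq
import random

class Simulator:
    """Minimal event-list engine; data-driven models are built by creating
    entity objects and wiring them together, not by writing program code."""
    def __init__(self):
        self.now, self._events, self._seq = 0.0, [], 0

    def schedule(self, time, handler, *args):
        heapq.heappush(self._events, (time, self._seq, handler, args))
        self._seq += 1

    def run(self, until):
        while self._events and self._events[0][0] <= until:
            self.now, _, handler, args = heapq.heappop(self._events)
            handler(*args)

class Machine:
    """A machine holds each part for cycle_time and then passes it downstream."""
    def __init__(self, sim, cycle_time, downstream=None):
        self.sim, self.cycle_time, self.downstream = sim, cycle_time, downstream

    def receive(self, part):
        self.sim.schedule(self.sim.now + self.cycle_time, self.finish, part)

    def finish(self, part):
        if self.downstream is not None:
            self.downstream.receive(part)

class UnreliableMachine(Machine):
    """Extension by subclassing: scraps a fraction of parts; engine untouched."""
    def __init__(self, sim, cycle_time, scrap_rate, downstream=None):
        super().__init__(sim, cycle_time, downstream)
        self.scrap_rate = scrap_rate

    def finish(self, part):
        if random.random() > self.scrap_rate:
            super().finish(part)

# Wiring a two-machine line and feeding it ten parts:
sim = Simulator()
m2 = Machine(sim, cycle_time=3.0)
m1 = UnreliableMachine(sim, cycle_time=2.0, scrap_rate=0.05, downstream=m2)
for i in range(10):
    sim.schedule(i * 2.0, m1.receive, f"part-{i}")
sim.run(until=100.0)
```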
Abstract:
Web-based distributed modelling architectures are gaining increasing recognition as potentially useful tools to build holistic environmental models, combining individual components in complex workflows. However, existing web-based modelling frameworks currently offer no support for managing uncertainty. On the other hand, the rich array of modelling frameworks and simulation tools which support uncertainty propagation in complex and chained models typically lacks the benefits of web-based solutions such as ready publication, discoverability and easy access. In this article we describe the developments within the UncertWeb project which are designed to provide uncertainty support in the context of the proposed 'Model Web'. We give an overview of uncertainty in modelling, review uncertainty management in existing modelling frameworks and consider the semantic and interoperability issues raised by integrated modelling. We describe the scope and architecture required to support uncertainty management as developed in UncertWeb. This includes tools which support elicitation, aggregation/disaggregation, visualisation and uncertainty/sensitivity analysis. We conclude by highlighting areas that require further research and development in UncertWeb, such as model calibration and inference within complex environmental models.
Abstract:
Simulation is an effective method for improving supply chain performance. However, there is limited advice available to assist practitioners in selecting the most appropriate method for a given problem, and much of the advice that does exist relies on custom and practice rather than a rigorous conceptual or empirical analysis. An analysis of the different modelling techniques applied in the supply chain domain was conducted, and the three main approaches to simulation used were identified: System Dynamics (SD), Discrete Event Simulation (DES) and Agent Based Modelling (ABM). This research has examined these approaches in two stages. Firstly, a first-principles analysis was carried out in order to challenge the received wisdom about their strengths and weaknesses, and a series of propositions was developed from this initial analysis. The second stage was to use the case study approach to test these propositions and to provide further empirical evidence to support their comparison. The contributions of this research are in terms of both knowledge and practice. In terms of knowledge, this research is the first holistic cross-paradigm comparison of the three main approaches in the supply chain domain. The case studies involved building 'back-to-back' models of the same supply chain problem using SD and a discrete approach (either DES or ABM). This has led to contributions concerning the limitations of applying SD to operational problem types. SD has also been found to carry risks when applied to strategic and policy problems. Discrete methods have been found to have potential for exploring strategic problem types, and it has been found that discrete simulation methods can model material and information feedback successfully. Further insights have been gained into the relationship between modelling purpose and modelling approach. In terms of practice, the findings have been summarised in the form of a framework linking modelling purpose, problem characteristics and simulation approach.
Abstract:
PURPOSE. The purpose of this study was to evaluate the potential of the portable Grand Seiko FR-5000 autorefractor to allow objective, continuous, open-field measurement of accommodation and pupil size for the investigation of the visual response to real-world environments and changes in the optical components of the eye. METHODS. The FR-5000 projects a pair of infrared horizontal and vertical lines on either side of fixation, analyzing the separation of the bars in the reflected image. The measurement bars were turned on permanently and the video output of the FR-5000 was fed into a PC for real-time analysis. The calibration between infrared bar separation and refractive error was assessed over a range of 10.0 D with a model eye. Tolerance to longitudinal instrument head shift was investigated over a ±15 mm range, and tolerance to eye alignment away from the visual axis over eccentricities of up to 25.0°. The minimum pupil size for measurement was determined with a model eye. RESULTS. The separation of the measurement bars changed linearly with refractive error (r = 0.99), allowing continuous online analysis of the refractive state at 60 Hz temporal resolution and approximately 0.01 D system resolution with pupils >2 mm. The pupil edge could be analyzed on the diagonal axes at the same rate with a system resolution of approximately 0.05 mm. The measurements of accommodation and pupil size were affected by eccentricity of viewing and instrument focusing inaccuracies. CONCLUSIONS. The small size of the instrument, together with its resolution and temporal properties and its ability to measure through a 2 mm pupil, makes it useful for the measurement of dynamic accommodation and pupil responses in confined environments, although good eye alignment is important. Copyright © 2006 American Academy of Optometry.
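The linear calibration described can be reproduced with a simple least-squares fit. The sketch below uses entirely hypothetical bar-separation readings from a model eye (the abstract does not give the raw calibration data) to illustrate the conversion from measured bar separation to refractive state.

```python
import numpy as np

# Hypothetical calibration readings: bar separation (pixels) recorded while a
# model eye is stepped through known refractive errors (dioptres).
bar_separation = np.array([112.0, 125.5, 139.2, 152.8, 166.1, 179.9])
refractive_error = np.array([-5.0, -3.0, -1.0, 1.0, 3.0, 5.0])

slope, intercept = np.polyfit(bar_separation, refractive_error, 1)
r = np.corrcoef(bar_separation, refractive_error)[0, 1]   # linearity check

def separation_to_dioptres(sep):
    """Convert a measured bar separation to refractive state via the fit."""
    return slope * sep + intercept
```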