936 results for Microscopic simulation models
Abstract:
Original Paper, European Journal of Information Systems (2001) 10, 135–146; doi:10.1057/palgrave.ejis.3000394
Organisational learning—a critical systems thinking discipline
P Panagiotidis (1,3) and J S Edwards (2,4). 1 Deloitte and Touche, Athens, Greece. 2 Aston Business School, Aston University, Aston Triangle, Birmingham, B4 7ET, UK. Correspondence: Dr J S Edwards, Aston Business School, Aston University, Aston Triangle, Birmingham, B4 7ET, UK. E-mail: j.s.edwards@aston.ac.uk
3 Petros Panagiotidis is Manager responsible for the Process and Systems Integrity Services of Deloitte and Touche in Athens, Greece. He has a BSc in Business Administration and an MSc in Management Information Systems from Western International University, Phoenix, Arizona, USA; an MSc in Business Systems Analysis and Design from City University, London, UK; and a PhD from Aston University, Birmingham, UK. His doctorate was in Business Systems Analysis and Design. His principal interests now are in the ERP/DSS field, where he serves as project leader and project risk management leader in the implementation of SAP and JD Edwards/Cognos at various major clients in the telecommunications and manufacturing sectors. In addition, he is responsible for the development and application of knowledge management systems and activity-based costing systems.
4 John S Edwards is Senior Lecturer in Operational Research and Systems at Aston Business School, Birmingham, UK. He holds MA and PhD degrees (in mathematics and operational research respectively) from Cambridge University. His principal research interests are in knowledge management and decision support, especially methods and processes for system development. He has written more than 30 research papers on these topics, and two books, Building Knowledge-based Systems and Decision Making with Computers, both published by Pitman.
Current research work includes the effect of scale of operations on knowledge management, interfacing expert systems with simulation models, process modelling in law and legal services, and a study of the use of artificial intelligence techniques in management accounting.
Abstract: This paper deals with the application of critical systems thinking in the domain of organisational learning and knowledge management. Its viewpoint is that deep organisational learning only takes place when the business systems' stakeholders reflect on their actions and thus inquire about their purpose(s) in relation to the business system and the other stakeholders they perceive to exist. This is done by reflecting both on the sources of motivation and/or deception that are contained in their purpose, and also on the sources of collective motivation and/or deception that are contained in the business system's purpose. The development of an organisational information system that captures, manages and institutionalises meaningful information—a knowledge management system—cannot be separated from organisational learning practices, since it should be the result of these very practices. Although Senge's five disciplines provide a useful starting-point in looking at organisational learning, we argue for a critical systems approach, instead of an uncritical System Dynamics one that concentrates only on the organisational learning practices. We proceed to outline a methodology called Business Systems Purpose Analysis (BSPA) that offers a participatory structure for team and organisational learning, upon which the stakeholders can take legitimate action that is based on the force of the better argument. In addition, the organisational learning process in BSPA leads to the development of an intrinsically motivated organisational information system that allows for the institutionalisation of the learning process itself in the form of an organisational knowledge management system.
This could be a specific application, or something as wide-ranging as an Enterprise Resource Planning (ERP) implementation. Examples of the use of BSPA in two ERP implementations are presented.
Abstract:
The objective of the thesis was to analyse several process configurations for the production of electricity from biomass. Process simulation models using AspenPlus, aimed at calculating the industrial performance of power plant concepts, were built, tested, and used for analysis. The criteria used in the analysis were performance and cost. All of the advanced systems appear to have higher efficiencies than the commercial reference, the Rankine cycle. However, advanced systems typically have a higher cost of electricity (COE) than the Rankine power plant: high efficiencies do not reduce fuel costs enough to compensate for the high capital costs of advanced concepts. The successful reduction of capital costs would appear to be the key to the introduction of the new systems, since capital costs account for a considerable, often dominant, part of the cost of electricity in these concepts. All of the systems have higher specific investment costs than the conventional industrial alternative, i.e. the Rankine power plant; combined heat and power (CHP) production is currently the only industrial area of application in which bio-power costs can be reduced considerably enough to make them competitive. Based on the results of this work, AspenPlus is an appropriate simulation platform. However, the usefulness of the models could be improved if a number of unit operations were modelled in greater detail: the dryer, gasifier, fast pyrolysis, gas engine and gas turbine models could all be improved.
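The capital-versus-fuel trade-off behind this conclusion can be made concrete with a simple levelised cost of electricity calculation. The sketch below is not from the thesis: `capex`, `efficiency`, `fuel_price` and the financing parameters are purely illustrative assumptions.

```python
def annuity_factor(rate, years):
    """Capital recovery factor: converts an investment into an equal annual charge."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def coe(capex, efficiency, fuel_price, rate=0.10, years=20, full_load_hours=7000):
    """Levelised cost of electricity in currency/MWh(e).
    capex       -- specific investment cost, currency per kW(e) (assumed)
    efficiency  -- net electric efficiency, 0..1 (assumed)
    fuel_price  -- fuel cost, currency per MWh of fuel energy (assumed)
    """
    annual_capital = capex * annuity_factor(rate, years)          # currency per kW(e) per year
    capital_per_mwh = annual_capital / (full_load_hours / 1000.0)  # spread over MWh produced
    fuel_per_mwh = fuel_price / efficiency                         # fuel energy bought per MWh(e)
    return capital_per_mwh + fuel_per_mwh

# Hypothetical comparison: cheap, less efficient Rankine reference versus
# an efficient but capital-intensive advanced concept.
rankine  = coe(capex=1500, efficiency=0.30, fuel_price=15)
advanced = coe(capex=3000, efficiency=0.45, fuel_price=15)
```

With these assumed figures the advanced plant's fuel saving does not offset its doubled capital charge, mirroring the thesis conclusion that capital cost, not efficiency, dominates the COE.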
Abstract:
A study has been made of the dynamic behaviour of a nuclear fuel reprocessing plant utilising pulsed solvent extraction columns. A flowsheet is presented and the choice of an extraction device is discussed. The plant is described by a series of modules, each module representing an item of equipment and consisting of a set of differential equations describing the dynamic behaviour of that equipment. The model is written in PMSP, a language developed for dynamic simulation models. The differential equations are solved to predict plant behaviour over time. The dynamic response of the plant to a range of disturbances has been assessed, and the interactions between pulsed columns have been demonstrated and illustrated. The importance of auxiliary items of equipment to plant performance is demonstrated. Control of the reprocessing plant is considered and the effect of control parameters on performance assessed.
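The modular structure described above, where each equipment item contributes differential equations that are integrated forward in time, can be sketched in miniature. The fragment below is not PMSP: it is a hypothetical Python sketch of two well-mixed stages in series responding to a step disturbance, integrated with an explicit Euler scheme.

```python
def mixer_module(holdup_volume, flow, c_in, c, dt):
    """One equipment module: a well-mixed stage.
    Solute balance: dC/dt = (flow / V) * (C_in - C)."""
    dc_dt = (flow / holdup_volume) * (c_in - c)
    return c + dc_dt * dt  # explicit Euler step

def simulate(t_end=50.0, dt=0.01):
    """Chain two modules in series and step them through time,
    mimicking the plant-as-series-of-modules structure."""
    c1, c2 = 0.0, 0.0   # initial stage concentrations
    feed = 1.0          # step disturbance in feed concentration
    for _ in range(int(t_end / dt)):
        c1 = mixer_module(5.0, 1.0, feed, c1, dt)  # stage 1 sees the feed
        c2 = mixer_module(5.0, 1.0, c1, c2, dt)    # stage 2 sees stage 1's outlet
    return c1, c2

c1_final, c2_final = simulate()
```

With a residence time of V/F = 5 and t_end = 50 (ten time constants), both stages approach the disturbed feed concentration, the downstream stage lagging the upstream one, which is the kind of interaction the plant model exposes on a larger scale.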
Abstract:
This thesis describes an investigation by the author into the spares operation of CompAir BroomWade Ltd. Whilst the complete system, including the warehousing and distribution functions, was investigated, the thesis concentrates on the provisioning aspect of the spares supply problem. Analysis of the historical data showed the presence of significant fluctuations in all the measures of system performance. Two Industrial Dynamics simulation models were developed to study this phenomenon. The models showed that any fluctuation in end customer demand would be amplified as it passed through the distributor and warehouse stock control systems. The evidence from the historical data available supported this view of the system's operation. The models were utilised to determine which parts of the total system could be expected to exert a critical influence on its performance. The lead time parameters of the supply sector were found to be critical, and further study showed that the manner in which the lead time changed with work-in-progress levels was also an important factor. The problem therefore resolved into the design of a spares manufacturing system which exhibited the appropriate dynamic performance characteristics. The gross level of entity representation inherent in the Industrial Dynamics methodology was found to limit the value of these models in the development of detailed design proposals. Accordingly, an interacting job shop simulation package was developed to allow detailed evaluation of the effects of organisational factors on the performance characteristics of a manufacturing system. The package was used to develop a design for a pilot spares production unit. The need for a manufacturing system to perform successfully under conditions of fluctuating demand is not limited to the spares field.
Thus, although the spares exercise provides an example of the approach, the concepts and techniques developed can be considered to have broad application throughout batch manufacturing industry.
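The amplification mechanism the models revealed, where a small end-customer fluctuation grows as it passes through successive stock control systems, can be reproduced with a toy order-up-to simulation. The rule and parameters below (`target_cover`, `adjust_fraction`) are hypothetical illustrations, not the thesis models.

```python
def stock_control_echelon(demand_series, target_cover=2.0, adjust_fraction=0.5):
    """Order-up-to rule: each period, order the demand plus a fraction of the
    gap between target stock (cover * current demand) and stock on hand.
    Returns the order stream that is passed upstream as the next echelon's demand."""
    stock = target_cover * demand_series[0]
    orders = []
    for d in demand_series:
        target = target_cover * d
        order = max(0.0, d + adjust_fraction * (target - stock))
        stock += order - d          # simplification: orders arrive immediately
        orders.append(order)
    return orders

def amplitude(series):
    return max(series) - min(series)

# A 10% step up and back down in end-customer demand...
customer = [100.0] * 10 + [110.0] * 10 + [100.0] * 10
# ...is amplified first by the distributor, then again by the warehouse.
distributor = stock_control_echelon(customer)
warehouse = stock_control_echelon(distributor)
```

Each echelon re-sizes its safety stock when demand shifts, so its orders swing wider than the demand it sees; chaining echelons compounds the swing, which is the behaviour the Industrial Dynamics models exhibited.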
Abstract:
Motivated by the increasing demand for, and challenges of, video streaming, in this thesis we investigate methods by which the quality of video can be improved. We utilise overlay networks, created by implementing relay nodes, to produce path diversity, and show through analytical and simulation models in which environments path diversity can improve the packet loss probability. We take the simulation and analytical models further by implementing a real overlay network on top of PlanetLab, and show that when the network conditions remain constant the video quality received by the client can be improved. In addition, we show that in the environments where path diversity improves the video quality, forward error correction can be used to further enhance the quality. We then investigate the effect of the IEEE 802.11e Wireless LAN standard with quality of service enabled on the video quality received by a wireless client. We find that assigning all the video to a single class outperforms a cross-class assignment scheme proposed by other researchers. The issue of virtual contention at the access point is also examined. We then increase the intelligence of our relay nodes and enable them to cache video. To maximise the usefulness of these caches, we introduce a measure called the PSNR profit, and present an optimal caching method for achieving the maximum PSNR profit at the relay nodes, where partitioned video contents are stored and provide enhanced quality for the client. We also show that with the optimised cache the degradation in the video quality received by the client is more graceful than with a non-optimised system when the network experiences packet loss or congestion.
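The environments in which path diversity helps can be reasoned about analytically: if a packet is duplicated over two paths whose losses are independent, it is lost only when both copies are lost. The sketch below assumes independent Bernoulli losses, a simplification of the analytical and simulation models used in the thesis.

```python
import random

def diversity_loss_prob(p1, p2):
    """Analytical loss probability when a packet is duplicated over two
    independent paths: the packet is lost only if both copies are lost."""
    return p1 * p2

def simulate_loss(p1, p2, n=100_000, seed=1):
    """Monte Carlo check of the analytical value (assumed independence)."""
    rng = random.Random(seed)
    lost = sum(1 for _ in range(n)
               if rng.random() < p1 and rng.random() < p2)
    return lost / n

# Two mediocre paths beat one good path: 0.05 * 0.05 = 0.0025 < 0.02.
analytic = diversity_loss_prob(0.05, 0.05)
monte_carlo = simulate_loss(0.05, 0.05)
```

The benefit evaporates when the two paths share a bottleneck (losses correlated), which is why the thesis identifies the environments in which diversity pays off.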
Abstract:
We assess the feasibility of hybrid solar-biomass power plants for use in India in various applications, including tri-generation, electricity generation and process heat. To cover this breadth of scenarios we analyse, with the help of simulation models, case studies with peak thermal capacities ranging from 2 to 10 MW. Evaluations are made against technical, financial and environmental criteria. Suitable solar multiples, based on the trade-offs among the various criteria, range from 1 to 2.5. Compared to conventional energy sources, levelised energy costs are high, but competitive in comparison to other renewables such as photovoltaic and wind. Long payback periods for hybrid plants mean that they cannot compete directly with biomass-only systems; however, a 1.2-3.2 times increase in feedstock price would make hybrid systems cost competitive. Furthermore, in comparison to biomass-only, hybrid operation saves up to 29% biomass and land, at an 8.3-24.8 $/GJ/a increase in cost per exergy loss and a 1.8-5.2 ¢/kWh increase in levelised energy cost. Hybrid plants will become an increasingly attractive option as the cost of solar thermal falls and feedstock, fossil fuel and land prices continue to rise. In the foreseeable future solar will continue to rely on subsidies, and it is recommended that tri-generation plants be subsidised preferentially. © 2012 Elsevier Ltd.
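The claim that rising feedstock prices favour the hybrid design can be expressed as a break-even calculation: the hybrid trades a higher capital charge for lower biomass use per kWh. All numbers below are illustrative assumptions, loosely echoing the 29% biomass saving reported above; they are not values from the study.

```python
def cost_per_kwh(capital_charge, feedstock_price, feedstock_per_kwh):
    """Simple levelised cost split into a fixed capital charge ($/kWh)
    and a fuel component proportional to biomass consumption (kg/kWh)."""
    return capital_charge + feedstock_price * feedstock_per_kwh

def breakeven_feedstock_price(cap_hybrid, use_hybrid, cap_bio, use_bio):
    """Feedstock price ($/kg) at which hybrid and biomass-only costs are equal.
    The hybrid has the higher capital charge but the lower biomass use."""
    return (cap_hybrid - cap_bio) / (use_bio - use_hybrid)

# Assumed: biomass-only at 0.04 $/kWh capital, 1.0 kg/kWh;
# hybrid at 0.07 $/kWh capital, 0.71 kg/kWh (a 29% biomass saving).
p_star = breakeven_feedstock_price(0.07, 0.71, 0.04, 1.00)
hybrid_at_012 = cost_per_kwh(0.07, 0.12, 0.71)
bio_at_012 = cost_per_kwh(0.04, 0.12, 1.00)
```

Above the break-even price `p_star` the hybrid plant is the cheaper option, which is the mechanism behind the 1.2-3.2x feedstock-price threshold reported in the abstract.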
Abstract:
For remote, semi-arid areas, brackish groundwater (BW) desalination powered by solar energy may serve as the most technically and economically viable means to alleviate water stresses. For such systems a high recovery ratio is desired because of the technical and economic difficulties of concentrate management. It has been demonstrated that current, conventional solar reverse osmosis (RO) desalination can be improved by 40–200 times by eliminating unnecessary energy losses. In this work, a batch-RO system that can be powered by a thermal Rankine cycle has been developed. By directly recycling high-pressure concentrate and by using a linkage connection to provide increasing feed pressures, the batch-RO system has been shown to achieve a 70% saving in energy consumption compared to a continuous single-stage RO system. Theoretical investigations of the mass transfer phenomena, including dispersion and concentration polarization, have been carried out to complement and guide the experimental efforts. The performance evaluation of the batch-RO system, named DesaLink, is based on extensive experimental tests. Operating DesaLink using compressed air as the power supply under laboratory conditions, a freshwater production of approximately 300 litres per day was recorded at a concentration of around 350 ppm, whilst the feed water had a concentration range of 2500–4500 ppm; the corresponding linkage efficiency was around 40%. On the computational side, simulation models have been developed and validated for each of the subsystems of DesaLink, upon which an integrated model has been realised for the whole system. The models, both the subsystem ones and the integrated one, have been demonstrated to predict the system performance accurately under specific operational conditions. A simulation case study has been performed using the developed model.
Simulation results indicate that the system can be expected to achieve a water production of 200 m³ per year using a widely available evacuated tube solar collector with an area of only 2 m². This freshwater production would satisfy the drinking water needs of 163 inhabitants in the Rajasthan region, the area for which the case study was performed.
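The difficulty of concentrate management at high recovery, which motivates this work, follows from a simple salt mass balance over the RO unit. The sketch below is generic and uses illustrative figures consistent with the feed and permeate concentrations quoted above; it is not the DesaLink model.

```python
def recovery_ratio(permeate_volume, feed_volume):
    """Fraction of the feed recovered as fresh water."""
    return permeate_volume / feed_volume

def concentrate_concentration(feed_conc, recovery, permeate_conc=0.0):
    """Salt mass balance: feed salt = permeate salt + concentrate salt,
    so C_conc = (C_feed - r * C_perm) / (1 - r)."""
    return (feed_conc - recovery * permeate_conc) / (1.0 - recovery)

# Assumed example: 3500 ppm feed, 350 ppm permeate, 70% recovery.
brine = concentrate_concentration(3500.0, 0.7, 350.0)   # ~10850 ppm
brine_90 = concentrate_concentration(3500.0, 0.9, 350.0)
```

The concentrate concentration grows like 1/(1 - r), so pushing recovery from 70% to 90% roughly triples the brine strength; this is why high-recovery designs such as batch-RO must manage a small but very concentrated reject stream.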
Abstract:
Distance teaching (DT) in technical education has a number of distinctive features: complex informative content, the need to develop simulation models and trainers for practical and laboratory classes, knowledge diagnostics based on mathematical algorithms, and the organisation of collective applied projects. To develop the teaching process for the fundamental control-systems discipline Theory of Automatic Control (TAC), a combined approach was chosen: an optimal combination of existing DT support software and in-house developments. The DT TAC system includes a distance course (DC) for TAC, a virtual laboratory practical site in LAB.TAC, and the remote student knowledge diagnostic system d-tester.
Abstract:
Mathematics Subject Classification: 65C05, 60G50, 39A10, 92C37
Abstract:
Freeway systems are becoming more congested each day. One contribution to freeway traffic congestion comprises platoons of on-ramp traffic merging into freeway mainlines. As a relatively low-cost countermeasure to the problem, ramp meters are being deployed in both directions of an 11-mile section of I-95 in Miami-Dade County, Florida. The local Fuzzy Logic (FL) ramp metering algorithm implemented in Seattle, Washington, has been selected for deployment. The FL ramp metering algorithm is powered by the Fuzzy Logic Controller (FLC). The FLC depends on a series of parameters that can significantly alter the behavior of the controller, thus affecting the performance of ramp meters. However, the most suitable values for these parameters are often difficult to determine, as they vary with current traffic conditions. Thus, for optimum performance, the parameter values must be fine-tuned. This research presents a new method of fine-tuning the FLC parameters using Particle Swarm Optimization (PSO). PSO attempts to optimize several important parameters of the FLC. The objective function of the optimization model incorporates the METANET macroscopic traffic flow model to minimize delay time, subject to the constraints of reasonable ranges of ramp metering rates and FLC parameters. To further improve the performance, a short-term traffic forecasting module using a discrete Kalman filter was incorporated to predict the downstream freeway mainline occupancy. This helps to detect the presence of downstream bottlenecks. The CORSIM microscopic simulation model was selected as the platform to evaluate the performance of the proposed PSO tuning strategy. The ramp metering algorithm incorporating the tuning strategy was implemented using CORSIM's run-time extension (RTE) and was tested on the aforementioned I-95 corridor. The performance of the FLC with PSO tuning was compared with the performance of the existing FLC without PSO tuning.
The results show that the FLC with PSO tuning outperforms the existing FL metering, fixed-time metering, and existing conditions without metering in terms of total travel time savings, average speed, and system-wide throughput.
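The PSO tuning strategy can be sketched generically. The minimal optimizer below is a textbook global-best PSO, not the implementation used in the study; in the research the objective `f` would be the METANET-predicted delay as a function of the FLC parameters, whereas here a simple sphere function stands in as a placeholder objective.

```python
import random

def pso_minimize(f, bounds, n_particles=20, iters=100, seed=0,
                 w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best particle swarm optimizer (illustrative sketch).
    bounds: list of (low, high) per dimension of the search space."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # inertia + cognitive pull to pbest + social pull to gbest
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Stand-in objective: minimize a 3-dimensional sphere function.
best, best_val = pso_minimize(lambda x: sum(t * t for t in x), [(-5.0, 5.0)] * 3)
```

In the ramp-metering application each particle would encode one candidate vector of FLC parameters, and bounds would encode the "reasonable ranges" mentioned in the abstract.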
Abstract:
The total time a customer spends in the business process system, called the customer cycle-time, is a major contributor to overall customer satisfaction. Business process analysts and designers are frequently asked to design process solutions with optimal performance. Simulation models have been very popular for quantitatively evaluating business processes; however, simulation is time-consuming and requires extensive modeling experience. Moreover, simulation models neither provide recommendations nor yield optimal solutions for business process design. A queueing network model is a good analytical approach to business process analysis and design, and can provide a useful abstraction of a business process. However, existing queueing network models were developed for telephone systems or applied to manufacturing processes in which machine servers dominate the system. In a business process the servers are usually people, and the characteristics of human servers, namely specialization and coordination, should be taken into account by the queueing model. The research described in this dissertation develops an open queueing network model for quick analysis of business processes. Additionally, optimization models are developed to provide optimal business process designs. The queueing network model extends and improves upon existing multi-class open queueing network models (MOQN) so that customer flow in human-server oriented processes can be modeled. The optimization models help business process designers to find the optimal design of a business process with consideration of specialization and coordination.
The main findings of the research are: first, parallelization can reduce the cycle-time for those customer classes that require more than one parallel activity; second, under highly utilized servers the coordination time introduced by parallelization overwhelms the savings from parallelization, since the waiting time increases significantly and the cycle-time therefore increases; and third, the level of industrial technology employed by a company and the coordination time needed to manage the tasks have the strongest impact on business process design: when the level of industrial technology is high, more division is required to improve the cycle-time, and when the required coordination time is high, consolidation is required to improve the cycle-time.
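The sensitivity to utilization behind the second finding can be seen from the simplest queueing building block. The sketch below uses the M/M/1 time-in-system formula as a stand-in for the dissertation's multi-class network: each extra coordination step added by parallelization is one more station whose delay grows like 1/(mu - lambda) as utilization rises.

```python
def mm1_time_in_system(lam, mu):
    """Expected time in system for a stable M/M/1 station: W = 1 / (mu - lam)."""
    if lam >= mu:
        raise ValueError("unstable station: arrival rate must be below service rate")
    return 1.0 / (mu - lam)

def cycle_time(lam, mu, stations):
    """Open tandem-line approximation: cycle-time as the sum of per-station W.
    A coordination step is simply one more station in this sum."""
    return sum(mm1_time_in_system(lam, mu) for _ in range(stations))

# Delay explodes near full utilization: at 50% load W = 2, at 95% load W = 20.
low_util  = mm1_time_in_system(0.50, 1.0)
high_util = mm1_time_in_system(0.95, 1.0)
```

At low utilization a coordination station adds a small constant; at high utilization it adds a term that dwarfs the saving from splitting the work, which is exactly the trade-off reported above.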
Abstract:
Parallel processing is prevalent in many manufacturing and service systems. Many manufactured products are built and assembled from several components fabricated on parallel lines. An example of this manufacturing system configuration is observed at a manufacturing facility equipped to assemble and test web servers. Characteristics of a typical web server assembly line are: multiple products, job circulation, and parallel processing. The primary objective of this research was to develop analytical approximations to predict performance measures of manufacturing systems with job failures and parallel processing. The analytical formulations extend previous queueing models used in assembly manufacturing systems in that they can handle serial and different configurations of parallel processing with multiple product classes, and job circulation due to random part failures. In addition, appropriate correction terms obtained via regression analysis were added to the approximations in order to minimize the error between the analytical approximation and the simulation models. Markovian and general type manufacturing systems, with multiple product classes, job circulation due to failures, and fork-and-join systems to model parallel processing, were studied. In both the Markovian and general cases, the approximations without correction terms performed quite well for one- and two-product problem instances. However, it was observed that the flow time error increased as the number of products and the net traffic intensity increased. Therefore, correction terms for single and fork-join stations were developed via regression analysis to deal with more than two products. The numerical comparisons showed that the approximations perform remarkably well when the correction factors were used. In general, the average flow time error was reduced from 38.19% to 5.59% in the Markovian case, and from 26.39% to 7.23% in the general case.
All the equations stated in the analytical formulations were implemented as a set of Matlab scripts. By using this set, operations managers of web server assembly lines, manufacturing or other service systems with similar characteristics can estimate different system performance measures, and make judicious decisions - especially setting delivery due dates, capacity planning, and bottleneck mitigation, among others.
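The correction-term procedure, fitting the gap between simulation and approximation against traffic intensity and adding the fit back to the approximation, can be sketched with ordinary least squares. The calibration data below are made up for illustration; the study's actual regression also involved the number of products and was implemented in Matlab rather than Python.

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = a + b*x (single regressor)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Hypothetical calibration data: flow-time error of the queueing approximation
# (simulation minus approximation) growing with net traffic intensity rho.
rho   = [0.5, 0.6, 0.7, 0.8, 0.9]
error = [1.0, 2.1, 2.9, 4.0, 5.1]
a, b = fit_linear(rho, error)

def corrected_flow_time(approx, rho_value):
    """Analytical approximation plus the fitted correction term."""
    return approx + a + b * rho_value
```

Once fitted, the correction costs nothing at evaluation time, preserving the speed advantage of the analytical approximation over simulation.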
Abstract:
Financial innovations have emerged globally to close the gap between the rising global demand for infrastructures and the availability of financing sources offered by traditional financing mechanisms such as fuel taxation, tax-exempt bonds, and federal and state funds. The key to sustainable innovative financing mechanisms is effective policymaking. This paper discusses the theoretical framework of a research study whose objective is to structurally and systemically assess financial innovations in global infrastructures. The research aims to create analysis frameworks, taxonomies and constructs, and simulation models pertaining to the dynamics of the innovation process to be used in policy analysis. Structural assessment of innovative financing focuses on the typologies and loci of innovations and evaluates the performance of different types of innovative financing mechanisms. Systemic analysis of innovative financing explores the determinants of the innovation process using the System of Innovation approach. The final deliverables of the research include propositions pertaining to the constituents of System of Innovation for infrastructure finance which include the players, institutions, activities, and networks. These static constructs are used to develop a hybrid Agent-Based/System Dynamics simulation model to derive propositions regarding the emergent dynamics of the system. The initial outcomes of the research study are presented in this paper and include: (a) an archetype for mapping innovative financing mechanisms, (b) a System of Systems-based analysis framework to identify the dimensions of Systems of Innovation analyses, and (c) initial observations regarding the players, institutions, activities, and networks of the System of Innovation in the context of the U.S. transportation infrastructure financing.
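The System Dynamics half of the proposed hybrid model can be sketched as a single stock-and-flow loop. The fragment below is a generic Bass-style diffusion of a hypothetical financing innovation among a population of potential adopters; the stock, flow and parameters (`p`, `q`) are illustrative stand-ins, not outputs of the study.

```python
def simulate_adoption(total=100.0, p=0.03, q=0.4, steps=60, dt=1.0):
    """Bass-style diffusion as a stock-and-flow sketch.
    adopters is the stock; adoption_rate is its inflow, driven by
    external innovation pressure (p) and imitation of existing adopters (q)."""
    adopters = 0.0
    history = []
    for _ in range(steps):
        adoption_rate = (p + q * adopters / total) * (total - adopters)
        adopters += adoption_rate * dt   # integrate the stock forward
        history.append(adopters)
    return history

h = simulate_adoption()
```

In the hybrid design described above, agent-based players, institutions and networks would replace the aggregate imitation term, while the stock-and-flow skeleton supplies the system-level dynamics.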
Abstract:
The distribution and mobilization of fluid in a porous medium depend on capillary, gravity, and viscous forces. In oil fields, enhanced oil recovery processes exploit changes in the balance of these forces to increase the oil recovery factor. For the gas-assisted gravity drainage (GAGD) process it is important to understand the physical mechanisms that mobilize oil through the interaction of these forces. For this reason, several authors have developed laboratory physical models and core floods of GAGD to study the performance of these forces through dimensionless groups, and these models showed conclusive results. However, numerical simulation models have not been used for this type of study. Therefore, the objective of this work is to study the performance of the capillary, viscous and gravity forces in the GAGD process, and their influence on the oil recovery factor, through a 2D numerical simulation model. To analyse the interplay of these forces, dimensionless groups reported in the literature were used: the Capillary Number (Nc), the Bond Number (Nb) and the Gravity Number (Ng). This was done to determine the effectiveness of each force relative to the others. The results obtained from the numerical simulation were also compared with those reported in the literature. The results showed that before breakthrough time, the lower the injection flow rate, the more oil recovery is increased by the capillary force, and after breakthrough time, the higher the injection flow rate, the more oil recovery is increased by the gravity force. A good agreement was found between the results obtained in this research and those published in the literature. The simulation results indicated that before gas breakthrough, higher oil recoveries were obtained at lower Nc and Nb, and after gas breakthrough, higher oil recoveries were obtained at lower Ng. The numerical models are consistent with the results reported in the literature.
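The dimensionless groups named above have common textbook forms, shown below; the exact definitions used in the cited literature may differ (for example in the choice of length scale or whether permeability enters), and the numerical values are illustrative assumptions only.

```python
def capillary_number(mu, v, sigma):
    """Nc = mu*v/sigma: ratio of viscous to capillary forces (one common form).
    mu: displacing-phase viscosity (Pa*s), v: Darcy velocity (m/s),
    sigma: interfacial tension (N/m)."""
    return mu * v / sigma

def bond_number(delta_rho, g, length, sigma):
    """Nb = delta_rho*g*L^2/sigma: ratio of gravity to capillary forces.
    delta_rho: density difference (kg/m^3), length: characteristic length (m)."""
    return delta_rho * g * length ** 2 / sigma

def gravity_number(delta_rho, g, k, mu, v):
    """Ng = delta_rho*g*k/(mu*v): ratio of gravity to viscous forces
    (one common form, using permeability k in m^2)."""
    return delta_rho * g * k / (mu * v)

# Illustrative values (assumed): gas displacing light oil.
Nc = capillary_number(1e-3, 1e-5, 0.03)
Nb = bond_number(200.0, 9.81, 0.1, 0.03)
Ng = gravity_number(200.0, 9.81, 1e-12, 1e-3, 1e-5)
```

With these forms the three groups are not independent: Ng = (Nb/Nc) * (k/L^2), so any two of them, plus the length and permeability scales, fix the third. Lowering the injection rate lowers v, which raises Ng and lowers Nc, matching the force balance discussed in the abstract.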