52 results for wind power, simulation, simulation tool, user interface
Abstract:
The development of increasingly powerful computers, which has enabled the use of windowing software, has also opened the way for the computer study, via simulation, of very complex physical systems. In this study, the main issues related to the implementation of interactive simulations of complex systems are identified and discussed. Most existing simulators are closed in the sense that there is no access to the source code and, even if it were available, adaptation to interaction with other systems would require extensive code re-writing. This work aims to increase the flexibility of such software by developing a set of object-oriented simulation classes, which can be extended, by subclassing, at any level, i.e., at the problem domain, presentation or interaction levels. A strategy, which involves the use of an object-oriented framework, concurrent execution of several simulation modules, use of a networked windowing system and the re-use of existing software written in procedural languages, is proposed. A prototype tool which combines these techniques has been implemented and is presented. It allows the on-line definition of the configuration of the physical system and generates the appropriate graphical user interface. Simulation routines have been developed for the chemical recovery cycle of a paper pulp mill. The application, by creation of new classes, of the prototype to the interactive simulation of this physical system is described. Besides providing visual feedback, the resulting graphical user interface greatly simplifies the interaction with this set of simulation modules. This study shows that considerable benefits can be obtained by application of computer science concepts to the engineering domain, by helping domain experts to tailor interactive tools to suit their needs.
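To make the subclassing strategy concrete, here is a minimal Python sketch (all class names are hypothetical, not the prototype's actual API) showing how behaviour can be specialised independently at the presentation level and at the problem-domain level:

class SimulationModule:
    """Base class: advances internal state by one time step."""
    def step(self, dt):
        raise NotImplementedError

class PresentedModule(SimulationModule):
    """Adds a presentation hook so attached views refresh after each step."""
    def __init__(self):
        self.observers = []          # GUI widgets, plots, loggers...
    def notify(self):
        for obs in self.observers:
            obs.refresh(self)

class EvaporatorModel(PresentedModule):
    """Hypothetical domain-level subclass for one unit of the pulp
    mill chemical recovery cycle."""
    def __init__(self, liquor_mass, evaporation_rate):
        super().__init__()
        self.liquor_mass = liquor_mass
        self.evaporation_rate = evaporation_rate
    def step(self, dt):
        # First-order mass loss; the real routines are far richer.
        self.liquor_mass -= self.evaporation_rate * self.liquor_mass * dt
        self.notify()

tank = EvaporatorModel(liquor_mass=1000.0, evaporation_rate=0.05)
tank.step(dt=0.1)
print(tank.liquor_mass)

A new interaction style or user interface then only requires subclassing at the presentation level, leaving the domain code untouched.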
Abstract:
The modern grid system, or smart grid, is likely to be populated with multiple distributed energy sources, e.g. wind power, PV power and plug-in electric vehicles (PEVs). It will also include a variety of linear and nonlinear loads. The intermittent nature of renewable energies like PV and wind turbines, and the increased penetration of electric vehicles (EVs), make the stable operation of the utility grid system challenging. In order to ensure stable operation of the utility grid system and to support smart grid functionalities such as fault ride-through, frequency response, reactive power support and mitigation of power quality issues, an energy storage system (ESS) could play an important role. A fast-acting bidirectional energy storage system which can rapidly provide and absorb power and/or VARs for a sufficient time is a potentially valuable tool to support this functionality. Battery energy storage systems (BESS) are one of a range of suitable energy storage systems because they can provide and absorb power for a sufficient time as well as respond reasonably fast. Conventional BESS already existing on the grid system are made up primarily of new batteries. The cost of these batteries can be high, which makes most BESS an expensive solution. In order to assist the move towards a low carbon economy and to reduce battery cost, this work aims to research the opportunities for re-using batteries on the electricity grid system after their primary use in low and ultra-low carbon vehicles (EV/HEV). This research aims to develop a new generation of second life battery energy storage systems (SLBESS) which could interface to the low/medium voltage network to provide the necessary grid support in a reliable and cost-effective manner. The reliability/performance of these batteries is not clear, but is almost certainly worse than that of a new battery. Manufacturers indicate that a mixture of gradual degradation and sudden failure are both possible, and failure mechanisms are likely to be related to how hard the batteries were driven inside the vehicle. Figures from a number of sources, including the DECC (Department of Energy and Climate Change) and Arup and Cenex reports, indicate anything from 70,000 to 2.6 million electric and hybrid vehicles on the road by 2020. Once the vehicle battery has degraded to around 70-80% of its capacity it is considered to be at the end of its first life application. This leaves capacity available for a second life at a much cheaper cost than a new BESS. Assuming a battery capability of around 5-18kWh (MHEV 5kWh - BEV 18kWh battery) and an approximately 10 year life span, this equates to a projection of battery storage capability available for second life of >1GWh by 2025. Moreover, each vehicle manufacturer has different specifications for battery chemistry, number and arrangement of battery cells, capacity, voltage, size, etc. To enable research and investment in this area and to maximize the remaining life of these batteries, one of the design challenges is to combine these hybrid batteries into a grid-tie converter where their different performance characteristics and parameter variation can be catered for, and where a hot-swapping mechanism is available so that as a battery ends its second life it can be replaced without affecting the overall system operation.
This integration of either single types of batteries with vastly different performance capability, or of a hybrid battery system, to a grid-tie energy storage system differs from existing work on battery energy storage systems (BESS), which deals with a single type of battery with common characteristics. This thesis addresses and solves the power electronic design challenges in integrating second life hybrid batteries into a grid-tie energy storage unit for the first time. The study details a suitable multi-modular power electronic converter and its various switching strategies which can integrate widely different batteries to a grid-tie inverter irrespective of their characteristics, voltage levels and reliability. The proposed converter provides high efficiency and enhanced control flexibility, and has the capability to operate in different operational modes from the input to the output. Designing an appropriate control system for this kind of hybrid battery storage system is also important because of the variation of battery types, differences in characteristics and different levels of degradation. This thesis proposes a generalised distributed power sharing strategy, based on a weighting function, which aims to use a set of hybrid batteries optimally according to their relative characteristics while providing the necessary grid support by distributing the power between the batteries. The strategy is adaptive in nature and varies as the individual battery characteristics change in real time, for example as a result of degradation. A suitable bidirectional distributed control strategy, or module-independent control technique, has been developed corresponding to each mode of operation of the proposed modular converter. Stability is an important consideration in the control of all power converters, and as such this thesis investigates the control stability of the multi-modular converter in detail. Many controllers use PI/PID-based techniques with fixed control parameters; however, this was not found to be suitable from a stability point of view. Issues of control stability using this controller type under one of the operating modes led to the development of an alternative adaptive and nonlinear Lyapunov-based control for the modular power converter. Finally, a detailed simulation and experimental validation of the proposed power converter operation, power sharing strategy, control structures and control stability issues has been undertaken using a grid-connected, laboratory-based multi-modular hybrid battery energy storage system prototype. The experimental validation has demonstrated the feasibility of this new energy storage system for use in future grid applications.
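As an illustration of the distributed power sharing idea, the sketch below assumes a simple multiplicative weighting of capacity, state of charge and health; the thesis's actual weighting function and its adaptation are more sophisticated:

def power_shares(p_demand, modules):
    """modules: list of dicts with 'capacity_kwh', 'soc' and 'health'
    (soc/health in 0..1). Returns the power commanded per module."""
    # Discharging favours modules holding more, and healthier, energy.
    weights = [m['capacity_kwh'] * m['soc'] * m['health'] for m in modules]
    total = sum(weights)
    if total == 0:
        return [0.0] * len(modules)
    return [p_demand * w / total for w in weights]

modules = [
    {'capacity_kwh': 18.0, 'soc': 0.70, 'health': 0.80},  # ex-BEV pack
    {'capacity_kwh': 5.0,  'soc': 0.90, 'health': 0.75},  # ex-MHEV pack
]
print(power_shares(10.0, modules))   # kW commanded from each module

Because the weights are recomputed from live measurements, the shares adapt automatically as individual batteries degrade.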
Abstract:
Fault tree analysis is used as a tool within hazard and operability (Hazop) studies. The present study proposes a new methodology for obtaining the exact TOP event probability of coherent fault trees. The technique uses a top-down approach similar to that of FATRAM. This new Fault Tree Disjoint Reduction Algorithm resolves all the intermediate events in the tree except OR gates with basic event inputs, so that a near-minimal cut sets expression is obtained. Bennetts' disjoint technique is then applied and the remaining OR gates are resolved. The technique has been found to be an appropriate alternative to Monte Carlo simulation methods when rare events are encountered and exact results are needed. The algorithm has been developed in FORTRAN 77 on the Perq workstation as an addition to the Aston Hazop package. The Perq graphical environment enabled a friendly user interface to be created. The complete package takes as its input cause and symptom equations using Lihou's form of coding and produces both drawings of fault trees and the Boolean sum-of-products expression into which reliability data can be substituted directly.
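The computation at the heart of the method can be illustrated as follows. This sketch obtains the exact TOP event probability from minimal cut sets by inclusion-exclusion, which for small trees gives the same exact answer as the disjoint-products route the study takes:

from itertools import combinations

def top_event_probability(cut_sets, p):
    """cut_sets: list of frozensets of basic-event names.
    p: dict mapping basic event -> failure probability (independent)."""
    total = 0.0
    for k in range(1, len(cut_sets) + 1):
        for combo in combinations(cut_sets, k):
            events = frozenset().union(*combo)
            prob = 1.0
            for e in events:
                prob *= p[e]
            total += (-1) ** (k + 1) * prob
    return total

cut_sets = [frozenset({'A', 'B'}), frozenset({'A', 'C'}), frozenset({'D'})]
p = {'A': 0.01, 'B': 0.02, 'C': 0.05, 'D': 0.001}
print(top_event_probability(cut_sets, p))  # exact, unlike rare-event Monte Carlo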
Abstract:
Concurrent engineering and design for manufacture and assembly strategies have become pervasive in a wide array of industrial settings. These strategies have generally focused on product and process design issues based on capability concerns. The strategies have historically been justified using cost savings calculations focusing on easily quantifiable costs such as raw material savings or manufacturing or assembly operations no longer required. It is argued herein that neither the focus of the strategies nor the means of justification are adequate. Product and process design strategies should include both capability and capacity concerns, and justification procedures should include the financial effects that the product and process changes would have on the entire company. The authors of this paper take this more holistic view of the problem and examine an innovative new design strategy using a comprehensive enterprise simulation tool. The results indicate that both the design strategy and the simulator show promise for further industrial use. © 2001 Elsevier Science B.V. All rights reserved.
Abstract:
Manufacturing planning and control systems are fundamental to the successful operations of a manufacturing organisation. In order to improve their business performance, significant investment is made by companies into planning and control systems; however, not all companies realise the benefits sought. Many companies continue to suffer from high levels of inventory, shortages, obsolete parts, poor resource utilisation and poor delivery performance. This thesis argues that the fit between the planning and control system and the manufacturing organisation is a crucial element of success. The design of appropriate control systems is, therefore, important. The different approaches to the design of manufacturing planning and control systems are investigated. It is concluded that there is no provision within these design methodologies to properly assess the impact of a proposed design on the manufacturing facility. Consequently, an understanding of how a new (or modified) planning and control system will perform in the context of the complete manufacturing system is unlikely to be gained until after the system has been implemented and is running. There are many modelling techniques available; however, discrete-event simulation is unique in its ability to model the complex dynamics inherent in manufacturing systems, of which the planning and control system is an integral component. The existing application of simulation to manufacturing control system issues is limited: although operational issues are addressed, application to the more fundamental design of control systems is rarely, if at all, considered. The lack of a suitable simulation-based modelling tool does not help matters. The requirements of a simulation tool capable of modelling a host of different planning and control systems are presented. It is argued that only through the application of object-oriented principles can these extensive requirements be achieved. This thesis reports on the development of an extensible class library called WBS/Control, which is based on object-oriented principles and discrete-event simulation. The functionality, both current and future, offered by WBS/Control means that different planning and control systems can be modelled: not only the more standard implementations but also hybrid systems and new designs. The flexibility implicit in the development of WBS/Control supports its application to both design and operational issues. WBS/Control integrates wholly with an existing manufacturing simulator to provide a more complete modelling environment.
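A minimal Python sketch of the object-oriented argument follows (hypothetical class names, not the WBS/Control API): different planning and control policies share one interface, so hybrid or novel designs can be modelled by subclassing rather than by rewriting the simulator:

from dataclasses import dataclass

@dataclass
class Order:
    name: str
    due_date: int

class ControlPolicy:
    """Common interface: decide which queued order a station works next."""
    def next_order(self, queue):
        raise NotImplementedError

class PushPolicy(ControlPolicy):
    """MRP-style push: take the queued order with the earliest due date."""
    def next_order(self, queue):
        return min(queue, key=lambda o: o.due_date) if queue else None

class KanbanPolicy(ControlPolicy):
    """Pull: only start work when a free kanban card authorises it."""
    def __init__(self, cards):
        self.cards = cards
    def next_order(self, queue):
        if self.cards == 0 or not queue:
            return None
        self.cards -= 1
        return queue[0]

queue = [Order('A', 5), Order('B', 2)]
print(PushPolicy().next_order(queue).name)           # -> B
print(KanbanPolicy(cards=1).next_order(queue).name)  # -> A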
Abstract:
This chapter discusses network protection of high-voltage direct current (HVDC) transmission systems for large-scale offshore wind farms where the HVDC system utilizes voltage-source converters. The multi-terminal HVDC network topology and protection allocation and configuration are discussed with DC circuit breaker and protection relay configurations studied for different fault conditions. A detailed protection scheme is designed with a solution that does not require relay communication. Advanced understanding of protection system design and operation is necessary for reliable and safe operation of the meshed HVDC system under fault conditions. Meshed-HVDC systems are important as they will be used to interconnect large-scale offshore wind generation projects. Offshore wind generation is growing rapidly and offers a means of securing energy supply and addressing emissions targets whilst minimising community impacts. There are ambitious plans concerning such projects in Europe and in the Asia-Pacific region which will all require a reliable yet economic system to generate, collect, and transmit electrical power from renewable resources. Collective offshore wind farms are efficient and have potential as a significant low-carbon energy source. However, this requires a reliable collection and transmission system. Offshore wind power generation is a relatively new area and lacks systematic analysis of faults and associated operational experience to enhance further development. Appropriate fault protection schemes are required and this chapter highlights the process of developing and assessing such schemes. The chapter illustrates the basic meshed topology, identifies the need for distance evaluation, and appropriate cable models, then details the design and operation of the protection scheme with simulation results used to illustrate operation. © Springer Science+Business Media Singapore 2014.
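As a hedged illustration only (an assumed criterion, not the chapter's actual relay design), a communication-free DC protection scheme can trip on purely local measurements, for example a current-derivative threshold:

def relay_trip(i_samples, dt, didt_threshold):
    """i_samples: the two most recent DC current samples at this relay (A);
    dt: sampling interval (s). Trip on a local di/dt threshold."""
    didt = (i_samples[-1] - i_samples[-2]) / dt
    return didt > didt_threshold

# A remote fault sees a smaller di/dt through the intervening cable
# inductance, so a well-chosen threshold gives selectivity without
# any relay-to-relay communication.
print(relay_trip([1000.0, 1900.0], dt=1e-4, didt_threshold=5e6))  # True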
Abstract:
This paper compares the performance of four different power electronic converter topologies, which have been proposed for STATCOM applications. Two of the topologies are Modular Multilevel Cascaded Converters (MMCC), whilst the remaining circuits utilize magnetic elements and an open-winding transformer configuration to combine individual power modules. It is assumed that the STATCOM has to work under unbalanced conditions, so that it delivers both positive and negative sequence currents. Simulation studies for the four topologies have been carried out using the simulation tool Saber. © 2013 IEEE.
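For context on the unbalanced-operation requirement: delivering positive and negative sequence currents presupposes the standard symmetrical-component decomposition of the measured phase currents, sketched here:

import cmath, math

a = cmath.exp(2j * math.pi / 3)      # 120-degree rotation operator

def sequence_components(ia, ib, ic):
    """Return the (zero, positive, negative) sequence phasors."""
    i0 = (ia + ib + ic) / 3
    i1 = (ia + a * ib + a**2 * ic) / 3
    i2 = (ia + a**2 * ib + a * ic) / 3
    return i0, i1, i2

# An unbalanced three-phase current set (per-unit phasors).
ia = cmath.rect(1.0, 0.0)
ib = cmath.rect(0.8, math.radians(-125))
ic = cmath.rect(1.1, math.radians(118))
for label, i in zip(('zero', 'pos', 'neg'), sequence_components(ia, ib, ic)):
    print(label, round(abs(i), 3))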
Abstract:
Although maximum power point tracking (MPPT) is crucial in the design of a wind power generation system, control strategies are also needed for conditions that require a power reduction, called de-loading in this paper. A coordinated control scheme for a proposed current source converter (CSC) based DC wind energy conversion system is presented. This scheme combines coordinated control of the pitch angle, a DC load dumping chopper and the DC/DC converter to quickly achieve wind farm de-loading. MATLAB/Simulink simulations and experiments, both at the same power level, are used to validate the purpose and effectiveness of the control scheme. © 2013 IEEE.
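A rough sketch of such coordination (thresholds, rates and interfaces are all assumptions, not the paper's scheme): the fast actuators respond immediately while the rate-limited pitch actuator catches up:

def de_load(p_current, p_target, pitch_deg, dt,
            pitch_rate=2.0, chopper_max=0.3):
    """Return (dcdc_setpoint, chopper_duty, new_pitch_deg)."""
    surplus = max(p_current - p_target, 0.0)
    # The DC/DC converter setpoint tracks the reduced target at once.
    dcdc_setpoint = p_target
    # The dump chopper absorbs what the pitch cannot yet shed (bounded).
    chopper_duty = min(surplus / max(p_current, 1e-9), chopper_max)
    # The pitch moves at its rate limit to shed aerodynamic power.
    new_pitch = pitch_deg + pitch_rate * dt if surplus > 0 else pitch_deg
    return dcdc_setpoint, chopper_duty, new_pitch

print(de_load(p_current=2.0, p_target=1.5, pitch_deg=0.0, dt=0.1))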
Abstract:
Multidimensional compound optimization is a new paradigm in the drug discovery process, yielding efficiencies during early stages and reducing attrition in the later stages of drug development. The success of this strategy relies heavily on understanding this multidimensional data and extracting useful information from it. This paper demonstrates how principled visualization algorithms can be used to understand and explore a large data set created in the early stages of drug discovery. The experiments presented are performed on a real-world data set comprising biological activity data and some whole-molecular physicochemical properties. Data visualization is a popular way of presenting complex data in a simpler form. We have applied powerful principled visualization methods, such as generative topographic mapping (GTM) and hierarchical GTM (HGTM), to help the domain experts (screening scientists, chemists, biologists, etc.) understand the data and make meaningful decisions. We also benchmark these principled methods against better-known visualization approaches, principal component analysis (PCA), Sammon's mapping, and self-organizing maps (SOMs), to demonstrate their enhanced power to help the user visualize the large multidimensional data sets one has to deal with during the early stages of the drug discovery process. The results reported clearly show that the GTM and HGTM algorithms allow the user to cluster active compounds for different targets and understand them better than the benchmarks. An interactive software tool supporting these visualization algorithms was provided to the domain experts. The tool helps the domain experts explore the projections obtained from the visualization algorithms, providing facilities such as parallel coordinate plots, magnification factors, directional curvatures, and integration with industry-standard software. © 2006 American Chemical Society.
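GTM and HGTM are too involved to reproduce here, but the PCA benchmark the paper compares them against is easily sketched (synthetic data stands in for the compound set, which is not public):

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Hypothetical stand-in: 200 compounds x 10 physicochemical descriptors.
X = rng.normal(size=(200, 10))
X[:100] += 2.0               # pretend one cluster of 'active' compounds

proj = PCA(n_components=2).fit_transform(X)
print(proj.shape)            # (200, 2): coordinates for a scatter plot

GTM replaces this linear projection with a probabilistic nonlinear mapping from a latent grid, which is what lets it separate clusters that PCA smears together.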
Abstract:
Discusses the necessity for the conscious recognition of the phenomenon known as the extended enterprise; this demands that product, process and supply chain design are all considered simultaneously. Structure must be given to the extended enterprise in order to understand and manage it efficaciously. The authors discuss multiple perspectives for doing this, and employ the notions of “3-dimensional concurrent engineering” and “holonic thinking” for conceiving what the structure may look like. Describes a current “action research” project that is investigating potential lead-time reductions within an extended enterprise’s product introduction process. This aims to produce process visualisations, a framework for structuring and synchronising phases and stage-gates within the extended enterprise, and a new simulation tool which will provide a synthetic distributed hypermedia network. These deliverables will be used to play strategic “games” to explore problem issues within the product introduction process that belongs to the extended enterprise, develop teamwork across autonomous companies, and ultimately, contribute to the design of future extended enterprise supply chains.
Abstract:
When constructing and using environmental models, it is typical that many of the inputs to the models will not be known perfectly. In some cases, it will be possible to make observations, or occasionally to use physics-based uncertainty propagation, to ascertain the uncertainty on these inputs. However, such observations are often either not available or not even possible, and another approach to characterising the uncertainty on the inputs must be sought. Even when observations are available, if the analysis is being carried out within a Bayesian framework then prior distributions will have to be specified. One option for gathering, or at least estimating, this information is to employ expert elicitation. Expert elicitation is well studied within statistics and psychology and involves the assessment of the beliefs of a group of experts about an uncertain quantity (for example an input or parameter within a model), typically in terms of obtaining a probability distribution. One of the challenges in expert elicitation is to minimise the biases that might enter into the judgements made by the individual experts, and then to come to a consensus decision within the group of experts. Effort is made in the elicitation exercise to prevent biases clouding the judgements through well-devised questioning schemes. It is also important that, when reaching a consensus, the experts are exposed to the knowledge of the others in the group. Within the FP7 UncertWeb project (http://www.uncertweb.org/), there is a requirement to build a Web-based tool for expert elicitation. In this paper, we discuss some of the issues of building a Web-based elicitation system - both the technological aspects and the statistical and scientific issues. In particular, we demonstrate two tools: a Web-based system for the elicitation of continuous random variables, and a system designed to elicit uncertainty about categorical random variables in the setting of landcover classification uncertainty. The first of these examples is a generic tool developed to elicit uncertainty about univariate continuous random variables. It is designed to be used within an application context and extends the existing SHELF method, adding a web interface and access to metadata. The tool is developed so that it can be readily integrated with environmental models exposed as web services. The second example was developed for the TREES-3 initiative, which monitors tropical landcover change through ground-truthing at confluence points. It allows experts to validate the accuracy of automated landcover classifications using site-specific imagery and local knowledge. Experts may provide uncertainty information at various levels: from a general rating of their confidence in a site validation to a numerical ranking of the possible landcover types within a segment. A key challenge in the web-based setting is the design of the user interface and the method of interaction between the problem owner and the problem experts. We show the workflow of the elicitation tool, and show how we can represent the final elicited distributions and confusion matrices using UncertML, ready for integration into uncertainty-enabled workflows. We also show how the metadata associated with the elicitation exercise is captured and can be referenced from the elicited result, providing crucial lineage information and thus traceability in the decision making process.
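The numerical core of such a tool can be sketched as follows (an assumed approach in the spirit of the SHELF method, not UncertWeb's actual code): fit a parametric distribution to the quantiles an expert supplies:

import numpy as np
from scipy import stats, optimize

elicited = {0.05: 2.0, 0.5: 5.0, 0.95: 11.0}   # expert's quantile judgements

def loss(params):
    mu, sigma = params
    q = stats.lognorm.ppf(list(elicited.keys()), s=sigma, scale=np.exp(mu))
    return np.sum((q - np.array(list(elicited.values()))) ** 2)

res = optimize.minimize(loss, x0=[np.log(5.0), 0.5],
                        bounds=[(None, None), (1e-3, None)])
mu, sigma = res.x
print(f"fitted lognormal: mu={mu:.3f}, sigma={sigma:.3f}")

The fitted distribution can then be serialized (in UncertWeb's case, as UncertML) for use in downstream uncertainty-enabled workflows.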
Abstract:
The proliferation of data throughout the strategic, tactical and operational areas within many organisations has provided a need for the decision maker to be presented with structured information that is appropriate for achieving allocated tasks. However, despite this abundance of data, managers at all levels in the organisation commonly encounter a condition of ‘information overload’, which results in a paucity of the correct information. Specifically, this thesis focuses upon the tactical domain within the organisation and the information needs of management who reside at this level. In doing so, it argues that the link between decision making at the tactical level in the organisation and low-level transaction processing data should be through a common object model that uses a framework based upon knowledge leveraged from co-ordination theory. In order to achieve this, the Co-ordinated Business Object Model (CBOM) was created. The CBOM details a two-tier framework: the first tier models data based upon four interactive object models, namely processes, activities, resources and actors; the second tier analyses the data captured by the four object models and returns information that can be used to support tactical decision making. In addition, the Co-ordinated Business Object Support System (CBOSS) is a prototype tool that has been developed in order both to support the CBOM implementation and to demonstrate the functionality of the CBOM as a modelling approach for supporting tactical management decision making. Containing a graphical user interface, the system’s functionality allows the user to create and explore alternative implementations of an identified tactical-level process. In order to validate the CBOM, three verification tests have been completed. The results provide evidence that the CBOM framework helps bridge the gap between low-level transaction data and the information that is used to support tactical-level decision making.
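A minimal sketch of the first tier follows, with classes named after the four object models the thesis lists (the real CBOM is considerably richer):

from dataclasses import dataclass, field

@dataclass
class Actor:
    name: str

@dataclass
class Resource:
    name: str
    capacity: float

@dataclass
class Activity:
    name: str
    performed_by: Actor
    uses: list = field(default_factory=list)    # Resources consumed

@dataclass
class Process:
    name: str
    activities: list = field(default_factory=list)

clerk = Actor('order clerk')
erp = Resource('ERP terminal', capacity=1.0)
entry = Activity('enter order', performed_by=clerk, uses=[erp])
proc = Process('order fulfilment', activities=[entry])
print(len(proc.activities))   # the second tier would analyse this data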
Abstract:
Cellular manufacturing is widely acknowledged as one of the key approaches to achieving world-class performance in batch manufacturing operations. The design of cellular manufacturing systems (CMS) is therefore crucial in determining a company's competitiveness. This thesis postulated that, in order to be effective, the design of CMS should not only be systematic but also systemic. A systemic design uses the concepts of the body of work known as the 'systems approach' to ensure that a truly effective CMS is defined. The thesis examined the systems approach and created a systemic framework against which existing approaches to the design of CMS were evaluated. The most promising of these, Manufacturing Systems Engineering (MSE), was further investigated using a series of cross-sectional case studies. Although, in practice, MSE proved to be less than systemic, it appeared to produce significant benefits. This seemed to suggest that CMS design did not need to be systemic to be effective. However, further longitudinal case studies showed that the benefits claimed were at an operational level, not at a business level, and also that the performance of the whole system had not been evaluated. The deficiencies identified in the existing approaches to designing CMS were then addressed by the development of a novel CMS design methodology that fully utilised systems concepts. A key aspect of the methodology was the use of the Whole Business Simulator (WBS), a modelling and simulation tool that enabled the evaluation of CMS at operational and business levels. The most contentious aspects of the methodology were tested on a significant and complex case study. The results of the exercise indicated that the systemic methodology was feasible.
Abstract:
In analysing manufacturing systems, for either design or operational reasons, failure to account for the potentially significant dynamics could produce invalid results. There are many analysis techniques that can be used; however, simulation is unique in its ability to assess detailed, dynamic behaviour. The use of simulation to analyse manufacturing systems would therefore seem appropriate if not essential. Many simulation software products are available, but their ease of use and scope of application vary greatly. This is illustrated at one extreme by simulators, which offer rapid but limited application, and at the other by simulation languages, which are extremely flexible but tedious to code. Given that a typical manufacturing engineer does not possess in-depth programming and simulation skills, the use of simulators over simulation languages would seem the more appropriate choice. Whilst simulators offer ease of use, their limited functionality may preclude their use in many applications. The construction of current simulators makes it difficult to amend or extend the functionality of the system to meet new challenges. Some simulators could even become obsolete as users demand modelling functionality that reflects the latest manufacturing system design and operation concepts. This thesis examines the deficiencies in current simulation tools and considers whether they can be overcome by the application of object-oriented principles. Object-oriented techniques have gained in popularity in recent years and are seen as having the potential to overcome many of the problems traditionally associated with software construction. There are a number of key concepts that are exploited in the work described in this thesis: the use of object-oriented techniques to act as a framework for abstracting engineering concepts into a simulation tool, and the ability to reuse and extend object-oriented software. It is argued that current object-oriented simulation tools are deficient and that, in designing such tools, object-oriented techniques should be used not just for the creation of individual simulation objects but for the creation of the complete software. This results in the ability to construct an easy-to-use simulator that is not limited by its initial functionality. The thesis presents the design of an object-oriented data-driven simulator which can be freely extended. Discussion and work is focused on discrete parts manufacture. The system developed retains the ease of use typical of data-driven simulators whilst removing any limitation on its potential range of applications. Reference is given to additions made to the simulator by other developers not involved in the original software development. Particular emphasis is put on the requirements of the manufacturing engineer and the need for the engineer to carry out dynamic evaluations.
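The design argument can be illustrated with a heavily simplified sketch (hypothetical names): the event loop is generic and data-driven, and new station behaviour is added by subclassing rather than by editing the simulator core:

import heapq

class Station:
    """Default station: finishes a part after one fixed cycle time.
    Subclasses override process() to model new behaviour."""
    def __init__(self, name, cycle_time):
        self.name, self.cycle_time = name, cycle_time
    def process(self, sim, part, now):
        sim.schedule(now + self.cycle_time, self.name, part)

class Simulator:
    def __init__(self):
        self.events = []     # (time, station, part) min-heap
    def schedule(self, time, station, part):
        heapq.heappush(self.events, (time, station, part))
    def run(self):
        while self.events:
            time, station, part = heapq.heappop(self.events)
            print(f"t={time:.1f}: {part} finished at {station}")

sim = Simulator()
Station('lathe', cycle_time=4.0).process(sim, 'part-1', now=0.0)
Station('mill', cycle_time=2.5).process(sim, 'part-2', now=0.0)
sim.run()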
Abstract:
We present an imaging system based on light emitting diode (LED) illumination that produces multispectral optical images of the human ocular fundus. It uses a conventional fundus camera equipped with a high power LED light source and a highly sensitive electron-multiplying charge coupled device camera. It is able to take pictures at a series of wavelengths in rapid succession at short exposure times, thereby eliminating the image shift introduced by natural eye movements (saccades). In contrast with snapshot systems, the images retain full spatial resolution. The system is not suitable for applications where the full spectral resolution is required as it uses discrete wavebands for illumination. This is not a problem in retinal imaging where the use of selected wavelengths is common. The modular nature of the light source allows new wavelengths to be introduced easily and at low cost. The use of wavelength-specific LEDs as a source is preferable to white light illumination and subsequent filtering of the remitted light as it minimizes the total light exposure of the subject. The system is controlled via a graphical user interface that enables flexible control of intensity, duration, and sequencing of sources in synchrony with the camera. Our initial experiments indicate that the system can acquire multispectral image sequences of the human retina at exposure times of 0.05 s in the range of 500-620 nm with mean signal to noise ratio of 17 dB (min 11, std 4.5), making it suitable for quantitative analysis with application to the diagnosis and screening of eye diseases such as diabetic retinopathy and age-related macular degeneration.
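As a small worked illustration of the quoted figure, one common definition of image signal-to-noise ratio is SNR_dB = 20 log10(mean/std) over a nominally uniform region (an assumption here; the paper may define it differently):

import numpy as np

def snr_db(region):
    region = np.asarray(region, dtype=float)
    return 20.0 * np.log10(region.mean() / region.std())

rng = np.random.default_rng(1)
patch = rng.normal(loc=100.0, scale=14.0, size=(64, 64))  # synthetic patch
print(f"SNR = {snr_db(patch):.1f} dB")   # about 17 dB for these values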