435 results for Grid simulation
Abstract:
This paper presents an accurate and robust geometric and material nonlinear formulation to predict the structural behaviour of unprotected steel members at elevated temperatures. A fire analysis including large-displacement effects for frame structures is presented. The finite element formulation of beam-column elements is based on the plastic hinge approach to model elasto-plastic, strain-hardening material behaviour. The Newton-Raphson method, allowing for thermal time-dependent effects, is employed to solve the nonlinear governing equations for large deflection over the thermal history. A combined incremental and total formulation for determining member resistance is employed in this nonlinear solution procedure for efficient modelling of nonlinear effects. Degradation of material strength with increasing temperature is simulated by a set of temperature-stress-strain curves according to both ECCS and BS5950 Part 8, which implicitly allow for creep deformation. The effects of uniform and non-uniform temperature distributions over the section of a structural steel member are also considered. Several numerical and experimental verifications are presented.
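The Newton-Raphson iteration at the heart of such a nonlinear solution procedure can be sketched generically. The softening-spring residual below, its stiffness-degradation law and all numerical values are hypothetical stand-ins for illustration, not the paper's formulation:

```python
import numpy as np

def newton_raphson(residual, jacobian, u0, tol=1e-10, max_iter=50):
    """Solve residual(u) = 0 by Newton-Raphson iteration."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        r = residual(u)
        if np.linalg.norm(r) < tol:
            return u
        du = np.linalg.solve(jacobian(u), r)  # tangent-stiffness solve
        u = u - du
    raise RuntimeError("Newton-Raphson did not converge")

# Toy equilibrium: a spring whose stiffness degrades with displacement,
# loosely mimicking strength loss at elevated temperature (invented law):
#   k0 * (1 - 0.1*u) * u - f = 0
k0, f = 100.0, 50.0
res = lambda u: np.array([k0 * (1 - 0.1 * u[0]) * u[0] - f])
jac = lambda u: np.array([[k0 * (1 - 0.2 * u[0])]])  # d(residual)/du
u = newton_raphson(res, jac, [0.0])
```

Starting from the unloaded state, the iteration converges to the smaller of the two equilibrium roots, as a displacement-controlled analysis would.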
Abstract:
With the ever-increasing penetration of wind power, its impacts on the power system are becoming more and more significant. Hence, it is necessary to systematically examine its impacts on small-signal stability and transient stability in order to find countermeasures. To this end, a comprehensive study is carried out to compare the dynamic performance of a power system with each of three widely used wind power generators. First, dynamic models are described for the three types of wind power generator, i.e., the squirrel-cage induction generator (SCIG), the doubly fed induction generator (DFIG) and the permanent magnet generator (PMG). Then, the impacts of these wind power generators on small-signal stability and transient stability are compared with those of a substituted synchronous generator (SG) in the WSCC three-machine, nine-bus system by eigenvalue analysis and dynamic time-domain simulations. Simulation results show that the impacts of the different wind power generators differ under small and large disturbances.
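Eigenvalue analysis for small-signal stability reduces to checking the eigenvalues of the linearised system matrix: the system is stable if all real parts are negative, and each oscillatory mode has a damping ratio. A minimal sketch; the two-state swing-style model and its coefficients are invented for illustration, not the WSCC system:

```python
import numpy as np

def small_signal_report(A):
    """Eigenvalue screen of a linearised system dx/dt = A x.
    Returns (stable, damping ratios of the oscillatory modes)."""
    eigs = np.linalg.eigvals(A)
    stable = bool(np.all(eigs.real < 0))
    osc = eigs[eigs.imag > 0]        # one of each complex-conjugate pair
    zeta = -osc.real / np.abs(osc)   # damping ratio per mode
    return stable, zeta

# Hypothetical 2-state model: delta' = w,  w' = -K*delta - D*w
K, D = 1.0, 0.2
A = np.array([[0.0, 1.0],
              [-K, -D]])
stable, zeta = small_signal_report(A)
```

A time-domain simulation of the same linear model would show a decaying oscillation consistent with the computed damping ratio.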
Abstract:
Agent-based modelling (ABM), like other modelling techniques, is used to answer specific questions about real-world systems that could otherwise be expensive or impractical to study. Its recent gain in popularity can be attributed to some degree to its capacity to use information at a fine level of detail of the system, both geographically and temporally, and to generate information at a higher level, where emerging patterns can be observed. The technique is data-intensive, as explicit data at a fine level of detail is used, and computer-intensive, as many interactions between agents, which can learn and have a goal, are required. With the growing availability of data and the increase in computer power, these concerns are, however, fading. Nonetheless, being able to update or extend the model as more information becomes available can become problematic, because of the tight coupling of the agents and their dependence on the data, especially when modelling very large systems. One large system to which ABM is currently applied is electricity distribution, where thousands of agents representing the network and the consumers' behaviours interact with one another. A framework that aims at answering a range of questions regarding the potential evolution of the grid has been developed and is presented here. It uses agent-based modelling to represent the engineering infrastructure of the distribution network and has been built with flexibility and extensibility in mind. What distinguishes the method presented here from the usual ABMs is that this ABM has been developed in a compositional manner. This encompasses not only the software tool, whose core is named MODAM (MODular Agent-based Model), but the model itself. Using such an approach enables the model to be extended as more information becomes available, or modified as the electricity system evolves, leading to an adaptable model.
Two well-known modularity principles in the software engineering domain are information hiding and separation of concerns. These principles were used to develop the agent-based model on top of OSGi and Eclipse plug-ins, which have good support for modularity. Information regarding the model entities was separated into a) assets, which describe the entities' physical characteristics, and b) agents, which describe their behaviour according to their goal and previous learning experiences. This approach diverges from the traditional approach, in which both aspects are often conflated. It has many advantages in terms of reusability of one or the other aspect for different purposes, as well as composability when building simulations. For example, the way an asset is used on a network can vary greatly while its physical characteristics stay the same – this is the case for two identical battery systems whose usage will vary depending on the purpose of their installation. While any battery can be described by its physical properties (e.g. capacity, lifetime, and depth of discharge), its behaviour will vary depending on who is using it and what their aim is. The model is populated using data describing both aspects (physical characteristics and behaviour) and can be updated as required depending on what simulation is to be run. For example, data can be used to describe the environment to which the agents respond – e.g. weather for solar panels – or to describe the assets and their relation to one another – e.g. the network assets. Finally, when running a simulation, MODAM calls on its module manager, which coordinates the different plug-ins, automates the creation of the assets and agents using factories, and schedules their execution, which can be done sequentially or in parallel for faster execution. Building agent-based models in this way has proven fast when adding new complex behaviours, as well as new types of assets.
Simulations have been run to understand the potential impact of changes to the network in terms of assets (e.g. installation of decentralised generators) or behaviours (e.g. responses to different management aims). While this platform has been developed within the context of a project focussing on the electricity domain, the core of the software, MODAM, can be extended to other domains such as transport; this is part of future work, with the addition of electric vehicles.
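The asset/agent separation described above can be illustrated with a minimal sketch: physical characteristics live in one class, behaviour in another, and a factory builds the pairs before a sequential scheduler steps them. The Battery and PeakShavingAgent classes, the policy and all numbers are hypothetical examples in the spirit of MODAM, not its actual API:

```python
from dataclasses import dataclass

# Asset: physical characteristics only.
@dataclass
class Battery:
    capacity_kwh: float
    charge_kwh: float = 0.0

# Agent: behaviour, decoupled from the asset it drives.
class PeakShavingAgent:
    """Charges off-peak, discharges during peak hours (invented policy)."""
    def __init__(self, asset, peak_hours):
        self.asset, self.peak_hours = asset, peak_hours

    def step(self, hour, rate_kwh=1.0):
        b = self.asset
        if hour in self.peak_hours:
            b.charge_kwh = max(0.0, b.charge_kwh - rate_kwh)
        else:
            b.charge_kwh = min(b.capacity_kwh, b.charge_kwh + rate_kwh)

def make_fleet(n, capacity_kwh, peak_hours):
    """Factory: builds n identical assets, each driven by its own agent."""
    return [PeakShavingAgent(Battery(capacity_kwh), peak_hours)
            for _ in range(n)]

fleet = make_fleet(3, capacity_kwh=5.0, peak_hours={18, 19, 20})
for hour in range(24):          # sequential scheduler over one day
    for agent in fleet:
        agent.step(hour)
```

Swapping in a different agent class changes the behaviour of the same physical asset without touching its description, which is the reuse benefit the separation is meant to deliver.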
Abstract:
Global awareness of cleaner and renewable energy is transforming the electricity sector at many levels. New technologies are being increasingly integrated into the electricity grid at high, medium and low voltage levels, new taxes on carbon emissions are being introduced, and individuals can now produce electricity, mainly through rooftop photovoltaic (PV) systems. While leading to improvements, these changes also introduce challenges, and a question that often arises is 'how can we manage this constantly evolving grid?' The Queensland Government and Ergon Energy, one of the two Queensland distribution companies, have partnered with Australian and German universities on a project to answer this question in a holistic manner. The project investigates the impact the integration of renewables and other new technologies has on the physical structure of the grid, and how this evolving system can be managed in a sustainable and economical manner. To aid understanding of what the future might bring, a software platform has been developed that integrates two modelling techniques: agent-based modelling (ABM), to capture the characteristics of the different system units accurately and dynamically, and particle swarm optimization (PSO), to find the most economical mix of network extension and integration of distributed generation over long periods of time. Using data from Ergon Energy, two types of networks (three-phase and Single Wire Earth Return, or SWER) have been modelled; three-phase networks are usually used in dense areas such as urban centres, while SWER networks are widely used in rural Queensland. Simulations can be performed on these networks to identify the required upgrades, following a three-step process: a) what is already in place and how it performs under current and future loads, b) what can be done to manage it and plan the future grid, and c) how these upgrades/new installations will perform over time.
The number of small-scale distributed generators, e.g. PV and battery systems, is now sufficient (and expected to increase) to impact the operation of the grid, which in turn needs to be considered by the distribution network manager when planning upgrades and/or installations to stay within regulatory limits. Different scenarios can be simulated, with different levels of distributed generation, in place as well as expected, so that a large number of options can be assessed (Step a). Once the location, sizing and timing of asset upgrades and/or installations are found using optimisation techniques (Step b), it is possible to assess the adequacy of their daily performance using agent-based modelling (Step c). One distinguishing feature of this software is that it is possible to analyse a whole area at once, while still having a tailored solution for each of the sub-areas. To illustrate this, using the impact battery and PV systems can have on the two types of networks mentioned above, three design conditions can be identified (amongst others):
- Urban conditions:
  - Feeders that have a low take-up of solar generators may benefit from adding solar panels.
  - Feeders that need voltage support at specific times may be assisted by installing batteries.
- Rural conditions (SWER network):
  - Feeders that need voltage support as well as peak lopping may benefit from both battery and solar panel installations.
This small example demonstrates that no single solution can be applied across all three conditions, and there is a need to be selective in which one is applied to each branch of the network. This is currently the function of the engineer, who can define various scenarios against a configuration, test them and iterate towards an appropriate solution. Future work will focus on increasing the level of automation in identifying areas where particular solutions are applicable.
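Particle swarm optimization of the kind used in Step b can be sketched generically: particles fly through the search space, pulled towards their personal best and the swarm's global best. The swarm parameters and the quadratic stand-in for an annual network cost (optimum at a hypothetical 3 kWh battery / 4 kW PV mix) are assumptions for illustration, not Ergon Energy's model:

```python
import random

def pso(cost, dim, bounds, n_particles=20, iters=60,
        w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimisation over a box-constrained space."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pcost = [cost(p) for p in pos]
    gbest = min(zip(pcost, pbest))[1][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])   # cognitive pull
                             + c2 * r2 * (gbest[d] - pos[i][d]))     # social pull
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            c = cost(pos[i])
            if c < pcost[i]:
                pcost[i], pbest[i] = c, pos[i][:]
                if c < cost(gbest):
                    gbest = pos[i][:]
    return gbest

# Hypothetical cost surface: deviation from a "right-sized" installation.
best = pso(lambda x: (x[0] - 3.0) ** 2 + (x[1] - 4.0) ** 2,
           dim=2, bounds=(0.0, 10.0))
```

In the platform described above, the cost function would instead come from simulating a candidate upgrade plan on the network model.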
Abstract:
Pile foundations transfer loads from superstructures to stronger subsoil. Their strength and stability can hence affect structural safety. This paper treats the response of a reinforced concrete pile in saturated sand to a buried explosion. Fully coupled computer simulation techniques are used together with five different material models. The influence of reinforcement on pile response is investigated, and the important safety parameters of horizontal deformation and tensile stress in the pile are evaluated. Results indicate that adequate longitudinal reinforcement and proper detailing of transverse reinforcement can reduce pile damage. The present findings can serve as a benchmark reference for future analysis and design.
Abstract:
Food waste is a current challenge that both developing and developed countries face. This project applied a novel combination of available methods in mechanical, agricultural and food engineering to address these challenges. A systematic approach was devised to investigate possibilities for reducing food waste and increasing the efficiency of industry by applying engineering concepts and theories, including experimental, mathematical and computational modelling methods. This study highlights the impact of a comprehensive understanding of the response of agricultural and food materials to mechanical operations, and its direct relation to the volume of food wasted globally.
Abstract:
In this paper, the inherent mechanism of the benefits associated with smart grid development is examined based on the Pressure-State-Response (PSR) model from resource economics. The emerging types of technology brought about by smart grid development are taken as pressures. The improvements in the performance and efficiency of power system operation are taken as states. The effects of smart grid development on society are taken as responses. Then, a novel method for evaluating the social benefits of smart grid development in energy saving and CO2 emission reduction is presented. Finally, an evaluation of the benefits in a province in northwest China is carried out using the developed evaluation system, and reasonable evaluation results are attained.
Abstract:
Solutions to remedy voltage disturbances have mostly been suggested only for industrial customers; not much research has been done on the impact of voltage problems on residential facilities. This paper proposes a new method to reduce the effect of voltage dips and swells in smart grids equipped with communication systems. To this end, a voltage source inverter and the corresponding control system are employed. The behavior of a power system during voltage dips and swells is analyzed. The results demonstrate reasonable improvement in terms of voltage dip and swell mitigation. All simulations are implemented in the MATLAB/Simulink environment.
Abstract:
A novel intelligent online demand management system for peak load management in low voltage residential distribution networks, based on the smart grid concept, is discussed in this chapter. The discussed system also regulates the network voltage, balances the power across the three phases and coordinates the energy storage within the network. The method uses low-cost controllers, with two-way communication interfaces, installed in customers' premises and at distribution transformers to manage the peak load while maximizing customer satisfaction. A multi-objective decision-making process is proposed to select the load(s) to be delayed or controlled. The efficacy of the proposed control system is verified by a MATLAB-based simulation which includes detailed modeling of residential loads and the network.
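A multi-objective selection of loads to delay can be sketched as a weighted-sum ranking that trades customer impact against power relief. The weights, the load list and the scoring rule below are hypothetical, not the chapter's actual decision process:

```python
def pick_loads_to_delay(loads, needed_kw, w_comfort=0.6, w_power=0.4):
    """Rank controllable loads (low customer impact and high power relief
    first) and delay loads until the target peak reduction is met."""
    ranked = sorted(loads,
                    key=lambda l: w_comfort * l["impact"] - w_power * l["kw"])
    delayed, shed = [], 0.0
    for l in ranked:
        if shed >= needed_kw:
            break
        delayed.append(l["name"])
        shed += l["kw"]
    return delayed, shed

# Invented household loads: power draw (kW) and a 0-1 comfort-impact score.
loads = [
    {"name": "pool pump", "kw": 1.5, "impact": 0.1},
    {"name": "water heater", "kw": 3.0, "impact": 0.3},
    {"name": "air conditioner", "kw": 2.5, "impact": 0.9},
]
delayed, shed = pick_loads_to_delay(loads, needed_kw=4.0)
```

With these weights the high-impact air conditioner is spared: the water heater and pool pump together cover the required reduction.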
Abstract:
Many model-based investigation techniques, such as sensitivity analysis, optimization, and statistical inference, require a large number of model evaluations to be performed at different input and/or parameter values. This limits the application of these techniques to models that can be implemented in computationally efficient computer codes. Emulators, by providing efficient interpolation between outputs of deterministic simulation models, can considerably extend the field of applicability of such computationally demanding techniques. So far, the dominant techniques for developing emulators have been priors in the form of Gaussian stochastic processes (GASP) conditioned on a design data set of inputs and corresponding model outputs. In the context of dynamic models, this approach has two essential disadvantages: (i) these emulators do not consider our knowledge of the structure of the model, and (ii) they run into numerical difficulties if there is a large number of closely spaced input points, as is often the case in the time dimension of dynamic models. To address both of these problems, a new concept for developing emulators for dynamic models is proposed. This concept is based on a prior that combines a simplified linear state-space model of the temporal evolution of the dynamic model with Gaussian stochastic processes for the innovation terms as functions of model parameters and/or inputs. These innovation terms are intended to correct the error of the linear model at each output step. Conditioning this prior on the design data set is done by Kalman smoothing. This leads to an efficient emulator that, due to the consideration of our knowledge about the dominant mechanisms built into the simulation model, can be expected to outperform purely statistical emulators, at least in cases in which the design data set is small. The feasibility and potential difficulties of the proposed approach are demonstrated by application to a simple hydrological model.
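The linear state-space backbone of such an emulator can be illustrated with a scalar Kalman filter; the smoothing pass and the Gaussian-process innovation terms of the actual proposal are omitted here, and the decay model and noise levels are invented for illustration:

```python
import numpy as np

def kalman_filter(y, a, c, q, r, m0=0.0, p0=1.0):
    """Scalar Kalman filter for x_t = a*x_{t-1} + w_t, y_t = c*x_t + v_t,
    with w_t ~ N(0, q) and v_t ~ N(0, r)."""
    m, p = m0, p0
    means, variances = [], []
    for obs in y:
        m, p = a * m, a * a * p + q                    # predict
        k = p * c / (c * c * p + r)                    # Kalman gain
        m, p = m + k * (obs - c * m), (1 - k * c) * p  # update
        means.append(m)
        variances.append(p)
    return np.array(means), np.array(variances)

# Hypothetical slowly decaying "reservoir" state observed through noise.
rng = np.random.default_rng(0)
truth, x = [], 5.0
for _ in range(100):
    x = 0.9 * x + rng.normal(0.0, 0.1)
    truth.append(x)
truth = np.array(truth)
y = truth + rng.normal(0.0, 0.5, size=100)
means, variances = kalman_filter(y, a=0.9, c=1.0, q=0.01, r=0.25)
```

Because the filter exploits the known linear dynamics, its estimates track the true state more closely than the raw observations do, which is the intuition behind preferring a structured prior over a purely statistical one.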
Abstract:
Distributed generation (DG) resources are commonly used in electric systems to obtain, among other benefits of DG, minimum line losses in radial distribution systems. Studies have shown the importance of appropriate selection of the location and size of DGs. This paper proposes an analytical method for solving the optimal distributed generation placement (ODGP) problem to minimize line losses in radial distribution systems, using a loss sensitivity factor (LSF) based on the bus-injection to branch-current (BIBC) matrix. The proposed method is formulated and tested on 12- and 34-bus radial distribution systems. The classical grid search algorithm based on successive load flows is employed to validate the results. The main advantages of the proposed method compared with conventional methods are its robustness and the fact that it does not require calculating and inverting large admittance or Jacobian matrices. Therefore, the simulation time and the amount of computer memory required for processing data, especially for large systems, decrease.
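The grid-search baseline used for validation can be sketched on a toy feeder. The BIBC idea appears here only as "each branch carries the net current of all downstream buses"; voltage drops are ignored (flat 1 p.u. profile), and all resistances, loads and candidate sizes are made up:

```python
from itertools import product

# Toy radial feeder: branch k connects bus k to bus k+1 (bus 0 = slack).
r = [0.02, 0.03, 0.04, 0.05]    # branch resistances (p.u.)
load = [1.0, 0.8, 0.6, 0.4]     # load currents at buses 1..4 (p.u.)

def losses(dg_bus, dg_size):
    """Total I^2*R loss with one DG injecting dg_size at bus dg_bus
    (1-indexed). BIBC logic: branch k carries the net current of all
    buses downstream of it."""
    total = 0.0
    for k, rk in enumerate(r):
        i_branch = sum(load[j] for j in range(k, len(load)))
        if dg_bus - 1 >= k:      # DG current flows back through branch k
            i_branch -= dg_size
        total += rk * i_branch ** 2
    return total

# Classical grid search over candidate locations and sizes.
sizes = [0.0, 0.5, 1.0, 1.5, 2.0]
best = min(product(range(1, len(load) + 1), sizes),
           key=lambda c: losses(*c))
```

On this feeder the exhaustive search picks a mid-feeder location, illustrating why placement and sizing must be chosen jointly; the analytical LSF method aims to find the same answer without enumerating every candidate.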
Abstract:
Graphene has been increasingly used as a nano-sized filler to create a broad range of nanocomposites with exceptional properties. The interfaces between fillers and matrix play a critical role in dictating the overall performance of a composite. However, the load transfer mechanism along the graphene-polymer interface has not been well understood. In this study, we conducted molecular dynamics simulations to investigate the influence of surface functionalization and layer length on interfacial load transfer in graphene-polymer nanocomposites. The simulation results show that oxygen-functionalized graphene leads to a larger interfacial shear force than hydrogen-functionalized and pristine graphene during the pull-out process. Increasing the oxygen coverage and layer length enhances the interfacial shear force; further increasing the oxygen coverage beyond about 7% leads to a saturated interfacial shear force. A model was also established to demonstrate that the mechanism of interfacial load transfer consists of two contributing parts: the formation of new surface and relative sliding along the interface. These results are believed to be useful in the development of new graphene-based nanocomposites with better interfacial properties.
Abstract:
User evaluations using paper prototypes commonly lack social context. The group simulation technique described in this paper offers a solution to this problem. The study introduces an early-phase participatory design technique targeted at small groups. The proposed technique is used to evaluate an interface that enables group work in photo collection creation. Three groups of four users, 12 in total, took part in a simulation session in which they tested a low-fidelity design concept that included their own personal photo content from an event their group attended together. The users' own content was used to evoke natural experiences. Our results indicate that the technique helped users engage naturally with the prototype during the session. The technique is suggested to be suitable for evaluating other early-phase concepts and for guiding design solutions, especially for concepts that include users' personal content and enable content sharing.
Abstract:
In this paper, the load profile and operational goal are used to find the optimal sizing of a combined PV-energy storage system for a future grid-connected residential building. As part of this approach, five operational goals are introduced and the annual cost for each operational goal is assessed. Finally, the optimal sizing of the combined PV-energy storage system is determined using the direct search method. In addition, the sensitivity of the annual cost to different parameters is analyzed.
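A direct search of this kind can be sketched as a derivative-free compass/pattern search: probe each coordinate in both directions, move when a probe improves the cost, and halve the step when none does. The quadratic surrogate for annual cost, with its optimum at a hypothetical 4 kW PV / 6 kWh storage mix, is invented for illustration:

```python
def pattern_search(cost, x0, step=1.0, tol=1e-3):
    """Compass-style direct search: probe +/- step on each coordinate;
    halve the step when no probe improves the cost."""
    x = list(x0)
    fx = cost(x)
    while step > tol:
        improved = False
        for d in range(len(x)):
            for s in (step, -step):
                trial = x[:]
                trial[d] = max(0.0, trial[d] + s)  # sizes cannot go negative
                ft = cost(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step /= 2
    return x, fx

# Hypothetical annual-cost surrogate: import savings grow with PV (x[0], kW)
# and battery (x[1], kWh) size, capital cost rises with both.
annual = lambda x: (x[0] - 4.0) ** 2 + (x[1] - 6.0) ** 2 + 10.0
(x_pv, x_batt), c = pattern_search(annual, [0.0, 0.0])
```

In the paper's setting the cost function would instead be the annual cost evaluated under one of the five operational goals.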
Abstract:
Standard Monte Carlo (sMC) simulation models have been widely used in AEC industry research to address system uncertainties. Although the benefits of probabilistic simulation analyses over deterministic methods are well documented, the sMC simulation technique is quite sensitive to the probability distributions of the input variables. This becomes highly pronounced when the region of interest within the joint probability distribution (a function of the input variables) is small. In such cases, the standard Monte Carlo approach is often impractical from a computational standpoint. In this paper, a comparative analysis of standard Monte Carlo simulation and Markov chain Monte Carlo with subset simulation (MCMC/ss) is presented. The MCMC/ss technique constitutes a more complex simulation method (relative to sMC), wherein a structured sampling algorithm is employed in place of completely randomized sampling; consequently, gains in computational efficiency can be made. The two simulation methods are compared via theoretical case studies.
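The computational difficulty of sMC for small regions of interest can be made concrete: the estimator's coefficient of variation grows as the target probability shrinks, so rare events need ever more samples. A sketch with an invented limit state (failure when a standard-normal "demand" exceeds 3, true probability about 1.35e-3):

```python
import random, math

def smc_failure_prob(limit_state, sampler, n=100_000, seed=42):
    """Standard Monte Carlo estimate of P(limit_state(x) < 0)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if limit_state(sampler(rng)) < 0)
    return hits / n

# Hypothetical limit state g(x) = 3 - x with x ~ N(0, 1).
p = smc_failure_prob(lambda x: 3.0 - x,
                     lambda rng: rng.gauss(0.0, 1.0))

# Coefficient of variation of the sMC estimator, sqrt((1-p)/(n*p)):
# it blows up as p -> 0, which is the motivation for subset simulation,
# where the rare event is reached through a chain of more probable ones.
cov = math.sqrt((1 - p) / (100_000 * p))
```

Even with 100,000 samples only on the order of a hundred of them land in the failure region, so most of the computational budget is spent on uninformative samples.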