21 results for Voltage Source Inverters
Abstract:
DC line faults on high-voltage direct current (HVDC) systems utilising voltage source converters (VSCs) are a major issue for multi-terminal HVDC systems, in which complete isolation of the faulted system is not a viable option. Of these faults, single line-to-earth faults are the most common. To better understand system behaviour under such faults, this study analyses HVDC systems based on both conventional two-level converter and modular multilevel converter technology during a permanent line-to-earth fault. Operation of the proposed system was analysed and simulated under two earthing configurations, converter-side AC transformer earthed with converter unearthed, and both converter and AC transformer unearthed, with particular attention paid to converter operation. It was observed that potential earth loops developing within the system as a result of DC line-to-earth faults lead to substantial overcurrent and, depending on the earthing configuration, to oscillations.
Abstract:
Power electronics plays an important role in the control and conversion of modern electric power systems. In particular, to integrate various renewable energies using DC transmission and to provide more flexible power control in AC systems, significant efforts have been made in the modulation and control of power electronics devices. Pulse width modulation (PWM) is a well-developed technology for conversion between AC and DC power sources, especially for the purposes of harmonics reduction and energy optimization. As a fundamental decoupled control method, vector control with PI controllers has been widely used in power systems. However, significant power loss occurs during the operation of these devices, and the loss is often dissipated as heat, leading to significant maintenance effort. Though much work has been done to improve power electronics design, little has focused so far on controller design that reduces the controller energy consumption (which leads to power loss in the power electronics) while maintaining acceptable system performance. This paper aims to bridge that gap and investigates their correlations. It is shown that a more thoughtful controller design can achieve a better balance between energy consumption in power electronics control and system performance, potentially leading to significant energy savings in the integration of renewable power sources.
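The PWM principle mentioned above can be illustrated with a minimal sinusoidal PWM sketch for one inverter leg. This is a generic carrier-comparison example with assumed parameters, not the modulation or vector-control scheme of the paper.

```python
# Minimal sketch of sinusoidal PWM for one inverter leg (illustrative only;
# all frequencies and the modulation index below are assumed values).
import math

def spwm_switch_states(m_a, f_ref, f_carrier, t_samples):
    """Compare a sinusoidal reference against a triangular carrier.

    m_a       -- amplitude modulation index (0..1)
    f_ref     -- reference (fundamental) frequency in Hz
    f_carrier -- carrier frequency in Hz
    t_samples -- iterable of time instants in seconds
    Returns a list of 0/1 switch states for the upper device of the leg.
    """
    states = []
    for t in t_samples:
        ref = m_a * math.sin(2 * math.pi * f_ref * t)
        # Triangular carrier sweeping [-1, 1] each carrier period
        phase = (t * f_carrier) % 1.0
        carrier = 4 * phase - 1 if phase < 0.5 else 3 - 4 * phase
        states.append(1 if ref >= carrier else 0)
    return states

# Averaged over one fundamental period, the duty cycle tracks the reference,
# so a pure sinusoidal reference yields an average duty near 0.5.
ts = [i * 1e-5 for i in range(2000)]          # 20 ms at 10 us resolution
duty = sum(spwm_switch_states(0.8, 50.0, 2000.0, ts)) / len(ts)
```

The comparison of a slow reference against a fast carrier is what shapes the harmonic spectrum: switching harmonics cluster around multiples of the carrier frequency, which is why harmonics reduction is tied to the PWM design.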
Abstract:
In multi-terminal high voltage direct current (HVDC) grids, the widely deployed droop control strategies cause a non-uniform voltage deviation across the grid, determined by the network topology and droop settings. This voltage deviation results in an inconsistent power flow pattern when the dispatch references are changed, which could be detrimental to the operation and seamless integration of HVDC grids. In this paper, a novel droop setting design method is proposed to address this problem and achieve more precise power dispatch. The effects of voltage deviations on power sharing accuracy and transmission loss are analysed. This paper shows that, in the droop setting design, there is a trade-off between minimizing the voltage deviation, ensuring proper power delivery, and reducing the total transmission loss. The efficacy of the proposed method is confirmed by simulation studies.
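The effect of non-uniform voltage deviation on power sharing can be reproduced with a toy two-terminal network. This sketch is not the paper's design method; the current-droop law, gains, cable resistance, and load below are all assumptions chosen for illustration.

```python
# Illustrative sketch: two droop-controlled terminals (current droop
# i = k*(v_ref - v)) feed a load through a cable resistance R. Solving the
# 2x2 steady-state network shows that the realised sharing depends on R as
# well as on the droop gains, i.e. the non-uniform voltage deviation.

def droop_sharing(k1, k2, R, v_ref=1.0, i_load=0.5):
    """Return (i1, i2): steady-state injections of the two terminals (pu).

    Node 1 hosts terminal 1; node 2 hosts terminal 2 and the load.
    KCL: k1*(v_ref - v1) = (v1 - v2)/R
         (v1 - v2)/R + k2*(v_ref - v2) = i_load
    """
    # Linear system A @ [v1, v2] = b, solved by Cramer's rule.
    a11, a12, b1 = k1 + 1.0 / R, -1.0 / R, k1 * v_ref
    a21, a22, b2 = 1.0 / R, -(1.0 / R + k2), i_load - k2 * v_ref
    det = a11 * a22 - a12 * a21
    v1 = (b1 * a22 - a12 * b2) / det
    v2 = (a11 * b2 - a21 * b1) / det
    return k1 * (v_ref - v1), k2 * (v_ref - v2)

# Equal droop gains, yet the cable resistance skews the sharing away from
# 50/50; with a negligible resistance the gains alone dictate the split.
i1_far, i2_far = droop_sharing(k1=10.0, k2=10.0, R=1.0)
i1_near, i2_near = droop_sharing(k1=10.0, k2=10.0, R=0.001)
```

With R = 1.0 pu the terminal closer to the load carries most of the current despite the identical gains, which is exactly the kind of topology-dependent deviation a droop setting design has to compensate for.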
Adaptive backstepping droop controller design for multi-terminal high-voltage direct current systems
Abstract:
Wind power is one of the most developed renewable energy resources worldwide. To integrate offshore wind farms into onshore grids, high-voltage direct current (HVDC) transmission cables interfaced with voltage source converters (VSCs) are considered a better solution than conventional approaches. Proper DC voltage regulation indicates successful power transfer. To connect more than one onshore grid, DC voltage droop control is one of the most popular methods to share the control burden between different terminals. The challenge, however, is that small droop gains cause voltage deviations, while higher droop gain settings cause large oscillations. This study aims to enhance the performance of the traditional droop controller by considering the DC cable dynamics. Based on the backstepping control concept, the DC cables are modelled as a series of capacitors and inductors, and the final droop control law is derived step by step, starting from the remote side; at each step, the control error from the previous step is taken into account. Simulation results show that both the voltage deviations and the oscillations can be effectively reduced using the proposed method. Further, power sharing between different terminals is simplified such that it correlates linearly with the droop gains, enabling simple yet accurate system operation and control.
Abstract:
Damping torque analysis is a well-developed technique for understanding and studying power system oscillations. This paper presents the applications of damping torque analysis for DC bus implemented damping control in power transmission networks in two examples. The first example is the investigation of damping effect of shunt VSC (Voltage Source Converter) based FACTS voltage control, i.e., STATCOM (Static Synchronous Compensator) voltage control. It is shown in the paper that STATCOM voltage control mainly contributes synchronous torque and hence has little effect on the damping of power system oscillations. The second example is the damping control implemented by a Battery Energy Storage System (BESS) installed in a power system. Damping torque analysis reveals that when BESS damping control is realized by regulating exchange of active and reactive power between the BESS and power system respectively, BESS damping control exhibits different properties. It is concluded by damping torque analysis that BESS damping control implemented by regulating active power is better with less interaction with BESS voltage control and more robust to variations of power system operating conditions. In the paper, all analytical conclusions obtained are demonstrated by simulation results of example power systems.
Abstract:
This paper analyzes the behavior of a voltage source converter based HVDC system under DC cable fault conditions. The behavior of the HVDC system during a permanent line-to-earth fault is analyzed, outlining the system's configuration and behavior at each stage of the fault timeline. Operation of the proposed system under a single earthing configuration, i.e. converter solidly earthed with AC transformer unearthed, was analyzed and simulated, with particular attention paid to the converter's operation. It was observed that the development of potential earth loops within the system as a result of DC line-to-earth faults leads to substantial overcurrent and results in system configuration oscillation.
Abstract:
Modern control methods such as optimal control and model predictive control (MPC) provide a framework for simultaneously regulating tracking performance and limiting control energy, and have thus been widely deployed in industrial applications. Yet, owing to their simplicity and robustness, conventional P (Proportional) and PI (Proportional-Integral) controllers are still the most common in many engineering systems, such as electric power systems, automotive systems, and Heating, Ventilation and Air Conditioning (HVAC) for buildings, where energy efficiency and energy saving are critical issues. Little has been done so far, however, to explore the effect of their parameter tuning on both system performance and control energy consumption, and how these two objectives are correlated within the P and PI control framework. In this paper, P and PI controllers are designed with simultaneous consideration of these two aspects. Two case studies are investigated in detail: the control of Voltage Source Converters (VSCs) for transmitting offshore wind power to an onshore AC grid through High Voltage DC links, and the control of HVAC systems. Results reveal that a better trade-off between tracking performance and control energy can be achieved through a proper choice of the P and PI controller parameters.
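The performance/energy correlation the abstract studies can be demonstrated on a toy problem. This is not the paper's VSC or HVAC model: it is a PI loop on an assumed first-order plant dy/dt = -y + u, with gains and horizon chosen only to expose the trade-off.

```python
# Hedged sketch: tracking performance is measured as the integral squared
# error (ISE) and control energy as the integral of u^2 over the horizon.
# A more aggressive proportional gain tracks better but spends more energy.

def pi_cost(kp, ki, r=1.0, dt=1e-3, t_end=10.0):
    """Simulate a unit-step response; return (tracking ISE, control energy)."""
    y, integ = 0.0, 0.0
    ise, energy = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        e = r - y
        integ += e * dt
        u = kp * e + ki * integ           # PI control law
        y += (-y + u) * dt                # forward-Euler plant update
        ise += e * e * dt
        energy += u * u * dt
    return ise, energy

ise_lo, energy_lo = pi_cost(kp=1.0, ki=1.0)    # mild gains
ise_hi, energy_hi = pi_cost(kp=10.0, ki=1.0)   # aggressive proportional gain
```

Sweeping kp and plotting (ise, energy) pairs traces out a trade-off curve; choosing a point on that curve is the kind of simultaneous design consideration the paper argues for.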
Abstract:
The present paper proposes, for the first time, a novel design methodology based on the optimization of source/drain extension (SDE) regions to significantly improve the trade-off between intrinsic voltage gain (A_vo) and cut-off frequency (f_T) in nanoscale double gate (DG) devices. Our results show that an optimally designed 25 nm gate length SDE region engineered DG MOSFET operating at a drain current of 10 uA/um exhibits up to 65% improvement in intrinsic voltage gain and 85% in cut-off frequency over devices designed with abrupt SDE regions. The influence of spacer width, lateral source/drain doping gradient, and symmetrically as well as asymmetrically designed SDE regions on key analog figures of merit (FOM), such as transconductance (g_m), transconductance-to-current ratio (g_m/I_ds), Early voltage (V_EA), output conductance (g_ds) and gate capacitances, is examined in detail. The present work provides new opportunities for realizing future low-voltage/low-power analog circuits with nanoscale SDE engineered DG MOSFETs. (C) 2007 Elsevier B.V. All rights reserved.
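For readers comparing the gain figures across these abstracts, the intrinsic voltage gain is the ratio of transconductance to output conductance, usually quoted in dB. The numeric values below are assumed for illustration and are not taken from the paper.

```python
# Small numeric illustration: A_vo = g_m / g_ds, expressed in decibels.
import math

def intrinsic_gain_db(g_m, g_ds):
    """Return 20*log10(g_m / g_ds); g_m and g_ds must share the same units."""
    return 20.0 * math.log10(g_m / g_ds)

# e.g. assumed g_m = 500 uS and g_ds = 5 uS give a gain of 100 V/V, i.e. 40 dB
gain_db = intrinsic_gain_db(500e-6, 5e-6)
```

This is why reducing g_ds (equivalently, raising the Early voltage V_EA) through SDE engineering translates directly into the A_vo improvements reported.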
Abstract:
In this letter, we propose a novel design methodology for engineering source/drain extension (SDE) regions to simultaneously improve intrinsic dc gain (A_vo) and cutoff frequency (f_T) of 25-nm gate-length FinFETs operated at low drain current (I_ds = 10 uA/um). SDE region optimization in 25-nm FinFETs results in exceptionally high values of A_vo (~45 dB) and f_T (~70 GHz), nearly 2.5 times greater than in devices designed with abrupt SDE regions. The influence of spacer width, lateral source/drain doping gradient, and the spacer-to-gradient ratio on key analog figures of merit is examined in detail. This letter provides new opportunities for realizing future low-voltage/low-power analog design with nanoscale SDE-engineered FinFETs.
Abstract:
The impact of source/drain engineering on the performance of a six-transistor (6-T) static random access memory (SRAM) cell, based on 22 nm double-gate (DG) SOI MOSFETs, has been analyzed using mixed-mode simulation, for three different circuit topologies for low voltage operation. The trade-offs associated with the various conflicting requirements relating to read/write/standby operations have been evaluated comprehensively in terms of eight performance metrics, namely retention noise margin, static noise margin, static voltage/current noise margin, write-ability current, write trip voltage/current and leakage current. Optimal design parameters with gate-underlap architecture have been identified to enhance the overall SRAM performance, and the influence of parasitic source/drain resistance and supply voltage scaling has been investigated. A gate-underlap device designed with a spacer-to-straggle (s/σ) ratio in the range 2-3 yields improved SRAM performance metrics, regardless of circuit topology. An optimal two word-line double-gate SOI 6-T SRAM cell design exhibits a high SNM ~162 mV, I_wr ~35 uA and low I_leak ~70 pA at V_DD = 0.6 V, while maintaining SNM ~30% of V_DD over the supply voltage (V_DD) range of 0.4-0.9 V.
Abstract:
In this paper, we analyze the enormous potential of engineering source/drain extension (SDE) regions in FinFETs for ultra-low-voltage (ULV) analog applications. SDE region design can simultaneously improve two key analog figures of merit (FOM): intrinsic dc gain (A_vo) and cutoff frequency (f_T), for 60 and 30 nm FinFETs operated at low drive current (J_ds = 5 uA/um). The improved A_vo and f_T are nearly twice those of devices with abrupt SDE regions. The influence of the SDE region profile and its impact on analog FOM is extensively analyzed. Results show that SDE region optimization provides an additional degree of freedom, apart from device parameters (fin width and aspect ratio), to design future nanoscale analog devices. The results are analyzed in terms of the spacer-to-straggle ratio, a new design parameter for SDE engineered devices. This paper provides new opportunities for realizing future ULV/low-power analog design with FinFETs.
Abstract:
In this paper, we propose, for the first time, an analytical model for short channel effects in nanoscale source/drain extension region engineered double gate (DG) SOI MOSFETs. The impact of (i) lateral source/drain doping gradient (d), (ii) spacer width (s), (iii) spacer to doping gradient ratio (s/d) and (iv) silicon film thickness (T_si) on short channel effects, namely threshold voltage (V_th) and subthreshold slope (S), on-current (I_on), off-current (I_off) and I_on/I_off, is extensively analysed using the analytical model and 2D device simulations. The results of the analytical model agree well with simulated data over the entire range of spacer widths, doping gradients and effective channel lengths. Results show that the lateral source/drain doping gradient, together with the spacer width, can not only effectively control short channel effects, thus presenting low off-current, but can also be optimised to achieve high on-currents. The present work provides valuable design insights into the performance of nanoscale DG SOI devices with optimal source/drain engineering and serves as a tool to optimise important device and technological parameters for the 65 nm technology node and below. (c) 2006 Elsevier Ltd. All rights reserved.
Abstract:
The parametric characterization of a direct current, low-pressure, high-density reflex discharge plasma source operating in argon and in nitrogen, over a range of pressures from 1.0 down to 10^-2 mbar, discharge currents of 20-200 mA, and magnetic fields of 0-120 G, is presented. Both external parameters, such as the breakdown potential and the discharge voltage-current characteristic, and internal parameters, such as the charge carriers' temperature and density, plasma potential, floating potential, and electron energy distribution function, were measured. The electron energy distribution functions are bi-Maxwellian, but some structure is observed in these functions in nitrogen plasmas. There is experimental evidence for the existence of three groups of electrons within this reflex discharge plasma. Owing to the hollow cathode effect, enhanced by magnetic trapping of electrons, the density of the cold group of electrons is as high as 10^18 m^-3, and its temperature is as low as a few tenths of an electron volt. The bulk plasma density scales with the dissipated power. Another important feature of this reflex plasma source is its high degree of uniformity, while the bulk discharge region is free of electric field. (C) 2002 American Institute of Physics.