40 results for Electric fault currents
in Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
This paper focuses on the problem of locating single-phase faults in mixed distribution electric systems, with overhead lines and underground cables, using voltage and current measurements at the sending end and the sequence model of the network. Since calculating the series impedance of underground cables is not as simple as for overhead lines, the paper proposes a methodology to estimate the zero-sequence impedance of underground cables from previous single-phase faults that occurred in the system, in which an electric arc appeared at the fault location. For this reason, the signal is first pretreated to eliminate its voltage peaks, so that the analysis can work with a signal as close to a sine wave as possible.
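As a rough illustration of the kind of computation involved (a minimal sketch, not the paper's methodology), the snippet below extracts fundamental phasors from pretreated fault recordings and forms an apparent zero-sequence impedance seen from the sending end; the function names and the assumption that one clean cycle of samples is available are mine, not the paper's.

    import numpy as np

    def fundamental_phasor(samples, samples_per_cycle):
        """Single-bin DFT estimate of the fundamental-frequency phasor of one cycle."""
        n = np.arange(samples_per_cycle)
        window = samples[:samples_per_cycle]
        return (2.0 / samples_per_cycle) * np.sum(
            window * np.exp(-1j * 2 * np.pi * n / samples_per_cycle))

    def zero_sequence(ph_a, ph_b, ph_c):
        """Zero-sequence component of a three-phase set of phasors."""
        return (ph_a + ph_b + ph_c) / 3.0

    def apparent_zero_sequence_impedance(v_abc, i_abc, samples_per_cycle):
        """v_abc, i_abc: three arrays of pretreated voltage/current samples
        recorded during an earlier single-phase fault."""
        v0 = zero_sequence(*(fundamental_phasor(x, samples_per_cycle) for x in v_abc))
        i0 = zero_sequence(*(fundamental_phasor(x, samples_per_cycle) for x in i_abc))
        return v0 / i0   # complex ohms seen from the sending end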
Abstract:
Fault location has been studied in depth for transmission lines due to its importance in power systems. Nowadays, the problem of fault location in distribution systems is receiving special attention, mainly because of power quality regulations. In this context, this paper presents an application developed in Matlab that automatically calculates the location of a fault in a distribution power system, starting from the voltages and currents measured at the line terminal and the model of the distribution power system. The application is based on an N-ary tree structure, which suits this application well due to the highly branched and non-homogeneous nature of distribution systems, and has been developed for single-phase, two-phase, two-phase-to-ground, and three-phase faults. The implemented application is tested using fault data from a real electrical distribution power system.
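To illustrate why an N-ary tree is a natural fit for a branched feeder (a sketch under my own assumptions, not the tool's actual data structure or algorithm), each node below is a line or cable section, and a traversal collects the sections whose cumulative impedance from the substation could match an impedance estimated from the fault records:

    from dataclasses import dataclass, field

    @dataclass
    class Section:
        """One line or cable section of the feeder; children model branching points."""
        name: str
        impedance: complex                  # series impedance of the section (ohm)
        children: list = field(default_factory=list)

    def candidate_sections(node, z_fault, z_accum=0j, tol=0.5):
        """Walk the N-ary tree and return sections whose cumulative impedance
        from the substation brackets the impedance estimated from the fault data."""
        matches = []
        z_end = z_accum + node.impedance
        if abs(z_accum) - tol <= abs(z_fault) <= abs(z_end) + tol:
            matches.append(node.name)
        for child in node.children:
            matches.extend(candidate_sections(child, z_fault, z_end, tol))
        return matches

    # Toy feeder: one main section with two branches.
    feeder = Section("main", 1 + 2j, [Section("branch-A", 0.5 + 1j),
                                      Section("branch-B", 2 + 4j)])
    print(candidate_sections(feeder, z_fault=1.2 + 2.5j))   # -> ['branch-A', 'branch-B']

That several candidates can be returned on different branches is exactly the situation the highly branched topology creates, and where a tree traversal is convenient.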
Abstract:
The project was carried out at KHLim in Diepenbeek. It consisted of designing a new fault-locating device for the motor relay, in order to replace the existing one for safety reasons.
Abstract:
The demand for computational power has been driving improvements in the High Performance Computing (HPC) area, generally represented by the use of distributed systems such as clusters of computers running parallel applications. In this area, fault tolerance plays an important role in providing high availability by isolating the application from the effects of faults. Performance and availability form an inseparable pair for some kinds of applications; therefore, fault tolerant solutions must take these two constraints into consideration when they are designed. In this dissertation, we present a few side effects that some fault tolerant solutions may exhibit when recovering a failed process. These effects may cause degradation of the system, affecting mainly the overall performance and availability. We introduce RADIC-II, a fault tolerant architecture for message passing based on the RADIC (Redundant Array of Distributed Independent Fault Tolerance Controllers) architecture. RADIC-II preserves as far as possible the RADIC features of transparency, decentralization, flexibility and scalability, incorporating a flexible dynamic redundancy feature that allows some recovery side effects to be mitigated or avoided.
Abstract:
Report for the scientific sojourn at the University of Linköping between April and July 2007. Monitoring of the air intake system of an automotive engine is important to meet emission-related legislative diagnosis requirements. During the research, the problem of fault detection in the air intake system was stated as a constraint satisfaction problem over continuous domains with a large number of variables and constraints. This problem was solved using interval-based consistency techniques, which are shown to be particularly efficient for checking the consistency of the analytical redundancy relations (ARRs), dealing with uncertain measurements and parameters, and using experimental data. All experiments were performed on a four-cylinder turbo-charged spark-ignited SAAB engine located in the research laboratory of the Vehicular System Group at the University of Linköping.
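The core idea, checking whether an analytical redundancy relation can be satisfied given interval-bounded measurements, can be sketched as follows; the flow-balance ARR and the numbers are invented for illustration and are not the ARRs of the actual engine model:

    # Minimal interval arithmetic: an interval is a (lo, hi) tuple.
    def i_add(a, b): return (a[0] + b[0], a[1] + b[1])
    def i_neg(a): return (-a[1], -a[0])
    def contains_zero(a): return a[0] <= 0.0 <= a[1]

    def arr_consistent(m_air_in, m_air_out, leak_bound=(0.0, 0.0)):
        """Toy ARR: mass flow in minus mass flow out should be zero within bounds.
        Arguments are intervals covering measurement and parameter uncertainty."""
        residual = i_add(m_air_in, i_neg(m_air_out))
        residual = i_add(residual, i_neg(leak_bound))
        return contains_zero(residual)   # False => inconsistency => fault indicated

    # Measured intake flow 52 +/- 2 g/s, modelled cylinder flow 45 +/- 2 g/s:
    print(arr_consistent((50.0, 54.0), (43.0, 47.0)))   # False -> fault detected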
Abstract:
The aim of this paper is to discover the origins of utility regulation in Spain and to analyse, from a microeconomic perspective, its characteristics and the impact of regulation on consumers and utilities. Madrid and the Madrilenian utilities are taken as a case study. The electric industry in the period studied was a natural monopoly: each of the three phases of production, generation, transmission and distribution, had natural monopoly characteristics. Therefore, the most efficient way to generate, transmit and distribute electricity was a monopoly, because one firm could produce a given quantity at a lower cost than the sum of the costs incurred by two or more firms. A problem arises because, when a firm is the single provider, it can charge prices above marginal cost, at monopoly prices. When a monopolist reduces the quantity produced, the price increases, causing consumers to demand less than the economically efficient level and incurring a loss of consumer surplus. The loss of consumer surplus is not completely captured by the monopolist, causing a loss of social surplus, a deadweight loss. The main objective of regulation is to reduce this deadweight loss to a minimum. Regulation is also needed because, when the monopolist sets prices where marginal cost equals marginal revenue, there is an incentive for firms to enter the market, creating inefficiency. The Madrilenian industry has been chosen because of the availability of statistical information on costs and production; the complex industry structure and the atomised demand add interest to the analysis. This study will also shed some light on the tariff regulation of the period, which has been poorly studied, and will complement the literature on US electric utility regulation, where a different type of regulation was implemented.
Abstract:
Systematic asymptotic methods are used to formulate a model for the extensional flow of a thin sheet of nematic liquid crystal. With no external body forces applied, the model is found to be equivalent to the so-called Trouton model for Newtonian sheets (and fibers), albeit with a modified "Trouton ratio". However, with a symmetry-breaking electric field gradient applied, the behavior deviates from the Newtonian case, and the sheet can undergo finite-time breakup if a suitable destabilizing field is applied. Some simple exact solutions are presented to illustrate the results in certain idealized limits, as well as sample numerical results for the full model equations.
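For context, the Trouton ratio compares extensional and shear viscosity; the classical Newtonian values that the fiber and sheet models reduce to are standard results (the paper's modified ratio for the nematic case is not reproduced here):

    \[
      \mathrm{Tr} = \frac{\eta_E}{\eta}, \qquad
      \mathrm{Tr}_{\text{uniaxial (fiber)}} = 3, \qquad
      \mathrm{Tr}_{\text{planar (sheet)}} = 4 .
    \]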
Abstract:
In this paper, a novel methodology aimed at minimizing the probability of network failure and the failure impact (in terms of QoS degradation) while optimizing resource consumption is introduced. A detailed study of MPLS recovery techniques and their GMPLS extensions is also presented. In this scenario, some features for reducing the failure impact while offering minimum failure probabilities are also analyzed. Novel two-step routing algorithms using this methodology are proposed. Results show that these methods offer high protection levels with optimal resource consumption.
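As a toy illustration of the two-step idea, filtering candidate paths by failure probability and then optimizing resource usage (not the routing algorithms proposed in the paper; link names, probabilities and costs are made up):

    def path_failure_probability(path_links, p_fail):
        """Probability that at least one link on the path fails,
        assuming independent link failures."""
        p_ok = 1.0
        for link in path_links:
            p_ok *= 1.0 - p_fail[link]
        return 1.0 - p_ok

    def two_step_select(candidate_paths, p_fail, cost, p_max):
        """Step 1: keep paths whose failure probability is below p_max.
        Step 2: among those, pick the one with the lowest resource cost."""
        feasible = [p for p in candidate_paths
                    if path_failure_probability(p, p_fail) <= p_max]
        return min(feasible, key=lambda p: sum(cost[l] for l in p)) if feasible else None

    # Toy example: two candidate LSPs between the same endpoints.
    p_fail = {"l1": 0.01, "l2": 0.02, "l3": 0.001, "l4": 0.001, "l5": 0.001}
    cost   = {"l1": 1.0,  "l2": 1.0,  "l3": 1.0,  "l4": 1.0,  "l5": 1.0}
    paths  = [["l1", "l2"], ["l3", "l4", "l5"]]
    print(two_step_select(paths, p_fail, cost, p_max=0.01))   # -> ['l3', 'l4', 'l5']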
Abstract:
This paper presents and compares two approaches to estimate the origin (upstream or downstream) of voltage sags registered in distribution substations. The first approach is based on the application of a single rule dealing with features extracted from the impedances during the fault, whereas the second method exploits the variability of the waveforms from a statistical point of view. Both approaches have been tested with voltage sags registered in distribution substations, and their advantages, drawbacks and comparative results are presented.
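A heavily simplified sketch of the first, impedance-based approach; the specific rule used here (a drop in apparent impedance magnitude during the sag indicating a downstream origin) is my own illustrative assumption, not necessarily the paper's rule, and the phasors are invented:

    def apparent_impedance(v_phasor, i_phasor):
        """Impedance seen from the monitoring point (complex ohms)."""
        return v_phasor / i_phasor

    def sag_origin(v_pre, i_pre, v_sag, i_sag):
        """Label the sag origin by comparing the apparent impedance before and
        during the event (illustrative rule, assumed for this sketch only)."""
        z_pre = abs(apparent_impedance(v_pre, i_pre))
        z_sag = abs(apparent_impedance(v_sag, i_sag))
        return "downstream" if z_sag < z_pre else "upstream"

    # Hypothetical per-unit phasors: voltage drops and current rises during the sag.
    print(sag_origin(1.0 + 0j, 0.5 - 0.1j, 0.6 + 0j, 2.0 - 1.0j))   # -> downstream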
Abstract:
Monitoring a distribution network implies working with a huge amount of data coming from the different elements that interact in the network. This paper presents a visualization tool that simplifies the task of searching the database for useful information applicable to fault management or preventive maintenance of the network.
Abstract:
This paper deals with fault detection and isolation problems for nonlinear dynamic systems. Both problems are stated as constraint satisfaction problems (CSPs) and solved using consistency techniques. The main contribution is an isolation method based on consistency techniques and the refinement of the uncertainty space of interval parameters. The major advantage of this method is that the isolation speed is fast even when taking into account uncertainty in parameters, measurements, and model errors. Interval calculations make the method independent of the monotonicity assumption required by several observer-based approaches to fault isolation. An application to a well-known alcoholic fermentation process model is presented.
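A minimal sketch of isolation by refining the uncertainty space of an interval parameter (a toy one-parameter static model of my own invention, not the fermentation process model): halves of the parameter interval that cannot explain the measurement are discarded, and an empty result rejects the corresponding fault hypothesis.

    def model_output(u, theta):
        """Toy static model y = theta * u; theta is uncertain within an interval."""
        return theta * u

    def consistent(theta_box, u, y_meas, y_tol):
        """Check whether some parameter value in theta_box can explain the measurement."""
        lo = min(model_output(u, theta_box[0]), model_output(u, theta_box[1]))
        hi = max(model_output(u, theta_box[0]), model_output(u, theta_box[1]))
        return not (hi < y_meas - y_tol or lo > y_meas + y_tol)

    def refine(theta_box, u, y_meas, y_tol, depth=12):
        """Bisect the parameter interval, discarding halves inconsistent with the data."""
        if not consistent(theta_box, u, y_meas, y_tol):
            return []
        if depth == 0 or theta_box[1] - theta_box[0] < 1e-3:
            return [theta_box]
        mid = 0.5 * (theta_box[0] + theta_box[1])
        return (refine((theta_box[0], mid), u, y_meas, y_tol, depth - 1)
                + refine((mid, theta_box[1]), u, y_meas, y_tol, depth - 1))

    # Nominal theta in [0.9, 1.1]; the measurement would require theta near 1.5,
    # so the nominal (fault-free) hypothesis is rejected quickly.
    print(refine((0.9, 1.1), u=2.0, y_meas=3.0, y_tol=0.1))   # -> []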
Abstract:
Uncertainties not considered in the analytical model of the plant always dramatically decrease the performance of the fault detection task in practice. To better cope with this prevalent problem, in this paper we develop a methodology using Modal Interval Analysis which takes those uncertainties in the plant model into account. A fault detection method is developed based on this model which is robust to uncertainty and produces no false alarms. As soon as a fault is detected, an ANFIS model is trained online to capture the main behavior of the fault, which can then be used for fault accommodation. The simulation results clearly demonstrate the capability of the proposed method to accomplish both tasks appropriately.
Abstract:
The practical performance of analytical redundancy for fault detection and diagnosis is often decreased by uncertainties prevailing not only in the system model but also in the measurements. In this paper, the problem of fault detection is stated as a constraint satisfaction problem over continuous domains with a large number of variables and constraints. This problem can be solved using modal interval analysis and consistency techniques. Consistency techniques are shown to be particularly efficient at checking the consistency of the analytical redundancy relations (ARRs), dealing with uncertain measurements and parameters. The work presented in this paper shows that consistency techniques can be used to increase the performance of a robust fault detection tool based on interval arithmetic. The proposed method is illustrated using a nonlinear dynamic model of a hydraulic system.
Abstract:
The speed of fault isolation is crucial for the design and reconfiguration of fault tolerant control (FTC). In this paper, the fault isolation problem is stated as a constraint satisfaction problem (CSP) and solved using constraint propagation techniques. The proposed method is based on constraint satisfaction techniques and the refinement of the uncertainty space of interval parameters. In comparison with other approaches based on adaptive observers, the major advantage of the presented method is that the isolation speed is fast even when taking into account uncertainty in parameters, measurements and model errors, and it does not require the monotonicity assumption. To illustrate the proposed approach, a case study of a nonlinear dynamic system is presented.
Abstract:
Throughout the history of Electrical Engineering education, vector and phasor diagrams have been used as a fundamental learning tool. At present, computational power has replaced them with long data lists, the result of solving systems of equations by means of numerical methods. In this sense, diagrams have been pushed into the academic background and, although explained theoretically, they are not used in a practical way within specific examples. This may work against students' understanding of the complex behavior of electrical power systems. This article proposes a modification of the classical Perrine-Baum diagram construction that allows both a more practical representation and a better understanding of the behavior of a high-voltage electric line under different levels of load. At the same time, this modification allows forecasting the obsolescence of this behavior and the line's loading capacity. In addition, we evaluate the impact of this tool on the learning process, showing comparative undergraduate results over three academic years.
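A small sketch of the phasor computation that underlies such a diagram, relating receiving-end load levels to the sending-end voltage phasor through the line's ABCD constants; the constants and load values below are illustrative, not those of any line discussed in the article:

    import numpy as np

    def sending_end_phasors(v_r, s_loads, pf, A, B):
        """For each receiving-end load (MVA, lagging power factor pf), return the
        sending-end voltage phasor from the line's ABCD constants: Vs = A*Vr + B*Ir
        (per-phase quantities, Vr taken as the angle reference)."""
        phi = np.arccos(pf)
        results = []
        for s in s_loads:
            i_r = (s / (3 * abs(v_r))) * np.exp(-1j * phi)   # line current (kA)
            results.append(A * v_r + B * i_r)
        return results

    # Hypothetical line: Vr = 127 kV per phase, a few load steps at pf = 0.9.
    A, B = 0.98 * np.exp(1j * 0.01), 60 * np.exp(1j * 1.3)   # illustrative constants
    for s, v_s in zip([50, 100, 150], sending_end_phasors(127.0, [50, 100, 150], 0.9, A, B)):
        print(f"{s:>4} MVA -> |Vs| = {abs(v_s):6.1f} kV, angle = {np.degrees(np.angle(v_s)):5.2f} deg")

Plotting the resulting phasors for increasing load is essentially the information a Perrine-Baum-style construction conveys graphically.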