987 results for Fault simulation


Relevance: 30.00%

Publisher:

Abstract:

This paper proposes a new approach for optimal placement of phasor measurement units for fault location in electric power distribution systems, using the Greedy Randomized Adaptive Search Procedure (GRASP) metaheuristic and Monte Carlo simulation. The proposed optimized placement model is a general methodology that can be used to place devices to record voltage-sag magnitudes for any fault location algorithm that uses voltage information measured at a limited set of nodes along the feeder. An overhead, three-phase, three-wire, 13.8 kV, 134-node, real-life feeder model is used to evaluate the algorithm. Tests show that the results of the fault location methodology were improved thanks to the optimized meter placement obtained with this methodology. © 2011 IEEE.
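The abstract does not give the authors' implementation, but the GRASP construction phase it names can be sketched on a toy coverage problem: each candidate meter node "observes" a set of fault locations, and a restricted candidate list (RCL) randomizes the greedy choice. The node numbering and coverage sets below are invented for illustration.

```python
import random

def grasp_placement(coverage, n_meters, alpha=0.3, iters=50, seed=1):
    """GRASP (construction phase only): choose meter nodes that maximize the
    number of fault locations observed by at least one meter.
    `coverage[node]` is the set of fault locations node observes."""
    rng = random.Random(seed)
    best, best_cov = None, -1
    for _ in range(iters):
        chosen, covered = [], set()
        while len(chosen) < n_meters:
            # marginal coverage gain of each remaining candidate
            gains = {n: len(c - covered)
                     for n, c in coverage.items() if n not in chosen}
            gmax, gmin = max(gains.values()), min(gains.values())
            threshold = gmax - alpha * (gmax - gmin)
            rcl = [n for n, g in gains.items() if g >= threshold]
            pick = rng.choice(rcl)          # randomized greedy choice
            chosen.append(pick)
            covered |= coverage[pick]
        if len(covered) > best_cov:
            best, best_cov = chosen, len(covered)
    return best, best_cov

# hypothetical 6-location feeder: fault locations each candidate meter observes
cov = {1: {0, 1, 2}, 2: {2, 3}, 3: {3, 4, 5}, 4: {0, 5}, 5: {1, 4}, 6: {2, 5}}
meters, n_covered = grasp_placement(cov, n_meters=2)
```

In a full GRASP a local-search phase would follow the construction phase, and in the paper's setting the objective would be evaluated via Monte Carlo fault simulations rather than fixed coverage sets.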

Relevance: 30.00%

Publisher:

Abstract:

This paper presents an interactive simulation environment for distance protection, developed with ATP and foreign models written in ANSI C. Files in COMTRADE format can be generated after the ATP simulation; these files can be used to calibrate real relays. The performance of relay algorithms on real oscillography events can also be assessed by using the ATP POSTPROCESS PLOT FILE (PPF) option. The main purpose of the work is to develop a tool that allows the analysis of diverse fault cases and the execution of coordination studies, as well as the analysis of relay performance in the face of a real event. © 2011 IEEE.
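To illustrate the file exchange the environment relies on, here is a minimal sketch of writing a single-channel ASCII COMTRADE-style .cfg/.dat pair. This is not the paper's code, and the exact field layout should be checked against the IEEE C37.111 standard; station name, channel settings, and timestamps below are placeholders.

```python
import math

def write_comtrade(basename, samples, fs=4000.0, freq=60.0):
    """Write a minimal single-channel ASCII COMTRADE-style .cfg/.dat pair.
    `samples` is a list of instantaneous current values in amperes."""
    with open(basename + ".cfg", "w") as cfg:
        cfg.write("STATION,RELAY1,1999\n")   # station, device, revision year
        cfg.write("1,1A,0D\n")               # total channels, analog, digital
        # An,ch_id,ph,ccbm,unit,a,b,skew,min,max,primary,secondary,P/S
        cfg.write("1,IA,A,,A,1.0,0.0,0.0,-32768,32767,1.0,1.0,P\n")
        cfg.write(f"{freq}\n1\n{fs},{len(samples)}\n")
        cfg.write("01/01/2011,00:00:00.000000\n")   # first-sample time
        cfg.write("01/01/2011,00:00:00.000000\n")   # trigger time
        cfg.write("ASCII\n")
    with open(basename + ".dat", "w") as dat:
        for n, v in enumerate(samples, start=1):
            t_us = int((n - 1) / fs * 1e6)          # sample timestamp in µs
            dat.write(f"{n},{t_us},{v:.3f}\n")

# roughly one cycle of a 60 Hz, 100 A peak fault current at 4 kHz sampling
wave = [100.0 * math.sin(2 * math.pi * 60 * n / 4000.0) for n in range(67)]
write_comtrade("/tmp/fault_case", wave)
```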

Relevance: 30.00%

Publisher:

Abstract:

The aim of this work was to investigate the fault distribution and fault kinematics associated with the uplift of the rift shoulders of the Rwenzori Mountains. The Rwenzori Mountains are located in the NNE-SSW to N-S trending Albertine Rift, the northernmost segment of the western branch of the East African Rift System. The Albertine Rift consists of basins of different elevation containing Lake Albert, Lake Edward, Lake George and Lake Kivu. The Rwenzori horst separates the basins of Lake Albert and Lake Edward. It extends 120 km in the N-S direction and 40-50 km in the E-W direction; the highest point lies at 5111 m above sea level. This study examines a section of the rift between about 1°N and 0°30'S latitude and between 29°30' and 30°30' east longitude, and the field work also concentrated on this area.

The main purpose of the study was to test the following hypothesis: 'If substantial changes in fault kinematics indeed occurred over time, then the strong uplift of the rift flanks in the Rwenzori area cannot be explained simply by movement along the main rift faults. Rather, it is the result of the interplay of several tectonic processes that influence the stress field and thereby cause changes in kinematics.' The study therefore focused primarily on fault analysis.

Knowledge of regional changes in extension direction is crucial for understanding complex rift systems such as the East African Rift. The core of the investigation therefore consisted of mapping faults and examining fault kinematics. The acquisition of structural data concentrated on the Ugandan side of the rift, and palaeostresses were reconstructed from fault-slip data by stress inversion. The different orientations of brittle structures in the field, the geometric analysis of the geological structures, and the results from microstructures in thin section (Chapter 4) point to different stress fields, suggesting possible changes in the extension direction. The stress-inversion results indicate normal, reverse and strike-slip faulting as well as oblique thrusting (Chapter 5). Two different extension directions emerge from the orientation of the normal faults: essentially NW-SE extension in almost all areas, and NNE-SSW extension in the eastern central region. The analysis of strike-slip faults yielded three distinct stress states. First, NNW-SSE to N-S compression combined with ENE-WSW to E-W extension was identified for the northern and central Rwenzoris. A second stress state, with WNW-ESE compression and NNE-SSW extension, affected the central Rwenzoris. A third stress state with NNW-SSE extension affected the eastern central part of the Rwenzoris. Oblique thrusts are characterized by inclined axes indicating N-S to NNW-SSE compression and occur exclusively in the eastern central section. Thrusts, which occur mainly in the central and eastern Rwenzoris, indicate NE-SW oriented σ2 axes and NW-SE extension.

Three distinct stress regimes could be identified: the collision-related formation of a thrust system was followed by intra-cratonic compression and finally by extension-controlled rifting. The transition between the latter two stress states occurred gradually and probably produced locally confined transpression and transtension. At present, the fault kinematics of the region is governed by a tensile stress regime oriented NW-SE to N-S.

Local stress variations are caused mainly by interference of the regional stress field with major local faults. Other factors that can lead to local changes in the stress field are differential uplift rates, block rotation, and the interaction of rift segments. To determine the influence of pre-existing structures and other conditions on the uplift of the Rwenzoris, the rifting process was reconstructed with an analogue sandbox model (Chapter 6). Since the Moho discontinuity in the study area lies at a depth of 25 km, while active faults can be observed only down to a depth of about 20 km (Koehn et al. 2008), only the upper 25 km were reproduced in the model. Both the order in which rift segments form and the patterns that develop during the nucleation and growth of these segments were examined and compared with field observations. The main focus was placed on the development of the two sub-segments hosting Lake Albert and Lake Edward/Lake George, respectively, and on the Rwenzori Mountains between them. The aim was to find out how the southward-propagating Lake Albert sub-segment interacts with the sinistrally offset, northward-propagating Lake Edward/Lake George sub-segment. Of particular interest was how the structures inside and outside the Rwenzoris were affected by the interaction of these rift segments.

Three experimental series with different boundary conditions were compared. Depending on the dominant deformation type in the transfer zone, the series were characterized as 'shear-dominated', 'extension-dominated' and 'rotation-dominated'. The three-dimensional structural evolution of the rift segments was observed by combining top views of the model with cross-sections. Of the three series, the 'rotation-dominated' one developed a rhomb-shaped block in the transfer zone between the two rift segments, which rotated clockwise by 5-20°. This angle is in the range of the inferred rotation angle of the Rwenzori block (5°). In summary, the sandbox experiments examine the influence of pre-existing structures and of the overlap or intersection of two interacting rift segments on the evolution of the rift system. They also address the question of how block formation and block rotation affect the local stress field.

Relevance: 30.00%

Publisher:

Abstract:

Modern control systems are becoming more and more complex and control algorithms more and more sophisticated. Consequently, Fault Detection and Diagnosis (FDD) and Fault Tolerant Control (FTC) have gained central importance over the past decades, due to increasing requirements on availability, cost efficiency, reliability and operating safety. This thesis deals with the FDD and FTC problems in a spacecraft Attitude Determination and Control System (ADCS). Firstly, detailed nonlinear models of the spacecraft attitude dynamics and kinematics are described, along with dynamic models of the actuators and the main external disturbance sources. The considered ADCS is composed of an array of four redundant reaction wheels; a set of sensors provides satellite angular velocity, attitude and flywheel spin-rate information. Then, general overviews of the Fault Detection and Isolation (FDI), Fault Estimation (FE) and Fault Tolerant Control (FTC) problems are presented, and the design and implementation of a novel diagnosis system is described. The system consists of an FDI module composed of properly organized model-based residual filters, exploiting the available input and output information for the detection and localization of an occurred fault. A proper fault-mapping procedure and the nonlinear geometric approach are exploited to design residual filters explicitly decoupled from the external aerodynamic disturbance and sensitive to specific sets of faults. The subsequent use of suitable adaptive FE algorithms, based on radial basis function neural networks, makes it possible to obtain accurate fault estimates. Finally, this estimation is actively exploited in an FTC scheme to achieve suitable fault accommodation and guarantee the desired control performance. A standard sliding mode controller is implemented for attitude stabilization and control. Several simulation results are given to highlight the performance of the overall designed system in the presence of different types of faults affecting the ADCS actuators and sensors.
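The core residual idea can be illustrated far more simply than the thesis design: compare a fault-free model prediction with the "measured" response and raise an alarm when the residual exceeds a threshold. The single-wheel model, inertia, torque and fault values below are purely illustrative, not the thesis's ADCS model.

```python
# minimal model-based residual for one reaction wheel, with an additive
# actuator fault (lost torque) injected mid-run; all values are illustrative
J = 0.01             # wheel inertia, kg*m^2
dt = 0.01            # integration step, s
tau_cmd = 0.002      # commanded torque, N*m
fault_t = 300        # fault onset (sample index)
fault_bias = -0.001  # torque lost to the fault, N*m

w_true = w_model = 0.0
residuals = []
for k in range(600):
    tau_act = tau_cmd + (fault_bias if k >= fault_t else 0.0)
    w_true += tau_act / J * dt       # "measured" wheel speed
    w_model += tau_cmd / J * dt      # fault-free model prediction
    residuals.append(w_true - w_model)

threshold = 0.05  # rad/s; noise-free pre-fault residual is exactly zero here
alarm = next((k for k, r in enumerate(residuals) if abs(r) > threshold), None)
```

A realistic design would add sensor noise, disturbance decoupling (the thesis's nonlinear geometric approach), and adaptive fault estimation on top of this skeleton.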

Relevance: 30.00%

Publisher:

Abstract:

In the past few decades, integrated circuits have become a major part of everyday life. Every circuit that is created needs to be tested for faults so that faulty circuits are not sent to end-users. The creation of these tests is time-consuming, costly and difficult to perform on larger circuits. This research presents a novel method for fault detection and test pattern reduction in integrated circuitry under test. By leveraging the FPGA's reconfigurability and parallel processing capabilities, a speed-up in fault detection can be achieved over previous computer simulation techniques. This work presents the following contributions to the field of stuck-at fault detection. First, a new method for inserting faults into a circuit netlist: given any circuit netlist, our tool can insert multiplexers at the correct internal nodes to aid fault emulation on reconfigurable hardware. Second, a parallel method of fault emulation: the benefit of the FPGA is not only its ability to implement any circuit, but also its ability to process data in parallel, which this research exploits to create a more efficient emulation method that implements numerous copies of the same circuit in the FPGA. Third, a new method for selecting the most efficient test patterns: most methods for determining the minimum number of inputs that cover the most faults require sophisticated software programs that use heuristics, whereas by utilizing hardware this research is able to process data faster and use a simpler method for minimizing inputs.
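The multiplexer-insertion idea can be shown in software on a toy two-gate circuit: a 2:1 mux at an internal node either passes the normal value or forces a stuck-at value, and a test pattern "detects" the fault when faulty and fault-free outputs differ. The circuit below is invented for illustration, not from the paper.

```python
# software sketch of mux-based stuck-at fault emulation on a toy circuit
def circuit(a, b, c, stuck=None):
    """out = (a AND b) OR c; `stuck` forces internal node n1 = a AND b."""
    n1 = a & b
    if stuck is not None:   # emulation mux: select the fault value instead
        n1 = stuck
    return n1 | c

def detects(pattern, node_value):
    """A pattern detects the fault if the outputs of the fault-free and the
    faulty circuit differ."""
    a, b, c = pattern
    return circuit(a, b, c) != circuit(a, b, c, stuck=node_value)

# exhaustively find the input patterns that detect stuck-at-0 on n1
tests = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)
         if detects((a, b, c), 0)]
```

On hardware, the same structure is replicated many times in the FPGA fabric so that many faults (or patterns) are emulated in parallel instead of sequentially as here.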

Relevance: 30.00%

Publisher:

Abstract:

Transformer protection is one of the most challenging applications within the power system protective relay field. Transformers with a capacity rating exceeding 10 MVA are usually protected using differential current relays. Transformers are an aging and vulnerable bottleneck in the present power grid; therefore, quick fault detection and corresponding transformer de-energization is the key element in minimizing transformer damage. Present differential current relays are based on digital signal processing (DSP). They combine DSP phasor estimation and protective-logic-based decision making. The limitations of existing DSP-based differential current relays must be identified to determine the best protection options for sensitive and quick fault detection. The development, implementation, and evaluation of a DSP differential current relay is detailed. The overall goal is to make fault detection faster without compromising secure and safe transformer operation. A detailed background on the DSP differential current relay is provided. Then different DSP phasor estimation filters are implemented and evaluated based on their ability to extract desired frequency components from the measured current signal quickly and accurately. The main focus of the phasor estimation evaluation is to identify the difference between using non-recursive and recursive filtering methods. Then the protective logic of the DSP differential current relay is implemented and required settings made in accordance with transformer application. Finally, the DSP differential current relay will be evaluated using available transformer models within the ATP simulation environment. Recursive filtering methods were found to have significant advantage over non-recursive filtering methods when evaluated individually and when applied in the DSP differential relay. 
Recursive filtering methods can be up to 50% faster than non-recursive methods, but can cause false trips due to overshoot if speed is the only objective. Relay sensitivity, however, is independent of the filtering method and depends on the settings of the relay's differential characteristic (pickup threshold and percent slope).
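The non-recursive vs recursive distinction can be made concrete with the full-cycle DFT phasor estimator: the non-recursive form recomputes the sum over the last N samples, while the recursive form updates the previous phasor with one correction term per new sample. This is a generic textbook estimator, not necessarily the exact filters evaluated in the thesis.

```python
import cmath, math

N = 16  # samples per fundamental cycle

def full_cycle_dft(x, k_end):
    """Non-recursive: recompute the fundamental phasor from the last N samples."""
    return (2 / N) * sum(
        x[n] * cmath.exp(-2j * math.pi * n / N)
        for n in range(k_end - N + 1, k_end + 1))

# test signal: fundamental-frequency cosine with peak value 10
x = [10 * math.cos(2 * math.pi * n / N) for n in range(5 * N)]

# recursive form: one multiply-add per new sample instead of N
phasor = full_cycle_dft(x, N - 1)
for k in range(N, len(x)):
    phasor += (2 / N) * (x[k] - x[k - N]) * cmath.exp(-2j * math.pi * k / N)

mag = abs(phasor)  # should equal the peak value of the cosine
```

The recursion is exact because the rotating factor is periodic over N samples, so dropping x[k-N] and adding x[k] uses the same complex coefficient; numerically, the recursive form accumulates rounding error and is typically refreshed periodically in real relays.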

Relevance: 30.00%

Publisher:

Abstract:

A good and early fault detection and isolation system, along with efficient alarm management and fine sensor validation systems, is very important in today's complex process plants, especially in terms of safety enhancement and cost reduction. This paper presents a methodology for fault characterization. It is a self-learning approach developed in two phases. In an initial learning phase, simulations of the process units, both without faults and with different faults, let the system automatically detect the key variables that characterize the faults. These key variables are then used in a second, on-line phase, in which they are monitored in order to diagnose possible faults. Using this scheme, faults are diagnosed and isolated at an early stage, before the fault turns into a failure.
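A toy version of the learning phase: simulate the process once without faults and once per fault, then keep as "key variables" those whose relative deviation from the fault-free baseline exceeds a threshold. The variable names, values and fault labels below are invented for illustration.

```python
# learning phase sketch: extract per-fault key-variable signatures from
# simulated steady-state values (all numbers are illustrative)
normal = {"T": 350.0, "P": 2.0, "F": 10.0, "L": 0.5}   # fault-free baseline
fault_runs = {
    "valve_stuck":   {"T": 351.0, "P": 2.1, "F": 4.0,  "L": 0.9},
    "fouled_heater": {"T": 260.0, "P": 2.0, "F": 10.0, "L": 0.5},
}

def key_variables(runs, baseline, rel_threshold=0.2):
    """Keep the variables whose relative deviation exceeds the threshold."""
    sig = {}
    for fault, vals in runs.items():
        sig[fault] = sorted(
            v for v in vals
            if abs(vals[v] - baseline[v]) / abs(baseline[v]) > rel_threshold)
    return sig

signatures = key_variables(fault_runs, normal)
```

In the on-line phase, only these signature variables would be monitored, and a match against a stored signature suggests the corresponding fault.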

Relevance: 30.00%

Publisher:

Abstract:

In this work, a mathematical unifying framework for designing new fault detection schemes in nonlinear stochastic continuous-time dynamical systems is developed. These schemes are based on a stochastic process, called the residual, which reflects the system behavior and whose changes are to be detected. A quickest-detection scheme for the residual is proposed, based on the computed likelihood ratios for time-varying statistical changes in the Ornstein–Uhlenbeck process. Several expressions are provided, depending on a priori knowledge of the fault, which can be employed in a proposed CUSUM-type approximated scheme. This general setting gathers different existing fault detection schemes within a unifying framework, and allows for the definition of new ones. A comparative simulation example illustrates the behavior of the proposed schemes.
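A simplified sketch of the idea (not the paper's exact likelihood-ratio expressions): discretize an Ornstein–Uhlenbeck residual whose drift target shifts when a fault appears, and run a one-sided CUSUM on the standardized innovations of the discretized process, which are i.i.d. N(0,1) before the fault. All parameter values are illustrative.

```python
import random, math

rng = random.Random(0)
theta, sigma, dt = 1.0, 1.0, 0.25
fault_at, shift = 2000, 2.0          # step where the residual drift shifts

r, cusum, alarm = 0.0, 0.0, None
k_ref, h = 0.5, 10.0                 # CUSUM reference value and threshold
for n in range(4000):
    mu = shift if n >= fault_at else 0.0
    w = rng.gauss(0.0, 1.0)
    # Euler-Maruyama step of dr = theta*(mu - r)*dt + sigma*dW
    r_next = r + theta * (mu - r) * dt + sigma * math.sqrt(dt) * w
    # innovation under the no-fault model; i.i.d. N(0,1) before the fault
    z = (r_next - r - theta * (0.0 - r) * dt) / (sigma * math.sqrt(dt))
    cusum = max(0.0, cusum + z - k_ref)      # one-sided CUSUM recursion
    if alarm is None and cusum > h:
        alarm = n
    r = r_next
```

The CUSUM statistic drifts downward before the fault and upward afterwards; the paper's schemes refine this by using the time-varying likelihood ratios of the OU process itself rather than this i.i.d. approximation.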

Relevance: 30.00%

Publisher:

Abstract:

In this paper a new method for fault isolation in a class of continuous-time stochastic dynamical systems is proposed. The method is framed in the context of model-based analytical redundancy, consisting of the generation of a residual signal by means of a diagnostic observer for its subsequent analysis. Once a fault has been detected, and assuming some basic a priori knowledge about the set of possible failures in the plant, the isolation task is formulated as a type of on-line statistical classification problem. The proposed isolation scheme employs different hypothesis tests in parallel on a statistic of the residual signal, one test for each possible fault. The isolation method is characterized, for the one-dimensional case, by a sufficient isolability condition as well as an upper bound on the probability of missed isolation. Simulation examples illustrate the applicability of the proposed scheme.
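A toy sketch of the classification step: after detection, compare the sample mean of the residual against each fault's expected signature with parallel z-scores and pick the best-matching fault. The fault labels, signature means and noise level are invented for illustration; the paper's scheme runs proper hypothesis tests with thresholds rather than a bare argmin.

```python
import math

# hypothetical residual signatures (expected residual mean under each fault)
signatures = {"sensor_bias": 1.5, "actuator_loss": -2.0, "leak": 0.6}
sigma, n = 0.5, 50        # residual noise std and averaging-window length

def isolate(residual_mean):
    """Return the fault whose signature best matches the observed mean,
    scored by the absolute z-statistic of the mismatch."""
    scores = {f: abs(residual_mean - m) / (sigma / math.sqrt(n))
              for f, m in signatures.items()}
    return min(scores, key=scores.get)

decision = isolate(1.42)   # observed residual mean after detection
```

A missed isolation corresponds to two signatures lying closer together than the residual noise allows to distinguish, which is what the paper's isolability condition and error bound quantify.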

Relevance: 30.00%

Publisher:

Abstract:

SRAM-based Field-Programmable Gate Arrays (FPGAs) are built on a Static RAM (SRAM) configuration memory. They present a number of features that make them very convenient for building complex embedded systems. First of all, they benefit from low Non-Recurrent Engineering (NRE) costs, as the logic and routing elements are pre-implemented (the user design defines their connections). Also, as opposed to other FPGA technologies, they can be reconfigured (even in the field) an unlimited number of times. Moreover, Xilinx SRAM-based FPGAs feature Dynamic Partial Reconfiguration (DPR), which makes it possible to partially reconfigure the FPGA without disrupting the application. Finally, they feature a high logic density, a high processing capability and a rich set of hard macros. However, one limitation of this technology is its susceptibility to ionizing radiation, which increases with technology scaling (smaller geometries, lower voltages and higher frequencies). This is a first-order concern for applications in harsh radiation environments that require high dependability. Ionizing radiation leads to long-term degradation as well as instantaneous faults, which can in turn be reversible or produce irreversible damage. In SRAM-based FPGAs, radiation-induced faults can appear at two architectural layers, which are physically overlaid on the silicon die. The Application Layer (or A-Layer) contains the user-defined hardware, and the Configuration Layer (or C-Layer) contains the (volatile) configuration memory and its support circuitry. Faults at either layer can imply a system failure, which may be more or less tolerable depending on the dependability requirements. In the general case, such faults must be managed in some way. This thesis is about managing SRAM-based FPGA faults at system level, in the context of autonomous and dependable embedded systems operating in a radiative environment. The focus is mainly on space applications, but the same principles can be applied to ground applications; the main differences between them are the radiation level and the possibility of maintenance. The different techniques for A-Layer and C-Layer fault management are classified and their implications for system dependability are assessed. Several architectures are proposed, both for single-layer and dual-layer Fault Managers. For the latter, a novel, flexible and versatile architecture is proposed: it manages both layers concurrently in a coordinated way, and allows balancing redundancy level and dependability. For the purpose of validating dynamic fault management techniques, two different solutions are developed. The first one is a simulation framework for C-Layer Fault Managers, based on SystemC as the modeling language and an event-driven simulator. This framework and its associated methodology allow exploring the Fault Manager design space, decoupling its design from the development of the target FPGA. The framework includes models for both the FPGA C-Layer and the Fault Manager, which can interact at different abstraction levels (at configuration-frame level and at JTAG or SelectMAP physical level). The framework is configurable, scalable and versatile, and includes fault injection capabilities. Simulation results for some scenarios are presented and discussed. The second one is a validation platform for Xilinx Virtex FPGA Fault Managers. The platform hosts three Xilinx Virtex-4 FX12 FPGA Modules and two general-purpose 32-bit Microcontroller Unit (MCU) Modules. The MCU Modules allow prototyping software-based C-Layer and A-Layer Fault Managers. Each FPGA Module implements one A-Layer Ethernet link (through an Ethernet switch) with one of the MCU Modules, and one C-Layer JTAG link with the other. In addition, both MCU Modules exchange commands and data over an internal UART link. As in the simulation framework, fault injection capabilities are implemented. Test results for some scenarios are also presented and discussed. In summary, this thesis covers the whole process from describing the problem of radiation-induced faults in SRAM-based FPGAs, through identifying and classifying fault management techniques and proposing Fault Manager architectures, to finally validating them by simulation and test. The proposed future work is mainly related to the implementation of radiation-hardened System Fault Managers.
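The C-Layer fault management loop can be illustrated with a toy model: the configuration memory is an array of frames, single-event upsets flip bits in it, and a readback scrubber compares frames with a golden copy and rewrites the damaged ones. Frame count, frame size and upset count are invented; this is not the thesis's SystemC model.

```python
import random

# toy C-Layer model: golden bitstream vs live configuration memory
FRAME_WORDS = 4
golden = [[0xA5A5A5A5] * FRAME_WORDS for _ in range(8)]
config = [frame[:] for frame in golden]

# inject radiation-induced single-event upsets (random frame/word/bit)
rng = random.Random(42)
upsets = [(rng.randrange(8), rng.randrange(FRAME_WORDS), rng.randrange(32))
          for _ in range(5)]
for f, w, b in upsets:
    config[f][w] ^= 1 << b

def scrub(cfg, gold):
    """Readback scrubbing: compare each frame with the golden copy and
    rewrite the ones that differ; return the number of repaired frames."""
    repaired = 0
    for i, frame in enumerate(cfg):
        if frame != gold[i]:
            cfg[i] = gold[i][:]
            repaired += 1
    return repaired

n_repaired = scrub(config, golden)
```

Real scrubbers typically use per-frame ECC or CRC instead of a full golden copy, and must coordinate with the A-Layer (e.g. module reset) when an upset has already propagated into user state.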

Relevance: 30.00%

Publisher:

Abstract:

We examine the event statistics obtained from two differing simplified models for earthquake faults. The first model is a reproduction of the Block-Slider model of Carlson et al. (1991), a model often employed in seismicity studies. The second is an elastodynamic fault model based upon the Lattice Solid Model (LSM) of Mora and Place (1994). We performed simulations in which the fault length was varied in each model and generated synthetic catalogs of event sizes and times. From these catalogs, we constructed interval event-size distributions and inter-event time distributions. The larger, localised events in the Block-Slider model displayed the same scaling behaviour as events in the LSM; however, the distribution of inter-event times was markedly different. The analysis of both event-size and inter-event time statistics is an effective method for comparative studies of differing simplified models for earthquake faults.
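The catalog-generation step can be mimicked with a minimal cellular fault model in the spirit of the block-slider class (this is neither paper's actual code): cells load uniformly, topple when stress exceeds a threshold, and redistribute part of the dropped stress to their neighbours; each cascade is one "event".

```python
import random

rng = random.Random(3)
N, threshold = 50, 1.0
stress = [rng.uniform(0, threshold) for _ in range(N)]
catalog = []                              # synthetic catalog: (time, size)

for t in range(2000):
    for i in range(N):
        stress[i] += 0.01                 # uniform tectonic loading
    size = 0
    unstable = [i for i in range(N) if stress[i] >= threshold]
    while unstable:                       # avalanche within one time step
        i = unstable.pop()
        if stress[i] < threshold:
            continue
        drop = stress[i]
        stress[i] = 0.0
        size += 1
        for j in (i - 1, i + 1):          # non-conservative redistribution
            if 0 <= j < N:
                stress[j] += 0.4 * drop
                if stress[j] >= threshold:
                    unstable.append(j)
    if size:
        catalog.append((t, size))

inter_event = [t2 - t1 for (t1, _), (t2, _) in zip(catalog, catalog[1:])]
```

From `catalog` one can build the event-size and inter-event time distributions compared in the paper; the 20% dissipation per topple guarantees each avalanche terminates.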

Relevance: 30.00%

Publisher:

Abstract:

A systematic goal-driven top-down modelling methodology is proposed that is capable of developing a multiscale model of a process system for given diagnostic purposes. The diagnostic goal-set and the symptoms are extracted from HAZOP analysis results, where the possible actions to be performed in a fault situation are also described. The multiscale dynamic model is realized in the form of a hierarchical coloured Petri net by using a novel substitution place-transition pair. Multiscale simulation that focuses automatically on the fault areas is used to predict the effect of the proposed preventive actions. The notions and procedures are illustrated on some simple case studies including a heat exchanger network and a more complex wet granulation process.
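To make the modelling substrate concrete, here is a minimal place/transition Petri-net firing engine with a tiny invented fault-propagation net; the paper's model is a hierarchical coloured Petri net with substitution place-transition pairs, which this plain sketch does not attempt to reproduce.

```python
# minimal place/transition Petri-net engine (illustrative sketch only)
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)      # place -> token count
        self.transitions = {}             # name -> (input arcs, output arcs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= w for p, w in inputs.items())

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        if not self.enabled(name):
            raise ValueError(f"{name} is not enabled")
        for p, w in inputs.items():       # consume input tokens
            self.marking[p] -= w
        for p, w in outputs.items():      # produce output tokens
            self.marking[p] = self.marking.get(p, 0) + w

# tiny fault-propagation net: a leak moves the unit from Normal to LowLevel
net = PetriNet({"Normal": 1, "Leak": 1, "LowLevel": 0})
net.add_transition("leak_effect", {"Normal": 1, "Leak": 1}, {"LowLevel": 1})
net.fire("leak_effect")
```

In the paper's multiscale setting, a substitution place-transition pair would let the `leak_effect` transition expand into a finer-grained subnet only when simulation focuses on that fault area.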

Relevance: 30.00%

Publisher:

Abstract:

Stochastic simulation is a recognised tool for quantifying the spatial distribution of geological uncertainty and risk in earth science and engineering. Metals mining is an area where simulation technologies are extensively used; however, applications in the coal mining industry have been limited. This is particularly due to the lack of a systematic demonstration illustrating the capabilities these techniques have in problem solving in coal mining. This paper presents two broad and technically distinct areas of application in coal mining. The first deals with the use of simulation in the quantification of uncertainty in coal seam attributes and risk assessment to assist coal resource classification, and drillhole spacing optimisation to meet pre-specified risk levels at a required confidence. The second application presents the use of stochastic simulation in the quantification of fault risk, an area of particular interest to underground coal mining, and documents the performance of the approach. The examples presented demonstrate the advantages and positive contribution stochastic simulation approaches bring to the coal mining industry.
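The risk-quantification step can be illustrated in miniature: given simulated realizations of a seam attribute at a block, the risk of falling below a mining cutoff is just the proportion of realizations below it. Drawing realizations from a Gaussian is a stand-in here; real workflows use geostatistical simulation (e.g. sequential Gaussian simulation) conditioned on drillhole data.

```python
import random

# illustrative realizations of seam thickness (metres) at one block
rng = random.Random(7)
realizations = [max(0.0, rng.gauss(2.5, 0.4)) for _ in range(1000)]

cutoff = 2.0                                   # minimum mineable thickness
risk = sum(th < cutoff for th in realizations) / len(realizations)

ordered = sorted(realizations)
p10, p90 = ordered[100], ordered[900]          # 80% uncertainty interval
```

Drillhole-spacing optimisation then amounts to repeating this per candidate spacing and choosing the widest spacing whose risk stays below the pre-specified level at the required confidence.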

Relevance: 30.00%

Publisher:

Abstract:

Computer programs have been developed to enable the coordination of fuses and overcurrent relays for radial power systems under estimated fault current conditions. The grading curves for these protection devices can be produced on a graphics terminal and a hard copy can be obtained. Additional programs have also been developed which could be used to assess the validity of relay settings (obtained under the above conditions) when the transient effect is included. Modelling of a current transformer is included because transformer saturation may occur if the fault current is high, and hence the secondary current is distorted. Experiments were carried out to confirm that distorted currents will affect the relay operating time, and it is shown that if the relay current contains only a small percentage of harmonic distortion, the relay operating time is increased. System equations were arranged to enable the model to predict fault currents with a generator transformer incorporated in the system, and also to include the effect of circuit breaker opening, arcing resistance, and earthing resistance. A fictitious field winding was included to enable more accurate prediction of fault currents when the system is operating at both lagging and leading power factors prior to the occurrence of the fault.
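The grading-curve computation at the heart of such coordination programs can be sketched with the IEC standard-inverse overcurrent characteristic; the pickup currents, time-multiplier settings and fault current below are illustrative, not taken from the thesis.

```python
# coordination check with the IEC standard-inverse characteristic
def op_time(i_fault, i_pickup, tms):
    """IEC standard inverse: t = TMS * 0.14 / ((I/Is)**0.02 - 1), seconds."""
    return tms * 0.14 / ((i_fault / i_pickup) ** 0.02 - 1)

i_fault = 2000.0                                      # amperes at the fault
t_down = op_time(i_fault, i_pickup=400.0, tms=0.1)    # downstream relay
t_up = op_time(i_fault, i_pickup=600.0, tms=0.3)      # upstream backup

margin = t_up - t_down   # grading margin; typically >= 0.3-0.4 s is required
```

A grading program evaluates this margin over the whole range of estimated fault currents, and, as the thesis shows, should also account for CT saturation and harmonic distortion, which lengthen the actual operating time beyond these curve values.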

Relevance: 30.00%

Publisher:

Abstract:

This paper proposes a new converter protection method, primarily based on a series dynamic resistor (SDR) that avoids the doubly-fed induction generator (DFIG) control being disabled by crowbar protection during fault conditions. A combined converter protection scheme based on the proposed SDR and conventional crowbar is analyzed and discussed. The main protection advantages are due to the series topology when compared with crowbar and dc-chopper protection. Various fault overcurrent conditions (both symmetrical and asymmetrical) are analyzed and used to design the protection in detail, including the switching strategy and coordination with crowbar, and resistance value calculations. PSCAD/EMTDC simulation results show that the proposed method is advantageous for fault overcurrent protection, especially for asymmetrical faults, in which the traditional crowbar protection may malfunction.
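The series-dynamic-resistor principle can be caricatured with a quasi-static sketch: when the current exceeds a trip threshold the SDR is inserted in series, limiting the current without blocking the converter, and it is bypassed again once the current falls below a reset threshold. All voltages, resistances and thresholds are invented; this is not a DFIG model.

```python
# toy hysteresis logic for series-dynamic-resistor (SDR) insertion
R_base, R_sdr = 1.0, 2.0              # ohms: normal path and SDR
I_trip, I_reset = 1000.0, 600.0       # insert / bypass thresholds (A)

def rotor_current(v, r):
    return v / r                      # crude quasi-static approximation

sdr_in = False
trace = []
# driving voltage sequence: normal, fault (raised EMF), fault, cleared, normal
for v in (690.0, 2070.0, 2070.0, 690.0, 690.0):
    i = rotor_current(v, R_base + (R_sdr if sdr_in else 0.0))
    if not sdr_in and i > I_trip:
        sdr_in = True                 # insert SDR; current is limited at once
        i = rotor_current(v, R_base + R_sdr)
    elif sdr_in and i < I_reset:
        sdr_in = False                # bypass takes effect on the next step
    trace.append(round(i, 1))
```

The hysteresis band (trip above, reset below) is what keeps the SDR from chattering during the fault; the paper's design adds coordination with the crowbar and proper resistance sizing from the analyzed overcurrent conditions.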