Abstract:
The generator differential protection is one of the most important electrical protections of synchronous generator stator windings. Its operating principle is based on comparing the input and output currents of each phase winding. Unwanted trip commands are usually caused by CT saturation, incorrect CT selection, or the use of CTs from different manufacturers. In generators grounded through a high impedance, only phase-to-phase or three-phase faults can be detected by the differential protection; such faults cause differential current to flow in at least two phases of the winding. Several cases of unwanted trip commands caused by the appearance of differential current in only one phase of the generator have been reported. In this paper, a multi-phase criterion is proposed for the generator differential protection algorithm when applied to high-impedance grounded generators.
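As a rough illustration of such a criterion (a minimal Python sketch, not the algorithm proposed in the paper; the percentage-restraint characteristic and the pickup and slope settings are assumptions), the element below only issues a trip when the differential current exceeds its threshold in at least two phases:

```python
import numpy as np

def differential_trip(i_in, i_out, pickup=0.2, slope=0.25, min_phases=2):
    """Percentage-restrained differential check with a multi-phase criterion.

    i_in, i_out: per-phase current phasors (per unit) measured at the neutral
    and terminal sides of the stator winding, same polarity convention.
    pickup, slope: assumed relay settings.
    min_phases: require pickup in at least this many phases before tripping,
    since in a high-impedance grounded machine an internal fault produces
    differential current in two or more phases.
    """
    i_diff = np.abs(i_in - i_out)                 # operating quantity per phase
    i_rest = (np.abs(i_in) + np.abs(i_out)) / 2   # restraint quantity per phase
    picked_up = i_diff > np.maximum(pickup, slope * i_rest)
    return np.count_nonzero(picked_up) >= min_phases

# Differential current in one phase only (e.g. a CT problem): no trip command.
a = np.exp(2j * np.pi / 3)
i_in = np.array([1.0 + 0j, a**2, a])
i_out = i_in.copy()
i_out[0] *= 0.7                                   # 30 % mismatch on phase A only
print(differential_trip(i_in, i_out))             # False with min_phases=2
```

With min_phases=1 the same case would trip, which is exactly the kind of unwanted single-phase operation the multi-phase criterion is intended to suppress.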
Abstract:
The advent of the Internet of Things (IoT) enables a tremendous number of applications, such as forest monitoring, disaster management, home automation, factory automation, smart cities, etc. However, various kinds of unexpected disturbances may cause node failure in the IoT, for example battery depletion, software/hardware malfunctions and malicious attacks. The IoT can therefore be considered prone to failure. The ability of the network to recover from unexpected internal and external failures is known as the "resilience" of the network. Resilience usually serves as an important non-functional requirement when designing the IoT, and can be further broken down into "self-*" properties, such as self-adaptation, self-healing, self-configuration, self-optimization, etc. One of the consequences that node failure brings to the IoT is that some nodes may be disconnected from others, such that they are not capable of providing continuous services to other nodes, networks, and applications. In this sense, the main objective of this dissertation is the IoT connectivity problem. A network is regarded as connected if any pair of different nodes can communicate with each other either directly or via a limited number of intermediate nodes. More specifically, this thesis focuses on the development of models for the analysis and management of resilience, implemented through Wireless Sensor Networks (WSNs), which is a challenging task. On the one hand, unlike other conventional network devices, nodes in the IoT are more likely to be disconnected from each other due to their deployment in a hostile or isolated environment. On the other hand, nodes are resource-constrained in terms of limited processing capability, storage and battery capacity, which requires the design of resilience management for the IoT to be lightweight, distributed and energy-efficient. In this context, the thesis presents self-adaptive techniques for the IoT, with the aim of making the IoT resilient against node failures from the network topology control point of view. Fuzzy-logic and proportional-integral-derivative (PID) control techniques are leveraged to improve the network connectivity of the IoT in response to node failures, while taking into consideration that energy consumption must be preserved as much as possible. The control algorithm itself is designed to be distributed, because centralized approaches are usually not feasible in large-scale IoT deployments. The thesis involves various aspects concerning network connectivity, including: creation and analysis of mathematical models describing the network, the proposal of self-adaptive control systems in response to node failures, control system parameter optimization, implementation following a software engineering approach, and evaluation in a real application. This thesis also justifies the relation between the "node degree" (the number of neighbors of a node) and network connectivity through mathematical analysis, and proves the effectiveness of various types of controllers that can adjust the transmission power of the IoT nodes in response to node failures.
The controllers also take into consideration the energy consumption as part of the control goals. An evaluation is performed and a comparison is made with other representative algorithms. The simulation results show that the proposals in this thesis can tolerate more random node failures and save more energy when compared with those representative algorithms. Additionally, the simulations demonstrate that the use of bio-inspired algorithms allows the parameters of the controller to be optimized. With respect to the implementation in a real system, the programming model called OSGi (Open Service Gateway Initiative) is integrated with the proposals in order to create a self-adaptive middleware that improves resilience management, especially the reconfiguration of software components at runtime when failures occur. The outcomes of this thesis contribute to theoretical research and practical applications of resilient topology control for large and distributed networks. The presented controller designs and optimization algorithms can be viewed as novel trials of the control and optimization techniques for the coming era of the IoT. The contributions of this thesis can be summarized as follows: (1) Mathematically, the fault-tolerant probability of a large-scale stochastic network is analyzed. It is studied how the probability of network connectivity depends on the communication range of the nodes, and what is the minimum number of nodes that must be added to a disconnected network to re-connect it. (2) A fuzzy-logic control system is proposed, which obtains the desired node degree and in turn maintains the network connectivity when it is subject to node failures. Different types of fuzzy-logic controllers are evaluated by simulations, and the results demonstrate the improvement of fault-tolerant capability as compared to other representative algorithms. (3) A simpler but more applicable approach, the two-loop control system, is further investigated, and its control parameters are optimized using heuristic algorithms such as Cross Entropy (CE), Particle Swarm Optimization (PSO), and Differential Evolution (DE). (4) Most of the designs are evaluated by means of simulations, but part of the proposals are implemented and tested in a real-world application by combining self-adaptive software techniques and the control algorithms presented in this thesis.
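As an illustration of the topology control idea (a toy Python sketch under assumed parameters; the thesis itself uses fuzzy-logic and PID controllers rather than this plain proportional rule), each node periodically measures its own degree and scales its transmission range toward a target degree:

```python
import math
import random

def neighbors(nodes, ranges, i):
    """Indices of nodes inside node i's current transmission range."""
    xi, yi = nodes[i]
    return [j for j, (xj, yj) in enumerate(nodes)
            if j != i and math.hypot(xi - xj, yi - yj) <= ranges[i]]

def degree_control_step(nodes, ranges, target_degree=6, kp=0.05,
                        r_min=5.0, r_max=50.0):
    """One distributed control step: every node scales its own range in
    proportion to the error between the target and its measured degree."""
    for i in range(len(nodes)):
        error = target_degree - len(neighbors(nodes, ranges, i))
        ranges[i] = min(r_max, max(r_min, ranges[i] * (1 + kp * error)))

# Toy scenario: 100 nodes in a 100 x 100 area, then 20 % random node failures.
random.seed(1)
nodes = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(100)]
ranges = [15.0] * len(nodes)
for _ in range(30):
    degree_control_step(nodes, ranges)
survivors = random.sample(range(len(nodes)), 80)            # random failures
nodes = [nodes[i] for i in survivors]
ranges = [ranges[i] for i in survivors]
for _ in range(30):                                          # survivors re-adapt
    degree_control_step(nodes, ranges)
avg_degree = sum(len(neighbors(nodes, ranges, i)) for i in range(len(nodes))) / len(nodes)
print(f"average node degree after failures: {avg_degree:.1f}")
```

Raising the transmission range costs energy, which is why the thesis treats energy consumption as a control objective alongside the degree target rather than simply maximizing connectivity.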
Abstract:
This research presents the development and implementation of fault location algorithms for power distribution networks with distributed generation units installed along their feeders. The proposed algorithms locate the fault based on voltage and current signals recorded by intelligent electronic devices installed at the ends of the feeder sections, data used to estimate the loads connected to these feeders and their electrical characteristics, and the operating status of the network. In addition, this work presents a study of analytical models of distributed generation and load technologies that could contribute to the performance of the proposed fault location algorithms. The algorithms were validated through computer simulations using network models implemented in ATP, while the algorithms themselves were implemented in MATLAB.
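For intuition on impedance-based location (a minimal single-phase Python sketch with hypothetical feeder data; not the algorithms developed in this work, which additionally use load estimates, DG models and the network operating status), the reactance method estimates the distance from the apparent impedance seen by an IED:

```python
import math

def fault_distance_km(v_phasor, i_phasor, x_ohm_per_km):
    """Reactance-method estimate of the distance to a fault as seen by an IED
    at the sending end of a feeder section. Only the imaginary part of the
    apparent impedance V/I is used, since the (unknown) fault resistance is
    assumed to be purely resistive. Load current and DG infeed are ignored,
    which is exactly what more elaborate algorithms must compensate for."""
    return (v_phasor / i_phasor).imag / x_ohm_per_km

# Hypothetical 13.8 kV feeder: fault 3.2 km away, 0.3 + j0.4 ohm/km, 10 ohm arc.
d_true, z_line, r_fault = 3.2, 0.3 + 0.4j, 10.0
i_meas = (13800 / math.sqrt(3)) / (d_true * z_line + r_fault)   # fault current phasor
v_meas = (d_true * z_line + r_fault) * i_meas                   # voltage at the IED
print(f"estimated distance: {fault_distance_km(v_meas, i_meas, z_line.imag):.2f} km")
```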
Abstract:
The design of fault-tolerant systems is gaining importance in large domains of embedded applications where design constraints are as important as reliability. New software techniques, based on the selective application of redundancy, have shown remarkable fault coverage with reduced costs and overheads. However, the large number of different solutions provided by these techniques, and the costly process needed to assess their reliability, make design space exploration a very difficult and time-consuming task. This paper proposes the integration of a multi-objective optimization tool with a software hardening environment to perform an automatic design space exploration in the search for the best trade-offs between reliability, cost, and performance. The first tool is driven by a genetic algorithm which can simultaneously fulfill many design goals thanks to the use of the NSGA-II multi-objective algorithm. The second is a compiler-based infrastructure that automatically produces selectively protected (hardened) versions of the software and generates accurate overhead reports and fault coverage estimations. The advantages of our proposal are illustrated by means of a complex and detailed case study involving a typical embedded application, the AES (Advanced Encryption Standard).
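To illustrate the kind of trade-off the exploration searches for (a self-contained Python sketch with a made-up per-routine cost model; the actual tool chain drives a compiler-based hardening environment with NSGA-II, whereas this toy enumerates all configurations and filters the non-dominated ones):

```python
from itertools import combinations

# Hypothetical per-routine hardening model: routine -> (fault coverage gain,
# execution-time overhead, code-size overhead), all as fractions.
ROUTINES = {"encrypt_round": (0.40, 0.30, 0.25), "key_expansion": (0.25, 0.10, 0.10),
            "io_handling": (0.05, 0.20, 0.15), "control_flow": (0.20, 0.15, 0.12)}

def evaluate(config):
    """Objectives of one configuration: maximize coverage, minimize overheads."""
    return (sum(ROUTINES[r][0] for r in config),
            sum(ROUTINES[r][1] for r in config),
            sum(ROUTINES[r][2] for r in config))

def dominates(a, b):
    """True if a is no worse than b in every objective and better in at least one."""
    no_worse = a[0] >= b[0] and a[1] <= b[1] and a[2] <= b[2]
    better = a[0] > b[0] or a[1] < b[1] or a[2] < b[2]
    return no_worse and better

names = list(ROUTINES)
all_configs = [frozenset(c) for k in range(len(names) + 1)
               for c in combinations(names, k)]
scored = [(c, evaluate(c)) for c in all_configs]
front = [(c, s) for c, s in scored
         if not any(dominates(s2, s) for _, s2 in scored if s2 != s)]

# With only four routines the space can be enumerated exhaustively; a genetic
# algorithm such as NSGA-II becomes necessary once protection can be selected
# per basic block or per instruction and the space explodes.
for config, (coverage, time_oh, size_oh) in sorted(front, key=lambda x: x[1][0]):
    print(f"{sorted(config)!s:<50} coverage={coverage:.2f} "
          f"time=+{time_oh:.2f} size=+{size_oh:.2f}")
```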
Abstract:
Dilatant faults often form in rocks containing pre-existing joints, but the effects of joints on fault segment linkage and fracture connectivity are not well understood. We present an analogue modeling study using cohesive powder with pre-formed joint sets in the upper layer, varying the angle between the joints and a rigid basement fault. We analyze interpreted map-view photographs at maximum displacement for damage zone width, number of connected joints, number of secondary fractures, degree of segmentation, and area fraction of massively dilatant fractures. Particle imaging velocimetry provides insights into the deformation history of the experiments and illustrates the localization pattern of fault segments. Results show that with increasing angle between joint-set strike and basement-fault strike, the number of secondary fractures and the number of connected joints increase, while the area fraction of massively dilatant fractures shows only a minor increase. Models without pre-existing joints show far lower area fractions of massively dilatant fractures while forming distinctly more secondary fractures.
Abstract:
The Seattle Fault is an active, east-west-trending reverse fault zone that intersects both Seattle and Bellevue, two highly populated cities in Washington. Rupture along strands of the fault poses a serious threat to infrastructure and thousands of people in the region. Precise locations of fault strands are still poorly constrained in Bellevue due to blind thrusting, urban development, and/or erosion. Seismic reflection and aeromagnetic surveys have shed light on structural geometries of the fault zone in bedrock. However, the fault displaces both bedrock and unconsolidated Quaternary deposits, and seismic data are poor indicators of the locations of fault strands within the unconsolidated strata. Fortunately, evidence of past fault strand ruptures may also be recorded indirectly by fluvial processes and should be observable in the subsurface. I analyzed hillslope and river geomorphology using LiDAR data and ArcGIS to locate surface fault traces, and then compared and correlated these findings with subsurface offsets identified using borehole data. Geotechnical borings were used to locate one fault offset and to provide input to a cross section of the fault constructed using Rockworks software. Knickpoints, which may correlate with fault rupture, were found upstream of this newly identified fault offset as well as upstream of a previously known fault segment.
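As a simple illustration of how knickpoints can be flagged along a stream profile (a Python sketch on a hypothetical longitudinal profile; the study itself used LiDAR-derived topography in ArcGIS, and the slope-jump threshold here is an assumption):

```python
import numpy as np

def find_knickpoints(distance_m, elevation_m, slope_jump=0.05):
    """Flag candidate knickpoints where the channel slope between successive
    profile points steepens by more than `slope_jump` (m/m)."""
    slope = -np.diff(elevation_m) / np.diff(distance_m)   # positive downstream
    jump = np.diff(slope)
    return distance_m[1:-1][jump > slope_jump]

# Hypothetical longitudinal profile: gentle upper reach, abrupt steepening
# 1200 m downstream (e.g. a reach upstream of a suspected fault strand).
d = np.arange(0.0, 2000.0, 100.0)
z = np.where(d <= 1200.0, 100.0 - 0.01 * d, 88.0 - 0.08 * (d - 1200.0))
print(find_knickpoints(d, z))    # -> [1200.]
```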
Abstract:
The discovery of the Woodleigh impact structure, first identified by R. P. Iasky, bears a number of parallels with that of the Chicxulub impact structure of K-T boundary age, underlining the complications inherent in the study of buried impact structures by geophysical techniques and drilling. Questions raised in connection with the diameter of the Woodleigh impact structure reflect uncertainties in the criteria used to define original crater sizes in eroded and buried impact structures, as well as limits on the geological controls at Woodleigh. The truncation of the regional Ajona-Wandagee gravity ridges by the outer aureole of the Woodleigh structure, a superposed arcuate magnetic anomaly along the eastern part of the structure, seismic-reflection data indicating a central dome more than 37 km in diameter, correlation of fault patterns between Woodleigh and less deeply eroded impact structures (Ries crater, Chesapeake Bay), and morphometric estimates all indicate a final diameter of 120 km. At Woodleigh, pre-hydrothermal shock-induced melting and diaplectic transformations are heavily masked by pervasive alteration of the shocked gneisses to montmorillonite-dominated clays, accounting for the high MgO and low K2O contents of the cryptocrystalline components. Possible contamination of sub-crater levels of the Woodleigh impact structure by meteoritic components, suggested by high Ni, Co and Cr abundances and high Ni/Co and Ni/Cr ratios, requires further siderophile element analyses of vein materials. Although stratigraphic age constraints on the impact event are broad (post-Middle Devonian to pre-Early Jurassic), high-temperature (200-250 degrees C) pervasive hydrothermal activity dated by K-Ar analysis of illite-smectite indicates an age of 359 +/- 4 Ma. To date, neither Late Devonian crater fill nor impact ejecta fallout units have been identified, although metallic meteoritic ablation spherules of a similar age have been found in the Canning Basin.
Abstract:
Stochastic simulation is a recognised tool for quantifying the spatial distribution of geological uncertainty and risk in earth science and engineering. Metals mining is an area where simulation technologies are extensively used; however, applications in the coal mining industry have been limited. This is particularly due to the lack of a systematic demonstration of the capabilities of these techniques for problem solving in coal mining. This paper presents two broad and technically distinct areas of application in coal mining. The first deals with the use of simulation in the quantification of uncertainty in coal seam attributes and in risk assessment to assist coal resource classification, and in drillhole spacing optimisation to meet pre-specified risk levels at a required confidence. The second application presents the use of stochastic simulation in the quantification of fault risk, an area of particular interest to underground coal mining, and documents the performance of the approach. The examples presented demonstrate the advantages and positive contribution that stochastic simulation approaches bring to the coal mining industry.
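To illustrate how a set of equiprobable realizations translates into block-by-block risk measures (a Python sketch using synthetic stand-in realizations rather than real geostatistical simulations; the cutoff and classification thresholds are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for geostatistical realizations: 100 equiprobable simulations of
# coal-seam thickness (m) over a 50 x 50 block grid. In a real study these
# would come from conditional simulation honouring the drillhole data.
realizations = rng.normal(loc=2.5, scale=0.4, size=(100, 50, 50))

cutoff = 2.0                                              # minimum mineable thickness (m), assumed
prob_below_cutoff = (realizations < cutoff).mean(axis=0)  # per-block risk of being sub-cutoff
p10, p90 = np.percentile(realizations, [10, 90], axis=0)
uncertainty_span = p90 - p10                              # per-block spread of the simulations

# A simple risk-based classification rule (thresholds are assumptions):
measured = (uncertainty_span < 0.8) & (prob_below_cutoff < 0.10)
print(f"{measured.mean():.0%} of blocks meet the 'measured' criterion")
```

Repeating this calculation for progressively wider drillhole spacings is one way to find the coarsest spacing that still meets a pre-specified risk level at the required confidence.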
Abstract:
The research carried out in this thesis was mainly concerned with the effects of large induction motors and their transient performance in power systems. Computer packages using the three-phase coordinate frame of reference were developed to simulate induction motor transient performance. A technique using matrix algebra was developed to allow extension of the three-phase coordinate method to the analysis of asymmetrical and symmetrical faults on both sides of the three-phase delta-star transformer which is usually required when connecting large induction motors to the supply system. System simulation applying these two techniques was used to study the transient stability of a power system. The response of a typical system, loaded with a group of large induction motors, two three-phase delta-star transformers, a synchronous generator and an infinite system, was analysed. The computer software developed to study this system has the advantage that different types of fault at different locations can be studied by simple changes in the input data. The research also involved investigating the possibility of using different integration routines such as the Runge-Kutta-Gill, Runge-Kutta-Fehlberg and Predictor-Corrector methods. This investigation enabled a reduction in computation time, which is necessary when solving the induction motor equations expressed in terms of the three-phase variables. The outcome of this investigation was utilised in analysing an introductory model (containing only minimal control action) of an isolated system having a significant induction motor load compared to the size of the generator energising the system.
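To illustrate the comparison between integration routines (a Python sketch on a drastically simplified, made-up speed equation rather than the full three-phase machine model; SciPy's RK45 stands in for the adaptive Runge-Kutta-Fehlberg family):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Drastically simplified stand-in for the machine equations: per-unit rotor
# speed dynamics of a motor re-accelerating after a supply dip. H is the
# inertia constant (s); the torque-slip curve and load torque are made up.
H, T_LOAD = 0.5, 0.6

def torque_electrical(speed):
    slip = 1.0 - speed
    return 2.0 * slip / (slip ** 2 + 0.05)

def dspeed_dt(t, y):
    return np.array([(torque_electrical(y[0]) - T_LOAD) / (2 * H)])

# Adaptive Runge-Kutta (SciPy's RK45, in the Runge-Kutta-Fehlberg family):
sol = solve_ivp(dspeed_dt, (0.0, 2.0), [0.85], method="RK45", rtol=1e-6)
print(f"adaptive RK45: {sol.t.size} steps, final speed {sol.y[0, -1]:.4f} pu")

# Fixed-step classical fourth-order Runge-Kutta for comparison:
def rk4(f, t0, t1, y0, steps):
    h, t, y = (t1 - t0) / steps, t0, np.asarray(y0, dtype=float)
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

print(f"fixed-step RK4, 200 steps: final speed {rk4(dspeed_dt, 0.0, 2.0, [0.85], 200)[0]:.4f} pu")
```

The adaptive routine typically needs far fewer derivative evaluations for the same accuracy, which is the kind of computation-time saving the thesis sought when integrating the much larger three-phase model.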
Abstract:
Computer programs have been developed to enable the coordination of fuses and overcurrent relays for radial power systems under estimated fault current conditions. The grading curves for these protection devices can be produced on a graphics terminal and a hard copy can be obtained. Additional programs have also been developed which could be used to assess the validity of relay settings (obtained under the above conditions) when the transient effect is included. Modelling of a current transformer is included because transformer saturation may occur if the fault current is high, and hence the secondary current is distorted. Experiments were carried out to confirm that distorted currents will affect the relay operating time, and it is shown that if the relay current contains only a small percentage of harmonic distortion, the relay operating time is increased. System equations were arranged to enable the model to predict fault currents with a generator transformer incorporated in the system, and also to include the effect of circuit breaker opening, arcing resistance, and earthing resistance. A fictitious field winding was included to enable more accurate prediction of fault currents when the system is operating at both lagging and leading power factors prior to the occurrence of the fault.
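As a minimal illustration of relay grading under an estimated fault current (a Python sketch using the IEC standard-inverse characteristic; the current settings, time multipliers and grading margin are assumed values, not settings from this work):

```python
def idmt_time(i_fault, i_setting, tms, k=0.14, alpha=0.02):
    """IEC standard-inverse overcurrent relay operating time (s)."""
    return tms * k / ((i_fault / i_setting) ** alpha - 1.0)

# Hypothetical radial feeder grading check: downstream relay R2 must clear the
# fault before upstream relay R1, with a grading margin covering breaker time,
# relay overshoot and errors.
i_fault = 2500.0                       # estimated fault current (A)
t_r2 = idmt_time(i_fault, i_setting=400.0, tms=0.10)
t_r1 = idmt_time(i_fault, i_setting=600.0, tms=0.25)
margin = 0.4                           # typical grading margin (s), assumed
print(f"R2 {t_r2:.2f} s, R1 {t_r1:.2f} s, graded: {t_r1 - t_r2 >= margin}")
```

Note that this idealized curve assumes an undistorted sinusoidal current; as the abstract points out, harmonic distortion caused by CT saturation can lengthen the actual operating time beyond the value predicted here.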
Abstract:
Hazard and operability (HAZOP) studies on chemical process plants are very time-consuming, and often tedious, tasks. A HAZOP study requires a team of experts to systematically analyse every conceivable process deviation, identifying possible causes and any hazards that may result. The systematic nature of the task, and the fact that some team members may be unoccupied for much of the time, can lead to tedium, which in turn may lead to serious errors or omissions. One aid to HAZOP is the fault tree, which presents the system failure logic graphically so that the study team can readily assimilate their findings. Fault trees are also useful for identifying design weaknesses, and may additionally be used to estimate the likelihood of hazardous events occurring. The one drawback of fault trees is that they are difficult to generate by hand, because of the sheer size and complexity of modern process plants. The work in this thesis proposes a computer-based method to aid the development of fault trees for chemical process plants. The aim is to produce concise, structured fault trees that are easy for analysts to understand. Standard plant input-output equation models for major process units are modified so that they include ancillary units and pipework, which reduces the number of nodes required to represent a plant. Control loops and protective systems are modelled as operators which act on process variables; this modelling maintains the functionality of the loops, making fault tree generation easier and improving the structure of the fault trees produced. A method called event ordering is proposed which allows the magnitude of deviations of controlled or measured variables to be defined in terms of the control loops and protective systems with which they are associated.
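To show the kind of structured fault tree such a method works with (a minimal Python sketch of the data structure and of top-event evaluation assuming independent basic events; the plant fragment and probabilities are hypothetical, and the thesis's generation method itself is not reproduced here):

```python
from dataclasses import dataclass, field

@dataclass
class Gate:
    kind: str                                      # "AND" or "OR"
    children: list = field(default_factory=list)   # Gates or (name, prob) basic events

def probability(node):
    """Top-event probability assuming independent basic events."""
    if isinstance(node, tuple):                    # basic event: (name, probability)
        return node[1]
    probs = [probability(c) for c in node.children]
    if node.kind == "AND":
        p = 1.0
        for q in probs:
            p *= q
        return p
    p_none = 1.0                                   # OR gate: 1 - P(no child occurs)
    for q in probs:
        p_none *= 1.0 - q
    return 1.0 - p_none

# Hypothetical fragment: "high pressure in vessel" occurs if inflow is excessive
# (control loop fails AND valve stuck open) OR the relief system fails on demand.
tree = Gate("OR", [
    Gate("AND", [("pressure_control_loop_fails", 0.02), ("control_valve_stuck_open", 0.01)]),
    ("relief_valve_fails_on_demand", 0.001),
])
print(f"P(top event) = {probability(tree):.6f}")
```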
Abstract:
The initial aim of this research was to investigate the application of Expert Systems, or Knowledge-Based Systems, technology to the automated synthesis of Hazard and Operability Studies. Due to the generic nature of fault analysis problems and the way in which Knowledge-Based Systems work, this goal evolved into a consideration of automated support for fault analysis in general, covering HAZOP, Fault Tree Analysis, FMEA and fault diagnosis in the process industries. This thesis describes a proposed architecture for such an Expert System. The purpose of the system is to produce a descriptive model of faults and fault propagation from a description of the physical structure of the plant; from these descriptive models, the desired fault analysis may be produced. The way in which this is done reflects the complexity of the problem which, in principle, encompasses the whole of the discipline of process engineering. An attempt is made to incorporate the perceived method that an expert uses to solve the problem; keywords, heuristics and guidelines from techniques such as HAZOP and fault tree synthesis are used. In a true Expert System, the performance of the system depends strongly on the high quality of the knowledge that is incorporated. This expert knowledge takes the form of heuristics, or rules of thumb, which are used in problem solving. This research has shown that, for the application of fault analysis heuristics, it is necessary to have a representation of the details of fault propagation within a process. This helps to ensure the robustness of the system: a gradual rather than abrupt degradation at the boundaries of the domain knowledge.
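As a toy illustration of a descriptive fault propagation model searched backwards for causes (a Python sketch with a hypothetical plant fragment; the architecture in the thesis is far richer, incorporating HAZOP keywords, heuristics and guidelines):

```python
# Minimal sketch of fault propagation between process deviations. Each entry
# means "the listed deviations can cause the key deviation". The plant
# fragment and the deviation names are hypothetical.
CAUSES = {
    "high level in tank T1": ["low flow out of T1", "high flow into T1"],
    "low flow out of T1":    ["pump P1 trips", "valve V2 blocked"],
    "high flow into T1":     ["control valve V1 fails open", "level controller LC1 fails low"],
}

def root_causes(deviation, graph=CAUSES, seen=None):
    """Backward traversal from an observed deviation to basic causes
    (deviations with no further modeled causes), in the spirit of a
    HAZOP cause search."""
    seen = set() if seen is None else seen
    if deviation in seen:
        return []
    seen.add(deviation)
    if deviation not in graph:              # no modeled causes: basic/root cause
        return [deviation]
    found = []
    for cause in graph[deviation]:
        found.extend(root_causes(cause, graph, seen))
    return found

print(root_causes("high level in tank T1"))
```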
Abstract:
The application of high-power voltage-source converters (VSCs) to multiterminal dc networks is attracting research interest. The development of VSC-based dc networks is constrained by the lack of operational experience, the immaturity of appropriate protective devices, and the lack of appropriate fault analysis techniques. VSCs are vulnerable to dc-cable short-circuit and ground faults due to the high discharge current from the dc-link capacitance. However, faults occurring along the interconnecting dc cables are most likely to threaten system operation. In this paper, cable faults in VSC-based dc networks are analyzed in detail with the identification and definition of the most serious stages of the fault that need to be avoided. A fault location method is proposed because this is a prerequisite for an effective design of a fault protection scheme. It is demonstrated that it is relatively easy to evaluate the distance to a short-circuit fault using voltage reference comparison. For the more difficult challenge of locating ground faults, a method of estimating both the ground resistance and the distance to the fault is proposed by analyzing the initial stage of the fault transient. Analysis of the proposed method is provided and is based on simulation results, with a range of fault resistances, distances, and operational conditions considered.
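For intuition on how distance and ground resistance could be estimated jointly from the initial fault transient (a Python sketch of one possible least-squares formulation with made-up cable data and a crude synthetic current; the paper's actual method may be formulated differently):

```python
import numpy as np

# Hypothetical cable parameters and fault: the converter-side measurement
# records v(t) and i(t) during the initial stage of a pole-to-ground fault.
r_km, l_km = 0.015, 0.2e-3          # ohm/km, H/km (made-up values)
d_true, rg_true = 12.0, 4.0         # km, ohm
fs = 100e3                          # sampling rate (Hz)
t = np.arange(0, 2e-3, 1 / fs)

# Crude synthetic current for the very first part of the transient:
i = 1500.0 * (1 - np.exp(-t / 0.4e-3))
di_dt = np.gradient(i, 1 / fs)
v = d_true * (r_km * i + l_km * di_dt) + rg_true * i      # measurement model

# The measured voltage is linear in both unknowns (distance, ground resistance),
# so a least-squares fit over the transient samples recovers them jointly.
A = np.column_stack([r_km * i + l_km * di_dt, i])
(d_est, rg_est), *_ = np.linalg.lstsq(A, v, rcond=None)
print(f"distance ~ {d_est:.1f} km, ground resistance ~ {rg_est:.1f} ohm")
```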
Abstract:
The operation of technical processes requires increasingly advanced supervision and fault diagnostics to improve reliability and safety. This paper gives an introduction to the field of fault detection and diagnostics and provides a short classification of methods. The growing complexity and functional importance of inertial navigation systems (INS) lead to high losses when the equipment fails. The paper is devoted to the development of an INS diagnostics system that allows the cause of a malfunction to be identified. The practical realization of this system is a software package performing a set of multidimensional data analyses. The project consists of three parts: an analysis subsystem, a data collection subsystem, and a universal interface for an open-architecture realization. To improve diagnostics on small analysis samples, new approaches will be applied, based on voting among pattern recognition algorithms and on taking into account correlations between the target and input parameters. The system is currently at the development stage.
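As an illustration of voting among pattern recognition algorithms (a Python sketch using scikit-learn on synthetic stand-in data; the feature names, fault classes and classifier choices are assumptions, not the system described in the paper):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for INS telemetry features (e.g. gyro bias drift,
# accelerometer noise level, temperature) with a fault-class label; real data
# would come from the data-collection subsystem.
n = 300
healthy     = rng.normal([0.0, 1.0, 25.0], [0.1, 0.2, 2.0], size=(n, 3))
gyro_fault  = rng.normal([0.8, 1.0, 25.0], [0.2, 0.2, 2.0], size=(n, 3))
accel_fault = rng.normal([0.0, 2.5, 25.0], [0.1, 0.4, 2.0], size=(n, 3))
X = np.vstack([healthy, gyro_fault, accel_fault])
y = np.repeat([0, 1, 2], n)     # 0 = healthy, 1 = gyro fault, 2 = accelerometer fault

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Hard voting over several pattern-recognition algorithms, so that no single
# classifier's mistake on a small sample decides the diagnosis on its own.
ensemble = VotingClassifier(
    estimators=[("knn", KNeighborsClassifier(n_neighbors=5)),
                ("tree", DecisionTreeClassifier(max_depth=4, random_state=0)),
                ("nb", GaussianNB())],
    voting="hard")
ensemble.fit(X_tr, y_tr)
print(f"voting accuracy: {ensemble.score(X_te, y_te):.2f}")
```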