902 results for transient fault
Abstract:
As chip implementation technologies evolve to deliver more performance, computer chips use smaller components, higher transistor density, and lower supply voltages. All these factors make chips less robust and increase the probability of transient faults. A transient fault may occur once and never recur the same way during a computer system's lifetime. Its consequences vary: the operating system may abort execution if the change produced by the fault is detected through misbehavior of the application, but the biggest risk is that the fault produces an undetected data corruption that silently modifies the application's final result (for example, a bit flip in some crucial data). To study transient faults in a computer system's processor registers and memory, we have developed an extension of COTSon, the joint HP/AMD full-system simulation environment. This extension allows the injection of faults that change a single bit in the processor registers or memory of the simulated machine. The resulting fault injection system makes it possible to evaluate the effects of single-bit-flip transient faults on an application, analyze an application's robustness against such faults, and validate fault detection mechanisms and strategies.
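As a rough illustration of the kind of injection such a tool performs, the sketch below flips one randomly chosen bit of an integer standing in for a processor register. This is a hypothetical Python example, not the COTSon extension's actual interface.

```python
import random

def flip_bit(value: int, width: int = 64) -> int:
    """Return `value` with one randomly chosen bit inverted,
    emulating a single-bit transient fault in a register."""
    bit = random.randrange(width)
    return value ^ (1 << bit)

# Example: inject a fault into a simulated 64-bit register.
register = 0x0000_0000_DEAD_BEEF
faulty = flip_bit(register)
print(f"before: {register:016x}  after: {faulty:016x}")
```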
Abstract:
The objective of this work is to conduct a comparative study between the fuse cutout and the single-phase sectionalizer, which are protective devices used in electricity distribution networks; the study also aims to reduce the number of power interruptions. Distribution networks are not free from faults, disturbances, and failures, and adversities on the network, which may be transient or permanent faults, result in the interruption of electric power. Protective systems therefore exist to keep the electric system functioning. In the distribution network of the utility studied, transient faults caused immediate customer outages because fuses were used as the protective equipment downstream of the reclosers. With a fuse cutout in the network, customers are disconnected immediately; with a single-phase sectionalizer operating together with the recloser, there are three attempts to restore power. Since these reclose attempts are usually enough to clear a transient fault without disconnecting any customer, replacing fuses with single-phase sectionalizers reduced the company's outages due to transient faults by 47.6%.
Abstract:
This work presents the study and development of a combined fault location scheme for three-terminal transmission lines using wavelet transforms (WTs). The methodology is based on the low- and high-frequency components of the transient signals originated by fault situations recorded at the terminals of the system. By processing these signals with the WT, it is possible to determine the arrival times of the travelling waves of voltage and/or current from the fault point to the terminals, as well as to estimate the fundamental frequency components. The approach combines several different solutions into a reliable and accurate fault location scheme: its main idea is a decision routine that selects which method should be used for each situation presented to the algorithm. The combined algorithm was tested for different fault conditions through simulations using the ATP (Alternative Transients Program) software. The results obtained are promising and demonstrate a highly satisfactory degree of accuracy and reliability for the proposed method.
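For the travelling-wave part of such schemes, the classic two-terminal relation (the three-terminal method builds on the same timing idea) can be sketched as below; the wave speed and arrival times are illustrative assumptions, not values from this work.

```python
def fault_distance(line_length_km: float, t1_s: float, t2_s: float,
                   wave_speed_km_s: float = 2.9e5) -> float:
    """Distance from terminal 1 to the fault, given the first
    travelling-wave arrival times t1 and t2 at the two terminals.
    From t1 = d/v and t2 = (L - d)/v:  d = (L + v*(t1 - t2)) / 2."""
    return (line_length_km + wave_speed_km_s * (t1_s - t2_s)) / 2.0

# Example: 200 km line, wavefront reaches terminal 2 first.
print(fault_distance(200.0, t1_s=4.0e-4, t2_s=2.0e-4))  # ~129 km
```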
Abstract:
The main purpose of this paper is to present the architecture of an automated system that monitors and tracks in real time (online) the possible occurrence of faults and electromagnetic transients observed in primary power distribution networks. By interconnecting this automated system with the utility's operation center, it becomes possible to provide an efficient tool to assist the center's decision-making. In short, the goal is to have all the tools necessary to identify, almost instantaneously, the occurrence of faults and transient disturbances in the primary power distribution system, and to determine their respective origin and probable location. The compiled results from the application of this automated system show that the developed techniques provide accurate results, identifying and locating several occurrences of faults observed in the distribution system.
Abstract:
Power distribution automation and control are important tools in the current restructured electricity markets. Unfortunately, due to their stochastic nature, distribution system faults are hardly avoidable. This paper proposes a novel fault diagnosis scheme for power distribution systems, composed of three different processes: fault detection and classification, fault location, and fault section determination. The fault detection and classification technique is wavelet based. The fault location technique is impedance based and uses local voltage and current fundamental phasors. The fault section determination method is artificial-neural-network based and uses the local current and voltage signals to estimate the faulted section. The proposed hybrid scheme was validated through Alternative Transients Program/Electromagnetic Transients Program simulations and was implemented as embedded software. It is currently used as a fault diagnosis tool in a Southern Brazilian power distribution company.
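As a minimal sketch of the impedance-based idea, the simple reactance method below divides the apparent reactance seen from the local terminal by the line reactance per kilometre; the numbers are illustrative assumptions, and the paper's actual technique may additionally compensate for fault resistance and load.

```python
import cmath

def reactance_fault_distance(v_phasor: complex, i_phasor: complex,
                             x_per_km: float) -> float:
    """Estimate the distance to the fault as the imaginary part of
    the apparent impedance divided by the reactance per km."""
    z_apparent = v_phasor / i_phasor
    return z_apparent.imag / x_per_km

# Example: 10 kV phase voltage, fault current lagging by ~60 degrees.
v = cmath.rect(10_000, 0.0)
i = cmath.rect(800, -1.05)
print(reactance_fault_distance(v, i, x_per_km=0.4))  # distance in km
```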
Abstract:
In this paper, a novel adaptive strategy to obtain technically justified fault-ride-through requirements for wind turbines (WTs) is proposed. The main objective is to promote an effective integration of wind turbines into power systems that still have low penetration levels of wind power, based on technical and economic considerations. The level of requirement imposed by the strategy is increased stepwise over time, depending on system characteristics and on the wind power penetration level. The underlying idea is to introduce stringent requirements only when they are technically needed for reliable and secure power system operation. Voltage stability support and fault-ride-through requirements are both considered in the strategy. Simulations are based on the Chilean transmission network, a midsize isolated power system with still-low penetration levels of wind power, and include fixed-speed induction generators and doubly fed induction generators. The effects on power system stability of wind power injections integrated under the adaptive strategy are compared with the effects of the same installed capacity of wind power provided only by WTs able to fulfill stringent requirements (fault-ride-through capability and voltage stability support). Based on the simulations and international experience, technically justified requirements for the Chilean case are proposed.
Abstract:
Technology scaling has proceeded into dimensions in which the reliability of manufactured devices is becoming endangered. The reliability decrease is a consequence of physical limitations, the relative increase of variations, and decreasing noise margins, among others. A promising solution for bringing the reliability of circuits back to a desired level is the use of design methods that introduce tolerance against possible faults in an integrated circuit. This thesis studies and presents fault tolerance methods for networks-on-chip (NoC), a design paradigm targeted at very large systems-on-chip. In a NoC, resources such as processors and memories are connected to a communication network, comparable to the Internet. Fault tolerance in such a system can be achieved at many abstraction levels. The thesis studies the origin of faults in modern technologies and explains their classification into transient, intermittent, and permanent faults. A survey of fault tolerance methods is presented to demonstrate the diversity of available methods. Networks-on-chip are approached by exploring their main design choices: the selection of a topology, routing protocol, and flow control method. Fault tolerance methods for NoCs are studied at different layers of the OSI reference model. The data link layer provides a reliable communication link over a physical channel; at this abstraction level, error control coding is an efficient fault tolerance method, especially against transient faults. Error control coding methods suitable for on-chip communication are studied and their implementations presented. Error control coding loses its effectiveness in the presence of intermittent and permanent faults, so other solutions against them are presented: the introduction of spare wires and split transmissions is shown to provide good tolerance against intermittent and permanent errors, and their combination with error control coding is illustrated. At the network layer, positioned above the data link layer, fault tolerance can be achieved through the design of fault-tolerant network topologies and routing algorithms; both of these approaches are presented in the thesis, together with realizations in both categories. The thesis concludes that an optimal fault tolerance solution contains carefully co-designed elements from different abstraction levels.
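As a concrete instance of link-level error control coding against transient faults, here is a minimal Hamming(7,4) single-error-correcting sketch; it is an illustrative textbook code, not necessarily the coding scheme chosen in the thesis.

```python
def hamming74_encode(data: list[int]) -> list[int]:
    """Encode 4 data bits into a 7-bit Hamming codeword
    (parity bits at positions 1, 2 and 4, 1-indexed)."""
    d1, d2, d3, d4 = data
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(word: list[int]) -> list[int]:
    """Locate and flip a single erroneous bit via the syndrome."""
    s1 = word[0] ^ word[2] ^ word[4] ^ word[6]
    s2 = word[1] ^ word[2] ^ word[5] ^ word[6]
    s3 = word[3] ^ word[4] ^ word[5] ^ word[6]
    syndrome = s1 + (s2 << 1) + (s3 << 2)  # 1-indexed error position
    if syndrome:
        word[syndrome - 1] ^= 1
    return word

cw = hamming74_encode([1, 0, 1, 1])
cw[5] ^= 1  # transient single-bit error on the link
assert hamming74_correct(cw) == hamming74_encode([1, 0, 1, 1])
```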
Abstract:
The bedrock of old crystalline cratons is characteristically saturated with brittle structures formed during successive superimposed episodes of deformation and under varying stress regimes. As a result, the crust effectively deforms through the reactivation of pre-existing structures rather than through the activation, or generation, of new ones, and is said to be in a state of 'structural maturity'. By combining data from Olkiluoto Island, southwestern Finland, which has been investigated as the potential site of a deep geological repository for high-level nuclear waste, with observations from southern Sweden, it can be concluded that the southern part of the Svecofennian shield had already attained structural maturity during the Mesoproterozoic era. This indicates that the phase of activation of the crust, i.e. the time interval during which new fractures were generated, was brief in comparison to the subsequent reactivation phase. Structural maturity of the bedrock was also attained relatively rapidly in Namaqualand, western South Africa, after the formation of the first brittle structures during Neoproterozoic time. Subsequent brittle deformation in Namaqualand was controlled by the reactivation of pre-existing strike-slip faults. In such settings, seismic events are likely to occur through reactivation of pre-existing zones that are favourably oriented with respect to prevailing stresses. In Namaqualand, this is shown for present-day seismicity by slip tendency analysis, and at Olkiluoto, for a Neoproterozoic earthquake reactivating a Mesoproterozoic fault. By combining detailed field observations with the results of paleostress inversions and relative and absolute time constraints, seven distinct superimposed paleostress regimes have been recognized in the Olkiluoto region. From oldest to youngest these are: (1) NW-SE to NNW-SSE transpression, which prevailed soon after 1.75 Ga, when the crust had cooled sufficiently to allow brittle deformation to occur; during this phase, conjugate NNW-SSE and NE-SW striking strike-slip faults were active simultaneously with the reactivation of SE-dipping low-angle shear zones and foliation planes. This was followed by (2) N-S to NE-SW transpression, which caused partial reactivation of structures formed in the first event; (3) NW-SE extension during the Gothian orogeny, at the time of rapakivi magmatism and the intrusion of diabase dikes; (4) NE-SW transtension between 1.60 and 1.30 Ga, which also formed the NW-SE-trending Satakunta graben located some 20 km north of Olkiluoto, with greisen-type veins also forming during this phase; (5) NE-SW compression postdating both the formation of the 1.56 Ga rapakivi granites and the 1.27 Ga olivine diabases of the region; (6) E-W transpression during the early stages of the Mesoproterozoic Sveconorwegian orogeny, which also predated (7) almost coaxial E-W extension attributed to the collapse of the Sveconorwegian orogeny. The kinematic analysis of fracture systems in crystalline bedrock also provides a robust framework for evaluating fluid-rock interaction in the brittle regime; this is essential in assessing bedrock integrity for numerous geo-engineering applications, including groundwater management, transient or permanent CO2 storage, and site investigations for permanent waste disposal.
Investigations at Olkiluoto revealed that fluid flow along fractures is coupled with low normal tractions due to in-situ stresses, and thus deviates from the generally accepted critically stressed fracture concept, in which fluid flow is concentrated on fractures on the verge of failure. The difference is linked to the shallow conditions at Olkiluoto: because of the low differential stresses inherent at shallow depths, fracture activation and fluid flow are controlled by dilation due to low normal tractions. In deeper settings, however, fluid flow is controlled by fracture criticality caused by large differential stress, which drives shear deformation instead of dilation.
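For reference, the slip tendency analysis mentioned above reduces to computing the ratio of shear to normal traction resolved on each fault plane; the sketch below uses illustrative stress values, not data from these studies.

```python
import numpy as np

def slip_tendency(stress: np.ndarray, normal: np.ndarray) -> float:
    """Slip tendency Ts = tau / sigma_n on a plane with unit normal
    `normal` under stress tensor `stress` (compression positive)."""
    n = normal / np.linalg.norm(normal)
    traction = stress @ n
    sigma_n = float(traction @ n)                        # normal traction
    tau = float(np.linalg.norm(traction - sigma_n * n))  # shear traction
    return tau / sigma_n

# Example: strike-slip stress field (MPa) and a steep fault striking
# ~30 degrees from the maximum horizontal stress direction.
stress = np.diag([60.0, 30.0, 40.0])  # sigma_H, sigma_h, sigma_v
normal = np.array([np.sin(np.radians(30)), np.cos(np.radians(30)), 0.0])
print(slip_tendency(stress, normal))  # compare against a friction ~0.6
```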
Abstract:
To ensure minimum loss of system security and revenue, it is essential that faults on underground cable systems be located and repaired rapidly. Currently in the UK, the impulse current method is used to prelocate faults, prior to using acoustic methods to pinpoint the fault location. The impulse current method is heavily dependent on the engineer's knowledge and experience in recognising and interpreting the transient waveforms produced by the fault. The development of a prototype real-time expert system aid for the prelocation of cable faults is described. Results from the prototype demonstrate the feasibility and benefits of the expert system as an aid for the diagnosis and location of faults on underground cable systems.
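The prelocation step ultimately rests on a round-trip timing relation; a minimal sketch follows, with an assumed propagation speed (interpreting real impulse current records, which is where the expert system helps, is considerably harder).

```python
def prelocate_fault(echo_delay_s: float,
                    propagation_m_per_s: float = 1.5e8) -> float:
    """Estimate the distance to a cable fault from the round-trip
    delay of the surge/echo transient: d = v * t / 2."""
    return propagation_m_per_s * echo_delay_s / 2.0

# Example: echo returns 8 microseconds after the surge is applied.
print(prelocate_fault(8e-6))  # 600.0 metres to the fault
```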
Abstract:
In the present thesis, a new diagnosis methodology based on advanced use of time-frequency analysis techniques is presented. More precisely, a new fault index is defined that allows tracking individual fault components in a single frequency band. A frequency sliding is applied to the signals being analyzed (currents, voltages, vibration signals), so that each individual fault frequency component is shifted into a prefixed single frequency band. The discrete wavelet transform is then applied to the resulting signal to extract the fault signature in the chosen frequency band. Once the state of the machine has been qualitatively diagnosed, a quantitative evaluation of the fault degree is necessary; for this purpose, a fault index based on the energy of the approximation and/or detail signals resulting from the wavelet decomposition is introduced to quantify the fault extent. The main advantages of the new method over existing diagnosis techniques are the following:
- capability of monitoring the fault evolution continuously over time under any transient operating condition;
- no requirement for speed/slip measurement or estimation;
- higher accuracy in filtering frequency components around the fundamental in the case of rotor faults;
- reduced likelihood of false indications, by avoiding confusion with other fault harmonics (the contributions of the most relevant fault frequency components under speed-varying conditions are clamped into a single frequency band);
- low memory requirement due to the low sampling frequency;
- reduced latency of time processing (no repeated sampling operation required).
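A minimal sketch of the frequency-sliding-plus-wavelet idea is given below, assuming the PyWavelets package; the band, wavelet, and test signal are illustrative choices, not the thesis's exact fault index.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def fault_index(signal: np.ndarray, fs: float, f_fault: float,
                wavelet: str = "db8", level: int = 5) -> float:
    """Slide the spectrum so the fault component of interest lands
    near DC, then use the energy of the deepest approximation band
    of the wavelet decomposition as a scalar fault index."""
    t = np.arange(len(signal)) / fs
    slid = signal * np.exp(-2j * np.pi * f_fault * t)  # frequency sliding
    coeffs = pywt.wavedec(np.real(slid), wavelet, level=level)
    return float(np.sum(coeffs[0] ** 2))  # approximation-band energy

# Example: 50 Hz current with a weak 46 Hz sideband (rotor-fault-like).
fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
i_s = np.cos(2 * np.pi * 50 * t) + 0.05 * np.cos(2 * np.pi * 46 * t)
print(fault_index(i_s, fs, f_fault=46.0))
```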
Abstract:
This paper presents an analysis of the fault tolerance achieved by an autonomous, fully embedded evolvable hardware system that uses a combination of partial dynamic reconfiguration and an evolutionary algorithm (EA). It demonstrates that the system may self-recover from both transient and cumulative permanent faults. This self-adaptive system, based on a 2D array of 16 (4×4) Processing Elements (PEs), is tested with an image filtering application. Results show that it may properly recover from faults in up to 3 PEs, that is, more than 18% cumulative permanent faults. Two fault models are used for testing purposes, at the PE and CLB levels. Two self-healing strategies are also introduced, depending on whether fault diagnosis is available or not; they are based on scrubbing, fitness evaluation, dynamic partial reconfiguration, and in-system evolutionary adaptation. Since most of these adaptability features are already available on the system for its normal operation, the resource cost of self-healing is very low (only some code additions in the internal microprocessor core).
Abstract:
The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena, including slip weakening over a characteristic sliding distance Dc, apparent fracture energy at a rupture front, time-dependent healing after rapid slip, and various other transient and slip-rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of Dc apply to faults in nature. However, the scaling of Dc is presently an open question, and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from the sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretations to aftershock parameters. The seismicity formulation predicts that large changes of earthquake probabilities result from stress changes. Two mechanisms for foreshocks are proposed that describe the observed frequency of occurrence of foreshock-mainshock pairs by time and magnitude. In the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes, including the eventual mainshock. In the second model, accelerating fault slip on the mainshock nucleation zone triggers foreshocks.
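For reference, the standard rate- and state-dependent formulation (Dieterich-style, with the aging law for the state variable) can be sketched as below; the parameter values are illustrative assumptions, not values from the abstract.

```python
import numpy as np

# mu = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc),  dtheta/dt = 1 - V*theta/Dc
mu0, a, b, Dc, V0 = 0.6, 0.01, 0.015, 1e-5, 1e-6  # illustrative values

def friction_history(V: float, theta0: float, t_end: float,
                     dt: float = 1e-3) -> np.ndarray:
    """Integrate the state variable at constant slip rate V
    (forward Euler, for illustration) and return friction over time."""
    theta, mus = theta0, []
    for _ in range(int(t_end / dt)):
        theta += dt * (1.0 - V * theta / Dc)
        mus.append(mu0 + a * np.log(V / V0) + b * np.log(V0 * theta / Dc))
    return np.array(mus)

# Velocity step V0 -> 10*V0: friction jumps up by a*ln(10), then decays
# by b*ln(10) over a slip of order Dc (steady state: theta = Dc/V).
print(friction_history(10 * V0, theta0=Dc / V0, t_end=10.0)[[0, -1]])
```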
Abstract:
Although models of homogeneous faults develop seismicity that has a Gutenberg-Richter distribution, this is only a transient state that is followed by events strongly influenced by the nature of the boundaries. Models with geometrical inhomogeneities of fracture thresholds can limit the sizes of earthquakes but then favor the characteristic earthquake model for large earthquakes. The character of the seismicity is extremely sensitive to the distribution of inhomogeneities, suggesting that statistical rules for large earthquakes in one region may not be applicable to large earthquakes in another region. Model simulations on simple networks of faults with inhomogeneities of threshold develop episodes of lacunarity on all members of the network. There is no validity to the popular assumption that the average rate of slip on individual faults is constant. Intermediate-term precursory activity, such as local quiescence and increases in intermediate-magnitude activity at long range, is simulated well by the assumption that strong weakening of faults, through injection of fluids and weakening of asperities on inhomogeneous models of fault networks, is the dominant process; the heat flow paradox, the orientation of the stress field, and the low average stress drop in some earthquakes are understood in terms of the asperity model of inhomogeneous faulting.
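For reference, the Gutenberg-Richter distribution referred to above is log10 N(>=M) = a - b*M, i.e. exponential in magnitude; a minimal sampling sketch with an assumed b-value follows.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gr_magnitudes(n: int, b: float = 1.0, m_min: float = 2.0):
    """Draw magnitudes from a Gutenberg-Richter distribution,
    log10 N(>=M) = a - b*M, i.e. exponential above m_min."""
    beta = b * np.log(10.0)
    return m_min + rng.exponential(1.0 / beta, size=n)

mags = sample_gr_magnitudes(100_000)
# Recover the b-value from the mean excess: b = log10(e) / (mean - m_min)
print(np.log10(np.e) / (mags.mean() - 2.0))  # ~1.0
```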