149 results for fault
Abstract:
This paper presents a multi-class support vector machine (SVM) approach for locating and diagnosing faults in electric power distribution feeders with penetration of distributed generation (DG). The proposed approach is based on the three-phase voltage and current measurements available at all the sources, i.e., at the substation and at the DG connection points. To illustrate the proposed methodology, a practical distribution feeder emanating from a 132/11 kV grid substation in India, with loads and a suitable number of DGs at different locations, is considered. To show the effectiveness of the proposed methodology, practical situations in distribution systems (DS), such as all types of faults over a wide range of fault locations, source short circuit (SSC) levels and fault impedances, are considered in the studies. The proposed fault location scheme is capable of accurately identifying the fault type, the location of the faulted feeder section and the fault impedance. The results demonstrate the feasibility of applying the proposed method in practical smart grid distribution automation (DA) for fault diagnosis.
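For context, a minimal sketch of the multi-class SVM classification step, assuming the three-phase voltage and current measurements have already been reduced to one feature vector per fault case; the feature layout, training data and hyperparameters below are hypothetical and are not taken from the paper.

```python
# Hypothetical sketch: multi-class SVM for fault-type classification.
# Features, data and hyperparameters are illustrative placeholders.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

FAULT_TYPES = ["AG", "BG", "CG", "AB", "BC", "CA", "ABG", "BCG", "CAG", "ABC"]

def make_feature_vector(v_abc_rms, i_abc_rms):
    """Stack three-phase RMS voltages and currents from all measurement
    points (substation and DG connection points) into one feature vector."""
    return np.concatenate([np.ravel(v_abc_rms), np.ravel(i_abc_rms)])

# X: (n_samples, n_features) matrix built from simulated fault cases,
# y: integer fault-type label for each case (both assumed available).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))               # placeholder training features
y = rng.integers(0, len(FAULT_TYPES), 200)   # placeholder labels

# RBF-kernel SVM; scikit-learn handles the multi-class case via one-vs-one.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X, y)

x_new = rng.normal(size=(1, 12))             # one new measurement vector
print("Predicted fault type:", FAULT_TYPES[int(clf.predict(x_new)[0])])
```

In practice, separate classifiers (or regression models) would be trained for the fault-type, faulted-section and fault-impedance outputs described in the abstract; the sketch shows only the fault-type classifier.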
Abstract:
First-principles calculations were performed to evaluate the lattice parameter, cohesive energy and stacking fault energies of ordered gamma' (L1_2) precipitates in superalloys as a function of composition. It was found that additions of Ti and Ta lead to an increase in lattice parameter and a decrease in cohesive energy, while Ni antisites had the opposite effect. Ta and Ti additions to stoichiometric Ni3Al resulted in an initial increase in the energies of APB(111), CSF(111), APB(001) and SISF(111). However, at higher concentrations, the fault energies decreased. Addition of Ni antisites decreased the energies of all four faults monotonically. A model based on nearest-neighbor bonding was used for the Ni3(Al, Ta), Ni3(Al, Ti) and Ni3(Al, Ni) pseudo-binary systems and extended to the pseudo-ternary Ni3(Al, Ta, Ni) and Ni3(Al, Ti, Ni) systems. Recipes were developed for predicting lattice parameters, cohesive energies and fault energies in pseudo-ternary systems on the basis of coefficients derived from the simpler pseudo-binary systems. The model predictions were found to be in good agreement with first-principles calculations for lattice parameters, cohesive energies, and the energies of APB(111) and CSF(111).
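For reference, planar fault energies of the kind computed here (APB, CSF, SISF) are generally obtained from the total energies of faulted and perfect supercells of identical composition; a generic form, not the paper's specific computational setup, is

```latex
\gamma_{\text{fault}} = \frac{E_{\text{faulted}} - E_{\text{perfect}}}{A_{\text{fault}}}
```

where E_faulted and E_perfect are the supercell total energies with and without the planar fault and A_fault is the area of the fault plane in the supercell.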
Abstract:
Mobile nodes observing correlated data communicate using an insecure bidirectional switch to generate a secret key, which must remain concealed from the switch. We are interested in fault-tolerant secret key rates, i.e., the rates of secret key that can be generated even if a subset of the nodes drops out before the completion of the communication protocol. We formulate a new notion of fault-tolerant secret key capacity, and present an upper bound on it. This upper bound is shown to be tight when the random variables corresponding to the observations of the nodes are exchangeable. Further, it is shown that one round of interaction achieves the fault-tolerant secret key capacity in this case. The upper bound is also tight for the case of a pairwise independent network model consisting of a complete graph, and can be attained by a noninteractive protocol.
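For background only (the classical Csiszár-Narayan result for the standard, non-fault-tolerant setting, not the paper's bound): for a set M of terminals observing correlated sources and communicating publicly, the ordinary secret key capacity is

```latex
C_{SK} = H(X_{\mathcal{M}}) - R_{CO}
```

where R_CO is the minimum total communication rate needed for omniscience. The fault-tolerant notion introduced here additionally requires the key to survive the dropout of a subset of nodes, which can only lower the achievable rate.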
Abstract:
A new hybrid multilevel power converter topology is presented in this paper. The proposed topology uses only one DC source; floating capacitors charged to asymmetrical voltage levels are used to generate the different output voltage levels. The SVPWM-based control strategy used in this converter maintains the capacitor voltages at the required levels over the entire modulation range, including the over-modulation region. For nine or more voltage levels, the number of components required in the proposed topology is significantly lower than in conventional multilevel inverter topologies. The number of capacitors required also reduces drastically compared to the conventional flying-capacitor topology as the number of levels in the inverter output increases. The topology has better fault tolerance, as it is capable of operating with a reduced number of levels, over the entire modulation range, in the event of a failure in any of the H-bridges. The transient as well as the steady-state performance of the nine-level version of the proposed topology is experimentally verified over the entire modulation range, including the over-modulation region.
Abstract:
The evolution of deformation texture in a Ni-60Co alloy with low stacking fault energy and a grain size in the nanometre range has been investigated. The analyses of texture and microstructure suggest different mechanisms of deformation in nanocrystalline as compared to microcrystalline Ni-60Co alloy. In nanocrystalline material, the mechanism responsible for texture formation has been identified as partial slip, whereas in microcrystalline material, a characteristic texture forms due to twinning and shear banding.
Abstract:
Several papers have studied fault attacks on computing a pairing value e(P, Q), where P is a public point and Q is a secret point. In this paper, we observe that these attacks are in fact effective only on a small number of pairing-based protocols, and even then only when the protocols are implemented with specific symmetric pairings. We demonstrate the effectiveness of the fault attacks on a public-key encryption scheme, an identity-based encryption scheme, and an oblivious transfer protocol when implemented with a symmetric pairing derived from a supersingular elliptic curve with embedding degree 2.
Abstract:
Three materials, namely pure aluminium, Al-4 wt.% Mg and alpha-brass, have been chosen to understand the evolution of texture and microstructure during rolling. Pure Al develops a strong copper-type rolling texture and the deformation is entirely slip dominated. In the Al-4Mg alloy, the texture is copper-type throughout the deformation. The advent of Cu-type shear bands in the later stages of deformation has a negligible effect on the final texture. alpha-brass shows a characteristic brass-type texture from the early stages of rolling. Extensive twinning in the intermediate stages of deformation (epsilon_t ≈ 0.5) causes significant texture reorientation towards the alpha-fiber. Beyond 40% reduction, deformation is dominated by Bs-type shear bands, and the banding coincides with the evolution of <111> || ND components. The crystallites within the bands preferentially show <110> || ND components. The absence of the Cu component throughout the deformation process indicates that, for the evolution of the brass-type texture, the presence of the Cu component is not a necessary condition. The final rolling texture is a synergistic effect of deformation twinning and shear banding.
Abstract:
Exascale systems of the future are predicted to have a mean time between failures (MTBF) of less than one hour. At such low MTBFs, employing periodic checkpointing alone will result in low efficiency, because the high number of application failures leads to a large amount of lost work due to rollbacks. In such scenarios, it is highly necessary to have proactive fault tolerance mechanisms that can help avoid a significant number of failures. In this work, we have developed a mechanism for proactive fault tolerance using partial replication of a set of application processes. Our fault tolerance framework adaptively changes the set of replicated processes periodically, based on failure predictions, to avoid failures. We have developed an MPI prototype implementation, PAREP-MPI, that allows changing the replica set. We have shown that our strategy involving adaptive process replication significantly outperforms existing mechanisms, providing up to 20 percent improvement in application efficiency even for exascale systems.
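To see why a low MTBF hurts checkpoint-only fault tolerance, here is a back-of-the-envelope sketch using Young's classical approximation for the optimal checkpoint interval; the MTBF and checkpoint-cost values are illustrative assumptions, not figures from the paper.

```python
# Illustrative sketch: efficiency of periodic checkpointing alone,
# using Young's approximation. All parameter values are assumptions.
import math

def young_optimal_interval(checkpoint_cost_s, mtbf_s):
    """Young's approximation for the optimal checkpoint interval."""
    return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)

def checkpoint_efficiency(checkpoint_cost_s, mtbf_s):
    """Rough fraction of time spent on useful work: overhead is roughly
    checkpoint_cost/interval (writing checkpoints) plus
    interval/(2*mtbf) (expected recomputation after a failure)."""
    tau = young_optimal_interval(checkpoint_cost_s, mtbf_s)
    overhead = checkpoint_cost_s / tau + tau / (2.0 * mtbf_s)
    return 1.0 - overhead

for mtbf_h in (24.0, 4.0, 1.0, 0.5):            # system MTBF in hours
    mtbf_s = mtbf_h * 3600.0
    eff = checkpoint_efficiency(300.0, mtbf_s)  # assumed 5-minute checkpoint cost
    print(f"MTBF = {mtbf_h:>4.1f} h -> efficiency ~ {eff:.2f}")
```

With these assumed numbers the useful-work fraction drops from roughly 90 percent at a 24-hour MTBF to around 40-60 percent below one hour, which is the regime where proactive replication becomes attractive.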
Abstract:
Time division multiple access (TDMA) based channel access mechanisms perform better than contention-based channel access mechanisms in terms of channel utilization, reliability and power consumption, especially for high data rate applications in wireless sensor networks (WSNs). Most of the existing distributed TDMA scheduling techniques can be classified as either static or dynamic. The primary purpose of static TDMA scheduling algorithms is to improve channel utilization by generating a schedule of smaller length, but they usually take a longer time to produce the schedule and hence are not suitable for WSNs in which the network topology changes dynamically. On the other hand, dynamic TDMA scheduling algorithms generate a schedule quickly, but they are not efficient in terms of the generated schedule length. In this paper, we propose a novel scheme for TDMA scheduling in WSNs, which can generate a compact schedule similar to static scheduling algorithms, while its runtime performance matches that of dynamic scheduling algorithms. Furthermore, the proposed distributed TDMA scheduling algorithm has the capability to trade off schedule length against the time required to generate the schedule. This allows WSN developers to tune the performance according to the requirements of the prevalent WSN applications and the need to perform re-scheduling. Finally, the proposed TDMA scheduling is fault-tolerant to packet loss caused by an erroneous wireless channel. The algorithm has been simulated using the Castalia simulator to compare its performance with that of other algorithms in terms of generated schedule length and the time required to generate the TDMA schedule. Simulation results show that the proposed algorithm generates a compact schedule in a very short time.
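As a point of reference for what a TDMA schedule must satisfy (nodes within two hops of each other must not share a slot, so that transmissions can be received without collision), here is a small centralized greedy sketch; it is not the paper's distributed algorithm, and the example topology is made up.

```python
# Illustrative sketch: greedy TDMA slot assignment under the usual
# two-hop (distance-2) conflict constraint. A centralized toy, not the
# distributed algorithm proposed in the paper.
def assign_slots(adjacency):
    """adjacency: dict node -> set of neighbour nodes (undirected graph).
    Returns dict node -> slot index such that no two nodes within two
    hops of each other share a slot."""
    slots = {}
    # Schedule higher-degree nodes first; a common greedy heuristic.
    order = sorted(adjacency, key=lambda n: len(adjacency[n]), reverse=True)
    for node in order:
        conflicting = set()                 # slots used within two hops
        for nbr in adjacency[node]:
            if nbr in slots:
                conflicting.add(slots[nbr])
            for nbr2 in adjacency[nbr]:
                if nbr2 != node and nbr2 in slots:
                    conflicting.add(slots[nbr2])
        slot = 0
        while slot in conflicting:          # smallest free slot
            slot += 1
        slots[node] = slot
    return slots

# Made-up six-node topology.
adj = {
    0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 4},
    3: {1, 5}, 4: {2, 5}, 5: {3, 4},
}
schedule = assign_slots(adj)
print("slot assignment:", schedule)
print("schedule length:", max(schedule.values()) + 1)
```

The static/dynamic trade-off discussed in the abstract is essentially about how much effort such an assignment procedure spends searching for a shorter schedule versus how quickly it terminates.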
Abstract:
This paper presents the modeling and analysis of a voltage source converter (VSC) based back-to-back (BTB) HVDC link. The case study considers the response to changes in the active and reactive power and the disturbance caused by a single line-to-ground (SLG) fault. The controllers at each terminal are designed to inject a variable (in magnitude and phase angle) sinusoidal, balanced set of voltages to regulate/control the active and reactive power. It is also possible to regulate the converter bus (AC) voltage by controlling the injected reactive power. The analysis is carried out using both a d-q model (neglecting the harmonics in the output voltages of the VSC) and a three-phase detailed model of the VSC. While the eigenvalue analysis and controller design are based on the d-q model, the transient simulation considers both models.
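For reference, in the d-q (synchronously rotating) frame used for such analysis, the active and reactive power at the converter bus take the standard form (sign and alignment conventions vary):

```latex
P = \tfrac{3}{2}\,(v_d i_d + v_q i_q), \qquad
Q = \tfrac{3}{2}\,(v_q i_d - v_d i_q)
```

so aligning the d-axis with the bus voltage (v_q = 0) decouples the control problem: active power is regulated through i_d and reactive power (and hence the AC bus voltage) through i_q.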
Abstract:
Simultaneous consideration of both performance and reliability issues is important in the choice of computer architectures for real-time aerospace applications. One of the requirements for such a fault-tolerant computer system is the characteristic of graceful degradation. A shared and replicated resources computing system represents such an architecture. In this paper, a combinatorial model is used for the evaluation of the instruction execution rate of a degradable, replicated resources computing system such as a modular multiprocessor system. Next, a method is presented to evaluate the computation reliability of such a system utilizing a reliability graph model and the instruction execution rate. Finally, this computation reliability measure, which simultaneously describes both performance and reliability, is applied as a constraint in an architecture optimization model for such computing systems.
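A standard building block for such graceful-degradation analyses (not the paper's exact combinatorial model) is the k-out-of-n reliability of identical, independently failing modules with individual reliability R(t):

```latex
R_{k\text{-of-}n}(t) = \sum_{i=k}^{n} \binom{n}{i}\, R(t)^{i}\,\bigl(1 - R(t)\bigr)^{\,n-i}
```

Weighting each surviving configuration by its achievable instruction execution rate then gives a combined performance-reliability (computation reliability) measure of the kind used here as an optimization constraint.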
Abstract:
This paper presents an overview of the issues in precisely defining, specifying and evaluating the dependability of software, particularly in the context of computer-controlled process systems. Dependability is intended to be a generic term embodying various quality factors and is useful for both software and hardware. While the developments in quality assurance and reliability theories have proceeded mostly in independent directions for hardware and software systems, we present here the case for developing a unified framework of dependability, a facet of the operational effectiveness of modern technological systems, and develop a hierarchical systems model helpful in clarifying this view. In the second half of the paper, we survey the models and methods available for measuring and improving software reliability. The nature of software “bugs”, the failure history of the software system in the various phases of its lifecycle, the reliability growth in the development phase, the estimation of the number of errors remaining in the operational phase, and the complexity of the debugging process have all been considered to varying degrees of detail. We also discuss the notion of software fault tolerance, methods of achieving it, and the status of other measures of software dependability such as maintainability, availability and safety.
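As one concrete example of the reliability-growth models surveyed (illustrative, not the paper's own model), the exponential NHPP (Goel-Okumoto) model expresses the expected cumulative number of faults detected by time t as

```latex
m(t) = a\,\bigl(1 - e^{-bt}\bigr)
```

where a is the eventual total number of faults and b the per-fault detection rate; the expected number of errors remaining after testing to time t is then a - m(t) = a e^{-bt}.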
Abstract:
This paper is aimed at reviewing the notion of Byzantine-resilient distributed computing systems, the relevant protocols and their possible applications as reported in the literature. The three agreement problems, namely, the consensus problem, the interactive consistency problem, and the generals problem have been discussed. Various agreement protocols for the Byzantine generals problem have been summarized in terms of their performance and level of fault-tolerance. The three classes of Byzantine agreement protocols discussed are the deterministic, randomized, and approximate agreement protocols. Finally, application of the Byzantine agreement protocols to clock synchronization is highlighted.
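Two classical bounds underlie the deterministic protocols reviewed (stated here for context, assuming no message authentication): Byzantine agreement among n processors tolerating f Byzantine faults requires

```latex
n \ge 3f + 1 \ \text{processors} \qquad \text{and} \qquad f + 1 \ \text{rounds of communication in the worst case.}
```

Randomized and approximate agreement protocols relax the problem statement precisely to work around the cost implied by these bounds.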
Abstract:
This paper develops a seven-level inverter structure for open-end winding induction motor drives. The inverter supply is realized by cascading four two-level and two three-level neutral-point-clamped inverters. The inverter control is designed in such a way that the common-mode voltage (CMV) is eliminated. DC-link capacitor voltage balancing is also achieved by using only the switching-state redundancies. The proposed power circuit structure is modular and therefore suitable for fault-tolerant applications. By appropriately isolating some of the inverters, the drive can be operated during fault conditions in a five-level or a three-level inverter mode, with preserved CMV elimination and DC-link capacitor voltage balancing, within a reduced modulation range.
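For reference (a standard definition, not specific to this topology), the common-mode voltage contributed by an inverter with pole voltages v_ao, v_bo, v_co measured with respect to a common reference o is the instantaneous average

```latex
v_{CM} = \frac{v_{ao} + v_{bo} + v_{co}}{3}
```

and CMV-elimination schemes of this kind restrict the modulation to switching-state combinations for which the common-mode contributions of the two inverter groups feeding the open-end winding cancel.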
Abstract:
In this paper, the validity of the single-fault assumption in deriving diagnostic test sets is examined with respect to crosspoint faults in programmable logic arrays (PLAs). The control input procedure developed here can be used to convert PLAs having undetectable crosspoint faults into crosspoint-irredundant PLAs for testing purposes. All crosspoints are testable in crosspoint-irredundant PLAs. The control inputs are used as extra variables during testing; they are maintained at logic 1 during normal operation. A useful heuristic for obtaining a near-minimal number of control inputs is suggested. Expressions for calculating bounds on the number of control inputs have also been obtained.