878 results for Fault-proneness
Abstract:
The problem of determining a minimal number of control inputs for converting a programmable logic array (PLA) with undetectable faults into a crosspoint-irredundant PLA for testing has been formulated as a nonstandard set covering problem. By representing subsets of sets as cubes, this problem has been reformulated as familiar problems. It is noted that this result is significant because a crosspoint-irredundant PLA can be converted into a completely testable PLA in a straightforward fashion, thus achieving very good fault coverage and easy testability.
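The paper's covering formulation is explicitly nonstandard, but the underlying idea can be illustrated with the classical greedy heuristic for the standard set covering problem it relates to. This is a minimal sketch: the universe elements and candidate subsets below are hypothetical, not taken from the paper.

```python
def greedy_set_cover(universe, subsets):
    """Greedy approximation to set cover: repeatedly pick the subset
    that covers the most still-uncovered elements."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(subsets, key=lambda s: len(uncovered & set(s)))
        if not uncovered & set(best):
            raise ValueError("universe cannot be covered by the given subsets")
        chosen.append(best)
        uncovered -= set(best)
    return chosen

# Hypothetical instance: elements stand for crosspoints to be made
# testable, subsets for the crosspoints each candidate control input
# would affect.  Two control inputs suffice in this toy instance.
cover = greedy_set_cover({1, 2, 3, 4, 5},
                         [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}])
print(cover)
```

The greedy rule gives the well-known logarithmic approximation guarantee for set covering; the paper's cube-based reformulation is a different, exact route.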
Abstract:
Our main result is a new sequential method for the design of decentralized control systems. Controller synthesis is conducted on a loop-by-loop basis, and at each step the designer obtains an explicit characterization of the class C of all compensators for the loop being closed that place the closed-loop system poles in a specified closed region D of the s-plane, instead of merely stabilizing the closed-loop system. Since one of the primary goals of control system design is to satisfy basic performance requirements that are often directly related to closed-loop pole location (bandwidth, percentage overshoot, rise time, settling time), this approach immediately allows the designer to focus on other concerns such as robustness and sensitivity. By considering only compensators from class C and seeking the optimum member of that set with respect to sensitivity or robustness, the designer has a clearly defined, limited optimization problem to solve without concern for loss of performance. A solution to the decentralized tracking problem is also provided. This design approach has the attractive features of expandability, the use of only 'local models' for controller synthesis, and fault tolerance with respect to certain types of failure.
Abstract:
The problem of denoising damage indicator signals for improved operational health monitoring of systems is addressed by applying soft computing methods to design filters. Since measured data in operational settings are contaminated with noise and outliers, pattern recognition algorithms for fault detection and isolation can give false alarms. A direct approach to improving fault detection and isolation is to remove noise and outliers from time series of measured data or damage indicators before performing fault detection and isolation. Many popular signal-processing approaches do not work well with damage indicator signals, which can contain sudden changes due to abrupt faults and non-Gaussian outliers. Signal-processing algorithms based on radial basis function (RBF) neural networks and weighted recursive median (WRM) filters are explored for denoising simulated time series. The RBF neural network filter is developed using a K-means clustering algorithm and is much less computationally expensive to develop than feedforward neural networks trained using backpropagation. The nonlinear multimodal integer-programming problem of selecting optimal integer weights of the WRM filter is solved using a genetic algorithm. Numerical results are obtained for helicopter rotor structural damage indicators based on simulated frequencies. The test signals combine low-order polynomial growth of damage indicators with time, to simulate gradual or incipient faults, with step changes in the signal, to simulate abrupt faults. Noise and outliers are added to the test signals. The WRM and RBF filters achieve a noise reduction of 54-71% and 59-73%, respectively, for the test signals considered in this study. Their performance is much better than that of the moving average FIR filter, which causes significant feature distortion and has poor outlier-removal capability; this shows the potential of soft computing methods for specific signal-processing applications. (C) 2005 Elsevier B.V. All rights reserved.
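The weighted recursive median idea described above can be sketched briefly. This is an illustrative implementation, not the paper's actual filter: the weight values and the step-plus-outlier test signal are assumptions. Integer weights act by replicating samples before the median is taken, and "recursive" means already-filtered outputs re-enter the window.

```python
from statistics import median

def weighted_recursive_median(x, out_weights, in_weights):
    """Weighted recursive median filter (sketch).

    out_weights: integer weights for the most recent outputs y[n-1], y[n-2], ...
    in_weights:  integer weights for the inputs x[n], x[n-1], ...
    Integer weights replicate samples in the window; samples before the
    start of the signal are substituted by the first input value.
    """
    y = []
    for n in range(len(x)):
        window = []
        for k, w in enumerate(out_weights, start=1):   # past filtered outputs
            v = y[n - k] if n - k >= 0 else x[0]
            window += [v] * w
        for k, w in enumerate(in_weights):             # current and past inputs
            v = x[n - k] if n - k >= 0 else x[0]
            window += [v] * w
        y.append(median(window))
    return y

# Step signal with an isolated outlier: the filter suppresses the
# spike at index 3 while still tracking the abrupt step at index 5.
sig = [0, 0, 0, 50, 0, 10, 10, 10, 10]
print(weighted_recursive_median(sig, out_weights=[1], in_weights=[2, 1, 1]))
# → [0, 0, 0, 0, 0, 10, 10, 10, 10]
```

This edge-preserving behavior is exactly why the abstract argues median-type filters over moving-average FIR filters for abrupt-fault signatures; the paper's contribution is choosing the integer weights optimally with a genetic algorithm, which is not shown here.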
Abstract:
An application of direct methods to dynamic security assessment of power systems using structure-preserving energy functions (SPEF) is presented. The transient energy margin (TEM) is used as an index for checking the stability of the system as well as for ranking the contingencies based on their severity. The computation of the TEM requires the evaluation of the critical energy and the energy at fault clearing. Usually this is done by simulating the faulted trajectory, which is time-consuming. In this paper, a new algorithm which eliminates the faulted trajectory estimation is presented to calculate the TEM. The system equations and the SPEF are developed using the centre-of-inertia (COI) formulation, and the loads are modelled as arbitrary functions of the respective bus voltages. The critical energy is evaluated using the potential energy boundary surface (PEBS) method. The method is illustrated by considering two realistic power system examples.
Abstract:
An anisotropy of magnetic susceptibility (AMS) study was performed on soft sediment samples from a trenched fault zone across the Himalayan frontal thrust (HFT), western Himalaya. The AMS orientation of K-min axes in the trench sediments is consistent with lateral shortening revealed by the geometry of deformed regional structures and recent earthquakes. Well-defined vertical magnetic foliation parallel to the flexure cleavage, in which a vertical magnetic lineation is developed, high anisotropy, and triaxial ellipsoids suggest large overprinting of earthquake-related fabrics. The AMS data suggest a gradual variation from layer parallel shortening (LPS) at a distance from the fault trace to a simple shear fabric close to the fault trace. An abrupt change in the shortening direction (K-min) from NE-SW to E-W suggests a juxtaposition of the pre-existing layer parallel shortening fabric and bending-related flexure associated with an earthquake. Hence the orientation pattern of the magnetic susceptibility axes helps in identifying co-seismic structures in Late Holocene surface sediments.
Abstract:
The evolution of crystallographic texture in polycrystalline copper and nickel has been studied. The deformation texture evolution in these two materials over seven orders of magnitude of strain rate, from 3 x 10^-4 to ~2.0 x 10^3 s^-1, shows little dependence on the stacking fault energy (SFE) and the amount of deformation. Higher strain rate deformation in nickel leads to a weaker <101> texture because of extensive microband formation and grain fragmentation. This behavior, in turn, causes less plastic spin and hence retards texture evolution. Copper maintains the stable end <101> component over a large range of strain rates (from 3 x 10^-4 to 10^2 s^-1) because of its higher strain-hardening rate, which resists the formation of deformation heterogeneities. At higher strain rates, of the order of 2 x 10^3 s^-1, the adiabatic temperature rise assists continuous dynamic recrystallization, which leads to an increase in the volume fraction of the <101> component. Thus, strain-hardening behavior plays a significant role in the texture evolution of face-centered cubic materials. In addition, factors governing the onset of restoration mechanisms, such as purity and melting point, govern texture evolution at high strain rates. SFE may play a secondary role by governing the propensity for cross slip, which in turn helps in the activation of restoration processes.
Abstract:
I discuss role responsibility, individual responsibility and collective responsibility in a corporate multinational setting. My case study concerns minerals used in electronics that come from the Democratic Republic of Congo. What I try to show throughout the thesis is how many things need to be taken into consideration when we discuss the responsibility of individuals in corporations. No easy and simple answers are available. Instead, we must keep in mind the complexity of the situation at all times, judging cases on an individual basis, emphasizing the importance of individual judgement and virtue, as well as the responsibility we all share as members of groups and the wider society. I begin by discussing the demands that are placed on us as employees. There is always a potential for conflict between our different roles and also the wider demands placed on us. Role demands are usually much more specific than the wider question of how we should act as human beings. The terminology of roles can also be misleading, as it can create illusions about our work selves being somehow radically separated from our everyday, true selves. The nature of collective decision-making and its implications for responsibility are important too. When discussing the moral responsibility of an employee in a corporate setting, one must take into account arguments from individual and collective responsibility, as well as role ethics. Individual responsibility is not a notion separate from or competing with collective responsibility. Rather, the two are interlinked. Individuals' responsibilities in collective settings combine both individual responsibility and collective responsibility (which is different from aggregate individual responsibility). In the majority of cases, both will apply in varying degrees. Some members might bear individual responsibility in addition to the collective responsibility, while others bear just the collective responsibility.
There are also times when no one bears individual moral responsibility but the members are still responsible for the collective part. My intuition is that collective moral responsibility is strongly linked to the way the collective setting affects individual judgements and moulds decisions, and to how individuals use the collective setting to further their own ends. Individuals remain the moral agents, but responsibility is collective if the actions in question are collective in character. I also explore the bureaucratic ethic and its influence on the individual. Bureaucracies can compartmentalize work to such a degree that individual human action is reduced to mere behaviour. Responsibility is diffused, and the people working in the bureaucracy can come to view their actions as lying outside the normal human realm in which they would be responsible for what they do. Language games and rules, anonymity, internal power struggles, and the fragmentation of information are just some of the reasons responsibility and morality can become blurry in big institutional settings. Throughout the thesis I defend the following theses: ● People act differently depending on their roles. This is necessary for our society to function, but the more specific role demands should always be kept in check by the wider requirements of being a good human being. ● Acts in corporations (and other large collectives) are not reducible to individual actions, and cannot be explained fully by the behaviour of individual employees. ● Individuals are responsible for the actions that they undertake in the collective as role occupiers and are very rarely off the hook. Hiding behind role demands is usually only an excuse and shows a lack of virtue. ● Individuals in roles can be responsible even when the collective is not. This depends on whether the act they performed was corporate in nature or not. ● Bureaucratic structure affects individual thinking and is not always a healthy environment to work in. 
● Individual members can share responsibility with the collective, and our share of the collective responsibility is strongly linked to our relations. ● Corporations and other collectives can be responsible for harm even when no individual is at fault. The structure and the policies of the collective are crucial. ● Socialization plays an important role in our morality both at work and outside it. We are all responsible for the kind of moral context we create. ● When accepting a role or a position in a collective, we are attaching ourselves to the values of that collective. ● Ethical theories should put more emphasis on good judgement and decision-making instead of vague generalisations. My conclusion is that the individual person is always at the centre when it comes to responsibility, and not so easily off the hook as we sometimes think. What we do, and especially whom we choose to associate ourselves with, does matter, and we should be more careful when we choose whom we work for. Individuals within corporations are responsible for ensuring that the corporation they associate with is one they can subscribe to morally, if not fully, then at least for the most part. Individuals are also inclusively responsible, to a varying degree, for the collective activities they contribute to, even in overdetermined contexts. We are all responsible for the kind of corporations we choose to support through our actions as consumers, investors and citizens.
Abstract:
In this Master's thesis I go through the principles of good governance. I apply these principles to the Nicaraguan context, and especially to two rural municipalities in the Chontales department. I clarify the development of the space of participation at the Nicaraguan municipal level. I start my examination from the period when the Somoza dictatorship ended and the first open elections were held, and I end it at the municipal elections held in November 2008. These elections were rigged in 33 municipalities, which triggered a crisis in Nicaragua and among the actors of development cooperation. As research methods I use two types of interviews in the thesis: interviews with citizens and interviews with experts. These interviews answer my questions about the methods of participation. I also gauge the level of citizens' trust in the authorities by asking whether they voted in the municipal elections of November 2008. Furthermore, I characterize the work of the municipal government from the citizen's point of view, and I find out whether citizens want to take a greater part in decision-making in their municipality. I have classified the types of citizens on the basis of the interviews I conducted. Based on this classification I explain how many people actually have the opportunity to participate in the dialogue of municipal decision-making and how many can follow the activity of the municipal governance. The result is that after the elections of November 2008 only one of the classified groups can freely take part in the dialogue. This does not comply with the principles of good governance, especially with the sub-principles of participation and transparency. The incidents after the municipal elections have strongly affected the cooperation between Finland and Nicaragua. Because of the electoral fraud, Finland, like the other cooperating countries, withdrew its direct budget support. This has caused a severe economic crisis in Nicaragua, from which recovery will take a long time. 
The Master's thesis is a case study of two rural municipalities called Santo Tómas and Villa Sandino. Santo Tómas has a Sandinista municipal government which is not legitimate. In Villa Sandino the government is liberal and legitimate.
Abstract:
The object of this research is to study the mineralogy of the diabase dykes in Suomussalmi and the relevance of the mineralogy to tectonic events, specifically large block movements in the Archaean crust. Sharp tectonic lines separate two anomalies in the dyke swarms, shown on a geomagnetic map as positive anomalies. In one of these areas, the Toravaara anomaly, the diabases seem to contain pyroxenes as a main component. Outside the Toravaara anomaly, hornblende is the main ferromagnesian mineral in the diabases. The aim of this paper is to research the differences between the diabases inside and outside the anomalies and to interpret the processes that formed the anomalies. The data for this study consist of field observations, 120 thin sections, 334 electron microprobe analyses, 19 whole-rock chemical analyses, a U-Pb age analysis and geomagnetic low-altitude aerial survey maps. The methods are interpretation of field observations, chemical analyses, microprobe analyses of single minerals and radiometric age determination, microscopic studies of the thin sections, and geothermometers and geobarometers. On the basis of field observations and petrographic studies, the diabases in the area are divided into pyroxene diabases, hornblende diabases and the Lohisärkkä porphyritic dyke swarm. Hornblende diabases are found in the entire study area, while the pyroxene diabases are concentrated in the area of the Toravaara geomagnetic anomaly. The Lohisärkkä swarm transects the whole area as a thin line from east to west. The diabases are fairly homogeneous both chemically and in mineral composition. The few exceptions are part of rarer older swarms or are significantly altered. The Lohisärkkä dyke swarm was dated at 2.21 Ga, significantly older than the most common 1.98 Ga swarm in the area. The geothermometers applied showed that the diabases on the Toravaara anomaly were stabilized at a much higher temperature than the dykes outside the anomaly. 
The geobarometers showed the pyroxenes to have crystallized at varying depths. The research showed the Toravaara anomaly to have formed by a vertical block movement, and the fault on its west side to have a total lateral transfer of only a few kilometers. The formation of the second anomaly was also interpreted to be tectonic in nature. In addition, the results of the geothermobarometry uncovered necessary conditions for the study of diabase emplacement depth: the minerals for the study must be chosen by minimum crystallization depth, and a geobarometer capable of determining the magmatic temperature must be used. In addition, it would be more suitable to conduct this kind of study in an area where the dykes are more exposed.
Abstract:
Neutral point clamped (NPC) three-level converters with insulated gate bipolar transistor devices are very popular in medium voltage, high power applications. DC bus short circuit protection is usually done using the sensed voltage across collector and emitter (i.e., V-CE sensing) of all the devices in a leg. This feature is accommodated in the conventional gate drive circuits used in two-level converters. A similar gate drive circuit, when adopted for NPC three-level converter protection, leads to false V-CE fault signals for the inner devices of the leg. The paper explains the detailed circuit behavior and the reasons for the occurrence of such false V-CE fault signals. This paper also illustrates that this phenomenon depends on the power factor of the supplied three-phase load. Finally, experimental results are presented to support the analysis. It is shown that the problem can be avoided by blocking out the V-CE sense fault signals of the inner devices of the leg.
Abstract:
In the area of testing communication systems, the interfaces between the systems to be tested and their testers have a great impact on test generation and fault detectability. Several types of such interfaces have been standardized by the International Organization for Standardization (ISO). A general distributed test architecture, containing distributed interfaces, has been presented in the literature for testing distributed systems based on the Open Distributed Processing (ODP) Basic Reference Model (BRM); it is a generalized version of the ISO distributed test architecture. We study in this paper the issue of test selection with respect to such a test architecture. In particular, we consider communication systems that can be modeled by finite state machines with several distributed interfaces, called ports. A test generation method is developed for generating test sequences for such finite state machines, based on the idea of synchronizable test sequences. Starting from the initial effort by Sarikaya, a certain amount of work has been done on generating test sequences for finite state machines with respect to the ISO distributed test architecture, all based on the idea of modifying existing test generation methods to generate synchronizable test sequences. However, none of these studies the fault coverage provided by its method. We investigate the issue of fault coverage and point out the fact that the methods given in the literature for the distributed test architecture cannot ensure the same fault coverage as the corresponding original testing methods. We also study the limitation of fault detectability in the distributed test architecture.
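The synchronizability notion referred to above can be sketched as a simple check: in a test sequence, the tester that must apply the next input needs to have taken part in the previous transition (by sending its input or observing one of its outputs), since there is no global clock between ports. The transition encoding and port names below are hypothetical assumptions for illustration; real formulations are more detailed.

```python
def is_synchronizable(seq):
    """Check synchronizability of a test sequence (sketch).

    seq is a list of transitions, each a dict giving the port that
    supplies the input ('in_port') and the set of ports that observe
    outputs ('out_ports').  Consecutive transitions synchronize when
    the tester applying the next input was involved in the previous
    transition, so it knows when to act.
    """
    for prev, nxt in zip(seq, seq[1:]):
        involved = {prev["in_port"]} | set(prev["out_ports"])
        if nxt["in_port"] not in involved:
            return False   # that tester cannot tell when to send its input
    return True

# Hypothetical two-port example (upper tester U, lower tester L):
ok = [{"in_port": "U", "out_ports": {"U", "L"}},   # L sees an output...
      {"in_port": "L", "out_ports": {"U"}}]        # ...so L may act next
bad = [{"in_port": "U", "out_ports": {"U"}},       # L sees nothing...
       {"in_port": "L", "out_ports": {"L"}}]       # ...so L cannot synchronize
print(is_synchronizable(ok), is_synchronizable(bad))   # → True False
```

Test generation methods for the distributed architecture restrict themselves to sequences passing such a check, which is also why, as the abstract notes, they may sacrifice fault coverage relative to the unrestricted original methods.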
Abstract:
The evolution of crystallographic texture during high strain rate deformation in FCC materials with different stacking fault energies (Ni, Cu, and a Cu-10Zn alloy) has been studied. The texture evolved in these FCC materials at such strain rates shows little dependence on the stacking fault energy and the amount of deformation. Copper shows an anomalous behavior that is attributed to the ease of cross slip and the continuous dynamic recrystallization that are operative under the experimental conditions.
Abstract:
Measured health signals incorporate significant details about any malfunction in a gas turbine. The attenuation of noise and removal of outliers from these health signals while preserving important features is an important problem in gas turbine diagnostics. The measured health signals are a time series of sensor measurements such as the low rotor speed, high rotor speed, fuel flow, and exhaust gas temperature in a gas turbine. In this article, a comparative study is done by varying the window length of acausal and unsymmetrical weighted recursive median filters and numerical results for error minimization are obtained. It is found that optimal filters exist, which can be used for engines where data are available slowly (three-point filter) and rapidly (seven-point filter). These smoothing filters are proposed as preprocessors of measurement delta signals before subjecting them to fault detection and isolation algorithms.
Abstract:
This paper presents the design and development of a comprehensive digital protection scheme for applications in 25 kV a.c. railway traction systems. The scheme provides distance protection, detection of wrong phase coupling in both the lagging and leading directions, high-set instantaneous trip, and PT fuse failure detection. Provision is also made to include fault location and disturbance recording. The digital relaying scheme has been tried on two types of hardware platforms, one with PC/AT-based hardware and the other with a custom-designed standalone 16-bit microcontroller-based card. Compared to the existing scheme, the operating time is around one cycle, and the relaying algorithm has been optimised to minimise the number of computations. The prototype has been rigorously tested in the laboratory using a specially designed PC-based relay test bench, and the results are highly satisfactory.
Abstract:
Detailed Fourier line shape analysis has been performed on three different compositions of the composite matrix of Al-Si-Mg and SiC. The alloy composition in wt% is Al-7%Si, 0.35%Mg, 0.14%Fe and traces of copper and titanium (~0.01%), with SiC varying from 0 to 30 wt% in three steps, i.e., 0, 10 and 30 wt%. The line shift analysis has been performed by considering the 111, 200, 220, 311 and 222 reflections after estimating their relative shifts. Peak asymmetry analysis has been performed considering the neighbouring 111 and 200 reflections, and Fourier line shape analysis has been performed considering the multiple-order 111 and 222, and 200 and 400 reflections. Combining all three analyses, it has been found that deformation stacking faults, both intrinsic (alpha') and extrinsic (alpha''), are absent in this alloy system, whereas the deformation twin probability (beta) has been found to be positive and to increase with the SiC concentration. So, like other Al-base alloys, this ternary alloy shows high stacking fault energy, and the addition of SiC introduces deformation twins, which increase with its concentration in the deformed lattices.