875 results for polygonal fault


Relevance:

10.00%

Publisher:

Abstract:

A method is presented for obtaining a lower bound on the carrying capacity of reinforced concrete foundation slab structures subject to non-uniform contact pressure distributions. The functional approach suggested by Vallance for simply supported square slabs under uniform pressure has been extended to simply supported rectangular slabs under symmetrical non-uniform pressure distributions. Radial solutions, ideally suited to rotationally symmetric problems, are shown to be adaptable to regular polygonal slabs subject to contact pressure paraboloids with constant edge pressures. The functional approach is shown to remain well suited even when the pressure varies along the edges.

Abstract:

This paper evaluates methods of multiclass support vector machines (SVMs) for effective use in distance relay coordination. It also describes a strategy of supportive systems to aid the conventional protection philosophy in situations where protection systems have mal-operated and/or information is missing, and to provide selective and secure coordination. SVMs have considerable potential as zone classifiers for distance relay coordination. This typically requires a multiclass SVM classifier to effectively learn the underlying relationship between the reach of different zones and the apparent impedance trajectory during a fault. Several methods have been proposed for multiclass classification, typically combining several binary SVM classifiers. Some authors have extended binary SVM classification to a one-step single-optimization formulation considering all classes at once. In this paper, the one-step multiclass, one-against-all, and one-against-one methods are compared with respect to accuracy, number of iterations, number of support vectors, and training and testing time. The performance analysis of these three methods is presented on three data sets belonging to training and testing patterns of three supportive systems for a region and part of a network, an equivalent 526-bus system of the practical Indian Western grid.
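The two decompositions compared in the abstract can be sketched generically. The snippet below is a toy illustration, not the paper's method: `BinaryScorer` is a hypothetical centroid-distance stand-in for a trained binary SVM, and the data layout is invented; only the voting schemes (one-against-all via argmax of scores, one-against-one via pairwise majority vote) follow the standard definitions.

```python
import numpy as np

# Hypothetical stand-in binary learner (a centroid-distance scorer, NOT a
# true SVM), used only to show how the two decompositions differ.
class BinaryScorer:
    def fit(self, X_pos, X_neg):
        self.c_pos, self.c_neg = X_pos.mean(0), X_neg.mean(0)
        return self

    def score(self, x):  # > 0 favours the positive class
        return np.linalg.norm(x - self.c_neg) - np.linalg.norm(x - self.c_pos)

def one_against_all(X, y, x):
    # K binary problems: class k vs. the rest; predict the highest scorer.
    classes = np.unique(y)
    scores = [BinaryScorer().fit(X[y == k], X[y != k]).score(x) for k in classes]
    return classes[int(np.argmax(scores))]

def one_against_one(X, y, x):
    # K(K-1)/2 pairwise problems; predict by majority vote.
    classes = np.unique(y)
    votes = {k: 0 for k in classes}
    for i, a in enumerate(classes):
        for b in classes[i + 1:]:
            s = BinaryScorer().fit(X[y == a], X[y == b]).score(x)
            votes[a if s > 0 else b] += 1
    return max(votes, key=votes.get)
```

With real SVMs the trade-off the paper measures is cost: one-against-one trains many small problems, one-against-all trains fewer but larger ones, and the one-step formulation solves a single joint optimization.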

Abstract:

Power system disturbances are often caused by faults on transmission lines. When faults occur, protective relays detect the fault and initiate tripping of the appropriate circuit breakers, which isolate the affected part from the rest of the power system. Extra High Voltage (EHV) transmission substations are generally connected by multiple transmission lines to neighboring substations. In some cases relays can mal-operate under varying operating conditions because of inappropriate coordination of relay settings, and such actions reduce the power system's margins for contingencies. Hence, the reliability of power system protective relaying becomes increasingly important. In this paper an approach is presented that uses a Support Vector Machine (SVM) as an intelligent tool for identifying the faulted line emanating from a substation and finding the fault distance from the substation. Results on a 24-bus equivalent EHV system, part of the Indian southern grid, are presented for illustration. This approach is particularly important for avoiding mal-operation of relays following a disturbance in a neighboring line connected to the same substation and for assuring secure operation of the power system.
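Once the faulted line is identified, the distance step can be illustrated with the classic reactance method (a textbook simplification, not the paper's SVM regression): the relay's voltage and current phasors give the apparent impedance, and its reactance scaled by the line's per-km reactance gives the distance. The line constant below is an assumed value.

```python
X_PER_KM = 0.33  # ohm/km: ASSUMED positive-sequence reactance of the line

def fault_distance_km(v_phasor, i_phasor, x_per_km=X_PER_KM):
    # Apparent impedance seen from the substation bus.
    z_apparent = v_phasor / i_phasor
    # Reactance-based distance estimate (ignores fault resistance, infeed).
    return z_apparent.imag / x_per_km

# e.g. a bolted fault 80 km out appears as Z ≈ 80 * (0.1 + 0.33j) ohm
```

The simplification ignores fault resistance and remote infeed, which is precisely why a learned model such as the paper's SVM can outperform it on realistic records.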

Abstract:

The move towards IT outsourcing is the first step towards an environment where compute infrastructure is treated as a service. In utility computing this IT service has to honor Service Level Agreements (SLAs) in order to meet the desired Quality of Service (QoS) guarantees. Such an environment requires reliable services in order to maximize the utilization of resources and to decrease the Total Cost of Ownership (TCO). This reliability cannot come at the cost of resource duplication, since duplication increases the TCO of the data center and hence the cost per compute unit. In this paper we look into projecting the impact of hardware failures on SLAs and the techniques required to take proactive recovery steps when a failure is predicted. By maintaining health vectors of all hardware and system resources, we predict the failure probability of resources at runtime, based on observed hardware errors and failure events. This in turn drives an availability-aware middleware to take proactive action, even before the application is affected, in case the system and the application have low recoverability. The proposed framework has been prototyped on a system running HP-UX. Our offline analysis of the prediction system on hardware error logs indicates no more than 10% false positives. To the best of our knowledge, this work is the first of its kind to perform an end-to-end analysis of the impact of a hardware fault on application SLAs in a live system.
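The health-vector idea can be sketched as follows. Everything here is an assumption for illustration: the decayed error count, the logistic mapping from score to failure probability, and the action threshold are invented, not the paper's model.

```python
import math

class HealthVector:
    """Toy per-resource health tracker (parameters are ASSUMED)."""
    def __init__(self, decay=0.9, k=0.8, midpoint=5.0):
        self.score = 0.0                 # decayed count of observed HW errors
        self.decay, self.k, self.mid = decay, k, midpoint

    def observe(self, n_errors):
        # Exponentially decay old evidence, then add new error events.
        self.score = self.decay * self.score + n_errors

    def failure_probability(self):
        # Logistic map from error score to a [0, 1] probability (assumed form).
        return 1.0 / (1.0 + math.exp(-self.k * (self.score - self.mid)))

def proactive_action(hv, threshold=0.7):
    # Middleware hook: checkpoint/migrate before the SLA is actually hit.
    return "migrate" if hv.failure_probability() > threshold else "monitor"
```

The design point the abstract makes survives the simplification: acting on a predicted failure avoids duplicating resources, because only resources whose probability crosses the threshold trigger recovery work.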

Abstract:

Partition of unity methods, such as the extended finite element method, allow discontinuities to be simulated independently of the mesh (Int. J. Numer. Meth. Engng. 1999; 45:601-620). This eliminates the need for the mesh to be aligned with the discontinuity, or for cumbersome re-meshing as the discontinuity evolves. However, to compute the stiffness matrix of the elements intersected by the discontinuity, a subdivision of the elements into quadrature subcells aligned with the discontinuity is commonly adopted. In this paper, we use a simple integration technique proposed for polygonal domains (Int. J. Numer. Meth. Engng 2009; 80(1):103-134. DOI: 10.1002/nme.2589) to suppress the need for element subdivision. Numerical results presented for a few benchmark problems in linear elastic fracture mechanics and for a multi-material problem show that the proposed method yields accurate results. Owing to its simplicity, the proposed integration technique can easily be incorporated in any existing code. Copyright (C) 2010 John Wiley & Sons, Ltd.
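The cited polygonal technique (conformal mapping of the cut element) is beyond a short sketch, but the flavour of subdivision-free integration can be shown with a simpler boundary-integral identity: by Green's theorem, low-order area integrals over a polygon follow directly from its vertex loop, with no interior partition into subcells. This is a generic illustration, not the scheme of the cited paper.

```python
def polygon_integrals(verts):
    """Return (area, ∫x dA, ∫y dA) over a polygon, boundary-only.

    verts: list of (x, y) vertices in counter-clockwise order.
    Uses the shoelace / Green's-theorem identities, so no triangulation
    or quadrature subcells are needed.
    """
    A = Mx = My = 0.0
    n = len(verts)
    for i in range(n):
        (x0, y0), (x1, y1) = verts[i], verts[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        A += cross                    # twice the signed area contribution
        Mx += (x0 + x1) * cross       # becomes ∫x dA after dividing by 6
        My += (y0 + y1) * cross       # becomes ∫y dA after dividing by 6
    return A / 2.0, Mx / 6.0, My / 6.0
```

For the unit square this yields area 1 and first moments 0.5, exactly, from the four edges alone.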

Abstract:

A fuzzy system is developed using a linearized performance model of the gas turbine engine for performing gas turbine fault isolation from noisy measurements. By using a priori information about measurement uncertainties, and through design-variable linking, the design of the fuzzy system is posed as an optimization problem with a small number of design variables, which can be solved by a genetic algorithm in a modest amount of computer time. The faults modeled are module faults in five modules: fan, low-pressure compressor, high-pressure compressor, high-pressure turbine and low-pressure turbine. The measurements used are deviations in exhaust gas temperature, low rotor speed, high rotor speed and fuel flow from a baseline 'good engine'. The genetic fuzzy system (GFS) allows rapid development of the rule base when the fault signatures and measurement uncertainties change, as happens for different engines and airlines. In addition, the GFS reduces the human effort needed in the trial-and-error process used to design the fuzzy system, making the development of such a system easier and faster. A radial basis function neural network (RBFNN) is also used to preprocess the measurements before fault isolation. The RBFNN gives significant noise reduction and, when combined with the GFS, leads to a diagnostic system that is highly robust to the presence of noise in the data, showing the advantage of a soft computing approach for gas turbine diagnostics.
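A genetic fuzzy system of this kind can be sketched minimally: Gaussian membership functions score how well a measurement deviation vector matches each module's fault signature, and a genetic search tunes the membership widths. The signatures, measurement channels and GA settings below are all invented for illustration; the paper's actual design links the design variables through a priori measurement uncertainties.

```python
import math
import random

SIGNATURES = {  # ASSUMED fault signatures: deviations in
    "fan": [1.0, -0.5, 0.2, 0.1],   # [EGT, low rotor speed, high rotor speed, fuel flow]
    "HPC": [0.6, 0.1, -0.8, 0.4],
    "HPT": [1.2, 0.3, 0.5, -0.6],
}

def classify(meas, widths):
    # Fuzzy degree of match = product of Gaussian memberships per channel.
    def degree(sig):
        return math.prod(math.exp(-((m - s) / w) ** 2)
                         for m, s, w in zip(meas, sig, widths))
    return max(SIGNATURES, key=lambda k: degree(SIGNATURES[k]))

def fitness(widths, samples):
    # Number of labelled samples classified correctly.
    return sum(classify(m, lbl_widths := widths) == lbl for m, lbl in samples) if False else \
           sum(classify(m, widths) == lbl for m, lbl in samples)

def _mutate(w):
    return max(0.05, w + random.gauss(0, 0.1))  # keep widths positive

def genetic_search(samples, pop=20, gens=30):
    P = [[random.uniform(0.1, 2.0) for _ in range(4)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=lambda w: -fitness(w, samples))
        elite = P[: pop // 2]
        P = elite + [[_mutate(w) if random.random() < 0.3 else w
                      for w in random.choice(elite)]
                     for _ in range(pop - len(elite))]
    return P[0]  # best of the last sorted elite
```

The point of the GA here, as in the abstract, is that regenerating the rule base for a new engine or airline only requires re-running the search with new signatures, not manual re-tuning.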

Abstract:

The problem of denoising damage indicator signals for improved operational health monitoring of systems is addressed by applying soft computing methods to design filters. Since measured data in operational settings are contaminated with noise and outliers, pattern recognition algorithms for fault detection and isolation can give false alarms. A direct way to improve fault detection and isolation is to remove noise and outliers from the time series of measured data or damage indicators before performing detection and isolation. Many popular signal-processing approaches do not work well with damage indicator signals, which can contain sudden changes due to abrupt faults, as well as non-Gaussian outliers. Signal-processing algorithms based on radial basis function (RBF) neural networks and weighted recursive median (WRM) filters are explored for denoising simulated time series. The RBF neural network filter is developed using a K-means clustering algorithm and is much less computationally expensive to develop than feedforward neural networks trained using backpropagation. The nonlinear, multimodal integer-programming problem of selecting optimal integer weights for the WRM filter is solved using a genetic algorithm. Numerical results are obtained for helicopter rotor structural damage indicators based on simulated frequencies. Test signals consider low-order polynomial growth of damage indicators with time, to simulate gradual or incipient faults, and step changes in the signal, to simulate abrupt faults. Noise and outliers are added to the test signals. The WRM and RBF filters achieve noise reductions of 54-71% and 59-73%, respectively, for the test signals considered in this study. Their performance is much better than that of the moving-average FIR filter, which causes significant feature distortion and has poor outlier-removal capability; this shows the potential of soft computing methods for specific signal-processing applications.
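A weighted recursive median filter can be sketched directly: an integer weight w means "repeat that sample w times" in the median window, and the recursive part means past entries of the window come from the already-filtered output rather than the raw input. The weights below are illustrative placeholders; the paper selects them by genetic algorithm.

```python
import statistics

def wrm_filter(x, past_w=(1, 2), center_w=3, future_w=(2, 1)):
    """Weighted recursive median filter (weights are ASSUMED, not optimized).

    past_w weights apply to already-filtered outputs y[i-2], y[i-1];
    future_w weights apply to raw inputs x[i+1], x[i+2].
    """
    y = list(x)
    n, p = len(x), len(past_w)
    for i in range(n):
        window = []
        for j, w in enumerate(past_w):            # recursive: filtered past
            k = max(i - p + j, 0)
            window += [y[k]] * w
        window += [x[i]] * center_w               # current raw sample
        for j, w in enumerate(future_w):          # raw future samples
            k = min(i + 1 + j, n - 1)
            window += [x[k]] * w
        y[i] = statistics.median(window)
    return y
```

This shows the property the abstract relies on: an isolated outlier is removed entirely, while a genuine step change (an abrupt fault) passes through without the smearing a moving-average filter would introduce.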

Abstract:

The problem of determining a minimal number of control inputs for converting a programmable logic array (PLA) with undetectable faults into a crosspoint-irredundant PLA for testing is formulated as a non-standard set covering problem. By representing subsets of sets as cubes, this problem is reformulated in terms of familiar problems. This result is significant because a crosspoint-irredundant PLA can be converted to a completely testable PLA in a straightforward fashion, thus achieving very good fault coverage and easy testability.
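Once the problem is cast as set covering, standard machinery applies. The sketch below is the textbook greedy set-cover heuristic on invented example data, not the paper's cube-based reformulation: candidate control inputs are the sets, and the crosspoint faults each one makes detectable are the elements to cover.

```python
def greedy_set_cover(universe, subsets):
    """Greedy heuristic: repeatedly pick the set covering most uncovered items.

    universe: iterable of elements (here: crosspoint faults to make testable)
    subsets:  {candidate control input: set of faults it covers} (ASSUMED data)
    """
    uncovered, chosen = set(universe), []
    while uncovered:
        best = max(subsets, key=lambda s: len(subsets[s] & uncovered))
        if not subsets[best] & uncovered:
            raise ValueError("universe not coverable by the given subsets")
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen
```

The greedy choice gives the classic ln(n)-approximation to the minimal number of control inputs; an exact minimum would require integer programming or branch-and-bound.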

Abstract:

Our main result is a new sequential method for the design of decentralized control systems. Controller synthesis is conducted on a loop-by-loop basis, and at each step the designer obtains an explicit characterization of the class C of all compensators for the loop being closed that result in closed-loop system poles lying in a specified closed region D of the s-plane, instead of merely stabilizing the closed-loop system. Since one of the primary goals of control system design is to satisfy basic performance requirements that are often directly related to closed-loop pole location (bandwidth, percentage overshoot, rise time, settling time), this approach immediately allows the designer to focus on other concerns such as robustness and sensitivity. By considering only compensators from class C and seeking the optimum member of that set with respect to sensitivity or robustness, the designer has a clearly defined, limited optimization problem to solve without concern for loss of performance. A solution to the decentralized tracking problem is also provided. This design approach has the attractive features of expandability, the use of only 'local models' for controller synthesis, and fault tolerance with respect to certain types of failure.
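The region-D membership test at the heart of each loop-closing step can be illustrated numerically (this is only a verification check, not the paper's synthesis of the whole class C): form the closed-loop characteristic polynomial for a candidate compensator and test every root against an assumed region D defined by a minimum decay rate and a minimum damping ratio.

```python
import numpy as np

def poles_in_region(num_g, den_g, num_c, den_c, sigma=0.5, zeta_min=0.4):
    """True iff all closed-loop poles lie in the ASSUMED region
    D = {s : Re(s) < -sigma and damping ratio > zeta_min}.

    Plant G(s) = num_g/den_g, compensator C(s) = num_c/den_c
    (polynomial coefficients, highest order first), unity feedback.
    """
    # Closed-loop characteristic polynomial: den_g*den_c + num_g*num_c
    char = np.polyadd(np.polymul(den_g, den_c), np.polymul(num_g, num_c))
    for s in np.roots(char):
        zeta = -s.real / abs(s) if abs(s) > 0 else 1.0
        if s.real >= -sigma or zeta < zeta_min:
            return False
    return True
```

For G(s) = 1/(s(s+2)) with gain 4, the poles -1 ± j√3 satisfy both constraints; with gain 100 the damping ratio drops to 0.1 and the test fails, which is exactly the overshoot-related exclusion the region D encodes.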


Abstract:

An application of direct methods to dynamic security assessment of power systems using structure-preserving energy functions (SPEF) is presented. The transient energy margin (TEM) is used as an index for checking the stability of the system as well as for ranking contingencies by their severity. The computation of the TEM requires the evaluation of the critical energy and the energy at fault clearing. Usually this is done by simulating the faulted trajectory, which is time-consuming. In this paper, a new algorithm that eliminates the faulted-trajectory estimation is presented to calculate the TEM. The system equations and the SPEF are developed using the centre-of-inertia (COI) formulation, and the loads are modelled as arbitrary functions of the respective bus voltages. The critical energy is evaluated using the potential energy boundary surface (PEBS) method. The method is illustrated on two realistic power system examples.
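The TEM = V_cr - V_cl computation can be illustrated on the simplest case. The sketch below uses the classic single-machine-infinite-bus energy function with the controlling unstable equilibrium as the critical energy; the paper itself uses structure-preserving functions in COI coordinates for multimachine systems with voltage-dependent loads, so this is only the one-machine analogue.

```python
import math

def smib_energy(delta, omega, M, Pm, Pmax, delta_s):
    # Kinetic plus potential energy of a single machine vs. infinite bus,
    # referenced to the stable equilibrium delta_s.
    kinetic = 0.5 * M * omega ** 2
    potential = (-Pm * (delta - delta_s)
                 - Pmax * (math.cos(delta) - math.cos(delta_s)))
    return kinetic + potential

def transient_energy_margin(delta_cl, omega_cl, M, Pm, Pmax):
    """TEM = V_cr - V_cl for the SMIB system (one-machine illustration)."""
    delta_s = math.asin(Pm / Pmax)       # stable equilibrium angle
    delta_u = math.pi - delta_s          # controlling unstable equilibrium
    v_cr = smib_energy(delta_u, 0.0, M, Pm, Pmax, delta_s)   # critical energy
    v_cl = smib_energy(delta_cl, omega_cl, M, Pm, Pmax, delta_s)  # at clearing
    return v_cr - v_cl                   # > 0 => first-swing stable
```

A positive margin means the energy injected during the fault is below what the post-fault system can absorb; ranking contingencies by this margin is exactly the severity ordering the abstract describes.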

Abstract:

An anisotropy of magnetic susceptibility (AMS) study was performed on soft-sediment samples from a trenched fault zone across the Himalayan frontal thrust (HFT), western Himalaya. The AMS orientation of K-min axes in the trench sediments is consistent with the lateral shortening revealed by the geometry of deformed regional structures and recent earthquakes. Well-defined vertical magnetic foliation parallel to the flexure cleavage, in which a vertical magnetic lineation is developed, together with high anisotropy and triaxial ellipsoids, suggests a large overprinting of earthquake-related fabrics. The AMS data suggest a gradual variation from layer-parallel shortening (LPS) at a distance from the fault trace to a simple-shear fabric close to the fault trace. An abrupt change in the shortening direction (K-min) from NE-SW to E-W suggests a juxtaposition of the pre-existing layer-parallel shortening fabric and bending-related flexure associated with an earthquake. Hence the orientation pattern of the magnetic susceptibility axes helps in identifying co-seismic structures in Late Holocene surface sediments.

Abstract:

The evolution of crystallographic texture in polycrystalline copper and nickel has been studied. The deformation texture evolution in these two materials over seven orders of magnitude of strain rate, from 3 × 10^-4 to ~2.0 × 10^3 s^-1, shows little dependence on the stacking fault energy (SFE) and the amount of deformation. Higher strain-rate deformation in nickel leads to a weaker <101> texture because of extensive microband formation and grain fragmentation. This behavior, in turn, causes less plastic spin and hence retards texture evolution. Copper maintains the stable <101> end component over a large range of strain rates (from 3 × 10^-4 to 10^2 s^-1) because its higher strain-hardening rate resists the formation of deformation heterogeneities. At higher strain rates, of the order of 2 × 10^3 s^-1, the adiabatic temperature rise assists continuous dynamic recrystallization, which leads to an increase in the volume fraction of the <101> component. Thus, strain-hardening behavior plays a significant role in the texture evolution of face-centered cubic materials. In addition, factors governing the onset of restoration mechanisms, such as purity and melting point, govern texture evolution at high strain rates. SFE may play a secondary role by governing the propensity for cross slip, which in turn helps in the activation of restoration processes.

Abstract:

I discuss role responsibility, individual responsibility and collective responsibility in a corporate multinational setting. My case study concerns minerals used in electronics that come from the Democratic Republic of the Congo. What I try to show throughout the thesis is how many things need to be taken into consideration when we discuss the responsibility of individuals in corporations. No easy and simple answers are available. Instead, we must keep in mind the complexity of the situation at all times, judging cases on an individual basis, emphasizing the importance of individual judgement and virtue, as well as the responsibility we all share as members of groups and the wider society.

I begin by discussing the demands that are placed on us as employees. There is always a potential conflict between our different roles, and also with the wider demands placed on us. Role demands are usually much more specific than the wider question of how we should act as human beings. The terminology of roles can also be misleading, as it can create illusions about our work selves being somehow radically separated from our everyday, true selves. The nature of collective decision-making and its implications for responsibility are important too. When discussing the moral responsibility of an employee in a corporate setting, one must take into account arguments from individual and collective responsibility, as well as role ethics.

Individual responsibility is not a notion separate from, or competing with, collective responsibility. Rather, the two are interlinked. Individuals' responsibilities in collective settings combine both individual responsibility and collective responsibility (which is different from aggregate individual responsibility). In the majority of cases, both will apply in varying degrees. Some members might bear individual responsibility in addition to the collective responsibility, while others bear just the collective responsibility. There are also times when no one bears individual moral responsibility but the members are still responsible for the collective part. My intuition is that collective moral responsibility is strongly linked to the way the collective setting affects individual judgements and moulds decisions, and to how individuals use the collective setting to further their own ends. Individuals remain the moral agents, but responsibility is collective if the actions in question are collective in character.

I also explore the impact of bureaucratic ethics and their influence on the individual. Bureaucracies can compartmentalize work to such a degree that individual human action is reduced to mere behaviour. Responsibility is diffused, and the people working in the bureaucracy can come to view their actions as being outside the normal human realm where they would be responsible for what they do. Language games and rules, anonymity, internal power struggles, and the fragmentation of information are just some of the reasons responsibility and morality can become blurry in big institutional settings.

Throughout the thesis I defend the following theses:
● People act differently depending on their roles. This is necessary for our society to function, but the more specific role demands should always be kept in check by the wider requirements of being a good human being.
● Acts in corporations (and other large collectives) are not reducible to individual actions, and cannot be explained fully by the behaviour of individual employees.
● Individuals are responsible for the actions they undertake in the collective as role occupiers, and are very rarely off the hook. Hiding behind role demands is usually only an excuse and shows a lack of virtue.
● Individuals in roles can be responsible even when the collective is not. This depends on whether the act they performed was corporate in nature or not.
● Bureaucratic structure affects individual thinking and is not always a healthy environment to work in.
● Individual members can share responsibility with the collective, and our share of the collective responsibility is strongly linked to our relations.
● Corporations and other collectives can be responsible for harm even when no individual is at fault. The structure and the policies of the collective are crucial.
● Socialization plays an important role in our morality, both at work and outside it. We are all responsible for the kind of moral context we create.
● When accepting a role or a position in a collective, we attach ourselves to the values of that collective.
● Ethical theories should put more emphasis on good judgement and decision-making instead of vague generalisations.

My conclusion is that the individual person is always at the centre when it comes to responsibility, and not so easily off the hook as we sometimes think. What we do, and especially whom we choose to associate ourselves with, does matter, and we should be more careful when we choose whom we work for. Individuals within corporations are responsible for choosing a corporation whose conduct they can endorse morally, if not fully, then at least for the most part. Individuals are also inclusively responsible, to a varying degree, for the collective activities they contribute to, even in overdetermined contexts. We are all responsible for the kind of corporations we choose to support through our actions as consumers, investors and citizens.

Abstract:

In this Master's thesis I go through the principles of good governance. I apply these principles to the Nicaraguan context, and especially to two rural municipalities in the Chontales department. I trace the development of the space for participation at the Nicaraguan municipal level. I begin my examination from the period when the Somoza dictatorship ended and the first open elections were held, and I end it with the municipal elections held in November 2008. Those elections were rigged in 33 municipalities, which set off a crisis in Nicaragua and among the actors of development cooperation. As research methods I use two types of interviews: interviews with citizens and interviews with experts. These interviews answer my questions about the modes of participation. I also gauge citizens' trust in the authorities by asking whether they voted in the municipal elections of November 2008. Furthermore, I characterize the work of the municipal government from the citizen's point of view, and I ask whether citizens want to take a greater part in decision-making in their municipality. I have classified the types of citizens on the basis of the interviews I conducted. From this classification I show how many people actually have the opportunity to participate in the dialogue of municipal decision-making, and how many can follow the activity of the municipal government. The result is that after the elections of November 2008 only one of these groups can freely take part in the dialogue. This does not satisfy the principles of good governance, especially the sub-principles of participation and transparency. The incidents after the municipal elections have strongly affected the cooperation between Finland and Nicaragua. Because of the electoral fraud, Finland, like the other cooperating countries, withdrew its direct budget support. This has caused a severe economic crisis in Nicaragua, recovery from which will take a long time.

The Master's thesis is a case study of two rural municipalities, Santo Tomás and Villa Sandino. Santo Tomás has a Sandinista municipal government, which is not legitimate; in Villa Sandino the government is liberal and legitimate.