993 results for Failure Probability
Abstract:
A reliable method for service life estimation of a structural element is a prerequisite for service life design. A new methodology for durability-based service life estimation of reinforced concrete flexural elements with respect to chloride-induced corrosion of reinforcement is proposed. The methodology takes into consideration the fuzzy and random uncertainties associated with the variables involved in service life estimation by using a hybrid method combining the vertex method of fuzzy set theory with the Monte Carlo simulation technique. It is also shown how to determine the bounds for the characteristic value of failure probability from the resulting fuzzy set for failure probability with minimal computational effort. Using the methodology, the bounds for the characteristic value of failure probability for a reinforced concrete T-beam bridge girder have been determined. The service life of the structural element is determined by comparing the upper bound of the characteristic value of failure probability with the target failure probability. The methodology will be useful for durability-based service life design and also for making decisions regarding in-service inspections.
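As a rough illustration of the hybrid vertex/Monte Carlo idea (not the paper's actual limit state: the distributions, the fuzzy cover-depth variable and the demand/capacity model below are invented for the sketch), each vertex of an alpha-cut interval of the fuzzy variable fixes one Monte Carlo run, and the minimum and maximum estimates over the vertices bound the failure probability at that membership level; with k fuzzy variables the vertex method would evaluate all 2**k interval corners.

    import random

    def mc_failure_prob(cover_mean, n_samples=20000):
        # Hypothetical limit state: corrosion-driven demand exceeding a
        # capacity that depends on the fuzzy variable (cover depth, mm).
        failures = 0
        for _ in range(n_samples):
            capacity = random.gauss(cover_mean, 5.0)   # assumed random capacity
            demand = random.gauss(30.0, 8.0)           # assumed random demand
            if demand > capacity:
                failures += 1
        return failures / n_samples

    # Vertex method: with one fuzzy variable, the vertices of an alpha-cut
    # are just the interval end points.
    alpha_cut = (35.0, 45.0)                           # assumed alpha-cut interval
    pf = [mc_failure_prob(v) for v in alpha_cut]
    print("bounds on failure probability:", min(pf), max(pf))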
Abstract:
A new scheme for minimizing handover failure probability in mobile cellular communication systems is presented. The scheme involves a reassignment of priorities for handover requests enqueued in adjacent cells to release a channel for a handover request which is about to fail. Performance evaluation of the new scheme, carried out by computer simulation of a four-cell highway cellular system, has shown a considerable reduction in handover failure probability.
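A toy sketch of the reassignment idea, with invented data structures (the paper's channel and queueing model is not reproduced): when a channel is released, the request nearest to failure among the adjacent cells' queues is served first.

    import heapq

    # Each queued handover request is (time_to_failure, request_id); a smaller
    # time_to_failure means the call is closer to being dropped (assumed model).
    def serve_released_channel(adjacent_queues):
        # Pick, over the adjacent cells, the queued request closest to failure,
        # mimicking the reassignment of priorities among neighbouring queues.
        heads = [(q[0], cell) for cell, q in adjacent_queues.items() if q]
        if not heads:
            return None
        (ttf, request_id), cell = min(heads)
        heapq.heappop(adjacent_queues[cell])
        return request_id

    queues = {"cell_A": [(0.4, "req1"), (2.0, "req2")], "cell_B": [(1.1, "req3")]}
    for q in queues.values():
        heapq.heapify(q)
    print(serve_released_channel(queues))   # -> req1, the most endangered request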
Abstract:
By sample specificity it is meant that specimens with the same nominal material parameters, tested under the same environmental conditions, may exhibit different behavior with diversified strength. Such an effect has been widely observed in the testing of material failure and is usually attributed to the heterogeneity of the material at the mesoscopic level. The degree to which mesoscopic heterogeneity affects macroscopic failure is still not clear. Recently, the problem has been examined by making use of the statistical ensemble evolution of a dynamical system and mesoscopic stress re-distribution (SRD) models. Sample specificity was observed for non-global mean stress field models, such as the cluster mean field model and stress concentration at the tip of microdamage. Certain heterogeneity of microdamage can be sensitive to a particular SRD, leading to a domino type of coalescence. Such an effect can start from the microdamage heterogeneity and then be magnified to other scale levels. This trans-scale sensitivity is the origin of sample specificity. Sample specificity leads to a failure probability Φ(N) with a transitional region 0 < Φ(N) < 1.
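A hedged illustration of such a transitional region, using a simple weakest-link chain rather than any of the SRD models above: an ensemble of N-element samples with independently drawn strengths yields an empirical failure probability that rises smoothly from 0 to 1 over a band of loads instead of jumping.

    import random

    def phi(load, N=100, samples=2000):
        # Empirical failure probability over an ensemble of nominally identical
        # samples; each sample is a chain of N elements with random strengths.
        failed = 0
        for _ in range(samples):
            weakest = min(random.weibullvariate(1.0, 5.0) for _ in range(N))
            if weakest < load:          # weakest-link failure criterion
                failed += 1
        return failed / samples

    for load in (0.2, 0.4, 0.6, 0.8):
        print(load, phi(load))          # Phi(N) rises smoothly from ~0 to ~1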
Abstract:
In this work, the development of a probabilistic approach to robust control is motivated by structural control applications in civil engineering. Often in civil structural applications, a system's performance is specified in terms of its reliability. In addition, the model and input uncertainty for the system may be described most appropriately using probabilistic or "soft" bounds on the model and input sets. The probabilistic robust control methodology contrasts with existing H∞/μ robust control methodologies that do not use probability information for the model and input uncertainty sets, yielding only the guaranteed (i.e., "worst-case") system performance, and no information about the system's probable performance, which would be of interest to civil engineers.
The design objective for the probabilistic robust controller is to maximize the reliability of the uncertain structure/controller system for a probabilistically-described uncertain excitation. The robust performance is computed for a set of possible models by weighting the conditional performance probability for a particular model by the probability of that model, then integrating over the set of possible models. This integration is accomplished efficiently using an asymptotic approximation. The probable performance can be optimized numerically over the class of allowable controllers to find the optimal controller. Also, if structural response data becomes available from a controlled structure, its probable performance can easily be updated using Bayes's Theorem to update the probability distribution over the set of possible models. An updated optimal controller can then be produced, if desired, by following the original procedure. Thus, the probabilistic framework integrates system identification and robust control in a natural manner.
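A compact numerical sketch of this robust-performance integral, with a made-up one-parameter model class and plain quadrature in place of the asymptotic approximation used in the thesis:

    import math

    def p_fail_given_model(theta):
        # Hypothetical conditional failure probability for model parameter theta.
        return 1.0 / (1.0 + math.exp(-3.0 * (theta - 1.0)))

    def model_probability(theta, mu=0.8, sigma=0.2):
        # Assumed Gaussian probability density over the one-parameter model class.
        return math.exp(-0.5 * ((theta - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    # Weight the conditional failure probability by the model probability and
    # integrate over the model set (trapezoidal rule on theta in [0, 2]).
    thetas = [i * 0.01 for i in range(201)]
    vals = [p_fail_given_model(t) * model_probability(t) for t in thetas]
    robust_pf = sum((vals[i] + vals[i + 1]) * 0.005 for i in range(200))
    print("robust failure probability ~", robust_pf)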
The probabilistic robust control methodology is applied to two systems in this thesis. The first is a high-fidelity computer model of a benchmark structural control laboratory experiment. For this application, uncertainty in the input model only is considered. The probabilistic control design minimizes the failure probability of the benchmark system while remaining robust with respect to the input model uncertainty. The performance of an optimal low-order controller compares favorably with that of higher-order controllers for the same benchmark system that are based on other approaches. The second application is to the Caltech Flexible Structure, which is a lightweight aluminum truss structure actuated by three voice coil actuators. A controller is designed to minimize the failure probability for a nominal model of this system. Furthermore, the method for updating the model-based performance calculation given new response data from the system is illustrated.
Abstract:
In a probabilistic assessment of the performance of structures subjected to uncertain environmental loads such as earthquakes, an important problem is to determine the probability that the structural response exceeds some specified limits within a given duration of interest. This problem is known as the first excursion problem, and it has been a challenging problem in the theory of stochastic dynamics and reliability analysis. In spite of the enormous amount of attention the problem has received, there is no procedure available for its general solution, especially for engineering problems of interest where the complexity of the system is large and the failure probability is small.
The application of simulation methods to solving the first excursion problem is investigated in this dissertation, with the objective of assessing the probabilistic performance of structures subjected to uncertain earthquake excitations modeled by stochastic processes. From a simulation perspective, the major difficulty in the first excursion problem comes from the large number of uncertain parameters often encountered in the stochastic description of the excitation. Existing simulation tools are examined, with special regard to their applicability in problems with a large number of uncertain parameters. Two efficient simulation methods are developed to solve the first excursion problem. The first method is developed specifically for linear dynamical systems, and it is found to be extremely efficient compared to existing techniques. The second method is more robust to the type of problem, and it is applicable to general dynamical systems. It is efficient for estimating small failure probabilities because the computational effort grows at a much slower rate with decreasing failure probability than standard Monte Carlo simulation. The simulation methods are applied to assess the probabilistic performance of structures subjected to uncertain earthquake excitation. Failure analysis is also carried out using the samples generated during simulation, which provide insight into the probable scenarios that will occur given that a structure fails.
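For orientation, a standard Monte Carlo estimator of a first excursion probability for a toy linear oscillator under discrete white-noise excitation (the dissertation's two specialized methods are not reproduced; this is the baseline whose computational cost grows roughly like 1/P_f for a fixed accuracy):

    import random

    def first_excursion_probability(threshold=0.5, steps=500, dt=0.01, runs=5000):
        fail = 0
        for _ in range(runs):
            x, v = 0.0, 0.0
            for _ in range(steps):
                # Linear SDOF oscillator (unit frequency, 5% damping, assumed),
                # driven by i.i.d. Gaussian pulses as discrete white noise.
                a = -x - 2 * 0.05 * v + random.gauss(0.0, 1.0)
                v += a * dt
                x += v * dt
                if abs(x) > threshold:   # first excursion of the response level
                    fail += 1
                    break
        return fail / runs

    print(first_excursion_probability())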
Abstract:
We propose a complex fiber bundle model for the optimization of heterogeneous materials, which consists of many simple bundles. We also present an exact and compact recursion relation for the failure probability of a simple fiber bundle model with local load sharing, which is more efficient than the ones reported previously. Using a "renormalization method" and the recursion relation developed for the simple bundle, we calculate the failure probabilities of the complex fiber bundle. When the total number of fibers is given, we find that there exists an optimum way to organize the complex bundle, which yields a stronger bundle than other arrangements.
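The exact recursion itself is not given in the abstract, so the sketch below instead estimates the failure probability of a simple local-load-sharing bundle by direct Monte Carlo (assumed redistribution rule: a broken fiber's load is split between its nearest surviving neighbours):

    import random

    def bundle_fails(total_load, n=20):
        # Assumed local load sharing: a broken fiber's load is transferred to
        # its nearest surviving neighbours (half left, half right).
        thresholds = [random.weibullvariate(1.0, 3.0) for _ in range(n)]
        loads = [total_load / n] * n
        alive = [True] * n
        changed = True
        while changed:
            changed = False
            for i in range(n):
                if alive[i] and loads[i] > thresholds[i]:
                    alive[i] = False
                    changed = True
                    left = next((j for j in range(i - 1, -1, -1) if alive[j]), None)
                    right = next((j for j in range(i + 1, n) if alive[j]), None)
                    if left is None and right is None:
                        return True                  # no surviving fiber: failure
                    if left is None:
                        loads[right] += loads[i]
                    elif right is None:
                        loads[left] += loads[i]
                    else:
                        loads[left] += loads[i] / 2
                        loads[right] += loads[i] / 2
        return False                                 # cascade arrested: survival

    trials = 2000
    pf = sum(bundle_fails(10.0) for _ in range(trials)) / trials
    print("estimated bundle failure probability:", pf)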
Abstract:
This paper proposes a new methodology to reduce the probability of occurrence of states that cause load curtailment, while minimizing the costs involved in achieving that reduction. The methodology is supported by a hybrid method based on fuzzy sets and Monte Carlo simulation to capture both the randomness and the fuzziness of component outage parameters of the transmission power system. The novelty of this research work consists in proposing two fundamental approaches: 1) a global steady approach, which builds the model of a faulted transmission power system aiming at minimizing the unavailability corresponding to each faulted component in the transmission power system; this results in the minimal global investment cost for the faulted components in a sample of system states of the transmission network; 2) a dynamic iterative approach, which checks individually the effect of each investment on the transmission network. A case study using the IEEE 24-bus Reliability Test System (RTS) 1996 is presented to illustrate in detail the application of the proposed methodology.
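A minimal sketch of the fuzzy/Monte Carlo combination under invented numbers: outage probabilities are given as alpha-cut intervals, Monte Carlo estimates the probability of a curtailment state at each vertex combination of the intervals, and the extremes bound the result (the curtailment rule here is hypothetical):

    import random
    from itertools import product

    def p_curtailment(outage_probs, trials=20000):
        # Fraction of sampled system states in which load curtailment occurs
        # (hypothetical rule: curtailment whenever two or more lines are out).
        count = 0
        for _ in range(trials):
            out = sum(random.random() < p for p in outage_probs)
            if out >= 2:
                count += 1
        return count / trials

    # Fuzziness of the outage parameters: evaluate every vertex combination of
    # the components' alpha-cut intervals and keep the extreme estimates.
    low = [0.010, 0.020, 0.015]     # assumed lower interval end points
    high = [0.030, 0.050, 0.040]    # assumed upper interval end points
    estimates = [p_curtailment(v) for v in product(*zip(low, high))]
    print("curtailment probability bounds:", min(estimates), max(estimates))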
Abstract:
This paper presents a methodology supported by the knowledge discovery in databases (KDD) process, in order to find the failure probability of electrical equipment belonging to a real high-voltage electrical network. Data mining (DM) techniques are used to discover a set of failure probabilities and, therefore, to extract knowledge concerning the unavailability of electrical equipment such as power transformers and high-voltage power lines. The framework includes several steps: the analysis of the real database, data pre-processing, the application of DM algorithms, and finally, the interpretation of the discovered knowledge. To validate the proposed methodology, a case study based on real databases is used. Because these data carry heavy uncertainty due to climate conditions, fuzzy logic was used to determine the set of failure probabilities of the electrical components in order to re-establish the service. The results reflect the interesting potential of this approach and encourage further research on the topic.
Abstract:
The hazards associated with major accident hazard (MAH) industries are fire, explosion and toxic gas releases. Of these, toxic gas release is the worst, as it has the potential to cause extensive fatalities. Qualitative and quantitative hazard analyses are essential for the identification and quantification of the hazards associated with chemical industries. This research work presents the results of a consequence analysis carried out to assess the damage potential of the hazardous material storages in an industrial area of central Kerala, India. A survey carried out in the major accident hazard (MAH) units in the industrial belt revealed that the major hazardous chemicals stored by the various industrial units are ammonia, chlorine, benzene, naphtha, cyclohexane, cyclohexanone and LPG. The damage potential of the above chemicals is assessed using consequence modelling. Modelling of pool fires for naphtha, cyclohexane, cyclohexanone, benzene and ammonia is carried out using the TNO model. Vapor cloud explosion (VCE) modelling of LPG, cyclohexane and benzene is carried out using the TNT equivalent model. Boiling liquid expanding vapor explosion (BLEVE) modelling of LPG is also carried out. Dispersion modelling of toxic chemicals such as chlorine, ammonia and benzene is carried out using the ALOHA air quality model. Threat zones for the different hazardous storages are estimated based on the consequence modelling. The distance covered by the threat zone was found to be maximum for a chlorine release from a chlor-alkali industry located in the area. The results of consequence modelling are useful for the estimation of individual risk and societal risk in the above industrial area.

Vulnerability assessment is carried out using probit functions for toxic, thermal and pressure loads. Individual and societal risks are also estimated at different locations. Mapping of threat zones due to different incident outcome cases from different MAH industries is done with the help of ArcGIS.

Fault Tree Analysis (FTA) is an established technique for hazard evaluation. This technique has the advantage of being both qualitative and quantitative, if the probabilities and frequencies of the basic events are known. However, it is often difficult to estimate precisely the failure probability of the components due to insufficient data or the vague characteristics of the basic events. It has been reported that the availability of failure probability data pertaining to local conditions is surprisingly limited in India. This thesis outlines the generation of failure probability values for the basic events that lead to the release of chlorine from the storage and filling facility of a major chlor-alkali industry located in the area, using expert elicitation and proven fuzzy logic. Sensitivity analysis has been done to evaluate the percentage contribution of each basic event that could lead to chlorine release. Two-dimensional fuzzy fault tree analysis (TDFFTA) has been proposed for balancing the hesitation factor involved in expert elicitation.
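A small sketch of how a fuzzy fault tree propagates alpha-cut intervals through AND/OR gates (the tree and the basic-event intervals below are hypothetical, not the chlorine release tree of the thesis):

    from math import prod

    def and_gate(intervals):
        # P(all basic events) = prod(p_i); monotone, so end points suffice.
        return (prod(a for a, _ in intervals), prod(b for _, b in intervals))

    def or_gate(intervals):
        # P(any basic event) = 1 - prod(1 - p_i); also monotone in each p_i.
        return (1 - prod(1 - a for a, _ in intervals),
                1 - prod(1 - b for _, b in intervals))

    # Hypothetical alpha-cut intervals for three basic events, e.g. obtained
    # from expert elicitation at one membership level.
    valve_failure = (1e-3, 5e-3)
    operator_error = (1e-2, 4e-2)
    pipe_rupture = (1e-4, 8e-4)

    branch = and_gate([valve_failure, operator_error])
    top_event = or_gate([branch, pipe_rupture])
    print("top event probability interval:", top_event)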
Abstract:
IP-based networks still do not have the degree of reliability required by new multimedia services; achieving such reliability will be crucial to the success or failure of the new Internet generation. Most existing schemes for QoS routing do not take into consideration parameters concerning the quality of protection, such as packet loss or restoration time. In this paper, we define a new paradigm for developing protection strategies for building reliable MPLS networks, based on what we have called the network protection degree (NPD). The NPD consists of an a priori evaluation, the failure sensibility degree (FSD), which provides the failure probability, and an a posteriori evaluation, the failure impact degree (FID), which determines the impact on the network in case of failure. Having mathematically formulated these components, we point out the most relevant ones. Experimental results demonstrate the benefits of the NPD when used to enhance some current QoS routing algorithms to offer a certain degree of protection.
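The abstract does not give the FSD formula; a common building block such an evaluation could rest on is the failure probability of a path from per-link probabilities, assuming independent link failures:

    from math import prod

    def path_failure_probability(link_probs):
        # A path fails if any of its links fails (independent links assumed):
        # P(path fails) = 1 - prod(1 - p_i)
        return 1 - prod(1 - p for p in link_probs)

    working_path = [0.0010, 0.0020, 0.0005]   # hypothetical per-link probabilities
    backup_path = [0.0040, 0.0010]
    print("unprotected LSP:", path_failure_probability(working_path))
    # With a disjoint backup, both paths must fail for the service to fail:
    print("protected LSP:", path_failure_probability(working_path)
          * path_failure_probability(backup_path))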
Abstract:
New network technologies allow us to transport ever larger volumes of information and network traffic with different priority levels. In this scenario, where a better quality of service is offered, the consequences of a failure in a link or a node become more significant. Multiprotocol Label Switching (MPLS), together with its extension to Generalized MPLS (GMPLS), provides fast failure recovery mechanisms by establishing redundant Label Switched Paths (LSPs) to be used as alternative paths; in case of failure, these paths can be used to redirect the traffic. The main objective of this thesis has been to improve some of the current MPLS/GMPLS failure recovery mechanisms in order to support the protection requirements of the services provided by the new Internet. For this evaluation, several quality-of-protection parameters have been taken into account, such as failure recovery time, packet loss, and resource consumption. This thesis presents a complete review and comparison of the main MPLS-based failure recovery methods. The analysis includes path protection methods (global backups, reverse backups and 1+1 protection), local protection methods, and segment protection methods. The extension of these mechanisms to optical networks through the control plane provided by GMPLS has also been considered. In a first phase of this work, each failure recovery method is analyzed without considering resource or topology constraints. This analysis yields an initial ranking of the best protection mechanisms in terms of packet loss and recovery time. This first analysis is not applicable to real networks. To take such a scenario into account, in a second phase, routing algorithms are analyzed in which these network limitations and constraints are considered. Some of the main quality-of-service routing algorithms and some of the main routing proposals for MPLS networks are presented. Most current routing algorithms do not consider the establishment of alternative routes, or use the same objectives to select the working and protection paths. To improve the protection level, we introduce and formalize two new concepts: the network failure probability and the failure impact. An analysis of the network at the physical level provides a first element for evaluating the protection level in terms of network reliability and availability. We formalize the impact of a failure in terms of the degradation of the quality of service (delay and packet loss). We explain our proposal to reduce the failure probability and the failure impact. Finally, we give a new definition and classification of network services according to the required values of failure probability and impact. One aspect we highlight from the results of this thesis is that global path protection mechanisms maximize network reliability, whereas local or segment protection techniques minimize the failure impact. We can therefore achieve minimum impact and maximum reliability by applying local protection to the whole network, but this is not scalable in terms of resource consumption.

We propose an intermediate mechanism, applying segment protection combined with our failure probability evaluation model. In summary, this thesis presents several mechanisms for analyzing the protection level of the network. The results of the proposed models and mechanisms improve reliability and minimize the impact of a failure on the network.
Abstract:
Fiber reinforced polymer composites have been widely applied in the aeronautical field. However, composite processing that uses open molds should be avoided in view of the tight requirements and also due to possible environmental contamination. To produce high performance structural frames meeting aeronautical reproducibility and low cost criteria, the Brazilian industry has shown interest in investigating the resin transfer molding (RTM) process, a closed-mold pressure injection system which allows faster gel and cure times. Due to the anisotropic and non-homogeneous characteristics of fibrous composites, their fatigue behavior is a complex phenomenon quite different from that of metallic materials, and crucial to investigate for aeronautical applications. Fatigue sub-scale specimens of intermediate-modulus carbon fiber non-crimp multi-axial reinforcement and a mono-component epoxy system were produced according to ASTM D 3039. Axial fatigue tests were carried out according to ASTM D 3479, with a sinusoidal load of 10 Hz frequency and a load ratio R = 0.1. A wide fatigue interval was observed for the NCF/RTM6 composites. Weibull statistical analysis was applied to describe the failure probability of the materials under cyclic loads, and the fracture patterns were observed by scanning electron microscopy.
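A brief sketch of a two-parameter Weibull analysis of fatigue lives via median ranks and a least-squares fit in Weibull coordinates (the lives below are invented; the paper's data are not reproduced):

    import math

    # Hypothetical fatigue lives (cycles) at one stress level, sorted.
    lives = sorted([52000, 61000, 74000, 80000, 95000, 110000, 128000])
    n = len(lives)

    # Median-rank estimate of the failure probability of each ordered life,
    # then a least-squares line in Weibull coordinates:
    # ln(-ln(1 - F)) = m * ln(N) - m * ln(eta)
    xs, ys = [], []
    for i, life in enumerate(lives, start=1):
        f = (i - 0.3) / (n + 0.4)            # Bernard's median rank
        xs.append(math.log(life))
        ys.append(math.log(-math.log(1 - f)))

    xbar = sum(xs) / n
    ybar = sum(ys) / n
    m = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)      # Weibull modulus (shape)
    eta = math.exp(xbar - ybar / m)           # characteristic life (scale)
    print("shape m =", round(m, 2), "scale eta =", round(eta))

    # Failure probability by a given number of cycles:
    N = 90000
    print("P(failure by 90k cycles) =", 1 - math.exp(-(N / eta) ** m))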
Abstract:
The structural efficiency of the Nailed Box Beam depends directly on the behavior of the flange-web joint, which determines the partial composition of the section, as the relative displacement between elements reduces the effective rigidity of the section and changes the stress distribution and the total displacement of the section. This work discusses the use of Nailed Plywood Box Beams in small-span timber bridges, focusing on the reliability of the beam element. The results of tests carried out on 21 full-scale Nailed Plywood Box Beams are presented. The analysis of the maximum load test results shows that the maximum load follows a normal distribution, permitting the calculation of characteristic values as normal distribution theory specifies. The reliability of these elements was analyzed in the context of a timber bridge design, to estimate the failure probability as a function of the load level.
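A short sketch of the normal-distribution calculations the abstract refers to, under assumed load statistics: the characteristic value as the 5th percentile of the resistance, and the failure probability as a function of a deterministic load level:

    from statistics import NormalDist

    # Hypothetical maximum-load statistics from beam tests (kN); the 21-beam
    # data set itself is not reproduced here.
    mean_load, std_load = 42.0, 4.5
    capacity = NormalDist(mean_load, std_load)

    # Characteristic value: the 5th percentile of the resistance distribution,
    # as normal distribution theory specifies.
    xk = capacity.inv_cdf(0.05)
    print("characteristic load:", round(xk, 1))

    # Failure probability as a function of the load level:
    for load in (25.0, 30.0, 35.0):
        print(load, "->", capacity.cdf(load))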
Specialist tool for monitoring the measurement degradation process of induction active energy meters
Abstract:
This paper presents a methodology and a specialist tool for failure probability analysis of induction-type watt-hour meters, considering the main variables related to their measurement degradation processes. The database of the metering park of a distribution company, Elektro Electricity and Services Co., was used to determine the most relevant variables and to feed the data into the software. The model developed to calculate the watt-hour meters' probability of failure was implemented in a tool with a user-friendly platform, written in Delphi. Among the main features of this tool are: analysis of the probability of failure by risk range; geographical localization of the meters in the metering park; and automatic sampling of induction-type watt-hour meters, based on a risk classification expert system, in order to obtain information to aid the management of these meters. The main goals of the specialist tool are to follow and manage the measurement degradation, maintenance and replacement processes for induction watt-hour meters.
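A minimal sketch of the risk-range idea with hypothetical bands and failure probabilities (the tool's actual degradation model and band thresholds are not given in the abstract):

    # Map each meter's estimated failure probability to a risk band and
    # list the riskiest meters first, mimicking the risk-range analysis.
    def risk_band(pf):
        if pf >= 0.20:
            return "high"
        if pf >= 0.05:
            return "medium"
        return "low"

    meters = {"M-001": 0.27, "M-002": 0.08, "M-003": 0.01}   # hypothetical
    for meter_id, pf in sorted(meters.items(), key=lambda kv: -kv[1]):
        print(meter_id, pf, risk_band(pf))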
Abstract:
Graduate Program in Electrical Engineering - FEIS