903 results for failure tree analysis


Relevance:

100.00%

Publisher:

Abstract:

The hazards associated with major accident hazard (MAH) industries are fire, explosion and toxic gas releases. Of these, toxic gas release is the worst as it has the potential to cause extensive fatalities. Qualitative and quantitative hazard analyses are essential for the identification and quantification of the hazards associated with chemical industries. This research work presents the results of a consequence analysis carried out to assess the damage potential of the hazardous material storages in an industrial area of central Kerala, India. A survey carried out in the major accident hazard (MAH) units in the industrial belt revealed that the major hazardous chemicals stored by the various industrial units are ammonia, chlorine, benzene, naphtha, cyclohexane, cyclohexanone and LPG. The damage potential of the above chemicals is assessed using consequence modelling. Modelling of pool fires for naphtha, cyclohexane, cyclohexanone, benzene and ammonia is carried out using the TNO model. Vapor cloud explosion (VCE) modelling of LPG, cyclohexane and benzene is carried out using the TNT equivalent model. Boiling liquid expanding vapor explosion (BLEVE) modelling of LPG is also carried out. Dispersion modelling of toxic chemicals such as chlorine, ammonia and benzene is carried out using the ALOHA air quality model. Threat zones for the different hazardous storages are estimated based on the consequence modelling. The distance covered by the threat zone was found to be greatest for a chlorine release from a chlor-alkali industry located in the area. The results of consequence modelling are useful for the estimation of individual risk and societal risk in the above industrial area. Vulnerability assessment is carried out using probit functions for toxic, thermal and pressure loads. Individual and societal risks are also estimated at different locations. Mapping of threat zones due to different incident outcome cases from different MAH industries is done with the help of ArcGIS. Fault tree analysis (FTA) is an established technique for hazard evaluation. This technique has the advantage of being both qualitative and quantitative, if the probabilities and frequencies of the basic events are known. However, it is often difficult to estimate precisely the failure probability of the components due to insufficient data or the vague characteristics of the basic events. It has been reported that the availability of failure probability data pertaining to local conditions is surprisingly limited in India. This thesis outlines the generation of failure probability values for the basic events that lead to the release of chlorine from the storage and filling facility of a major chlor-alkali industry located in the area, using expert elicitation and proven fuzzy logic. Sensitivity analysis has been done to evaluate the percentage contribution of each basic event that could lead to chlorine release. Two-dimensional fuzzy fault tree analysis (TDFFTA) has been proposed for balancing the hesitation factor involved in expert elicitation.
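
The TNT-equivalency step described above can be illustrated with a short calculation. The Python sketch below computes the equivalent TNT mass W = eta * m * Hc / H_TNT and the Hopkinson-Cranz scaled distance Z = R / W^(1/3); the 20-tonne LPG inventory, the 5% yield factor and the heat of combustion are hypothetical illustration values, not figures from the survey. Peak overpressure at each distance would then be read from a published blast chart (e.g. Kingery-Bulmash) at the computed Z.

```python
# Minimal sketch of the TNT-equivalent VCE estimate (hypothetical inputs).

H_TNT = 4.68  # heat of detonation of TNT, MJ/kg (commonly cited value)

def tnt_equivalent_mass(fuel_mass_kg, hc_mj_per_kg, yield_factor=0.05):
    """Equivalent TNT mass: W = eta * m * Hc / H_TNT."""
    return yield_factor * fuel_mass_kg * hc_mj_per_kg / H_TNT

def scaled_distance(r_m, w_tnt_kg):
    """Hopkinson-Cranz scaled distance Z = R / W**(1/3), in m/kg^(1/3)."""
    return r_m / w_tnt_kg ** (1.0 / 3.0)

# Hypothetical 20-tonne LPG inventory, Hc ~ 46 MJ/kg, 5% explosion yield.
w = tnt_equivalent_mass(20_000, 46.0)
print(f"Equivalent TNT mass: {w:,.0f} kg")
for r in (100, 250, 500):
    print(f"R = {r:3d} m  ->  Z = {scaled_distance(r, w):5.2f} m/kg^(1/3)")
```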

Relevance:

100.00%

Publisher:

Abstract:

This research explores Bayesian updating as a tool for estimating parameters probabilistically by dynamic analysis of data sequences. Two distinct Bayesian updating methodologies are assessed. The first approach focuses on Bayesian updating of failure rates for primary events in fault trees. A Poisson Exponentially Weighted Moving Average (PEWMA) model is implemented to carry out Bayesian updating of failure rates for individual primary events in the fault tree. To provide a basis for testing the PEWMA model, a fault tree is developed based on the Texas City Refinery incident, which occurred in 2005. A qualitative fault tree analysis is then carried out to obtain a logical expression for the top event. A dynamic fault tree analysis is carried out by evaluating the top event probability at each Bayesian updating step by Monte Carlo sampling from the posterior failure rate distributions. It is demonstrated that PEWMA modeling is advantageous over conventional conjugate Poisson-Gamma updating techniques when failure data are collected over long time spans. The second approach focuses on Bayesian updating of parameters in non-linear forward models. Specifically, the technique is applied to the hydrocarbon material balance equation. In order to test the accuracy of the implemented Bayesian updating models, a synthetic data set is developed using the Eclipse reservoir simulator. Both structured-grid and MCMC-sampling-based solution techniques are implemented and are shown to model the synthetic data set with good accuracy. Furthermore, a graphical analysis shows that the implemented MCMC model displays good convergence properties. A case study demonstrates that the likelihood variance affects the rate at which the posterior assimilates information from the measured data sequence. Error in the measured data significantly affects the accuracy of the posterior parameter distributions. Increasing the likelihood variance mitigates random measurement errors, but causes the overall variance of the posterior to increase. Bayesian updating is shown to be advantageous over deterministic regression techniques, as it allows for the incorporation of prior belief and full modeling of uncertainty over the parameter ranges. As such, the Bayesian approach to estimating parameters in the material balance equation shows utility for incorporation into reservoir engineering workflows.
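
As a point of reference for the PEWMA comparison, the conventional conjugate Poisson-Gamma update mentioned above is easy to state: with a Gamma(alpha, beta) prior on a failure rate lambda and k failures observed over an exposure time t, the posterior is Gamma(alpha + k, beta + t). The sketch below, with hypothetical prior parameters, failure counts and a two-event AND gate, also shows the Monte Carlo step of the dynamic fault tree analysis: sampling rates from the posteriors and propagating them to a top-event probability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Conjugate Gamma(alpha, beta) prior on the failure rate lambda (per year);
# hypothetical prior with mean alpha / beta = 0.1 failures per year.
alpha, beta = 1.0, 10.0

# Hypothetical yearly failure counts for one primary event.
for year, k in enumerate([0, 1, 0, 0, 2, 0, 1], start=1):
    alpha += k   # add observed failures
    beta += 1.0  # add one year of exposure
    print(f"year {year}: posterior mean rate = {alpha / beta:.3f} /yr")

# Monte Carlo step of the dynamic FTA: sample rates from the posteriors,
# convert to event probabilities over a mission time t, and evaluate a
# hypothetical two-event AND gate as the top-event expression.
t = 1.0
lam1 = rng.gamma(alpha, 1.0 / beta, size=10_000)   # updated primary event
lam2 = rng.gamma(2.0, 1.0 / 20.0, size=10_000)     # second primary event
p_top = np.mean((1 - np.exp(-lam1 * t)) * (1 - np.exp(-lam2 * t)))
print(f"Monte Carlo top-event probability: {p_top:.4f}")
```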

Relevance:

100.00%

Publisher:

Abstract:

High reliability of railway power systems is one of the essential criteria for ensuring the quality and cost-effectiveness of railway services. Evaluation of reliability at the system level is essential not only for scheduling maintenance activities but also for identifying reliability-critical components. Various methods to compute the reliability of individual components or regularly structured systems have been developed and proven effective. However, they are not adequate for evaluating complicated systems with numerous interconnected components, such as railway power systems, or for locating the reliability-critical components. Fault tree analysis (FTA) integrates the reliability of individual components into the overall system reliability through quantitative evaluation and identifies the critical components by minimum cut sets and sensitivity analysis. The paper presents the reliability evaluation of railway power systems by FTA and investigates the impact of maintenance activities on overall reliability. The applicability of the proposed methods is illustrated by case studies in AC railways.
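
A compact way to see how FTA "integrates the reliability of individual components into the overall system reliability" is the minimal-cut-set calculation sketched below. The component names, unavailabilities and cut sets are hypothetical; the first-order (rare-event) approximation and the Fussell-Vesely importance measure used for ranking are standard.

```python
# Rare-event (first-order) approximation of system unreliability from
# minimal cut sets, plus a Fussell-Vesely style importance ranking.
# Component unavailabilities below are hypothetical illustration values.
unavail = {"rectifier": 1e-3, "busbar": 5e-5, "breaker": 2e-4,
           "catenary": 8e-4, "scada": 3e-4}

# Hypothetical minimal cut sets: the system fails if every component in
# any one cut set is down simultaneously.
cut_sets = [{"rectifier"}, {"busbar"}, {"breaker", "catenary"},
            {"breaker", "scada"}]

def cut_set_prob(cs):
    p = 1.0
    for comp in cs:
        p *= unavail[comp]
    return p

# First-order approximation: P(top) ~= sum of cut-set probabilities,
# valid when individual unavailabilities are small.
p_top = sum(cut_set_prob(cs) for cs in cut_sets)

# Fussell-Vesely importance: fraction of system risk involving a component.
for comp in unavail:
    contrib = sum(cut_set_prob(cs) for cs in cut_sets if comp in cs)
    print(f"{comp:10s} FV importance = {contrib / p_top:.3f}")
print(f"System unreliability ~= {p_top:.3e}")
```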

Relevance:

100.00%

Publisher:

Abstract:

In this paper, fault tree analysis (FTA) is presented to model the reliability of a railway traction power system. First, the construction of the fault tree is introduced, integrating the components of the traction power system into a fault tree; then the binary decision diagram (BDD) method is used to evaluate the fault tree qualitatively and quantitatively. The components contributing to the reliability of the overall system are identified, with their relative importance, through sensitivity analysis. Finally, an AC traction power system is evaluated by the proposed methods.
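
A minimal sketch of the quantitative part: a BDD encodes the Shannon decomposition P(top) = p_x * P(top | x fails) + (1 - p_x) * P(top | x works) under a fixed variable ordering. The toy tree and probabilities below are hypothetical, and the code recurses over the decomposition directly rather than building an explicit reduced diagram.

```python
# Fault-tree quantification by Shannon decomposition over a fixed variable
# ordering -- the expansion that a BDD represents. Hypothetical values.
P = {"breaker": 0.01, "feeder": 0.005, "transformer": 0.002, "catenary": 0.008}
ORDER = sorted(P)  # fixed ordering, as a (RO)BDD requires

def evaluate(tree, assignment):
    """Evaluate an AND/OR fault tree under a partial assignment.
    Returns True/False if decided, or None if still open."""
    kind = tree[0]
    if kind == "basic":
        return assignment.get(tree[1])
    vals = [evaluate(sub, assignment) for sub in tree[1:]]
    if kind == "AND":
        if any(v is False for v in vals):
            return False
        if all(v is True for v in vals):
            return True
    else:  # "OR"
        if any(v is True for v in vals):
            return True
        if all(v is False for v in vals):
            return False
    return None

def top_probability(tree, assignment=None, depth=0):
    """Exact top-event probability: P = p * P(high) + (1 - p) * P(low)."""
    assignment = assignment or {}
    val = evaluate(tree, assignment)
    if val is not None:
        return 1.0 if val else 0.0
    x = ORDER[depth]
    hi = top_probability(tree, {**assignment, x: True}, depth + 1)
    lo = top_probability(tree, {**assignment, x: False}, depth + 1)
    return P[x] * hi + (1 - P[x]) * lo

# Hypothetical traction-power fault tree: the supply fails if the
# transformer fails, or the feeder fails together with its backup path.
tree = ("OR",
        ("basic", "transformer"),
        ("AND", ("basic", "feeder"),
                ("OR", ("basic", "breaker"), ("basic", "catenary"))))
print(f"Top-event probability: {top_probability(tree):.6e}")
```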

Relevance:

100.00%

Publisher:

Abstract:

The hazards associated with major accident hazard (MAH) industries are fire, explosion and toxic gas releases. Of these, toxic gas release is the worst as it has the potential to cause extensive fatalities. Qualitative and quantitative hazard analyses are essential for the identification and quantification of these hazards in chemical industries. Fault tree analysis (FTA) is an established technique in hazard identification. This technique has the advantage of being both qualitative and quantitative, if the probabilities and frequencies of the basic events are known. This paper outlines the estimation of the probability of a release of chlorine from the storage and filling facility of a chlor-alkali industry using FTA. An attempt has also been made to arrive at the probability of chlorine release using expert elicitation and a proven fuzzy logic technique for Indian conditions. Sensitivity analysis has been done to evaluate the percentage contribution of each basic event that could lead to chlorine release. Two-dimensional fuzzy fault tree analysis (TDFFTA) has been proposed for balancing the hesitation factor involved in expert elicitation.
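
The fuzzy part of the approach can be sketched with triangular fuzzy numbers (a, m, b) standing in for crisp basic-event probabilities, so that expert uncertainty widens into the (a, b) support. The gate rules below use the common component-wise approximation for triangular numbers, with hypothetical numbers for two chlorine-release basic events; the paper's TDFFTA extension, which carries an additional hesitation component, is not reproduced here.

```python
# Fuzzy fault-tree gates on triangular fuzzy numbers (a, m, b), a <= m <= b.

def fuzzy_and(events):
    """AND gate: product of probabilities, component-wise approximation."""
    a = m = b = 1.0
    for ea, em, eb in events:
        a, m, b = a * ea, m * em, b * eb
    return (a, m, b)

def fuzzy_or(events):
    """OR gate: 1 - prod(1 - p); survival products keep the TFN ordered."""
    s_lo = s_mid = s_hi = 1.0
    for ea, em, eb in events:
        s_hi *= 1 - ea   # survival is largest where p is smallest
        s_mid *= 1 - em
        s_lo *= 1 - eb
    return (1 - s_hi, 1 - s_mid, 1 - s_lo)

def defuzzify(tfn):
    """Centroid defuzzification of a triangular fuzzy number."""
    return sum(tfn) / 3.0

# Hypothetical expert estimates for two basic events (per demand):
valve_leak = (1e-4, 5e-4, 1e-3)
gasket_fail = (5e-5, 2e-4, 6e-4)

release = fuzzy_or([valve_leak, gasket_fail])
print("release TFN:", release, " crisp:", f"{defuzzify(release):.2e}")
```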

Relevance:

100.00%

Publisher:

Abstract:

Microstructural heterogeneity and stress fluctuation play important roles in the failure process of brittle materials. In this paper, a generalized driven nonlinear threshold model with stress fluctuation is presented to study the effects of microstructural heterogeneity on continuum damage evolution. As an illustration, the failure process of a cement material under explosive loading is analyzed using the model. The result agrees well with the experimental one, demonstrating the validity of the model.
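
Purely as an illustration of the ingredients named above (heterogeneous thresholds plus stress fluctuation driving damage growth), the sketch below evolves the failed fraction D of an ensemble of mesoscopic elements. The Weibull strength disorder, the noise level and the loading ramp are all assumed; this is a generic threshold model, not the paper's specific formulation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Generic driven threshold model: N mesoscopic elements with random
# strength thresholds; an element fails once its fluctuating local
# stress exceeds its threshold. Damage D = failed fraction.
N = 100_000
thresholds = rng.weibull(3.0, N)          # heterogeneous strengths
failed = np.zeros(N, dtype=bool)

for s in np.linspace(0.0, 1.0, 11):       # quasi-static loading ramp
    local = s * (1.0 + 0.1 * rng.standard_normal(N))  # stress fluctuation
    failed |= local > thresholds          # failure is irreversible
    print(f"stress {s:.1f}: D = {failed.mean():.4f}")
```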

Relevance:

100.00%

Publisher:

Abstract:

Retrospective clinical datasets are often characterized by a relatively small sample size and many missing data. In this case, a common way of handling the missingness consists of discarding patients with missing covariates from the analysis, further reducing the sample size. Alternatively, if the mechanism that generated the missingness allows, incomplete data can be imputed on the basis of the observed data, avoiding the reduction of the sample size and allowing complete-data methods to be applied afterwards. Moreover, methodologies for data imputation may depend on the particular purpose and may achieve better results by considering specific characteristics of the domain. The problem of missing data treatment is studied in the context of survival tree analysis for the estimation of a prognostic patient stratification. Survival tree methods usually address this problem by using surrogate splits, that is, splitting rules that use other variables yielding results similar to the original ones. Instead, our methodology consists of modeling the dependencies among the clinical variables with a Bayesian network, which is then used to perform data imputation, thus allowing the survival tree to be applied to the completed dataset. The Bayesian network is learned directly from the incomplete data using a structural expectation-maximization (EM) procedure in which the maximization step is performed with an exact anytime method, so that the only source of approximation is the EM formulation itself. On both simulated and real data, our proposed methodology usually outperformed several existing methods for data imputation, and the imputation so obtained improved the stratification estimated by the survival tree (especially with respect to using surrogate splits).
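
A toy version of the pipeline (learn dependencies from incomplete data with EM, then impute, then hand the completed data to the downstream learner) is sketched below for the smallest possible "network", a single arc A -> B between binary variables. The data, missingness rate and MAR assumption are all hypothetical; the paper's structural EM over a full Bayesian network is far more general.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy BN imputation: binary A -> B, with P(B=1 | A) learned by EM from
# incomplete data, then missing B imputed by its posterior mode.
n = 1000
A = rng.integers(0, 2, n)
B = (rng.random(n) < np.where(A == 1, 0.8, 0.2)).astype(float)
B[rng.random(n) < 0.3] = np.nan          # 30% missing, assumed MAR

theta = np.array([0.5, 0.5])             # initial P(B=1 | A=a), a = 0, 1
for _ in range(50):
    # E-step: replace missing B by its expectation under current theta
    Bhat = np.where(np.isnan(B), theta[A], B)
    # M-step: re-estimate P(B=1 | A=a) from the completed data
    theta = np.array([Bhat[A == a].mean() for a in (0, 1)])

B_imputed = np.where(np.isnan(B), (theta[A] > 0.5).astype(float), B)
print("learned P(B=1 | A):", np.round(theta, 3))
```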

Relevance:

100.00%

Publisher:

Abstract:

There are several versions of the lognormal distribution in the statistical literature; one is based on the exponential transformation of the generalized normal distribution (GN). This paper presents the Bayesian analysis of the generalized lognormal distribution (logGN), considering independent non-informative Jeffreys priors for the parameters, as well as the procedure for implementing the Gibbs sampler to obtain the posterior distributions of the parameters. The results are used to analyze failure time models with right-censored and uncensored data. The proposed method is illustrated using real failure time data for computers.
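
For the special case of the ordinary lognormal with uncensored data, the Gibbs sampler takes a particularly clean form under the Jeffreys prior p(mu, sigma^2) proportional to 1/sigma^2: both full conditionals are standard distributions. The sketch below uses simulated failure times rather than the computer dataset from the paper, and omits the extra shape parameter of the logGN and the right-censoring machinery.

```python
import numpy as np

rng = np.random.default_rng(1)

# Gibbs sampler for the ordinary lognormal under the Jeffreys prior
# p(mu, sigma^2) ~ 1/sigma^2, on simulated uncensored failure times.
y = np.log(rng.lognormal(mean=3.0, sigma=0.5, size=50))  # log failure times
n, ybar = len(y), y.mean()

mu, sig2 = ybar, y.var()
mu_draws, sig2_draws = [], []
for _ in range(5000):
    # mu | sigma^2, y  ~  Normal(ybar, sigma^2 / n)
    mu = rng.normal(ybar, np.sqrt(sig2 / n))
    # sigma^2 | mu, y  ~  Inverse-Gamma(n/2, sum((y - mu)^2) / 2)
    sig2 = 1.0 / rng.gamma(n / 2.0, 2.0 / np.sum((y - mu) ** 2))
    mu_draws.append(mu)
    sig2_draws.append(sig2)

burn = 1000  # discard burn-in draws
print(f"posterior mean of mu:      {np.mean(mu_draws[burn:]):.3f}")
print(f"posterior mean of sigma^2: {np.mean(sig2_draws[burn:]):.3f}")
```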

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND: The beneficial effects of beta-blockers and aldosterone receptor antagonists are now well established in patients with severe systolic chronic heart failure (CHF). However, it is unclear whether beta-blockers are able to provide additional benefit in patients already receiving aldosterone antagonists. We therefore examined this question in the COPERNICUS study of 2289 patients with severe CHF receiving the beta1-beta2/alpha1 blocker carvedilol compared with placebo. METHODS: Patients were divided post hoc into subgroups according to whether they were receiving spironolactone (n = 445) or not (n = 1844) at baseline. Consistency of the effect of carvedilol versus placebo was examined for these subgroups with respect to the predefined end points of all-cause mortality, death or CHF-related hospitalizations, death or cardiovascular hospitalizations, and death or all-cause hospitalizations. RESULTS: The beneficial effect of carvedilol was similar among patients who were or were not receiving spironolactone for each of the 4 efficacy measures. For all-cause mortality, the Cox model hazard ratio for carvedilol compared with placebo was 0.65 (95% CI 0.36-1.15) in patients receiving spironolactone and 0.65 (0.51-0.83) in patients not receiving spironolactone. Hazard ratios for death or all-cause hospitalization were 0.76 (0.55-1.05) versus 0.76 (0.66-0.88); for death or cardiovascular hospitalization, 0.61 (0.42-0.89) versus 0.75 (0.64-0.88); and for death or CHF hospitalization, 0.63 (0.43-0.94) versus 0.70 (0.59-0.84), in patients receiving and not receiving spironolactone, respectively. The safety and tolerability of treatment with carvedilol were also similar, regardless of background spironolactone. CONCLUSION: Carvedilol remained clinically efficacious in the COPERNICUS study of patients with severe CHF when added to background spironolactone in patients who were practically all receiving angiotensin-converting enzyme inhibitor (or angiotensin II antagonist) therapy. Therefore, the use of spironolactone in patients with severe CHF does not obviate the necessity of additional treatment that interferes with the adverse effects of sympathetic activation, specifically beta-blockade.
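
The subgroup-consistency check reported above can be imitated on simulated data: under an exponential survival model, the hazard ratio within each stratum is simply the ratio of event rates (events per person-time). Everything below is simulated with an assumed true HR of 0.65 in both strata; none of the numbers are COPERNICUS data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated severe-CHF cohort: exponential event times, treatment HR 0.65,
# ~20% on background spironolactone, administrative censoring at 3 years.
n = 20_000
treat = rng.integers(0, 2, n)                 # 1 = carvedilol, 0 = placebo
spiro = rng.random(n) < 0.2                   # background spironolactone
base = 0.10                                    # placebo hazard per year
time = rng.exponential(1.0 / (base * np.where(treat == 1, 0.65, 1.0)))
event = time < 3.0
time = np.minimum(time, 3.0)

# Within each stratum, HR = (events / person-time, treated) /
#                           (events / person-time, control).
for s in (True, False):
    rates = []
    for t in (1, 0):
        m = (spiro == s) & (treat == t)
        rates.append(event[m].sum() / time[m].sum())
    print(f"spironolactone={s}: HR ~ {rates[0] / rates[1]:.2f}")
```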

Relevance:

100.00%

Publisher:

Abstract:

This work presents a systematic method for the generation and treatment of alarm graphs, its final objective being to find the root-cause alarm behind the floods of alarms produced in dispatching centers. Although much work on this topic has already been done, the problem of alarm management in industry remains largely unsolved. In this paper, a simple statistical analysis of the historical database is conducted. The results obtained from the alarm acquisition systems are used to generate a directed graph from which the most significant alarms are extracted, after first analyzing the cases in which a large quantity of alarms is produced.
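
A minimal version of the described flow (count which alarm follows which in the historical log, build a directed graph, and rank alarms by how many others they precede) might look like the sketch below; the log, the alarm tags and the out-degree ranking heuristic are hypothetical stand-ins for the paper's statistical analysis.

```python
from collections import Counter, defaultdict

# Hypothetical alarm history (chronological tags from the acquisition system).
history = ["A12", "B03", "B03", "C77", "A12", "B03", "C77", "D01"]

# Directed co-occurrence graph: edge X -> Y counts "Y followed X".
edges = Counter(zip(history, history[1:]))
graph = defaultdict(dict)
for (src, dst), w in edges.items():
    if src != dst:                       # ignore self-repeats (chattering)
        graph[src][dst] = w

# Rank by weighted out-degree: an alarm that repeatedly precedes many
# others is proposed as a root-cause candidate for the alarm flood.
fanout = {a: sum(d.values()) for a, d in graph.items()}
for alarm, score in sorted(fanout.items(), key=lambda kv: -kv[1]):
    print(f"{alarm}: fanout={score}, successors={graph[alarm]}")
```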

Relevance:

100.00%

Publisher:

Abstract:

Phylogenetic analyses are increasingly used in attempts to clarify transmission patterns of human immunodeficiency virus type 1 (HIV-1), but there is a continuing discussion about their validity because convergent evolution and transmission of minor HIV variants may obscure epidemiological patterns. Here we have studied a unique HIV-1 transmission cluster consisting of nine infected individuals, for whom the time and direction of each virus transmission were exactly known. Most of the transmissions occurred between 1981 and 1983, and a total of 13 blood samples were obtained approximately 2-12 years later. The p17 gag and env V3 regions of the HIV-1 genome were directly sequenced from uncultured lymphocytes. A true phylogenetic tree was constructed based on knowledge of when the transmissions had occurred and when the samples were obtained. This complex, known HIV-1 transmission history was compared with reconstructed molecular trees, which were calculated from the DNA sequences by several commonly used phylogenetic inference methods [Fitch-Margoliash, neighbor-joining, minimum-evolution, maximum-likelihood, maximum-parsimony, unweighted pair group method using arithmetic averages (UPGMA), and a Fitch-Margoliash method assuming a molecular clock (KITSCH)]. A majority of the reconstructed trees were good estimates of the true phylogeny; 12 of 13 taxa were correctly positioned in the most accurate trees. The choice of gene fragment was found to be more important than the choice of phylogenetic method and substitution model. However, methods that are sensitive to unequal rates of change, such as UPGMA and KITSCH, which assume a constant molecular clock, performed more poorly. The rapidly evolving V3 fragment gave better reconstructions than p17, but a combined data set of both p17 and V3 performed best. The accuracy of the phylogenetic methods justifies their use in HIV-1 research and argues against convergent evolution and selective transmission of certain virus variants.
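
Of the methods compared, UPGMA is the simplest to write down, and it also illustrates why clock-sensitive methods fared worse: it assumes ultrametric (clock-like) distances by construction. The sketch below runs size-weighted average-linkage clustering on a small hypothetical distance matrix; a real analysis would start from aligned p17/V3 sequences and corrected pairwise distances.

```python
# UPGMA (average-linkage) tree building on a hypothetical distance matrix.
labels = ["pat1", "pat2", "pat3", "pat4"]
D = [[0.00, 0.10, 0.30, 0.32],
     [0.10, 0.00, 0.28, 0.30],
     [0.30, 0.28, 0.00, 0.12],
     [0.32, 0.30, 0.12, 0.00]]

clusters = {i: (labels[i], 1) for i in range(len(labels))}  # (newick, size)
dist = {frozenset((i, j)): D[i][j]
        for i in range(len(labels)) for j in range(i + 1, len(labels))}
next_id = len(labels)

while len(clusters) > 1:
    pair = min(dist, key=dist.get)       # closest pair of clusters
    i, j = tuple(pair)
    dist.pop(pair)
    (si, ni), (sj, nj) = clusters.pop(i), clusters.pop(j)
    for k in list(clusters):
        # size-weighted average linkage: the defining UPGMA update
        dik, djk = dist.pop(frozenset((i, k))), dist.pop(frozenset((j, k)))
        dist[frozenset((next_id, k))] = (ni * dik + nj * djk) / (ni + nj)
    clusters[next_id] = (f"({si},{sj})", ni + nj)
    next_id += 1

print("UPGMA tree:", next(iter(clusters.values()))[0] + ";")
```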