81 results for Failure Probability
at Indian Institute of Science - Bangalore - India
Abstract:
We reconsider standard uniaxial fatigue test data obtained from handbooks. Many S-N curve fits to such data represent the median life and exclude load-dependent variance in life. Presently available approaches for incorporating probabilistic aspects explicitly within the S-N curves have some shortcomings, which we discuss. We propose a new linear S-N fit with a prespecified failure probability, load-dependent variance, and reasonable behavior at extreme loads. We fit our parameters using maximum likelihood, show the reasonableness of the fit using Q-Q plots, and obtain standard error estimates via Monte Carlo simulations. The proposed fitting method may be used for obtaining S-N curves from the same data as already available, with the same mathematical form, but in cases in which the failure probability is smaller, say, 10 % instead of 50 %, and in which the fitted line is not parallel to the 50 % (median) line.
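A rough sketch of this kind of fit, under assumptions of our own (log-life normally distributed about a linear S-N line, with a scatter term that itself depends on load); the data, parameter names and the exp-linear form of sigma(S) are illustrative, not the authors' exact formulation:

    # Hypothetical sketch: maximum-likelihood fit of a linear S-N curve with
    # load-dependent scatter, then a 10% failure-probability line.
    # Assumed model: log10(N) ~ Normal(mu(S), sigma(S)),
    #   mu(S) = a + b*S,  sigma(S) = exp(c + d*S)
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    S = np.array([400., 350., 300., 250., 200.])   # stress amplitude, MPa (made up)
    logN = np.array([4.7, 5.2, 5.9, 6.5, 7.3])     # log10 cycles to failure (made up)

    def neg_log_lik(theta):
        a, b, c, d = theta
        mu = a + b * S
        sigma = np.exp(c + d * S)
        return -np.sum(norm.logpdf(logN, loc=mu, scale=sigma))

    fit = minimize(neg_log_lik, x0=[9.0, -0.01, -1.0, 0.0], method="Nelder-Mead")
    a, b, c, d = fit.x

    def life_quantile(stress, p=0.10):
        """Log-life exceeded with probability 1-p, i.e. the p failure-probability line."""
        return a + b * stress + norm.ppf(p) * np.exp(c + d * stress)

    print(life_quantile(300.0))   # 10% line; not parallel to the median because d != 0

Because sigma(S) varies with load, the 10% line sits a load-dependent distance below the median, which is exactly the non-parallel behaviour the abstract describes.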
Abstract:
The move towards IT outsourcing is the first step towards an environment where compute infrastructure is treated as a service. In utility computing, this IT service has to honor Service Level Agreements (SLAs) in order to meet the desired Quality of Service (QoS) guarantees. Such an environment requires reliable services in order to maximize the utilization of resources and to decrease the Total Cost of Ownership (TCO). Such reliability cannot come at the cost of resource duplication, since duplication increases the TCO of the data center and hence the cost per compute unit. In this paper, we look into projecting the impact of hardware failures on SLAs and the techniques required to take proactive recovery steps in the case of a predicted failure. By maintaining health vectors of all hardware and system resources, we predict the failure probability of resources at runtime, based on observed hardware error/failure events. This in turn influences an availability-aware middleware to take proactive action (even before the application is affected, in case the system and the application have low recoverability). The proposed framework has been prototyped on a system running HP-UX. Our offline analysis of the prediction system on hardware error logs indicates no more than 10% false positives. To the best of our knowledge, this work is the first of its kind to perform an end-to-end analysis of the impact of a hardware fault on application SLAs in a live system.
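A toy sketch, not the HP-UX prototype, of how observed hardware error events might be mapped to a runtime failure probability that triggers proactive recovery; the event names, weights and threshold are invented for illustration:

    # Toy sketch (not the paper's prototype): map a resource's recent hardware
    # error counts ("health vector") to a failure probability with a logistic
    # model, and flag resources for proactive recovery above a threshold.
    # The weights, bias and threshold below are illustrative assumptions.
    import math

    WEIGHTS = {"correctable_mem": 0.05, "disk_retry": 0.08, "cpu_mce": 0.40}
    BIAS = -4.0
    ACTION_THRESHOLD = 0.6   # predicted failure probability that triggers recovery

    def failure_probability(health_vector):
        """health_vector: dict of error-event counts observed in the last window."""
        score = BIAS + sum(WEIGHTS.get(k, 0.0) * v for k, v in health_vector.items())
        return 1.0 / (1.0 + math.exp(-score))

    def needs_proactive_recovery(health_vector):
        return failure_probability(health_vector) >= ACTION_THRESHOLD

    print(needs_proactive_recovery({"correctable_mem": 30, "cpu_mce": 8}))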
Abstract:
A reliable method for service life estimation of a structural element is a prerequisite for service life design. A new methodology for durability-based service life estimation of reinforced concrete flexural elements with respect to chloride-induced corrosion of reinforcement is proposed. The methodology takes into consideration the fuzzy and random uncertainties associated with the variables involved in service life estimation by using a hybrid method combining the vertex method of fuzzy set theory with the Monte Carlo simulation technique. It is also shown how to determine the bounds for the characteristic value of failure probability from the resulting fuzzy set for failure probability with minimal computational effort. Using the methodology, the bounds for the characteristic value of failure probability for a reinforced concrete T-beam bridge girder have been determined. The service life of the structural element is determined by comparing the upper bound of the characteristic value of failure probability with the target failure probability. The methodology will be useful for durability-based service life design and also for making decisions regarding in-service inspections.
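A minimal sketch of the vertex-method plus Monte Carlo idea, with a generic stand-in limit state rather than the paper's chloride-ingress model; all intervals, distributions and numbers are assumptions:

    # Sketch: fuzzy inputs enter as [low, high] intervals at a chosen alpha-cut;
    # the failure probability is estimated by Monte Carlo at every vertex
    # (combination of interval endpoints), and the min/max over vertices give
    # the bounds on failure probability at that alpha-cut.
    import itertools
    import numpy as np

    rng = np.random.default_rng(0)
    N_SAMPLES = 100_000

    # Fuzzy variables at a given alpha-cut (illustrative intervals).
    fuzzy_intervals = {"cover_mm": (40.0, 50.0), "crit_chloride": (0.35, 0.45)}

    def pf_given(cover_mm, crit_chloride):
        """Monte Carlo estimate of P(failure) for fixed (crisp) fuzzy inputs.
        The limit state is a stand-in: attenuated surface chloride exceeding the
        critical content (all distributions are assumptions)."""
        surface_cl = rng.lognormal(mean=np.log(0.6), sigma=0.3, size=N_SAMPLES)
        attenuation = np.exp(-cover_mm / rng.normal(60.0, 10.0, size=N_SAMPLES))
        return np.mean(surface_cl * attenuation > crit_chloride)

    vertices = itertools.product(*fuzzy_intervals.values())
    pfs = [pf_given(*v) for v in vertices]
    print("failure probability bounds:", min(pfs), max(pfs))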
Abstract:
A new scheme for minimizing handover failure probability in mobile cellular communication systems is presented. The scheme involves a reassignment of priorities for handover requests enqueued in adjacent cells so as to release a channel for a handover request that is about to fail. Performance evaluation of the new scheme, carried out by computer simulation of a four-cell highway cellular system, has shown a considerable reduction in handover failure probability.
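A generic illustration only: deadline-driven reprioritisation of queued handover requests, so that the request closest to failing is served first. The published scheme reassigns priorities across adjacent cells and is more elaborate; the names and times below are invented:

    import heapq, itertools

    counter = itertools.count()          # tie-breaker for equal priorities

    def enqueue(queue, request_id, residual_time_s):
        """Smaller residual time (closer to failure) means higher priority."""
        heapq.heappush(queue, (residual_time_s, next(counter), request_id))

    queue = []
    enqueue(queue, "HO-17", residual_time_s=4.0)
    enqueue(queue, "HO-23", residual_time_s=1.2)   # about to fail -> served first
    enqueue(queue, "HO-41", residual_time_s=3.1)

    while queue:
        _, _, req = heapq.heappop(queue)
        print("assign next free channel to", req)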
Abstract:
The stochastic version of Pontryagin's maximum principle is applied to determine an optimal maintenance policy for equipment subject to random deterioration. The deterioration of the equipment with age is modelled as a random process. Next, the model is generalized to include random catastrophic failure of the equipment. The optimal maintenance policy is derived for two special probability distributions of the time to failure of the equipment, namely the exponential and Weibull distributions. Both the salvage value and the deterioration rate of the equipment are treated as state variables, and maintenance is treated as a control variable. The result is illustrated by an example.
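For reference, the two failure-time models named above correspond to the hazard (failure-rate) functions below; the symbols are the usual shape and scale parameters, not necessarily the paper's notation:

    h_{\text{exponential}}(t) = \lambda, \qquad
    h_{\text{Weibull}}(t) = \frac{\beta}{\eta}\left(\frac{t}{\eta}\right)^{\beta - 1},
    \quad \beta, \eta > 0.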
Abstract:
A link failure in the path of a virtual circuit in a packet data network will lead to premature disconnection of the circuit by the end-points. A soft failure will result in degraded throughput over the virtual circuit. If these failures can be detected quickly and reliably, then appropriate rerouting strategies can automatically reroute the virtual circuits that use the failed facility. In this paper, we develop a methodology for analysing and designing failure detection schemes for digital facilities. Based on errored-second data, we develop a Markov model for the error and failure behaviour of a T1 trunk. The performance of a detection scheme is characterized by its false alarm probability and its detection delay. Using the Markov model, we analyse the performance of detection schemes that use physical layer or link layer information. The schemes basically rely upon detecting the occurrence of severely errored seconds (SESs). A failure is declared when a counter, driven by the occurrence of SESs, reaches a certain threshold. For hard failures, the design problem reduces to a proper choice of the threshold at which failure is declared, and of the connection reattempt parameters of the virtual circuit end-point session recovery procedures. For soft failures, the performance of a detection scheme depends, in addition, on how long and how frequent the error bursts are in a given failure mode. We also propose and analyse a novel Level 2 detection scheme that relies only upon anomalies observable at Level 2, i.e. CRC failures and idle-fill flag errors. Our results suggest that Level 2 schemes that perform as well as Level 1 schemes are possible.
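An illustrative sketch of the counter-threshold idea, with invented per-second SES probabilities and one common counter rule (increment on an SES, decrement otherwise) assumed; the paper's Markov error model and parameters are not reproduced here:

    # A facility in a given state (normal or failed) emits SESs each second with
    # the state's probability; a leaky-bucket counter (+1 on an SES, -1 floored
    # at 0 otherwise) declares failure when it reaches a threshold. Simulation
    # then gives a false-alarm estimate and the detection delay.
    import random

    P_SES = {"normal": 0.001, "failed": 0.6}   # per-second SES probability (assumed)
    THRESHOLD = 5

    def seconds_to_declare(state, horizon=3600, seed=None):
        """Second at which failure is declared, or None within the horizon."""
        rng = random.Random(seed)
        counter = 0
        for t in range(horizon):
            if rng.random() < P_SES[state]:
                counter += 1
            else:
                counter = max(0, counter - 1)
            if counter >= THRESHOLD:
                return t
        return None

    trials = 2000
    false_alarms = sum(seconds_to_declare("normal", seed=i) is not None
                       for i in range(trials))
    delays = [seconds_to_declare("failed", seed=i) for i in range(trials)]
    delays = [d for d in delays if d is not None]
    print("false alarm probability (per hour):", false_alarms / trials)
    print("mean detection delay (s):", sum(delays) / len(delays))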
Abstract:
Optimal maintenance policies for a machine whose performance degrades with age and which is subject to failure are derived using optimal control theory. The optimal policies are shown to be, normally, of a bang-coast nature, except when the probability of machine failure is a function of maintenance. It is also shown, in the deterministic case, that a higher depreciation rate tends to reverse this policy to coast-bang. When the probability of failure is a function of maintenance, considerable computational effort is needed to obtain an optimal policy, and the resulting policy is not easily implementable. For this case too, an optimal policy within the class of bang-coast policies is derived using a semi-Markov decision model. A simple procedure for modifying the probability of machine failure with maintenance is employed. The results obtained extend and unify recent results for this problem along both theoretical and practical lines. Numerical examples are presented to illustrate the results obtained.
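As the term is normally used, a bang-coast policy applies the maximum maintenance rate up to a switching time and then the minimum (often zero) rate for the rest of the horizon; stated here only to fix ideas, with u the maintenance rate, t_s the switching time and T the planning horizon:

    u^{*}(t) =
    \begin{cases}
      u_{\max}, & 0 \le t \le t_s, \\
      u_{\min}, & t_s < t \le T.
    \end{cases}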
Abstract:
An expression is derived for the probability that the determinant of an n × n matrix over a finite field vanishes; from this it is deduced that, for a fixed field, this probability tends to 1 as n tends to infinity.
Abstract:
The statistical minimum risk pattern recognition problem, when the classification costs are random variables of unknown statistics, is considered. Using medical diagnosis as a possible application, the problem of learning the optimal decision scheme is studied for a two-class, two-action case as a first step. This reduces to the problem of learning the optimum threshold (for taking the appropriate action) on the a posteriori probability of one class. A recursive procedure for updating an estimate of the threshold is proposed. The estimation procedure does not require knowledge of the actual class labels of the sample patterns in the design set. The adaptive scheme of using the present threshold estimate for taking action on the next sample is shown to converge, in probability, to the optimum. The results of a computer simulation study of three learning schemes demonstrate the theoretically predictable salient features of the adaptive scheme.
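For orientation, when the mean costs are known the optimum threshold has the standard Bayes minimum-risk form below, with \bar c_{ij} the expected cost of taking action a_i when the true class is \omega_j and a wrong action assumed costlier than the right one; the subscript convention is the usual one, not necessarily the paper's, and the recursive procedure effectively learns this threshold from observed costs:

    \text{take action } a_1 \text{ when } P(\omega_1 \mid x) \ge t^{*},
    \qquad
    t^{*} = \frac{\bar c_{12} - \bar c_{22}}
                 {(\bar c_{12} - \bar c_{22}) + (\bar c_{21} - \bar c_{11})}.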
Abstract:
A quantitative expression has been obtained for the equivalent resistance of an internal short in rechargeable cells under constant voltage charging.
Abstract:
The behavior of pile foundations in non-liquefiable soil under seismic loading is considerably influenced by the variability in the soil and seismic design parameters. Hence, probabilistic models for the assessment of seismic pile design are necessary. Deformation of a pile foundation in non-liquefiable soil is dominated by the inertial force from the superstructure. The present study considers a pseudo-static approach based on code-specified design response spectra. The response of the pile is determined by the equivalent cantilever approach. The soil medium is modeled as a one-dimensional random field along the depth. The variability associated with the undrained shear strength, the design response spectrum ordinate, and the superstructure mass is taken into consideration. The Monte Carlo simulation technique is adopted to determine the probability of failure and reliability indices based on the pile failure modes, namely, exceedance of the lateral displacement limit and of the moment capacity. A reliability-based design approach for the free-head pile under seismic force is suggested that enables a rational choice of pile design parameters.
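An illustrative Monte Carlo reliability sketch under assumptions of our own: all numbers and the simplified cantilever relations are stand-ins, not the paper's model, but the structure (sample the uncertain inputs, check the two failure modes, convert Pf to a reliability index) is the one described above:

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(1)
    N = 200_000

    m  = rng.normal(50_000, 5_000, N)          # superstructure mass, kg (assumed)
    Sa = rng.lognormal(np.log(3.0), 0.3, N)    # spectral acceleration, m/s^2 (assumed)
    su = rng.lognormal(np.log(60e3), 0.25, N)  # undrained shear strength, Pa (assumed)

    EI = 2.0e9                                 # pile flexural rigidity, N*m^2 (assumed)
    Le = 3.0 + 3.0e5 / su                      # effective cantilever length, m (stand-in)
    F  = m * Sa                                # pseudo-static inertial force, N

    disp   = F * Le**3 / (3.0 * EI)            # cantilever tip displacement, m
    moment = F * Le                            # maximum bending moment, N*m

    failed = (disp > 0.05) | (moment > 2.5e6)  # displacement and moment limits (assumed)
    pf = failed.mean()
    print("Pf =", pf, " beta =", -norm.ppf(pf))   # beta = -Phi^{-1}(Pf)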
Abstract:
The stress corrosion cracking (SCC) characteristics of α-titanium sheets in a bromine-methanol solution have been studied in the annealed and cold-rolled conditions using longitudinal and transverse specimens. The times to failure for annealed longitudinal specimens were longer than those for similarly tested transverse specimens. The cold-rolled specimens developed resistance to SCC, but failed by cleavage when notched, unlike the intergranular separation in annealed titanium. The apparent activation energy was found to be texture dependent and was in the range 30 to 51 kJ mol⁻¹ for annealed titanium, and 15 kJ mol⁻¹ for cold-rolled titanium. The dependence of SCC behaviour on the texture is related to the changes in the crack initiation times. These are caused by changes in the passivation and repassivation characteristics of the particular thickness plane. The thickness planes are identified with the help of X-ray pole figures obtained on annealed and cold-rolled material. On the basis of the activation energy and the electrochemical measurements, the mechanism of SCC in annealed titanium is identified to be one involving stress-aided anodic dissolution. On the other hand, the results on the cold-rolled titanium support the hydrogen embrittlement mechanism involving hydride precipitation. The cleavage planes identified from the texture data match the reported habit planes for hydride formation.
Abstract:
Optimal bang-coast maintenance policies for a machine subject to failure are considered. The approach utilizes a semi-Markov model for the system. A simplified model for modifying the probability of machine failure with maintenance is employed. A numerical example is presented to illustrate the procedure and results.
Abstract:
Hydrologic impacts of climate change are usually assessed by downscaling the General Circulation Model (GCM) output of large-scale climate variables to local-scale hydrologic variables. Such an assessment is characterized by uncertainty resulting from the ensembles of projections generated with multiple GCMs, which is known as intermodel or GCM uncertainty. Ensemble averaging with the assignment of weights to GCMs based on model evaluation is one method of addressing such uncertainty and is used in the present study for regional-scale impact assessment. GCM outputs of large-scale climate variables are downscaled to subdivisional-scale monsoon rainfall. Weights are assigned to the GCMs on the basis of model performance and model convergence, which are evaluated with the Cumulative Distribution Functions (CDFs) generated from the downscaled GCM output (for both the 20th Century [20C3M] and future scenarios) and observed data. The ensemble averaging approach, with the assignment of weights to GCMs, is characterized by the uncertainty caused by partial ignorance, which stems from the non-availability of the outputs of some of the GCMs for a few scenarios (in the Intergovernmental Panel on Climate Change [IPCC] data distribution center for Assessment Report 4 [AR4]). This uncertainty is modeled with imprecise probability, i.e., the probability is represented as an interval gray number. Furthermore, the CDF generated with one GCM is entirely different from that generated with another, and therefore the use of multiple GCMs results in a band of CDFs. Representing this band of CDFs with a single-valued weighted mean CDF may be misleading. Such a band of CDFs can only be represented with an envelope that contains all the CDFs generated with the available GCMs. An imprecise CDF represents such an envelope, which not only contains the CDFs generated with all the available GCMs but also, to an extent, accounts for the uncertainty resulting from the missing GCM output. This concept of imprecise probability is also validated in the present study. The imprecise CDFs of monsoon rainfall are derived for three 30-year time slices, the 2020s, 2050s and 2080s, under the A1B, A2 and B1 scenarios. The model is demonstrated with the prediction of monsoon rainfall in the Orissa meteorological subdivision, which shows a possible decreasing trend in the future.
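A sketch of the ensemble-CDF idea with synthetic data and made-up weights (the GCM names, values and grid are illustrative, and the simple lower/upper envelope here is only a crude stand-in for the paper's imprecise CDF):

    import numpy as np

    rng = np.random.default_rng(42)
    gcm_rainfall = {                       # mm/season, synthetic "downscaled" projections
        "GCM_A": rng.normal(1150, 90, 300),
        "GCM_B": rng.normal(1100, 120, 300),
        "GCM_C": rng.normal(1200, 70, 300),
    }
    weights = {"GCM_A": 0.5, "GCM_B": 0.2, "GCM_C": 0.3}   # from model skill (assumed)

    grid = np.linspace(800, 1500, 141)     # rainfall values at which CDFs are evaluated

    def ecdf(samples, x):
        """Empirical CDF of `samples` evaluated at the points in `x`."""
        return np.searchsorted(np.sort(samples), x, side="right") / len(samples)

    cdfs = {name: ecdf(vals, grid) for name, vals in gcm_rainfall.items()}
    weighted_cdf = sum(weights[n] * cdfs[n] for n in cdfs)   # single weighted mean CDF
    lower_env = np.minimum.reduce(list(cdfs.values()))       # envelope of the CDF band
    upper_env = np.maximum.reduce(list(cdfs.values()))

    # e.g. probability that seasonal rainfall stays below 1000 mm, as an interval
    i = np.searchsorted(grid, 1000.0)
    print("P(rain < 1000 mm):", (lower_env[i], upper_env[i]), "weighted:", weighted_cdf[i])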