984 results for Reliability (Engineering)
Abstract:
Many systems may lie dormant for a long period during which they are not operated. For example, most building services products are installed while a building is constructed, but they are not operated until the building is commissioned. Warranty terms for such products may cover the time from their installation to the end of their warranty periods, so prior to the commissioning of the building the products are protected by warranty even though they are not operating. Developing optimal burn-in policies for such products is therefore important when warranty cost is analysed. This paper considers two burn-in policies that incur different burn-in costs and have different burn-in effects on the products. A special case of the relationship between the failure rates of the products in the dormant state and in the operating state is presented. Numerical examples compare the mean total warranty costs of the two burn-in policies.
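The burn-in trade-off can be sketched numerically. The model below is a hedged illustration only: it assumes minimal-repair (NHPP) failures with a Weibull intensity and shape below 1 (infant mortality, which is what makes burn-in pay off), a dormant period with a reduced failure intensity, and invented cost figures; the paper's actual policies and cost structure may differ.

```python
# Hedged illustration: compare mean warranty costs of two hypothetical
# burn-in policies. All parameters are assumptions, not the paper's.

def mean_failures(t0, t1, beta=0.8, eta=5000.0):
    """Expected failures of a minimal-repair (NHPP) Weibull process on [t0, t1]."""
    return (t1 / eta) ** beta - (t0 / eta) ** beta

def warranty_cost(burn_in, c_burn_per_hr, c_repair=500.0,
                  dormant_factor=0.2, dormant_len=1000.0, warranty_len=8000.0):
    # Burn-in "ages" the unit by burn_in hours before fielding, lowering
    # the subsequent intensity because the failure rate is decreasing.
    t0 = burn_in
    dormant = dormant_factor * mean_failures(t0, t0 + dormant_len)
    operating = mean_failures(t0 + dormant_len, t0 + warranty_len)
    return c_burn_per_hr * burn_in + c_repair * (dormant + operating)

cost_a = warranty_cost(burn_in=100.0, c_burn_per_hr=0.05)  # short, cheap burn-in
cost_b = warranty_cost(burn_in=500.0, c_burn_per_hr=0.08)  # long, dearer burn-in
```

Sweeping `burn_in` over a grid would recover the cost-optimal policy for this toy model.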
Abstract:
Importance measures in reliability engineering are used to identify weak areas of a system and signify the roles of components in either causing or contributing to proper functioning of the system. Traditional importance measures for multistate systems mainly concern reliability importance of an individual component and seldom consider the utility performance of the systems. This paper extends the joint importance concepts of two components from the binary system case to the multistate system case. A joint structural importance and a joint reliability importance are defined on the basis of the performance utility of the system. The joint structural importance measures the relationship of two components when the reliabilities of components are not available. The joint reliability importance is inferred when the reliabilities of the components are given. The properties of the importance measures are also investigated. A case study for an offshore electrical power generation system is given.
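In the binary-system case, the joint reliability importance (JRI) of components i and j is the second mixed difference of system reliability in their component reliabilities. A minimal sketch, using a made-up three-component series-parallel structure rather than the paper's offshore system:

```python
# Joint reliability importance via state enumeration for a small binary
# system. The structure function below is an invented example.
from itertools import product

def system_reliability(p, structure):
    """Exact reliability by enumerating component states (0 = failed, 1 = working)."""
    r = 0.0
    for state in product([0, 1], repeat=len(p)):
        prob = 1.0
        for works, pi in zip(state, p):
            prob *= pi if works else (1.0 - pi)
        r += prob * structure(state)
    return r

# Example structure: component 0 in series with the parallel pair (1, 2).
phi = lambda s: s[0] * (1 - (1 - s[1]) * (1 - s[2]))

def jri(p, i, j, structure):
    """JRI(i,j) = R(i=1,j=1) - R(i=1,j=0) - R(i=0,j=1) + R(i=0,j=0)."""
    def fix(vi, vj):
        q = list(p)
        q[i], q[j] = vi, vj
        return system_reliability(q, structure)
    return fix(1, 1) - fix(1, 0) - fix(0, 1) + fix(0, 0)

p = [0.9, 0.8, 0.7]
print(jri(p, 1, 2, phi))  # prints -0.9: negative, as parallel components substitute for each other
```

A negative JRI means one component matters less when the other works, which is characteristic of redundant pairs.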
Abstract:
In real-world environments it is usually difficult to specify the quality of a preventive maintenance (PM) action precisely. This uncertainty makes it problematic to optimise the maintenance policy. This problem is tackled in this paper by assuming that the quality of a PM action is a random variable following a probability distribution. Two frequently studied PM models, a failure rate PM model and an age reduction PM model, are investigated. The optimal PM policies are derived, and numerical examples are given.
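To make the idea concrete, here is a hedged sketch of one textbook-style age-reduction variant, not necessarily the paper's exact formulation: PM every T hours brings the unit back to a steady-state virtual age a = T(1-q)/q, failures between PMs are minimal repairs under a Weibull hazard, and the PM quality q is a Beta-distributed random variable, so the cost rate is averaged over it. All distributions and cost values below are invented.

```python
import random

def cum_hazard(t, beta=2.5, eta=1000.0):
    """Weibull cumulative hazard H(t) = (t / eta) ** beta."""
    return (t / eta) ** beta

def cost_rate(T, q, c_pm=50.0, c_repair=400.0):
    a = T * (1.0 - q) / q                         # steady-state virtual age after PM
    failures = cum_hazard(a + T) - cum_hazard(a)  # expected minimal repairs per cycle
    return (c_pm + c_repair * failures) / T

def expected_cost_rate(T, n=5000, seed=1):
    rng = random.Random(seed)
    # Assumed PM-quality distribution: Beta(5, 2), mean about 0.71.
    return sum(cost_rate(T, rng.betavariate(5, 2)) for _ in range(n)) / n

# Crude grid search for the PM interval minimising the expected cost rate.
best_T = min(range(100, 1001, 50), key=expected_cost_rate)
```

With random quality the optimum shifts relative to the deterministic-quality policy, which is exactly the effect the paper studies.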
Abstract:
This paper addresses the need for accurate predictions of the fault inflow, i.e. the number of faults found in consecutive project weeks, in highly iterative processes. In such processes, in contrast to waterfall-like processes, fault repair and development of new features run almost in parallel. Given accurate predictions of fault inflow, managers could dynamically re-allocate resources between these different tasks in a more adequate way. Furthermore, managers could react with process improvements when the expected fault inflow is higher than desired. This study suggests software reliability growth models (SRGMs) for predicting fault inflow. Although these models were originally developed for traditional processes, their performance in highly iterative processes is investigated here. Additionally, a simple linear model is developed and compared to the SRGMs. The paper provides results from applying these models to fault data from three different industrial projects. One of the key findings is that some SRGMs are applicable for predicting fault inflow in highly iterative processes. Moreover, the results show that the simple linear model is a valid alternative to the SRGMs, as it provides reasonably accurate predictions and performs better in many cases.
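The "simple linear model" idea can be sketched as ordinary least squares on weekly fault counts; the data below are invented for illustration, not the paper's project data.

```python
def fit_line(xs, ys):
    """Ordinary least squares fit of y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

weeks  = [1, 2, 3, 4, 5, 6, 7, 8]
faults = [12, 15, 11, 14, 10, 9, 8, 7]   # hypothetical weekly fault inflow
slope, intercept = fit_line(weeks, faults)
forecast_week9 = slope * 9 + intercept   # predicted fault inflow for next week
```

A manager would refit the line each week as new counts arrive; SRGMs would instead fit a parametric reliability-growth curve to the same series.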
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Traditionally, an X̄ chart is used to control the process mean and an R chart is used to control the process variance. However, these charts are not sensitive to small changes in the process parameters. The adaptive X̄ and R charts might be considered if the aim is to detect small disturbances. Due to the statistical character of the joint X̄ and R charts with fixed or adaptive parameters, they are not reliable in identifying the nature of the disturbance, whether it is one that shifts the process mean, increases the process variance, or leads to a combination of both effects. In practice, the speed with which the control charts detect process changes may be more important than their ability to identify the nature of the change. Under these circumstances, it seems advantageous to consider a single chart, based on only one statistic, to simultaneously monitor the process mean and variance. In this paper, we propose the adaptive non-central chi-square statistic chart. This new chart is more effective than the adaptive X̄ and R charts in detecting disturbances that shift the process mean, increase the process variance, or lead to a combination of both effects. Copyright (c) 2006 John Wiley & Sons, Ltd.
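The single-statistic idea can be illustrated as follows: each sample of size n is summarised by T = Σ((Xi − μ0 + ξσ0)/σ0)², which follows a non-central chi-square distribution in control; the offset ξ makes T grow for mean shifts in either direction as well as for variance increases. The sample values and the control limit below are illustrative, not an optimised design from the paper.

```python
def t_statistic(sample, mu0=0.0, sigma0=1.0, offset=1.0):
    # offset (the xi constant) shifts the statistic so that upward and
    # downward mean shifts both inflate T.
    return sum(((x - mu0 + offset * sigma0) / sigma0) ** 2 for x in sample)

UCL = 16.75  # illustrative; in practice a non-central chi-square quantile

in_control = [0.1, -0.3, 0.2, -0.1, 0.0]
shifted    = [1.2, 0.9, 1.4, 1.1, 0.8]    # mean shifted by about +1 sigma

signals = [t_statistic(s) > UCL for s in (in_control, shifted)]  # [False, True]
```

Because one upper limit covers both shift directions and variance increases, only a single chart has to be maintained.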
Abstract:
When joint X̄ and R charts are in use, samples of fixed size are regularly taken from the process, and their means and ranges are plotted on the X̄ and R charts, respectively. In this article, joint X̄ and R charts are used for monitoring continuous production processes. The sampling is performed in two stages. During the first stage, one item of the sample is inspected and, depending on the result, the sampling is interrupted if the process is found to be in control; otherwise, it goes on to the second stage, where the remaining sample items are inspected. The two-stage sampling procedure speeds up the detection of process disturbances. The proposed joint X̄ and R charts are easier to administer and more efficient than joint X̄ and R charts with variable sample size, where the quality characteristic of interest can be evaluated either by attribute or by variable. Copyright (C) 2004 John Wiley & Sons, Ltd.
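The two-stage rule can be sketched directly: inspect one item first; if it falls in a central region, skip the rest of the sample, otherwise inspect the remaining items and check the sample mean and range against the limits. All limits below are illustrative choices for n = 5, not the article's optimised design.

```python
import random, statistics

def two_stage_sample(rng, n=5, warn=1.0, mu=0.0, sigma=1.0):
    first = rng.gauss(mu, sigma)
    if abs(first - mu) <= warn * sigma:    # stage 1: central region -> stop early
        return [first], False
    rest = [rng.gauss(mu, sigma) for _ in range(n - 1)]
    sample = [first] + rest                # stage 2: inspect the full sample
    xbar = statistics.mean(sample)
    r = max(sample) - min(sample)
    signal = abs(xbar - mu) > 1.34 * sigma or r > 4.92 * sigma  # illustrative limits
    return sample, signal

rng = random.Random(7)
avg_inspected = sum(len(two_stage_sample(rng)[0]) for _ in range(10000)) / 10000
# In control, the average number of items inspected stays well below n = 5.
```

The saving in inspected items is what makes the scheme easier to administer than variable-sample-size charts.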
Abstract:
The VSS X̄ chart, dedicated to the detection of small to moderate mean shifts in the process, has been investigated by several researchers under the assumption of known process parameters. In practice, however, the process parameters are rarely known and are usually estimated from an in-control Phase I data set. In this paper, we evaluate the run-length performance of the VSS X̄ chart when the process parameters are estimated, compare it with the case where the process parameters are assumed known, and propose specific optimal control chart parameters that take the number of Phase I samples into account.
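The VSS (variable sample size) rule and its run length can be sketched by simulation: the next sample is small after a central-region point and large after a warning-region point, and a point beyond the control limit signals. The design constants below are illustrative; under estimated parameters, μ0 and σ0 would be replaced by Phase I estimates, which inflates the run-length variability the paper studies.

```python
import random, statistics

def run_length(rng, shift=0.0, n1=3, n2=12, w=1.0, k=3.0):
    """Number of samples until the standardised sample mean exceeds +/- k."""
    n = n1
    t = 0
    while True:
        t += 1
        z = sum(rng.gauss(shift, 1.0) for _ in range(n)) / n ** 0.5
        if abs(z) > k:
            return t
        # VSS rule: switch to the large sample after a warning-zone point.
        n = n2 if abs(z) > w else n1

rng = random.Random(42)
arl0 = statistics.mean(run_length(rng) for _ in range(1000))
# In control the ARL is about 370 for k = 3, whatever the sample sizes;
# the VSS benefit appears for shifted processes, where n2 detects faster.
```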
Abstract:
The steady-state average run length is used to measure the performance of the recently proposed synthetic double sampling X̄ chart (synthetic DS chart). The overall performance of the DS X̄ chart in signaling process mean shifts of different magnitudes does not improve when it is integrated with the conforming run length chart, except when the integrated charts are designed to offer very high protection against false alarms and the use of large samples is prohibitive. The synthetic chart signals when a second point falls beyond the control limits, regardless of whether one point falls above the centerline and the other below it; with the side-sensitive feature, the synthetic chart does not signal when the two points fall on opposite sides of the centerline. We also investigated the steady-state average run length of the side-sensitive synthetic DS X̄ chart. With the side-sensitive feature, the overall performance of the synthetic DS X̄ chart improves, but not enough to outperform the non-synthetic DS X̄ chart. Copyright (C) 2014 John Wiley & Sons, Ltd.
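The side-sensitivity distinction can be captured in a few lines. This is a hedged sketch of the signalling rule only, with illustrative limits and conforming-run-length threshold L, not the chart's optimised design:

```python
# Synthetic signalling rule: a point beyond the control limits is
# "nonconforming"; the chart signals when two nonconforming points occur
# within L samples of each other. With side-sensitivity, the pair must
# also fall on the same side of the centerline.
def synthetic_signal(points, limit=3.0, L=5, side_sensitive=False):
    last = None                      # (index, side) of previous nonconforming point
    for i, x in enumerate(points):
        if abs(x) > limit:
            side = 1 if x > 0 else -1
            if last is not None and i - last[0] <= L:
                if not side_sensitive or side == last[1]:
                    return i         # signal at sample i
            last = (i, side)
    return None

pts = [0.2, 3.4, -3.6, 3.2, 3.5]
print(synthetic_signal(pts))                       # 2: opposite sides still count
print(synthetic_signal(pts, side_sensitive=True))  # 4: waits for a same-side pair
```

The delay visible in the side-sensitive run is the price paid for ignoring opposite-side pairs, which the abstract reports is offset by better overall performance.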
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Maritime accidents involving ships carrying passengers may pose a high risk with respect to human casualties. For effective risk mitigation, an insight into the process of risk escalation is needed, which calls for a proactive approach to risk modelling for maritime transportation systems. Most existing models are based on historical data on maritime accidents, and thus can be considered reactive rather than proactive. This paper introduces a systematic, transferable and proactive framework for estimating the risk of maritime transportation systems, meeting the requirements stemming from the adopted formal definition of risk. The framework focuses on ship-ship collisions in the open sea, with a RoRo/Passenger ship (RoPax) considered as the struck ship. First, it identifies the events that follow a collision between two ships in the open sea; second, it evaluates the probabilities of these events, concluding by determining the severity of a collision. The risk framework is developed with the use of Bayesian Belief Networks and utilizes a set of analytical methods for the estimation of the risk model parameters. The model can be run with the GeNIe software package. Finally, a case study is presented in which the framework is applied to a maritime transportation system operating in the Gulf of Finland (GoF). The results are compared to the historical data and to available models in which a RoPax was involved in a collision, and good agreement with the available records is found.
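The chain-rule arithmetic behind such a Bayesian network can be illustrated with a toy fragment of an escalation chain; all probabilities below are invented, not the GoF case-study values.

```python
# Toy escalation chain: collision -> hull breach -> flooding -> ship loss.
# Marginalising over the breach node follows the law of total probability,
# which is the core computation a BBN engine performs at scale.
p_breach          = 0.35   # P(hull breach | collision), assumed
p_flood_breach    = 0.40   # P(flooding | breach), assumed
p_flood_no_breach = 0.01   # P(flooding | no breach), assumed
p_loss_flood      = 0.15   # P(ship loss | flooding), assumed

p_flood = p_breach * p_flood_breach + (1 - p_breach) * p_flood_no_breach
p_loss  = p_flood * p_loss_flood
```

A real network additionally conditions these probabilities on striking-ship size, speed, and impact location, which is where the analytical parameter-estimation methods come in.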
Abstract:
We consider a wide class of models that includes the highly reliable Markovian systems (HRMS) often used to represent the evolution of multi-component systems in reliability settings. Repair times and component lifetimes are random variables that follow a general distribution, and the repair service adopts a priority repair rule based on system failure risk. Since crude simulation has proved to be inefficient for highly dependable systems, the RESTART method is used for the estimation of steady-state unavailability and other reliability measures. In this method, a number of simulation retrials are performed when the process enters regions of the state space where the chance of occurrence of a rare event (e.g., a system failure) is higher. The main difficulty involved in applying this method is finding a suitable function, called the importance function, to define the regions. In this paper we introduce an importance function which, for unbalanced systems, represents a great improvement over the importance function used in previous papers. We also demonstrate the asymptotic optimality of RESTART estimators in these models. Several examples are presented to show the effectiveness of the new approach, and probabilities up to the order of 10^-42 are accurately estimated with little computational effort.
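The splitting idea behind RESTART can be sketched on a toy model: estimate the probability that a down-biased random walk started at 1 reaches level N before 0, one level crossing at a time. This fixed-effort variant is a simplification of RESTART (the importance function here is simply the walk's current level), and the walk itself is invented for illustration.

```python
import random

def reaches_next_level(rng, start, up_prob):
    """From `start`, does the walk hit start+1 before 0?"""
    x = start
    while 0 < x <= start:
        x += 1 if rng.random() < up_prob else -1
    return x == start + 1

def splitting_estimate(N=10, up_prob=0.3, trials_per_level=5000, seed=3):
    rng = random.Random(seed)
    p = 1.0
    for level in range(1, N):
        # Estimate one level-crossing probability at a time; the product
        # over levels is the rare-event probability, by the Markov property.
        hits = sum(reaches_next_level(rng, level, up_prob)
                   for _ in range(trials_per_level))
        p *= hits / trials_per_level
    return p

est = splitting_estimate()   # true value is about 2.8e-4 for these parameters
```

Naive simulation would need on the order of 1/p ≈ 3,600 runs per observed event, whereas each level crossing here has probability around 0.3–0.43 and is cheap to estimate; the same mechanism, with a well-chosen importance function, is what reaches probabilities of order 10^-42.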
Abstract:
Piotr Omenzetter and Simon Hoell’s work within the Lloyd’s Register Foundation Centre for Safety and Reliability Engineering at the University of Aberdeen is supported by Lloyd’s Register Foundation. The Foundation helps to protect life and property by supporting engineering-related education, public engagement and the application of research.