11 results for Reliability Analysis
at Cochin University of Science
Abstract:
In the present environment, industry must provide products of high quality. The quality of a product is judged by the period of time it can successfully perform its intended function without failure. The causes of failures can be ascertained through life testing experiments, and the times to failure due to different causes are likely to follow different distributions. Knowledge of these distributions is essential to eliminate the causes of failures and thereby improve the quality and reliability of products. The main accomplishment expected from the study is to develop statistical tools that facilitate the analysis of lifetime data arising in such and similar contexts.
Abstract:
Reliability analysis is a well-established branch of statistics that deals with the statistical study of different aspects of the lifetimes of a system of components. As pointed out earlier, the major part of the theory and applications of reliability analysis has been developed in terms of measures based on the distribution function. In the beginning chapters of the thesis, we describe some attractive features of quantile functions and the relevance of their use in reliability analysis. Motivated by the works of Parzen (1979), Freimer et al. (1988) and Gilchrist (2000), who indicated the scope of quantile functions in reliability analysis, and as a follow-up of the systematic study in this connection by Nair and Sankaran (2009), the present work extends their ideas to develop the necessary theoretical framework for lifetime data analysis. In Chapter 1, we give the relevance and scope of the study and a brief outline of the work carried out. Chapter 2 is devoted to the presentation of various concepts and brief reviews of them, which are useful for the discussions in the subsequent chapters. In the introduction of Chapter 4, we point out the role of ageing concepts in reliability analysis and in identifying life distributions. In Chapter 6, we study the first two L-moments of residual life and their relevance in various applications of reliability analysis. We show that the first L-moment of residual life is equivalent to the vitality function, which has been widely discussed in the literature. In Chapter 7, we define percentile residual life in reversed time (RPRL) and derive its relationship with the reversed hazard rate (RHR). We discuss the characterization problem of RPRL and demonstrate with an example that the RPRL at a given percentile does not determine the distribution uniquely.
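For reference, a minimal sketch of the quantile-based quantities this abstract works with, using standard definitions (the notation here is assumed, not quoted from the thesis):

\[
Q(u) = F^{-1}(u) = \inf\{x : F(x) \ge u\}, \qquad 0 \le u \le 1,
\]
\[
H(u) = \frac{1}{(1-u)\,q(u)}, \qquad q(u) = \frac{dQ(u)}{du} \quad \text{(hazard quantile function)},
\]
\[
V(t) = E(X \mid X > t) \quad \text{(vitality function)}.
\]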
Abstract:
Warships are generally sleek and slender, with V-shaped sections and a block coefficient below 0.5, compared to the fuller forms and higher values of commercial ships. They normally operate in the higher Froude number regime, and the hydrodynamic design is primarily aimed at achieving higher speeds with minimum power. The structural design and analysis methods therefore differ from those for commercial ships. Certain design guidelines are given in documents like the Naval Engineering Standards (NES), and one of the new developments in this regard is the introduction of classification society rules for the design of warships.

The marine environment imposes subjective and objective uncertainties on a ship structure. The uncertainties in loads, material properties, etc., make reliable prediction of ship structural response a difficult task. Strength, stiffness and durability criteria for warship structures can be established by investigations based on elastic analysis, ultimate strength analysis and reliability analysis. For the analysis of complicated warship structures, special means and valid approximations are required.

Preliminary structural design of a frigate-size ship has been carried out. A finite element model of the hold, representative of the complexities in the geometric configuration, has been created using the finite element software NISA. Two other models representing the geometry to a limited extent have also been created: one with two transverse frames and the attached plating along with the longitudinal members, and the other representing the plating and longitudinal stiffeners between two transverse frames. Linear static analysis of the three models has been carried out, each with three different boundary conditions. The structural responses have been checked for deflections and stresses against the permissible values, and the structure has been found adequate in all the cases. The stresses and deflections predicted by the frame model are comparable with those of the hold model, but no such agreement has been found between the interstiffener plating model and the other two models.

Progressive collapse analyses of the models have been conducted for the three boundary conditions, considering geometric nonlinearity and then combined geometric and material nonlinearity for the hold and frame models. The von Mises-Ilyushin yield criterion with an elastic-perfectly plastic stress-strain curve has been chosen. In each case, P-Delta curves have been generated and the ultimate load causing failure (ultimate load factor) has been identified as a multiple of the design load specified by NES.

Reliability analysis of the hull module under combined geometric and material nonlinearities has been conducted. Young's modulus and the shell thickness have been chosen as the random variables, and randomly generated values have been used in the analysis. The First Order Second Moment (FOSM) method has been used to predict the reliability index and, thereafter, the probability of failure. The values have been compared against standard values published in the literature.
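As an illustration of the final step, a minimal First Order Second Moment sketch for a simple two-variable limit state (the linear limit state g = R - S and all figures below are assumptions for illustration, not the thesis model):

```python
import math
from statistics import NormalDist

# Assumed limit state: g = R - S (capacity minus demand); failure when g < 0.
# Means and standard deviations are illustrative placeholders.
mu_R, sigma_R = 1.5, 0.15   # e.g. normalized ultimate load factor
mu_S, sigma_S = 1.0, 0.10   # e.g. normalized design load effect

# FOSM reliability index for independent, normally distributed variables
beta = (mu_R - mu_S) / math.sqrt(sigma_R**2 + sigma_S**2)

# Probability of failure from the standard normal CDF
p_f = NormalDist().cdf(-beta)

print(f"reliability index beta = {beta:.3f}, P_f = {p_f:.3e}")
```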
Abstract:
The service quality of any sector has two major aspects, namely technical and functional. Technical quality can be attained by maintaining the technical specifications decided by the organization. Functional quality refers to the manner in which the service is delivered to the customer, which can be assessed through customer feedback. A field survey was conducted based on the management tool SERVQUAL, by designing 28 constructs under 7 dimensions of service quality. Stratified sampling techniques were used to obtain 336 valid responses, and the gap scores between expectations and perceptions were analyzed using statistical techniques to identify the weakest dimension. To assess the technical aspects of availability, six months of live outage data of base transceiver stations were collected. Statistical and exploratory techniques were used to model the network performance. The failure patterns were modeled as competing risk models, and the probability distributions of service outages and restorations were parameterized. Since the availability of the network is a function of the reliability and maintainability of the network elements, any service provider who wishes to keep up their service level agreements on availability should be aware of the variability of these elements and the effects of their interactions. The availability variations were studied by designing a discrete event simulation model with probabilistic input parameters. The distribution parameters arrived at from the live data analysis were used to design experiments to define the availability domain of the network under consideration. The availability domain can be used as a reference for planning and implementing maintenance activities. A new metric is proposed which incorporates a consistency index along with key service parameters and can be used to compare the performance of different service providers. The developed tool can be used for reliability analysis of mobile communication systems and assumes greater significance in the wake of the mobile portability facility. It also makes possible a relative measure of the effectiveness of different service providers.
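A minimal sketch of the kind of availability simulation the abstract describes (exponential failure and restoration times are an assumption made here for illustration; the thesis parameterizes the distributions from live outage data):

```python
import random

def simulate_availability(mtbf, mttr, horizon, seed=0):
    """Alternating up/down renewal simulation of one network element.

    mtbf, mttr: mean time between failures / mean time to restore (hours).
    Exponential holding times are an illustrative assumption.
    """
    rng = random.Random(seed)
    t, uptime = 0.0, 0.0
    while t < horizon:
        up = rng.expovariate(1.0 / mtbf)       # time to next failure
        uptime += min(up, horizon - t)
        t += up
        if t >= horizon:
            break
        t += rng.expovariate(1.0 / mttr)       # restoration (downtime)
    return uptime / horizon

# Simulated availability should approach the steady-state value MTBF/(MTBF+MTTR)
print(simulate_availability(mtbf=500.0, mttr=4.0, horizon=100_000.0))
print(500.0 / (500.0 + 4.0))
```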
Abstract:
So far, in the bivariate setup, the analysis of lifetime (failure time) data with multiple causes of failure has been done by treating each cause of failure separately, with failures from other causes considered as independent censoring. This approach is unrealistic in many situations. For example, in the analysis of mortality data on married couples, one would be interested in comparing the hazards for the same cause of death as well as in checking whether death due to one cause is more important for the partner's risk of death from other causes. In reliability analysis, one often has systems with more than one component, and many systems, subsystems and components have more than one cause of failure. Design of high-reliability systems generally requires that the individual system components have extremely high reliability even after long periods of time. Knowledge of the failure behaviour of a component can lead to savings in its cost of production and maintenance and, in some cases, to the preservation of human life. For the purpose of improving reliability, it is necessary to identify the cause of failure down to the component level. By treating each cause of failure separately, with failures from other causes considered as independent censoring, the analysis of lifetime data would be incomplete. Motivated by this, we introduce a new approach for the analysis of bivariate competing risk data using the bivariate vector hazard rate of Johnson and Kotz (1975).
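For reference, the vector hazard rate of Johnson and Kotz (1975) for a bivariate survival function S(t_1, t_2) is commonly stated as

\[
\mathbf{h}(t_1, t_2) = \left( -\frac{\partial}{\partial t_1} \log S(t_1, t_2),\; -\frac{\partial}{\partial t_2} \log S(t_1, t_2) \right),
\]

each component being the instantaneous failure rate of one unit given that both have survived up to (t_1, t_2).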
Abstract:
In the present scenario of energy demand overtaking energy supply, top priority is given to energy conservation programs and policies. Most process plants are operated on a continuous basis and consume large quantities of energy. Efficient management of a process system can lead to energy savings, improved process efficiency, lower operating and maintenance costs, and greater environmental safety. Reliability and maintainability of the system are usually considered at the design stage and are dependent on the system configuration. However, with the growing need for energy conservation, most existing process systems are either modified or are in a state of modification with a view to improving energy efficiency. Often these modifications result in a change in system configuration, thereby affecting the system reliability. It is important that system modifications for improving energy efficiency should not come at the cost of reliability. Any new proposal for improving the energy efficiency of a process or equipment must prove itself economically feasible to gain acceptance for implementation. To arrive at the economic feasibility of a new proposal, the general practice is to compare the benefits that can be derived over the lifetime, as well as the operating and maintenance costs, with the investment to be made. Quite often the reliability aspects (or the loss due to unavailability) are not taken into consideration, although plant availability is a critical factor in the economic performance evaluation of any process plant.

The focus of the present work is to study the effect of system modification for improving energy efficiency on system reliability. A generalized model for the valuation of a process system incorporating reliability is developed and used as a tool for the analysis. It can provide an awareness of the potential performance improvements of the process system and can be used to arrive at the change in process system value resulting from system modification. The model also arrives at the payback of the modified system by taking reliability aspects into consideration, and it is used to study the effect of various operating parameters on system value. The concept of breakeven availability is introduced, and an algorithm for the allocation of component reliabilities of the modified process system based on the breakeven system availability is also developed. The model was applied to various industrial situations.
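A minimal sketch of a payback computation that credits benefits only for the time the system is available, in the spirit of the model described (the function, its parameters and the figures are illustrative assumptions, not the thesis formulation):

```python
def payback_with_availability(investment, annual_benefit, availability,
                              annual_om_cost, discount_rate):
    """Discounted payback period (years) of a system modification,
    scaling the annual benefit by the system availability."""
    net_annual = annual_benefit * availability - annual_om_cost
    cumulative, year = 0.0, 0
    while cumulative < investment:
        year += 1
        cumulative += net_annual / (1.0 + discount_rate) ** year
        if year > 100:
            return float("inf")    # modification never pays back
    return year

# e.g. a retrofit that is economical only while availability stays high
print(payback_with_availability(investment=1e6, annual_benefit=4e5,
                                availability=0.92, annual_om_cost=5e4,
                                discount_rate=0.10))
```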
Abstract:
Occupational stress is becoming a major issue in both the corporate and social agenda. In industrialized countries, there have been quite dramatic changes in the conditions at work during the last decade, caused by economic, social and technical development. As a consequence, people at work today are exposed to high quantitative and qualitative demands as well as hard competition caused by the global economy. A recent report says that ailments due to work-related stress are likely to cost India's exchequer around 72000 crores between 2009 and 2015. Though India is a fast-developing country, it is yet to create facilities to mitigate the adverse effects of work stress; moreover, only little effort has been made to assess work-related stress.

In the absence of well-defined standards to assess work-related stress in India, an attempt is made in this direction to develop factors for the evaluation of work stress. Accordingly, with the help of the existing literature and in consultation with safety experts, seven factors for the evaluation of work stress are developed. An instrument (questionnaire) was developed using these seven factors. The validity and unidimensionality of the questionnaire were ensured by confirmatory factor analysis, and its reliability was ensured before administration. While analyzing the relationship between the variables, it was noted that no relationship exists between them, and hence the above factors are treated as independent factors/variables for the purpose of the research.

Initially, five profit-making manufacturing industries under the public sector in the state of Kerala were selected for the study, and the influence of the factors responsible for work stress was analyzed in these industries. The industries were classified into two types, namely chemical and heavy engineering, based on the product manufactured and the work environment, and the analysis was carried out further for these two categories. The variation of work stress with the age, designation and experience of the employees was analyzed by means of one-way ANOVA. Further, three different types of work stress models, namely factor models, structural equation models and multinomial logistic regression models, were built to analyze the association of the factors responsible for work stress. All these models were found equally good in predicting work stress.

The present study indicates that work stress exists among the employees in public sector industries in Kerala. Employees belonging to the 40-45 years age group and the 15-20 years experience group had relatively higher work demand, low job control, and low support at work. Low job control was noted among lower designation levels, particularly at the worker level, in these industries. Hence the instrument developed using the seven factors, namely demand, control, manager support, peer support, relationship, role and change, can be effectively used for the evaluation of work stress in industries.
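A minimal sketch of the one-way ANOVA step (the group scores below are synthetic placeholders, not the study's data):

```python
from scipy import stats

# Hypothetical mean work-stress scores grouped by age band (illustrative only)
age_25_30 = [3.1, 2.8, 3.4, 3.0, 2.9]
age_40_45 = [3.9, 4.1, 3.7, 4.0, 3.8]
age_55_60 = [3.2, 3.0, 3.3, 2.9, 3.1]

# One-way ANOVA: does mean work stress differ across the age groups?
f_stat, p_value = stats.f_oneway(age_25_30, age_40_45, age_55_60)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```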
Abstract:
This thesis, entitled Reliability Modelling and Analysis in Discrete Time, presents some concepts and models useful in the analysis of discrete lifetime data. The present study consists of five chapters. In Chapter II we take up the derivation of some general results useful in reliability modelling that involve two-component mixtures. Expressions for the failure rate, mean residual life and second moment of residual life of the mixture distributions in terms of the corresponding quantities of the component distributions are investigated, and some applications of these results are pointed out. The role of the geometric, Waring and negative hypergeometric distributions as models of life lengths in the discrete time domain has been discussed already. While describing various reliability characteristics, it was found that they can often be considered as a class. The applicability of these models in single populations naturally extends to the case of populations composed of sub-populations, making mixtures of these distributions worth investigating. Accordingly, the general properties, various reliability characteristics and characterizations of these models are discussed in Chapter III. Inference of parameters in a mixture distribution is usually a difficult problem, because the mass function of the mixture is a linear function of the component masses, which makes manipulation of the likelihood equations, least-squares function, etc., and the resulting computations very difficult. We show that one of our characterizations helps in inferring the parameters of the geometric mixture without the computational hazards involved. As mentioned in the review of results in the previous sections, partial moments have not been studied extensively in the literature, especially in the case of discrete distributions. Chapters IV and V deal with descending and ascending partial factorial moments. Apart from studying their properties, we prove characterizations of distributions by functional forms of partial moments and establish recurrence relations between successive moments for some well-known families. It is further demonstrated that partial moments are equally efficient and convenient compared to many of the conventional tools in resolving practical problems in reliability modelling and analysis. The study concludes by indicating some new problems that surfaced during the course of the present investigation, which could be the subject of future work in this area.
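For reference, a sketch of the standard form such a two-component mixture result takes in discrete time, writing S(x) = P(X >= x) and h(x) = f(x)/S(x) (notation assumed):

\[
f(x) = p\,f_1(x) + (1-p)\,f_2(x) \;\Longrightarrow\; h(x) = w(x)\,h_1(x) + \bigl(1 - w(x)\bigr)\,h_2(x),
\qquad w(x) = \frac{p\,S_1(x)}{p\,S_1(x) + (1-p)\,S_2(x)},
\]

so the mixture failure rate is a weighted average of the component failure rates, with weights that evolve with x.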
Abstract:
In spite of the long-standing practice of technical analysis by many participants in the Indian stock market, none has arrived at the exact position of technical analysis as a tool for foretelling share prices. There is no evidence establishing its definite role in predicting the behaviour of share prices, nor of the extent of validity (how far reliable) of technical tools in the Indian stock market. The problem is the vacuum in the arena of securities market analysis, where an unrecognised tool is practised, i.e., whether to hold on to technical analysis or to drop it. Again, as already stated in this chapter, its validity need not continue forever; it may become futile, as happened in developed markets. Continuous practice of a tool that is valid only during discontinuous times is also an error. The efficacy of different market phenomena, in terms of their ability to foretell the extent and direction of price movements and their reliability, remains unproved. This requires further study so that the controversy may be settled. A solution to the problem requires enquiring into and establishing the applicability, if any, of technical analysis in the Indian stock market. The study has the following two broad objectives for the purpose of confirming this applicability. The first objective is to ascertain the current validity of the ‘traditional holding with respect to patterns’, and the second objective is to ascertain the ‘consistent superiority’, if any, of technical indicators over non-signal strategies in return generation. The study analyses five patterns which are widely known and commonly found in publications: (1) Symmetrical Triangles, (2) Rising Wedges, (3) Falling Wedges, (4) Head and Shoulders Top and (5) Head and Shoulders Bottom.
Abstract:
Partial moments are extensively used in the literature for the modeling and analysis of lifetime data. In this paper, we study the properties of partial moments using quantile functions. The quantile-based measure determines the underlying distribution uniquely. We then characterize certain lifetime quantile function models. The proposed measure provides alternate definitions for ageing criteria. Finally, we explore the utility of the measure in comparing the characteristics of two lifetime distributions.
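For reference, a sketch of how such a quantile-based partial moment is typically defined (notation assumed, not quoted from the paper): evaluating the r-th partial moment about the point t = Q(u) gives

\[
P_r(u) = E\bigl[\bigl(X - Q(u)\bigr)_+^{\,r}\bigr] = \int_u^1 \bigl(Q(p) - Q(u)\bigr)^r \, dp, \qquad 0 \le u \le 1,
\]

which depends on the distribution only through the quantile function Q.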
Abstract:
Refiners today operate their equipment for prolonged periods without shutdown, primarily due to increased market pressures resulting in extended shutdown-to-shutdown intervals. This places extreme demands on the reliability of the plant equipment. The traditional methods of reliability assurance, like Preventive Maintenance, Predictive Maintenance and Condition Based Maintenance, become inadequate in the face of such demands. The alternate approaches to reliability improvement being adopted the world over are the implementation of RCFA programs and Reliability Centered Maintenance (RCM). However, refiners and process plants find it difficult to adopt the standardized RCM methodology, mainly due to its complexity and the large amount of analysis that needs to be done, resulting in a long-drawn-out implementation requiring the services of a number of skilled people. This results in either an implementation restricted to only a few pieces of equipment or, alternately, one that is non-standard. The paper presents the current models in use, the core requirements of a standard RCM model, the limitations of the existing model, and the available alternatives to classical RCM, and then goes on to present an ‘Accelerated’ approach to RCM implementation that, while ensuring close conformance to the standard, does not place a large burden on the implementers.