13 results for SOFTWARE-RELIABILITY MODELS
at Cochin University of Science
Abstract:
Software systems are progressively being deployed in many facets of human life. The implications of the failure of such systems have a varied impact on their customers. The fundamental aspect that underpins a software system is a focus on quality. Reliability describes the ability of a system to function in a specified environment for a specified period of time, and it is used to measure quality objectively. Evaluating the reliability of a computing system involves computing both hardware and software reliability. Most earlier work focused on software reliability with no consideration for the hardware, or vice versa. However, a complete estimate of the reliability of a computing system requires these two elements to be considered together, and thus demands a combined approach. The present work focuses on this and presents a model for evaluating the reliability of a computing system. The method involves identifying failure data for hardware and software components and building a model on that data to predict reliability. To develop such a model, focus is given to systems based on Open Source Software, since there is an increasing trend towards its use and only a few studies have been reported on modelling and measuring the reliability of such products. The present work includes a thorough study of the role of Free and Open Source Software and an evaluation of reliability growth models, and presents an integrated model for predicting the reliability of a computational system. The developed model has been compared with existing models and its usefulness is discussed.
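As a purely illustrative aside on the combined hardware-plus-software view described above, the sketch below treats the system as a series combination of a hardware part with an assumed constant failure rate and a software part following a Goel-Okumoto reliability growth model. This is a minimal sketch under assumptions of our own, not the thesis's actual model; the function names and parameter values are hypothetical.

```python
import math

def hw_reliability(x, lam):
    # Hardware mission reliability over duration x, assuming an
    # exponential life model with constant failure rate lam.
    return math.exp(-lam * x)

def sw_reliability(x, t, a, b):
    # Software mission reliability over duration x after test time t,
    # assuming the Goel-Okumoto NHPP growth model m(u) = a*(1 - exp(-b*u)).
    m = lambda u: a * (1.0 - math.exp(-b * u))
    return math.exp(-(m(t + x) - m(t)))

def system_reliability(x, t, lam, a, b):
    # Series combination: the computing system survives the mission only
    # if both the hardware and the software survive it.
    return hw_reliability(x, lam) * sw_reliability(x, t, a, b)

# Hypothetical parameter values, for illustration only.
print(system_reliability(x=100.0, t=1000.0, lam=1e-4, a=120.0, b=0.002))
```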
Abstract:
This thesis is entitled Reliability Modelling and Analysis in Discrete Time: Some Concepts and Models Useful in the Analysis of Discrete Lifetime Data. The present study consists of five chapters. In Chapter II we take up the derivation of some general results useful in reliability modelling that involve two-component mixtures. Expressions for the failure rate, mean residual life and second moment of residual life of the mixture distributions, in terms of the corresponding quantities of the component distributions, are investigated. Some applications of these results are also pointed out. The role of the geometric, Waring and negative hypergeometric distributions as models of life lengths in the discrete time domain has already been discussed. While describing various reliability characteristics, it was found that they can often be considered as a class. The applicability of these models to single populations naturally extends to populations composed of sub-populations, making mixtures of these distributions worth investigating. Accordingly, the general properties, various reliability characteristics and characterizations of these models are discussed in Chapter III. Inference of parameters in a mixture distribution is usually a difficult problem, because the mass function of the mixture is a linear function of the component masses, which makes manipulation of the likelihood equations, least-squares function, etc., and the resulting computations very difficult. We show that one of our characterizations helps in inferring the parameters of the geometric mixture without involving computational hazards. As mentioned in the review of results in the previous sections, partial moments have not been studied extensively in the literature, especially in the case of discrete distributions. Chapters IV and V deal with descending and ascending partial factorial moments. Apart from studying their properties, we prove characterizations of distributions by functional forms of partial moments and establish recurrence relations between successive moments for some well-known families. It is further demonstrated that partial moments are as efficient and convenient as many of the conventional tools for resolving practical problems in reliability modelling and analysis. The study concludes by indicating some new problems that surfaced during the course of the present investigation, which could be the subject of future work in this area.
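For reference, the kind of mixture expression alluded to above can be written, in its standard textbook form, as follows; the thesis's own discrete-time expressions may use different notation.

```latex
% Failure rate of a two-component mixture with mixing weight p.
% With f(x) = p f_1(x) + (1-p) f_2(x) and S(x) = p S_1(x) + (1-p) S_2(x),
% the mixture failure rate is a weighted average of the component rates:
\[
  h(x) \;=\; \frac{f(x)}{S(x)}
        \;=\; \frac{p\,S_1(x)\,h_1(x) + (1-p)\,S_2(x)\,h_2(x)}
                   {p\,S_1(x) + (1-p)\,S_2(x)},
  \qquad h_i(x) = \frac{f_i(x)}{S_i(x)}.
\]
```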
Abstract:
The application of queueing theory in areas like computer networking, ATM facilities, telecommunications and numerous other situations has led people to study queueing models extensively, and it has become an ever-expanding branch of applied probability. The thesis discusses the reliability of a k-out-of-n system where the server also attends to external customers when there are no failed components (main customers), under a retrial policy, which is explained in detail. It studies the reliability of a k-out-of-n system where the server also attends to external customers, and a multi-server infinite-capacity queueing system where each customer arrives as an ordinary customer but may turn into a priority customer while waiting in the queue. The study gives details of a finite-capacity multi-server queueing system with self-generation of priority customers, and also of a single-server infinite-capacity retrial queue where a customer in the orbit can turn into a priority customer and leaves the system if the server is already busy with a priority-generated customer; otherwise he is taken into service immediately. The arrival process follows a Markovian arrival process (MAP) and service times follow a Markovian service process (MSP).
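As a baseline for the k-out-of-n structure mentioned above, the sketch below evaluates only the static combinatorial reliability of a k-out-of-n:G system with independent, identical components; the thesis itself studies the much richer dynamic model with repairs, retrials and MAP/MSP arrivals and services, which this snippet does not attempt to reproduce.

```python
from math import comb

def k_out_of_n_reliability(k, n, p):
    # Static reliability of a k-out-of-n:G system with n independent,
    # identical components each working with probability p: the system
    # is up when at least k of the n components are up.
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Illustrative values only.
print(k_out_of_n_reliability(k=2, n=4, p=0.9))
```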
Abstract:
This research was undertaken with the objective of studying software development project risk, risk management, project outcomes and their inter-relationships in the Indian context. Validated instruments were used to measure risk, risk management and project outcome in software development projects undertaken in India. A second-order factor model was developed for risk with five first-order factors. Risk management was also identified as a second-order construct with four first-order factors. These structures were validated using confirmatory factor analysis. Variation in risk across categories of selected organization/project characteristics was studied through a series of one-way ANOVA tests. A regression model was developed for each of the risk factors by linking it to risk management factors and project/organization characteristics. Similarly, regression models were developed for the project outcome measures, linking them to risk factors. Integrated models linking risk factors, risk management factors and project outcome measures were tested through structural equation modelling. The quality of the software developed was seen to have a positive relationship with risk management and a negative relationship with risk. The other outcome variables, namely time overrun and cost overrun, had a strong positive relationship with risk. Risk management did not have a direct effect on the overrun variables. Risk was seen to act as an intervening variable between risk management and the overrun variables.
Abstract:
Warships are generally sleek and slender, with V-shaped sections and block coefficients below 0.5, compared to the fuller forms and higher values of commercial ships. They normally operate in the higher Froude number regime, and the hydrodynamic design is primarily aimed at achieving higher speeds with minimum power. Therefore the structural design and analysis methods are different from those for commercial ships. Certain design guidelines have been given in documents like the Naval Engineering Standards, and one of the new developments in this regard is the introduction of classification society rules for the design of warships. The marine environment imposes subjective and objective uncertainties on a ship structure. The uncertainties in loads, material properties, etc., make reliable prediction of ship structural response a difficult task. Strength, stiffness and durability criteria for warship structures can be established by investigations based on elastic analysis, ultimate strength analysis and reliability analysis. For the analysis of complicated warship structures, special means and valid approximations are required. Preliminary structural design of a frigate-sized ship has been carried out. A finite element model of the hold, representative of the complexities in the geometric configuration, has been created using the finite element software NISA. Two other models representing the geometry to a limited extent have also been created: one with two transverse frames and the attached plating along with the longitudinal members, and the other representing the plating and longitudinal stiffeners between two transverse frames. Linear static analysis of the three models has been carried out, each with three different boundary conditions. The structural responses have been checked for deflections and stresses against the permissible values. The structure has been found adequate in all the cases. The stresses and deflections predicted by the frame model are comparable with those of the hold model, but no such comparison has been realized between the inter-stiffener plating model and the other two models. Progressive collapse analyses of the models have been conducted for the three boundary conditions, considering geometric nonlinearity and then combined geometric and material nonlinearity for the hold and frame models. The von Mises-Ilyushin yield criterion with an elastic-perfectly-plastic stress-strain curve has been chosen. In each case, P-Delta curves have been generated and the ultimate load causing failure (ultimate load factor) has been identified as a multiple of the design load specified by NES. Reliability analysis of the hull module under combined geometric and material nonlinearities has been conducted. The Young's modulus and the shell thickness have been chosen as the variables, and randomly generated values have been used in the analysis. The First Order Second Moment method has been used to predict the reliability index and, from it, the probability of failure. The values have been compared against standard values published in the literature.
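To make the last step concrete, the sketch below shows the usual First Order Second Moment calculation for a simple resistance-minus-load limit state; it assumes independent normal variables and hypothetical numbers, and is not a reproduction of the thesis's hull-module analysis.

```python
from math import sqrt
from statistics import NormalDist

def fosm_reliability(mu_r, sigma_r, mu_s, sigma_s):
    # First Order Second Moment estimate for the limit state g = R - S,
    # with independent resistance R and load effect S.  Returns the
    # reliability index beta and the failure probability Pf = Phi(-beta).
    beta = (mu_r - mu_s) / sqrt(sigma_r**2 + sigma_s**2)
    pf = NormalDist().cdf(-beta)
    return beta, pf

# Hypothetical numbers, not the thesis data.
print(fosm_reliability(mu_r=3.2, sigma_r=0.35, mu_s=1.0, sigma_s=0.25))
```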
Abstract:
The service quality of any sector has two major aspects, namely technical and functional. Technical quality can be attained by maintaining the technical specifications decided by the organization. Functional quality refers to the manner in which service is delivered to the customer, and it can be assessed from customer feedback. A field survey was conducted based on the management tool SERVQUAL, by designing 28 constructs under 7 dimensions of service quality. Stratified sampling techniques were used to obtain 336 valid responses, and the gap scores between expectations and perceptions were analysed using statistical techniques to identify the weakest dimension. To assess the technical aspect of availability, six months of live outage data from base transceiver stations were collected. Statistical and exploratory techniques were used to model the network performance. The failure patterns were modelled using competing-risk models, and the probability distributions of service outage and restoration were parameterized. Since the availability of the network is a function of the reliability and maintainability of the network elements, any service provider who wishes to keep up their service level agreements on availability should be aware of the variability of these elements and of its effects on their interactions. The availability variations were studied by designing a discrete-time event simulation model with probabilistic input parameters. The probability distribution parameters arrived at from the live data analysis were used to design experiments that define the availability domain of the network under consideration. The availability domain can be used as a reference for planning and implementing maintenance activities. A new metric is proposed which incorporates a consistency index along with key service parameters and which can be used to compare the performance of different service providers. The developed tool can be used for reliability analysis of mobile communication systems and assumes greater significance in the wake of the mobile portability facility. It is also possible to obtain a relative measure of the effectiveness of different service providers.
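As a rough illustration of how the up/down behaviour of a single network element can be simulated to study availability, the following Monte Carlo sketch assumes exponentially distributed up and down times; the thesis instead fits the actual outage and restoration distributions from live data, so the distributional choice and all numbers here are assumptions.

```python
import random

def simulate_availability(mtbf, mttr, horizon, runs=1000, seed=1):
    # Monte Carlo estimate of the availability of one network element over
    # a fixed horizon, assuming exponentially distributed up times (mean
    # mtbf) and restoration times (mean mttr).
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        t, up = 0.0, 0.0
        while t < horizon:
            u = rng.expovariate(1.0 / mtbf)   # time to the next failure
            d = rng.expovariate(1.0 / mttr)   # restoration time
            up += min(u, horizon - t)
            t += u + d
        total += up / horizon
    return total / runs

# Hypothetical figures; roughly six months expressed in hours.
print(simulate_availability(mtbf=500.0, mttr=4.0, horizon=24 * 182))
```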
Abstract:
So far, in the bivariate set-up, the analysis of lifetime (failure time) data with multiple causes of failure is done by treating each cause of failure separately, with failures from other causes considered as independent censoring. This approach is unrealistic in many situations. For example, in the analysis of mortality data on married couples one would be interested to compare the hazards for the same cause of death as well as to check whether death due to one cause is more important for the partners' risk of death from other causes. In reliability analysis, one often has systems with more than one component, and many systems, subsystems and components have more than one cause of failure. Design of high-reliability systems generally requires that the individual system components have extremely high reliability even after long periods of time. Knowledge of the failure behaviour of a component can lead to savings in its cost of production and maintenance and, in some cases, to the preservation of human life. For the purpose of improving reliability, it is necessary to identify the cause of failure down to the component level. By treating each cause of failure separately, with failures from other causes considered as independent censoring, the analysis of lifetime data would be incomplete. Motivated by this, we introduce a new approach for the analysis of bivariate competing risk data using the bivariate vector hazard rate of Johnson and Kotz (1975).
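For reference, the vector hazard rate of Johnson and Kotz (1975) referred to above is commonly stated as below; this is the standard definition recalled from the literature, quoted here only for orientation.

```latex
% Bivariate vector hazard rate of Johnson and Kotz (1975) for a lifetime
% pair (X_1, X_2) with joint survival function
% S(x_1, x_2) = P(X_1 > x_1, X_2 > x_2):
\[
  h(x_1, x_2) \;=\; \bigl(h_1(x_1, x_2),\, h_2(x_1, x_2)\bigr),
  \qquad
  h_i(x_1, x_2) \;=\; -\frac{\partial}{\partial x_i}\,\log S(x_1, x_2),
  \quad i = 1, 2.
\]
```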
Abstract:
The term reliability of an equipment or device is often meant to indicate the probability that it carries out the functions expected of it adequately, or without failure and within specified performance limits, at a given age for a desired mission time when put to use under the designated application and operating environmental stress. A broad classification of the approaches employed in reliability studies can be made into probabilistic and deterministic, where the main interest of the former is to devise tools and methods to identify the random mechanism governing the failure process through a proper statistical framework, while the latter addresses the question of finding the causes of failure and the steps to reduce individual failures, thereby enhancing reliability. In the probabilistic approach, to which the present study subscribes, the concept of a life distribution, a mathematical idealisation that describes the failure times, is fundamental, and a basic question a reliability analyst has to settle is the form of the life distribution. It is for no other reason that a major share of the literature on the mathematical theory of reliability is focussed on methods of arriving at reasonable models of failure times and on showing the failure patterns that induce such models. The application of the methodology of lifetime distributions is not confined to the assessment of the endurance of equipment and systems only, but ranges over a wide variety of scientific investigations where the word lifetime may not refer to the length of life in the literal sense, but can be conceived in its most general form as a non-negative random variable. Thus the tools developed in connection with modelling lifetime data have found applications in other areas of research such as actuarial science, engineering, biomedical sciences, economics, extreme value theory, etc.
Abstract:
Motivation for the speaker recognition work is presented in the first part of the thesis. An exhaustive survey of past work in this field is also presented. A low-cost system not requiring complex computation has been chosen for implementation. Towards achieving this, a PC-based system is designed and developed. A front-end analog-to-digital converter (12-bit) is built and interfaced to a PC. Software to control the ADC and to perform various analytical functions, including feature vector evaluation, is developed. It is shown that a fixed set of phrases incorporating evenly balanced phonemes is aptly suited for the speaker recognition work at hand. A set of phrases is chosen for recognition. Two new methods are adopted for the feature evaluation. Some new measurements, involving a symmetry-check method for pitch period detection and ACE, are used as features. Arguments are provided to show the need for a new model of speech production. Starting from heuristics, a knowledge-based (KB) speech production model is presented. In this model, a KB provides impulses to a voice-producing mechanism and constant correction is applied via a feedback path. It is this correction that differs from speaker to speaker. Methods of defining measurable parameters for use as features are described. Algorithms for speaker recognition are developed and implemented. Two methods are presented. The first is based on the model postulated; here the entropy of the utterance of a phoneme is evaluated, and the transitions of voiced regions are used as speaker-dependent features. The second method presented uses features found in other works, but evaluated differently. A knock-out scheme is used to provide the weightage values for the selection of features. Results of the implementation are presented, which show an average of 80% recognition. It is also shown that if there are long gaps between sessions, the performance deteriorates and is speaker dependent. Cross-recognition percentages are also presented; in the worst case this rises to 30%, while the best case is 0%. Suggestions for further work are given in the concluding chapter.
Abstract:
Recently, the reciprocal subtangent has been used as a useful tool to describe the behaviour of a density curve. Motivated by this, in the present article we extend the concept to weighted models. Characterization results are proved for several models, viz. the gamma, Rayleigh, equilibrium, residual lifetime, and proportional hazards models. An identity under the weighted distribution is also obtained when the reciprocal subtangent takes the form of a general class of distributions. Finally, an extension of the reciprocal subtangent for weighted models in the bivariate and multivariate cases is introduced and some useful results are proved.
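For orientation, the reciprocal subtangent of a density curve is usually defined as below, and for a weighted model the corresponding quantity follows by direct differentiation; this is background recalled from the general literature, not a statement of the article's own identities.

```latex
% Reciprocal subtangent of a density curve f at the point x:
\[
  \eta(x) \;=\; -\frac{f'(x)}{f(x)} \;=\; -\frac{d}{dx}\,\log f(x).
\]
% For a weighted model with weight function w, the weighted density is
% f_w(x) = w(x) f(x) / E[w(X)], so its reciprocal subtangent is
\[
  \eta_w(x) \;=\; \eta(x) \;-\; \frac{w'(x)}{w(x)}.
\]
```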
Abstract:
Partial moments are extensively used in the literature for the modelling and analysis of lifetime data. In this paper, we study properties of partial moments using quantile functions. The quantile-based measure determines the underlying distribution uniquely. We then characterize certain lifetime quantile function models. The proposed measure provides alternative definitions for ageing criteria. Finally, we explore the utility of the measure in comparing the characteristics of two lifetime distributions.
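For context, the r-th upper partial moment about a threshold t and its quantile-function form are commonly written as below; the paper's own quantile-based measure may be defined somewhat differently.

```latex
% r-th (upper) partial moment about t, and its quantile form, where
% Q = F^{-1} is the quantile function and u_0 = F(t):
\[
  P_r(t) \;=\; E\bigl[(X - t)_+^{\,r}\bigr]
         \;=\; \int_t^{\infty} (x - t)^r \, dF(x),
  \qquad
  P_r\bigl(Q(u_0)\bigr) \;=\; \int_{u_0}^{1} \bigl(Q(p) - Q(u_0)\bigr)^r \, dp.
\]
```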
Abstract:
Refiners today operate their equipment for prolonged periods without shutdown. This is primarily due to increased market pressures resulting in extended shutdown-to-shutdown intervals, which places extreme demands on the reliability of the plant equipment. The traditional methods of reliability assurance, like Preventive Maintenance, Predictive Maintenance and Condition Based Maintenance, become inadequate in the face of such demands. The alternative approaches to reliability improvement being adopted the world over are the implementation of RCFA programs and Reliability Centered Maintenance (RCM). However, refiners and process plants find it difficult to adopt this standardized RCM methodology, mainly due to its complexity and the large amount of analysis that needs to be done, resulting in a long-drawn-out implementation requiring the services of a number of skilled people. This results in an implementation that is either restricted to only a few items of equipment or, alternatively, one that is non-standard. The paper presents the current models in use, the core requirements of a standard RCM model, classical RCM and the available alternatives to it, and the limitations of the existing models, and then goes on to present an 'Accelerated' approach to RCM implementation that, while ensuring close conformance to the standard, does not place a large burden on the implementers.
Abstract:
The assessment of software maturity is an important area in the general software sector. The field of OSS also applies various models to measure software maturity. However, measuring the maturity of OSS used for various applications in libraries is an area in which no research has been done so far, and this study has attempted to fill that gap. Measuring the maturity of software contributes knowledge about its sustainability over the long term, and maturity is one of the factors that positively influence adoption. The investigator measured the maturity of the DSpace software using Woods and Guliani's Open Source Maturity Model (2005). The present study is significant as it addresses the maturity of OSS for libraries and fills the research gap in this area. In this sense the study opens new avenues in the field of library and information science by providing an additional tool for librarians in the selection and adoption of OSS. Measuring maturity brings in-depth knowledge of an OSS product, which will contribute towards the perceived usefulness and perceived ease of use explained in the Technology Acceptance Model theory.