979 results for Restraint System Failures.


Relevance:

40.00%

Publisher:

Abstract:

Queueing theory provides models, structural insights, problem solutions and algorithms for many application areas. Owing to its practical applicability to production, manufacturing, home automation, communications technology, etc., increasingly complex systems require ever more elaborate models, techniques and algorithms to be developed. Discrete-time models are very suitable in many situations, although a feature of discrete-time systems is that their analysis is technically more involved than that of their continuous-time counterparts. In this paper we consider a discrete-time queueing system in which server failures can occur, as well as priority messages. The possibility of server failures with a general lifetime distribution is considered. We carry out an extensive study of the system by computing generating functions for the steady-state distribution of the number of messages in the queue and in the system. We also obtain generating functions for the stationary distribution of the busy period and of the sojourn times of a message in the server and in the system. Performance measures of the system are also provided.
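As a hedged companion to the analytical treatment, the sketch below simulates a discrete-time queue with Bernoulli arrivals, geometric services and server breakdowns. The geometric (rather than general) failure and repair laws, and all parameter values, are illustrative assumptions, not the paper's exact model.

```python
import random

def simulate(p_arrive=0.2, p_serve=0.5, p_fail=0.01, p_repair=0.2,
             slots=100_000, seed=1):
    """Simulate a discrete-time single-server queue with server failures.

    Each slot: an operational server completes the head-of-line service
    with prob p_serve; a message arrives with prob p_arrive; the server
    breaks down with prob p_fail and, once down, is repaired with prob
    p_repair.  Returns the time-averaged number of messages present.
    """
    rng = random.Random(seed)
    queue = 0        # messages in the system
    up = True        # server operational?
    area = 0
    for _ in range(slots):
        if up and queue > 0 and rng.random() < p_serve:
            queue -= 1                      # service completion
        if rng.random() < p_arrive:
            queue += 1                      # arrival
        if up:
            up = rng.random() >= p_fail     # server may break down
        else:
            up = rng.random() < p_repair    # repair attempt
        area += queue
    return area / slots

mean_in_system = simulate()   # time-averaged number of messages
```

Raising the failure probability (or lowering the repair probability) lengthens the queue, which is the qualitative effect the analytical model quantifies exactly.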

Relevance:

30.00%

Publisher:

Abstract:

Engineering assets are often complex systems. In a complex system, components often have failure interactions, which lead to interactive failures. A system with interactive failures may have an increased failure probability. Hence, one may have to take interactive failures into account when designing and maintaining complex engineering systems. To address this issue, Sun et al. have developed an analytical model for interactive failures. In this model, the degree of interaction between two components is represented by interactive coefficients. To use this model for failure analysis, the related interactive coefficients must be estimated. However, methods for estimating these coefficients have not been reported. To fill this gap, this paper presents five methods to estimate the interactive coefficients: a probabilistic method, a failure-data-based analysis method, a laboratory experimental method, a failure-interaction-mechanism-based method, and an expert estimation method. Examples are given to demonstrate the applications of the proposed methods. Comparisons among these methods are also presented.
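The abstract does not reproduce the analytical model itself. As a hedged sketch, one common way to encode interactive coefficients is as an additive correction to the stand-alone hazard rates; the matrix form and the numbers below are illustrative assumptions, not Sun et al.'s exact formulation.

```python
import numpy as np

def effective_hazards(independent_hazards, theta):
    """Effective hazard rates under failure interaction.

    independent_hazards : stand-alone hazard rates lambda_i.
    theta : matrix of interactive coefficients; theta[i, j] is the degree
            to which component j's deterioration accelerates component i
            (0 = no interaction, 1 = full propagation).
    Sketch: effective lambda_i = lambda_i + sum_j theta[i, j] * lambda_j.
    """
    lam = np.asarray(independent_hazards, dtype=float)
    th = np.asarray(theta, dtype=float)
    return lam + th @ lam

# Two components: component 2's failures strongly influence component 1.
lam = [0.01, 0.02]
theta = [[0.0, 0.5],
         [0.1, 0.0]]
eff = effective_hazards(lam, theta)   # hazards inflated by interaction
```

Under this sketch, estimating the coefficients (the paper's topic) amounts to estimating the entries of `theta` from failure data, experiments, or expert judgment.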

Relevance:

30.00%

Publisher:

Abstract:

Modern society has come to expect electrical energy on demand, while many of the facilities in power systems are aging beyond repair and maintenance. The risk of failure increases as equipment ages, and failures can have serious consequences for the continuity of electricity supply. As the equipment used in high-voltage power networks is very expensive, it may not be economically feasible to purchase spares and store them in a warehouse for extended periods of time. On the other hand, there is normally a significant lead time between ordering equipment and receiving it. This situation has created considerable interest in the evaluation and application of probability methods for aging plant and the provision of spares in bulk supply networks, and these methods can be of particular importance for substations. Quantitative adequacy assessment of substation and sub-transmission power systems is generally done using a contingency enumeration approach, which includes the evaluation of contingencies and their classification based on selected failure criteria. The problem is very complex because of the need to model the operation of substation and sub-transmission equipment in detail using network flow evaluation, and to consider multiple levels of component failures. In this thesis a new model for aging equipment is developed that combines the standard treatment of random failures with a specific model for aging failures. This technique is applied to examine the impact of aging equipment on the reliability of bulk supply loads and of consumers in the distribution network over a defined range of planning years. The power system risk indices depend on many factors, such as the actual physical network configuration and operation, the aging condition of the equipment, and the relevant constraints.
The impact and importance of equipment reliability on power system risk indices in a network with aging facilities contain valuable information that helps utilities better understand network performance and the weak links in the system. In this thesis, algorithms are developed to measure the contribution of individual pieces of equipment to the power system risk indices, as part of a novel risk analysis tool. A new cost-worth approach is also developed that supports early decisions when planning replacement activities for non-repairable aging components, in order to maintain a level of system reliability that is economically acceptable. The concepts, techniques and procedures developed in this thesis are illustrated numerically using published test systems. It is believed that the methods and approaches presented substantially improve the accuracy of risk predictions by explicitly considering the effect of equipment entering a period of increased risk of non-repairable failure.
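As a hedged sketch of how random and aging failure models can be combined for a non-repairable component, the snippet below conditions a Weibull aging law plus a constant random-failure rate on the component's current age. The distributions and parameters are illustrative assumptions, not the thesis's model.

```python
import math

def aging_failure_prob(age, horizon, eta, beta, lam_random=0.0):
    """Probability that a component of current `age` fails within the next
    `horizon` years, combining a Weibull aging model (scale eta, shape beta)
    with a constant random-failure rate lam_random.

    Uses the conditional survival S(age + horizon) / S(age), where
    S(t) = exp(-(t/eta)^beta - lam_random * t).
    """
    def survival(t):
        return math.exp(-(t / eta) ** beta - lam_random * t)
    return 1.0 - survival(age + horizon) / survival(age)

# With shape beta > 1 the hazard grows with age, so the one-year failure
# probability of a 30-year-old unit far exceeds that of a 10-year-old one.
p_young = aging_failure_prob(10, 1, eta=40, beta=4)
p_old = aging_failure_prob(30, 1, eta=40, beta=4)
```

This conditional form is what makes an "entering a period of increased risk" assessment possible: the same component contributes a different failure probability in each planning year as it ages.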

Relevance:

30.00%

Publisher:

Abstract:

Despite promising benefits and advantages, there are reports of failures and low realisation of benefits in Enterprise System (ES) initiatives. Among the research on the factors that influence ES success, there is a dearth of studies on the knowledge implications of multiple end-user groups using the same ES application. An ES facilitates the work of several user groups, ranging from strategic management, through management, to operational staff, all using the same system for multiple objectives. Given the fundamental characteristics of an ES, namely integration of modules, business-process views, and aspects of information transparency, it is necessary that all frequent end-users share a reasonable amount of common knowledge and integrate their knowledge to yield new knowledge. Recent literature on ES implementation highlights the importance of Knowledge Integration (KI) for implementation success. Unfortunately, the importance of KI is often overlooked, and little is known about the role of KI in ES success. Many organisations do not achieve the potential benefits of their ES investment because they do not consider the need for, or their ability to achieve, integration of their employees' knowledge. This study is designed to improve our understanding of the influence of KI among ES end-users on operational ES success. The three objectives of the study are: (I) to identify and validate the antecedents of KI effectiveness; (II) to investigate the impact of KI effectiveness on the goodness of individuals' ES-knowledge base; and (III) to examine the impact of the goodness of individuals' ES-knowledge base on operational ES success. For this purpose, we employ the KI factors identified by Grant (1996) and the IS-impact measurement model from the work of Gable et al. (2008) to examine ES success. The study derives its findings from data gathered from six Malaysian companies in order to attain the three-fold goal of this thesis as outlined above.
The relationships between the antecedents of KI effectiveness and its consequences are tested using 188 survey responses representing the views of the management and operational employment cohorts. Using statistical methods, we confirm three antecedents of KI effectiveness and validate their consequences for ES success. The findings demonstrate a statistically significant positive impact of KI effectiveness on ES success, with KI effectiveness contributing almost one-third of ES success. This research makes a number of contributions to the understanding of the influence of KI on ES success. First, based on the empirical work using a complete nomological net model, the role of KI effectiveness in ES success is evidenced. Second, the model provides a theoretical lens for a more comprehensive understanding of the impact of KI on the level of ES success. Third, restructuring the dimensions of the knowledge-based theory to fit the context of ES extends its applicability and generalisability to contemporary Information Systems. Fourth, the study develops and validates measures for the antecedents of KI effectiveness. Fifth, the study demonstrates the statistically significant positive influence of the goodness of KI on ES success. From a practical viewpoint, this study emphasises the importance of KI effectiveness as a direct antecedent of ES success. Practical lessons can be drawn from the work done in this study to empirically identify the critical factors among the antecedents of KI effectiveness that should be given attention.

Relevance:

30.00%

Publisher:

Abstract:

Monitoring the integrity of rolling element bearings in the traction system of high-speed trains is a fundamental operation for avoiding catastrophic failures and implementing effective condition-based maintenance strategies. Diagnostics of rolling element bearings is usually based on vibration signal analysis by means of suitable signal processing techniques. The experimental validation of such techniques has traditionally been performed by means of laboratory tests on artificially damaged bearings, while their actual effectiveness in industrial applications, particularly in the field of rail transport, remains scarcely investigated. This paper addresses the diagnostics of bearings taken out of service after long-term operation on a high-speed train. These worn bearings have been installed on a test-rig, consisting of a complete full-scale traction system of a high-speed train, able to reproduce the effects of wheel-track interaction and bogie-wheelset dynamics. The results of the experimental campaign show that suitable signal processing techniques are able to diagnose bearing failures even in this harsh and noisy application. Moreover, the most suitable location of the sensors on the traction system is also proposed.
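One standard signal processing technique for this task (not necessarily the one used in the paper) is envelope analysis: demodulate the high-frequency resonance excited by the defect and look for the fault characteristic frequency in the spectrum of the envelope. A minimal numpy-only sketch on a synthetic signal:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT, a numpy-only stand-in for scipy.signal.hilbert."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(np.fft.fft(x) * h)

def envelope_spectrum(x, fs):
    """Amplitude spectrum of the vibration envelope.  Bearing defects excite
    high-frequency resonances amplitude-modulated at the fault characteristic
    frequency, which then shows up as a peak in this spectrum."""
    env = np.abs(analytic_signal(x))
    env -= env.mean()
    spec = np.abs(np.fft.rfft(env)) / len(x)
    return np.fft.rfftfreq(len(x), d=1.0 / fs), spec

# Synthetic fault signature: a 3 kHz resonance modulated at 87 Hz, plus noise.
fs = 20_000
t = np.arange(0, 1.0, 1.0 / fs)
x = (1 + np.cos(2 * np.pi * 87 * t)) * np.sin(2 * np.pi * 3000 * t)
x += 0.5 * np.random.default_rng(0).standard_normal(len(t))
freqs, spec = envelope_spectrum(x, fs)
fault_freq = freqs[1:][np.argmax(spec[1:])]     # skip the DC bin
```

In the paper's noisy full-scale setting, the same idea applies; the sensor placement question is essentially about where this modulation survives best.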

Relevance:

30.00%

Publisher:

Abstract:

Rolling element bearings are the most critical components in the traction system of high-speed trains. Monitoring their integrity is a fundamental operation for avoiding catastrophic failures and implementing effective condition-based maintenance strategies. Diagnostics of rolling element bearings is usually performed by analyzing vibration signals measured by accelerometers placed in the proximity of the bearing under investigation. Several papers have been published on this subject in the last two decades, mainly devoted to the development and assessment of signal processing techniques for diagnostics. The experimental validation of such techniques has traditionally been performed by means of laboratory tests on artificially damaged bearings, while their actual effectiveness in specific industrial applications, particularly in the rail industry, remains scarcely investigated. This paper aims to fill this knowledge gap by addressing the diagnostics of bearings taken out of service after long-term operation on the traction system of a high-speed train. Moreover, in order to test the effectiveness of the diagnostic procedures under the environmental conditions peculiar to the rail application, a specific test-rig has been built, consisting of a complete full-scale train traction system, able to reproduce the effects of wheel-track interaction and bogie-wheelset dynamics. The results of the experimental campaign show that suitable signal processing techniques are able to diagnose bearing failures even in this harsh and noisy application. Moreover, the most suitable location of the sensors on the traction system is proposed, in order to limit their number.

Relevance:

30.00%

Publisher:

Abstract:

Whereas it has been widely assumed by the public that the Soviet music policy system had a "top-down" structure of control and command that directly affected musical creativity, my research in fact shows that the relations between the different levels of the music policy system were vague, and that the viewpoints of its representatives differed from each other. Because the representatives of the party and government organs controlling operas could not define which kind of music represented Socialist Realism, the system as it developed during the 1930s and 1940s did not function effectively enough to create centralised control of Soviet music, still less could Soviet operas fulfil the highly ambiguous aesthetics of Socialist Realism. I show that musical discussions developed into bureaucratic ritualistic arenas, where it became more important to expose heretical composers, make scapegoats of them, and require them to perform self-criticism than to give directions on how to reach the artistic goals of Socialist Realism. When one opera was found to be unacceptable, this led to a strengthening of control by the party leadership, which in turn led to more operas, one after another, being revealed as failures. I have studied the control of the composition, staging and reception of the opera case studies, which remain obscure in the West despite a growing scholarly interest in them, and have created a detailed picture of the foundation and development of the Soviet music control system in 1932-1950. My detailed discussion of such case studies as Ivan Dzerzhinskii's The Quiet Don, Dmitrii Shostakovich's Lady Macbeth of Mtsensk District, Vano Muradeli's The Great Friendship, Sergei Prokofiev's Story of a Real Man, Tikhon Khrennikov's Frol Skobeev and Evgenii Zhukovskii's From All One's Heart backs with documentary precision the historically revisionist model of the development of Soviet music.
In February 1948, composers belonging to the elite of the Union of Soviet Composers, e.g. Dmitri Shostakovich and Sergei Prokofiev, were accused in a Central Committee Resolution of formalism, that is, of being under the influence of Western modernism. The accusations of formalism were connected to criticism of the considerable financial, material and social privileges these composers enjoyed in the leadership of the Union. With my new archival findings I give a more detailed picture of the financial background of the 1948 campaign. The independence of the Union of Soviet Composers' funding organization (Muzfond) in deciding on its own finances was an exceptional phenomenon in the Soviet Union and contradicted the efforts to strengthen the control of Soviet music. The financial audits of the Union of Soviet Composers did not, however, change the elite status of some of its composers, except perhaps briefly in some cases. At the same time, the significant financial autonomy of Soviet theatres was restricted. The cuts in the governmental funding allocated to Soviet theatres contradicted the intensified ideological demands for Soviet operas.

Relevance:

30.00%

Publisher:

Abstract:

The uncertainty in material properties and traffic characterization in the design of flexible pavements has led to significant efforts in recent years to incorporate reliability methods and probabilistic design procedures for the design, rehabilitation, and maintenance of pavements. In the mechanistic-empirical (ME) design of pavements, despite the fact that there are multiple failure modes, the design criteria applied in the majority of analytical pavement design methods guard only against fatigue cracking and subgrade rutting, which are usually treated as independent failure events. This study carries out a reliability analysis for a flexible pavement section for these failure criteria based on the first-order reliability method (FORM), the second-order reliability method (SORM), and crude Monte Carlo simulation. Through a sensitivity analysis, the surface layer thickness was identified as the most critical parameter affecting the design reliability for both the fatigue and rutting failure criteria. However, reliability analysis in pavement design is most useful if it can be efficiently and accurately applied to the components of pavement design and to the combination of these components in an overall system analysis. The study shows that, for the pavement section considered, there is a high degree of dependence between the two failure modes, and demonstrates that the probability of simultaneous occurrence of failures can be almost as high as the probability of the component failures. Thus, the need to consider system reliability in pavement analysis is highlighted, and the study indicates that the improvement of pavement performance should be tackled by reducing this undesirable event of simultaneous failure, and not merely by considering the more critical failure mode.
Furthermore, this probability of simultaneous occurrence of failures is seen to increase considerably with small increments in the mean traffic loads, which also results in wider system reliability bounds. The study also advocates the use of narrow bounds to the probability of failure, which provides a better estimate of the probability of failure, as validated from the results obtained from Monte Carlo simulation (MCS).
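The interplay between component and system failure probabilities can be illustrated with a crude Monte Carlo sketch: reduce each failure mode to a standard-normal safety margin with a given reliability index, correlate the margins, and count joint failures. The reliability indices and the correlation below are illustrative assumptions, not values from the study.

```python
import numpy as np

def failure_probabilities(n=200_000, rho=0.9, beta_f=2.0, beta_r=2.2, seed=0):
    """Crude Monte Carlo estimate of component and system failure
    probabilities for two correlated limit states (fatigue and rutting).
    Each mode is a standard-normal safety margin with reliability index
    beta; rho is the correlation between the two margins."""
    rng = np.random.default_rng(seed)
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    fail_f = z[:, 0] < -beta_f            # fatigue margin violated
    fail_r = z[:, 1] < -beta_r            # rutting margin violated
    return (fail_f.mean(), fail_r.mean(),
            (fail_f & fail_r).mean(),     # series "both fail" event
            (fail_f | fail_r).mean())     # system (either mode) failure

pf_fatigue, pf_rutting, p_both, p_union = failure_probabilities()
```

With strongly correlated margins, `p_both` comes out close to the smaller component probability, which is the study's point that simultaneous failure cannot be neglected; with `rho` near zero it would drop to roughly the product of the two.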

Relevance:

30.00%

Publisher:

Abstract:

An X-ray imaging technique is used to probe the stability of three-dimensional granular packs in a slowly rotating drum. Well before the surface reaches the avalanche angle, we observe intermittent plastic events associated with collective rearrangements of the grains located in the vicinity of the free surface. The energy released by these discrete events grows as the system approaches the avalanche threshold. By testing various preparation methods, we show that the pre-avalanche dynamics is not solely controlled by the difference between the free-surface inclination and the avalanche angle. As a consequence, measuring the pre-avalanche dynamics is unlikely to serve as a tool for predicting macroscopic avalanches.

Relevance:

30.00%

Publisher:

Abstract:

In semiconductor fabrication processes, effective management of maintenance operations is fundamental to decreasing the costs associated with failures and downtime. Predictive Maintenance (PdM) approaches, based on statistical methods and historical data, are becoming popular for their predictive capabilities and low (potentially zero) added costs. We present here a PdM module based on Support Vector Machines for the prediction of integral-type faults, that is, the kind of failures that happen due to machine usage and stress of equipment parts. The proposed module may also be employed as a health-factor indicator. The module has been applied to a frequent maintenance problem in the semiconductor manufacturing industry, namely the breaking of the filament in the ion source of ion-implantation tools. The PdM module has been tested on a real production dataset.
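As a hedged illustration of the approach, the sketch below trains a hand-rolled linear SVM (sub-gradient descent on the hinge loss) on synthetic wear features; the feature names, data and training scheme are illustrative assumptions, not the authors' module or dataset.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1, seed=0):
    """Tiny linear SVM trained by sub-gradient descent on the hinge loss.
    y in {-1, +1}: -1 = healthy cycle, +1 = filament close to breaking."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            if y[i] * (X[i] @ w + b) < 1:         # margin violated
                w = (1 - lr * lam) * w + lr * y[i] * X[i]
                b += lr * y[i]
            else:                                  # only regularize
                w = (1 - lr * lam) * w
    return w, b

# Synthetic "integral-type" wear features: cumulative beam time and
# source-current drift grow as the filament approaches breakage.
rng = np.random.default_rng(1)
healthy = rng.normal([10.0, 1.0], 0.5, size=(100, 2))
worn = rng.normal([40.0, 3.0], 0.5, size=(100, 2))
X = np.vstack([healthy, worn])
X = (X - X.mean(0)) / X.std(0)                     # standardize features
y = np.r_[-np.ones(100), np.ones(100)]
w, b = train_linear_svm(X, y)
acc = ((X @ w + b > 0) == (y > 0)).mean()
```

The signed distance `X @ w + b` can double as the kind of continuous health-factor indicator the abstract mentions, with the decision threshold marking "schedule maintenance".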

Relevance:

30.00%

Publisher:

Abstract:

Past research has frequently attributed the incidence of bank failures to macroeconomic cycles and/or downturns in the regional economy. More recent analyses have suggested that the incidence and severity of bank failures can be linked to governance failures, which may be preventable through more stringent disclosure and auditing requirements. Using data on bank failures during the years 1991 to 1997, for the US, Canada, the UK and Germany, this study examines the relationship between institutional characteristics of national legal and auditing systems and the incidence of bank failures. In the second part of our analysis we then examined the relationship between the same institutional variables and the severity of bank failures.
The first part of our study notes a significant correlation between the law and order tradition ('rule of law') of a national legal system and the incidence of bank failures. Nations which were assigned high 'rule of law' scores by country risk guides appear to have been less likely to experience bank failures. Another variable which appears to impact bank failure rates is the 'risk of contract repudiation': countries with a greater 'risk of contract repudiation' appear to be more likely to experience bank failures. We suggest that this may be due to greater ex ante protection of stakeholders in countries where contract enforcement is more stringent.
The results of the second part of our study are less clear-cut. However, there appears to be a significant correlation between the amount paid out by national deposit insurers (our proxy for the severity of bank failures) and the macroeconomic variable 'GDP change'. Here our findings follow the conventional wisdom, with greater amounts of deposit insurance funds being paid out during economic downturns (i.e. low or negative GDP 'growth' correlates with high amounts of deposit insurance being paid out). A less pronounced relationship with the severity of bank failures can also be established for the institutional variables 'accounting standards' and 'risk of contract repudiation'. Countries with more stringent 'accounting standards' and a low 'risk of contract repudiation' appear to have been less prone to severe bank failures.

Relevance:

30.00%

Publisher:

Abstract:

Presented at the 21st IEEE International Conference on Embedded and Real-Time Computing Systems and Applications (RTCSA 2015), 19-21 August 2015, Hong Kong, China, pp. 122-131.

Relevance:

30.00%

Publisher:

Abstract:

In the present environment, industry must provide products of high quality. The quality of a product is judged by the period of time over which it can successfully perform its intended functions without failure. The causes of failures can be ascertained through life-testing experiments, and the times to failure due to different causes are likely to follow different distributions. Knowledge of these distributions is essential for eliminating the causes of failures and thereby improving the quality and reliability of products. The main accomplishment expected from the study is to develop statistical tools that could facilitate solutions to lifetime data arising in such and similar contexts.
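A minimal sketch of the competing-risks setting the abstract describes, under the simplifying assumption that each cause has an exponential latent failure time: the maximum-likelihood estimate of each cause-specific rate is the number of failures from that cause divided by the total time at risk.

```python
import random

def cause_specific_rates(data):
    """Estimate cause-specific failure rates from (time, cause) pairs.
    Assumes independent exponential latent failure times per cause, a
    classical competing-risks model and a simplifying assumption here.
    MLE of each rate: (failures from that cause) / (total time at risk)."""
    total_time = sum(t for t, _ in data)
    causes = {c for _, c in data}
    return {c: sum(1 for _, cc in data if cc == c) / total_time
            for c in causes}

# Simulate items whose observed failure is the earliest of two causes.
rng = random.Random(0)
true = {"wear": 0.02, "shock": 0.01}
data = []
for _ in range(5000):
    times = {c: rng.expovariate(r) for c, r in true.items()}
    cause = min(times, key=times.get)        # first cause to strike wins
    data.append((times[cause], cause))
rates = cause_specific_rates(data)           # recovers approx. 0.02 / 0.01
```

In practice the cause-specific distributions need not be exponential (the abstract's point), but the same decomposition of the likelihood by cause carries over to richer families such as the Weibull.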

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we discuss the consensus problem for synchronous distributed systems with orderly crash failures. For a synchronous distributed system of n processes with up to t crash failures, of which f actually occur, we first present a bivalency-argument proof to solve the open problem of proving the lower bound, min(t + 1, f + 2) rounds, for early-stopping synchronous consensus with orderly crash failures, where t < n - 1. We then extend the system model with orderly crash failures to a new model in which a process is allowed to send multiple messages to the same destination process in a round, and the failing processes still respect the order specified by the protocol in sending messages. For this new model, we present a uniform consensus protocol in which all non-faulty processes always decide and stop immediately by the end of f + 1 rounds. We prove that the lower bound of early-stopping protocols for both consensus and uniform consensus is f + 1 rounds under the new model, and that our proposed protocol is optimal.
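For intuition about why extra rounds tolerate crashes, the classical flooding protocol that decides after f + 1 rounds can be sketched as below. This illustrates the round bound only; it is not the paper's early-stopping protocol, and the crash model here is a plain (not orderly) partial-send crash.

```python
def flooding_consensus(inputs, crash_schedule):
    """Round-based flooding consensus in a synchronous system with crashes.
    Each process rebroadcasts every value it has seen; after f + 1 rounds
    all surviving processes hold the same set and decide on its minimum.
    crash_schedule: process id -> (round it crashes in, set of peers it
    still reaches in that round before crashing)."""
    n = len(inputs)
    f = len(crash_schedule)
    known = [{v} for v in inputs]
    crashed = set()
    for rnd in range(1, f + 2):                  # f + 1 rounds
        msgs = [set() for _ in range(n)]
        for p in range(n):
            if p in crashed:
                continue
            if p in crash_schedule and crash_schedule[p][0] == rnd:
                reach = crash_schedule[p][1]     # partial send, then crash
                crashed.add(p)
            else:
                reach = range(n)                 # correct: reach everyone
            for q in reach:
                msgs[q] |= known[p]
        for q in range(n):
            if q not in crashed:
                known[q] |= msgs[q]
    return {p: min(known[p]) for p in range(n) if p not in crashed}

# Four processes, one crash: process 0 crashes in round 1 after reaching
# only process 1; round 2 re-floods, so agreement still holds.
decisions = flooding_consensus([0, 3, 5, 7], {0: (1, {1})})
```

With only one round, process 1 would know value 0 while processes 2 and 3 would not; the extra round per possible failure is exactly what the f + 1 lower bound captures.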

Relevance:

30.00%

Publisher:

Abstract:

The performance of boreal winter forecasts made with the European Centre for Medium-Range Weather Forecasts (ECMWF) System 11 Seasonal Forecasting System is investigated through analyses of ensemble hindcasts for the period 1987-2001. The predictability, or signal-to-noise ratio, associated with the forecasts, and the forecast skill are examined. On average, forecasts of 500 hPa geopotential height (GPH) have skill in most of the Tropics and in a few regions of the extratropics. There is broad, but not perfect, agreement between regions of high predictability and regions of high skill. However, model errors are also identified, in particular regions where the forecast ensemble spread appears too small. For individual winters the information provided by t-values, a simple measure of the forecast signal-to-noise ratio, is investigated. For 2 m surface air temperature (T2m), highest t-values are found in the Tropics but there is considerable interannual variability, and in the tropical Atlantic and Indian basins this variability is not directly tied to the El Nino Southern Oscillation. For GPH there is also large interannual variability in t-values, but these variations cannot easily be predicted from the strength of the tropical sea-surface-temperature anomalies. It is argued that the t-values for 500 hPa GPH can give valuable insight into the oceanic forcing of the atmosphere that generates predictable signals in the model. Consequently, t-values may be a useful tool for understanding, at a mechanistic level, forecast successes and failures. Lastly, the extent to which t-values are useful as a predictor of forecast skill is investigated. For T2m, t-values provide a useful predictor of forecast skill in both the Tropics and extratropics. Except in the equatorial east Pacific, most of the information in t-values is associated with interannual variability of the ensemble-mean forecast rather than interannual variability of the ensemble spread. 
For GPH, however, t-values provide a useful predictor of forecast skill only in the tropical Pacific region.
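A t-value of the kind described, i.e. a simple ensemble signal-to-noise measure, can be sketched as the ensemble-mean anomaly divided by the standard error of the ensemble mean. This particular formula is an assumption for illustration; the paper's exact definition is not given in the abstract.

```python
import numpy as np

def t_value(ensemble, climatology_mean=0.0):
    """Signal-to-noise t-value at one grid point: the ensemble-mean anomaly
    divided by the standard error of the ensemble mean.  Large |t| means the
    forced signal stands out from the internal ensemble spread."""
    ens = np.asarray(ensemble, dtype=float)
    anomaly = ens.mean() - climatology_mean
    std_err = ens.std(ddof=1) / np.sqrt(len(ens))
    return anomaly / std_err

# Five-member ensemble of T2m anomalies (illustrative numbers): a strong,
# consistent warm signal yields a large t-value.
t = t_value([1.2, 0.9, 1.1, 1.0, 0.8])
```

Note that t grows either when the ensemble-mean anomaly strengthens or when the spread shrinks, matching the abstract's finding that most of the interannual information in t-values comes from the ensemble mean rather than the spread.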