982 results for Reliability (Engineering)
Abstract:
We examine the test-retest reliability of biceps brachii tissue oxygenation index (TOI) parameters measured by near-infrared spectroscopy during a 10-s sustained and a 30-repetition (1-s contraction, 1-s relaxation) isometric contraction task at 30% of maximal voluntary contraction (30% MVC) and maximal (100% MVC) intensities. Eight healthy men (23 to 33 yr) were tested in three sessions separated by 3 h and 24 h, and the within-subject reliability of torque and each TOI parameter was determined by Bland-Altman +/- 2 SD limits of agreement plots and the coefficient of variation (CV). No significant (P>0.05) differences between the three sessions were found for mean values of torque and TOI parameters during the sustained and repeated tasks at either contraction intensity. All TOI parameters were within the +/- 2 SD limits of agreement. The CVs for torque integral were similar between the sustained and repeated tasks at both intensities (4 to 7%); however, the CVs for TOI parameters during the sustained and repeated tasks were lower for 100% MVC (7 to 11%) than for 30% MVC (22 to 36%). It is concluded that the reliability of the biceps brachii NIRS parameters during both sustained and repeated isometric contraction tasks is acceptable.
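As an illustration of the statistics reported above, the following is a minimal Python sketch of Bland-Altman +/- 2 SD limits of agreement and a within-subject CV for two test sessions; the TOI values and the particular CV formulation are assumptions for illustration, not the study's data or exact method.

```python
# Minimal sketch: Bland-Altman +/- 2 SD limits of agreement and within-subject CV.
# The paired session values below are hypothetical TOI readings (%), not study data.
import numpy as np

session1 = np.array([62.0, 58.5, 65.2, 60.1, 59.8, 63.4, 61.0, 57.9])
session2 = np.array([60.8, 59.2, 64.0, 61.5, 58.6, 62.8, 62.1, 58.4])

diff = session1 - session2
mean_pair = (session1 + session2) / 2.0

bias = diff.mean()
limits = (bias - 2 * diff.std(ddof=1), bias + 2 * diff.std(ddof=1))

# Within-subject SD from paired differences (one common formulation), then CV
within_subject_sd = np.sqrt(np.mean(diff ** 2) / 2.0)
cv_percent = 100.0 * within_subject_sd / mean_pair.mean()

print(f"bias = {bias:.2f}, limits of agreement = ({limits[0]:.2f}, {limits[1]:.2f})")
print(f"within-subject CV = {cv_percent:.1f}%")
```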
Abstract:
This paper presents the findings of an investigation of the challenges Australian manufacturers currently face, based on a comprehensive questionnaire survey conducted among leading Australian manufacturers. Evidence indicates that product quality and reliability (Q & R) are the main challenges for Australian manufacturers, with design capability and time to market coming second. Results show that there is no effective information exchange between the parties involved in production and quality control, and that learning from past mistakes is not proving to have a significant effect on improving product quality. The pace of technological innovation is high, with companies introducing as many as five new products a year, and this pace puts pressure on the Q & R of new products. To overcome these new challenges, companies need a Q & R improvement model.
Abstract:
Accurate reliability prediction for large-scale, long-lived engineering assets is a crucial foundation for effective asset risk management and optimal maintenance decision making. However, a lack of failure data for assets that fail infrequently, and changing operational conditions over long periods of time, make accurate reliability prediction for such assets very challenging. To address this issue, we present a Bayesian-Markov based approach to reliability prediction using prior knowledge and condition monitoring data. In this approach, Bayesian theory is used to incorporate prior information about failure probabilities and current information about asset health to make statistical inferences, while Markov chains are used to update and predict the health of assets based on condition monitoring data. The prior information can be supplied by domain experts, extracted from previous comparable cases or derived from basic engineering principles. Our approach differs from existing hybrid Bayesian models, which are normally used to update the parameter estimation of a given distribution, such as the Weibull-Bayesian distribution, or the transition probabilities of a Markov chain. Instead, our new approach can be used to update predictions of failure probabilities when failure data are sparse or nonexistent, as is often the case for large-scale, long-lived engineering assets.
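To make the combination concrete, below is a minimal sketch of a Bayesian observation update followed by Markov-chain propagation of asset health; the health states, prior, likelihood and transition matrix are hypothetical illustrations, not values or the procedure from the paper.

```python
# Minimal sketch: Bayesian update of the current health-state belief from a
# condition-monitoring observation, then Markov-chain prediction of failure probability.
# All numbers and state names are hypothetical.
import numpy as np

# Hypothetical health states: 0 = good, 1 = degraded, 2 = failed
transition = np.array([
    [0.95, 0.04, 0.01],   # P(next state | currently good)
    [0.00, 0.90, 0.10],   # P(next state | currently degraded)
    [0.00, 0.00, 1.00],   # failed is absorbing
])

# Prior belief over the current state (e.g. supplied by domain experts)
belief = np.array([0.80, 0.15, 0.05])

# Assumed likelihood of a "high vibration" reading given each state
likelihood = np.array([0.1, 0.6, 0.9])

# Bayesian update: posterior over the current state given the observation
posterior = belief * likelihood
posterior /= posterior.sum()

# Markov prediction: propagate the posterior k steps ahead and read off P(failed)
for k in range(1, 4):
    predicted = posterior @ np.linalg.matrix_power(transition, k)
    print(f"step {k}: P(failure) = {predicted[2]:.3f}")
```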
Abstract:
Study Design. A sheep study designed to compare the accuracy of static radiographs, dynamic radiographs, and computed tomographic (CT) scans for the assessment of thoracolumbar facet joint fusion as determined by micro-CT scanning. Objective. To determine the accuracy and reliability of conventional imaging techniques in identifying the status of thoracolumbar (T13-L1) facet joint fusion in a sheep model. Summary of Background Data. Plain radiographs are commonly used to determine the integrity of surgical arthrodesis of the thoracolumbar spine. Many previous studies of fusion success have relied solely on postoperative assessment of plain radiographs, a technique lacking sensitivity for pseudarthrosis. CT may be a more reliable technique, but is less well characterized. Methods. Eleven adult sheep were randomized to either attempted arthrodesis using autogenous bone graft and internal fixation (n = 3) or intentional pseudarthrosis (IP) using oxidized cellulose and internal fixation (n = 8). After 6 months, facet joint fusion was assessed by independent observers, using (1) plain static radiography alone, (2) additional dynamic radiographs, and (3) additional reconstructed spiral CT imaging. These assessments were correlated with high-resolution micro-CT imaging to predict the utility of the conventional imaging techniques in the estimation of fusion success. Results. The capacity of plain radiography alone to correctly predict fusion or pseudarthrosis was 43%, and it was not improved by the addition of dynamic radiography, which also gave 43% accuracy. Adding reformatted CT imaging to the plain radiography techniques increased the capacity to correctly predict fusion outcome to 86%. The sensitivity, specificity, and accuracy of static radiography were 0.33, 0.55, and 0.43, respectively; those of dynamic radiography were 0.46, 0.40, and 0.43, respectively; and those of radiography plus CT were 0.88, 0.85, and 0.86, respectively. Conclusion. CT-based evaluation correlated most closely with high-resolution micro-CT imaging. Neither plain static nor dynamic radiographs were able to predict fusion outcome accurately. © 2012 Lippincott Williams & Wilkins.
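For readers unfamiliar with the reported metrics, the short sketch below shows how sensitivity, specificity and accuracy are computed from a 2x2 confusion matrix against a reference standard; the counts used are hypothetical, not the study's data.

```python
# Minimal sketch: diagnostic metrics from a 2x2 confusion matrix.
# The counts are hypothetical, not taken from the sheep study.
def diagnostic_metrics(tp, fn, fp, tn):
    sensitivity = tp / (tp + fn)                 # fused joints correctly called fused
    specificity = tn / (tn + fp)                 # non-fused joints correctly called not fused
    accuracy = (tp + tn) / (tp + fn + fp + tn)   # overall agreement with the reference
    return sensitivity, specificity, accuracy

# Example with made-up counts against a micro-CT reference standard
sens, spec, acc = diagnostic_metrics(tp=7, fn=1, fp=2, tn=11)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}, accuracy={acc:.2f}")
```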
Abstract:
Each year, organizations in the Australian mining industry (an asset-intensive industry) spend a substantial amount of capital (A$86 billion in 2009-10) (Statistics, 2011) acquiring engineering assets. Engineering assets are put to use in operations to generate value. Different functions (departments) of an organization have different expectations of and requirements for each engineering asset, e.g. return on investment, reliability, efficiency, maintainability, low cost of running the asset, low or nil environmental impact, ease of disposal, potential salvage value, etc. Assets are acquired from suppliers, built by service providers, or built internally, and the acquisition process is supported by the procurement function. One of the most costly mistakes an organization can make is acquiring inappropriate or non-conforming assets that do not fit the purpose. The root cause of acquiring non-conforming assets lies in incorrect acquisition decisions and the process by which those decisions are made. It is very important that an asset acquisition decision is based on the inputs and criteria of each function within the organization that has a direct or indirect impact on the acquisition, utilization, maintenance and disposal of the asset. A literature review shows that there is currently no comprehensive process framework or tool available to evaluate the inclusiveness and breadth of asset acquisition decisions taken in mining organizations. This thesis discusses the criteria and inputs that need to be considered and evaluated across functions within the organization when making an asset acquisition decision: criteria from functions such as finance, production, maintenance, logistics, procurement, asset management, environment, health and safety, materials management, and training and development need to be considered to make an effective and coherent asset acquisition decision. The thesis also discusses a tool developed to support this multi-criteria, cross-functional acquisition decision making. The contribution of this research is the development of a multi-criteria, cross-functional decision framework and of a tool that uses that framework to formulate integrated asset acquisition decisions.
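As a purely illustrative sketch of how cross-functional criteria might be combined, the following uses a simple weighted-sum scoring model; the criteria, weights and candidate scores are hypothetical and are not the framework or tool developed in the thesis.

```python
# Minimal sketch: weighted-sum multi-criteria scoring of acquisition candidates.
# Criteria, weights and scores are hypothetical illustrations only.
criteria_weights = {
    "finance_roi": 0.25,
    "production_throughput": 0.20,
    "maintenance_maintainability": 0.20,
    "ehs_impact": 0.15,
    "logistics_fit": 0.10,
    "salvage_value": 0.10,
}

# Candidate assets scored 0-10 on each criterion by the relevant function
candidates = {
    "asset_A": {"finance_roi": 7, "production_throughput": 8, "maintenance_maintainability": 6,
                "ehs_impact": 9, "logistics_fit": 7, "salvage_value": 5},
    "asset_B": {"finance_roi": 9, "production_throughput": 6, "maintenance_maintainability": 5,
                "ehs_impact": 6, "logistics_fit": 8, "salvage_value": 7},
}

def weighted_score(scores, weights):
    # Sum of criterion score times the weight assigned to that criterion
    return sum(weights[c] * scores[c] for c in weights)

for name, scores in candidates.items():
    print(name, round(weighted_score(scores, criteria_weights), 2))
```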
Abstract:
Reliable ambiguity resolution (AR) is essential to Real-Time Kinematic (RTK) positioning and its applications, since incorrect ambiguity fixing can lead to largely biased positioning solutions. A partial ambiguity fixing technique is developed to improve the reliability of AR, involving partial ambiguity decorrelation (PAD) and partial ambiguity resolution (PAR). Decorrelation transformation can substantially amplify the biases in the phase measurements, so the purpose of PAD is to find the optimum trade-off between decorrelation and worst-case bias amplification. The concept of PAR refers to the case where only a subset of the ambiguities can be fixed correctly to their integers in the integer least-squares (ILS) estimation system at high success rates. As a result, RTK solutions can be derived from these integer-fixed phase measurements. This is meaningful provided that the number of reliably resolved phase measurements is sufficiently large for least-squares estimation of the RTK solutions as well. Considering the GPS constellation alone, partially fixed measurements are often insufficient for positioning. The AR reliability is usually characterised by the AR success rate. In this contribution an AR validation decision matrix is first introduced to understand the impact of the success rate, and the AR risk probability is then included in a more complete evaluation of AR reliability. We use 16 ambiguity variance-covariance matrices with different levels of success rate to analyse the relation between success rate and AR risk probability. Next, the paper examines how, during the PAD process, a bias in one measurement is propagated and amplified onto many others, leading to more than one wrong integer and affecting the success probability. Furthermore, the paper proposes a partial ambiguity fixing procedure with a predefined success rate criterion and a ratio-test in the ambiguity validation process. In this paper, Galileo constellation data are tested with simulated observations. Numerical results from our experiment clearly demonstrate that only when the computed success rate is very high can the AR validation provide decisions about the correctness of AR that are close to the real world, with both low AR risk and low false alarm probabilities. The results also indicate that the PAR procedure can automatically choose an adequate number of ambiguities to fix, at a given high success rate, from the multiple constellations instead of fixing all the ambiguities. This is a benefit that multiple GNSS constellations can offer.
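As a minimal sketch of the idea of fixing only a high-confidence subset of ambiguities, the code below uses the bootstrapping success-rate bound (a standard measure in the AR literature, not necessarily the success-rate criterion used in this paper) to decide how many decorrelated ambiguities to fix; the conditional standard deviations are hypothetical.

```python
# Minimal sketch: partial ambiguity selection via the bootstrapping success rate,
# P_s = prod_i (2*Phi(1/(2*sigma_i|I)) - 1), over the conditional std devs of the
# decorrelated ambiguities. The sigma values below are hypothetical.
import numpy as np
from scipy.stats import norm

def bootstrap_success_rate(cond_std):
    cond_std = np.asarray(cond_std, dtype=float)
    return float(np.prod(2.0 * norm.cdf(1.0 / (2.0 * cond_std)) - 1.0))

def select_partial_subset(cond_std_sorted, target=0.999):
    """Fix the largest leading subset (most precise first) meeting the target rate."""
    for k in range(len(cond_std_sorted), 0, -1):
        if bootstrap_success_rate(cond_std_sorted[:k]) >= target:
            return k
    return 0

# Hypothetical conditional standard deviations (cycles), most precise first
cond_std = [0.02, 0.03, 0.05, 0.08, 0.15, 0.30]
k = select_partial_subset(cond_std)
print(f"fix {k} of {len(cond_std)} ambiguities, "
      f"success rate = {bootstrap_success_rate(cond_std[:k]):.4f}")
```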
Abstract:
The rank transform is a non-parametric transform which has been applied to the stereo matching problem. The advantages of this transform include its invariance to radiometric distortion and its amenability to hardware implementation. This paper describes the derivation of the rank constraint for matching using the rank transform. Previous work has shown that this constraint is capable of resolving ambiguous matches, thereby improving match reliability, and a new matching algorithm incorporating this constraint was also proposed. This paper extends that previous work by proposing a matching algorithm which uses a match surface in which the match score is computed for every possible template and match window combination. The principal advantage of this algorithm is that the use of the match surface enforces the left-right consistency and uniqueness constraints, thus improving the algorithm's ability to remove invalid matches. Experimental results for a number of test stereo pairs show that the new algorithm is capable of identifying and removing a large number of incorrect matches, particularly in the case of occlusions.
Abstract:
The rank transform is a non-parametric technique which has been recently proposed for the stereo matching problem. The motivation behind its application to the matching problem is its invariance to certain types of image distortion and noise, as well as its amenability to real-time implementation. This paper derives an analytic expression for the process of matching using the rank transform, and then goes on to derive one constraint which must be satisfied for a correct match. This has been dubbed the rank order constraint or simply the rank constraint. Experimental work has shown that this constraint is capable of resolving ambiguous matches, thereby improving matching reliability. This constraint was incorporated into a new algorithm for matching using the rank transform. This modified algorithm resulted in an increased proportion of correct matches, for all test imagery used.
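To make the technique concrete, here is a minimal Python sketch of the rank transform followed by a simple sum-of-absolute-differences disparity search over the rank images; the window sizes, disparity range and toy images are assumptions, and neither the rank constraint nor the papers' matching algorithms are reproduced here.

```python
# Minimal sketch: rank transform plus a naive SAD disparity search on the rank images.
# Window sizes, disparity range and the toy images are illustrative assumptions.
import numpy as np

def rank_transform(image, win=3):
    """Replace each pixel by the count of neighbours in a win x win window whose
    intensity is less than the centre pixel (invariant to monotonic radiometric
    distortion)."""
    r = win // 2
    out = np.zeros(image.shape, dtype=np.int32)
    padded = np.pad(image, r, mode="edge")
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            window = padded[y:y + win, x:x + win]
            out[y, x] = np.sum(window < image[y, x])
    return out

def sad_disparity(left, right, max_disp=4, win=3):
    """For each pixel, pick the disparity minimising the sum of absolute differences
    between win x win windows of the rank-transformed images."""
    r = win // 2
    pl = np.pad(rank_transform(left, win), r, mode="edge").astype(np.int32)
    pr = np.pad(rank_transform(right, win), r, mode="edge").astype(np.int32)
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(h):
        for x in range(w):
            best_cost, best_d = None, 0
            for d in range(min(max_disp, x) + 1):
                cost = np.abs(pl[y:y + win, x:x + win] -
                              pr[y:y + win, x - d:x - d + win]).sum()
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

left = np.random.randint(0, 255, (8, 16)).astype(np.uint8)
right = np.roll(left, -2, axis=1)  # toy right image: true disparity of 2 pixels
print(sad_disparity(left, right))
```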
Abstract:
Urban transit system performance may be quantified and assessed using transit capacity and productive capacity for planning, design and operational management. Bunker (4) defines important productive performance measures of an individual transit service and transit line: transit work (p-km) captures the transit task performed over distance, while transit productiveness (p-km/h) captures transit work performed over time. This paper applies productive performance with risk assessment to quantify transit system reliability. Theory is developed to monetize transit segment reliability risk on the basis of demonstration Annual Reliability Event rates by transit facility type, segment productiveness, and unit-event severity. A comparative example of peak hour performance of a transit sub-system containing bus-on-street, busway, and rail components in Brisbane, Australia demonstrates through practical application the importance of valuing reliability. The comparison reveals the highest risk segments to be long, highly productive on-street bus segments, followed by busway (BRT) segments and then rail segments. A transit reliability risk reduction treatment example demonstrates that benefits can be significant and should be incorporated into project evaluation in addition to those of regular travel time savings, reduced emissions and safety improvements. Reliability can be used to identify high-risk components of the transit system, to draw comparisons between modes in both planning and operations settings, and to value improvement scenarios in a project evaluation setting. The methodology can also be applied to inform daily transit system operational management.
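As a purely illustrative sketch of monetizing segment reliability risk as the product of an annual event rate, segment productiveness and a unit-event severity cost, the snippet below uses hypothetical segments, rates and cost figures rather than the paper's demonstration values.

```python
# Minimal sketch: annual reliability risk cost per segment as
# (annual event rate) x (productiveness, p-km/h) x (severity cost per unit).
# Segment names, rates, productiveness and cost figures are hypothetical.
segments = [
    # (name, facility type, events per year, productiveness p-km/h, $ per p-km/h lost per event)
    ("CBD corridor", "on-street bus", 40, 12000, 0.8),
    ("SE Busway",    "busway (BRT)",  12, 18000, 0.8),
    ("Rail line",    "rail",           5, 25000, 0.8),
]

for name, ftype, events_per_year, productiveness, severity_cost in segments:
    annual_risk_cost = events_per_year * productiveness * severity_cost
    print(f"{name:12s} ({ftype:13s}): ${annual_risk_cost:,.0f} per year")
```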
Abstract:
The ability to forecast machinery health is vital to reducing maintenance costs, operation downtime and safety hazards. Recent advances in condition monitoring technologies have given rise to a number of prognostic models which attempt to forecast machinery health based on condition data such as vibration measurements. This paper demonstrates how the population characteristics and condition monitoring data (both complete and suspended) of historical items can be integrated for training an intelligent agent to predict asset health multiple steps ahead. The model consists of a feed-forward neural network whose training targets are asset survival probabilities estimated using a variation of the Kaplan–Meier estimator and a degradation-based failure probability density function estimator. The trained network is capable of estimating future survival probabilities when a series of asset condition readings is provided as input, and these output survival probabilities collectively form an estimated survival curve. Pump data from a pulp and paper mill were used for model validation and comparison. The results indicate that the proposed model can predict more accurately, as well as further ahead, than similar models which neglect population characteristics and suspended data. This work presents a compelling concept for longer-range fault prognosis utilising available information more fully and accurately.
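Since the training targets described above rely on survival estimation from both failure and suspended (censored) histories, the following is a minimal sketch of the Kaplan–Meier estimator; the lifetimes are hypothetical, and the paper's neural network and degradation-based estimator are not reproduced here.

```python
# Minimal sketch: Kaplan-Meier survival estimate from lifetimes with suspensions.
# The pump lifetimes below are hypothetical, not the pulp-and-paper-mill data.
import numpy as np

def kaplan_meier(times, failed):
    """Return a list of (event_time, survival_probability) pairs.
    failed=False marks a suspended (right-censored) item."""
    times = np.asarray(times, dtype=float)
    failed = np.asarray(failed, dtype=bool)
    survival, s = [], 1.0
    for t in np.unique(times[failed]):
        at_risk = np.sum(times >= t)              # items still under observation at t
        deaths = np.sum((times == t) & failed)    # failures occurring exactly at t
        s *= 1.0 - deaths / at_risk
        survival.append((t, s))
    return survival

# Hypothetical lifetimes in operating hours; False = still running when last observed
lifetimes = [1200, 1500, 1500, 1800, 2100, 2400, 2600, 3000]
is_failure = [True, True, False, True, False, True, True, False]
for t, s in kaplan_meier(lifetimes, is_failure):
    print(f"t = {t:6.0f} h, S(t) = {s:.3f}")
```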
Abstract:
The IEEE Subcommittee on the Application of Probability Methods (APM) published the IEEE Reliability Test System (RTS) [1] in 1979. This system provides a consistent and generally acceptable set of data that can be used both in generation capacity and in composite system reliability evaluation [2,3]. The test system provides a basis for the comparison of results obtained by different people using different methods. Prior to its publication, there was no general agreement on either the system or the data that should be used to demonstrate or test various techniques developed to conduct reliability studies. Development of reliability assessment techniques and programs is very dependent on the intent behind the development, as the experience of one power utility with its system may be quite different from that of another utility. The development and utilization of a reliability program are, therefore, greatly influenced by the experience of a utility and the intent of the system manager, planner and designer conducting the reliability studies. The IEEE-RTS has proved to be extremely valuable in highlighting and comparing the capabilities (or incapabilities) of programs used in reliability studies, the differences in the perception of various power utilities and the differences in the solution techniques. The IEEE-RTS contains a reasonably large power network which can be difficult to use for initial studies in an educational environment.
Abstract:
The IEEE Reliability Test System (RTS) developed by the Application of Probability Methods Subcommittee has been used to compare and test a wide range of generating capacity and composite system evaluation techniques and subsequent digital computer programs. A basic reliability test system is presented which has evolved from the reliability education and research programs conducted by the Power System Research Group at the University of Saskatchewan. The basic system data necessary for adequacy evaluation at the generation and composite generation and transmission system levels are presented, together with the fundamental data required to conduct reliability-cost/reliability-worth evaluation.
Abstract:
A set of basic reliability indices at the generation and composite generation and transmission levels for a small reliability test system is presented. The test system and the results presented have evolved from reliability research and teaching programs. The indices presented are for fundamental reliability applications which should be covered in a power system reliability teaching program. The RBTS test system and the basic indices provide a valuable reference for faculty and students engaged in reliability teaching and research.