140 results for failure tree analysis


Relevance:

30.00%

Publisher:

Abstract:

It has long been recognised that government and public sector services suffer an innovation deficit compared to private or market-based services. This paper argues that this deficit can be explained as an unintended consequence of the concerted public sector drive toward the elimination of waste through efficiency, accountability and transparency. Yet in an evolving economy this can be a false efficiency, as it also eliminates the 'good waste' that is a necessary cost of experimentation. The result is a systematic trade-off in the public sector between the static efficiency of minimizing the misuse of public resources and the dynamic efficiency of experimentation. This trade-off is inherently biased against risk and uncertainty, and thereby explains why governments find service innovation so difficult. In the drive to eliminate static inefficiencies, many political systems have subsequently overshot and stifled policy innovation. I propose the 'Red Queen' solution of adaptive economic policy.


Purpose: This study explored the spatial distribution of notified cryptosporidiosis cases and identified major socioeconomic factors associated with the transmission of cryptosporidiosis in Brisbane, Australia. Methods: We obtained computerized data sets on notified cryptosporidiosis cases and key socioeconomic factors by statistical local area (SLA) in Brisbane for the period 1996 to 2004 from the Queensland Department of Health and the Australian Bureau of Statistics, respectively. We used spatial empirical Bayes rate smoothing to estimate the spatial distribution of cryptosporidiosis cases. A spatial classification and regression tree (CART) model was developed to explore the relationship between socioeconomic factors and the incidence rates of cryptosporidiosis. Results: Spatial empirical Bayes analysis revealed that cryptosporidiosis infections were primarily concentrated in the northwest and southeast of Brisbane. A spatial CART model showed that the relative risk for cryptosporidiosis transmission was 2.4 when the Socio-Economic Indexes for Areas (SEIFA) score was over 1028 and the proportion of residents with low educational attainment in an SLA exceeded 8.8%. Conclusions: There was remarkable variation in the spatial distribution of cryptosporidiosis infections in Brisbane. The spatial pattern of cryptosporidiosis appears to be associated with SEIFA and the proportion of residents with low educational attainment.
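The split search at the heart of a regression tree can be sketched in a few lines. The records and the resulting threshold below are hypothetical stand-ins, not the study's data, and a real analysis would use a full spatial CART implementation rather than this single-split toy:

```python
def sse(values):
    """Sum of squared errors about the mean (CART's regression impurity)."""
    if not values:
        return 0.0
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values)

def best_split(records, feature_index):
    """Threshold on one feature that minimises the total child-node SSE."""
    best_threshold, best_score = None, float("inf")
    for threshold in sorted({r[feature_index] for r in records}):
        left = [r[2] for r in records if r[feature_index] <= threshold]
        right = [r[2] for r in records if r[feature_index] > threshold]
        if left and right and sse(left) + sse(right) < best_score:
            best_threshold, best_score = threshold, sse(left) + sse(right)
    return best_threshold

# Hypothetical SLA records: (SEIFA score, % low educational attainment,
# incidence per 100,000). Illustrative values only.
records = [
    (1040, 9.5, 24.0), (1035, 9.0, 22.0), (980, 5.0, 8.0),
    (1010, 7.0, 9.5), (950, 4.0, 7.0), (1060, 10.2, 26.0),
]

threshold = best_split(records, 0)  # split on the SEIFA feature
high = [r[2] for r in records if r[0] > threshold]
low = [r[2] for r in records if r[0] <= threshold]
relative_risk = (sum(high) / len(high)) / (sum(low) / len(low))
```

The ratio of mean incidence above and below the chosen threshold plays the role of the relative risk quoted in the Results.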


Perez-Losada et al. [1] analyzed 72 complete genomes corresponding to nine mammalian (67 strains) and two avian (5 strains) polyomavirus species using maximum likelihood and Bayesian methods of phylogenetic inference. Because the data for two of those genomes are no longer available in GenBank, in this work we analyze the phylogenetic relationships of the remaining 70 complete genomes, corresponding to nine mammalian (65 strains) and two avian (5 strains) polyomavirus species, using a dynamical language model approach developed by our group (Yu et al. [26]). This distance method does not require sequence alignment; it derives the species phylogeny from overall similarities of the complete genomes. Our best tree separates the bird polyomaviruses (avian polyomaviruses and goose hemorrhagic polyomaviruses) from the mammalian polyomaviruses, which supports the idea of splitting the genus into two subgenera. Such a split is consistent with the different viral life strategies of each group. Within the mammalian polyomavirus subgenus, mouse polyomaviruses (MPV), simian virus 40 (SV40), BK viruses (BKV) and JC viruses (JCV) are grouped as distinct branches, as expected. The topology of our best tree is quite similar to that of the tree constructed by Perez-Losada et al.
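Alignment-free whole-genome distances of this general kind compare word-frequency profiles of the sequences rather than aligned positions. The sketch below uses plain k-mer counts with a cosine dissimilarity as an illustrative stand-in; the dynamical language model itself is more elaborate, and the toy sequences are not real polyomavirus genomes:

```python
from collections import Counter
from math import sqrt

def kmer_profile(seq, k=3):
    """Frequency profile of all overlapping words of length k."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def genome_distance(a, b, k=3):
    """1 - cosine similarity between k-mer frequency vectors."""
    pa, pb = kmer_profile(a, k), kmer_profile(b, k)
    dot = sum(pa[w] * pb[w] for w in set(pa) | set(pb))
    na = sqrt(sum(v * v for v in pa.values()))
    nb = sqrt(sum(v * v for v in pb.values()))
    return 1.0 - dot / (na * nb)

# Toy sequences only: identical genomes sit at distance 0,
# genomes sharing no k-mers at distance 1.
d_same = genome_distance("ACGTACGTACGT", "ACGTACGTACGT")
d_diff = genome_distance("ACGTACGTACGT", "GGGTTTGGGTTT")
```

A distance matrix built this way can then be fed to any standard distance-based tree-building method.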


World economies increasingly demand reliable and economical power supply and distribution. To achieve this aim, the majority of power systems are becoming interconnected, with several power utilities supplying one large network. One problem that occurs in a large interconnected power system is the regular occurrence of system disturbances, which can result in the creation of intra-area oscillating modes. These modes can be regarded as the transient responses of the power system to excitation, and are generally characterised as decaying sinusoids. For a power system operating ideally, these transient responses would have a "ring-down" time of 10-15 seconds. Sometimes equipment failures disturb the ideal operation of power systems, and oscillating modes with ring-down times greater than 15 seconds arise. The larger settling times associated with such "poorly damped" modes cause substantial power flows between generation nodes, resulting in significant physical stresses on the power distribution system. If these modes are not just poorly damped but "negatively damped", catastrophic failures of the system can occur. To ensure the stability and security of large power systems, the potentially dangerous oscillating modes generated by disturbances (such as equipment failure) must be quickly identified; the power utility must then apply appropriate damping control strategies. In power system monitoring there are two facets of critical interest. The first is the estimation of modal parameters for a power system in normal, stable operation. The second is the rapid detection of any substantial change to this normal, stable operation (because of equipment breakdown, for example). Most work to date has concentrated on the first of these two facets, i.e. on modal parameter estimation. Numerous modal parameter estimation techniques have been proposed and implemented, but all have limitations [1-13].
One of the key limitations of all existing parameter estimation methods is that they require very long data records to provide accurate parameter estimates. This is a particularly significant problem after a sudden detrimental change in damping: one simply cannot afford to wait long enough to collect the large amounts of data required by existing parameter estimators. Motivated by this gap in the current body of knowledge and practice, the research reported in this thesis focuses heavily on rapid detection of changes (i.e. on the second facet mentioned above). This thesis reports on a number of new algorithms which can rapidly flag whether or not there has been a detrimental change to a stably operating power system. It will be seen that the new algorithms enable sudden modal changes to be detected within quite short time frames (typically about 1 minute), using data from power systems in normal operation. The new methods reported in this thesis are summarised below.

The Energy Based Detector (EBD): The rationale for this method is that the modal disturbance energy is greater for lightly damped modes than for heavily damped modes (because the latter decay more rapidly). Sudden changes in modal energy therefore imply sudden changes in modal damping. Because the method relies on data from power systems in normal operation, the modal disturbances are random. Accordingly, the disturbance energy is modelled as a random process (with the parameters of the model determined from the power system under consideration), and a threshold is then set based on the statistical model. The energy method is very simple to implement and is computationally efficient. It is, however, only able to determine whether or not a sudden modal deterioration has occurred; it cannot identify which mode has deteriorated. For this reason the method is particularly well suited to smaller interconnected power systems that involve only a single mode.
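The energy-based idea can be sketched with synthetic data: model the windowed energy of the "normal" response as a random variable, set a threshold from its statistics, and alarm on windows that exceed it. The signal model, window length and three-sigma rule below are illustrative assumptions, not the thesis's actual formulation:

```python
import math
import random

random.seed(0)

def windowed_energy(signal, window):
    """Energy of consecutive non-overlapping windows of the signal."""
    return [sum(x * x for x in signal[i:i + window])
            for i in range(0, len(signal) - window + 1, window)]

# "Normal" operation: well-damped modes, small random disturbance response.
normal = [random.gauss(0, 1.0) for _ in range(400)]
# "Deteriorated" operation: lightly damped modes decay slowly, so the same
# disturbances produce a larger response and hence larger window energy.
deteriorated = [random.gauss(0, 3.0) for _ in range(400)]

baseline = windowed_energy(normal, 50)
mean = sum(baseline) / len(baseline)
std = math.sqrt(sum((e - mean) ** 2 for e in baseline) / len(baseline))
threshold = mean + 3 * std  # alarm level from the statistical model

alarms = [e > threshold for e in windowed_energy(deteriorated, 50)]
```

Note that, exactly as the text says, an alarm tells you *that* damping deteriorated, not *which* mode is responsible.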
Optimal Individual Mode Detector (OIMD): As discussed in the previous paragraph, the energy detector can only determine whether or not a change has occurred; it cannot flag which mode is responsible for the deterioration. The OIMD seeks to address this shortcoming. It uses optimal detection theory to test for sudden changes in individual modes. In practice, one can have an OIMD operating for every mode within a system, so that changes in any of the modes can be detected. Like the energy detector, the OIMD is based on a statistical model and a subsequently derived threshold test.

The Kalman Innovation Detector (KID): This detector is an alternative to the OIMD. Unlike the OIMD, however, it does not explicitly monitor individual modes. Rather, it relies on a key property of a Kalman filter, namely that the Kalman innovation (the difference between the estimated and observed outputs) is white as long as the Kalman filter model is valid. A Kalman filter model is set up to represent a particular power system. If some event (such as equipment failure) causes a sudden change to the power system, the Kalman model will no longer be valid and the innovation will no longer be white. Furthermore, if there is a detrimental system change, the innovation spectrum will display strong peaks at the frequency locations associated with the change. Hence the innovation spectrum can be monitored both to set off an "alarm" when a change occurs and to identify which modal frequency has given rise to the change. The threshold for alarming is based on the simple chi-squared PDF for a normalised white noise spectrum [14, 15]. While the method can identify the mode which has deteriorated, it does not necessarily indicate whether there has been a frequency or damping change. The PPM, discussed next, can monitor frequency changes and so provides some discrimination in this regard.
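The KID's core idea, that innovations stay white only while the model is valid, can be illustrated with a whiteness check. Here a trivial constant-level "model" stands in for the Kalman filter, and a lag-1 autocorrelation test stands in for the chi-squared spectral test; both substitutions are illustrative simplifications:

```python
import math
import random

random.seed(1)

def lag1_autocorr(xs):
    """Lag-1 sample autocorrelation; near zero for white noise."""
    mean = sum(xs) / len(xs)
    num = sum((xs[i] - mean) * (xs[i - 1] - mean) for i in range(1, len(xs)))
    den = sum((x - mean) ** 2 for x in xs)
    return num / den

n = 500
# While the model is valid, the innovations are white noise.
white = [random.gauss(0, 1) for _ in range(n)]
# After an unmodelled system change, an oscillation leaks into the
# innovations, which then become strongly correlated.
changed = [math.sin(0.3 * i) + random.gauss(0, 0.3) for i in range(n)]

bound = 0.15  # generous whiteness bound; white noise is O(1/sqrt(n))
still_valid = abs(lag1_autocorr(white)) < bound
change_flagged = abs(lag1_autocorr(changed)) >= bound
```

In the full method the test is performed on the normalised innovation spectrum, whose peaks additionally point to the modal frequency that changed.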
The Polynomial Phase Method (PPM): In [16] the cubic phase (CP) function was introduced as a tool for revealing frequency-related spectral changes. This thesis extends the cubic phase function to a generalised class of polynomial phase functions which can reveal frequency-related spectral changes in power systems. A statistical analysis of the technique is performed. When applied to power system analysis, the PPM can provide knowledge of sudden shifts in frequency through both the new frequency estimate and the polynomial phase coefficient information. This knowledge can then be cross-referenced with other detection methods to provide improved detection benchmarks.
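The simplest version of the task the PPM addresses, spotting that a modal frequency has shifted, can be shown with a plain DFT peak search. This is only an assumption-laden stand-in: the PPM itself estimates polynomial phase coefficients rather than a single spectral peak, and the 50 Hz sampling rate and tone frequencies below are made up:

```python
import cmath
import math

def dft_peak_freq(xs, fs):
    """Frequency (Hz) of the largest DFT bin below the Nyquist rate."""
    n = len(xs)
    best_k, best_mag = 0, 0.0
    for k in range(1, n // 2):
        coef = sum(x * cmath.exp(-2j * math.pi * k * i / n)
                   for i, x in enumerate(xs))
        if abs(coef) > best_mag:
            best_k, best_mag = k, abs(coef)
    return best_k * fs / n

fs = 50.0  # sampling rate in Hz (illustrative)
before = [math.sin(2 * math.pi * 1.0 * i / fs) for i in range(200)]
after = [math.sin(2 * math.pi * 1.5 * i / fs) for i in range(200)]
f_before = dft_peak_freq(before, fs)
f_after = dft_peak_freq(after, fs)
```

Comparing the estimated frequency before and after a suspected event is the kind of cross-reference the thesis proposes against the other detectors.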


During the past decade, a significant amount of research has been conducted internationally with the aim of developing, implementing, and verifying "advanced analysis" methods suitable for non-linear analysis and design of steel frame structures. Application of these methods permits comprehensive assessment of the actual failure modes and ultimate strengths of structural systems in practical design situations, without resort to simplified elastic methods of analysis and semi-empirical specification equations. Advanced analysis has the potential to extend the creativity of structural engineers and simplify the design process, while ensuring greater economy and more uniform safety with respect to the ultimate limit state. The application of advanced analysis methods has previously been restricted to steel frames comprising only members with compact cross-sections that are not subject to the effects of local buckling. This precluded the use of advanced analysis in the design of steel frames comprising a significant proportion of the most commonly used Australian sections, which are non-compact and subject to the effects of local buckling. This thesis contains a detailed description of research conducted over the past three years in an attempt to extend the scope of advanced analysis by developing methods that include the effects of local buckling in a non-linear analysis formulation, suitable for practical design of steel frames comprising non-compact sections. Two alternative concentrated plasticity formulations are presented in this thesis: the refined plastic hinge method and the pseudo plastic zone method. Both methods implicitly account for the effects of gradual cross-sectional yielding, longitudinal spread of plasticity, initial geometric imperfections, residual stresses, and local buckling.
The accuracy and precision of the methods for the analysis of steel frames comprising non-compact sections have been established by comparison with a comprehensive range of analytical benchmark frame solutions. Both the refined plastic hinge and pseudo plastic zone methods are more accurate and precise than the conventional individual member design methods based on elastic analysis and specification equations. For example, the pseudo plastic zone method predicts the ultimate strength of the analytical benchmark frames with an average conservative error of less than one percent, and has an acceptable maximum unconservative error of less than five percent. The pseudo plastic zone model can allow the design capacity to be increased by up to 30 percent for simple frames, mainly due to the consideration of inelastic redistribution. The benefits may be even more significant for complex frames with significant redundancy, which provides greater scope for inelastic redistribution. The analytical benchmark frame solutions were obtained using a distributed plasticity shell finite element model. A detailed description of this model and the results of all 120 benchmark analyses are provided. The model explicitly accounts for the effects of gradual cross-sectional yielding, longitudinal spread of plasticity, initial geometric imperfections, residual stresses, and local buckling. Its accuracy was verified by comparison with a variety of analytical solutions and the results of three large-scale experimental tests of steel frames comprising non-compact sections. A description of the experimental method and test results is also provided.
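The error statistics quoted above (average conservative error, maximum unconservative error) amount to simple arithmetic on predicted versus benchmark ultimate strengths. A hedged sketch, with strengths normalised to the benchmark and entirely hypothetical prediction values:

```python
def percent_errors(predicted, benchmark):
    """Positive = conservative (predicted strength below the benchmark)."""
    return [100.0 * (b - p) / b for p, b in zip(predicted, benchmark)]

# Benchmark ultimate strengths normalised to 1.0; predictions are
# illustrative numbers, not the thesis's 120 benchmark results.
predicted = [0.99, 1.01, 0.97, 1.02, 1.00]
benchmark = [1.00, 1.00, 1.00, 1.00, 1.00]

errs = percent_errors(predicted, benchmark)
mean_error = sum(errs) / len(errs)          # average conservative error (%)
worst_unconservative = min(errs)            # most unconservative error (%)
```

A design-acceptance criterion of the kind described would then bound `mean_error` near zero from above and `worst_unconservative` from below.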


Digital collections are growing exponentially in size as the information age takes a firm grip on all aspects of society. As a result, Information Retrieval (IR) has become an increasingly important area of research. It promises to provide new and more effective ways for users to find information relevant to their search intentions. Document clustering is one of the many tools in the IR toolbox and is far from being perfected. It groups documents that share common features. This grouping allows a user to quickly identify relevant information; if these groups are misleading, then valuable information can accidentally be ignored. Therefore, the study and analysis of the quality of document clustering is important. With more and more digital information available, the performance of these algorithms is also of interest. An algorithm with a time complexity of O(n²) can quickly become impractical when clustering a corpus containing millions of documents. Therefore, the investigation of algorithms and data structures to perform clustering in an efficient manner is vital to its success as an IR tool. Document classification is another tool frequently used in the IR field. It predicts categories of new documents based on an existing database of (document, category) pairs. Support Vector Machines (SVM) have been found to be effective when classifying text documents. As the algorithms for classification are both efficient and of high quality, the largest gains can be made from improvements to representation. Document representations are vital for both clustering and classification. Representations exploit the content and structure of documents. Dimensionality reduction can improve the effectiveness of existing representations in terms of quality and run-time performance. Research into these areas is another way to improve the efficiency and quality of clustering and classification results.

Evaluating document clustering is a difficult task. Intrinsic measures of quality such as distortion only indicate how well an algorithm minimised a similarity function in a particular vector space. Intrinsic comparisons are inherently limited by the given representation and are not comparable between different representations. Extrinsic measures of quality compare a clustering solution to a "ground truth" solution, which allows comparison between different approaches. As the "ground truth" is created by humans, it can suffer from the fact that not every human interprets a topic in the same manner; whether a document belongs to a particular topic or not can be subjective.
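Purity is one of the simplest extrinsic measures of the kind described: each cluster is credited with its most common ground-truth label. The labels and clusterings below are hypothetical toy data:

```python
from collections import Counter

def purity(clusters, truth):
    """clusters/truth: parallel lists giving each document's cluster id
    and human-assigned topic label; 1.0 means every cluster is pure."""
    by_cluster = {}
    for c, t in zip(clusters, truth):
        by_cluster.setdefault(c, []).append(t)
    credited = sum(Counter(members).most_common(1)[0][1]
                   for members in by_cluster.values())
    return credited / len(truth)

truth    = ["sport", "sport", "politics", "politics", "tech", "tech"]
perfect  = [0, 0, 1, 1, 2, 2]   # matches the ground truth exactly
confused = [0, 1, 0, 1, 0, 1]   # every cluster mixes all three topics
```

Because it needs only the label assignments, purity can compare clusterings produced from entirely different document representations, which intrinsic measures like distortion cannot.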


Bearing damage in modern inverter-fed AC drive systems is more common than in motors working with a 50 or 60 Hz power supply. Fast switching transients and the common mode voltage generated by a PWM inverter cause unwanted shaft voltage and resultant bearing currents. Parasitic capacitive coupling creates a path for discharge currents in rotors and bearings. In order to analyze bearing current discharges and their effect on bearing damage under different conditions, calculation of the capacitive coupling between the outer and inner races is needed. During motor operation, the distances between the balls and races may change the capacitance values. Due to changes in the thickness and spatial distribution of the lubricating grease, this capacitance does not have a constant value and is known to change with speed and load. Thus, the resultant electric field between the races and balls varies with motor speed. The lubricating grease in the ball bearing cannot withstand high voltages, and a short circuit through the lubricating grease can occur. At low speeds, because of gravity, the balls and shaft may shift downward, making the system (ball positions and shaft) asymmetric. In this study, two different asymmetric cases (asymmetric ball position, asymmetric shaft position) are analyzed and the results are compared with the symmetric case. The objective of this paper is to calculate the capacitive coupling and electric fields between the outer and inner races and the balls at different motor speeds in symmetrical and asymmetrical shaft and ball positions. The analysis is carried out using finite element simulations to determine the conditions which will increase the probability of high rates of bearing failure due to current discharges through the balls and races.


Objective--To determine whether heart failure with preserved systolic function (HFPSF) has a different natural history from left ventricular systolic dysfunction (LVSD). Design and setting--A retrospective analysis of 10 years of data (for patients admitted between 1 July 1994 and 30 June 2004, with a study census date of 30 June 2005) routinely collected as part of clinical practice in a large tertiary referral hospital. Main outcome measures--Sociodemographic characteristics, diagnostic features, comorbid conditions, pharmacotherapies, readmission rates and survival. Results--Of the 2961 patients admitted with chronic heart failure, 753 had echocardiograms available for this analysis. Of these, 189 (25%) had normal left ventricular size and systolic function. In comparison with patients with LVSD, those with HFPSF were more often female (62.4% v 38.5%; P = 0.001), had less social support, were more likely to live in nursing homes (17.9% v 7.6%; P < 0.001), and had a greater prevalence of renal impairment (86.7% v 6.2%; P = 0.004), anaemia (34.3% v 6.3%; P = 0.013) and atrial fibrillation (51.3% v 47.1%; P = 0.008), but significantly less ischaemic heart disease (53.4% v 81.2%; P = 0.001). Patients with HFPSF were less likely to be prescribed an angiotensin-converting enzyme inhibitor (61.9% v 72.5%; P = 0.008); carvedilol was used more frequently in LVSD (1.5% v 8.8%; P < 0.001). Readmission rates were higher in the HFPSF group (median, 2 v 1.5 admissions; P = 0.032), particularly for malignancy (4.2% v 1.8%; P < 0.001) and anaemia (3.9% v 2.3%; P < 0.001). Both groups had the same poor survival rate (P = 0.912). Conclusions--Patients with HFPSF were predominantly older women with less social support and higher readmission rates for associated comorbid illnesses. We therefore propose that the reduced survival in HFPSF may relate more to comorbid conditions than to suboptimal cardiac management.


Auto rickshaws (three-wheelers) are the most sought-after transport among the urban and rural poor in India. The assembly of the vehicle involves assemblies of several major components. The L-angle is the component that connects the front panel with the vehicle floor. The current L-angle has been observed to fail by permanent deformation over time. This paper studies the effect of adding stiffeners to the L-angle to increase the strength of the component. A physical model of the L-angle was reverse engineered and modelled in CAD before static loading analyses were carried out on the model using finite element analysis. The modified L-angle fitted with stiffeners was shown to withstand more load compared to the previous design.


Background: It remains unclear whether it is possible to develop a spatiotemporal epidemic prediction model for cryptosporidiosis. This paper examined the impact of socioeconomic and weather factors on cryptosporidiosis and explored the possibility of developing such a model using socioeconomic and weather data in Queensland, Australia.

Methods: Data on weather variables, notified cryptosporidiosis cases and socioeconomic factors in Queensland were supplied by the Australian Bureau of Meteorology, the Queensland Department of Health, and the Australian Bureau of Statistics, respectively. Three-stage spatiotemporal classification and regression tree (CART) models were developed to examine the association between socioeconomic and weather factors and the monthly incidence of cryptosporidiosis in Queensland, Australia. The spatiotemporal CART model was used for predicting outbreaks of cryptosporidiosis in Queensland.

Results: The results of the classification tree model (with incidence rates defined as binary presence/absence) showed that there was an 87% chance of an occurrence of cryptosporidiosis in a local government area (LGA) if the socio-economic index for the area (SEIFA) exceeded 1021, while the results of the regression tree model (based on non-zero incidence rates) showed that when SEIFA was between 892 and 945 and temperature exceeded 32°C, the relative risk (RR) of cryptosporidiosis was 3.9 (mean morbidity: 390.6/100,000, standard deviation (SD): 310.5) compared to the monthly average incidence of cryptosporidiosis. When SEIFA was less than 892, the RR of cryptosporidiosis was 4.3 (mean morbidity: 426.8/100,000, SD: 319.2). A prediction map for cryptosporidiosis outbreaks was made according to the outputs of the spatiotemporal CART models.

Conclusions: The results of this study suggest that spatiotemporal CART models based on socioeconomic and weather variables can be used for predicting outbreaks of cryptosporidiosis in Queensland, Australia.
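The regression-tree rules quoted in the Results can be read as a small decision procedure. A hedged sketch encoding just those two rules as plain conditionals: the SEIFA and temperature thresholds and RR values come from the abstract, but the function shape and the baseline return value of 1.0 are my assumptions (the classification-tree presence/absence rule is a separate model and is not encoded here):

```python
def predicted_relative_risk(seifa, temperature_c):
    """RR of cryptosporidiosis implied by the quoted regression-tree rules."""
    if seifa < 892:
        return 4.3
    if seifa <= 945 and temperature_c > 32:
        return 3.9
    return 1.0  # baseline: no elevated risk flagged by these two rules

rr_low_seifa = predicted_relative_risk(880, 25.0)   # SEIFA < 892
rr_hot_month = predicted_relative_risk(920, 33.0)   # SEIFA 892-945, > 32 °C
```

Applying such rules per LGA and month is essentially how a prediction map of the kind described is produced.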


Modern society has come to expect electrical energy on demand, while many of the facilities in power systems are aging beyond repair and maintenance. The risk of failure increases as equipment ages and can pose serious consequences for continuity of electricity supply. As the equipment used in high voltage power networks is very expensive, it may not be economically feasible to purchase and store spares in a warehouse for extended periods of time. On the other hand, there is normally a significant lead time before equipment is received once it is ordered. This situation has created considerable interest in the evaluation and application of probability methods for aging plant and the provision of spares in bulk supply networks, and can be of particular importance for substations. Quantitative adequacy assessment of substation and sub-transmission power systems is generally done using a contingency enumeration approach, which includes the evaluation of contingencies and their classification based on selected failure criteria. The problem is very complex because of the need to model the operation of substation and sub-transmission equipment in detail using network flow evaluation, and to consider multiple levels of component failures. In this thesis, a new model associated with aging equipment is developed that combines the standard treatment of random failures with a specific model for aging failures. This technique is applied to include and examine the impact of aging equipment on the system reliability of bulk supply loads and consumers in a distribution network over a defined range of planning years. The power system risk indices depend on many factors, such as the actual physical network configuration and operation, the aging condition of the equipment, and the relevant constraints.
The impact and importance of equipment reliability on power system risk indices in a network with aging facilities contain valuable information for utilities to better understand network performance and the weak links in the system. In this thesis, algorithms are developed to measure the contribution of individual equipment to the power system risk indices, as part of a novel risk analysis tool. A new cost/worth approach was also developed that supports early decisions in planning for replacement activities concerning non-repairable aging components, in order to maintain an economically acceptable level of system reliability. The concepts, techniques and procedures developed in this thesis are illustrated numerically using published test systems. It is believed that the methods and approaches presented substantially improve the accuracy of risk predictions by explicitly considering the effect of equipment entering a period of increased risk of non-repairable failure.
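The distinction the thesis draws between random failures (constant hazard rate) and aging failures (hazard rising with age) is commonly illustrated with a Weibull model, where shape 1 reduces to the exponential random-failure case and shape above 1 gives an increasing hazard. The sketch below uses illustrative parameters, not any utility's equipment data, and is not the thesis's actual model:

```python
import math

def weibull_hazard(t, shape, scale):
    """Instantaneous failure rate at age t for a Weibull(shape, scale) life."""
    return (shape / scale) * (t / scale) ** (shape - 1)

def weibull_survival(t, shape, scale):
    """Probability the component survives beyond age t."""
    return math.exp(-((t / scale) ** shape))

# shape = 1: constant hazard, i.e. the purely random-failure model.
random_failure = weibull_hazard(30.0, 1.0, 40.0)
# shape > 1: the hazard rises with age, modelling aging failures.
aging_20 = weibull_hazard(20.0, 3.0, 40.0)
aging_40 = weibull_hazard(40.0, 3.0, 40.0)
s_young = weibull_survival(10.0, 3.0, 40.0)
s_old = weibull_survival(60.0, 3.0, 40.0)
```

Feeding age-dependent hazards like these into a contingency enumeration, instead of constant rates, is what lets the risk indices reflect equipment entering its period of increased failure risk.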


Background: Specialised disease management programmes for chronic heart failure (CHF) improve survival, quality of life and reduce healthcare utilisation. The overall efficacy of structured telephone support or telemonitoring as an individual component of a CHF disease management strategy remains inconclusive. Objectives: To review randomised controlled trials (RCTs) of structured telephone support or telemonitoring compared to standard practice for patients with CHF, in order to quantify the effects of these interventions over and above usual care for these patients. Search strategy: Databases (the Cochrane Central Register of Controlled Trials (CENTRAL), Database of Abstracts of Reviews of Effects (DARE) and Health Technology Assessment Database (HTA) on The Cochrane Library, MEDLINE, EMBASE, CINAHL, AMED and Science Citation Index Expanded and Conference Citation Index on ISI Web of Knowledge) and various search engines were searched from 2006 to November 2008 to update a previously published non-Cochrane review. Bibliographies of relevant studies and systematic reviews and abstract conference proceedings were handsearched. No language limits were applied. Selection criteria: Only peer-reviewed, published RCTs comparing structured telephone support or telemonitoring to usual care of CHF patients were included. Unpublished abstract data were included in sensitivity analyses. The intervention or usual care could not include a home visit or more than the usual (four to six weeks) clinic follow-up. Data collection and analysis: Data were presented as risk ratios (RR) with 95% confidence intervals (CI). Primary outcomes included all-cause mortality and all-cause and CHF-related hospitalisations, which were meta-analysed using fixed effects models. Other outcomes included length of stay, quality of life, acceptability and cost, and these were described and tabulated. Main results: Twenty-five studies and five published abstracts were included.
Of the 25 full peer-reviewed studies meta-analysed, 16 evaluated structured telephone support (5613 participants), 11 evaluated telemonitoring (2710 participants), and two tested both interventions (included in both counts). Telemonitoring reduced all-cause mortality (RR 0.66, 95% CI 0.54 to 0.81, P < 0.0001), with structured telephone support demonstrating a non-significant positive effect (RR 0.88, 95% CI 0.76 to 1.01, P = 0.08). Both structured telephone support (RR 0.77, 95% CI 0.68 to 0.87, P < 0.0001) and telemonitoring (RR 0.79, 95% CI 0.67 to 0.94, P = 0.008) reduced CHF-related hospitalisations. For both interventions, several studies reported improved quality of life, reduced healthcare costs and acceptability to patients. Improvements in prescribing, patient knowledge and self-care, and New York Heart Association (NYHA) functional class were also observed. Authors' conclusions: Structured telephone support and telemonitoring are effective in reducing the risk of all-cause mortality and CHF-related hospitalisations in patients with CHF; they also improve quality of life, reduce costs, and improve evidence-based prescribing.
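Fixed-effect pooling of risk ratios, as used for the primary outcomes above, is inverse-variance weighting on the log scale. A minimal sketch with entirely hypothetical trial counts (not the review's data):

```python
import math

def pooled_rr(trials):
    """Fixed-effect pooled risk ratio.

    trials: list of (events_tx, n_tx, events_ctrl, n_ctrl) tuples.
    Each trial's log RR is weighted by the inverse of its approximate
    variance 1/a - 1/n1 + 1/c - 1/n2."""
    num = den = 0.0
    for a, n1, c, n2 in trials:
        log_rr = math.log((a / n1) / (c / n2))
        var = 1 / a - 1 / n1 + 1 / c - 1 / n2
        weight = 1 / var
        num += weight * log_rr
        den += weight
    return math.exp(num / den)

# Hypothetical trials: (deaths_intervention, n, deaths_control, n).
trials = [(30, 200, 45, 200), (12, 150, 20, 150), (8, 100, 14, 100)]
rr = pooled_rr(trials)  # < 1 means the intervention reduced the risk
```

Larger trials get proportionally more weight, which is why a single large RCT can dominate a fixed-effect pooled estimate.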