993 results for Failure Probability
Abstract:
The purpose of this study was to describe patterns of medical and nursing practice in the care of patients dying of oncological and hematological malignancies in the acute care setting in Australia. A tool validated in a similar American study was used to study the medical records of 100 consecutive patients who died of oncological or hematological malignancies before August 1999 at The Canberra Hospital in the Australian Capital Territory. The three major indicators of patterns of end-of-life care were documentation of Do Not Resuscitate (DNR) orders, evidence that the patient was considered dying, and the presence of a palliative care intention. Findings were that 88 patients were documented DNR, 63 patients' records suggested that the patient was dying, and 74 patients had evidence of a palliative care plan. Forty-six patients were documented DNR 2 days or less prior to death and, of these, 12 were documented the day of death. Similar patterns emerged for days between considered dying and death, and between palliative care goals and death. Sixty patients had active treatment in progress at the time of death. The late implementation of end-of-life management plans and the lack of consistency within these plans suggested that patients were subjected to medical interventions and investigations up to the time of death. Implications for palliative care teams include the need to educate health care staff and to plan and implement policy regarding the management of dying patients in the acute care setting. Although the health care system in Australia has cultural differences when compared to the American context, this research suggests that the treatment imperative to prolong life is similar to that found in American-based studies.
Abstract:
The ability to accurately predict the remaining useful life of machine components is critical for continuous machine operation, and can also improve productivity and enhance system safety. In condition-based maintenance (CBM), maintenance is performed based on information collected through condition monitoring and an assessment of the machine health. Effective diagnostics and prognostics are important aspects of CBM, enabling maintenance engineers to schedule a repair and to acquire replacement components before the components actually fail. All machine components are subject to degradation processes in real environments, and they have certain failure characteristics which can be related to the operating conditions. This paper describes a technique for accurate assessment of the remnant life of machines based on health state probability estimation, drawing on historical knowledge embedded in closed-loop diagnostic and prognostic systems. The technique uses a Support Vector Machine (SVM) classifier as a tool for estimating the health state probability of machine degradation, the accuracy of which governs the accuracy of the remaining life prediction. To validate the feasibility of the proposed model, real-life historical data from bearings of High Pressure Liquefied Natural Gas (HP-LNG) pumps were analysed and used to obtain the optimal prediction of remaining useful life. The results obtained were very encouraging and showed that the proposed prognostic system based on health state probability estimation has the potential to be used as an estimation tool for remnant life prediction in industrial machinery.
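A minimal sketch of the general idea, assuming scikit-learn and synthetic data (the features, health states and nominal lifetimes below are illustrative assumptions, not the authors' actual HP-LNG pump pipeline):

```python
# Hedged illustration: SVM-based health-state probability estimation
# feeding a remaining-useful-life (RUL) estimate. Synthetic data only.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Synthetic condition-monitoring features (e.g. vibration RMS, kurtosis)
# for three discretised health states: 0 = healthy, 1 = degraded, 2 = near failure.
X = np.vstack([rng.normal(loc=m, scale=0.5, size=(100, 2)) for m in (0.0, 1.5, 3.0)])
y = np.repeat([0, 1, 2], 100)

# Probabilistic SVM classifier (Platt scaling via probability=True).
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True, random_state=0))
clf.fit(X, y)

# Assumed nominal remaining life (hours) associated with each health state.
state_rul = np.array([5000.0, 1000.0, 100.0])

# Expected RUL = probability-weighted average over the health states.
x_new = np.array([[1.2, 1.6]])
p = clf.predict_proba(x_new)[0]
print("state probabilities:", p.round(3))
print("expected RUL (h):", float(p @ state_rul))
```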
Abstract:
It is trite law that a lawyer owes their client a duty of care requiring the lawyer to take reasonable steps to avoid the client suffering foreseeable economic loss: Hawkins v Clayton. In the context of a property transaction this will include a duty to warn the client of anything that is unusual or anything which may affect the client obtaining the full benefit of the contract entered into: Macindoe v Parbery.
Abstract:
The ability to estimate asset reliability and the probability of failure is critical to reducing maintenance costs, operation downtime, and safety hazards. Predicting the survival time and the probability of failure at future times is an indispensable requirement in prognostics and asset health management. In traditional reliability models, the lifetime of an asset is estimated using failure event data alone; however, statistically sufficient failure event data are often difficult to attain in real-life situations due to poor data management, effective preventive maintenance, and the small populations of identical assets in use. Condition indicators and operating environment indicators are two types of covariate data that are normally obtained in addition to failure event and suspended data. These data contain significant information about the state and health of an asset. Condition indicators reflect the level of degradation of assets, while operating environment indicators accelerate or decelerate the lifetime of assets. When these data are available, an alternative approach to traditional reliability analysis is to model condition indicators and operating environment indicators, and their failure-generating mechanisms, using a covariate-based hazard model. The literature review indicates that a number of covariate-based hazard models have been developed. All of these existing covariate-based hazard models are based on the underlying theory of the Proportional Hazard Model (PHM). However, most of these models have not attracted much attention in the field of machinery prognostics. Moreover, due to the prominence of PHM, attempts at developing alternative models have, to some extent, been stifled, although a number of alternatives to PHM have been suggested. The existing covariate-based hazard models fail to fully incorporate the three available types of asset health information (failure event data (i.e. observed and/or suspended), condition data, and operating environment data) into a single model for more effective hazard and reliability predictions. In addition, current research shows that condition indicators and operating environment indicators have different characteristics and are non-homogeneous covariate data: condition indicators act as response variables (or dependent variables), whereas operating environment indicators act as explanatory variables (or independent variables). However, these non-homogeneous covariate data were modelled in the same way for hazard prediction in the existing covariate-based hazard models. The related and yet more imperative question is how both of these indicators should be effectively modelled and integrated into a covariate-based hazard model. This work presents a new approach for addressing the aforementioned challenges. The new covariate-based hazard model, termed the Explicit Hazard Model (EHM), explicitly and effectively incorporates all three types of available asset health information into the modelling of hazard and reliability predictions, and also captures the relationship between actual asset health and condition measurements as well as operating environment measurements. The theoretical development of the model and its parameter estimation method are demonstrated in this work. EHM assumes that the baseline hazard is a function of both time and condition indicators.
Condition indicators provide information about the health condition of an asset; therefore they update and reform the baseline hazard of EHM according to the health state of the asset at a given time t. Some examples of condition indicators are the vibration of rotating machinery, the level of metal particles in engine oil analysis, and wear in a component, to name but a few. Operating environment indicators in this model are failure accelerators and/or decelerators that are included in the covariate function of EHM and may increase or decrease the value of the hazard relative to the baseline hazard. These indicators are caused by the environment in which an asset operates and have not been explicitly captured by the condition indicators (e.g. loads, environmental stresses, and other dynamically changing environmental factors). While the effects of operating environment indicators may be nil in EHM, condition indicators are always present, because they are observed and measured for as long as an asset remains operational. EHM has several advantages over the existing covariate-based hazard models. One is that the model utilises three different sources of asset health data (i.e. population characteristics, condition indicators, and operating environment indicators) to effectively predict hazard and reliability. Another is that EHM explicitly investigates the relationship between the condition and operating environment indicators associated with the hazard of an asset. Furthermore, the proportionality assumption, from which most covariate-based hazard models suffer, does not exist in EHM. Depending on the sample size of failure/suspension times, EHM is extended into two forms: semi-parametric and non-parametric. The semi-parametric EHM assumes a specified lifetime distribution (i.e. the Weibull distribution) in the form of the baseline hazard. However, in many industrial applications, due to sparse failure event data, the analysis of such data often involves complex distributional shapes about which little is known. Therefore, to avoid the restrictive assumption of a specified lifetime distribution for failure event histories in the semi-parametric EHM, the non-parametric EHM, a distribution-free model, has been developed. The development of EHM in two forms is another merit of the model. A case study was conducted using laboratory experiment data to validate the practicality of both the semi-parametric and non-parametric EHMs. The performance of the newly developed models is appraised by comparing their estimates with those of the other existing covariate-based hazard models. The comparison demonstrated that both the semi-parametric and non-parametric EHMs outperform the existing covariate-based hazard models. Future research directions are also identified, including a new parameter estimation method for the case of time-dependent covariate effects and missing data, application of EHM to both repairable and non-repairable systems using field data, and a decision support model linked to the estimated reliability results.
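For orientation, the structural contrast the abstract describes can be sketched in standard hazard-model notation (a hedged illustration of the general structure; the exact functional form of EHM is the thesis's own and is not reproduced here):

```latex
% Proportional Hazard Model (PHM): covariates scale a time-only baseline.
% h_0(t): baseline hazard, z(t): covariates, \beta: regression coefficients.
h(t \mid z) = h_0(t)\,\exp\!\big(\beta^{\top} z(t)\big)

% EHM-style structure (illustrative): the baseline hazard depends on both
% time and the condition indicators c(t), while the operating environment
% indicators e(t) act as explicit accelerators/decelerators via some
% covariate function \psi.
h\big(t \mid c(t), e(t)\big) = h_0\big(t, c(t)\big)\,\psi\!\big(e(t)\big)
```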
Abstract:
In this paper we investigate the distribution of the product of Rayleigh distributed random variables. Considering the Mellin-Barnes inversion formula and using the saddle point approach, we obtain an upper bound for the product distribution. The accuracy of this tail approximation increases as the number of random variables in the product increases.
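The object being bounded is easy to examine empirically (a hedged sketch; the paper's Mellin-Barnes/saddle-point bound itself is not reproduced here, and the parameters are illustrative):

```python
# Empirical tail of a product of independent Rayleigh random variables.
import numpy as np

rng = np.random.default_rng(1)
n_vars, n_samples, sigma = 4, 1_000_000, 1.0

# Product of n_vars i.i.d. Rayleigh(sigma) variables.
prod = rng.rayleigh(scale=sigma, size=(n_samples, n_vars)).prod(axis=1)

# Empirical complementary CDF (tail probability) at a few thresholds;
# a saddle-point upper bound would be compared against these values.
for x in (5.0, 10.0, 20.0):
    print(f"P(product > {x:>4}) ~= {(prod > x).mean():.2e}")
```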
Abstract:
This volume puts together the works of a group of distinguished scholars and active researchers in the field of media and communication studies to reflect upon the past, present, and future of new media research. The chapters examine the implications of new media technologies for everyday life, existing social institutions, and society at large at various levels of analysis. Macro-level analyses of changing techno-social formations – such as discussions of the rise of the surveillance society and the "fifth estate" – are combined with studies of concrete and specific new media phenomena, such as the rise of Pro-Am collaboration and "fan labor" online. In the process, prominent concepts in the field of new media studies, such as social capital, displacement, and convergence, are critically examined, while new theoretical perspectives are proposed and explicated. Reflecting the interdisciplinary nature of the field of new media studies and of communication research in general, the chapters interrogate these problems through a range of theoretical and methodological approaches. The book offers students and researchers interested in the social impact of new media both critical reviews of the existing literature and inspiration for developing new research questions.
Abstract:
The decision of Applegarth J in Heartwood Architectural & Joinery Pty Ltd v Redchip Lawyers [2009] QSC 195 (27 July 2009) involved a costs order against solicitors personally. This decision is but one of several recent decisions in which the court has been persuaded that the circumstances justified costs orders against legal practitioners on the indemnity basis. These decisions serve as a reminder to practitioners of their disclosure obligations when seeking any interlocutory relief in an ex parte application. These obligations are now clearly set out in r 14.4 of the Legal Profession (Solicitors) Rule 2007 and r 25 of the 2007 Barristers Rule. Inexperience or ignorance will not excuse breaches of the duties owed to the court.
Abstract:
Structural framing systems and mechanisms designed for normal use rarely possess adequate robustness to withstand the effects of large impacts, blasts and extreme earthquakes such as those experienced in recent times. Robustness is the property of systems that enables them to survive unforeseen or unusual circumstances (Knoll & Vogel, 2009). Queensland University of Technology, in collaboration with industry, is engaged in a program of research that commenced 15 years ago to study the impact of such unforeseeable phenomena and to investigate methods of improving robustness and safety through protective mechanisms embedded or designed into structural systems. This paper highlights some of the research pertaining to seismic protection of building structures, rollover protective structures, and the effects of vehicular impact and blast on key elements in structures that could propagate catastrophic and disproportionate collapse.
Abstract:
Unlike most normal construction projects, post-disaster housing projects are diverse in nature, have unique socio-cultural and economic requirements, and are extremely dynamic, thus necessitating a meaningful and dynamic response. Post-disaster reconstruction practices that lack a strategy compatible with the severity of the disaster, community culture, socio-economic requirements, environmental conditions, government legislation, and technical and technological circumstances often fail to operate and respond effectively to the needs of the wider affected population. Factors that frequently pose real threats to the eventual success of reconstruction projects are rarely given appropriate consideration when designing such projects. Research into past reconstruction practices has shown that ignoring these factors altogether, or failing to give them meaningful consideration, can cause housing reconstruction projects to fail: they either miss their targets altogether or undergo serious modifications after occupancy, resulting in an overall loss of project resources. This article touches upon the common factors that negatively impact the outcome of such projects.
Abstract:
Aims: This paper describes the development of a risk adjustment (RA) model predictive of individual lesion treatment failure in percutaneous coronary interventions (PCI), for use in a quality monitoring and improvement program. Methods and results: Prospectively collected data for 3972 consecutive revascularisation procedures (5601 lesions) performed between January 2003 and September 2011 were studied. Data on procedures to September 2009 (n = 3100) were used to identify factors predictive of lesion treatment failure. Factors identified included lesion risk class (p < 0.001), occlusion type (p < 0.001), patient age (p = 0.001), vessel system (p < 0.04), vessel diameter (p < 0.001), unstable angina (p = 0.003) and presence of major cardiac risk factors (p = 0.01). A Bayesian RA model was built using these factors, with the predictive performance of the model tested on the remaining procedures (area under the receiver operating characteristic curve: 0.765; Hosmer–Lemeshow p value: 0.11). Cumulative sum (CUSUM), exponentially weighted moving average and funnel plots were constructed using the RA model and subjectively evaluated. Conclusion: An RA model was developed and applied to statistical process control (SPC) monitoring for lesion failure in a PCI database. If linked to appropriate quality improvement governance response protocols, SPC using this RA tool might improve quality control and risk management by identifying variation in performance based on a comparison of observed and expected outcomes.
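A hedged sketch of the risk-adjusted CUSUM idea applied here (using the standard Steiner-type likelihood-ratio formulation; the probabilities, odds-ratio alternative R and control limit h below are illustrative assumptions, not the paper's fitted values):

```python
# Risk-adjusted CUSUM for lesion-level failure monitoring (illustrative).
import numpy as np

def risk_adjusted_cusum(p, y, R=2.0, h=4.0):
    """p: predicted failure probabilities, y: observed outcomes (0/1)."""
    s, chart = 0.0, []
    for p_t, y_t in zip(p, y):
        # Log-likelihood ratio: H0 (model odds) vs H1 (odds inflated by R).
        w = (y_t * np.log(R / (1 - p_t + R * p_t))
             + (1 - y_t) * np.log(1 / (1 - p_t + R * p_t)))
        s = max(0.0, s + w)   # reset-at-zero upper CUSUM
        chart.append(s)
    signals = [i for i, v in enumerate(chart) if v > h]
    return np.array(chart), signals

# Simulated stream: RA-model predictions plus a 50% excess failure rate,
# which the chart should eventually detect.
rng = np.random.default_rng(2)
p = rng.uniform(0.01, 0.15, size=500)
y = (rng.uniform(size=500) < p * 1.5).astype(int)
chart, signals = risk_adjusted_cusum(p, y)
print("first signal at case:", signals[0] if signals else None)
```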
Abstract:
Anisotropic damage distribution and evolution have a profound effect on borehole stress concentrations. Damage evolution is an irreversible process that is not adequately described within classical equilibrium thermodynamics. Therefore, we propose a constitutive model, based on non-equilibrium thermodynamics, that accounts for anisotropic damage distribution, anisotropic damage threshold and anisotropic damage evolution. We implemented this constitutive model numerically, using the finite element method, to calculate stress–strain curves and borehole stresses. The resulting stress–strain curves are distinctively different from linear elastic-brittle and linear elastic-ideal plastic constitutive models and realistically model experimental responses of brittle rocks. We show that the onset of damage evolution leads to an inhomogeneous redistribution of material properties and stresses along the borehole wall. The classical linear elastic-brittle approach to borehole stability analysis systematically overestimates the stress concentrations on the borehole wall, because dissipative strain-softening is underestimated. The proposed damage mechanics approach explicitly models dissipative behaviour and leads to non-conservative mud window estimations. Furthermore, anisotropic rocks with preferential planes of failure, like shales, can be addressed with our model.
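As a one-line orientation for readers, damage mechanics replaces the elastic stiffness with an effective, degraded stiffness. The relation below is the standard isotropic continuum-damage illustration, not the paper's anisotropic, non-equilibrium formulation:

```latex
% Scalar damage variable D in [0, 1] degrades the effective stiffness,
% producing the dissipative strain-softening that linear elastic-brittle
% models omit. The paper generalises this to anisotropic damage tensors.
\sigma = (1 - D)\,E\,\varepsilon, \qquad 0 \le D \le 1
```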
Abstract:
Ambiguity resolution plays a crucial role in real-time kinematic GNSS positioning, which gives centimetre-precision positioning results if all the ambiguities in each epoch are correctly fixed to integers. However, incorrectly fixed ambiguities can result in large positioning offsets, of up to several metres, without notice. Hence, ambiguity validation is essential to control the quality of ambiguity resolution. Currently, the most popular ambiguity validation method is the ratio test. The criterion of the ratio test is often determined empirically. An empirically determined criterion can be dangerous, because a fixed criterion cannot fit all scenarios and does not directly control the ambiguity resolution risk. In practice, depending on the underlying model strength, the ratio test criterion can be too conservative for some models and too risky for others. A more rational approach is to determine the criterion according to the underlying model and user requirements. Missed detection of incorrect integers leads to hazardous results and should be strictly controlled; in ambiguity resolution, the missed detection rate is often known as the failure rate. In this paper, a fixed failure rate ratio test method is presented and applied in the analysis of GPS and Compass positioning scenarios. The fixed failure rate approach is derived from integer aperture estimation theory, which is theoretically rigorous. In this approach, a criteria table for the ratio test is computed based on extensive data simulations, and real-time users can determine the ratio test criterion by looking up the table. This method has been applied to medium-distance GPS ambiguity resolution, but multi-constellation and high-dimensional scenarios have not been discussed so far. In this paper, a general ambiguity validation model is derived based on hypothesis test theory, the fixed failure rate approach is introduced, and in particular the relationship between the ratio test threshold and the failure rate is examined. Finally, the factors that influence the ratio test threshold in the fixed failure rate approach are discussed on the basis of extensive data simulation. The results show that the fixed failure rate approach is a more reasonable ambiguity validation method with a proper stochastic model.
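A hedged sketch of the ratio test itself (the fixed-failure-rate threshold lookup is the paper's contribution and is not reproduced; the critical value `c` below is illustrative only):

```python
# Ratio test for GNSS ambiguity validation (illustrative sketch).
# q_best / q_second are the squared-norm residuals of the best and
# second-best integer candidates from an integer least-squares search.
# In the fixed failure rate approach, c would be looked up from a
# model-dependent criteria table rather than chosen empirically.
def ratio_test(q_best: float, q_second: float, c: float = 1.0 / 3.0) -> bool:
    """Accept the fixed integer ambiguities if q_best / q_second <= c."""
    return q_best / q_second <= c

# A strong contrast between candidates passes; a weak one is rejected
# and the float (unfixed) solution would be retained instead.
print(ratio_test(q_best=0.8, q_second=4.0))   # True  (ratio 0.2)
print(ratio_test(q_best=2.5, q_second=4.0))   # False (ratio 0.625)
```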
Abstract:
Background: Heart failure is a serious condition estimated to affect 1.5-2.0% of the Australian population, with a point prevalence of approximately 1% in people aged 50-59 years, 10% in people aged 65 years or more, and over 50% in people aged 85 years or over (National Heart Foundation of Australia and the Cardiac Society of Australia and New Zealand, 2006). Sleep disturbances are a common complaint of persons with heart failure. Disturbances of sleep can worsen heart failure symptoms, impair independence, reduce quality of life and lead to increased health care utilisation in patients with heart failure. Previous studies have identified exercise as a possible treatment for poor sleep in patients without cardiac disease; however, there is limited evidence of the effect of this form of treatment in heart failure. Aim: The primary objective of this study was to examine the effect of a supervised, hospital-based exercise training programme on subjective sleep quality in heart failure patients. Secondary objectives were to examine the association between changes in sleep quality and changes in depression, exercise performance and body mass index. Methods: The sample for the study was recruited from metropolitan and regional heart failure services across Brisbane, Queensland. Patients with a recent heart failure related hospital admission who met study inclusion criteria were recruited. Participants were screened by specialist heart failure exercise staff at each site to ensure exercise safety prior to study enrolment. Demographic data, medical history, medications, Pittsburgh Sleep Quality Index score, Geriatric Depression Score, exercise performance (six minute walk test), weight and height were collected at baseline. Pittsburgh Sleep Quality Index score, Geriatric Depression Score, exercise performance and weight were repeated at 3 months. One hundred and six patients admitted to hospital with heart failure were randomly allocated to a 3-month disease-based management programme of education and self-management support including standard exercise advice (Control), or to the same disease management programme with the addition of a tailored physical activity programme (Intervention). The intervention consisted of 1 hour of aerobic and resistance exercise twice a week. Programmes were designed and supervised by an exercise specialist. The main outcome measure was achievement of a clinically significant change (≥3 points) in global Pittsburgh Sleep Quality Index score. Results: Intervention group participants reported significantly greater clinical improvement in global sleep quality than Control (p=0.016). These patients also exhibited significant improvements in component sleep disturbance (p=0.004), component sleep quality (p=0.015) and global sleep quality (p=0.032) after 3 months of supervised exercise intervention. Improvements in sleep quality correlated with improvements in depression (p<0.001) and six minute walk distance (p=0.04). When study results were examined categorically, with subjects classified as either "poor" or "good" sleepers, subjects in the Control group were significantly more likely to report "poor" sleep at 3 months (p=0.039), while Intervention participants tended to report "good" sleep at this time (p=0.08). Conclusion: Three months of supervised, hospital-based, aerobic and resistance exercise training improved subjective sleep quality in patients with heart failure.
This is the first randomised controlled trial to examine the role of aerobic and resistance exercise training in the improvement of sleep quality for patients with this disease. While this study establishes exercise as a therapy for poor sleep quality, further research is needed to investigate the effect of exercise training on objective parameters of sleep in this population.
Abstract:
With a monolayer honeycomb lattice of sp2-hybridized carbon atoms, graphene has demonstrated exceptional electrical, mechanical and thermal properties. One of its promising applications is the creation of graphene-polymer nanocomposites with tailored mechanical and physical properties. In general, the mechanical properties of the graphene nanofiller as well as the graphene-polymer interface govern the overall mechanical performance of graphene-polymer nanocomposites. However, the strengthening and toughening mechanisms in these novel nanocomposites are not yet well understood. In this work, the deformation and failure of the graphene sheet and the graphene-polymer interface were investigated using molecular dynamics (MD) simulations. The effect of structural defects on the mechanical properties of graphene and the graphene-polymer interface was investigated as well. The results showed that structural defects in graphene (e.g. Stone-Wales defects and multi-vacancy defects) can significantly deteriorate the fracture strength of graphene, but the nanocomposite may still make full use of the remaining strength of the defective graphene, preserving the interfacial strength and the overall mechanical performance of graphene-polymer nanocomposites.