949 results for Equipment Failure Analysis
Abstract:
BACKGROUND: The beneficial effects of beta-blockers and aldosterone receptor antagonists are now well established in patients with severe systolic chronic heart failure (CHF). However, it is unclear whether beta-blockers are able to provide additional benefit in patients already receiving aldosterone antagonists. We therefore examined this question in the COPERNICUS study of 2289 patients with severe CHF receiving the beta1-beta2/alpha1 blocker carvedilol compared with placebo. METHODS: Patients were divided post hoc into subgroups according to whether they were receiving spironolactone (n = 445) or not (n = 1844) at baseline. Consistency of the effect of carvedilol versus placebo was examined for these subgroups with respect to the predefined end points of all-cause mortality, death or CHF-related hospitalizations, death or cardiovascular hospitalizations, and death or all-cause hospitalizations. RESULTS: The beneficial effect of carvedilol was similar among patients who were or were not receiving spironolactone for each of the 4 efficacy measures. For all-cause mortality, the Cox model hazard ratio for carvedilol compared with placebo was 0.65 (95% CI 0.36-1.15) in patients receiving spironolactone and 0.65 (0.51-0.83) in patients not receiving spironolactone. Hazard ratios for death or all-cause hospitalization were 0.76 (0.55-1.05) versus 0.76 (0.66-0.88); for death or cardiovascular hospitalization, 0.61 (0.42-0.89) versus 0.75 (0.64-0.88); and for death or CHF hospitalization, 0.63 (0.43-0.94) versus 0.70 (0.59-0.84), in patients receiving and not receiving spironolactone, respectively. The safety and tolerability of treatment with carvedilol were also similar, regardless of background spironolactone. 
CONCLUSION: Carvedilol remained clinically efficacious in the COPERNICUS study of patients with severe CHF when added to background spironolactone in patients who were practically all receiving angiotensin-converting enzyme inhibitor (or angiotensin II antagonist) therapy. Therefore, the use of spironolactone in patients with severe CHF does not obviate the necessity of additional treatment that interferes with the adverse effects of sympathetic activation, specifically beta-blockade.
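The hazard ratios above are reported with Wald-type 95% confidence intervals built on the log scale. As a minimal sketch (not part of the study's analysis), the interval for the all-cause mortality result, HR 0.65 (0.51-0.83), can be reproduced from the log hazard ratio and its standard error:

```python
import math

def se_from_ci(lo, hi, z=1.96):
    """Back out the standard error of log(HR) from a reported 95% CI."""
    return (math.log(hi) - math.log(lo)) / (2 * z)

def hazard_ratio_ci(hr, se_log_hr, z=1.96):
    """Wald 95% CI for a hazard ratio: exp(log HR +/- z * SE)."""
    log_hr = math.log(hr)
    return (math.exp(log_hr - z * se_log_hr),
            math.exp(log_hr + z * se_log_hr))

se = se_from_ci(0.51, 0.83)          # SE of log(HR), about 0.124
lo, hi = hazard_ratio_ci(0.65, se)   # recovers (0.51, 0.83)
```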
Abstract:
Vertebral compression fracture is a common medical problem in osteoporotic individuals. The quantitative computed tomography (QCT)-based finite element (FE) method may be used to predict vertebral strength in vivo, but needs to be validated with experimental tests. The aim of this study was to validate a nonlinear anatomy-specific QCT-based FE model by using a novel testing setup. Thirty-seven human thoracolumbar vertebral bone slices were prepared by removing cortical endplates and posterior elements. The slices were scanned with QCT and the volumetric bone mineral density (vBMD) was computed with the standard clinical approach. A novel experimental setup was designed to induce a realistic failure in the vertebral slices in vitro. Rotation of the loading plate was allowed by means of a ball joint. To minimize device compliance, the specimen deformation was measured directly on the loading plate with three sensors. A nonlinear FE model was generated from the calibrated QCT images, and the computed vertebral stiffness and strength were compared to those measured during the experiments. In agreement with clinical observations, most of the vertebrae underwent an anterior wedge-shaped fracture. As expected, the FE method predicted both stiffness and strength better than vBMD (R2 improved from 0.27 to 0.49 and from 0.34 to 0.79, respectively). Despite the lack of fitting parameters, the linear regression of the FE prediction for strength was close to the 1:1 relation (slope close to one, at 0.86, and intercept close to zero, at 0.72 kN). In conclusion, a nonlinear FE model was successfully validated through a novel experimental technique for generating wedge-shaped fractures in human thoracolumbar vertebrae.
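The reported R2 values compare FE-predicted and experimentally measured quantities. A minimal sketch of the coefficient of determination, with hypothetical strength values standing in for the study's data:

```python
def r_squared(measured, predicted):
    """Coefficient of determination between measured and predicted values."""
    mean = sum(measured) / len(measured)
    ss_tot = sum((y - mean) ** 2 for y in measured)
    ss_res = sum((y - p) ** 2 for y, p in zip(measured, predicted))
    return 1.0 - ss_res / ss_tot

# hypothetical vertebral strengths in kN (illustrative only)
measured = [2.0, 3.5, 5.0, 6.5]
predicted = [2.2, 3.3, 5.1, 6.2]
fit = r_squared(measured, predicted)
```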
Abstract:
AMS-14C applications often require the analysis of small samples. Such is the case of atmospheric aerosols, where frequently only a small amount of sample is available. The ion beam physics group at ETH Zurich has designed an Automated Graphitization Equipment (AGE III) for routine graphite production for AMS analysis from organic samples of approximately 1 mg. In this study, we explore the potential use of the AGE III for graphitization of particulate carbon collected on quartz filters. In order to test the methodology, samples of reference materials and blanks of different sizes were prepared in the AGE III and the graphite was analyzed in a MICADAS AMS (ETH) system. The graphite samples prepared in the AGE III showed recovery yields higher than 80% and reproducible 14C values for masses ranging from 50 to 300 µg. Reproducible radiocarbon values were also obtained for aerosol filters of small sizes that had been graphitized in the AGE III. As a case study, the tested methodology was applied to PM10 samples collected in two urban cities in Mexico in order to compare the source apportionment of biomass and fossil fuel combustion. The obtained 14C data showed that carbonaceous aerosols from Mexico City have a much lower biogenic signature than those from the smaller city of Cuernavaca.
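The biomass versus fossil apportionment relies on fossil carbon being 14C-free, so a sample's fraction modern scales linearly with its biogenic share. A minimal sketch of that split, assuming a contemporary biogenic reference F14C of 1.04 (an illustrative value, not taken from the paper):

```python
def source_apportionment(f14c_sample, f14c_biogenic_ref=1.04):
    """Split carbonaceous aerosol into biogenic and fossil fractions.

    Fossil carbon carries no 14C, so fraction modern of the sample divided
    by the purely-biogenic reference level gives the biogenic share.
    The reference value 1.04 is an assumed contemporary level.
    """
    f_bio = f14c_sample / f14c_biogenic_ref
    return f_bio, 1.0 - f_bio

f_bio, f_fossil = source_apportionment(0.52)  # an even biogenic/fossil split
```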
Abstract:
Analysis of recurrent events has been widely discussed in medical, health services, insurance, and engineering areas in recent years. This research proposes to use a nonhomogeneous Yule process with the proportional intensity assumption to model the hazard function on recurrent events data and the associated risk factors. This method assumes that repeated events occur for each individual, with given covariates, according to a nonhomogeneous Yule process with intensity function λ_x(t) = λ_0(t)·exp(x′β). One of the advantages of using a nonhomogeneous Yule process for recurrent events is that it assumes that the recurrence rate is proportional to the number of events that occur up to time t. Maximum likelihood estimation is used to provide estimates of the parameters in the model, and a generalized scoring iterative procedure is applied in numerical computation. Model comparisons between the proposed method and other existing recurrent-event models are addressed by simulation. An example comparing recurrent myocardial infarction events between two distinct populations, Mexican-Americans and Non-Hispanic Whites, in the Corpus Christi Heart Project is examined.
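A minimal sketch of the proportional-intensity function λ_x(t) = λ_0(t)·exp(x′β), with the Yule assumption that the rate also scales with the number of events observed so far; the baseline form and covariate values are hypothetical, not from the study:

```python
import math

def yule_intensity(t, n_events, x, beta, baseline):
    """Intensity of a nonhomogeneous Yule process with proportional covariates:
    (n_events + 1) * lambda_0(t) * exp(x'beta)."""
    xb = sum(xi * bi for xi, bi in zip(x, beta))
    return (n_events + 1) * baseline(t) * math.exp(xb)

# hypothetical baseline hazard and covariate/coefficient vectors
baseline = lambda t: 0.1 * (1.0 + 0.05 * t)
lam = yule_intensity(2.0, 3, [1.0, 0.5], [0.3, -0.2], baseline)
```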
Abstract:
Genital human papillomavirus (HPV) is of public health concern because persistent infection with certain HPV types can cause cervical cancer. In response to a nationwide push for cervical cancer legislation, Texas Governor Rick Perry bypassed the traditional legislative process and issued an executive order mandating compulsory HPV vaccinations for all female public school students prior to their entrance into the sixth grade. By bypassing the legislative process, Governor Perry did not effectively mitigate the risk perception issues that arose around the need for and usefulness of the vaccine mandate. This policy paper uses a social policy paradigm to identify perception as the key intervening factor in how the public responds to risk information. To demonstrate how the HPV mandate failed, it analyzes four factors (economics, politics, knowledge, and culture) that shape perception and influence the public's response. By understanding the factors that influence the public's perception, public health practitioners and policy makers can more effectively create preventive health policy at the state level.
Abstract:
In December 1980, following increasing congressional and constituent interest in problems associated with hazardous waste, the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) was passed. During its development, the legislative initiative was seriously compromised, which resulted in a less exhaustive approach than was formerly sought. Still, CERCLA (Superfund), which established, among other things, authority to clean up abandoned waste dumps and to respond to emergencies caused by releases of hazardous substances, was welcomed by many as an important initial law critical to the cleanup of the nation's hazardous waste. Expectations raised by passage of this bill were tragically unmet. By the end of four years, only six sites had been declared by the EPA as cleaned. Seemingly, even those determinations were liberal; of the six sites, two were subsequently identified as requiring further cleanup.

This analysis is focused upon the implementation failure of the Superfund. In light of that focus, the discussion develops linkages between flaws in the legislative language and the foreclosure of chances for implementation success. Specification of such linkages is achieved through examination of the legislative initiative, identification of its flaws, and characterization of attendant deficits in implementation ability. Subsequent analysis addresses how such legislative frailties might have been avoided, and the attendant regulatory weaknesses that have contributed to implementation failure. Each of these analyses is accomplished through application of an expanded approach to the backward-mapping analytic technique as presented by Elmore. Results and recommendations follow.

Consideration is devoted to a variety of regulatory issues as well as to those pertinent to legislative and implementation analysis. Problems in assessing legal liability associated with hazardous waste management are presented, as is a detailed review of the legislative development of Superfund and its initial implementation by Gorsuch's EPA.
Abstract:
In this work, robustness and stability of continuum damage models applied to material failure in soft tissues are addressed. In implicit damage models equipped with softening, the presence of negative eigenvalues in the tangent elemental matrix degrades the condition number of the global matrix, reducing the computational performance of the numerical model. Two strategies have been adapted from the literature to mitigate this degradation: the IMPL-EX integration scheme [Oliver, 2006], which renders the elemental matrix contribution positive definite, and arc-length-type continuation methods [Carrera, 1994], which allow capturing the unstable softening branch in brittle ruptures. The major drawback of the IMPL-EX integration scheme is the need for small time steps to keep the numerical error below an acceptable value. A convergence study, limiting the maximum allowed increment of the internal variables of the damage model, is presented. Finally, numerical simulation of failure problems with fibre-reinforced materials illustrates the performance of the adopted methodology.
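The step-size control behind such a convergence study, capping the increment of an internal variable per step, can be sketched for a scalar damage variable; the exponential damage law and the parameter values below are illustrative assumptions, not the authors' FE implementation:

```python
import math

def advance_damage(d, eps_old, eps_new, damage_law, max_dd=0.05):
    """Advance a scalar damage variable over one load increment, doubling the
    number of substeps until no substep raises damage by more than max_dd."""
    n = 1
    while True:
        d_trial, ok = d, True
        for k in range(1, n + 1):
            eps = eps_old + (eps_new - eps_old) * k / n
            d_next = max(d_trial, damage_law(eps))   # damage never heals
            if d_next - d_trial > max_dd:
                ok = False
                break
            d_trial = d_next
        if ok:
            return d_trial, n
        n *= 2

# illustrative exponential softening law (threshold and slope are assumed)
law = lambda e: max(0.0, 1.0 - math.exp(-(e - 0.01) / 0.005))
d_final, substeps = advance_damage(0.0, 0.01, 0.03, law)
```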
Abstract:
This paper proposes a method for identifying different partial discharge (PD) sources through the analysis of a collection of PD signals acquired with a PD measurement system. This method, robust and sensitive enough to cope with noisy data and external interferences, combines the characterization of each signal in the collection with a clustering procedure, the CLARA algorithm. Several features are proposed for the characterization of the signals, with the wavelet variances, the frequency estimated with the Prony method, and the energy being the most relevant to the performance of the clustering procedure. The result of the unsupervised classification is a set of clusters, each containing those signals that are more similar to each other than to those in other clusters. The analysis of the classification results permits both the identification of different PD sources and the discrimination between original PD signals, reflections, noise, and external interferences. The methods and graphical tools detailed in this paper have been coded and published as a contributed package for the R environment under a GNU/GPL license.
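As an illustration of the signal-characterization step, a minimal sketch computing two simple pulse features: total energy, and a crude dominant-frequency estimate via zero crossings (a stand-in for the wavelet-variance and Prony features actually used in the paper):

```python
def signal_features(x, fs):
    """Total energy and a zero-crossing-based dominant-frequency estimate
    for a sampled pulse x at sampling rate fs (Hz)."""
    energy = sum(v * v for v in x)
    crossings = sum(1 for a, b in zip(x, x[1:]) if a * b < 0)
    dom_freq = crossings * fs / (2.0 * len(x))
    return energy, dom_freq

# toy alternating signal: 3 sign changes over 4 samples at fs = 4 Hz
energy, freq = signal_features([1.0, -1.0, 1.0, -1.0], 4.0)
```

Features like these, computed per pulse, form the vectors that a k-medoids-style procedure such as CLARA then groups into clusters.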
Abstract:
The road to the automation of agricultural processes passes through the safe operation of autonomous vehicles. This requirement is already a reality for ground mobile units, but it is still not well defined for aerial robots (UAVs), mainly because the regulations and legislation are quite diffuse or even nonexistent. Defining a common, global policy is therefore the challenge to tackle, and this characterization has to be addressed from field experience. Accordingly, this paper presents the work done in this direction, based on the analysis of the most common sources of hazards when using UAVs for agricultural tasks. The work, based on the ISO 31000 standard, has been carried out by applying a three-step structure that integrates identification, assessment, and reduction procedures. The present paper shows how this method has been applied to analyze previous accidents and malfunctions during UAV operations in order to obtain real failure causes. This has made it possible to highlight common risks and hazard sources and to propose specific guards and safety measures for the agricultural context.
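The identification-assessment-reduction sequence can be illustrated with a simple likelihood-severity matrix; the 1-5 scales, thresholds, and hazard entries below are illustrative assumptions, not values from the paper:

```python
def risk_level(likelihood, severity):
    """Qualitative risk index on 1-5 likelihood/severity scales
    (thresholds are illustrative, in the spirit of ISO 31000-style matrices)."""
    score = likelihood * severity
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

# hypothetical UAV hazard register for an agricultural mission
hazards = [("GPS signal loss", 3, 4),
           ("battery failure", 2, 5),
           ("propeller strike", 4, 4)]
assessed = {name: risk_level(l, s) for name, l, s in hazards}
```

Entries classified "high" would be the first candidates for the reduction step, i.e. specific guards and safety measures.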
Abstract:
Steam Generator Tube Rupture (SGTR) sequences in Pressurized Water Reactors are known to be among the most demanding transients for the operating crew. SGTR sequences are a special kind of transient, as they can lead to radiological releases without core damage or containment failure, because they can constitute a direct path from the reactor coolant system to the environment. The first methodology used to perform the Deterministic Safety Analysis (DSA) of a SGTR did not credit operator action for the first 30 min of the transient, assuming that the operating crew was able to stop the primary-to-secondary leakage within that period of time. However, the real SGTR accident cases that have occurred in the USA and around the world demonstrated that operators usually take more than 30 min to stop the leakage in actual sequences. Some methodologies were proposed to overcome that fact, considering operator actions from the beginning of the transient, as is done in Probabilistic Safety Analysis. This paper presents the results of comparing different assumptions regarding the single-failure criterion and the operator actions taken from the most common methodologies included in the different Deterministic Safety Analyses. A single-failure criterion that has not been analysed previously in the literature is also proposed and analysed in this paper. The comparison is done with a PWR Westinghouse three-loop model in the TRACE code (Almaraz NPP) with best-estimate assumptions but including deterministic hypotheses such as the single-failure criterion or loss of offsite power. The behaviour of the reactor is quite diverse depending on the different assumptions made regarding the operator actions. On the other hand, although highly conservative hypotheses are included, such as the single-failure criterion, all the results are quite far from the regulatory limits.
In addition, some improvements to the Emergency Operating Procedures to minimize the offsite release from the damaged SG in case of a SGTR are outlined taking into account the offsite dose sensitivity results.
Abstract:
After construction of the LYSS (Light cYcling Stressing Source) in early 2014, several CPV receivers, with and without a secondary optical element (SOE), have been aged under fast transient illumination cycling. The test plan for Madrid consisted of 50000 cycles. Receivers with poor heat spreaders showed low reliability, but those with thicker metal layers passed the test well. The operation of the LYSS over 8 months, through more than 250000 cycles, did not show any significant failure, except for lamp replacement every 120 hours on average. The equipment appears valid for unveiling weak receiver designs with respect to intensive illumination, in both steady and transient modes.