935 results for Increasing failure rate
Abstract:
AIMS Vent-HeFT is a multicentre randomized trial designed to investigate the potential additive benefit of inspiratory muscle training (IMT) on top of aerobic training (AT) in patients with chronic heart failure (CHF). METHODS AND RESULTS Forty-three CHF patients with a mean age of 58 ± 12 years, peak oxygen consumption (peak VO2) 17.9 ± 5 mL/kg/min, and LVEF 29.5 ± 5% were randomized to an AT/IMT group (n = 21) or to an AT/SHAM group (n = 22) in a 12-week exercise programme. AT involved 45 min of ergometer training at 70-80% of maximum heart rate, three times a week for both groups. In the AT/IMT group, IMT was performed at 60% of sustained maximal inspiratory pressure (SPImax), while in the AT/SHAM group it was performed at 10% of SPImax, using a computer biofeedback trainer for 30 min, three times a week. At baseline and at 3 months, patients were evaluated for exercise capacity, lung function, inspiratory muscle strength (PImax) and work capacity (SPImax), quality of life (QoL), LVEF and LV diameter, dyspnoea, C-reactive protein (CRP), and NT-proBNP. IMT resulted in a significantly higher benefit in SPImax (P = 0.02), QoL (P = 0.002), dyspnoea (P = 0.004), CRP (P = 0.03), and NT-proBNP (P = 0.004). In both the AT/IMT and AT/SHAM groups, PImax (P < 0.001, P = 0.02), peak VO2 (P = 0.008, P = 0.04), and LVEF (P = 0.005, P = 0.002) improved significantly, but without an additional between-group benefit. CONCLUSION This randomized multicentre study demonstrates that IMT combined with aerobic training provides additional benefits in functional and serum biomarkers in patients with moderate CHF. These findings advocate for the application of IMT in cardiac rehabilitation programmes.
Abstract:
OBJECTIVES Fontan failure (FF) represents a growing and challenging indication for paediatric orthotopic heart transplantation (OHT). The aim of this study was to identify predictors of the best mid-term outcome in OHT after FF. METHODS Twenty-year multi-institutional retrospective analysis of OHT for FF. RESULTS Between 1991 and 2011, 61 patients, mean age 15.0 ± 9.7 years, underwent OHT for a failing atriopulmonary connection (17 patients, 27.8%) or total cavopulmonary connection (44 patients, 72.2%). Modes of FF included arrhythmia (14.8%), complex obstructions in the Fontan circuit (16.4%), protein-losing enteropathy (PLE) (22.9%), impaired ventricular function (31.1%) or a combination of the above (14.8%). The mean time interval between Fontan completion and OHT was 10.7 ± 6.6 years. Early FF occurred in 18%, requiring OHT 0.8 ± 0.5 years after Fontan. The hospital mortality rate was 18.3%, mainly secondary to infection (36.4%) and graft failure (27.3%). The mean follow-up was 66.8 ± 54.2 months. The overall Kaplan-Meier survival estimate was 81.9 ± 1.8% at 1 year, 73 ± 2.7% at 5 years and 56.8 ± 4.3% at 10 years. The Kaplan-Meier 5-year survival estimate was 82.3 ± 5.9% in late FF and 32.7 ± 15.0% in early FF (P = 0.0007). Late FF with poor ventricular function exhibited a 91.5 ± 5.8% 5-year OHT survival. PLE was cured in 77.7% of hospital survivors, but the 5-year Kaplan-Meier survival estimate in PLE was 46.3 ± 14.4% vs 84.3 ± 5.5% in non-PLE (P = 0.0147). Cox proportional hazards analysis identified early FF (P = 0.0005), complex Fontan pathway obstruction (P = 0.0043) and PLE (P = 0.0033) as independent predictors of 5-year mortality. CONCLUSIONS OHT is an excellent surgical option for late FF with impaired ventricular function. Protein-losing enteropathy improves with OHT, but it negatively affects the mid-term OHT outcome, mainly owing to early infective complications.
Abstract:
BACKGROUND Anti-TNFα agents are commonly used for ulcerative colitis (UC) therapy in the event of non-response to conventional strategies or as colon-salvaging therapy. The objectives were to assess the appropriateness of biological therapies for UC patients and to study treatment discontinuation over time, according to appropriateness of treatment, as a measure of outcome. METHODS We selected adult UC patients from the Swiss IBD cohort who had been treated with anti-TNFα agents. Appropriateness of the first-line anti-TNFα treatment was assessed using detailed criteria developed during the European Panel on the Appropriateness of Therapy for UC. Treatment discontinuation as an outcome was assessed for each category of appropriateness. RESULTS Appropriateness of the first-line biological treatment was determined in 186 UC patients. For 64% of them, this treatment was considered appropriate. During follow-up, 37% of all patients discontinued biological treatment, 17% specifically because of treatment failure. Time-to-failure of treatment differed significantly between patients on an appropriate biological treatment and those for whom the treatment was considered not appropriate (p=0.0007); the discontinuation rate after 2 years was 26% versus 54%, respectively, in these two groups. Patients on inappropriate biological treatment were more likely to have severe disease and to receive concomitant steroids and/or immunomodulators. They were also consistently more likely to suffer a failure of efficacy and to stop therapy during follow-up. CONCLUSION Appropriateness of first-line anti-TNFα therapy results in a greater likelihood of continuing with the therapy. In situations where biological treatment is uncertain or inappropriate, physicians should consider other options instead of prescribing anti-TNFα agents.
Abstract:
Blood loss and bleeding complications may often be observed in critically ill patients on renal replacement therapies (RRT). Here we investigate procedural (i.e. RRT-related) and non-procedural blood loss as well as transfusion requirements with regard to the chosen mode of dialysis (i.e. intermittent haemodialysis [IHD] versus continuous veno-venous haemofiltration [CVVH]). Two hundred and fifty-two patients (122 CVVH, 159 male; aged 61.5±13.9 years) with dialysis-dependent acute renal failure were analysed in a sub-analysis of the prospective randomised controlled clinical trial CONVINT, which compared IHD and CVVH. Bleeding complications, including severity of bleeding and RRT-related blood loss, were assessed. We observed that 3.6% of patients died in relation to severe bleeding episodes (between-group P=0.94). Major all-cause bleeding complications were observed in 23% of IHD versus 26% of CVVH group patients (P=0.95). Under CVVH, the rate of RRT-related blood loss events (57.4% versus 30.4%, P=0.01) and the mean total blood volume lost (222.3±291.9 versus 112.5±222.7 ml per patient, P<0.001) were increased. Overall, transfusion rates did not differ between the study groups. In patients with sepsis, transfusion rates of all blood products were significantly higher when compared to cardiogenic shock (all P<0.01) or other conditions. In conclusion, procedural and non-procedural blood loss may often be observed in critically ill patients on RRT. In CVVH-treated patients, procedural blood loss was increased but overall transfusion rates remained unchanged. Our data show that, in this regard, IHD and CVVH may be regarded as equivalent approaches in critically ill patients with dialysis-dependent acute renal failure.
Abstract:
Analysis of recurrent events has been widely discussed in medical, health services, insurance, and engineering areas in recent years. This research proposes to use a nonhomogeneous Yule process with the proportional intensity assumption to model the hazard function on recurrent events data and the associated risk factors. This method assumes that repeated events occur for each individual, with given covariates, according to a nonhomogeneous Yule process with intensity function λ_x(t) = λ_0(t) · exp(x′β). One of the advantages of using a nonhomogeneous Yule process for recurrent events is that it assumes that the recurrence rate is proportional to the number of events that have occurred up to time t. Maximum likelihood estimation is used to provide estimates of the parameters in the model, and a generalized scoring iterative procedure is applied in numerical computation. Model comparisons between the proposed method and other existing recurrent-event models are addressed by simulation. An example comparing recurrent myocardial infarction events between two distinct populations, Mexican-Americans and non-Hispanic Whites, in the Corpus Christi Heart Project is examined.
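As a rough illustration of the kind of model described above, the sketch below simulates recurrent event times whose intensity is proportional both to the accumulated event count and to a covariate effect, taken here as λ_x(t) = (N(t−) + 1) · λ_0(t) · exp(x′β), using Ogata-style thinning. The baseline function, covariates, and coefficients are placeholder values for the example, not quantities from the dissertation.

```python
import numpy as np

def simulate_yule_recurrences(x, beta, lam0, t_max, rng):
    """Simulate recurrent event times under an assumed Yule-type intensity
    lambda_x(t) = (N(t-) + 1) * lam0(t) * exp(x'beta), via Ogata thinning."""
    grid = np.linspace(0.0, t_max, 1001)
    lam0_max = max(lam0(t) for t in grid)          # upper bound for the baseline
    covariate_effect = float(np.exp(np.dot(x, beta)))
    times = []
    t = 0.0
    while True:
        # Dominating (constant) intensity until the next accepted event
        lam_bar = (len(times) + 1) * lam0_max * covariate_effect
        t += rng.exponential(1.0 / lam_bar)
        if t >= t_max:
            break
        lam_t = (len(times) + 1) * lam0(t) * covariate_effect
        if rng.uniform() < lam_t / lam_bar:        # thinning acceptance step
            times.append(t)
    return np.array(times)

rng = np.random.default_rng(0)
baseline = lambda t: 0.05 * (1.0 + 0.5 * t)        # placeholder increasing baseline
events = simulate_yule_recurrences(x=[1.0, 0.0], beta=[0.4, -0.2],
                                   lam0=baseline, t_max=10.0, rng=rng)
print(events)
```

Because the event count only changes when a candidate time is accepted, the constant bound used between candidates is valid, which is what makes the thinning step correct under the stated assumptions.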
Abstract:
Background. Heart failure (HF) is a health problem of epidemic proportions and a clinical syndrome that leads to progressively severe symptoms, which contribute significantly to the burden of the disease. Several factors may affect the symptom burden of patients with HF, including physiological, psychological, and spiritual factors. This study was designed to examine the inter-relationship of physiological, psychological, and spiritual factors affecting symptoms for patients with HF. Objectives. The aims of this study were to examine symptom burden of heart failure patients related to: (1) the physiological factor of brain natriuretic peptide (BNP); (2) the psychological factor of depression; (3) the spiritual factors of self-transcendence and purpose in life; and (4) combined effects of physiological, psychological and spiritual factors. One additional aim was to describe symptom intensity related to symptom burden. Methods. A cross-sectional non-experimental correlational design was used to examine factors affecting symptom burden in 105 patients with HF from a southwestern medical center outpatient heart failure clinic. Both men and women were included; average age was 56.6 years (SD = 16.86). All measures except BNP were obtained by patient self-report. Results. The mean number of symptoms present was 8.17 (SD = 3.34), with the three most common symptoms being shortness of breath on exertion, fatigue, and weakness. The mean symptom intensity was 365.66 (SD = 199.50) on a summative scale of visual analogue reports for 13 symptoms. The mean BNP level was 292.64 pg/ml (SD = 571.11). The prevalence rate for depression was 43.6%, with a mean score of 3.48 (SD = 2.75) on the Center for Epidemiological Studies - Depression (CES-D 10) scale. In a multivariate analysis, depression was the only significant predictor of symptom burden (r = .474; P < .001), accounting for 18% of the variance. Spirituality had an interaction effect with depression (P ≤ .001), serving as a moderator between depression and symptom burden. Conclusion. HF is a chronic and progressive syndrome characterized by severe symptoms, hospitalizations and disability. Depression is significantly related to symptom burden, and this relationship is moderated by spirituality.
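For readers unfamiliar with the moderation analysis reported above, the sketch below fits an ordinary least-squares model with a depression × spirituality interaction term, which is the usual way a moderator is tested. The variable names and the synthetic data are purely illustrative and are not the study's data or results.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 105  # same sample size as the study; the data themselves are synthetic
depression = rng.normal(size=n)
spirituality = rng.normal(size=n)
# Synthetic outcome in which spirituality dampens the depression effect
symptom_burden = (2.0 + 1.5 * depression
                  - 0.8 * depression * spirituality
                  + rng.normal(scale=1.0, size=n))

# Design matrix: intercept, main effects, and the interaction (moderation) term
X = np.column_stack([np.ones(n), depression, spirituality,
                     depression * spirituality])
coef, *_ = np.linalg.lstsq(X, symptom_burden, rcond=None)
print(dict(zip(["intercept", "depression", "spirituality", "interaction"], coef)))
```

A non-zero interaction coefficient is what "spirituality moderates the depression-symptom burden relationship" corresponds to in this framing.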
Abstract:
Racial differences in heart failure with preserved ejection fraction (HFpEF) have rarely been studied in an ambulatory, financially "equal access" cohort, although the majority of such patients are treated as outpatients. Retrospective data were collected from 2,526 patients (2,240 White, 286 African American) with HFpEF treated at 153 VA clinics, as part of the VA External Peer Review Program (EPRP), between October 2000 and September 2002. Kaplan-Meier curves (stratified by race) were created for time to first heart failure (HF) hospitalization, all-cause hospitalization and death, and Cox proportional hazards multivariable regression models were constructed to evaluate the effect of race on these outcomes. African American patients were younger (67.7 ± 11.3 vs. 71.2 ± 9.8 years; p < 0.001) and had a lower prevalence of atrial fibrillation (24.5% vs. 37%; p < 0.001) and chronic obstructive pulmonary disease (23.4% vs. 36.9%; p < 0.001), but had higher blood pressure (systolic blood pressure > 120 mm Hg in 77.6% vs. 67.8%; p < 0.01), a higher glomerular filtration rate (67.9 ± 31.0 vs. 61.6 ± 22.6 mL/min/1.73 m2; p < 0.001), and more anemia (56.6% vs. 41.7%; p < 0.001) compared with Whites. African Americans were found to have a higher risk-adjusted rate of HF hospitalization (HR 1.52, 95% CI 1.1 - 2.11; p = 0.01), with no difference in risk-adjusted all-cause hospitalization (p = 0.80) or death (p = 0.21). In the financially "equal access" setting of the VA, among ambulatory patients with HFpEF, African Americans have similar rates of mortality and all-cause hospitalization but an increased risk of HF hospitalization compared with Whites.
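The time-to-event comparisons above rely on the Kaplan-Meier estimator; a minimal, self-contained sketch of that estimator on made-up right-censored data is shown below. The follow-up times and event flags are invented for illustration and are not drawn from the VA cohort.

```python
import numpy as np

def kaplan_meier(times, observed):
    """Return (event_times, survival) for right-censored data.
    observed[i] is True if the i-th time is an event, False if censored."""
    times = np.asarray(times, dtype=float)
    observed = np.asarray(observed, dtype=bool)
    order = np.argsort(times)
    times, observed = times[order], observed[order]

    surv, out_t, out_s = 1.0, [], []
    n_at_risk = len(times)
    for t in np.unique(times):
        mask = times == t
        d = int(observed[mask].sum())      # events at time t
        if d > 0:
            surv *= 1.0 - d / n_at_risk    # product-limit step
            out_t.append(t)
            out_s.append(surv)
        n_at_risk -= int(mask.sum())       # events and censorings leave the risk set
    return np.array(out_t), np.array(out_s)

# Invented example data: follow-up in months, True = HF hospitalization observed
t = [3, 5, 5, 8, 12, 12, 15, 20]
e = [True, False, True, True, False, True, False, True]
print(kaplan_meier(t, e))
```

Stratifying by race, as in the study, simply means running this estimator separately on each group's data before comparing the curves.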
Abstract:
In December 1980, following increasing congressional and constituent interest in problems associated with hazardous waste, the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) was passed. During its development, the legislative initiative was seriously compromised, which resulted in a less exhaustive approach than was formerly sought. Still, CERCLA (Superfund), which established, among other things, authority to clean up abandoned waste dumps and to respond to emergencies caused by releases of hazardous substances, was welcomed by many as an important initial law critical to the cleanup of the nation's hazardous waste. Expectations raised by passage of this bill were tragically unmet. By the end of four years, only six sites had been declared by the EPA as cleaned. Seemingly, even those determinations were liberal; of the six sites, two were identified subsequently as requiring further cleanup. This analysis is focused upon the implementation failure of the Superfund. In light of that focus, discussion encompasses the development of linkages between flaws in the legislative language and the foreclosure of chances for implementation success. Specification of such linkages is achieved through examination of the legislative initiative, identification of its flaws and characterization of attendant deficits in implementation ability. Subsequent analysis addresses how such legislative frailties might have been avoided and the attendant regulatory weaknesses which have contributed to implementation failure. Each of these analyses is accomplished through application of an expanded approach to the backward-mapping analytic technique as presented by Elmore. Results and recommendations follow. Consideration is devoted to a variety of regulatory issues as well as to those pertinent to legislative and implementation analysis. Problems in assessing legal liability associated with hazardous waste management are presented, as is a detailed review of the legislative development of Superfund and its initial implementation by Gorsuch's EPA.
Abstract:
Trastuzumab is a humanized monoclonal antibody developed specifically for patients with HER2/neu-overexpressing breast cancer. Although highly effective and well tolerated, it has been reported to be associated with congestive heart failure (CHF) in clinical trial settings (up to 27%). This leaves a gap: the Trastuzumab-related CHF rate in the general population, especially among older breast cancer patients receiving long-term Trastuzumab treatment, remains unknown. This thesis examined the rates and risk factors associated with Trastuzumab-related CHF in a large population of older breast cancer patients. A retrospective cohort study was performed using the existing Surveillance, Epidemiology, and End Results (SEER)-Medicare linked de-identified database. Breast cancer patients ≥ 66 years old with stage I-IV disease, diagnosed in 1998-2007, fully covered by Medicare with no HMO enrolment within 1 year before and after the first diagnosis month, and who received their first chemotherapy no earlier than 30 days prior to diagnosis, were selected as the study cohort. The primary outcome of this study was a diagnosis of CHF after starting chemotherapy with no CHF claims on or before the cancer diagnosis date. ICD-9 and HCPCS codes were used to pool the claims for Trastuzumab use, chemotherapy, comorbidities and CHF. Statistical analyses, including comparison of characteristics, Kaplan-Meier estimates of CHF rates over long-term follow-up, and a multivariable Cox regression model using Trastuzumab as a time-dependent variable, were performed. Of the 17,684 patients in the selected cohort, 2,037 (12%) received Trastuzumab. Among them, 35% (714 of 2,037) were diagnosed with CHF, compared to a CHF rate of 31% (4,784 of 15,647) in other chemotherapy recipients (p < .0001). After 10 years of follow-up, 65% of Trastuzumab users had developed CHF, compared to 47% of their counterparts. After adjusting for patient demographic, tumor and clinical characteristics, older breast cancer patients who used Trastuzumab showed a significantly higher risk of developing CHF than other chemotherapy recipients (HR 1.69, 95% CI 1.54 - 1.85), and this risk increased with age (p < .0001). Among Trastuzumab users, the following covariates also significantly increased the risk of CHF: older age, stage IV disease, non-Hispanic Black race, unmarried status, comorbidities, Anthracycline use, Taxane use, and lower educational level. It is concluded that Trastuzumab users among older breast cancer patients had a 69% higher risk of developing CHF than non-Trastuzumab users, much higher than the 27% reported in younger clinical trial patients. Older age, non-Hispanic Black race, unmarried status, comorbidity, and combined use with Anthracycline or Taxane also significantly increased the risk of CHF development in older patients treated with Trastuzumab.
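One methodological detail above is the use of Trastuzumab as a time-dependent variable in the Cox model. A common way to set this up is to split each patient's follow-up into counting-process intervals at the date treatment starts. The sketch below shows only that splitting step on an invented record; the field names, time scale, and values are illustrative and not the SEER-Medicare schema.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class FollowUp:
    start: float        # months since cohort entry
    stop: float
    trastuzumab: int    # 0/1 value of the time-dependent covariate on (start, stop]
    chf_event: int      # 1 only on the interval in which CHF occurs

def split_follow_up(end: float, event: int,
                    tz_start: Optional[float]) -> List[FollowUp]:
    """Split one patient's follow-up at a (hypothetical) Trastuzumab start time
    so a Cox model can treat exposure as time-dependent."""
    if tz_start is None or tz_start >= end:
        return [FollowUp(0.0, end, 0, event)]
    return [
        FollowUp(0.0, tz_start, 0, 0),       # unexposed person-time
        FollowUp(tz_start, end, 1, event),   # exposed person-time
    ]

# Invented example: CHF at month 30, Trastuzumab started at month 6
for row in split_follow_up(end=30.0, event=1, tz_start=6.0):
    print(row)
```

Structuring the data this way avoids the immortal-time bias that would arise from treating anyone who ever received Trastuzumab as exposed from baseline.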
Abstract:
Renal insufficiency is one of the most common comorbidities present in heart failure (HF) patients. It has a significant impact on mortality and adverse outcomes. Cystatin C has been shown to be a promising marker of renal function. A systematic review of all published studies evaluating the prognostic role of cystatin C in both acute and chronic HF was undertaken. A comprehensive literature search was conducted using various combinations of the terms 'cystatin C' and 'heart failure' in the PubMed/MEDLINE and Embase libraries using the Scopus database. A total of twelve observational studies were selected in this review for detailed assessment. Six studies were performed in acute HF patients and six were performed in chronic HF patients. Cystatin C was used as a continuous variable, as quartiles/tertiles, or as a categorical variable in these studies. Different mortality endpoints were reported in these studies. All twelve studies demonstrated a significant association of cystatin C with mortality. This association was found to be independent of other baseline risk factors that are known to impact HF outcomes. In both acute and chronic HF, cystatin C was not only a strong predictor of outcomes but also a better prognostic marker than creatinine and estimated glomerular filtration rate (eGFR). A combination of cystatin C with other biomarkers, such as N-terminal pro-B-type natriuretic peptide (NT-proBNP) or creatinine, also improved risk stratification. The plausible mechanisms are renal dysfunction, inflammation, or a direct effect of cystatin C on ventricular remodeling. Either alone or in combination, cystatin C is an accurate and reliable biomarker for HF prognosis.
Abstract:
Background: For most cytotoxic and biologic anti-cancer agents, the response rate of the drug is commonly assumed to be non-decreasing with an increasing dose. However, an increasing dose does not always result in an appreciable increase in the response rate. This may especially be true at high doses for a biologic agent. Therefore, in a phase II trial the investigators may be interested in testing the anti-tumor activity of a drug at more than one (often two) doses, instead of only at the maximum tolerated dose (MTD). This way, when the lower dose appears equally effective, that dose can be recommended for further confirmatory testing in a phase III trial under potential long-term toxicity and cost considerations. A common approach to designing such a phase II trial has been to use an independent (e.g., Simon's two-stage) design at each dose, ignoring the prior knowledge about the ordering of the response probabilities at the different doses. However, failure to account for this ordering constraint in estimating the response probabilities may result in an inefficient design. In this dissertation, we developed extensions of Simon's optimal and minimax two-stage designs, including both frequentist and Bayesian methods, for two doses that assume ordered response rates between doses. Methods: Optimal and minimax two-stage designs are proposed for phase II clinical trials in settings where the true response rates at two dose levels are ordered. We borrow strength between doses using isotonic regression and control the joint and/or marginal error probabilities. Bayesian two-stage designs are also proposed under a stochastic ordering constraint. Results: Compared to Simon's designs, when controlling the power and type I error at the same levels, the proposed frequentist and Bayesian designs reduce the maximum and expected sample sizes. Most of the proposed designs also increase the probability of early termination when the true response rates are poor. Conclusion: The proposed frequentist and Bayesian designs are superior to Simon's designs in terms of operating characteristics (expected sample size and probability of early termination when the response rates are poor). Thus, the proposed designs lead to more cost-efficient and ethical trials, and may consequently improve and expedite the drug discovery process. The proposed designs may be extended to designs of multiple-group trials and drug-combination trials.
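As a concrete illustration of the "borrowing strength via isotonic regression" idea mentioned above, the sketch below applies the pool-adjacent-violators step to observed response rates at two ordered doses. The counts are made up, and this shows only the constrained estimation step, not the full two-stage design or its error-rate calculations.

```python
def isotonic_two_dose(x_low, n_low, x_high, n_high):
    """Isotonic (pool-adjacent-violators) estimates of two response rates
    under the constraint p_low <= p_high. Inputs are responders / patients."""
    p_low, p_high = x_low / n_low, x_high / n_high
    if p_low <= p_high:                              # ordering already satisfied
        return p_low, p_high
    pooled = (x_low + x_high) / (n_low + n_high)     # pool the adjacent violators
    return pooled, pooled

# Made-up interim data: 7/20 responders at the low dose, 5/20 at the high dose
print(isotonic_two_dose(7, 20, 5, 20))   # -> (0.3, 0.3)
```

When the observed rates violate the assumed ordering, both estimates collapse to the pooled rate, which is how information is shared between the two dose groups.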
Abstract:
Synthetic mass accumulation rates have been calculated for ODP Site 707 using depth-density and depth-porosity functions to estimate values for these parameters with increasing sediment thickness, at 1 Myr time intervals determined on the basis of published microfossil datums. These datums were the basis of the age model used by Peterson and Backman (1990, doi:10.2973/odp.proc.sr.115.163.1990) to calculate actual mass accumulation rate data using density and porosity measurements. A comparison is made between the synthetic and actual mass accumulation rate values from 37 Ma to the Recent at 1 Myr time intervals. There is a correlation coefficient of 0.993 between the two data sets, with an absolute difference generally less than 0.1 g/cm²/kyr. We have used the method to extend the mass accumulation rate analysis back to the Late Paleocene (60 Ma) for Site 707. Provided age datums (e.g. fossil or magnetic anomaly data) are available, synthetic mass accumulation rates can be calculated for any sediment sequence.
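A brief numerical sketch of the kind of calculation described above: a mass accumulation rate is the linear sedimentation rate times the dry bulk density, with the density estimated here from an assumed exponential depth-porosity function. The functional form, constants, and depths are illustrative placeholders, not the fitted values used for Site 707.

```python
import numpy as np

GRAIN_DENSITY = 2.7   # g/cm^3, assumed grain density for carbonate-rich sediment

def porosity(depth_m, phi0=0.70, decay_m=1000.0):
    """Assumed exponential porosity-depth function (illustrative constants)."""
    return phi0 * np.exp(-depth_m / decay_m)

def dry_bulk_density(depth_m):
    """Dry bulk density implied by the assumed porosity-depth function."""
    return (1.0 - porosity(depth_m)) * GRAIN_DENSITY

def mass_accumulation_rate(top_m, bottom_m, interval_kyr):
    """Synthetic MAR (g/cm^2/kyr): sedimentation rate x mean dry bulk density."""
    sed_rate_cm_per_kyr = (bottom_m - top_m) * 100.0 / interval_kyr
    mid_depth = 0.5 * (top_m + bottom_m)
    return sed_rate_cm_per_kyr * dry_bulk_density(mid_depth)

# Example: 12 m of section (50-62 m depth) deposited over a 1 Myr (1000 kyr) interval
print(mass_accumulation_rate(50.0, 62.0, 1000.0))
```

In practice the depth-density and depth-porosity functions would be fitted to shipboard physical-property measurements before being used in this way.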
Abstract:
Site 1123 is located on the northeastern flank of the Chatham Rise. Sedimentological and clay mineralogical analyses indicate a very fine-grained, carbonate-rich sediment. Smectite and illite are the main constituents of the clay mineral assemblage. High smectite values in the Eocene decrease in younger sediment sequences. Illite and chlorite concentrations increase in younger sediments, with significant steps at 13.5, 9, and 6.4 Ma. The kaolinite content is near the detection limit and not significant. We observed only small fluctuations in the clay mineral composition, which indicates a uniform sedimentation process, probably driven by long-term processes. Good correspondence is shown between increasing illite and chlorite values and the tectonic uplift history of the Southern Alps.
Abstract:
Acidification of the oceans by increasing anthropogenic CO2 emissions will cause a decrease in biogenic calcification and an increase in carbonate dissolution. Previous studies have suggested that carbonate dissolution will occur in polar regions and in the deep sea, where the saturation state with respect to carbonate minerals (Ω) will be <1 by 2100. Recent reports demonstrate nocturnal carbonate dissolution of reefs despite an aragonite saturation state (Ωa) value of >1. This is probably related to the dissolution of reef carbonate (Mg-calcite), which is more soluble than aragonite. However, the threshold of Ω for the dissolution of natural sediments has not been clearly determined. We designed an experimental dissolution system with conditions mimicking those of a natural coral reef, and measured the dissolution rates of aragonite in corals, and of Mg-calcite excreted by other marine organisms, under conditions of Ωa > 1, with controlled seawater pCO2. The experimental data show that dissolution of bulk carbonate sediments sampled from a coral reef occurs at Ωa values of 3.7 to 3.8. Mg-calcite derived from foraminifera and coralline algae dissolves at Ωa values between 3.0 and 3.2, and coralline aragonite starts to dissolve when Ωa = 1.0. We show that nocturnal carbonate dissolution of coral reefs occurs mainly by the dissolution of foraminiferans and coralline algae in reef sediments.
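For context, the saturation state referred to above is defined as Ω = [Ca²⁺][CO₃²⁻]/K′sp. The sketch below computes Ωa from assumed ion concentrations and an assumed stoichiometric solubility product; the numbers are order-of-magnitude placeholders for warm surface seawater, not the measured conditions of the experiment.

```python
def omega(ca_mol_per_kg, co3_mol_per_kg, ksp):
    """Carbonate saturation state: Omega = [Ca2+][CO3 2-] / K'sp."""
    return ca_mol_per_kg * co3_mol_per_kg / ksp

# Assumed illustrative values (not measured quantities from the study):
CA = 0.0103              # mol/kg, seawater calcium
CO3 = 2.2e-4             # mol/kg, carbonate ion in warm surface water
KSP_ARAGONITE = 6.5e-7   # mol^2/kg^2, assumed stoichiometric solubility product
print(omega(CA, CO3, KSP_ARAGONITE))   # roughly 3.5, i.e. supersaturated for aragonite
```

Rising pCO2 lowers [CO₃²⁻] and hence Ω, which is why dissolution thresholds expressed in Ωa translate directly into CO2 thresholds.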
Abstract:
Uptake of half of the fossil fuel CO2 into the ocean causes gradual seawater acidification. This has been shown to slow down calcification in major calcifying groups, such as corals, foraminifera, and coccolithophores. Here we show that two of the most productive marine calcifying species, the coccolithophores Coccolithus pelagicus and Calcidiscus leptoporus, do not follow the CO2-related calcification response previously found. In batch culture experiments, the particulate inorganic carbon (PIC) content of C. leptoporus changes with increasing CO2 concentration in a nonlinear relationship. A PIC optimum curve is obtained, with a maximum value at present-day surface ocean pCO2 levels (~360 ppm CO2). With particulate organic carbon (POC) remaining constant over the range of CO2 concentrations, the PIC/POC ratio also shows an optimum curve. In the C. pelagicus cultures, neither PIC nor POC changes significantly over the CO2 range tested, yielding a stable PIC/POC ratio. Since growth rate in both species did not change with pCO2, POC and PIC production show the same patterns as POC and PIC. The two investigated species respond differently to changes in the seawater carbonate chemistry, highlighting the need to consider species-specific effects when evaluating whole-ecosystem responses. Changes in calcification rate (PIC production) were highly correlated with changes in coccolith morphology. Since our experimental results suggest altered coccolith morphology (at least in the case of C. leptoporus) in the geological past, coccoliths originating from sedimentary records of periods with different CO2 levels were analyzed. Analysis of sediment samples was performed on six cores obtained from locations well above the lysocline and covering a range of latitudes throughout the Atlantic Ocean. Scanning electron micrograph analysis of coccolith morphologies did not reveal any evidence for significant numbers of incomplete or malformed coccoliths of C. pelagicus and C. leptoporus in last glacial maximum and Holocene sediments. The discrepancy between experimental and geological results might be explained by adaptation to changing carbonate chemistry.