84 results for "Increasing failure rate"
Abstract:
The choice and duration of antiplatelet therapy for secondary prevention of coronary artery disease (CAD) are determined by the clinical context and treatment strategy. Oral antiplatelet agents for secondary prevention include the cyclo-oxygenase-1 inhibitor aspirin and the ADP-dependent P2Y12 inhibitors clopidogrel, prasugrel and ticagrelor. Aspirin constitutes the cornerstone of secondary prevention of CAD and is complemented by clopidogrel in patients with stable CAD undergoing percutaneous coronary intervention. Among patients with acute coronary syndrome, prasugrel and ticagrelor improve net clinical outcome by reducing ischaemic adverse events at the expense of an increased risk of bleeding as compared with clopidogrel. Prasugrel appears particularly effective in reducing the risk of stent thrombosis among patients with ST-elevation myocardial infarction compared with clopidogrel, and offered a greater net clinical benefit among patients with diabetes than among patients without diabetes. Ticagrelor is associated with reduced mortality without increasing the rate of coronary artery bypass graft (CABG)-related bleeding as compared with clopidogrel. Dual antiplatelet therapy should be continued for a minimum of 1 year among patients with acute coronary syndrome irrespective of stent type; among patients with stable CAD treated with new-generation drug-eluting stents, available data suggest no benefit of prolonging antiplatelet treatment beyond 6 months.
Abstract:
Reduced bone stock can result in fractures, which mostly occur in the spine, distal radius, and proximal femur. In the case of operative treatment, osteoporosis is associated with an increased failure rate. To estimate implant anchorage, mechanical methods that measure bone strength intraoperatively appear promising. It has been shown in vitro that mechanical peak torque correlates with local bone mineral density and screw failure load in the hip, hindfoot, humerus, and spine. One device to measure mechanical peak torque is the DensiProbe (AO Research Institute, Davos, Switzerland). The device has demonstrated its effectiveness in measuring mechanical peak torque in mechanical testing setups for use in the hip, hindfoot, and spine. In all studies, mechanical torque measurements correlated with local bone mineral density and screw failure load. The technique allows the surgeon to judge local bone strength intraoperatively, directly at the region of interest, and gives valuable information on whether additional augmentation is needed. We summarize the methods of this new technique, its advantages and limitations, and give an overview of current and possible future applications.
Abstract:
OBJECTIVES Valve-sparing root replacement (VSRR) is thought to reduce the rate of thromboembolic and bleeding events compared with mechanical aortic root replacement (MRR) with a composite graft by avoiding oral anticoagulation. However, as VSRR carries a certain risk of subsequent reintervention, decision-making in the individual patient can be challenging. METHODS Of 100 Marfan syndrome (MFS) patients who underwent 169 aortic surgeries and were followed at our institution since 1995, 59 consecutive patients without a history of dissection or prior aortic surgery underwent elective VSRR or MRR and were retrospectively analysed. RESULTS VSRR was performed in 29 patients (David n = 24, Yacoub n = 5) and MRR in 30 patients. The mean age was 33 ± 15 years. The mean follow-up after VSRR was 6.5 ± 4 years (180 patient-years) compared with 8.8 ± 9 years (274 patient-years) after MRR. Reoperation rates after root remodelling (Yacoub) were significantly higher than after the reimplantation (David) procedure (60 vs 4.2%, P = 0.01). The need for reintervention after the reimplantation procedure (0.8% per patient-year) was not significantly higher than after MRR (P = 0.44), but follow-up after VSRR was significantly shorter (P = 0.03). There was neither significant morbidity nor mortality associated with root reoperations. There were no neurological events after VSRR compared with four stroke/intracranial bleeding events in the MRR group (log-rank, P = 0.11), translating into an event rate of 1.46% per patient-year following MRR. CONCLUSION The calculated annual failure rate after VSRR using the reimplantation technique was lower than the annual risk of thromboembolic or bleeding events. Since the perioperative risk of reintervention following VSRR is low, patients might benefit from VSRR even if redo surgery becomes necessary during follow-up.
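As a reading aid (not part of the original abstract), the quoted MRR event rate follows directly from the counts reported above, assuming a constant event rate over the accumulated follow-up:

\[ \text{event rate}_{\text{MRR}} = \frac{4\ \text{events}}{274\ \text{patient-years}} \approx 0.0146\ \text{per patient-year} \approx 1.46\%\ \text{per patient-year} \]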
Abstract:
OBJECT In ventriculoperitoneal (VP) shunt surgery, laparoscopic assistance can be used for placement of the peritoneal catheter. Until now, the efficacy of laparoscopic shunt placement has been investigated only in retrospective and nonrandomized prospective studies, which have reported decreased distal shunt dysfunction rates in patients undergoing laparoscopic placement compared with mini-laparotomy cohorts. In this randomized controlled trial the authors compared rates of shunt failure in patients who underwent laparoscopic surgery for peritoneal catheter placement with rates in patients who underwent traditional mini-laparotomy. METHODS One hundred twenty patients scheduled for VP shunt surgery were randomized to laparoscopic surgery or mini-laparotomy for insertion of the peritoneal catheter. The primary endpoint was the rate of overall shunt complication or failure within the first 12 months after surgery. Secondary endpoints were distal shunt failure, overall complication/failure, duration of surgery and hospitalization, and morbidity. RESULTS The overall shunt complication/failure rate was 15% (9 of 60 cases) in the laparoscopic group and 18.3% (11 of 60 cases) in the mini-laparotomy group (p = 0.404). Patients in the laparoscopic group had no distal shunt failures; in contrast, 5 (8%) of 60 patients in the mini-laparotomy group experienced distal shunt failure (p = 0.029). Intraoperative complications occurred in 2 patients (both in the laparoscopic group), and abdominal pain led to catheter removal in 1 patient per group. Infections occurred in 1 patient in the laparoscopic group and 3 in the mini-laparotomy group. The mean durations of surgery and hospitalization were similar in the 2 groups. CONCLUSIONS While overall shunt failure rates were similar in the 2 groups, the use of laparoscopic shunt placement significantly reduced the rate of distal shunt failure compared with mini-laparotomy.
Abstract:
This study analysed the outcome of 563 Aplastic Anaemia (AA) children aged 0-12 years reported to the Severe Aplastic Anaemia Working Party database of the European Society for Blood and Marrow Transplantation, according to treatment received. Overall survival (OS) after upfront human leucocyte antigen-matched family donor (MFD) haematopoietic stem cell transplantation (HSCT) or immunosuppressive treatment (IST) was 91% vs. 87% (P 0·18). Event-free survival (EFS) after upfront MFD HSCT or IST was 87% vs. 33% (P 0·001). Ninety-one of 167 patients (55%) failed front-line IST and underwent rescue HSCT. The OS of this rescue group was 83% compared with 91% for upfront MFD HSCT patients and 97% for those who did not fail IST up-front (P 0·017). Rejection was 2% for MFD HSCT and HSCT post-IST failure (P 0·73). Acute graft-versus-host disease (GVHD) grade II-IV was 8% in MFD graft vs. 25% for HSCT post-IST failure (P < 0·0001). Chronic GVHD was 6% in MFD HSCT vs. 20% in HSCT post-IST failure (P < 0·0001). MFD HSCT is an excellent therapy for children with AA. IST has a high failure rate, but remains a reasonable first-line choice if MFD HSCT is not available because high OS enables access to HSCT, which is a very good rescue option.
Abstract:
The ATLS program of the American College of Surgeons is probably the most important globally active training program dedicated to improving trauma management. Detection of acute haemorrhagic shock is one of the key issues in clinical practice and thus also in medical teaching. In this issue of the journal, William Schulz and Ian McConachrie critically review the ATLS shock classification (Table 1), which has been criticized after several attempts at validation have failed [1]. The main problem is that distinct ranges of heart rate are related to ranges of uncompensated blood loss, and that the heart rate decrease observed in severe haemorrhagic shock is ignored [2].
Table 1. Estimated blood loss based on the patient's initial presentation (ATLS Student Course Manual, 9th Edition, American College of Surgeons 2012).
Parameter: Class I | Class II | Class III | Class IV
Blood loss (ml): up to 750 | 750–1500 | 1500–2000 | >2000
Blood loss (% blood volume): up to 15% | 15–30% | 30–40% | >40%
Pulse rate (bpm): <100 | 100–120 | 120–140 | >140
Systolic blood pressure: normal | normal | decreased | decreased
Pulse pressure: normal or increased | decreased | decreased | decreased
Respiratory rate: 14–20 | 20–30 | 30–40 | >35
Urine output (ml/h): >30 | 20–30 | 5–15 | negligible
CNS/mental status: slightly anxious | mildly anxious | anxious, confused | confused, lethargic
Initial fluid replacement: crystalloid | crystalloid | crystalloid and blood | crystalloid and blood
In a retrospective evaluation of the Trauma Audit and Research Network (TARN) database, blood loss was estimated from the injuries in nearly 165,000 adult trauma patients and each patient was allocated to one of the four ATLS shock classes [3]. Although heart rate increased and systolic blood pressure decreased from class I to class IV, respiratory rate and GCS were similar. The median heart rate in class IV patients was substantially lower than the value of 140 min⁻¹ postulated by ATLS. Moreover, deterioration of the different parameters does not necessarily run in parallel, as suggested by the ATLS shock classification [4] and [5]. In all these studies, injury severity score (ISS) and mortality increased with increasing shock class [3] and with increasing heart rate and decreasing blood pressure [4] and [5]. This supports the general concept that the higher the heart rate and the lower the blood pressure, the sicker the patient. A prospective study attempted to validate a shock classification derived from the ATLS shock classes [6]. The authors used a combination of heart rate, blood pressure, clinically estimated blood loss and response to fluid resuscitation to classify trauma patients (Table 2) [6]. In their initial assessment of 715 predominantly blunt trauma patients, 78% were classified as normal (Class 0), 14% as Class I, 6% as Class II and only 1% as Class III and Class IV, respectively. This corresponds to the results of the previous retrospective studies [4] and [5]. The main endpoint used in the prospective study was therefore the presence or absence of significant haemorrhage, defined as chest tube drainage >500 ml, evidence of >500 ml of blood loss in the peritoneum, retroperitoneum or pelvic cavity on CT scan, or requirement of any blood transfusion >2000 ml of crystalloid. Because of the low prevalence of class II or higher grades, statistical evaluation was limited to a comparison between Class 0 and Classes I–IV combined.
As in the retrospective studies, Lawton did not find a statistically significant difference in heart rate or blood pressure among the five groups either, although there was a tendency towards a higher heart rate in Class II patients. Apparently, classification during the primary survey did not rely on vital signs but considered the rather soft criterion of "clinical estimation of blood loss" and the requirement for fluid substitution. This suggests that allocation of an individual patient to a shock class was probably more an intuitive decision than an objective calculation based on the shock classification. Nevertheless, it was a significant predictor of ISS [6].
Table 2. Shock grade categories in the prospective validation study (Lawton, 2014) [6].
Category: Normal (no haemorrhage) | Class I (mild) | Class II (moderate) | Class III (severe) | Class IV (moribund)
Vitals: normal | normal | HR >100 with SBP >90 mmHg | SBP <90 mmHg | SBP <90 mmHg or imminent arrest
Response to fluid bolus (1000 ml): not applicable | yes, no further fluid required | yes, no further fluid required | requires repeated fluid boluses | declining SBP despite fluid boluses
Estimated blood loss (ml): none | up to 750 | 750–1500 | 1500–2000 | >2000
What does this mean for clinical practice and medical teaching? All these studies illustrate the difficulty of validating a useful and accepted general physiological concept of the organism's response to fluid loss: a decrease in cardiac output, an increase in heart rate and a decrease in pulse pressure occur first, with hypotension and bradycardia occurring only later. Increasing heart rate, increasing diastolic blood pressure or decreasing systolic blood pressure should make any clinician consider hypovolaemia first, because it is treatable and deterioration of the patient is preventable. This is true for the patient on the ward, the sedated patient in the intensive care unit and the anaesthetized patient in the OR. We will therefore continue to teach this typical pattern, but will also continue to mention the exceptions and pitfalls at a second stage. The shock classification of ATLS is primarily used to illustrate the typical pattern of acute haemorrhagic shock (tachycardia and hypotension) as opposed to the Cushing reflex (bradycardia and hypertension) in severe head injury and intracranial hypertension, or to the neurogenic shock of acute tetraplegia or high paraplegia (relative bradycardia and hypotension). Schulz and McConachrie nicely summarize the various confounders and exceptions to the general pattern and explain why, in clinical reality, patients often do not present with the "typical" pictures of our textbooks [1]. ATLS also refers to the pitfalls in the signs of acute haemorrhage (advanced age, athletes, pregnancy, medications and pacemakers) and explicitly states that individual subjects may not follow the general pattern. Obviously, the ATLS shock classification, which is the basis for a number of questions in the written test of the ATLS student course and which has been used for decades, probably needs modification and cannot be literally applied in clinical practice. The European Trauma Course, another important trauma training program, uses the same parameters to estimate blood loss, together with clinical examination and laboratory findings (e.g. base deficit and lactate), but does not use a shock classification related to absolute values. In conclusion, the typical physiological response to haemorrhage as illustrated by the ATLS shock classes remains an important issue in clinical practice and in teaching.
The estimation of the severity of haemorrhage in the initial assessment of trauma patients is not (and never was) based solely on vital signs; it also includes the pattern of injuries, the requirement for fluid substitution and potential confounders. Vital signs are not obsolete, especially in the course of treatment, but they must be interpreted in view of the clinical context. Conflict of interest: None declared. Member of the Swiss national ATLS core faculty.
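To make the Table 1 thresholds concrete, the following is a minimal illustrative sketch (not an ATLS tool; the function name and structure are assumptions made here for clarity) that maps an estimated blood loss in millilitres to the corresponding ATLS shock class:

# Hypothetical illustration of the Table 1 blood-loss thresholds; boundary values
# are assigned to the lower class, which is an assumption made for this sketch.
def atls_class_from_blood_loss(blood_loss_ml: float) -> str:
    """Return the ATLS shock class (I-IV) for an estimated blood loss in millilitres."""
    if blood_loss_ml <= 750:
        return "Class I"    # up to 750 ml (<=15% of blood volume)
    elif blood_loss_ml <= 1500:
        return "Class II"   # 750-1500 ml (15-30%)
    elif blood_loss_ml <= 2000:
        return "Class III"  # 1500-2000 ml (30-40%)
    else:
        return "Class IV"   # >2000 ml (>40%)

if __name__ == "__main__":
    for loss in (500, 1200, 1800, 2500):
        print(loss, "ml ->", atls_class_from_blood_loss(loss))

A real triage aid would of course have to combine several parameters rather than blood loss alone, which is exactly the difficulty the validation studies discussed above highlight.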
Abstract:
PURPOSE The aim of this study was to analyze the patient pool referred to a specialty clinic for implant surgery over a 3-year period. MATERIALS AND METHODS All patients receiving dental implants between 2008 and 2010 at the Department of Oral Surgery and Stomatology were included in the study. As primary outcome parameters, the patients were analyzed according to the following criteria: age, sex, systemic diseases, and indication for therapy. For the inserted implants, the type of surgical procedure, the types of implants placed, postsurgical complications, and early failures were recorded. A logistic regression analysis was performed to identify possible local and systemic risk factors for complications. As a secondary outcome, data regarding demographics and surgical procedures were compared with the findings of a historic study group (2002 to 2004). RESULTS A total of 1,568 patients (792 women and 776 men; mean age, 52.6 years) received 2,279 implants. The most frequent indication was a single-tooth gap (52.8%). Augmentative procedures were performed in 60% of the cases. Tissue-level implants (72.1%) were more frequently used than bone-level implants (27.9%). Regarding dimensions of the implants, a diameter of 4.1 mm (59.7%) and a length of 10 mm (55.0%) were most often utilized. An early failure rate of 0.6% was recorded (13 implants). Patients were older and received more implants in the maxilla, and the complexity of surgical interventions had increased when compared to the patient pool of 2002 to 2004. CONCLUSION Implant therapy performed in a surgical specialty clinic utilizing strict patient selection and evidence-based surgical protocols showed a very low early failure rate of 0.6%.
Abstract:
Context Long-term antiretroviral therapy (ART) use in resource-limited countries leads to increasing numbers of patients with HIV taking second-line therapy. Limited access to further therapeutic options makes essential the evaluation of second-line regimen efficacy in these settings. Objectives To investigate failure rates in patients receiving second-line therapy and factors associated with failure and death. Design, Setting, and Participants Multicohort study of 632 patients >14 years old receiving second-line therapy for more than 6 months in 27 ART programs in Africa and Asia between January 2001 and October 2008. Main Outcome Measures Clinical, immunological, virological, and immunovirological failure (first diagnosed episode of immunological or virological failure) rates, and mortality after 6 months of second-line therapy use. Sensitivity analyses were performed using alternative CD4 cell count thresholds for immunological and immunovirological definitions of failure and for cohort attrition instead of death. Results The 632 patients provided 740.7 person-years of follow-up; 119 (18.8%) met World Health Organization failure criteria after a median 11.9 months following the start of second-line therapy (interquartile range [IQR], 8.7-17.0 months), and 34 (5.4%) died after a median 15.1 months (IQR, 11.9-25.7 months). Failure rates were lower in those who changed 2 nucleoside reverse transcriptase inhibitors (NRTIs) instead of 1 (179.2 vs 251.6 per 1000 person-years; incidence rate ratio [IRR], 0.64; 95% confidence interval [CI], 0.42-0.96), and higher in those with lowest adherence index (383.5 vs 176.0 per 1000 person-years; IRR, 3.14; 95% CI, 1.67-5.90 for <80% vs ≥95% [percentage adherent, as represented by percentage of appointments attended with no delay]). Failure rates increased with lower CD4 cell counts when second-line therapy was started, from 156.3 vs 96.2 per 1000 person-years; IRR, 1.59 (95% CI, 0.78-3.25) for 100 to 199/μL to 336.8 per 1000 person-years; IRR, 3.32 (95% CI, 1.81-6.08) for less than 50/μL vs 200/μL or higher; and decreased with time using second-line therapy, from 250.0 vs 123.2 per 1000 person-years; IRR, 1.90 (95% CI, 1.19-3.02) for 6 to 11 months to 212.0 per 1000 person-years; 1.71 (95% CI, 1.01-2.88) for 12 to 17 months vs 18 or more months. Mortality for those taking second-line therapy was lower in women (32.4 vs 68.3 per 1000 person-years; hazard ratio [HR], 0.45; 95% CI, 0.23-0.91); and higher in patients with treatment failure of any type (91.9 vs 28.1 per 1000 person-years; HR, 2.83; 95% CI, 1.38-5.80). Sensitivity analyses showed similar results. Conclusions Among patients in Africa and Asia receiving second-line therapy for HIV, treatment failure was associated with low CD4 cell counts at second-line therapy start, use of suboptimal second-line regimens, and poor adherence. Mortality was associated with diagnosed treatment failure.
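For orientation (not part of the original abstract), the failure rates and incidence rate ratios quoted above follow the usual person-time definitions; note that the reported IRRs are model-based, adjusted estimates, so they need not equal the crude ratio of the quoted rates:

\[ \text{rate per 1000 person-years} = 1000 \times \frac{\text{number of failures}}{\text{person-years at risk}}, \qquad \text{IRR} = \frac{\text{rate in comparison group}}{\text{rate in reference group}} \]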
Abstract:
Resting heart rate is a promising modifiable cardiovascular risk marker in older adults, but the mechanisms linking heart rate to cardiovascular disease are not fully understood. We aimed to assess the association between resting heart rate and incident heart failure (HF) and cardiovascular mortality, and to examine whether these associations might be attributable to systemic inflammation and endothelial dysfunction.
Abstract:
BACKGROUND: Psychological distress, poor disease-specific quality of life (QoL), and a reduction in vagally mediated early heart rate recovery (HRR) after exercise have all previously predicted morbidity and mortality in patients with chronic heart failure (CHF). We hypothesized lower HRR with greater psychological distress and poorer QoL in CHF. DESIGN: All assessments were made at the beginning of a comprehensive cardiac outpatient rehabilitation intervention program. METHODS: Fifty-six CHF patients (mean age 58 ± 12 years, 84% men) completed the Hospital Anxiety and Depression Scale and the Minnesota Living With Heart Failure Questionnaire. HRR was determined as the difference between HR at the end of exercise and 1 min after exercise termination (HRR-1). RESULTS: Elevated levels of anxiety symptoms (P = 0.005) as well as decreased levels of Minnesota Living With Heart Failure Questionnaire total (P = 0.025), physical (P = 0.026), and emotional (P = 0.017) QoL were independently associated with blunted HRR-1. Anxiety, total, physical, and emotional QoL explained 11.4, 8, 7.8, and 9.0%, respectively, of the variance after controlling for covariates. Depressed mood was not associated with HRR-1 (P = 0.20). CONCLUSION: Increased psychological distress, in the form of elevated anxiety symptoms, and impaired QoL were independent correlates of reduced HRR-1 in patients with CHF. Reduced vagal tone might explain part of the adverse clinical outcome previously observed in CHF patients in relation to psychological distress and poor disease-specific QoL.
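In formula form, the HRR-1 index used in this and the following study is simply the definition stated above:

\[ \text{HRR-1} = \text{HR}_{\text{end of exercise}} - \text{HR}_{1\ \text{min after exercise}} \]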
Abstract:
OBJECTIVE: Vital exhaustion and type D personality have previously predicted mortality and cardiac events in patients with chronic heart failure (CHF). Reduced heart rate recovery (HRR) also predicts morbidity and mortality in CHF. We hypothesized that elevated levels of vital exhaustion and type D personality are both associated with decreased HRR. METHODS: Fifty-one patients with CHF (mean age 58 ± 12 years, 82% men) and left ventricular ejection fraction (LVEF) ≤40% underwent standard exercise testing before receiving outpatient cardiac rehabilitation. They completed the 9-item short form of the Maastricht Vital Exhaustion Questionnaire and the 14-item type D questionnaire asking about negative affectivity and social inhibition. HRR was calculated as the difference between heart rate at the end of exercise and 1 min after abrupt cessation of exercise (HRR-1). Regression analyses were adjusted for gender, age, LVEF, and maximum exercise capacity. RESULTS: Vital exhaustion explained 8.4% of the variance in continuous HRR-1 (p = 0.045). For each point increase on the vital exhaustion score (range 0-18) there was a mean ± SEM decrease of 0.54 ± 0.26 bpm in HRR-1. Type D personality showed a trend toward statistical significance for being associated with lower levels of HRR-1, explaining 6.5% of the variance (p < 0.08). The likelihood of having HRR-1 ≤18 bpm was significantly higher in patients with type D personality than in those without (odds ratio = 7.62, 95% CI 1.50-38.80). CONCLUSIONS: Elevated levels of vital exhaustion and type D personality were both independently associated with reduced HRR-1. The findings provide a hitherto unexplored psychobiological explanation for poor cardiac outcome in patients with CHF.
Abstract:
Antisaccade errors are attributed to failure to inhibit the habitual prosaccade. We investigated whether the amount of information about the required response the patient has before the trial begins also contributes to error rate. Participants performed antisaccades in five conditions. The traditional design had two goals on the left and right horizontal meridians. In the second condition, stimulus-goal confusability between trials was eliminated by displacing one goal upward. In the third, hemifield uncertainty was eliminated by placing both goals in the same hemifield. In the fourth, goal uncertainty was eliminated by having only one goal, but interspersed with no-go trials. The fifth condition eliminated all uncertainty by having the same goal on every trial. Antisaccade error rate increased by 2% with each additional source of uncertainty, with the main effect being hemifield information, and a trend for stimulus-goal confusability. A control experiment for the effects of increasing angular separation between targets without changing these types of prior response information showed no effects on latency or error rate. We conclude that other factors besides prosaccade inhibition contribute to antisaccade error rates in traditional designs, possibly by modulating the strength of goal activation.
Abstract:
Objectives: To assess the biological and technical complication rates of single crowns on vital teeth (SC-V), on endodontically treated teeth without post and core (SC-E), on teeth with a cast post and core (SC-PC) and on implants (SC-I). Material and methods: From 392 patients with chronic periodontitis treated and documented by graduate students during the period from 1978 to 2002, 199 were reexamined during 2005 for this retrospective cohort study, and 64 of these patients were treated with SCs. Statistical analysis included Kaplan–Meier survival functions and event rates per 100 years of object-time. Poisson regression was used to compare the four groups of crowns with respect to the incidence rate ratio of failures, and of failures and complications combined, over 10 years and over the entire observation period. Results: Forty-one (64%) female and 23 (36%) male patients participated in the reexamination. At the time of seating the crowns, the mean patient age was 46.8 (range 24–66.3) years. One hundred and sixty-eight single-unit crowns were incorporated. Their mean follow-up time was 11.8 (range 0.8–26.4) years. During the time of observation, 22 biological and 11 technical complications occurred; 19 SCs were lost. The chance for SC-V (n = 56) to remain free of any failure or complication after 10 years was 89.3% (95% confidence interval [CI] 76.1–95.4), compared with 85.8% (95% CI 66–94.5) for SC-E (n = 34), 75.9% (95% CI 58.8–86.7) for SC-PC (n = 39) and 66.2% (95% CI 45.1–80.7) for SC-I (n = 39). Over 10 years, 95% of SC-I survived, but they demonstrated a cumulative incidence of failure or complication of 34%. Compared with SC-E, SC-I were 3.5 times more likely to yield failures or complications, and SC-PC failed 1.7 times more frequently than did SC-E. SC-V had the lowest rate of failures or complications over the 10 years. Conclusions: While SCs on vital teeth have the best prognosis, those on endodontically treated teeth have a slightly poorer prognosis over 10 years. Crowns on teeth with posts and cores and implant-supported SCs displayed the highest incidence of failures and complications.
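As a reading aid (not stated explicitly in the abstract), an event rate per 100 years of object-time is conventionally computed as:

\[ \text{event rate per 100 object-years} = 100 \times \frac{\text{number of events}}{\text{total observation time of the objects in years}} \]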
Abstract:
Atrial fibrillation (AF) and heart failure (HF) are common and interrelated conditions, each promoting the other, and both associated with increased mortality. HF leads to structural and electrical atrial remodeling, thus creating the basis for the development and perpetuation of AF; and AF may lead to hemodynamic deterioration and the development of tachycardia-mediated cardiomyopathy. Stroke prevention by antithrombotic therapy is crucial in patients with AF and HF. Of the 2 principal therapeutic strategies to treat AF, rate control and rhythm control, neither has been shown to be superior to the other in terms of survival, despite better survival in patients with sinus rhythm compared with those in AF. Antiarrhythmic drug toxicity and poor efficacy are concerns. Catheter ablation of AF can establish sinus rhythm without the risks of antiarrhythmic drug therapy, but has important procedural risks, and data from randomized trials showing a survival benefit of this treatment strategy are still lacking. In intractable cases, ablation of the atrioventricular junction and placement of a permanent pacemaker is a treatment alternative; and biventricular pacing may prevent or reduce the negative consequences of chronic right ventricular pacing.