878 results for surgical site infection rates
Abstract:
Background and aims Fine root decomposition contributes significantly to element cycling in terrestrial ecosystems. However, studies on root decomposition rates and on the factors that potentially influence them are fewer than those on leaf litter decomposition. To study the effects of region and land use intensity on fine root decomposition, we established a large-scale study in three German regions with different climate regimes and soil properties. Methods In 150 forest and 150 grassland sites we deployed litterbags (100 μm mesh size) with standardized litter consisting of fine roots from European beech in forests and from a lowland mesophilous hay meadow in grasslands. In the central study region, we compared decomposition rates of this standardized litter with root litter collected on-site to separate the effect of litter quality from environmental factors. Results Standardized herbaceous roots in grassland soils decomposed on average significantly faster (24 ± 6 % mass loss after 12 months, mean ± SD) than beech roots in forest soils (12 ± 4 %; p < 0.001). Fine root decomposition varied among the three study regions. Land use intensity, in particular N addition, decreased fine root decomposition in grasslands. The initial lignin:N ratio explained 15 % of the variance in grasslands and 11 % in forests. Soil moisture, soil temperature, and C:N ratios of soils together explained 34 % of the variance in fine root mass loss in grasslands and 24 % in forests. Conclusions Grasslands, which have higher fine root biomass and root turnover than forests, also have higher rates of root decomposition. Our results further show that at the regional scale fine root decomposition is influenced by environmental variables such as soil moisture, soil temperature and soil nutrient content. Additional variation is explained by root litter quality.
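To make the reported mass-loss figures easier to compare across studies, they can be converted into decay constants with the single negative exponential model commonly fitted to litterbag data, m(t) = m0 e^(-kt). The Python sketch below does this conversion; it is a generic illustration using the abstract's summary percentages, not a computation from the study's raw data.

```python
# Convert litterbag mass loss to the decay constant k of the single
# negative exponential model m(t) = m0 * exp(-k t). Generic illustration
# using the summary percentages from the abstract, not the raw data.
import math

def decay_constant(mass_loss_fraction, years=1.0):
    """k (per year) from the fraction of initial mass lost after `years`."""
    return -math.log(1.0 - mass_loss_fraction) / years

for litter, loss in [("grassland roots", 0.24), ("beech roots", 0.12)]:
    print(f"{litter}: k = {decay_constant(loss):.3f} per year")
```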
Abstract:
In several studies of antiretroviral treatment (ART) programs for persons with human immunodeficiency virus infection, investigators have reported that there has been a higher rate of loss to follow-up (LTFU) among patients initiating ART in recent years than among patients who initiated ART during earlier time periods. This finding is frequently interpreted as reflecting deterioration of patient retention in the face of increasing patient loads. However, in this paper we demonstrate by simulation that transient gaps in follow-up could lead to bias when standard survival analysis techniques are applied. We created a simulated cohort of patients with different dates of ART initiation. Rates of ART interruption, ART resumption, and mortality were assumed to remain constant over time, but when we applied a standard definition of LTFU, the simulated probability of being classified LTFU at a particular ART duration was substantially higher in recently enrolled cohorts. This suggests that much of the apparent trend towards increased LTFU may be attributed to bias caused by transient interruptions in care. Alternative statistical techniques need to be used when analyzing predictors of LTFU: for example, using "prospective" definitions of LTFU in place of "retrospective" definitions. Similar considerations may apply when analyzing predictors of LTFU from treatment programs for other chronic diseases.
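The mechanism is straightforward to reproduce in code. In the sketch below (Python), monthly interruption and resumption probabilities are held constant and mortality is omitted for brevity, yet a retrospective LTFU definition classifies recently enrolled patients as lost within their first year on ART far more often than early enrollees; all parameter values are invented for illustration.

```python
# Minimal simulation of the bias described above: interruption and
# resumption rates never change, yet apparent LTFU within the first year
# on ART is inflated in recently enrolled cohorts. Parameters are
# illustrative assumptions; mortality is omitted for brevity.
import random

random.seed(1)
P_INTERRUPT, P_RESUME = 0.03, 0.20   # monthly transition probabilities
CLOSE, GAP = 120, 6                  # closure month; LTFU = no visit in last 6 months

def last_visit(start):
    """Calendar month of a patient's last clinic visit before closure."""
    in_care, last = True, start
    for month in range(start + 1, CLOSE):
        if in_care and random.random() < P_INTERRUPT:
            in_care = False
        elif not in_care and random.random() < P_RESUME:
            in_care = True
        if in_care:
            last = month
    return last

def apparent_ltfu_by_12m(start, n=20000):
    """Fraction classified LTFU with event time within 12 months of ART start."""
    hits = 0
    for _ in range(n):
        lv = last_visit(start)
        if CLOSE - lv > GAP and lv - start <= 12:
            hits += 1
    return hits / n

for start in (0, 60, 108):           # early, middle, and recent enrolment
    print(f"enrolled month {start:3d}: apparent LTFU by 12 months = "
          f"{apparent_ltfu_by_12m(start):.1%}")
```

Early enrollees are labelled LTFU in their first year only if they never return over a decade of potential follow-up, whereas a recent enrollee in an ordinary transient gap at database closure is counted immediately, which is exactly the bias the abstract describes.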
Abstract:
The following, from the 12th OESO World Conference: Cancers of the Esophagus, includes commentaries on the role of the nurse in preparation for esophageal resection (ER); the management of patients who develop high-grade dysplasia after having undergone Nissen fundoplication; the trajectory of care for the patient with esophageal cancer; the influence of the site of the tumor on the choice of treatment; the best location for esophagogastrostomy; management of chylous leak after esophagectomy; the optimal approach to managing thoracic esophageal leak after esophagectomy; the choice of operative approach in surgery of the cardioesophageal junction; the advantages of robotic esophagectomy; the place of open esophagectomy; the advantages of esophagectomy compared to definitive chemoradiotherapy; the pathologist's report on the resected specimen; the best way to manage patients with an unsuspected positive microscopic margin after ER; enhanced recovery after surgery for ER: expedited care protocols; and long-term quality of life in patients following esophagectomy.
Abstract:
PURPOSE: The worldwide prevalence of human papillomavirus (HPV) infection is estimated at 9-13 %. Persistent infection can lead to the development of malignant and nonmalignant diseases. Low-risk HPV types are mostly associated with benign lesions such as anogenital warts. In the present systematic review, we examined the impact of smoking on HPV infection and on the development of anogenital warts. METHODS: A systematic literature search was performed using the MEDLINE database for peer-reviewed articles published from January 01, 1985 to November 30, 2013. Pooled rates of HPV prevalence were compared using the χ² test. RESULTS: In both genders, smoking is associated with higher incidence and prevalence rates of HPV infection, with the latter following a dose-effect relationship. The overall HPV prevalence was 48.2 % for smoking patients versus 37.5 % for nonsmoking patients (p < 0.001) (odds ratio (OR) = 1.5, 95 % confidence interval (CI) 1.4-1.7). Smoking also increases persistence rates for high-risk HPV infection, while this correlation is debatable for low-risk HPV. The incidence and recurrence rates of anogenital warts are significantly increased in smokers. CONCLUSIONS: Most current data demonstrate an association between smoking, increased anogenital HPV infection, and the development of anogenital warts. These data add to the long list of reasons for making smoking cessation a keystone of patient health.
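A worked example of the kind of pooled comparison reported above: the cell counts below are invented to reproduce prevalences near 48.2 % and 37.5 %, so the resulting odds ratio and confidence interval only roughly echo the published figures.

```python
# Chi-squared comparison of two prevalences plus an odds ratio with a
# Woolf (log) confidence interval. Cell counts are invented; only the
# prevalences (~48 % vs ~38 %) echo the abstract.
import math
from scipy.stats import chi2_contingency

hpv_smokers, n_smokers = 482, 1000     # 48.2 % prevalence (illustrative)
hpv_nonsmk,  n_nonsmk  = 375, 1000     # 37.5 % prevalence (illustrative)

table = [[hpv_smokers, n_smokers - hpv_smokers],
         [hpv_nonsmk,  n_nonsmk  - hpv_nonsmk]]
chi2, p, _, _ = chi2_contingency(table)

odds_ratio = (table[0][0] * table[1][1]) / (table[0][1] * table[1][0])
se = math.sqrt(sum(1 / c for row in table for c in row))  # SE of log(OR)
lo, hi = (math.exp(math.log(odds_ratio) + s * 1.96 * se) for s in (-1, 1))
print(f"chi2 = {chi2:.1f}, p = {p:.2g}, "
      f"OR = {odds_ratio:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```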
Abstract:
BACKGROUND High early mortality in patients with HIV-1 starting antiretroviral therapy (ART) in sub-Saharan Africa, compared to Europe and North America, is well documented. Longer-term comparisons between settings have been limited by poor ascertainment of mortality in high burden African settings. This study aimed to compare mortality up to four years on ART between South Africa, Europe, and North America. METHODS AND FINDINGS Data from four South African cohorts in which patients lost to follow-up (LTF) could be linked to the national population register to determine vital status were combined with data from Europe and North America. Cumulative mortality, crude and adjusted (for characteristics at ART initiation) mortality rate ratios (relative to South Africa), and predicted mortality rates were described by region at 0-3, 3-6, 6-12, 12-24, and 24-48 months on ART for the period 2001-2010. Of the adults included (30,467 [South Africa], 29,727 [Europe], and 7,160 [North America]), 20,306 (67%), 9,961 (34%), and 824 (12%) were women. Patients began treatment with markedly more advanced disease in South Africa (median CD4 count 102, 213, and 172 cells/µl in South Africa, Europe, and North America, respectively). High early mortality after starting ART in South Africa occurred mainly in patients starting ART with CD4 count <50 cells/µl. Cumulative mortality at 4 years was 16.6%, 4.7%, and 15.3% in South Africa, Europe, and North America, respectively. Mortality was initially much lower in Europe and North America than South Africa, but the differences were reduced or reversed (North America) at longer durations on ART (adjusted rate ratios 0.46, 95% CI 0.37-0.58, and 1.62, 95% CI 1.27-2.05 between 24 and 48 months on ART comparing Europe and North America to South Africa). While bias due to under-ascertainment of mortality was minimised through death registry linkage, residual bias could still be present due to differing approaches to and frequency of linkage. CONCLUSIONS After accounting for under-ascertainment of mortality, with increasing duration on ART, the mortality rate on HIV treatment in South Africa declines to levels comparable to or below those described in participating North American cohorts, while substantially narrowing the differential with the European cohorts.
Abstract:
OBJECTIVES/HYPOTHESIS Study of the clinical evolution of a primary ear, nose, and throat infection complicated by septic thrombophlebitis of the internal jugular vein. STUDY DESIGN Retrospective case-control study. PATIENTS AND METHODS From 1998 to 2010, 23 patients at our institution were diagnosed with septic thrombosis of the internal jugular vein. Diagnostics included microbiologic analysis and imaging such as computed tomography, magnetic resonance imaging, and ultrasound. Therapy included broad-spectrum antibiotics, surgery of the primary infectious lesion, and postoperative anticoagulation. The patients were retrospectively analyzed. RESULTS The primary infection sites were the middle ear (11), oropharynx (8), sinus (3), and oral cavity (1). Fourteen patients needed intensive care unit treatment for a mean duration of 6 days. Seven patients were intubated, and two developed severe acute respiratory distress syndrome. An oropharyngeal primary infection site was most often associated with a prolonged clinical course. Anticoagulation therapy was given in 90% of patients. All 23 patients survived the disseminated infection without consecutive systemic morbidity. CONCLUSION In the pre-antibiotic era, septic internal jugular vein thrombophlebitis was a highly fatal condition with a mortality rate of 90%. Modern imaging techniques allow early and often incidental diagnosis of this clinically hidden complication. Anticoagulation, intensive antibiotic therapy assisted by surgery of the primary infection site, and intensive supportive care can achieve remission rates of 100%. LEVEL OF EVIDENCE 3b. Laryngoscope, 2014.
Abstract:
INTRODUCTION Even though arthroplasty of the ankle joint is considered an established procedure, only about 1,300 endoprostheses are implanted in Germany annually. Arthrodeses of the ankle joint are performed almost three times more often. This may be due to the availability of the procedure (more than twice as many providers perform arthrodesis) as well as the high frequency of revision procedures after arthroplasty postulated in the literature. In those publications, however, there is often no clear differentiation between revision surgery with exchange of components, subsequent interventions due to complications, and subsequent surgery not associated with complications. The German Orthopaedic Foot and Ankle Association's (D. A. F.) registry for total ankle replacement collects data pertaining to perioperative complications as well as the cause, nature, and extent of subsequent interventions, and postoperative patient satisfaction. MATERIAL AND METHODS The D. A. F.'s total ankle replacement registry is a nationwide, voluntary registry. After giving written informed consent, patients can be added to the database by participating providers. Data are collected during the hospital stay for surgical treatment, during routine follow-up visits, and in the context of revision surgery. The information can be submitted in paper-based or online formats. The survey instruments are available as minimum data sets or scientific questionnaires, which include patient-reported outcome measures (PROMs). The pseudonymised clinical data are collected and evaluated at the Institute for Evaluative Research in Medicine, University of Bern, Switzerland (IEFM). The patient-related data remain on the registry's module server in North Rhine-Westphalia, Germany. The registry's methodology as well as the results on revisions and patient satisfaction for 115 patients with a two-year follow-up period are presented. Statistical analyses were performed with SAS™ (Version 9.4, SAS Institute, Inc., Cary, NC, USA). RESULTS About 2½ years after the registry was launched, 621 datasets on primary implantations, 1,427 on follow-up visits, and 121 records on re-operations are available. 49 % of patients received their implants for post-traumatic osteoarthritis, 27 % for primary osteoarthritis, and 15 % for rheumatic disease. More than 90 % of the primary interventions proceeded without complications. Subsequent interventions were recorded for 84 patients, which corresponds to a rate of 13.5 % with respect to the primary implantations. It should be noted that these secondary procedures also include two-stage procedures not due to a complication. "True revisions" are interventions with exchange of components due to mechanical complications and/or infection and were present in 7.6 % of patients. 415 patients commented on their satisfaction with the operative result at the last follow-up: 89.9 % evaluated their outcome as excellent or good, 9.4 % as moderate, and only 0.7 % (3 patients) as poor. In these three cases, component loosening or symptomatic osteoarthritis of the subtalar joint (USG) was present. Two-year follow-up data using the American Orthopaedic Foot and Ankle Society Ankle and Hindfoot Scale (AOFAS-AHS) are already available for 115 patients. The median AOFAS-AHS score increased from 33 points preoperatively to more than 80 points three to six months postoperatively, and this improvement remained nearly constant over the entire two-year follow-up period.
CONCLUSION Covering less than 10 % of the approximately 240 providers in Germany and approximately 12 % of the annually implanted total ankle replacements, the D. A. F. registry is still far from being a comprehensive national registry. Nevertheless, geographical coverage and the inclusion of "high-volume" (more than 100 total ankle replacements a year) and "low-volume" surgeons (fewer than 5 total ankle replacements a year) make the registry representative for Germany. The registry data show that the number of subsequent interventions, and in particular of "true revision" procedures, is markedly lower than the 20 % often postulated in the literature. In addition, a high level of patient satisfaction over the short and medium term is recorded. From the perspective of the authors, these results indicate that total ankle arthroplasty, given a correct indication and appropriate selection of patients, is not inferior to ankle arthrodesis with respect to patient satisfaction and function. The first valid survival rates can be expected about 10 years after the registry's start.
Abstract:
OBJECTIVE Reliable tools to predict long-term outcome among patients with well compensated advanced liver disease due to chronic HCV infection are lacking. DESIGN Risk scores for mortality and for cirrhosis-related complications were constructed with Cox regression analysis in a derivation cohort and evaluated in a validation cohort, both including patients with chronic HCV infection and advanced fibrosis. RESULTS In the derivation cohort, 100/405 patients died during a median 8.1 (IQR 5.7-11.1) years of follow-up. Multivariate Cox analyses showed age (HR=1.06, 95% CI 1.04 to 1.09, p<0.001), male sex (HR=1.91, 95% CI 1.10 to 3.29, p=0.021), platelet count (HR=0.91, 95% CI 0.87 to 0.95, p<0.001) and log10 aspartate aminotransferase/alanine aminotransferase ratio (HR=1.30, 95% CI 1.12 to 1.51, p=0.001) were independently associated with mortality (C statistic=0.78, 95% CI 0.72 to 0.83). In the validation cohort, 58/296 patients with cirrhosis died during a median of 6.6 (IQR 4.4-9.0) years. Among patients with estimated 5-year mortality risks <5%, 5-10% and >10%, the observed 5-year mortality rates in the derivation cohort and validation cohort were 0.9% (95% CI 0.0 to 2.7) and 2.6% (95% CI 0.0 to 6.1), 8.1% (95% CI 1.8 to 14.4) and 8.0% (95% CI 1.3 to 14.7), 21.8% (95% CI 13.2 to 30.4) and 20.9% (95% CI 13.6 to 28.1), respectively (C statistic in validation cohort = 0.76, 95% CI 0.69 to 0.83). The risk score for cirrhosis-related complications also incorporated HCV genotype (C statistic = 0.80, 95% CI 0.76 to 0.83 in the derivation cohort; and 0.74, 95% CI 0.68 to 0.79 in the validation cohort). CONCLUSIONS Prognosis of patients with chronic HCV infection and compensated advanced liver disease can be accurately assessed with risk scores including readily available objective clinical parameters.
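To illustrate how a Cox-based score of this form is applied, the sketch below combines the quoted hazard ratios into a linear predictor and converts it to an absolute 5-year risk. The baseline survival, the reference covariate values, and the platelet unit (per 10 × 10⁹/L) are assumptions; the published score's actual baseline function would be needed for real use.

```python
# Sketch of applying a Cox-type mortality risk score built from the hazard
# ratios quoted above. S0, the reference values, and the platelet unit are
# placeholders, not the published model.
import math

LOG_HR = {
    "age":           math.log(1.06),  # per year of age
    "male":          math.log(1.91),  # male vs female
    "platelets_10":  math.log(0.91),  # per 10 x 10^9 platelets/L (assumed unit)
    "log10_ast_alt": math.log(1.30),  # per unit of log10(AST/ALT)
}
S0_5Y = 0.97                          # assumed baseline 5-year survival
REF = {"age": 55, "male": 0, "platelets_10": 15, "log10_ast_alt": 0.0}

def predicted_5y_mortality(patient):
    """1 - S0^exp(linear predictor), centred on the reference patient."""
    lp = sum(LOG_HR[k] * (patient[k] - REF[k]) for k in LOG_HR)
    return 1.0 - S0_5Y ** math.exp(lp)

# Hypothetical 62-year-old man, platelets 90 x 10^9/L, log10(AST/ALT) = 0.2
example = {"age": 62, "male": 1, "platelets_10": 9, "log10_ast_alt": 0.2}
print(f"predicted 5-year mortality: {predicted_5y_mortality(example):.1%}")
```

With these placeholder inputs the example patient lands in the >10 % five-year risk stratum used in the abstract, which is the kind of stratification the score is designed to support.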
Abstract:
BACKGROUND & AIMS The efficacy and tolerability of faldaprevir, a potent hepatitis C virus (HCV) NS3/4A protease inhibitor, plus peginterferon and ribavirin was assessed in a double-blind, placebo-controlled phase 3 study of treatment-naïve patients with HCV genotype-1 infection. METHODS Patients were randomly assigned (1:2:2) to peginterferon/ribavirin plus: placebo (arm 1, n=132) for 24 weeks; faldaprevir (120 mg, once daily) for 12 or 24 weeks (arm 2, n=259); or faldaprevir (240 mg, once daily) for 12 weeks (arm 3, n=261). In arms 2 and 3, patients with early treatment success (HCV RNA <25 IU/mL at week 4 and undetectable at week 8) stopped all treatment at week 24. Other patients received peginterferon/ribavirin until week 48 unless they met futility criteria. The primary endpoint was sustained virologic response 12 weeks post-treatment (SVR12). RESULTS SVR12 was achieved by 52%, 79%, and 80% of patients in arms 1, 2, and 3, respectively (estimated difference for arms 2 and 3 versus arm 1: 27%, 95% confidence interval 17%-36%; and 29%, 95% confidence interval, 19%-38%, respectively; P<.0001 for both). Early treatment success was achieved by 87% (arm 2) and 89% (arm 3) of patients, of whom 86% and 89% achieved SVR12. Adverse event rates were similar among groups; few adverse events led to discontinuation of all regimen components. CONCLUSIONS Faldaprevir plus peginterferon/ribavirin significantly increased SVR12, compared with peginterferon/ribavirin, in treatment-naïve patients with HCV genotype-1 infection. There do not seem to be any differences in responses of patients given once-daily 120 or 240 mg faldaprevir.
Abstract:
OBJECTIVE: To investigate the prevalence of discontinuation and nonpublication of surgical versus medical randomized controlled trials (RCTs) and to explore risk factors for discontinuation and nonpublication of surgical RCTs. BACKGROUND: Trial discontinuation has significant scientific, ethical, and economic implications. To date, the prevalence of discontinuation of surgical RCTs is unknown. METHODS: All RCT protocols approved between 2000 and 2003 by 6 ethics committees in Canada, Germany, and Switzerland were screened. Baseline characteristics were collected and, if published, full reports retrieved. Risk factors for early discontinuation for slow recruitment and nonpublication were explored using multivariable logistic regression analyses. RESULTS: In total, 863 RCT protocols involving adult patients were identified, 127 in surgery (15%) and 736 in medicine (85%). Surgical trials were discontinued for any reason more often than medical trials [43% vs 27%, risk difference 16% (95% confidence interval [CI]: 5%-26%); P = 0.001] and were more often discontinued for slow recruitment [18% vs 11%, risk difference 8% (95% CI: 0.1%-16%); P = 0.020]. The percentage of trials not published as a full journal article was similar in surgical and medical trials [44% vs 40%, risk difference 4% (95% CI: -5% to 14%); P = 0.373]. Discontinuation of surgical trials was a strong risk factor for nonpublication (odds ratio = 4.18, 95% CI: 1.45-12.06; P = 0.008). CONCLUSIONS: Discontinuation and nonpublication rates were substantial in surgical RCTs, and trial discontinuation was strongly associated with nonpublication. These findings need to be taken into account when interpreting the surgical literature. Surgical trialists should consider feasibility studies before embarking on full-scale trials.
Abstract:
BACKGROUND The success of an intervention to prevent the complications of an infection is influenced by the natural history of the infection. Assumptions about the temporal relationship between infection and the development of sequelae can affect the predicted effect size of an intervention and the sample size calculation. This study investigates how a mathematical model can be used to inform sample size calculations for a randomised controlled trial (RCT) using the example of Chlamydia trachomatis infection and pelvic inflammatory disease (PID). METHODS We used a compartmental model to imitate the structure of a published RCT. We considered three different processes for the timing of PID development, in relation to the initial C. trachomatis infection: immediate, constant throughout, or at the end of the infectious period. For each process we assumed that, of all women infected, the same fraction would develop PID in the absence of an intervention. We examined two sets of assumptions used to calculate the sample size in a published RCT that investigated the effect of chlamydia screening on PID incidence. We also investigated the influence of the natural history parameters of chlamydia on the required sample size. RESULTS The assumed event rates and effect sizes used for the sample size calculation implicitly determined the temporal relationship between chlamydia infection and PID in the model. Even small changes in the assumed PID incidence and relative risk (RR) led to considerable differences in the hypothesised mechanism of PID development. The RR and the sample size needed per group also depend on the natural history parameters of chlamydia. CONCLUSIONS Mathematical modelling helps to understand the temporal relationship between an infection and its sequelae and can show how uncertainties about natural history parameters affect sample size calculations when planning an RCT.
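The argument can be made concrete with a toy calculation: under each timing assumption, the same screening effect on clearance yields a different risk ratio for PID, and hence a different sample size. All rates in the sketch below are invented, and the sample size formula is the standard two-proportion approximation, not the paper's calibrated model.

```python
# Toy version of the point made above: the assumed timing of PID relative
# to chlamydia infection fixes the risk ratio a screening trial can see,
# and with it the required sample size. All rates are invented.
from math import ceil
from scipy.stats import norm

GAMMA = 1.0   # natural clearance rate of chlamydia (per year, assumed)
SIGMA = 0.5   # additional clearance due to screening/treatment (assumed)
RHO   = 0.12  # PID hazard while infected; RHO/(RHO+GAMMA) ~ 0.10 (assumed)
P_PID = 0.10  # PID risk per infected woman without intervention (assumed)

def risk_ratio(process):
    """RR of PID (screened vs unscreened) under each timing assumption."""
    if process == "immediate":   # PID at the moment of infection
        return 1.0               # treatment always comes too late
    if process == "constant":    # constant hazard RHO while infected
        return (RHO / (RHO + GAMMA + SIGMA)) / (RHO / (RHO + GAMMA))
    if process == "at_end":      # PID when the infection clears naturally
        return GAMMA / (GAMMA + SIGMA)   # treated episodes never progress
    raise ValueError(process)

def n_per_group(p1, rr, alpha=0.05, power=0.80):
    """Two-proportion sample size for detecting p1 vs p1*rr."""
    p2 = p1 * rr
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(z**2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2)

for proc in ("immediate", "constant", "at_end"):
    rr = risk_ratio(proc)
    size = "n/a (no detectable effect)" if rr >= 1 else n_per_group(P_PID, rr)
    print(f"{proc:9s}: RR = {rr:.2f}, n per group = {size}")
```

The "immediate" process leaves nothing for screening to prevent, while the other two yield risk ratios below one with different sample sizes, mirroring the abstract's point that the implicit timing assumption drives the trial design.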
Abstract:
We developed a model to calculate a quantitative risk score for individual aquaculture sites. The score indicates the risk of the site being infected with a specific fish pathogen (viral haemorrhagic septicaemia virus (VHSV), infectious haematopoietic necrosis virus, and koi herpes virus), and is intended to be used for risk-ranking sites to support surveillance for demonstration of zone or member state freedom from these pathogens. The inputs to the model include a range of quantitative and qualitative estimates of risk factors organised into five risk themes: (1) Live fish and egg movements; (2) Exposure via water; (3) On-site processing; (4) Short-distance mechanical transmission; (5) Distance-independent mechanical transmission. The calculated risk score for an individual aquaculture site is a value between zero and one and is intended to indicate the risk of a site relative to the risk of other sites (thereby allowing ranking). The model was applied to evaluate 76 rainbow trout farms in 3 countries (42 from England, 32 from Italy and 2 from Switzerland) with the aim of establishing their risk of being infected with VHSV. Risk scores for farms in England and Italy showed great variation, clearly enabling ranking. Scores ranged from 0.002 to 0.254 (mean 0.080) in England and from 0.011 to 0.778 (mean 0.130) in Italy, reflecting the diversity of infection status of farms in these countries. Requirements for broader application of the model are discussed. Cost-efficient farm data collection is important to realise the benefits of a risk-based approach.
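A minimal sketch of theme-weighted scoring and ranking of this kind is shown below. The five themes follow the abstract, but the weights, the per-theme scores, and the farm data are invented; the published model's actual aggregation may differ.

```python
# Minimal sketch of theme-weighted risk scoring and ranking of aquaculture
# sites. The five themes follow the abstract; the weights, per-theme
# scores, and farm data are invented placeholders.
THEMES = ["live_fish_movements", "water_exposure", "on_site_processing",
          "short_dist_mechanical", "dist_indep_mechanical"]
WEIGHTS = {"live_fish_movements": 0.40, "water_exposure": 0.25,
           "on_site_processing": 0.15, "short_dist_mechanical": 0.10,
           "dist_indep_mechanical": 0.10}  # sum to 1, keeping scores in [0, 1]

def site_score(theme_scores):
    """Combine per-theme risk scores (each already scaled to [0, 1])."""
    return sum(WEIGHTS[t] * theme_scores[t] for t in THEMES)

farms = {
    "farm_A": dict(zip(THEMES, [0.8, 0.3, 0.1, 0.2, 0.1])),
    "farm_B": dict(zip(THEMES, [0.1, 0.2, 0.0, 0.1, 0.3])),
    "farm_C": dict(zip(THEMES, [0.5, 0.7, 0.4, 0.3, 0.2])),
}
ranked = sorted(((name, site_score(s)) for name, s in farms.items()),
                key=lambda x: -x[1])       # highest-risk site first
for name, score in ranked:
    print(f"{name}: {score:.3f}")
```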
Abstract:
Trepanation is defined as the intentional perforation of the cranial vault with removal of a piece of skull bone. In Europe, trepanation is known to have been practiced at least since the Neolithic, and it is still found today among some East African peoples. Two skulls with lesions from the Late Iron Age site Münsingen-Rain (420–240 BC) were investigated. The aim of this study was to analyse the lesions and to determine whether they were caused by surgical interventions. Both individuals were analysed by current morphologic-anthropological methods, and radiological examinations were performed with a multislice CT scanner. Additionally, this work surveys trepanations reported in Switzerland and calculates survival rates. In Switzerland, 34 individuals with trepanations have been published. The survival rate tends to be relatively high from the Neolithic to Late Antiquity and then decreases until pre-modern times. The 78% survival rate in Late Iron Age Switzerland indicates that the surgery was often performed successfully. Skull injuries sustained in conflicts could have been a reason for trepanation during the Iron Age.
Abstract:
BACKGROUND This study evaluated whether risk factors for sternal wound infections vary with the type of surgical procedure in cardiac operations. METHODS This was a university hospital surveillance study of 3,249 consecutive patients (28% women) from 2006 to 2010 (median age, 69 years [interquartile range, 60 to 76]; median additive European System for Cardiac Operative Risk Evaluation score, 5 [interquartile range, 3 to 8]) after (1) isolated coronary artery bypass grafting (CABG), (2) isolated valve repair or replacement, or (3) combined valve procedures and CABG. All other operations were excluded. Univariate and multivariate binary logistic regression analyses were conducted to identify independent predictors of the development of sternal wound infections. RESULTS We detected 122 sternal wound infections (3.8%) in 3,249 patients: 74 of 1,857 patients (4.0%) after CABG, 19 of 799 (2.4%) after valve operations, and 29 of 593 (4.9%) after combined procedures. In CABG patients, the first model (model 1) identified bilateral internal thoracic artery harvest, procedural duration exceeding 300 minutes, diabetes, obesity, chronic obstructive pulmonary disease, and female sex as independent predictors of sternal wound infection. A second model (model 2), using the European System for Cardiac Operative Risk Evaluation, revealed that bilateral internal thoracic artery harvest, diabetes, obesity, and the second and third quartiles of the European System for Cardiac Operative Risk Evaluation score were independent predictors. In valve patients, model 1 showed only revision for bleeding as an independent predictor of sternal infection, and model 2 yielded both revision for bleeding and diabetes. For combined valve and CABG operations, both regression models demonstrated that revision for bleeding and duration of operation exceeding 300 minutes were independent predictors of sternal infection. CONCLUSIONS Risk factors for sternal wound infections after cardiac operations vary with the type of surgical procedure. In patients undergoing valve operations or combined operations, procedure-related risk factors (revision for bleeding, duration of operation) independently predict infection. In patients undergoing CABG, not only procedure-related risk factors but also bilateral internal thoracic artery harvest and patient characteristics (diabetes, chronic obstructive pulmonary disease, obesity, female sex) are predictive of sternal wound infection. Preventive interventions may be justified according to the type of operation.
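The two-step strategy (univariate screening followed by a multivariable model) can be sketched as follows on synthetic data; the variable names mirror the abstract, while the data-generating coefficients and the p < 0.05 entry criterion are assumptions for illustration, not the study's analysis plan.

```python
# Sketch of the univariate -> multivariable screening described above,
# using statsmodels on synthetic data. Variable names mirror the abstract;
# the data-generating model and the p < 0.05 screen are assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 3000
df = pd.DataFrame({
    "bita_harvest":    rng.integers(0, 2, n),  # bilateral ITA harvest
    "duration_gt_300": rng.integers(0, 2, n),  # operation > 300 min
    "diabetes":        rng.integers(0, 2, n),
    "obesity":         rng.integers(0, 2, n),
    "copd":            rng.integers(0, 2, n),
    "female":          rng.integers(0, 2, n),
})
# Synthetic outcome with three "true" predictors (coefficients assumed)
logit = -3.5 + 0.8*df.bita_harvest + 0.6*df.duration_gt_300 + 0.5*df.diabetes
df["infection"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

candidates = []
for var in df.columns.drop("infection"):
    m = sm.Logit(df["infection"], sm.add_constant(df[var])).fit(disp=0)
    if m.pvalues[var] < 0.05:           # univariate screen
        candidates.append(var)

full = sm.Logit(df["infection"], sm.add_constant(df[candidates])).fit(disp=0)
print(np.exp(full.params))              # odds ratios; 'const' row = baseline odds
```

Which variables pass the screen will vary with the random draw, but with these coefficients the three true predictors are usually retained, mimicking how the study's models single out independent predictors.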
Abstract:
Proliferative kidney disease (PKD) is an emerging disease threatening wild salmonid populations. In temperature-controlled aquaria, PKD can cause mortality rates of up to 85% in rainbow trout. So far, no data on PKD-related mortality in wild brown trout Salmo trutta fario are available. The aim of this study was to investigate mortality rates and pathology in brown trout kept in a cage within a natural river habitat known to harbor Tetracapsuloides bryosalmonae. Young-of-the-year (YOY) brown trout, free of T. bryosalmonae, were exposed in the River Wutach, in the northeast of Switzerland, during 3 summer months. Samples of wild brown trout caught by electrofishing near the cage location were examined in parallel. The incidence of PKD in cage-exposed animals (69%) was not significantly different from the disease prevalence in wild fish (82 and 80% at the upstream and downstream locations, respectively). The mortality in cage-exposed animals, however, was as low as 15%. At the termination of the exposure experiment, surviving fish showed histological lesions typical of PKD regression, suggesting that many YOY brown trout survive the initial infection. Our results at the River Wutach suggest that PKD in brown trout does not always result in high mortality under natural conditions.