23 results for nosocomial diarrhoea
in University of Queensland eSpace - Australia
Abstract:
Objective: To evaluate the efficacy of Lactobacillus rhamnosus GG in the prevention of antibiotic-associated diarrhoea. Data Sources: A computer-based search of MEDLINE, CINAHL, AMED, the Cochrane Controlled Trials Register and the Cochrane Database of Systematic Reviews was conducted. A hand-search of the bibliographies of relevant papers and previous meta-analyses was undertaken. Review Methods: Trials were included in the review if they compared the effects of L. rhamnosus GG and placebo and listed diarrhoea as a primary end-point. Studies were excluded if they were not placebo-controlled or utilised other probiotic strains. Results: Six trials were found that met all eligibility requirements. Significant statistical heterogeneity of the trials precluded meta-analysis. Four of the six trials found a significant reduction in the risk of antibiotic-associated diarrhoea with co-administration of Lactobacillus GG. One of the trials found a reduced number of days with antibiotic-induced diarrhoea with Lactobacillus GG administration, whilst the final trial found no benefit of Lactobacillus GG supplementation. Conclusion: Additional research is needed to further clarify the effectiveness of Lactobacillus GG in the prevention of antibiotic-associated diarrhoea. Copyright (c) 2005 S. Karger AG, Basel.
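The heterogeneity that precluded meta-analysis here is conventionally quantified with Cochran's Q and the I-squared statistic. A minimal sketch, using hypothetical risk ratios and 95% confidence intervals in place of the six trials' actual results (which the abstract does not report):

```python
import math

# Hypothetical (RR, CI lower, CI upper) triples standing in for the six
# trials; the review itself does not report these values.
trials = [
    (0.30, 0.15, 0.60),
    (0.45, 0.25, 0.80),
    (0.70, 0.40, 1.20),
    (0.25, 0.10, 0.65),
    (0.60, 0.35, 1.05),
    (1.05, 0.70, 1.60),
]

# Work on the log scale; back out each trial's standard error from its CI.
logs, weights = [], []
for rr, lo, hi in trials:
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
    logs.append(math.log(rr))
    weights.append(1.0 / se**2)

# Fixed-effect pooled estimate and Cochran's Q.
pooled = sum(w * y for w, y in zip(weights, logs)) / sum(weights)
q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, logs))
df = len(trials) - 1

# I^2: share of variability attributable to heterogeneity rather than chance.
i2 = max(0.0, (q - df) / q) * 100
print(f"Q = {q:.2f} on {df} df, I^2 = {i2:.0f}%")
```

An I-squared well above roughly 50% is the usual informal threshold for substantial heterogeneity.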
Abstract:
Background Estimates of the disease burden due to multiple risk factors can show the potential gain from combined preventive measures. But few such investigations have been attempted, and none on a global scale. Our aim was to estimate the potential health benefits from removal of multiple major risk factors. Methods We assessed the burden of disease and injury attributable to the joint effects of 20 selected leading risk factors in 14 epidemiological subregions of the world. We estimated population attributable fractions, defined as the proportional reduction in disease or mortality that would occur if exposure to a risk factor were reduced to an alternative level, from data for risk factor prevalence and hazard size. For every disease, we estimated joint population attributable fractions, for multiple risk factors, by age and sex, from the direct contributions of individual risk factors. To obtain the direct hazards, we reviewed publications and re-analysed cohort data to account for that part of hazard that is mediated through other risks. Results Globally, an estimated 47% of premature deaths and 39% of total disease burden in 2000 resulted from the joint effects of the risk factors considered. These risks caused a substantial proportion of important diseases, including diarrhoea (92-94%), lower respiratory infections (55-62%), lung cancer (72%), chronic obstructive pulmonary disease (60%), ischaemic heart disease (83-89%), and stroke (70-76%). Removal of these risks would have increased global healthy life expectancy by 9.3 years (17%), ranging from 4.4 years (6%) in the developed countries of the western Pacific to 16.1 years (43%) in parts of sub-Saharan Africa. Interpretation Removal of major risk factors would not only increase healthy life expectancy in every region, but also reduce some of the differences between regions. The potential for disease prevention and health gain from tackling major known risks simultaneously would be substantial.
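The combination step described in Methods can be illustrated with Levin's formula for a single population attributable fraction plus the textbook multiplicative rule for joint PAFs. A sketch with made-up prevalences and relative risks; note the study itself used "direct" hazards adjusted for mediation between risks rather than this simple independence assumption:

```python
def paf(prevalence: float, relative_risk: float) -> float:
    """Levin's population attributable fraction for one risk factor."""
    excess = prevalence * (relative_risk - 1.0)
    return excess / (excess + 1.0)

def joint_paf(pafs):
    """Joint PAF assuming risks act independently and multiplicatively.

    The study adjusts for hazard mediated through other risks; this is
    the simplified textbook combination rule.
    """
    remaining = 1.0
    for p in pafs:
        remaining *= (1.0 - p)
    return 1.0 - remaining

# Illustrative (prevalence, relative risk) pairs -- not the paper's data.
factors = [(0.25, 2.0), (0.40, 1.5), (0.10, 3.0)]
individual = [paf(p, rr) for p, rr in factors]
print([round(x, 3) for x in individual])
print(round(joint_paf(individual), 3))  # less than the sum of individual PAFs
```

The joint fraction is always smaller than the sum of the individual fractions, which is why naively adding single-risk estimates overstates the combined burden.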
Abstract:
The purpose of this study was to prospectively examine the effectiveness and tolerability of a simple radiotherapy technique for the palliation of symptomatic liver metastases. Twenty-eight patients with symptomatic liver metastases were enrolled from seven centres, and received targeted (partial or whole) liver irradiation consisting of 10 Gy in two fractions over 2 days. Symptoms at baseline were hepatic pain (27 patients), abdominal distension (19), night sweats (12), nausea (18) and vomiting (8). Twenty-two patients (76%) had failed previous treatment with chemotherapy, hormonal therapy and/or high-dose steroids. Symptoms and potential toxicities were prospectively assessed at the time of treatment, then 2, 6 and 10 weeks later. Individual symptom response rates were 53-66% at 2 weeks. Partial or complete global symptomatic responses were noted in 15 patients (54%) overall. The treatment was well tolerated with two patients (7%) experiencing grade 3 toxicity (one vomiting and one diarrhoea); however, four patients reported temporary worsening of pain shortly after treatment. This simple and well-tolerated treatment achieves useful palliation.
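For context, the biological potency of a schedule such as 10 Gy in two fractions is usually compared to other schedules via the linear-quadratic biologically effective dose (BED). The calculation below is standard radiobiology, not part of this study, and the alpha/beta ratios are conventional assumptions:

```python
def bed(n_fractions: int, dose_per_fraction: float, alpha_beta: float) -> float:
    """Biologically effective dose under the linear-quadratic model."""
    return n_fractions * dose_per_fraction * (1.0 + dose_per_fraction / alpha_beta)

# 10 Gy in 2 fractions, as used in the study.
# alpha/beta = 10 Gy is a conventional assumption for tumour response,
# alpha/beta = 3 Gy for late-reacting normal tissue.
print(f"Tumour BED:       {bed(2, 5.0, 10.0):.1f} Gy")   # 15.0 Gy
print(f"Late-effects BED: {bed(2, 5.0, 3.0):.1f} Gy")    # 26.7 Gy
```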
Abstract:
The aim of this review is to analyse critically the recent literature on the clinical pharmacokinetics and pharmacodynamics of tacrolimus in solid organ transplant recipients. Dosage and target concentration recommendations for tacrolimus vary from centre to centre, and large pharmacokinetic variability makes it difficult to predict what concentration will be achieved with a particular dose or dosage change. Therapeutic ranges have not been based on statistical approaches. The majority of pharmacokinetic studies have involved intense blood sampling in small homogeneous groups in the immediate post-transplant period. Most have used nonspecific immunoassays and provide little information on pharmacokinetic variability. Demographic investigations seeking correlations between pharmacokinetic parameters and patient factors have generally looked at one covariate at a time and have involved small patient numbers. Factors reported to influence the pharmacokinetics of tacrolimus include the patient group studied, hepatic dysfunction, hepatitis C status, time after transplantation, patient age, donor liver characteristics, recipient race, haematocrit and albumin concentrations, diurnal rhythm, food administration, corticosteroid dosage, diarrhoea and cytochrome P450 (CYP) isoenzyme and P-glycoprotein expression. Population analyses are adding to our understanding of the pharmacokinetics of tacrolimus, but such investigations are still in their infancy. A significant proportion of model variability remains unexplained. Population modelling and Bayesian forecasting may be improved if CYP isoenzymes and/or P-glycoprotein expression could be considered as covariates. Reports have been conflicting as to whether low tacrolimus trough concentrations are related to rejection. Several studies have demonstrated a correlation between high trough concentrations and toxicity, particularly nephrotoxicity. The best predictor of pharmacological effect may be drug concentrations in the transplanted organ itself. Researchers have started to question current reliance on trough measurement during therapeutic drug monitoring, with instances of toxicity and rejection occurring when trough concentrations are within 'acceptable' ranges. The correlation between blood concentration and drug exposure can be improved by use of non-trough timepoints. However, controversy exists as to whether this will provide any great benefit, given the added complexity in monitoring. Investigators are now attempting to quantify the pharmacological effects of tacrolimus on immune cells through assays that measure in vivo calcineurin inhibition and markers of immunosuppression such as cytokine concentration. To date, no studies have correlated pharmacodynamic marker assay results with immunosuppressive efficacy, as determined by allograft outcome, or investigated the relationship between calcineurin inhibition and drug adverse effects. Little is known about the magnitude of the pharmacodynamic variability of tacrolimus.
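The difficulty of predicting the concentration achieved by a given dose can be made concrete with even the simplest model. A sketch of a steady-state one-compartment oral model, showing how the trough swings as clearance varies; all parameter values are illustrative assumptions, not tacrolimus population estimates:

```python
import math

def steady_state_conc(t, dose, tau, f, ka, cl, v):
    """Concentration at time t after a dose at steady state,
    one-compartment model with first-order absorption."""
    ke = cl / v
    coef = f * dose * ka / (v * (ka - ke))
    return coef * (math.exp(-ke * t) / (1 - math.exp(-ke * tau))
                   - math.exp(-ka * t) / (1 - math.exp(-ka * tau)))

# Illustrative parameters only -- not tacrolimus population estimates.
dose, tau, f, ka = 5.0, 12.0, 0.25, 1.0   # mg, h, bioavailability, 1/h

# The same dose gives very different troughs as clearance varies across
# the sort of range attributed to CYP/P-glycoprotein expression.
for cl in (2.0, 4.0, 8.0):                 # L/h
    trough = steady_state_conc(tau, dose, tau, f, ka, cl, v=100.0)
    print(f"CL = {cl:.0f} L/h -> trough ~= {trough * 1000:.1f} ug/L")
```

A several-fold spread in clearance translates directly into a several-fold spread in trough concentration, which is the variability Bayesian forecasting tries to absorb.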
Abstract:
A model was developed in dogs to determine the impact of oral enrofloxacin administration on the indigenous coliform population in the gastrointestinal tract and subsequent disposition to colonization by a strain of multidrug-resistant Escherichia coli (MDREC). Dogs given a daily oral dose of 5 mg enrofloxacin kg(-1) for 21 consecutive days showed a significant decline in faecal coliforms to levels below detectable limits by 72 h of administration. Subsequently, faecal coliforms remained suppressed throughout the period of enrofloxacin dosing. Upon termination of antibiotic administration, the number of excreted faecal coliforms slowly returned over an 8-day period, to levels comparable to those seen prior to antibiotic treatment. Enrofloxacin-treated dogs were more effectively colonized by MDREC, evidenced by a significantly increased count of MDREC in the faeces (7.1 +/- 1.5 log(10) g(-1)) compared with non-antibiotic-treated dogs (5.2 +/- 1.2; P = 0.003). Furthermore, antibiotic treatment also sustained a significantly longer period of MDREC excretion in the faeces (26.8 +/- 10.5 days) compared with animals not treated with enrofloxacin (8.5 +/- 5.4 days; P = 0.0215). These results confirm the importance of sustained delivery of an antimicrobial agent to maintain and expand the colonization potential of drug-resistant bacteria in vivo, achieved in part by reducing the competing commensal coliforms in the gastrointestinal tract to below detectable levels in the faeces. Without in vivo antimicrobial selection pressure, commensal coliforms dominated the gastrointestinal tract at the expense of the MDREC population. Conceivably, the model developed could be used to test the efficacy of novel non-antibiotic strategies aimed at monitoring and controlling gastrointestinal colonization by multidrug-resistant members of the Enterobacteriaceae that cause nosocomial infections.
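The reported group comparison (7.1 +/- 1.5 vs 5.2 +/- 1.2 log(10) g(-1)) can be reproduced approximately from summary statistics alone with Welch's t-test. Group sizes are not stated in the abstract, so the n used below is an assumption:

```python
import math

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's t statistic and degrees of freedom from summary statistics."""
    se1_sq, se2_sq = sd1**2 / n1, sd2**2 / n2
    t = (mean1 - mean2) / math.sqrt(se1_sq + se2_sq)
    df = (se1_sq + se2_sq) ** 2 / (se1_sq**2 / (n1 - 1) + se2_sq**2 / (n2 - 1))
    return t, df

# Faecal MDREC counts (log10 g^-1) as reported; n = 6 per group is an
# assumption, since the abstract does not give group sizes.
t, df = welch_t(7.1, 1.5, 6, 5.2, 1.2, 6)
print(f"t = {t:.2f} on {df:.1f} df")
```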
Abstract:
This prospective study evaluated serum procalcitonin (PCT) and C-reactive protein (CRP) as markers for systemic inflammatory response syndrome (SIRS)/sepsis and mortality in patients with traumatic brain injury and subarachnoid haemorrhage. Sixty-two patients were followed for 7 days. Serum PCT and CRP were measured on days 0, 1, 4, 5, 6 and 7. Seventy-seven per cent of patients with traumatic brain injury and 83% with subarachnoid haemorrhage developed SIRS or sepsis (P = 0.75). Baseline PCT and CRP were elevated in 35% and 55% of patients respectively (P = 0.03). There was a statistically non-significant step-wise increase in serum PCT levels from no SIRS (0.4 +/- 0.6 ng/ml) to SIRS (3.05 +/- 9.3 ng/ml) to sepsis (5.5 +/- 12.5 ng/ml). A similar trend was noted in baseline PCT in patients with mild (0.06 +/- 0.9 ng/ml), moderate (0.8 +/- 0.7 ng/ml) and severe head injury (1.2 +/- 1.9 ng/ml). Such a gradation was not observed with serum CRP. There was a non-significant trend towards baseline PCT being a better marker of hospital mortality compared with baseline CRP (ROC-AUC 0.56 vs 0.31 respectively). This is the first prospective study to document the high incidence of SIRS in neurosurgical patients. In our study, serum PCT appeared to correlate with severity of traumatic brain injury and mortality. However, it could not reliably distinguish between SIRS and sepsis in this cohort. This is in part because baseline PCT elevation seemed to correlate with severity of injury. Only a small proportion of patients developed sepsis, thus necessitating a larger sample size to demonstrate the diagnostic usefulness of serum PCT as a marker of sepsis. Further clinical trials with larger sample sizes are required to confirm any potential role of PCT as a sepsis and outcome indicator in patients with head injuries or subarachnoid haemorrhage.
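The ROC-AUC figures quoted for baseline PCT and CRP reduce to a simple probability: that a randomly chosen non-survivor has a higher marker value than a randomly chosen survivor. A sketch on hypothetical data, since the study's individual patient values are not available in the abstract:

```python
def roc_auc(marker_values, outcomes):
    """ROC AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case has a higher marker value than a
    randomly chosen negative case (ties count half)."""
    positives = [m for m, y in zip(marker_values, outcomes) if y == 1]
    negatives = [m for m, y in zip(marker_values, outcomes) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in positives for n in negatives)
    return wins / (len(positives) * len(negatives))

# Hypothetical baseline PCT values (ng/ml) and hospital mortality (1 = died);
# these numbers are illustrative, not the study data.
pct  = [0.2, 0.4, 0.5, 1.1, 2.3, 0.3, 5.5, 0.8, 3.0, 0.6]
died = [0,   0,   0,   1,   0,   0,   1,   0,   1,   0]
print(f"AUC = {roc_auc(pct, died):.2f}")
```

On this reading, the reported CRP AUC of 0.31 means CRP discriminated worse than chance for mortality in this cohort.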
Abstract:
The manner in which elements of clinical history, physical examination and investigations influence subjectively assessed illness severity and outcome prediction is poorly understood. This study investigates the relationship between clinician-assessed and objectively assessed illness severity and the factors influencing clinicians' diagnostic confidence and illness severity rating for ventilated patients with suspected pneumonia in the intensive care unit (ICU). A prospective study of fourteen ICUs included all ventilated admissions with a clinical diagnosis of pneumonia. Data collection included pneumonia type - community-acquired (CAP), hospital-acquired (HAP) and ventilator-associated (VAP), clinician-determined illness severity (CDIS), diagnostic methods, clinical diagnostic confidence (CDC), microbiological isolates and antibiotic use. For 476 episodes of pneumonia (48% CAP, 24% HAP, 28% VAP), CDC was greatest for CAP (64% CAP, 50% HAP and 49% VAP, P < 0.01) or when pneumonia was considered life-threatening (84% high CDC, 13% medium CDC and 3% low CDC, P < 0.001). Life-threatening pneumonia was predicted by worsening gas exchange (OR 4.8, CI 95% 2.3-10.2, P < 0.001), clinical signs of consolidation (OR 2.0, CI 95% 1.2-3.2, P < 0.01) and the Sepsis-Related Organ Failure Assessment (SOFA) Score (OR 1.1, CI 95% 1.1-1.2, P < 0.001). Diagnostic confidence increased with CDIS (OR 16.3, CI 95% 8.4-31.4, P < 0.001), definite pathogen isolation (OR 3.3, CI 95% 2.0-5.6) and clinical signs of consolidation (OR 2.1, CI 95% 1.3-3.3, P = 0.001). Although the CDIS, SOFA Score and the Simplified Acute Physiologic Score (SAPS II) were all associated with mortality, the SAPS II Score was the best predictor of mortality (P = 0.02). Diagnostic confidence for pneumonia is moderate but increases with more classical presentations. A small set of clinical parameters influence subjective assessment. Objective assessment using SAPS II Scoring is a better predictor of mortality.
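The odds ratios reported here come from logistic regression, where OR = exp(coefficient). A minimal sketch that fits such a model by gradient descent on simulated data with effect sizes matching two of the reported ORs; the data and fitting code are illustrative, not the study's analysis:

```python
import math, random

random.seed(0)

def simulate(n=1000):
    """Simulate binary predictors with true ORs of 4.8 and 2.0."""
    rows, labels = [], []
    for _ in range(n):
        gas = random.random() < 0.3        # worsening gas exchange
        cons = random.random() < 0.4       # clinical consolidation
        logit = -2.0 + math.log(4.8) * gas + math.log(2.0) * cons
        p = 1 / (1 + math.exp(-logit))
        rows.append((1.0, float(gas), float(cons)))  # intercept + predictors
        labels.append(1 if random.random() < p else 0)
    return rows, labels

def fit(rows, labels, lr=1.0, epochs=1500):
    """Logistic regression via batch gradient descent."""
    w = [0.0, 0.0, 0.0]
    n = len(rows)
    for _ in range(epochs):
        grad = [0.0, 0.0, 0.0]
        for x, y in zip(rows, labels):
            p = 1 / (1 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
            for j in range(3):
                grad[j] += (p - y) * x[j]
        w = [wi - lr * g / n for wi, g in zip(w, grad)]
    return w

w = fit(*simulate())
# Exponentiating the fitted coefficients recovers the odds ratios
# (up to sampling noise).
print(f"OR gas exchange ~= {math.exp(w[1]):.1f}, "
      f"OR consolidation ~= {math.exp(w[2]):.1f}")
```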