863 results for MEDLINE
Abstract:
Background: Postoperative nausea and vomiting is a common and unpleasant phenomenon and current therapies are not always effective for all patients. Aromatherapy has been suggested as a possible addition to the available treatment strategies. Objectives: This review sought to establish what effect the use of aromatherapy has on the severity and duration of established postoperative nausea and vomiting and whether aromatherapy can be used with safety and clinical effectiveness comparable to standard pharmacological treatments. Search methods: We searched the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library 2011, Issue 3); MEDLINE; EMBASE; CINAHL; CAM on PubMed; Meditext; LILACS database; and ISI Web of Science as well as grey literature sources and the reference lists of retrieved articles. We conducted database searches up to August 2011. Selection criteria: We included all randomized controlled trials (RCTs) and controlled clinical trials (CCTs) where aromatherapy was used to treat postoperative nausea and vomiting. Interventions were all types of aromatherapy. Aromatherapy was defined as the inhalation of the vapours of any substance for the purposes of a therapeutic benefit. Primary outcomes were the severity and duration of postoperative nausea and vomiting. Secondary outcomes were adverse reactions, use of rescue anti-emetics and patient satisfaction with treatment. Data collection and analysis: Two review authors assessed risk of bias in the included studies and extracted data. As all outcomes analysed were dichotomous, we used a fixed-effects model and calculated relative risk (RR) with associated 95% confidence interval (95% CI). Results: The nine included studies comprised six RCTs and three CCTs with a total of 402 participants. The mean age and range data for all participants were not reported for all studies. The method of randomization in four of the six included RCTs was explicitly stated and adequate. Incomplete reporting of data affected the completeness of the analysis. Compared with placebo, isopropyl alcohol vapour inhalation was effective in reducing the proportion of participants requiring rescue anti-emetics (RR 0.30, 95% CI 0.09 to 1.00, P = 0.05). However, compared with standard anti-emetic treatment, isopropyl alcohol was not effective in reducing the proportion of participants requiring rescue anti-emetics (RR 0.66, 95% CI 0.39 to 1.13, P = 0.13) except when the data from a possibly confounded study were included (RR 0.66, 95% CI 0.45 to 0.98, P = 0.04). Where studies reported data on patient satisfaction with aromatherapy, there were no statistically significant differences between the groups (RR 1.12, 95% CI 0.62 to 2.03, P = 0.71). Authors' conclusions: Isopropyl alcohol was more effective than saline placebo for reducing postoperative nausea and vomiting but less effective than standard anti-emetic drugs. There is currently no reliable evidence for the use of peppermint oil.
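For readers unfamiliar with how such dichotomous results are expressed, the sketch below shows the standard relative-risk calculation with its log-normal 95% confidence interval. The counts used are hypothetical (the abstract reports only the pooled effect estimates, not raw 2x2 data), and the review itself pooled several studies with a fixed-effect model rather than analysing a single table.

```python
# Minimal sketch: relative risk (RR) and 95% CI for a dichotomous outcome,
# as used in reviews like the one above. The counts below are hypothetical;
# the review's published figures come from a fixed-effect pooled analysis.
import math

def relative_risk(events_tx, n_tx, events_ctrl, n_ctrl, z=1.96):
    """Crude RR and 95% CI from a single 2x2 table (log-normal approximation)."""
    rr = (events_tx / n_tx) / (events_ctrl / n_ctrl)
    se_log_rr = math.sqrt(1 / events_tx - 1 / n_tx + 1 / events_ctrl - 1 / n_ctrl)
    lower = math.exp(math.log(rr) - z * se_log_rr)
    upper = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lower, upper

# Hypothetical example: 8/100 treated vs 20/100 control participants
# required rescue anti-emetics.
rr, lo, hi = relative_risk(8, 100, 20, 100)
print(f"RR = {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")  # RR = 0.40, 95% CI 0.18 to 0.87
```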
Prevalence and trends of the diabetes epidemic in South Asia: a systematic review and meta-analysis
Abstract:
Background Diabetes mellitus has reached epidemic proportions worldwide. South Asians are known to have an increased predisposition for diabetes, which has become an important health concern in the region. We discuss the prevalence of pre-diabetes and diabetes in South Asia and explore the differential risk factors reported. Methods Prevalence data were obtained by searching the Medline® database with 'prediabetes' and 'diabetes mellitus' (MeSH major topic) and 'Epidemiology/EP' (MeSH subheading). Search limits were articles in English, published between 01/01/1980 and 31/12/2011, on human adults (≥19 years). The conjunction of the above results was narrowed down with country names. Results The most recent reported prevalence of pre-diabetes:diabetes in regional countries was: Bangladesh 4.7%:8.5% (2004–2005; rural), India 4.6%:12.5% (2007; rural), Maldives 3.0%:3.7% (2004; national), Nepal 19.5%:9.5% (2007; urban), Pakistan 3.0%:7.2% (2002; rural), Sri Lanka 11.5%:10.3% (2005–2006; national). Urban populations demonstrated a higher prevalence of diabetes. An increasing trend in the prevalence of diabetes was observed in urban/rural India and rural Sri Lanka. The diabetes epidemicity index decreased with the increasing prevalence of diabetes in the respective countries. A high epidemicity index was seen in Sri Lanka (2005/2006, 52.8%), while for other countries the epidemicity index was comparatively low (rural India 2007, 26.9%; urban India 2002/2005, 31.3%; urban Bangladesh, 33.1%). Family history, urban residency, age, higher BMI, sedentary lifestyle, hypertension and waist-hip ratio were associated with an increased risk of diabetes. Conclusion A significant epidemic of diabetes is present in the South Asian region, with a rapid increase in prevalence over the last two decades. Hence there is a need for urgent preventive and curative strategies.
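As an illustration of the kind of MeSH-restricted query described above, the sketch below runs a comparable search against PubMed through NCBI's E-utilities using Biopython's Entrez module. The query string is an approximation assembled from the reported strategy (standard PubMed field tags, the "Prediabetic State" MeSH heading, and the six countries named in the results), not the authors' exact search, and the email address is a placeholder.

```python
# Illustrative sketch only: a MeSH-restricted PubMed search resembling the
# strategy reported above, executed with Biopython's Entrez module.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # NCBI requires a contact address

query = (
    '("Diabetes Mellitus"[MeSH Major Topic] OR "Prediabetic State"[MeSH Major Topic]) '
    'AND "epidemiology"[Subheading] '
    'AND English[Language] AND "Adult"[MeSH Terms] '
    'AND ("1980/01/01"[Date - Publication] : "2011/12/31"[Date - Publication]) '
    'AND (Bangladesh OR India OR Maldives OR Nepal OR Pakistan OR "Sri Lanka")'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=200)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} records found; first IDs: {record['IdList'][:5]}")
```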
Abstract:
INTRODUCTION: The large increase in the number of athletes who apply to use inhaled beta agonists (IBAs) at the Olympic Games is a concern to the medical community. This review will examine the use of IBAs in the asthmatic athlete, the variability that exists between countries and sport, and outline a plan to justify the use of these medications. DATA SOURCES: Much of this article is a result of an International Olympic Committee (IOC) Medical Commission-sponsored meeting that took place in May 2001. Records of the use of IBAs at previous Olympics were reviewed. MEDLINE searches (PubMed interface) were performed using key words to locate published work relating to asthma, elite athletes, performance, treatment, and ergogenic aids. MAIN RESULTS: Since 1984 there have been significant increases in the use of IBAs at the Olympic Games as well as marked geographical differences in the percentage of athletes requesting the use of IBAs. There are large differences in the incidence of IBA use between sports with a trend towards increased use in endurance sports. There are no ergogenic effects of any IOC-approved IBA given in a therapeutic dose. CONCLUSIONS: In many cases, the prescription of IBAs to this population has been made on empirical grounds. Beginning with the 2002 Winter Games, athletes will be required to submit to the IOC Medical Commission clinical and laboratory evidence that justifies the use of this medication. The eucapnic voluntary hyperpnea test will be used to assess individuals who have not satisfied an independent medical panel of the need to use an IBA.
Abstract:
BACKGROUND: There is evidence that children's decisions to smoke are influenced by family and friends. OBJECTIVES: To assess the effectiveness of interventions to help family members to strengthen non-smoking attitudes and promote non-smoking by children and other family members. SEARCH STRATEGY: We searched 14 electronic bibliographic databases, including the Cochrane Tobacco Addiction Group specialized register, MEDLINE, EMBASE, PsycINFO and CINAHL. We also searched unpublished material, and the reference lists of key articles. We performed both free-text Internet searches and targeted searches of appropriate websites, and we hand-searched key journals not available electronically. We also consulted authors and experts in the field. The most recent search was performed in July 2006. SELECTION CRITERIA: Randomized controlled trials (RCTs) of interventions with children (aged 5-12) or adolescents (aged 13-18) and family members to deter the use of tobacco. The primary outcome was the effect of the intervention on the smoking status of children who reported no use of tobacco at baseline. Included trials had to report outcomes measured at least six months from the start of the intervention. DATA COLLECTION AND ANALYSIS: We reviewed all potentially relevant citations and retrieved the full text to determine whether the study was an RCT and matched our inclusion criteria. Two authors independently extracted study data and assessed them for methodological quality. The studies were too limited in number and quality to undertake a formal meta-analysis, and we present a narrative synthesis. MAIN RESULTS: We identified 19 RCTs of family interventions to prevent smoking. We identified five RCTs in Category 1 (minimal risk of bias on all counts); nine in Category 2 (a risk of bias in one or more areas); and five in Category 3 (risks of bias in design and execution such that reliable conclusions cannot be drawn from the study). Considering the fourteen Category 1 and 2 studies together: (1) four of the nine that tested a family intervention against a control group had significant positive effects, but one showed significant negative effects; (2) one of the five RCTs that tested a family intervention against a school intervention had significant positive effects; (3) none of the six that compared the incremental effects of a family plus a school programme to a school programme alone had significant positive effects; (4) the one RCT that tested a family tobacco intervention against a family non-tobacco safety intervention showed no effects; and (5) the one trial that used general risk reduction interventions found the group which received the parent and teen interventions had less smoking than the one that received only the teen intervention (there was no tobacco intervention but tobacco outcomes were measured). For the included trials, the amount of implementer training and the fidelity of implementation were related to positive outcomes, but the number of sessions was not. AUTHORS' CONCLUSIONS: Some well-executed RCTs show family interventions may prevent adolescent smoking, but RCTs which were less well executed had mostly neutral or negative results. There is thus a need for well-designed and executed RCTs in this area.
Abstract:
OBJECTIVES: To examine the effect of thermal agents on the range of movement (ROM) and mechanical properties in soft tissue and to discuss their clinical relevance. DATA SOURCES: Electronic databases (Cochrane Central Register of Controlled Trials, MEDLINE, and EMBASE) were searched from their earliest available record up to May 2011 using Medical Subject Headings and key words. We also undertook related-articles searches and read the reference lists of all retrieved articles. STUDY SELECTION: Studies involving human participants describing the effects of thermal interventions on ROM and/or mechanical properties in soft tissue. Two reviewers independently screened studies against eligibility criteria. DATA EXTRACTION: Data were extracted independently by 2 review authors using a customized form. Methodologic quality was also assessed by 2 authors independently, using the Cochrane risk of bias tool. DATA SYNTHESIS: Thirty-six studies, comprising a total of 1301 healthy participants, satisfied the inclusion criteria. There was a high risk of bias across all studies. Meta-analyses were not undertaken because of clinical heterogeneity; however, effect sizes were calculated. There were conflicting data on the effect of cold on joint ROM, accessory joint movement, and passive stiffness. There was limited evidence to determine whether acute cold applications enhance the effects of stretching, and further evidence is required. There was evidence that heat increases ROM, and a combination of heat and stretching is more effective than stretching alone. CONCLUSIONS: Heat is an effective adjunct to developmental and therapeutic stretching techniques and should be the treatment of choice for enhancing ROM in a clinical or sporting setting. The effects of heat or ice on other important mechanical properties (eg, passive stiffness) remain equivocal and should be the focus of future study.
Abstract:
This systematic mixed studies review aimed at synthesizing evidence from studies related to the influences on the work participation of people with refugee status (PWRS). The review focused on the role of proximal socio-structural barriers on work participation by PWRS while foregrounding related distal, intermediate, proximal, and meta-systemic influences. For the systematic search of the literature, we focused on databases that addressed work, well-being, and social policy in refugee populations, including Medline, CINAHL, PsycInfo, Web of Science, Scopus, and Sociological Abstracts. Of the 39 studies reviewed, 16 met the inclusion criteria and were retained for the final analysis. We performed a narrative synthesis of the evidence on barriers to work participation by PWRS, interlinking clusters of barriers according to their effects on work participation. Findings from the narrative synthesis suggest that proximal factors, those at the point of entry to the labor market, influence work participation more directly than distal or intermediate factors. Distal and intermediate factors achieve their effects on work participation by PWRS primarily through meta-systemic interlinkages, including host-country documentation and refugee administration provisions.
Abstract:
We reviewed the effect of behavioural telehealth interventions on glycaemic control and diabetes self-management in patients with type 2 diabetes. The databases CINAHL, Medline and PsycINFO were searched in August 2012. Journal articles were selected that had been published in English with a randomized controlled trial design using a usual care comparison group, and in which the primary intervention component was delivered by telehealth. Relevant outcome measures were glycaemic control and one or more of the following diabetes self-care areas: diet, physical activity, blood glucose self-monitoring (BGSM) or medication adherence. Interventions were excluded if they were primarily based on telemonitoring. The search retrieved 1027 articles, from which 49 were selected based on their title and abstract. Fourteen articles (reporting 13 studies) met the eligibility criteria for inclusion. Four studies reported significant improvements in glycaemic control. Five of eight studies on dietary adherence reported significant treatment effects, as did five of eight on physical activity, four of nine on blood glucose self-monitoring, and three of eight on medication adherence. Overall, behavioural telehealth interventions show promise in improving the diabetes self-care and glycaemic control of people with type 2 diabetes.
Abstract:
BACKGROUND: Studies have shown that nurse staffing levels, among many other factors in the hospital setting, contribute to adverse patient outcomes. Concerns about patient safety and quality of care have resulted in numerous studies being conducted to examine the relationship between nurse staffing levels and the incidence of adverse patient events in both general wards and intensive care units. AIM: The aim of this paper is to review literature published in the previous 10 years which examines the relationship between nurse staffing levels and the incidence of mortality and morbidity in adult intensive care unit patients. METHODS: A literature search from 2002 to 2011 using the MEDLINE, Cumulative Index to Nursing and Allied Health Literature (CINAHL), PsycINFO, and Australian digital thesis databases was undertaken. The keywords used were: intensive care; critical care; staffing; nurse staffing; understaffing; nurse-patient ratios; adverse outcomes; mortality; ventilator-associated pneumonia; ventilator-acquired pneumonia; infection; length of stay; pressure ulcer/injury; unplanned extubation; medication error; readmission; myocardial infarction; and renal failure. A total of 19 articles were included in the review. Outcomes of interest were patient mortality and morbidity, particularly infection and pressure ulcers. RESULTS: Most of the studies were observational in nature, with variables obtained retrospectively from large hospital databases. Nurse staffing measures and patient outcomes varied widely across the studies. While an overall statistical association between increased nurse staffing levels and decreased adverse patient outcomes was not found in this review, most studies concluded that a trend exists between increased nurse staffing levels and decreased adverse events. CONCLUSION: While an overall statistical association between increased nurse staffing levels and decreased adverse patient outcomes was not found in this review, most studies demonstrated a trend between increased nurse staffing levels and decreased adverse patient outcomes in the intensive care unit, which is consistent with previous literature. More robust research methodologies need to be tested to demonstrate this association more confidently and to reduce the influence of the many other confounders of patient outcomes, although this would be difficult to achieve in this field of research.
Abstract:
Background: Critically ill patients are at high risk for pressure ulcer (PrU) development due to their high acuity and the invasive nature of the multiple interventions and therapies they receive. With reported incidence rates of PrU development in the adult critical care population as high as 56%, the identification of patients at high risk of PrU development is essential. This paper will explore the association between PrU development and risk factors. It will also explore PrU development and the use of risk assessment scales for critically ill patients in adult intensive care units. Method: A literature search from 2000 to 2012 using the CINAHL, Cochrane Library, EBSCOHost, Medline (via EBSCOHost), PubMed, ProQuest and Google Scholar databases was conducted. Key words used were: pressure ulcer/s; pressure sore/s; decubitus ulcer/s; bed sore/s; critical care; intensive care; critical illness; prevalence; incidence; prevention; management; risk factor; risk assessment scale. Results: Nineteen articles were included in this review: eight studies addressing PrU risk factors, eight studies addressing risk assessment scales and three studies overlapping both. Results from the studies reviewed identified 28 intrinsic and extrinsic risk factors which may lead to PrU development. Development of a risk factor prediction model in this patient population, although beneficial, appears problematic due to many issues such as diverse diagnoses and subsequent patient needs. Additionally, several risk assessment instruments have been developed for early screening of patients at higher risk of developing PrU in the ICU. No existing risk assessment scale is valid for identifying high-risk critically ill patients, with the majority of scales potentially over-predicting patients at risk of PrU development. Conclusion: Research findings on the risk factors for pressure ulcer development are inconsistent. Additionally, there is no consistent or clear evidence demonstrating that any scale is better or more effective than another for identifying patients at risk of PrU development. Furthermore, robust research is needed to identify the risk factors and develop valid scales for measuring the risk of PrU development in the ICU.
Abstract:
Context: Various epidemiological studies have estimated that up to 70% of runners sustain an overuse running injury each year. Although few overuse running injuries have an established cause, more than 80% of running-related injuries occur at or below the knee, which suggests that some common mechanisms may be at work. The question then becomes, are there common mechanisms related to overuse running injuries? Evidence Acquisition: Research studies were identified via the following electronic databases: MEDLINE, EMBASE, PsycInfo, and CINAHL (1980–July 2008). Inclusion was based on evaluation of risk factors for overuse running injuries. Results: A majority of the risk factors that have been researched over the past few years can be generally categorized into 2 groups: atypical foot pronation mechanics and inadequate hip muscle stabilization. Conclusion: Based on the review of literature, there is no definitive link between atypical foot mechanics and running injury mechanisms. The lack of normative data and a definition of typical foot structure has hampered progress. In contrast, a large and growing body of literature suggests that weakness of hip-stabilizing muscles leads to atypical lower extremity mechanics and increased forces within the lower extremity while running.
Abstract:
Background: Despite technological advances, radiation dermatitis is still a prevalent and distressing symptom in patients with cancer undergoing radiotherapy. Systematic reviews (SRs) are regarded as level I evidence providing direction for clinical practice and guidelines. This overview aims to provide a critical appraisal of SRs published on interventions for the prevention/management of radiation dermatitis. Methodology: We searched the following electronic databases: MEDLINE, CINAHL, EMBASE, and the Cochrane Library (up to Feb 2012). We also hand-searched reference lists of potentially eligible articles and a number of key journals in the area. Two authors screened all potential articles and included eligible SRs. Two authors critically appraised and extracted key findings from the included reviews using the “A Measurement Tool to Assess Systematic Reviews” (AMSTAR) instrument. Results: Of 1837 potential titles, six SRs were included. A number of interventions have been reported to be potentially beneficial for managing radiation dermatitis. Interventions evaluated in these reviews included skin care advice, steroidal/non-steroidal topical agents, systemic therapies, modes of radiation delivery, and dressings. However, all the included SRs reported that there is insufficient evidence to support any single effective intervention. The methodological quality of the included studies varied, and methodological shortfalls in these reviews may bias the overall results or recommendations for clinical practice. Conclusions and implications: An up-to-date, high-quality SR on preventing/managing radiation dermatitis is needed to guide practice and direct future research. Clinicians and guideline developers are advised to critically evaluate the findings of SRs in their decision making.
Abstract:
Introduction: Although advances in treatment modalities have improved the survival of head and neck (H&N) cancer patients over recent years, survivors’ quality of life (QoL) could be impaired for a number of reasons. The investigation of QoL determinants can inform the design of supportive interventions for this population. Objectives: To examine the QoL of H&N cancer survivors at 1 year after treatment and to identify potential determinants affecting their QoL. Methods: A systematic search of the literature was conducted in December 2011 in five databases: PubMed, Medline, Scopus, ScienceDirect and CINAHL, using the combined search terms ‘head and neck cancer’, ‘quality of life’, ‘health-related quality of life’ and ‘systematic review’. The methodological quality of the selected studies was assessed by two reviewers using predefined criteria. The study characteristics and results were abstracted and summarized. Results: Thirty-seven studies met all inclusion criteria, with methodological quality ranging from moderate to high. The global QoL of H&N cancer survivors returned to baseline at 1 year after treatment. Significant improvement was seen in emotional functioning, while physical functioning, xerostomia, sticky/insufficient saliva, and fatigue were consistently worse at 12 months compared with baseline. Age, cancer site and stage, social support, smoking, and presence of a feeding tube were significant QoL determinants at 12 months. Conclusions: Although the global QoL of H&N cancer survivors recovers by 12 months after treatment, problems with physical functioning, fatigue, xerostomia and sticky saliva persist. Regular assessment should be carried out to monitor these problems. Further research is required to develop appropriate and effective interventions for this population.
Abstract:
Objectives: To identify and appraise the literature concerning nurse-administered procedural sedation and analgesia in the cardiac catheter laboratory (CCL). Design and data sources: An integrative review method was chosen for this study. MEDLINE and CINAHL databases as well as The Cochrane Database of Systematic Reviews and the Joanna Briggs Institute were searched. Nineteen research articles and three clinical guidelines were identified. Results: The authors of each study reported that nurse-administered sedation in the CCL is safe due to the low incidence of complications. However, a higher percentage of deeply sedated patients were reported to experience complications than moderately sedated patients. To confound this issue, one clinical guideline permits deep sedation without an anaesthetist present, while others recommend against it. All clinical guidelines recommend that nurses are educated about sedation concepts. Other findings focus on pain and discomfort and the cost savings of nurse-administered sedation, which are associated with forgoing anaesthetic services. Conclusions: Practice is varied due to limitations in the evidence and inconsistent clinical practice guidelines. Therefore, recommendations for research and practice have been made. Research topics include determining how and in which circumstances capnography can be used in the CCL, discerning the economic impact of sedation-related complications and developing a set of objectives for nursing education about sedation. For practice, if deep sedation is administered without an anaesthetist present, it is essential that nurses are adequately trained and have access to vital equipment such as capnography to monitor ventilation, because deeply sedated patients are more likely to experience complications related to sedation. These initiatives will go some way to ensuring that patients receiving nurse-administered procedural sedation and analgesia for a procedure in the cardiac catheter laboratory are cared for using consistent, safe and evidence-based practices.
Abstract:
BACKGROUND: US Centers for Disease Control guidelines recommend replacement of peripheral intravenous (IV) catheters no more frequently than every 72 to 96 hours. Routine replacement is thought to reduce the risk of phlebitis and bloodstream infection. Catheter insertion is an unpleasant experience for patients and replacement may be unnecessary if the catheter remains functional and there are no signs of inflammation. Costs associated with routine replacement may be considerable. This is an update of a review first published in 2010. OBJECTIVES: To assess the effects of removing peripheral IV catheters when clinically indicated compared with removing and re-siting the catheter routinely. SEARCH METHODS: For this update the Cochrane Peripheral Vascular Diseases (PVD) Group Trials Search Co-ordinator searched the PVD Specialised Register (December 2012) and CENTRAL (2012, Issue 11). We also searched MEDLINE (last searched October 2012) and clinical trials registries. SELECTION CRITERIA: Randomised controlled trials that compared routine removal of peripheral IV catheters with removal only when clinically indicated in hospitalised or community dwelling patients receiving continuous or intermittent infusions. DATA COLLECTION AND ANALYSIS: Two review authors independently assessed trial quality and extracted data. MAIN RESULTS: Seven trials with a total of 4895 patients were included in the review. Catheter-related bloodstream infection (CRBSI) was assessed in five trials (4806 patients). There was no significant between group difference in the CRBSI rate (clinically-indicated 1/2365; routine change 2/2441). The risk ratio (RR) was 0.61 but the confidence interval (CI) was wide, creating uncertainty around the estimate (95% CI 0.08 to 4.68; P = 0.64). No difference in phlebitis rates was found whether catheters were changed according to clinical indications or routinely (clinically-indicated 186/2365; 3-day change 166/2441; RR 1.14, 95% CI 0.93 to 1.39). This result was unaffected by whether infusion through the catheter was continuous or intermittent. We also analysed the data by number of device days and again no differences between groups were observed (RR 1.03, 95% CI 0.84 to 1.27; P = 0.75). One trial assessed all-cause bloodstream infection. There was no difference in this outcome between the two groups (clinically-indicated 4/1593 (0.02%); routine change 9/1690 (0.05%); P = 0.21). Cannulation costs were lower by approximately AUD 7.00 in the clinically-indicated group (mean difference (MD) -6.96, 95% CI -9.05 to -4.86; P ≤ 0.00001). AUTHORS' CONCLUSIONS: The review found no evidence to support changing catheters every 72 to 96 hours. Consequently, healthcare organisations may consider changing to a policy whereby catheters are changed only if clinically indicated. This would provide significant cost savings and would spare patients the unnecessary pain of routine re-sites in the absence of clinical indications. To minimise peripheral catheter-related complications, the insertion site should be inspected at each shift change and the catheter removed if signs of inflammation, infiltration, or blockage are present.
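To make the quoted figures concrete, the sketch below recomputes crude risk ratios and 95% confidence intervals from the aggregate event counts reported above. Because the review pools effects per trial (the RR of 0.61 and its CI come from a meta-analysis of five trials), the crude single-table values differ somewhat; the sketch only illustrates the underlying arithmetic.

```python
# Crude risk ratios from the aggregate counts quoted in the abstract.
# These are NOT the review's pooled estimates (which are combined per trial,
# e.g. with a Mantel-Haenszel method), only an illustration of the arithmetic.
import math

def crude_rr(events_a, n_a, events_b, n_b, z=1.96):
    """RR and 95% CI from a single 2x2 table (log-normal approximation)."""
    rr = (events_a / n_a) / (events_b / n_b)
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    return rr, math.exp(math.log(rr) - z * se), math.exp(math.log(rr) + z * se)

outcomes = {
    "CRBSI": (1, 2365, 2, 2441),        # clinically indicated vs routine change
    "Phlebitis": (186, 2365, 166, 2441),
}
for label, counts in outcomes.items():
    rr, lo, hi = crude_rr(*counts)
    print(f"{label}: crude RR {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```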
Abstract:
Background Exploring self-management in End Stage Renal Disease (ESRD) is extremely important for patients as they encounter several challenges including ongoing symptoms, complex treatments and restrictions, uncertainty about life and a dependency on technology, all of which impact upon their autonomy, particularly after the commencement of haemodialysis. Objective To summarise the effects of nursing interventions which affect self-management of haemodialysis for patients with ESRD. Search strategy Search terms were chosen after reviewing text words and MeSH terms in relevant articles and databases. An extensive search of the literature from 1966 to June 2009 was conducted across a range of health databases including the Cochrane Central Register of Controlled Trials, MEDLINE, EMBASE, CINAHL, PsycINFO and Web of Science. Further studies were identified from the reference lists of all retrieved studies. Selection criteria We considered randomised controlled trials that compared interventions to improve self-management of haemodialysis in patients with ESRD. In the absence of RCTs, comparative studies without randomisation as well as before-and-after studies were considered for inclusion. Methodological quality Study reports selected for retrieval were assessed by two independent reviewers for methodological quality prior to inclusion in the review, using the standardised critical appraisal instruments of the Joanna Briggs Institute System for the Unified Management, Assessment and Review of Information (SUMARI) package. Data collection and analysis Data were extracted independently by pairs of review authors using the JBI data extraction tool for evidence of effectiveness. The evidence was reported in narrative summaries due to the heterogeneity of the interventions across studies. Results and conclusions Five randomised controlled trials were included in the review. Overall, the evidence found that psychosocial and educational interventions influenced self-management of haemodialysis in this patient population.