729 results for ARDS ICU


Relevance: 10.00%

Publisher:

Abstract:

OBJECTIVE: To explore how registered nurses (RNs) in the general ward perceive discharge processes and practices for patients recently discharged from the intensive care unit (ICU). BACKGROUND: Patients discharged from the ICU environment often require complicated and multifaceted care. The ward-based RN is at the forefront of the care of this fragile patient population, yet their views and perceptions have seldom been explored. DESIGN: A qualitative grounded theory design was used to guide focus group interviews with the RN participants. METHODS: Five semi-structured focus group interviews, including 27 RN participants, were conducted in an Australian metropolitan tertiary referral hospital in 2011. Data analysis of transcripts, field notes and memos used concurrent data generation, constant comparative analysis and theoretical sampling. RESULTS: Analysis yielded a core category of 'two worlds', stressing the disconnectedness between the ICU and the ward setting. This category was divided into the sub-categories of 'communication disconnect' and 'remember the family', within which the properties of 'what we say', 'what we write', 'transfer' and 'information needs' were developed. CONCLUSION: The discharge process for ICU patients is complicated and largely underappreciated. There are fundamental differences between the two areas in the prioritisation and care of these patients, and the practice requirements of ward-based RNs are poorly understood. The findings of this research may be used to facilitate interdepartmental communication and advance practice development.

Relevance: 10.00%

Publisher:

Abstract:

Introduction: In this study, we report on initial efforts to discover putative biomarkers for the differential diagnosis of systemic inflammatory response syndrome (SIRS) versus sepsis, and of the different stages of sepsis. In addition, we investigated whether there are proteins that can discriminate between patients who survived sepsis and those who did not. Materials and Methods: Our study group consisted of 16 patients, of whom 6 died and 10 survived. We measured 28 plasma proteins daily throughout each patient's ICU stay. Results: We observed that metalloproteinases and sE-selectin play a role in the distinction between SIRS and sepsis, and that IL-1, IP-10, sTNF-R2 and sFas appear to be indicative of the progression from sepsis to septic shock. A combined measurement of MMP-3, -10, IL-1, IP-10, sIL-2R, sFas, sTNF-R1, sRAGE, GM-CSF, IL-1 and Eotaxin allows for a good separation of patients who survived from those who died (mortality prediction with a sensitivity of 79% and specificity of 86%). Correlation analysis suggests a novel interaction between IL-1a and IP-10. Conclusion: The marker panel is ready to be verified in a validation study with or without therapeutic intervention.
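The reported mortality-prediction performance can be read off a standard confusion matrix. A minimal sketch, using hypothetical counts (not taken from the study) chosen only to illustrate how sensitivity and specificity values of roughly 79% and 86% are defined:

```python
def sensitivity(tp, fn):
    # True-positive rate: non-survivors correctly flagged by the panel.
    return tp / (tp + fn)

def specificity(tn, fp):
    # True-negative rate: survivors correctly cleared by the panel.
    return tn / (tn + fp)

# Hypothetical counts, NOT from the paper, chosen only to show how
# values near 79% sensitivity / 86% specificity arise.
print(round(sensitivity(tp=11, fn=3), 2))  # 0.79
print(round(specificity(tn=12, fp=2), 2))  # 0.86
```

With only 16 patients, estimates like these carry wide uncertainty, which is why the authors defer to a validation study.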

Relevance: 10.00%

Publisher:

Abstract:

Diarrhoea is a common complication observed in critically ill patients. Relationships between diarrhoea, enteral nutrition and the aerobic intestinal microflora have previously been examined only in isolation in this patient cohort. This research used a two-study, observational design to examine these associations. Higher diarrhoea incidence rates were observed when patients received enteral tube feeding, had abnormal serum blood results, received multiple medications and had aerobic microflora dysbiosis. Further, significant changes in the aerobic intestinal microflora were observed over time in patients who experienced diarrhoea. These results establish a platform for further work to improve the intestinal health of critically ill patients.

Relevance: 10.00%

Publisher:

Abstract:

Aim The aim of this reflective account is to provide a view of ICU relatives' experiences of supporting and being supported in the intensive care unit (ICU). Background Understanding relatives' experiences of the ICU is important, especially because recent work has identified the potential for this group to develop post-traumatic stress disorder, a condition normally equated with the ICU survivor. Design A thematic analysis was used to identify emerging themes significant in an ICU nursing context. Setting The incident took place in two 8-bedded ICUs (one private and one National Health Service) in October. Results Two emergent themes were identified from the reflective story: fear of the technological environment, and feeling hopeless and helpless. Conclusion The use of relatives' stories as an insight into the lived experiences of ICU relatives may give a deeper understanding of their life-world. The loneliness, anguish and pain of the ICU relative extend beyond the walls of the ICU, and this is often neglected as the focus of the ICU team is the patient. Relevance to clinical practice: Strategies to support relatives might include relative diaries, used concurrently with patient diaries, to support this group's recovery or, at the very least, to help them gain a sense of understanding of their ICU experience. Relative follow-up clinics designed specifically to meet their needs would allow support and advice to be given by the ICU team, in addition to timely and appropriate referrals to counselling services and, where appropriate, the involvement of spiritual leaders.

Relevance: 10.00%

Publisher:

Abstract:

Aim: The aim of this survey was to assess registered nurses' perceptions of alarm setting and management in an Australian regional critical care unit. Background: The setting and management of alarms within the critical care environment is one of the key responsibilities of the nurse in this area. However, with up to 99% of alarms potentially being false positives, it is easy for the nurse to become desensitised or fatigued by incessant alarms; in some cases up to 400 per patient per day. Inadvertently ignoring, silencing or disabling alarms can have deleterious implications for the patient and nurse. Method: A total population sample of 48 nursing staff from a 13-bedded ICU/HDU/CCU within regional Australia was asked to participate. A 10-item open-ended and multiple-choice questionnaire was distributed to determine their perceptions and attitudes towards alarm setting and management within this clinical area. Results: Two key themes were identified from the open-ended questions: attitudes towards inappropriate alarm settings, and annoyance at delayed responses to alarms. Most respondents (93%) agreed that alarm fatigue can result in alarm desensitisation and the disabling of alarms, whilst 81% identified false-positive alarms and inappropriately set alarms as the key contributing factors.

Relevance: 10.00%

Publisher:

Abstract:

There is a wide range of potential study designs for intervention studies to decrease nosocomial infections in hospitals. The analysis is complex due to competing events, clustering, multiple timescales and time-dependent period and intervention variables. This review considers the popular pre-post quasi-experimental design and compares it with randomized designs. Randomization can be done in several ways: randomization of the cluster [intensive care unit (ICU) or hospital] in a parallel design; randomization of the sequence in a cross-over design; and randomization of the time of intervention in a stepped-wedge design. We introduce each design in the context of nosocomial infections and discuss the designs with respect to the following key points: bias, control for nonintervention factors, and generalizability. Statistical issues are discussed. A pre-post-intervention design is often the only choice that will be informative for a retrospective analysis of an outbreak setting. It can be seen as a pilot study with further, more rigorous designs needed to establish causality. To yield internally valid results, randomization is needed. Generally, the first choice in terms of the internal validity should be a parallel cluster randomized trial. However, generalizability might be stronger in a stepped-wedge design because a wider range of ICU clinicians may be convinced to participate, especially if there are pilot studies with promising results. For analysis, the use of extended competing risk models is recommended.
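The stepped-wedge design described above can be sketched in code: every cluster begins in the control condition and crosses over to the intervention at a randomized step, so that all clusters are exposed by the final period. A minimal illustration (ICU names are hypothetical, and the one-cluster-per-step layout is an assumption, not the only valid stepped-wedge variant):

```python
import random

def stepped_wedge(clusters, periods, seed=0):
    """Assign each cluster a randomized crossover step; 0 = control,
    1 = intervention. All clusters start in control and all are exposed
    by the final period (assumes len(clusters) == periods - 1)."""
    order = list(clusters)
    random.Random(seed).shuffle(order)             # randomize crossover times
    step = {c: i + 1 for i, c in enumerate(order)}  # period at which c crosses
    return {c: [int(t >= step[c]) for t in range(periods)] for c in clusters}

# Illustrative only: four ICUs (hypothetical names) over five periods.
for icu, row in sorted(stepped_wedge(["ICU-A", "ICU-B", "ICU-C", "ICU-D"], 5).items()):
    print(icu, row)
```

The staggered rollout is what lets the analysis separate the intervention effect from secular time trends, one of the "non-intervention factors" the review highlights.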

Relevance: 10.00%

Publisher:

Abstract:

Introduction Risk factor analyses for nosocomial infections (NIs) are complex. First, due to competing events for NI, the association between risk factors of NI as measured using hazard rates may not coincide with the association using cumulative probability (risk). Second, patients from the same intensive care unit (ICU) who share the same environmental exposure are likely to be more similar with regard to risk factors predisposing to a NI than patients from different ICUs. We aimed to develop an analytical approach to account for both features and to use it to evaluate associations between patient- and ICU-level characteristics with both rates of NI and competing risks and with the cumulative probability of infection. Methods We considered a multicenter database of 159 intensive care units containing 109,216 admissions (813,739 admission-days) from the Spanish HELICS-ENVIN ICU network. We analyzed the data using two models: an etiologic model (rate based) and a predictive model (risk based). In both models, random effects (shared frailties) were introduced to assess heterogeneity. Death and discharge without NI are treated as competing events for NI. Results There was a large heterogeneity across ICUs in NI hazard rates, which remained after accounting for multilevel risk factors, meaning that there are remaining unobserved ICU-specific factors that influence NI occurrence. Heterogeneity across ICUs in terms of cumulative probability of NI was even more pronounced. Several risk factors had markedly different associations in the rate-based and risk-based models. For some, the associations differed in magnitude. For example, high Acute Physiology and Chronic Health Evaluation II (APACHE II) scores were associated with modest increases in the rate of nosocomial bacteremia, but large increases in the risk. 
Others differed in sign: for example, a respiratory versus cardiovascular diagnostic category was associated with a reduced rate of nosocomial bacteremia, but an increased risk. Conclusions A combination of competing risks and multilevel models is required to understand direct and indirect risk factors for NI and to distinguish patient-level from ICU-level factors.
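The divergence between rate-based and risk-based associations has a simple mechanical core: a covariate can leave the NI hazard untouched yet change the cumulative probability of NI by altering the competing hazard of death or discharge. A minimal discrete-time sketch (the hazard values are illustrative, not from the HELICS-ENVIN data):

```python
def cumulative_incidence(h_ni, h_comp, days):
    """Discrete-time cumulative probability of NI in the presence of a
    competing event (death/discharge without NI). h_ni and h_comp are
    constant daily hazards; the two events are mutually exclusive."""
    at_risk, cuminc = 1.0, 0.0
    for _ in range(days):
        cuminc += at_risk * h_ni        # patient still at risk, NI occurs
        at_risk *= 1 - h_ni - h_comp    # either event removes the patient
    return cuminc

# Identical NI hazard, different competing hazard: the NI *rate* is the
# same in both scenarios, but the NI *risk* differs sharply.
print(round(cumulative_incidence(h_ni=0.01, h_comp=0.05, days=60), 3))
print(round(cumulative_incidence(h_ni=0.01, h_comp=0.20, days=60), 3))
```

This is why a factor such as a high APACHE II score can show a modest rate association but a large risk association: it also shapes how long patients remain in the risk set.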

Relevance: 10.00%

Publisher:

Abstract:

Purpose Education reform aimed at achieving improved student learning is a demanding challenge for leaders and managers at all levels of education across the globe. In 2010, the position of Assistant Regional Director, School Performance (ARD-SP), was established to positively impact upon student learning across public schools in Queensland, Australia. This study explores perceptions of the role and the leadership understandings of ARDs-SP in Queensland, in order to understand more fully the tensions and opportunities they face within this relatively newly created position. Design/methodology/approach This qualitative study is based on interviews with 18 ARDs-SP and two of their supervisors to gain a better understanding of the nature of the role as it relates to leadership and management in the Queensland context. Findings Interview data revealed three key themes pertaining to the nature of the role: performance, supervision, and professional challenges. A key finding was that the notion of supervision was experienced as problematic for ARDs-SP. Research limitations/implications The study's limitations include a sample focused on ARDs-SP within one Australian state and one schooling system (i.e. public education), and interviews as the primary data collection source. Originality/value Although there have been studies of supervisors of principals (referred to as superintendents or directors) in other countries and other systems, this study is the first to explore the tensions and opportunities faced by these executive leaders in Queensland.

Relevance: 10.00%

Publisher:

Abstract:

The purpose of this study was to identify pressure ulcer (PU) incidence and the risk factors associated with PU development in patients in two adult intensive care units (ICUs) in Saudi Arabia. A prospective cohort study design was used. A total of 84 participants were screened every second day until discharge or death over a consecutive 30-day period; 33 participants developed new PUs, giving a cumulative hospital-acquired PU incidence of 39.3% (33/84). The incidence of medical device-related PUs was 8.3% (7/84). Age, length of stay in the ICU, history of cardiovascular disease and kidney disease, infrequent repositioning, time of operation, emergency admission, mechanical ventilation and lower Braden Scale scores independently predicted the development of a PU. According to binary logistic regression analyses, age, longer stay in the ICU and infrequent repositioning were significant predictors of all stages of PUs, while length of stay in the ICU and infrequent repositioning were associated with the development of stage II-IV PUs. In conclusion, the PU incidence rate was higher than that reported in other international studies. This indicates that urgent attention to PU prevention strategies is required in this setting.
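The headline figures follow directly from the definition of cumulative incidence (new cases divided by the cohort at risk over the observation period). A trivial sketch reproducing the reported percentages:

```python
def cumulative_incidence_pct(new_cases, cohort):
    # New pressure-ulcer cases as a percentage of patients at risk.
    return round(100 * new_cases / cohort, 1)

# Figures as reported in the abstract above.
print(cumulative_incidence_pct(33, 84))  # 39.3: all hospital-acquired PUs
print(cumulative_incidence_pct(7, 84))   # 8.3: medical device-related PUs
```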

Relevance: 10.00%

Publisher:

Abstract:

Purpose This study tested the effectiveness of a pressure ulcer (PU) prevention bundle in reducing the incidence of PUs in critically ill patients in two Saudi intensive care units (ICUs). Design A two-arm cluster randomized controlled trial. Methods Participants in the intervention group received the PU prevention bundle, while the control group received standard skin care as per the local ICU policies. Data collected included demographic variables (age, diagnosis, comorbidities, admission trajectory, length of stay) and clinical variables (Braden Scale score, severity of organ function score, mechanical ventilation, PU presence, and staging). All patients were followed every two days from admission through to discharge, death, or up to a maximum of 28 days. Data were analyzed with descriptive correlation statistics, Kaplan-Meier survival analysis, and Poisson regression. Findings The total number of participants recruited was 140: 70 control participants (with a total of 728 days of observation) and 70 intervention participants (784 days of observation). PU cumulative incidence was significantly lower in the intervention group (7.14%) compared to the control group (32.86%). Poisson regression revealed the likelihood of PU development was 70% lower in the intervention group. The intervention group had significantly less Stage I (p = 0.002) and Stage II PU development (p = 0.026). Conclusions Significant improvements were observed in PU-related outcomes with the implementation of the PU prevention bundle in the ICU; PU incidence, severity, and total number of PUs per patient were all reduced. Clinical Relevance Utilizing a bundle approach and standardized nursing language through skin assessment, and translating that knowledge into practice, has the potential to impact positively on the quality of care and patient outcomes.
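The arm-level case counts are not stated directly, but they can be inferred from the reported percentages (an assumption: 7.14% of 70 gives 5 intervention cases, 32.86% of 70 gives 23 control cases). A short sketch computing crude cumulative incidence and person-time rates; note that the 70% reduction quoted above is the Poisson regression estimate, so the crude ratio need not match it:

```python
# Counts inferred from the reported percentages (an assumption, not
# stated in the abstract): 5/70 intervention cases, 23/70 control cases.
cases = {"intervention": 5, "control": 23}
patients = 70
obs_days = {"intervention": 784, "control": 728}

for arm in ("intervention", "control"):
    cum_inc = 100 * cases[arm] / patients        # % of patients affected
    rate = 1000 * cases[arm] / obs_days[arm]     # cases per 1000 obs-days
    print(f"{arm}: {cum_inc:.2f}% cumulative incidence, "
          f"{rate:.1f} PUs/1000 observation-days")
```

Reporting both measures is useful because the arms contributed different amounts of observation time (784 vs 728 days).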

Relevance: 10.00%

Publisher:

Abstract:

Purpose To test the impact of an interventional patient skin integrity bundle, the InSPiRE protocol, on pressure injuries (PrIs) in critically ill patients in an Australian adult intensive care unit (ICU). Methods A before-and-after design was used, in which the group of patients receiving the intervention (InSPiRE protocol) was compared with a similar control group who received standard care. Data collected included demographic and clinical variables, skin assessment, PrI presence and stage, and Sequential Organ Failure Assessment (SOFA) score. Results Overall, 207 patients were enrolled: 105 in the intervention group and 102 in the control group. Most patients were men, with a mean age of 55 years. The groups were similar on major demographic variables (age, SOFA scores, ICU length of stay). Cumulative incidence of skin pressure injuries was significantly lower in the intervention group (18%) compared with the control group (30.4%) (χ²=4.271, df=1, p=0.039), as was that of mucosal injuries (t=3.27, p<0.001). Significantly fewer PrIs developed over time in the intervention group (log-rank=11.842, df=1, p<0.001), and intervention patients developed fewer skin injuries per patient (>3 PrIs/patient: 1/105) compared with the control group (>3 PrIs/patient: 10/102) (p=0.018). Conclusion The intervention group, receiving the InSPiRE protocol, had a lower PrI cumulative incidence and a reduced number and severity of PrIs developing over time. Systematic and ongoing assessment of the patient's skin and PrI risk, together with implementation of tailored prevention measures, are central to preventing PrIs.
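The chi-square statistic quoted for skin injuries can be reproduced from counts inferred from the reported percentages (an assumption: roughly 19/105 intervention cases, about 18%, versus 31/102 control cases, about 30.4%). A minimal sketch of the Pearson test for a 2x2 table, without continuity correction:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (1 df, no continuity correction) for a
    2x2 table laid out as [[a, b], [c, d]] (rows: group; cols: PrI yes/no)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Counts inferred from the reported percentages (an assumption):
# intervention 19/105 skin-injury cases, control 31/102.
print(round(chi2_2x2(19, 86, 31, 71), 3))  # ~4.271, matching the abstract
```

With 1 df, a statistic of 4.271 corresponds to p just under 0.05, consistent with the reported p=0.039.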

Relevance: 10.00%

Publisher:

Abstract:

Background It is often believed that ensuring the ongoing completion of competency documents and life-long learning in nursing practice guarantees quality patient care. This is probably true in most cases, where it provides reassurance that the nursing team is maintaining a safe “generalised” level of practice. However, competency does not always promise quality performance: a number of studies have reported differences between what practitioners know and what they actually do, despite their being deemed competent. Aim The aim of this study was to assess whether our current competency documentation is fit for purpose and to ascertain whether performance assessment needs to be a key component in determining competence. Method Fifteen nurses within a general ICU who had been on the unit <4 years agreed to participate in this project. Using participant observation and assessing performance against key indicators of the Benner Novice to Expert model, the participants were supported and assessed over the course of a ‘normal’ nursing shift. Results The results were surprising, both positively and negatively. First, the nurses felt more empowered in their clinical decision-making skills; second, the process identified individual learning needs and milestones in educational development. Some key challenges were also identified: 5 nurses overestimated their level of competence, practice was still very much focused on task and skill acquisition and, surprisingly, some nurses still felt dominated by the other health professionals within the unit. Conclusion We found that the capacity and capabilities of our nursing workforce need continual ongoing support, especially if we want to move our staff from capable task-doers to competent performers. Using the key novice-to-expert indicators identified the way forward for how we assess performance and competence in practice, particularly where promotion to higher grades is based on existing documentation.

Relevance: 10.00%

Publisher:

Abstract:

Aims & Objectives - to identify and diagnose the current problems associated with patient care in the nursing management of patients with Sengstaken-Blakemore tubes in situ; - to identify the nursing practice currently in place within the ICU and the hospital; - to identify the method by which the assessment and provision of nursing care is delivered in the ICU.

Relevance: 10.00%

Publisher:

Abstract:

Background People admitted to intensive care units and those with chronic health care problems often require long-term vascular access. Central venous access devices (CVADs) are used for administering intravenous medications and blood sampling. CVADs are covered with a dressing and secured with an adhesive or adhesive tape to protect them from infection and reduce movement. Dressings are changed when they become soiled with blood or start to come away from the skin. Repeated removal and application of dressings can cause damage to the skin. The skin is an important barrier that protects the body against infection. Less frequent dressing changes may reduce skin damage, but it is unclear whether this practice affects the frequency of catheter-related infections. Objectives To assess the effect of the frequency of CVAD dressing changes on the incidence of catheter-related infections and other outcomes including pain and skin damage. Search methods In June 2015 we searched: The Cochrane Wounds Specialised Register; The Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library); Ovid MEDLINE; Ovid MEDLINE (In-Process & Other Non-Indexed Citations); Ovid EMBASE and EBSCO CINAHL. We also searched clinical trials registries for registered trials. There were no restrictions with respect to language, date of publication or study setting. Selection criteria All randomised controlled trials (RCTs) evaluating the effect of the frequency of CVAD dressing changes on the incidence of catheter-related infections in all patients in any healthcare setting. Data collection and analysis We used standard Cochrane review methodology. Two review authors independently assessed studies for inclusion, and performed risk of bias assessment and data extraction. We undertook meta-analysis where appropriate; otherwise, where data were heterogeneous, we synthesised them descriptively.
Main results We included five RCTs (2277 participants) that compared different frequencies of CVAD dressing changes. The studies were all conducted in Europe and published between 1995 and 2009. Participants were recruited from the intensive care and cancer care departments of one children's and four adult hospitals. The studies used a variety of transparent dressings and compared a longer interval between dressing changes (5 to 15 days; intervention) with a shorter interval between changes (2 to 5 days; control). In each study participants were followed up until the CVAD was removed or until discharge from ICU or hospital.

- Confirmed catheter-related bloodstream infection (CRBSI): One trial randomised 995 people receiving central venous catheters to a longer or shorter interval between dressing changes and measured CRBSI. It is unclear whether there is a difference in the risk of CRBSI between people having long or short intervals between dressing changes (RR 1.42, 95% confidence interval (CI) 0.40 to 4.98) (low-quality evidence).

- Suspected catheter-related bloodstream infection: Two trials randomised a total of 151 participants to longer or shorter dressing intervals and measured suspected CRBSI. It is unclear whether there is a difference in the risk of suspected CRBSI between people having long or short intervals between dressing changes (RR 0.70, 95% CI 0.23 to 2.10) (low-quality evidence).

- All-cause mortality: Three trials randomised a total of 896 participants to longer or shorter dressing intervals and measured all-cause mortality. It is unclear whether there is a difference in the risk of death from any cause between people having long or short intervals between dressing changes (RR 1.06, 95% CI 0.90 to 1.25) (low-quality evidence).

- Catheter-site infection: Two trials randomised a total of 371 participants to longer or shorter dressing intervals and measured catheter-site infection. It is unclear whether there is a difference in the risk of catheter-site infection between people having long or short intervals between dressing changes (RR 1.07, 95% CI 0.71 to 1.63) (low-quality evidence).

- Skin damage: One small trial (112 children) and three trials (1475 adults) measured skin damage. There was very low-quality evidence for the effect of long intervals between dressing changes on skin damage compared with short intervals (children: RR of scoring ≥ 2 on the skin damage scale 0.33, 95% CI 0.16 to 0.68; data for adults not pooled).

- Pain: Two studies involving 193 participants measured pain. It is unclear if there is a difference between long- and short-interval dressing changes in pain during dressing removal (RR 0.80, 95% CI 0.46 to 1.38) (low-quality evidence).

Authors' conclusions The best available evidence is currently inconclusive regarding whether longer intervals between CVAD dressing changes are associated with more or less catheter-related infection, mortality or pain than shorter intervals.
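Each of the risk ratios above is read against its confidence interval: when the interval spans 1, as in nearly every comparison here, the evidence is inconclusive. A minimal sketch of how such an RR and its Wald 95% CI are computed from 2x2 counts (the counts below are hypothetical, not taken from the included trials):

```python
import math

def risk_ratio_ci(e1, n1, e0, n0, z=1.96):
    """Risk ratio with a Wald 95% CI on the log scale: e events out of
    n participants in the intervention (1) and control (0) arms."""
    rr = (e1 / n1) / (e0 / n0)
    se = math.sqrt(1 / e1 - 1 / n1 + 1 / e0 - 1 / n0)  # SE of log(RR)
    return rr, math.exp(math.log(rr) - z * se), math.exp(math.log(rr) + z * se)

# Hypothetical counts (not from the included trials): rare events in two
# arms of roughly 500 give a wide interval spanning 1, i.e. inconclusive.
rr, lo, hi = risk_ratio_ci(e1=5, n1=500, e0=4, n0=495)
print(f"RR {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

The width of such intervals with few events is exactly why the review grades the CRBSI comparison (RR 1.42, 95% CI 0.40 to 4.98) as low-quality evidence.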

Relevance: 10.00%

Publisher:

Abstract:

Background: Patients may need massive volume-replacement therapy after cardiac surgery because of large perioperative fluid shifts and the use of cardiopulmonary bypass. Hemodynamic stability is better maintained with colloids than with crystalloids, but colloids have more adverse effects, such as coagulation disturbances and impairment of renal function. The present study examined the effects of modern hydroxyethyl starch (HES) and gelatin solutions on blood coagulation and hemodynamics. The mechanism by which colloids disturb blood coagulation was investigated by thromboelastometry (TEM) after cardiac surgery and in vitro using experimental hemodilution. Materials and methods: Ninety patients scheduled for elective primary cardiac surgery (Studies I, II, IV, V) and twelve healthy volunteers (Study III) were included in this study. After admission to the cardiac surgical intensive care unit (ICU), patients were randomized to receive different doses of HES 130/0.4, HES 200/0.5, or 4% albumin solutions. Ringer’s acetate or albumin solutions served as controls. Coagulation was assessed by TEM, and hemodynamic measurements were based on cardiac index (CI) measured by thermodilution. Results: HES and gelatin solutions impaired whole blood coagulation similarly, as measured by TEM, even at a small dose of 7 mL/kg. These solutions reduced clot strength and prolonged clot formation time, and these effects became more pronounced with increasing doses of colloid. Neither albumin nor Ringer’s acetate solution disturbed blood coagulation significantly. Coagulation disturbances after infusion of HES or gelatin solutions were clinically slight, and postoperative blood loss was comparable with that after Ringer’s acetate or albumin solutions. Both single and multiple doses of all the colloids increased CI postoperatively, and this effect was dose-dependent; Ringer’s acetate had no effect on CI. At a small dose (7 mL/kg), the effect of gelatin on CI was comparable with that of Ringer’s acetate and significantly less than that of HES 130/0.4 (Study V). However, when the dose was increased to 14 and 21 mL/kg, the hemodynamic effect of gelatin rose and became comparable with that of HES 130/0.4. Conclusions: After cardiac surgery, HES and gelatin solutions impaired clot strength in a dose-dependent manner. The potential mechanisms were interaction with fibrinogen and fibrin formation, resulting in decreased clot strength, and hemodilution. Although the use of HES and gelatin inhibited coagulation, postoperative bleeding on the first postoperative morning was similar in all the study groups. A single dose of HES solution improved CI postoperatively more than did gelatin, albumin, or Ringer’s acetate. However, when administered repeatedly (cumulative dose of 14 mL/kg or more), HES 130/0.4 and gelatin showed no evident differences.