Barbara's story: a thematic analysis of a relative's reflection of being in the intensive care unit
Abstract:
Aim The aim of this reflective account is to provide a view of the intensive care unit (ICU) relative's experiences of supporting and being supported in the ICU. Background Understanding relatives' experiences of the ICU is important, especially because recent work has identified the potential for this group to develop post-traumatic stress disorder, a condition normally equated with the ICU survivor. Design A thematic analysis was used to identify emerging themes that would be significant in an ICU nursing context. Setting The incident took place in two 8-bedded ICUs (private and National Health Service) in October. Results Two emergent themes were identified from the reflective story: fear of the technological environment, and feeling hopeless and helpless. Conclusion The use of relatives' stories as an insight into the lived experiences of ICU relatives may give a deeper understanding of their life-world. The loneliness, anguish and pain of the ICU relative extend beyond the walls of the ICU, and this is often neglected because the focus of the ICU team is the patient. Relevance to clinical practice Developing strategies to support relatives might include relative diaries kept concurrently with patient diaries, to support this group's recovery or at the very least to help them gain a sense of understanding of their ICU experience. Relative follow-up clinics designed specifically to meet their needs, where support and advice can be given by the ICU team, may also help, alongside timely and appropriate referrals to counselling services and, where appropriate, the involvement of spiritual leaders.
Abstract:
Aim: The aim of this survey was to assess registered nurses' perceptions of alarm setting and management in an Australian regional critical care unit. Background: The setting and management of alarms within the critical care environment is one of the key responsibilities of the nurse in this area. However, with up to 99% of alarms potentially being false positives, it is easy for the nurse to become desensitised or fatigued by incessant alarms, which in some cases number up to 400 per patient per day. Inadvertently ignoring, silencing or disabling alarms can have deleterious implications for the patient and nurse. Method: A total population sample of 48 nursing staff from a 13-bedded ICU/HDU/CCU in regional Australia was asked to participate. A 10-item open-ended and multiple-choice questionnaire was distributed to determine their perceptions of, and attitudes towards, alarm setting and management within this clinical area. Results: Two key themes were identified from the open-ended questions: attitudes towards inappropriate alarm settings, and annoyance at delayed responses to alarms. A large majority of respondents (93%) agreed that alarm fatigue can result in alarm desensitisation and the disabling of alarms, whilst 81% suggested the key contributing factors were false-positive alarms and inappropriately set alarms.
Abstract:
There is a wide range of potential study designs for intervention studies to decrease nosocomial infections in hospitals. The analysis is complex due to competing events, clustering, multiple timescales and time-dependent period and intervention variables. This review considers the popular pre-post quasi-experimental design and compares it with randomized designs. Randomization can be done in several ways: randomization of the cluster [intensive care unit (ICU) or hospital] in a parallel design; randomization of the sequence in a cross-over design; and randomization of the time of intervention in a stepped-wedge design. We introduce each design in the context of nosocomial infections and discuss the designs with respect to the following key points: bias, control for nonintervention factors, and generalizability. Statistical issues are discussed. A pre-post-intervention design is often the only choice that will be informative for a retrospective analysis of an outbreak setting. It can be seen as a pilot study with further, more rigorous designs needed to establish causality. To yield internally valid results, randomization is needed. Generally, the first choice in terms of the internal validity should be a parallel cluster randomized trial. However, generalizability might be stronger in a stepped-wedge design because a wider range of ICU clinicians may be convinced to participate, especially if there are pilot studies with promising results. For analysis, the use of extended competing risk models is recommended.
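To make the three randomized alternatives concrete, here is a minimal numpy sketch (illustrative only, not from the review) that builds the intervention-allocation matrix for each design, with rows as ICU clusters and columns as study periods:

```python
# Illustrative allocation matrices for the three cluster-randomized designs
# discussed above (rows = ICU clusters, columns = periods, 1 = intervention
# active). A sketch only; real trials also fix period lengths, washouts, etc.
import numpy as np

rng = np.random.default_rng(0)
n_icus, n_periods = 6, 6

# Parallel cluster design: randomize the *cluster* -- half the ICUs get the
# intervention for the whole study, the rest remain controls.
parallel = np.zeros((n_icus, n_periods), dtype=int)
treated = rng.choice(n_icus, size=n_icus // 2, replace=False)
parallel[treated, :] = 1

# Cross-over design: randomize the *sequence* -- each ICU spends half the
# study under the intervention and half under control.
crossover = np.zeros((n_icus, n_periods), dtype=int)
first = rng.choice(n_icus, size=n_icus // 2, replace=False)
crossover[first, : n_periods // 2] = 1
rest = np.setdiff1d(np.arange(n_icus), first)
crossover[rest, n_periods // 2 :] = 1

# Stepped-wedge design: randomize the *time of intervention* -- every ICU
# eventually switches, after a shared baseline period.
stepped = np.zeros((n_icus, n_periods), dtype=int)
switch_period = rng.permutation(np.arange(n_icus) % (n_periods - 1)) + 1
for icu, t in enumerate(switch_period):
    stepped[icu, t:] = 1

for name, m in [("parallel", parallel), ("cross-over", crossover),
                ("stepped-wedge", stepped)]:
    print(name, m, sep="\n")
```

Printing the three matrices side by side makes the generalizability argument visible: in the stepped-wedge matrix every ICU eventually receives the intervention, which is one reason a wider range of clinicians may agree to participate.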
Abstract:
Introduction Risk factor analyses for nosocomial infections (NIs) are complex. First, owing to competing events for NI, the association between risk factors and NI as measured using hazard rates may not coincide with the association measured using cumulative probability (risk). Second, patients from the same intensive care unit (ICU), who share the same environmental exposure, are likely to be more similar with regard to risk factors predisposing to NI than patients from different ICUs. We aimed to develop an analytical approach that accounts for both features and to use it to evaluate associations between patient- and ICU-level characteristics and both the rates of NI and competing risks and the cumulative probability of infection. Methods We considered a multicenter database of 159 intensive care units containing 109,216 admissions (813,739 admission-days) from the Spanish HELICS-ENVIN ICU network. We analyzed the data using two models: an etiologic model (rate based) and a predictive model (risk based). In both models, random effects (shared frailties) were introduced to assess heterogeneity. Death and discharge without NI were treated as competing events for NI. Results There was a large heterogeneity across ICUs in NI hazard rates, which remained after accounting for multilevel risk factors, meaning that there are remaining unobserved ICU-specific factors that influence NI occurrence. Heterogeneity across ICUs in terms of cumulative probability of NI was even more pronounced. Several risk factors had markedly different associations in the rate-based and risk-based models. For some, the associations differed in magnitude: for example, high Acute Physiology and Chronic Health Evaluation II (APACHE II) scores were associated with modest increases in the rate of nosocomial bacteremia but large increases in the risk. Others differed in sign: for example, a respiratory (vs cardiovascular) diagnostic category was associated with a reduced rate of nosocomial bacteremia but an increased risk. Conclusions A combination of competing risks and multilevel models is required to understand direct and indirect risk factors for NI and to distinguish patient-level from ICU-level factors.
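A hedged Python sketch of the rate-based (etiologic) side of such an analysis: cause-specific Cox models for NI, censoring death and discharge as competing events. The file and column names are hypothetical, and ICU-clustered robust standard errors stand in for the paper's shared-frailty terms, which lifelines does not fit directly:

```python
# Cause-specific hazards with competing events: fit one Cox model per event
# type, treating the other two event types as censoring.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("icu_admissions.csv")  # hypothetical extract
# assumed columns: time (days), event in {"ni", "death", "discharge"},
#                  apache2, diag_resp (1 = respiratory), icu_id

def cause_specific_cox(data: pd.DataFrame, cause: str) -> CoxPHFitter:
    d = data.assign(event_ind=(data["event"] == cause).astype(int))
    cph = CoxPHFitter()
    # cluster_col -> sandwich standard errors by ICU, a rough stand-in for
    # the ICU-level shared frailties used in the paper.
    cph.fit(d[["time", "event_ind", "apache2", "diag_resp", "icu_id"]],
            duration_col="time", event_col="event_ind", cluster_col="icu_id")
    return cph

for cause in ("ni", "death", "discharge"):
    cause_specific_cox(df, cause).print_summary(decimals=2)
```

The risk-based (predictive) counterpart would model the cumulative incidence directly, which is why the two sets of coefficients can differ in magnitude or even sign, as the results above illustrate.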
Abstract:
The purpose of this study was to identify pressure ulcer (PU) incidence and the risk factors associated with PU development in patients in two adult intensive care units (ICUs) in Saudi Arabia. A prospective cohort study design was used. A total of 84 participants were screened every second day until discharge or death, over a consecutive 30-day period; 33 participants with new PUs were identified, giving a cumulative hospital-acquired PU incidence of 39.3% (33/84 participants). The incidence of medical device-related PUs was 8.3% (7/84). Age, length of stay in the ICU, history of cardiovascular disease and kidney disease, infrequent repositioning, time of operation, emergency admission, mechanical ventilation and lower Braden Scale scores independently predicted the development of a PU. According to binary logistic regression analyses, age, longer stay in the ICU and infrequent repositioning were significant predictors of all stages of PUs, while the length of stay in the ICU and infrequent repositioning were associated with the development of stage II-IV PUs. In conclusion, the PU incidence rate was higher than that reported in other international studies, indicating that urgent attention to PU prevention strategies is required in this setting.
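For clarity, the incidence figures above are simple cumulative-incidence ratios of the reported counts:

```latex
\[
\widehat{CI}_{\text{PU}} = \frac{33}{84} \approx 39.3\%,
\qquad
\widehat{CI}_{\text{device-related}} = \frac{7}{84} \approx 8.3\%.
\]
```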
Abstract:
Purpose This study tested the effectiveness of a pressure ulcer (PU) prevention bundle in reducing the incidence of PUs in critically ill patients in two Saudi intensive care units (ICUs). Design A two-arm cluster randomized controlled trial. Methods Participants in the intervention group received the PU prevention bundle, while the control group received standard skin care as per the local ICU policies. Data collected included demographic variables (age, diagnosis, comorbidities, admission trajectory, length of stay) and clinical variables (Braden Scale score, organ dysfunction severity score, mechanical ventilation, PU presence, and staging). All patients were followed every two days from admission through to discharge, death, or up to a maximum of 28 days. Data were analyzed with descriptive and correlational statistics, Kaplan-Meier survival analysis, and Poisson regression. Findings The total number of participants recruited was 140: 70 control participants (with a total of 728 days of observation) and 70 intervention participants (784 days of observation). PU cumulative incidence was significantly lower in the intervention group (7.14%) than in the control group (32.86%). Poisson regression revealed that the likelihood of PU development was 70% lower in the intervention group. The intervention group had significantly less Stage I (p = .002) and Stage II PU development (p = .026). Conclusions Significant improvements were observed in PU-related outcomes with the implementation of the PU prevention bundle in the ICU; PU incidence, severity, and the total number of PUs per patient were all reduced. Clinical Relevance Utilizing a bundle approach and standardized nursing language, through skin assessment and translation of the knowledge to practice, has the potential to impact positively on quality of care and patient outcomes.
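A hedged sketch of the kind of Poisson analysis described above, using statsmodels with observation days as the exposure; the data file and column names are assumptions, not the trial's actual dataset. An incidence rate ratio (IRR) of about 0.30 for the group term would correspond to the reported 70% lower likelihood:

```python
# Poisson regression of pressure ulcer (PU) counts with a person-time offset.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("pu_trial.csv")  # assumed: pu_count, days_observed, group (1 = bundle)

X = sm.add_constant(df[["group"]])                # intercept + treatment indicator
fit = sm.GLM(df["pu_count"], X,
             family=sm.families.Poisson(),
             exposure=df["days_observed"]).fit()  # log(days) enters as an offset

irr = np.exp(fit.params["group"])                 # incidence rate ratio
lo, hi = np.exp(fit.conf_int().loc["group"])      # 95% CI on the IRR scale
print(f"IRR = {irr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```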
Abstract:
Purpose To test the impact of an interventional patient skin integrity bundle, the InSPiRE protocol, on pressure injuries (PrIs) in critically ill patients in an Australian adult intensive care unit (ICU). Methods A before-and-after design was used in which the group of patients receiving the intervention (InSPiRE protocol) was compared with a similar control group who received standard care. Data collected included demographic and clinical variables, skin assessment, PrI presence and stage, and the Sequential Organ Failure Assessment (SOFA) score. Results Overall, 207 patients were enrolled, 105 in the intervention group and 102 in the control group. Most patients were men, with a mean age of 55. The groups were similar on major demographic variables (age, SOFA scores, ICU length of stay). Cumulative pressure injury incidence was significantly lower in the intervention group than in the control group for skin injuries (18% vs 30.4%; χ² = 4.271, df = 1, p = 0.039) and for mucosal injuries (t = 3.27, p < 0.001). Significantly fewer PrIs developed over time in the intervention group (log-rank = 11.842, df = 1, p < 0.001), and intervention patients developed fewer skin injuries (>3 PrIs/patient: 1/105) than controls (>3 injuries/patient: 10/102) (p = 0.018). Conclusion The intervention group, receiving the InSPiRE protocol, had a lower cumulative PrI incidence and a reduced number and severity of PrIs developing over time. Systematic and ongoing assessment of the patient's skin and PrI risk, as well as implementation of tailored prevention measures, are central to preventing PrIs.
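As a rough consistency check (counts inferred from the reported percentages, not taken from the paper), the skin injury comparison can be reproduced with a standard chi-square test:

```python
# 18% of 105 intervention patients ~ 19 injured; 30.4% of 102 controls ~ 31.
from scipy.stats import chi2_contingency

table = [[19, 105 - 19],   # intervention: injured, not injured
         [31, 102 - 31]]   # control:      injured, not injured
chi2, p, dof, _ = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.3f}, df = {dof}, p = {p:.3f}")  # ~4.27, 1, ~0.039
```

With these inferred counts the statistic lands on the reported χ² = 4.271 and p = 0.039, suggesting the test was run without a continuity correction.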
Abstract:
Background It is often believed that ensuring the ongoing completion of competency documents and life-long learning in nursing practice guarantees quality patient care. This is probably true in most cases, where it provides reassurance that the nursing team is maintaining a safe “generalised” level of practice. However, competency does not always promise quality performance: a number of studies have reported differences between what practitioners know and what they actually do, despite their being deemed competent. Aim The aim of this study was to assess whether our current competency documentation is fit for purpose and to ascertain whether performance assessment needs to be a key component in determining competence. Method Fifteen nurses within a general ICU who had been on the unit for less than 4 years agreed to participate in this project. Using participant observation, and assessing performance against key indicators of the Benner Novice to Expert model, the participants were supported and assessed over the course of a ‘normal’ nursing shift. Results The results were surprising, both positively and negatively. First, the nurses felt more empowered in their clinical decision-making skills; second, the process identified individual learning needs and milestones in educational development. Some key challenges were also identified: five nurses overestimated their level of competence; practice was still very much focused on task acquisition and skill; and, surprisingly, some nurses still felt dominated by the other health professionals within the unit. Conclusion We found that the capacity and capabilities of our nursing workforce need continual, ongoing support, especially if we want to move our staff from capable task-doers to competent performers. Using the key novice-to-expert indicators identified the way forward for how we assess performance and competence in practice, particularly where promotion to higher grades is based on existing documentation.
Abstract:
Aims & Objectives: (1) to identify and diagnose the current problems associated with patient care with regard to the nursing management of patients with Sengstaken-Blakemore tubes in situ; (2) to identify the nursing practice currently in place within the ICU and the hospital; and (3) to identify the method by which the assessment and provision of nursing care is delivered in the ICU.
Abstract:
Background People admitted to intensive care units and those with chronic health care problems often require long-term vascular access. Central venous access devices (CVADs) are used for administering intravenous medications and blood sampling. CVADs are covered with a dressing and secured with an adhesive or adhesive tape to protect them from infection and reduce movement. Dressings are changed when they become soiled with blood or start to come away from the skin. Repeated removal and application of dressings can cause damage to the skin. The skin is an important barrier that protects the body against infection. Less frequent dressing changes may reduce skin damage, but it is unclear whether this practice affects the frequency of catheter-related infections. Objectives To assess the effect of the frequency of CVAD dressing changes on the incidence of catheter-related infections and other outcomes including pain and skin damage. Search methods In June 2015 we searched: the Cochrane Wounds Specialised Register; the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library); Ovid MEDLINE; Ovid MEDLINE (In-Process & Other Non-Indexed Citations); Ovid EMBASE and EBSCO CINAHL. We also searched clinical trials registries for registered trials. There were no restrictions with respect to language, date of publication or study setting. Selection criteria All randomised controlled trials (RCTs) evaluating the effect of the frequency of CVAD dressing changes on the incidence of catheter-related infections in all patients in any healthcare setting. Data collection and analysis We used standard Cochrane review methodology. Two review authors independently assessed studies for inclusion and performed risk of bias assessment and data extraction. We undertook meta-analysis where appropriate; otherwise, where data were heterogeneous, we synthesised them descriptively. Main results We included five RCTs (2277 participants) that compared different frequencies of CVAD dressing changes. The studies were all conducted in Europe and published between 1995 and 2009. Participants were recruited from the intensive care and cancer care departments of one children's and four adult hospitals. The studies used a variety of transparent dressings and compared a longer interval between dressing changes (5 to 15 days; intervention) with a shorter interval (2 to 5 days; control). In each study participants were followed up until the CVAD was removed or until discharge from ICU or hospital.
- Confirmed catheter-related bloodstream infection (CRBSI): One trial randomised 995 people receiving central venous catheters to a longer or shorter interval between dressing changes and measured CRBSI. It is unclear whether there is a difference in the risk of CRBSI between people having long or short intervals between dressing changes (RR 1.42, 95% confidence interval (CI) 0.40 to 4.98) (low-quality evidence).
- Suspected catheter-related bloodstream infection: Two trials randomised a total of 151 participants to longer or shorter dressing intervals and measured suspected CRBSI. It is unclear whether there is a difference in the risk of suspected CRBSI between the groups (RR 0.70, 95% CI 0.23 to 2.10) (low-quality evidence).
- All-cause mortality: Three trials randomised a total of 896 participants to longer or shorter dressing intervals and measured all-cause mortality. It is unclear whether there is a difference in the risk of death from any cause between the groups (RR 1.06, 95% CI 0.90 to 1.25) (low-quality evidence).
- Catheter-site infection: Two trials randomised a total of 371 participants to longer or shorter dressing intervals and measured catheter-site infection. It is unclear whether there is a difference in risk between the groups (RR 1.07, 95% CI 0.71 to 1.63) (low-quality evidence).
- Skin damage: One small trial (112 children) and three trials (1475 adults) measured skin damage. There was very-low-quality evidence for the effect of long intervals between dressing changes on skin damage compared with short intervals (children: RR of scoring ≥ 2 on the skin damage scale 0.33, 95% CI 0.16 to 0.68; data for adults not pooled).
- Pain: Two studies involving 193 participants measured pain. It is unclear whether there is a difference between long- and short-interval dressing changes in pain during dressing removal (RR 0.80, 95% CI 0.46 to 1.38) (low-quality evidence).
Authors' conclusions The best available evidence is currently inconclusive regarding whether longer intervals between CVAD dressing changes are associated with more or less catheter-related infection, mortality or pain than shorter intervals.
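For readers unfamiliar with the RR notation used throughout these results, this is how a risk ratio and its 95% CI are derived from a single trial's 2x2 table; the counts below are invented for illustration, not taken from the included trials:

```python
# Risk ratio with a Wald 95% CI computed on the log scale.
import math

def risk_ratio_ci(events_a: int, n_a: int, events_b: int, n_b: int, z: float = 1.96):
    """RR of group A vs group B; CI from the standard log-RR variance."""
    rr = (events_a / n_a) / (events_b / n_b)
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    return rr, math.exp(math.log(rr) - z * se), math.exp(math.log(rr) + z * se)

# Hypothetical: 7/500 CRBSIs with long intervals vs 5/495 with short intervals.
rr, lo, hi = risk_ratio_ci(7, 500, 5, 495)
print(f"RR {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

The wide intervals quoted above (e.g. 0.40 to 4.98) reflect exactly this arithmetic applied to small event counts, which is why the review judges the evidence inconclusive.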
Abstract:
Background: Patients may need massive volume-replacement therapy after cardiac surgery because of large perioperative fluid shifts and the use of cardiopulmonary bypass. Hemodynamic stability is better maintained with colloids than with crystalloids, but colloids have more adverse effects, such as coagulation disturbances and impairment of renal function. The present study examined the effects of modern hydroxyethyl starch (HES) and gelatin solutions on blood coagulation and hemodynamics. The mechanism by which colloids disturb blood coagulation was investigated by thromboelastometry (TEM) after cardiac surgery and in vitro by use of experimental hemodilution. Materials and methods: Ninety patients scheduled for elective primary cardiac surgery (Studies I, II, IV, V) and twelve healthy volunteers (Study III) were included in this study. After admission to the cardiac surgical intensive care unit (ICU), patients were randomized to receive different doses of HES 130/0.4, HES 200/0.5, or 4% albumin solutions. Ringer's acetate or albumin solutions served as controls. Coagulation was assessed by TEM, and hemodynamic measurements were based on cardiac index (CI) measured by thermodilution. Results: HES and gelatin solutions impaired whole blood coagulation similarly, as measured by TEM, even at a small dose of 7 mL/kg. These solutions reduced clot strength and prolonged clot formation time, and these effects became more pronounced with increasing doses of colloid. Neither albumin nor Ringer's acetate solution disturbed blood coagulation significantly. Coagulation disturbances after infusion of HES or gelatin solutions were clinically slight, and postoperative blood loss was comparable with that after Ringer's acetate or albumin solutions. Both single and multiple doses of all the colloids increased CI postoperatively, and this effect was dose-dependent; Ringer's acetate had no effect on CI. At a small dose (7 mL/kg), the effect of gelatin on CI was comparable with that of Ringer's acetate and significantly less than that of HES 130/0.4 (Study V). However, when the dose was increased to 14 and 21 mL/kg, the hemodynamic effect of gelatin rose and became comparable with that of HES 130/0.4. Conclusions: After cardiac surgery, HES and gelatin solutions impaired clot strength in a dose-dependent manner. The potential mechanisms were interaction with fibrinogen and fibrin formation, resulting in decreased clot strength, and hemodilution. Although the use of HES and gelatin inhibited coagulation, postoperative bleeding on the first postoperative morning was similar in all the study groups. A single dose of HES solution improved CI postoperatively more than did gelatin, albumin, or Ringer's acetate. However, when administered repeatedly (cumulative dose of 14 mL/kg or more), no differences were evident between HES 130/0.4 and gelatin.
Abstract:
Assessment of the outcome of critical illness is complex. Severity scoring systems and organ dysfunction scores are traditional tools for mortality and morbidity prediction in intensive care. Their ability to explain risk of death is impressive for large cohorts of patients, but insufficient for an individual patient. Although events before intensive care unit (ICU) admission are prognostically important, the prediction models utilize data collected at and just after ICU admission. In addition, several biomarkers have been evaluated to predict mortality, but none has proven entirely useful in clinical practice. Therefore, new prognostic markers of critical illness are needed when evaluating intensive care outcome. The aim of this dissertation was to investigate new measures and biological markers of critical illness and to evaluate their predictive value and association with mortality and disease severity. The impact of delay in the emergency department (ED) on intensive care outcome, measured as hospital mortality and health-related quality of life (HRQoL) at 6 months, was assessed in 1537 consecutive patients admitted to a medical ICU. Two new biological markers were investigated in two separate patient populations: 231 ICU patients and 255 patients with severe sepsis or septic shock. Cell-free plasma DNA is a surrogate marker of apoptosis; its association with disease severity and mortality rate was evaluated in ICU patients. Next, the predictive value of plasma DNA regarding mortality, and its association with the degree of organ dysfunction and disease severity, was evaluated in severe sepsis and septic shock. Heme oxygenase-1 (HO-1) is a potential regulator of apoptosis. Finally, HO-1 plasma concentrations and HO-1 gene polymorphisms and their association with outcome were evaluated in ICU patients. The length of ED stay was not associated with the outcome of intensive care. The hospital mortality rate was significantly lower in patients admitted to the medical ICU from the ED than in those admitted from elsewhere, and the HRQoL of the critically ill at 6 months was significantly lower than that of the age- and sex-matched general population. In the ICU patient population, the maximum plasma DNA concentration measured during the first 96 hours in intensive care correlated significantly with disease severity and degree of organ failure and was independently associated with hospital mortality. In patients with severe sepsis or septic shock, cell-free plasma DNA concentrations were significantly higher in ICU and hospital nonsurvivors than in survivors and showed moderate discriminative power regarding ICU mortality. Plasma DNA was an independent predictor of ICU mortality, but not of hospital mortality. The degree of organ dysfunction correlated independently with plasma DNA concentration in severe sepsis and with plasma HO-1 concentration in ICU patients. The HO-1 -413T/GT(L)/+99C haplotype was associated with HO-1 plasma levels and the frequency of multiple organ dysfunction. Plasma DNA and HO-1 concentrations may support the assessment of outcome or organ failure development in critically ill patients, although their value is limited and requires further evaluation.
Abstract:
Severe sepsis is common and associated with high costs of care and significant mortality. The incidence of severe sepsis has been reported to vary between 0.5/1,000 and 3/1,000 in different studies. The worldwide Surviving Sepsis Campaign, with its guidelines and treatment protocols, aims at decreasing the high morbidity and mortality associated with severe sepsis. Various mediators of inflammation, such as high mobility group box-1 protein (HMGB1) and vascular endothelial growth factor (VEGF), have been tested as markers of severity of illness and outcome in severe sepsis. Long-term survival and quality of life (QOL) are important outcomes after severe sepsis. The objective of this study was to evaluate the incidence, severity of organ dysfunction and outcome of severe sepsis in intensive care patients in Finland (Study I). HMGB1 and VEGF were studied for predicting severity of illness, development and type of organ dysfunction, and hospital mortality (Studies II and III). The long-term outcome and quality of life were assessed, and quality-adjusted life-years (QALYs) and the cost per QALY were estimated (Study IV). A total of 470 patients with severe sepsis were included in the Finnsepsis study. Patients were treated in 24 Finnish intensive care units in a 4-month period from 1 November 2004 to 28 February 2005. The incidence of severe sepsis was 0.38/1,000 in the adult population (95% confidence interval 0.34-0.41). Septic shock (77%), severe oxygenation impairment (71.4%) and acute renal failure (23.2%) were the most common organ failures. The ICU, hospital, one-year and two-year mortalities were 15.5%, 28.3%, 40.9% and 44.9%, respectively. HMGB1 and VEGF were elevated in patients with severe sepsis. VEGF concentrations were lower in nonsurvivors than in survivors, but HMGB1 levels did not differ between the groups. Neither HMGB1 nor VEGF was predictive of hospital mortality. QOL was measured a median of 17 months after severe sepsis and was lower than in the reference population. The mean QALY estimate was 15.2 years for a surviving patient, and the cost per QALY was €2,139. The study showed that the incidence of severe sepsis is lower in Finland than in other countries. The short-term outcome is comparable with that in other countries, but the long-term outcome is poor. HMGB1 and VEGF are not useful for predicting mortality in severe sepsis. As the mean QALY gain for a surviving patient is 15.2 years and the cost per QALY is reasonably low, intensive care is cost-effective in patients with severe sepsis.
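The cost-effectiveness statement above rests on a simple cost-utility ratio; schematically (a generic formula, with the implied per-patient cost derived from the two reported figures rather than stated in the abstract):

```latex
\[
\text{cost per QALY} = \frac{\text{cost of care per surviving patient}}{\text{QALYs gained}}
\;\Rightarrow\;
\text{implied cost} \approx 15.2 \times 2{,}139\,\text{EUR} \approx 32{,}500\,\text{EUR},
\]
```

assuming the ratio was computed per surviving patient in this way.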
Abstract:
Septic shock is a common killer in intensive care units (ICUs). The most crucial issues for outcome are the early and aggressive start of treatment aimed at normalization of hemodynamics and the early start of antibiotics during the very first hours. The optimal targets of hemodynamic treatment, and the impact of hemodynamic treatment on survival after the first resuscitation period, are less well known. The objective of this study was to evaluate different aspects of the hemodynamic pattern in septic shock, with special attention to prediction of outcome; in particular, components of early treatment and monitoring in the ICU were assessed. A total of 401 patients, 218 with septic shock and 192 with severe sepsis or septic shock, were included in the study. The patients were treated in 24 Finnish ICUs during 1999-2005, and 295 of them were included in the Finnish national epidemiologic Finnsepsis study. We found that the most important hemodynamic variables for outcome were the mean arterial pressure (MAP) and lactate during the first six hours in the ICU, and the MAP and a mixed venous oxygen saturation (SvO2) under 70% during the first 48 hours. A MAP under 65 mmHg and an SvO2 below 70% were the best predictive thresholds. A high central venous pressure (CVP) also correlated with adverse outcome. We assessed the correlation and agreement of SvO2 and mean central venous oxygen saturation (ScvO2) in septic shock during the first day in the ICU. The mean SvO2 was below the ScvO2 during early sepsis. The bias of the difference was 4.2% (95% limits of agreement -8.1% to 16.5%) by Bland-Altman analysis. The difference between the saturation values correlated significantly with cardiac index and oxygen delivery. Thus, ScvO2 cannot be used as a substitute for SvO2 in hemodynamic monitoring in the ICU. Several biomarkers have been investigated for their ability to help in diagnosis or outcome prediction in sepsis. We assessed the predictive value of N-terminal pro-brain natriuretic peptide (NT-proBNP) for mortality in severe sepsis and septic shock. NT-proBNP levels were significantly higher in hospital nonsurvivors, and NT-proBNP at 72 hours after inclusion was an independent predictor of hospital mortality. Acute cardiac load contributed to NT-proBNP values at admission, but renal failure was the main confounding factor later. The accuracy of NT-proBNP, however, was not sufficient for clinical decision-making concerning outcome prediction. Delays in the start of treatment are associated with poorer prognosis in sepsis. We assessed how the early treatment guidelines had been adopted, and what impact early treatment had on mortality in septic shock in Finland. We found that early treatment was not optimal in Finnish hospitals and that this was reflected in mortality; delayed initiation of antimicrobial agents in particular was associated with unfavorable outcome.
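As a consistency check on the Bland-Altman figures above: the 95% limits of agreement are the mean difference plus or minus 1.96 standard deviations of the differences, so they must straddle the bias symmetrically (which is how the sign of the lower limit can be recovered from the reported numbers):

```latex
\[
\text{LoA} = \bar{d} \pm 1.96\, s_d = 4.2\% \pm 12.3\% \approx (-8.1\%,\ +16.5\%),
\qquad s_d \approx \frac{12.3\%}{1.96} \approx 6.3\%.
\]
```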
Abstract:
Intensive care should be provided to patients who benefit from it, in an ethical, efficient, effective and cost-effective manner. This implies long-term qualitative and quantitative analysis of intensive care procedures and related resources. The study population consists of 2709 patients treated in the general intensive care unit (ICU) of Helsinki University Hospital. The study investigates intensive care patients' mortality, quality of life (QOL) and Quality-Adjusted Life-Years (QALY units), and factors related to severity of illness, length of stay (LOS), patient age and evaluation period, as well as experiences and memories connected with the ICU episode. In addition, the study examines the qualities of two QOL measures, the RAND 36-Item Health Survey 1.0 (RAND-36) and the EuroQol five-dimension measure (EQ-5D), and assesses the correlation of their results. Patients treated in 1995 responded to the RAND-36 questionnaire in 1996. All patients treated from 1995-2000 received a QOL questionnaire in 2001, when 1-7 years had elapsed since the intensive treatment; the response rate was 79.5%.
Main results: 1) Of the patients who died within the first year (n = 1047), 66% died during the intensive care period or within the following month. The nonsurvivors were older than the surviving patients, generally had higher than average APACHE II and SOFA scores depicting the severity of illness, and their ICU LOS was longer and hospital stay shorter than those of the surviving patients (p < 0.001). Mortality of patients receiving conservative treatment was higher than that of those receiving surgical treatment. Patients replying to the QOL survey in 2001 (n = 1099) had recovered well: 97% of them lived at home. More than half considered their QOL good or extremely good, 40% satisfactory and 7% bad. All QOL indices of those of working age were considerably lower (p < 0.001) than the comparable figures for the age- and gender-adjusted Finnish population. The 5-year monitoring period made evident that mental recovery was slower than physical recovery. 2) The results of the RAND-36 and EQ-5D correlated well (p < 0.01). The RAND-36 profile measure distinguished more clearly between the different categories and levels of QOL; the EQ-5D measured the patient groups' general QOL well, and its sum index was used to calculate QALY units. 3) QALY units were calculated by multiplying the time the patient survived after the ICU stay, or the expected life-years, by the EQ-5D sum index. Aging automatically lowers the number of QALY units. Patients under the age of 65 receiving conservative treatment benefited from treatment to a greater extent, measured in QALY units, than their peers receiving surgical treatment, but in the age group 65 and over, patients with surgical treatment received higher QALY ratings than recipients of conservative treatment. 4) The intensive care experience and QOL ratings were connected. The QOL indices were highest for those with memories of intensive care as a positive experience, albeit their illness requiring intensive care was less serious than average. No statistically significant differences were found in the QOL indices of those with negative memories, no memories, or those who did not express the quality of their experiences.
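The QALY calculation described in result 3) reduces to a simple product; a worked illustration with assumed numbers, not figures from the study:

```latex
\[
\text{QALY} = \text{(years survived or expected)} \times \text{EQ-5D sum index},
\qquad \text{e.g. } 10~\text{yr} \times 0.75 = 7.5~\text{QALYs},
\]
```

which also makes the aging effect noted above explicit: fewer expected life-years mechanically yield fewer QALY units at the same utility level.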