644 results for Admissions


Relevance: 10.00%

Abstract:

Objective--To determine whether heart failure with preserved systolic function (HFPSF) has a different natural history from left ventricular systolic dysfunction (LVSD). Design and setting--A retrospective analysis of 10 years of data (for patients admitted between 1 July 1994 and 30 June 2004, with a study census date of 30 June 2005) routinely collected as part of clinical practice in a large tertiary referral hospital. Main outcome measures--Sociodemographic characteristics, diagnostic features, comorbid conditions, pharmacotherapies, readmission rates and survival. Results--Of the 2961 patients admitted with chronic heart failure, 753 had echocardiograms available for this analysis. Of these, 189 (25%) had normal left ventricular size and systolic function. Compared with patients with LVSD, those with HFPSF were more often female (62.4% v 38.5%; P = 0.001), had less social support, were more likely to live in nursing homes (17.9% v 7.6%; P < 0.001), and had a greater prevalence of renal impairment (86.7% v 6.2%; P = 0.004), anaemia (34.3% v 6.3%; P = 0.013) and atrial fibrillation (51.3% v 47.1%; P = 0.008), but significantly less ischaemic heart disease (53.4% v 81.2%; P = 0.001). Patients with HFPSF were less likely to be prescribed an angiotensin-converting enzyme inhibitor (61.9% v 72.5%; P = 0.008); carvedilol was used more frequently in LVSD (1.5% v 8.8%; P < 0.001). Readmission rates were higher in the HFPSF group (median, 2 v 1.5 admissions; P = 0.032), particularly for malignancy (4.2% v 1.8%; P < 0.001) and anaemia (3.9% v 2.3%; P < 0.001). Both groups had the same poor survival rate (P = 0.912). Conclusions--Patients with HFPSF were predominantly older women with less social support and higher readmission rates for associated comorbid illnesses. We therefore propose that reduced survival in HFPSF may relate more to comorbid conditions than to suboptimal cardiac management.
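
The between-group percentages above are the kind of comparison a chi-square test on a 2x2 table produces. A minimal sketch in Python, with counts reconstructed from the reported sex distribution (illustrative only, not the authors' analysis code):

```python
from scipy.stats import chi2_contingency

# Illustrative 2x2 table reconstructed from the reported sex distribution:
# 62.4% female among 189 HFPSF patients vs 38.5% among 564 LVSD patients.
hfpsf = [118, 71]    # female, male (~62.4% of 189)
lvsd = [217, 347]    # female, male (~38.5% of 564)

chi2, p, dof, expected = chi2_contingency([hfpsf, lvsd])
print(f"chi2({dof}) = {chi2:.1f}, p = {p:.2g}")  # highly significant, as reported
```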

Relevance: 10.00%

Abstract:

Objectives To explore the extent of, and factors associated with, change in wandering status among male residents after nursing home admission. Design Longitudinal study with secondary data analyses. Admissions over a 4-year period were examined using repeat assessments with the Minimum Data Set (MDS) to formulate a model of the development of wandering behavior. Setting One hundred thirty-four Veterans Administration (VA) nursing homes throughout the United States. Participants 6673 residents admitted to VA nursing homes between October 2000 and October 2004. Measurements MDS variables (cognitive impairment, mood, behavior problems, activities of daily living and wandering) included ratings recorded at residents' admission to the nursing home and at a minimum of two other time points at quarterly intervals. Results The majority (86%) of the sample were classified as non-wanderers at admission, and most of these (94%) remained non-wanderers until discharge or the end of the study. Fifty-one percent of the wanderers changed status to non-wanderers, with 6% of these residents fluctuating in status more than twice. Admission variables associated with an increased risk of changing from non-wandering to wandering status included older age, greater cognitive impairment, more socially inappropriate behavior, resisting care, easier distractibility, and needing less help with personal hygiene. Requiring assistance with locomotion and having three or more medical comorbidities were associated with a decreased chance of changing from non-wandering to wandering status. Conclusion A resident's change from non-wandering to wandering status may reflect an undetected medical event that affects cognition but spares mobility.
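
Detecting such status changes from repeated MDS assessments is essentially a longitudinal data-wrangling step. A sketch with a hypothetical long-format extract (column names and values are invented; the study's MDS variables are richer than this):

```python
import pandas as pd

# Hypothetical long-format MDS extract: one row per resident per assessment
# (admission = 0, then quarterly repeats). Column names are invented.
mds = pd.DataFrame({
    "resident_id": [1, 1, 1, 2, 2, 2],
    "assessment":  [0, 1, 2, 0, 1, 2],
    "wanderer":    [0, 0, 1, 0, 0, 0],   # MDS wandering indicator
})

mds = mds.sort_values(["resident_id", "assessment"])
status = mds.groupby("resident_id")["wanderer"]

summary = pd.DataFrame({
    "wanderer_at_admission": status.first() == 1,
    "ever_changed_status": status.nunique() > 1,
    # count transitions between wandering and non-wandering
    "n_changes": status.apply(lambda s: int((s.diff().abs() > 0).sum())),
})
print(summary)
```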

Relevance: 10.00%

Abstract:

Background: An estimated 285 million people worldwide have diabetes, and its prevalence is predicted to increase to 439 million by 2030. For the year 2010, an estimated 3.96 million excess deaths in the age group 20-79 years are attributable to diabetes worldwide. Self-management is recognised as an integral part of diabetes care. This paper describes the protocol of a randomised controlled trial of an automated interactive telephone system aiming to improve the uptake and maintenance of essential diabetes self-management behaviours. Methods/Design: A total of 340 individuals with type 2 diabetes will be randomised, either to the routine care arm, or to the intervention arm in which participants receive the Telephone-Linked Care (TLC) Diabetes program in addition to their routine care. The intervention requires participants to telephone the TLC Diabetes phone system weekly for 6 months. They receive the study handbook and a glucose meter linked to a data uploading device. The TLC system consists of a computer with software designed to provide monitoring, tailored feedback and education on key aspects of diabetes self-management, based on answers voiced or entered during the current or previous conversations. Data collection is conducted at baseline (Time 1), 6-month follow-up (Time 2), and 12-month follow-up (Time 3). The primary outcomes are glycaemic control (HbA1c) and quality of life (Short Form-36 Health Survey version 2). Secondary outcomes include anthropometric measures, blood pressure, blood lipid profile, psychosocial measures, and measures of diet, physical activity, blood glucose monitoring, foot care and medication taking. Information on utilisation of healthcare services, including hospital admissions, medication use and costs, is collected. An economic evaluation is also planned. Discussion: Outcomes will provide evidence concerning the efficacy of a telephone-linked care intervention for self-management of diabetes. Furthermore, the study will provide insight into the potential for more widespread uptake of automated telehealth interventions globally.
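
As a rough illustration of the trial's two-arm allocation, a simple 1:1 randomisation could look like the sketch below; the protocol excerpt does not state the actual randomisation scheme (e.g., blocking or stratification), so this is an assumption:

```python
import random

def randomise(n_participants: int, seed: int = 42) -> list[str]:
    """Simple 1:1 allocation for a two-arm trial (illustrative only;
    the protocol does not state the actual randomisation scheme)."""
    arms = ["routine care", "TLC Diabetes + routine care"] * (n_participants // 2)
    random.Random(seed).shuffle(arms)
    return arms

allocation = randomise(340)
print(sum(a.startswith("TLC") for a in allocation), "of", len(allocation),
      "allocated to the intervention arm")
```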

Relevance: 10.00%

Abstract:

Objective: To understand the levels of substance abuse and dependence among impaired drivers by comparing patients in substance abuse treatment programs with and without a past-year DUI arrest, based on their primary problem substance at admission (alcohol, cocaine, cannabis, or methamphetamine). Method: Records on 345,067 admissions to Texas treatment programs between 2005 and 2008 were analyzed for differences in demographic characteristics, levels of severity, and mental health problems at admission, treatment completion, and 90-day follow-up. Methods include t tests, chi-square (χ²) tests, and multivariate logistic regression. Results: The analysis found that DUI arrestees with a primary problem with alcohol were less impaired than non-DUI alcohol patients, had fewer mental health problems, and were more likely to complete treatment. DUI arrestees with a primary problem with cannabis were more impaired than non-DUI cannabis patients, and there was no difference in treatment completion. DUI arrestees with a primary problem with cocaine were less impaired and more likely to complete treatment than other cocaine patients, and there was little difference in levels of mental health problems. DUI arrestees with a primary problem with methamphetamine were similar to methamphetamine non-arrestees, with no difference in mental health problems or treatment completion. Conclusions: This study provides evidence of the extent of abuse and dependence among DUI arrestees and their need for treatment for their alcohol and drug problems in order to decrease recidivism. Treatment patients with past-year DUI arrests had good treatment outcomes, but closer supervision during the 90-day follow-up after treatment could lead to even better long-term outcomes, including reduced recidivism. Information will be provided on the latest treatment methodologies, including medication-assisted therapies and screening and brief interventions, and on ways impaired-driving programs and substance dependence programs can be integrated to benefit the driver and society.
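
One of the named methods, multivariate logistic regression, might be set up as below; the predictors, effect sizes, and data are invented for illustration and are not the study's records:

```python
import numpy as np
import statsmodels.api as sm

# Invented records: treatment completion predicted by past-year DUI arrest,
# adjusting for age. Effects and variables are assumptions for illustration.
rng = np.random.default_rng(0)
n = 1000
dui = rng.integers(0, 2, n)
age = rng.normal(35, 10, n)
logit = -0.5 + 0.4 * dui + 0.01 * (age - 35)
completed = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(np.column_stack([dui, age]))
fit = sm.Logit(completed, X).fit(disp=False)
print(fit.summary(xname=["const", "dui_arrest", "age"]))
```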

Relevance: 10.00%

Abstract:

Background/aim In response to the high burden of disease associated with chronic heart failure (CHF), in particular the high rates of hospital admissions, dedicated CHF management programs (CHF-MPs) have been developed, and over the past five years there has been rapid growth of CHF-MPs in Australia. Given the apparent mismatch between the demand for, and availability of, CHF-MPs, this paper discusses the accessibility and quality of current CHF-MPs in Australia. Methods The data presented in this report were combined from the research of the co-authors, in particular a review of inequities in access to CHF management that utilised geographical information systems (GIS), and a survey of heterogeneity in quality and service provision in Australia. Results Of the 62 CHF-MPs surveyed, 93% (58 centres) were located in areas rated as Highly Accessible, indicating that most CHF-MPs were located in capital cities or large regional cities. Six percent (4 CHF-MPs) were located in Accessible areas (country towns or cities). No CHF-MPs had been established outside cities to service the estimated 72,000 individuals with CHF living in rural and remote areas. Sixteen percent of programs recruited NYHA Class I patients, and of these, 20% lacked echocardiographic confirmation of their diagnosis. Conclusion Overall, these data highlight the urgent need to provide equitable access to CHF-MPs. When establishing CHF-MPs, consideration should be given to current evidence-based models to ensure quality in practice.

Relevance: 10.00%

Abstract:

Background There are few reports of seasonal variations in chronic heart failure (CHF)-related morbidity and mortality beyond the northern hemisphere. Aims and methods We examined potential seasonal variations in morbidity and all-cause mortality over more than a decade in a cohort of 2961 patients with CHF from a tertiary referral hospital in South Australia, a region subject to mild winters and hot summers. Results Seasonal variation across all event types was observed. CHF-related morbidity peaked in winter (July) and was lowest in summer (February): 70 (95% CI: 65 to 76) vs. 33 (95% CI: 30 to 37) admissions/1000 at risk (p < 0.005). All-cause admissions (113 (95% CI: 107 to 120) vs. 73 (95% CI: 68 to 79) admissions/1000 at risk, p < 0.001) and concurrent respiratory disease (21% vs. 12%, p < 0.001) were consistently higher in winter. A total of 2010 patients died; mortality was highest in August relative to February: 23 (95% CI: 20 to 27) vs. 12 (95% CI: 10 to 15) deaths per 1000 at risk, p < 0.001. Those aged 75 years or older were most at risk of seasonal variations in morbidity and mortality. Conclusion Seasonal variations in CHF-related morbidity and mortality occur even in the hot climate of South Australia, suggesting that relative (rather than absolute) changes in temperature drive this global phenomenon.
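
The "admissions/1000 at risk" figures with 95% CIs can be reproduced for any month given the event count and the number at risk. A sketch using an exact Poisson interval (the paper does not state which interval method was used, so this is an assumption):

```python
from scipy.stats import chi2

def rate_per_1000(events: int, n_at_risk: int, alpha: float = 0.05):
    """Event rate per 1000 at risk with an exact (Garwood) Poisson CI.
    Illustrative: the paper does not state its interval method."""
    lo = chi2.ppf(alpha / 2, 2 * events) / 2 if events > 0 else 0.0
    hi = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2
    scale = 1000 / n_at_risk
    return events * scale, lo * scale, hi * scale

# e.g. a winter month with 70 CHF admissions among ~1000 patients at risk
print(rate_per_1000(events=70, n_at_risk=1000))
```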

Relevance: 10.00%

Abstract:

Objective: To determine whether primary care management of chronic heart failure (CHF) differed between rural and urban areas in Australia. Design: A cross-sectional survey stratified by Rural, Remote and Metropolitan Areas (RRMA) classification. The primary source of data was the Cardiac Awareness Survey and Evaluation (CASE) study. Setting: Secondary analysis of data obtained from 341 Australian general practitioners and 23 845 adults aged 60 years or more in 1998. Main outcome measures: CHF determined by criteria recommended by the World Health Organization, diagnostic practices, use of pharmacotherapy, and CHF-related hospital admissions in the 12 months before the study. Results: There was a significantly higher prevalence of CHF among general practice patients in large and small rural towns (16.1%) compared with capital city and metropolitan areas (12.4%) (P < 0.001). Echocardiography was used less often for diagnosis in rural towns compared with metropolitan areas (52.0% v 67.3%, P < 0.001). Rates of specialist referral were also significantly lower in rural towns than in metropolitan areas (59.1% v 69.6%, P < 0.001), as were prescribing rates of angiotensin-converting enzyme inhibitors (51.4% v 60.1%, P < 0.001). There was no geographical variation in prescribing rates of β-blockers (12.6% [rural] v 11.8% [metropolitan], P = 0.32). Overall, few survey participants received recommended “evidence-based practice” diagnosis and management for CHF (metropolitan, 4.6%; rural, 3.9%; and remote areas, 3.7%). Conclusions: This study found a higher prevalence of CHF, and significantly lower use of recommended diagnostic methods and pharmacological treatment among patients in rural areas.

Relevance: 10.00%

Abstract:

Background and aim Falls are the leading cause of injury in older adults. Identifying people at risk before they experience a serious fall requiring hospitalisation provides an opportunity to intervene earlier and potentially reduce further falls and subsequent healthcare costs. The purpose of this project was to develop a referral pathway to a community falls-prevention team for older people who had experienced a fall attended by a paramedic service and who were not transported to hospital. It was also hypothesised that providing intervention to this group of clients would reduce future falls-related ambulance call-outs, emergency department presentations and hospital admissions. Methods An education package, referral pathway and follow-up procedures were developed. Both services held regular meetings, and work shadowing with the paramedics was also trialled to encourage more referrals. A range of demographic and other outcome measures were collected to compare people referred through the paramedic pathway with those referred through traditional pathways. Results Internal data from the Queensland Ambulance Service indicated that during the 2-year study period (2008–2009) there were approximately six falls per week among community-dwelling older persons in the eligible service catchment area (south-west Brisbane metropolitan area) that were attended by Queensland Ambulance Service paramedics but did not result in transport to hospital. Of the 638 potentially eligible patients, only 17 (2.6%) were referred for a falls assessment. Conclusion Although this pilot programme had support from all levels of management as well as from the service providers, this support did not translate into actual referrals. Several explanations are provided for these preliminary findings.

Relevance: 10.00%

Abstract:

Within Australia, motor vehicle injury is the leading cause of hospital admissions and fatalities. Road crash data reveals that among the factors contributing to crashes in Queensland, speed and alcohol continue to be overrepresented. While alcohol is the number one contributing factor to fatal crashes, speeding also contributes to a high proportion of crashes. Research indicates that risky driving is an important contributor to road crashes. However, it has been debated whether all risky driving behaviours are similar enough to be explained by the same combination of factors. Further, road safety authorities have traditionally relied upon deterrence-based countermeasures to reduce the incidence of illegal driving behaviours such as speeding and drink driving. However, more recent research has focussed on social factors to explain illegal driving behaviours. The purpose of this research was to examine and compare the psychological, legal, and social factors contributing to two illegal driving behaviours: exceeding the posted speed limit and driving when over the legal blood alcohol concentration (BAC) for the driver’s licence type. Complementary theoretical perspectives were chosen to comprehensively examine these two behaviours, including Akers’ social learning theory, Stafford and Warr’s expanded deterrence theory, and personality perspectives encompassing alcohol misuse, sensation seeking, and Type-A behaviour pattern. The program of research consisted of two phases: a preliminary pilot study and the main quantitative phase. The preliminary pilot study was undertaken to inform the development of the quantitative study and to ensure the clarity of the theoretical constructs operationalised in this research. Semi-structured interviews were conducted with 11 Queensland drivers recruited from Queensland Transport Licensing Centres and Queensland University of Technology (QUT). These interviews demonstrated that the majority of participants had engaged in at least one of the behaviours, or knew of someone who had. It was also found among these drivers that the social environment in which both behaviours operated, including family and friends, and the social rewards and punishments associated with the behaviours, were important in their decision making. The main quantitative phase of the research involved a cross-sectional survey of 547 Queensland licensed drivers. The aim of this study was to determine the relationship between speeding and drink driving and whether there were any similarities or differences in the factors that contribute to a driver’s decision to engage in one or the other. A comparison of the participants’ self-reported speeding and self-reported drink driving behaviour demonstrated that there was a weak positive association between these two behaviours. Further, participants reported engaging in more frequent speeding at both low (i.e., up to 10 kilometres per hour) and high (i.e., 10 kilometres per hour or more) levels than in drink driving. It was noted that those who indicated they drove when they may have been over the legal limit for their licence type more frequently exceeded the posted speed limit by 10 kilometres per hour or more than those who complied with the regulatory limits for drink driving. A series of regression analyses were conducted to investigate the factors that predict self-reported speeding, self-reported drink driving, and the preparedness to engage in both behaviours.
In relation to self-reported speeding (n = 465), it was found that among the sociodemographic and person-related factors, younger drivers and those who scored high on measures of sensation seeking were more likely to report exceeding the posted speed limit. In addition, among the legal and psychosocial factors it was observed that direct exposure to punishment (i.e., being detected by police), direct punishment avoidance (i.e., engaging in an illegal driving behaviour and not being detected by police), personal definitions (i.e., personal orientation or attitudes toward the behaviour), both the normative and behavioural dimensions of differential association (i.e., both the orientation or attitudes of friends and family and the behaviour of these individuals), and anticipated punishments were significant predictors of self-reported speeding. It was interesting to note that associating with significant others who held unfavourable definitions towards speeding (the normative dimension of differential association) and anticipating punishments from others were both significant predictors of a reduction in self-reported speeding. In relation to self-reported drink driving (n = 462), a logistic regression analysis indicated several significant predictors of whether participants had driven in the last six months when they thought they may have been over the legal alcohol limit. These included: experiences of direct punishment avoidance; having a family member convicted of drink driving; higher levels of Type-A behaviour pattern; greater alcohol misuse (as measured by the AUDIT); and the normative dimension of differential association (i.e., associating with others who held favourable attitudes to drink driving). A final logistic regression analysis examined the predictors of whether the participants reported engaging in both drink driving and speeding versus those who reported engaging in only speeding (the more common of the two behaviours) (n = 465). It was found that experiences of punishment avoidance for speeding decreased the likelihood of engaging in both speeding and drink driving; whereas in the case of drink driving, direct punishment avoidance increased the likelihood of engaging in both behaviours. It was also noted that holding favourable personal definitions toward speeding and drink driving, as well as higher levels of Type-A behaviour pattern and greater alcohol misuse, significantly increased the likelihood of engaging in both speeding and drink driving. This research has demonstrated that compliance with regulatory limits was much higher for drink driving than it was for speeding. It is acknowledged that while speed limits are a fundamental component of speed management practices in Australia, the countermeasures applied to both speeding and drink driving do not appear to elicit the same level of compliance across the driving population. Further, the findings suggest that while the principles underpinning the current regime of deterrence-based countermeasures are sound, current enforcement practices are insufficient to ensure compliance among the driving population, particularly in the case of speeding.
Future research should further examine the degree of overlap between speeding and drink driving behaviour and whether punishment avoidance experiences for a specific illegal driving behaviour serve to undermine the deterrent effect of countermeasures aimed at reducing the incidence of another illegal driving behaviour. Furthermore, future work should seek to understand the factors which predict engaging in speeding and drink driving behaviours at the same time. Speeding has proven to be a pervasive and persistent behaviour; hence, it would be useful to examine why road safety authorities have been successful in convincing the majority of drivers of the dangers of drink driving, but not of those associated with speeding. In conclusion, the challenge for road safety practitioners will be to convince drivers that speeding and drink driving are equally risky behaviours, with the ultimate goal of reducing the prevalence of both behaviours.
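
The weak positive association reported between the two self-reported behaviours can be quantified with a phi coefficient and odds ratio from a 2x2 cross-tabulation. A minimal sketch with invented counts (the thesis reports only the association, not the table itself):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Invented 2x2 cross-tabulation of self-reported drink driving (rows)
# by self-reported speeding (columns); total n = 547 as in the survey,
# but the cell counts are assumptions for illustration.
table = np.array([[50, 15],     # drink drivers: sped / did not speed
                  [310, 172]])  # non drink drivers: sped / did not speed

chi2, p, dof, _ = chi2_contingency(table, correction=False)
phi = np.sqrt(chi2 / table.sum())                       # 2x2 effect size
odds_ratio = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])
print(f"phi = {phi:.2f}, OR = {odds_ratio:.2f}, p = {p:.3f}")
```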

Relevance: 10.00%

Abstract:

In Australia, as elsewhere, universities are being encouraged to grow their postgraduate research candidature base while at the same time there is increasing pressure on resources with which to manage the burgeoning groups. In this environment HDR supervision strategies are seen as increasingly important as research managers seek the best possible ‘fit’ for an applicant: the candidate who will provide a sound return on investment and demonstrate endurance in the pursuit of a timely completion. As research managers know, the admissions process can be a risky business. The process may be tested further in the context of the new models of doctoral cohort supervision that are being discussed in the higher degree research management sector. The focus of this paper is an examination of the results of investigations of two models of postgraduate cohort supervision in the creative arts Master of Arts research program at QUT with a view to identifying attributes that may be useful for the formation of cohort models of supervision in the doctoral area.

Relevance: 10.00%

Abstract:

BACKGROUND: The effect of extreme temperature has become an increasing public health concern. Evaluating the impact of ambient temperature on morbidity has received less attention than its impact on mortality. METHODS: We performed a systematic literature review and extracted quantitative estimates of the effects of hot temperatures on cardiorespiratory morbidity. There were too few studies on the effects of cold temperatures to warrant a summary. Pooled estimates of the effects of heat were calculated using a Bayesian hierarchical approach that allowed multiple results to be included from the same study, particularly results at different latitudes and with varying lagged effects. RESULTS: Twenty-one studies were included in the final meta-analysis. The pooled results suggest an increase of 3.2% (95% posterior interval = -3.2% to 10.1%) in respiratory morbidity with each 1°C increase on hot days. No apparent association was observed for cardiovascular morbidity (-0.5% [-3.0% to 2.1%]). The length of the lag had inconsistent effects on the risk of respiratory and cardiovascular morbidity, whereas latitude had little effect on either. CONCLUSIONS: The effects of temperature on cardiorespiratory morbidity appeared smaller and more variable than previous findings related to mortality.
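
The pooling step can be illustrated with a random-effects meta-analysis. The sketch below uses the frequentist DerSimonian-Laird estimator as a stand-in for the Bayesian hierarchical model the review actually fitted, with invented per-study effects:

```python
import numpy as np

def dersimonian_laird(effects, ses):
    """Random-effects pooled estimate (DerSimonian-Laird), a frequentist
    stand-in for the review's Bayesian hierarchical pooling."""
    effects, ses = np.asarray(effects, float), np.asarray(ses, float)
    w = 1 / ses**2
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)              # Cochran's Q
    tau2 = max(0.0, (q - (len(effects) - 1))
               / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1 / (ses**2 + tau2)                        # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1 / np.sum(w_star))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# Invented per-study % changes in respiratory morbidity per 1 degree C, with SEs
print(dersimonian_laird([2.1, 4.8, 1.0, 6.2], [1.5, 2.0, 1.2, 2.5]))
```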

Relevance: 10.00%

Abstract:

Background: Patients with chest pain contribute substantially to emergency department attendances, lengthy hospital stays, and inpatient admissions. A reliable, reproducible, and fast process to identify patients presenting with chest pain who have a low short-term risk of a major adverse cardiac event is needed to facilitate early discharge. We aimed to prospectively validate the safety of a predefined 2-h accelerated diagnostic protocol (ADP) to assess patients presenting to the emergency department with chest pain symptoms suggestive of acute coronary syndrome. Methods: This observational study was undertaken in 14 emergency departments in nine countries in the Asia-Pacific region, in patients aged 18 years and older with at least 5 min of chest pain. The ADP included use of a structured pre-test probability scoring method (Thrombolysis in Myocardial Infarction [TIMI] score), electrocardiograph, and point-of-care biomarker panel of troponin, creatine kinase MB, and myoglobin. The primary endpoint was major adverse cardiac events within 30 days after initial presentation (including initial hospital attendance). This trial is registered with the Australia-New Zealand Clinical Trials Registry, number ACTRN12609000283279. Findings: 3582 consecutive patients were recruited and completed 30-day follow-up. 421 (11.8%) patients had a major adverse cardiac event. The ADP classified 352 (9.8%) patients as low risk and potentially suitable for early discharge. A major adverse cardiac event occurred in three (0.9%) of these patients, giving the ADP a sensitivity of 99.3% (95% CI 97.9–99.8), a negative predictive value of 99.1% (97.3–99.8), and a specificity of 11.0% (10.0–12.2). Interpretation: This novel ADP identifies patients at very low risk of a short-term major adverse cardiac event who might be suitable for early discharge. Such an approach could be used to decrease the overall observation periods and admissions for chest pain. The components needed for the implementation of this strategy are widely available. The ADP has the potential to affect health-service delivery worldwide.
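
The reported accuracy figures follow directly from the counts in the abstract; a worked check:

```python
# Counts from the abstract: 3582 patients, 421 with a 30-day MACE,
# 352 classified low risk by the ADP, 3 of whom had a MACE.
n, events, low_risk, missed = 3582, 421, 352, 3

tp = events - missed          # not-low-risk patients with an event  = 418
fn = missed                   # low-risk patients with an event      = 3
tn = low_risk - missed        # low-risk patients without an event   = 349
fp = (n - events) - tn        # not-low-risk without an event        = 2812

print(f"sensitivity = {tp / (tp + fn):.1%}")   # 99.3%
print(f"NPV         = {tn / (tn + fn):.1%}")   # 99.1%
print(f"specificity = {tn / (tn + fp):.1%}")   # 11.0%
```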

Relevance: 10.00%

Abstract:

Poor nutritional status in chronic obstructive pulmonary disease (COPD) is associated with increased mortality independently of disease severity (Collins et al).1 Epidemiological studies have suggested a protective role of obesity against mortality in COPD (Vestbo et al),2 which is contrary to data from the general population, where obesity is associated with decreased life expectancy. This relationship has been referred to as the ‘obesity paradox’ and has been demonstrated in a number of chronic wasting conditions (Kalantar-Zadeh et al).3 This study investigated the existence of the obesity paradox in outpatients with COPD by examining the effect of body mass index (BMI) on 1-year healthcare use and clinical outcome in terms of hospital admission rates, length of hospital stay, outpatient appointments and mortality. BMI was assessed in 424 outpatients with COPD, with measurements performed by specialist respiratory nurses during outpatient clinics. One-year healthcare use was retrospectively collected from the date of BMI measurement. Patients classified as overweight (25.0–29.9 kg/m2) or obese (>30 kg/m2) experienced significantly fewer emergency hospital admissions, as well as a reduced length of hospital stay, in comparison to normal weight (20.0–24.9 kg/m2) or underweight (<20 kg/m2) outpatients. There was a significant negative trend between BMI classification and mortality. This study supports the existence of the ‘obesity paradox’ in COPD, not only in relation to reduced 1-year mortality rates but also in terms of reduced emergency hospital admissions and reduced length of hospital stay.
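
The BMI bands used in the study translate directly into a classification rule. A minimal sketch (the abstract writes '>30' for obese; >=30 is assumed here so that a BMI of exactly 30.0 is classified):

```python
def bmi_class(bmi: float) -> str:
    """Classify BMI (kg/m2) using the study's bands. The abstract writes
    '>30' for obese; >=30 is assumed so a BMI of exactly 30.0 is classified."""
    if bmi < 20.0:
        return "underweight"
    if bmi < 25.0:
        return "normal weight"
    if bmi < 30.0:
        return "overweight"
    return "obese"

for bmi in (18.4, 22.0, 27.5, 31.2):
    print(bmi, "->", bmi_class(bmi))
```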

Relevance: 10.00%

Abstract:

Deprivation assessed using the index of multiple deprivation (IMD) has been shown to be an independent risk factor for 1-year mortality in outpatients with chronic obstructive pulmonary disease (COPD) (Collins et al, 2010). IMD combines a number of economic and social issues (eg, health, education, employment) into one overall deprivation score; the higher the score, the greater an individual's deprivation. Whilst malnutrition in COPD has been linked to increased healthcare use, it is not clear whether deprivation is also independently associated. This study aimed to investigate the influence of deprivation on 1-year healthcare utilisation in outpatients with COPD. IMD was established in 424 outpatients with COPD according to the geographical location of each patient's address (postcode) and related to their healthcare use in the year after screening (Noble et al, 2008). Patients were routinely screened in outpatient clinics for malnutrition using the ‘Malnutrition Universal Screening Tool’, ‘MUST’ (Elia 2003); mean age 73 (SD 9.9) years; body mass index 25.8 (SD 6.3) kg/m2. Deprivation assessed using IMD (mean 15.9; SD 11.1) was found to be a significant predictor of the frequency and duration of emergency hospital admissions, as well as of the duration of elective hospital admissions. Deprivation was also linked to reduced secondary care outpatient appointment attendance, but not to an increase in failure to attend, and deprivation was not associated with increased disease severity as classified by the GOLD criteria (p=0.580). COPD outpatients residing in more deprived areas experience increased hospitalisation rates but decreased outpatient appointment attendance. The underlying reason behind this disparity in healthcare use requires further investigation.
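
One plausible way to model admission counts against IMD is a Poisson regression; the abstract does not specify the model used, and the data below are invented to match only the cohort's reported IMD mean and SD:

```python
import numpy as np
import statsmodels.api as sm

# Invented data matching the cohort's reported IMD mean (~15.9) and SD (~11.1):
# emergency admission counts over 1 year regressed on IMD score.
rng = np.random.default_rng(1)
n = 424
imd = np.clip(rng.normal(15.9, 11.1, n), 0, None)
admissions = rng.poisson(np.exp(-1.0 + 0.02 * imd))   # assumed effect size

X = sm.add_constant(imd)
fit = sm.GLM(admissions, X, family=sm.families.Poisson()).fit()
# exp(IMD coefficient) = admission rate ratio per 1-point rise in deprivation
print(np.exp(fit.params[1]))
```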

Relevance: 10.00%

Abstract:

Introduction and objectives Early recognition of deteriorating patients results in better patient outcomes. Modified early warning scores (MEWS) attempt to identify deteriorating patients early so that timely interventions can occur, thus reducing serious adverse events. We compared frequencies of vital sign recording in the 24 h post-ICU discharge and the 24 h preceding unplanned ICU admission, before and after a new observation chart using MEWS and an associated educational programme were implemented in an Australian tertiary referral hospital in Brisbane. Design Prospective before-and-after intervention study, using a convenience sample of ICU patients who had been discharged to the hospital wards, and of patients with an unplanned ICU admission, during November 2009 (before implementation; n = 69) and February 2010 (after implementation; n = 70). Main outcome measures Any change in the frequency of recording of full vital sign sets or individual vital signs before and after the new MEWS observation chart and associated education programme were implemented. A full set of vital signs included blood pressure (BP), heart rate (HR), temperature (T°), oxygen saturation (SaO2), respiratory rate (RR) and urine output (UO). Results After the MEWS observation chart implementation, we identified a statistically significant increase (210%) in the overall frequency of full vital sign set documentation during the first 24 h post-ICU discharge (95% CI 148 to 288%, p < 0.001). The frequency of all individual vital sign recordings increased after the MEWS observation chart was implemented; in particular, T° recordings increased by 26% (95% CI 8 to 46%, p = 0.003). An increased frequency of full vital sign set recordings for unplanned ICU admissions was also found (44%, 95% CI 2 to 102%, p = 0.035). The only statistically significant improvement in individual vital sign recordings for unplanned admissions was urine output, demonstrating a 27% increase (95% CI 3 to 57%, p = 0.029). Conclusions The implementation of a new MEWS observation chart plus a supporting educational programme was associated with statistically significant increases in the frequency of full and individual vital sign recordings during the first 24 h post-ICU discharge. There were no significant changes in the frequency of individual vital sign recordings in unplanned admissions to ICU after the MEWS observation chart was implemented, except for urine output, although overall increases in the frequency of full vital sign sets were seen.
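
For context, a MEWS aggregates points across vital signs. The sketch below uses thresholds from a commonly published MEWS variant (Subbe-style), which may differ from the chart implemented in this study:

```python
def mews(rr: int, hr: int, sbp: int, temp: float, avpu: str) -> int:
    """Illustrative Modified Early Warning Score. Thresholds follow a
    commonly published variant (Subbe et al), which may differ from
    the chart used in this study."""
    score = 2 if rr < 9 else 0 if rr <= 14 else 1 if rr <= 20 else 2 if rr <= 29 else 3
    score += 2 if hr <= 40 else 1 if hr <= 50 else 0 if hr <= 100 else 1 if hr <= 110 else 2 if hr <= 129 else 3
    score += 3 if sbp <= 70 else 2 if sbp <= 80 else 1 if sbp <= 100 else 0 if sbp <= 199 else 2
    score += 2 if temp < 35.0 else 0 if temp < 38.5 else 2
    score += {"alert": 0, "voice": 1, "pain": 2, "unresponsive": 3}[avpu]
    return score

# A deteriorating ward patient: tachypnoeic, tachycardic, febrile, responds to voice
print(mews(rr=22, hr=115, sbp=95, temp=38.6, avpu="voice"))  # -> 8, escalate
```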