880 results for Heat Illness Index Score
Abstract:
Objective: To evaluate the ease of application of a heat illness prevention program (HIPP). Design: A mixed-method research design was used: questionnaire and semi-structured interview. Setting: Eleven South Florida high schools participated in the HIPP in August (mean ambient temperature=84.0°F, mean relative humidity=69.5%). Participants: Certified Athletic Trainers (ATs) (n=11; age=22.2±1.2 yr; 63.6% female, 36.4% male) implemented the HIPP with their football athletes; the program included a pre-screening tool, the Heat Illness Index Score-Risk Assessment. Data Collection and Analysis: Participants completed a 17-item questionnaire, 4 items of which provided space for open-ended responses. Additionally, semi-structured interviews were voice recorded and separately transcribed. Results: Three participants (27.3%) were unable to implement the HIPP with any of their athletes. Of the 7 participants (63.6%) who implemented the HIPP with more than 50% of their athletes, a majority reported that the HIPP was difficult (54.5%) or exceedingly difficult (18.2%) to implement. Lack of appropriate instrumentation (81.8%, n=9/11), lack of coaching staff/administrative support (54.5%, n=6/11), insufficient support staff (54.5%, n=6/11), too many athletes (45.5%, n=5/11), and financial restrictions (36.4%, n=4/11) deterred complete implementation of the HIPP. Conclusions: Because ATs in the high school setting often lack the resources, time, and coaches' support to identify risk factors predisposing athletes to exertional heat illnesses (EHI), researchers should develop and validate a suitable screening tool. Further, ATs charged with the health care of high school athletes should seek out prevention programs and screening tools to identify high-risk athletes and monitor athletes throughout exercise in extreme environments.
Abstract:
Context: Accurately determining hydration status is a preventative measure for exertional heat illnesses (EHI). Objective: To determine the validity of various field measures of urine specific gravity (Usg) compared to laboratory instruments. Design: Observational research design comparing measures of hydration status: urine reagent strips (URS) and a urine color (Ucol) chart against a refractometer. Setting: We utilized the athletic training room of a Division I-A collegiate American football team. Participants: Trial 1 involved urine samples from 69 veteran football players (age=20.1±1.2 yr; body mass=229.7±44.4 lb; height=72.2±2.1 in). Trial 2 involved samples from 5 football players (age=20.4±0.5 yr; body mass=261.4±39.2 lb; height=72.3±2.3 in). Interventions: We administered the Heat Illness Index Score (HIIS) Risk Assessment to identify athletes at risk for EHI (Trial 1). For individuals "at-risk" (Trial 2), we collected urine samples before and after 15 days of pre-season "two-a-day" practices in a hot, humid environment (mean on-field WBGT=28.84±2.36°C). Main Outcome Measures: Urine samples were immediately analyzed for Usg using a refractometer, Diascreen 7® (URS1), Multistix® (URS2), and Chemstrip10® (URS3). Ucol was measured using a Ucol chart. We calculated descriptive statistics for all main measures, Pearson correlations to assess relationships between the refractometer, each URS, and Ucol, and transformed Ucol data to z-scores for comparison to the refractometer. Results: In Trial 1, we found a moderate relationship (r=0.491, p<.01) between URS1 (1.020±0.006) and the refractometer (1.026±0.010). In Trial 2, we found marked relationships for Ucol (5.6±1.6 shades, r=0.619, p<0.01), URS2 (1.019±0.008, r=0.712, p<0.01), and URS3 (1.022±0.007, r=0.689, p<0.01) compared to the refractometer (1.028±0.008). Conclusions: Our findings suggest that URS results were inconsistent between manufacturers; we therefore suggest practitioners use the clinical refractometer to accurately determine Usg and monitor hydration status.
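The statistical steps this abstract names (Pearson correlations against the refractometer and a z-score transform of the colour-chart readings) are straightforward to reproduce. Below is a minimal Python sketch using hypothetical paired readings, since the study's raw data are not given here.

```python
import numpy as np
from scipy import stats

# Hypothetical paired readings; the study's raw data are not published here.
refractometer = np.array([1.026, 1.031, 1.018, 1.022, 1.035, 1.028])
urs = np.array([1.020, 1.025, 1.015, 1.019, 1.030, 1.024])
ucol_shades = np.array([5, 7, 3, 4, 8, 6])  # urine color chart shades

# Pearson correlation between a field measure and the refractometer
r_urs, p_urs = stats.pearsonr(urs, refractometer)

# Transform the color-chart readings to z-scores, as the abstract
# describes, so they sit on a common scale with the refractometer values.
ucol_z = stats.zscore(ucol_shades)
refr_z = stats.zscore(refractometer)
r_ucol, p_ucol = stats.pearsonr(ucol_z, refr_z)

print(f"URS vs refractometer: r={r_urs:.3f} (p={p_urs:.3f})")
print(f"Ucol vs refractometer: r={r_ucol:.3f} (p={p_ucol:.3f})")
```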
Abstract:
The aim of the research program was to evaluate the heat strain, hydration status, and heat illness symptoms experienced by surface mine workers. An initial investigation involved 91 surface miners completing a heat stress questionnaire assessing the work environment, hydration practices, and heat illness symptom experience. The key findings were that 1) more than 80% of workers experienced at least one symptom of heat illness over a 12-month period; and 2) the risk of moderate symptoms of heat illness increased with the severity of dehydration. These findings highlight a health and safety concern for surface miners, as experiencing symptoms of heat illness is an indication that the physiological systems of the body may be struggling to meet the demands of thermoregulation. To build on these findings, a field investigation to monitor the heat strain and hydration status of surface miners was proposed. Two preliminary studies were conducted to ensure accurate and reliable data collection techniques. Firstly, a study was undertaken to determine a calibration procedure to ensure the accuracy of core body temperature measurement via an ingestible sensor. A water bath was heated to several temperatures between 23 and 51 °C, allowing for comparison of the temperature recorded by the sensors and a traceable thermometer. A positive systematic bias was observed, indicating a need for calibration. It was concluded that a linear regression should be developed for each sensor prior to ingestion, allowing a correction to be applied to the raw data. Secondly, hydration status was to be assessed through urine specific gravity measurement. It was foreseeable that practical limitations on mine sites would delay the time between urine collection and analysis. A study was therefore undertaken to assess the reliability of urine analysis over time. Measurement of urine specific gravity was found to be reliable up to 24 hours post collection and was thus suitable for use in the field study. Twenty-nine surface miners (14 drillers [winter] and 15 blast crew [summer]) were monitored during a normal work shift. Core body temperature was recorded continuously. Average mean core body temperature was 37.5 and 37.4 °C for blast crew and drillers respectively, with average maximum body temperatures of 38.0 and 37.9 °C respectively. The highest body temperature recorded was 38.4 °C. Urine samples were collected at each void for specific gravity measurement. The average mean urine specific gravity was 1.024 and 1.021 for blast crew and drillers respectively. The Heat Illness Symptoms Index was used to evaluate the experience of heat illness symptoms on shift. Over 70% of drillers and over 80% of blast crew reported at least one symptom. It was concluded that 1) heat strain remained within the recommended limits for acclimatised workers; and 2) the majority of workers were dehydrated before commencing their shift, and tended to remain dehydrated for the duration. Dehydration was identified as the primary issue for surface miners working in the heat. Continued study therefore focused on investigating a novel approach to monitoring hydration status. The final aim of this research program was to investigate the influence dehydration has on intraocular pressure (IOP) and, subsequently, whether IOP could provide a novel indicator of hydration status. Seven males completed 90 minutes of walking in both a cool and a hot climate with fluid restriction.
Hydration variables and intraocular pressure were measured at baseline and at 30-minute intervals. Participants became dehydrated during the trial in the heat but maintained hydration status in the cool. Intraocular pressure progressively declined during the trial in the heat but remained relatively stable when hydration was maintained. A significant relationship was observed between intraocular pressure and both body mass loss and plasma osmolality. This evidence suggests that intraocular pressure is influenced by changes in hydration status. Further research is required to determine whether intraocular pressure could be utilised as an indirect indicator of hydration status.
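The per-sensor calibration this thesis describes amounts to fitting a least-squares line between each ingestible sensor and a traceable thermometer, then inverting that line to correct raw field readings. A minimal sketch, assuming hypothetical water-bath calibration points:

```python
import numpy as np

# Hypothetical water-bath calibration points (degrees C); the thesis fits
# one regression per ingestible sensor against a traceable thermometer.
thermometer = np.array([23.0, 30.0, 37.0, 44.0, 51.0])
sensor      = np.array([23.4, 30.5, 37.4, 44.6, 51.5])  # positive systematic bias

# Least-squares line: sensor_reading = slope * true_temp + intercept
slope, intercept = np.polyfit(thermometer, sensor, 1)

def correct(raw_reading: float) -> float:
    """Invert the calibration line to recover the true temperature."""
    return (raw_reading - intercept) / slope

print(correct(38.2))  # corrected core temperature for a raw field reading
```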
Abstract:
The climatic conditions of tropical and subtropical regions within Australia present, at times, an extreme risk of physical-activity-induced heat illness. Many administrators and teachers in school settings are aware of the general risks of heat-related illness. In the absence of reliable information applied at the local level, there is a risk that inappropriate decisions may be made concerning school events that incorporate opportunities to be physically active. Such events may be prematurely cancelled, resulting in the loss of necessary time for physical activity. Conversely, under high or extremely high risk conditions, continuing events without appropriate modifications could place the health of students, staff and other parties at risk. School staff and other key stakeholders should understand the mechanisms of escalating risk and be supported to take action to reduce the level of risk through appropriate policies, procedures, resources and action plans.
Abstract:
Objective: To assess the symptoms of heat illness experienced by surface mine workers. Methods: Ninety-one surface mine workers across three mine sites in northern Australia completed a heat stress questionnaire evaluating their symptoms of heat illness. A cohort of 56 underground mine workers also participated for comparative purposes. Participants were allocated into asymptomatic, minor or moderate heat illness categories depending on the number of symptoms they reported. Participants also reported the frequency of symptom experience, as well as their hydration status (average urine colour). Results: Heat illness symptoms were experienced by 87 and 79 % of surface and underground mine workers, respectively (p = 0.189), with 81–82 % of the reported symptoms experienced by miners on more than one occasion. The majority (56 %) of surface workers were classified as experiencing minor heat illness symptoms, with a further 31 % classed as moderate; 13 % were asymptomatic. A similar distribution of heat illness classification was observed among underground miners (p = 0.420). Only 29 % of surface miners were considered well hydrated, with 61 % minimally dehydrated and 10 % significantly dehydrated, proportions that were similar among underground miners (p = 0.186). Heat illness category was significantly related to hydration status (p = 0.039) among surface mine workers, but only a trend was observed when data from surface and underground miners were pooled (p = 0.073). Compared to asymptomatic surface mine workers, the relative risk of experiencing minor and moderate symptoms of heat illness was 1.5 and 1.6, respectively, when minimally dehydrated. Conclusions: These findings show that surface mine workers routinely experience symptoms of heat illness and highlight that control measures are required to prevent symptoms progressing to medical cases of heat exhaustion or heat stroke.
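The relative risks reported above are ratios of symptom incidence between hydration strata. A worked sketch with hypothetical counts (the paper's full contingency table is not reproduced in the abstract):

```python
def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Risk ratio: incidence in the exposed group over the unexposed group."""
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total
    return risk_exposed / risk_unexposed

# Hypothetical counts: minor heat-illness symptoms among minimally dehydrated
# vs well-hydrated surface miners (illustrative numbers only).
rr = relative_risk(exposed_cases=30, exposed_total=50,
                   unexposed_cases=10, unexposed_total=25)
print(f"RR = {rr:.1f}")  # -> 1.5, the order of magnitude reported above
```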
Abstract:
OBJECTIVES: To assess health care utilisation for patients co-infected with TB and HIV (TB-HIV), and to develop a weighted health care index (HCI) score based on commonly used interventions and compare it with patient outcome. METHODS: A total of 1061 HIV patients diagnosed with TB in four regions (Central/Northern Europe, Southern Europe, Eastern Europe and Argentina) between January 2004 and December 2006 were enrolled in the TB-HIV study. A weighted HCI score (range 0–5) was derived from independent prognostic factors identified in multivariable Cox models; the final score included performance of TB drug susceptibility testing (DST), an initial TB regimen containing a rifamycin, isoniazid and pyrazinamide, and start of combination antiretroviral treatment (cART). RESULTS: The mean HCI score was highest in Central/Northern Europe (3.2, 95%CI 3.1–3.3) and lowest in Eastern Europe (1.6, 95%CI 1.5–1.7). The cumulative probability of death 1 year after TB diagnosis decreased from 39% (95%CI 31–48) among patients with an HCI score of 0 to 9% (95%CI 6–13) among those with a score of ≥4. In an adjusted Cox model, a 1-unit increase in the HCI score was associated with 27% reduced mortality (relative hazard 0.73, 95%CI 0.64–0.84). CONCLUSIONS: Our results suggest that DST, standard anti-tuberculosis treatment and early cART may improve outcome for TB-HIV patients. The proposed HCI score provides a tool for future research and monitoring of the management of TB-HIV patients. The highest HCI score may serve as a benchmark to assess TB-HIV management, encouraging continuous health care improvement.
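A minimal sketch of how such a weighted index could be computed from the three components the abstract names; the point weights below are illustrative assumptions, not the study's published weights.

```python
def hci_score(dst_performed: bool,
              rif_inh_pza_regimen: bool,
              cart_started: bool) -> int:
    """Illustrative weighted health care index (range 0-5).

    Weights are hypothetical; the study derived them from independent
    prognostic factors in multivariable Cox models.
    """
    score = 0
    score += 1 if dst_performed else 0        # drug susceptibility testing
    score += 2 if rif_inh_pza_regimen else 0  # rifamycin + isoniazid + pyrazinamide
    score += 2 if cart_started else 0         # combination antiretroviral therapy
    return score

print(hci_score(True, True, False))  # -> 3
```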
Abstract:
The main objective of this study was to develop and validate a computer-based statistical algorithm, based on a multivariable logistic model, that could be translated into a simple scoring system for ascertaining stroke cases from hospital admission medical records data. This algorithm, the Risk Index Score (RISc), was developed using data collected prospectively by the Brain Attack Surveillance in Corpus Christi (BASIC) project. The validity of the RISc was evaluated by estimating the concordance of scoring-system stroke ascertainment with stroke ascertainment accomplished by physician review of hospital admission records. The goal was to provide a rapid, simple, efficient, and accurate method to ascertain the incidence of stroke from routine hospital admission records for epidemiologic investigations. (Abstract shortened by UMI.)
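A common way to translate a multivariable logistic model into a simple scoring system is to scale the fitted coefficients by the smallest one and round to integers. The sketch below illustrates that generic recipe with hypothetical predictors and coefficients; it is not the published RISc weighting.

```python
# Hypothetical logistic-regression coefficients for chart-level predictors
coefficients = {
    "stroke_keyword_in_notes": 2.1,
    "ct_head_ordered": 1.4,
    "hemiparesis_documented": 1.0,
}

# Scale by the smallest coefficient and round to the nearest integer,
# turning log-odds weights into simple point values.
base = min(coefficients.values())
points = {k: round(v / base) for k, v in coefficients.items()}

def risc_like_score(record: dict) -> int:
    """Sum the points for every predictor present in the admission record."""
    return sum(points[k] for k, present in record.items() if present)

example = {"stroke_keyword_in_notes": True,
           "ct_head_ordered": True,
           "hemiparesis_documented": False}
print(risc_like_score(example), points)
```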
Abstract:
RATIONALE: In bronchiectasis there is a need for improved markers of lung function to determine disease severity and response to therapy.
OBJECTIVES: To assess whether the lung clearance index is a repeatable and more sensitive indicator of computed tomography (CT) scan abnormalities than spirometry in bronchiectasis.
METHODS: Thirty patients with stable bronchiectasis were recruited and lung clearance index, spirometry, and health-related quality of life measures were assessed on two occasions, 2 weeks apart when stable (study 1). A separate group of 60 patients with stable bronchiectasis was studied on a single visit with the same measurements and a CT scan (study 2).
MEASUREMENTS AND MAIN RESULTS: In study 1, the intervisit intraclass correlation coefficient for the lung clearance index was 0.94 (95% confidence interval, 0.89 to 0.97; P < 0.001). In study 2, the mean (SD) age was 62 (10) years, FEV1 76.5 (18.9)% predicted, lung clearance index 9.1 (2.0), and total CT score 14.1 (10.2)%. The lung clearance index was abnormal in 53 of 60 patients (88%) and FEV1 was abnormal in 37 of 60 patients (62%). FEV1 negatively correlated with the lung clearance index (r = -0.51, P < 0.0001). Across CT scores, there was a relationship with the lung clearance index, with little evidence of an effect of FEV1. There were no significant associations between the lung clearance index or FEV1 and health-related quality of life.
CONCLUSIONS: The lung clearance index is repeatable and a more sensitive measure than FEV1 in the detection of abnormalities demonstrated on CT scan. The lung clearance index has the potential to be a useful clinical and research tool in patients with bronchiectasis.
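The intervisit repeatability figure in study 1 is an intraclass correlation coefficient. A sketch of computing one with the pingouin library on hypothetical paired lung clearance index measurements:

```python
import pandas as pd
import pingouin as pg

# Hypothetical lung clearance index readings for 6 patients on two visits
df = pd.DataFrame({
    "patient": list(range(6)) * 2,
    "visit":   ["visit1"] * 6 + ["visit2"] * 6,
    "lci":     [9.1, 8.4, 10.2, 7.9, 11.0, 9.6,
                9.3, 8.2, 10.5, 8.1, 10.8, 9.4],
})

# Long-format ICC: patients as targets, visits as raters
icc = pg.intraclass_corr(data=df, targets="patient",
                         raters="visit", ratings="lci")
print(icc[["Type", "ICC", "CI95%"]])
```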
Abstract:
BACKGROUND: To date, there is no quality assurance program that correlates patient outcome with the perfusion service provided during cardiopulmonary bypass (CPB). A score was devised incorporating objective parameters likely to influence patient outcome. The purpose was to create a new method for evaluating the quality of care the perfusionist provides during CPB procedures and to determine whether it predicts patient morbidity and mortality. METHODS: We analysed 295 consecutive elective patients. We chose 10 parameters: fluid balance, blood transfused, Hct, ACT, PaO2, PaCO2, pH, BE, potassium and CPB time; together these made up the PerfSCORE. Distribution analysis was performed using the Shapiro-Wilk test. We sought correlations between the PerfSCORE and mortality rate, patient stay in the ICU, and length of mechanical ventilation. Univariate analysis (UA) using linear regression was performed for each parameter, with statistical significance established at p < 0.05. Multivariate analysis (MA) was performed with the same parameters. RESULTS: The mean age was 63.8 ± 12.6 years, with 70% males. There were 180 CABG, 88 valve, and 27 combined CABG/valve procedures. Mean PerfSCORE was 6.6 ± 2.4 (range 0-20), mortality 2.7% (8/295), CPB time 100 ± 41 min (19-313), ICU stay 52 ± 62 hrs (7-564), and mechanical ventilation 10.5 ± 14.8 hrs (0-564). CPB time, fluid balance, PaO2, PerfSCORE and blood transfused were significantly correlated with mortality (UA, p < 0.05). CPB time, blood transfused and PaO2 were also parameters predicting mortality (MA, p < 0.01). Only pH was significantly correlated with ICU stay (UA). Ultrafiltration (UF) and CPB time were significantly correlated (UA, p < 0.01), while UF (p < 0.05) was the only parameter predicting mechanical ventilation duration (MA). CONCLUSIONS: CPB time, blood transfused and PaO2 are independent risk factors for mortality. Fluid balance, blood transfusion, PaO2, PerfSCORE and CPB time are independent parameters for predicting morbidity. The PerfSCORE is a quality-of-perfusion measure that objectively quantifies perfusion performance.
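The univariate screening step can be sketched as a loop fitting one model per parameter against mortality. The sketch below uses logistic rather than the linear regression the abstract reports, since the outcome is binary, and the data are randomly generated placeholders:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 295  # cohort size from the abstract

# Hypothetical per-patient CPB parameters (illustrative placeholders only)
params = {
    "cpb_time_min":  rng.normal(100, 41, n),
    "fluid_balance": rng.normal(500, 300, n),
    "pao2":          rng.normal(250, 60, n),
}
mortality = rng.binomial(1, 0.027, n)  # ~2.7% mortality, as reported

# Univariate analysis: one model per parameter, flag p < 0.05
for name, values in params.items():
    X = sm.add_constant(values)
    fit = sm.Logit(mortality, X).fit(disp=0)
    p = fit.pvalues[1]
    print(f"{name}: p={p:.3f}{' *' if p < 0.05 else ''}")
```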
Abstract:
OBJECTIVE: To compare low and high MELD scores and investigate whether existing renal dysfunction affects transplant outcome. METHODS: Data were prospectively collected on 237 liver transplants (216 patients) performed between March 2003 and March 2009. Patients with cirrhotic disease submitted to transplantation were divided into three groups: MELD > 30, MELD < 30, and hepatocellular carcinoma. Renal failure was defined as a ≥ 25% decline in estimated glomerular filtration rate observed 1 week after the transplant. Median MELD scores were 35, 21, and 13 for the MELD > 30, MELD < 30, and hepatocellular carcinoma groups, respectively. RESULTS: Recipients with MELD > 30 had more days in the Intensive Care Unit, longer hospital stays, and received more blood product transfusions. Moreover, their renal function improved after liver transplant, whereas all other groups presented with impairment of renal function. Mortality was similar in all groups, but renal function was the variable most strongly associated with morbidity and length of hospital stay. CONCLUSION: High-MELD-score recipients showed an improvement in glomerular filtration rate 1 week after liver transplantation.
Abstract:
As the relative burden of community-acquired bacterial pneumonia among HIV-positive patients increases, adequate prediction of case severity on presentation is crucial. We sought to determine which characteristics measurable on presentation are predictive of worse outcomes. We studied all admissions for community-acquired bacterial pneumonia over one year at a tertiary centre. Patient demographics, comorbidities, HIV-specific markers and CURB-65 scores on Emergency Department presentation were reviewed. Outcomes of interest included mortality, bacteraemia, intensive care unit admission and orotracheal intubation. A total of 396 patients were included: 49 HIV-positive and 347 HIV-negative. Mean CURB-65 score was 1.3 for HIV-positive and 2.2 for HIV-negative patients (p < 0.0001), and its predictive value for mortality was maintained in both groups (p = 0.03 and p < 0.001, respectively). Adjusting for CURB-65 scores, HIV infection by itself was only associated with bacteraemia (adjusted odds ratio [aOR] 7.1, 95% CI [2.6-19.5]). Patients with < 200 CD4 cells/µL presented similar CURB-65-adjusted mortality (aOR 1.7, 95% CI [0.2-15.2]), but higher risk of intensive care unit admission (aOR 5.7, 95% CI [1.5-22.0]) and orotracheal intubation (aOR 9.1, 95% CI [2.2-37.1]), compared to HIV-negative patients. These two associations were not observed in the > 200 CD4 cells/µL subgroup (aOR 2.2, 95% CI [0.7-7.6] and aOR 0.8, 95% CI [0.1-6.5], respectively). Antiretroviral therapy and viral load suppression were not associated with different outcomes (p > 0.05). High CURB-65 scores and CD4 counts < 200 cells/µL were both associated with worse outcomes. Severity assessment scales and CD4 counts may both be helpful in predicting severity in HIV-positive patients presenting with community-acquired bacterial pneumonia.
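The CURB-65 score used throughout this abstract assigns one point to each of five standard bedside criteria; a straightforward implementation of the published rule:

```python
def curb65(confusion: bool, urea_mmol_l: float, resp_rate: int,
           sbp: int, dbp: int, age: int) -> int:
    """CURB-65 community-acquired pneumonia severity score (0-5)."""
    score = 0
    score += confusion              # new-onset confusion
    score += urea_mmol_l > 7        # blood urea > 7 mmol/L
    score += resp_rate >= 30        # respiratory rate >= 30/min
    score += sbp < 90 or dbp <= 60  # systolic < 90 or diastolic <= 60 mmHg
    score += age >= 65
    return score

print(curb65(confusion=False, urea_mmol_l=6.0, resp_rate=22,
             sbp=115, dbp=70, age=47))  # -> 0, a low-severity presentation
```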
Abstract:
Objectives: To evaluate the validity, reliability and responsiveness of electronic data capture (EDC) using the WOMAC® NRS 3.1 Index on Motorola V3 mobile phones. Methods: Patients with osteoarthritis (OA) undergoing primary unilateral hip or knee joint replacement surgery were assessed pre-operatively and 3-4 months post-operatively. Patients completed the WOMAC® Index in paper (p-WOMAC®) and electronic (m-WOMAC®) format in random order. Results: 24 men and 38 women with hip and knee OA participated and successfully completed the m-WOMAC® questionnaire. Pearson correlations between the summated total index scores for the p-WOMAC® and m-WOMAC® pre- and post-surgery were 0.98 and 0.99 (p<0.0001). There was no clinically important or statistically significant between-method difference in the adjusted total summated scores, pre- or post-surgery (adjusted mean difference = 4.44, p = 0.474 and 1.73, p = 0.781, respectively). Internal consistency estimates of m-WOMAC® reliability were 0.87-0.98. The m-WOMAC® detected clinically important, statistically significant (p<0.0001) improvements in pain, stiffness, function and total index score. Conclusions: Sixty-two patients with hip and knee OA successfully completed EDC by Motorola V3 mobile phone using the m-WOMAC® NRS 3.1 Index, with completion times averaging only 1-1.5 minutes longer than the p-WOMAC® Index. Data were successfully and securely transmitted from patients in Australia to a server in the USA. There was close agreement and no significant difference between m-WOMAC® and p-WOMAC® scores. This study confirms the validity, reliability and responsiveness of the Exco InTouch engineered, Java-based m-WOMAC® Index application. EDC with the m-WOMAC® Index provides unique opportunities for quantitative measurement in clinical research and practice.
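The cross-mode agreement analysis (a Pearson correlation plus a paired between-method difference) can be sketched as follows; the paired total scores are hypothetical, not the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical paired total WOMAC index scores for the same patients
# completing the paper (p-) and mobile (m-) versions.
p_womac = np.array([120, 95, 143, 80, 110, 132])
m_womac = np.array([118, 96, 141, 81, 109, 133])

r, p_value = stats.pearsonr(p_womac, m_womac)  # cross-mode agreement
t, p_diff = stats.ttest_rel(p_womac, m_womac)  # paired between-method difference

print(f"r={r:.3f} (p={p_value:.4g}); "
      f"mean diff={np.mean(p_womac - m_womac):.1f} (p={p_diff:.3f})")
```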
Abstract:
Older adults, especially those acutely ill, are vulnerable to developing malnutrition due to a range of risk factors. The high prevalence and serious consequences of malnutrition in hospitalised older adults have been reported extensively. However, there are few well-designed longitudinal studies that report the independent relationship between malnutrition and clinical outcomes after adjustment for a wide range of covariates. Acutely ill older adults are exceptionally prone to nutritional decline during hospitalisation, but few reports have studied this change and its impact on clinical outcomes. In the rapidly ageing Singapore population, such evidence is lacking, and the characteristics associated with the risk of malnutrition are also not well documented. Despite the evidence on malnutrition prevalence, it is often under-recognised and under-treated. It is therefore crucial that validated nutrition screening and assessment tools are used for early identification of malnutrition. Although many nutrition screening and assessment tools are available, there is no universally accepted method for defining malnutrition risk and nutritional status. Most existing tools have been validated amongst Caucasians using various approaches, but they are rarely reported in Asian elderly populations and none has been validated in Singapore. Due to the multiethnic, cultural, and language differences among Singapore older adults, the results from non-Asian validation studies may not be applicable. It is therefore important to identify validated, population- and setting-specific nutrition screening and assessment methods to accurately detect and diagnose malnutrition in Singapore. The aims of this study were therefore to: i) characterise hospitalised elderly in a Singapore acute hospital; ii) describe the extent and impact of admission malnutrition; iii) identify and evaluate suitable methods for nutritional screening and assessment; and iv) examine changes in nutritional status during admission and their impact on clinical outcomes. A total of 281 participants, with a mean (±SD) age of 81.3 (±7.6) years, were recruited from three geriatric wards in Tan Tock Seng Hospital over a period of eight months. They were predominantly Chinese (83%) and community-dwellers (97%). They were screened within 72 hours of admission by a single dietetic technician using four nutrition screening tools [Tan Tock Seng Hospital Nutrition Screening Tool (TTSH NST), Nutritional Risk Screening 2002 (NRS 2002), Mini Nutritional Assessment-Short Form (MNA-SF), and Short Nutritional Assessment Questionnaire (SNAQ©)] administered in no particular order. The total scores were not computed during the screening process, so the dietetic technician was blinded to the results of all the tools. Nutritional status was assessed by a single dietitian, blinded to the screening results, using four malnutrition assessment methods [Subjective Global Assessment (SGA), Mini Nutritional Assessment (MNA), body mass index (BMI), and corrected arm muscle area (CAMA)]. The SGA rating was completed prior to computation of the total MNA score to minimise bias. Participants were reassessed for weight, arm anthropometry (mid-arm circumference, triceps skinfold thickness), and SGA rating at discharge from the ward.
The nutritional assessment tools and indices were validated against clinical outcomes (length of stay (LOS) >11 days, discharge to higher-level care, 3-month readmission, 6-month mortality, and 6-month Modified Barthel Index) using multivariate logistic regression. The covariates included age, gender, race, dementia (defined using DSM-IV criteria), depression (defined using a single question, "Do you often feel sad or depressed?"), severity of illness (defined using a modified version of the Severity of Illness Index), comorbidities (defined using the Charlson Comorbidity Index), number of prescribed drugs, and admission functional status (measured using the Modified Barthel Index; MBI). The nutrition screening tools were validated against the SGA, which was found to be the most appropriate nutritional assessment tool in this study (see Section 5.6). Prevalence of malnutrition on admission was 35% (defined by SGA), and malnutrition was significantly associated with characteristics such as swallowing impairment (malnourished vs well-nourished: 20% vs 5%), poor appetite (77% vs 24%), dementia (44% vs 28%), depression (34% vs 22%), and poor functional status (MBI 48.3±29.8 vs 65.1±25.4). The SGA had the highest completion rate (100%) and was predictive of the most clinical outcomes: LOS >11 days (OR 2.11, 95% CI [1.17-3.83]), 3-month readmission (OR 1.90, 95% CI [1.05-3.42]) and 6-month mortality (OR 3.04, 95% CI [1.28-7.18]), independent of a comprehensive range of covariates including functional status, disease severity and cognitive function. The SGA is therefore the most appropriate nutritional assessment tool for defining malnutrition. The TTSH NST was identified as the most suitable nutritional screening tool, with the best diagnostic performance against the SGA (AUC 0.865, sensitivity 84%, specificity 79%). Overall, 44% of participants experienced weight loss during hospitalisation, and 27% had weight loss >1% per week over a median LOS of 9 days (range 2-50). Well-nourished (45%) and malnourished (43%) participants were equally prone to decline in nutritional status (defined by weight loss >1% per week). Those with reduced nutritional status were more likely to be discharged to higher-level care (adjusted OR 2.46, 95% CI [1.27-4.70]). This study is the first to characterise malnourished hospitalised older adults in Singapore. It is also one of very few studies to (a) evaluate the association of admission malnutrition with clinical outcomes in a multivariate model; (b) determine the change in nutritional status during admission; and (c) evaluate the validity of nutritional screening and assessment tools amongst hospitalised older adults in an Asian population. The results clearly highlight that admission malnutrition and deterioration in nutritional status are prevalent and are associated with adverse clinical outcomes in hospitalised older adults. With older adults being vulnerable to the risks and consequences of malnutrition, it is important that they are systematically screened so that timely and appropriate intervention can be provided. The findings highlighted in this thesis provide an evidence base for, and confirm the validity of, the current nutrition screening and assessment tools used among hospitalised older adults in Singapore.
As older adults may have developed malnutrition prior to hospital admission, or may experience clinically significant weight loss of >1% per week during hospitalisation, screening of the elderly should be initiated in the community, and continuous nutritional monitoring should extend beyond hospitalisation.
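The diagnostic-performance metrics used to select the TTSH NST (AUC, sensitivity and specificity against the SGA) can be computed from paired screening scores and SGA labels; a sketch with hypothetical data and an assumed cut-off using scikit-learn:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

# Hypothetical screening-tool scores and SGA-defined malnutrition labels
screen_scores = np.array([5, 2, 7, 1, 6, 3, 8, 2, 4, 6])
sga_malnourished = np.array([1, 0, 1, 0, 1, 0, 1, 0, 0, 1])

auc = roc_auc_score(sga_malnourished, screen_scores)

# Dichotomise at an assumed cut-off (>= 4) for sensitivity/specificity
flagged = (screen_scores >= 4).astype(int)
tn, fp, fn, tp = confusion_matrix(sga_malnourished, flagged).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"AUC={auc:.3f}, sensitivity={sensitivity:.0%}, specificity={specificity:.0%}")
```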